INTRODUCTION:
The main advantage of most technical advancements over the years has been productivity, which is why AI initiatives from organizations like Microsoft prioritize it. Productivity gains are simpler to measure monetarily than any other parameter, including quality. Because of this emphasis on productivity, quality problems with AI platforms have received less attention, as seen in the recent WSJ head-to-head AI comparison piece that placed Microsoft's Copilot last.
This is especially troubling because Copilot is used for coding. Mistakes are being introduced into code at machine rates that may exceed the pace at which they can be found and corrected, which could have far-reaching effects on future quality and security.
Furthermore, AI is being directed at tasks users enjoy doing, while still requiring human intervention for drudgery like reviewing and commenting code. This echoes the meme: "My goal for AI was to clean my house and do my laundry so I could spend more time doing things I enjoy, like drawing, writing creatively, and making music." Instead, AI is being developed to make music, draw, and write creatively, freeing me up to do the things I detest.
Where AI Must Be Directed
While AI solutions like Devin are being developed to help with the labor shortage, and productivity remains crucial, productivity without an emphasis on better direction can be problematic. Let me explain what I mean.
I learned a lesson at IBM many years ago, when I was switching from internal audit to competitive intelligence, that has stayed with me ever since. The instructor illustrated the point with an X/Y chart, noting that when it comes to strategy execution, most businesses prioritize reaching the stated objective as quickly as possible.
According to that instructor, speed shouldn't be the first step. The first step should be assuring yourself that you are moving in the right direction. Otherwise, you are deviating from your intended course at an accelerated rate because you failed to confirm the objective beforehand.
I have watched this play out at every organization I have worked for over the years. Ironically, it was frequently my responsibility to ensure direction, but most of the time decisions were made either before my work was submitted or because the decision-maker saw me and my team as a threat: their reputation would suffer if we were right and they were wrong. At first, I attributed this to Confirmation Bias, our tendency to accept information that validates a prior position and reject anything that doesn't. I later learned about Argumentative Theory, which contends that we are hardwired to fight to appear right, regardless of being right, because those who were seen to be right got the best mates and the most senior positions in the tribe.
I believe Argumentative Theory is a major factor in why we do not focus AI on ensuring we make better decisions: it leads CEOs to reason that if AI can make better judgments than they can, they become redundant. Why take that chance?
However, as I have seen firsthand time and time again, poor decisions ruin businesses. Even though we are rife with poor decisions, especially those involving strategy, OpenAI doesn't seem interested in using AI to address the problem. Examples of potentially disastrous decisions include Sam Altman appropriating Scarlett Johansson's voice, the way OpenAI fired Altman, and the undervaluing of AI quality in favor of speed.
Concluding
We lack a hierarchy for where AI should be applied first. To avoid heading in the wrong direction at machine speeds, that hierarchy should begin with decision support, move to employee augmentation before replacement by Devin-like products, and only then turn to speed.
Take Tesla as an example: the company's emphasis on releasing Autopilot before it was capable of performing the function its name implies has resulted in an astonishing number of preventable deaths. We are rife with poor judgments at both the individual and professional levels, and they are costing us jobs, degrading our quality of life (global warming), and damaging the quality of our relationships.
Future disasters that could have been prevented are likely to occur because of our inattention and resistance to using AI to help humans make better decisions. Rather than merely increasing the rate at which we make mistakes, which is, regrettably, the direction we are now taking, we should be concentrating far more on ensuring those mistakes do not occur in the first place.