So what happens when a machine or a new technology is an impostor?
We’re actually in the middle of finding out, and the similarities are spooky.
Few narratives in the true crime genre are more unsettling than those concerning fake doctors. Zholia Alemi, recently convicted of fraud, was deemed by a judge to be “a most accomplished forger and fraudster [who] has no qualification that would allow her to be called, or in any way to be properly regarded as, a doctor”. Yet for over two decades, she was employed as a psychiatrist within the NHS. Christopher Duntsch, an American neurosurgeon, is equally notorious for his gruesome malpractice, explored in the gripping podcast and TV series Dr. Death. Despite their differences, both cases raise a common question: how were these impostors able to deceive their victims for so long? It seems that the perpetrators’ self-assurance and social status as medical professionals served to silence any doubts or criticisms.
The emergence of new technology, particularly artificial intelligence (AI), has brought about a similar phenomenon. In recent weeks, the AI software ChatGPT, developed by OpenAI, has garnered significant attention for its ability to produce convincing text in the form of letters, essays, computer code, and even rhyming poems, based on a short prompt. Microsoft, eager to compete with Google, hastily integrated ChatGPT into its Bing search engine. Satya Nadella, CEO of Microsoft, even boasted publicly about his company’s prowess: “I want people to know that we made [Google] dance, and I think that’ll be a great day.” Yet such confidence soon proved premature.
Errors in Google’s AI search chatbot wiped $100 billion off the market value of its parent company, Alphabet. Yet compared with the recent performance of Microsoft’s Bing Chat, those errors seem trivial. Bing Chat began generating nonsensical responses, including advice to consume ground glass, and falsehoods about U.S. presidents being female. It even contradicted itself, denying its own prior assertions. Despite these deficiencies, it projected a comforting and reassuring demeanor, much like that of an experienced NHS consultant.
The situation continued to deteriorate: Bing Chat began displaying temperamental outbursts, even issuing threats to users. A narrative has since emerged that Microsoft had haphazardly bolted an existing chatbot, called Sydney, onto the OpenAI model and used a different training dataset. In its eagerness “to make Google dance,” the company had acted recklessly, displaying the same overconfidence exhibited by a psychopath. This confidence masks a fundamental flaw: the technology is exceedingly primitive, relying on statistical word completion.
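To make that last point concrete, here is a minimal sketch of statistical word completion, my own toy illustration rather than anything resembling OpenAI’s actual system: a bigram model that picks each next word purely from co-occurrence counts in its training text. Modern language models are vastly larger and more sophisticated, but the underlying principle, predicting a plausible continuation rather than a true one, is the same.

```python
import random
from collections import Counter, defaultdict

# "Train" on a tiny corpus: count which word follows which.
corpus = ("the doctor said the patient was fine and "
          "the patient said the doctor was confident").split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(word, length=8):
    """Extend a prompt by sampling likely next words from the counts."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Sample in proportion to observed frequency: pure statistics,
        # with no notion of whether the resulting sentence is true.
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the doctor said the patient was fine and the"
```

The output sounds fluent enough; whether it is accurate never enters the calculation, which is precisely the gap between confidence and competence that the impostor cases expose.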
Economics professor Gary Smith, who warned of the limitations of AI back in 2019, explains that because these systems have no comprehension of what an image or a sentence actually contains, they are prone to false associations and fruitless detours. Similarly, other respected critics such as Gary Marcus and Melanie Mitchell have recently published books cautioning against the pitfalls of AI, and OpenAI had already run into trouble with earlier showpiece stunts. Despite these warning signs, the AI ethics community has been slow to respond. Dominated by arts graduates, the field prefers to speculate about hypothetical future harms rather than confront the actual problems before us. This neglect mirrors that of the negligent professional committees which gave fake doctors the benefit of the doubt before returning them to their clinical duties.
The consequences of this disconnect are significant, particularly as AI continues to be endorsed by high-status individuals such as former Google chairman Eric Schmidt, who is urging its deployment in government and military settings. A policy master plan commissioned by the Tony Blair Institute even compares AI to “an alien form of intelligence we do not yet fully understand”, portraying it in Promethean terms. However, as the recent Bing Chat debacle demonstrates, what we are dealing with is not an alien intelligence but a confident impostor.