Welcome, our computer overlords.
When many people think of artificial intelligence, they think of sci-fi movies where a former Governor of California warns, “I’ll be back!”
In reality, artificial intelligence (AI) is the development of computer systems able to perform tasks that normally require human input. AI covers many areas, such as voice assistants, facial recognition, and real-time translation. The culmination of some of those areas is the chatbot.

Chatbots are able to answer questions in a human-like manner, write papers, and hold natural conversations; the leader in the chatbot space is OpenAI’s ChatGPT. Google subsidiary DeepMind says it will launch a ‘safe’ ChatGPT rival soon.
DeepMind has been a pioneer in AI research for the last decade and was acquired by Google nine years ago. However, with ChatGPT stealing the recent headlines, DeepMind CEO Demis Hassabis is considering releasing a beta of its own chatbot, called Sparrow, in the fall of 2023. Sparrow was introduced to the world last year as a proof of concept in a research paper that described it as a “dialogue agent that’s useful and reduces the risk of unsafe and inappropriate answers”.
Despite some misgivings about the potential dangers of chatbots, which DeepMind says include “inaccurate or invented information”, it seems that Sparrow could be ready to take flight. The slight delay to Sparrow’s launch is due in part to DeepMind’s insistence that it be able to cite its sources, an ability ChatGPT lacks. Any insensitive answers, unforeseen errors, or bad press from DeepMind’s Sparrow will fall at the feet of its parent company, Google. For this reason, Hassabis believes “it’s right to be cautious”.
Sparrow will initially be more constrained and conservative than ChatGPT. The latter has gone viral with its impressive ability to help everyone from coders to armchair poets, but it has also caused alarm with its capacity for discriminatory comments and malware-writing skills. DeepMind has talked up the behavior-constraining rules that Sparrow is built on, along with its willingness to decline to answer questions in “contexts where it is appropriate to defer to humans”. In early tests, Sparrow apparently provided a plausible answer and, crucially, supported it with evidence “78% of the time when asked a factual question”.
Are chatbots ready for the mainstream?
There are going to be significant problems with the use of OpenAI (ChatGPT) tech over time; we will do our best but will not successfully anticipate every issue.
— Sam Altman, CEO of OpenAI
While debating who is the greatest rapper of all time with ChatGPT is fun, AI chatbots also need moral intelligence and an ability to cite sources – and that’s where DeepMind says its Sparrow ‘dialogue agent’ is strongest. Taking this to the next level will need tons of external input, which is why a Sparrow public beta is imminent. DeepMind says that developing better rules for its AI assistant “will require both expert input on many topics (including policymakers, social scientists, and ethicists) and participatory input from a diverse array of users and affected groups”.
Sam Altman, CEO of OpenAI, has similarly talked about difficulties in opening up AI chatbots without causing collateral damage. On Twitter he admitted, “there are going to be significant problems with the use of OpenAI tech over time; we will do our best but will not successfully anticipate every issue.”
Ensuring these new gatekeepers give correct answers is of utmost importance at a time when the internet is the primary source of information for many. Chatbots delivering answers in a conversational manner can give users a false sense of security. People often let their guard down when they feel secure, accept the first reply as truth, and refrain from doing further research.
In short, chatbots aren’t ready for prime time, but they are preparing for their big debut.