‘Trump shooting didn’t happen,’ says Meta’s AI assistant; company blames hallucinations for incorrect response
Meta’s AI assistant incorrectly said that the recent assassination attempt on former U.S. President Donald Trump did not happen. The tech giant is blaming AI hallucination for the inaccurate response, calling the incident “unfortunate”.
Meta also denied that bias in the models could have caused the inaccurate responses.
The company further said that “it’s a known issue that AI chatbots, including Meta AI, are not always reliable when it comes to breaking news or returning information in real time” and that it is working to address the problem.
“These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems and is an ongoing challenge for how AI handles real-time events going forward,” the company said in a blog post.
Earlier, Google also denied claims that its search autocomplete feature was censoring results about the assassination attempt.
Donald Trump, the Republican presidential nominee, has been a vocal critic of tech companies. In a post on Truth Social, Trump said, “Here we go again, another attempt at RIGGING THE ELECTION!!!”, and asked his followers to “Go after Meta and Google”.
Hallucination in AI chatbots occurs when a machine produces convincing but entirely made-up answers. The phenomenon is not new: developers have long warned that AI models can assert completely untrue facts with confidence. The episode highlights how difficult it is to overcome what large language models are inherently designed to do, which is to generate statistically plausible text from patterns in their training data, whether or not it corresponds to the truth.
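To make the mechanism concrete, the sketch below uses the small open-source GPT-2 model via the Hugging Face transformers library purely for illustration; Meta AI’s internals are not public, and nothing here should be read as its actual implementation. The model simply continues a prompt with statistically likely tokens and has no way of knowing about an event that postdates its training data, so any “details” it produces are fabricated.

```python
# A minimal sketch of why language models hallucinate: generation just
# continues a prompt with plausible tokens, with no built-in check that
# the output is true. GPT-2 (training data collected before 2020) is used
# only as an illustration; production assistants use far larger models
# but the same generation mechanism.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

# GPT-2 cannot know anything about a July 2024 event, yet it will still
# produce a fluent-sounding continuation of the prompt.
prompt = "The attempted assassination of Donald Trump in July 2024"
outputs = generator(prompt, max_new_tokens=40, do_sample=True)

print(outputs[0]["generated_text"])  # fluent, confident, and fabricated
```

Nothing in the sampling step consults a source of truth, which is why vendors increasingly pair chatbots with retrieval or live search when handling breaking-news queries.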