“If you don’t know an answer to a question already, I would not give the question to one of these systems [AI/ChatGPT],” says Subbarao Kambhampati, a professor and researcher of artificial intelligence at Arizona State University.
That’s from a good piece published Wednesday morning by The New York Times’ Karen Weise and Cade Metz.
“Figuring out why chatbots make things up and how to solve the problem has become one of the most pressing issues facing researchers as the tech industry races toward the development of new A.I. systems.”
Weise and Metz asked ChatGPT when The New York Times first reported on “artificial intelligence.”
And the answer?
“According to ChatGPT, it was July 10, 1956, in an article titled ‘Machines Will Be Capable of Learning, Solving Problems, Scientists Predict’ about a seminal conference at Dartmouth College.”
The problem was that the 1956 conference was real – but the article was not. ChatGPT simply made it up.
“ChatGPT doesn’t just get things wrong at times, it can fabricate information. Names and dates. Medical explanations. The plots of books. Internet addresses. Even historical events that never happened.”
ChatGPT wasn’t alone; Google’s Bard and Microsoft’s Bing provided wrong answers, too.
Wrong information like this is common in AI; tech companies call it a “hallucination.”
Hallucinations, the Times reports, are a big issue when companies rely too heavily on AI for medical advice, legal advice and other information they use to make decisions.
Metz and Weise reported on one internal Microsoft document that said AI systems are “… built to be persuasive, not truthful. This means that outputs can look very realistic but include statements that aren’t true.”
Users of ChatGPT know what I mean. It’s almost addictive.
OpenAI, Google and Microsoft have developed ways to improve the accuracy of their systems, per the Times.
Because the internet is filled with untruthful information, the technology repeats the same untruths – and sometimes it produces new text, combining billions of patterns in unexpected ways. As the Times explains, “This means even if they learned solely from text that is accurate, they may still generate something that is not.”
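That point is worth making concrete. Here is a minimal sketch, entirely mine and far simpler than any real chatbot, of how a model trained only on accurate text can still emit a false statement: a toy bigram model that learns nothing but which word tends to follow which.

```python
import random

# A toy illustration, not how ChatGPT actually works: a tiny bigram
# model that only learns which word follows which. Both training
# sentences below are accurate.
corpus = [
    "Paris is the capital of France .",
    "Rome is the capital of Italy .",
]

# Tally the observed next-word choices for every word.
follows = {}
for sentence in corpus:
    words = ["<s>"] + sentence.split()
    for a, b in zip(words, words[1:]):
        follows.setdefault(a, []).append(b)

def sample():
    """Generate a sentence by repeatedly picking a plausible next word."""
    word, out = "<s>", []
    while True:
        word = random.choice(follows[word])
        if word == ".":
            return " ".join(out)
        out.append(word)

random.seed(0)  # arbitrary seed, for reproducibility
for _ in range(4):
    print(sample())
# Roughly half the samples splice the two sentences together and
# assert something false, e.g. "Paris is the capital of Italy".
```

The model never “lies”; it just recombines patterns it saw. That is the failure mode the Times describes, at a vastly larger scale.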
For lawyers, AI remains very helpful, even with hallucinations.
You just need to become more effective and strategic in your blogging on topics you know – and more effective in your legal work by using AI grounded in a data set mined from defined knowledge.
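What might a “defined knowledge” data set look like in practice? Here is a hedged sketch of the grounding idea, with hypothetical documents, a naive word-overlap score and an arbitrary threshold, none of which reflect any vendor’s actual product: the system answers only from vetted sources, cites which one it used, and refuses when nothing matches.

```python
# Sketch of answering only from a defined knowledge set. All documents
# and thresholds here are hypothetical, for illustration only.
KNOWLEDGE = {
    "doc-1": "The statute of limitations for breach of a written contract in this jurisdiction is six years.",
    "doc-2": "Service of process may be completed by certified mail with return receipt.",
}

def retrieve(question):
    """Find the best-matching document by naive word overlap."""
    q_words = set(question.lower().split())
    best_id, best_score = None, 0
    for doc_id, text in KNOWLEDGE.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best_id, best_score = doc_id, score
    return best_id if best_score >= 2 else None  # arbitrary cutoff

def answer(question):
    doc_id = retrieve(question)
    if doc_id is None:
        # Refuse rather than guess: the opposite of a hallucination.
        return "No answer found in the defined knowledge set."
    return f"{KNOWLEDGE[doc_id]} [source: {doc_id}]"

print(answer("What is the statute of limitations for a written contract?"))
print(answer("Who reported first on artificial intelligence?"))
```

Real products layer a language model on top of the retrieval step, but the principle the sketch shows, answer from vetted sources or decline, is what makes AI safer for legal work than an open-ended chatbot.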