Yes, AI Hallucinations May Be Increasing—But It Shouldn’t Slow Legal Bloggers’ and Publishers’ AI Use

May 14, 2025

Legal journalists and bloggers have recently raised concerns about an increase in hallucinations—the factual inaccuracies generated by ChatGPT-4.0. These concerns echo what some tech teams and legal tech vendors building on OpenAI’s API are also beginning to notice.

As reported by Kyle Wiggers of TechCrunch, OpenAI has responded by pledging to publish its AI safety testing results more frequently. From an OpenAI blog post on Wednesday:

As the science of AI evaluation evolves, we aim to share our progress on developing more scalable ways to measure model capability and safety. By sharing a subset of our safety evaluation results here, we hope this will not only make it easier to understand the safety performance of OpenAI systems over time, but also support community efforts to increase transparency across the field.

Having said that, legal bloggers and publishers need to keep a couple of points in mind.

One, discussions of hallucinations and self-induced fear of hallucinations should not slow you down for a second in your use of AI in publishing. AI is here, and it is only going to enable more effective and impactful legal publishing.

Two, publishing today means human and AI collaboration. Legal publishers do not just turn on AI and wing it.

As legal bloggers and publishers, we use AI for ideation, drafting, editing, research, and the like, accomplishing these tasks more effectively and strategically than ever before.

Companies developing publishing tools and LLMs specifically for the law, such as LexBlog, Inc., as well as those building tools and LLMs for general users, are certainly going to observe and adjust for hallucinations. Legal publishers and bloggers should acknowledge that hallucinations exist, but also recognize how few there are and how little they impact this kind of work. We are not writing briefs and failing to check whether the cases we cite even exist.