From a Seth Godin blog post a couple of days ago:
If you ask an AI a question and it’s not confident in the answer, it should say, “I’m not sure.”
That could be followed up with, “do you want me to guess” or “if you could give me more context…”
Maybe not in those exact words, but I have had AI tell me it needed more context, that it couldn’t answer a question, and that it would not comment on a matter between The New York Times and OpenAI because the two were in litigation.
I use Lou, an AI-powered publishing assistant, in my blogging. Lou is a feature on the LexBlog platform and is based upon OpenAI’s API.
AI is akin to an open-book exam for me: I still need to understand the concepts. I’m not taking what AI tells me and running with it.
I do not use AI to write my blog posts. I use it as an assistant for things such as blog titles, grammar and spelling, the wording of a sentence, tweets, LinkedIn summaries, and idea generation for portions of a post.
I do not use all of these on each post. By and large, what I receive is accurate and helpful.
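Lou’s internals aren’t public, but since it sits on OpenAI’s API, a title-suggestion request of the kind I describe above would look something like this minimal sketch. The model name, prompt, and function are my own placeholders for illustration, not LexBlog’s actual implementation.

```python
from openai import OpenAI

# Assumes the standard `openai` Python client and an OPENAI_API_KEY in the environment.
client = OpenAI()

def suggest_titles(draft: str, count: int = 3) -> str:
    """Ask the model for a few candidate blog post titles for a draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, not necessarily what Lou uses
        messages=[
            {"role": "system",
             "content": "You are a blogging assistant. Suggest concise, accurate post titles."},
            {"role": "user",
             "content": f"Suggest {count} titles for this draft:\n\n{draft}"},
        ],
    )
    return response.choices[0].message.content

print(suggest_titles("Seth Godin argues AI should say 'I'm not sure' rather than bluff..."))
```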
Godin says that, for AI, “proudly and confidently bluffing isn’t helpful.”
To which GPT responded:
Some people feel that the tone and style of AI-generated content, including ChatGPT, can be inherently optimistic or positive. This perception might stem from the AI’s design to be helpful, informative, and supportive, avoiding negative or harmful content.
Go figure.