Post by account_disabled on Feb 18, 2024 0:59:27 GMT -5
As marketers begin to use ChatGPT, Google's Bard, Microsoft's Bing Chat, Meta AI, or their own large language models (LLMs), they need to worry about "hallucinations" and how to prevent them.

IBM provides the following definition of hallucinations: "AI hallucination is a phenomenon in which a large language model (often a generative AI chatbot or computer vision tool) perceives patterns or objects that are nonexistent or imperceptible to human observers, creating results that are nonsensical or altogether incorrect. Generally, when a user prompts a generative AI tool, they want a result that adequately addresses the prompt (i.e., a correct answer to a question). However, sometimes AI algorithms produce results that are not based on training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. In other words, the model 'hallucinates' the response."

Suresh Venkatasubramanian, a Brown University professor who helped co-author the White House's Blueprint for an AI Bill of Rights, said in a CNN article that the problem is that LLMs are simply trained to "produce an answer that sounds plausible" in response to user prompts. "So in that sense, any answer that seems plausible, whether accurate or factual, made up or not, is a reasonable answer, and that's what it produces. There is no knowledge of the truth there."
He said a better behavioral analogy than hallucinating or lying, which carry connotations of something being wrong or done with bad intent, would be to compare these outputs to the way his young son told stories at age four. "You only have to say, 'And then what happened?' and he would continue to produce more stories," he added. "And he just went on and on."

Frequency of hallucinations

If hallucinations were "black swan" events, occurring only rarely, they would be something marketers should be aware of but not necessarily pay much attention to. However, according to Vectara studies, chatbots fabricate details in at least 3% of interactions (and up to 27%, despite measures taken to prevent this from happening). "We gave the system between 10 and 20 facts and asked for a summary of those facts," said Amr Awadallah, CEO of Vectara and a former Google executive, in an Investis Digital blog post. "It is a fundamental problem that the system can still introduce errors." According to the researchers, hallucination rates may be higher when chatbots perform tasks other than summarization.
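To make the kind of test Awadallah describes concrete, here is a minimal sketch of a facts-to-summary check. It is not Vectara's actual evaluation setup; it assumes the OpenAI Python SDK (v1+) with an API key in the environment, and the model name and the list of facts are placeholders.

```python
# Minimal sketch of a Vectara-style summarization test: give the model a fixed
# set of facts, ask for a summary of only those facts, then review the output
# for anything the facts do not support. Assumes the OpenAI Python SDK (v1+);
# the model name and facts are placeholders, and the final check is manual.
from openai import OpenAI

client = OpenAI()

facts = [
    "Product X launched in March 2023.",
    "Product X is sold in 14 countries.",
    "Product X's average review score is 4.2 out of 5.",
]

prompt = (
    "Summarize ONLY the facts below in two sentences. "
    "Do not add any information that is not listed.\n\n" + "\n".join(facts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # lower temperature reduces, but does not eliminate, fabrication
)

summary = response.choices[0].message.content
print(summary)
# A human reviewer still needs to confirm that every claim in the summary
# traces back to one of the listed facts.
```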
What marketers should do

Despite the potential challenges posed by hallucinations, generative AI offers many advantages. To reduce the possibility of hallucinations, we recommend:

- Use generative AI only as a starting point for writing: Generative AI is a tool, not a substitute for what you do as a marketer. Use it as a starting point, then develop prompts that answer the questions you need to complete your work, and make sure your content is always aligned with your brand voice.
- Check LLM-generated content: Peer review and teamwork are essential.
- Check sources: LLMs are designed to work with large volumes of information, but some sources may not be credible.
- Use LLMs tactically: Run your drafts through generative AI to find missing information (see the sketch after this list). If generative AI suggests something, check it first, not necessarily because a hallucination is likely, but because good marketers vet their work, as mentioned above.
- Follow the technology's evolution: Stay up to date with the latest advances in AI to continually improve the quality of your results and to stay aware of new capabilities and emerging problems, with hallucinations or anything else.
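As an illustration of the "use LLMs tactically" item above, here is a minimal sketch of running a draft through a model to flag unsupported claims and missing information. It again assumes the OpenAI Python SDK (v1+); the model name, draft, and source text are placeholders, and the model's review is itself generated text that a human editor must vet.

```python
# Minimal sketch: ask a model to compare a draft against provided source
# material and list unsupported claims or missing information, producing a
# checklist for a human editor. Assumes the OpenAI Python SDK (v1+); the
# model name, draft, and sources are placeholders.
from openai import OpenAI

client = OpenAI()

draft = "Our Q3 campaign reached 2 million users and doubled conversion rates."
sources = "Internal report: Q3 campaign reached 1.8 million users; conversions rose 40%."

review = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Compare the DRAFT against the SOURCES. List any claim in the "
                "draft that the sources do not support, and note any missing "
                "information worth adding.\n\n"
                f"DRAFT:\n{draft}\n\nSOURCES:\n{sources}"
            ),
        }
    ],
    temperature=0,
)

print(review.choices[0].message.content)  # a checklist for the editor, not a verdict
```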
Benefits of hallucinations?

However, as dangerous as they may be, hallucinations can have some value, according to FiscalNote's Tim Hwang.