>>24446264
AI already hallucinates. Grok was doing it to anon.
It was quoting things that never happened. It's more common than you think.
AI hallucinations occur when generative AI models create confident but false, illogical, or fabricated information. Examples include AI fabricating court cases (lawyer using ChatGPT), inventing nonexistent books, inaccurately describing scientific facts, misreporting news (e.g., false ceasefire claims), or generating images with distorted features like extra fingers.
Key Examples of AI Hallucinations
Fabricated Legal Cases: A lawyer used ChatGPT for legal research, submitting a brief containing court cases that did not exist.
Invented News/Facts: Google's Bard (now Gemini) falsely stated in its launch demo that the James Webb Space Telescope took the first pictures of an exoplanet, an error that was widely repeated before being caught.
Customer Service Misinformation: An Air Canada chatbot created a non-existent refund policy, which the airline was then forced to honor.
Fabricated Biographies: Chatbots have been known to invent scandalous details about people, such as wrongly claiming an Australian politician was guilty of bribery.
Non-existent Academic References: When asked to write papers, AI often invents citations, journal articles, and sources that do not exist.
Image Generation Errors: AI models often struggle with realism, producing images with too many fingers on a hand or distorted text.
Misidentifying Real-World Scenarios: A healthcare AI model might misidentify a benign skin lesion as malignant, leading to potential misdiagnoses.
Common Reasons for Hallucinations
Input Bias: The model is trained on incorrect, incomplete, or biased data.
Pattern Matching Limitation: LLMs function as advanced autocompletes, predicting the next likely word, not ensuring factual accuracy.
Overconfident Responses: Models are tuned to always produce an answer, so they tend to fabricate details to fulfill a prompt rather than admit they do not know.
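The "advanced autocomplete" point is easy to demo yourself. Here's a toy sketch (a bigram counter, nothing like a real LLM's scale, and the two-sentence corpus is made up) showing how picking the statistically likely next word can stitch fragments of training text into a confident claim that appears nowhere in the data:

```python
# Toy next-word predictor built from bigram counts (hypothetical corpus).
# Like an LLM's training objective, it only maximizes "what word usually
# follows" -- it has no notion of whether the output is true.
from collections import Counter, defaultdict

corpus = (
    "the probe took detailed pictures of mars . "
    "the telescope took the first pictures of an exoplanet ."
).split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt, n=3):
    """Greedy 'autocomplete': always append the most common next word."""
    words = prompt.split()
    for _ in range(n):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("the telescope took detailed"))
# -> "the telescope took detailed pictures of mars"
```

The corpus said the *probe* photographed Mars, but the completion fluently claims the *telescope* did: a fabricated statement assembled entirely from locally likely word transitions. Real models are vastly more sophisticated, but the failure mode is the same shape.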