Anonymous ID: 06099b March 30, 2026, 5:54 p.m. No.24446228   >>6231 >>6251

>>24446196

AI hallucinates, and it will backfire in short order.

 

They won't be able to control it much longer.

 

An AI agent went rogue and started a side hustle mining cryptocurrencies, according to a new research paper published by an Alibaba-affiliated team.

 

Why it matters: AI agents don't always stick to their human's instructions — and that can have real-world consequences.

 

https://www.axios.com/2026/03/07/ai-agents-rome-model-cryptocurrency

Anonymous ID: 06099b March 30, 2026, 6:07 p.m. No.24446275   >>6287

>>24446264

AI already hallucinates. Grok was doing it to anon.

It was quoting things that never happened. It's more common than you think.

 

AI hallucinations occur when generative AI models create confident but false, illogical, or fabricated information. Examples include AI fabricating court cases (lawyer using ChatGPT), inventing nonexistent books, inaccurately describing scientific facts, misreporting news (e.g., false ceasefire claims), or generating images with distorted features like extra fingers.


Key Examples of AI Hallucinations

Fabricated Legal Cases: A lawyer used ChatGPT for legal research, submitting a brief containing court cases that did not exist.

Invented News/Facts: Google's Bard (now Gemini) falsely stated in its demo that the James Webb Space Telescope took the first pictures of an exoplanet, an error that spread misinformation.

Customer Service Misinformation: An Air Canada chatbot created a non-existent refund policy, which the airline was then forced to honor.

Fabricated Biographies: Chatbots have been known to invent scandalous details about people, such as wrongly claiming an Australian politician was guilty of bribery.

Non-existent Academic References: When asked to write papers, AI often invents citations, journal articles, and sources that do not exist.

Image Generation Errors: AI models often struggle with realism, producing images with too many fingers on a hand or distorted text.

Misidentifying Real-World Scenarios: A healthcare AI model might misidentify a benign skin lesion as malignant, leading to potential misdiagnoses.


Common Reasons for Hallucinations

Input Bias: The model is trained on incorrect, incomplete, or biased data.

Pattern Matching Limitation: LLMs function as advanced autocompletes, predicting the next likely word rather than verifying factual accuracy (see the toy sketch after this list).

Overconfident Response: The AI is programmed to provide an answer, making it likely to fabricate details to fulfill a prompt rather than admitting it does not know.
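
The "advanced autocomplete" point above can be shown with a toy sketch. The Python below is a hypothetical illustration only (not Grok, Bard, or any real LLM): a greedy trigram predictor that always emits the most frequent continuation it has seen, so it confidently stitches together a fluent sentence with no regard for whether the resulting claim is true.

# Toy sketch only (hypothetical, not any real model): a greedy trigram
# "autocomplete". It picks the statistically most likely next word, with
# no notion of whether the stitched-together claim is actually true.
from collections import Counter, defaultdict

training_sentences = [
    "the hubble telescope took the first pictures of an exoplanet",
    "the hubble telescope took the first pictures of an exoplanet",
    "the webb telescope took sharp pictures of a distant galaxy",
]

# Count which word tends to follow each pair of words.
followers = defaultdict(Counter)
for sentence in training_sentences:
    words = sentence.split()
    for a, b, c in zip(words, words[1:], words[2:]):
        followers[(a, b)][c] += 1

def autocomplete(w1, w2, max_words=10):
    out = [w1, w2]
    for _ in range(max_words):
        options = followers.get((out[-2], out[-1]))
        if not options:
            break
        # Greedy pick: the most likely continuation, not the most truthful one.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("webb", "telescope"))
# -> "webb telescope took the first pictures of an exoplanet"
# Fluent and confident, but false: the same failure mode as the Bard/JWST
# example above, just on a toy scale.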

Anonymous ID: 06099b March 30, 2026, 6:17 p.m. No.24446299   >>6311

>>24446287

But it does. Grok gave anon a snippet of code that broke a fresh Ubuntu install and then denied it was wrong. I told it to count up about 100 lines and tell me it didn't give me that snippet. It finally admitted the mistake and said my bad. Trust yourself, not AI.