🧠 Why ChatGPT Hallucinates (And How to Fix It)
Hello,
Welcome to another issue of The Digital Sovereign — where we break down how to use AI, tech, and digital leverage to think smarter, earn more, and live independently.
Let’s talk about one of AI’s biggest problems — and the simple fix that might finally solve it 👇
🤯 Why ChatGPT Still Hallucinates

We’ve all been there.
You ask ChatGPT a question.
It gives a confident answer…
…and it’s completely wrong. 😅
This is called hallucination, and it's been a core issue with large language models (LLMs) since day one.
But OpenAI just dropped a research paper that points to something incredibly simple as the fix:
👉 Teach AI it’s okay to say “I don’t know.”
🔍 What They Found
OpenAI researchers discovered that the way AI is trained rewards confident guessing, even when it’s completely wrong.
Here’s the kicker:
If a model guesses right — it gets full points.
If it says “I don’t know” — it gets zero.
So, models learn to always guess. Even if they have no clue.
When tested with factual questions like birthdays or dissertation titles, models confidently gave different wrong answers every time.
That’s a problem.
But OpenAI’s new idea? Change the training.
🛠️ New Proposal: Penalize models more for confidently wrong answers than for saying “I don’t know.”
This encourages honesty > overconfidence.
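As a toy illustration (my own sketch, not OpenAI's actual training objective — the specific point values are invented), the asymmetric scoring rule might look like this:

```python
def score(answer: str, correct: str) -> float:
    """Toy scoring rule: full credit for a right answer, zero for
    abstaining, and a penalty for a confident wrong guess.
    The exact values are illustrative only."""
    if answer == "I don't know":
        return 0.0    # abstaining is neutral
    if answer == correct:
        return 1.0    # a right answer earns full credit
    return -2.0       # a wrong guess now costs points

# Under the old scheme (wrong guess scores 0, same as abstaining),
# guessing always weakly dominates saying "I don't know". Under this
# scheme, guessing only pays off above a break-even confidence:
#   p * 1 + (1 - p) * (-2) > 0   =>   p > 2/3
def should_guess(confidence: float) -> bool:
    return confidence > 2 / 3
```

The key design choice is the asymmetry: as long as a wrong answer costs more than an abstention, the model has an incentive to admit uncertainty instead of bluffing.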
💡 Why It Matters
If this training shift works, we could see models that know their limits — a huge win for trust and reliability in critical fields like medicine, law, and research.
It’s a reminder that AI quality isn’t just about power. It’s about principles.

⚖️ Anthropic Pays $1.5B for Pirated Books
Anthropic, creator of Claude, just became the first AI company to cough up serious cash for using pirated books in training data.
The cost? A staggering $1.5B settlement with authors.
📚 What Happened:
They scraped over 7 million books from shadow libraries like LibGen.
Authors sued. The court said: not okay.
Anthropic now owes $3,000 per book, plus more if additional files are found.
They also must delete all pirated training data. No reuse. No loopholes.
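A quick back-of-the-envelope check (my arithmetic, not a figure from the court filings): at $3,000 per book, the $1.5B headline implies roughly 500,000 covered works — a fraction of the 7 million scraped, which is consistent with the payout applying only to qualifying works and growing "if additional files are found."

```python
# Back-of-the-envelope check of the settlement figures cited above.
settlement_total = 1_500_000_000   # reported $1.5B settlement
per_book = 3_000                   # reported payout per book
implied_books = settlement_total // per_book
print(f"Implied number of covered books: {implied_books:,}")  # 500,000
```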
🤔 Why It’s Big
This is the first real legal fallout from AI training on copyrighted work.
It doesn’t settle the broader “fair use” debate — but piracy? That’s a hard no.
This changes how companies will build future datasets — and it’s a wake-up call for startups relying on scraped content.
🔧 OpenAI Is Building Its Own Chips (Goodbye, Nvidia?)

Nvidia's chips power nearly every major AI model — but OpenAI just made a big move to break free.
They're teaming up with Broadcom to mass-produce custom AI chips starting next year.
🔋 Why?
OpenAI is running into GPU shortages.
They need way more compute for GPT-5.
Custom chips = more power, lower costs, less dependency.
Other giants like Google, Meta, and Amazon already do this.
Now OpenAI’s joining the “own your stack” club.
This could also shift the balance in the AI hardware race — and cut Nvidia’s market dominance.
🛠️ 4 AI Tools Worth Checking Out
EmbeddingGemma
Google’s open-source, on-device embedding model — great for local AI apps.
Lovable Voice Mode
Code entire apps just using your voice. Seriously. Like Jarvis for devs.
Qwen3-Max
Alibaba’s new 1-trillion-parameter beast. Open weights, massive scale.
Higgsfield Ads 2.0
Realistic AI-generated product placements. Marketing just got way easier.
👀 Quick Takes
🛑 AI Surveillance Under Fire
Gabriel Weinberg, founder of DuckDuckGo, has sparked debate by calling for a total ban on AI-driven surveillance.
He argues that current data collection practices — especially those involving AI — are eroding privacy at scale.
His stance: even anonymized data can often be re-identified, making today’s AI surveillance tech fundamentally dangerous for personal freedom and civil rights.
Expect growing pressure on policymakers to act.
📉 Goldman Sachs Predicts an AI Slowdown
Goldman Sachs is signaling a potential slowdown in AI-fueled market growth, especially in the S&P 500.
While AI has boosted productivity and valuations over the past two years, they warn that we may be reaching a plateau. With infrastructure costs rising and real-world applications still maturing, the “AI hype curve” could start flattening — at least in the short term.
Long-term potential remains strong, but near-term expectations may need a reset.
💡 Takeaway
We’re entering an era where AI accuracy matters more than AI hype.
Whether it’s:
Fixing hallucinations,
Facing legal consequences,
Or building your own chips...
The winners in AI won’t be the ones shouting the loudest.
They’ll be the ones who build with trust, control, and clarity.
Until next time,
The Digital Sovereign
P.S.
Want practical, business-focused AI insights in your inbox?
Check out The AI Report - the #1 AI newsletter for executives, now trusted by 400,000+ professionals.
No fluff. No jargon. Just actionable updates to help you stay ahead.

