---
title: "Claude's Limits: What It Knows, What It Doesn't, and When Not to Trust It"
description: "Claude is powerful but not infallible. Learn about hallucinations, knowledge cutoff, context limits, and how to use it critically to avoid costly mistakes."
slug: "1-4-limiti-claude"
enslug: "1-4-claude-limitations"
accesslevel: public
status: published
visible: true
featured: false
priority: 14
date: 2026-04-22
updated: 2026-04-22
author: "Dario Santocanale"
reading_time: "7 min"
prerequisites:
  - Having sent at least a few messages to Claude
tags: [claude, limits, hallucinations, knowledge-cutoff, beginner, critical]
---
Claude is one of the most capable AI assistants available today. But like any powerful tool, it works well only if you know what it can do — and especially what it can't. This tutorial saves you from costly mistakes.
---
## The hallucination problem
Hallucination is the technical term for when an AI model generates false information presented confidently as true. Claude can:
- Cite scientific studies that don't exist
- Invent names of people, companies, products
- Get dates, numbers, statistics wrong
- Describe laws or regulations inaccurately
- Generate code that looks correct but has subtle bugs
The insidious part is that it does this with the same confident tone it uses for accurate information. There's no stylistic difference between a true response and a hallucinated one.
### How to reduce hallucinations
- Explicitly ask for uncertainty: "If you're not sure, tell me." Claude tends to be more cautious when asked.
- Ask for sources: when Claude cites sources, its claims are easier to verify. Keep in mind that the source itself can be invented, so always check that it exists.
- Break into sub-problems: smaller, more specific prompts produce fewer hallucinations than huge ones.
- Use Claude as a starting point, not an endpoint. Verify on official sources for critical data.
---
## The knowledge cutoff
Claude was trained on data up to a certain date, called its knowledge cutoff. It knows nothing about anything that happened after that date, unless you supply the information in the prompt.
What this means in practice:
- It doesn't know recent news events
- It doesn't know new laws, regulations, products released after its cutoff
- Its information on companies, prices, technologies may be outdated
- It doesn't know the release of new AI models (including its own updates)
### How to handle it
Bring the missing context into the prompt yourself:

```
I'm working with React 19 (released late 2024).
The useFormStatus hook changed compared to React 18.
With this in mind, how do I handle [problem]?
```

```
Law X was updated in 2025 with these changes: [summary].
With this in mind, can you analyze [document]?
```
If you have access to Claude with web search (available in some plans), you can enable it to get updated information.
---
## It doesn't remember previous conversations
Every new chat starts from scratch. Claude doesn't remember:
- What you told it yesterday
- Your name, company, preferences
- Documents you sent in previous chats
- Instructions you gave in past sessions
### Solutions
For now (free plan):
- Keep the same chat open for an entire project
- At the start of each new chat, paste a standard "context profile":
```
Context: I'm a freelance marketing consultant, working mainly
with US SMBs in the manufacturing sector. My communication
style is direct and pragmatic.
```
### With Claude Pro: Projects
With Projects (available with subscription), you can set permanent instructions and upload reference files that Claude will "remember" across all project conversations. We cover this in the Projects tutorial.
---
## It doesn't browse the internet (by default)
The base Claude has no real-time internet access. It can't:
- Open URLs you send
- Search for updated information
- Check if a website is working
- Read content from links
If you want it to analyze a webpage's content, copy and paste the text directly into the prompt.
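One practical workaround: save the page's HTML and extract the text yourself before pasting it into the prompt. A minimal sketch using only the Python standard library (the sample HTML is made up for illustration; note this naive approach also keeps text from `<script>` and `<style>` blocks):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text nodes of an HTML document, dropping the tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Keep only non-whitespace text fragments
        if data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

# Hypothetical saved page source
page = "<h1>Pricing</h1><p>Pro plan: $20/month.</p>"
print(html_to_text(page))  # Pricing Pro plan: $20/month.
```

The result is plain text you can paste into the chat along with your question.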
---
## It doesn't perform real-world actions
Claude generates text. It can't:
- Send emails for you
- Post on social media
- Access files on your computer
- Make purchases
- Interact with external applications
Integrations exist (e.g., Claude via API with external tools, or Zapier/Make workflows) that can connect Claude to other systems, but this isn't a standard chat feature.
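As a rough illustration of what an API integration involves, here is a sketch that builds the JSON body the Anthropic Messages API expects for a single user message. The model name and token limit are example values, and actually sending the request would need an API key and an HTTP call, both omitted here:

```python
import json

def build_claude_request(user_message: str,
                         model: str = "claude-3-5-sonnet-20241022",
                         max_tokens: int = 1024) -> str:
    """Return the JSON body for a POST to the Messages API endpoint."""
    payload = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

# Hypothetical automation step: summarize incoming text
body = build_claude_request("Summarize this email thread: ...")
print(body)
```

Tools like Zapier or Make build this kind of request for you behind the scenes; the point is that the plumbing lives outside the chat interface.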
---
## Context window: short-term memory
Claude "remembers" everything in the current conversation — but only up to a point. This is called the context window and is measured in "tokens" (approximately words).
When a conversation becomes very long, Claude might:
- "Forget" the first instructions in the chat
- Lose consistency on points discussed many messages earlier
- Respond less accurately
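A common rule of thumb (not exact, since tokenization varies by model) is that one token is about four characters of English text. The sketch below uses that heuristic to estimate how much of a context window a long conversation consumes; the 200,000-token window size is an assumption for illustration, so check your plan's actual limit:

```python
def estimate_tokens(text: str) -> int:
    """Back-of-the-envelope token count: ~4 characters per token."""
    return max(1, len(text) // 4)

def context_budget_used(conversation: list, window: int = 200_000) -> float:
    """Fraction of an assumed 200k-token window the chat consumes."""
    used = sum(estimate_tokens(message) for message in conversation)
    return used / window

# Hypothetical long chat: a few big pasted documents
chat = ["Explain the marketing plan. " * 50, "Here is the draft... " * 200]
print(f"{context_budget_used(chat):.2%} of the window used")
```

When the estimate creeps toward 100%, it's a good moment to start a fresh chat and paste in a short summary of what matters.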
---
## When NOT to use Claude
| Situation | Why not | Alternative |
|---|---|---|
| Today's news | Doesn't know it | Google News, news sites |
| Complex math calculations | Can get it wrong | Calculator, Wolfram Alpha, Python |
| Medical diagnoses | Can hallucinate symptoms/drugs | Doctor |
| Binding legal advice | Not a lawyer | Lawyer |
| Current prices / quotes | Outdated data | Official site, broker |
| Real-time data (weather, stocks) | No internet | Dedicated apps |
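For the math row in particular, the fix is cheap: let Claude set up the calculation, then run the numbers yourself. A small example in Python (the figures are made up):

```python
# Verify a multi-step calculation instead of trusting Claude's
# arithmetic, e.g. compound interest, where a language model can
# slip on rounding or order of operations.
principal = 10_000.0   # hypothetical initial amount
annual_rate = 0.05     # 5% per year
years = 10

final = principal * (1 + annual_rate) ** years
print(round(final, 2))  # 16288.95
```

Claude is good at writing this kind of check for you; running it is what makes the number trustworthy.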
---
## The golden rule
Treat Claude like a very intelligent collaborator who has read a lot but might remember something wrong, and who hasn't read anything from the last several months.
Great for: reasoning, structuring, writing, analyzing text you provide, generating ideas. Always verify: specific facts, numbers, dates, citations, regulations, production code.
---
Up next → 1-5-piani-prezzi