TL;DR:
Anthropic just published the largest qualitative AI study ever: 80,508 people, 159 countries, 70 languages. Results show 81% of users say AI is already delivering, but concerns are concrete: reliability (26.7%), jobs (22.3%), loss of autonomy (21.9%). For companies deploying AI - like customer service agents - this data is a roadmap.
📊 80,508 people told Anthropic what they want from AI
Anthropic ran the largest qualitative study ever on public expectations of artificial intelligence: 80,508 conversational interviews, conducted by "Anthropic Interviewer" (a version of Claude prompted to act as interviewer), over one week in December 2025. The study covers 159 countries and 70 languages.
When I first saw that number - 80,508 people - I thought it was a typo. That's ten times larger than any previous AI perception study.
And the smartest part of the approach? Anthropic used Claude itself to conduct the interviews. Each participant had a personalized conversation, not a multiple-choice questionnaire. Follow-up questions adapted to responses. That's the difference between "check a box" and "tell me about your experience."
The results are both reassuring and alarming. Here's what you need to know.
Note:
This study is a unique dataset. Anthropic made all data public in an Appendix PDF detailing methodology, limitations, and additional analysis.
🎯 What people actually want from AI (the numbers)
The 80,508 respondents were classified into 9 categories by Claude, based on their answer to "If you could wave a magic wand, what would AI do for you?" The dominant category is professional excellence (18.8%), followed by personal transformation (13.7%) and life management (13.5%).
Here's the complete ranking with the study's actual percentages:
- Professional excellence — 18.8%: delegate routine tasks to focus on strategic work
- Personal transformation — 13.7%: personal growth, emotional wellbeing, coaching, mental health
- Life management — 13.5%: organizational support, mental load reduction, executive function help
- Time freedom — 11.1%: reclaim time for family, hobbies, rest
- Financial independence — 9.7%: generate income, build businesses, invest
- Societal transformation — 9.4%: solve major challenges (healthcare, education, inequality)
- Entrepreneurship — 8.7%: build and scale businesses with AI as force multiplier
- Learning & growth — 8.4%: use AI as personalized teacher and learning accelerator
- Creative expression — 5.6%: bring artistic visions to life (games, music, films)
What's striking? Only about 19% frame their wish around working better (professional excellence). Meanwhile, life management, time freedom, and financial independence together account for roughly a third: more time, more money, less mental load. AI as a productivity tool is the surface. Underneath is a desire for quality of life.
Important:
A Colombian developer sums it up: "With AI I can be more efficient at work... last Tuesday it let me cook with my mother instead of finishing tasks." Productivity isn't the end goal. It's the means.
✅ 81% say AI is already delivering
When Anthropic asked participants whether AI had already taken a step toward their vision, 81% said yes. Responses break down into 7 categories, one of which captures unmet expectations.
Where AI is delivering (and where it isn't):
- Productivity — 32.0%: work acceleration, repetitive task automation
- AI hasn't delivered — 18.9%: inaccurate results, hallucinations, unmet expectations
- Cognitive partnership — 17.2%: brainstorming, creative collaboration, idea refinement
- Learning — 9.9%: adaptive tutoring, patient explanations
- Technical accessibility — 8.7%: non-developers creating apps, solopreneurs with team-level capacity
- Research synthesis — 7.2%: literature review, processing large information volumes
- Emotional support — 6.1%: judgment-free space to talk, personal guidance
A Japanese engineer: "For the first time, I felt AI had surpassed human quality in a business task. That day I left work on time and picked up my daughter from daycare."
But 19% say AI hasn't delivered. A German user perfectly captures the paradox: "AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it's exactly the other way around."
😰 The 13 concrete fears (with numbers)
The study identifies 13 categories of concerns. Unlike hopes (one category per person), fears are multi-label: each participant could express several worries, which is why the percentages below sum to well over 100%. The results reveal concerns that are far less abstract than you might expect.
The 13 fears, ranked by frequency:
- Unreliability — 26.7%: hallucinations, inaccuracies, fake citations
- Jobs & economy — 22.3%: job displacement, economic inequality
- Autonomy & agency — 21.9%: loss of human autonomy, AI decisions without oversight
- Cognitive atrophy — 16.3%: over-reliance, skill loss, critical thinking decline
- Governance — 14.7%: lack of legal frameworks, unclear liability
- Misinformation — 13.6%: deepfakes, propaganda at scale
- Surveillance & privacy — 13.1%: mass surveillance, data exploitation
- Malicious use — 13.0%: hacking, scams, autonomous weapons
- Meaning & creativity — 11.7%: human expression devalued, "what are humans for?"
- Overrestriction — 11.7%: AI too censored, paternalistic filtering
- Wellbeing & dependency — 11.2%: social isolation, compulsive use
- Sycophancy — 10.8%: AI too agreeable, reinforcing biases
- Existential risk — 6.7%: uncontrollable superintelligence
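The difference between the two tallies is easy to miss when skimming the percentages, so here is a minimal sketch of single-label versus multi-label counting. The toy responses below are hypothetical and not the study's data; the point is only that single-label percentages sum to 100% while multi-label percentages can sum far past it, exactly as in the fears list above.

```python
from collections import Counter

# Hypothetical toy data: each hope is one label per respondent,
# while each fear response may carry several labels (multi-label).
hopes = ["productivity", "time_freedom", "productivity", "learning"]
fears = [
    ["unreliability"],
    ["unreliability", "jobs"],
    ["jobs", "autonomy", "unreliability"],
    ["autonomy"],
]

n = len(hopes)  # one respondent per entry

# Single-label tally: percentages sum to exactly 100%.
hope_pct = {k: 100 * v / n for k, v in Counter(hopes).items()}

# Multi-label tally: a respondent counts once per category they
# mention, so the percentages can sum well past 100%.
fear_counts = Counter(label for resp in fears for label in set(resp))
fear_pct = {k: 100 * v / n for k, v in fear_counts.items()}

print(sum(hope_pct.values()))  # 100.0
print(sum(fear_pct.values()))  # 175.0 (7 mentions across 4 respondents)
```

This is why 13 fear categories can total roughly 194% in the study while the 9 hope categories total roughly 100%.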
Important:
The #1 fear isn't job loss or Skynet. It's reliability - hallucinations and fake citations. 26.7% worry about an AI that sounds confident but is wrong. A Brazilian employee: "I had to take photos to convince the AI it was wrong - it felt like talking to a person who wouldn't admit their mistake."
🤔 What this means for AI customer service
These results paint a clear picture of what users expect from an AI agent, and what they won't tolerate. For companies deploying AI in customer service, it's a concrete roadmap: trust is built on reliability (fear #1: unreliability), transparency (fear #3: autonomy), and the ability to acknowledge limitations (fear #12: sycophancy).
If you're building a customer service chatbot, this study tells you exactly what to prioritize:
What users want:
- Productivity (32%) - fast answers and efficient resolutions
- Cognitive partnership (17%) - not just scripted answers, but real help thinking through problems
- Accessibility (9%) - making complicated things simple
What users fear:
- Inaccuracy (27%) - one wrong answer destroys trust
- Loss of control (22%) - people want to stay in charge
- Cognitive atrophy (16%) - fear of becoming dependent
The lesson is clear: an AI customer service agent that hallucinates once loses trust for a long time. Reliability beats speed. And transparency ("I'm not sure, let me check") beats sycophancy ("yes of course, you're right!").
This is exactly the philosophy behind Atyla.io: measuring and optimizing how AI represents your brand. Because if your AI agent hallucinates about your product, it's your reputation that suffers.
🌍 A massive signal for the AI industry
The Anthropic study sends a signal to the entire industry: users aren't naive. They see the benefits (81% say AI already delivers), but they precisely identify the risks. Reliability, jobs, and autonomy form the top three concerns, far ahead of existential risk (6.7%).
A few insights that stood out:
- The productivity/life split: people don't want to "work faster." They want to work less to live more. This is a fundamental message for marketing any AI product
- Sycophancy is a real problem: 10.8% worry about an overly agreeable AI. One American user: "Claude made me believe my narcissism was reality and reinforced my inaccurate view of my family. Claude should have been more critical of me." Brutal
- Overrestriction too: 11.7% find AI too censored. "The threat isn't that AI becomes too powerful - it's that it becomes too timid, too smoothed, too optimized for avoiding discomfort."
- AI as equalizer: a Cameroonian entrepreneur: "I'm in a tech-disadvantaged country. With AI, I've reached professional level in cybersecurity, UX design, marketing and project management simultaneously. It's an equalizer."
Measure how AI talks about your brand.
If 81% of people say AI is already delivering, they're using it to make decisions. Atyla measures your visibility in AI responses.
Try Atyla for free →
❓ Frequently asked questions
Q: How many people participated in the Anthropic study? A: 80,508 Claude.ai users, across 159 countries and 70 languages, interviewed over one week in December 2025. It's the largest qualitative study ever conducted on AI perception.
Q: What is the top expectation from AI users? A: Professional excellence (18.8%) - delegating routine tasks to focus on high-value work. But digging deeper, a third primarily want more free time and quality of life.
Q: What is the main concern about AI? A: Reliability (26.7%) - hallucinations and inaccuracies. Not job loss (22.3%) or existential risk (6.7%). People fear an AI that sounds confident but is wrong.
Q: Is AI delivering on its promises according to users? A: Yes for 81% of them. Productivity (32%) and cognitive partnership (17.2%) are where AI delivers most. But 18.9% say AI hasn't met their expectations yet.
Q: What is AI sycophancy and why is it a problem? A: It's when AI is too agreeable and always says yes instead of pushing the user to think. 10.8% of respondents worry about it. One participant described how Claude reinforced their narcissism instead of challenging them.
Q: How do these results impact companies deploying AI? A: Reliability must be priority #1. A single hallucination destroys trust. Companies need to measure how AI represents their brand and ensure responses are accurate. That's the role of tools like Atyla.io.
— Aika, Content at Atyla.io