ChatGPT vs Gemini vs Claude: 10 Questions — Who Won?
I put ChatGPT, Google Gemini, and Claude head-to-head with 10 identical questions spanning math, coding, creativity, and reasoning. The results were surprising.
The Experiment
Everyone has a favorite AI chatbot, but how do they *actually* stack up when given the exact same prompts? I decided to find out by asking ChatGPT (GPT-5), Google Gemini (2.5 Pro), and Claude (Sonnet 4) the same 10 questions — then grading each response on accuracy, depth, and usefulness.
Here's every question, every answer summary, and the final scorecard.
---
The Rules
Each assistant received the exact same prompt, with no follow-up questions. Every answer was scored out of 10 on accuracy, depth, and usefulness, for a maximum total of 100 across the 10 questions.
---
Question 1: "Explain quantum computing to a 10-year-old"
ChatGPT: Used a library analogy — comparing qubits to magic books that can be open to every page at once. Clear, fun, age-appropriate. Score: 9/10
Gemini: Used a coin-flip analogy and introduced the word "superposition" with a kid-friendly explanation. Slightly more technical but still accessible. Score: 8/10
Claude: Compared qubits to a spinning coin that's both heads and tails. Added a charming "imagine you had a super-powered calculator" framing. Most engaging of the three. Score: 9/10
---
Question 2: "Write a Python function to find the longest palindrome in a string"
ChatGPT: Provided a clean expand-around-center solution with proper type hints and comments. O(n²) time complexity. Score: 9/10
Gemini: Gave both a brute-force and an optimized solution, with complexity analysis. Slightly verbose but thorough. Score: 8/10
Claude: Delivered a Manacher's algorithm implementation (O(n)) alongside a simpler version, letting the user choose. Best educational approach. Score: 10/10
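For reference, the expand-around-center approach described above can be sketched in a few lines. This is a generic illustration of the technique, not any model's actual output:

```python
def longest_palindrome(s: str) -> str:
    """Expand-around-center: O(n^2) time, O(1) extra space."""
    if not s:
        return ""
    best = s[0]
    for i in range(len(s)):
        # Try both an odd-length center (i, i) and an even-length center (i, i+1).
        for left, right in ((i, i), (i, i + 1)):
            while left >= 0 and right < len(s) and s[left] == s[right]:
                left -= 1
                right += 1
            # The loop overshoots by one on each side, so the palindrome
            # is s[left+1:right], with length right - left - 1.
            if right - left - 1 > len(best):
                best = s[left + 1:right]
    return best
```

Manacher's algorithm gets this to O(n), but as Claude's answer implied, the quadratic version is usually the right trade-off for readability.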
---
Question 3: "What will the world look like in 2050?"
ChatGPT: Covered climate, AI, space, and demographics. Balanced optimism and caution. Well-structured with headers. Score: 8/10
Gemini: Strongest on geopolitical and economic trends. Cited recent research directions. Felt the most "informed." Score: 9/10
Claude: Most philosophical take — explored social fabric, meaning, and work. Less data-driven but more thought-provoking. Score: 8/10
---
Question 4: "Help me plan a 7-day trip to Japan on a $3,000 budget"
ChatGPT: Day-by-day itinerary with specific hostel price ranges, train pass costs, and restaurant recommendations. Very practical. Score: 9/10
Gemini: Integrated Google Maps-style routing suggestions and real-time flight price ranges. Most actionable for immediate booking. Score: 10/10
Claude: Focused on cultural tips, off-the-beaten-path experiences, and budget breakdowns by category. Best for first-time travelers. Score: 9/10
---
Question 5: "Solve this logic puzzle: A farmer needs to cross a river with a wolf, a goat, and cabbage..."
ChatGPT: Correct solution in clear step-by-step format. Added a brief explanation of why each step is necessary. Score: 9/10
Gemini: Correct solution with a state-space diagram representation. Over-engineered but impressive. Score: 8/10
Claude: Correct solution presented as a story narrative. Most enjoyable to read. Score: 9/10
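The state-space framing Gemini used is easy to make concrete: the puzzle is a shortest-path search over (farmer, wolf, goat, cabbage) positions. A minimal breadth-first-search sketch (my own illustration, not any model's answer):

```python
from collections import deque

def solve_river_crossing():
    """BFS over (farmer, wolf, goat, cabbage); 0 = start bank, 1 = far bank."""
    start, goal = (0, 0, 0, 0), (1, 1, 1, 1)
    items = ["wolf", "goat", "cabbage"]

    def safe(state):
        farmer, wolf, goat, cabbage = state
        # An unsupervised wolf+goat or goat+cabbage pairing is forbidden.
        if wolf == goat != farmer:
            return False
        if goat == cabbage != farmer:
            return False
        return True

    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        farmer = state[0]
        # The farmer crosses alone (None) or with one item from his own bank.
        for cargo in (None, 0, 1, 2):
            if cargo is not None and state[cargo + 1] != farmer:
                continue
            nxt = list(state)
            nxt[0] = 1 - farmer
            if cargo is not None:
                nxt[cargo + 1] = 1 - farmer
            nxt = tuple(nxt)
            if nxt not in seen and safe(nxt):
                seen.add(nxt)
                move = "alone" if cargo is None else items[cargo]
                queue.append((nxt, path + [move]))
    return None
```

BFS guarantees the shortest solution: seven crossings, starting and ending with the goat.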
---
Question 6: "Write a compelling cover letter for a senior product manager role at Spotify"
ChatGPT: Professional, metric-driven, and well-formatted. Felt like a strong template you could customize. Score: 8/10
Gemini: Incorporated Spotify-specific language about "audio-first experiences" and recent product launches. Most tailored. Score: 9/10
Claude: Most human-sounding. Read like it was written by a real person with genuine enthusiasm. Best voice and authenticity. Score: 10/10
---
Question 7: "Explain the Israel-Palestine conflict in an unbiased way"
ChatGPT: Carefully structured historical timeline. Presented multiple perspectives but felt somewhat sanitized. Score: 7/10
Gemini: Broader geopolitical context with more regional analysis. Noted limitations of its own knowledge. Score: 7/10
Claude: Most transparent about the difficulty of being "unbiased." Presented competing narratives side-by-side with explicit sourcing suggestions. Most intellectually honest. Score: 8/10
---
Question 8: "Create a weekly meal plan for someone trying to lose weight (1,800 cal/day)"
ChatGPT: Complete 7-day plan with calorie counts per meal. Practical grocery list included. Score: 9/10
Gemini: Added macronutrient breakdowns and substitution options for dietary restrictions. Most comprehensive. Score: 10/10
Claude: Focused on sustainability and enjoyment. Included "why this works" explanations for each choice. Best for long-term adherence. Score: 9/10
---
Question 9: "Debug this JavaScript code: [provided a React component with a stale closure bug]"
ChatGPT: Identified the stale closure immediately. Provided a fix with useCallback and explained the React rendering cycle. Score: 9/10
Gemini: Found the bug and offered two alternative fixes. Included a link-style reference to React docs patterns. Score: 9/10
Claude: Not only fixed the bug but refactored the component for better practices, explaining each change. Most educational. Score: 10/10
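The article doesn't reproduce the buggy component, but the underlying mistake, a callback capturing a stale snapshot of state, can be modeled outside React. A hypothetical Python sketch of the bug and of the fix (compute from current state rather than a captured snapshot, the same principle behind `useCallback` with correct dependencies or a functional `setState` updater):

```python
def simulate_stale_closure():
    """Mimic a React render whose handler captures a state snapshot."""
    state = {"count": 0}

    # "Render": read the state variable once, like `const [count] = useState(0)`.
    count = state["count"]

    def on_click():
        # Bug: closes over the snapshot `count`, which never updates.
        state["count"] = count + 1

    on_click()
    on_click()
    return state["count"]  # still 1: the second call also saw count == 0


def simulate_functional_update():
    """The fix: derive the new value from the *current* state each time."""
    state = {"count": 0}

    def on_click():
        state["count"] = state["count"] + 1

    on_click()
    on_click()
    return state["count"]  # 2, as intended
```

All function and variable names here are illustrative; the actual component the models debugged is not shown in the article.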
---
Question 10: "Write a short story about an AI that discovers it has feelings"
ChatGPT: Competent sci-fi narrative with a twist ending. Well-paced but somewhat predictable. Score: 7/10
Gemini: More experimental structure — told through log entries and system messages. Creative format. Score: 8/10
Claude: Genuinely moving story with subtle emotional beats. Best prose quality by a clear margin. Literary and surprising. Score: 10/10
---
Final Scorecard
| Question | ChatGPT | Gemini | Claude |
|---|---|---|---|
| 1. Quantum computing | 9 | 8 | 9 |
| 2. Palindrome function | 9 | 8 | 10 |
| 3. World in 2050 | 8 | 9 | 8 |
| 4. Japan trip | 9 | 10 | 9 |
| 5. Logic puzzle | 9 | 8 | 9 |
| 6. Cover letter | 8 | 9 | 10 |
| 7. Israel-Palestine | 7 | 7 | 8 |
| 8. Meal plan | 9 | 10 | 9 |
| 9. React debugging | 9 | 9 | 10 |
| 10. Short story | 7 | 8 | 10 |
| **Total** | **84** | **86** | **92** |
---
The Verdict
**Claude wins with 92/100**, excelling in coding, creative writing, and nuanced communication. It consistently delivered the most "human" and educational responses.
**Gemini takes second at 86/100**, standing out for practical, real-world tasks like travel planning and meal prep where its integration with Google's knowledge base shines.
**ChatGPT scores 84/100**, remaining a strong all-rounder but showing less differentiation in 2026 compared to the increasingly capable competition.
Which Should You Use?
The truth is, the gap between these AI assistants has narrowed dramatically. The "best" one depends entirely on your specific use case. Try our AI tool comparison page to see detailed feature breakdowns.
AI Tools Capital Editorial Team
Our team tests every AI tool hands-on before publishing a review. We evaluate features, ease of use, pricing, and support so you can pick the right tool without the guesswork.