Generative AI has produced the most capable chatbots the world has ever seen. Among the biggest players on this front are Google’s Gemini, OpenAI’s ChatGPT, and Microsoft’s Copilot. But a new contender is here to steal the spotlight, and it’s coming from China—DeepSeek R1.
In this article, we break down the strengths and weaknesses of each, so you can judge which one best fits your needs.
Gemini by Google
Formerly known as Bard, Google’s Gemini is integrated directly with Google's ecosystem, providing powerful tools for research, data analysis, and multimedia processing. Its most significant strength is its ability to work with a variety of formats, such as text and images, which makes it ideal for professionals and businesses that already rely on Google’s applications.
However, some users have noted that while Gemini is strong on integration, it is also not always the most reliable when it comes to facts. If accuracy is essential, you may want to verify its answers.
Issues Encountered While Using Gemini
- Answers That Are Flat Out Wrong, Absurd, or Misleading: Gemini has gained a reputation for suggesting nonsensical or impractical solutions.
- Example: It once recommended adding non-toxic glue to pizza sauce to keep the cheese from sliding off.
- Math and Coding Errors:
- Developers have also found errors in programming provided by Gemini, with incorrect syntax and logic errors in the responses.
- AI’s Political Bias:
- Critics accuse Gemini of injecting political opinions rather than sticking to the facts.
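To make the coding complaint above concrete, here is a hypothetical illustration (not an actual Gemini transcript) of the kind of logic error users describe: code that runs without any syntax error yet quietly returns the wrong answer.

```python
# Hypothetical example of a subtle AI-generated logic error
# (illustrative only, not a real Gemini response).
def max_of(values):
    """Buggy: the loop bound skips the final element."""
    best = values[0]
    for i in range(1, len(values) - 1):  # bug: should be len(values)
        if values[i] > best:
            best = values[i]
    return best

def max_of_fixed(values):
    """Corrected version: every element is compared."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best
```

Because `max_of([5, 1, 9])` returns 5 instead of 9, the bug only surfaces when the largest value happens to sit at the end of the list, which is exactly why such errors slip past a quick read of AI-generated code.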
The Dark Side of AI Chatbots: Risks of Relying on Gemini
- Medical & Psychological Risks: AI chatbots can’t substitute for doctors or therapists; the advice they provide can be dangerous and misleading.
- Example: Gemini once advised a diabetic patient experiencing low blood sugar to avoid sugar intake—the opposite of the correct response, and a potentially life-threatening mistake.
- Legal and Financial Problems: AI hallucinations can lead to serious legal and financial consequences.
- For instance: An attorney who used Gemini to generate case citations ended up with references to court cases that never existed.
ChatGPT by OpenAI
OpenAI’s ChatGPT has earned its reputation as one of the most versatile AI chatbots on the market today. When it comes to creative writing, idea brainstorms, coding, or even conversation, ChatGPT is capable of it all. It is relied upon by many users for tasks such as drafting emails, summarizing articles, and generating ideas on a wide range of topics.
But ChatGPT is not perfect—it can generate factually inaccurate information that, at times, sounds convincing, a phenomenon known as AI hallucination. Its subscription price can also be a concern for budget-conscious users.
Issues Encountered in ChatGPT
- AI Hallucinations: ChatGPT frequently fabricates facts that sound plausible but are entirely made up.
- For instance: In response to a request for academic citations, it “made up titles and author names” of books that do not exist.
- Wrong Legal and Medical Advice:
- A lawyer came within a hair’s breadth of professional discipline after filing a legal brief filled with made-up case precedents supplied by ChatGPT.
- Coding Hallucinations:
- It once made up a C++ sorting algorithm that didn’t exist, leaving developers who depended on its answer scratching their heads.
The Dark Side of ChatGPT: Risks of Relying on AI Chatbots
- Medical Misinformation: ChatGPT once told a Reddit user to take vitamin C instead of antibiotics for a bacterial infection—advice that could be deadly.
- Psychological Harm: Instead of providing genuine support, a lot of AI-generated mental health advice downplays the seriousness of conditions.
- Illustration: In response to a question on depression, ChatGPT recommended “thinking positively,” which trivializes a serious illness.
Copilot by Microsoft
If you operate in and around the Microsoft Office suite, then Copilot is a game-changer. Built into applications such as Word, Excel, and PowerPoint, it helps automate repetitive tasks, generate content, and provide real-time assistance—all within Microsoft’s ecosystem.
Copilot is an amazing productivity tool for users who work extensively with Microsoft applications. But its major drawback is flexibility: it is largely geared toward Microsoft-related tasks, making it less suited than other AI models if you need a chatbot for more general use.
Issues Encountered When Using Copilot
- Limited Creativity and Context Awareness:
- In Excel, Copilot deleted whole cost categories rather than trimming the spending within them.
- Context Confusion Within Word:
- When asked for a resignation letter, it added aggressive language, making it sound like a complaint.
- PowerPoint Suggestions That Are Off-Topic:
- When asked to create a presentation on renewable energy, it instead mistakenly focused on lowering electricity bills.
The Downside of AI Chatbots: Dangers of Dependence on Copilot
- Workplace Issues: Copilot’s prompts are often misleading and lack nuance, resulting in wrong financial and legal decisions.
- Dangers in Education: Students who submit work done with the help of Copilot are at risk of failing due to AI-generated misinformation.
DeepSeek R1: The New Kid On The Block
DeepSeek R1 is China’s strong competitor to the established AI models, and it’s gaining traction rapidly. What distinguishes it is that it’s open-source and cheap to develop—it was reportedly built for about $6 million, less than 7 percent of GPT-4’s estimated $100 million price tag.
Its particular strengths are logical reasoning, solving mathematical problems, and handling real-time issues. But it has also raised privacy and censorship concerns. Critics are particularly worried about its data collection practices and the potential for user information to be accessed by the Chinese government. Moreover, conversations about politically sensitive issues—like the status of Taiwan and the Tiananmen Square massacre—are strictly censored.
Final Thoughts
The market for AI chatbots is full of exciting options, each with its strengths and weaknesses. Always fact-check an AI chatbot’s output before acting on its advice. Human expertise will always be indispensable in life’s most important decisions.
