Google’s parent company, Alphabet, has a clear message for users: don’t believe everything AI tells you.
In an exclusive interview with the BBC, CEO Sundar Pichai explained that modern AI models are powerful but still “prone to errors,” and should never be used as the only source of truth.
Pichai stressed the importance of keeping a diverse information ecosystem, where tools like Google Search still play a crucial role in providing accurate, verified details.
AI Isn’t Perfect — And Google Admits It
Although AI tools can help with creativity, brainstorming, and quick summaries, Pichai emphasized that users must understand their limits.
According to him:
- AI is helpful for tasks like writing and idea generation
- But it should not be trusted blindly
- Current AI models can still generate inaccurate or misleading answers
Google displays disclaimers on its AI features to remind users of possible mistakes, but even that hasn’t stopped the criticism. One example was the rollout of AI Overviews, which faced heavy backlash for giving bizarre and incorrect search summaries.
Experts argue that tech companies must take more responsibility instead of asking users to fact-check AI outputs.
Professor Gina Neff of Queen Mary University of London explained the danger: AI systems often “make up answers to please us.” That might be harmless for movie recommendations, she said, but it becomes risky for sensitive topics like health, science, or news.
The Push Toward Better, Faster AI
Despite the concerns, Google continues to evolve its AI ecosystem. The company recently launched Gemini 3.0, its latest consumer AI model, calling it “a new era of intelligence.”
Gemini 3.0 promises:
- Better reasoning abilities
- Smarter responses across text, images, audio, and video
- Deeper integration into Google Search through the new AI Mode
This rollout is Google’s major move to stay competitive against rapidly growing platforms like ChatGPT.
AI Still Struggles With News Accuracy
Earlier research from the BBC highlights a major issue: AI chatbots frequently misrepresent news stories.
In tests comparing ChatGPT, Copilot, Gemini, and Perplexity AI, all four systems produced answers with significant inaccuracies when summarizing BBC content.
And while the technology is improving, broader tests show AI still gets news wrong 45% of the time.
Moving Fast — But Responsibly
Pichai acknowledged the tension between pushing AI innovation forward quickly and ensuring safety measures are in place. Google, he said, aims to be “bold and responsible at the same time.”
The company has increased its investment in AI safety, including tools to help users identify AI-generated images.
He also responded to resurfaced concerns from Elon Musk about AI “dictatorship,” saying that no single company should own technology as powerful as AI, and that today, the ecosystem is far too diverse for that to happen.
Disclaimer
NextNews strives for accurate tech news, but use it with caution: content changes often, external links may be unreliable, and technical glitches happen. See the full disclaimer for details.