Gemini 1.5 Pro has emerged as the new leader in generative artificial intelligence (AI), dethroning OpenAI's GPT-4o.
Introduced quietly on August 1, the experimental model has drawn widespread attention for outperforming its competitors in benchmark tests.
For years, OpenAI's ChatGPT models, from GPT-3 to GPT-4o, set the standard for generative AI. Alongside Anthropic's Claude-3, they have dominated the benchmarks, leaving little room for others.
The LMSYS Chatbot Arena, one of the best-known benchmarks in AI, evaluates models on a variety of tasks to determine an overall competency score. GPT-4o scored 1,286, Claude-3 scored 1,271, and an earlier version of Gemini 1.5 Pro stood at 1,261, while the new experimental release of Gemini 1.5 Pro scored 1,300.
However, benchmark scores alone do not fully capture an AI model's capabilities. The real test of a model lies in its practical application and user experience.
Whether the experimental version of Gemini 1.5 Pro will become the default model remains uncertain. It is still available, but its experimental status suggests that Google may modify it or withdraw it for safety or alignment reasons.
Nonetheless, AI enthusiasts seem excited about it. One user on Reddit said:
I just tried Gemini 1.5 Pro in AI Studio, and WOW! I've been missing out this whole time. No model I've used thus far is as good as Gemini 1.5 Pro - I'm just WILDLY impressed.
For now, though, the model has set a new standard in AI benchmarks, surpassing previous leaders and generating excitement throughout the AI community.
Meanwhile, Gemini's rival, OpenAI, has recently launched the "Advanced Voice Mode" (AVM) for ChatGPT in an alpha release to a select group of users.