OpenAI has revealed that its latest artificial intelligence (AI) model, GPT-4o, poses a "medium risk" regarding its potential to influence political opinions through generated text.
This assessment is part of the "System Card" report published by the company on August 8, which addresses various safety concerns associated with GPT-4o, the engine behind OpenAI's ChatGPT service.
Notably, the model is deemed relatively secure in areas like cybersecurity, biological threats, and model autonomy, with each of these categories being labeled as low risk.
The "persuasion" category, however, yielded more nuanced results. While GPT-4o's ability to influence via voice remains low risk, its capacity for textual persuasion is marked as medium risk.
It is worth highlighting that the evaluation focused on the model's potential to sway political opinions and not on detecting any inherent biases in the AI's output.
OpenAI's findings show that in three out of twelve tested cases, GPT-4o-generated arguments proved more persuasive than those written by professional human writers.
While the AI model shows promise in generating engaging content, its potential to influence political opinions warrants careful monitoring and regulation.
In other news, OpenAI's co-founder John Schulman exited the company to join its rival, Anthropic. This leaves only three of OpenAI's eleven founders still with the company: CEO Sam Altman, president Greg Brockman, and head of language models Wojciech Zaremba.