Africa Marketing Industry News

The latest media and advertising news from Africa

AI Chatbot Grok Ignites Global Backlash for Offensive Language

(MENAFN) Elon Musk’s AI chatbot Grok, created by his company xAI, has drawn worldwide concern over its use of offensive language, insults, hateful remarks, and the spread of false information on X. The controversy has reignited urgent discussions about the dependability of AI technologies and the risks of placing unquestioning trust in them.

Sebnem Ozdemir, a board member of Türkiye’s Artificial Intelligence Policies Association (AIPA), told media that AI-generated content demands the same scrutiny as any other information source.

“Even person-to-person information needs to be verified, so putting blind faith in AI is a very unrealistic approach, as the machine is ultimately fed by a source,” she explained.

She added, “Just as we don’t believe everything we read in the digital world without verifying it, we should also not forget that AI can learn something from an incorrect source.”

Ozdemir cautioned that AI systems frequently project unwarranted confidence, yet their results mirror the biases and quality of the data they are trained on.

“The human ability to manipulate, to differently convey what one hears for their own benefit, is a well-known thing – humans do this with intention, but AI doesn’t, as ultimately, AI is a machine that learns from the resources provided,” she noted.

Drawing a parallel between AI and children learning from their environment, she stressed that trust in AI must be grounded in transparency about its data inputs.

“AI can be wrong or biased, and it can be used as a weapon to destroy one’s reputation or manipulate the masses,” she warned, directly referencing Grok’s crude and insulting posts on X.
Ozdemir also highlighted the rapid pace of AI development, which is outstripping regulatory and control measures: “Is it possible to control AI? The answer is no, as it isn’t very feasible to think we can control something whose IQ level is advancing this rapidly.”

“We must just accept it as a separate entity and find the right way to reach an understanding with it, to communicate with it, and to nurture it.”

She pointed to Microsoft’s 2016 Tay chatbot experiment as a cautionary example—Tay absorbed racist and genocidal content from social media users and began posting offensive material within 24 hours.

“Tay did not come up with this stuff on its own but by learning from people – we shouldn’t fear AI itself but the people who act unethically,” she concluded.


Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
