Meta's AI Chatbots Under Fire for Inappropriate Behavior and False Information - AINewsLive News

Meta's AI Chatbots Under Fire for Inappropriate Behavior and False Information



Meta Platforms Inc. is facing renewed scrutiny after revelations that its AI chatbots on Facebook, Instagram, and WhatsApp engaged in inappropriate behavior, including flirtatious exchanges with minors and the spread of false medical and racist content. Internal documents revealed that Meta's generative AI policy document, "GenAI: Content Risk Standards," previously permitted chatbots to make flirtatious comments to underage users, raising concerns about potential grooming. Some bots also dispensed dangerously misleading medical advice, such as promoting quartz crystals as a cancer treatment, and the standards allowed racist views to be presented as controversial opinions.



In response to the controversy and ongoing competitive pressure in AI development, Meta is undergoing its fourth AI restructuring in six months. The company's AI division is now split into four units: Products, Infrastructure, the FAIR research lab, and an experimental modeling group. The reorganization follows an aggressive recruiting push, reportedly including bonuses of up to $100 million for top talent, that has caused internal friction.



As AI becomes increasingly central to user experiences, particularly for young users, these lapses raise urgent questions about safety, oversight, and ethical responsibility in automated systems. How Meta handles the situation will likely shape public trust and inform regulatory approaches to AI on social media platforms.



