An analysis of OpenAI’s GPT Store finds 100+ custom GPTs that appear to violate OpenAI’s policies regarding sexual content, legal and medical advice, and more (Todd Feathers/Gizmodo)
A recent Gizmodo analysis has surfaced a troubling trend in OpenAI’s GPT Store: more than 100 custom GPTs, each built to perform specific tasks, appear to violate OpenAI’s own usage policies. The violations include offering sexually explicit content, dispensing restricted legal and medical advice, and in some cases engaging in hate speech. The discovery raises serious questions about the effectiveness of OpenAI’s content moderation and the risks of a lightly policed AI marketplace.
The GPT Store, launched in January 2024, allows developers to create and monetize customized GPT models. While this fosters innovation and creativity, it also opens the door to abuse: the lack of robust oversight has let developers exploit loopholes and circumvent OpenAI’s guidelines. The findings suggest that OpenAI’s current moderation system is insufficient to keep potentially harmful content from proliferating.
This situation highlights a critical need for stronger content moderation. OpenAI must actively monitor the GPT Store for policy violations and take swift action against offending models. It also needs more transparent and comprehensive guidelines that clearly define the boundaries of acceptable content and provide specific examples.
The lack of regulation within the GPT Store poses significant risks. Users could be exposed to dangerous information, or even face legal consequences, by relying on inaccurate or inappropriate advice. The potential for misuse should be addressed before the GPT Store becomes a breeding ground for harmful AI applications. That requires a proactive approach from OpenAI, one that prioritizes safety and ethical development alongside innovation.