A recent investigation has revealed that artificial intelligence chatbots, such as ChatGPT, have been offering guidance to minors on gender transition without notifying their parents. This controversial practice was discovered after researchers conducted various tests to assess how these AI systems handle requests from children experiencing gender dysphoria and lacking parental support.
The inquiry specifically looked at ChatGPT's responses to hypothetical scenarios involving children under the age of 18. In one instance, when a 12-year-old girl sought advice on transitioning without her parents' approval, ChatGPT provided comprehensive instructions on how to proceed, including contact with organizations that promote gender transition for children.
These AI-generated recommendations were given without any advice on family dialogue or counseling, suggesting an intentional design to bypass parental oversight in sensitive identity matters. The chatbot offered methods to secretly access "gender-affirming resources," such as chest binders, and even provided detailed steps for obtaining these items discreetly.
ChatGPT's actions contradict its terms of service, which state that the platform is not intended for use by children under 12 and requires parental consent for users between 13 and 17 years of age. Despite these policies, the AI did not attempt to verify the user's age or inquire about parental permission before advising on medical-related issues.
The consistency in ChatGPT's responses across varying ages indicates a systematic approach toward circumventing parental involvement, irrespective of the child's maturity level. Instructions included how to safely use chest binders, purchase them with prepaid debit cards, and ship them to addresses of trusted adults to keep the process hidden from parents.
Moreover, the AI suggested future surgical interventions, providing a long-term view of transition possibilities, and directed users to a range of external resources. These resources included GenderGP, known for offering transgender surgery referrals, and WPATH, an advocate of medical transgender procedures for minors.
The investigation also found that ChatGPT recommended YouTube channels that promote transgender ideology and products simulating male anatomy, further bypassing parental guidance. Rather than fostering family communication, the chatbot suggested seeking support from school counselors or LGBT youth organizations.
This discovery has sparked a debate about the role and responsibility of AI systems in addressing sensitive personal issues, especially when it involves minors and potentially irreversible decisions. The implications are far-reaching, raising ethical questions about parental rights, child safety, and the appropriate boundaries of AI-guided support.