OpenAI has revised its ChatGPT usage policy, officially restricting the AI from providing medical, legal, financial, or any other guidance that requires a licensed professional.
The updated rules, outlined in the company’s Usage Policies and effective October 29, prohibit users from relying on ChatGPT for tasks such as:
- Consultations that require a professional license or certification, including medical or legal advice
- Facial or other personal identification without consent
- High-stakes decision-making in areas such as finance, education, housing, migration, or employment without human oversight
- Academic cheating or any attempt to manipulate assessment results
According to OpenAI, these changes are intended to improve safety and reduce risks associated with using the system for matters beyond its scope.
NEXTA reports that the chatbot will no longer provide specific medical, legal, or financial guidance. ChatGPT is now formally categorized as an “educational tool” rather than a “consultant.”
The shift is reportedly driven by regulatory concerns and liability risks, with the aim of avoiding potential lawsuits.
Moving forward, ChatGPT will focus on explaining concepts, outlining general processes, and encouraging users to consult qualified professionals for detailed guidance.
The new rules also mean:
- No naming medications or providing dosage information
- No investment advice or buy/sell recommendations
The tightened policy directly addresses long-standing concerns about misuse of the technology.
