ChatGPT passes US medical licensing exam with flying colours

GPT-4, the latest model from OpenAI, answered US medical licensing exam questions correctly more than 90% of the time and even managed to diagnose a rare condition. On top of this, it was excellent at translating patient concerns from other languages and at offering tips on communication and bedside manner.

Despite this accomplishment, the doctors who tested it still had concerns about GPT-4's tendency to "hallucinate" (make up answers or ignore requests) and about its lack of an ethical compass. Do you think ChatGPT could become a tool for doctors to use, or even a first-contact "screening" doctor in itself? One question that occurred to me: if ChatGPT were to misdiagnose a patient or make a mistake with significant consequences, who would be legally liable? Would OpenAI, as its maker, have to take responsibility? Although hugely impressive, the more things ChatGPT is able to do, the more ethical and legal questions seem to arise.

For more see:

https://www.insider.com/chatgpt-passes-medical-exam-diagnoses-rare-condition-2023-4