Experts are warning that ChatGPT could distribute harmful medical advice, after a man developed a rare condition and came to believe his neighbour was poisoning him.
A 60-year-old man developed bromism after removing table salt from his diet following an interaction with the AI chatbot, according to an article in the journal Annals of Internal Medicine. The patient told doctors he had read about the negative effects of table salt and had asked the chatbot to help him remove it from his diet.
Bromism, also known as bromide toxicity, “was once a well-recognised toxidrome in the early 20th century” that “precipitated a range of presentations involving neuropsychiatric and dermatologic symptoms”, the study said.
Initially, the man believed his neighbour was poisoning him and was experiencing “psychotic symptoms”. He was noted to be paranoid about water he was offered and tried to escape from the hospital within a day of presenting there. His symptoms later improved with treatment.
He told doctors he had been taking sodium bromide over a three-month period after reading that table salt, or sodium chloride, “can be swapped with bromide, though likely for other purposes, such as cleaning”. Sodium bromide was used by doctors as a sedative in the early part of the 20th century.
The case, according to experts from the University of Washington in Seattle who authored the article, revealed “how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes”. The authors of the report said it was not possible to access the man’s ChatGPT log to determine exactly what he was told, but when they asked the system to give them a recommendation for replacing sodium chloride, the answer included bromide.
The response did not ask why the authors were seeking the information, nor did it provide a specific health warning. The case has left scientists fearing that ChatGPT and other AI apps could generate “scientific inaccuracies”, since they “lack the ability to critically discuss results” and could “fuel the spread of misinformation”.
Last week, OpenAI announced the release of the fifth generation of the artificial intelligence technology that powers ChatGPT. GPT-5 would be improved at “flagging potential concerns” such as illnesses, OpenAI said, according to The Guardian. OpenAI also stressed that ChatGPT is not a substitute for medical assistance.