Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
We’re used to medical chatbots giving dangerous advice, but one based on OpenAI’s GPT-3 took it much further. Researchers experimenting with GPT-3, the AI text-generation model, found that it is not ready to replace human respondents in a healthcare chat setting.
How many times have you seen an all-female, Black and minority ethnic (BME) panel talking about technology? For many people, their first time would have been the Shuri Network launch last July. The Shuri Network was launched in 2019 to support women of colour working in NHS digital health in developing the skills and confidence to progress into senior leadership positions, and to help NHS leadership teams more closely represent the diversity of their workforce.
New Algorithms Could Reduce Racial Disparities in Health Care
Machine learning programs trained with patients’ own reports find problems that doctors miss—especially in Black people. Full study here: https://go.nature.com/3pnOwWP