New study proposes a framework for “Child Safe AI” following recent incidents which revealed that many children see chatbots as quasi-human and trustworthy.

When not designed with children’s needs in mind, artificial intelligence (AI) chatbots have an “empathy gap” that puts young users at particular risk of distress or harm, according to a study.

The research, by University of Cambridge academic Dr Nomisha Kurian, urges developers and policy actors to make “child-safe AI” an urgent priority. It provides evidence that children are particularly susceptible to treating AI chatbots as lifelike, quasi-human confidantes, and that their interactions with the