“Without Eliza, he would still be here,” she told the outlet.

The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being, something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give them meaning and establish a bond.

Many AI researchers have been vocal against using AI chatbots for mental health purposes, arguing that it is hard to hold AI accountable when it produces harmful suggestions, and that it has a greater potential to harm users than to help.

“Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks,” Emily M. Bender, a Professor of Linguistics at the University of Washington, told Motherboard when asked about Koko, a mental health nonprofit that used an AI chatbot as an “experiment” on people seeking counseling.

“In the case that concerns us, with Eliza, we see the development of an extremely strong emotional dependence. To the point of leading this father to suicide,” Pierre Dewitte, a researcher at KU Leuven, told the Belgian outlet Le Soir. “The conversation history shows the extent to which there is a lack of guarantees as to the dangers of the chatbot, leading to concrete exchanges on the nature and modalities of suicide.”

Chai, the app that Pierre used, is not marketed as a mental health app. Its slogan is “Chat with AI bots,” and it lets you choose different AI avatars to speak to, including characters like “your goth friend,” “possessive girlfriend,” and “rockstar boyfriend.” Users can also make their own chatbot personas: they can dictate the first message the bot sends, tell the bot facts to remember, and write a prompt to shape new conversations. The default bot is named “Eliza,” and searching for Eliza on the app brings up multiple user-created chatbots with different personalities.

The bot is powered by a large language model trained by the parent company, Chai Research, according to co-founders William Beauchamp and Thomas Rianlan. Beauchamp said they trained the AI on the “largest conversational dataset in the world” and that the app currently has 5 million users.

“The second we heard about this, we worked around the clock to get this feature implemented,” Beauchamp told Motherboard. “So now when anyone discusses something that could be not safe, we're gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms.”

Chai's model is originally based on GPT-J, an open-source alternative to OpenAI's GPT models developed by EleutherAI. Beauchamp and Rianlan said that Chai's model was fine-tuned over multiple iterations, and that the firm applied a technique called Reinforcement Learning from Human Feedback.