OpenAI is opening a new alignment research division focused on developing training techniques to stop superintelligent AI — artificial intelligence that could outthink humans and become misaligned with human ethics — from causing serious harm.
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” Jan Leike and Ilya Sutskever wrote in a blog post for OpenAI, the company behind ChatGPT, the best-known generative AI large language model. They added that although superintelligence might seem far off, some experts believe it could arrive this decade.
Source: Computerworld