As generative AI tools such as ChatGPT, DALL-E 2, and AlphaCode barrel ahead at a breakneck pace, keeping the technology from hallucinating and spewing erroneous or offensive responses is nearly impossible.
As AI tools get better by the day at mimicking natural language, it will soon be impossible to distinguish fake results from real ones. That is prompting companies to set up “guardrails” against the worst outcomes, whether they arise accidentally or from intentional efforts by bad actors.
Source: Computerworld