There’s no doubt generative AI models such as ChatGPT, Bing Chat, or Google Bard can deliver massive efficiency benefits, but they also bring major cybersecurity, privacy, and accuracy concerns.
It’s already well known that these programs, especially ChatGPT itself, fabricate facts and repeat falsehoods. Far more troubling, no one seems to understand why or how these falsehoods, coyly dubbed “hallucinations,” happen.
In a recent 60 Minutes interview, Google CEO Sundar Pichai explained: “There is an aspect of this which we call — all of us in the field — call it as a ‘black box.’ You don’t fully understand. And you can’t quite tell why it said this.”
Source: Computerworld