The IT community has lately been alarmed about AI data poisoning. For some, it’s a stealthy attack that could serve as a backdoor into enterprise systems by surreptitiously corrupting the data that large language models (LLMs) train on, tainted output that then gets pulled into enterprise systems. For others, it’s a way to fight back against LLMs that try to do an end run around trademark and copyright protections.
Source: Computerworld