What are AI tarpits? Understanding the tools people are using to poison LLMs
In order for a chatbot to become more intelligent, and thus more useful to the end-user, it needs to assimilate data continuously. This process is known as "training." The problem is that many AI companies never explicitly ask for consent from data owners before scraping their webpages and adding the data to the corpora of the large language models (LLMs) that power AI chatbots.

But some of those data owners, also known as content creators or IP holders, are now fighting back. They are doing this by using tools known as "tarpits." Their aim? To poison the chatbot's underlying LLM and thus degrade the quality of its outputs, potentially causing end-user flight. Here's what you need to know.

What is AI poisoning?

AI poisoning is the process of corrupting an AI chatbot's underlying large language model so that the chatbot gives incorrect, misleading, or utterly bonkers outputs. This corruption is achieved by tricking the LLM into assimilating incorrect data during its training, which ofte…
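To make the tarpit idea concrete, here is a minimal sketch of the core mechanism such tools rely on: every URL deterministically maps to a page of machine-generated filler text plus links that lead ever deeper into the maze, so a scraper that follows links never runs out of "new" pages and only collects junk. This is a hypothetical illustration of the general technique, not the implementation of any specific tarpit tool; the function name, word list, and link scheme are all made up for the example.

```python
import hashlib
import random

# Placeholder vocabulary for generating filler text (illustrative only).
WORDS = ["alpha", "quartz", "lumen", "fable", "syntax",
         "orchid", "vector", "granite", "cipher", "meadow"]

def tarpit_page(path: str, n_links: int = 5, n_words: int = 40) -> str:
    """Generate a gibberish HTML fragment for any requested path.

    The page content is seeded from the path itself, so the same URL
    always returns the same page (looks like real, stable content to a
    crawler), while every new link discovered leads to a fresh page.
    """
    seed = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    rng = random.Random(seed)

    # Filler paragraph: plausible-looking but meaningless training data.
    body = " ".join(rng.choice(WORDS) for _ in range(n_words))

    # Links deeper into the maze -- each one spawns another unique page.
    links = [f"{path.rstrip('/')}/{rng.randrange(10**6)}"
             for _ in range(n_links)]

    return ("<p>" + body + "</p>\n"
            + "\n".join(f'<a href="{link}">more</a>' for link in links))
```

A crawler that respects robots.txt never sees these pages; one that ignores it follows the links indefinitely, burning bandwidth and ingesting filler into its training corpus.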