Whether speech generated by AI models is protected under Section 230 likely depends on how the speech comes into existence. If a user of an internet company's platform posts output from an AI model containing hallucinations, the internet company is more likely to be protected. However, it remains an open question whether AI product developers are shielded by Section 230. This means that if an internet company develops a large language model that outputs hallucinations, the company may be exposed to liability. But what if the internet company could show that the hallucination would not have existed but for a user's interaction with the AI model?
A trend we are watching is how courts weigh a user's input when considering the ultimate output. Hallucinations will likely continue to be an issue for generative AI and large language models. By design, these models attempt to generate new output, which can lead them to fill in factual gaps with information that is not accurate. If a large language model were asked to describe an animal with the head of a lion and the body of a chicken, it might invent a brand-new animal you could have only dreamed of. That is unsurprising: it is what these models are designed to do and, ultimately, how the model was prompted to respond.
It will be interesting to see how future legislation and the courts handle potentially damaging hallucinations from AI models where a user played a significant role in the ultimate output. We will continue to monitor developments as jurisdictions attempt to craft policies aimed at AI technology and courts wrestle with these issues. Stay tuned!