

Last October, CNET’s parent company, Red Ventures, held a cross-department meeting to discuss the AI writing software it had been building for months. The tool had been in testing internally ahead of public use on CNET, and Red Ventures’ early results revealed several potential issues. The AI system was always faster than human writers at generating stories, the company found, but editing its work took much longer than editing a real staffer’s copy. The tool also had a tendency to write sentences that sounded plausible but were incorrect, and it was known to plagiarize language from the sources it was trained on. (Artificial intelligence tools have a tendency to insert false information into responses, which are sometimes called “hallucinations.”) “One of the things they were focused on when they developed the program was reducing plagiarism. They were well aware of the fact that the AI plagiarized and hallucinated,” a person who attended the meeting recalls. Red Ventures executives laid out all of these issues at the meeting and then made a fateful decision: CNET began publishing AI-generated stories anyway. Of the 77 articles published on CNET using the AI tool since it launched, more than half have had corrections appended to them, some lengthy and substantial, after use of the tool was revealed by Futurism.

CNET built a trusted brand for tech reporting over two decades. Since the site was acquired by Red Ventures, staffers say, editorial firewalls have been repeatedly breached, and reporters have been pushed to be more favorable to advertisers.
