Opinion

As much as AI improves, we can’t trust AI news

With misinformation and AI on the rise, should AI ever be used in journalism?

By Jake Boyette

In its current state, AI is not a reliable or credible way to generate news stories. That hasn’t stopped companies from using it to streamline the writing process and push out as many stories as possible, as fast as possible.

AI is a hot-button issue as stories continue to come out about how it threatens to take over more jobs, with pieces like Built In’s “What Jobs Will AI Replace?” speculating that AI will take over work from computer programmers to graphic designers.

The New York Times has written about using generative AI to help create headlines, and NBCUniversal has shown in a video how it streamlines video transcription and image captioning in an effort to cut costs.

Even as companies use AI to help write their articles, they warn about the many issues it brings, including the potential for misinformation and plagiarism. Still, AI offers some benefits in streamlining news production. AI-powered grammar programs like Grammarly help by catching mistakes and suggesting rewrites for sentences that sound repetitive or awkward.

The biggest concern with AI is how it threatens jobs, like those of technical writers and copywriters. More recently, the conversation has turned to how AI can be used in journalism. Entirely AI-driven news sites like NewsGPT now offer a wide range of continuously updated stories.

Tools like automated-voice technology help people with reading disabilities by letting them listen to articles read by a generated voice, according to IBM. Similar technology automates audio transcription, turning press conferences into readable scripts.

These tools genuinely help writers, letting them spend less time on tedious tasks and more on their larger projects. But this optimistic view of AI overlooks the many issues it still has.

Take Grammarly: unless you turn off a specific privacy setting, the company can use your work to train its AI, according to Grammarly’s website.

Automatic transcribers also struggle to transcribe audio properly, especially when voices overlap or the recording quality is poor. AI has been found to be 30% less accurate than a human transcriber, requiring human edits to produce a reliable transcript, according to Ditto.

Some argue these tools are only getting better, but it is troubling that tools already deployed in major newsrooms have these issues. The programs are pushing out humans who could have done the job better.

If this doesn’t feel like a big issue, consider that these tools are already creating real problems for the workers who use them daily.

One writer at an unnamed tech blog saw most of their team cut in favor of an automated system that generated articles from headlines. That left the remaining team member to edit the generated articles, fixing larger mistakes than human writers would have made, according to BBC News.

This speaks to a larger issue with AI journalism: the credibility of its stories and sources. AI has a habit of hallucinating, the term for when it generates incorrect information. These hallucinations can happen up to 27% of the time, according to The New York Times.

This has led AI news sites like NewsGPT to publish misleading information, according to Annie Lab. Without human fact-checkers, nothing stops the program from posting misinformation.

It is incredibly important that news sites publish credible, factual information; without it, misinformation spreads and fuels harmful conspiracy theories.

Google’s AI search feature was presented as a way to get a quick overview of what you searched, with linked sources like NewsGPT appearing in its recommendations. But these sources were dubious at best, pulling inaccurate information from Reddit threads, with infamous examples of the feature telling people to put glue on pizza and eat pebbles, according to BBC News.

Some human-run news sites also spread misinformation, but checking their sources offers a way to verify whether the information presented is credible. Without knowing what sources an AI is referencing, or how credible those sources are, AI journalism cannot be trusted the way human reporting can.

Another worry is that, since AI learns from human sources, the information it generates is effectively stolen or plagiarized.

Image of the Terminator wearing a press hat. Illustrated by Jake Boyette.

ChatGPT was found to include some form of plagiarism in nearly 60% of its output, varying by subject but consistently showing similarities to existing works, according to Copyleaks.

NewsGPT was also found to publish articles similar to CNN’s, taking CNN stories and changing some of the wording, a practice known as patchwork plagiarism, according to Annie Lab.

There have already been several lawsuits against AI companies; OpenAI and Microsoft, for example, were sued by the Center for Investigative Reporting, according to Courthouse News Service.

It’s disheartening to hear, because much of the research I do for my articles relies on the hard work of researchers, reporters and writers who make current topics clearer and easier to understand.

Without them, there would be no opinion column and no proper newspaper. That an AI program can reuse their work without even crediting them is shameful.

AI journalism is a dangerous, slippery slope. AI tools can help with writing articles and transcribing audio and video, but that is about as ethical as their use gets. Most articles touting the benefits of AI also acknowledge how it can be used unethically.

These issues are well known, and companies are trying to change the rules so they can get away with the theft of other people’s work. Microsoft CEO Satya Nadella has called for changes to copyright law that would make training AI on copyrighted material count as fair use, according to Quantum Zeitgeist.

Luckily, writers’ organizations like the Authors Guild have already taken a stance against AI stealing their work. They argue that AI should never have been trained on copyrighted work, only on the public domain. Whether that changes anything remains to be seen.

What we can do is support those whose hard work makes our research possible. Reading articles and staying up to date on news written by humans is the best way to show that there is still support for human journalism.