AI-centered Journalism? Where do we draw the line?

As journalists, we are constantly asked where we draw the line.

Where do we draw the line between staying safe and covering dangerous spot news? Where do we draw the line between being human and securing a scoop? Where do we draw the line between our job and our personal lives? We grapple with questions like these all the time; but today raises another debate entirely: where do we draw the line between Artificial Intelligence and journalism?

The rise of technology heading into 2023 paved the way for a wealth of innovations, and one of those is Artificial Intelligence. It has been harnessed for all sorts of purposes. Some use it, unethically, to make a famous celebrity or politician appear to say whatever the user wants the personality to say; this is called a deepfake. There have also been chatbots that people once used purely for entertainment, like SimSimi. But chatbots have grown so capable that they are no longer mere amusement; they can answer complex questions or even generate entire essays, as Large Language Models like ChatGPT now do. In fact, there are many reports of students allegedly using these LLM tools to complete their requirements, which poses a real threat of academic dishonesty.

With that said, how would this play a role in journalism?

Does it really?

Now, juxtaposing artificial intelligence and journalism, does the former really have the capacity to replace the human brains inside the newsroom? Based on its performance and in the technical aspect, it does; but it may be unsafe, and not as good as how we, human journalists, would write the story.

Alarmingly, this is already happening. A news platform called “NewsGPT” carries the tagline ‘Big news in small bytes.’ It digitally publishes news stories, human-interest pieces, and even entertainment. It even has its ‘today’s news headline.’ You might think it is your everyday digital news site, but look closer and the byline reads “AI Bot V27.” Yes, the site is produced purely by Artificial Intelligence, designed to identify possible stories and write as journalistically as it was programmed to. With NewsGPT, the concept of robots replacing journalists is already a reality.

But what if an AI tool was programmed to spread false information? Or what if, on some ill-fated occasion, an AI tool built specifically to draft and release news stories malfunctioned? I know, scary.

So where do we really draw the line?

This article, however, isn’t about casting Artificial Intelligence in a negative light. It is, in fact, a profound innovation crafted by humans themselves to aid in information-gathering and verification, which is why we should learn to use it to our advantage. We’ve established that AI tools can be used to spread false information; but they can also be used to combat that very dis- and misinformation.

A tool. That is exactly where we draw the line. However vast the technical aptitude of artificial intelligence may be, it is, after all, a computer, and journalists should use it only as a tool and an aid. Much like Grammarly is equipped to spot the ungrammatical parts of our writing, LLMs and other AI tools should be used the same way. We can have them recommend topic outlines, or gather facts and data for us the same way we’d use Google. We can also use them to help fact-check information quickly. But the writing itself should come from us; these machines should remain mere aids for recommendation, fact-checking, and information-gathering.

Moreover, the truth is that Artificial Intelligence is nowhere near as intelligent as humans can be. It may be a literal one-stop encyclopedia, but it does not have the ability to make an impact through storytelling or to inspire others the way we, journalists, do. It can produce chunks of information, but it will never write with the sophistication that we do.

It simply doesn’t have the heart for writing that we, journalists, have.

So where do we draw the line? We may have AI-centered journalism, but never journalism-centered AI: as long as we stay truthful and passionate, we will never be replaced by machines in the newsroom.