© 2024 Michigan State University Board of Trustees
Public Media from Michigan State University

Artificial intelligence in sports photojournalism leads to ethical questions

Jack Moreland

Anything is possible with AI, from changing backgrounds to making up entire scenes. Some outlets are jumping onto the AI bandwagon, but are they sacrificing the truth for tech?

In a world where the distrust of media is growing, photos have been able to stay out of the main accusations of biased or “fake” content. While photos are not impervious to unethical truth-bending tactics, it’s usually much easier to believe a news report when you see it with your own eyes. “Pics or it didn’t happen,” is a well-known phrase for a reason. However, the new developments in artificial intelligence (AI) have the potential to topple photography’s standing as a last bastion of trustworthy reporting.

I’m a photographer with a current position as a sports photojournalist, but with career aspirations of working creatively for a team or brand. Although I’m wary of dishonest use of AI, I’m incredibly interested in its capabilities. In recent months there have been a few discussions around the use of AI that caught my eye.

The first of these instances was a thread on X shared by Ohio State Football Assistant Director of Creative Design and team photographer Zachery Kelly on Oct. 23 featuring seven images he took ahead of and during OSU’s game against Penn State on Oct. 21. Three days later, Kelly shared another post on X comparing two versions of one of the photos, one before and one after the use of Adobe Photoshop’s new generative fill feature, which uses AI to alter selected areas of an image based on prompts from the user. Kelly removed several teammates and another photographer behind OSU defensive tackle Michael Hall Jr. to isolate him within the frame.

“It was never for anything malicious,” Kelly said. “It was more of just fun just to kind of like, see what you can do with it. If I wouldn’t have posted that before and after no one would have ever known.”

Kelly’s experiment with AI apparently also caught the attention of Detroit freelance sports photographer Mike Mulholland, who responded to Kelly’s initial post with the message “Be better.” Mulholland explained in subsequent posts that he felt proper context was not given to identify the photo as having been altered with AI.

Mulholland’s post has 125 likes on X and about 50 replies, including his own additional responses. There was real discussion taking place across the posts, but to say that this stirred up significant discourse would be an exaggeration. It was essentially a small gathering of interested sports photographers and graphic designers.

The same day, on Oct. 26, the capabilities of AI caught the attention of a much larger audience when the SportsCenter social media accounts posted an AI-altered video of Damian Lillard. The original clip came from a postgame interview conducted in 2020, when Lillard was with the Trail Blazers. The video posted by SportsCenter showed the old interview of Lillard, this time in the jersey of his new team, the Bucks, with a team logo on the court behind him. These alterations were made using AI and implied that the contents of the interview pertained to his recent move to Milwaukee.

SportsCenter and ESPN faced immediate backlash from fans who realized the video was deepfaked. The post came with no warning, clarification or acknowledgment from the account that the video had been altered.

Sharing altered content with a combined audience of over 80 million followers across Instagram and X without any kind of clarification is unethical, especially for a trusted mainstream journalistic organization such as ESPN. Fortunately, on the simplest interpretation, the alterations didn’t appear malicious. The new implied context is that Lillard is excited and ready to compete in Milwaukee. While that might not be false, this method of fabricated reporting is unacceptable.

As AI becomes incredibly accessible to everyone, the line of what is considered ethical or acceptable will shift. In the creative industry, made up of photographers and designers working for teams and brands, the altering of images doesn’t necessarily have many boundaries.

“For your brand you want to make something as cool as possible,” Kelly said. “If your job is to be creative, you take that and expand it at all costs.”

On the journalistic side, there are obviously more rules and expectations. The photos are meant to be a truthful report of the events taking place. Junfu Han has been a photographer for the Detroit Free Press for seven years and covers both news and sports. He spoke about the dedication to authenticity among his photojournalist colleagues.

“Most of the people we work with don’t have that kind of thinking, like, ‘Oh I’m going to remove that part,’” Han said. “You get what you get. If you can’t crop it out, you can’t crop it out.”

Major publications and media outlets have their own codes of ethics that generally limit the altering of photos. However, some AI-altered content clearly slipped through the cracks of the ethical rules in place at ESPN to end up on its social media accounts. Although it’s not visual content, Sports Illustrated was recently outed for publishing dozens of AI-generated articles attributed to fake writers with AI-generated headshots. Even when major publications have their own rules, internet and social media content is compiled from many different sources, whether wire services or independent creators.

This is not a matter of getting out ahead of the issue. The technology is already here and it is accessible to pretty much anyone who wants to use it. The question now is not whether or not AI should be used in the realm of visual media, because it is going to be. It’s too versatile and too simple to not be. The issue at hand is finding a way for members of the media to use this tool in a way that does not harm the reader. Is the solution specific rules about what can or can’t change in a photo? Is it as simple as a disclaimer or some fine print noting that AI was used to alter an image? That may not be enough to preserve the power of a photo as an authentic storytelling element.

An important part of this discussion as well is just how easy AI software is to use. I chose to try the Adobe Photoshop generative AI functions on a couple of my own photos.

Both of these videos are in real time; no preparation was done other than opening the photos in Photoshop.

In the first video, I was able to entirely remove a referee from my photo of MSU freshman forward Coen Carr throwing down a dunk in the Champions Classic against Duke on Nov. 14. While the new members of the crowd don’t look perfectly normal, they don’t stand out right away. If you weren’t looking for it, you may not notice at all. This also gives the option for a tighter crop without the referee’s arm in the way. The AI details would be on the edges of the photo and would be barely noticeable.

[Video: Screen Recording 2023-12-04 at 11.03.57 AM.mov]

This is a use that I could see being most common for AI, not entirely changing the context of the image but removing objects in the foreground or background to make an image cleaner. This is still very unethical in a journalistic sense as it is an inauthentic representation of the events being captured. But it’s more tempting, maybe a little more justifiable to some.

In the second video, I was able to select MSU freshman defensive back Chance Rucker, removing him from Marvin Harrison Jr.'s touchdown catch on Nov. 11 in less than a minute.

[Video: Screen Recording 2023-12-01 at 4.17.19 PM.mov]

This photo tells a different story without Rucker there. It might make the play look worse for MSU, or maybe a little better. Either way, it is not the same story and it is not the truth. It’s hard to imagine a journalistic scenario where an alteration this significant would be acceptable in any way; however, the fact that it is so incredibly easy to do should be a startling revelation.

“I wondered when fact-checking stories was going to get to us [photographers],” Han said. “Now it’s here. It’s a very difficult time for being a news reader.”

In the visual media industry, it’s not time to get ahead of the use of AI. It is time to catch up.
