Making Sense Of AI: Orleans Author Weighs In On The Pros And Cons Of Modern Technology

by Ryan Bray
Orleans Author Michael E. Jones, left, discusses the state of artificial intelligence and its impact on art with Clark Doody during last week’s annual meeting of the Orleans Citizens Forum. RYAN BRAY PHOTO

ORLEANS – In 2025, people go about their lives with small computers in their pockets and bags, a world of information once unthinkable always within immediate reach. On a broader scale, continuing advancements in technology are driving gains in everything from medical research to environmental science.
 That’s the good part of the rapidly evolving technological age we live in. But when it comes to artificial intelligence, or AI, things are much grayer. Do the convenience and novelty of AI outweigh the dangers and risks posed by “deep fake” imagery? What about the threats AI poses to copyright law?
 It’s all a lot to untangle. Just ask Michael E. Jones, who has spent years researching and teaching in the areas of media ethics and digital law. Jones, an Orleans resident, has also studied the impacts of AI on art. His latest book, “What Art Is Now: Creativity In The Age of AI,” is due to be released later this year.
 Jones joined Clark Doody, host of the Cape and Islands Podcast, for a lengthy discussion on the state of AI during last week’s annual meeting of the Orleans Citizens Forum on June 25. The wide-ranging talk touched upon historical breaches of and threats to copyright law, and the challenges that AI poses to artists and the art community at large.
 “It can be a wonderful collaborative tool, but you’ve got to be careful and you’ve got to set boundaries,” he said at the outset of last week’s discussion.
 Jones described AI as an advanced form of “computational science” that operates through the collection of data, including data gathered from our individual use of the internet and social media, often without our knowledge.
 “Data is us,” he said. “We’re data. Every time you’re on Amazon, they’re collecting information about you.” In many cases, people sign off on companies’ right to use their data through “terms and conditions” that many of us fail to read in detail before clicking “accept,” Jones said.
 In many cases, the use of AI straddles the line between what’s ethical and what’s legal. As an example, Jones pointed to Adrien Brody’s recent Oscar-winning performance in the 2024 film “The Brutalist,” where it was revealed after the fact that AI was used in places to give the actor a more realistic-sounding Hungarian accent. But the question of what is and isn’t acceptable in the use of AI in many ways lies in the eye of the beholder, Jones said.
 “It depends on context, social context,” he said. “It depends on your value system. I also think it depends on the artist’s intent.”
 Legally, any image or work that is created without direct human involvement is not considered protected by copyright law, Jones said. But he said he’s skeptical about how that standard will hold up as the use of AI continues to advance. For instance, does the fact that AI programs are developed by an engineer constitute human involvement?
 In what Jones referred to as “algorithmic art,” AI has made reproducing and modifying preexisting human-made works, such as photographs, easier than ever. But when it comes to art, he said it’s hard for any program to completely take the place of the work of human beings. The work that goes into creating art, he said, is in many cases what gives it value.
 “AI has its place, but it’s not what Andy Warhol did with the Brillo box,” he said.
 Time was given at the end of last week’s discussion for audience questions. One person noted the challenge of trying to keep up with widely used platforms such as ChatGPT.
 “Every time we try to come up with a law or policy, there’s a whole new thing happening. It’s just fascinating in that regard,” the attendee said.
 “The law is always two steps behind whatever the technology is, and that’s the trouble,” Jones said.
 Doody referenced a recent trial in which three ChatGPT programs were communicating with one another, but switched to communicating via “clicks” upon learning that a human was in the room with them.
 “There’s arguments to be made that the IQ of AI right now is in the thousands,” he said. “And they could just easily start speaking a language that…we wouldn’t have enough time on earth to calculate what they’re actually saying. We just don’t have the ability.”
 Others asked what efforts are being made to regulate the use of AI. According to Jones, the answer is very few. As an example, Doody pointed to President Trump’s domestic policy bill, which is before the U.S. Senate as of this writing. One of its provisions is that there be no federal regulation of AI for at least the next five years.
 “The scale of the money and the venture capitalist money that’s behind these teams is scary,” he said.
Meanwhile, AI developers have come under increased legal scrutiny in recent years, with numerous lawsuits having been filed against them by companies and individuals claiming copyright infringement. But Jones said there are measures that these companies can take to better protect themselves from legal trouble.
“How can Midjourney and ChatGPT protect themselves? Get a license. Pay for [the use of copyrighted materials] the way the rest of us do,” he said.
Email Ryan Bray at ryan@capecodchronicle.com