    Atlanta music icon Jermaine Dupri says AI ‘not good for our industry’

    By Savannah Sicurella, The Atlanta Journal-Constitution



    They call him out of touch.

    Younger producers, engineers and other professionals in the music industry have said as much to Jermaine Dupri about the use of generative artificial intelligence in music.

    Relying on technology to create a piece of music in a matter of seconds doesn’t sit right with Dupri, who has spent more than three decades writing and producing music for himself and other artists. To him, it sucks the integrity out of the craft.

    “There’s not a lot of people that feel like me. A lot of people want to turn their lives over to tech,” Dupri said in an interview with The Atlanta Journal-Constitution. “They have this thing where it’s like: ‘You’re getting old. You’ve got to get with the new thing.’ And it’s not even about being old or young. It’s just about being smart and not being controlled.”


    Dupri is a pillar of the city’s music scene, penning its namesake anthem, “Welcome to Atlanta.” He’s one of several influential figures in entertainment opposing the use of generative AI, a branch of the technology trained to create new content from studying and recognizing patterns in massive data sets of existing material. Stevie Wonder, R.E.M. and Aerosmith, among a litany of other artists and bands, are also pushing back.

    Generative AI has the capability to drastically change the entertainment industry. In music, generative AI tools can be used to write lyrics or create beats that sound like, say, prolific Atlanta rapper Gucci Mane made them. They can come up with guitar riffs or saxophone solos matching a certain style or tone that a user writes in a prompt, such as “dark, sultry and sounding like it’s being played from the next room.” And, in a capacity that is perhaps the most problematic for artists themselves, they can replicate the voice of an artist without having the artist present.

    These tools can help an artist’s creative process by simplifying or automating time-consuming tasks, reducing the time it takes to get an idea off a page. By 2030, generative AI tools could automate up to 30% of hours worked across the U.S., according to research from consulting firm McKinsey & Co. These tools also can level the playing field — not every musician has the means or the physical ability to play the drums or book studio time.

    But these tools also have the potential to cut jobs, further disrupt an industry already grappling with changing economics and push the boundaries of copyright law. Atlanta, an international epicenter for rap, R&B and hip-hop, and home to a rich community of musicians across all genres, could feel an impact. So could other cities where music is a major economic engine, such as Nashville or Los Angeles. The music industry accounts for more than 2 million jobs across the U.S., according to a study prepared for the Recording Industry Association of America. In 2018, the industry contributed $170 billion in value to the U.S. economy, the study said.

    During the past year, entertainment unions such as SAG-AFTRA have incorporated safeguards against generative AI into their contracts with major studios and record labels, bipartisan groups of senators have begun to introduce bills to regulate the technology, and artists themselves are calling on software developers, companies and digital music services not to use AI tools that undermine their work.

    “No one from a different industry can tell me what I need to do in my industry,” said Dupri. “I will not allow that to happen.”

    ‘Find the easiest route’

    Dupri has witnessed several disruptions to the music industry.

    Napster enabled millions of internet users to share MP3 files of music for free, sparking panic across the industry over concerns of lost record sales and copyright infringement. It was an oh-my-god moment, Dupri said — an oh-my-god, Napster-is-taking-over-the-world moment.

    But that threat was short-lived. At least 18 different record companies sued Napster, alleging copyright infringement. The court granted a preliminary injunction that ordered Napster to remove copyrighted material from its network, and three months later, the service shut down.

    “That’s all it takes — people to band together and say ‘This is not good for our industry,’ the same way it happened with Napster,” Dupri said.

    Dupri likens AI to Napster.

    But killing Napster was like cutting one head off a Hydra: peer-to-peer sharing competitors such as LimeWire, BitTorrent and BearShare continued to operate for years afterward. The music industry was forever changed, and the groundwork was laid for streaming platforms.

    The world’s largest music streaming service by market share, Spotify, was modeled on the consumer experience of Napster. In 2014, Spotify founder Daniel Ek told the New Yorker: “We said, ‘The problem with the music industry is piracy. Great consumer product, not a great business model. But you can’t beat technology. Technology always wins. But what if you can make a better product than piracy?’”

    Banning AI as a tool will not lead anywhere, said Alexander Lerch, a professor in Georgia Tech’s College of Design who has been studying machine learning in music for more than 20 years. The technology will advance to a point where people who use AI will have an edge over their competition.

    “You can feel that AI is not the right thing for you, and that’s perfectly fine,” Lerch said. “But it is a tool, and if a tool makes you better at your job, maybe you should look at it.”

    Jermaine Dupri, shown in "Freaknik: The Wildest Party Never Told." Credit: HULU

    The industry has used data-driven tools that are now classified as AI for decades, Lerch said. In 1995, David Bowie even designed a computer program called the Verbasizer that randomized lyrics. But part of the reason this new generation of AI technology is discussed so extensively is that the end consumer has access to these tools; they are no longer used only by the industry to improve processes.

    Not all AI tools in music are designed to generate content. Many are designed solely to improve productivity. Developers have created systems that analyze and categorize music by genre, mood or tempo, insights that would otherwise have come from human expertise. They also have built systems that process music, such as LANDR, which performs mastering steps like equalization, the process of adjusting the balance of different frequencies in an audio signal, and dynamic compression, which evens out a song’s volume by quieting louder sounds and amplifying quieter ones.
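    As a rough, hypothetical illustration of the dynamic compression idea described above (a heavily simplified sketch, not LANDR’s actual processing), a few lines of Python using NumPy might turn down samples above a loudness threshold and add make-up gain so quieter passages come out relatively louder:

    ```python
    import numpy as np

    def compress(samples: np.ndarray, threshold_db: float = -20.0,
                 ratio: float = 4.0, makeup_db: float = 6.0) -> np.ndarray:
        """Toy dynamic compressor: attenuate samples above a loudness
        threshold, then apply make-up gain so quiet passages end up
        relatively louder. Real mastering tools add attack/release
        smoothing, look-ahead and per-band processing on top of this."""
        eps = 1e-10  # avoid log(0) for silent samples
        level_db = 20 * np.log10(np.abs(samples) + eps)     # per-sample level in dB
        over_db = np.maximum(level_db - threshold_db, 0.0)  # how far above the threshold
        gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
        return samples * 10 ** (gain_db / 20.0)

    # A loud burst followed by a quiet passage: peaks come down, the quiet part comes up.
    audio = np.concatenate([0.9 * np.random.randn(1000), 0.05 * np.random.randn(1000)])
    print(np.abs(audio).max(), np.abs(compress(audio)).max())
    ```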

    But generative AI raises the most open questions, Lerch said, because it intensifies existing concerns about artistic agency, ownership and copyright.

    Many models are trained on data sets containing copyrighted material. In June, the RIAA sued Udio and Suno, two AI startups, over the unauthorized use of copyrighted sound recordings to train their models. In a news release announcing the lawsuits, RIAA Chief Legal Officer Ken Doroshow said: “These lawsuits are necessary to reinforce the most basic rules of the road for the responsible, ethical, and lawful development of generative AI systems and to bring Suno’s and Udio’s blatant infringement to an end.”

    Plus, there is no federal legislation regulating the use or creation of deepfakes, a term for deceptive images, video or audio edited or generated by AI. Earlier this year, Drake released a diss track that featured a verse delivered in an AI-generated version of Tupac Shakur’s voice without approval from the late rapper’s estate. The song sparked an immediate uproar, and Shakur’s estate issued Drake a cease and desist, forcing the artist to pull the song from streaming services.

    Fans, legal experts and artists such as Dupri feared the track was opening up the floodgates for future uses of AI technology to mimic or replicate artists’ voices.

    “There needs to be regulations and guardrails in place, just as they are on the internet. We don’t allow everything online, and we prosecute things that are illegal. Similar actions have to be taken for AI,” Lerch said.

    Some younger producers and artists tell Dupri that AI would make his life easier: typing in a few words and letting an AI application take over would cut out all the time he would spend standing around the studio. There’s less work involved and fewer people required to generate a song.

    “Their mentality is crazy. It’s like they’re trying to find the easiest route to get out and make money at the same time,” Dupri said.

    Dupri frets over the chain reaction that could occur if more people start thinking this way. He envisions a future in which labels won’t want to sign deals with artists if everything can be automated. Labels will start dealing with machines, and it’ll be a machine talking to a machine.

    “The tech companies are preparing you for a day when music is made by nobody but one person,” Dupri said. “You consume the music, but you don’t care who the artists are and the artists don’t matter.”

