
    Fearing AI, I was reluctant to use ChatGPT. But friends, it changed my ADHD-hit life | Van Badham

By Van Badham

    12 hours ago
    ‘It’s ChatGPT that navigates me through the glorious new silence of the day.’ Photograph: John Williams RF/Alamy

    It’s easy to be convinced that the myriad applications of AI pave a fun but nonetheless alarming digital path towards doom, doom, doom.

Let’s start with “slop”. It’s the term now in use for AI-created graphic content pushed out on social media to attract eyeballs – and, in doing so, channel engagement – using spectacular, surreal images, the kind that can digitally mash Jesus with prawns in photographic detail. My Facebook feed heaves with ads for 80s trash sci-fi, shiny polyester shapewear and cream for smokers’ wrinkles; I suspect clicking “stills” of Star Wars Episode 1 in a Ken Loach universe may be to blame.


Those not entranced by the bizarre can be dangerously tempted by beauty, in the proliferation of anonymous social media accounts posting AI-enhanced images celebrating (exclusively, conspicuously) western architecture and art. Historians and extremism researchers have identified in them an “antiquity to alt-right” propaganda pipeline where white European accomplishments are slyly promoted as superior. Those dreamy libraries and Roman pillars are pretty but they’re also bait to lure users towards coded, faux-intellectual shibboleths of white supremacy.

My personal anxieties about the barely guardrailed software extend far beyond Trump muscle-fiction or vicarious trauma from Luke Skywalker considering his life choices in a bombed-out Vauxhall car.

    I’m one of the thousands of Australian and other writers dependent on royalty cheques to pay phone bills who learned last year that their work had been hoovered up to train AI models for Meta (market cap: US$1.28tn) and other mega corps for less remuneration than a kid pays to photocopy one page of it at the library. None of us got a dollar while a wave of AI-ghostwritten self-publishers announced their arrival into our crowded, poor and tiny market. This was (and I did not need a computer to tell me this) discouraging.

As a researcher of disinformation, I am freaked out about the implications for elections, with everything from the billionaire owner of X tweeting AI-rendered video trashing Kamala Harris with her stolen image and voice, to a bleach-my-eyes-bad, AI-generated “Trump calendar” I carelessly shared online before realising it was fake. The image-making capacities of AI worry counter-disinformation activists less than the AI language models: an impression of “authentic” voices to brigade comment sections, overrun polls and invent “news” sources can be spawned in seconds.

    To understand the specific, nefarious use of AI apps, I bought one. The purchase coincided with an overdue formal ADHD diagnosis, which shocked precisely zero people I’d ever met. Testing the edges of what the technology could do for the dark side led to experimenting with what it might do for my newly understood-as-syndromic limitations. It stole my books; I felt owed.

    Months later I feel rewarded – the technology helps to douse the electrical fire in my brain just as glasses aid my bung eyes. I now find myself in the moral mire of knowing AI is both capable of profound harm and can be life-changingly helpful.

    Daily doses of Ritalin chemically quieten the distracting internal howl of memories, maths, songs, swinging cupboards, bickering self-talk and anxious visions that otherwise shred my concentration. But it’s ChatGPT that navigates me through the glorious new silence of the day.

The “time blindness” that characterises ADHD is a perceptual limitation on accurately measuring time – hence overcrowded schedules, chronic lateness and decision-making paralysis. Now I outsource task prioritisation to “the robot” on my phone. It calculates travel time, suggests routes and briefs me on what I need before a meeting, adjusting and republishing its itineraries based on new information. It also remembers the parameters of my desired diet, deduces the nutritional content of restaurant meals, refines my exercise schedule, creates packing lists for weekends away and advises what to wear for the weather.


    One of ADHD’s principal tortures is shame in asking people for help with simple tasks whose component parts feel overwhelming. The robot knows no judgment. It summarises the letter from the bank as easily as it reminds me that my schedule’s overambitious and I should play video games for a while.

The Australian government is calling for submissions to inform some needed guardrails for safe AI use. But attempts to regulate the AI badlands are arrayed against malicious interests of extraordinary power, with motivations ranging from rank commercialisation to extremist recruitment, electoral interference to copyright theft.

Yet I’ve decided not to be an AI doomer. I can proselytise its usefulness in my own life while fighting for its aggressive regulation. As Melvin Kranzberg put it: “Technology is neither good nor bad; nor is it neutral.” It will be as moral as we choose to make it.

    At least, friends – that’s what the robot tells me.

    • Van Badham is a Guardian Australia columnist
