
    John Richard Schrock | A “small language model”

    By John Richard Schrock, Contributing Columnist

    2024-02-19

    Over the past year, AI, or artificial intelligence, has become a source of major media hype. Although machine intelligence was discussed by Turing in the WWII era and entered academia as a course of study in the mid-1950s, the recent flurry of concern over AI rests on large language models and the complex recent programming that allows computers to appear to imitate humans by writing supposedly “original” papers.

    I am not impressed, and neither should you be. For decades, we have been using a small language model every time we run a spell check to detect misspellings. As the English editor for a journal that describes new species of Asian insects, I carefully read submissions and make corrections when the authors do not follow the rules of the International Commission on Zoological Nomenclature. We also use a shortened format for descriptions: instead of writing long sentences such as “the color of the wing is black” or “the veins are bifurcated” (two-branched), we shorten to “wing black” and “veins bifurcated,” an accurate but briefer “telegraphic style” that saves space and avoids difficulty for authors and readers whose first language is not English and whose grammar differs.

    At the end of the editing process, I run a spell check. It makes a comparison against a small language model: an English dictionary. Nine out of ten “errors” the spell check flags, I ignore, because it is making a brainless comparison of words without context or understanding. Some of the articles I edit are in British or Australian English, and I run the spell check using those alternative small language models, allowing “colour” and words ending in “-ising” rather than “-izing.” Spell check systems let me add specialized terms (such as “bifurcated”) to the dictionary base to accommodate the specialized language. But this small language model is in no way “thinking,” nor does it “know” what the words actually mean. Its artificial “intelligence” is really artificial.
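    To make the point concrete, here is a minimal sketch, in Python, of what a dictionary-based spell check amounts to: a context-free membership test against a word list, with the ability to switch dictionaries and add specialized terms. The tiny word lists here are hypothetical placeholders, not any real spell checker's data.

    ```python
    # Minimal sketch of a dictionary-based spell check: a "small language model"
    # that only tests set membership, with no context or understanding.
    # The word lists below are tiny placeholders, not real dictionaries.

    AMERICAN = {"color", "organizing", "wing", "black", "veins"}
    BRITISH = {"colour", "organising", "wing", "black", "veins"}

    def check_spelling(text: str, dictionary: set[str]) -> list[str]:
        """Return the words not found in the chosen dictionary."""
        words = text.lower().replace(",", " ").replace(".", " ").split()
        return [w for w in words if w not in dictionary]

    # An American dictionary flags British spellings and specialist terms alike.
    print(check_spelling("Wing black, veins bifurcated, colour.", AMERICAN))
    # -> ['bifurcated', 'colour']  (flagged blindly, without context)

    # Switching the "small language model" changes what counts as an "error"...
    print(check_spelling("colour organising", BRITISH))  # -> []

    # ...and the editor can extend it with specialized terms like "bifurcated".
    BRITISH.add("bifurcated")
    print(check_spelling("veins bifurcated", BRITISH))   # -> []
    ```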

    The current buzz over AI is based on the recent development of large language models (LLMs), which are incorrectly portrayed as having “understanding.” LLMs superficially appear to have such abilities through programming based on artificial neural networks. While early computer programming was linear (“if X then Y, else go to Z”), so-called neural networks model neurons that take many inputs, some positive and some negative, and transmit onward the dominant signal: if the positive inputs outweigh the negative, the neuron transmits; if not, it does not.
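    A minimal sketch of that neuron model, under the simplifications above: weighted inputs, some excitatory (positive) and some inhibitory (negative), summed against a threshold that decides whether to “transmit onward.” Real networks stack many such units and learn the weights; the numbers here are invented for illustration.

    ```python
    # One artificial "neuron": weighted inputs, some positive and some negative,
    # summed and compared against a threshold. A simplified sketch of the units
    # neural networks are built from, not a full network.

    def neuron(inputs: list[float], weights: list[float], threshold: float = 0.0) -> int:
        """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total > threshold else 0

    # Two excitatory (positive) connections and one inhibitory (negative) one.
    weights = [0.6, 0.4, -0.9]

    print(neuron([1, 1, 0], weights))  # positive signals dominate -> 1 (transmit)
    print(neuron([1, 0, 1], weights))  # inhibition dominates -> 0 (do not transmit)
    ```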

    When these more complex programs are used in generative AI, they take the inputted text and predict the next likely word. Trained on huge amounts of online text, they gain the ability to mimic sentence structure and form coherent sentences through this machine “learning.” Yet the product is little more than a reorganization of material drawn from the large language model, and it will reflect whatever incorrect associations and biases are present in the language database. Again: garbage in, garbage out.
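    To illustrate “predict the next likely word,” here is a toy sketch built on simple bigram counts. Production LLMs use neural networks trained over vast corpora rather than raw counts, but the core move is the same: the next word comes from statistical patterns in the training text, with no understanding attached. The training sentence is invented for the example.

    ```python
    # Toy next-word predictor built from bigram counts. Real LLMs use neural
    # networks trained on vast corpora, but the underlying move is the same:
    # pick the next word from patterns observed in the training text.
    from collections import Counter, defaultdict

    training_text = ("the wing is black the wing is veined "
                     "the vein is bifurcated the wing is black")

    # Count, for each word, which words follow it and how often.
    followers = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the most frequent follower of `word` in the training text."""
        return followers[word].most_common(1)[0][0]

    print(predict_next("wing"))  # -> 'is'
    print(predict_next("is"))    # -> 'black' (the most frequent continuation)
    ```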

    A far more complex set of processes is used in generating essays from LLMs, including tokenization, cleaning of datasets, reinforcement learning from human feedback, fine-tuning of responses, and so on. Unfortunately, the discussions around this “AI” are loaded with the term “knowledge.” The correct term is “information.” Knowledge requires experience, and a knowing agent must have sensory understanding. Computers have no sensory experience to make the information they manipulate “meaningful.” As a longtime teacher-trainer, I have long taught: no experience, no meaning. Simply put, AI chatbots have no conscious recognition or understanding of what they generate.
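    Of the steps just listed, tokenization is the easiest to show. The sketch below is deliberately simplified (production systems use subword schemes such as byte-pair encoding, not whitespace splitting), but it makes the key point: before any “learning” happens, text becomes a sequence of integer codes, numbers standing in for fragments, not meanings.

    ```python
    # Minimal sketch of tokenization: text becomes a sequence of integer IDs.
    # Production LLMs use subword schemes such as byte-pair encoding; this
    # whitespace version just shows that the model sees numbers, not meanings.

    def build_vocab(corpus: str) -> dict[str, int]:
        """Assign a unique integer ID to every distinct word in the corpus."""
        return {word: i for i, word in enumerate(dict.fromkeys(corpus.split()))}

    def tokenize(text: str, vocab: dict[str, int]) -> list[int]:
        """Map each word to its ID; unknown words get a reserved -1."""
        return [vocab.get(word, -1) for word in text.split()]

    vocab = build_vocab("wing black veins bifurcated wing veined")
    print(vocab)                                  # {'wing': 0, 'black': 1, ...}
    print(tokenize("wing veins unknown", vocab))  # -> [0, 2, -1]
    ```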

    Many of us are familiar with spell check, and the description above shows that, in the end, it is our role to actually understand and correct the spell checker's output, because it is a mindless tool that understands nothing. The same holds for the more complex AI chatbots that use LLMs to manipulate massive word associations into imitation essays.

    The lesson for parents and teachers is to increase face-to-face instruction and to set assignments that require students to incorporate their real lab, field, and life experiences into essays grounded in the real world. Instead of continuing to sit in front of computers, students need to get out and experience life to give their writing real meaning.
