    Time for a reality check on AI in software testing

    By Robert Salesas Martin

    14 hours ago


    Gartner has forecasted that by 2027, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain. That’s a huge shift from just 15% in 2023, but what does this mean for software testers?

    Before test automation and AI, programming was an essential skill for software testers. With rising demand for AI/ML and no-code testing tools, that's no longer the case. AI skills, not programming, are becoming the preferred qualification for testers.

    But it's not happening overnight. Gartner has already said that by 2028, most companies will use AI tools to code their software. Many have yet to use AI for testing specifically, but the sheer increase in the volume of code produced each year, partly due to AI, is going to make no-code testing more important. It will also pressure IT leaders to make educated decisions about which AI-augmented tools to invest in.

    For that, they have to understand the differences between them.

    Human testers and AI-augmented testing

    Let’s consider what AI can and cannot do for testing. We're still far from testing without human input. AI-augmented tools can assist testers in things like test generation and maintenance, but you still need human validation and oversight to ensure accurate tests.

    Having said that, 52% of IT leaders expect to use GenAI to build software, according to PractiTest. That number will only trend upward in the next few years, alongside a dramatic acceleration in software production. That means they will have to test the results AI generates, potentially with AI.

    But how will AI speed up software development? The benefits of AI are essentially the same as those of automation: quality at speed. By automating test generation and maintenance, AI-augmented testing tools can speed up development, allowing for faster testing cycles and quicker adaptation to market changes and customer needs, ultimately improving market responsiveness for software.
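
    To make "automating test generation" concrete, here is a minimal Python sketch in which a language model drafts pytest cases from a plain-language requirement. The OpenAI client, model name, and prompt are illustrative assumptions rather than a vendor recommendation, and a human tester still reviews and edits the draft before anything lands in the suite.

```python
# A minimal sketch of AI-assisted test generation: an LLM drafts pytest
# cases from a plain-language requirement, and a human reviews the output
# before it enters the suite. The client and model name are assumptions
# chosen for illustration, not a recommendation of any particular vendor.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_tests(requirement: str) -> str:
    """Ask the model to propose pytest test functions for a requirement."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable model works
        messages=[
            {"role": "system",
             "content": "You write concise, self-contained pytest test functions."},
            {"role": "user",
             "content": f"Write pytest cases for this requirement:\n{requirement}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = draft_tests("Users can reset their password via an emailed link.")
    print(draft)  # a tester still validates and edits this before committing
```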

    Some AI tools can also process vast amounts of data, identifying patterns in complex applications far more exhaustively than human testers. This comprehensive process means superior test coverage, a lower likelihood of overlooked edge cases and missed bugs, and generally improved software quality.

    Through machine learning, AI tools can also analyze historical defect data and test execution logs to predict potential defects. This lets AI refine and optimize test cases, resulting in more robust and reliable tests that are less susceptible to flakiness or false positives.
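
    As an illustration of that idea, the sketch below trains a simple classifier on hypothetical historical test-execution data and uses the predicted failure risk to decide which tests run first. The file names and feature columns are assumptions for the example; commercial tools use far richer signals, but the underlying mechanism is the same.

```python
# A hedged sketch of defect prediction from historical data: train a simple
# classifier on past test-execution records and rank current tests by their
# predicted failure risk so the riskiest ones run first. The CSV files and
# column names below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

history = pd.read_csv("test_history.csv")  # hypothetical export of past runs
features = history[["lines_changed", "past_failures", "code_churn", "test_age_days"]]
labels = history["failed"]  # 1 if the test failed in that run, else 0

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))

# Score the tests for the current build and run them in descending risk order.
candidates = pd.read_csv("current_build.csv")
risk = model.predict_proba(candidates[features.columns])[:, 1]
candidates.assign(risk=risk).sort_values("risk", ascending=False).to_csv(
    "prioritized_tests.csv", index=False
)
```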

    Practically speaking…

    While AI-augmented testing tools are still in their early stages, the 2024 State of Testing Report highlights that organizations are already utilizing them for test case creation (25%), test case optimization (23%), and test planning (20%).

    But it’s not one size fits all. The AI-augmented testing technologies on today’s market aren’t all equally advanced. None of them are magic wands, either. Every company needs to set proper expectations about what each tool can do. The areas each one specializes in will determine the value of the tool to your business.

    Even if you can technically generate lots of test cases with AI, are those really going to be quality test cases? Will those test cases need so much editing and validation from humans that it outweighs the time saved from generating them?

    One organization might prioritize streamlining the initial stages of test planning with automated test generation, which might mean leveraging AI to derive test cases and scenarios from user stories. On the other hand, a company dealing with sensitive data or privacy concerns might prioritize synthetic data creation, using AI to generate data that mimics production environments while addressing test reliability and confidentiality issues.
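
    To make the synthetic-data case concrete, here is a small sketch, assuming the Faker library, that generates production-like customer records containing no real personal data. The schema is hypothetical, but the pattern of seeded, reproducible fixtures that never touch live data is the typical one.

```python
# A minimal sketch of synthetic test data using the Faker library: the
# records resemble production customer data but contain no real, personally
# identifiable values. The schema below is a hypothetical example.
import csv
from faker import Faker

fake = Faker()
Faker.seed(42)  # seeded so test fixtures are reproducible across runs

with open("synthetic_customers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "email", "address", "signup_date"])
    writer.writeheader()
    for _ in range(1000):
        writer.writerow({
            "name": fake.name(),
            "email": fake.email(),
            "address": fake.address().replace("\n", ", "),
            "signup_date": fake.date_between(start_date="-3y").isoformat(),
        })
```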

    These are things IT leaders must evaluate when researching tools.

    The roadblocks

    Before AI can transform software testing, there are some roadblocks to overcome. The main one is finding IT leaders and testers whose skill sets actually match how the tools are evolving.

    One of the most important skill sets IT leaders will need to look for among testers is AI/ML. Demand for AI/ML skills has surged, according to the 2024 State of Testing Report, from 7% in 2023 to 21% in 2024. Meanwhile, the perceived importance of traditional programming skills in testing has decreased from 50% in 2023 to 31% in 2024. That shift in skills is inevitably going to transform the testing tools themselves, nudging the industry away from fully code-based automation approaches toward much greater adoption of no-code, AI-powered tools.

    Because AI-augmented testing tools depend on the data used to train their underlying models, IT leaders will also bear more responsibility for the security and privacy of that data. Compliance with regulations like GDPR is essential, and robust data governance practices should be implemented to mitigate the risk of data breaches or unauthorized access. Algorithmic bias introduced by skewed or unrepresentative training data must also be addressed to keep bias within AI-augmented testing to a minimum.

    But maybe we're getting ahead of ourselves here. Even as AI continues to evolve and autonomous testing becomes more commonplace, we will still need human assistance and validation. The interpretation of AI-generated results and the ability to make informed decisions based on those results will remain a responsibility of testers.

    AI will change software testing for the better. But don’t treat any tool using AI as a straight-up upgrade. They all have different merits within the software development life cycle. It’s about being conscious of what your organization actually needs, not what is shiny on the market. And no matter how important those AI/ML skills become…you’ll still need humans.


    This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
