
AI brings new challenges to software testing


When OpenAI introduced ChatGPT two years ago, some feared that developers and testers would lose their jobs. But two years on, we can safely say that hasn’t happened. So, where did those predictions go wrong – and what new challenges has AI really brought us?

AI is now the number one topic at nearly every tech conference. We hear about it not just at work, but also on trains and at family gatherings. It seems to have truly become a part of everyday life — much like the internet did years ago. No matter how we look at it, one thing is clear: AI is here to stay, and it will keep evolving. That means not only benefits, but also risks, downsides — and new challenges.

Generated code

According to research by GitClear, the volume of source code committed to GitHub has risen sharply since the launch of the Copilot AI assistant. That's hardly surprising: AI is great for rapid prototyping, and today the gap between an idea and a working app can be a matter of minutes. But the same research also points to a drop in quality, with more copy-pasted code and less refactoring. Again, no surprise: code generated from other previously generated code may not meet all the standards needed to keep a codebase maintainable. If this downward trend continues, it could eventually result in code that is unreliable or even dangerous.

Source: GitClear

Developers and testers should take notice. Code generated — or even just modified — by AI isn’t flawless, and we shouldn’t assume it will be bug-free. If we’re using AI to assist with testing, we also need to ensure the testing process itself is solid and trustworthy.
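As a toy illustration (the rounding helper and its bug are invented for this sketch, not taken from any real AI output), even a plausible-looking AI-generated function can hide an edge case that a few explicit tests catch. Python's built-in round() uses banker's rounding, which silently rounds 2.675 down:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_amount_naive(amount: float) -> float:
    """An AI-style draft: looks fine, but float representation plus
    banker's rounding means round(2.675, 2) yields 2.67."""
    return round(amount, 2)

def round_amount(amount: str) -> Decimal:
    """Reviewed version: exact decimal arithmetic with an explicit
    half-up rounding rule, as payment code usually expects."""
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# A handful of explicit edge-case checks is enough to expose the difference.
assert round_amount("2.675") == Decimal("2.68")   # correct half-up rounding
assert round_amount_naive(2.675) == 2.67          # the silent rounding bug
```

The point is not this particular bug, but that generated code which "looks right" still needs the same boundary-value tests we would write for human code.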

Old challenges in a new light

Developers use AI to build a new payment system. Testers use AI to test it. All the tests pass. It goes into production. And somewhere on the other side of the world, someone loses their life savings due to a bug in the payment gateway.

One of the fundamental rules of working with AI is this: you need to be able to verify the results. Even today, you can't fully trust AI-generated outputs, and while Explainable AI (XAI) techniques aim to provide transparency through sources and reasoning, it still takes knowledge and experience to properly assess what the AI delivers.

Image: “When to use GenAI and when NOT to use GenAI” by Rik Marselis

Artificial intelligence may not be replacing testers or developers just yet — but that doesn’t mean nothing has changed. The real, and perhaps even greater, challenge is learning how to use AI beyond casual prompts in a browser. We need to start systematically integrating it into our development workflows — and, crucially, we need to understand the outcomes it generates.

AI offers not just more results, but faster ones — and at scale. In the future, we can expect it to autonomously generate test scenarios, which will create new complexities in how we evaluate them. Each scenario will need to be checked for relevance and correctness.
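One way to keep such a flood of generated scenarios manageable is an automated sanity filter that rejects obviously invalid ones before a human reviews the rest. The scenario format below (name, endpoint, expected_status) is an assumption made up for this sketch, not the output of any real tool:

```python
# Sanity-checking auto-generated test scenarios before running them.
# Hypothetical schema: {"name": ..., "endpoint": ..., "expected_status": ...}

VALID_ENDPOINTS = {"/pay", "/refund", "/status"}  # assumed API surface

def is_plausible(scenario: dict) -> bool:
    """Reject scenarios that target unknown endpoints or expect
    impossible HTTP status codes."""
    return (
        scenario.get("endpoint") in VALID_ENDPOINTS
        and isinstance(scenario.get("expected_status"), int)
        and 100 <= scenario["expected_status"] <= 599
    )

generated = [
    {"name": "happy path", "endpoint": "/pay", "expected_status": 200},
    {"name": "hallucinated", "endpoint": "/teleport", "expected_status": 200},
    {"name": "bad status", "endpoint": "/refund", "expected_status": 9000},
]

usable = [s for s in generated if is_plausible(s)]
assert [s["name"] for s in usable] == ["happy path"]
```

A filter like this only catches the mechanically invalid cases; judging whether the surviving scenarios are actually relevant still requires a tester who understands the system.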

So while AI isn’t taking our jobs just yet, it is reshaping them — and bringing entirely new challenges we’ll need to adapt to.


Author

Jan Zatloukal

Tester and developer with a passion for automation and improving the development process. I am currently working on an electron microscope automation project in Python.
