By Chris Sherlock, Head of Test Capability at Nimble Approach

In this blog, we take a closer look at how AI can help teams move faster without compromising on quality. From analysing code to tracking exploratory testing sessions, discover how Nimble Approach is using AI to make testing smarter and more efficient.

In software development, ensuring robust quality often feels like a race against time. We’ve all been there: a project without a comprehensive regression test pack, low confidence in releases, and a high defect escape rate.

At Nimble Approach, we recently tackled such a scenario head-on: a low-code system pulling data from multiple sources that was difficult to wrap in conventional unit tests. Our solution involved a powerful combination of a test swarm and AI assistance. Here’s how we did it:

The AI-Assisted Test Swarm in Action

Our goal was to rapidly establish a solid test pack. We leveraged AI to streamline several critical steps:

  • Code Repository Analysis with Cursor: We began by using Cursor to analyse the code repository, efficiently identifying key functionalities. This provided us with a foundational understanding of the system’s architecture and capabilities.

  • From Functionality to Scenarios: The identified functionalities were then converted into concise test charters. From these charters, we developed detailed Behaviour-Driven Development (BDD) scenarios (a sketch of one such scenario follows this list). A secondary analysis with AI helped us identify and eliminate duplicated scenarios, ensuring an optimised test suite.

  • Scenario Validation: We rigorously ran through these scenarios to confirm their validity and correctness, ensuring they accurately reflected the system’s expected behaviour and plugging any gaps in test coverage as we went.

  • Session-Based Exploratory Testing with Gemini: Alongside our structured scenarios, we conducted session-based exploratory testing. A key innovation here was using Gemini to keep track of our testing efforts in real time. The live transcripts from these sessions were invaluable, allowing us to create a detailed defect list that could be quickly converted into bug tickets for the team to triage. This significantly reduced manual note-taking and maintained the flow of our testing sessions.
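
As an illustration, here is a minimal sketch of what one of those BDD scenarios can look like when expressed as a Playwright test. The user journey, URL, and locators are invented placeholders rather than the actual low-code system described above.

```typescript
// A minimal sketch of one BDD-style scenario written as a Playwright test.
// The journey, URL and locators are illustrative assumptions, not the real system.
import { test, expect } from '@playwright/test';

test('customer summary combines data from both upstream sources', async ({ page }) => {
  await test.step('Given a customer that exists in both source systems', async () => {
    await page.goto('https://example.test/customers/CUST-001');
  });

  await test.step('When I open the customer summary view', async () => {
    await page.getByRole('link', { name: 'Summary' }).click();
  });

  await test.step('Then fields from each source are displayed together', async () => {
    await expect(page.getByTestId('billing-address')).toBeVisible();
    await expect(page.getByTestId('recent-orders')).toBeVisible();
  });
});
```

Keeping the Given/When/Then structure inside the test steps means the charter, the scenario, and the eventual automation all read the same way.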

The immediate result was a robust set of functional test cases, ready for component and integration testing.

Augmenting Tests with User Journey Walkthroughs

While functional tests are crucial, understanding the user experience is equally vital. We wanted to augment our existing tests with comprehensive user journey tests. To achieve this, we conducted a system walkthrough with a Subject Matter Expert (SME):

  • Understanding User Journeys: This collaborative session allowed us to gain deep insights into the actual paths users take through the system.

  • Contextual Data Understanding: We gained a better understanding of the data’s context from a user’s perspective, which is critical for realistic testing.

  • Live Transcription and Querying with Gemini: Once again, Gemini proved indispensable. We utilised live transcription and real-time querying to identify defects as we progressed through the walkthrough. This approach drastically reduced the need for manual note-taking, allowing the session to remain fluid and focused on discovery.
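
To make the transcript-to-ticket step concrete, here is a minimal sketch of how a session transcript could be passed to Gemini and turned into a structured defect list, the same approach we used in both the swarm sessions and the walkthrough. It assumes the @google/generative-ai Node SDK and an API key in the environment; the model name, prompt, and output shape are illustrative assumptions rather than our exact setup.

```typescript
// Minimal sketch: turn an exploratory-testing transcript into a structured
// defect list with Gemini. Model name, prompt and output shape are assumptions.
import { GoogleGenerativeAI } from '@google/generative-ai';
import { readFile } from 'node:fs/promises';

async function extractDefects(transcriptPath: string): Promise<string> {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? '');
  const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });

  const transcript = await readFile(transcriptPath, 'utf8');
  const prompt = [
    'You are helping a test team triage an exploratory testing session.',
    'From the transcript below, list each suspected defect as JSON with',
    'fields: summary, stepsToReproduce, severity.',
    '',
    transcript,
  ].join('\n');

  const result = await model.generateContent(prompt);
  return result.response.text(); // defect list, ready to review and raise as tickets
}

// Example usage:
// extractDefects('./session-transcript.txt').then(console.log);
```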

Key Outcomes

A testing process that once required an entire sprint was completed in just three days, producing a robust set of consistently structured BDD scenarios ready for ongoing testing. The team achieved a rapid turnaround from exploratory testing to actionable bug reports, significantly accelerating feedback and defect resolution.

Putting AI into Practice

To get the most out of AI in testing, start by using tools like Cursor to quickly understand your system’s architecture and pinpoint the areas that matter most. Use AI to inspire and shape your initial test ideas, making exploratory sessions sharper and more productive. And don’t overlook live transcription – capturing defects in real time keeps you focused on discovery instead of note-taking.

By combining these approaches, you can speed up testing, improve coverage, and turn what used to take an entire sprint into a matter of days.

What’s Next for AI-Powered Testing?

Our journey with AI in testing is ongoing, and we’re excited about where it’s heading.

One of our next steps is faster defect logging. We plan to connect our live querying capabilities directly to tools like Jira via the Model Context Protocol (MCP), so defects can be raised the moment they are found, accelerating the overall defect management process.
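
As a rough sketch of what that integration could look like, the snippet below uses the Model Context Protocol TypeScript SDK to call a "create issue" tool on a Jira MCP server. The server command and the tool name are hypothetical examples; any real Jira MCP server will define its own.

```typescript
// Rough sketch: logging a defect through a Jira MCP server via the MCP TypeScript SDK.
// The server package and tool name ('jira_create_issue') are hypothetical examples.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

async function logDefect(summary: string, description: string) {
  const transport = new StdioClientTransport({
    command: 'npx',
    args: ['-y', 'example-jira-mcp-server'], // placeholder server
  });

  const client = new Client({ name: 'defect-logger', version: '0.1.0' });
  await client.connect(transport);

  // Ask the server to raise the ticket; field names depend on the server you run.
  const result = await client.callTool({
    name: 'jira_create_issue',
    arguments: { project: 'QA', issueType: 'Bug', summary, description },
  });

  await client.close();
  return result;
}
```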

We’re also exploring automated test scenario execution. By leveraging tools like Playwright MCP, we aim to automatically run our test scenarios and generate automation scripts in real time. This approach will enhance both our testing efficiency and coverage, taking us another step closer to continuous quality assurance.
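
A first experiment in that direction might look like the sketch below: connect to the Playwright MCP server from the same MCP client and drive a scenario step through its browser tools. Tool names such as browser_navigate and browser_snapshot reflect the Playwright MCP server at the time of writing and should be treated as assumptions.

```typescript
// Exploratory sketch: driving a scenario step through the Playwright MCP server.
// Tool names (browser_navigate, browser_snapshot) are assumptions and may differ
// between versions of @playwright/mcp.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

async function runScenarioStep(url: string) {
  const transport = new StdioClientTransport({
    command: 'npx',
    args: ['@playwright/mcp@latest'],
  });

  const client = new Client({ name: 'scenario-runner', version: '0.1.0' });
  await client.connect(transport);

  // Discover what the server offers, then open the page and capture its state.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  await client.callTool({ name: 'browser_navigate', arguments: { url } });
  const snapshot = await client.callTool({ name: 'browser_snapshot', arguments: {} });

  await client.close();
  return snapshot; // page snapshot an agent (or a script generator) can assert against
}
```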

By embracing AI, we’re not just finding defects faster – we’re fundamentally transforming our approach to quality assurance. The result is a process that’s more efficient, comprehensive, and ultimately more effective. We believe this blend of human expertise and AI assistance represents the future of software testing.

Are you just starting to incorporate AI into your testing activities? Reach out to a member of our team today.