Agenda

The future is Now. Are you ready?



Part One

8:00-8:15 CEST
4:00-4:15 PM AEST

Opening note

8:15-9:00 CEST
4:15-5:00 PM AEST

The Cognition Catalyst: Authentic Intelligence for AI beyond Approximation

9:00-9:30 CEST
5:00-5:30 PM AEST

How I Improved the Tests for RestAssured.Net Using Mutation Testing

9:30-9:45 CEST
5:30-5:45 PM AEST

Break

9:45-10:15 CEST
5:45-6:15 PM AEST

The Road To QA – An Okayish Success Story

10:15-10:45 CEST
6:15-6:45 PM AEST

The Good, the Bad, and the Gauge

10:45-11:15 CEST
6:45-7:15 PM AEST

Ensuring Software Security: A New Era in Quality Assurance

Part Two

17:00-17:15 CEST
11:00-11:15 AM EDT

Opening note

17:15-18:00 CEST
11:15 AM-12:00 PM EDT

The Future-Ready Tester: Expanding Your Virtual Toolbox for the 2030s

18:00-18:30 CEST
12:00-12:30 PM EDT

AI in Testing: Quantity vs. Quality in Early-Stage Development

18:30-18:45 CEST
12:30-12:45 PM EDT

Break

18:45-19:15 CEST
12:45-1:15 PM EDT

Build Your Own AI Testing Agent

19:15-19:45 CEST
1:15-1:45 PM EDT

BI & Data Testing

19:45-20:15 CEST
1:45-2:15 PM EDT

The Lost Art of Prompt Engineering: Making AI Work for Testers Again

See you again next time?

Be a part of the magic

The Cognition Catalyst: Authentic Intelligence for AI beyond Approximation

AI is everywhere—but are we settling for sophisticated approximation when we could be building more reliable systems?

In this session, Lalit shares his perspective as a skeptical practitioner who has explored AI tools and LLMs. While useful, they remain limited to pattern matching and mimicry, lacking true reasoning and context.

Drawing inspiration from the Nyaya school of Indian philosophy, a 2,000-year-old framework for knowledge and logic, Lalit explores how timeless principles of inquiry, reasoning, and ethics can guide AI toward complementing—not replacing—human intelligence.

For testers, this shift is crucial. By applying Nyaya’s methods, they can sharpen critical thinking, avoid AI pitfalls, and contribute to building AI that is consistent, reliable, and deeply aligned with human wisdom.

Main Takeaways:

Lalit Bhamare

For the past 18 years, Lalit has excelled in test engineering and software quality leadership, currently serving as Engineering Manager at Accenture Song in Germany. He leads the Innovation and Thought Leadership group for Accenture Quality Engineering Services across the EMEA region. Lalit’s proprietary framework, Quality Conscious Software Delivery (QCSD), earned recognition at EuroSTAR 2022.

A dedicated contributor, he founded the non-profit Tea-time with Testers and serves as Director at the Association for Software Testing. Lalit is also an international keynote speaker and thought leader, continuously advancing the field of software testing.

How I Improved the Tests for RestAssured.Net Using Mutation Testing

When building and maintaining RestAssured.Net, an open source library that makes writing HTTP API tests in C# easy and fun, I relied on a suite of acceptance tests to catch bugs and validate new features. But how much can you really trust your tests?
Many teams still turn to code coverage as the primary measure of test quality—but as we know, coverage alone doesn’t tell the full story. In this talk (with a live demo!), I’ll show you how mutation testing can provide deeper insights into the effectiveness of your automated tests. Using RestAssured.Net’s real-world source code and test suite, we’ll explore how my tests once fooled me into believing everything was working perfectly—when it wasn’t.
You’ll see mutation testing in action, discover how it helped me strengthen both my tests and my application code, and walk away with practical tips for applying this often-overlooked technique in your own projects. Whether you’re new to mutation testing or looking to level up your quality practices, this session will give you the tools to better trust your tests.
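The live demo works with RestAssured.Net’s real C# code and a .NET mutation testing tool; purely as a language-agnostic sketch of the underlying idea, the toy Python example below hand-writes one mutant (all function names are made up) and shows how a boundary assertion "kills" it:

```python
# Toy illustration of mutation testing: a "mutant" is a small change to the
# code under test; a good test suite should fail on ("kill") the mutant.

def is_adult(age):          # original implementation
    return age >= 18

def is_adult_mutant(age):   # mutant: '>=' replaced with '>'
    return age > 18

def run_suite(fn):
    """Run the same assertions against any implementation; True if all pass."""
    try:
        assert fn(20) is True
        assert fn(10) is False
        assert fn(18) is True   # boundary check: this is what kills the mutant
        return True
    except AssertionError:
        return False

original_survives = run_suite(is_adult)         # suite passes on the real code
mutant_killed = not run_suite(is_adult_mutant)  # suite catches the mutant
```

Without the boundary assertion the mutant would survive, even though line coverage would be identical, which is exactly the gap between coverage and test effectiveness the talk explores.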

Main Takeaways:

Bas Dijkstra

An independent test automation consultant and trainer with nearly two decades of experience in the field, Bas has designed and implemented automation solutions across a wide range of programming languages, frameworks, and technology stacks.

Bas is also a highly regarded trainer, having delivered test automation workshops to dozens of companies and hundreds of conference attendees worldwide. He is the creator of RestAssured.Net, a library that simplifies writing tests for HTTP APIs in C#.

Based in Amersfoort, The Netherlands, Bas combines his professional expertise with a passion for sharing knowledge, making him a sought-after speaker at international conferences.

The Road To QA – An Okayish Success Story

Throughout my career as a software developer, I’ve observed numerous QA professionals transitioning into test automation roles, with some turning into full-fledged developers. My journey, however, took the opposite path—I moved from being a developer to specializing in test automation.

In this talk, I aim to delve into why this transition felt like a natural progression for me. I’ll share the lessons I learned along the way and how this shift fundamentally altered my perspective on both developers and QA professionals.

Moreover, I’ll touch on the importance of challenging the status quo and how this significantly enhances career development. By questioning your role, you can uncover opportunities for growth that you might not have considered otherwise.

Main Takeaways:

Benjamin Bischoff

After 15 years of being a software developer and trainer, I transitioned to test automation in 2016. Currently, I work as a Test Automation Engineer at trivago N.V. in Düsseldorf, Germany. There, I focus on backend and frontend test technologies and pipelines. I authored the book “Writing API Tests With Karate” and maintain some test-related open source projects. I am a regular conference speaker and also write blog posts about testing, automation and software craftsmanship on my website.

The Good, the Bad, and the Gauge

Automation is supposed to make our lives easier — until it doesn’t. In this talk, I’ll share the real-world journey of the Automation team as we transformed a flaky, overly complex automation setup into a streamlined, maintainable test framework powered by Gauge.
You’ll hear about the good: how Gauge enabled readable, modular, behavior-driven tests, improved collaboration between QA and developers, and empowered non-QA team members to contribute confidently.
And the bad: migration pains, missing integrations, and the learning curve of combining Kotlin and Gauge in a real CI/CD environment. We’ll cover how we tackled flaky legacy code, built custom libraries, and ultimately structured tests using reusable step libraries, ran everything in Jenkins, and monitored results in Slack.
Whether you’re a test automation engineer, developer, or exploring behavior-driven development, this session offers practical takeaways, honest lessons, and maybe a laugh — showing that good testing can be easy, effective, and even fun.

Main Takeaways:

Liran Yushinsky

Leads the automation strategy across infrastructure and product layers. Experienced in building test frameworks, managing CI pipelines, and bridging the gap between QA and Dev teams. Strong advocate for test readability, developer collaboration, and using the right tools for the right reasons.

Ensuring Software Security: A New Era in Quality Assurance

Cyberattacks are rising rapidly, and companies fall into two categories: those that have been attacked and those that will be. How can testers contribute in this environment without being cybersecurity specialists?

In this session, we’ll review recent high-profile cyberattacks and highlight the key hazards and vulnerabilities that software products face throughout their lifecycle. Testers play a crucial role in addressing these risks.

We’ll explore how automation and tool integration within the Secure Software Development Life Cycle (SSDLC) can enhance team agility, expertise, and even simplify certification processes for standards like ISO 27001. Practical tips, examples, and a demo will illustrate how to implement these strategies.

Finally, we’ll discuss how testers can use static code analysis, targeted data sets, and critical use cases to mitigate risks, ultimately helping to lead the cybersecurity strategy and culture within their software teams.

Main Takeaways:

Sara Martinez Giner

Began her career in 2014 as a Software Validation Engineer in the communications domain, gaining hands-on experience across projects in telecommunications, geolocation, big data, and power electronics.

In 2019, she shifted focus to cybersecurity software testing, discovering a field that combines fascination with challenge and has fueled her growth as both a software and quality professional.

Since then, she has been building expertise at the intersection of software quality and cybersecurity, driven by a passion for delivering secure, reliable, and high-quality solutions.

The Future-Ready Tester: Expanding Your Virtual Toolbox for the 2030s

Each tester carries a virtual toolbox — made up of the skills, knowledge, and experiences that shape how we work every day. This toolbox defines us, but it’s also something we can actively grow and adapt as the world around us changes.
In this talk, we’ll take a look at the toolboxes testers are carrying today (2025) and explore what we’ll need to add for the 2030s and beyond. We’ll cover the skills that matter, the gaps we often overlook, and the ways we can expand our own toolboxes — while also helping our peers and teams do the same.

Main Takeaways:

Joel Montvelisky

PractiTest’s Co-Founder and Chief Product Officer (CPO).
Joel has been part of the testing and QA world since 1997, working as a tester, QA Manager and Director, and consultant for companies across the globe, guiding their shift from legacy testing to modern-day approaches. Joel is a Forbes Council member, a blogger, and a lecturer.
Joel is the founder and Chair of the OnlineTestConf and the co-founder of the State of Testing survey and report. These “for the community” initiatives reflect his belief in sharing knowledge and making it available to as many people as possible.
Joel is a seasoned speaker at conferences worldwide, among them the STAR Conferences, STPCon, JaSST, TestLeadership Conf, CAST, QA&Test, and more.

AI in Testing: Quantity vs. Quality in Early-Stage Development

Startups don’t live in theory — they live in chaos. Code changes every day, deadlines are brutal, and there’s never enough QA power. AI comes in promising “magic”: hundreds of tests in minutes. But here’s the real question — do you want a lot of tests right now, or a few good ones later?
In this talk, I’ll show why, at the MVP stage, even noisy AI-generated tests can be a lifesaver, and why waiting for “perfect” coverage is often the biggest mistake. We’ll walk through real cases where quantity gave teams breathing room, and quality kept their product from crashing. And most importantly, how to balance the two without wasting months.

Main Takeaways:

Igor Goldshmidt

Leads the Quality Engineering path at Skipper Soft, where he helps startups and engineering teams build end-to-end quality solutions — from QA strategy to scalable automation frameworks. He is also the founder of IGG Quality Expert Services, focused on training, workshops, and master classes that empower QA professionals and leaders. With over a decade of experience, Igor is a recognized conference speaker, mentor, and the Israel Software Testing Champion (2018), now shaping the next generation of Quality Engineering in the GenAI era.

Build Your Own AI Testing Agent

Learn to develop a custom AI-powered testing agent from scratch. Leveraging large language models, these agents can simulate diverse user interactions, generate robust test cases, and detect issues early in the development cycle. Using accessible tools and frameworks, you’ll build an intelligent testing solution tailored to your needs. By the end of the session, you’ll be ready to supercharge your testing process with an AI agent that fits seamlessly into your team’s workflows and accelerates release cycles.
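The session builds its agent with real tooling; purely as an illustrative sketch of the generate-then-execute loop such agents follow, the Python skeleton below stubs out the model call (`call_llm`, `agent_step`, and the code under test are all hypothetical names, not from the session):

```python
# Skeleton of an LLM-backed testing agent loop. `call_llm` is a stand-in for
# any chat-completion API; it is stubbed here so the sketch runs offline.

def call_llm(prompt):
    # Stub: a real agent would call an LLM API here and return its reply.
    return "assert add(2, 3) == 5"

def add(a, b):  # the code under test
    return a + b

def agent_step(function_name, signature):
    """Ask the model for a test case, then execute it against the code under test."""
    prompt = f"Write one Python assert statement testing {function_name}{signature}."
    test_code = call_llm(prompt)
    try:
        exec(test_code, {"add": add})
        return "pass"
    except AssertionError:
        return "fail"

result = agent_step("add", "(a, b)")  # → "pass"
```

A real agent would add prompt context (source code, past failures), sandbox the generated code rather than `exec`-ing it directly, and loop until coverage goals are met.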

Main Takeaways:

Sparsh Kesari

As a Senior Developer Relations Engineer at LambdaTest, Sparsh is committed to empowering developer communities and advancing the field of software testing. He excels at bridging the gap between technology and its users by delivering impactful content and innovative solutions that enable teams to thrive in quality assurance and automation. Sparsh collaborates closely with community members, customers, and partners to cultivate a culture of continuous learning and collaboration, driving excellence among quality engineering teams worldwide.

BI & Data Testing

Data is becoming increasingly important in the daily processes of many companies, and the data landscape is growing ever more complex and dynamic. Many companies struggle with how to guarantee data quality and maintain high confidence in their data.

As a tester, this offers you new opportunities to broaden your skills: what does a BI (business intelligence) landscape look like? What do you test in it? How do you check data quality? In functional testing, too, we should take data into consideration much more. Often, you end up testing an application built to replace an old one. Data gets migrated, which means your test strategy must account for both the old, migrated data and the newly generated data.

This talk covers the basic principles of BI & data testing. I’ll show you what the landscape looks like and give tips on how to test it, while also addressing the importance of data in functional test projects.
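As a rough illustration of the kinds of checks a data test strategy automates (the records and field names below are invented for the example), completeness and migration checks might look like:

```python
# Minimal data-quality checks of the kind a BI/data test might automate.
# "old_system" is the source data; "migrated" is what landed in the new app.

old_system = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": "b@x.com"}]
migrated   = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": None}]

def completeness(rows, field):
    """Fraction of rows where `field` is populated."""
    return sum(1 for r in rows if r.get(field) is not None) / len(rows)

def ids_preserved(source, target):
    """Migration check: every source id must still exist in the target."""
    return {r["id"] for r in source} <= {r["id"] for r in target}

assert ids_preserved(old_system, migrated)      # no records lost in migration
assert completeness(migrated, "email") == 0.5   # a null email reveals a quality gap
```

Real pipelines run the same style of checks at scale with SQL or dedicated data-quality tooling, but the assertions, on completeness, uniqueness, and record preservation, stay conceptually the same.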

Main Takeaways:

Suzanne Kraaij

With almost 15 years of testing experience for a wide variety of clients and industries. Within Sogeti, she is the guild leader of the BI & Testing Guild, and she is also a member of the testing community’s Core Team, where she is responsible for the BI & Data testing portfolio. In these roles, she actively promotes the development and growth of BI & Data testing, while also being more conscious of the importance of data and data quality in her own work as a functional test engineer.

The Lost Art of Prompt Engineering: Making AI Work for Testers Again

Prompt engineering isn’t dead—it’s just misunderstood. While many dismiss it as irrelevant, this session argues the opposite: in QA, prompt engineering is alive and essential. Testers already excel at precision language—steps, conditions, expected results—and this skill uniquely positions them to harness AI effectively. By treating prompt engineering as a structured, test-centric discipline, QA professionals can transform large language models (LLMs) into reliable partners for generating test cases, reproducing defects, and amplifying exploratory testing.

This talk introduces a contrarian thesis: despite the industry’s claim that “prompt engineering is dead,” it remains alive and essential in QA.
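As one possible example of what "structured, test-centric" prompting can mean in practice (the template and field names below are illustrative, not from the talk):

```python
# A structured prompt template that treats the prompt like a test artifact,
# with explicit preconditions, output format, and required coverage.

PROMPT_TEMPLATE = """You are a QA assistant. Generate test cases for the feature below.

Feature: {feature}
Preconditions: {preconditions}
Output format: numbered list, each case as
  Steps / Expected result / Priority (P1-P3)

Cover: the happy path, boundary values, and at least one negative case."""

def build_prompt(feature, preconditions):
    """Fill the fixed template so every LLM request has the same structure."""
    return PROMPT_TEMPLATE.format(feature=feature, preconditions=preconditions)

prompt = build_prompt("password reset via email", "a user account exists")
```

Pinning down the preconditions, output format, and required coverage up front is exactly the precision-language habit testers already apply to test cases, just redirected at the model.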

Main Takeaways:

Arbaz Surti

A seasoned QA professional with over a decade of experience in software testing, quality engineering, and product analysis. He specializes in bridging traditional QA practices with emerging technologies like AI and prompt engineering. Arbaz is a published author, with his recent article on AI-assisted browser testing featured on StickyMinds. He currently works at Inspire Brands, where he leads QA initiatives for digital products at Dunkin’ and Baskin Robbins. Arbaz brings a practitioner’s perspective to AI in testing, grounded in real-world workflows, cross-functional collaboration, and scalable quality strategies.