Quality Advocacy: The Next Generation of Testing Excellence

In this talk, we will explore the inevitable shift from traditional Quality Assurance to Quality Advocacy. This evolution moves beyond defect detection, driving a proactive and strategic approach to quality at every stage of the development lifecycle. We’ll discuss how automation and Continuous Integration are key drivers of this revolution, and how T-shaped QA professionals are becoming architects of digital excellence, focusing on delivering value-driven services rather than just software. This new paradigm advocates for quality as a collaborative, organization-wide effort that aligns with the modern enterprise’s needs.

Are You Having Cheese or Steak?

Building an automation solution that will support our teams for years to come can be a challenge. But sometimes the animal we hope to milk turns out to be a rodeo bull. What are some early indicators that the solution we are working on might not work, and when would we be better off shooting it?

Model-Based Testing: A Powerful Way to QA

Model-based testing is a powerful technique for QA teams. It focuses on the intended system behavior, then automatically derives test plans and scenarios from it. The intended behavior is visualized and verified against regulations and policies, enabling early detection of requirement errors.
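
As an illustration of the general idea (not any specific tool's API), a minimal Python sketch can derive test scenarios from a small behavioral model; the login flow, state names, and actions below are invented for the example.

```python
# A tiny model of intended behavior: states mapped to {action: next_state},
# for a hypothetical login flow (illustrative only).
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_fail": "locked_out_check"},
    "locked_out_check": {"retry": "logged_out", "too_many_failures": "locked"},
    "logged_in": {"logout": "logged_out"},
    "locked": {},
}

def derive_scenarios(model, start, max_depth):
    """Walk the model and emit every action sequence up to max_depth.

    Each sequence is a candidate test scenario derived from the model
    rather than written by hand."""
    scenarios = []

    def walk(state, path):
        if path:
            scenarios.append(tuple(path))
        if len(path) == max_depth:
            return
        for action, next_state in model[state].items():
            walk(next_state, path + [action])

    walk(start, [])
    return scenarios

scenarios = derive_scenarios(MODEL, "logged_out", 3)
```

Because the scenarios come from the model, adding one transition to the model automatically yields new candidate scenarios, which is where the early detection of requirement errors comes from: a wrong model produces visibly wrong paths.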

The Future of QA: Integrating AI for Intelligent Test Management​

In this session, we will explore the transformative potential of AI in Quality Assurance, particularly how it can be leveraged for intelligent test management. We will discuss practical implementations, the benefits of AI-driven testing, and strategies for integrating these tools into existing QA processes. Attendees will gain insights into the future of QA and how to stay ahead in an increasingly automated landscape.

Automate Smarter, Not Harder: GitHub Copilot, Your AI Test Buddy

Explore how GitHub Copilot streamlines test automation by generating and optimizing scripts, from unit to API tests. Learn how it reduces development time, suggests best practices, and improves code quality. This session is essential for those aiming to boost efficiency and maintainability in test automation.

This one is for our audience in Australia

Part One

Introduction and Greetings

Test Automation: Friend or Foe?
by Maaret Pyhäjärvi

Break


And here's for our audience in the Americas

Part Two

Introduction and Greetings

Taking Your IT Leadership to the Next Level
by Mike Lyles

Break


A Holistic Approach to Testing in an Agile Context

In the software world, we talk a lot about quality. Business leaders say they want the best quality product, though they often fail to understand how investing in quality pays off. Customers have their own views of what quality means to them, which may surprise the business. Delivery teams are concerned about code correctness and the many types of testing activities.

With so many different perspectives, it’s no wonder organizations get confused about how to deliver a product that delights their customers. The Holistic Testing Model helps teams identify the levels of quality they need for their product. It helps them plan the types of testing activities they need all the way around the continuous software development loop. Using this holistic approach to agile development helps teams feel confident in delivering changes frequently. Lisa will share her experiences with this whole-team approach to quality and testing. 

Key learnings: 

  • A holistic quality and testing approach throughout the continuous loop of software development, using the Holistic Testing Model
  • Apply the Holistic Testing Model to create an effective test strategy
  • The importance of bug prevention and value injection over bug detection
  • How to plan and fit testing activities at all levels into short agile iterations with frequent delivery, and continuous delivery

Gil Zilberfeld

Has been in software since childhood, writing BASIC programs on his trusty Sinclair ZX81. He is a trainer and mentor working to make software better.
With more than 25 years of developing commercial software, he has vast experience in software methodology and practices. From unit testing to exploratory testing, design practices to clean code, API to web testing – he’s done it all.
Gil speaks frequently at international conferences about testing, TDD, clean code, and agile practices. He blogs and posts videos on these topics at testingil.com and on his YouTube channel. Gil is the author of “Everyday Unit Testing”.
In his spare time, he shoots zombies, for fun.

Lisette Zounon

Is an award-winning tech executive, serial entrepreneur, and engineering leader with two decades of experience helping people and companies improve the quality of their applications, with solid tools, a simple process, and a smart team. She firmly believes that industry best practices, including agile methodologies, DevOps practices, and leveraging Artificial Intelligence, are invaluable to the success of any software delivery.

Lisette was responsible for leading and managing high-performing quality-testing teams throughout all phases of the software development testing cycle; ensuring that all information systems, products, and services meet or exceed organization and industry quality standards as well as end-user requirements. This includes establishing and maintaining the Quality strategy, processes, platforms, and resources needed to deliver 24×7 operationally critical solutions for many of the world’s largest companies.

Lisa Crispin

Is an independent consultant, author, and speaker based in Vermont, USA. Together with Janet Gregory, she co-authored Holistic Testing: Weave Quality Into Your Product; Agile Testing Condensed: A Brief Introduction; More Agile Testing: Learning Journeys for the Whole Team; and Agile Testing: A Practical Guide for Testers and Agile Teams; and the LiveLessons “Agile Testing Essentials” video course. She and Janet co-founded a training company offering two live courses worldwide: “Holistic Testing: Strategies for Agile Teams” and “Holistic Testing for Continuous Delivery”.

Lisa uses her long experience working as a tester on high-performing agile teams to help organizations assess and improve their quality and testing practices, and succeed with continuous delivery. She’s a DORA Guide for the DORA community of practice. Please visit: https://lisacrispin.com, https://agiletester.ca, https://agiletestingfellow.com, and https://linkedin.com/in/lisacrispin/ for details and contact information.

Suzanne Kraaij

With almost 15 years in the field of testing, Suzanne has gained a lot of experience with various clients in various industries. In the company where she works, Suzanne has a pioneering role as a core member within their testing community. In this position, she is actively involved in knowledge sharing and further development of the field of software testing and quality engineering.

Mike Lyles

Is an international keynote speaker, author, and coach. He is the Head of IT with Maxwell Leadership, an amazing company founded by leadership expert, author, and speaker, John C. Maxwell. Mike has over 30 years of experience in IT, coaching, mentoring, and building successful teams with multiple organizations, including Fortune 50 companies. As a Maxwell Leadership Certified coach and speaker, Mike’s “purpose” is to inspire others with value-based leadership and growth principles and to serve others in their journey toward significance and success. Mike has traveled to dozens of countries and hundreds of events to share his experiences with thousands through keynotes, workshops, and other special events. Mike is the author of the self-help motivational book, “The Drive-Thru Is Not Always Faster”.

George Ukkuru

Is a performance-driven technocrat with over two and a half decades of experience in Test Engineering, Product Management, and User Experience. He specializes in optimizing costs, improving market speed, and enhancing quality by deploying the right tools, practices, and platforms. Throughout his career, George has worked with several Fortune 500 companies, delivering impactful solutions that drive efficiency and innovation. Currently, he serves as General Manager at McLaren Strategic Solutions, where he continues to leverage his expertise to lead teams and projects that align with business goals, ensuring high-quality outcomes and strategic growth.

Esther Okafor

Is a Quality Assurance Engineer at Storyblok, bringing unique experience in API testing and a strong passion for building high-quality software. She has previously worked with renowned companies like Flutterwave, Renmoney, and Venture Garden Group. Over her four years in the tech industry, Esther has trained and mentored over 100 women in tech through initiatives such as She Code Africa and Bug Detective. Her perspective offers valuable insights into the world of QA, and she is committed to helping others succeed. Additionally, she has authored several blog posts that provide essential guidance to Quality Assurance professionals, helping them excel in their day-to-day roles.

Michael Bar-Sinai

Software engineer by training, with a mid-career PhD in formal methods and requirement modeling. Created various information systems for NGOs, ranging from work accident tracking to geopolitics information systems. Worked on data science tools at Harvard’s IQSS. CTO and Co-Founder at Provengo, a start-up creating model-driven software engineering tools. Married, 3 kids, sadly no dogs.

Tim Munn

Technical Test Leader with almost 20 years' experience in the field. He has automated apps and led international technical QA teams in areas from Pharma to Fintech. Currently a Senior SDET at Spotlight.

Joel Montvelisky

Is a Co-Founder and Chief Product Officer at PractiTest, and has been in testing and QA since 1997, working as a tester, QA Manager and Director, and consultant for companies in Israel, the US, and the EU. Joel is a Forbes council member and a blogger, and regularly delivers webinars on a number of testing and quality-related topics.
In addition, Joel is the founder and Chair of the OnlineTestConf, the co-founder of the State of Testing survey and report, and a Director at the Association of Software Testing.
Joel is a seasoned conference speaker worldwide, having presented at the STAR conferences, STPCon, JaSST, TestLeadership Conf, CAST, QA&Test, and more.

Maaret Pyhäjärvi

Is an exploratory tester extraordinaire and Director, Consulting at CGI. She is a tester, (polyglot) programmer, speaker, author, conference designer, and community facilitator. She has been awarded prestigious testing awards, including Most Influential Agile Testing Professional Person 2016 (MIATPP), the EuroSTAR Testing Excellence Award (2020), and Tester Worth Appreciating (2022), and was selected as one of the Top-100 Most Influential in ICT in Finland 2019-2023.

Francisco Di Bartolomeo

An experienced Test Discipline Lead with over ten years of expertise across diverse testing areas. Passionate about cultivating a quality-driven culture, he excels in coaching and mentoring teams both technically and professionally. He has led numerous quality assurance initiatives, advocating for risk-based testing and shift-left practices to integrate quality at every stage of development. Guided by the belief that “Quality is a habit,” Francisco is dedicated to making quality a constant practice, establishing himself as a visionary in software testing.

API Test Planning LIVE

How do you come up with test cases for your APIs? Is it enough to check that they return the right status code? No.

APIs are complex, so even a couple of them can overwhelm us with options. But options are good. We want the ideas so we can prioritize based on our needs. We just need to understand our system and come up with the right ones.

History has proven that the best way to come up with ideas is collaboration. So that’s what we’ll do.

This is an interactive session on API Test Planning. Given only two APIs (and a semi-sane moderator), we’ll come up with creative ways to test them.

Sounds easy? APIs are complex. In this session, we’ll see just how complex they are, and how to think about the different aspects of APIs when testing them.
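
To ground the claim that a status code alone is not enough, here is a minimal hypothetical sketch (plain Python, no HTTP library) of checking a canned "create user" response on several aspects at once; every endpoint, field, header, and limit below is invented for illustration.

```python
# Hypothetical response data for an illustrative "create user" API call.
response = {
    "status": 201,
    "headers": {"Content-Type": "application/json", "Location": "/users/42"},
    "body": {"id": 42, "name": "Ada", "created_at": "2024-01-15T09:30:00Z"},
    "elapsed_ms": 120,
}

def check_create_user(resp):
    """Collect failures across several aspects, not just the status code."""
    failures = []
    if resp["status"] != 201:
        failures.append("wrong status")
    if resp["headers"].get("Content-Type") != "application/json":
        failures.append("wrong content type")
    if "Location" not in resp["headers"]:
        failures.append("missing Location header")
    body = resp["body"]
    if not isinstance(body.get("id"), int):
        failures.append("id missing or not an integer")
    if not body.get("created_at", "").endswith("Z"):
        failures.append("created_at not in UTC")
    if resp["elapsed_ms"] > 500:
        failures.append("too slow")
    return failures

# The canned response passes all checks.
assert check_create_user(response) == []
```

Each check is one test idea: status, content type, headers, body schema, time zone conventions, response time. The point of the session's collaborative exercise is generating many such ideas and then prioritizing them.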

Search for a Tool Is Like Dating​

Choosing the right toolset for your ecosystem can be a lot like dating. You need to know what you’re looking for, be prepared to make a list, and be willing to check off those boxes to find the right match. In this lightning session, we’ll explore the similarities between choosing a toolset and choosing a date, and learn how to make the best decisions for your needs.

You will learn the following:

  • What should be on your list of requirements when evaluating new toolsets, and how to approach the search to ensure you find the right match for your ecosystem
  • The importance of compatibility, communication, and trust in both dating and tool selection, and how to use these principles to make the best decisions for your organization

By the end of this session, you’ll walk away with a clear understanding of how to approach the tool selection process like a pro, and how to make the best decisions to support your organization’s success.

OSRS: Your New Test Strategy Multitool​

Have you ever inherited a testset and wondered about its worth? If it’s complete? If it’s good? A lot of test strategy approaches focus on new projects where you start from scratch, but in reality, you’ll often inherit an existing testset of an already running project. So how do you evaluate the value of that testset? How do you see if you are missing something or if you are overtesting things? How to choose what to automate? What to put in a regression testset? What to test first when under time pressure?

This approach can be applied to all of these questions and helps give the whole team insight into the potential value of testing. It will open up the conversation about what you will and won’t be testing with evidence to substantiate those choices.

Taking Your IT Leadership to the Next Level​

What does a day in the life of YOU look like at work? Do you struggle to complete projects on time? Are there issues that seem to pop up with every deliverable? Does your team respect you and your contributions? Does your boss understand what you are trying to accomplish? Do your stakeholders appreciate the outcomes that you and your team provide?

It’s very likely that one of these questions resonated with you when you read it. In fact, there is a chance that ALL of them do! We live in a fast-paced world where it’s easy to get caught up in a routine where every day looks the same.

There seems to be little time left for growing, improving, and building connections.

Imagine a new world where you focus daily on personal growth. A world where you engage effectively with your boss, your peers, and your subordinates. A world where you go from just “showing up” or “keeping up” to “growing up” and improving not only your workplace but your personal life.

Join Mike Lyles as he shares decades of experience in IT and leadership roles and how those experiences have helped him grow as a leader in IT.

Key Takeaways:

  • Key learnings from years of IT experiences
  • Suggestions for how to move from “communicating” to “connecting”
  • Tips to move from leading “followers” to leading “leaders”

Test Automation: Friend or Foe?

Supporting evidence does not teach us as much as opposing evidence. We are people who support and oppose on principle, in search of knowledge. We are balancing the perspectives of friends and foes, professionally, all day long.

We’ve been at this test automation thing quite a while, and three decades have given me the space to arrive at a principle that helps my projects succeed slightly more often with test automation: time spent warning about test automation is time taken away from succeeding with it. We know from a particular genre of romance novels that tropes starting with friends or with foes both end up in love, and we could adult up and improve our communication toward more purposeful change.

A year ago, we set up a panel conversation to seek ideas for ending up well with test automation. In this talk, we draw on the tension of past disagreements to enable learning, combined with stories from real projects.

Maybe at this time of AI and getting computers to ‘act humanly’, we need to team up with an old enemy to make sense of how we peacefully coexist with the new, with healthy boundaries keeping the friendship in check.

Key takeaways:

  • When the stakes are high, we work with friends and foes
  • How to increase the odds of good results with the tools
  • How to navigate the recruitment trap and the in-company growth plans

Not Too Little, Not Too Much: How to Test Just the Right Amount

As business demands for a shorter time to market continue to rise, testing teams can struggle to find the right balance between adhering to these demands and maintaining sufficient coverage to ensure the released products meet the desired quality standards.

Using the power of AI, we offer a model that combines the value each test provides with the current time constraints to determine which tests are best to execute.
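
The session's actual AI model is not described here, but the underlying trade-off can be sketched as a simple value-per-minute selection under a time budget (a classic knapsack heuristic); the test names, value scores, and durations below are invented for illustration.

```python
# Illustrative test inventory: each test has an estimated value score
# and an execution time in minutes (all numbers are made up).
tests = [
    {"name": "checkout_e2e",   "value": 9.0, "minutes": 30},
    {"name": "login_smoke",    "value": 8.0, "minutes": 2},
    {"name": "search_regress", "value": 6.0, "minutes": 10},
    {"name": "profile_ui",     "value": 3.0, "minutes": 15},
]

def select_tests(tests, budget_minutes):
    """Greedily pick the tests with the best value per minute
    that still fit within the time budget."""
    ranked = sorted(tests, key=lambda t: t["value"] / t["minutes"], reverse=True)
    chosen, used = [], 0
    for t in ranked:
        if used + t["minutes"] <= budget_minutes:
            chosen.append(t["name"])
            used += t["minutes"]
    return chosen

# With a tight 20-minute budget, the quick high-value tests win out.
selected = select_tests(tests, budget_minutes=20)
```

An AI-driven approach would refine the inputs, for example predicting each test's value from code-change and failure-history data, but the selection trade-off it feeds into looks like this.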