Sessions Line-up

November 29th

05:00-05:15 AM EST 
11:00-11:15 AM CET

Introduction – Welcome to Fall OTC

05:15-6:00 AM EST 
11:15 AM-12:00 PM CET

Who Tests The Tests? Mutation Testing for UI Automation

Session with Louise Gibbs

Automated tests provide an additional safety net, alerting us to any unintended failures introduced by a recent change. If a test fails when it should pass, time is wasted investigating an issue that doesn’t exist, and the organization loses faith in the tests. If a test passes when it should fail, a critical bug might get released without the team ever noticing.

As Testers, we would never let a new feature get released without first testing it. Automated tests are software applications in their own right and should be treated that way. This means that all new tests should be rigorously tested the same way a new feature might be rigorously tested.

Mutation testing is typically used for verifying the feedback of unit tests. It involves making changes to the source code, which should cause the unit tests to fail. Similar methods can be used to assess the quality of UI Automated tests.
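
The idea can be sketched in a few lines of Python (a toy illustration, not an example from the talk): deliberately mutate an operator in the code under test and check that the test suite notices, i.e. "kills" the mutant.

```python
# Toy mutation-testing sketch (hypothetical example): a surviving mutant
# means the tests are too weak to catch a deliberate bug.

SOURCE = "def total_price(price, qty):\n    return price * qty\n"

def run_tests(namespace):
    """A tiny stand-in for a test suite: True if all checks pass."""
    f = namespace["total_price"]
    return f(3, 4) == 12 and f(0, 5) == 0

# The original code should pass the suite.
ns = {}
exec(SOURCE, ns)
assert run_tests(ns)

# Mutant: swap '*' for '+'. A trustworthy suite should now fail.
ns = {}
exec(SOURCE.replace("*", "+"), ns)
killed = not run_tests(ns)
print("mutant killed:", killed)
```

Tools such as PIT (Java) and mutmut (Python) automate this at scale; the same "break it on purpose" check can be applied to UI suites by seeding deliberate bugs into the application under test.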

In this talk, I will discuss methods for testing UI Automated tests that help reduce the risk of false positives, test flakiness, and wasted time in test runs.

Louise Gibbs

Louise is a Senior Automation Tester at PebblePad. Her main job is to review and maintain the automated tests that are run overnight and investigate the causes of any failures. She has also worked for companies in the e-commerce, scientific research, and automotive industries, and runs a personal blog at louisegibbstest.wordpress.com, where she talks about her experiences as a software tester. She enjoys improving her testing skills and her main method for achieving this is by speaking to other Testers and discussing ideas.

6:15-7:00 AM EST 
12:15-13:00 PM CET

Get Your Test Strategy Set

Session with Varuna Srivastava

In this session, Varuna will share a specific case study of how an effective test strategy can move a team from Push and Pray deployment to Continuous and Confident deployment to production.

Participate in this talk to learn:

1. How to form a core team to define the external and internal strategy

2. How to apply recursive feedback and update the strategy when required after a production release

3. Internal Test Strategy: A strategy within a team of the same vertical, such as checkout API and UI

4. External Test Strategy: An end-to-end test strategy of a product

5. How this helps in moving from Push and Pray deployment to Continuous and Confident deployment to production

Varuna is a technical tester who’s worked on award-winning projects across a wide variety of technology sectors, including retail, travel, financial, and the public sector, and worked with various web, mobile, and IoT technologies. Varuna is a passionate advocate of shipping quality code to production using agile practices. When not working, Varuna likes to get her hands dirty experimenting with her culinary skills. Most of her weekends are spent in “cook-ography”—cooking plus photography!

7:15-8:00 AM EST 
13:15-14:00 PM CET

A test management tool for everyone

Session with Gagandeep (Gagan) Sharma

The test management tools we were using were giving us trouble, especially when it came to catering to all the stakeholders. It was also difficult to scale up and down when needed.

We then moved to PractiTest, which is fully SaaS, supports Agile and waterfall, and integrates well with Jira and automation, and we have never looked back. This session will focus on how we made the shift and upscaled to an end-to-end test management tool: the benefits, the challenges, and the successful results.

Gagandeep (Gagan) Sharma

My name is Gagan, and I live in Utrecht, Netherlands, with my beautiful wife and our 4-year-old daughter. I currently work for a coffee and tea giant as Global IT Quality and Automation Manager, covering a wide portfolio: testing practices, test management tooling, automation testing, test factory, performance testing, security testing, CI/CD, and RPA. I work with onsite and offshore teams based in the Netherlands, India, and the Philippines, supporting projects and teams spread across 39 countries. Along the way, I learn new things every day, which keeps me active and up to date.

8:15-9:00 AM EST 
14:15-15:00 PM CET

Try not to laugh – a tester’s fail compilation

Session with Anssi Lehtelä

During my almost 20-year career in software development, I have seen and committed a lot of failures. In fact, my whole career started with one 🙂

In this session, I will share the worst, most embarrassing, and most educational fails I have made or seen during my career, in areas like getting my hands dirty in production, missing and creating bugs, failing in project work – and more.

Welcome to my very own fail compilation.

Anssi has been failing in software development as a QA and ways-of-working specialist for nearly 20 years. Outside work, he likes to fail at playing football and at spending time with his family and friends.

9:15-10:00 AM EST 
15:15-16:00 PM CET

9 out of 10 testers think onboarding at a new job sucks

A panel session with Sanne Visser, Chris Armstrong, Hanna Schlander, Shey Crompton and Veerle Verhagen

Onboarding is the process of introducing a newly hired tester to the team and organisation. It is an essential part of helping new testers understand their new role and job requirements. Despite its importance, it is frequently a horrible experience, a period we need to survive rather than one that sets us up for success. We will vent and rant about:
  • Missing managers during onboarding
  • Unorganized onboarding
  • No access or accounts on day one
  • No overlap/handover with the previous person in the role
  • Too much information leading to overload.
Join us for the panel about the bad, the worse, and the most atrocious onboardings we have experienced, and maybe you can avoid some of these pitfalls.

 

Sanne Visser, Agile Test Coach

Chris Armstrong, QA Strategy Consultant

Hanna Schlander, Quality Catalyst

Shey Crompton, Senior Quality Engineer

Veerle Verhagen, Quality Engineer

10:15-11:00 AM EST 
16:15-17:00 PM CET

How to not waste time

Session with Huib Schoots

Personal stories on how many testers waste a considerable amount of time!

I have seen testers waste a lot of time. In this interactive talk, I am sharing some stories and we’ll discuss how to not waste your valuable time!

Do you recognize the feeling that you could have tested more, or could have spent your time doing more valuable things? I do! This talk was inspired by my amazement and my frustration that we waste a lot of time on unimportant things in testing. I want to create insight into how testers often waste their valuable time. Organizations want to speed up their product delivery, and they need to stay ahead of the competition. So not wasting time is essential and will help the team tremendously. I’ll share my ideas on how we can reduce waste and spend our time on activities that add value.

Huib Schoots

I’m Huib Schoots, nice to meet you. My personal mission is shaping better people and software quality by connecting, innovating, facilitating, coaching, enabling, and teaching.

I’m fascinated by mindset, thinking, behavior, and collaboration. I’m active in many communities. I’m a humanist and a servant leader: open, direct, creative, an idea generator, result-driven, humorous, a problem solver, curious, confronting, a critical thinker, passionate and energetic, a lifelong learner, entrepreneurial, analytic, and a continuous (world) improver.

I like hanging out with friends, playing trombone in a brass band, board & computer games, LEGO, photography, running, beer brewing, magic tricks, traveling, and reading.

I work as a Managing consultant & Quality Coach at www.qualityaccelerators.nl, Agile Test Expert at www.deagiletesters.nl, and organizer of www.frogsconf.nl

11:15 AM-12:00 PM EST 
17:15-18:00 PM CET

Test manager, orchestrator or quality coach?

Session with Gitte Ottosen

For some years we have been talking about the role of the tester, and how it should/might change in the light of new ways of delivering software – with agile, scaled agile, DevOps, etc. But the role of the test manager is changing too, and is being questioned in some situations – should the role exist anymore? For some, the traditional test management role is still 100% relevant (I have recently been in such a context myself), but for many of us, the test manager role is transitioning from the traditional test management role to a role focusing on orchestration and quality coaching.


So, if you haven’t done so already, maybe it’s time to consider questions such as:
  • Do I have the competencies to coach my team in tests?
  • How do I support the team in its focus on continuous quality assurance?
  • How do I ensure that we have the proper focus on value while testing?

Ask these questions while recognizing that a test strategy is still essential, and that especially in a scaled context, someone still needs to orchestrate tests for the solution end-to-end and ensure that dependencies across teams are addressed.


This presentation will take you through some of the competencies needed to be a good quality orchestrator and quality coach, focusing on both the soft and hard skills that will help you be the best possible support to your team/project/train.

Key Learnings
1. Discover how to draw skills from different aspects of testing
2. Learn how to be a Quality Coach
3. Understand the skills you need to succeed in test management

Gitte Ottosen

Gitte Ottosen is a test manager and agile/quality coach with a strong focus on a value driven approach to software development. She has more than twenty years of experience in IT, primarily within test, along with test management and process improvement, in both traditional and agile contexts.

The last fifteen years she has primarily worked within an agile context, focusing on supporting a quality mindset across teams and organizations, and improving the processes for some of the largest international companies in Denmark.

A self-confessed test and agile evangelist, Gitte preaches the need for a strong quality- and value-driven focus. She is a dedicated trainer in the areas of agile and test, and a regular speaker at international conferences.

12:15-13:00 PM EST 
18:15-19:00 PM CET

Feature Flags – The Good, The Bad, and How to Prevent The Ugly

Session with Jeff Sing

More and more companies are using Feature Flags to get all types of changes – new features, configuration changes, bug fixes, and experiments – into production in a safer, faster, and most importantly, sustainable way.

Software companies that shift to deploying with Feature Flags benefit from low-risk releases, faster time to market, higher quality, and happier teams. Sounds great, right? But what happens when your system isn’t implemented correctly or, worse, isn’t tested properly?

This talk takes you on a journey of why teams use Progressive Delivery and the path from basic to advanced feature flag usage.
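
As a rough sketch of the mechanism involved (an assumed illustration, not the talk's implementation), a feature flag with a percentage rollout can hash the user id so every user gets a sticky, repeatable decision:

```python
import hashlib

# Hypothetical flag store; the flag name and fields are illustrative only.
FLAGS = {"new-checkout": {"enabled": True, "rollout_percent": 25}}

def is_enabled(flag_name, user_id):
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Hash flag+user so the same user always lands in the same bucket.
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["rollout_percent"]

print(is_enabled("new-checkout", "user-42"))
```

The numbers here are invented; the testing point is that both branches, plus the flag-off fallback, need coverage.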

Session Takeaways:

– Make sure you build the right product and build the product right!

– How to decide what testing strategies should be implemented (how to set up your unit test, end-to-end automation, etc)

– What tactics are most effective for keeping your implementation healthy and effective (feature flag governance!).

Jeff Sing is a Quality Leader who has been in the testing industry for over 15 years. During this time, he has built automation frameworks, test strategies, and executed quality initiatives for fields such as medical devices, infrastructure security, web identification, marketing tech, and experimentation and progressive delivery.

Jeff is currently a Sr. Engineering Manager at Iterable, where he leads the Quality Engineering organization that orchestrates Iterable’s quality control plan, utilizing a combination of automated testing, QA procedures, and customer experience championing to ensure Iterable remains the world’s leading customer engagement platform. He also built and leads Iterable’s Engineering Operations team, which runs the services and programs to systematically improve effectiveness and productivity across the engineering organization as it scales.

13:15-14:00 PM EST 
19:15-20:00 PM CET

The Aztec Automation Pyramid of Sacrifices – Why you may have been sacrificing your QA teams

Session with Leandro Melendez (Señor Performo)

The days of just one type of automation for all QA endeavors are gone.
Our applications have evolved from bulky monoliths to modern applications tiered in services, spread over the cloud, and even able to expand on demand. On top of that, our teams are caught up in hyper-fast release cycles that require us to change how we do things – especially those QA things called automation. In other words, we must rethink how, how much, and where we automate.

In this fun presentation, Leandro will show the audience the principles of the automation pyramid with a set of fun analogies that will help the audience understand the importance of the pyramid.
At the same time, they will learn its principles to avoid wasteful behaviors that only bring tiredness, sweat, tears, and even blood.
Organizations may have been sacrificing their teams by not following the pyramid.

Takeaways:
Overall, the audience will have lots of fun learning from these examples, taking away multiple great ideas on how, where, and how much to automate QA on their projects.

– Understand the importance of sticking to the pyramid when automating QA tests.
– Examples on automating at different levels of the pyramid
– List of tools per tier of the pyramid
– Fun ways to explain to your team and management

Leandro Melendez

Leandro is a performance testing advocate at Grafana k6, helping everyone ramp up their performance practices.
He has over 20 years of experience in IT and over 10 in performance testing, where he has served multiple S&P 500 customers across the USA, Mexico, Canada, Brazil, India, Austria, and more.
He is the author of the popular performance testing blog Señor Performo (www.srperf.com), where he curates a diverse set of learning material for performance testers and engineers.

He is an international public speaker who has participated in multiple conferences, events, and webinars, with keynotes, workshops, and multiple talks under his belt.
Lastly, he is the author of “The Hitchhiker’s Guide to Load Testing Projects”, a fun walkthrough that guides you through the phases, or levels, of an IT load testing project.

14:15-15:00 PM EST 
20:15-21:00 PM CET

Align Testing with Business by Shifting Left & Right

Session with Joel Montvelisky

Joel Montvelisky

Joel Montvelisky is a Co-Founder and Chief Solution Architect at PractiTest.

Joel has been in testing and QA since 1997, working as a tester, QA Manager and Director, and a consultant for companies in Israel, the US, and the EU. Joel is a Forbes Council member and a blogger, and regularly presents webinars on a number of testing and quality-related topics.

In addition, Joel is the founder and Chair of the OnlineTestConf, the co-founder of the State of Testing survey and report and a Director at the Association of Software Testing.

Joel is a seasoned conference speaker worldwide, having presented at the STAR conferences, STPCon, JaSST, the Test Leadership Conf, CAST, QA&Test, and more.


Quality Advocacy: The Next Generation of Testing Excellence​

In this talk, we will explore the inevitable shift from traditional Quality Assurance to Quality Advocacy. This evolution moves beyond defect detection, driving a proactive and strategic approach to quality at every stage of the development lifecycle. We’ll discuss how automation and Continuous Integration are key drivers of this revolution, and how T-shaped QA professionals are becoming architects of digital excellence, focusing on delivering value-driven services rather than just software. This new paradigm advocates for quality as a collaborative, organization-wide effort that aligns with the modern enterprise’s needs.

Are You Having Cheese or Steak?​

Building an automation solution that will support our teams for years to come can be a challenge. But sometimes what we hope to milk actually turns out to be a rodeo. What are some early indicators that the solution we are working on might not work, and when would we be better off shooting it?

Model Based Testing: A Powerful Way to QA​

Model-based testing is a novel technique that makes QA teams more powerful. It focuses on intended system behavior, and then automatically derives test plans and scenarios from it. The intended behavior is visualized and verified against regulations/policies, helping early detection of requirement errors.
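
A minimal sketch of the idea (a hypothetical example, not the tool or notation from the session): describe the intended behavior as a small state machine, then derive test scenarios from it automatically.

```python
# Hypothetical behavior model of a login flow: states map events to
# successor states. Test scenarios are enumerated from the model.

MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "logged_out"},
    "logged_in": {"logout": "logged_out"},
}

def derive_scenarios(state, depth):
    """Enumerate all event sequences of exactly `depth` steps."""
    if depth == 0:
        return [[]]
    scenarios = []
    for event, target in MODEL[state].items():
        for rest in derive_scenarios(target, depth - 1):
            scenarios.append([event] + rest)
    return scenarios

scenarios = derive_scenarios("logged_out", 2)
print(len(scenarios), "scenarios:", scenarios)
```

Real model-based testing tools work on much richer models, but the derivation step is the same in spirit: the model, not a hand-written list, is the source of the test plan.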

The Future of QA: Integrating AI for Intelligent Test Management​

In this session, we will explore the transformative potential of AI in Quality Assurance, particularly how it can be leveraged for intelligent test management. We will discuss practical implementations, the benefits of AI-driven testing, and strategies for integrating these tools into existing QA processes. Attendees will gain insights into the future of QA and how to stay ahead in an increasingly automated landscape.

Automate Smarter, Not Harder: GitHub Copilot your AI Test Buddy

Explore how GitHub Copilot streamlines test automation by generating and optimizing scripts, from unit to API tests. Learn how it reduces development time, suggests best practices, and improves code quality. This session is essential for those aiming to boost efficiency and maintainability in test automation.

This one is for our audience in Australia

Part One

Introduction and Greetings

Test Automation: Friend or Foe?
by Maaret Pyhäjärvi

Break

Break

And here's for our audience in the Americas

Part Two

Introduction and Greetings

Taking Your IT Leadership to the Next Level
by Mike Lyles

Break

Break

A Holistic Approach to Testing in an Agile Context

In the software world, we talk a lot about quality. Business leaders say they want the best quality product – though they often fail to understand how investing in quality pays off.  Customers have their own views of what quality means to them, which may be surprising to the business. Delivery teams are concerned about code correctness, and the many types of testing activities.  

With so many different perspectives, it’s no wonder organizations get confused about how to deliver a product that delights their customers. The Holistic Testing Model helps teams identify the levels of quality they need for their product. It helps them plan the types of testing activities they need all the way around the continuous software development loop. Using this holistic approach to agile development helps teams feel confident in delivering changes frequently. Lisa will share her experiences with this whole-team approach to quality and testing. 

Key learnings: 

  • A holistic quality and testing approach throughout the continuous loop of software development, using the Holistic Testing Model
  • How to apply the Holistic Testing Model to create an effective test strategy
  • The importance of bug prevention and value injection over bug detection
  • How to plan and fit testing activities at all levels into short agile iterations with frequent delivery, and continuous delivery

Gil Zilberfeld

Has been in software since childhood, writing BASIC programs on his trusty Sinclair ZX81. He is a trainer and mentor working to make software better.
With more than 25 years of developing commercial software, he has vast experience in software methodology and practices. From unit testing to exploratory testing, design practices to clean code, API to web testing – he’s done it all.
Gil speaks frequently at international conferences about testing, TDD, clean code, and agile practices. He blogs and posts videos on these topics at testingil.com and on his YouTube channel. Gil is the author of “Everyday Unit Testing”.
In his spare time, he shoots zombies, for fun.

Lisette Zounon

Is an award-winning tech executive, serial entrepreneur, and engineering leader with two decades of experience helping people and companies improve the quality of their applications, with solid tools, a simple process, and a smart team. She firmly believes that industry best practices including implementing agile methodologies, DevOps practices, and leveraging Artificial Intelligence are invaluable to the success of any software delivery.

Lisette was responsible for leading and managing high-performing quality-testing teams throughout all phases of the software development testing cycle, ensuring that all information systems, products, and services meet or exceed organization and industry quality standards as well as end-user requirements. This includes establishing and maintaining the Quality strategy, processes, platforms, and resources needed to deliver 24×7 operationally critical solutions for many of the world’s largest companies.

Lisa Crispin

Is an independent consultant, author, and speaker based in Vermont, USA.  Together with Janet Gregory, she co-authored Holistic Testing: Weave Quality Into Your  Product; Agile Testing Condensed: A Brief Introduction; More Agile Testing: Learning  Journeys for the Whole Team; and Agile Testing: A Practical Guide for Testers and Agile  Teams; and the LiveLessons “Agile Testing Essentials” video course. She and Janet co-founded a training company offering two live courses worldwide: “Holistic Testing:  Strategies for Agile Teams” and “Holistic Testing for Continuous Delivery”. 

Lisa uses her long experience working as a tester on high-performing agile teams to help organizations assess and improve their quality and testing practices, and succeed with continuous delivery. She’s a DORA Guide for the DORA community of practice. Please visit: https://lisacrispin.com, https://agiletester.ca, https://agiletestingfellow.com, and  https://linkedin.com/in/lisacrispin/ for details and contact information.

Suzanne Kraaij

With almost 15 years in the field of testing, Suzanne has gained a lot of experience with various clients in various industries. In the company where she works, Suzanne has a pioneering role as a core member within their testing community. In this position, she is actively involved in knowledge sharing and further development of the field of software testing and quality engineering.

Mike Lyles

Is an international keynote speaker, author, and coach. He is the Head of IT with Maxwell Leadership, an amazing company founded by leadership expert, author, and speaker, John C. Maxwell. Mike has over 30 years of experience in IT, coaching, mentoring, and building successful teams with multiple organizations, including Fortune 50 companies. As a Maxwell Leadership Certified coach and speaker, Mike’s “purpose” is to inspire others with value-based leadership and growth principles and to serve others in their journey toward significance and success. Mike has traveled to dozens of countries and hundreds of events to share his experiences with thousands through keynotes, workshops, and other special events. Mike is the author of the self-help motivational book, “The Drive-Thru Is Not Always Faster”.

George Ukkuru

Is a performance-driven technocrat with over two and a half decades of experience in Test Engineering, Product Management, and User Experience. He specializes in optimizing costs, improving market speed, and enhancing quality by deploying the right tools, practices, and platforms. Throughout his career, George has worked with several Fortune 500 companies, delivering impactful solutions that drive efficiency and innovation. Currently, he serves as General Manager at McLaren Strategic Solutions, where he continues to leverage his expertise to lead teams and projects that align with business goals, ensuring high-quality outcomes and strategic growth.

Esther Okafor

Is a Quality Assurance Engineer at Storyblok, bringing unique experience in API testing and a strong passion for building high-quality software. She has previously worked with renowned companies like Flutterwave, Renmoney, and Venture Garden Group. Over her four years in the tech industry, Esther has trained and mentored over 100 women in tech through initiatives such as She Code Africa and Bug Detective. Her perspective offers valuable insights into the world of QA, and she is committed to helping others succeed. Additionally, she has authored several blog posts that provide essential guidance to Quality Assurance professionals, helping them excel in their day-to-day roles.

Michael Bar-Sinai

Software engineer by training, with a mid-career PhD in formal methods and requirement modeling. Created various information systems for NGOs, ranging from work accident tracking to geopolitics information systems. Worked on data science tools at Harvard’s IQSS. CTO and Co-Founder at Provengo, a start-up creating model-driven software engineering tools. Married, 3 kids, sadly no dogs.

Tim Munn

Is a technical test leader with almost 20 years of experience in the field. He has automated apps and led international technical QA teams in areas from pharma to fintech. Currently a Senior SDET at Spotlight.

Joel Montvelisky

Is a Co-Founder and Chief Product Officer at PractiTest, and has been in testing and QA since 1997, working as a tester, QA Manager and Director, and consultant for companies in Israel, the US, and the EU. Joel is a Forbes Council member and a blogger, and regularly presents webinars on a number of testing and quality-related topics.
In addition, Joel is the founder and Chair of the OnlineTestConf, the co-founder of the State of Testing survey and report, and a Director at the Association of Software Testing.
Joel is a seasoned conference speaker worldwide, among them the STAR Conferences, STPCon, JaSST, TestLeadership Conf, CAST, QA&Test, and more.

Maaret Pyhäjärvi

Is an exploratory tester extraordinaire and Director, Consulting at CGI. She is a tester, (polyglot) programmer, speaker, author, conference designer, and community facilitator. She has been awarded prestigious testing awards, Most Influential Agile Testing Professional Person 2016 (MIATPP) and EuroSTAR Testing Excellence Award (2020), Tester Worth Appreciating (2022), and selected as Top-100 Most Influential in ICT in Finland 2019-2023.

Francisco Di Bartolomeo

An experienced Test Discipline Lead with over ten years of expertise across diverse testing areas. Passionate about cultivating a quality-driven culture, he excels in coaching and mentoring teams both technically and professionally. He has led numerous quality assurance initiatives, advocating for risk-based testing and shift-left practices to integrate quality at every stage of development. Guided by the belief that “Quality is a habit,” Francisco is dedicated to making quality a constant practice, establishing himself as a visionary in software testing.

API Test Planning LIVE

How do you come up with test cases for your APIs? Is it enough to check that they return the right status? No.

APIs are complex, so even a couple of them can overwhelm us with options. But the options are good: we want the ideas so we can prioritize based on our needs. We just need to understand our system and come up with the right ones.

History has proven that the best way to come up with ideas is collaboration. So that’s what we’ll do.

This is an interactive session on API Test Planning. Given only two APIs (and a semi-sane moderator), we’ll come up with creative ways to test them.

Sounds easy? APIs are complex. And in this session, we’ll see how much, and how to think of different aspects of APIs when testing.
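
One way to generate such ideas systematically (a hypothetical sketch, not the session's method) is to combine dimensions of an API into a backlog of candidate tests:

```python
from itertools import product

# Hypothetical endpoints and dimensions; the real lists come from your system.
endpoints = ["GET /users/{id}", "POST /users"]
inputs = ["valid", "missing field", "wrong type", "boundary value"]
checks = ["status code", "response schema", "side effects", "error body"]

# Every combination is a candidate test idea to prioritize, not a mandate.
ideas = [
    f"{ep}: {inp} input, verify {chk}"
    for ep, inp, chk in product(endpoints, inputs, checks)
]
print(len(ideas), "ideas, e.g.:", ideas[0])
```

Even two endpoints yield dozens of candidates, which is exactly where collaborative prioritization earns its keep.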

Search for a Tool Is Like Dating​

Choosing the right toolset for your ecosystem can be a lot like dating. You need to know what you’re looking for, be prepared to make a list, and be willing to check off those boxes to find the right match. In this lightning session, we’ll explore the similarities between choosing a toolset and choosing a date, and learn how to make the best decisions for your needs.

You will learn the following:

  • What should be on your list of requirements when evaluating new toolsets, and how to approach the search to ensure you find the right match for your ecosystem
  • We’ll discuss the importance of compatibility, communication, and trust in both dating and tool selection, and how to use these principles to make the best decisions for your organization.

 

By the end of this session, you’ll walk away with a clear understanding of how to approach the tool selection process like a pro, and how to make the best decisions to support your organization’s success.

OSRS: Your New Test Strategy Multitool​

Have you ever inherited a testset and wondered about its worth? If it’s complete? If it’s good? A lot of test strategy approaches focus on new projects where you start from scratch, but in reality, you’ll often inherit an existing testset of an already running project. So how do you evaluate the value of that testset? How do you see if you are missing something or if you are overtesting things? How to choose what to automate? What to put in a regression testset? What to test first when under time pressure?

This approach can be applied to all of these questions and helps give the whole team insight into the potential value of testing. It will open up the conversation about what you will and won’t be testing with evidence to substantiate those choices.

Taking Your IT Leadership to the Next Level​

What does a day in the life of YOU look like at work? Do you struggle to complete projects on time? Are there issues that seem to pop up with every deliverable? Does your team respect you and your contributions? Does your boss understand what you are trying to accomplish? Do your stakeholders appreciate the outcomes that you and your team provide?

 

It’s very likely that one of these questions resonated with you as you read it. In fact, there is a chance that ALL of them do! We live in a fast-paced world where it’s easy to get caught up in a routine where every day looks the same.

 

There seems to be minimal time allowed for growing, improving, and building connections.

 

Imagine a new world where you focus daily on personal growth. A world where you engage effectively with your boss, your peers, and your subordinates. A world where you go from just “showing up” or “keeping up” to “growing up” and improving not only your workplace but your personal life.

 

Join Mike Lyles as he shares decades of experiences in IT and leadership roles and how he has used these experiences to help him grow as a leader in IT.

 

Key Takeaways:

  • Key learnings from years of IT experiences
  • Suggestions for how to move from “communicating” to “connecting”
  • Tips to move from leading “followers” to leading “leaders”

Test Automation: Friend or Foe?

Supporting evidence does not teach us as much as opposing evidence. We are people who support and oppose on principle, in search of knowledge. We are balancing the perspectives of friends and foes, professionally, all day long.

We’ve been at this test automation thing for quite a while, and three decades have given me the space to arrive at a principle that helps my projects succeed slightly more often with test automation: time spent warning about test automation is time away from succeeding with it. We know from a particular literature genre, romance novels, that tropes starting with friends or with foes both end up with love, and we could adult up and improve our communication toward more purposeful change.

A year ago, we set up a panel conversation to seek ideas for ending up well off with test automation. In this talk, we lend the tension of past disagreements to enable learning, combined with stories from real projects.

Maybe at this time of AI and getting computers to ‘act humanly’, we need to team with an old enemy to make sense of how we peacefully coexist with the new, with healthy boundaries keeping the friendship in check.

Key takeaways:

  • When the stakes are high, we work with friends and foes
  • How to increase the odds of good results we can like with the tools
  • How to navigate the recruitment trap and the in-company growth plains

Not Too Little, Not Too Much: How to Test Just the Right Amount​

As business demands for a shorter time to market continue to rise, testing teams can struggle to find the right balance between adhering to these demands and maintaining sufficient coverage to ensure the released products meet the desired quality standards.

Using the power of AI, we offer a model that combines the value each test provides with the current time limitations to determine which tests are best to execute.
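
As a toy illustration of the underlying trade-off (an assumed sketch, not the actual model from the talk), tests could be ranked by value per minute and greedily selected within a time budget:

```python
# Hypothetical test inventory; names, values, and durations are invented.
TESTS = [
    {"name": "smoke", "value": 9, "minutes": 5},
    {"name": "checkout e2e", "value": 8, "minutes": 20},
    {"name": "full regression", "value": 10, "minutes": 60},
    {"name": "api contract", "value": 7, "minutes": 10},
]

def select_tests(tests, budget_minutes):
    """Greedy knapsack: highest value-per-minute first, within the budget."""
    ranked = sorted(tests, key=lambda t: t["value"] / t["minutes"], reverse=True)
    chosen, used = [], 0
    for t in ranked:
        if used + t["minutes"] <= budget_minutes:
            chosen.append(t["name"])
            used += t["minutes"]
    return chosen

print(select_tests(TESTS, 30))
```

An AI-driven model would estimate the value scores from history and risk signals rather than hard-coding them, but the budget-constrained selection problem is the same.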