Spring OnlineTestConf 2021 - Program

OTC Sessions - Day 1

10:00-10:15 EDT 

16:00-16:15 CEST

Introduction – Welcome to Spring OTC

10:15-11:00 EDT 

16:15-17:00 CEST

 

Improving Your Quality and Testing Skills with Gamification

Session with Ben Linders

So many challenges, so little time.
As testers we need to sharpen the saw, but how? Gamification can be a way to look at how you’re doing and find out where to improve. It’s a great way to have everyone involved and get the best out of people.

In this presentation, Ben Linders will play games with the Agile Testing Coaching Cards and Agile Quality Coaching Cards to show how you can explore your current quality and testing practice and decide in your team on what to improve or experiment with.

Players can use the coaching cards to discuss quality and testing values, principles, and practices. In teams, people can use the cards to share their experiences and learnings.

Different game formats can be used to share experiences on testing and quality principles and practices and explore how they can be applied effectively. Show how to use gamification to self-assess your current way of working.

Play games with the Agile Testing Coaching Cards and Agile Quality Coaching Cards.

Explore how to facilitate games to enhance quality and testing in agile teams.


About Ben Linders

 

Ben Linders is an Independent Consultant in Agile, Lean, Quality, and Continuous Improvement.

Author of Getting Value out of Agile Retrospectives, Waardevolle Agile Retrospectives, What Drives Quality, The Agile Self-assessment Game, Problem? What Problem?, and Continuous Improvement. Creator of many Agile Coaching Tools, for example, the Agile Self-assessment Game.

 

Ben is a well-known speaker and author; he’s much respected for sharing his experiences and helping others share theirs. His books and games have been translated into more than 12 languages and are used by professionals in teams and organizations all around the world.

 

As an adviser, trainer, and coach, he helps organizations with effectively deploying software development and management practices. He focuses on continuous improvement, collaboration and communication, and professional development, to deliver business value to customers.

 

Ben is an active member of networks on Agile, Lean, Kanban, and Quality. He shares his experiences in a bilingual blog (Dutch and English), as an editor for InfoQ, and as a practitioner in communities like Computable, Quora, DZone, Stackoverflow, and TechTarget. Follow him on Twitter: @BenLinders.

11:15-12:00 EDT 
17:15-18:00 CEST


How to create a QA department

Session with Anna Ondrish

There are software development companies – some just starting out, others in business for a long time – that don’t have a Quality Assurance department. They might not even have anyone who formally does the testing. Critical bugs are being introduced into production and causing clients to lose confidence.

I will walk through the steps needed to start a Software Quality Assurance department from nothing. I’ll address the importance of keeping yourself organized with a Test Case Management tool and using the tool to help build metrics around the testing efforts.

Attendees will obtain information to help them start a QA department and/or ways to enhance their current department. There will be valuable information for everyone.

Key lessons:
Getting organized and identifying when it’s time to hire
Implementing a Test Case Management tool
Hiring contractors, full-time employees, or both
How to identify and place the right person on the right project or task
Equipping the team to work independently


About Anna Ondrish

 

Anna Ondrish has over 15 years of experience in the Quality Assurance field. She currently works at Orases in Frederick, MD as the Director of Quality Assurance. Twice in her career, she has been hired as the first Quality Assurance Lead for a company. Both times she has had the opportunity to structure a department that fully integrates with each step within the SDLC. Each experience opened doors to fully implement best practices company wide.

12:15-13:00 EDT 
18:15-19:00 CEST


 

As testers, do we do more harm than good?

Session with Conor Fitzgerald

Today, there are several high profile companies who no longer hire traditional testers.
There is a wealth of evidence to show the detrimental impact of traditional testing, in particular, the separation of testing from the implementation of code.
As testers, we assume our work does good and it’s painful to realise that we can unwittingly do harm.
Much of what we do as testers is context-dependent: what is harmful on one project may be good on another.
This harm can become apparent through quality issues and reduced frequency of releases.

In this talk, I emphasise the importance of transitioning from traditional to modern testing, including my own painful lessons over the past 15 years.
My story involves moving from test executor, to test partner and influencer.
Traditional testers have really valuable skills but need to apply those skills in new ways.


At the heart of this talk, is the good we do as testers and ultimately our future is rooted in collaboration.

– Examples of the harm we can do as testers supported by data
– Practical examples of activities that you can try to move towards modern testing
– Questions to help guide you towards doing good as a tester, and future-proofing your career.


About Conor Fitzgerald

 

Conor Fitzgerald is a Quality Advocate with 15 years of experience. He is passionate about whole team testing and working with teams on quality improvements. Currently, he is working as Head of Testing for Poppulo in Cork, Ireland. He has spoken at a number of conferences in recent years, including SoftTestDublin, TestBash, OnlineTestConf, and RebelCon.

Conor is an active member of the test community and is a Co-Founder of the Ministry of Testing Cork.

Previous positions included Test Consultant, Test Lead/Manager and Automation focused roles. These positions were held in a wide variety of industries from embedded systems to financial systems with companies ranging from startups to large multinationals such as Intel.

Occasionally blogs here at conorfi.com and frequently tweets at @conorfi.

13:15-14:00 EDT 
19:15-20:00 CEST


 

How to make developers LOVE writing E2E tests

Session with Yevheniia Hlovatska

Developers writing e2e tests is a great practice. It makes code testable, provides a fast feedback loop and helps to shift QA left. But it doesn’t mean that developers love to do it. In most cases they were not taught how to do it correctly, which makes tests flaky and creates a big pain in the ass. 

We’ve recently started the transition from only QA writing tests in Java to developers writing e2e tests in JS. We needed to convince teams that this new approach is amazing and help them make it a part of the culture. We needed to “sell” it to our target audience – Front End developers.

The process was not that fast. Almost every time I so much as said “e2e tests” near the developers, they started imagining thousands of failing builds and years of debugging. And started to cry – kidding :) Eventually I started to notice that there are some things that make this process smoother and more attractive. We’ve created our list of tips to help developers write meaningful coverage and love it at the same time.



But wait, this talk is for QA Engineers – where is their place in this process? Will it mean they will never write tests again?
Trust me, there will be huge room for their work. I believe QA is the best person to make such changes happen and to keep working on them along the way (a small example follows these tips):

– well-designed coverage is 50% of success – tests must be short, focused on 1 use case, independent, and designed for parallel execution



– build trust with 1 nice test – flakiness kills all good intentions; better to have 1 good test than 100 tests your developers will skip



– create best practices – improvisation and over-engineering are not good friends of good coverage; it is always better to create best practices in advance and use them



– monitor test success with QA hands – always keep an eye on stability and improve tests based on incidents
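
For front-end folks who like to see code, here is a minimal sketch of the first tip – a short, single-purpose, independent spec – written as Cypress-style TypeScript. The endpoint, selector, and data below are hypothetical placeholders, not the speaker’s actual project code.

```ts
describe('todo list', () => {
  const title = `buy milk ${Date.now()}`; // unique data keeps specs independent

  beforeEach(() => {
    // Each spec creates its own record through the API instead of relying on
    // shared fixtures, so it can run in parallel with other specs.
    cy.request('POST', '/api/todos', { title });
  });

  it('shows a newly created todo', () => {
    // One use case per test: open the list and assert the new item is visible.
    cy.visit('/todos');
    cy.contains('[data-testid="todo-item"]', title).should('be.visible');
  });
});
```

A spec this small is easy to debug when it fails, which is usually what wins developers over.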


About Yevheniia Hlovatska

 

QA Guild Leader in Wix, ICAgile Authorised Instructor in Agile Testing, Founder of Alpha IT School, Global Ambassador of WomenTech Network

Passionate about Agile Testing, Test Automation, building quality on all levels, and bringing value to products and teams. Growing as a public speaker, willing to share my knowledge as a trainer and consultant.

14:15-15:00 EDT 
20:15-21:00 CEST


 

How to Accelerate Cross Browser Testing using Cypress and Selenium

Session with Eran Kinsbruner

As digital reality becomes a win-lose situation for the majority of enterprises today, having a solid test automation strategy for your web applications is key for business success. In the current landscape, there are two strong technologies, Cypress and Selenium, that, when utilized properly, can enable a sustainable continuous testing workflow. In this session, Perfecto by Perforce Chief Evangelist, author, and Sr. Director Eran Kinsbruner will provide a deep overview of both Selenium and Cypress and address the key benefits of using both as part of your testing strategy.

Attend this session to learn the following:

1) The core benefits of Cypress and Selenium.
2) The main differences between the two frameworks, and why teams should leverage both.
3) How teams can boost their velocity and productivity by running Selenium and Cypress in the cloud.
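
As a rough illustration of what running the two frameworks side by side can look like, here is the same title check sketched twice in TypeScript: once as a Cypress spec and once with the Selenium WebDriver JavaScript bindings pointed at a remote grid. The site and grid URL are placeholders, and this is not Perfecto-specific code.

```ts
import { Builder } from 'selenium-webdriver';

// Cypress version (runs inside the Cypress runner):
// it('loads the home page', () => {
//   cy.visit('https://example.com');
//   cy.title().should('include', 'Example');
// });

// Selenium equivalent, executed against a remote/cloud grid:
async function checkTitle(): Promise<void> {
  const driver = await new Builder()
    .forBrowser('chrome')                            // browser chosen per grid capability
    .usingServer('https://grid.example.com/wd/hub')  // placeholder grid URL
    .build();
  try {
    await driver.get('https://example.com');
    const title = await driver.getTitle();
    if (!title.includes('Example')) throw new Error(`Unexpected title: ${title}`);
  } finally {
    await driver.quit();
  }
}

checkTitle();
```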


 

About Eran Kinsbruner

 

Eran Kinsbruner is a bestselling author, TechBeacon Top 30 test automation leader, and the Chief Evangelist and Senior Director at Perforce Software. His published books include the 2016 Amazon bestseller, “The Digital Quality Handbook”, “Continuous Testing for DevOps Professionals”, and “Accelerating Software Quality – ML and AI in the Age of DevOps”, which was named the “Best New Software Testing Book” by Book Authority. With a background of over 20 years’ experience in development and testing at companies such as Sun Microsystems, Neustar, Texas Instruments, General Electric, and more, Eran holds various industry certifications such as ISTQB, CMMI, and others. Eran is a recognized influencer on continuous testing and DevOps thought leadership, an international speaker, blogger, and also a patent-holding inventor (test exclusion automated mechanisms for mobile J2ME testing). Eran is active in the community and can be found across social media and has his own blog (https://continuoustesting.dev/)

15:15-16:00 EDT 
21:15-22:00 CEST


 

Roadmap to becoming an Impactful QA Engineer

Session with Julia Pottinger

QA Engineers are amazing people that are tasked with ensuring the quality of a product is of a certain standard. They do this through manual and automated tests. But how do you become a QA engineer and what skills do you need to be impactful in your role?

Join me as I walk you through my journey to being an impactful QA Engineer as well as give you a roadmap on how you can become an impactful QA Engineer.

 – Role of a QA Engineer
 – Skills needed to be an impactful QA Engineer
 – Roadmap of skills and techniques needed


About Julia Pottinger

 

Julia Pottinger is a Training and Development Manager at QualityWorks with expertise in manual, automated and API testing and training. She is passionate about sharing her knowledge and experience and contributes to the testing community through writing articles, and delivering testing content on Test Automation University as well as her Youtube Channel and blog. She also conducts testing bootcamps for persons interested in entering the field.

OTC Sessions - Day 2

10:00-10:15 EDT 

16:00-16:15 CEST

Introduction – Welcome to day 2 of Spring OTC

10:15-11:00 EDT 

16:15-17:00 CEST

 

Don’t Let Your Automation Step On Its Toes

Session with Paul Grizzaffi

You’ve done your due diligence. You’ve reviewed your automation implementation. You’ve responsibly removed or refactored appropriate components. You’ve even culled scripts that no longer provide value. But despite these efforts, your automation takes longer to execute than you can tolerate; your team just can’t wait that long for feedback, particularly in the CI/CD pipeline.

No problem! You can just parallelize your automation runs, right? Not so fast.

Automated test script concurrency can absolutely reduce the duration of an automation suite’s execution. Having success and consistency with concurrent execution, however, requires upfront work to obtain detailed knowledge of the application being tested and dependencies in the automation suite. Omitting this work will result in your automation being unable to get out of its own way; automation will inevitably step on its own toes.

Join Paul Grizzaffi as he walks through important aspects of automation parallelization, aspects that must be addressed in order to be successful when implementing concurrency.

• A basic introduction to threads and some of the considerations when using them
• Data dependencies a system may have, how to handle them, and limitations thereof
• Considerations about automation execution environments for concurrency
• An alternate, administrative approach to resource management
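
One small, hedged illustration of the data-dependency point above: a common way to keep concurrent scripts from stepping on each other is to give every worker its own uniquely named test data rather than sharing fixed records. The environment variable and helper below are hypothetical, not part of Paul’s material.

```ts
import { randomUUID } from 'crypto';

// Derive a unique suffix per worker/run and use it for any account, order,
// or record the script creates, so parallel executions never share state.
const workerId = process.env.TEST_WORKER_ID ?? '0'; // hypothetical variable set by the runner

export function uniqueName(base: string): string {
  return `${base}-${workerId}-${randomUUID().slice(0, 8)}`;
}

// e.g. const user = await createUser(uniqueName('parallel-user'));
```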


About Paul Grizzaffi

 

As a Principal Automation Architect at Magenic, Paul Grizzaffi is following his passion for providing technology solutions to testing, QE, and QA organizations, including automation assessments and implementations, and through activities benefiting the broader testing community. An accomplished keynote speaker, international conference speaker, and writer, Paul has spoken at local and national conferences and meetings. He is an advisor to Software Test Professionals and STPCon, as well as a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas, where he is a frequent guest lecturer. In addition to spending time with his twins, Paul enjoys sharing his experiences and learning from other testing professionals; his mostly cogent thoughts can be read on his blog at https://responsibleautomation.wordpress.com/.

11:15-12:00 EDT 
17:15-18:00 CEST


Extreme learning situations as testers – How to add value while you’re still learning

Session with Christian Baumann

As software testers, we accept that each new role will require us to learn new technologies and skills. We also know that we often feel the need (or are told of the need) to provide value to the project quickly. Both of these competing expectations are normal to a certain degree. When I joined a new project about testing an API against a European Union standard for payment services, I had to do both to an extreme I had never experienced before.

 

The list of things to learn almost from the ground up was long: an API and how to test it (including exploratory, automated and performance testing), understanding more than 400 pages of specification, and learning the business domain. Despite this, other project members were expecting valuable contributions from me shortly after joining.

 

In this talk I will share my story and the strategies I used to manage this challenge. I’ll go into:
* How to find out what the most important priority is
* Dealing with multiple parallel tasks without losing focus through too much context switching
* Learning while doing
* Expectation management
* Keeping myself healthy despite the challenges

 

I will package my experiences into lessons learned that you can use to make solid progress in conditions of uncertainty, when you need to learn new tools, techniques and products.

 

In summary, I’ll look at what steps testers and other IT professionals can take to reduce these sorts of situations, while also providing takeaways on how to deal with them in case you find yourself in this kind of project.

* Learn how to manage overwhelming learning requirements
* Protect your time and focus to enable continuous progress on the project
* Help you recognise and talk about such projects, and perhaps even help prevent them


About Christian Baumann

Christian is a test engineer with 15+ years of experience in the field of software testing. He has successfully held different roles in the context of testing: From Test Automation Engineer to Test Team Lead.

During his career he has worked with various test (automation) tools and programming languages, and has also applied a range of development and testing methodologies.

Christian is strongly driven by his context, always searching for the best fitting solution for a given situation. He’s able to understand business’ and people’s problems, and is always eager to learn and improve himself, while staying curious, open minded and willing to share his knowledge.

12:15-13:00 EDT 
18:15-19:00 CEST


 

Career Crafting – Dare, Prepare, Share

Session with Lena Wiberg

Growing up, I believed I could become anything. Do anything. No one ever told me my dreams were too big, or too unrealistic.
As the years passed, hitting walls and obstacles, something happened to that ability to look to the stars. I started hiding my dreams, partly because I didn’t want to look like a failure when they didn’t come through and partly because I stopped believing I could achieve them. I stopped dreaming and I fiercely told myself, and people around me, I did not want certain things.

As luck would have it, that view was unexpectedly challenged by an innocent comment made by my boss at that point in time. First, it made me laugh out loud. But honestly, even I could hear the hurt hiding behind the laugh and it made me start on the journey that has taken me where I am today.
Admitting my first goal, to myself and others, was incredibly hard, but once spoken out loud – I reached it in half the time I thought possible with the help of people around me. After that, it has been a weird chain of events taking me through public speaking, collaborating with people I admire, being elected into boards and other positions, creating a card deck, writing a book and even singing on stage!

What this journey has taught me is that I do indeed have dreams and aspirations. They might be incredibly hard to find after years of suppressing them, but once you start looking – they start appearing everywhere!

And speaking them out loud – you will find that people everywhere will go out of their way to help you achieve them!

So, don’t be careful what you wish for!
Wish for the moon and the stars and be prepared to reach them, and so much more.

Making dreams explicit is the first step to reaching them
Telling others of your dreams allows them to help you
People want to help you, they want you to succeed!
Taking actions on your dreams changes you into the person you need to become
Learning how to define your goal helps you teach others, making you a better coach and/or mentor


About Lena Wiberg

 

Lena has been in IT since 1999, when she started out as a bright-eyed developer. After a decade of code, she found her calling in testing and has since worked in most testing-related roles, from being a tester in a team to building and leading testing organizations. She believes continuous improvement is something we should all strive for by keeping up to date and always challenging ourselves, our assumptions and the way things are done. Since 2017 she has worked as a manager and finds that the skills that make her a good tester also work wonders when helping people, teams and organizations grow.

She is an avid blogger, speaker and workshop facilitator as well as the creator of “Would heu-risk it?” – a risk based deck of cards. Lena lives outside of Stockholm and shares her house with her family, loads of gaming stuff and books. She is currently working as an Engineering Manager at Mentimeter.

13:15-14:00 EDT 
19:15-20:00 CEST


 

Approach your testing like a S.W.A.T Operation

Session with Joel Montvelisky

Different projects require different testing approaches.
It is true that in many instances we require a very defined and structured approach, making sure all the high-risk areas of the product are completely covered before releasing our products to the field.
But many times we need a different approach, one that is aimed specifically at providing fast and critical information. Without much knowledge and even less time for preparations. Just like the legendary Israeli Commando Units, deploying fast and efficiently, in order to fulfill a specific and hard to achieve objective.

 

We will review what the preparations for S.W.A.T units are, and how they approach their work in order to succeed in their hard missions. We will then review how to apply a similar approach in order to develop in our team the skills required for these special tasks when we are asked to perform specific and complex testing tasks, under hard conditions and in very short time frames.

 

In this session we will:

  • Learn testing tactics based on military methods
  • Generate a plan and test tool-set for immediate testing tasks
  • Understand how quick testing doesn’t have to be dirty and how quality can be obtained even in the hardest situations.


About Joel Montvelisky

Chief Solution Architect at PractiTest

Joel has been in testing since 1997. He has been a tester, test manager and QA Lead, working in both start-up companies and global enterprises.

He is a keynote speaker, lecturer, blogger, and the chairman of the OnlineTestConf.

14:15-15:00 EDT 
20:15-21:00 CEST


Succeeding as a Tester, Test Lead, Test Manager and Test Coach in today’s environment

Roundtable discussion with QA professionals: Pete Walen, Joel Montvelisky & Lena Wiberg

15:15-16:00 EDT 
21:15-22:00 CEST

 

OTC Happy hour! 
Quick 5-minute pitches by OnlineTestConf attendees – to awe and inspire


Quality Advocacy: The Next Generation of Testing Excellence

In this talk, we will explore the inevitable shift from traditional Quality Assurance to Quality Advocacy. This evolution moves beyond defect detection, driving a proactive and strategic approach to quality at every stage of the development lifecycle. We’ll discuss how automation and Continuous Integration are key drivers of this revolution, and how T-shaped QA professionals are becoming architects of digital excellence, focusing on delivering value-driven services rather than just software. This new paradigm advocates for quality as a collaborative, organization-wide effort that aligns with the modern enterprise’s needs.

Are You Having Cheese or Steak?

Building an automation solution that will support our teams for years to come can be a challenge. But sometimes what we hope to milk actually turns out to be a rodeo. What are some early indicators that the solution we are working on might not work, and would we be better off shooting it?

Model Based Testing: A Powerful Way to QA

Model-based testing is a novel technique that makes QA teams more powerful. It focuses on intended system behavior, and then automatically derives test plans and scenarios from it. The intended behavior is visualized and verified against regulations/policies, helping early detection of requirement errors.
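
As a toy illustration of the idea (not the presenter’s tooling), intended behavior can be captured as a small state model, and test scenarios can be derived by walking its transitions:

```ts
// A tiny, hypothetical login model: states, allowed actions, and resulting states.
type State = 'loggedOut' | 'loggedIn';
interface Transition { from: State; action: string; to: State }

const model: Transition[] = [
  { from: 'loggedOut', action: 'login with valid credentials', to: 'loggedIn' },
  { from: 'loggedOut', action: 'login with bad password',      to: 'loggedOut' },
  { from: 'loggedIn',  action: 'logout',                       to: 'loggedOut' },
];

// Derive every two-step scenario that starts from the given state.
function scenarios(start: State): string[][] {
  return model
    .filter(t1 => t1.from === start)
    .flatMap(t1 => model.filter(t2 => t2.from === t1.to).map(t2 => [t1.action, t2.action]));
}

console.log(scenarios('loggedOut'));
// e.g. [ ['login with valid credentials', 'logout'], ['login with bad password', ...], ... ]
```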

The Future of QA: Integrating AI for Intelligent Test Management

In this session, we will explore the transformative potential of AI in Quality Assurance, particularly how it can be leveraged for intelligent test management. We will discuss practical implementations, the benefits of AI-driven testing, and strategies for integrating these tools into existing QA processes. Attendees will gain insights into the future of QA and how to stay ahead in an increasingly automated landscape.

Automate Smarter, Not Harder: GitHub Copilot your AI Test Buddy

Explore how GitHub Copilot streamlines test automation by generating and optimizing scripts, from unit to API tests. Learn how it reduces development time, suggests best practices, and improves code quality. This session is essential for those aiming to boost efficiency and maintainability in test automation.

This one is for our audience in Australia

Part One

Introduction and Greetings

Test Automation: Friend or Foe?
by Maaret Pyhäjärvi

Break

Break

And here's for our audience in the Americas

Part Two

Introduction and Greetings

Taking Your IT Leadership to the Next Level
by Mike Lyles

Break

Break

A Holistic Approach to Testing in an Agile Context

In the software world, we talk a lot about quality. Business leaders say they want the best quality product – though they often fail to understand how investing in quality pays off.  Customers have their own views of what quality means to them, which may be surprising to the business. Delivery teams are concerned about code correctness, and the many types of testing activities.  

With so many different perspectives, it’s no wonder organizations get confused about how to deliver a product that delights their customers. The Holistic Testing Model helps teams identify the levels of quality they need for their product. It helps them plan the types of testing activities they need all the way around the continuous software development loop. Using this holistic approach to agile development helps teams feel confident in delivering changes frequently. Lisa will share her experiences with this whole-team approach to quality and testing. 

Key learnings: 

  • A holistic quality and testing approach throughout the continuous loop of software development, using the Holistic Testing Model
  • Apply the Holistic Testing Model to create an effective test strategy
  • The importance of bug prevention and value injection over bug detection
  • How to plan and fit testing activities at all levels into short agile iterations with frequent delivery, and continuous delivery

Gil Zilberfeld

Has been in software since childhood, writing BASIC programs on his trusty Sinclair ZX81. He is a trainer and mentor working to make software better.
With more than 25 years of developing commercial software, he has vast experience in software methodology and practices. From unit testing to exploratory testing, design practices to clean code, API to web testing – he’s done it all.
Gil speaks frequently at international conferences about testing, TDD, clean code, and agile practices. He blogs and posts videos on these topics at testingil.com and on his YouTube channel. Gil is the author of “Everyday Unit Testing”.
In his spare time, he shoots zombies, for fun.

Lisette Zounon

Is an award-winning tech executive, serial entrepreneur, and engineering leader with two decades of experience helping people and companies improve the quality of their applications, with solid tools, a simple process, and a smart team. She firmly believes that industry best practices including implementing agile methodologies, DevOps practices, and leveraging Artificial Intelligence are invaluable to the success of any software delivery.

Lisette was responsible for leading and managing high-performing quality-testing teams throughout all phases of the software development testing cycle; ensuring that all information systems, products, and services meet or exceed organization and industry quality standards as well as end-users requirements. This includes establishing and maintaining the Quality strategy, processes, platforms, and resources needed to deliver 24×7 operationally critical solutions for many of the world’s largest companies.

Lisa Crispin

Is an independent consultant, author, and speaker based in Vermont, USA.  Together with Janet Gregory, she co-authored Holistic Testing: Weave Quality Into Your  Product; Agile Testing Condensed: A Brief Introduction; More Agile Testing: Learning  Journeys for the Whole Team; and Agile Testing: A Practical Guide for Testers and Agile  Teams; and the LiveLessons “Agile Testing Essentials” video course. She and Janet co-founded a training company offering two live courses worldwide: “Holistic Testing:  Strategies for Agile Teams” and “Holistic Testing for Continuous Delivery”. 

Lisa uses her long experience working as a tester on high-performing agile teams to help organizations assess and improve their quality and testing practices, and succeed with continuous delivery. She’s a DORA Guide for the DORA community of practice. Please visit: https://lisacrispin.com, https://agiletester.ca, https://agiletestingfellow.com, and  https://linkedin.com/in/lisacrispin/ for details and contact information.

Suzanne Kraaij

With almost 15 years in the field of testing, Suzanne has gained a lot of experience with various clients in various industries. In the company where she works, Suzanne has a pioneering role as a core member within their testing community. In this position, she is actively involved in knowledge sharing and further development of the field of software testing and quality engineering.

Mike Lyles

Is an international keynote speaker, author, and coach. He is the Head of IT with Maxwell Leadership, an amazing company founded by leadership expert, author, and speaker, John C. Maxwell. Mike has over 30 years of experience in IT, coaching, mentoring, and building successful teams with multiple organizations, including Fortune 50 companies. As a Maxwell Leadership Certified coach and speaker, Mike’s “purpose” is to inspire others with value-based leadership and growth principles and to serve others in their journey toward significance and success. Mike has traveled to dozens of countries and hundreds of events to share his experiences with thousands through keynotes, workshops, and other special events. Mike is the author of the self-help motivational book, “The Drive-Thru Is Not Always Faster”.

George Ukkuru

Is a performance-driven technocrat with over two and a half decades of experience in Test Engineering, Product Management, and User Experience. He specializes in optimizing costs, improving market speed, and enhancing quality by deploying the right tools, practices, and platforms. Throughout his career, George has worked with several Fortune 500 companies, delivering impactful solutions that drive efficiency and innovation. Currently, he serves as General Manager at McLaren Strategic Solutions, where he continues to leverage his expertise to lead teams and projects that align with business goals, ensuring high-quality outcomes and strategic growth.

Esther Okafor

Is a Quality Assurance Engineer at Storyblok, bringing unique experience in API testing and a strong passion for building high-quality software. She has previously worked with renowned companies like Flutterwave, Renmoney, and Venture Garden Group. Over her four years in the tech industry, Esther has trained and mentored over 100 women in tech through initiatives such as She Code Africa and Bug Detective. Her perspective offers valuable insights into the world of QA, and she is committed to helping others succeed. Additionally, she has authored several blog posts that provide essential guidance to Quality Assurance professionals, helping them excel in their day-to-day roles.

Michael Bar-Sinai

Software engineer by training, with a mid-career PhD in formal methods and requirement modeling. Created various information systems for NGOs, ranging from work accident tracking to geopolitics information systems. Worked on data science tools at Harvard’s IQSS. CTO and Co-Founder at Provengo, a start-up creating model-driven software engineering tools. Married, 3 kids, sadly no dogs.

Tim Munn

Technical Test Leader with almost 20 years’ experience in the field. Has automated apps and led international technical QA teams in areas from Pharma to Fintech. Currently a Senior SDET at Spotlight.

Joel Montvelisky

Is a Co-Founder and Chief Product Officer at PractiTest, and has been in testing and QA since 1997, working as a tester, QA Manager and Director, and Consultant for companies in Israel, the US, and the EU. Joel is a Forbes council member and a blogger, and regularly delivers webinars on a number of testing and quality-related topics.
In addition, Joel is the founder and Chair of the OnlineTestConf, the co-founder of the State of Testing survey and report, and a Director at the Association of Software Testing.
Joel is a seasoned conference speaker worldwide, among them the STAR Conferences, STPCon, JaSST, TestLeadership Conf, CAST, QA&Test, and more.

Maaret Pyhäjärvi

Is an exploratory tester extraordinaire and Director, Consulting at CGI. She is a tester, (polyglot) programmer, speaker, author, conference designer, and community facilitator. She has been awarded prestigious testing awards, Most Influential Agile Testing Professional Person 2016 (MIATPP) and EuroSTAR Testing Excellence Award (2020), Tester Worth Appreciating (2022), and selected as Top-100 Most Influential in ICT in Finland 2019-2023.

Francisco Di Bartolomeo

An experienced Test Discipline Lead with over ten years of expertise across diverse testing areas. Passionate about cultivating a quality-driven culture, he excels in coaching and mentoring teams both technically and professionally. He has led numerous quality assurance initiatives, advocating for risk-based testing and shift-left practices to integrate quality at every stage of development. Guided by the belief that “Quality is a habit,” Francisco is dedicated to making quality a constant practice, establishing himself as a visionary in software testing.

API Test Planning LIVE

How do you come up with cases for your APIs? Is it enough to check they return the right status? No.

APIs are complex, so even a couple of them can overwhelm us with options. But the options are good. We want the ideas, so we can prioritize based on our needs. We just need to understand our system and come up with the right ones.

History has proven that the best way to come up with ideas is collaboration. So that’s what we’ll do.

This is an interactive session on API Test Planning. Given only two APIs (and a semi-sane moderator), we’ll come up with creative ways to test them.

Sounds easy? APIs are complex. And in this session, we’ll see just how complex, and how to think about different aspects of APIs when testing.
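
To make the kind of ideas the session aims to generate concrete, here is a hedged sketch of checks that go beyond status codes – response shape, error behavior, and input validation – against a hypothetical /api/users endpoint:

```ts
// Illustrative only: the base URL, routes, and field names are placeholders.
interface Check { name: string; run: () => Promise<void> }

const base = 'https://api.example.com';

const checks: Check[] = [
  {
    name: 'valid request returns the expected fields',
    run: async () => {
      const res = await fetch(`${base}/api/users/1`);
      const body = await res.json();
      if (res.status !== 200 || typeof body.id !== 'number' || !body.email) {
        throw new Error('status or response shape mismatch');
      }
    },
  },
  {
    name: 'unknown id returns 404',
    run: async () => {
      const res = await fetch(`${base}/api/users/999999`);
      if (res.status !== 404) throw new Error(`expected 404, got ${res.status}`);
    },
  },
  {
    name: 'missing required field is rejected, not silently accepted',
    run: async () => {
      const res = await fetch(`${base}/api/users`, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({}), // no email on purpose
      });
      if (res.status !== 400) throw new Error(`expected 400, got ${res.status}`);
    },
  },
];

for (const c of checks) {
  c.run().then(() => console.log('PASS', c.name), e => console.log('FAIL', c.name, e.message));
}
```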

Search for a Tool Is Like Dating

Choosing the right toolset for your ecosystem can be a lot like dating. You need to know what you’re looking for, be prepared to make a list, and be willing to check off those boxes to find the right match. In this lightning session, we’ll explore the similarities between choosing a toolset and choosing a date, and learn how to make the best decisions for your needs.

You will learn the following:

  • What should be on your list of requirements when evaluating new toolsets, and how to approach the search to ensure you find the right match for your ecosystem
  • We’ll discuss the importance of compatibility, communication, and trust in both dating and tool selection, and how to use these principles to make the best decisions for your organization.

 

By the end of this session, you’ll walk away with a clear understanding of how to approach the tool selection process like a pro, and how to make the best decisions to support your organization’s success.

OSRS: Your New Test Strategy Multitool

Have you ever inherited a testset and wondered about its worth? If it’s complete? If it’s good? A lot of test strategy approaches focus on new projects where you start from scratch, but in reality, you’ll often inherit an existing testset of an already running project. So how do you evaluate the value of that testset? How do you see if you are missing something or if you are overtesting things? How to choose what to automate? What to put in a regression testset? What to test first when under time pressure?

This approach can be applied to all of these questions and helps give the whole team insight into the potential value of testing. It will open up the conversation about what you will and won’t be testing with evidence to substantiate those choices.

Taking Your IT Leadership to the Next Level

What does a day in the life of YOU look like at work? Do you struggle to complete projects on time? Are there issues that seem to pop up with every deliverable? Does your team respect you and your contributions? Does your boss understand what you are trying to accomplish? Do your stakeholders appreciate the outcomes that you and your team provide?

 

It’s very likely that one of these questions resonated with you when you read it. In fact, there is a chance that ALL of them do! We live in a fast-paced world where it’s easy to get so caught up in a routine where every day looks the same.

 

There seems to be minimal time allowed for growing, improving, and building connections.

 

Imagine a new world where you focus daily on personal growth. A world where you engage effectively with your boss, your peers, and your subordinates. A world where you go from just “showing up” or “keeping up” to “growing up” and improving not only your workplace but your personal life.

 

Join Mike Lyles as he shares decades of experiences in IT and leadership roles and how he has used these experiences to help him grow as a leader in IT.

 

Key Takeaways:

  • Key learnings from years of IT experiences
  • Suggestions for how to move from “communicating” to “connecting”
  • Tips to move from leading “followers” to leading “leaders”

Test Automation: Friend or Foe?

Supporting evidence does not teach us as much as opposing evidence. We are people who support and oppose on principle, in search of knowledge. We are balancing the perspectives of friends and foes, professionally, all day long.

We’ve been at this test automation thing quite a while, and three decades have given me the space to come to a principle that helps my projects succeed slightly more often with test automation: Time used on warning about test automation is time away from succeeding with it. We know from a particular literature genre of romance novels that tropes starting with friends or foes both end up with love, and we could adult up to improve our communication to more purposeful change.

A year ago, we set up a panel conversation to seek ideas for ending well off with test automation. In this talk, we lend the tension of disagreements of the past, to enable learning, combined with the stories of real projects.

Maybe at this time of AI and getting computers to ‘act humanly’, we need to team with an old enemy to make sense of how we peacefully coexist with the new, with healthy boundaries keeping the friendship in check.

Key takeaways:

  • When the stakes are high, we work with friends and foes
  • How to increase the odds of good results we can like with the tools
  • How to navigate the recruitment trap and the in-company growth plains

Not Too Little, Not Too Much: How to Test Just the Right Amount

As business demands for a shorter time to market continue to rise, testing teams can struggle to find the right balance between adhering to these demands and maintaining sufficient coverage to ensure the released products meet the desired quality standards.

Using the power of AI, we offer a model that combines the value each test provides along with the current time limitation to determine which tests are best to execute.
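
As a deliberately simplified stand-in for that idea (not the actual model), test selection under a time budget can be pictured as ranking tests by estimated value per minute and greedily filling the available window:

```ts
// Hypothetical scoring data; in practice the value score would come from the
// AI model and historical results, not hand-entered numbers.
interface TestCase { name: string; valueScore: number; minutes: number }

function selectTests(tests: TestCase[], budgetMinutes: number): TestCase[] {
  const ranked = [...tests].sort(
    (a, b) => b.valueScore / b.minutes - a.valueScore / a.minutes,
  );
  const picked: TestCase[] = [];
  let used = 0;
  for (const t of ranked) {
    if (used + t.minutes <= budgetMinutes) {
      picked.push(t);
      used += t.minutes;
    }
  }
  return picked;
}

// Example: 30 minutes available before the release goes out.
console.log(selectTests(
  [
    { name: 'checkout smoke',   valueScore: 9,  minutes: 10 },
    { name: 'full regression',  valueScore: 10, minutes: 120 },
    { name: 'login happy path', valueScore: 7,  minutes: 5 },
  ],
  30,
));
```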