Fall OnlineTestConf 2021 - Session Recordings

Fill out this form to view all session recordings

OTC Sessions - Day 1

Quality Acceleration: delivering quality software at speed

Session with Huib Schoots

All companies want fast delivery of high-quality software nowadays. Delivering quality software at speed is the new mantra for many. CIOs are afraid of being behind the curve. Managers are afraid of losing money. So speed up and increase efficiency! But speeding up brings quite a lot of interesting challenges. Going faster without the proper “measures” will get an organisation into trouble pretty fast. It is like driving a Formula 1 car without knowing how to drive… Bill Gates once said: “The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”

Tools, robots, automation in testing, and continuous integration/delivery promise remarkable acceleration in software development. The new kids on the block are AI and machine learning, which, I hear people claim, will definitely speed things up or make some people redundant. What is really happening globally? How can you speed up without losing control?

Delivering successful (IT) products is much more than technology. In IT, skills in communication, collaboration and leadership define success. It requires human experience and technological integration. Before even thinking about technology, automation or building pipelines, some basics need to be in place to ensure success. This presentation brings you my lessons in how teams can accelerate and deliver quality software.

Quality Acceleration is about dealing with risks effectively. It emerges from the sum of many things, among which are smooth processes, the right mindset, leadership, rapid learning and experimentation, fast feedback loops, diversity in thinking, people with excellent knowledge and skills, collaboration, common understanding, and measuring whether you are on the right track. Oh, and of course the “right technology”.

Key takeaways:
1. How can we create quality software fast?
2. How can your team manage risks and value?
3. Learn about great collaboration in teams, new ways of working to create valuable software!

 

Huib Schoots

Making people better and helping software quality accelerate by:
Connecting – Innovating – Facilitating – Coaching – Enabling – Teaching


With international experience in the field of software development, Huib is an expert in software quality and testing, with in-depth knowledge of and experience with agile working methods, coaching, project and test management, and change processes. He is one of the five Rapid Software Testing trainers in the world and is a welcome guest at conferences as an experienced presenter, workshop facilitator and trainer.

Connect on LinkedIn

Testers – the constant chameleon

Session with Hanna Schlander

As a tester, there are so many group dynamics that you can be part of. Like a chameleon, you need to be able to fit in. It could be that you’re part of a test team, surrounded by your peers. You could be the only tester in your team but have a testing team in the organization.
Or you can be all on your own, without any other testers.

In a project there is almost always a group of developers, but as a tester the group dynamic is rarely the same. So how do you adjust to all these different situations? Like a chameleon, we need to adapt to our surroundings.

The chameleon role does not end there: we also need to talk to all of the different members of the team, or perhaps even external parties. In that case, you also need to adapt your language and vocabulary depending on who your audience is. With a developer, the language is probably quite technical, but with the solution owner it is at a higher level. Adapting to your surroundings will gain you more respect and a better result in the end.

What I will share during my talk is my experience with different team setups:
– The pros and cons of the different structures as well as some advice for the different team compositions.
– An overview of how you would adapt your language depending on the audience you are addressing.

The session will end with a roundtable to hear about your experience as a chameleon! 

Hanna Schlander

Hanna Schlander is a Quality Catalyst at Jayway by Devoteam. In her current job she works in the retail industry on a DevOps project built with microservices. Hanna started her testing career by getting some training in testing at her first job after university, and it was love at first sight!

 

Throughout her 10 years in testing, she has worked in multiple fields, such as MedTech and Telecom, and in many different stages of testing. Being a very curious person, Hanna loves the ever-changing world of testing. When she’s not working, you’ll most likely find her outside walking with her husband and dog, or binge-watching some old TV series.

Connect on LinkedIn

The Tester’s Role: Balancing Technical Acumen and User Advocacy

Session with Melissa Tondi

Many of us didn’t start our careers in testing. We generally moved from a different internal role or “happened upon” QA via another career path. It was common for people who were product users to be hired to jump-start their technical career.

 

Now, we see the growth of tester positions that require coding experience or a computer science degree, with little emphasis on the testing profession. Melissa Tondi discusses the changing landscape of the role of testers, the challenges of hiring, and a way to shift the pendulum back to balance technical acumen with a user-advocacy role.

 

Melissa will lead a thoughtful discussion on what makes a good tester, how test leaders can continue to promote our profession, and how to accentuate the value testers bring to organizations.
She will identify the factors that have caused the test/QA role to become mainstream and how it has shifted to become more technically focused. Melissa will help fill in the gaps with a test strategy that incorporates a solid automation strategy, allowing balance between supporting development efforts and equally emphasizing user-advocacy tests.

Melissa Tondi 

Over 15 years of experience in Software Quality Engineering and Process Engineering, and 10 years in management/leadership

Melissa is solutions-driven, a strong advocate for effecting change, and a practitioner of Management by Collaboration.

Connect on LinkedIn


Full-stack Testing in/is the New Normal

Session with Christina Thalayasingam

Many teams aim only at having a bug-free system. How can you make your team believe in delivering the true quality of the end product? How can you drive your team to understand that skipping non-functional testing, like performance and security testing, could compromise your product’s quality? How can you make them understand the importance of CI/CD in the testing lifecycle? How can you pour the passion into them to move forward and make a change? Setting up a team that has these skills could make this possible, but do they believe in what they do? How can you make them actually feel the essence of quality as a culture that does not focus on reporting bugs alone? Let us discuss ways of making your team walk the path of Full Stack Testing, so that the team knows its vision and mission. Quality is key, and the world is evolving to have full-stack testing as the new normal.

 

Just as the rise of full-stack engineering brought the end of specialized front-end and back-end developers and brought about the age of engineers who can build a product end to end independently, the time for QA to follow suit is near. Let us discuss how this can be achieved.

This talk will cover how we can get our teams to explore this venture:

– Avoid team communication gaps
– Deliver high-quality products
– Enhance the quality practices being followed
– Reduce major risks such as resource constraints, since all members will be jacks of all trades in testing

Christina Thalayasingam 

Christina Thalayasingam has more than 7 years of experience in both functional and non-functional testing, and comes from a strong development background.

Christina is currently working as a Test Engineering Manager at Northwestern Mutual, a Fortune 100 financial services company, where she manages the testing effort for their Customer Experience Web Applications, which comprise microservices and micro apps. She has also been part of various prestigious conferences, technical meetups, and webinars. She is a software testing evangelist.

Connect on LinkedIn

Deming’s Management Philosophy

Session with Steve Hoeg

Deming is often known as the father of quality, pioneering many aspects of statistical process control. As strong as his contributions here were, he felt that the most important road to quality led through leadership practices.

 

This talk will walk through Deming’s 14 points and his System of Profound Knowledge, and how much of it is still relevant for managing modern software teams.

* Deming’s 14 points
* Continuous improvement
* Modern applications

Steve Hoeg 

Steve Hoeg is VP of Engineering at Maxon, creating 2D & 3D creative tools. Prior to Maxon, he spent 15 years at Adobe, as Director of Engineering for Adobe’s audio and video products – Premiere Pro, Rush, After Effects and Audition. Personal big-screen Hollywood movie credits include Deadpool, Only the Brave, Hail Caesar and Gone Girl.

 

Connect on LinkedIn

The Do’s and Don’ts of Accessibility

Session with Michael Larsen

Accessibility is a large topic, and one that is approached in a variety of ways. Often it is seen as having to work through a large checklist (the WCAG standard) and make sure that everything complies. While this is a great goal and focus, it is often overwhelming and frustrating, putting people in the unfortunate position of having to read and understand an entire process before they feel they can be effective.

My goal is to help condense this a little and give some key areas to focus on, so testers can identify Accessibility issues quickly and become effective advocates. We will look at ways to find issues, advocate for them, and help make strides toward greater understanding and focus moving forward. We can use a little to provide a lot of benefit.
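As one small, concrete example of the “use a little to get a lot” idea (a generic sketch, not material from the session), the snippet below uses Python’s standard html.parser to flag images with no alt text, one of the most common accessibility findings:

```python
# A minimal sketch of one quick accessibility check: flag <img> elements
# without an alt attribute, a very common WCAG failure.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # (line, column) positions of <img> tags lacking alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos())

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png"><img src="chart.png" alt="Q3 sales chart"></p>')
print(checker.missing)  # e.g. [(1, 3)] -> only the first image is missing alt text
```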

Michael Larsen


Michael has worked on a broad array of technologies and industries including virtual machine software, capacitance touch devices, video game development, and distributed database and web applications. He currently works with PeopleFluent, located in Raleigh, NC, USA. He writes a software testing blog called TESTHEAD (http://mkltesthead.com/).

 

Michael served as a member of the Board of Directors for the Association for Software Testing from 2011 to 2015. He was their Treasurer and then their President. Currently, he helps teach their Black Box Software Testing classes. Michael is also the current producer and a regular commentator for The Testing Show, a podcast produced for QualiTest (available in Apple Podcasts, Google Podcasts, and Spotify).

Connect on LinkedIn

OTC Sessions - Day 2

How to Tame Bugs in Production: Successful Bug Managing Strategy in Five Steps

Session with Elena Bazina

“There is a bug in production” have been the scariest words for me as a tester and QA analyst. I was sweating, blushing, panicking. What made it worse is that these bugs were often introduced by third-party SDKs, so I didn’t have full control over that part of the code myself.

 

My attitude towards bugs in production changed after I observed a team of firefighters at work. They were very quick and well organized, but also very calm. I thought that we could learn from them. Of course, we should try to prevent fires and bugs as much as we can. But we also need to admit that sometimes there will still be some fires and, well… bugs, and when that happens we need to have a great strategy in place for handling these incidents.

 

1. What aspects need to be covered by a bug management strategy?
2. Is it necessary to loop your customers in?
3. Who should make the most important decisions?
4. How can you make bugs work for you?

 

Elena Bazina 

Elena Bazina is a Senior QA Analyst at King, currently working with Candy Crush Saga. She has more than 5 years of experience in the QA field, mainly working in the mobile gaming industry. She coaches teams in terms of improving the quality of the games and features and incorporating agile testing into their development process.

 

Connect on LinkedIn

Why Should We Take Things Personally?

Session with Indranil Sinha

In the present-day work environment, we frequently hear why we should not take things personally. But over the past 10 years of his IT career, Indranil has learnt why we should take things personally and what the benefit of doing so is.

His presentation will dissect his career, identify the problems he took personally, and show how the resulting solutions helped him and his team alleviate those problems and take them one step further each time.

 

Main takeaways:
o How to be your own boss at work
o Which things to take personally and how to act on them
o How to interact with management and promote the importance of quality
o How the above actions can improve your personal career and empower you to build a team

“In this presentation I want to share experiences and ideas which are 100% mine and were developed “organically” over the years, purely to improve our current work process. As proof that all my ideas worked extremely well, just look at my career graph within IT.”

Indranil Sinha 

In 2011, I started my IT career as an IT technician in a Swedish consumer bank. And from September 2021, I will be heading a brand-new QA department in the same organisation. I am proud of what I have accomplished within the last 10 years, and I want to share my exciting journey with a larger, international audience.

 

Connect on LinkedIn

“Make it public!” And other things that annoy developers about testability

Session with Gil Zilberfeld

Everyone agrees that testing is good for quality. But to change the code just so you can test it? That would break up the developer’s perfect design! Just to make the tester’s life easier!

And if they expose inner data to the test, another not-so-smart developer will eventually call it and blow up the world! Code doesn’t become testable by itself; we have to make it that way. And that conflicts with developers’ ideas of good design and how code should look.

In this session, Gil will discuss the false beliefs about testability and how testers can discuss them with developers. Then he’s going to break them down into dust with proper testable design principles. Gil will show examples and explore testability scenarios.

In a perfect agile world, if developers want their code to work, it should be testable. Making those changes is not even a sacrifice for testability; they are good for everyone.

• Testable design impacts the ability to test the application properly.
• Testers need to identify code patterns, so they can discuss testability with developers.
• Correcting the anti-patterns can improve testability immensely for both developers and testers.
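As a generic illustration of the kind of pattern this session deals with (a sketch of my own, not Gil’s material), the Python snippet below shows a class hard-wired to the system clock, which is awkward to test, and a testable redesign that injects the clock instead of making anything public:

```python
# A common testability anti-pattern and one way to fix it (illustrative only).
import datetime

class InvoiceHardWired:
    def is_overdue(self, due_date):
        # Hard to test: the outcome depends on the real system clock.
        return datetime.date.today() > due_date

class Invoice:
    def __init__(self, clock=datetime.date.today):
        # Injecting the clock keeps production behaviour identical,
        # while tests can pass in a fixed date. Nothing private is exposed.
        self._clock = clock

    def is_overdue(self, due_date):
        return self._clock() > due_date

# In a test, pin the "current" date instead of touching internals:
assert Invoice(clock=lambda: datetime.date(2021, 12, 1)).is_overdue(datetime.date(2021, 11, 1))
```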

Gil Zilberfeld

Gil Zilberfeld (TestinGil) has been in software since childhood, writing BASIC programs on his trusty Sinclair ZX81. With more than 25 years of developing commercial software, he has vast experience in software methodology and practices.

Gil has been teaching and applying modern development and testing principles for more than a decade. From automated testing to exploratory testing, testing methodology, unit and integration testing, clean code and testability – he’s done it all. He is still learning from his successes and failures.

Gil speaks frequently at international conferences about unit testing, TDD, testing in general and design practices. He is the author of “Everyday Unit Testing” and “Everyday Spring Testing”, blogs and posts videos, co-organizes the Agile Practitioners conference, and in his spare time he shoots zombies, for fun.

 

Connect on LinkedIn

Expect to Inspect – Performing Code Inspections on Your Automation

Session with Paul Grizzaffi

Breaking news! Automation development is software development. Even if we are using a drag-and-drop or record-and-playback interface to create that automation, somewhere, in the stack, under the hood, or behind the curtain, there is code sequenced by our actions. We must start treating our automation endeavors as software development endeavors, lest we end up in a quagmire of unsustainability and early project death.

One beneficial tactic used in software development is to have a different team member look for issues and risks in newly written or modified code; we call this a code inspection or code review. Much as a proofreader or editor will provide feedback on a book or article, code inspectors review and analyze areas of the code that may benefit from rework, such as supportability, readability, extensibility, and issues or risks of issues.

In this session, Paul Grizzaffi explains why we should do code inspections for our automation software, how these inspections for automation can differ from those for product software, and what real-life issues have been found during these reviews.

• Automation development is software development.
• Think of an inspection as a “second set of eyes”.
• Let business value drive how code is inspected.
• There are tools to help with inspection-related activities.
• Examples of what a code inspection finds.
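Below is a hypothetical before-and-after, in Python, of the kind of finding such an inspection often surfaces (the function and values are illustrative, not taken from the talk): a fixed sleep and a bare assertion replaced by a bounded poll with a descriptive failure message.

```python
# Illustrative automation snippet: a reviewer flags the fixed sleep and the
# message-less assertion, then the code is reworked to poll with a timeout.
import time

# Before: blocks every run for 30 seconds and gives no context on failure.
def wait_for_order_before(get_status):
    time.sleep(30)
    assert get_status() == "SHIPPED"

# After: poll up to a bounded timeout and report what was actually seen.
def wait_for_order(get_status, expected="SHIPPED", timeout_s=30, poll_s=1):
    deadline = time.monotonic() + timeout_s
    status = get_status()
    while status != expected and time.monotonic() < deadline:
        time.sleep(poll_s)
        status = get_status()
    assert status == expected, f"order status was {status!r}, expected {expected!r} after {timeout_s}s"

wait_for_order(lambda: "SHIPPED", timeout_s=2)  # passes immediately in this toy example
```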

Paul Grizzaffi 

As a Principal Automation Architect at Magenic Studio for Cognizant Softvision, Paul Grizzaffi is following his passion for providing technology solutions to testing, QE, and QA organizations, including automation assessments and implementations, and through activities benefiting the broader testing community.

 

An accomplished keynote speaker, international conference speaker, and writer, Paul has spoken at local and national conferences and meetings. He is an advisor to Software Test Professionals and STPCon, as well as a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas where he is a frequent guest lecturer. In addition to spending time with his twins, Paul enjoys sharing his experiences and learning from other testing professionals; his mostly cogent thoughts can be read on his blog at https://responsibleautomation.wordpress.com/.

Analytics Matter: What Are Your Users Really Doing?

Session with Amanda DeGroof

A good tester advocates for the user. A great tester knows what that user wants and why. Stop making assumptions about your users!

 

Learn how to use the analytics reports from tools like Google Analytics, Adobe Analytics, and CoreMetrics to inform all of your testing and development:

– Pay attention to the analytics that are generated for the site or app you’re working with
– The right analytics can inform your testing and shore up (or break!) any assumptions you may be making about your users
– Learn which paths users are really taking through your site, so you can change or update your regression and automation testing (users may not be doing what you and the devs expect them to do!)
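As a toy illustration of this idea (using made-up page-view events, not any particular analytics product’s API), the sketch below counts the paths users actually take so they can be compared with what the regression suite covers:

```python
# Derive the most common user paths from raw (session, page) events.
from collections import Counter

page_views = [  # hypothetical analytics export
    ("s1", "/home"), ("s1", "/search"), ("s1", "/checkout"),
    ("s2", "/home"), ("s2", "/search"), ("s2", "/checkout"),
    ("s3", "/home"), ("s3", "/account"),
]

paths = {}
for session, page in page_views:
    paths.setdefault(session, []).append(page)

top_paths = Counter(" -> ".join(p) for p in paths.values()).most_common()
print(top_paths)
# [('/home -> /search -> /checkout', 2), ('/home -> /account', 1)]
# If automation only covers '/home -> /products -> /checkout', the data says
# users rarely, or never, take that route.
```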

Amanda DeGroof 

Amanda DeGroof (she/they) started out in the digital marketing world, doing everything from analytics to programming. They found their calling in Quality Assurance, and switched careers in their mid-30s to become a QA Analyst. They worked their way up within digital agency WillowTree, and now manage an international QA team for inMotionNow. In their spare time they enjoy traveling and spending time with their spouse and a finicky calico.

 

Connect on LinkedIn

About Human Issues and Some Possible Solutions

Session with Joel Montvelisky


Quality Advocacy: The Next Generation of Testing Excellence

In this talk, we will explore the inevitable shift from traditional Quality Assurance to Quality Advocacy. This evolution moves beyond defect detection, driving a proactive and strategic approach to quality at every stage of the development lifecycle. We’ll discuss how automation and Continuous Integration are key drivers of this revolution, and how T-shaped QA professionals are becoming architects of digital excellence, focusing on delivering value-driven services rather than just software. This new paradigm advocates for quality as a collaborative, organization-wide effort that aligns with the modern enterprise’s needs.

Are You Having Cheese or Steak?

Building an automation solution that will support our teams for years to come can be a challenge. But sometimes what we hope to milk actually turns out to be a rodeo. What are some early indicators that the solution we are working on might not work, and would we be better off shooting it?

Model Based Testing: A Powerful Way to QA

Model-based testing is a novel technique that makes QA teams more powerful. It focuses on intended system behavior, and then automatically derives test plans and scenarios from it. The intended behavior is visualized and verified against regulations/policies, helping detect requirement errors early.
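As a minimal illustration of the idea (a generic sketch, not any specific MBT tool), the snippet below models a tiny login flow as states and transitions, then derives candidate test scenarios by walking the model:

```python
# Derive test scenarios from a model of intended behaviour (toy example).
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "login_failed"},
    "login_failed": {"login_ok": "logged_in"},
    "logged_in": {"logout": "logged_out"},
}

def derive_scenarios(model, start, max_steps):
    """Emit every action sequence reachable from `start` in up to max_steps steps."""
    frontier = [(start, [])]
    for _ in range(max_steps):
        next_frontier = []
        for state, path in frontier:
            for action, target in model.get(state, {}).items():
                scenario = path + [action]
                yield scenario
                next_frontier.append((target, scenario))
        frontier = next_frontier

for scenario in derive_scenarios(MODEL, "logged_out", max_steps=2):
    print(" -> ".join(scenario))
# login_ok, login_bad, login_ok -> logout, login_bad -> login_ok
```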

The Future of QA: Integrating AI for Intelligent Test Management

In this session, we will explore the transformative potential of AI in Quality Assurance, particularly how it can be leveraged for intelligent test management. We will discuss practical implementations, the benefits of AI-driven testing, and strategies for integrating these tools into existing QA processes. Attendees will gain insights into the future of QA and how to stay ahead in an increasingly automated landscape.

Automate Smarter, Not Harder: GitHub Copilot your AI Test Buddy

Explore how GitHub Copilot streamlines test automation by generating and optimizing scripts, from unit to API tests. Learn how it reduces development time, suggests best practices, and improves code quality. This session is essential for those aiming to boost efficiency and maintainability in test automation.

This one is for our audience in Australia

Part One

Introduction and Greetings

Test Automation: Friend or Foe?
by Maaret Pyhäjärvi

Break

Break

And here's for our audience in the Americas

Part Two

Introduction and Greetings

Taking Your IT Leadership to the Next Level
by Mike Lyles

Break

Break

A Holistic Approach to Testing in an Agile Context

In the software world, we talk a lot about quality. Business leaders say they want the best quality product – though they often fail to understand how investing in quality pays off. Customers have their own views of what quality means to them, which may be surprising to the business. Delivery teams are concerned about code correctness, and the many types of testing activities.

With so many different perspectives, it’s no wonder organizations get confused about how to deliver a product that delights their customers. The Holistic Testing Model helps teams identify the levels of quality they need for their product. It helps them plan the types of testing activities they need all the way around the continuous software development loop. Using this holistic approach to agile development helps teams feel confident in delivering changes frequently. Lisa will share her experiences with this whole-team approach to quality and testing. 

Key learnings: 

  • A holistic quality and testing approach throughout the continuous loop of software development, using the Holistic Testing Model
  • How to apply the Holistic Testing Model to create an effective test strategy
  • The importance of bug prevention and value injection over bug detection
  • How to plan and fit testing activities at all levels into short agile iterations with frequent delivery, and continuous delivery

Gil Zilberfeld

Has been in software since childhood, writing BASIC programs on his trusty Sinclair ZX81. He is a trainer and mentor working to make software better.
With more than 25 years of developing commercial software, he has vast experience in software methodology and practices. From unit testing to exploratory testing, design practices to clean code, API to web testing – he’s done it all.
Gil speaks frequently at international conferences about testing, TDD, clean code, and agile practices. He blogs and posts videos on these topics at testingil.com and on his YouTube channel. Gil is the author of “Everyday Unit Testing”.
In his spare time, he shoots zombies, for fun.

Lisette Zounon

Is an award-winning tech executive, serial entrepreneur, and engineering leader with two decades of experience helping people and companies improve the quality of their applications, with solid tools, a simple process, and a smart team. She firmly believes that industry best practices including implementing agile methodologies, DevOps practices, and leveraging Artificial Intelligence are invaluable to the success of any software delivery.

Lisette was responsible for leading and managing high-performing quality-testing teams throughout all phases of the software development testing cycle, ensuring that all information systems, products, and services meet or exceed organization and industry quality standards as well as end-user requirements. This includes establishing and maintaining the Quality strategy, processes, platforms, and resources needed to deliver 24×7 operationally critical solutions for many of the world’s largest companies.

Lisa Crispin

Is an independent consultant, author, and speaker based in Vermont, USA. Together with Janet Gregory, she co-authored Holistic Testing: Weave Quality Into Your Product; Agile Testing Condensed: A Brief Introduction; More Agile Testing: Learning Journeys for the Whole Team; and Agile Testing: A Practical Guide for Testers and Agile Teams; and the LiveLessons “Agile Testing Essentials” video course. She and Janet co-founded a training company offering two live courses worldwide: “Holistic Testing: Strategies for Agile Teams” and “Holistic Testing for Continuous Delivery”.

Lisa uses her long experience working as a tester on high-performing agile teams to help organizations assess and improve their quality and testing practices, and succeed with continuous delivery. She’s a DORA Guide for the DORA community of practice. Please visit: https://lisacrispin.com, https://agiletester.ca, https://agiletestingfellow.com, and https://linkedin.com/in/lisacrispin/ for details and contact information.

Suzanne Kraaij

With almost 15 years in the field of testing, Suzanne has gained a lot of experience with various clients in various industries. In the company where she works, Suzanne has a pioneering role as a core member within their testing community. In this position, she is actively involved in knowledge sharing and further development of the field of software testing and quality engineering.

Mike Lyles

Is an international keynote speaker, author, and coach. He is the Head of IT with Maxwell Leadership, an amazing company founded by leadership expert, author, and speaker, John C. Maxwell. Mike has over 30 years of experience in IT, coaching, mentoring, and building successful teams with multiple organizations, including Fortune 50 companies. As a Maxwell Leadership Certified coach and speaker, Mike’s “purpose” is to inspire others with value-based leadership and growth principles and to serve others in their journey toward significance and success. Mike has traveled to dozens of countries and hundreds of events to share his experiences with thousands through keynotes, workshops, and other special events. Mike is the author of the self-help motivational book, “The Drive-Thru Is Not Always Faster”.

George Ukkuru

Is a performance-driven technocrat with over two and a half decades of experience in Test Engineering, Product Management, and User Experience. He specializes in optimizing costs, improving market speed, and enhancing quality by deploying the right tools, practices, and platforms. Throughout his career, George has worked with several Fortune 500 companies, delivering impactful solutions that drive efficiency and innovation. Currently, he serves as General Manager at McLaren Strategic Solutions, where he continues to leverage his expertise to lead teams and projects that align with business goals, ensuring high-quality outcomes and strategic growth.

Esther Okafor

Is a Quality Assurance Engineer at Storyblok, bringing unique experience in API testing and a strong passion for building high-quality software. She has previously worked with renowned companies like Flutterwave, Renmoney, and Venture Garden Group. Over her four years in the tech industry, Esther has trained and mentored over 100 women in tech through initiatives such as She Code Africa and Bug Detective. Her perspective offers valuable insights into the world of QA, and she is committed to helping others succeed. Additionally, she has authored several blog posts that provide essential guidance to Quality Assurance professionals, helping them excel in their day-to-day roles.

Michael Bar-Sinai

Software engineer by training, with a mid-career PhD in formal methods and requirement modeling. Created various information systems for NGOs, ranging from work accident tracking to geopolitics information systems. Worked on data science tools at Harvard’s IQSS. CTO and Co-Founder at Provengo, a start-up creating model-driven software engineering tools. Married, 3 kids, sadly no dogs.

Tim Munn

Technical Test Leader with almost 20 years’ experience in the field. He has automated apps and led international technical QA teams in areas from Pharma to Fintech. Currently a Senior SDET at Spotlight.

Joel Montvelisky

Is a Co-Founder and Chief Product Officer at PractiTest, and has been in testing and QA since 1997, working as a tester, QA Manager and Director, and consultant for companies in Israel, the US, and the EU. Joel is a Forbes Council member and a blogger, and he regularly gives webinars on a number of testing- and quality-related topics.
In addition, Joel is the founder and Chair of the OnlineTestConf, the co-founder of the State of Testing survey and report, and a Director at the Association for Software Testing.
Joel is a seasoned speaker at conferences worldwide, among them the STAR conferences, STPCon, JaSST, TestLeadership Conf, CAST, QA&Test, and more.

Maaret Pyhäjärvi

Is an exploratory tester extraordinaire and Director, Consulting at CGI. She is a tester, (polyglot) programmer, speaker, author, conference designer, and community facilitator. She has received prestigious testing awards, including Most Influential Agile Testing Professional Person 2016 (MIATPP), the EuroSTAR Testing Excellence Award (2020), and Tester Worth Appreciating (2022), and she was selected among the Top-100 Most Influential in ICT in Finland 2019-2023.

Francisco Di Bartolomeo

An experienced Test Discipline Lead with over ten years of expertise across diverse testing areas. Passionate about cultivating a quality-driven culture, he excels in coaching and mentoring teams both technically and professionally. He has led numerous quality assurance initiatives, advocating for risk-based testing and shift-left practices to integrate quality at every stage of development. Guided by the belief that “Quality is a habit,” Francisco is dedicated to making quality a constant practice, establishing himself as a visionary in software testing.

API Test Planning LIVE

How do you come up with cases for your APIs? Is it enough to check they return the right status? No.

APIs are complex, so even a couple of them can overwhelm us with options. But the options are good: we want the ideas, so we can prioritize based on our needs. We just need to understand our system and come up with the right ones.

History has proven that the best way to come up with ideas is collaboration. So that’s what we’ll do.

This is an interactive session on API Test Planning. Given only two APIs (and a semi-sane moderator), we’ll come up with creative ways to test them.

Sounds easy? APIs are complex. In this session, we’ll see just how complex, and how to think about different aspects of APIs when testing.
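As a small sketch of the kind of test ideas that go beyond checking the status code (the base URL, endpoint, and fields below are hypothetical, not from the session), a couple of pytest-style checks might look like this:

```python
# Hypothetical /users/{id} endpoint, exercised with the requests library.
import requests

BASE = "https://api.example.com"  # illustrative service

def test_get_user_returns_expected_shape():
    resp = requests.get(f"{BASE}/users/42")
    assert resp.status_code == 200                       # the obvious check...
    body = resp.json()
    assert {"id", "name", "email"}.issubset(body)        # ...plus contract/shape
    assert body["id"] == 42                              # data consistency
    assert resp.headers["Content-Type"].startswith("application/json")

def test_get_unknown_user_fails_cleanly():
    resp = requests.get(f"{BASE}/users/999999")
    assert resp.status_code == 404                       # negative path
    assert "password" not in resp.text                   # nothing sensitive leaks
```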

Search for a Tool Is Like Dating

Choosing the right toolset for your ecosystem can be a lot like dating. You need to know what you’re looking for, be prepared to make a list, and be willing to check off those boxes to find the right match. In this lightning session, we’ll explore the similarities between choosing a toolset and choosing a date, and learn how to make the best decisions for your needs.

You will learn the following:

  • What should be on your list of requirements when evaluating new toolsets, and how to approach the search to ensure you find the right match for your ecosystem
  • The importance of compatibility, communication, and trust in both dating and tool selection, and how to use these principles to make the best decisions for your organization

 

By the end of this session, you’ll walk away with a clear understanding of how to approach the tool selection process like a pro, and how to make the best decisions to support your organization’s success.

OSRS: Your New Test Strategy Multitool

Have you ever inherited a testset and wondered about its worth? If it’s complete? If it’s good? A lot of test strategy approaches focus on new projects where you start from scratch, but in reality, you’ll often inherit an existing testset of an already running project. So how do you evaluate the value of that testset? How do you see if you are missing something or if you are overtesting things? How to choose what to automate? What to put in a regression testset? What to test first when under time pressure?

This approach can be applied to all of these questions and helps give the whole team insight into the potential value of testing. It will open up the conversation about what you will and won’t be testing with evidence to substantiate those choices.

Taking Your IT Leadership to the Next Level

What does a day in the life of YOU look like at work? Do you struggle to complete projects on time? Are there issues that seem to pop up with every deliverable? Does your team respect you and your contributions? Does your boss understand what you are trying to accomplish? Do your stakeholders appreciate the outcomes that you and your team provide?

 

It’s very likely that one of these questions resonated with you when you read it. In fact, there is a chance that ALL of them do! We live in a fast-paced world where it’s easy to get caught up in a routine where every day looks the same.

 

There seems to be minimal time allowed for growing, improving, and building connections.

 

Imagine a new world where you focus daily on personal growth. A world where you engage effectively with your boss, your peers, and your subordinates. A world where you go from just “showing up” or “keeping up” to “growing up” and improving not only your workplace but your personal life.

 

Join Mike Lyles as he shares decades of experiences in IT and leadership roles and how he has used these experiences to help him grow as a leader in IT.

 

Key Takeaways:

  • Key learnings from years of IT experiences
  • Suggestions for how to move from “communicating” to “connecting”
  • Tips to move from leading “followers” to leading “leaders”

Test Automation: Friend or Foe?

Supporting evidence does not teach us as much as opposing evidence. We are people who support and oppose on principle, in search of knowledge. We are balancing the perspectives of friends and foes, professionally, all day long.

We’ve been at this test automation thing quite a while, and three decades have given me the space to come to a principle that helps my projects succeed slightly more often with test automation: Time used on warning about test automation is time away from succeeding with it. We know from a particular literature genre of romance novels that tropes starting with friends or foes both end up with love, and we could adult up to improve our communication to more purposeful change.

A year ago, we set up a panel conversation to seek ideas for ending well off with test automation. In this talk, we lend the tension of disagreements of the past, to enable learning, combined with the stories of real projects.

Maybe at this time of AI and getting computers to ‘act humanly’, we need to team with an old enemy to make sense of how we peacefully coexist with the new, with healthy boundaries keeping the friendship in check.

Key takeaways:

  • When the stakes are high, we work with friends and foes
  • How to increase the odds of good results we can like with the tools
  • How to navigate the recruitment trap and the in-company growth plains

Not Too Little, Not Too Much: How to Test Just the Right Amount

As business demands for a shorter time to market continue to rise, testing teams can struggle to find the right balance between adhering to these demands and maintaining sufficient coverage to ensure the released products meet the desired quality standards.

Using the power of AI, we offer a model that combines the value each test provides along with the current time limitation to determine which tests are best to execute.
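The session’s model is AI-driven; as a purely illustrative baseline of the same trade-off (with made-up test names, value scores, and durations), a greedy value-per-minute selection under a time budget could look like this:

```python
# Pick the tests with the best value-per-minute until the time budget runs out.
def select_tests(tests, budget_minutes):
    """tests: list of (name, value_score, duration_minutes) tuples."""
    chosen, remaining = [], budget_minutes
    for name, value, duration in sorted(tests, key=lambda t: t[1] / t[2], reverse=True):
        if duration <= remaining:
            chosen.append(name)
            remaining -= duration
    return chosen

suite = [("checkout_e2e", 9, 30), ("login_smoke", 6, 3), ("search_regression", 5, 12)]
print(select_tests(suite, budget_minutes=20))  # ['login_smoke', 'search_regression']
```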