STARWEST 2024 - Test Techniques
Sunday, September 22
Software Tester Certification Foundation Level—ISTQB CTFL
Monday, September 23
Ramping Up Modern Performance
Modern software is developed in a continuous manner, with numerous backend services frequently being deployed and scaled in the cloud. As organizations move toward agile, DevOps, and continuous delivery, it is vital for them to move away from traditional approaches to evaluating performance. Are you interested in ramping up or polishing your performance testing skills? Leandro Melendez will introduce attendees to modern, agile, and continuous performance testing. You’ll learn performance assurance principles and everything from fundamental performance concepts like...
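To make “continuous performance testing” concrete, here is a minimal sketch, assuming a hypothetical health-check endpoint and latency budget (neither comes from the session), of the kind of lightweight check that can run on every build rather than as a one-off load test:

```typescript
// Minimal continuous performance check (hypothetical endpoint and budget).
// Fires a batch of requests, computes the 95th-percentile latency, and fails
// the pipeline step if the budget is exceeded. Requires Node 18+ for global fetch.

const TARGET_URL = "https://example.com/api/health"; // placeholder endpoint
const REQUESTS = 50;
const P95_BUDGET_MS = 500;

async function timedRequest(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url);
  return performance.now() - start;
}

async function main(): Promise<void> {
  const latencies: number[] = [];
  for (let i = 0; i < REQUESTS; i++) {
    latencies.push(await timedRequest(TARGET_URL));
  }
  latencies.sort((a, b) => a - b);
  const p95 = latencies[Math.ceil(latencies.length * 0.95) - 1];
  console.log(`p95 latency: ${p95.toFixed(1)} ms`);
  if (p95 > P95_BUDGET_MS) {
    console.error(`Performance budget of ${P95_BUDGET_MS} ms exceeded`);
    process.exit(1);
  }
}

main();
```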
Test Automation: How to Start and Succeed
Many organizations invest a lot of effort in test automation at the system level but then have serious problems as their product matures and changes over time. As a leader, how can you ensure that your new automation efforts will get off to a good start? What can you do to ensure that your automation work provides continuing added value? Chris Loder will explain the critical issues you need to understand to get a good start, and he will share his extensive experience in building great automation. He will cover the most important management issues you should address for test automation success,...
Smarter Test Design with Classification Trees and Pairwise Techniques
In many teams, the total number of possible combinations of inputs, outputs, browsers, and devices for the software we need to test has grown alarmingly large. As testers, we need to choose the most important tests first, but how do we do that without understanding the potential scope in the first place? In this tutorial, Julie Gardiner will share two powerful testing techniques that can help us be more efficient and effective with our testing. Classification trees are a structured, visual approach to identifying test objects and documenting test ideas and data in a way that allows...
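As a rough illustration of why pairwise techniques help (the browser/device/locale parameters below are invented for the example, not taken from the tutorial), this sketch contrasts the full Cartesian product with a much smaller suite and verifies that the smaller suite still covers every pair of values:

```typescript
// Illustrative sketch (made-up parameters): exhaustive combinations versus a
// pairwise-covering suite, with a helper that checks pair coverage.

type Choice = Record<string, string>;

const parameters: Record<string, string[]> = {
  browser: ["Chrome", "Firefox", "Safari"],
  device: ["desktop", "tablet", "phone"],
  locale: ["en-US", "de-DE", "ja-JP"],
};

// Full Cartesian product: 3 * 3 * 3 = 27 tests.
const totalCombinations = Object.values(parameters)
  .reduce((n, values) => n * values.length, 1);
console.log(`Exhaustive combinations: ${totalCombinations}`);

// Verify that a candidate suite covers every value pair across any two parameters.
function coversAllPairs(suite: Choice[], params: Record<string, string[]>): boolean {
  const names = Object.keys(params);
  for (let i = 0; i < names.length; i++) {
    for (let j = i + 1; j < names.length; j++) {
      for (const a of params[names[i]]) {
        for (const b of params[names[j]]) {
          const covered = suite.some(
            (t) => t[names[i]] === a && t[names[j]] === b
          );
          if (!covered) return false;
        }
      }
    }
  }
  return true;
}

// A hand-picked 9-test pairwise suite for the 27-combination space above.
const pairwiseSuite: Choice[] = [
  { browser: "Chrome", device: "desktop", locale: "en-US" },
  { browser: "Chrome", device: "tablet", locale: "de-DE" },
  { browser: "Chrome", device: "phone", locale: "ja-JP" },
  { browser: "Firefox", device: "desktop", locale: "de-DE" },
  { browser: "Firefox", device: "tablet", locale: "ja-JP" },
  { browser: "Firefox", device: "phone", locale: "en-US" },
  { browser: "Safari", device: "desktop", locale: "ja-JP" },
  { browser: "Safari", device: "tablet", locale: "en-US" },
  { browser: "Safari", device: "phone", locale: "de-DE" },
];

console.log(`Pairwise suite covers all pairs: ${coversAllPairs(pairwiseSuite, parameters)}`);
```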
Testing on the Right: Lessons in Monitoring and Observability
Observability has exploded onto the software engineering zeitgeist over the last five years, and for good reason. However, it suffers from being misunderstood and is sometimes equated with a closely related subject—monitoring. This confusion is compounded by the fact that many existing tools and frameworks have adopted observability terminology in letter but not in spirit. Not having a solid grasp of the basics of observability is becoming unacceptable in the world of effective software quality engineering. Kaushal Dalvi shares his experiences in the...
Tuesday, September 24
A Quality Engineering Introduction to AI and Machine Learning
Although there are several controversies and misunderstandings surrounding AI and machine learning, one thing is apparent — people have quality concerns about the safety, reliability, and trustworthiness of these types of systems. Not only are ML-based systems shrouded in mystery due to their largely black-box nature, but they also tend to be unpredictable, since they can adapt and learn new things at runtime. Validating ML systems is challenging and requires a cross-section of knowledge, skills, and experience from areas such as mathematics, data science, software engineering, cybersecurity,...
Getting Smart on AI-Assisted API Testing
API testing has become increasingly popular as service-oriented architectures have become common. In addition, testing at the API level can be effectively automated to provide maintainable regression tests that work well in a DevOps process. In this tutorial, Jeffery Payne discusses what API testing is all about and how AI is being leveraged today to make it easier to perform. He highlights techniques and tools that show where in the testing process API testing makes the most sense. Various open-source and commercial tools will be demonstrated, along with the pros and cons of various...
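A minimal sketch of what an automatable, DevOps-friendly API-level regression test can look like, assuming a hypothetical endpoint and response contract (not material from the tutorial):

```typescript
// Contract-style API regression test using Node's built-in test runner (Node 18+),
// so it can run headlessly in a CI/CD pipeline. Endpoint and fields are placeholders.
import { test } from "node:test";
import assert from "node:assert/strict";

const BASE_URL = process.env.API_BASE_URL ?? "https://example.com/api";

test("GET /users/42 returns the expected contract", async () => {
  const res = await fetch(`${BASE_URL}/users/42`);
  assert.equal(res.status, 200);
  assert.match(res.headers.get("content-type") ?? "", /application\/json/);

  const body = await res.json();
  // Assert on shape and types rather than incidental values to keep the test maintainable.
  assert.equal(typeof body.id, "number");
  assert.equal(typeof body.email, "string");
});
```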
Web Security Testing: The Basics and More
Web applications are often security-critical or serve as front ends for security-critical applications, making web testing for vulnerabilities an essential part of software testing. Unfortunately, most software testers have not been taught how to identify web security issues while testing applications. Join Tom Stiehm as he shares what you need to know to security test web-based applications as part of your overall testing process. Learn about the most common web security vulnerabilities and how they are introduced into web code and exploited by hackers. Explore test techniques for...
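As a hedged illustration (the target URL and checks are assumptions, not the speaker's material), here are two basic web security tests: one asserting that baseline security headers are present, and one probing whether a reflected XSS payload is echoed back unescaped:

```typescript
// Basic web security checks with Node's built-in test runner (Node 18+).
// TARGET_URL is a placeholder; run only against systems you are authorized to test.
import { test } from "node:test";
import assert from "node:assert/strict";

const TARGET = process.env.TARGET_URL ?? "https://example.com";

test("responses carry baseline security headers", async () => {
  const res = await fetch(TARGET);
  assert.ok(res.headers.get("content-security-policy"), "missing Content-Security-Policy header");
  assert.equal(res.headers.get("x-content-type-options"), "nosniff");
});

test("search parameter is not reflected unescaped", async () => {
  const payload = "<script>alert(1)</script>";
  const res = await fetch(`${TARGET}/search?q=${encodeURIComponent(payload)}`);
  const html = await res.text();
  // If the raw payload appears verbatim, the page is likely vulnerable to reflected XSS.
  assert.ok(!html.includes(payload), "payload reflected without encoding");
});
```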
Exploratory Testing in the Heat of the Sprint
Agile teams are burdened with the challenge of delivering working product increments after short iterations of development. Getting software from an ambiguous, terse, incomplete requirement to done, working, solid, valuable, high-quality code requires testers to continuously adapt to change in a turbulent context and deliver actionable results. Chris Blain will illustrate how charter-driven, session-based exploratory testing techniques can empower agile teams and help them learn quickly and adapt based on what really matters. Testers can design and execute tests on the fly as they explore...
Wednesday, September 25
Building and Testing Serverless API Applications with AWS SAM
The primary draw of AWS serverless applications is their supposed simplicity. Anyone who has attempted to implement testing on a serverless application, however, knows that it is anything but simple. Serverless technologies allow for the faster construction of more complex applications with more complex integrations while also introducing new technologies and execution environments, all of which pose a challenge to those used to testing in a more traditional way. This presentation looks at an API-based serverless application as an example and introduces how the application...
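One common way to test below the SAM/API Gateway layer is to invoke the Lambda handler directly with a mock event, so the business logic can be verified without deploying; the handler and event shape below are illustrative assumptions, not taken from the presentation:

```typescript
// Unit-level test of a serverless API handler, bypassing API Gateway and SAM deployment.
// The handler and event types here are simplified stand-ins for illustration.
import { test } from "node:test";
import assert from "node:assert/strict";

interface ApiEvent {
  pathParameters?: { id?: string };
}
interface ApiResult {
  statusCode: number;
  body: string;
}

// A tiny example handler of the kind a SAM template would wire to an API Gateway route.
async function getOrderHandler(event: ApiEvent): Promise<ApiResult> {
  const id = event.pathParameters?.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ message: "id is required" }) };
  }
  return { statusCode: 200, body: JSON.stringify({ id, status: "PENDING" }) };
}

test("returns 400 when the path parameter is missing", async () => {
  const result = await getOrderHandler({});
  assert.equal(result.statusCode, 400);
});

test("returns the order for a valid id", async () => {
  const result = await getOrderHandler({ pathParameters: { id: "123" } });
  assert.equal(result.statusCode, 200);
  assert.equal(JSON.parse(result.body).id, "123");
});
```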
Testing Retrospective: Lessons from the Past
We prepare for the future by learning from the lessons of the past. During this session, you will look back at some of the craziest bugs John Jenkins has run across during his career and see what lessons can be gleaned from them to help tackle the problems of tomorrow. Bugs come in all shapes and sizes, and they can exist in processes just as easily as they can exist in code. In this session, both types will be examined, including the “too much free space” bug, the “V1” bug, the “too much test data” bug, and more. The session will also explore some of the best practices John has developed in his...
Enabling a DevOps Culture with Embedded Systems
In today's technology-driven landscape, where software and hardware intertwine seamlessly in embedded systems, adopting a DevOps culture becomes imperative. This talk delves into the dynamic world of embedded systems and explores how organizations can successfully implement and nurture a DevOps culture within this unique domain. DevOps, with its emphasis on collaboration, automation, and continuous improvement, has revolutionized software development. However, adapting these principles to the realm of embedded systems presents distinct challenges and opportunities. Our discussion will...
Reinventing the Art of Software Testing with Google Cloud AI Platform
This session explores innovative ways to approach and revolutionize the art of software testing by harnessing the full power of Google Cloud AI Platform. Utilizing AI-powered regression testing and natural language processing (NLP) capabilities, developers can automate mundane and repetitive tests while also analyzing software functionality and usability. Predictive analytics and custom machine learning models can be used to anticipate and identify potential issues, improve testing efficiency, and provide actionable insights. Applying reinforcement learning algorithms for GUI...
Thursday, September 26
Bridging AI-Generated Acceptance Criteria with Comprehensive Test Scenarios
Generating acceptance criteria with generative AI is an innovative approach to streamlining the requirements gathering process. However, ensuring that the implemented code aligns with these criteria is crucial for delivering high-quality software. This talk explores the integration of AI-generated acceptance criteria with test case scenarios, aiming to establish a seamless connection between the development and testing phases. By leveraging pull requests as a central hub, this approach facilitates the validation of whether the test case scenarios adequately cover...
Testing for Synergy: Progressive Testing Strategies for Interdependent Product Suites
In today's dynamic business landscape, organizations grapple with the challenge of innovating across multiple product developments simultaneously. This session explores the intricacies of concurrent development involving seven products, highlighting the pivotal roles of scrum teams, engineering approaches, and collaborative testing strategies. Traditional independent product testing approaches prove insufficient in the context of interdependence and a unified platform. The discussion centers on the need for an evolved testing strategy that shifts left, meticulously addressing integration...
Automation Zero to Hero in Two Weeks
Several years ago, Dave was a seasoned QA Engineer starting a new job. As the new guy, he was initially handed the mind-numbing task of verifying that data in reports matched expected data in a spreadsheet. This task took three weeks to complete manually. He vowed he would never do that again, so the next time around, Dave automated the process in two weeks using an open source automation tool he had never used before, all while laying the foundation for an automation framework that now has over 90 contributors across 12 product lines. Session attendees will learn how to build,...
“Low Code”—Coded Automation Using Free Tools
Using artificial intelligence to generate test code is a hybrid automation strategy that combines the best of both worlds. Tests can be created very quickly by almost anyone using AI, yet the tests are still planned by humans and maintainable by humans. With the right prompts, you can have AI construct traditional test code using open source testing tools that the world is already familiar with (Chai, Mocha, Cypress). As a result, you end up with structured code that is logical and easy to maintain without having to wonder what the AI is testing. In this session, Timothy will look at...
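For a sense of the kind of structured, human-maintainable output being described (this example is illustrative, not the speaker's material), here is a conventional Mocha/Chai test that an AI could plausibly be prompted to generate:

```typescript
// A conventional Mocha/Chai test suite in TypeScript: describe/it blocks a human
// can read, review, and maintain. The function under test is a hypothetical example.
import { expect } from "chai";

// Hypothetical unit under test.
function applyDiscount(total: number, code: string): number {
  if (total < 0) throw new Error("total must be non-negative");
  return code === "SAVE10" ? total * 0.9 : total;
}

describe("applyDiscount", () => {
  it("applies a 10% discount for the SAVE10 code", () => {
    expect(applyDiscount(100, "SAVE10")).to.equal(90);
  });

  it("leaves the total unchanged for unknown codes", () => {
    expect(applyDiscount(100, "UNKNOWN")).to.equal(100);
  });

  it("rejects negative totals", () => {
    expect(() => applyDiscount(-1, "SAVE10")).to.throw("non-negative");
  });
});
```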
Manual Test Cases Suck...So Get Rid of Them!
It has long been a commonly held belief that an ever-growing list of manual test cases, kept and maintained by the QA team, is the best way to provide training, metrics, and a roadmap for future automation efforts. But what if this base assumption is wrong? What if, instead of having just a number of passing or failing test cases, we could better speak to factors like risk and quality before our products go live and have a little extra time to be creative with our testing? Rebecca Peterson wants to teach you the steps she and her coworkers have taken to move toward living documentation that can...
Pillars of Quality: Building a Structured Test Program in Phases
As the lone tester amongst a sea of developers, Sylvia Solorzano struggled to scale reactive efforts into a cohesive program. Haphazard bug reports strained relationships and failed to provide direction. By first adopting a phased methodology based on industry best practices, she made quality her north star. Sylvia enshrined requirements traceability and manually executed test cases for each function delivered. Then she evaluated integrations between components and expanded to validate full system behavior as users would experience it. Finally, leveraging test standards, she layered on...
AI-Assisted Exploratory Testing for Healthcare Software Based on STEEEP Domain of Healthcare Quality
In this presentation, the primary objective is to introduce an innovative AI-assisted self-generating exploratory test automation model designed specifically for healthcare software applications. The overarching goal of this platform is to elevate the quality assurance process within the realm of healthcare software by dynamically formulating, executing, and adapting test scenarios. These scenarios are meticulously crafted to align with the crucial STEEEP (Safe, Timely, Effective, Efficient, Equitable, and Patient-Centered) healthcare quality domain, thereby fostering an improvement...
Multi-Modal GPTs Are Coming For Your Testing! How to Adapt?
As you research the latest in generative AI technology, you will see that the development and availability of multi-modal GPT engines are fundamentally changing the way applications are tested and described. These new GPT models can generate and interpret voice, text, and images seamlessly. For example, you can ask them to navigate an application to accomplish a business task and comment on their actions. This means that, for the first time, we are entering the world of AI-assisted (and even AI-performed) exploratory testing. When you couple this with the capabilities of GPT models to identify UI elements...
The Art of Winning Leadership Support for Web Accessibility
Feeling unheard in your fight for web accessibility? What if you could turn this struggle into a success story? Join this candid conversation on navigating the nuanced journey of integrating accessibility, even when convincing leadership seems like an uphill battle. Together, we'll reflect on Renata's initial stumbles, identify common pitfalls, and strategize to avoid them. Anticipate leadership concerns by understanding deeper motivations behind their resistance. Master the art of speaking their language and tailor your message to resonate with decision-makers. Empower your...
Leveraging and Measuring the Use of Formal Testing Methods in Product Development
Without training in software testing that includes formal methods such as equivalence class partitioning, boundary value analysis, decision tables, state diagrams, and others, an engineer, in good faith, will test the code intuitively until reaching “qualitative confidence” that testing is sufficient. The team at Trimble, Inc. provided training to engineers in a wide variety of formal testing methods so they could gain “quantitative confidence” that code has been tested sufficiently, by using methods known to achieve concrete coverage of requirements and code. Subsequently, Trimble...
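As a small, hedged illustration of two of the methods named above, the sketch below applies equivalence class partitioning and boundary value analysis to an invented eligibility rule (ages 18 through 65 are accepted); the rule and function are examples, not Trimble's material:

```typescript
// Equivalence partitioning plus boundary value analysis expressed as test cases,
// run with Node's built-in test runner (Node 18+).
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under test: accepts applicants aged 18 through 65 inclusive.
function isEligible(age: number): boolean {
  return age >= 18 && age <= 65;
}

// One representative per equivalence class plus the values at and around each boundary.
const cases: Array<[number, boolean]> = [
  [10, false], // below-range partition
  [17, false], // just below lower boundary
  [18, true],  // lower boundary
  [19, true],  // just above lower boundary
  [40, true],  // in-range partition
  [64, true],  // just below upper boundary
  [65, true],  // upper boundary
  [66, false], // just above upper boundary
  [90, false], // above-range partition
];

for (const [age, expected] of cases) {
  test(`isEligible(${age}) is ${expected}`, () => {
    assert.equal(isEligible(age), expected);
  });
}
```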
Compile to Combat in 24 Hours—The Death of Regression Tests
As a Quality Engineering Director for a consulting firm delivering innovative data products to the US Department of Defense, Sufyan was faced with a new challenge. The US Government had directed all agencies delivering software solutions to the Department of Defense to provide the ultimate advantage to its warfighters. All software solutions must meet the Compile to Combat in 24 Hours (C2C24) directive: any requested change to software being used by the military must be delivered within 24 hours, from the point of development to delivery. For the Quality Engineering team, this meant long and...