STARWEST 2024 - Consultant

Customize your STARWEST 2024 experience with sessions for consultants.

Sunday, September 22

ISTQB Certified Tester—Test Automation Engineer

Sunday, September 22, 2024 - 8:30am to Tuesday, September 24, 2024 - 5:00pm

Monday, September 23

Jason Arbon
Checkie.AI
MG

Evaluating and Testing Generative AI: Insights and Strategies

Monday, September 23, 2024 - 8:30am to 12:00pm

Generative AI (GenAI), exemplified by groundbreaking systems like ChatGPT and LLAMA, is revolutionizing the software landscape. These advanced technologies represent some of the most sophisticated software ever devised, capable of navigating an unprecedented range of prompts and questions, many of which have never been posed in human history. Their ability to generate varied responses to the same query and even fabricate answers when uncertain poses unique challenges in verification and testing. This talk delves into the intricacies of validating such systems and identifies areas needing...
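
Illustrative sketch (not from this tutorial): one way to exercise the verification challenge the abstract describes, namely that a GenAI system can return different answers to the same prompt. The idea is to sample the same prompt several times and flag prompts whose answers drift too far apart for human review. The function ask_model is a hypothetical placeholder for whatever client the system under test exposes, and the 0.6 threshold is an arbitrary example value.

```python
# Sketch: rough consistency check for a nondeterministic GenAI system.
from difflib import SequenceMatcher
from statistics import mean

def ask_model(prompt: str) -> str:
    # Placeholder: call your GenAI system here (e.g., via its HTTP client).
    raise NotImplementedError

def consistency_score(prompt: str, samples: int = 5) -> float:
    """Average pairwise text similarity across repeated answers to one prompt."""
    answers = [ask_model(prompt) for _ in range(samples)]
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return mean(SequenceMatcher(None, a, b).ratio() for a, b in pairs)

def test_refund_policy_answer_is_stable():
    # Flag prompts whose answers vary wildly between runs for manual review.
    assert consistency_score("Summarize our refund policy.") > 0.6
```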

Chris Loder
BluWave-ai
MK

Automation Framework Essentials

Monday, September 23, 2024 - 1:00pm to 4:30pm

Automation is critical in today’s software delivery lifecycle, and yet many organizations struggle to keep their automation running. How can we mitigate difficulties and get consistent automation runs and results we can trust? The secret is implementing a solid automation framework, but that isn’t as easy as it seems. Chris Loder has built several automation frameworks over his career and has learned what works—and, more importantly, what doesn’t. This tutorial will cover what an automation framework is, the benefits of having one, and the keys to a successful framework, including...
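
As a point of reference only (this is not Chris Loder's framework), a minimal sketch of the layering most automation frameworks share: a single driver factory, page objects that hide locators behind intent-level methods, and thin tests on top. It assumes Selenium WebDriver and a hypothetical login page whose element IDs are invented for illustration.

```python
# Sketch of a layered automation framework: factory -> page object -> test.
from selenium import webdriver
from selenium.webdriver.common.by import By

class DriverFactory:
    """Single place to create and configure browsers for every test."""
    @staticmethod
    def create(browser: str = "chrome"):
        if browser == "chrome":
            return webdriver.Chrome()
        raise ValueError(f"Unsupported browser: {browser}")

class LoginPage:
    """Page object: tests call intent-level methods, never raw locators."""
    def __init__(self, driver, base_url: str):
        self.driver = driver
        self.base_url = base_url

    def login(self, user: str, password: str):
        self.driver.get(f"{self.base_url}/login")
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

def test_valid_login():
    driver = DriverFactory.create()
    try:
        LoginPage(driver, "https://example.test").login("qa_user", "secret")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```

The payoff of this shape is that locator churn stays inside the page objects and browser setup stays inside the factory, so test runs become repeatable and trustworthy.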

Tuesday, September 24

Mike Sowers
Coveros
TD

Quality and Testing Measures and Metrics

Tuesday, September 24, 2024 - 8:30am to 12:00pm

To be most effective, leaders—including development and testing managers, ScrumMasters, product owners, and IT managers—need metrics to help direct their efforts and make informed recommendations about the software’s release readiness and associated risks. Because one important evaluation activity is to “measure” the quality of the software, the progress and results of both development and testing must be measured. Collecting, analyzing, and using metrics are complicated because developers and testers often are concerned that the metrics will be used against them. Join Mike Sowers as he...
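
For context only (this is not Mike Sowers' material), a small sketch of the kind of measures such a program typically starts with, computed from run and defect data so that trends, rather than individuals, are what gets reported. The field names and figures are hypothetical.

```python
# Sketch: two common quality metrics computed from test-run and defect data.
from dataclasses import dataclass

@dataclass
class TestRun:
    passed: int
    failed: int
    blocked: int

def pass_rate(run: TestRun) -> float:
    """Share of executed (not blocked) tests that passed."""
    executed = run.passed + run.failed
    return run.passed / executed if executed else 0.0

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc if size_kloc else 0.0

nightly = TestRun(passed=412, failed=9, blocked=3)
print(f"Pass rate: {pass_rate(nightly):.1%}")
print(f"Defect density: {defect_density(37, 120.5):.2f} per KLOC")
```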

Wednesday, September 25

W5

Delete Responsibly: A Guide to Managing Flaky Tests in iOS UI Automation

Wednesday, September 25, 2024 - 11:30am to 12:30pm

Flaky tests pose a significant challenge in maintaining the reliability and efficiency of UI test suites. This talk delves into practical approaches for handling flaky tests, emphasizing the importance of responsibly removing or rewriting tests that consistently demonstrate flakiness. Join Zhanat to explore strategies such as implementing retries judiciously, utilizing monitoring tools for better oversight, and engaging in collaborative problem-solving sessions with iOS teams to address ambiguous failures. Through real-world examples, attendees will learn how to avoid common pitfalls that...
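
A language-agnostic sketch (shown in Python, not taken from the talk) of one idea the abstract mentions, implementing retries judiciously: retry a UI test a bounded number of times, but record every retry so a test that only passes on a second attempt surfaces as flaky instead of silently going green. All names are illustrative.

```python
# Sketch: bounded retries that also log flakiness instead of hiding it.
import functools

flaky_log: list[str] = []

def retry(times: int = 2):
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, times + 1):
                try:
                    return test_fn(*args, **kwargs)
                except AssertionError:
                    flaky_log.append(f"{test_fn.__name__}: failed attempt {attempt}")
                    if attempt == times:
                        raise
        return wrapper
    return decorator

@retry(times=2)
def test_checkout_button_is_visible():
    # Replace with a real UI assertion; this fails to show the logging path.
    assert False, "button not rendered in time"
```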

W13

Reinventing the Art of Software Testing with Google Cloud AI Platform

Wednesday, September 25, 2024 - 2:45pm to 3:45pm

This session explores innovative ways to approach and revolutionize the art of software testing by harnessing the full power of Google Cloud AI Platform. Utilizing AI-powered regression testing and natural language processing (NLP) capabilities, developers can automate mundane and repetitive tests while also analyzing software functionality and usability. Predictive analytics and custom machine learning models can be used to anticipate and identify potential issues, improve testing efficiency, and provide actionable insights. Applying reinforcement learning algorithms for GUI testing,...
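
As a generic illustration only of the "predictive analytics" idea in the abstract (this does not use the Google Cloud AI Platform API): train a simple local model on historical test-run metadata to flag the tests most likely to fail on the next run. The library here is scikit-learn, and every feature and value is hypothetical.

```python
# Sketch: predicting likely test failures from historical run metadata.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Features per test: [lines changed in touched files, days since last edit,
# failures in last 10 runs]; label: failed on the next run (1) or not (0).
X = np.array([[120, 1, 3], [5, 30, 0], [60, 2, 1], [2, 90, 0]])
y = np.array([1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

candidate = np.array([[80, 3, 2]])  # feature vector for an upcoming change
print("Predicted failure risk:", model.predict_proba(candidate)[0][1])
```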

Thursday, September 26

T1

Bridging AI-Generated Acceptance Criteria with Comprehensive Test Scenarios

Thursday, September 26, 2024 - 9:45am to 10:45am

Generating acceptance criteria with generative AI is an innovative approach to streamlining the requirements gathering process. However, ensuring that the implemented code aligns with these criteria is crucial for delivering high-quality software. This talk explores integrating AI-generated acceptance criteria with test case scenarios, aiming to establish a seamless connection between the development and testing phases. By leveraging pull requests as a central hub, this approach facilitates the validation of whether the test case scenarios adequately cover the...
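
A hypothetical sketch of the kind of coverage check the abstract describes: compare the AI-generated acceptance criteria against the test scenarios attached to a pull request and report any criteria with no covering scenario. The data shapes, IDs, and the idea of parsing coverage tags from PR annotations are illustrative assumptions, not the speaker's implementation.

```python
# Sketch: report acceptance criteria left uncovered by a PR's test scenarios.
acceptance_criteria = {
    "AC-1": "User can reset a forgotten password via email",
    "AC-2": "Reset link expires after 24 hours",
    "AC-3": "Old password stops working after reset",
}

# Scenario -> criteria IDs it claims to cover (e.g., parsed from tags in the
# PR description or from test annotations).
test_scenarios = {
    "test_reset_email_sent": ["AC-1"],
    "test_expired_link_rejected": ["AC-2"],
}

covered = {ac for ids in test_scenarios.values() for ac in ids}
uncovered = set(acceptance_criteria) - covered
if uncovered:
    print("Acceptance criteria with no covering scenario:", sorted(uncovered))
```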

Tania Katan
HALO Strategies
T2

Change: It's All We Have

Thursday, September 26, 2024 - 9:45am to 10:45am

As the philosopher Heraclitus once said, “The only thing that is constant is change.” Which means that when we stop fighting change, stop staying stuck in “the way things have always been,” and START embracing the power of change—within our mindsets, offices, and objectives—we START transforming into the most relevant and resonant versions of ourselves, our teams, and our organizations. Led by a change agent who has worked both inside technology and WAY OUTSIDE it, this interactive (and fun) session is focused on you gaining the clarity, creativity, and confidence necessary to wield the power of change and have impact...

Vinod Kashid
Cognizant Technology Solutions
T10

End-to-End Automation for Performance Testing and Engineering

Thursday, September 26, 2024 - 11:15am to 12:15pm

Join Vinod Kashid as he walks through his experience building end-to-end automation for performance testing and engineering. His team was tasked with daily performance testing for a critical application, and business executives expected detailed application performance outcomes every day. Manual performance test execution and analysis were time-consuming and delayed the turnaround of the final report. Business stakeholders wanted to implement zero-touch automation from performance test execution to analysis and reporting (including deep-dive...
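
A minimal sketch of the pipeline shape such zero-touch automation usually takes, not Vinod Kashid's actual implementation: run the load test, parse its results file, and emit a daily summary with no manual steps. The shell command, result file name, column name, and latency threshold are all hypothetical placeholders for whatever tooling the team uses.

```python
# Sketch: unattended performance test run -> parse results -> daily summary.
import csv
import statistics
import subprocess

def run_load_test() -> str:
    # Placeholder: invoke your load-test tool here and return the results path.
    subprocess.run(["./run_perf_test.sh"], check=True)
    return "results.csv"

def summarize(results_path: str) -> dict:
    with open(results_path, newline="") as f:
        latencies = sorted(float(row["elapsed_ms"]) for row in csv.DictReader(f))
    return {
        "samples": len(latencies),
        "p95_ms": latencies[int(len(latencies) * 0.95) - 1],
        "mean_ms": statistics.mean(latencies),
    }

if __name__ == "__main__":
    report = summarize(run_load_test())
    print("Daily performance summary:", report)
    assert report["p95_ms"] < 800, "p95 latency regression"
```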

Carlos Kidman
Qualiti
T11

How to Test AI Systems

Thursday, September 26, 2024 - 11:15am to 12:15pm

Companies have lost millions because they didn't have proper MLOps or testing processes. They relied on training metrics, like accuracy, but software quality goes beyond that. As the barrier to entry for AI tools like ChatGPT and Midjourney gets lower and companies start building their own systems and products with AI, the need to set quality standards, establish testing practices, and think about ethics and safety has become crucial. Testing these systems goes beyond validation metrics like accuracy, precision, and recall. Instead, quality attributes like behaviors, usability, and...
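
One illustrative example of a behavior-level check of the sort the abstract alludes to (not Carlos Kidman's material): an invariance test asserting that a sentiment model's prediction does not flip when only a person's name changes, something holdout-set accuracy alone would never reveal. The function predict_sentiment is a placeholder for the system under test.

```python
# Sketch: behavioral (invariance) test for an ML system, beyond accuracy.
def predict_sentiment(text: str) -> str:
    # Placeholder: call your model or service; returns "positive"/"negative".
    raise NotImplementedError

def test_prediction_invariant_to_name_swap():
    template = "{name} said the support team resolved the issue quickly."
    labels = {predict_sentiment(template.format(name=n)) for n in ("Maria", "Amir")}
    # Behavioral failure: the same sentence with a different name changes the label.
    assert len(labels) == 1
```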

Richelle Bixler
Edward Jones
Panikar
Atlas Revolutions, Inc.
T14

Writing Great Objectives

Thursday, September 26, 2024 - 1:30pm to 2:30pm

A good objective has five components that effectively communicate a business outcome and why it matters: Activity (What will we be doing?), Scope (What are the boundaries of the work we will touch?), Beneficiary (Who is the intended recipient of the new work?), User Value (Why does this work matter to the new user?), and Business Value (Why does this work matter to the business?). PI objectives can be daunting, but they become much easier when you add in some key components or a formula: [Activity] + [Scope] so that [Beneficiary] has [User Value] to [Business Value]. In this session, Richelle will talk...

Sylvia Solorzano
Expedition Technology
T15

Pillars of Quality: Building a Structured Test Program in Phases

Thursday, September 26, 2024 - 1:30pm to 2:30pm

As the lone tester amongst a sea of developers, Sylvia Solorzano struggled to scale reactive efforts into a cohesive program. Haphazard bug reports strained relationships and failed to provide direction. By first installing a phased methodology based on industry best practices, she made quality her north star. Sylvia enshrined requirements traceability and manually executed test cases for each function delivered. Then she evaluated integrations between components and expanded validation to full system behavior as users would experience it. Finally, leveraging test standards, she layered on...