Prompt Engineering for Software Quality Professionals
With the sudden rise of ChatGPT and large language models (LLMs), professionals have been attempting to use these tools to improve productivity. Building off prior momentum in AI for testing, software quality professionals are leveraging LLMs for creating tests, generating test scripts, automatically analyzing test results, and more. However, if LLMs are not given well-crafted prompts that clearly describe the task the AI is supposed to perform, their responses can be inaccurate and unreliable, thereby diminishing productivity gains. Join Tariq King as he teaches you how to craft high-quality AI prompts and responsibly apply them to software testing and quality engineering tasks. After a brief walkthrough of how LLMs work, you'll get hands-on with few-shot, role-based, and chain-of-thought prompting techniques. Learn how to adapt these techniques to your own use cases, while avoiding model "hallucinations" and adhering to your company's security and compliance requirements.
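To give a flavor of what the hands-on portion covers, here is a minimal sketch of combining two of the techniques named above: a role-based instruction plus a few-shot prompt for boundary-value test design. The `complete` function is a hypothetical placeholder, not part of any real library; in practice you would route the prompt to whichever LLM client your organization has approved.

```python
# Sketch: assembling a few-shot, role-based prompt for boundary-value testing.
# Nothing here calls a real API; complete() is a hypothetical placeholder for
# whichever LLM client your organization has approved.

# Role-based prompting: give the model a persona to anchor its behavior.
ROLE = "You are a senior software tester who designs boundary-value tests."

# Few-shot prompting: two worked examples teach the model the expected
# input/output format before it sees the real task.
EXAMPLES = """\
Function: accepts an age between 18 and 65 (inclusive).
Tests: 17 -> reject, 18 -> accept, 65 -> accept, 66 -> reject

Function: accepts a username of 3 to 12 characters.
Tests: 2 chars -> reject, 3 chars -> accept, 12 chars -> accept, 13 chars -> reject
"""

# The actual task, phrased exactly like the examples.
TASK = ("Function: accepts a discount percentage between 0 and 40 (inclusive).\n"
        "Tests:")

prompt = f"{ROLE}\n\n{EXAMPLES}\n{TASK}"

def complete(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to your approved LLM endpoint."""
    raise NotImplementedError("Wire this up to your own LLM client.")

if __name__ == "__main__":
    # Reviewing the assembled prompt before sending it is a simple guard
    # against leaking sensitive data and a first check for hallucination risk.
    print(prompt)
```

Because the examples constrain the output format, the model's answer is easier to verify against the specification, which is one practical way to catch hallucinated test cases.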
Tariq King is the Vice President of Product-Service Systems at EPAM, where he manages a portfolio that lies at the intersection of software products and services, and supports the business through technology consulting. Tariq has over 15 years' experience in software engineering and testing and has previously held positions as Chief Scientist, Head of Quality, Director of Quality Engineering, Manager of Software Engineering, and Test Architect. Tariq holds Ph.D. and M.S. degrees in Computer Science from Florida International University, and a B.S. in Computer Science from Florida Tech. His areas of research are software testing, artificial intelligence, autonomic and cloud computing, model-driven engineering, and computer science education. He has published over 40 research articles in peer-reviewed IEEE and ACM journals, conferences, and workshops, and has been an international keynote speaker at leading software conferences in industry and academia.