
STAR Conference 2006 Will Be Held October 16-20, 2006

Posted on 2006-09-29 22:09 by Jackei

The STAR (Software Testing Analysis & Review) Conference 2006 will be held October 16-20, 2006. Unfortunately it takes place in the United States, so we won't have the chance to attend ^_^.

As in previous years, the conference is split into East and West events; the West event will be held at the Disneyland Hotel in Anaheim, a city in southwestern California. STAR will showcase the industry's latest results on best practices, tools, and more. Topics covered include software outsourcing, security, automation techniques, and other current industry hot topics.

First, the highlights of the West program.

Wednesday, October 18, 8:45 AM
How to Build Your Own Robot Army
Harry Robinson, Google, Inc.

Software testing is tough—it can be exhausting and there is never enough time to find all the important bugs. Wouldn't it be nice to have a staff of tireless servants working day and night to make you look good? Well, those days are here. Two decades ago, software test engineers were cheap and machine time was expensive, demanding test suites to run as quickly and efficiently as possible. Today, test engineers are expensive and CPUs are cheap, so it becomes reasonable to move test creation to the shoulders of a test machine army. But we're not talking about the run-of-the-mill automated scripts that only do what you explicitly told them … we're talking about programs that create and execute tests you never thought of and find bugs you never dreamed of. In this presentation, Harry Robinson will show you how to create your robot army using tools lying around on the Web. Most importantly, learn how to take appropriate credit for your army's work!
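
The "robot army" idea can be sketched in a few lines of Python: instead of replaying a fixed script, a loop invents inputs and checks an invariant derived from the spec. The function under test and its invariant below are invented for illustration, not taken from the talk.

```python
import random

def normalize_path(p):
    # Hypothetical system under test: collapse repeated slashes.
    while "//" in p:
        p = p.replace("//", "/")
    return p

def robot_army(iterations=1000, seed=42):
    """Generate random path-like strings and check an invariant the
    spec implies: the output must never contain a double slash."""
    rng = random.Random(seed)
    failures = []
    for _ in range(iterations):
        p = "".join(rng.choice("/ab") for _ in range(rng.randint(0, 12)))
        if "//" in normalize_path(p):
            failures.append(p)
    return failures

print(robot_army())  # → [] when the invariant holds
```

The oracle here is a property, not an expected value, which is what lets a machine invent tests you never wrote down.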

Harry Robinson is a Software Engineer in Test for Google. He coaches teams around the company in test generation techniques. His background includes ten years at AT&T Bell Labs, three years at Hewlett-Packard, and six years at Microsoft before joining Google in 2005. While at Bell Labs, he created a model-based testing system that won the 1995 AT&T Award for Outstanding Achievement in the Area of Quality. At Microsoft, he pioneered the test generation technology behind Test Model Toolkit, which won the Microsoft Best Practice Award in 2001. He holds two patents in software test automation methods, maintains the site www.model-based-testing.org, and speaks and writes frequently on software testing and automation issues.

Wednesday, October 18, 10:00 AM
Software Security Testing: It’s Not Just for Functions Anymore
Gary McGraw, Cigital, Inc.

What makes security testing different from classical software testing? Part of the answer lies in expertise, experience, and attitude. Security testing comes in two flavors and involves standard functional security testing (making sure that the security apparatus works as advertised), as well as risk-based testing (malicious testing that simulates attacks). Risk-based security testing should be driven by architectural risk analysis, abuse and misuse cases, and attack patterns. Unfortunately, first generation ''application security'' testing misses the mark on all fronts. That's because canned black-box probes—at best—can show you that things are broken, but say very little about the total security posture. Join Gary McGraw to learn what software security testing should look like, what kinds of knowledge testers must have to carry out such testing, and what the results may say about security.

Gary McGraw Gary McGraw, Cigital, Inc.'s CTO, is a world authority on software security. Gary is author of several best selling books including: Software Security, Exploiting Software, Building Secure Software, Software Fault Injection, Securing Java, and Java Security. Gary holds a dual Ph.D. in Cognitive Science and Computer Science from Indiana University and a BA in Philosophy from University of Virginia. He is a member of the IEEE Security and Privacy Task Force and was recently elected to the IEEE Computer Society Board of Governors. Gary produces the Silver Bullet Security Podcast for IEEE Security & Privacy magazine, writes a monthly column for darkreading.com, and is often quoted in the press. www.cigital.com/~gem

Wednesday, October 18, 4:30 PM
Dispelling Testing’s Top Ten Illusions
Lloyd Roden, Grove Consultants

Are illusions running your organization—distorting the truth and ultimately limiting testing’s effectiveness? Join Lloyd Roden as he unveils his list of the top ten illusions that we may face as testers and test managers. One illusion that we often encounter is “quality cannot be measured.” While it is difficult to measure, Lloyd believes it can and should be measured regularly, otherwise we never improve. Another illusion Lloyd often encounters is “anyone can test.” Typically when the project is behind schedule, inexperienced people are “drafted” to help with testing. While this gives us the illusion that more hands are better, we know the real impact of inexperienced people on the final product. While it is important to identify illusions when they appear, Lloyd will describe ways to reduce their impact or eliminate them entirely from your organization. Only then can we become ultra-effective test professionals who are respected within our organizations.

Lloyd Roden has been involved in the software industry since 1980, studying computer science at Leicester University. He has worked as a programmer with Pearl Assurance, as a Senior Independent Test Analyst for Royal Life, and a project manager for the Product Assurance department at Peterborough Software. In 1999 he joined Grove Consultants where he provides consultancy and training in all aspects of testing, specializing in test management, people issues in testing, and test automation. Lloyd is a lively and enthusiastic speaker at conferences and seminars including EuroSTAR, AsiaSTAR, STAREAST, Software Test Automation, Test Congress, and Unicom conferences, as well as Special Interest Groups in Software Testing in a variety of different countries.

Thursday, October 19, 8:30 AM
What Every Tester Needs to Know to Succeed in the Agile World
Jean Tabaka, Rally Software Development Corporation

Agile methodologies may be coming soon to a project near you. Agile software development holds the promise of faster development, less cost, fewer defects, and increased customer value, all while maintaining a sustainable work pace in a high morale environment. As a tester, you may be wondering, ''How will agile affect me?'' We’ve all heard stories that agile methodologies have no place for testers. In this presentation, Jean Tabaka changes that perspective. She will highlight the fundamental tenets of agile software development, the project management frameworks that support these tenets, and the engineering disciplines that naturally fit in these frameworks. For some testers, the agile approach can be a jolt to their long-held beliefs of how testing should be done. Jean will help you adapt to this new world by explaining how to make tests talk, using testing as a communications mechanism, eliminating defect logs, and identifying what you will not commit to do. In addition, she will provide guidance on avoiding common traps that newly commissioned agile testers encounter.

Jean Tabaka An agile coach with Rally Software, Jean Tabaka specializes in creating, coaching, and mentoring agile software teams. Jean brings more than twenty-five years of experience in software development to the agile plate in a variety of organizational contexts including internal IT departments, ISVs, government agencies, and consulting organizations. Jean’s work has spanned industries and continents, and she has implemented both plan-driven and agile development approaches for a variety of large and small ventures. A Certified Scrum Master, Certified Scrum Trainer, and Certified Professional Facilitator, Jean holds a Masters in Computer Science from Johns Hopkins University and is the author of Collaboration Explained: Facilitation Skills for Software Project Leaders.

Thursday, October 19, 4:15 PM
Say Yes—or Say No? What to Do When You’re Faced with the Impossible
Johanna Rothman, Rothman Consulting Group, Inc.

The ability to communicate is a tester's—and test manager's—most important skill. Imagine this scenario. You’re a test manager. Your team is working as hard as they can. You’re at full capacity, trying to find time to test the new system your boss just gave you. And now your boss is in your office, asking you to take on one more assignment. What do you do? Say “Yes” or say “No”? Johanna Rothman shows you how to make a compelling case and communicate effectively the work you have and the work you can accomplish, making an impossible situation possible.

Johanna Rothman consults on managing high-technology product development. She uses pragmatic techniques for managing people, projects, and risk to create successful teams and projects. She’s helped a wide variety of organizations hire technical people, manage projects, and release successful products faster. Johanna is the co-author of the pragmatic Behind Closed Doors, Secrets of Great Management, author of the highly acclaimed Hiring the Best Knowledge Workers, Techies & Nerds: The Secrets & Science of Hiring Technical People, and is a regular columnist on StickyMinds.com.

Friday, October 20, 8:30 AM
Session-Based Exploratory Testing: A Large Project Adventure (Winner: Best Presentation, STAREAST 2006)
Bliss, Captaris, Inc.

Session-based exploratory testing has been proposed as a new and improved approach to software testing. It promotes a risk-conscious culture that focuses on areas where there are likely to be defects and allows for rapid course corrections in testing plans to accommodate testing “discoveries”, feature-creep, and schedule changes. How can a test manager take a highly talented manual testing team, accustomed to running test scripts, and introduce the agility of an exploratory approach? What can be done to communicate the risks inherent in feature-creep and schedule changes to senior stakeholders in a meaningful way? Bliss will demonstrate how he successfully implemented session-based exploratory testing while maintaining and even improving the code quality. Using the tool he developed (available for free download) and metrics available with this approach, stakeholders get real-time testing status reports and begin to understand their responsibilities in the process. They then learn how their decisions actually affect the quality of the product. With their new awareness, project stakeholders are more willing to negotiate changes that they might otherwise impose on the engineering teams. With session-based exploratory testing, you will discover that quality rapidly becomes everyone’s concern.
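
The real-time status reports mentioned above rest on simple session accounting. The sketch below is illustrative only (it is not the downloadable tool the abstract refers to): each session records how its minutes split across test design/execution, bug investigation, and setup, and a report aggregates the breakdown.

```python
# Each session records minutes spent on test design/execution (T),
# bug investigation (B), and setup (S). Charters and numbers are
# invented examples.
sessions = [
    {"charter": "Explore import dialog", "T": 60, "B": 20, "S": 10},
    {"charter": "Stress the search box", "T": 45, "B": 30, "S": 15},
]

def tbs_report(sessions):
    """Percentage of total session time spent in each category."""
    total = sum(s["T"] + s["B"] + s["S"] for s in sessions)
    return {k: round(100 * sum(s[k] for s in sessions) / total)
            for k in ("T", "B", "S")}

print(tbs_report(sessions))  # → {'T': 58, 'B': 28, 'S': 14}
```

A rising B percentage is exactly the kind of signal that lets stakeholders see how feature-creep and schedule pressure eat into actual testing time.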

Bliss has worked in the software industry for fifteen years, beginning as a software engineer in 1991, development manager in 1993, and entering the Quality Assurance arena in 1998. Working for Captaris since 1996, he led the Quality Assurance Department until taking the role of RightFax Engineering Manager. Studying Geology at Western Kentucky University, he worked for the Center for Cave and Karst Studies exploring and mapping beneath Bowling Green, Kentucky. Bliss received a BS degree in Computer Science and Mathematics in 1991 from Grand Valley State University. He also works with Minor Planet Research discovering Potentially Hazardous Asteroids (PHAs).

 

Below is the complete five-day schedule for the West program. A rich and varied lineup! ^_^

Tutorials for Monday, October 16, 8:30-5:00

A Essential Test Management and Planning
Rick Craig, Software Quality Engineering

The key to successful testing is effective and timely planning. Rick Craig introduces you to proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, systems, integration, and unit testing. Rick explains how to customize an IEEE-829-type test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical, risk analysis technique to prioritize your testing and help you become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and renewed energy for taking test management to the next level in your organization.

 
About the Instructor
A frequent speaker at testing conferences, Rick Craig is well received worldwide as a test and evaluation instructor with Software Quality Engineering. He has implemented and managed testing efforts on large-scale, traditional, and embedded systems, and co-authored a study that benchmarked industry-wide processes. Rick is co-author of the reference book Systematic Software Testing.


B Introduction to Systematic Testing
Dale Perry, Software Quality Engineering

All too often testers are thrown into the quality assurance/testing process without the knowledge and skills essential to perform the required tasks. To be truly effective, you first must understand what testing is supposed to accomplish and then see how it relates to the bigger project management and application development picture. After that, you can ask the right questions: What should be tested? How much testing is enough? How do I know when I’m finished? How much documentation do I need? Dale Perry details a testing lifecycle that parallels software development and focuses on defect prevention and early detection. As Dale shares the basics for implementing a systematic, integrated approach to testing software, learn when, what, and how to test—plus ways to improve the testability of your system.

 
About the Instructor
With over twenty-five years of experience in information technology, Dale Perry has been a developer, DBA, project manager, tester, and test manager. His project experience includes large system conversion, distributed systems, on-line applications, client/server, and Web applications. Dale is a seasoned instructor on subjects including software development, application design, testing and reviews, and software management.


C How to Break Software
Joe Basirico, Security Innovation, Inc.

What do you do when you are asked to test a particular feature of an application? In truth, testing theory only provides general guidelines and often falls short of helping you design a total testing strategy capable of guiding your testing activities. “How to Break Software” demonstrates a set of specific techniques you can use to effectively test any software application. With his explanation of software fault models, Joe Basirico helps you understand what software does—and how it can fail. He expands these fault models into a set of “attacks” that target the software’s most vulnerable points. Joe presents this new software testing paradigm, using real bugs in real software applications as examples. Anyone who loves breaking software will gain a lot from—and enjoy—this tutorial.
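
The "attacks" approach targets a program's vulnerable points with inputs it probably never expected. As a rough sketch (the attack list and the `parse_age` system under test are invented for illustration, not from the tutorial), a tester can drive a whole battery of hostile inputs through one harness and flag anything that fails in an uncontrolled way:

```python
# Classic attacks from the break-the-software school: empty input,
# overlong input, boundary values, and characters with special meaning.
ATTACKS = ["", "A" * 10_000, "0", "-1", "NaN", "'; DROP TABLE users;--"]

def parse_age(text):
    # Hypothetical system under test.
    value = int(text)  # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def run_attacks(sut, attacks):
    results = {}
    for attack in attacks:
        try:
            sut(attack)
            results[attack] = "accepted"
        except (ValueError, OverflowError):
            results[attack] = "rejected"       # controlled failure
        except Exception as exc:               # anything else is a bug
            results[attack] = f"BUG: {type(exc).__name__}"
    return results

report = run_attacks(parse_age, ATTACKS)
for attack, outcome in report.items():
    print(f"{attack[:20]!r}: {outcome}")
```

The interesting rows are never the "accepted" ones; they are the unhandled exception types the fault model predicted.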

 
About the Instructor
Joe Basirico has spent the majority of his professional career studying security and developing tools that assist in the discovery of security vulnerabilities. His primary responsibility at Security Innovation is to deliver the company’s security training curriculum to software teams in need of application security expertise. He has trained developers and testers from numerous world-class organizations such as Microsoft, HP, EMC, Symantec, and ING. Joe is a practitioner and researcher in the field of incorporating security into the SDLC and is a highly regarded presenter in this field.


D Managing Test Outsourcing
Martin Pol, POLTEQ IT Services BV

When outsourcing all or part of your testing efforts to a third party vendor, a special approach is required to make testing effective and controlled. Martin Pol explains the roadmap to successful outsourcing, how to define the objectives and strategy, and what tasks should be outsourced. He describes how to select your supplier and how to migrate, implement, and cope with people issues. He discusses contracts, service level agreements, compensation issues, and monitoring and controlling the outsourced test work. To help you gain a practical perspective of all the steps in the outsourcing process, Martin shares a real-life case study, including a spreadsheet-based monitoring tool. The good news for testers is that outsourcing requires more testing—not less—and that new testing jobs are coming into existence. Testing the outsourcing is becoming a very popular control mechanism for outsourcing in general.

 
About the Instructor
Martin Pol has played a significant role in helping to raise the awareness and improve the performance of testing worldwide. Martin provides international testing consulting services through POLTEQ IT Services BV. He’s gained experience by managing testing processes and implementing structured testing in many organizations in different branches.


E Becoming an Influential Test Team Leader
Randall Rice, Rice Consulting Services Inc.

Have you been thrust into the role of test team leader or are you in a test team leadership role and want to hone your leadership skills? Test team leadership has many unique challenges, and many test team leaders—especially new ones—find themselves ill-equipped to deal with the problems they face on a daily basis. The test team leader must be able to motivate and influence people while keeping the testing on track with time and budget constraints. Randall Rice focuses on how to grow as a leader, how to influence your team and those around you, and how to influence those outside your team. Learn how to become a person of influence, how to deal with interpersonal issues, and how to influence your team in building their skills and value. Discover how to communicate your value to management, how to stand firm when asked to compromise principles, and how to improve by learning from your successes and failures. Develop your own action plan to implement the things you plan to do to grow as a leader.

 
About the Instructor
Randall Rice is a leading author, speaker, and consultant in the field of software testing and software quality. A Certified Software Quality Analyst, Certified Software Tester, and Certified Software Test Manager, Randall has worked with organizations worldwide to improve the quality of their information systems and to optimize their testing processes. Randall is co-author of Surviving the Top Ten Challenges of Software Testing.


F Key Test Design Techniques
Lee Copeland, Software Quality Engineering

Go beyond basic test methodology and discover ways to develop the skills needed to create the most effective test cases for your systems. All testers know we can create more test cases than we will ever have time to run. The problem is choosing a small, “smart” subset from the almost infinite number of possibilities. Learn how to design test cases using formal techniques including equivalence class and boundary value testing, decision tables, state-transition diagrams, and all-pairs testing. Learn to use more informal approaches, such as random testing and exploratory testing, to enhance your testing efforts. Choose the right test case documentation format for your organization. Use the test execution results to continually improve your test designs.
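
Boundary-value testing, one of the formal techniques named above, picks the handful of inputs where off-by-one defects hide: just below, at, and just above each edge of a valid range. A minimal sketch (the age-field example is ours, not from the tutorial):

```python
def boundary_values(lo, hi):
    """Boundary-value cases for a valid range [lo, hi]:
    just below, at, and just above each boundary."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# A field that accepts ages 18..65 yields six interesting test inputs
# out of the near-infinite possibilities:
print(boundary_values(18, 65))  # → [17, 18, 19, 64, 65, 66]
```

This is the point of formal design techniques: a small, "smart" subset chosen systematically rather than by intuition.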

 
About the Instructor
Lee Copeland has more than thirty years' experience in the field of software development and testing. He has worked as a programmer, development director, process improvement leader, and consultant. He has developed and taught a number of training courses focusing on software testing and development issues based on his experience and is the author of A Practitioner's Guide to Software Test Design. Lee is the managing technical editor for Better Software magazine and is a regular columnist for StickyMinds.com.


G Implementing a Test Automation Framework
Linda Hayes, Worksoft, Inc.

Learn how to accelerate your test automation effort, dramatically shorten the learning curve, allow non-technical analysts to develop and execute automated tests, and even simplify test library management and maintenance. Linda Hayes presents a guided tour through six levels of test automation, from beginner to advanced implementation approaches, with analyses of the advantages and disadvantages of each. The course provides detailed, step-by-step instructions for how to select and implement a framework. Learn how to use this practical and proven table-driven approach with any commercial or internally developed testing tool for Web, client/server, mainframe, and character-based applications. Linda provides real world examples, new knowledge, and skills you can use as the framework for a new automation project or to make an existing project more successful.
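
The core of a table-driven (keyword-driven) framework is small enough to sketch: tests are rows of keyword-plus-arguments that non-technical analysts can author, and a thin engine dispatches each row to an action library. The keywords, table format, and logging below are invented for illustration; a real framework would bind the actions to a commercial test tool.

```python
# Action library: each keyword maps to an implementation. Here the
# "implementations" just log, standing in for real tool calls.
log = []
ACTIONS = {
    "open":   lambda target: log.append(f"open {target}"),
    "type":   lambda field, text: log.append(f"type {text!r} into {field}"),
    "verify": lambda field, expected: log.append(f"verify {field} == {expected!r}"),
}

# A test case as a table: one row per step, no code required to write it.
TABLE = [
    ("open",   "login_page"),
    ("type",   "username", "alice"),
    ("verify", "title", "Welcome"),
]

def run(table):
    for keyword, *args in table:
        ACTIONS[keyword](*args)

run(TABLE)
print(len(log))  # → 3 rows executed
```

Maintenance lives in the action library, not in the tests, which is why this style shortens the learning curve for analysts.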

 
About the Instructor
Linda Hayes is Chief Technology Officer at Worksoft, Inc., a software company specializing in test automation. She has more than twenty years of experience in software quality and testing and holds degrees in accounting, tax, and law. Linda is a frequent speaker and award-winning author of books and articles, including the Automated Testing Handbook and regular columns for StickyMinds.com, Computerworld, and Datamation.


H Agile Software Product Testing Using Fit and FitNesse  
Rob Myers, Net Objectives

Thorough testing of a use-case (or story) is critical to the success of any software product. Testers on an agile team play a pivotal role, but they must first revisit their own practices and preconceptions about testing. Rob Myers will introduce modified practices and powerful new tools, which allow for stringent, automated requirements testing. This agile approach alters the way testers view software, software developers, and their own careers. Rather than spending weeks stepping manually through point-and-click scenarios, testers will again find professional joy and intriguing challenge in their day-to-day activities.

A laptop is required for this course.

 
About the Instructor
Rob Myers has nearly twenty years of professional experience in software development, including projects for industry leaders in medical, aerospace, and financial services. In the late 90s, Rob became an eXtreme Programming coach and traveled throughout the country assisting teams with agile software development practices and object-oriented design techniques. Rob brings to the classroom his passion for value-oriented software development, team development, and sane work environments. He currently teaches Test-Driven Development and Refactoring, Effective .NET, and the new, cutting edge Test-Driven ASP.NET course.


I How to Build, Support, and Add Value to Your Test Team
Lloyd Roden, Grove Consultants

Creating a test team is one thing . . . maintaining an effective and efficient team is quite another. Focusing on a people-oriented approach to software testing, Lloyd Roden examines how to build—and retain—successful test teams within an organization. Discover the characteristics of successful testers and test managers and the qualities you should look for to recruit the right people. Lloyd identifies seven key factors to motivate a test team, including establishing career paths for testers. Discover how a Test Manager can successfully promote the value of testing within the organization, encourage good working relations with Development and other departments, and become a ''trusted advisor'' to Senior Management. Discuss relevant issues facing the people side of test management and take back utilities, spreadsheets, and templates to help you build a successful test team.

 
About the Instructor
Lloyd Roden has been involved in the software industry since 1980, studying computer science at Leicester University. He has worked as a programmer with Pearl Assurance, as a Senior Independent Test Analyst for Royal Life, and a project manager for the Product Assurance department at Peterborough Software. In 1999 he joined Grove Consultants where he provides consultancy and training in all aspects of testing, specializing in test management, people issues in testing, and test automation. Lloyd is a lively and enthusiastic speaker at conferences and seminars including EuroSTAR, AsiaSTAR, STAREAST, Software Test Automation, Test Congress, and Unicom conferences, as well as Special Interest Groups in Software Testing in many different countries.

 

 


J Microsoft® Visual Studio® 2005 Team System for Testers  This Session is a Workshop!
Chris Menegay, Notion Solutions, Inc.

Microsoft® Visual Studio® 2005 Team System is an entirely new series of productive, integrated lifecycle tools that help test and development teams communicate and collaborate more effectively. In this hands-on tutorial you will gain a comprehensive knowledge of the testing capabilities available to you with Visual Studio Team System. Chris Menegay will help you understand the challenges the test teams face and how Visual Studio Team System can help. Learn how to create and execute functions including defect reporting, defect tracking, and manual test execution, as well as Web, load and unit tests. Chris will demonstrate how to use reporting features and create quality reports to analyze the status of projects. You will become familiar with Team Foundation version control, where all tests are stored and historical changes are tracked. The testing portions of this course are taught using a shared Team Foundation Server, which allows students to get acquainted with the new collaborative features of Team System. This course is built using Team Foundation Server 1.0 and Visual Studio Team Suite.

Limited seating, register early!

 
About the Instructor
Chris Menegay is a Principal Consultant for Notion Solutions, Inc. He has been helping clients develop business applications for over ten years. Chris works with customers to help with Team System adoption, deployment, customization and learning. In his role with Notion Solutions, Chris has written Team System training for Microsoft that was used to train customers using the beta versions of Team System. Chris holds his MCSD.NET & MCT certification and is a member of the Microsoft South Central District Developer Guidance Council. Chris is a Team System MVP, a Microsoft Regional Director and a member of the INETA speaker's bureau.


K Performance Testing Secrets in Context  This Session is a Workshop!
Scott Barber, PerfTestPlus, Inc.

Are you performance testing a regulated, safety-critical application or a corporate Web site? Do you have limited time to conduct your tests? Do you have formal, testable performance requirements? Do you have empirical usage data or marketing hopes and dreams? Through a series of hands-on exercises derived from real projects, Scott Barber demonstrates techniques to effectively plan, design, and manage performance testing in both agile and regulated contexts. Attendees will propose their solutions and then compare and contrast them with the implemented solutions. Specific topics include determining appropriate performance testing goals and requirements, planning for effective and efficient performance investigation and validation, designing tests that increase confidence in results, and managing performance testing activities. Attendees will leave with examples, counter-examples, experience, and a full toolkit for planning, designing, and managing performance tests for a wide variety of contexts.
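
Whatever the context, performance results reduce to distributions of response times, and the usual summary is a median plus a high percentile rather than an average. A generic sketch of that measurement step (this is our illustration, not material from the workshop):

```python
import statistics
import time

def measure(fn, iterations=50):
    """Time repeated calls and summarize response times: the raw
    material of any performance-test report."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)  # ms
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p90_ms": samples[int(0.9 * len(samples)) - 1],  # 90th percentile
    }

stats = measure(lambda: sum(range(10_000)))
print(sorted(stats))  # → ['median_ms', 'p90_ms']
```

Reporting the 90th percentile alongside the median keeps one slow outlier from hiding in (or dominating) an average, which matters when the requirement is formal and testable.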

Limited seating, register early!

 
About the Instructor
Scott Barber is the CTO of PerfTestPlus, Inc., and co-founder of the Workshop on Performance and Reliability (WOPR). A recognized expert in performance testing and analysis, he combines experience and passion for solving performance problems with a scientific approach to produce accurate results. Scott is a frequent speaker and writer of articles including a monthly column for Software Test and Performance Magazine.


Tutorials for Tuesday, October 17, 8:30-5:00

L Model-Based Testing: The Dynamic Answer to Test Automation
Harry Robinson, Google, Inc.

People should think—machines should test. One way to achieve high-quality software releases while maintaining your sanity is to get your test machines to do much of the ''heavy lifting'' of creating and executing tests. Model-based testing does exactly that. Model-based testing automatically generates tests from a description of an application's desired behavior. These tests are cost-effective and more dynamic than traditional scripted automation. But the modeling approach requires greater tester design skills, and it calls for a re-thinking of measuring the test team's contribution and where test fits into the development process! Harry Robinson introduces you to state machines, graph traversals, combinatorics, and heuristic oracles that will improve your testing skills and your software's quality. Learn how to generate and automatically execute millions of tests for GUIs, APIs, and Web applications.
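
A minimal sketch of the generation step: describe behavior as a finite state machine, then let a graph traversal enumerate action sequences, each of which becomes an executable test. The login model and action names below are invented for illustration; real model-based tools add oracles and far smarter traversal strategies.

```python
from collections import deque

# Model of a hypothetical login dialog: states are dialog conditions,
# edges are user actions leading to the next state.
MODEL = {
    "LoggedOut":  {"enter_password": "Validating"},
    "Validating": {"password_ok": "LoggedIn", "password_bad": "LoggedOut"},
    "LoggedIn":   {"logout": "LoggedOut"},
}

def generate_tests(model, start, max_len=4):
    """Breadth-first traversal: every action sequence up to max_len
    becomes a generated test case."""
    tests, queue = [], deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if path:
            tests.append(path)
        if len(path) < max_len:
            for action, nxt in model[state].items():
                queue.append((nxt, path + [action]))
    return tests

tests = generate_tests(MODEL, "LoggedOut")
print(len(tests))  # → 8 test sequences from a 3-state model
```

Raise `max_len` and the same three-line model yields thousands of tests, which is exactly the "machines should test" economics the abstract describes.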

 
About the Instructor
Harry Robinson is a Software Engineer in Test for Google. He coaches teams around the company in test generation techniques. His background includes ten years at AT&T Bell Labs, three years at Hewlett-Packard, and six years at Microsoft before joining Google in 2005. While at Bell Labs, he created a model-based testing system that won the 1995 AT&T Award for Outstanding Achievement in the Area of Quality. At Microsoft, he pioneered the test generation technology behind Test Model Toolkit, which won the Microsoft Best Practice Award in 2001. He holds two patents in software test automation methods, maintains the site www.model-based-testing.org, and speaks and writes frequently on software testing and automation issues.


M Measurement and Metrics for Test Managers
Rick Craig, Software Quality Engineering

To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both development and testing processes. The difficulty of collecting, analyzing, and using metrics is complicated further because many developers and testers feel that the metrics will be used “against them.” Rick Craig addresses common metrics: measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the benefits and pitfalls of each metric and how you can use these measurements to determine when to stop testing. Rick offers guidelines for developing a test measurement program, rules of thumb for collecting data, and tips on avoiding “metrics dysfunction.” Various metrics paradigms including Goal-Question-Metric are addressed with a discussion of the pros and cons of each. Attendees are urged to bring their metrics problems and issues to use as discussion points.
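
Two of the metrics named above have simple arithmetic behind them, sketched here with invented numbers (the formulas are standard; the figures are illustrative only):

```python
def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE = defects removed before release / total defects known.
    Tracked per release, since post-release defects trickle in over time."""
    return found_before_release / (found_before_release + found_after_release)

def defect_density(defects, ksloc):
    """Defects per thousand lines of code (KSLOC)."""
    return defects / ksloc

# 180 defects found in test, 20 escaped to the field:
print(round(defect_removal_efficiency(180, 20), 2))  # → 0.9
# 200 defects against a 50-KSLOC product:
print(defect_density(200, 50.0))                     # → 4.0
```

Both numbers are only as good as the defect counts feeding them, which is one reason testers worry the metrics will be used "against them."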

 
About the Instructor
A frequent speaker at testing conferences, Rick Craig is well received worldwide as a test and evaluation instructor with Software Quality Engineering. He has implemented and managed testing efforts on large-scale, traditional, and embedded systems, and co-authored a study that benchmarked industry-wide processes. Rick is co-author of the reference book Systematic Software Testing.


N How to Break Software Security
Joe Basirico, Security Innovation, Inc.

Software testing is a discipline that has become increasingly more competent at finding requirements-based defects. As an industry, we’ve developed and nurtured test harnesses, tools, techniques, and talents to find many bugs before software is ever released. Security testing, however, is a different story. Security bugs tend to manifest themselves as extra functionality that does not violate the requirements but nevertheless can produce catastrophic holes in software. Joe Basirico introduces a fault model to help testers conceptualize these types of bugs and takes you through a set of software attacks that has proven effective at exposing security bugs. Take back with you a full arsenal of software attacks and the tools you need to detect security vulnerabilities in your software—before hackers discover them for you.

 
About the Instructor
Joe Basirico has spent the majority of his professional career studying security and developing tools that assist in the discovery of security vulnerabilities. His primary responsibility at Security Innovation is to deliver the company’s security training curriculum to software teams in need of application security expertise. He has trained developers and testers from numerous world-class organizations such as Microsoft, HP, EMC, Symantec, and ING. Joe is a practitioner and researcher in the field of incorporating security into the SDLC and is a highly regarded presenter in this field.


O Just In Time Testing
Robert Sabourin, AmiBug.com, Inc.

Turbulent Web development projects experience daily requirements changes, as well as changes to user interfaces and the continual integration of new functions, features, and technologies. Robert Sabourin shows you effective techniques to keep your testing efforts on track while reacting to fast-changing priorities, technologies, and requirements. Topics include: test planning and organization techniques, scheduling and tracking, blending scripted and exploratory testing, identifying key project workflows, and using testing and test management tools. Learn how to create key decision making workflows for test prioritization and bug triage, adapt testing focus as priorities change, and identify technical risks and respect business importance.

 
About the Instructor
Robert Sabourin has over twenty years of management experience, leading teams of software development professionals. A well-respected member of the software engineering community, he has managed, trained, mentored, and coached hundreds of top professionals in the field and frequently speaks at conferences and writes on software engineering, SQA, testing, management, and internationalization. The author of I am a Bug!, the popular software testing children’s book, Robert is an adjunct professor of Software Engineering at McGill University.


P Test Process Improvement
Martin Pol, POLTEQ IT Services BV

What is the maturity of your testing process? How do you compare with other organizations or with industry standards? Join Martin Pol for an introduction to the Test Process Improvement (TPI®) model, an industry standard for test process maturity assessment. Improving your testing requires understanding twenty key test process areas, your current position in each of these areas, and the next steps to take for improvement. Many organizations want to focus on achieving the highest level of maturity without first creating the foundation required for success. Rather than guessing what to do next, the TPI® model guides the way. Using real-world TPI® assessments he has performed in a variety of organizations, Martin describes an assessment approach that is suitable for both smaller, informal organizations and larger, formal companies.

Each attendee will receive a copy of the reference book, Test Process Improvement, by Tim Koomen and Martin Pol.


TPI® is a registered trademark of Sogeti USA LLC.

 
About the Instructor
Martin Pol has played a significant role in helping to raise the awareness and improve the performance of testing worldwide. Martin provides international testing consulting services through POLTEQ IT Services BV. He has gained experience by managing testing processes and implementing structured testing in many organizations across different industries.


Q Establishing a Fully-Integrated Test Automation Architecture
Edward Kit, Software Development Technologies

The third generation of test automation has proven to be the best answer to the current software quality crisis—a shortage of test resources to validate increasingly complex applications with extremely tight deadlines. This tutorial describes the steps to design, manage, and maintain an overall testing framework using a roles-based team approach and a state-of-the-practice process, along with the key phases of test planning, test design, building and automating tests, executing tests, and reporting results. While demonstrating commercial examples of first-, second-, and third-generation test automation tools, Edward Kit provides tips for creating a unified automation architecture to address a wide variety of test environment challenges, including Web, client/server, mainframe, API, telecom, and embedded architectures.

 
About the Instructor
Founder and president of Software Development Technologies, Edward Kit is a recognized expert in the area of software testing and automation. His best-selling book, Software Testing in the Real World: Improving the Process, has been adopted as a standard by many companies, including Sun Microsystems, Exxon, Pepsico, FedEx, Wellpoint, Southwest Airlines, and Cadence Design Systems.


R Test Estimation Using Test Point Analysis  
Ruud Teunissen, POLTEQ IT Services BV

How do you estimate your test effort? And how reliable is your estimate? Ruud Teunissen discusses the implementation of a practical and useful test estimation technique directly related to your test process. The basic elements of a reliable test estimate are the size of the system under test (or the changes to it), the test strategy, available test design techniques, staff productivity, the software development process, and your environment. Discover the possibilities to grow from a budget based on “experiences in similar projects” or a “predefined percentage” of the total project budget to more formal estimation techniques such as Test Point Analysis (TPA®). TPA® is a useful, reliable, and practical instrument for estimating black-box tests as described in the Test Management Approach (TMap®). Ruud includes real-life experiences from different projects he has estimated and describes additional test effort estimation methods to employ depending on the maturity of your test process and your software development methods.
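The actual TPA® computation involves more factors than can be shown here; purely as an illustration of the general shape of such a model-based estimate (the function name, factor names, and all weights below are invented), effort grows with system size, chosen test depth, and team productivity:

```python
def estimate_test_hours(test_points, strategy_factor, hours_per_point):
    """Illustrative only: effort = size of the test job (test points)
    scaled by how thoroughly you intend to test (strategy_factor)
    and how fast the team works (hours_per_point)."""
    return test_points * strategy_factor * hours_per_point

# e.g. 200 test points, a 25% heavier-than-normal test strategy,
# and 0.8 staff-hours per test point:
print(estimate_test_hours(200, 1.25, 0.8))  # 200.0 hours
```

The contrast with a “predefined percentage of the project budget” is that each input here is something you can measure, challenge, and refine.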

TPA® and TMap® are registered trademarks of Sogeti USA LLC.

 
About the Instructor
In the testing world since 1989, Ruud Teunissen has held numerous test functions in different organizations and projects: tester, test specialist, test consultant, test manager, and business unit manager of all testing. Ruud is co-author of Software Testing—A Guide to the TMap® Approach and is a frequent speaker at national and international conferences and workshops. He was a member of the program committee for Quality Week Europe and EuroSTAR. Ruud is currently an International Test Consultant at POLTEQ IT Services BV.


S Requirements Based Testing
Richard Bender, Bender RBT, Inc.

Testers use requirements as an oracle to verify the success or failure of their tests. Richard Bender presents the principles of the Requirements Based Testing methodology, in which the software's specifications drive the testing process. Richard discusses proven techniques to ensure that requirements are accurate, complete, unambiguous, and logically consistent. Requirements based testing provides a process for first testing the integrity of the specifications. It then provides the algorithms for designing an optimized set of tests sufficient to verify the system from a black-box perspective. Find out how to design test cases to validate that the design and code fully implement all functional requirements. Based on the situation and their respective strengths and weaknesses, determine which test design strategies to apply to your applications: cause-effect graphing, equivalence class testing, orthogonal pairs, and more. By employing a requirements based testing approach, you will be able to quantify test completion criteria and measure test status.

 
About the Instructor
Involved in test and evaluation since 1969, Richard Bender has authored and co-authored books and courses on quality assurance and testing, software development lifecycles, analysis and design, software maintenance, and project management. Richard has worked in both large and small corporations with international clientele from a wide range of financial, military, government, and academic institutions.


T Behind Closed Doors: Secrets of Great Test Management  
Johanna Rothman, Rothman Consulting Group, Inc., and Esther Derby, Esther Derby Associates, Inc.

Great management happens one interaction at a time. Many of those interactions happen behind closed doors—in one-on-one meetings. So if great management happens in private, how do people learn how to be great managers? Great managers consistently apply a handful of simple—but not necessarily easy—practices. Management consultants Johanna Rothman and Esther Derby reveal management practices they—and their clients—have found useful and will help you learn how to perform them. Bring your big management issues and get ready to practice the skills you need to solve them. Learn to conduct effective one-on-one meetings, uncover obstacles to your success, know when and how to coach, and provide feedback. In this interactive workshop, Johanna and Esther explore how managers can create an environment for success, keep progress visible, and coach their team to be the best they can be.


 
About the Instructors
Johanna Rothman consults on managing high-technology product development. She uses pragmatic techniques for managing people, projects, and risk to create successful teams and projects. She’s helped a wide variety of organizations hire technical people, manage projects, and release successful products faster. Johanna is the co-author of Behind Closed Doors: Secrets of Great Management, author of the highly acclaimed Hiring the Best Knowledge Workers, Techies & Nerds: The Secrets & Science of Hiring Technical People, and is a regular columnist on StickyMinds.com.

 
Esther Derby is one of the rare breed of consultants who blend technical and managerial issues with the people-side issues. Project retrospectives and project assessments are two of Esther's key practices that serve as tools to start a team's transformation. Recognized as one of the world's leaders in retrospective facilitation, Esther often receives requests to work with struggling teams. Esther is one of the founders of the Amplify Your Effectiveness (AYE) Conference.

 


U Risk Based Testing  
Julie Gardiner, QST Consultants Ltd.

Risks are endemic in every phase of every project. The key to project success is to identify, understand, and manage these risks effectively. However, risk management is not the sole domain of the project manager, particularly with regard to product quality. It is here that the effective tester can significantly influence the project outcome. Shortened time scales, particularly in the latter stages of projects, are a frustration that most of us are familiar with. In this tutorial, Julie Gardiner explains how effective risk based testing can still shape the quality of the delivered product in spite of such time constraints. Join Julie as she reveals how various approaches to product risk management can be applied to a variety of differing organizational challenges. Receive practical advice—gained through interactive exercises—on how to apply risk management techniques throughout the whole testing lifecycle, from planning through to execution and reporting. Learn a practical way to apply risk analysis to testing.

 
About the Instructor
Founder and Principal Consultant of QST Consultants Ltd., Julie Gardiner has more than fourteen years of experience in the IT industry including time spent as an analyst programmer, Oracle DBA, and Project Manager. Julie works on the ISEB examination panel and is a committee member for the BCS SIGIST. Julie is a regular speaker at software testing conferences, including STAREAST, STARWEST, EuroSTAR, ICSTest, and the BCS SIGIST.


Wednesday, October 18, 2006, 11:30 AM
 W1  TEST MANAGEMENT
The Nine “Forgettings”
Lee Copeland, Software Quality Engineering

People forget things. Simple things like keys, passwords, and the names of friends long ago. People forget more important things like passports, anniversaries, and backing up data. But Lee Copeland is concerned with things that the testing community is forgetting—forgetting our beginnings. We forget the grandfathers of formal testing and the contributions they made. We forget organizational context, the reason we exist and where we fit in our company. We forget to grow, to learn, and to practice the latest testing techniques. And we forget process context, the reason that a process was first created but which may no longer exist. Join Lee for an explanation of these nine “forgettings”, the negative effects of each, and how we can use them to improve our testing, our organization, and ourselves.

• Learn how we must constantly rediscover
• Understand that each “forgetting” limits our personal and organizational ability
• Discover the power we have to grow and to improve


 W2  TEST TECHNIQUES
Back to the Beginning: Testing Principles Revisited
Erik Petersen, Emprove

In 1976, Glenford Myers listed a set of testing principles in his book Software Reliability. Computing has changed dramatically since those days! iPods have more computing power than the Apollo spacecraft. Testing has even been recognized as a profession—but testing approaches have not changed substantially since Myers’ book. Erik Petersen examines classic testing principles to help us understand what still works and what doesn't. He compares some of the originals with newer principles, including those from the international ISTQB™ testing syllabus. Along the way, Erik takes a light-hearted look at the state of software reliability today.

• Review old testing principles that are still applicable
• Consider new principles that the first generation of testers missed
• Evaluate the quality of software testing today


 W3  TEST AUTOMATION
Positioning Your Test Automation Team as a Product
Satya Mantena, Nielsen Media Research

Test automation teams are typically created with the expectation of facilitating faster testing and higher product quality. To achieve these goals, the test automation team must overcome many challenges—stale test data, burdensome test script maintenance, too-frequent product upgrades, insufficient resources, and unfamiliarity with the systems under test. Satya Mantena describes a creative approach to test automation that overcomes these challenges. The first step is implementing keyword-driven testing. Satya demonstrates how the keyword-driven approach is implemented, showing that it is not just theory but has been “proven in action.” Satya concludes by showing how positioning the test automation team as a “product,” rather than as a central service or embedded within each testing team, results in better testing.

• Examine the difference between a service and a product
• Increase the probability of a successful test automation effort
• Learn how to reduce time while increasing the success of test automation
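To make the keyword-driven idea above concrete, here is a minimal sketch (not the tool from the session; the keywords, the shopping-cart example, and the table format are all invented): test steps become data, and a small interpreter maps each keyword to code.

```python
# Keyword implementations: each takes the shared test state plus one argument.
def do_login(state, user):
    state["user"] = user

def do_add(state, item):
    state.setdefault("cart", []).append(item)

def do_check_cart_size(state, expected):
    assert len(state["cart"]) == int(expected), "cart size mismatch"

KEYWORDS = {"login": do_login, "add": do_add,
            "check_cart_size": do_check_cart_size}

def run(table):
    """Interpret a keyword table: each row is a (keyword, argument) pair.
    Non-programmers can author rows; only the keyword library is code."""
    state = {}
    for keyword, arg in table:
        KEYWORDS[keyword](state, arg)
    return state

result = run([("login", "alice"), ("add", "book"), ("add", "pen"),
              ("check_cart_size", "2")])
print(result["cart"])  # ['book', 'pen']
```

The organizational benefit claimed in such approaches is that the keyword table can be maintained by testers who are not tool specialists, while the keyword library is maintained centrally.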


 W4  SECURITY TESTING
Security Testing: From Threat to Attack to Fix
Julian Harty, Google, Inc.

Based on his years of experience in security testing, Julian Harty believes that most system stakeholders don’t understand—or even recognize—the need for security testing. Perhaps they will pay an external consultant to perform an occasional security audit, but they do not recognize the need for ongoing security testing. Julian will explain why security testing is vital, though often unappreciated. He will describe the security testing lifecycle, from threat, to attack, to fix. Julian shows how to gather information to become productive quickly if we’re invited late to security testing. Julian prefers that we prevent attacks but also describes how to repair damage—to both data and reputation—if your systems are attacked. Join this session to begin security testing at your organization.

• Examine the typical software security issues lifecycle
• Determine how to get involved in security testing without a huge, up-front investment
• Learn how to recognize your limitations, and when to get help


 W5  SPECIAL TOPICS
Software Disasters and Lessons Learned
Patricia McQuaid, California Polytechnic State University

Software defects come in many forms—from those that cause a brief inconvenience to those that cause fatalities. Patricia McQuaid believes it is important to study software disasters, to alert developers and testers to be ever vigilant, and to understand that huge catastrophes can arise from what seem like small problems. Examining such failures as the Therac-25, Denver airport baggage handling, the Mars Polar Lander, and the Patriot missile, Pat focuses on factors that led to these problems, analyzes the problems, and then explains the lessons to be learned that relate to software engineering, safety engineering, government and corporate regulations, and oversight by users of the systems.

• Learn from our mistakes—not in generalities but in specifics
• Understand the synergistic effects of errors
• Distinguish between technical failures and management failures



Wednesday, October 18, 2006, 1:45 PM
 W6  TEST MANAGEMENT
Quantifying the Value of Your Testing to Management
Arya Barirani, Mercury

Congratulations, you're a true testing expert. You know all there is to know about test planning, design, execution and reporting, performance tests, usability tests, regression tests, agile, SCRUM, and all the rest. But it’s still possible that your IT executives and business stakeholders do not fully understand the value of your work. It's time to communicate with them in a language they understand: Return on Investment (ROI). Arya Barirani will show you how to calculate the ROI of common test activities including test automation, defect reduction, and downtime prevention; how to create reports for maximum effect; and how to evangelize the value of your testing efforts. You will learn how to make better decisions about investments like strategic sourcing, lab infrastructure, and staffing through better quantification of their business value.

• Learn how to use ROI as a metric to demonstrate the value of testing
• Consider reporting techniques for maximum executive buy-in
• Discover marketing (yes, marketing!) techniques for promoting your testing activities
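The ROI arithmetic behind such a pitch is simple; as a rough sketch (not Mercury's method; the function and all dollar figures are invented for illustration):

```python
def automation_roi(manual_cost_per_cycle, automated_cost_per_cycle,
                   cycles, automation_investment):
    """ROI = (savings - investment) / investment, where savings come
    from replacing manual regression cycles with automated ones."""
    savings = (manual_cost_per_cycle - automated_cost_per_cycle) * cycles
    return (savings - automation_investment) / automation_investment

# e.g. $5,000 manual vs. $500 automated per regression cycle,
# 20 cycles per year, $60,000 spent building the automation:
print(automation_roi(5000, 500, 20, 60000))  # 0.5, i.e. a 50% return
```

The hard part, as the session suggests, is not the formula but defending the inputs: cost per cycle and the true (ongoing) cost of the automation itself.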


 W7  TEST TECHNIQUES
Implementing a Final Regression Testing Process
Jeff Tatleman, Avaya, Inc.

After applications move into production, it is vital that subsequent additions or modifications are thoroughly tested and that the entire system is re-tested to ensure that it still functions after these changes. This process, called final regression testing, should be repeated for every new release. Many organizations that have attempted to implement a final regression test process have discovered that it isn’t as easy as it sounds. In Jeff Tatleman’s presentation you will learn a step-by-step approach to ensure successful implementation of a process that meets your technical needs and is attractive to your management. These steps include documenting manual test cases, creating a dedicated testing environment, standardizing test data, and using automation.

• Analyze risk to ensure proper regression test coverage
• Use manual tests to drive test automation
• Simulate the process of migrating into production


 W8  TEST AUTOMATION
Ruby and WATIR: Your New Test Automation Tools
Kalen Howell, LexisNexis

Ready to start writing your own test scripts? Not sure of what tools to use? Kalen Howell discovered Ruby, a powerful scripting language that is easy to learn. Using Ruby led Kalen to WATIR, an open source tool written in Ruby. WATIR is used to drive Web sites through Internet Explorer just as a user would. Just by following a few examples, Kalen was able to create automated test scripts in a matter of minutes. Learning more about Ruby enabled Kalen to write more robust scripts. Ruby connects to databases, writes XML, creates and reads data files, and can be used to create customized libraries. Combining the powerful features of WATIR with the robust and easy to learn language of Ruby gives the tester powerful tools for automated scripting.

• Discover how Ruby can be used as a powerful scripting language
• Explore how WATIR libraries can be used to effectively test your Web applications
• Learn how Ruby and WATIR are ideal in both agile and traditional development processes


 W9  SECURITY TESTING
Testing for the Five Most Dangerous Security Vulnerabilities
Joe Basirico, Security Innovation, Inc.

Today, secure applications are vital for every organization. Security attacks seem to come from every corner of the globe. If your applications are breached, your organization could lose millions. Currently, the biggest holes in IT security are found in applications rather than system or network software. Perimeter and network defenses are not enough to protect your organization from attacks. Unfortunately, most development and testing teams do not have the expertise or the tools they need to properly secure their applications. Joe Basirico, an experienced software security expert, will highlight the top five security vulnerabilities that testers face today and offer practical how-to tips for testing their applications with security in mind.

• Address security issues before the product ships
• Understand the trade-offs among functionality, usability, and security
• Select system designs that are “security safe”


 W10  SPECIAL TOPICS
Building a Fully-Virtualized Test Lab
Ian Robinson, VMware, Inc.

For many organizations, creating a testing environment to replicate every combination of hardware and software that their users have is cost prohibitive. If your organization faces this challenge, the solution may be to create an infrastructure that is based upon virtual machines. Virtualization allows a single physical server to run the workloads of many different servers. Virtual test environments save time and money and support sophisticated test cases that are not possible in a traditional physical environment. For multi-tiered systems, an interconnected set of servers (application server, Web server, database server, domain controller, and firewall) can be implemented within a family of virtual machines running on a single system. Ian Robinson describes how to transition your test systems from a physical to virtual infrastructure, resulting in a far smaller and more cost-effective number of systems, increased manageability, and the ability to test across a broader range of platforms.

• Learn how to create virtual machine “libraries” of common platforms
• Discover how to reproduce defects in a virtualized environment
• Use only one system to support multi-tier testing configurations



Wednesday, October 18, 2006, 3:00 PM
 W11  TEST MANAGEMENT
Step Away From the Tests: Take a Quality Break
John Lambert, Microsoft

Designing, implementing, and executing tests is critically important, but testers sometimes need to take a break. John Lambert describes four un-testing techniques that can quickly improve quality: watching bugs, helping developers, talking to other testers, and increasing positive interactions. Watching bugs enables us to see defect patterns that might otherwise go unnoticed. Helping developers allows you to understand their process and help them understand yours. Talking to other testers helps you learn new techniques and share your experience. Increasing positive interactions builds a cohesive team that works together to solve problems. Join John as he presents ways to easily incorporate these un-testing activities into your schedule to help improve the quality of your products.

• Learn why testers need to step away from their daily testing activities
• Make a positive impact on your systems’ quality
• Add these activities to your schedule


 W12  TEST TECHNIQUES
A Risk-Based Approach to End-to-End System Testing
Marie Was, CNA

You’ve performed unit, integration, functional, performance, security, and usability testing. Are you ready to go live with this new application? Not unless you’ve performed end-to-end system testing. What’s so important about end-to-end testing? It is the only testing that exercises the system from the users’ point of view. Marie Was presents a case study detailing the introduction of a new insurance product in her organization. Their first step was to create an end-to-end system diagram showing how transactions and data flowed through the system. Next, the risk associated with each of those flows was evaluated. Test cases, and their order of execution, were derived based on the risks identified through interviews of subject matter experts and past experience. A “subway map” identifying the various flows was created and color-coded to help non-technical business stakeholders understand the testing approach. This method was highly effective in both planning the tests and communicating the approach to important stakeholders.

• Create an end-to-end system diagram
• Create a “subway map” to help stakeholders understand critical functionality
• Learn a highly successful, customer oriented testing process
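The risk evaluation step above is commonly scored as likelihood times business impact; as a minimal sketch of that idea (the flow names and ratings below are invented, not from the CNA case study), test order then follows the score:

```python
# Each end-to-end flow gets a likelihood-of-failure and a
# business-impact rating, e.g. on a 1-5 scale from SME interviews.
flows = {
    "submit claim":   {"likelihood": 3, "impact": 5},
    "issue policy":   {"likelihood": 2, "impact": 5},
    "update address": {"likelihood": 2, "impact": 1},
}

def risk_score(flow):
    return flow["likelihood"] * flow["impact"]

# Test the riskiest flows first.
test_order = sorted(flows, key=lambda name: risk_score(flows[name]),
                    reverse=True)
print(test_order)  # ['submit claim', 'issue policy', 'update address']
```

The color-coded “subway map” in the talk serves the same purpose for stakeholders that this ordering serves for the test team: making the priorities visible and debatable.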


 W13  TEST AUTOMATION
Introducing Test Automation: The Pain and Gain of the First Year
Andy Redwood, Neutrino Systems

Are you contemplating moving from totally manual testing to automated testing? Andy Redwood shares a case study of a leading financial organization in the UK that did exactly that. Their goal was to automate testing using the latest tools across multiple projects. They have just finished the first year of the project and have learned some valuable lessons. Andy will describe this organization’s starting position and the goals they set; a step-by-step tour through the processes, tasks, and activities they performed; the new roles that were needed; and how the organizational structure was changed to support automation. Andy will also share the mistakes they made with decisions, processes, environments, and automation and how they dealt with them. Overall, after the first year, they have laid a foundation for future success based on sound automation principles.

• Learn how to create an automation strategy
• Analyze the team structures and people you will need
• Discover the issues, risks, and solutions to automation problems


 W14  SECURITY TESTING
Testing Web Applications for Security Defects
Brian Christian, SPI Dynamics

Approximately three-fourths of today’s successful system security breaches are perpetrated not through network or operating system security flaws, but through customer-facing Web applications. How can you ensure that your organization is protected from holes that let hackers invade your systems? Only by thoroughly testing your Web applications for security defects and vulnerabilities. Brian Christian describes the three basic security testing approaches available to testers—source code analysis, manual penetration testing, and automated penetration testing. Brian also explains the key differences in these methods, the types of defects and vulnerabilities that each detects, and the advantages and disadvantages of each. Learn how to get started in security testing and how to choose the best strategy for your organization.

• Understand the basic security vulnerabilities in Web applications
• Discover the skills needed in security testing
• Learn who should be performing security assessments


 W15  SPECIAL TOPICS
ISTQB™ Certification: Setting the Standard for Tester Professionalism
Rex Black, RBCS and ISTQB™ President

A good test certification program confirms, through objective exams, the knowledge and professional capabilities of software testers. The International Software Testing Qualifications Board (ISTQB™) was formed as a non-profit organization to develop and promote just such a certification throughout the world. The ISTQB™ comprises volunteer representatives from eighteen national boards, including the United States, United Kingdom, Germany, Sweden, Israel, India, Japan, Korea, Poland, and other European countries. Rex Black, current President of both the ISTQB™ and the US national board (ASTQB), presents an overview of the first truly international tester certification program. He describes the development of the standard syllabus outlining required knowledge and skills and presents an overview of the three levels of certification available to professional testers.

• Learn about the ISTQB™—an open, international tester certification program
• Discover how the syllabus is the distilled wisdom of many experts including practitioners, consultants, trainers, and academicians
• Participate in a program with over 32,000 certified testers around the world


 

 

Thursday, October 19, 2006, 9:45 AM

Double-Track Session!

 T1  TEST MANAGEMENT
Management Networking
Johanna Rothman, Rothman Consulting Group, Inc.

Sometimes, it feels as if you're the only test/development/project manager/director/VP you know with your particular problems. But I can guarantee you this—you're not alone. If you have problems you’d like to discuss and start to solve, this session is for you. Each participant will have a chance to both air their concerns and help others. You'll have a chance to meet other managers across industries and countries; hear how your peers have solved problems; listen to the current issues your peers are addressing; solve some problems; hear from experts; and build your personal contact network. Bring your notebook, a pen, and plenty of business cards.

• Learn multiple problem-solving techniques
• Practice some peer coaching
• Ask for and receive expert advice



 T2  TEST TECHNIQUES

Rapid Thinking: When Time Is Tight
Jon Bach, Quardev, Inc.

How many different kinds of yellow fruit can you name in one minute? Try it and the tension may feel familiar, like testing under a deadline—ideas quickly come to mind (or perhaps they don’t), flashes of victory when you find a good one, keeping your mind agile-but-organized as time counts down. Since the main constraint on most software projects is time, Jon Bach will demonstrate some heuristics to trigger your imagination that will help you rapidly generate a variety of meaningful test ideas, whether through quiet contemplation or group brainstorming. Jon will discuss a way to help you know when you've thought of enough ideas—achieving a reasonable sense of completeness and minimizing the chance that you have overlooked some important test.

• Learn techniques for triggering your imagination
• Review research and results from brainstorming experiments
• Discover a heuristic framework for reaching “completeness”


 T3  TEST AUTOMATION

Keyword-Driven Testing: An Automation Success Story
Paulo Barros and Uday Thonangi, Progressive Insurance

Successfully implementing any automation tool is challenging. Using keyword-driven testing for system and regression testing is an additional challenge. Paulo Barros shares the techniques he used to build, manage, and deliver effective testing using a custom built keyword-driven automation tool. Paulo describes in depth six important changes that must be implemented. First, organizational change where testers adopt a generalist rather than a specialist approach. Second, creating a support infrastructure for the tool. Third, developing the processes to be used by testers, developers, and project managers. Fourth, implementing a training plan giving testers the required tool skills to effect organizational change. Fifth, creating a new test design methodology that focuses on automation rather than manual testing. And sixth, creating a team to support other testers making the transition to automation.

• Examine the process of keyword-driven testing
• Discover how the approach to testing must change
• Learn how to create a center for supporting all of automation’s stakeholders
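The core of the keyword-driven pattern the session describes can be sketched in a few lines: each row of a test table is an action word plus arguments, dispatched to an implementation function. The keywords and actions below are hypothetical illustrations, not Progressive's actual tool vocabulary.

```python
# Minimal keyword-driven test interpreter: a test table is a list of
# (keyword, args...) rows, dispatched to implementation functions.

def open_app(name):
    return f"opened {name}"

def enter_text(field, value):
    return f"{field}={value}"

def verify(expected, actual):
    assert expected == actual, f"expected {expected!r}, got {actual!r}"
    return "ok"

KEYWORDS = {
    "OpenApp": open_app,
    "EnterText": enter_text,
    "Verify": verify,
}

def run_table(rows):
    """Execute a test table and collect each step's result."""
    results = []
    for keyword, *args in rows:
        results.append(KEYWORDS[keyword](*args))
    return results

results = run_table([
    ("OpenApp", "quote-calculator"),
    ("EnterText", "zip", "44101"),
    ("Verify", "zip=44101", "zip=44101"),
])
```

A real framework adds table parsing, logging, and error recovery on top, but the keyword-to-function dispatch loop is the essential mechanism.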


 T4  EXPLORATORY TESTING

Using Mind Maps to Document Exploratory Testing
Samuli Lahnamäki, Tieto-X

Mind maps were developed in the late 1960s by Tony Buzan as a way of helping students take notes using only key words and images. Mind maps are quick to record and because of their visual approach, much easier to remember and review. Samuli Lahnamäki describes how mind mapping can be used as a logging tool for exploratory testing and what information can be later derived from the testing maps. A pair of testers, one performing exploratory testing while the other records their journey with a mind map, is an effective documentation style. One concern with exploratory testing has always been its lack of a testing trail. Mind maps provide the documentation that can be converted to a formal test script if required.

• Discover how mind maps can be an effective documentation tool in exploratory testing
• Convert mind maps to testing scripts
• Explore the mind mapping technique
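The conversion from mind map to test script mentioned in the bullets amounts to flattening a tree into linear steps. A rough sketch, using a made-up mind map recorded during an exploratory session:

```python
# A mind map as a nested dict: each branch is an area explored,
# each leaf a specific observation or check made during the session.
mind_map = {
    "Login": {
        "valid credentials": {},
        "locked account": {"shows lockout message": {}},
    },
    "Search": {
        "empty query": {},
    },
}

def to_script(node, path=()):
    """Flatten mind-map branches into linear test-script steps."""
    steps = []
    for label, children in node.items():
        current = path + (label,)
        if children:
            steps.extend(to_script(children, current))
        else:
            steps.append(" > ".join(current))
    return steps

steps = to_script(mind_map)
# Each root-to-leaf path becomes one scripted step.
```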


Double-Track Session!

 T5  SPECIAL TOPICS
Lightning Talks: A Potpourri of 5-Minute Presentations
Facilitated by Robert Sabourin, AmiBug.com, Inc.

Lightning Talks are nine five-minute talks in a fifty-minute time period. Lightning Talks represent a much smaller investment of time than track speaking and offer the chance to try conference speaking without the heavy commitment. Lightning Talks are an opportunity to present your single biggest bang-for-the-buck idea quickly. Use this as an opportunity to give a first-time talk or to present a new topic for the first time. Maybe you just want to ask a question, invite people to help you with your project, boast about something you did, or tell a short cautionary story. These things are all interesting and worth talking about, but there might not be enough to say about them to fill up a full track presentation. For more information on how to submit your Lightning Talk, visit www.sqe.com/lightningtalks.asp. Hurry! The deadline for submissions is August 28, 2006.





 Thursday, October 19, 2006, 11:15 AM

 T6  TEST TECHNIQUES

All I Need to Know about Testing I Learned from Dr. Seuss
Robert Sabourin, AmiBug.com, Inc.

Through the stories and parables of Theodor Geisel, we can learn simple, yet remarkably powerful approaches for solving testing problems. In a tour of common issues we encounter in testing—test planning, staff training, communications, test case design, test execution, status reporting, and more—Robert Sabourin explains how you can apply lessons from the great books of Dr. Seuss to testing. Green Eggs and Ham teaches us combinations; Go, Dog. Go! teaches us the value of persistence; Because a Little Bug Went Ka-Choo! teaches us about side effects, chaos, and risk management. Others such as Hop on Pop, Marvin K. Mooney, I Can Read with My Eyes Shut, and Inside, Outside, Upside Down all have important lessons about how to get things done on software projects. Learn some simple truths and take away some heuristic testing aids to become a more productive and effective tester.

• Learn important heuristics for better test planning
• Discover different testing approaches that can be used for the same problem
• Examine a back-to-basics way to improve performance


 T7  TEST AUTOMATION

Tomorrow’s Test Lab Today: One Touch Test Bed Automation
Steve Kishi, VMware

Many software organizations are struggling with the complexity of their testing environments, especially with the rapidly growing number of production environments. In many cases, the cost of creating those testing environments is prohibitive. Functional testing tools combined with new virtual lab automation (VLA) technology are changing the way test teams deal with this problem. Steve Kishi will demonstrate how VLA software can create a myriad of virtual environments quickly and at far less cost than physical environments. In addition, Steve will discuss how an automated test bed framework can shave months off software development projects, reduce development and test equipment costs, and dramatically increase the quality of delivered software systems. Learn about this new technology and evaluate whether it is right for your organization.

• Differentiate test beds from test environments
• Create one-touch test beds ready for executing tests
• Determine the ROI of VLA technology


 T8  EXPLORATORY TESTING

Squeezing Bugs Out of Mission Critical Software with Session-Based Testing
David James, HEI, Inc.

Software created in regulated industries such as medical devices must be developed and tested according to agency-imposed process standards. Every requirement must be tested, and every risk must be mitigated. Could defects still lurk in software wrung out by such an in-depth process? Unfortunately, yes. In fact, software defects are a major cause of medical device recalls each year. However, by supplementing mandated requirements-based verification with session-based exploratory testing (SBT), the overall quality of mission-critical software can be significantly improved. Based on eight studies, David James describes how to fit targeted exploratory testing into a regulated process. Specifically, David has found that defect discovery is twenty times less expensive through SBT than through formal verification. Applying SBT early, before formal verification, allows a less formal and cheaper defect-resolution process. When used after formal verification, SBT found an average of fifty defects per product—defects that formal verification missed.

• Learn what doesn’t work in regulated testing processes
• Understand how to implement SBT
• Evaluate how session-based testing might benefit your projects


 T9  SPECIAL TOPICS

Testing for Global Customers
Bj Rollison, Microsoft

More and more organizations are creating applications that are used around the globe. These applications must be customized for various national conventions including time, date, number, and currency formats. In addition, these applications must process data from non-English keyboards in languages such as Russian, Japanese, Hindi, and Arabic. Additional complications include string processing, sorting, and sequencing; character conversion; and bi-directional language support for Middle Eastern languages. Bj Rollison shows how an English-language Windows platform can be used to perform globalization testing without testers having knowledge of non-English languages. Bj shows how to select and use non-English character strings as test data. In addition, Bj provides examples of typical bugs found during globalization testing, methods to detect them, and techniques to generate automated tests using foreign character sets.

• Explore the basics of globalization testing
• Learn the value of early globalization testing
• Discover how to identify common globalization defects
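A few of the pitfalls Bj lists can be demonstrated with nothing more than Unicode strings; the test data below is illustrative, not from the session.

```python
# Hypothetical globalization test data: exercise string handling
# with non-English input in the languages the session mentions.
test_strings = {
    "Russian": "Тест",
    "Japanese": "テスト",
    "Hindi": "परीक्षण",
    "Arabic": "اختبار",
}

# Classic bug class: assuming one byte per character.
for lang, s in test_strings.items():
    encoded = s.encode("utf-8")
    assert len(encoded) > len(s)  # multibyte: byte length != char length

# Another class: naive ASCII-only case handling misses non-English rules.
assert "straße".upper() == "STRASSE"   # German sharp s expands on uppercase
assert "İ".lower() != "i"              # Turkish dotted capital I lowercases
                                       # to "i" + combining dot, not plain "i"
```

Feeding strings like these through every input field, sort routine, and conversion layer is the kind of globalization pass the session describes, and it needs no knowledge of the languages themselves.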



 Thursday, October 19, 2006, 1:30 PM

 T10  TEST MANAGEMENT

Skill Diversity: The Key to Building the Ideal Test Team
Barry Power, Bayer HealthCare

The dictionary defines “diversity” as “variety”—and that’s just what an effective test team needs. It’s much easier to hire people just like you—after all, they are easy to understand and manage. But Barry Power has found that teams consisting of all thinkers, all planners, all doers, all coordinators, or all finishers are not as effective as teams with a diverse composition. Barry has built powerful teams by combining leading-edge thinkers with nose-to-the-grindstone doers, the steadiness of experience with the enthusiasm of rookies, and the benefits of knowledge with the vision that only new eyes possess. Join Barry as he describes successful teams in fields as diverse as aerospace rockets and medical devices. Learn how you can create more effective teams through diversity.

• Discover the powerful meaning of diversity
• Learn what characteristics to value in teams
• Match team members with team roles and responsibilities


Double-Track Session!

 T11  INSPECTIONS
Software Inspections: Key Elements of Success
Ed Weller, Software Technology Transition

Inspections have over thirty years of history improving software quality and productivity. Numerous studies have shown inspection is the most effective process for discovering defects. Yet today, inspections are not widely used in the software industry. Why are they not more prevalent? Ed Weller knows that successful implementation of inspections requires a thorough understanding of the process as well as the cultural and organizational roadblocks to implementation. Knowing when to apply inspections, or other defect identification techniques, also requires a cost-benefit analysis. Measuring and improving inspections requires an understanding of inspection process metrics and appropriate corrective actions. Ed discusses the inspection process, measurement, common pitfalls, and how to implement a successful program in your organization.

• Learn what makes inspections different from other types of reviews
• Understand when and how to begin inspections
• Discover the key elements needed for successful inspections



 T12  TEST AUTOMATION

Right Under Your Fingertips: Built-in Windows Tools for Test Automation
Matt Lowrie, Anark Corporation

Launching a test automation effort can be a daunting undertaking. An abundance of testing tools are available—but if you do not have previous automation experience, how can you know if you are investing in the right solution? A safe alternative is to begin with automation tools already included in the Microsoft Windows operating system. You can use these tools to build your own test automation system that produces professional results. Matt Lowrie demonstrates several Windows utilities that can be linked together to create a basic test automation framework. To begin, you’ll need a basic knowledge of JScript (JavaScript) or VBScript. Windows Script Host can be used to execute applications and gather and report test results. Learn how to automate tests using Internet Explorer and the Microsoft® Office Suite.

• Learn how to access the Windows file system
• Use XML for documenting test results
• Develop HTML-based graphical user interfaces for your tests
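The pattern the session describes—launch a program, capture its result, report it as XML—looks roughly like this. The sketch is written in Python rather than JScript/WSH so it stays self-contained; the test name is hypothetical.

```python
# Run a command, record pass/fail from its exit code, and emit
# an XML report of the results (the session's WSH scripts follow
# the same launch -> collect -> report shape).
import subprocess
import sys
import xml.etree.ElementTree as ET

def run_test(name, command):
    proc = subprocess.run(command, capture_output=True, text=True)
    return {"name": name,
            "passed": proc.returncode == 0,
            "output": proc.stdout.strip()}

def results_to_xml(results):
    root = ET.Element("testrun")
    for r in results:
        case = ET.SubElement(root, "test", name=r["name"],
                             result="pass" if r["passed"] else "fail")
        case.text = r["output"]
    return ET.tostring(root, encoding="unicode")

# Smoke test: launching the interpreter itself stands in for the
# application under test.
results = [run_test("version-check", [sys.executable, "--version"])]
xml_report = results_to_xml(results)
```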


 T13  SOA TESTING

Testing SOA Software: The Headless Dilemma
John Michelsen, iTKO, Inc.

Once we were able to ensure quality with some degree of certainty by testing our applications through their user interfaces. As SOA systems based on Web services proliferate, testing through the GUI isn’t going to be sufficient. SOA systems are assembled from components, “headless” chunks of encapsulated business functionality. If we are building the components themselves, we will want to test their functionality and their interfaces. We will want to ensure their proper behavior no matter what application uses them and no matter how unruly it is. If we are building SOA applications from components, we will want to test our applications in their entirety. But remember, our applications may not have GUI interfaces. Join John Michelsen as he shares what you’ll need to know to effectively test SOA applications.

• Learn how services-based software changes the game for software testers
• Discover the testing methods and skills you will need in the SOA world
• Consider the types of systems you will be expected to test
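Testing a “headless” component means driving its programmatic interface directly, with no GUI in the loop. A minimal sketch, using a hypothetical pricing component rather than any real SOA stack:

```python
# A headless business component: the only way in is its interface.
class PricingService:
    def quote(self, items, tax_rate):
        """items: list of (unit_price, quantity) pairs."""
        subtotal = sum(price * qty for price, qty in items)
        return round(subtotal * (1 + tax_rate), 2)

# Interface-level tests: behavior is checked independently of any
# application (well-behaved or unruly) that might call the service.
svc = PricingService()
assert svc.quote([(10.0, 2), (5.0, 1)], tax_rate=0.08) == 27.0
assert svc.quote([], tax_rate=0.08) == 0.0            # empty order
assert svc.quote([(10.0, 1)], tax_rate=0.0) == 10.0   # no tax
```

For a real Web service the calls would go over SOAP or HTTP, but the principle is the same: the contract of the component, not a GUI, is the test target.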


 T14  SPECIAL TOPICS

Testing Web Services in Four Key Dimensions
Dave Mount, J-Soup Software, Inc.

As Web services become a more prominent component of many applications, effective testing of these components is increasingly important. Dave Mount discusses testing Web services in four different dimensions: functionality, interoperability, security, and performance. Functionality testing is familiar territory, but the other dimensions may not be. Although interoperability could be assumed, differences in .NET, Java, and XML implementations among different vendors may cause interoperability failures. Security testing is also important, since Web services can inadvertently expose capabilities and data that should be protected. Finally, Web services are subject to performance issues due to message handling, interface layers, and potentially large data payloads. Real-time and batch performance characteristics should be tested to simulate the range of possible uses of Web services.

• Learn the important differences in testing Web services
• Focus your testing efforts on the four key dimensions
• Ensure your Web services quality through effective testing



 Thursday, October 19, 2006, 3:00 PM

 T15  TEST MANAGEMENT

Building a Testing Factory
Patricia Medhurst, Royal Bank Financial Group

At Royal Bank Financial Group we are building a testing factory. Our vision is that code enters as raw material and exits as our finished product—thoroughly tested. As a roadmap for our work, we have used the IT Infrastructure Library (ITIL) standard. ITIL is well known throughout Europe and Canada but has yet to make inroads in the United States. It defines four disciplines: service support, service delivery, the business perspective, and application management. These disciplines define processes such as incident management, problem management, availability management, change management, and many others. Join Patricia Medhurst as she discusses their success and their next steps in completing their testing factory.

• Learn how Royal Bank built their test factory
• Understand how to integrate individual process into a cohesive whole
• Determine if ITIL would be useful for your test organization


 T16  TEST AUTOMATION

Complete Your Automation with Runtime Analysis
Poonam Chitale, IBM Rational Software

So, you have solid automated tests to qualify your product. You have run these tests on various platforms. You have mapped the tests back to the design and requirements documents to verify full coverage. You have confidence that results of these tests are reliable and accurate. But you are still seeing defects and customer issues—why? Could it be that your test automation is not properly targeted? Solid automated testing can be enhanced through runtime analysis. Runtime analysis traces execution paths, evaluates code coverage, checks memory usage and memory leaks, exposes performance bottlenecks, and searches out threading problems. Adding runtime analysis to your automation efforts provides you with information about your applications that cannot be gained even from effective automated testing.

• Learn how runtime analysis enhances automation
• Evaluate the pros and cons of code coverage
• Review the causes of memory leaks
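One facet of runtime analysis, line coverage, can be sketched with Python's trace hook. Real runtime-analysis tools do far more (memory, threads, performance), so treat this only as an illustration of the idea that execution tracing reveals what your automated tests never touched.

```python
# Record which lines of the code under test actually execute,
# then compare against what the tests were supposed to exercise.
import sys

covered = set()

def tracer(frame, event, arg):
    if event == "line":
        covered.add((frame.f_code.co_name, frame.f_lineno))
    return tracer

def classify(n):
    if n < 0:
        return "negative"      # never executed by the test below
    return "non-negative"

sys.settrace(tracer)
classify(5)                    # the "test suite": only one input
sys.settrace(None)

lines_hit = {line for func, line in covered if func == "classify"}
# Only the if-check and the non-negative return were executed;
# the negative branch is a coverage gap the tests missed.
```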


 T17  SOA TESTING

The Art of SOA Testing: Theory and Practice
Rizwan Mallal, Crosscheck Networks

SOA (Service Oriented Architecture) based on Web services standards has ushered in a new era of how applications are designed, developed, and deployed. But the promise of SOA to increase development productivity poses new challenges for testers: dealing with multiple Web services standards and implementations, legacy applications (of unknown quality) now exposed as Web services, weak or non-existent security controls, and services of possibly diverse origins chained together to create applications. Learn concepts and techniques to master these challenges, including WSDL chaining, schema mutation, and automated filtration. Learn how traditional techniques such as black, gray, and white box testing are applied to SOA testing to maximize test coverage, minimize effort, and release better products.

• Learn the Four Pillars of SOA Testing
• Use gray box techniques to enter the domain of white box testing
• Learn the powerful concept behind schema mutation
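Schema or message mutation can be shown in miniature: start from a valid request and derive invalid variants to throw at the service under test. The element names below are hypothetical, not from the session.

```python
# Derive mutated copies of a valid XML request: each mutation either
# drops a required element or corrupts a value's type, probing how
# robustly a service validates its inputs.
import copy
import xml.etree.ElementTree as ET

valid = ET.fromstring(
    "<order><id>42</id><qty>3</qty><ship>air</ship></order>"
)

def mutations(msg):
    """Yield (description, mutated message) pairs."""
    for child in list(msg):
        dropped = copy.deepcopy(msg)
        dropped.remove(dropped.find(child.tag))
        yield f"missing <{child.tag}>", dropped
    corrupted = copy.deepcopy(msg)
    corrupted.find("qty").text = "three"   # string where int expected
    yield "non-numeric qty", corrupted

variants = list(mutations(valid))
# Three dropped-element variants plus one type corruption.
```

Each variant becomes a negative test case: a well-behaved service should reject it with a clean fault, not crash or silently accept it.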


 T18  SPECIAL TOPICS

Test Estimation: Painful or Painless?
Lloyd Roden, Grove Consultants

As an experienced test manager, Lloyd Roden believes that test estimation is one of the most difficult parts of test management. In estimation we must deal with destabilizing dependencies such as poor quality code received by testers. Lloyd presents seven powerful ways to estimate test effort. Some are easy and quick but prone to abuse; others are more detailed and complex but may be more accurate. Specifically, Lloyd discusses FIA (Finger in the Air), Formula or Percentage, Historical, Parkinson’s Law v. Pricing-to-Win estimates, Work Breakdown Structures, Estimation Models, and Assessment Estimation. Spreadsheets and utilities will be available during this session to help you as tester or test manager estimate better. By the end of this session you should feel that the painful experience of test estimation could, in fact, become a painless one.

• Uncover common destabilizing dependencies
• Learn how to communicate your estimates (and what they really mean) to senior management
• Discover the appropriateness of each of these methods to your work
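The “Formula or Percentage” method in Lloyd's list reduces to simple arithmetic; the ratios below are illustrative placeholders, not his figures.

```python
# Percentage-based test estimation: test effort as a ratio of
# development effort, adjusted for risk (e.g. poor-quality code
# arriving from development, one of the destabilizing dependencies).
def estimate_test_days(dev_days, base_ratio=0.4, risk_factor=1.0):
    """base_ratio: this team's historical test/dev effort ratio.
    risk_factor: > 1.0 for unstable code or unfamiliar domains."""
    return dev_days * base_ratio * risk_factor

assert estimate_test_days(100) == 40.0
assert estimate_test_days(100, risk_factor=1.5) == 60.0  # risky codebase
```

Quick to apply, and just as quick to abuse: the estimate is only as good as the historical ratio behind it, which is exactly the trade-off the session examines.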


Friday, October 20, 2006, 10:00 AM
 F1  TEST MANAGEMENT
Keeping it Between the Ditches: A Dashboard to Guide Your Testing
Randall Rice, Rice Consulting Services, Inc.

As a test manager, you need to know how testing is proceeding at any point during the test. You are concerned with important factors such as test time remaining, resources expended, product quality, and test quality. When unexpected things happen, you may need additional information. Like the dashboard in your car, a test manager’s dashboard is a collection of metrics that can help keep your testing effort on track (and out of the ditch). In this session, Randall Rice will explore what should be on your dashboard, how to obtain the data, how to track the results and use them to make informed decisions, and how to convey the results to management. Randall will present examples of various dashboard styles.

• Build your own test management dashboard
• Select useful metrics for your dashboard
• Use the dashboard to successfully control the test


 F2  TEST TECHNIQUES
Branch Out Using Classification Trees for Test Case Design
Julie Gardiner, QST Consultants, Ltd.

Classification trees are a structured, visual approach to identifying and categorizing equivalence partitions for test objects, documenting test requirements so that anyone can understand them and quickly build test cases. Join Julie Gardiner to look at the fundamentals of classification trees and how they can be applied in both traditional test and development environments. Using examples, Julie shows you how to use the classification tree technique, how it complements other testing techniques, and its value at every stage of testing. She demonstrates a classification tree editor, one of the free and commercial tools now available to aid in building, maintaining, and displaying classification trees.

• Develop classification trees for test objects
• Understand the benefits and rewards of using classification trees
• Know when and when not to use classification trees
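Mechanically, classification-tree test design combines one equivalence class from each branch of the tree into a test case. A made-up login-form tree shows the idea:

```python
# A classification tree as branches (aspects) with leaves (classes);
# full combination coverage takes one leaf per branch per test case.
from itertools import product

tree = {
    "user type": ["new", "existing", "locked"],
    "password":  ["valid", "invalid", "empty"],
    "channel":   ["web", "mobile"],
}

cases = [dict(zip(tree, combo)) for combo in product(*tree.values())]
assert len(cases) == 3 * 3 * 2   # 18 combinations
```

In practice the tree editor lets you mark which combinations matter so you test far fewer than the full cross product, which is where the technique's judgment lies.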


 F3  METRICS
Measuring the “Good” in “Good Enough Testing”
Gregory Pope, University of California, Lawrence Livermore National Laboratory

The theory of “good enough” software requires determining the trade-off between delivery date (schedule), absence of defects (quality), and feature richness (functionality) to achieve a product that can meet both the customer’s needs and the organization’s expectations. This may not be the best approach for pacemakers and commercial avionics software, but it is appropriate for many commercial products. But can we quantify these factors? Gregory Pope does. Using the COQUALMO model, Halstead metrics, and defect seeding to predict defect insertion and removal rates; the Musa/Everett model to predict reliability; and MATLAB for verifying functional equivalence testing, Greg evaluates both quality and functionality against schedule.

• Review how to measure test coverage
• Discover the use of models to predict quality
• Learn what questions you should ask customers to determine “good enough”
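Defect seeding, one of the predictors mentioned above, rests on a simple recapture ratio: if testing recovers a known fraction of deliberately planted defects, assume it caught the same fraction of the real ones.

```python
# Defect seeding estimate: detection rate on seeded defects is used
# to extrapolate the total real-defect count.
def estimate_total_defects(seeded, seeded_found, real_found):
    detection_rate = seeded_found / seeded
    return real_found / detection_rate

# Example: testing found 8 of 10 seeded defects and 40 real ones,
# suggesting ~50 real defects exist, so ~10 likely remain.
assert estimate_total_defects(10, 8, 40) == 50.0
```

The estimate is only as trustworthy as the seeding: planted defects must resemble real ones in kind and difficulty, or the detection rate misleads.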


 F4  AGILE METHODS
A Tester’s Role in Agile Projects
Chris Hetzler, Microsoft

Some agile methodologists claim that testers are not needed in agile projects—all testing is done either by developers or users. Chris Hetzler has seen the effects of that approach, and they are not pretty. When customers find bugs in large projects, the costs can be staggering. Chris believes that testers must be involved in agile projects at an even higher intensity since timelines are shorter and the risk of failure is higher. But Chris explains that testers’ roles change and testers must be prepared for that change. In agile projects, the tester’s role is one of quality engineer rather than the traditional product validation and verification. This means the testers become the “go-to” people whenever a quality issue is raised. When Chris’ development group adopted this strategy for involving their testers in their latest product release, the end result was a product with 95% code coverage, a 98% regression run pass rate, and extremely high customer satisfaction.

• Learn how testers work within projects
• Create effective test strategies within projects
• Apply the lessons learned to your projects


 F5  SPECIAL TOPICS
Testing for Sarbanes-Oxley Compliance
Suresh Chandrasekaran, Cognizant Technology Solutions

In the wake of huge accounting scandals, many organizations are now being required to conform to Sarbanes-Oxley (SOX) legal requirements regarding internal controls. Many of these controls are implemented within computer applications. As testers, we should be aware of these new requirements and ensure that those controls are tested thoroughly. Specifically, testers should identify SOX-based application requirements, design automated test cases for those requirements, create test data and test environments to support those tests, and document the test results in a way understandable by and acceptable to auditors, both internal and external. To be most efficient, SOX testing should not be separate but should be incorporated into system testing.

• Learn the SOX testing lifecycle
• Identify testable requirements for SOX compliance testing
• Review SOX test automation strategies



 Friday, October 20, 2006, 11:15 AM
 F6  TEST MANAGEMENT
Improving the Skills of Software Testers
Krishna Iyer, ZenTEST Labs

Many training courses include the topic of soft skills for testers, specifically their attitudes and behaviors. Testers are told that to be effective they need a negative mindset and a negative approach. Krishna Iyer challenges this belief. He believes testers must be creative rather than critical; curious rather than destructive; and empathetic rather than negative. Join Krishna as he leads exercises in mind mapping, systems thinking, and belief deconstruction to improve our eye's ability to perceive detail, our nose's ability to sniff out defects, and our brain's ability to discover. Finally, Krishna will list the beliefs that hinder testers and the beliefs that help and share how he has been successful in deconstructing some of these beliefs and inculcating new ones.

• Hear the latest research in cognitive thinking
• Learn practical techniques to improve testing skills
• Understand the mindset of effective testers


 F7  TEST TECHNIQUES
Practical Model-Based Testing of Interactive Applications
Marlon Vieira, Siemens Corporate Research

Model-based tests are most often created from state-transition diagrams. Marlon Vieira generates automated system tests for many Siemens systems from use cases and activity diagrams. The generated test cases are then executed using a commercial capture-replay tool. Marlon begins by describing the types of models used, the roles of those models in test generation, and the basic test generation process. He shares the weaknesses of some techniques and offers suggestions on how to strengthen them to provide the required control flow and data flow coverage. Marlon describes the cost benefits and fault detection capabilities of this testing approach. Examples from a Web-based application will be used to illustrate the modeling and testing concepts.

• Learn how to implement model-based testing in your organization
• Create effective scripts for use by automation tools
• Perform a cost-benefit analysis of model-based testing
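The common state-transition flavor of model-based generation, the baseline the session starts from, can be sketched as a bounded walk of the model. The login-dialog model below is a hypothetical example, not one of the Siemens systems.

```python
# A state-transition model as a dict: (state, action) -> next state.
model = {
    ("logged_out", "enter_valid"):   "logged_in",
    ("logged_out", "enter_invalid"): "error",
    ("error",      "retry"):         "logged_out",
    ("logged_in",  "logout"):        "logged_out",
}

def generate_tests(model, start, max_depth=3):
    """Breadth-first walk: every transition sequence up to max_depth
    becomes one test case (a list of actions to replay)."""
    tests, frontier = [], [(start, [])]
    for _ in range(max_depth):
        next_frontier = []
        for state, path in frontier:
            for (s, action), target in model.items():
                if s == state:
                    new_path = path + [action]
                    tests.append(new_path)
                    next_frontier.append((target, new_path))
        frontier = next_frontier
    return tests

tests = generate_tests(model, "logged_out")
# Depths 1-3 yield 2 + 2 + 4 = 8 action sequences to replay.
```

A capture-replay tool then executes each generated action sequence against the real application, which is the hand-off point the session's process describes.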


 F8  METRICS
Measuring the End Game of a Software Project–Part Deux
Mike Ennis, Savant Technology

The schedule shows only a few weeks before product delivery. How do you know whether you are ready to ship? Test managers have dealt with this question for years, often without supporting data. Mike Ennis has identified six key metrics that will significantly reduce the guesswork. These metrics are percentage of tests complete, percentage of tests passed, number of open defects, defect arrival rate, code churn, and code coverage. These six metrics, taken together, provide a clear picture of your product’s status. Working with the project team, the test manager determines acceptable ranges for these metrics. Displaying them on a spider chart and observing how they change from build to build enables a more accurate assessment of the product’s readiness. Learn how you can use this process to quantify your project’s “end game”.

• Decide what and how to measure
• Build commitment from others on your project
• Manage the end-game of your product development
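The readiness check behind the six metrics can be sketched as range tests against team-agreed thresholds; the numbers below are illustrative, not Mike's.

```python
# Six end-game metrics with acceptable ranges the project team
# agreed on; a build ships only when every metric is in range.
thresholds = {
    "tests_complete_pct":      (95, 100),
    "tests_passed_pct":        (90, 100),
    "open_defects":            (0, 25),
    "defect_arrival_per_week": (0, 5),
    "code_churn_pct":          (0, 2),
    "code_coverage_pct":       (80, 100),
}

def ready_to_ship(build_metrics):
    return all(lo <= build_metrics[m] <= hi
               for m, (lo, hi) in thresholds.items())

build = {"tests_complete_pct": 99, "tests_passed_pct": 97,
         "open_defects": 12, "defect_arrival_per_week": 3,
         "code_churn_pct": 1, "code_coverage_pct": 85}
assert ready_to_ship(build)
assert not ready_to_ship({**build, "defect_arrival_per_week": 9})
```

Plotting the same six values build over build (the spider chart in the session) shows not just whether each metric is in range but whether it is trending toward or away from it.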


 F9  AGILE METHODS
Preparing the Test Team to Go Agile
Janet Gregory, DragonFire, Inc.

When we read about agile development, we find developers using NUnit for unit testing while customers are using FIT for acceptance tests. But where are the testers? You know—those folks who have years of experience in testing. Is there a place for testers in the agile world? Janet Gregory believes there is and shares specific things that you and your test team need to understand to effectively work within the agile context. It is important to adhere to the agile values and principles while improving the product’s quality. For example, a heavy test planning process that requires knowing all the requirements up front and developing thousands of test cases will not be acceptable. Janet will describe a lightweight process that is effective for all and discuss the handling of traditional testing processes such as defect tracking, reporting, and sign-offs.

• Understand the role of the test team in an agile development environment
• Learn about the new techniques you’ll need in the agile world
• Discover tools that will improve your testing—not slow it down


 F10  SPECIAL TOPICS
Open Source Tools for Web Application Performance Testing
Dan Downing, Mentora

OpenSTA is a solid open-source testing tool that, when used effectively, fulfills the basic needs of performance testing of Web applications. Dan Downing will introduce you to the basics of OpenSTA including downloading and installing the tool, using the Script Modeler to record and customize performance test scripts, defining load scenarios, running tests using Commander, capturing the results using Collector, interpreting the results, as well as exporting captured performance data into Excel for analysis and reporting. As with many open source tools, self-training is the rule. Support is not provided by a big vendor staff but by fellow practitioners via email. Learn how to find critical documentation that is often hidden in FAQs and discussion forum threads. If you are up to the support challenge, OpenSTA is an excellent alternative to high-priced commercial tools.

• Learn the capabilities of OpenSTA
• Understand performance data
• Detect and repair performance bottlenecks


For full details, visit http://www.sqe.com/.