
Presenter: Alexander Podelko
The recent revolution in software development, including agile / iterative development, cloud computing, continuous integration, and much more, opened new opportunities for performance testing and affected its role in performance engineering; for example, early and continuous performance testing. However, performance testing in general, and specific performance testing techniques in particular, should be considered in full context: environments, products, teams, issues, goals, budgets, timeframes, risks, etc. The question is not which technique is better; the question is: What technique, or what combination of techniques, should be used in a particular case? Or, in more traditional wording: What should the performance testing strategy be?

The term “context-driven” seems a great fit to me here in its classical form, as described at http://context-driven-testing.com/. In the functional testing community where it was introduced it became a loaded and politicized term, but all of the original founding principles make perfect sense to me for performance testing.

Traditional load testing, optimized for the waterfall software development process, focused on essentially one context: pre-release and production-like. The goal was to make the load and the system as similar to production as possible. With some variations, such as stress, spike, uptime/endurance/longevity, and other kinds of performance testing, load testing is still mainly based on realistic workloads.

Drastic changes in the industry in recent years have significantly expanded the performance testing horizon, with agile development and cloud computing probably contributing the most. Instead of a single way of doing performance testing (with all others considered rather exotic), we now have a full spectrum of different tests that can be done at different moments, so deciding what and when to test has become a non-trivial task heavily dependent on the context.

For example, the purpose of continuous performance testing is basically regression performance testing: checking that no unexpected performance degradations have happened between tests and verifying expected performance changes against the established baseline. It may start early, although that may be a bigger challenge in the early stages, and it probably should continue for as long as changes happen to the system. It may be done on a component level or on a system level, considering that not all functionality of the system is available in the beginning. Theoretically, it could even consist of full-scale, system-level realistic tests, but that doesn’t make sense in most contexts.
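To make this concrete, here is a minimal sketch of what such a regression performance check might look like in a CI pipeline; the endpoint, the baseline file, and the 15% tolerance are illustrative assumptions, not something prescribed by the talk.

# Minimal sketch of a continuous (regression) performance check.
# TARGET_URL, BASELINE_FILE, and the 15% tolerance are assumptions for illustration.
import json
import statistics
import time
import urllib.request

TARGET_URL = "http://localhost:8080/health"   # assumed test endpoint
BASELINE_FILE = "perf_baseline.json"          # assumed stored baseline, e.g. {"p95_latency": 0.120}
TOLERANCE = 0.15                              # flag regressions worse than 15%

def measure_p95_latency(url, samples=50):
    """Issue a fixed number of identical requests and return the 95th-percentile latency in seconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        latencies.append(time.perf_counter() - start)
    return statistics.quantiles(latencies, n=20)[18]   # 19 cut points; index 18 is the 95th percentile

if __name__ == "__main__":
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)["p95_latency"]
    current = measure_p95_latency(TARGET_URL)
    if current > baseline * (1 + TOLERANCE):
        raise SystemExit(f"Performance regression: p95 {current:.3f}s vs baseline {baseline:.3f}s")
    print(f"OK: p95 {current:.3f}s within {TOLERANCE:.0%} of baseline {baseline:.3f}s")

A check like this is deliberately small and deterministic, which is exactly why it fits the continuous testing context described above.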

For continuous performance testing we want short, limited-scale, and fully reproducible tests, which means minimal randomness, so that if results differ we know it is due to a system change. For full-scale, system-level tests that check whether the system can handle the expected load, we are more concerned with making the workload and the system as close to real life as possible and less concerned with small variations in performance results. It doesn’t mean one is better than the other; they are different tests mitigating different performance risks. There is some overlap between them, as both target performance risks, but continuous testing usually doesn’t test the system’s limits, and full-scale realistic tests are not a good way to track differences between builds.
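As a rough illustration of this difference in test design, a continuous check might pin every source of randomness, while a full-scale realistic test deliberately randomizes the transaction mix and pacing; the mix and timings below are assumptions made up for the sketch, not real workload data.

# Rough illustration of the two workload styles; the request mix and pacing are assumed.
import random

def continuous_test_plan():
    """Short, fully reproducible plan: fixed request mix, fixed pacing, fixed seed."""
    random.seed(42)                                   # pin any remaining randomness so runs are comparable
    return [("search", 1.0)] * 100                    # the same 100 requests with 1-second pacing, every run

def full_scale_realistic_plan(duration_s=3600):
    """Large, production-like plan: realistic mix and randomized think times."""
    mix = ["search"] * 6 + ["browse"] * 3 + ["checkout"]           # assumed production transaction mix
    return [(random.choice(mix), random.expovariate(1 / 5.0))      # exponential think times, ~5 s mean
            for _ in range(duration_s)]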

Moreover, performance testing is not the only way to mitigate performance risks; there are other approaches too, and the dynamics of their usage change over time. So the art of performance engineering is to find the best strategy for combining different performance tests and other approaches to optimize the risk mitigation / cost ratio for, of course, the specific context.

About the Presenter
For the last sixteen years, Alex Podelko has supported major performance initiatives for Oracle, Hyperion, Aetna, and Intel in different roles, including performance tester, performance analyst, performance architect, and performance engineer. Currently he is a Consulting Member of Technical Staff at Oracle, responsible for performance testing and optimization of Hyperion (a.k.a. Enterprise Performance Management and Business Intelligence) products. Before specializing in performance, Alex led software development for Rodnik Software. He has more than twenty years of overall experience in the software industry and holds a PhD in Computer Science from Gubkin University and an MBA from Bellevue University.

Event Timeslots (1)

Room 2 – 2/20