January 31 – February 4 | Virtual Conference

IMPACT 2022 features more than 120 sessions, including presentations, Q&A sessions, and networking activities on Performance Engineering, DevOps, Monitoring and Observability, Mainframe, Data Centers, Cloud, and more.

Join us as we “level up” at IMPACT 2022 with new speakers, updated topics, and new education opportunities.

The IMPACT Conference

For more than 40 years, CMG’s international conference has been the source for education and peer-to-peer exchange for all things enterprise IT and infrastructure. It is the only conference of its kind where attendees can obtain real-world knowledge and training that is not vendor-run.

IMPACT features sessions on the full scope of digital transformation technologies including Artificial Intelligence and Machine Learning, Observability, DevOps, Performance Engineering, Digital Process, Cloud Native Applications, and the IT infrastructures that support them.

Sponsored By


Sessions and Speakers

Each year, CMG brings together leading technology practitioners and vendors for a conference like no other. With an emphasis on peer-to-peer education, the conference facilitates open communication and problem solving for its attendees.

Mainframe innovation: unlock any data for any application

Discover how IBM Data Virtualization Manager for z/OS (DVM) manages data complexity and provides access to unstructured mainframe data and external data sources (e.g., Oracle, Teradata) through a virtualization layer. Find out how to use this solution to modernize traditional mainframe applications so they can adapt to new business requirements and trends in a data-gravity model.

Virtualization creates a unified data environment that gives users transparent access to data directly in the environment where it resides. This eliminates the need to create redundant copies of data, simplifying both management and cost; just as importantly, it guarantees higher quality for the data exposed, and data is of great importance to our business.

DVM gives users and applications real-time read/write access to data on z/OS and enterprise environments, making it accessible to any application and opening the z/OS platform to heterogeneous external environments such as SQL Server, Teradata, and Oracle. DVM takes advantage of the inherent security of the z/OS platform, and by exploiting zIIP technology it reduces CPU consumption, a hot topic nowadays. DVM also includes a number of query-optimization features, such as parallel input/output operations and MapReduce functionality.

Since every customer is unique and has different needs, some other reasons matter to us. With DVM we can simplify the architecture and management of our virtualization solution. Adopting DVM can also ease the migration of COBOL applications that still use Oracle Client for z/OS, which is now out of support for Oracle databases above release 12.1. At Sogei we run heterogeneous databases; with DVM we can expand our opportunities to access all the data available to us, which can lead to new applications and new services for our customers. We will discuss the configuration and some use cases we have tested.

Key Takeaways

  • Understand DVM basics
  • Explore how DVM can help innovate the mainframe without moving data
  • See tested use cases that prove these capabilities

About the Speakers:

Chiara Baldan, Data Engineer, SOGEI – Chiara works as a Data Engineer at Sogei, which develops and runs IT services that manage the complexity of the public system. Sogei aims to build a citizen experience that is new, simple, fast, and, above all, completely digital, through innovation, skills development, and investments in new technologies. Chiara works in the “Service & Technology Innovation Hub” in the area of “Change and Service Management”, as part of the “Change and Release Management” team, whose mission is the governance and management of the move toward a cloud-oriented data center and the design and implementation of processes from a cloud perspective. She has over 15 years of experience in mainframe environments and has been named an IBM Champion for each of the last two years.

Laura Guidi, Data Engineer, SOGEI – Laura works as a Data Engineer at Sogei. Laura works in the “Service & Technology Innovation Hub” in the area of “Cross Service & Solution”, as part of the “Data Platform Solution” team. The mission of this team is to build solutions for data management, data analytics, and data lakes that satisfy business needs and contribute to the evolution of data platforms. She has over 20 years of experience in Db2 and mainframe environments. Currently, she is working with data virtualization and data protection.

Francesco Borrello is a z Data & AI Technical Sales Specialist at IBM Technology Italy. He joined IBM in 2011 and, during his initial training, earned a master’s degree in “Centralized Systems for Cloud Computing”. His main mission is to help customers drive new business opportunities by adopting both traditional hybrid data management/analytics and innovative ML/AI solutions on IBM Z, along with new business practices. He has spoken at several international conferences and technical user groups, and he is an author of the IBM Data Virtualization Manager for z/OS Redbook.

Perspectives on the USL

This session is based on the speaker’s book, Information Technology Performance, which presents the fundamentals of computer performance from a fresh perspective. The session covers scaling models that reinterpret Dr. Gunther’s Universal Scalability Law (USL) and expands on some of his examples. The talk additionally covers ideas fundamental to the USL model and its methods.

Specifically, the session starts at the beginning with definitions and terminology. Joe will discuss utilization and its relationship to throughput, then move on to the duality of scaling and saturation, and finally to scaling models.
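As context for the scaling models discussed above, here is a minimal sketch of the standard USL formula. The coefficient values are purely illustrative assumptions, not taken from the session or the book:

```python
# Universal Scalability Law (USL): relative capacity at concurrency N is
#   C(N) = N / (1 + alpha*(N - 1) + beta*N*(N - 1))
# where alpha models contention (serialization) and beta models
# coherency (crosstalk) delay. Coefficients below are hypothetical.

def usl_capacity(n: int, alpha: float, beta: float) -> float:
    """Relative throughput at concurrency n under the USL."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

if __name__ == "__main__":
    alpha, beta = 0.05, 0.001  # illustrative contention/coherency values
    for n in (1, 8, 32, 128):
        print(f"N={n:4d}  C(N)={usl_capacity(n, alpha, beta):.2f}")
```

Note how a nonzero beta makes throughput eventually decrease as N grows (retrograde scaling), which is exactly the saturation behavior the session relates to response time.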

Key Takeaways

  • Relate scaling model results to saturation and response time
  • Recognize the impact of machine/network structure on the model results
  • Quantify the scalability impact of increased “capacity” (Memory/Cache)


Joe Temple, Adjunct Professor at Coastal Carolina University. Joe has 15 years of hardware development experience and 27 years of technical support and consulting experience. In addition to his teaching experience, he has a very broad technical background and a patent portfolio centered on information technology.

Work less and save more

Most z/OS environments run thousands of jobs, hundreds of started tasks, millions of CICS, IMS, DDF, or web transactions, and tens of service classes and report classes every day. While most computer centers have a big-picture view of how everything is running and where the main problem areas are, they still struggle to pinpoint the individual job, address space, or transaction whose usage grows gradually until it becomes a cost and performance problem. In this presentation I will present a simple methodology and reporting system that can solve most of these problems.

The session is best for those who:

  • Are early careerists or new to this subject area (Intro/Beginner)
  • Have a working knowledge of the subject area
  • Have extensive experience in the subject area (Advanced)

About the Speaker:

Mark Cohen Austroweik is a Technical Director at EPV Technologies and specializes in performance analysis, reporting, tuning, and capacity planning for z/OS and distributed systems.

System Recovery Boost: Hitting the Turbo Button on z/OS

System Recovery Boost (SRB) is one of the more interesting features IBM introduced with the z15, and it can certainly help installations shut down, start up, and recover faster. But what are the practical implications of using SRB? How does enabling the different flavors of SRB influence both the performance and the measurement of your systems, even potentially the systems that aren’t being boosted?

Key Takeaways

  • Enable it for shutdown
  • Check your weights
  • Consider interesting use cases

The session is best for those who:

  • Have a working knowledge of the subject area

Physical data center capacity planning – Performance management at scale

This session discusses how to create a capacity management plan that includes public, private, and legacy workloads, accounting for how the plan changes over time to accommodate increasing public cloud content.

IT executives have targeted moving workloads to the public cloud over time. It is important that the capacity planner understands the pace of public-cloud adoption and can perform sensitivity analysis on how it changes over time. The session is designed to help minimize stranded assets arising from data center consolidation strategies that include moving workloads to the public cloud.

Key Takeaways

  • Data center capacity management uses the same principles as server, storage, and network capacity management
  • Metrics exist to size workload moves to, and if necessary back from, public clouds
  • An IT capacity plan should incorporate public and private cloud content aligned to the organization’s overall business and IT strategy

The session is best for those who:

  • Are early careerists or new to this subject area (Intro/Beginner)
  • Have a working knowledge of the subject area

About the Speaker:

Chris Molloy is well known for both performance management (CMG 2005 Best Paper) and physical data centers (member of The Green Grid board of directors). His patented performance technology has been installed on over 400,000 servers globally.

To Queue or not to Queue, that is the question

Queuing theory has long been the de facto gold standard for accurate IT forecasting. However, it has always had its downsides: heavily data-dependent and very specific in what it can forecast, it has become one tool in a toolbox of forecasting alternatives. This session will discuss these alternatives and the advantages they offer the modern hybrid IT world.
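To ground the discussion, here is a minimal sketch of the classic queuing-theory forecast the session takes as its starting point: the M/M/1 response-time formula. The service time and utilization values are illustrative assumptions only:

```python
# M/M/1 queue: mean response time R = S / (1 - rho), where S is the
# service time and rho the utilization. This is the textbook model
# that classic queuing-theory forecasting builds on; values below
# are hypothetical.

def mm1_response_time(service_time: float, utilization: float) -> float:
    """Mean response time of an M/M/1 queue (requires utilization < 1)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization)

if __name__ == "__main__":
    s = 0.010  # 10 ms service time, illustrative
    for rho in (0.5, 0.8, 0.9, 0.95):
        print(f"rho={rho:.2f}  R={mm1_response_time(s, rho) * 1000:.1f} ms")
```

The formula's data demands are exactly the downside noted above: it needs accurate per-workload service times and utilizations, which is what motivates the lighter-weight alternatives the session covers.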

Key Takeaways

  • IT has transformed in recent years. Forecasting needs to transform with it.
  • Forecasting needs to be easy and quick
  • Accuracy need not be sacrificed

The session is best for those who:

  • Are early careerists or new to this subject area (Intro/Beginner)
  • Have a working knowledge of the subject area

About the Speaker:

Bob Torz is a Solutions Specialist in the UK. He has been in the field of IT performance and capacity management for over 20 years. Bob regularly presents at conferences and is passionate about the role of capacity management in the modern IT world.

Lessons Learned from a Ransomware Attack

This talk covers a ransomware attack on a medium-size school district (23K students, 4K staff). We start with the timeline of the attack as determined by forensic analysis, cover what was damaged, and then walk through the recovery process. Next we discuss the changes made to avoid and mitigate future attacks. We wrap up with the lessons learned, in the hope that they will help you avoid a ransomware attack and recover more quickly if you do experience one.

After this talk, you will have a better understanding of:

  • how an attack happens
  • what kind of alerts may be symptoms of an attack
  • what to do in case you are attacked
  • what happens after you are attacked, and
  • what actions you can take now to avoid and mitigate a ransomware attack

Ski has worked as a system admin, manager, consultant, professor, and emergency worker since 1982. He is currently a system admin at the Northshore School District in WA and an adjunct professor at Bellevue College. When not busy at a computer, Ski enjoys traveling, kayaking, and backpacking in the wonderful Pacific Northwest.







2021 IMPACT Partners


Presenting Sponsor

Mainframe Track Sponsor

Gold Sponsor

Session Sponsor