January 31 – February 4 | Virtual Conference

IMPACT 2022 featured more than 120 sessions including presentations, Q&A sessions, and networking activities on Performance Engineering, DevOps, Monitoring and Observability, Mainframe, Data Centers, Cloud, and more. 

Join us as we “level up” at IMPACT 2022 with new speakers, updated topics, and new education opportunities.

The IMPACT Conference

For more than 40 years, CMG’s international conference has been the source for education and peer-to-peer exchange for all things enterprise IT and infrastructure. It is the only conference of its kind where attendees can obtain real-world knowledge and training that is not vendor-run.

IMPACT features sessions on the full scope of digital transformation technologies including Artificial Intelligence and Machine Learning, Observability, DevOps, Performance Engineering, Digital Process, Cloud Native Applications, and the IT infrastructures that support them.

Sponsored By


Sessions and Speakers

Each year, CMG brings together leading technology practitioners and vendors for a conference like no other. With an emphasis on peer-to-peer education, the conference facilitates open communication and problem solving for its attendees.

Modernizing Mainframe using Cloud

The modernization of the mainframe is not lift and shift, and it does not mean a complete replacement of the mainframe. It means moving applications to the cloud, migrating the database to a cloud-based database, and replacing parts of the mainframe applications in an evolutionary way. You can use low-code and no-code platforms to cut down the effort and lessen the risk.


  • Cloud Computing
  • Low Code Platform
  • No Code Platform


Bhagvan Kommadi is Director of Product Engineering at ValueMomentum. He has worked on several legacy modernization projects, moving them to the cloud using no-code and low-code platforms.

5G Wireless Networks and Beyond with AI/ML Adoption

This presentation provides valuable architecture and business insights for planning the deployment of new-generation wireless networks based on Service-Based Architecture (SBA). It highlights the convergence of business verticals with business and operational support (B/OSS) systems, covers business value propositions for AI strategies, and outlines key technology enablers, including the adoption of AI/ML models in 5G and beyond wireless network advancements and standards.


  • Transform the traditional way of performing jobs
  • Evaluate emerging trends in the wireless domain
  • Understand the convergence of industry verticals with business value


M Arjun Valluri is a Sr. Enterprise Architect – Solutions at US Cellular Communications. He combines industry experience with academic research, managing technology domains and applying architectural insights to generate business value.

“Game Day” Production Testing

In the IT industry, testing in production has always been considered a “bad word.” However, we at Capital One have been doing it for over a year and have realized a lot of benefits!

“Game Day” is a concept Capital One has been using to test in production in order to validate the capacity and resiliency of its critical applications. Testing in production can typically lead to customer impacts and outages; however, we have been able to conduct multiple successful exercises with no adverse impact. This session will cover how we are able to safely test in production, the value realized so far, and future plans.

Key Takeaways

  • What is the value of testing in production?
  • How can it be done with no adverse impact to users or customers?
  • What solutions and techniques are available to make testing in production a reality?
  • DevOps
  • IT Operations
  • Proactive monitoring, ML and AIOps solutions

Yar is based in Plano, Texas with his wife and two kids.  His interests are Texas Holdem, good whiskey, CrossFit and spending time with the kids!

The Freedom of Kubernetes requires Chaos Engineering to shine in production

Like any other technology transformation, k8s adoption typically starts with small “pet projects”: one k8s cluster here, another one over there. If you don’t pay attention, you may end up like many organizations these days, with something that spreads like wildfire: hundreds or thousands of k8s clusters, owned by different teams, spread across on-premises and the cloud, some shared, some very isolated.

When we start building applications for k8s, we often lose sight of the larger picture: where they will be deployed and, moreover, what the technical constraints of our target environment are.

Sometimes we even think that k8s is a magician that will make all our hardware constraints disappear.

In reality, Kubernetes requires you to define quotas on nodes and namespaces, and resource limits on your pods, to make sure that your workload will be reliable. Under heavy pressure, k8s will evict pods to relieve pressure on your nodes, but eviction can have a significant impact on your end users.
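As a concrete illustration (the names and values below are hypothetical, not from the session), pod resource limits and a namespace quota are declared in standard Kubernetes manifests along these lines:

```yaml
# Hypothetical container resource spec: requests drive scheduling,
# limits cap usage; pods exceeding memory limits are candidates for eviction.
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
---
# Hypothetical namespace quota capping a team's total footprint.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```

Whether values like these are right for a given workload is exactly what the chaos experiments discussed below are meant to validate.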

How can we proactively test our settings and measure the impact of k8s events on our users? The simple answer to this question is Chaos Engineering.

During this presentation we will use real production stories to explain:

  • The various Kubernetes settings that we can implement to avoid major production outages
  • How to define the chaos experiments that will help us validate our settings
  • The importance of combining load testing and chaos engineering
  • The observability pillars that will help us validate our experiments

About the Speaker:

Henrik Rexed is a Cloud-Native Advocate at Dynatrace, the leading observability platform. Prior to Dynatrace, Henrik worked as a Partner Solution Evangelist at Neotys, delivering webinars and building prototypes to enhance the capabilities of NeoLoad. He has been working in the performance world for more than 15 years, delivering projects in all contexts, including extremely large cloud testing in the most demanding business areas such as trading applications, video on demand, and sports websites. Henrik is also one of the organizers of the Performance Advisory Council conference.

Cloud Servers Rightsizing with Seasonality Adjustments

When the cloud server rightsizing algorithm calculates the baseline level for the current year's application server usage, a seasonal adjustment needs to be calculated and applied by adding the anticipated change, which could increase or decrease the capacity usage. We describe the method and illustrate it against real data.

A cloud server rightsizing recommendation generated with seasonality adjustments reflects seasonal patterns, preventing potential capacity issues and reducing excess capacity.

The ability to keep multi-year historical data for the four main subsystems of application server capacity usage opens the opportunity to detect seasonal changes and estimate additional capacity needs for CPU, memory, disk I/O, and network. A multi-subsystem approach is necessary because an application is often not CPU-intensive but rather I/O-, memory-, or network-intensive.

Applying the method daily allows downsizing correctly once the peak season passes and the available capacity should be decreased, which is a good way to achieve cost savings.

In the session, the detailed seasonality adjustment method is described and illustrated against real data. The method is based on the author's SETDS methodology, which treats seasonal variation as an exception (anomaly) and calculates adjustments as deviations from a linear trend.
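The core idea, fitting a linear trend and treating the average deviation from it in each season as the adjustment, can be sketched in a few lines of Python. This is an illustrative sketch of the general technique, not the SETDS implementation itself; the data and names are hypothetical.

```python
def linear_trend(values):
    """Least-squares slope and intercept of y over x = 0..n-1."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def seasonal_adjustments(values, period):
    """Average deviation from the linear trend at each position in the period."""
    slope, intercept = linear_trend(values)
    residuals = [y - (slope * x + intercept) for x, y in enumerate(values)]
    return [
        sum(residuals[i::period]) / len(residuals[i::period])
        for i in range(period)
    ]

# Two years of quarterly CPU utilisation (%): mild growth plus a Q4 peak.
usage = [40, 42, 44, 60, 48, 50, 52, 68]
print(seasonal_adjustments(usage, period=4))  # → [-2.0, -3.0, -4.0, 9.0]
```

The Q4 adjustment comes out strongly positive, so a rightsizing recommendation for the next Q4 would add it to the trend-based baseline instead of averaging the peak away, and would likewise downsize in the off-peak quarters.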

Key Takeaways

  • How to build seasonal adjustments into the cloud rightsizing
  • Get familiar with cloud object rightsizing techniques

Balancing Kubernetes performance, resilience & cost by using ML-based optimization – a real-world case

Properly tuning Kubernetes applications is a daunting task, often resulting in reliability and performance issues, as well as unexpected costs. We describe how ML-based optimization enabled a digital service provider to automatically tune Kubernetes pods and dramatically reduce the associated cost.

Properly tuning Kubernetes microservice applications is a daunting task even for experienced Performance Engineers and SREs, often resulting in companies facing reliability and performance issues, as well as unexpected costs.

In this session, we first explain Kubernetes resource management and autoscaling mechanisms and how properly setting pod resources and autoscaling policies is critical to avoid over-provisioning and impacting the bottom line.
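As background for that discussion, pod autoscaling policies are themselves declarative objects; a sketch of a standard HorizontalPodAutoscaler manifest follows (the deployment name, replica bounds, and CPU target are hypothetical, not the case study's actual settings):

```yaml
# Hypothetical autoscaling policy: scale a deployment between 2 and 10
# replicas, targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: billing-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: billing-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Values such as the replica bounds and utilization target are precisely the knobs an ML-based optimizer can tune against cost and reliability goals.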

We discuss a real-world case of a digital provider of accounting & invoicing services for SMB clients. We demonstrate how ML-based optimization techniques allowed the SRE team and service architects to automatically tune the pod configuration and dramatically reduce the associated Kubernetes cost. We also describe the results of incorporating resilience-related goals in the optimization goals.

Finally, we propose a general approach to tuning pods and autoscaling policies for Kubernetes applications.

Managing 50 Billion Things

Today, each IT person in the enterprise manages, on average, less than 250 devices. With the advent of IoT, that ratio needs to grow closer to a million to one to be manageable. And as enterprises find more and more devices connecting to their networks, the challenges for administrators grow. This presentation will discuss how we’ll get there utilizing standardized, interoperable technologies in security, device management, and automated onboarding.

Learn how to utilize standardized, interoperable technologies

  • GlobalPlatform: The standard for secure digital services and devices in 2020
  • The opportunities and challenges facing the IoT ecosystem


Your Mess Needs a Mesh

With cloud-native container-based microservice deployments, there can be challenges with consistent and reliable support for L7 metrics, rate limiting, traffic load splitting, circuit breaking, canary deployments, etc. Service Meshes address these issues. The Service Mesh framework takes care of the service discovery, service identity, security, traffic flow management, and policy enforcement of each service.

Microservices are becoming more and more prevalent in modern-day cloud-native deployments. Even with cloud-native container-based microservice deployments, there can be challenges with consistent and reliable support for L7 metrics, rate limiting, traffic load splitting, circuit breaking, canary deployments, etc. Service Meshes address these issues. The Service Mesh framework takes care of the service discovery, service identity, security, traffic flow management, and policy enforcement of each service. In this session, the attendee will learn what common application/service pain points (mess) a Service Mesh can resolve, what a Service Mesh is, how it integrates with Kubernetes, and how various microservices can leverage a Service Mesh for things like service discovery and traffic flow management. Linkerd, Envoy, and other service mesh types and components will be discussed.
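As one concrete example of the traffic flow management a mesh provides, meshes such as Istio express weighted traffic splitting (e.g. for canary deployments) declaratively. The sketch below is illustrative; the service name and weights are hypothetical, and the `v1`/`v2` subsets would be defined in a companion DestinationRule:

```yaml
# Hypothetical Istio VirtualService sending 90% of traffic to the stable
# version of a service and 10% to a canary version.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Because the split lives in mesh configuration rather than application code, the same pattern works across every microservice the mesh fronts.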

Key Takeaways

  • Learn what common application/service pain points (mess) a Service Mesh can resolve
  • Learn what a Service Mesh is, how it integrates with Kubernetes
  • Understand how various microservices can leverage a Service Mesh for things like service discovery and traffic flow management


Shannon McFarland is a technical executive with a proven track record of leading highly diverse and distributed teams to execute on complex projects. He leads by example and excels at motivating individuals to achieve their own goals while they successfully contribute to the goals of their organization. Shannon is an author and has 25+ years of experience in leadership and deep technical expertise in a variety of subject areas, including public/hybrid cloud deployment, physical and virtual networking, Kubernetes, Docker, application service meshes (Istio/Linkerd), open source advocacy, IPv6, IP multicast, and VDI.

Finding Your Way in the Cloudy DBMS Jungle

Driven by research and adopted by industry, two major IT domains have experienced a tremendous increase in service offerings over the last few years: database management systems (DBMS) and cloud computing. Since the one-size-fits-all paradigm is no longer valid, the relational DBMS landscape has been extended with over 250 (and still growing) NoSQL and NewSQL DBMSs that promise to provide the non-functional features of high performance, scalability, elasticity, and high availability for any data-intensive use case. On the resource level, cloud resources have become the preferred option for operating modern data-intensive applications in order to enable “unlimited scalability” and elasticity. In consequence, the cloud market has become heterogeneous, with over 20,000 public cloud resource offerings. Needless to say, cloud resources are also a common way to operate DBMSs, especially as distributed NoSQL and NewSQL DBMSs promise to be cloud-native. Moreover, many established DBMS providers extend their service portfolios with Database-as-a-Service (DBaaS).

While these DBMS and cloud advancements enable tailor-made data storage solutions, finding these solutions is a challenging process that involves in-depth benchmarking of the relevant non-functional features. Reliable benchmarking of cloud-hosted DBMSs requires multi-domain knowledge, reproducibility, and multi-level result processing. In this talk, we report on our long-term experience in benchmarking cloud-hosted DBMSs by highlighting the key impact factors for significant evaluations. In addition, we discuss relevant DBMS performance, scalability, elasticity, availability, and cost metrics. These concepts are integrated into our benchmarking-as-a-service platform and will be demonstrated with a set of real-world cases, addressing challenges such as: Which cloud provider offers the best DBMS performance/cloud cost ratio for my application? Will I get more performance with the next DBMS version? From a performance perspective, which should I choose: a self-hosted DBMS or DBaaS?

Key Takeaways

  • Assess the key impact factors for benchmarking cloud-hosted DBMS
  • Specify comprehensive evaluations for cloud resources and distributed DBMS
  • Measure higher-level DBMS metrics: scalability, elasticity, and availability


Daniel Seybold is a researcher in the area of cloud computing with a focus on distributed databases in the cloud. Further interests cover cloud orchestration, model-driven engineering, and performance evaluations of distributed systems. Daniel is currently Chief Technical Officer of benchANT.

Mainframe innovation: unlock any data for any application

Discover how to manage data complexity and access mainframe unstructured data and external data sources (e.g., Oracle, Teradata) through a virtualization layer with IBM Data Virtualization Manager for z/OS. Find out how to exploit this solution to innovate traditional mainframe applications and adapt to new business requirements and trends in a data gravity model.

Virtualization creates a unified data environment that allows access to data, transparently to the user, directly in the environment in which it resides. This eliminates the need to create redundant copies of data, simplifying its management and costs. Simplifying all this is not only a matter of cost; it also makes it possible to guarantee a higher quality of the exposed data.

Data is of great importance to our business. DVM allows users and applications to access read/write data on z/OS and enterprise environments in real time, making it accessible to any application. This gives the z/OS platform greater openness to heterogeneous external environments (such as SQL Server, Teradata, and Oracle). DVM takes advantage of the inherent security of the z/OS platform, and by exploiting zIIP technology it reduces CPU consumption, a hot topic nowadays. DVM also includes a number of query optimization features, such as parallel input/output operations and MapReduce functionality.

Since all customers are unique and have different needs, there are some other reasons that are important for us. With DVM we would simplify the architecture and management of the virtualization solution. The adoption of DVM can also facilitate the migration of COBOL applications that are still using Oracle Client for z/OS, which is now out of support for Oracle databases above release 12.1. At Sogei we have heterogeneous databases; with DVM we would expand the opportunities to access all data available to us. This can lead to new applications and new services for our customers. We will talk about the configuration and some use cases we have tested.

Key Takeaways

  • Understand DVM basics
  • Explore how DVM can help innovate the mainframe without moving data
  • Prove how DVM can help innovate the mainframe without moving data

About the Speakers:

Chiara Baldan, Data Engineer, SOGEI – Chiara works as a Data Engineer at Sogei. Sogei develops and runs IT services able to manage the complexity of the public system, and aims to build a citizen experience that is new, simple, fast, and, above all, completely digital through innovation, skills development, and investment in new technologies. Chiara works in the “Service & Technology Innovation Hub” in the area of “Change and Service Management” and is part of the “Change and Release Management” team. The mission of the team is the governance and management of the move toward a cloud-oriented data center and the design and implementation of processes from a cloud perspective. She has over 15 years of experience in mainframe environments and has been named an IBM Champion for the last two years.

Laura Guidi, Data Engineer, SOGEI – Laura works as a Data Engineer at Sogei. Laura works in the “Service & Technology Innovation Hub” in the area of “Cross Service & Solution”. She is part of the “Data Platform Solution” team. The mission of this team is to build solutions for Data Management, Data Analytics, and Data Lake to satisfy business needs and to contribute to the evolution of data platforms. She has over 20 years of experience in Db2 and mainframe environments. Currently, she is working with Data Virtualization and Data Protection.

Francesco Borrello is a z Data & AI Technical Sales Specialist at IBM Technology Italy. He joined IBM in 2011 and, during his initial learning process, obtained a master’s degree in “Centralized Systems for Cloud Computing”. His main mission is to help customers drive new business opportunities by adopting both traditional Hybrid Data Management / Analytics and innovative ML / AI solutions on IBM Z, along with new business practices. He has spoken at several international conferences and technical user groups, and he is the author of the ‘IBM Data Virtualization Manager for z/OS’ Redbook.


2021 IMPACT Partners


Presenting Sponsor

Mainframe Track Sponsor

Gold Sponsor

Session Sponsor