The IMPACT Conference
For more than 40 years, CMG’s international conference has been the source for education and peer-to-peer exchange for all things enterprise IT and infrastructure. It is the only conference of its kind where attendees can obtain real-world knowledge and training that is not vendor-run.
IMPACT features sessions on the full scope of digital transformation technologies including Artificial Intelligence and Machine Learning, Observability, DevOps, Performance Engineering, Digital Process, Cloud Native Applications, and the IT infrastructures that support them.
Sessions and Speakers
Each year, CMG brings together leading technology practitioners and vendors for a conference like no other. With an emphasis on peer-to-peer education, the conference facilitates open communication and problem solving for its attendees.
zCX is one of the most exciting features recently introduced in z/OS. With zCX, it is now possible to run distributed applications on z/OS with no changes. We will discuss our first experiences configuring this environment and running a Linux app that interprets SMF data in it.
We’d like to show:
- zCX installation and configuration
- how to build an image that can then be downloaded into zCX (with all its challenges)
- how to deploy a container based on the pre-prepared image
- how to work with the container
- how to persist data in the container
- zCX address space resource consumption (GCP and zIIP)
- it is not hard to configure zCX for the first time
- with Docker and zCX, mainframe users can control some apps that otherwise would be managed by other offices in their company
- Docker is not a toy, but something that users can benefit from
Matteo Bottazzi is a Software Developer at EPV Technologies, where he has actively worked on Docker.
The Purposeful Innovator is an invitation to game-changing business leaders and entrepreneurs to bring their whole and higher selves to ideas that create transformation in that thriving space where integrity and innovation meet. I share a proven framework and process for a purposeful approach to creating technology-based products that solve with compassion some of the world’s most pressing challenges.
These are products built with intention and a purposeful mission, mindful of a greater impact. It’s a call to action for the private sector and entrepreneurs to partner in the purposeful innovation movement toward meaningful and purposeful products and services. Companies, nonprofits, and entrepreneurs alike are slowly waking up to the challenge of moving forward in new ways that are purposeful and human-centered. I invite listeners to join the tribe of purposeful innovators and inventors in building the future we deserve, as I share compelling stories of founders, along with a framework and process for product development that flourishes while solving the kinds of problems that stir one’s passions and invite meaningful contributions.
- Private sector participants and entrepreneurs will receive a call to action to partner in the purposeful innovation movement toward meaningful and purposeful products and services.
- Companies, nonprofits, and entrepreneurs will be helped to wake up to the challenge of moving forward in new ways that are purposeful and human-centered.
- Participants will be invited to join the tribe of purposeful innovators and inventors in building the future we deserve.
Carnellia Ajasin is CEO at Mindkatalyst. She and her team have been intimately involved with many emerging technologies such as AI, Machine Learning, IoT, Augmented Reality, and Blockchain, and are passionate about supporting global entrepreneurs and the growth-startup ecosystem as they align, strategize, and execute purposeful tech product ideas that improve lives.
With cyber-attacks continuing to grab headlines, many enterprises are re-evaluating their resiliency posture, including the role the mainframe needs to play given the criticality of the data and applications running on the platform. This session discusses cyber resiliency at a high level and the techniques that can be applied to address these concerns.
A major challenge for IT operations is protecting business-critical infrastructure and data from the impact of outages and downtime. With cyber-attacks continuing to grab headlines, many enterprises are looking at their resiliency posture, which might include the deployment of air-gapped solutions to deliver the ability to recover from an event that compromises their data. However, determining the incident’s scope and surgically recovering the right data backups can drive up the recovery time required, potentially impacting the business.
We’ll show how a resilient architecture on IBM Z can be developed, including how to identify and recover the right backups of potentially compromised data quickly and easily to minimize business disruption.
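The idea of surgically recovering the right backups can be sketched in miniature. The following is a hypothetical illustration only, not IBM’s Cyber Vault logic: the `Backup` record, its `validated` flag, and `latest_safe_backup` are invented names standing in for whatever corruption-detection metadata a real solution would record.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Backup:
    taken_at: datetime
    validated: bool  # passed data-corruption / integrity checks

def latest_safe_backup(backups, compromise_time):
    """Pick the most recent validated backup taken before the data was compromised."""
    candidates = [b for b in backups
                  if b.validated and b.taken_at < compromise_time]
    return max(candidates, key=lambda b: b.taken_at, default=None)
```

Given an estimate of when the data was compromised, the routine narrows recovery to the most recent backup that both predates the compromise and passed validation, which is the step that shortens recovery time.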
- Understand cyber security risks, impact of cyber attacks and the critical role the mainframe plays
- The needs of a cyber resilient solution from detection to recovery
- Techniques and strategies for developing a cyber resilient solution incorporating the mainframe
Chris Walker is a Senior Product Manager at IBM, and Diego Bessone is from IBM Z Software WW BUE. Both Diego and Chris have been involved in the IBM approach to cyber resiliency incorporating the mainframe, and Diego is a co-author of a recent IBM Redbook on IBM Z Cyber Vault.
Modeling has been a valuable tool for evaluating application performance during design, test, and production for the last 40+ years. However, there are challenges with model construction and usage. This presentation will demonstrate (a) how animation can improve the utility of modeling and (b) how leveraging distributed traces can reduce the time and effort required to build and refresh a model.
Open-source software will be used to demonstrate the value of adding animation to the evaluation of a model. Visualizing application dynamics in real/near time provides more insight into understanding performance compared to a static before/after report. Furthermore, if you extend animation to include interactivity, you now have a “human in the loop” to guide and review “what if” scenarios.
One of the more interesting components of Observability is distributed traces. From a modeler’s point of view, a distributed trace addresses one of the primary challenges of model construction: how a transaction flows through the system. A formal data model will be proposed as a way to capture and reuse the flows represented by distributed traces.
- Modeling is still a valuable way to evaluate and understand application performance across the development lifecycle
- Adding animation and interactivity to a model has the potential to increase the breadth of application modeling consumers
- Observability’s distributed traces have more utility than providing context for real-time and historical data
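A minimal sketch of such a data model, assuming the common span attributes (trace id, span id, parent id, service name); this is an illustration under those assumptions, not the formal model the presentation proposes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Span:
    trace_id: str
    span_id: str
    parent_id: Optional[str]  # None for the root span
    service: str              # component that executed this unit of work
    duration_ms: float

def call_edges(spans):
    """Derive service-to-service call edges (the transaction flow) from one trace."""
    by_id = {s.span_id: s for s in spans}
    edges = set()
    for s in spans:
        if s.parent_id and s.parent_id in by_id:
            edges.add((by_id[s.parent_id].service, s.service))
    return edges
```

The parent/child links in a single trace are exactly the "how does a transaction flow" input a model builder otherwise has to reconstruct by hand.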
Richard Gimarc is an independent consultant specializing in various areas of capacity management, including application profiling/sizing, system modeling, capacity planning, and performance engineering. His career started in software development and progressed into services and consulting. Richard has been an active CMG member for a number of years, both at the national and regional levels, where he has contributed 50+ papers and presentations. His recent work has focused on Digital Infrastructure Capacity Planning.
Organizations moving and managing workloads in the cloud face similar challenges in selecting a cloud data platform and managing a Hybrid Multi-Cloud environment. This presentation will review a case study of applying our modeling and optimization technology to address these challenges.
Organizations moving and managing workloads in the cloud face similar challenges in selecting a cloud data platform and managing a Hybrid Multi-Cloud environment, including:
- How to select an appropriate cloud data platform to meet performance and financial goals? Benchmark tests do not represent the actual workload well and can lead to performance and financial surprises, and tokenization/detokenization overhead differs across cloud data platforms.
- How to reduce the risk of performance and financial surprises in a Hybrid Multi-Cloud environment?
- How to meet SLGs for ETL workloads in a Multi-Cloud environment?
- How to optimize DevOps decisions prior to deploying new applications?
- How to optimize Capacity Management decisions? Increasing demand for resources affects performance and cost, and ineffective workload management in a Hybrid Multi-Cloud environment can be very costly and affect business competitiveness.
- How to optimize FinOps decisions?
- How to verify results? Wrong decisions can trigger performance and financial surprises and can cost C-level executives their jobs.
This presentation will review a case study of applying our modeling and optimization technology to address these challenges:
• How to continuously collect measurement data on premises and on all cloud data platforms
• How to aggregate measurement data into business workloads and build performance, resource utilization, data usage, and financial profiles for each workload on each platform
• How to determine seasonality and the growth of workloads and data volumes
• How to predict the minimum configuration and budget required to meet SLGs for each workload on each platform
• How to optimize resource allocation and workload management across a Hybrid Multi-Cloud environment
• How to automate problem determination: performance and financial anomalies and root-cause determination for all workloads on all platforms
• How to apply modeling and optimization to recommend fixes for those problems
• The impact of tokenization and detokenization on performance and cost
• ETL process optimization
• DevOps process optimization
• FinOps optimization
• How to compare predicted performance and financial results with actual measurement data and organize continuous closed-loop control to reduce the risk of performance and financial surprises; wrong decisions can affect the business and cost C-level executives their jobs
- How to apply modeling and optimization to select an appropriate cloud data platform for Data Warehouse workloads
- How to optimize FinOps and dynamic capacity management for a Hybrid Multi-Cloud environment
- How to detect performance and financial anomalies and organize closed-loop control of the Hybrid Multi-Cloud environment
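As a toy illustration of predicting a minimum configuration that meets an SLG (not the presenters’ actual technology), a selection routine might compare candidate configurations against peak demand. `Config`, `cheapest_meeting_slg`, and the 20% headroom are all assumed names and figures.

```python
from dataclasses import dataclass

@dataclass
class Config:
    platform: str
    nodes: int
    cpu_per_node: float        # usable vCPUs per node
    node_price_per_hour: float

def cheapest_meeting_slg(configs, peak_cpu_demand, headroom=1.2):
    """Return the lowest-cost configuration whose capacity covers peak demand
    plus a safety headroom (20% here, an assumed figure)."""
    feasible = [c for c in configs
                if c.nodes * c.cpu_per_node >= peak_cpu_demand * headroom]
    return min(feasible, key=lambda c: c.nodes * c.node_price_per_hour,
               default=None)
```

In practice the demand figure would come from the aggregated workload profiles described above rather than from a benchmark.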
Modernizing the mainframe is not lift and shift, and it does not mean completely replacing the mainframe. It means moving applications to the cloud, migrating the database to a cloud-based database, and replacing parts of the mainframe applications in an evolutionary way. You can use low-code and no-code platforms to cut down the effort and lessen the risk.
- Cloud Computing
- Low Code Platform
- No Code Platform
Bhagvan Kommadi is Director of Product Engg at ValueMomentum. He has worked on several legacy modernization projects, moving them to the cloud using no-code and low-code platforms.
This presentation provides valuable architecture and business insights for planning the deployment of new-generation wireless networks based on Service Based Architecture (SBA). Emphasis is placed on the convergence of business verticals with business and operational support (B/OSS) systems. Business value propositions with AI strategies are covered, and key technology enablers are outlined, including the adoption of AI/ML models in 5G and beyond wireless network advancements and standards.
- Transform from the traditional way of performing jobs
- Evaluate the emerging trends in wireless domain
- Convergence of industry verticals with business value
M Arjun Valluri is a Sr. Enterprise Architect – Solutions at US Cellular Communications. He combines industry experience with academic research, managing technology domains and applying architectural insights to generate business value.
In the IT industry, testing in production has always been considered a “bad word”. However, we at Capital One have been doing it for over a year and have realized a lot of benefits!
“Game Day” is a concept that Capital One has been utilizing to test in production in order to validate the capacity and resiliency of critical applications. Typically, testing in production can lead to customer impact and outages; however, we have been able to conduct multiple successful exercises with no adverse impact. This session will go over how we are able to safely test in production, the value realized so far, and future plans.
- What is the value of testing in production?
- How can it be done with no adverse impact to users or customers?
- What solutions and techniques are available to make testing in production a reality?
Duane Diggs, Director, Technology Management, Capital One
Duane co-leads the Stability & Engineering Operations (SEO) Game Day operations where he serves as the Program Manager. His responsibilities are to oversee the planning and execution of the Game Day roadmap in partnership with enterprise technology leaders and stakeholders. In addition to his Game Day role, Duane also leads the TOC Technology Governance & Controls.
Duane has been with Capital One for over 15 years with much of his time within Technology focused on:
- Strategy and Solution Delivery
- Risk Management
- Governance and Controls
- Product and Program Management
Duane is based in Richmond, Virginia with his wife, Karen. His interests are golf, fly fishing, cooking, wine, and supporting the Alabama Crimson Tide!
Technology Operations Center (TOC)
Yar co-leads the Stability & Engineering Operations (SEO) Game Day operations, where he serves as the Engineering lead. His responsibilities are to oversee the technical and tactical aspects of the execution and monitoring of the Game Day roadmap in partnership with enterprise technology leaders and stakeholders. In addition to his Game Day role, Yar also leads a pod responsible for providing 24/7 operational support. Yar has been with Capital One for over 2 years, with much of his time within Technology focused on:
- IT Operations
- Proactive monitoring, ML and AIOps solutions
Yar is based in Plano, Texas with his wife and two kids. His interests are Texas Holdem, good whiskey, CrossFit and spending time with the kids!
Like any other technology transformation, k8s adoption typically starts with small “pet projects”: one k8s cluster here, another one over there. If you don’t pay attention, you may end up like many organizations these days, with something that spreads like wildfire: hundreds or thousands of k8s clusters, owned by different teams, spread across on-premises environments and the cloud, some shared, some very isolated.
When we start building applications for k8s, we often lose sight of the larger picture of where they will be deployed and, moreover, what the technical constraints of our targeted environment are.
Sometimes, we even think that k8s is that magician that will make all our hardware constraints disappear.
In reality, Kubernetes requires you to define quotas on nodes and namespaces and resource limits on your pods to make sure that your workload will be reliable. Under heavy pressure, k8s will evict pods to relieve pressure on your nodes, but eviction could have a significant impact on your end users.
How can we proactively test our settings and measure the impact of k8s events on our users? The simple answer to this question is Chaos Engineering.
During this presentation we will use real production stories to explain:
- The various Kubernetes settings that we could implement to avoid major production outages.
- How to define the Chaos experiments that will help us validate our settings
- The importance of combining Load testing and Chaos engineering
- The Observability pillars that will help us validate our experiments
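The eviction behavior referenced above can be approximated in a few lines. This is a simplified model of the kubelet’s documented node-pressure eviction ranking (pods whose usage exceeds their requests first, then lower priority, then larger overage), not actual kubelet code.

```python
from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    mem_request_mi: int  # memory request (Mi)
    mem_usage_mi: int    # current memory usage (Mi)
    priority: int        # pod priority class value

def eviction_order(pods):
    """Rank pods the way kubelet node-pressure eviction roughly does:
    pods over their request go first, then lower priority, then bigger overage."""
    return sorted(
        pods,
        key=lambda p: (p.mem_usage_mi <= p.mem_request_mi,    # over-request first
                       p.priority,                            # lower priority first
                       -(p.mem_usage_mi - p.mem_request_mi))  # largest overage first
    )
```

A chaos experiment that drives node memory pressure lets you verify that the pods actually evicted match the ordering your requests and priority classes imply.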
About the Speaker:
Henrik Rexed is a Cloud-Native Advocate at Dynatrace, the leading Observability platform. Prior to Dynatrace, Henrik worked as a Partner Solution Evangelist at Neotys, delivering webinars and building prototypes to enhance the capabilities of NeoLoad. He has been working in the performance world for more than 15 years, delivering projects in all contexts, including extremely large cloud testing in the most demanding business areas such as trading applications, video on demand, and sports websites. Henrik is also one of the organizers of the Performance Advisory Council conference.
When the cloud server rightsizing algorithm calculates the baseline level for the current year’s application server usage, a seasonal adjustment needs to be calculated and applied by adding the anticipated change, which could increase or decrease capacity usage. We describe the method and illustrate it against real data.
A cloud server rightsizing recommendation generated with seasonality adjustments reflects seasonal patterns, preventing potential capacity issues and reducing excess capacity.
The ability to keep multi-year historical data for the 4 main subsystems of application server capacity usage opens the opportunity to detect seasonality changes and estimate additional capacity needs for CPU, memory, disk I/O, and network. A multi-subsystem approach is necessary because an application is often not CPU-intensive but I/O-, memory-, or network-intensive.
Applying the method daily allows correct downsizing once the peak season passes and the available capacity should be decreased, which is a good way to achieve cost savings.
In the session, the detailed seasonality adjustment method is described and illustrated against real data. The method is based on the author’s SETDS methodology, which treats seasonal variation as an exception (anomaly) and calculates adjustments as variations from a linear trend.
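A minimal sketch of the season-as-anomaly idea, assuming equally spaced observations and a known cycle length; the function names are illustrative, not the SETDS implementation.

```python
def linear_trend(series):
    """Least-squares linear trend over equally spaced observations."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
             / sum((x - mean_x) ** 2 for x in range(n)))
    return slope, mean_y - slope * mean_x

def seasonal_adjustment(history, period):
    """Treat seasonal variation as deviation (anomaly) from the linear trend
    and average that deviation per position in the seasonal cycle."""
    slope, intercept = linear_trend(history)
    residuals = [y - (slope * i + intercept) for i, y in enumerate(history)]
    return [sum(residuals[i::period]) / len(residuals[i::period])
            for i in range(period)]
```

The per-position adjustments are then added to the trend baseline, raising the rightsizing recommendation before a peak season and lowering it afterward.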
- How to build seasonal adjustments into cloud rightsizing
- Get familiar with cloud object rightsizing techniques
Speaker: Igor Trubin
Innovative and experienced System Management specialist with the ability to organize and manage world-class multi-platform Availability and Capacity Management services and teams. Expert in cloud computing optimization, statistical analysis of performance data, queuing theory, modeling, and business-driver-based forecasting for distributed, mainframe, and cloud platforms.
CONFERENCE SESSIONS FROM THE WORLD’S LEADING COMPANIES
EDUCATION & TRAINING FROM TOP INDUSTRY LEADERS
PEER-TO-PEER NETWORKING OPPORTUNITIES
TECH EXPO FEATURING THE LATEST TECHNOLOGIES
WORKSHOPS DESIGNED TO TACKLE REAL-WORLD CHALLENGES
TECHNICAL DEMOS & TRAINING
2021 IMPACT Partners
Mainframe Track Sponsor