.conf2016: Leveraging the DNA of Digital Transformation


This was my first time attending a Splunk .conf, so I was eager to get a feel for the event: to gauge the excitement about the products and to get a sense of how Splunk might succeed with its ambitious plans for growth in an ever more competitive market.

David Hodgson, September 30, 2016

The family goes to Orlando

Boasting 3 days of in-depth training, 185 technical sessions, inspiring keynotes each day, and the booths of 70 technical partners, .conf2016 did not disappoint the nearly 5,000 attendees in terms of the intensity of the event and the velocity of interactions between us all. Obviously CEO Doug Merritt had primed his troops, because in the kickoff keynote he quoted what they say internally at Splunk: “If you ever want to be inspired, go out and talk to a customer”.

The fervor that the Splunk user base feels for the product brings back memories of VMware and SAP when they were cool and promised change and progress. Perhaps because it’s a weird election cycle, Millennials are looking to technology rather than politics to shape their future. I don’t know for sure, but this conference definitely felt like the best sort of family gathering, where people actually liked each other, wanted to collaborate on building solutions and wanted to bring new members into the fold.

The conference was held in the Dolphin & Swan at Walt Disney World, Orlando. By the time we got to the Tuesday event night, a roaming party around the Hollywood Studios park, the atmosphere was very much that of one big family having fun together.

Learning how to get machines learning IT

Splunk is the clear market leader in providing a pragmatic platform for machine learning. The results have been real and beneficial, whether it’s detecting intrusions from unusual data access patterns or predicting trends that can be addressed to optimize IT service delivery. A big theme at .conf2016 was the power of Machine Learning and how it is shaping Splunk’s products.

In practice Machine Learning is very different from what we usually think of as Artificial Intelligence. AI seeks to build computer models that can emulate the functions of human brains. We expect that an AI would perceive its environment and exhibit goal-seeking, purposeful behavior that is understood by humans. Ideally it would interact with humans to both receive input and augment our decision-making abilities. By contrast, Machine Learning is a sub-area of AI focused on pattern recognition that allows the system to “learn” and predict based on history, but without there being a rational explanation for that response that a human could understand. Machine Learning relies on the consumption of masses of granular data that can be processed with statistical analysis to make predictions and uncover “hidden insights” about relationships and trends. These “insights” are not necessarily causalities that have an explanation humans could understand and replicate.
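To make that contrast concrete, here is a minimal sketch of the kind of history-based pattern recognition Machine Learning relies on (my illustration in Python, with made-up numbers, not Splunk's implementation): a value is flagged simply because it deviates from historical norms, with no human-readable explanation attached.

```python
# Minimal sketch of history-based anomaly detection (illustrative only,
# not Splunk's implementation): flag a value that deviates sharply from
# the historical mean, measured in standard deviations (z-score).
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Return True if new_value is more than `threshold` standard
    deviations away from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Hourly counts of data-access events for one user (hypothetical data).
history = [102, 98, 110, 95, 105, 99, 101, 103]
print(is_anomalous(history, 104))  # False: consistent with past behavior
print(is_anomalous(history, 480))  # True: flagged, with no "explanation"
```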

As a solution, Splunk differentiates itself from similar platforms like the ELK stack (Elasticsearch, Logstash, Kibana) and Hadoop mainly through its functional completeness and ease of use. But it is proprietary and somewhat expensive to use, with costs scaling based on the amount of data ingested daily. To accommodate customers’ concerns about growing costs and their desire to embrace open source technologies, Merritt announced at .conf2016 that Splunk Labs was enabling integration with Elasticsearch, Spark, and Kafka, showing Splunk’s willingness to adapt to what customers are asking for in the field. The announcement was well received and is probably the answer both to customer needs and to how Splunk can ensure continued popularity.

From a Syncsort perspective, our Ironstream product has been focused on getting data to Splunk directly, but customers have increasingly asked us to support a Kafka pipe to split data between Splunk and Hadoop. With Splunk’s new open architecture announced at .conf2016, we now plan to follow suit.
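For a sense of what such a pipe looks like, here is a hedged sketch using the kafka-python client (the broker address, topic name and record fields are hypothetical): a record is published once to Kafka, and Splunk-bound and Hadoop-bound consumers each read it independently.

```python
# Sketch of publishing one record to a Kafka topic that both a Splunk
# pipeline and a Hadoop pipeline can consume (kafka-python client; the
# broker address and topic name are hypothetical).
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

record = {"source": "mainframe", "smf_type": 30, "jobname": "PAYROLL1"}
producer.send("mainframe-logs", value=record)  # each consumer group reads independently
producer.flush()
```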

Splunking IT Operations

One of the significant areas of success for Splunk has been monitoring tools for IT infrastructure. The typical users are enterprise IT teams that need to monitor a broad array of platforms. They need to contextualize events by gathering data from connected platforms and using Splunk to do basic time-based correlation and advanced pattern recognition. The rate of environmental change in hardware, software and connected devices makes traditional tools almost impossible to integrate, and Splunk Enterprise offers a much simpler and more effective approach.
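As a rough illustration of basic time-based correlation (my own Python sketch with invented events, not Splunk's engine), the idea is to group events from different platforms whose timestamps fall within the same short window, so an alert on one platform can be read in the context of what the others were doing:

```python
# Sketch of basic time-based correlation (illustrative; Splunk does this
# internally at scale): group events from different platforms whose
# timestamps fall within the same window.
from datetime import datetime, timedelta

events = [  # hypothetical multi-platform event stream
    {"ts": datetime(2016, 9, 27, 10, 0, 5),  "host": "linux-web01", "msg": "HTTP 500 spike"},
    {"ts": datetime(2016, 9, 27, 10, 0, 9),  "host": "zos-prod",    "msg": "CICS abend"},
    {"ts": datetime(2016, 9, 27, 11, 30, 0), "host": "linux-db02",  "msg": "disk warning"},
]

def correlate(events, window=timedelta(seconds=30)):
    """Yield groups of events whose timestamps lie within `window` of each other."""
    events = sorted(events, key=lambda e: e["ts"])
    group = [events[0]]
    for e in events[1:]:
        if e["ts"] - group[-1]["ts"] <= window:
            group.append(e)
        else:
            yield group
            group = [e]
    yield group

for group in correlate(events):
    if len(group) > 1:  # events on different platforms in the same window
        print([(e["host"], e["msg"]) for e in group])
```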

For the last two years Syncsort has partnered with Splunk to add the mainframe platform to those monitored, and this has proven to be an essential ingredient for some of the world’s biggest IT organizations that have mainframes.

On the first day Merritt introduced the concept of Data as the DNA of IT, driving evolution and change. On Wednesday, Andi Mann carried the theme further in his keynote, “Re-Imagining IT”, saying:

“Digital transformation needs to be in your DNA; not passionately pursuing it is an existential challenge and threat to your individual and organization’s future success”.

Mann focused his discussion on the new 2.4 release of IT Service Intelligence (ITSI) that was unveiled at the conference. The main new capabilities of value are:

  • Anomaly detection using machine learning
  • Adaptive thresholds that learn what the norms and thresholds should be for any time of the day, week, etc. (a rough sketch of the idea follows this list)
  • Intelligent events with contextualized data wrapped in them
  • End-to-end visibility of business services richly visualized for LOBs in the new “glass tables”
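As promised above, here is a rough sketch of the adaptive-threshold idea (illustrative Python with invented data, not ITSI's actual algorithm): learn a separate baseline for each hour of the day from history, and derive the threshold from that baseline rather than from a single static value.

```python
# Sketch of the adaptive-threshold idea (illustrative, not ITSI's actual
# algorithm): learn a per-hour-of-day baseline from history, then set the
# alert threshold relative to that baseline instead of one static value.
from collections import defaultdict
from statistics import mean, stdev

def learn_thresholds(samples, k=3.0):
    """samples: list of (hour_of_day, value). Returns {hour: threshold}."""
    by_hour = defaultdict(list)
    for hour, value in samples:
        by_hour[hour].append(value)
    return {
        hour: mean(vals) + k * (stdev(vals) if len(vals) > 1 else 0.0)
        for hour, vals in by_hour.items()
    }

# Hypothetical response-time samples: quiet overnight, busy at 9am.
samples = [(2, 40), (2, 42), (2, 38), (9, 300), (9, 320), (9, 310)]
thresholds = learn_thresholds(samples)
print(thresholds[2], thresholds[9])  # different norms for different hours
```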

At .conf2016, Andi Mann discussed Syncsort’s role in making Big Iron data available to Splunk for Big Data analytics.

Syncsort also unveiled our latest work: integration of mainframe data with ITSI 2.4. We demonstrated this with glass tables visualizing an online banking system from a mobile device all the way to a mainframe running CICS and DB2. The Syncsort ITSI module is available for download from Splunkbase at no cost.

Splunking Security

One of the most widely adopted use cases for Splunk is security and compliance. As usual, you can roll your own solution very effectively on the Splunk Enterprise platform, or you can add pre-built power features with Splunk’s premium app, Enterprise Security (ES).

In her keynote, Haiyan Song, SVP of Security Markets, described how alert-based security is no longer adequate and stated that Machine Learning is now required to address internal and external threats. Splunk’s answer is User Behavioral Analytics, or UBA.

At the conference Splunk announced new features in ES 4.5 and UBA 3.0 that were aimed at providing CISOs and their teams with operational intelligence. The highlights were:

  • The Adaptive Response initiative allowing partners to openly integrate SIEM technology
  • Glass tables available for advanced visualizations of the underlying data
  • Enterprise hardening of the Caspida acquisition to create UBA as a product

Song described how UBA has the ability to understand and correlate user sessions across platforms and devices. She also brought on Richard Stone from the UK Ministry of Defence, who explained how they are leveraging Splunk ES and UBA to create a DaaP (Defence as a Platform) ecosystem. To Stone this is a single information environment in which anyone with the appropriate credentials can connect from any point, enter a familiar environment, and access any information. He challenged us to “Dare to Imagine”, saying that the biggest constraint in security is our imagination.

Syncsort again extends these solutions to the mainframe, offering data integration to ES for RACF via the Ironstream product.

Splunking DevOps

A new concept unveiled at .conf2016 is a solution for DevOps. This is perhaps not surprising given Andi Mann’s background, and he will be the champion for this new solution. It uses the underlying capabilities of Splunk Enterprise to take a data-integration approach to deliver three areas of value:

  • End-to-end visibility across every component in the DevOps tool chain
  • Metrics in glass tables to show LOBs that code meets quality SLAs
  • Correlation of business metrics with code changes to drive continual improvement

Splunking the Mainframe

One of the greatest things for me about the show was the number of people interested in the Syncsort booth. Even people who were not familiar with mainframes were interested to learn how we are Splunking the Mainframe!

Our CEO Josh Rogers delivered a phenomenal theCUBE interview that explained our strategy of moving data from Big Iron to Big Data (BIBD) platforms. Our deliverables and direction resonate with customers and prospects alike, who are as excited about what we are doing as they are about Splunk!

During his appearance on theCUBE at .conf2016, Syncsort CEO Josh Rogers defined the Big Iron to Big Data (BIBD) challenge, where customers need to take core data assets created through transactional workloads on the mainframe and move them to next-generation environments for analytics.

With the pace that things are moving across this market, I am looking forward to returning to .conf in 2017, when it will be held in Washington D.C., my home town. I know that both Splunk and Syncsort will have learned more and developed more, inspired by our customers. I can’t wait to see what we will have co-created and what evolves next from the data-DNA of IT.


A Dream of Great Big Data Riches – Harvesting Mainframe Log Data


In today’s new world of big data analytics, traditional enterprise companies have jewels hidden within their walls, embedded in legacy systems. Among the most precious stones, but perhaps some of the best hidden, are the various forms of mainframe log data.

David Hodgson, June 20, 2016

z/OS system components, subsystems, applications and management tools continually issue messages, alerts, status and completion data, writing them to log files or making streams available via APIs. We are talking hundreds of thousands of data items every day, and much more from big systems. This “log data” generally comes under the heading of unstructured or semi-structured data and has not traditionally been seen as a resource of great value. In some cases it is archived for later manual research if required; in many cases it just disappears! In the case of SMF records, it has traditionally been consumed by expensive mainframe-based reporting products that unlock the value, but at great cost, and you still need special expertise to do anything with it.

What if all this potentially valuable data could be collected painlessly in real time, made usable by a simple query language and presented in easy-to-read visualizations for use by operational teams? This sounds like a fantasy, but it is what Syncsort and Splunk have achieved through their partnership and products.

Nuggets and gemstones

Of all the data sources we are talking about, SMF (System Management Facility) records are the richest trove, with over 150 different record types that can be collected. SMF provides valuable security and compliance data that can be used for intrusion detection, tracking of account usage, data movement tracking and data access pattern analysis. SMF also provides an abundance of availability and performance data for the z/OS operating system, applications, web servers, DB2, CICS, WebSphere and the MQ subsystem.

But there is much additional information in feeds like SYSLOG, RMF (Resource Measurement Facility) and Log4j. And there are the more open-ended sources that could be considered log data, like the SYSOUT reports from batch jobs.

The gem collector and now Lapidarist too

Syncsort’s solution for the collection of mainframe log data is called Ironstream, and it is a super-efficient pipeline to get data into Splunk Enterprise or Splunk Cloud. Designed from the start to be lightweight with minimum CPU overhead, Ironstream is a data forwarder that converts log data into JSON field/value pairs for easy ingestion. We built it in direct response to Splunk customers who wanted to round out their enterprise IT picture with critical mainframe data for an end-to-end, 360° view.
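To picture what those field/value pairs look like, here is a hedged sketch (the field names, host and token are hypothetical, and Ironstream's actual schema and transport may differ) of a mainframe security event rendered as JSON and posted to Splunk's HTTP Event Collector:

```python
# Sketch of a mainframe log record as JSON field/value pairs, sent to
# Splunk's HTTP Event Collector (HEC). Field names, host and token are
# hypothetical; Ironstream's actual schema and transport may differ.
import json
import requests

event = {
    "event": {
        "smf_record_type": 80,        # SMF type 80 carries RACF security events
        "jobname": "CICSPROD",
        "userid": "JSMITH",
        "action": "DATASET_ACCESS",
        "result": "FAILED",
    },
    "sourcetype": "mainframe:smf",
}

resp = requests.post(
    "https://splunk.example.com:8088/services/collector/event",
    headers={"Authorization": "Splunk 00000000-0000-0000-0000-000000000000"},
    data=json.dumps(event),
    verify=False,  # demo only; use proper TLS verification in practice
)
print(resp.status_code)
```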



In addition to all the data sources listed above, Ironstream offers access to any sequential file and to USS files. This gives very comprehensive coverage of any source of log data that an organization might be producing from an application. In addition, we offer an Ironstream API that any application can use to send data directly to Splunk if it’s not already writing it out somewhere.

Of course, something has to be too good to be true here, doesn’t it? Well yes: one potential issue is the sheer volume of data that is available and the cost of storing it. While all of it could be valuable, most companies are going to want to focus selectively on the items that are most valuable to them now. To address this requirement, our Ironstream engineers became digital lapidarists. In the non-digital world, lapidarists are expert artisans who refine precious gemstones into wearable works of art. With the latest release of Ironstream, we now offer a filtering facility that allows you to refine the large volumes of mainframe data by selecting individual fields from records and discarding the rest. By customer request, we have on our roadmap an even more powerful “WHERE” select clause that will allow you to select data elements across records based upon subject or content.
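Conceptually, the filtering works like the sketch below (my Python illustration, not Ironstream's configuration syntax, and the field names are invented): keep only the selected fields from each record and discard the rest, with the planned "WHERE" clause adding a record-level predicate on top.

```python
# Conceptual sketch of field-level filtering (not Ironstream's actual
# configuration syntax): keep only selected fields, drop the rest, and
# optionally apply a record-level "WHERE"-style predicate.
KEEP_FIELDS = {"smf_record_type", "jobname", "cpu_seconds"}  # hypothetical names

def filter_record(record, keep=KEEP_FIELDS, where=None):
    if where is not None and not where(record):
        return None  # record excluded entirely
    return {k: v for k, v in record.items() if k in keep}

raw = {"smf_record_type": 30, "jobname": "PAYROLL1",
       "cpu_seconds": 12.4, "step": 3, "region_kb": 8192}

# Keep three fields of five; the WHERE predicate selects only type-30 records.
print(filter_record(raw, where=lambda r: r["smf_record_type"] == 30))
```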

Why didn’t I know this?

There is a fast moving disruption happening in the world of IT management and not everyone wants you to know it. Open source solutions and new analytical tools are changing everything.

For the last 40 years, complex point-management tools have been used by highly skilled mainframe personnel to keep mainframes running efficiently. Critical status messages are intercepted on their way to SYSLOG and trigger automation to assist the operational staff. All this infrastructure has made most of this log data unnecessary for operations and mainly of archival interest, if of any interest at all. The most valuable SMF data, usable for capacity planning, chargeback and other use cases, has been kept in expensive mainframe databases and processed by expensive reporting tools.

In parallel to the disruption being driven by emerging technologies, there is a special skills crisis in the mainframe world: the experts who have been managing these systems for 40-50 years are retiring, and not enough people are being trained to replace them.

Fortunately, in the confluence of these two trends a solution is born. By leveraging this new ability to process mainframe log data in platforms like Splunk and Hadoop, a new generation of IT workers can assist “Mainframe IT” by proactively seeing problems emerge and assisting in their resolution. In the first wave of adoption this will help offset the reduced availability of mainframe skills, but it won’t obviate the need for them completely and it won’t replace the old point-management tools. Yet.

As this technology matures, and machine learning solutions become proven and trusted, we will see a new generation of tools emerge. Based on deep learning, these will replace both the old mainframe tools and the personnel who used them but now want to be left in peace by the lake. My prediction is that as this becomes a reality, we will also see a move of analytics technology back onto the mainframe platform. The old dream of “autonomic computing” will become real, and a new mainframe will in effect evolve: one that tunes and heals itself.

Find the Treasure!

Syncsort plans to be there; in fact, we are leading the way. We offer the keys to the treasure chest for anyone who wants to follow our map to find the dream of great riches!


New Beginnings on Old Bedrock: Linking Mainframe to Big Data


Following 14 years at CA Technologies, where I held various senior management positions, I joined Syncsort in April of this year. I wanted to become a part of the leading company that is linking Big Iron to Big Data. What will that union yield for the industry, and for me?

David Hodgson, May 23, 2016

I really enjoyed working at CA and learned a lot over the years there. CA is a vibrant, energetic place to work. The employees are smart, the products are good, the installed customer base is amazing and a lot of innovation is occurring. Yes, on the mainframe side of the house too. Last year alone saw three entirely new mainframe products launched, and I am proud to have been a part of the team that did that.

Syncsort is an incredibly interesting company that I had been watching for a while: a forty-year-old mainframe company doing some of the most valuable innovation in the big data space for large enterprises. A few years ago, the company re-invented itself as the company to move mainframe data to analytics environments. Strategic partnerships with Hortonworks, Cloudera, MapR, Dell and Splunk, along with some great innovation by the development teams, have transformed Syncsort into a player in the Big Data ecosystem. In fact, Syncsort announced record 2015 results and promoted Josh Rogers to CEO to lead the company forward and fully realize the vision and potential we have for the next few years.

In my last few years at CA I was very focused on the Big Data space and interested in the problems that CA could solve there. When Syncsort founder and previous Mainframe GM Harvey Tessler decided he wanted to retire, I talked to Josh and the rest of the Syncsort management team, and we all agreed that I would be a great fit to take over the reins.

A few weeks into the role, I am thrilled with the decision to join. I love being part of a smaller company again, where everything is more agile because of the small teams, shared mission and sense of urgency. We can do so much at Syncsort from our position of strength on the mainframe and our expertise in data management.

Having now met with several customers, I have confirmed the pattern of needs that we can address. Big Data platform ITOA solutions and business analytics are now the norm. Although the market is evolving quickly and requirements are changing, everyone is doing it. Those who think it’s still just talk are missing out big time. Most of these initiatives are not started by Mainframe IT, but in companies with mainframes, the enterprise teams are now at the point of implementation where they realize that they need the mainframe data for an effective or complete solution.

The broad use cases that we see include things like real-time monitoring of infrastructure or business services, and real-time awareness of access activity to help spot breaches in security or compliance. What these cases, and others, have in common is a deeper contextual analysis that is impossible with traditional point-monitoring tools. Done right, these solutions can be more effective than current practices and reduce cost by saving on labor, penalties and software.

These same customers currently indicate that they are unlikely to dump the traditional management tools, but I actually wonder about that myself. As practices in data gathering and machine learning mature I think we will quickly see the start of next-gen automation that may make the old tools redundant. In the case of the mainframe this may become a necessity when, as an industry, we lose the skills of the baby boom generation and fail to replace the depth of knowledge they have.

By joining Syncsort I have brought myself to the coal-face, where we are mining the black stuff out of the Big Iron legacy systems. As one of those whose career has been built on the strength of the mainframe, and its continual re-invention, I hope that I can be a part of the next round of evolutionary changes. Changes that will enable the mainframe to serve the industry through a new lease of life. New beginnings on old bedrock. The decade of ITOA and the dawning of AI applied to business systems.


Protecting sensitive mainframe data with CA Data Content Discovery



How a new software solution from CA announced recently at CA World can help your business conquer the mainframe data mountain and protect your organization’s most sensitive data.

David Hodgson, December 8, 2015

Your mainframe has been collecting data for years — probably decades — and you rely on it to run your business and the apps that serve your customers. But over the years, it’s collected mountains of records and files, many of which contain sensitive data that requires special controls stipulated by government regulations.

With so much information residing in your mainframe, it’s hard to locate regulated or sensitive data when you need it (and doing so takes far too much time). You may not be limiting internal access appropriately, and copies may end up somewhere else, without proper access control. At last count, 400 mainframes worldwide were connected directly to the Internet and accessible to anyone via a login screen. So yes, while the mainframe remains the most securable platform, it isn’t 100 percent immune to data breaches.

Recently at CA World in Las Vegas, CA Technologies announced two new mainframe solutions to help organizations become more agile. One of these, CA Unified Infrastructure Management for z Systems, supports our DevOps portfolio and helps customers accelerate problem resolution with a unified view across mainframe and distributed systems.

In this blog I’d like to focus on the other new solution, CA Data Content Discovery, which supports our Security portfolio. Bottom line: if you don’t know where your sensitive data is, you can’t protect it. CA Data Content Discovery scans your mainframe data to identify the location of data covered by regulations such as PCI, PII or HIPAA, so you can make business decisions around securing, encrypting, archiving or deleting those records.

This isn’t just good business sense; it will help you address potential audit findings and risks.
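To make the idea of data-pattern scanning concrete, here is a toy sketch (mine, not CA's scanning engine) that uses regular expressions to flag records containing likely payment-card numbers (PCI) or US Social Security numbers (PII):

```python
# Toy illustration of data-pattern scanning (not CA Data Content
# Discovery's engine): regex patterns that flag likely PCI/PII values.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # crude PAN match
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(text):
    """Return the list of pattern names found in `text`."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(scan("cust=4111 1111 1111 1111 ship=NJ"))  # ['credit_card']
print(scan("ssn 078-05-1120 on file"))           # ['us_ssn']
print(scan("nothing sensitive here"))            # []
```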

If it ain’t broke, why fix the mainframe?

But why now? After all, the adage “if it ain’t broke, why fix it” often applies to mainframes. But these days, mainframes are not only tied to mission-critical applications, but those applications now face your customers through the web and mobile apps.

In the application economy, the mainframe plays a key role in how apps perform — and how happy your customers are.

Unknown unknowns: trust isn’t a strategy 

The stakes for mainframe security have changed. In a recent blog post, Jeff Cherrington offers a colorful history lesson and metaphor comparing mainframe security to the evolution of the fortifications of medieval castles.

The plain fact is that today’s application economy puts different demands on mainframe data: everyone wants in! The Chief Digital Officer wants access to systems of record for his pet big data project, or some backup project didn’t follow all the necessary controls. The fact is that mainframe data is moving off the platform when it shouldn’t be, and when it legitimately needs to, let’s at least know the location of that sensitive data and apply the right controls. Companies that make security a priority understand that blind trust and “nothing will happen” isn’t a strategy.

With the right tools and processes, you can be confident to leverage the mainframe as part of your digital transformation while safeguarding sensitive and regulated data. CA Data Content Discovery has three distinct advantages:

  • Find: You can locate regulated and sensitive data using data-pattern scanning, helping to gain insight into the magnitude of potential data exposure on z Systems.
  • Classify: Once you’ve found the data, you can prove to auditors that you’re compliant with regulations (controls are checked by data type and content).
  • Protect: Critical data never leaves the z/OS platform. Integration with CA ACF2, IBM RACF and CA Top Secret for z/OS means you can quickly visualize who has access to regulated or sensitive data.


For more details, check out the Data Content Discovery page.

It’s not enough these days for organizations to embrace software — they need to use it strategically. And that includes the mainframe.

In the era of digital transformation, organizations need to be more agile — and this is possible, even with legacy systems. With the right tools, people and processes, it’s possible to bring your mainframe along on the digital transformation journey.



How mobile-to-mainframe apps help you get more from your mainframe


Mainframes are here to stay and are more relevant than ever before in the application economy – so it’s time your business thought about “reframing the mainframe”.

David Hodgson, October 19, 2015

Too often, in our haste to rush toward the future, we eschew what we did in the past. But as companies race towards an increasingly connected world where customers are more demanding than ever, there’s a machine in the background, underpinning the business and driving some of the most important apps we use every day on our smartphones. That’s right, folks: it’s the mainframe.

Mainframes are here to stay — after all, millions of lines of COBOL code continue to run the most important business applications in the world. That’s why we need to reframe the way we think about the mainframe.

In fact, these days mainframes are expected to do far more — from providing easy data access for big data projects to supporting cloud and mobility. Meanwhile, the specialized skill sets required by the mainframe team are becoming even scarcer, and most IT departments are under pressure to keep costs down.

That’s where reframing the mainframe comes in: recognizing that success in the application economy will require investment in the mainframe, and that the investment will directly grow your business. The rise of mobile-to-mainframe applications is a key driver here.

The mainframe at play in the application economy

In my first blog post in this series, I talked about the curse and blessing of connectedness for the mainframe in the application economy. It’s this same connectedness that is empowering customers like never before.

And in my second blog post, I talked about how the customer is always right — so it’s incumbent on mainframers to create flexibility for the mainframe platform of the future. The interplay between cloud and the mainframe has presented new opportunities, including Linux on z Systems.

The three largest banks in the U.S. already have 50 million customers using mobile banking — and that adoption is growing at a rate of 15 percent annually. Even checking your bank balance on a mobile phone requires calls back to the critical transaction-processing components of the data center.

To connect mobile-to-mainframe apps, developers need processes and tools that span the divide. This helps to speed up time to market and improves quality with software change management.

Past legacy equals future innovation

Moving forward, the most successful organizations will evolve from supporting legacy business processes to driving innovation.

How will they do this? They’ll develop applications exploiting mobile-to-mainframe architectures that are managed as a whole, not within silos, allowing greater visibility into operations and performance.

This empowers IT staff to perform tasks and access relevant information to accelerate service delivery. Workloads can be more easily orchestrated to run on the best-suited server — the cloud, distributed or mainframe — and data is transformed into insights to drive opportunities.

Learning to become Agile

And it’s already happening. The proportion of professionals with four or more years of Agile adoption at their firms has increased significantly, according to analysts. Many teams are already leveraging core Agile practices: 20 percent are using Agile from ideation to deployment (including DevOps), while 51 percent are using Agile in the upstream.

“Behavioral change is the biggest barrier to adoption; properly skilled business product owners are also a big impediment for Agile adoption success,” according to analysts. “Organizations are finally realizing that the lack of cross-functional teams is one of the top impediments for successful Agile adoption.”

Managing mainframe costs as a strategic asset

What we often hear from customers is that it’s hard to justify the cost of their mainframe platform. Mainframes, however, can be managed as a strategic asset that creates value — if you create mainframe platform flexibility for the future.

According to analysts, the top three benefits of becoming Agile are better business and IT alignment, faster delivery of solutions and more opportunities for midcourse corrections.

The right testing tools can help you deliver higher-quality applications faster, while keeping costs under control. Integrated Application Quality and Testing Tools from CA Technologies, for example, are designed to help you test, diagnose and fix problems in mainframe applications quickly — before issues affect application performance.

By reframing the mainframe and connecting mobile-to-mainframe applications, you can reduce costs, improve efficiency and reallocate resources to more strategic business initiatives. You’ll maximize what you already have, while integrating new service offerings — turning your mainframe platform into a competitive differentiator with potential new revenue streams.



Colorful fall insights for reframing the mainframe in the application economy


Usher in the changing of the season with CA’s Virtual Summit in partnership with InformationWeek and gain insight into how to run your business better.

David Hodgson, September 28, 2015

Like clockwork, fall is in the air and this is my favorite time of the year: the crisp, cool air, the changing colors and a sense of acceleration as we head towards the holiday season.

For me, the fall ushers in a fun, frenzied but fulfilling pace at work: so much has to get done, and of course we all love it, making sure we are delivering on our commitments to the business and planning for next year. As I speak to many of my colleagues and customers, I am learning that mainframers are in a similar frenzy, figuring out questions such as:

  • Will our current capabilities around monitoring and automation be sufficient for new growth and compliance demands of my organization?
  • How do new options around Linux on the mainframe offer us greater flexibility in our hybrid data center plans?

Join CA and InformationWeek on September 30

So, just in time, I am really excited about the upcoming Virtual Summit CA is hosting with InformationWeek, “Mainframe Reframed for the Application Economy,” on Wednesday, September 30, 2015.

The change of colors is a nice metaphor for our clients’ new mission, internal transformation and conversation around “mainframe reframed.” Should we really be spending our time continuing the debates and defensive stances about the mainframe?

A much more productive conversation is one about the business value and growth the mainframe can help you deliver. That’s why I urge you to take a day to connect and catch up with us.

Here is a quick sneak peek of sessions you shouldn’t miss:

  • Linux & Open Source: Driving New Innovation and Value on your Mainframe: Join a panel of industry experts from IBM, SUSE, Brown Brothers Harriman and CA as they discuss the new Linux and open source options, and what the new Open Mainframe Project consortium means to you.
  • Reframing the Mainframe to Thrive in the Application Economy: Join Gary Barnett of Ovum and me as we cover a variety of issues that combine ideas to improve overall enterprise performance, including advancements in DevOps, mobility, data management, network performance, data security and mainframe performance.
  • De-Siloing End-to-End Ops in an Increasingly Trans-Platform World: Join a panel of experts to learn how IT organizations are unifying ops across platforms to keep their multi-platform front-ends, mainframe back-ends and hybrid clouds all working together in harmony.
  • Apps and Ops: Keys to a Superior Customer Experience: To deliver great customer experiences, IT has to develop awesome apps and ensure the availability and performance of those apps with equally awesome ops. Get actionable insight into addressing some of your DevOps challenges as we explore the apps and ops approach to reframing your mainframe.

Last but not least: you asked and we listened. We are committed to you, our user community, and to making sure you are successful as you reframe the mainframe in your organization.

The Virtual Summit is designed to help you catch up on the latest updates to key solutions that help you run your business. In addition to keynotes, the virtual summit hosts 15 booths with live updates, demos, videos and on-demand tech talks with product experts spanning application development, testing, automation, infrastructure, operations, performance, storage and security.

The summit environment will be live for 12 months so you can come back for quick refreshers or find what you need.

Grab some coffee, tea or apple cider and join us on September 30, 2015. It will be informative and fun.


How to run the mainframe of the future


Customer demands are putting new pressures on all areas of IT, including the mainframe. Linux on the mainframe is nothing new but it could just be the key to teaching an old dog new tricks.

David Hodgson, September 17, 2015

I’m sure at some point you’ve walked into a store or a restaurant and seen a sign that says, “The customer is always right.” If you think about it for a minute, how many times has the establishment you’re dealing with proven this wrong? Probably more often than you’d like. And how did you vote? With your feet (and word of mouth) most likely.

Well, in the application economy, things have just gotten a lot more interesting. Not only is the customer always right, they also have the power to determine how a company is going wrong. We’re living in an era where the customer is more pivotal to the success of a business than ever before.

In my last blog post, I talked about the curse and blessing of connectedness for the mainframe in the application economy. It’s this same connectedness that is empowering customers like never before.

In a white paper that I recently authored, “Mainframe Reframed for the Application Economy,” I discussed the challenges of customers driving change in the application economy. Because customers are more likely to interact with a business through an app than through a person, businesses literally can’t afford to get it wrong. One estimate suggests that a quarter of users will abandon an app after a delay of just three seconds.

To meet this new level of customer demand, businesses have had to provide unprecedented levels of transparency, availability and reliability. With this transformation come new pressures on all areas of IT, including the mainframe platform and the apps and back-end systems it hosts.

The open mainframe of the future

That’s why it’s incumbent on mainframers to create flexibility for the mainframe platform of the future. The interplay between cloud and the mainframe has presented new opportunities for the mainframe, including Linux on z Systems. Linux on the mainframe? “Tell me something new,” you say. I know.

Making the mainframe more accessible than ever before

The Open Mainframe Project, a collaborative effort coordinated by the Linux Foundation to grow Linux on the mainframe, and of which CA is a founding Platinum sponsor, will open up the setting of its direction to a broader community.

In addition, the launch of IBM LinuxONE – a portfolio of hardware, software and services solutions for the enterprise – will make the Linux platform on the mainframe more accessible.

Addressing the barriers to adoption

In the past, the three main barriers to adoption of Linux on the mainframe were:

  • Use of the proprietary IBM hypervisor (z/VM) for virtualization
  • Lack of support for open source components
  • The high initial cost of the mainframe hardware

The announcement of LinuxONE addresses all three of these barriers by allowing IT to deploy Linux on KVM, use the more common open source components and, most importantly for any business decision maker, avoid the high initial cost of a mainframe through usage-based pricing.

The bigger picture

CA, along with leading industry counterparts and academic institutions, is a platinum founding member of the Open Mainframe Project. The aim is to use Linux’s strengths in Big Data, mobile processing, cloud computing and virtualization to advance mainframe Linux tools and technologies and increase enterprise innovation.

The focus areas of the project address some of the key challenges organizations are facing when it comes to the sheer volume of transactions and data they’re dealing with in their data centers. These include scalability, reliability, performance and security.

So unless you want to be cursed with negative customer feedback, start to discover the blessings hidden in these new opportunities for the mainframe to meet and exceed customer demand.

The sign stating the obvious about the customer being right may not be hung up in a virtual sense, but it should always be in the back of companies’ minds as they have less and less physical interaction with their customers.

Join us at CA World 2015

If you want to learn more about new innovations on the mainframe and how to create great customer experiences, join us at CA World 2015, November 16-20, 2015 in Las Vegas, where we’ll be hosting a panel discussion on the Open Mainframe initiative and much more. Click here for the CA World Mainframe Session Guide.