With the upheaval of the economic downturn came a spate of mergers, acquisitions, divestitures, splits and buy-outs. The ensuing chaos of the resulting technology portfolios cannot really be overstated. Many surviving companies are just a mess. They not only have an absolute nightmare of a technology integration problem, but they’ve let go of the folks who knew anything at all about the portfolio (or worse, re-badged them to an outsourcing partner).
In normal times this may not be a big deal. But these aren’t normal times.
Companies that are thriving, or going to thrive, are those nimble enough not just to respond to market changes but to actually lead their customers to want their products. Consumerization is the buzzword mentality. Digitization is the buzzword methodology. If you can peer through the fog of consultantese and deliberate obfuscation, these are real and needed efforts by the business in the 20-teens to stay relevant: they have to know their market, interact and engage with it, and create the right products for the right customers who have the right level of engagement with the services they’re buying. But beyond that, Technology in the 20-teens is no longer just a cost center. Technology should be at the forefront of developing new business, new lines of revenue and new customer experiences using the latest technology. All of that requires more than a basic level of responsiveness, flexibility, insight and expertise from Technology using technology.

How does Technology use technology to enable this agility when the portfolio is a collection of unknown unknowns, known unknowns and a whole heaping pile of known crap? And let’s not forget that the only person skilled in the unknown unknowns walked out the door to “pursue other opportunities” during the last re-org. And those known unknowns? They’re now managed by the C Team, re-badged into GiantOutsourcingCorp, which now runs your systems, apps, data and everything else.
This is worse than merely having a mess with no one to clean it up, and thus being unable to respond to business changes. It comes down to a total lack of stewardship of enterprise technology strategy, along with the assets and work products that enable it. The company doesn’t know what technology it has, how much it costs, or how to use it to succeed in the market.
That’s a pretty big deal.
Unicorns and Gonkulators
Sadly, there isn’t a silver bullet. No one will ride in on the SOA Unicorn to save the day. The purveyors and evangelists of the various -aaS offerings won’t be able to fix it all. It requires sorting out the mess and re-establishing the sustaining capabilities that went away during the chaos. And that is not an easy thing to do. It involves clarity, checks, culture, skills… in short, people and process. Of the People-Process-Technology triad, people is the hardest, followed closely by process.
Enter the fancy-sounding IT program that aims to fix the problem. We can call it “IT Optimization” or “Application Rationalization” or “Super-Duper Cost Cutting Project,” but in the end it has to involve a re-assertion of stewardship over enterprise assets (I include systems, architectures and processes in the term ‘asset’). This isn’t just about cutting cost.
A Brief Aside on Stewardship
Let’s not forget that in logic and psychology the term “Rationalization” means the “art of making excuses.” Our businesses can’t afford the routine blame game over who created the mess. This can’t be IT versus the business unit. Get that part straight first or the rest will simply fail. Stewardship is more than which billing code is assigned the cost or savings. It is a cooperative effort to ensure that when a business unit needs to react to a market change, Technology is there with an understanding of the business, knowledge of the systems supporting that business, and the ability to rapidly effect change to the underlying technology. Stewardship involves both short- and long-term strategy. It requires business leader input as well as Technology input. It is part of what enables Technology to move beyond taking orders and be proactive in business development.
Rationalizing It Away
Too often, efforts at Rationalization involve merely cutting systems, and therefore (it is assumed) cost, from the technology portfolio. By looking at utilization metrics we can observe that X servers are running well below optimal capacity; we should therefore cut those servers and consolidate. This approach can work up to a point, and frankly that process is part of any Rationalization effort. We certainly need a good, reliable server inventory and a mechanism for obtaining reliable metrics. But looking at, say, CPU utilization alone overlooks the fact that business applications used by human beings run on these systems. Different pieces of an app, driven partly by the distributed computing craze of the early 2000s, will be running in different physical and logical locations with different usage at different times. We may learn more about a database server from disk or network I/O than from memory or CPU utilization. Batch jobs and month-end close tasks will have different usage profiles than reporting servers. We need to take all of these things into account.
We need to follow a methodical, deliberative approach to collecting the right metrics, for the right type of asset, over the right period.
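As a small illustration of "the right metrics, for the right type of asset," the sketch below scores each server with a metric and an aggregation chosen per role, rather than a one-size-fits-all CPU average. The sample readings, role names and role-to-metric mapping are all hypothetical, just one reasonable reading of the idea:

```python
from statistics import mean

# Hypothetical samples collected over some window. Which metric matters
# depends on the server's role: DB servers -> disk I/O, app servers -> CPU,
# batch hosts -> peak load rather than average.
samples = {
    "db-01":    {"role": "database", "disk_io_pct": [62, 71, 58, 90]},
    "app-01":   {"role": "app",      "cpu_pct":     [4, 6, 3, 5]},
    "batch-01": {"role": "batch",    "cpu_pct":     [2, 1, 95, 98]},  # month-end spike
}

# Per-role choice of metric and aggregation (an assumption, not a standard).
PROFILE = {
    "database": ("disk_io_pct", mean),
    "app":      ("cpu_pct", mean),
    "batch":    ("cpu_pct", max),  # batch boxes are sized for peak, not average
}

def utilization(server):
    """Return (metric_name, aggregated_value) appropriate to the server's role."""
    metric, agg = PROFILE[server["role"]]
    return metric, round(agg(server[metric]), 1)

for name, server in samples.items():
    print(name, *utilization(server))
```

Note that `batch-01` would look nearly idle on a plain average, while the peak view shows why it exists.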
Figuring Out What We Don’t Know
Assuming we collect those metrics correctly, what do they tell us? We only know the usage profile by server type, size, location and so on. We still don’t really know what runs on those systems. There is the likely-forgotten application architecture to take into consideration. We lost the knowledge of how those systems are configured, how they’re interrelated, who is using them, for what, and how often. The business knows the URL for their mission-critical application, or which client to launch to run the reports, or whatever it is. On the other end, our outsourcing partner can tell us that Oracle is running here and Datastage is over there and SAP is here and this Siebel cluster is using these configured SAN nodes. It is that piece in the middle we need to re-establish before we rationalize and optimize our way to happiness.

We need to re-align the activities, roles and processes of the business, as they use their applications, with the systems those applications utilize. This can be a long process of re-creating application or system architectures piece by piece. Server inventories can help, as can interviews with business units about which applications they use or own. The mapping of applications to the systems that support them is the magic sauce in this overall process (if there is any magic sauce to it at all). This may take several painful, day-long workshops with business units, their IT partners or representatives, and the network/server/infrastructure folks. You may have to step through every machine in the inventory and sort out who owns what, what runs where, what the interfaces are, and where the data lives. I actually recommend this approach. You would be amazed at what you discover in terms of forgotten applications, pieces of defunct applications, duplication and waste. The business application owners will be amazed at what it takes to run their pet ERP tool.
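The inventory-to-application mapping exercise can be sketched in miniature. Everything here is illustrative: the inventory rows stand in for a real server inventory, and `app_map` stands in for the knowledge recovered in the workshops; hosts that map to no application are exactly the "forgotten or defunct pieces" candidates:

```python
# Hypothetical server inventory (in practice: CMDB export, discovery scans).
inventory = [
    {"host": "ora-01", "notes": "Oracle DB"},
    {"host": "web-07", "notes": "unknown"},
    {"host": "sap-03", "notes": "SAP app tier"},
]

# Recovered from business-unit interviews: which application each host supports.
app_map = {"ora-01": "Order ERP", "sap-03": "Order ERP"}

apps = {}     # application -> list of supporting hosts
orphans = []  # hosts no one claims: forgotten apps, defunct pieces, waste
for row in inventory:
    app = app_map.get(row["host"])
    if app is None:
        orphans.append(row["host"])
    else:
        apps.setdefault(app, []).append(row["host"])

print(apps)     # {'Order ERP': ['ora-01', 'sap-03']}
print(orphans)  # ['web-07'] -> feed back into the next workshop
```

The orphan list is the agenda for the next day-long workshop; each pass through it shrinks the "unknown unknowns."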
With utilization in hand and knowledge of what runs where and who owns what, we need one more critical piece to complete the picture: what does this stuff cost? Outsourced situations will often have rate cards that detail what is charged for each piece of hardware or service. For internally supported infrastructure, we still need to determine some measure of cost per server, per resource unit, per GB, per… something. It may have to take the form of estimation or extrapolation. But rates need to be defined for us to understand the scale and scope of what we’re dealing with in terms of dollars. These rates can be mapped to the server inventory and cross-checked against AP invoice data, for example.

If we know which servers support Application XYZ, and we know the cost for those servers, storage, management services and so on, then we can paint a picture of what Application XYZ costs in terms of physical infrastructure. We can begin to create visualizations of our application landscape across the business value chain. We’ll be able to describe the applications, the business units they belong to, the tools and systems that enable them, and the costs to operate them.
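The rate-card rollup described above can be sketched like this. The rates, tiers and server-to-application assignments are invented for illustration; a real exercise would pull them from the outsourcer's rate card or internal cost estimates:

```python
# Hypothetical monthly rate card ($/month per unit).
rate_card = {"vm_small": 150.0, "vm_large": 600.0, "san_gb": 0.25}

# Server inventory joined with the app mapping from the workshops.
servers = [
    {"host": "ora-01", "app": "Order ERP", "tier": "vm_large", "san_gb": 2000},
    {"host": "sap-03", "app": "Order ERP", "tier": "vm_small", "san_gb": 500},
    {"host": "rpt-02", "app": "Reporting", "tier": "vm_small", "san_gb": 100},
]

def monthly_cost(s):
    """Server cost = compute tier rate + allocated SAN storage at the per-GB rate."""
    return rate_card[s["tier"]] + s["san_gb"] * rate_card["san_gb"]

# Roll server costs up to the application level.
app_cost = {}
for s in servers:
    app_cost[s["app"]] = app_cost.get(s["app"], 0.0) + monthly_cost(s)

print(app_cost)  # {'Order ERP': 1375.0, 'Reporting': 175.0}
```

Cross-checking these rollups against AP invoice totals, as suggested above, is what turns the estimates into numbers the business will actually believe.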
The Big Reveal
Work with procurement or vendor-partners and add cost data for application licenses and SaaS subscriptions (itself no small feat to obtain) and… presto! This is the opportunity to show the business that their little ERP tool actually has a huge footprint of servers, all at <x% utilization, costing $x per month to operate, with a single user who hasn’t logged in for 3 months. Nobody is still around who remembers why it was architected to support a company the size of Exxon-Mobil, and, no, your ERP tool doesn’t need computational power sufficient to model nuclear reactions, but at least we have a working picture of what it is and what it costs. And, oh by the way, we have 8 other tools in the portfolio that perform the same function, owned by teams A, B and C.
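Once license and subscription costs are folded in, the duplication and idle-application findings fall out of simple grouping. A minimal sketch, with an entirely hypothetical portfolio:

```python
# Hypothetical portfolio view after adding license/SaaS costs to infrastructure costs.
portfolio = [
    {"app": "Order ERP", "function": "order mgmt", "monthly_cost": 1975, "active_users": 0},
    {"app": "LegacyOMS", "function": "order mgmt", "monthly_cost": 900,  "active_users": 12},
    {"app": "Reporting", "function": "reporting",  "monthly_cost": 275,  "active_users": 40},
]

# Group by business function to surface tools doing the same job.
by_function = {}
for a in portfolio:
    by_function.setdefault(a["function"], []).append(a["app"])

duplicates = {f: names for f, names in by_function.items() if len(names) > 1}
idle = [a["app"] for a in portfolio if a["active_users"] == 0]

print(duplicates)  # functions served by more than one tool
print(idle)        # paid-for applications nobody is logging in to
```

These two lists are the raw material for the Rationalization decisions in the next section: consolidate the duplicates, retire the idle.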
The enterprise can now make decisions, using data instead of groping blindly, on where and how to Rationalize.
The Hard Work REALLY Begins
Technology is now in a position to make informed, credible, actionable recommendations to clean up the mess. Moreover, this work relieves the day-to-day pressure of operating a cluster of crap and frees Technology’s strategy makers to begin thinking about what they’d like to do next.
The job isn’t finished. This isn’t a fire-and-forget exercise. But the mess is on its way to being sorted out. And there’s another piece to this puzzle that is even more complex: how do you actually sustain this clean environment?