Six Easy Pieces: How to Do Cost of Ownership Analysis Better

If cost of ownership analysis is a painful exercise for IT organizations, why has almost every company done it (and continued to do it) multiple times? Simply because management requires an accurate understanding of current IT costs and strengths so they can better assess new ideas and technologies. In this article, we will identify six key elements of effective cost of ownership analysis, which you can use to improve the accuracy and eliminate the frustration associated with this necessary step in your IT evolution.

1) Analyze Platforms, Not Servers

First evaluate the current "platforms" within your environment, including all servers of all types, in order to simplify the process.

One of the most difficult things to "get right" in an analysis of this type is an exact match between a given technology and the associated costs. The easiest way to do this is not to limit the technology scope to a few machines or a single new application but to expand it to match all the technology in the IT budget. Limiting the scope makes cost of acquisition simple to determine, but it makes every other cost almost impossible to quantify without controversy. A platform approach will result in the development of a new "view" of the IT budget that is platform based. The advantage of this approach is that the total in this view should match the total in the budget. This places the entire cost discussion on solid footing - the IT budget - and allows the process to be managed dispassionately, a key to later acceptance of the results. It also gives the study team tremendous leverage if discussions should wander to "I think this amount is too high for platform A." If the amount is reduced for platform A, it must be raised for platform B. What does B think of that?
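To make this zero-sum dynamic concrete, the sketch below (in Python, with hypothetical platform names and dollar figures that are not from any study) builds a platform-based view and checks that it always reconciles to the total IT budget:

# Hypothetical platform-based "view" of an IT budget (all figures illustrative).
IT_BUDGET = 10_000_000  # total annual IT budget, in dollars

platform_view = {
    "mainframe": 3_200_000,
    "unix": 2_600_000,
    "x86": 3_400_000,
    "other": 800_000,
}

def reconcile(view, budget):
    """Verify the platform view sums to the IT budget - the 'solid footing' test."""
    total = sum(view.values())
    if abs(total - budget) > 0.01:
        raise ValueError(f"View total ${total:,.0f} does not match budget ${budget:,.0f}")

def reallocate(view, from_platform, to_platform, amount):
    """Lowering one platform's share must raise another's by the same amount."""
    adjusted = dict(view)
    adjusted[from_platform] -= amount
    adjusted[to_platform] += amount
    return adjusted

reconcile(platform_view, IT_BUDGET)

# "This amount is too high for platform A" - fine, but the budget total is fixed,
# so any reduction for one platform has to be absorbed by another.
adjusted = reallocate(platform_view, "x86", "mainframe", 400_000)
reconcile(adjusted, IT_BUDGET)
print(adjusted)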

2) Focus on a Representative Application and Include All the Pieces

Next, let's consider a new business-critical application or workload that requires platform selection. By definition, a critical application will require careful design, careful sizing, careful maintenance, operation, support, and disaster recoverability. It may also require new or dedicated infrastructure, but at a minimum, it will tax existing infrastructure. Once again, the key to success is to not limit the view to a subset of components. The "view" developed in the previous step should facilitate this type of analysis.

Each of these components, and their associated costs, should be included in any cost of ownership comparison. Over the past ten years, our group within IBM Lab Services has been doing IT Systems and Storage Optimization ("Scorpion") Studies that focus on this type of view and component-based analysis. Our findings show that a typical ratio of production Web, application, and database servers to "everything else" is about one to one. This means that any analysis that omits those other components for support, maintenance, disaster recovery, etc. may miss half of the real costs. The discrepancy grows for very large critical applications and is largely why our industry hasn't done so well sizing many new enterprise application suites. Vendors feed this controversy to gain competitive advantage. We've all heard the stories.
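As a back-of-the-envelope illustration of that one-to-one ratio, the following sketch (hypothetical component list and costs, not figures from any Scorpion study) shows how much a production-only analysis can miss:

# Hypothetical cost breakdown for one business-critical application (illustrative only).
production = {            # what an acquisition-focused analysis usually counts
    "web_servers": 120_000,
    "app_servers": 180_000,
    "db_servers": 250_000,
}
everything_else = {       # what is often left out
    "dev_test_servers": 150_000,
    "dr_failover_servers": 220_000,
    "backup_and_storage": 90_000,
    "monitoring_and_mgmt": 60_000,
}

prod_total = sum(production.values())
other_total = sum(everything_else.values())

print(f"Production servers only: ${prod_total:,}")
print(f"Everything else:         ${other_total:,}")
print(f"Missed by a production-only analysis: "
      f"{other_total / (prod_total + other_total):.0%} of the real cost")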

3) Consider Practical Capacity, Not Vendor Ratings

System capacity and performance can quickly become a very tedious and esoteric discussion, and in many cost of ownership efforts, it does. This can be avoided. Our experience is that the most important aspect of performance analysis within cost of ownership is not which vendor claim or benchmark is used as a base, but rather (a) what system utilizations are "normal" in your current environment, and (b) what is a reasonable expectation for the future? Often, distributed server utilizations are very low, and there is a good reason for it: an underutilized server requires no capacity planning. If average server utilizations in your environment are low, model a future state of 2 or 3 times the current utilization for each component in the possible solution. No higher.

Most cost analyses are considered part of a technology acquisition process, so higher future-state utilizations are assumed. This is particularly true with the rise of virtualization, which is almost always assumed in cost of ownership comparisons. Transitioning from a non-virtualized to a virtualized server environment has some significant advantages, including higher potential utilization. Use any reasonable performance metric - expected utilizations are far more important.
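The sketch below shows one way to apply that rule of thumb; the server counts and utilization figures are hypothetical, and the point is simply that measured utilization, not a vendor rating, drives the sizing:

import math

# Hypothetical current-state measurements for a distributed server pool (illustrative only).
current_servers = 200
current_utilization = 0.12   # 12% average CPU utilization, measured in-house
target_utilization = 0.35    # a ceiling believed attainable with virtualization and effort

def future_state_servers(servers, utilization, growth_factor, target):
    """Servers needed if demand grows by growth_factor (2-3x, no higher) and
    consolidation raises average utilization to the target level."""
    future_demand = servers * utilization * growth_factor  # demand in "full-server" units
    return math.ceil(future_demand / target)

for growth in (2, 3):
    needed = future_state_servers(current_servers, current_utilization, growth, target_utilization)
    print(f"{growth}x today's demand: about {needed} servers at {target_utilization:.0%} "
          f"utilization, versus {current_servers} today at {current_utilization:.0%}")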

Don't assume world-class utilization numbers unless you know what kind of effort it will take to attain them. Capacity must be managed, and that has a cost. IBM's System z mainframe environments typically demonstrate this fact quite well. Mainframes usually run at very high utilizations around the clock. They can do this because the level of internal standardization and automation is much higher than on other platforms. Other platforms will eventually attain these levels, but that is still years of vendor development away.

4) Don't Ignore Labor Costs to Protect the Innocent

The most difficult topic within cost of ownership is undoubtedly the cost of labor.

High Full Time Equivalent (FTE) ratios have been an industry target for years, and most IT professionals can quote the current best practice and describe how they are exceeding it. In a down economic cycle, most staff see nothing positive in quantifying the cost of labor for "their" platform. Therein lies a problem. IT infrastructure support organizations have been managing to these ratios for years using two basic strategies: (a) improve efficiency; or (b) push work onto other parts of the IT organization. The extent to which strategy (b) is used differs by platform for a variety of reasons, but the result is the same: any cost of ownership analysis that limits labor calculations to IT infrastructure support headcount will likely miss major portions of the real support costs and skew the results.

A good solution to this problem is an approach similar to item one. The same kind of "view" is now developed for the assignment of labor, with the underlying organization chart as the foundation. Consider the entire IT organization and apportion every group that is not truly platform neutral (and even individuals within an otherwise neutral group, like network support) to the appropriate platform labor category. The results will look quite different from industry-published norms. They will be higher - up to 2 times or more on x86 platforms - and will reflect true insight into cost. Because the resulting labor cost numbers cross organizational lines, no one group will feel responsible for them, or for lowering them.
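A sketch of that apportionment follows; the organization chart, headcounts, platform splits, and server counts are all hypothetical, and the only point is that groups outside infrastructure support still get charged to the platforms they actually serve:

# Hypothetical org chart: group -> (headcount, share of that group's time by platform).
org_chart = {
    "x86_server_support": (10, {"x86": 1.0}),
    "unix_support":       (6,  {"unix": 1.0}),
    "mainframe_support":  (4,  {"mainframe": 1.0}),
    "storage_team":       (8,  {"x86": 0.5, "unix": 0.3, "mainframe": 0.2}),
    "dba_team":           (12, {"x86": 0.6, "unix": 0.3, "mainframe": 0.1}),
    "help_desk":          (15, {"x86": 0.8, "unix": 0.15, "mainframe": 0.05}),
}
installed_servers = {"x86": 600, "unix": 120, "mainframe": 2}

fte_by_platform = {platform: 0.0 for platform in installed_servers}
for headcount, split in org_chart.values():
    for platform, share in split.items():
        fte_by_platform[platform] += headcount * share

for platform, fte in fte_by_platform.items():
    # The naive ratio counts only the dedicated support team; this apportioned
    # ratio counts everyone whose work actually lands on the platform.
    print(f"{platform}: {fte:.1f} apportioned FTEs, "
          f"{installed_servers[platform] / fte:.1f} servers per FTE")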

A side benefit to the process stems from the two strategies often used to manage FTE ratios - productivity improvement and narrowing of responsibility. If the FTE ratios are changed significantly by the new "view" of the organization, the need for productivity tools will be evident. Resistance to the process will be lessened and, again, buy-in should be improved.

5) Quantify QoS in a Way that Makes Sense

Quality of Service (QoS) is an elusive topic since it has so many aspects that differ in importance between companies, but some general trends can give guidance. In the years that we have been working with customers doing these studies, we've seen an alarming trend toward high complexity within distributed systems - old hardware, old software, multiple releases of everything to be maintained - and a lack of investment in systems management software. This is in stark contrast to the mainframe, where software costs tend to be high while systems are maintained at strict currency levels, with the result that staffing has been flat or dropping for years with steadily improving QoS.

In this age of real-time systems, disaster recovery has become a universal need. Two key metrics in disaster recovery are Recovery Time Objective (RTO - the time to bring alternative systems online for use) and Recovery Point Objective (RPO - the age of the data on those recovered systems). If we consider the dominant RTO and RPO for a given platform, we gain insight into both cost and QoS. Though any system can be made disaster recoverable, there is a huge cost differential between making a single mainframe recoverable and making 1,000 distributed systems recoverable. The majority of customers we've worked with have done the former and not the latter because of the cost. This differential can be quantified very easily with a call to a recovery services provider and should be included in the platform cost comparison. Cloud computing and other metered services will certainly offer a recoverable option, and here lies an opportunity. As IT will have to compete with public clouds, the above cost analysis can be used to set internal cost guidelines for a corporate cloud infrastructure very early in the cloud development process. Or it can be used to steer workload onto the platform that is already recoverable, thus eliminating some of the need to develop a recovery capability where it currently does not exist.
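The sketch below illustrates how that differential might be tabulated once per-system recovery prices are in hand; the counts, prices, and RTO/RPO values are hypothetical, and real numbers would come from a recovery services provider as noted above:

# Hypothetical disaster-recovery profiles by platform (all values illustrative).
# Fields: number of systems, annual recovery cost per system, dominant RTO and RPO in hours.
dr_profiles = {
    "mainframe":   {"systems": 1,    "cost_per_system": 250_000, "rto_hours": 4,  "rpo_hours": 0.25},
    "distributed": {"systems": 1000, "cost_per_system": 3_000,   "rto_hours": 48, "rpo_hours": 24},
}

for platform, p in dr_profiles.items():
    total = p["systems"] * p["cost_per_system"]
    print(f"{platform}: {p['systems']} system(s), ${total:,}/year to make recoverable, "
          f"dominant RTO {p['rto_hours']}h / RPO {p['rpo_hours']}h")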

6) Look at Costs Incrementally - Plot Your Own Course

The last topic to consider is primarily financial. There is a "sunk cost" and an "incremental cost" associated with IT infrastructure that must be considered. Just as the first chip to roll off a fabrication line is worth billions and the second worth pennies, the first workload for a platform is far more expensive to provision than subsequent ones. This is especially true for the mainframe, since the technology may be physically refreshed, but financially the transaction is handled as an upgrade. This is unlike the distributed world, where technology and book value are tied together. IBM has taken this concept a step further with the arrival of mainframe "specialty" engines that have much lower price points and a drastically reduced impact on software costs.
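The arithmetic behind sunk versus incremental cost is simple enough to sketch; the capacity units and dollar amounts below are hypothetical and are meant only to show why the first workload looks so much more expensive than the next one:

# Hypothetical platform economics (illustrative only).
sunk_cost = 2_000_000            # already-paid platform cost: base system, software, facilities
incremental_cost_per_unit = 200  # cost to add one more unit of capacity for new work
first_workload_units = 500       # capacity consumed by the first workload placed on the platform
next_workload_units = 500        # capacity consumed by the next workload

# The first workload is typically charged the sunk cost plus its own capacity...
first_cost = sunk_cost + first_workload_units * incremental_cost_per_unit
# ...while the next workload pays only for the capacity it adds.
next_cost = next_workload_units * incremental_cost_per_unit

print(f"First workload: ${first_cost:,} (${first_cost / first_workload_units:,.0f} per unit)")
print(f"Next workload:  ${next_cost:,} (${next_cost / next_workload_units:,.0f} per unit)")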

Specialty engines cannot run alone, however; they must be added to an existing system. It is not unusual for mainframe systems in production to cost $4,000/MIP, while specialty engine upgrades may run only $200/MIP. The incremental costs on the mainframe in this case are 1/20th of the current cost. These kinds of dramatic differences must be considered in cost of ownership and are often large enough to justify a change in course for IT. Exploiting these areas of low incremental cost to support growth can significantly improve the overall cost of IT. Virtualization is expected to have a significant effect on other platforms, so the need is universal and growing.

With 33 years at IBM, John Schlosser is currently a Senior Managing Consultant for the Scorpion practice within IBM Systems and Technology Group - Lab Services & Training. He is a founding member of the group, which was started in 1999, and has developed and modified many of the methodologies the team uses for IT infrastructure cost analysis.

PayChoice offers more details about data breach

PayChoice, which this week confirmed that its online payroll systems operations were breached on Sept. 23, is now beginning to offer details on what it thinks may have happened. The company did not publicly inform the media until earlier this week, when Washington Post columnist Brian Krebs revealed some information known about the intrusion. PayChoice today tells Network World the company was preparing a timely public statement before the Washington Post report. "We are concerned that PayChoice has joined a growing list of other well-known firms that have been victimized by cyber criminals," says PayChoice CEO Robert Digby in a statement. That ever-growing list, of course, could include Heartland Payment Systems, which disclosed a data breach earlier this year that has had enormous impact on banking and card processing as it became known that cybercriminals had a chance to dip into information about 100 million payment cards.

But that incident came to light because Heartland CEO Robert Carr coordinated an outreach to proactively inform the public, through the media, about its data breach, and he has not shied from taking tough questions. The same could be said about Hannaford Brothers, the Portland, Maine-based supermarket chain, whose CEO Ronald Hodge stepped forward last year to disclose a breach there of customer payment information. Morristown, N.J.-based PayChoice provides payroll processing services and also licenses its payroll-management product to 240 payroll-processing firms serving 125,000 organizations. The company became aware of the attack "when it saw what appeared to be phishing e-mails telling clients they should download a browser plug-in to continue using their online accounts," PayChoice says in its statement. "The e-mails included client user names and partial passwords, which indicated a breach of PayChoice's Online Employer website." PayChoice says "within hours of the attack, the company notified its clients, shut down the site, and deployed further security measures to protect client information before restoring access to the system." PayChoice has also notified authorities and federal law enforcement. "Only customers using Online Employer were affected," PayChoice said in its statement. "The majority of PayChoice's clients, those using telephone, fax or other non-Web-based input methods, were not impacted." PayChoice contends there's no evidence of unauthorized access to sensitive employee information. But the firm adds "clients should notify employees to carefully review their bank, credit card and other statements and to notify law enforcement officials immediately if they discover suspicious activity." The firm says it has also engaged forensics experts to investigate further, and according to Digby's statement, "we will be reviewing all aspects of our security protocol to add any additional necessary protective measures."

DHS to get big boost in cybersecurity spending in 2010

The U.S. Department of Homeland Security will likely have a substantially bigger cybersecurity budget for fiscal 2010 compared to this year. The U.S. Senate yesterday passed legislation approving nearly $43 billion for the DHS for fiscal 2010. Of this, about $397 million is meant for addressing cybersecurity issues within the agency. The amount is $84 million, or about 27%, higher than the $313 million that was allocated for information security spending in 2009. The Senate measure reconciles different appropriation bills that were passed by the Senate and House appropriation committees. In approving the amount, the Senate said the increase was aimed at expediting efforts to combat cyberthreats using such measures as reducing the points of Internet access across the department and increasing security training and management capabilities.

The bill is headed to President Obama for his signature. The increase approved for DHS cybersecurity is "pretty hefty," said John Pescatore, an analyst with Gartner Inc. in Stamford, Conn. "The real key thing is where does the money go?" he said. "How much of it will fund the same old stuff and how much of it will be on R&D?" The amount set aside for DHS cybersecurity spending next year looks "about right," said Karen Evans, former de facto federal CIO under the Bush administration. The amount is meant for DHS internal operations only and is consistent with increases provided to other departments and agencies, she said. Cybersecurity spending across the government for 2010 is projected at about $7.5 billion, or about 10% of the total IT budget of $75 billion, she added. The $43 billion appropriations bill included similar increases in other technology areas. A budget of about $1 billion, or about $75 million more than in 2009, was approved for the DHS's department of science and technology, which conducts research on such areas as cybersecurity and air cargo security.

The bill also provides an additional $91 million on top of a previously approved $60 million for a massive data center migration and consolidation effort underway at the DHS. The consolidation is part of a move by the DHS to build a new 4.5 million-square-foot facility in the Washington area. Obama's economic stimulus package, which passed earlier this year, provided another $248 million for planning, design, IT infrastructure, fixtures and other costs related to the consolidation. The Senate bill also extended the DHS' online employment eligibility verification program, called E-Verify, by three years and allocated $137 million for operating the system and for improving its reliability and accuracy. Opponents of E-Verify had claimed the system was too buggy and error-prone to be used as the federal government's primary tool for employment verification. Meanwhile, in what appears to be a reaction to the growing opposition to the program, funding for the controversial Real ID initiative was cut back 40%, from $100 million to $60 million, in 2010. Real ID, launched during the Bush administration, requires states to meet new federal standards for issuing driver's licenses. It also requires linking to driver's license databases around the country.

Chip maker TSMC buys stake in solar cell maker Motech

Taiwan Semiconductor Manufacturing (TSMC) bought a 20 percent stake in solar cell maker Motech Industries for NT$6.2 billion (US$193 million), the companies said in a joint statement on Wednesday. Motech, Taiwan's largest maker of solar cells, the key component of solar panels and solar modules, will become a key part of TSMC's move into green industries through the investment. The two companies will work together on new business ventures, and TSMC will work with Motech to launch new products faster and evaluate other opportunities in the solar business, the companies said.

TSMC is the world's largest contract chip maker. The company revealed a plan earlier this year to open a new business related to energy conservation, including the solar and LED industries. At TSMC's second-quarter earnings conference, Chairman and CEO Morris Chang said the solar and LED businesses will likely generate revenue as high as US$10 billion to $15 billion for TSMC by 2018. TSMC, like other chip makers, works with the same polysilicon material used in solar cells, and such companies believe they have an edge in the industry due to materials research as well as expertise in manufacturing and management. The company invested US$46 million earlier this year to open an LED (light-emitting diode) production line in central Taiwan, a move investment bank Credit Suisse called a first step into the LED lighting business. Governments worldwide, including Germany and the U.S. state of California, have offered incentives for people to use solar panels due to rising oil prices. The solar and other alternative energy industries are also being nurtured by governments such as the U.S. with stimulus money meant to battle the global recession.

Apple's iPad marketing sparks complaint to FTC

Apple's iPad, announced Wednesday, has already led to one complaint to the U.S. Federal Trade Commission, in which a consumer charged Apple with false advertising for showing Adobe Flash working on the device. Apple CEO Steve Jobs unveiled the iPad to a waiting world this week after months of speculation and rumors about the slim tablet computer, and Apple's marketing machine has continued to roll since then. At its launch event in San Francisco, Apple showed a video of people using the device, and a series of images of the iPad dominates the company's home page.

But those promotions are misleading, according to Paul Threatt, a Web and graphic designer who lives near Atlanta. Both show the iPad displaying elements of the online New York Times that cannot be viewed without Flash, even though the device apparently doesn't support that software. "I don't hold anything against them for not supporting Flash," Threatt said. "It'd be great if they did, but what I don't want them to do is misrepresent the device's capabilities." Flash is used to deliver multimedia, games and other content on many Websites; in fact, Adobe says more than 70 percent of all games and 75 percent of all video on the Web uses Flash. But it appears that the iPad, like the iPhone and iPod Touch, doesn't support that format. Adobe was not approached by Apple before the product was announced, Adobe spokesman Stefan Offermann said, indicating Flash is probably not in the works for the shipping version of the iPad, due to hit stores in about 60 days. In a live demonstration at Wednesday's launch event, icons that show a missing plug-in popped up on the iPad's screen when Jobs was showing off the Web front page of the New York Times.

However, some of the same New York Times content that wouldn't display during the live demonstration shows up just fine in a promotional video and in one of the pictures on Apple's home page, Threatt pointed out. A collection of still images halfway down the Times front page, which represent segments on the site's video section, needs Flash to appear but is visible as a user looks over the front page in the Apple clip. Likewise, a slideshow of images accompanying the article "The 31 Places to Go in 2010" requires Flash, yet one of those pictures shows up in the promotional video and in an image of the iPad on Apple's page. (In another twist, in the video, the slideshow begins at what appears to be the 14th image, even though on the Web it starts with the first picture.) Threatt said he was clued into the discrepancies by a Friday post at the AppleInsider blog. He was aware of the FTC's online Complaint Assistant because he had used it before. "Whenever the urge strikes me and I feel like someone is being deliberately misleading, I go to that site," Threatt said. "I never know if I'm screaming into the void or not, but it makes me feel better." In his complaint, Threatt briefly explained the technologies to the FTC and then summed up the problem. "In several advertisements and images representing the Apple products in question, Apple has purposefully elected to show these devices correctly displaying content that necessitates the Adobe Flash plug-in. This is not possible on the actual devices, and Apple is very aware of that fact. ... This constitutes willful false advertising and Apple's advertising practices for the iPhone, iPod Touch, and the new iPad should be forcibly changed," Threatt wrote, in part. Apple did not immediately respond to a request for comment.

The FTC was not able to give details about the complaint or how it would respond. Threatt wasn't an obvious candidate to lodge the complaint. He described himself as a longtime Apple fan who used to work weekends at the Apple Store in Tyson's Corner, Virginia. "I'm big into Apple, and I always liked helping people learn Apple stuff," Threatt said, adding that it pains him to turn against the company. As for the iPad, Threatt said he would love it as a step up from his iPod Touch - if only it had a video camera.

Lotus user wary of social networking tool rollout

As IBM moves to upgrade its cache of social networking tools, some users are taking a cautious approach to the technology while figuring out where it will apply and how to measure its effectiveness. IBM Tuesday unveiled Lotus Connections 2.5, its upgraded lineup of social networking tools that is a major expansion of the company's suite of collaboration software. The new 2.5 version software includes micro-blogging, file sharing and new mobile capabilities. But some of the features are expanding faster than users' plans to utilize the software.

One Connections 2.5 beta tester, a global consumer product corporation, is taking a deliberately slow approach to rolling out the social collaboration tools. The company's manager of messaging and collaboration asked for anonymity because he was not authorized to speak on the record. The company started slow, with a few hundred users who were only allowed to communicate with each other. The group's size was eventually doubled, and then the tools were opened up companywide. At that point, the manager says, the number of users exploded by 650% to a few thousand. Despite the growth, the company is still "seeding the environment," said the manager, but a broader rollout is planned.

We will likely "wind up doing it anecdotally," said the manager. "The things we're struggling with there is that this doesn't match the ROI [metrics that executives] are used to looking at. The harder part to plan is the expected results because the company has yet to figure out how to measure its return on investment. How do you measure, 'we recruited this person because of the [collaboration tool]?'" While results are hard to gauge, the broader, anticipated benefits are being defined in the context of capturing and recording corporate knowledge. The worker could develop a how-to guide for use by others, he said. For example, a certain administrative assistant may routinely be tasked with booking a certain type of event, said the manager.

The manager said it is a good time to ramp up internal communities and knowledge-sharing because as the economy and job markets rebound, workers who may have suffered pay or benefit cuts amid the recession will be looking to move on. "Now is the time to get people to put information in, so you're not losing it on the back of a Post-it note."

Snow Leopard bug deletes all user data

Snow Leopard users have reported that they've lost all their personal data after logging into a "Guest" account after upgrading from Leopard, according to messages on Apple's support forum. The bug, users said in a well-read thread on the forum, resets all settings on the Mac, resets all applications' settings and erases the contents of critical folders containing documents, photos and music. The MacFixIt site first reported the problem more than a month ago.

Users claimed that they lost data when they'd logged into their Macs using a "Guest" account, either purposefully or by accident. Specifically, Snow Leopard's home directory - the one sporting the name of the Mac's primary user - is replaced with a new, empty copy after users log in to a Guest account, log out, then log in to their standard account. All the standard folders - Documents, Downloads, Music, Pictures and others - are empty, while the Desktop and Dock have reverted to an "out-of-box" condition. Reports of the bug go back to Sept. 3, just six days after Apple launched Snow Leopard, or Mac OS X 10.6. Users who said they'd encountered the bug said that they had upgraded their systems from Mac OS X 10.5, known as Leopard. "I had the Guest account enabled on my MacBook Pro," said a user identified as "tcnsdca" in a message posted Sept. 3. "I accidentally clicked on that when I went to log in. All of doc, music, etc. gone." "Add my parents to the list of people waxed by this bug," added "Ratty Mouse" today on the same thread. "Brand new iMac, less than one month old, EVERYTHING lost. It took a few minutes to log in, then after I had logged out of that account and back into mine, my [entire] home directory had been wiped.

Just as I convinced them to go Mac after years of trying." Some people were able to restore their Macs using recent Time Machine backups, but others admitted that they had not backed up their machines for weeks or months. "Just my luck I hadn't made a backup since 11th August," acknowledged "rogerss" on a different support forum thread. "So annoyed now, in the process of restoring from Time Machine, but have lost loads of my work due to this fault." Other users, however, had neglected to back up their Macs. "Nooooo!!! This morning I had access to Guest Account and than all my data were lost!!!" bemoaned someone tagged as "carlodituri" last Saturday. "I had 250GB of data without backup and I lost everything: years and years of documents, pictures, video, music!!! Is it possible to recover something? Please help me!!!!" On the thread, several users urged others to disable any Guest accounts to prevent any accidental data loss. Not surprisingly, users unaffected by the bug were reluctant to attempt to reproduce the problem. Some, for instance, wondered if the data loss would be triggered on Macs upgraded to Snow Leopard when the Guest account was simply set to "Sharing only," which is the default.

Apple did not respond today to questions about the bug.