Recently in Cloud Computing Management Category

Process?

I know I've been very quiet on the blogging front, and frankly that is due to the following:

  1. I'm retired
  2. My granddaughter is way higher on my list of priorities
  3. Nothing much has annoyed me for a while (apart from our politicians who annoy me all the time)!
However, two recent things have caused me to leap to the keyboard and bore you rigid with my thoughts.

  1. I read an article about people using the infamous "CLOUD", and hey presto, they think their backup and recovery procedures are not needed any more - DOOOH!
  2. We have had a banking mess over here - not the LIBOR fiddling - which involved a certain bank's systems being offline for a long time because some poor underpaid soul in the depths of our previous empire didn't backup the scheduling info before he changed it. Unfortunately his change also happened to involve deleting it!
Several things struck me straight away. 

  • If you move key parts of your computing to cheap labour, don't be surprised if it goes wrong occasionally. Unfortunately the bean counters never look at the real costs when they move things offshore.
  • Don't blame the poor soul who deleted the data - blame the process, which probably wasn't in place. 
  • If you use the CLOUD, don't expect everything to happen by magic. All the CLOUD means is that someone else is running the system - it doesn't mean they are running key processes for you, unless you ask them to. CLOUD does not replace ITIL and CoBIT.

Our friends at BrightTALK are hosting a BSM Summit on August 10th.

The summit is featuring two of our BSMReview experts, Bill Keyworth and Ken Turbit.

Bill's session entitled "Can BSM Navigate the Turbulent Air Currents of Cloud Computing" is focused on the widening gap between IT and Business as more services are pushed out to the cloud and provisioned by business.

Ken's session entitled "Five Steps to Ensure BSM Works in the Cloud" describes what he believes are the 5 critical capabilities required to effectively support BSM in the cloud which are aligned with ITIL best practices.

The virtual summit features 5 additional industry experts who discuss how cloud computing is affecting BSM and ITSM strategies and processes. A few of the thought-provoking session titles include:

How to Take the BS out of BSM?
How to Catalog your Cloud?
Can 140 Characters or Less Really Impact IT and the Business?


I think you will find this summit worth your time and it may, in fact, be quite entertaining.

I would also like to point you to a BSMReview article entitled "A Service Model for Cloud Computing" that is a worthwhile read as you prepare for the BSM Summit.

Cheers.

Chris Bruzzo, the CTO of Starbucks, and Narinder Singh, the founder of Appirio, demonstrate Starbucks' Pledge 5 application, built on the Force.com platform.

They did it in 21 days.  That’s the real value of the cloud.

Watch:



Having got my latest rant off my chest in the previous entry I would like to return to the whole area of cloud / dynamic computing. 

In an earlier entry, what I threw open to debate was how do you set, measure and report SLAs in a cloud environment? Who owns the service? Who reports to whom? Who knows how to react to a problem? Is it critical to them as a provider or critical to you as their customer? What does an outsourcing contract look like in a "Cloud" world? etc. etc.

What's going round my brain at the moment is taking this further into the whole world of dynamic computing, where everything is in a constant state of change. One of the key components in a BSM world is the CMDB, which is difficult enough to populate in a static environment. How is discovery going to work in a dynamic environment? How rapidly is it going to discover and react to the change? Is discovery going to be tied into change and compliance management so that changes, which do not fit the (hopefully established) business policies are rejected? etc. etc.
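To make the discovery-plus-policy idea concrete, here is a minimal sketch of the kind of reconciliation loop I have in mind. All the names, policies and CI structures are hypothetical illustrations, not any vendor's actual API:

```python
# Sketch: reconcile discovered configuration items (CIs) against a CMDB,
# rejecting changes that violate business policies. All names hypothetical.

def reconcile(cmdb, discovered, policies):
    """Apply discovered CI states to the CMDB unless a policy rejects them."""
    accepted, rejected = [], []
    for ci_id, new_state in discovered.items():
        violations = [p.__name__ for p in policies if not p(new_state)]
        if violations:
            rejected.append((ci_id, violations))  # feed back to change mgmt
        else:
            cmdb[ci_id] = new_state               # CMDB tracks the change
            accepted.append(ci_id)
    return accepted, rejected

# Example business policies (hypothetical)
def in_approved_datacenter(state):
    return state.get("location") in {"dc-east", "dc-west"}

def has_named_owner(state):
    return bool(state.get("owner"))

cmdb = {"web-01": {"location": "dc-east", "owner": "retail-banking"}}
discovered = {
    "web-01": {"location": "dc-east", "owner": "retail-banking"},
    "web-02": {"location": "dc-rogue", "owner": ""},  # drifted instance
}
accepted, rejected = reconcile(
    cmdb, discovered, [in_approved_datacenter, has_named_owner]
)
```

The interesting questions, of course, are how often such a loop can run in a truly dynamic environment, and who owns the rejected list.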

I would be very interested in your thoughts or any experiences anyone would like to share on managing a dynamic environment.
Another area that is gaining more and more attention these days is "Cloud computing", and I guess the largest issue I have is around its scope and definition. Many vendors appear to offer hosted services that are now renamed as Cloud computing; even outsourcing, managed services and Software as a Service (SaaS) fall under this new branding. Is that all it is, a simple rebranding to give all remotely hosted services a new home?

As with all new paradigm shifts, the best evidence that it'll be widely adopted and accepted comes from looking at the user community. At this week's Westminster eForum one of the speakers, Rik Ferguson, senior security adviser at security firm Trend Micro, told us that the criminal fraternity are the largest group of adopters. Well, I guess if we look back to look forward, we'll see that this was the case for the early adopters of the internet (pornography being the biggest financial winner). Rik also highlighted that "We already see customers of Google, customers of Amazon, who are criminals and who use those services, among others, to run command-and-control services for botnets, to launch spam campaigns and to host phishing websites. They see the power, the scalability, the availability and, for them, the anonymity that is possible through cloud services and they are using it to its fullest extent."

Well, the good news is that both large and small organisations will benefit from the Cloud: smaller companies can automate and scale up and down depending on market conditions whilst keeping overheads well managed, and large organisations can reduce overheads and move into new or changing business areas quickly without being held back by in-house technology restraints. However, I think that now, more than ever, process becomes king. Knowing that your business service processes and IT service processes are in place, and that ownership of responsibilities is understood, becomes the key to success when ownership of the infrastructure (including operating systems, software and applications) is left to someone who is not part of your business.

It appears to me that we are entering the realms of treating IT as a utility, just like electricity, gas etc. We need it to be there, we need to know the costs of utilisation, but the providers do not need to know what we run on it. This makes me think about the capacity planning and availability issues. We in the UK certainly know that the electricity providers monitor utilisation and have to prepare for odd events like the half-time surge during a big football or rugby match, as viewers go and put the kettle on for tea. The utility suppliers need to understand their market, its dynamics and influences, however odd, to ensure all the customers get the resources they need, when they need them, without interruption. Can "the cloud" handle this now or in the future?
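The half-time kettle analogy suggests a simple way to think about cloud capacity planning for predictable surges. The figures below are purely illustrative, a back-of-envelope sketch rather than a real sizing exercise:

```python
# Back-of-envelope capacity headroom for a predictable demand spike
# (the "half-time kettle" effect). All figures are illustrative.

baseline_load = 1000      # requests/sec under normal conditions
spike_multiplier = 3.0    # the predictable event triples demand
capacity_per_node = 250   # requests/sec each node can serve
safety_margin = 0.2       # keep 20% headroom above the expected spike

peak_load = baseline_load * spike_multiplier * (1 + safety_margin)
nodes_needed = int(-(-peak_load // capacity_per_node))  # ceiling division

print(nodes_needed)  # nodes to have ready (or pre-warmed in the cloud)
```

The cloud's promise is that those extra nodes need only exist for the few minutes around the spike; whether providers can deliver that reliably for every customer at once is exactly the utility-style question.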

Who's working on the cloud right now? Well, Amazon, Google, Sun, IBM etc., but some surprising companies are entering the market utilising spare capacity from their traditional business. Salesforce.com now offers Force.com for other businesses to host applications on - BMC Software being a recent case in point. So keep your head in the cloud and watch how things develop, in particular the process issues of dual ownership and end-to-end automation, but keep your feet on the ground to ensure you protect your business and understand the current and potential risks.

The space between the cloud (hosted infrastructure, including apps) and the end users is the area that needs focus. Can that be called "fresh air"?
(Co-authored with Jasmine Noel) Cloud computing makes CA's acquisition of Oblicore interesting because cloud services without serious service-level contracts (...or a BSM orientation) are an enterprise disaster waiting to happen. Cloud service providers (be they public, private or hybrid) will need business service management solutions capable of delivering against business-oriented SLAs. Cloud service users will need such solutions to help them make wise choices from a confusing array of options.

The problem is two-fold. First, Cloud implementations transform monolithic IT service delivery into a dynamic supply chain with volatile interdependencies, interactions and impacts between each link. SLAs will be required that can identify, track, measure and report on each segment of the chain. CA has been working on this aspect of the problem under the Spectrum Service Assurance moniker.

Second, there is the translation of business-oriented contract terms and requirements into meaningful and measurable metrics that apply in a Cloud environment. It will require a combination of creative modeling, impact analysis, and metric identification and definition that relate business needs to infrastructure implementation...or a BSM-type bridge across the business-IT gap. Oblicore focused its efforts on this aspect of the problem.
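As a toy illustration of what "measurable metrics" might mean here, consider turning a contractual availability target into something computable from monitoring samples. The term and thresholds below are hypothetical, a sketch of the translation rather than anything Oblicore or CA actually ships:

```python
# Sketch: translating a business-oriented SLA term ("99.9% availability")
# into a metric computed from monitoring samples. Thresholds hypothetical.

def availability(samples):
    """Fraction of monitoring samples in which the service was up (1) vs down (0)."""
    return sum(samples) / len(samples)

def sla_met(samples, target=0.999):
    return availability(samples) >= target

# 10,000 one-minute samples with 5 minutes of outage
samples = [1] * 9995 + [0] * 5

print(availability(samples))  # 0.9995
print(sla_met(samples))       # True: 99.95% meets a 99.9% target
```

The hard part, of course, is not the arithmetic but deciding which segment of the cloud supply chain each sample belongs to, and who is accountable when the target is missed.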

If CA can integrate Oblicore's technology with its Service Assurance efforts with minimal fuss then the results should be a very interesting BSM solution to these Cloud services problems.

Read the full commentary at http://ptaknoel.com/research-analysis/commentaries/ca-acquires-oblicore/

Apologies for the lack of blogging recently, but a combination of practising for a couple of Christmas gigs (I play keyboards and guitar) and being laid low by one of those tedious stomach bugs means that I have been somewhat occupied. Anyway, I am now out of bed and the next practice isn't till later this evening, so I thought I would try to start a conversation on this whole cloud thing - makes a change from talking about Tiger Woods!

The title above is not the one I was going to use originally - I am more of the "Hey, you, get off of my cloud" generation (Rolling Stones for the youth readership). However, the song title, which Google tells me is from Oasis (grossly overrated IMHO) seemed very apt as:

  1. Everyone has a different opinion as to what "Cloud" is, so
  2. Every vendor has it, and hence
  3. It is, like all new things in computing, the solution to all known problems, and therefore
  4. It is also not the solution to all known problems, and will introduce a whole new raft of issues and problems.

For those who think that is a tad cynical, I have been in computing for 38 years - enough said.
Having said all that, "Cloud" will happen in some way or another and the data centre of the future will by definition be a selectively outsourced beast with e.g. one company running the network, another the desktop apps etc.

Hence, what I would like to throw open to debate is how do you set, measure and report SLAs in that environment? Who owns the service? Who reports to whom? Who knows how to react to a problem? Is it critical to them as a provider or critical to you as their customer? What does an outsourcing contract look like in a "Cloud" world? etc. etc.

The Joys of Real Hardware


Colleague and friend Sebastian Hassinger sent me Jeff Dean's presentation Designs, Lessons and Advice from Building Large Distributed Systems. The presentation is fascinating in quite a few ways, not least of which is the (implied) statement it makes about requirements for business service management at large scale. For example, here is an excerpt from the slide entitled The Joys of Real Hardware:

Typical first year for a new cluster:

  • ~0.5 overheating (power down most machines in <5 mins, ~1-2 days to recover)
  • ~1 PDU failure (~500-1000 machines suddenly disappear, ~6 hours to come back)
  • ~1 rack-move (plenty of warning, ~500-1000 machines powered down, ~6 hours)
  • ~1 network rewiring (rolling ~5% of machines down over 2-day span)
  • ~20 rack failures (40-80 machines instantly disappear, 1-6 hours to get back)
  • ~5 racks go wonky (40-80 machines see 50% packet loss)
  • ~8 network maintenances (4 might cause ~30-minute random connectivity losses)
  • ~12 router reloads (takes out DNS and external vips for a couple minutes)
  • ~3 router failures (have to immediately pull traffic for an hour)
  • ~dozens of minor 30-second blips for dns
  • ~1000 individual machine failures
  • ~thousands of hard drive failures, slow disks, bad memory, misconfigured machines, flaky machines, etc.

The bullets listed above resonate with my Agile Business Service Management thinking. They can simply be thought of as the reality underlying BSM at scale. The scale and scope of operating on top of such environments necessitate new techniques in BSM. For example, Jeff discusses Protocol Buffers as one such technique used by Google to attain the requisite efficiencies. Likewise, treating infrastructure as code is - as we say in chess - a practically forced variant. In both cases, the traditional wall between development and operations is moot.
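To get a feel for the aggregate scale, here is a back-of-envelope tally of machine-affecting events, using rough midpoints of the figures Jeff quotes (my numbers, purely illustrative):

```python
# Back-of-envelope: machine-affecting incidents in a cluster's first year,
# using rough midpoints of the figures quoted above. Purely illustrative.

events = {                    # event: (occurrences/year, machines affected)
    "PDU failure":     (1, 750),
    "rack move":       (1, 750),
    "rack failure":    (20, 60),
    "flaky rack":      (5, 60),
    "machine failure": (1000, 1),
}

machine_incidents = sum(n * m for n, m in events.values())
print(machine_incidents)  # machine-down incidents per year
```

Even ignoring disks and network blips, that works out to roughly ten machine-loss incidents per day - which is why exception handling by humans simply does not scale in such environments.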

Internet-Scale BSM


Colleague and friend Annie Shum shared with me fascinating data from her research on Cloud Computing. According to Annie, the economics of mega datacenters are compelling:

The study concludes that hosted services by Cloud providers with super large datacenters (at least tens of thousands of servers) can achieve enormous economies of scale of five to seven times over smaller-scale (thousands of servers) medium deployments. The significant cost savings are driven primarily by scale.

In the context of BSM Review, the obvious question this study poses is how to tweak Business Service Management to respond to and cope with operational and business challenges on such a scale. For example, at smaller scale configuration drift might be laboriously manageable through traditional techniques. For super large datacenters, however, it is a compound problem:

  • Exception handling is prohibitively expensive at large scale.
  • Scale economics are likely to diminish (due to configuration drift problems). 
  • The associated risk could be lethal. Large-scale configuration drift might go beyond loss of data in an IT department - the datacenter operator might lose customers' data.
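A minimal sketch of what automated drift detection might look like - comparing each server's configuration fingerprint against a golden baseline. Everything here (the config keys, the fleet) is a hypothetical illustration:

```python
# Sketch: detect configuration drift at scale by comparing each server's
# config fingerprint against a golden baseline. All names hypothetical.

import hashlib

def fingerprint(config: dict) -> str:
    """Canonical hash of a configuration, independent of key order."""
    canonical = ";".join(f"{k}={config[k]}" for k in sorted(config))
    return hashlib.sha256(canonical.encode()).hexdigest()

baseline = fingerprint({"ntp": "pool.ntp.org", "ssh_root_login": "no"})

fleet = {
    "node-001": {"ntp": "pool.ntp.org", "ssh_root_login": "no"},
    "node-002": {"ntp": "pool.ntp.org", "ssh_root_login": "yes"},  # drifted
}

drifted = [name for name, cfg in fleet.items()
           if fingerprint(cfg) != baseline]
print(drifted)  # ['node-002']
```

At tens of thousands of servers, the point is that only the drifted list - never the whole fleet - should need a human's attention.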

Knowing Annie, I have no doubt she will elaborate at length and depth in this blog on various Cloud Computing aspects of Business Service Management, such as Virtualization Sprawl. (See her recent article A Measured Approach To Cloud Computing: Capacity Planning and Performance Assurance for the first "installment" on this important topic.) I will do the same with respect to Agile Business Service Management at grand scale. For example, an intriguing question is the setting, modus operandi and governance of the Application Support Team in this kind of environment. One can actually view it as a Venn diagram:

  • Cloud Operations on one ‘circle’ 
  • Customer Application Development on another
  • Application Support in the intersection 

Stay tuned!

Here's a free Cloud Storage Toolkit for Service Providers to help them answer the question: "Should we enter the Cloud Storage marketspace?"

We should do one for Business Service Management - a toolkit like this for the CFO and CIO!
UPDATE: we now have a BSM Maturity Model (registration required) >>

One of our unstated goals at BSMReview.com is to create a maturity model for Business Service Management and beyond. Of course, this maturity model may differ slightly by industry, but the idea is to create a model which is good enough to create a "common roadmap" for IT and its business partners (yes, we will include cloud services).

To start the discussion, I've brought together some of the traditional thinking from IT 1.0, and some "edge insights" from people like JSB.

To start, let's look at Gartner's IT Management Process Maturity Model from 2005. Looks familiar, doesn't it? What should Level 5 and Level 6 look like? 

maturitymodel_gartner.gif


For nGenera, a few years ago, Vaughan Merlyn created a different sort of maturity model based on demand and supply:
maturitymodel_demand.gif

He asks:

Business demand is also a function of IT supply - low supply maturity will constrain business demand.  For example, an IT infrastructure that is unreliable and hard to use will tend to dampen the business appetite to leverage IT for business innovation and for collaboration with customers and partners.  Typically, if business demand gets too far ahead of IT supply, there will be a change of IT leadership.  On the other hand, if IT supply gets too far ahead of business demand, IT will be seen to be overspending, resulting in a change of IT leadership.  The most common patterns are that at Level 1, business demand leads IT supply; in Level 2, IT supply tends to 'catch up' with and overtake demand, and in Level 3, demand and supply are closely aligned. From the perspective of late 2007, we see the majority of companies at mid-Level 2, some at high Level 2, and a minority at either low Level 3 or high Level 1.  Why are so many at mid-level 2, and seem to be struggling to get to the next level?
Good question. Any ideas?

Then there's Accenture's Service Management Maturity model from their ITILv3 practice - they rightly state that ITILv3's focus is on business results; hence their advocacy for adoption:

maturitymodel_accenture.gif




At Deloitte, JSB and Tom Winans have built an interesting map for "autonomic computing" which is focused on the direction of IT's evolution. It's part of a series of papers on cloud computing. It's a technology maturity model, if you will:

maturitymodel_jsb.gif

Finally, I borrowed this SOA Maturity model from Infosys:
maturitymodel_infosys.gif


Taken together, we have enough food for thought and discussion, don't you think? I have this silly notion that a business service management maturity model must begin and end not with IT but the business.  And cloud computing will certainly play a giant role in this transformation from physical datacenter to cloud service grids.  And of course we'll still have to worry about compliance and security.

Once again, I'll defer to the JSB and Winans vision for the future.  After we get to autonomic computing, then comes the service grid:

maturitymodel_jsb2.gif



If I understand correctly, here's what they're saying: technology platforms will be business platforms.

With that, let's ask once more: what does a Business Service Management Maturity Model look like to you? 

UPDATE #1:

HP has an ITIL-view which is evolutionary:

maturitymodel_hp.gif

UPDATE #2:

IBM gives us a look at a maturity model developed by Macehiter Ward-Dutton:

maturitymodel_ibm.gif

Stay tuned.

Business Service Management (BSM) is a process, a mindset, not a product (as Peter Armstrong would say) so it is not a technology in the first place.  It is strategic, however, so let's take a quick look at each of Gartner's choices and ask:

"What has this got to do with BSM?"

Gartner's Top 10 Strategic Technologies for 2010

Cloud Computing. Cloud computing is a style of computing that characterizes a model in which providers deliver a variety of IT-enabled capabilities to consumers. Cloud-based services can be exploited in a variety of ways to develop an application or a solution. Using cloud resources does not eliminate the costs of IT solutions, but does re-arrange some and reduce others. In addition, enterprises consuming cloud services will increasingly act as cloud providers and deliver application, information or business process services to customers and business partners.
My two cents: Managing cloud services demands that companies must have a BSM strategy which can monitor and manage the physical datacenter, virtualization, and the cloud - whether it be public, private, or hybrid. We need ITIL in the Cloud and robust Cloud Service SLAs.

Advanced Analytics. Optimization and simulation is using analytical tools and models to maximize business process and decision effectiveness by examining alternative outcomes and scenarios, before, during and after process implementation and execution. This can be viewed as a third step in supporting operational business decisions. Fixed rules and prepared policies gave way to more informed decisions powered by the right information delivered at the right time, whether through customer relationship management (CRM) or enterprise resource planning (ERP) or other applications. The new step is to provide simulation, prediction, optimization and other analytics, not simply information, to empower even more decision flexibility at the time and place of every business process action. The new step looks into the future, predicting what can or will happen.

My two cents: OK, so now we know how to compete on analytics. But the decision-making process is much more complex than most people expected. Analytics are fine, but what we need is refined insight and critical understanding. The Big Shift Index tells us about what we haven't thought about measuring yet! Where's BSM in all of this? Well, if your CRM and your ERP systems are mission-critical, then BSM ensures they deliver on their promise when you need it.

Client Computing. Virtualization is bringing new ways of packaging client computing applications and capabilities. As a result, the choice of a particular PC hardware platform, and eventually the OS platform, becomes less critical. Enterprises should proactively build a five to eight year strategic client computing roadmap outlining an approach to device standards, ownership and support; operating system and application selection, deployment and update; and management and security plans to manage diversity.

My two cents: Anytime, anywhere, on any device. BSM must be an integral part of managing virtualization to avoid virtual sprawl, if nothing else. Of course there's the end-user experience that needs monitoring as well.

IT for Green. IT can enable many green initiatives. The use of IT, particularly among the white collar staff, can greatly enhance an enterprise's green credentials. Common green initiatives include the use of e-documents, reducing travel and teleworking. IT can also provide the analytic tools that others in the enterprise may use to reduce energy consumption in the transportation of goods or other carbon management activities.

My two cents: Virtualization and Cloud computing will help IT become greener faster, by reducing the datacenter footprint.  And virtual collaboration can reduce carbon emissions. Isn't optimizing asset usage a BSM function?

Reshaping the Data Center. In the past, design principles for data centers were simple: Figure out what you have, estimate growth for 15 to 20 years, then build to suit. Newly-built data centers often opened with huge areas of white floor space, fully powered and backed by a uninterruptible power supply (UPS), water-and air-cooled and mostly empty. However, costs are actually lower if enterprises adopt a pod-based approach to data center construction and expansion. If 9,000 square feet is expected to be needed during the life of a data center, then design the site to support it, but only build what's needed for five to seven years. Cutting operating expenses, which are a nontrivial part of the overall IT spend for most clients, frees up money to apply to other projects or investments either in IT or in the business itself.

My two cents: See previous two cents <<

Social Computing. Workers do not want two distinct environments to support their work - one for their own work products (whether personal or group) and another for accessing "external" information. Enterprises must focus both on use of social software and social media in the enterprise and participation and integration with externally facing enterprise-sponsored and public communities. Do not ignore the role of the social profile to bring communities together.

My two cents: Have you noticed that Twitter is having availability issues lately? I wonder if they use ITIL or BSM? Same story on Facebook. Maybe they use ITIL-Lite. There are, unfortunately, some documented productivity issues with social computing, but we have an effective solution for improving knowledge-worker productivity.

Security - Activity Monitoring. Traditionally, security has focused on putting up a perimeter fence to keep others out, but it has evolved to monitoring activities and identifying patterns that would have been missed before. Information security professionals face the challenge of detecting malicious activity in a constant stream of discrete events that are usually associated with an authorized user and are generated from multiple network, system and application sources. At the same time, security departments are facing increasing demands for ever-greater log analysis and reporting to support audit requirements. A variety of complementary (and sometimes overlapping) monitoring and analysis tools help enterprises better detect and investigate suspicious activity - often with real-time alerting or transaction intervention. By understanding the strengths and weaknesses of these tools, enterprises can better understand how to use them to defend the enterprise and meet audit requirements.

My two cents: See this survey on security management best practices.

Flash Memory. Flash memory is not new, but it is moving up to a new tier in the storage echelon. Flash memory is a semiconductor memory device, familiar from its use in USB memory sticks and digital camera cards. It is much faster than rotating disk but considerably more expensive; however, this differential is shrinking. At the rate of price declines, the technology will enjoy more than a 100 percent compound annual growth rate during the next few years and become strategic in many IT areas including consumer devices, entertainment equipment and other embedded IT systems. In addition, it offers a new layer of the storage hierarchy in servers and client computers that has key advantages including space, heat, performance and ruggedness.

My two cents: Wrong? We're going to see cloud storage take over this area, and it may or may not use flash memory.

Virtualization for Availability. Virtualization has been on the list of top strategic technologies in previous years. It is on the list this year because Gartner emphases new elements such as live migration for availability that have longer term implications. Live migration is the movement of a running virtual machine (VM), while its operating system and other software continue to execute as if they remained on the original physical server. This takes place by replicating the state of physical memory between the source and destination VMs, then, at some instant in time, one instruction finishes execution on the source machine and the next instruction begins on the destination machine.

However, if replication of memory continues indefinitely while execution of instructions remains on the source VM, then should the source VM fail, the next instruction would take place on the destination machine. If the destination VM were to fail, just pick a new destination and restart the indefinite migration, thus making very high availability possible.

The key value proposition is to displace a variety of separate mechanisms with a single "dial" that can be set to any level of availability from baseline to fault tolerance, all using a common mechanism and permitting the settings to be changed rapidly as needed. Expensive high-reliability hardware, with fail-over cluster software and perhaps even fault-tolerant hardware could be dispensed with, but still meet availability needs. This is key to cutting costs, lowering complexity, as well as increasing agility as needs shift.

My two cents: Now this is a BSM play if there ever was one!
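For readers who want the mechanics: the iterative memory replication Gartner describes is usually called pre-copy migration. A toy simulation (my own simplified model, not any hypervisor's actual algorithm):

```python
# Toy model of pre-copy live migration: keep copying dirty memory pages
# until the remaining set is small enough for a brief final pause.
# Simplified illustration, not any hypervisor's actual implementation.

def migrate(total_pages=10000, redirty_fraction=0.1, stop_threshold=50,
            max_rounds=30):
    """Simulate pre-copy rounds; return (rounds, pages copied in final pause)."""
    to_copy = total_pages                 # round 1: copy all of memory
    for round_no in range(1, max_rounds + 1):
        if to_copy <= stop_threshold:
            return round_no, to_copy      # brief pause: copy the remainder
        # pages the guest dirtied while this round was copying -
        # proportional to copy time, so the set shrinks each round
        to_copy = int(to_copy * redirty_fraction)
    return max_rounds, to_copy            # no convergence: forced stop-and-copy

rounds, final_pages = migrate()
print(rounds, final_pages)  # converges in a handful of rounds
```

The "single dial" Gartner mentions then amounts to choosing how aggressively to replicate: continuous replication for fault tolerance at one end, occasional snapshots at the other.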

Mobile Applications. By year-end 2010, 1.2 billion people will carry handsets capable of rich, mobile commerce providing a rich environment for the convergence of mobility and the Web. There are already many thousands of applications for platforms such as the Apple iPhone, in spite of the limited market and need for unique coding. It may take a newer version that is designed to flexibly operate on both full PC and miniature systems, but if the operating system interface and processor architecture were identical, that enabling factor would create a huge turn upwards in mobile application availability.

My two cents: Anytime, anywhere, on any device.  Didn't I write about that a few seconds ago? And don't we need our CMDB to track all these diverse devices and apps?

As you can see, I've attached Business Service Management (BSM) as an enabling IT strategy for just about all ten of Gartner's Strategic Technologies for 2010. And of course if it's a service provided by IT or even an external service provider, we're still going to need a Service Catalog for 2010. More on that in a later post.

Israel, where do agile practices fit into this? Just about everywhere as well?

What Matters is the End Goal

It's strange how history repeats itself, fashions go in cycles, and every generation comes to them for the first time thinking these things are new, innovative and revolutionary. I guess it's because we're still human and we still need to learn the same lessons over and over again. We want to listen to advice, but can't; we want to learn from the past, but don't; we all want something that's called "common" but is far from it - sense!

Many years ago now, the company I worked for at the time brought a new concept to the marketplace. The analysts jumped onto it and made it their own, the market hype was all over it, and it became the direction all businesses had to move in. Eight or more years on we're still moving in that direction; the buzz died down, the capabilities developed slowly, and the term used changed from IRM to BSM. However, BSM was actually only a subset of what IRM aimed to achieve. With the complexities we find ourselves in today, with Virtualisation and Cloud computing, the issues are still the same, only in some cases magnified, and the responsibility of ownership is moving. More and more the Business is, and will continue to be, relinquishing ownership of the delivery of services to the employees (who make up the business) and allowing suppliers to take over. It's something that has happened for centuries now: we moved from self-sufficiency to being reliant on others. Once, we all had wells in the garden to provide water for the household; now it's all provided through piped services. Once, we had to make our own small generators for the electrification of the home, farm or estate; now that too is all provided as a service. The list goes on, and so it is and will continue to be within the IT environment. Hence the need for Service Management, to ensure we all have the disciplines, controls, standards and processes in place, controlled and managed, to ensure delivery as required by the customers, whomever they may be. Why did we move this way? Well, for various reasons: economies of scale, cost savings, and to allow us to focus on our core competency without being dragged down by the day-to-day necessities of life.

A slide on my website shows what is required to support the employee, who is at the centre of the business, and how these requirements are more and more being delivered via services, as depicted around the circumference of the sphere. This slide goes back 8 years or more, so it's nothing new, but it appears it was rather a vision of the future, and more and more I can see it being fulfilled. Whether we use the same term or not is irrelevant; what matters is the end goal. Something that Geoffrey Moore of Crossing the Chasm fame predicted at roughly the same time.

Check out the slide and let me know if you see it being slowly fulfilled:

turbitt_internalexternalsp.jpg


Says Annie Shum:

IT professionals should underscore the critical roles played by integrated virtualized service oriented management, governance, performance assurance, and analytics-based feedback loops. Together, they can safeguard the successful adoption and, ultimately, the viability of Cloud Computing in enterprise IT.

Read: A Measured Approach To Cloud Computing: Capacity Planning and Performance Assurance

BMC Software announced yesterday the acquisition of Tideway Systems Limited, a UK-based, privately-held IT discovery vendor.  As outlined in the press release, there is real goodness in IT delivering greater value to its business community through an improved understanding of what IT assets are owned, what constitutes their relationships and inter-dependencies, where they are located and who owns them.  Tideway's contribution to that value is unquestioned.  (See Israel Gat's story on the acquisition announcement.) 
 
BMC indicated that "the new offering supports the complete set of discovery requirements for BSM and features deep integration with BMC's Atrium Configuration Management Database (CMDB)." The yet-to-be-fulfilled promise deals with the deeper integration of Tideway's IT discovery and BMC's Atrium CMDB.  I'm assuming deeper integration will come as a result of the acquisition; else why buy out your premier IT discovery partner ...except to remove that offering from the grasp of BMC's competitors? 

Unknown is the impact to Tideway's existing partners such as Oracle and ASG Software Solutions.  What about the other 60+ Tideway partners and those customers who are dependent upon Tideway technologies?

We're also wary of any tool that promises to support the "complete set of discovery requirements for BSM" ...when true Business Service Management (BSM) requires discovery and mapping of most business-oriented assets.  For example, does this mean that BMC is promising to support all types of business assets, including communication assets, manufacturing assets, inventory assets and transportation assets ...all of which include embedded IT components leveraged by commercial applications?  That would truly be impressive.

Finally, as IT management becomes more of a gating factor in the successful implementation of cloud computing, BMC's recognition that "visibility into the data center" and the ability to "model, manage and maintain applications and services" are critical for cloud environments is welcome. We believe the Tideway acquisition puts BMC in a stronger position to build a cloud-based CMDB, which could become a core competence within BMC's solution suite, should they decide to pursue this value proposition. 

Welcome to BSMreview.com



Discussion around Business Service Management (BSM) has been ongoing for years ...and years ...and years. Yet it remains a fairly immature dialogue, as vendors scope BSM to capitalize on their respective product offerings; as IT organizations struggle to articulate the desired end state; and as industry analysts deliver unique perspectives for purposes of differentiation.

Fortunately, the purpose of BSM is so fundamental, so basic, and so obvious ...that vendors, IT organizations, business managers, analysts and editors intuitively "get it" ...diminishing the confusion that so frequently accompanies newer technology concepts. This website is dedicated to the BSM dialogue, open to whoever wishes to participate. There is no fee to join ...no content that requires a subscription ...and no censorship of reasonable ideas and questions.

IT has been, is, and will continue to be hammered for being disconnected from the business needs of the customers that IT serves. Sometimes the IT organization is adequately connected to the business entity, and the value is simply unrecognized. More often, IT is guilty of a diversionary focus on technology silos that the business doesn't care about. BSM is the discipline that aligns the deliverables of IT to the enterprise's business goals.

That discipline comes in the form of activities, technologies, tools, metrics, processes, best practices and people. BSM creates a laser focus on turning the deliverables generated by IT into something that is meaningful to the business community. If an IT deliverable is of no importance to the business function, then IT should eliminate it or repackage it into a service that carries appropriate business value. BSM success is entirely dependent upon the willingness and skill of both IT and business to have an effective two-way conversation ...one party without the other is doomed to failure.

Read my complete introduction: The Why & What of Business Service Management

About this Archive

This page is an archive of recent entries in the Cloud Computing Management category.
