Cloud transfers made easy


A while back, I wrote about the problem of consumer trust in the cloud – in particular, what happens when your cloud provider decides to change the T&Cs to your detriment, and how this can erode the trust of consumers who are already alert to the technology industry’s much-publicised failures.

The issue that prompted this was the massive capacity reduction by Amazon for its cloud storage service – Cloud Drive – from unlimited to a maximum of 5GB. The original price was just £55 a year but Amazon’s new price for 15TB, for example, is £1,500.

So at this point, unless you’re happy to pay that amount, two solutions suggest themselves. The first is to invest in a pile of very large hard disks – twice as many as you need because, you know, backups, and then become your own storage manager. Some excellent NAS devices and software packages such as FreeNAS make this process much easier than it used to be, but you’ll still need to manage the systems and/or buy the supporting hardware, and pay the power bill.

The alternative is to retain some trust in the cloud – while remaining wary. But this is only half the solution; I’ll get back to that later.

I have since found another cloud provider, Google G Suite, which offers unlimited storage and a whole heap of business services for a reasonable £6 per month. Google requires you to own your domain and to host your own website, but if you can satisfy those requirements, you’re in. Other cloud providers have offers too, but this was the best deal I could find.

Cloud-to-cloud transfer
So the problem then is how to transfer a large volume of data to the new cloud service. One way is to re-upload it but this is very long-winded: using a 20Mbps fibre-to-the-cabinet (FTTC) connection it will take months, it can clog up your connection if you have other uses for that bandwidth, and for anyone on a metered broadband connection it will be expensive too. And if you don’t run a dedicated server, you’ll need a machine left on during this time.
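
To put some rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The figures are assumptions rather than anything measured: the 15TB from the Amazon example above, the full 20Mbps applied to the upload, and 80 per cent sustained throughput – real FTTC upstream speeds are usually far lower, which only strengthens the point.

    # Back-of-the-envelope upload time; all figures here are assumptions.
    # 15TB of data, a 20Mbps line fully available for the upload, and
    # roughly 80 per cent sustained throughput once protocol overhead bites.

    data_tb = 15                    # terabytes to move
    line_mbps = 20                  # nominal line speed, megabits per second
    efficiency = 0.8                # assumed sustained fraction of that speed

    data_bits = data_tb * 1e12 * 8  # decimal terabytes -> bits
    seconds = data_bits / (line_mbps * 1e6 * efficiency)
    days = seconds / 86400

    print(f"~{days:.0f} days, or about {days / 30:.1f} months of continuous uploading")
    # -> roughly 87 days, i.e. the best part of three months with the line saturated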

Cloud-to-cloud transfer services exist to solve this problem, and after some research I found cloudHQ. For a reasonable fee – or for free if you blog about it (yes, that’s what I’m doing here) – cloudHQ will transfer data between a range of cloud services, including Google, Amazon (S3 and Cloud Drive), Gmail, Box, Basecamp, Office 365, Evernote and many more.

CloudHQ does more: it will back up and sync in real time too, forward emails, save them as PDFs, act as a repository for large attachments, and provide a range of other email- and scheduling-related services for Google and other cloud providers.

The basic service is free but limited to 20GB and a maximum file size of 150MB; the next tier up, Premium, costs £19.80 a month and offers pretty much everything the power user could want.

Hybrid clouds and backup
So is cloudHQ the solution to the problem of cloud-to-cloud transfers? Yes, but putting your data in the cloud still leaves you with a single copy without a backup (I said I’d get back to this). So either you need another cloud service, in which case cloudHQ will keep them in sync, or you create a hybrid solution, where the primary data lives under your direct control and management, but the off-site backup lives in the cloud.

This hybrid setup is the one that businesses are increasingly opting for, and for good reason. And frankly, since your irreplaceable personal data – think photos and the like – is at risk unless you keep at least two copies, preferably three, using both local and cloud storage makes huge sense.

Technology highlights 2013

I’ve been shamefully neglecting this blog recently, yet a lot of interesting new technologies and ideas have come my way. So, by way of making amends, here’s a quick round-up of the highlights.

Nivio
This is a company that delivers a virtual desktop service with a difference. Virtual desktops have been a persistent topic of conversation among IT managers for years, yet delivery has always been some way off. A bit like fusion energy, only not as explosive.

The problem is that, unless you’re serving desktops to people who do a single task all day – which describes call centre workers but not most people – users expect a certain level of performance and customisation from their desktops. If you’re going to take a desktop computer away from someone who uses it intensively as a tool, you’d better make sure that the replacement technology is just as interactive.

Desktops provided by terminal services have tended to be slow and a bit clunky – and there’s no denying that Nivio’s virtual desktop service, which I’ve tried, isn’t quite as snappy as having 3.4GHz of raw compute power under your fingertips.

On the other hand, there’s a load of upsides. From an IT perspective, you don’t need to provide the frankly huge amounts of bandwidth needed to service multiple desktops. And you don’t care what the end user accesses the service with – so if you’re allowing people to bring their own devices into work, it will run on anything with a browser. I’ve seen a Windows desktop running on an iPhone – scary…

And you don’t need to buy applications. The service provides them all for you from its standard set of over 40 applications – and if you need one the company doesn’t currently offer, they’ll supply it. Nivio also handles data migration, patching, and the back-end hardware.

All you need to do is hand over $35 per month per user.

Quantum
The company best known for its tape backup products has launched a new disk-based backup appliance.

The DXi6800 is, says Quantum’s Stéphane Estevez, three times more scalable than any other such device, allowing you to scale from 13TB to 156TB. Aimed at mid-sized as well as large enterprises, it includes an array of disks that you effectively switch on with the purchase of a new licence. Until then, they’re dormant, not spinning. “We are taking a risk of shipping more disks than the customer is paying for – but we know customer storage is always growing. You unlock the extra storage when you need it,” said Estevez.

It can handle up to 16TB/hour which, reckons the company, is four times faster than EMC’s DD670 – its main competitor – and all data is encrypted and protected by an electronic certificate so you can’t simply swap it into another Quantum library. And the management tools mean that you can manage multiple devices across datacentres.
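
To put those figures in context, here is a quick sketch using only the numbers above, and assuming the 16TB/hour rate can be sustained from start to finish:

    # What the quoted figures imply: a 16TB/hour ingest rate against the
    # DXi6800's 13TB-156TB capacity range, assuming the rate is sustained.

    ingest_tb_per_hour = 16

    for capacity_tb in (13, 156):
        hours = capacity_tb / ingest_tb_per_hour
        print(f"{capacity_tb}TB at {ingest_tb_per_hour}TB/hour: about {hours:.1f} hours")
    # -> under an hour for the entry configuration; just under ten hours
    #    to fill a fully licensed 156TB box running flat out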

Storage Fusion
If ever you wanted to know at a deep level how efficient your storage systems are, especially when it comes to virtual machine management, then Storage Fusion reckons it has the answers in the form of its storage analysis software, Storage Fusion Analyze.

I spoke to Peter White, Storage Fusion’s operations director, who reckoned that companies are wasting storage capacity by not over-provisioning enough, and by leaving old snapshots and storage allocated to servers that no longer exist.

“Larger enterprise environments have the most reclaimable storage because they’re uncontrolled,” White said, “while smaller systems are better controlled.”

Because the company’s software has analysed large volumes of storage, White was in a position to talk about trends in storage usage.

For example, most companies have 25% capacity headroom, he said. “Customers need that level of comfort zone. Partners and end users say that the reason is that the purchasing process to get disk from purchase order to installation can take weeks or even months, so there’s a buffer built in. Best practice is around that level but you could go higher.”

You also get what White called system losses, due to formatting inefficiencies and OS storage. “And generally processes are often broken when it comes to decommissioning – without processes, there’s an assumption of infinite supply which leads to infinite demand and a lot of wastage.”

The sister product, Storage Fusion Virtualize “allows us to shine a torch into VMware environments,” White said. “It can see how VM storage is being used and consumed. It offers the same fast analysis, with no agents needed.”

Typical customers include not so much enterprises as systems integrators, service providers and consultants.

“We are complementary to main storage management tools such as those from NetApp and EMC,” White said. “Vendors take a global licence, and end users can buy via our partners – they can buy report packs to run it monthly or quarterly, for example.”

SolidFire
Another product aimed at service providers, SolidFire sidesteps the usual all-solid-state-disk (SSD) pitch. Yes, solid-state is very fast compared with spinning media, but the company’s claim is that it can deliver a guarantee not just of uptime but of performance.

If you’re a provider of storage services in the cloud, one of your main problems, said the company’s Jay Prassl, is the noisy neighbour, the one tenant in a multi-tenant environment who sucks up all the storage performance with a single database call. This leaves the rest of the provider’s customers suffering from a poor response, leading to trouble tickets and support calls, so adding to the provider’s costs.

The aim, said Prassl, is to help service providers offer guarantees to enterprises they currently cannot offer because the technology hasn’t – until now – allowed it. “The cloud provider’s goal is to compute all the customer’s workload but high-performance loads can’t be deployed in the cloud right now,” he said.

So the company has built SSD technology that, because of the way that data is distributed across multiple solid-state devices – I hesitate to call them disks because they’re not – offers predictable latency.

“Some companies manage this by keeping few people on a single box but it’s a huge problem when you have hundreds or thousands of tenants,” Prassl said. “So service providers can now write a service level agreement (SLA) around performance, and they couldn’t do that before.”

Key to this is the automated way that the system distributes the data around the company’s eponymous storage systems, according to Prassl. It then sets a level of IOPS that a particular volume can achieve, and the service provider can then offer a performance SLA around it. “What we do for every volume is dictate a minimum, maximum and a burst level of performance,” he said. “It’s not a bolt-on but an architecture at the core of our work.”

2012: the tech year in view (part 1)

As 2012 draws to a close, here’s a round-up of some of the more interesting news stories that came my way this year. This is part 1 of 2 – part 2 will be posted on Monday 31 December 2012.

Storage
Virsto, a company making software that boosts storage performance by sequentialising the random data streams from multiple virtual machines, launched Virsto for vSphere 2.0. According to the company, this adds features for virtual desktop infrastructures (VDI), and it can lower the cost of providing storage for each desktop by 50 percent. The technology can save money because you need less storage to deliver sufficient data throughput, says Virsto.
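
For the curious, here is a toy Python illustration of the general log-structuring idea behind that kind of sequentialising – my own sketch of the principle, not Virsto’s implementation: random writes arriving from several VMs are appended to one sequential log, while an in-memory index records where the latest copy of each virtual block lives.

    # Toy illustration of log-structured write sequentialisation -- the general
    # principle only, not Virsto's implementation. Random block writes from many
    # VMs are appended to one sequential log; an index maps (vm, block) -> offset.

    class SequentialisingLog:
        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.log = bytearray()      # stands in for a sequentially written device
            self.index = {}             # (vm_id, block_no) -> offset of latest copy

        def write(self, vm_id, block_no, data):
            """Append a 'random' write to the tail of the log and note where it went."""
            assert len(data) == self.block_size
            self.index[(vm_id, block_no)] = len(self.log)   # newer writes supersede older
            self.log.extend(data)

        def read(self, vm_id, block_no):
            """Serve a read by looking up the most recent copy of the block."""
            offset = self.index[(vm_id, block_no)]
            return bytes(self.log[offset:offset + self.block_size])

    log = SequentialisingLog(block_size=8)
    log.write("vm1", 42, b"AAAAAAAA")   # interleaved writes from two VMs...
    log.write("vm2", 7,  b"BBBBBBBB")
    log.write("vm1", 42, b"CCCCCCCC")   # ...including an overwrite of the same block
    print(log.read("vm1", 42))          # b'CCCCCCCC'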

At the IPExpo show, I spoke with Overland, which has added a block-based product called SnapSAN to its portfolio. According to the company, the SnapSAN 3000 and 5000 offer primary storage using SSDs for caching or auto-tiering. This “moves us towards the big enterprise market while remaining simple and cost-effective,” said a spokesman. Also, Overland’s new SnapServer DX series now includes dynamic RAID, which works somewhat like Drobo’s system in that you can install differently sized disks in the array and still use all the capacity.

Storage startup Tegile is one of many companies making storage arrays that mix spinning and solid-state disks to boost performance – and to do so cost-effectively, the company claims. Tegile says it reduces data aggressively, using deduplication and compression, and so cuts the cost of the SSD overhead. Its main competitor is Nimble Storage.

Nimble itself launched a so-called ‘scale to fit’ architecture for its hybrid SSD and spinning-disk arrays this year, adding a rack of expansion shelves so that capacity can grow. It’s a unified approach, says the company, which means that adding storage doesn’t require a lot of admin work moving data around.

Cloud computing
Red Hat launched OpenShift Enterprise, a cloud-based platform-as-a-service (PaaS). This is, says Red Hat, a way for developers to launch new projects, and it includes a development toolkit that allows you to fire up new VM instances quickly. Based on SELinux, the system lets you fire up a container and get middleware components such as JBoss, PHP and a wide variety of languages. The benefit, says the company, is that the system allows you to pool your development projects.

Red Hat also launched Enterprise Virtualization 3.1, a platform for hosting virtual servers with up to 160 logical CPUs and up to 2TB of memory per virtual machine. It adds command line tools for administrators, and features such as RESTful APIs, a new Python-based software development kit, and a bash shell. The open source system includes a GUI to allow you to manage hundreds of hosts with thousands of VMs, according to Red Hat.

HP spoke to me at IPExpo about a new CGI rendering system that it’s offering as a cloud-based service. According to HP’s Bristol labs director, it’s 100 percent automated and autonomic. It means that a graphics designer uses a framework to send a CGI job to a service provider who creates the film frame. The service works by estimating the number of servers required, sets them up and configures them automatically in just two minutes, then tears them down after delivery of the video frames. The evidence that it works can apparently be seen in the animated film Madagascar where, to make the lion’s mane move realistically, calculations were needed for 50,000 individual hairs.

For the future, HP Labs is looking at using big data and analytics for security purposes and is looking at providing an app store for analytics as a service.

Security
I also spoke with Rapid7, an open-source security company that offers a range of tools for companies large and small to control and manage the security of their digital assets. The range includes Nexpose, a vulnerability scanner; Metasploit, a penetration testing tool; and Mobilisafe, a tool for mobile devices that “discovers, identifies and eliminates risks to company data from mobile devices”, according to the company. Overall, the company aims to provide “solutions for comprehensive security assessments that enable smart decisions and the ability to act effectively” – a tall order in a crowded security market.

I caught up with Druva, a company that develops software to protect mobile devices such as smartphones, laptops and tablets. Given the explosive growth in the numbers of end-user owned devices in companies today, this company has found itself in the right place at the right time. New features added to its flagship product inSync include better usability and reporting, with the aim of giving IT admins a clearer idea of what users are doing with their devices on the company network.

Networking
Enterasys – once Cabletron, for the oldies around here – launched a new wireless system, IdentiFi. The company calls it wireless with embedded intelligence, offering wired-like performance with added security. The system can identify performance and identity issues, and user locations, the company says, and it integrates with Enterasys’ OneFabric network architecture, which is managed using a single database.

Management
The growth of virtualisation in datacentres has resulted in a need to manage the virtual machines, so a number of companies focusing on this problem have sprung up. Among them is vKernel, whose vOPS Server product aims to be an easy-to-use tool for admins; experts should feel they have another pair of hands to help them get things done, was how one company spokesman put it. The company, now owned by Dell, claims to have the largest feature set for virtualisation management when you include its vKernel and vFoglight products, which provide analysis, advice and automation of common tasks.

Technology predictions for 2013

The approaching end of the year marks the season of predictions for and by the technology industry for the next year, or three years, or decade. These are now flowing in nicely, so I thought I’d share some of mine.

Shine to rub off Apple
I don’t believe that the lustre that attaches to everything Apple does will save it from the ability of its competitors to do pretty much everything it does, but without the smugness. Some of this was deserved when it was the only company making smartphones, but this is no longer true. And despite the success of the iPhone 5, I wonder if its incremental approach – a slightly bigger screen and some nice-to-have features – will be enough to satisfy in the medium term. With no dictatorial obsessive at the top of a company organised around, and for, that individual’s modus operandi, can Apple make awesome stuff again, but in a more collective way?

We shall see, but I’m not holding my breath.

Touch screens
Conventional wisdom says that touchscreens only work when they are either horizontal or attached to a handheld device. It must be true: Steve Jobs said so. But have you tried using a touchscreen laptop? Probably not.

One reviewer has, though, and he makes a compelling case for them, suggesting that they don’t lead to gorilla arm, after all. I’m inclined to agree that a touchscreen laptop could become popular, as they share a style of interaction with users’ phones – and they’re just starting to appear. Could Apple’s refusal to make a touchscreen MacBook mean it’s caught wrong-footed on this one?

I predict that touchscreen laptops will become surprisingly popular.

Windows 8
Everyone’s got a bit of a downer on Windows 8. After all, it’s pretty much Windows 7 but with a touchscreen interface slapped on top. Doesn’t that limit its usefulness? And since enterprises are only now starting to upgrade from Windows XP to Windows 7 — and this might be the last refresh cycle that sees end users being issued with company PCs — doesn’t that spell the end for Windows 8?

I predict that it will be more successful than many think: not because it’s especially great – it certainly has flaws, especially when used with a mouse, which means learning how to use the interface all over again.

In large part, this is because the next version of Windows won’t be three years away or more, which has tended to be the release cycle of new versions. Instead, Microsoft is aiming for a series of smaller, point releases, much as Apple does but hopefully without the annoying animal names from which it’s impossible to derive an understanding of whether you’ve got the latest version.

So Windows Blue – the alleged codename – is the next version and will take into account lessons from users’ experiences with Windows 8, and take account of the growth in touchscreens by including multi-touch. And it will be out in 2013, probably the third quarter.

Bring your own device
The phenomenon whereby firms no longer provide employees with a computing device but instead allow them to bring their own, provided it fulfils certain security requirements, will blossom.

IT departments hate this bring your own device policy because it’s messy and inconvenient but they have no choice. They had no choice from the moment the CEO walked into the IT department some years ago with his shiny new iPhone – he was the first because he was the only one able to afford one at that point – and commanded them to connect it to the company network. They had to comply and, once that was done, the floodgates opened. The people have spoken.

So if you work for an employer, expect hot-desking and office downsizing to continue as the austerity resulting from the failed economic policies of some politicians continue to be pursued, in the teeth of evidence of their failure.

In the datacentre
Storage vendors will be snapped up by the deep-pocketed big boys – especially Dell and HP – as they seek to compensate for their mediocre financial performance by buying companies producing new technologies, such as solid-state disk caching and tiering.

Datacentres will get bigger as cloud providers amalgamate, and will more or less be forced to consider and adopt software-defined networking (SDN) to manage their increasingly complex systems. SDN promises to do that by virtualising the network, in the same way as the other major datacentre elements – storage and computing – have already been virtualised.

And of course, now that virtualisation is an entirely mainstream technology, we will see even bigger servers hosting more complex and mission-critical applications such as transactional databases, as the overhead imposed by virtualisation shrinks with each new generation of technology. What is likely to lag however is the wherewithal to manage those virtualised systems, so expect to see some failures as virtual servers go walkabout.

Security
Despite the efforts of technologists to secure systems – whether for individuals or organisations – security breaches will continue unabated. Convenience trumps security every time, experience teaches us. And this means that people will find increasingly ingenious ways around technology designed to stop them walking around with the company’s customer database on a USB stick in their pocket, or exposing the rest of the world to a nasty piece of malware because they refuse to update their operating system’s defences.

That is, of course, not news at all, sadly.

Are SSDs too expensive?

Recent weeks have seen a deluge of products from solid-state disk (SSD) vendors, such as Tegile, Fusion-IO, and now LSI to name but a few; a significant proportion of new storage launches in the last year or two have been based around SSDs.

Some of this is no doubt opportunism, as the production of spinning disk media was seriously disrupted by floods in Thailand last year – a disruption the disk industry reckons has now passed. Much of the SSD-fest, though, purports to resolve the problem of eking more performance out of storage systems.

In your laptop or desktop PC, solid state makes sense simply because of its super-fast performance: you can boot the OS of your choice in 15-30 seconds, for example, and a laptop’s battery life is hugely extended. My ThinkPad now runs happily for four to five hours of continuous use, more if I watch a video or don’t interact with it constantly. And in a tablet or smartphone of course there’s no contest.

The problem is that the stuff is expensive: a quick scan of retail prices shows a price delta of between 13 and 15 times the price of hard disks, measured purely on a capacity basis.
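
As a rough illustration of that delta, here is a quick calculation with assumed, illustrative street prices – not figures quoted to me by any vendor:

    # Rough cost-per-gigabyte comparison. The prices below are assumed,
    # illustrative circa-2012 street prices, not vendor figures.

    ssd_price_gbp, ssd_gb = 150.0, 240     # e.g. a 240GB consumer SSD
    hdd_price_gbp, hdd_gb = 90.0, 2000     # e.g. a 2TB desktop hard disk

    ssd_per_gb = ssd_price_gbp / ssd_gb    # ~ 0.63 per GB
    hdd_per_gb = hdd_price_gbp / hdd_gb    # ~ 0.05 per GB

    print(f"SSD: {ssd_per_gb:.3f}/GB, HDD: {hdd_per_gb:.3f}/GB")
    print(f"Delta: roughly {ssd_per_gb / hdd_per_gb:.0f}x on a pure capacity basis")
    # -> around 14x with these example prices, in line with the 13-15x above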

In the enterprise, though, things aren’t quite as simple as that. The vendors’ arguments in favour of SSDs ignore capacity, as they assume that the real problem is performance, where they can demonstrate that SSDs deliver more value for a given amount of cash than spinning media.

There is truth in this argument, but it’s not as if data growth is slowing down. In fact, when you consider that the next wave of data will come from sensors and what’s generally known as the Internet of things – or machine-to-machine communication – then expect the rate of data growth to increase, as this next data tsunami has barely started.

And conversations with both vendors and end users also show that capacity is not something that can be ignored. If you don’t have or can’t afford additional storage, you might need to do something drastic – like actually manage the stuff, although each time I’ve mooted that, I’m told that it remains more costly to do than technological fixes like thin provisioning and deduplication.

In practice, the vendors are, as so often happens in this industry, way ahead of all but the largest, most well-heeled customers. Most users, I would contend, are more concerned with ensuring that they have enough storage to handle projected data growth over the next six months. Offer them high-cost, low-capacity storage technology and they may well reject it in favour of capacity now.

When I put this point to him, LSI’s EMEA channel sales director Thomas Pavel reckoned that the market needed education. Maybe it does. Or maybe it’s just fighting to keep up with demand.

NAS upgrade on the way

It’s time to rebuild my server. Currently supporting two smartphones, a pair of high-powered desktops, two laptops and a variety of other devices scattered around the house, the Ubuntu Server-powered machine in the basement has just about reached the end of its useful life.

Not only is it running out of disk space, but the space it does have badly needs re-organising. Now, I know it’s quite easy to upgrade the five-spindle, EXT4-formatted RAID5 disk system in the self-built server, but to be honest it’s more time and trouble than I have available to give. Also, the Ubuntu update system seems to have broken. Maybe they’ve moved where they put all the updates since I installed Ubuntu 8.10, but it no longer works and I can’t be bothered spending ages figuring out how to fix it.

Guess I’m not a pure hobbyist any more if I value my time so much that I don’t want to spend it in a dark basement tending an Ubuntu server as it rebuilds its RAID stripes.

When I first set up the server, it was designed to provide more than just storage. It would be the digital hub, functioning as a server for DHCP (IP address serving), NTP (time), VPN termination (using OpenVPN so I could log in from anywhere), and a half-dozen other things that I thought we’d need. Actually, we don’t need most of that stuff. Turns out we really just need some central storage, properly managed.

Trouble is it’s not very well managed, in that it consists of five 500GB drives in one case providing about 2TB and an Iomega RAID (kindly donated) box with 1.4TB. They’re connected over the network using NFS to tie the Iomega into the main server’s directory hierarchy. All that’s shared using CIFS for the Windows boxes and AFP for the Apple machines.

The folder structure’s a mess, though, and the disks need upgrading, both because they’ve been sitting there spinning away for over two years in an increasingly dense cloud of cobwebs — can’t keep the bugs out of the server, as it’s the warmest thing down there in the winter — and because the volume of data that video can gobble never ceases to amaze.

So it’s time to upgrade and rebuild it using bigger disks (4 x 2TB, I think) and an off-the-shelf storage package such as FreeNAS. That way I don’t have too much support work to do, costs are contained, and the functions it doesn’t have I don’t really need. I’m also going to build it on top of VMware’s ESX hypervisor (I’ll use my old PC’s motherboard and Intel Core Duo CPU as the hardware for this), so if it needs more functionality (which I doubt) then I can just create and fire up a virtual machine.

So far, I’ve acquired an ESX-compliant network card (Intel PRO/1000 CT), a low-end graphics card (with VGA out for my Adderlink IP remote KVM device, which allows me to log in directly to the server from the office), and a 2TB drive that will act as a sink for the data before I move it all over to FreeNAS.
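
Before anything gets wiped, the data has to land safely on that sink drive, so the plan is a straightforward copy-and-verify pass. Here is a minimal Python sketch of the idea – the /mnt/raid and /mnt/sink mount points are placeholders for illustration, not my real paths:

    # Minimal copy-and-verify sketch for staging data onto the 2TB sink drive.
    # The source and destination mount points are placeholders, not real paths.

    import hashlib
    import shutil
    from pathlib import Path

    SRC = Path("/mnt/raid")    # assumed mount point of the existing RAID5 volume
    DST = Path("/mnt/sink")    # assumed mount point of the temporary 2TB drive

    def sha256(path, chunk=1 << 20):
        """Checksum a file in 1MB chunks so large video files don't eat RAM."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                digest.update(block)
        return digest.hexdigest()

    def copy_and_verify(src_root, dst_root):
        """Mirror every file, preserving timestamps, then compare checksums."""
        for src in src_root.rglob("*"):
            if not src.is_file():
                continue
            dst = dst_root / src.relative_to(src_root)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            if sha256(src) != sha256(dst):
                raise RuntimeError(f"checksum mismatch: {src}")

    if __name__ == "__main__":
        copy_and_verify(SRC, DST)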

Watch this space for more – and maybe even a review or two.

Oracle buys Sun — but who really wins?

The big news this week is undoubtedly the $7.4 billion purchase of the troubled server company Sun Microsystems by database specialist Oracle. But, given the very different nature of the two companies, will it work?

Well-known in the industry for being the favourite of developers and geeks, and among its customers for its high-powered, reliable but expensive systems, Sun has nonetheless suffered financially since the implosion of the dotcom bubble. Its accounts have bled red for years, and for eons — that’s eons in IT years — selling the company has seemed to be the only way out.

Just two weeks ago, IBM made overtures to buy the company. This author, among others, could see that there would be some synergies, although I struggled to see how Big Blue would swallow Sun’s server range, given that it already has a well-established and rational product portfolio. IBM and Sun would have fitted together mainly on the software side, where the acquisition of Solaris, a major platform in the database world, along with Java and many open source technologies including OpenOffice, would have sat comfortably alongside IBM’s espousal of open source and its conversion from a hardware company into a software and services one.

It wasn’t to be. Sun demanded too much of IBM — more here — and the deal fell through. We wondered at the time how Sun could have let it happen, and accused the Silicon Valley stalwart of greed and complacency.

What we didn’t know was that it had another suitor in the wings, one willing to pay Sun’s pretty substantial asking price.

Early post-purchase signs are good. Most analysts and observers see more positives than negatives emerging from the deal. Oracle is a software company first and foremost, while Sun’s revenues stem mostly from hardware.

What’s more, Sun’s Solaris is a major platform for Oracle’s eponymous database, which means that Oracle can now offer the whole stack, from raw iron upwards, and so is in a better position to offer more tightly integrated solutions. As the company’s acquisition statement said: “Oracle will be the only company that can engineer an integrated system — applications to disk — where all the pieces fit and work together so customers do not have to do it themselves”.

Some systems integrators may suffer as a result, but that’ll be some way down the line, after two or three product refresh cycles.

The deal has even got some of the opposition thinking. As Colin Barker reports from an HP product launch in Berlin (which I was unable to make, sadly): “HP executives thought that the news was interesting and it was not difficult to see their internal calculators trying to work out any options the move would give them.”

So far so fitted.

But big questions remain to be answered. Sun has always been a fairly open company, and has always seen itself and wanted to be seen as part of a wider community. When open source came along, Sun gradually adopted it and, with no little external persuasion it seemed at the time, even made some of its own, expensively developed technology open source.

In complete contrast, Oracle has rarely if ever done that — apart perhaps from its development of its own version of Red Hat Linux, which the market has largely ignored. Oracle’s proprietary approach and eagerness to squeeze every last dollar out of its large enterprise customers is the stuff of legend.

This is unlikely to change, especially now that it can lock down those customers to a tightly integrated hardware platform. The reactions of those customers, of the competition, many of whom are in alliances with either or both the parties to the acquisition, and of the channel remain to be seen.

There will be layoffs too, given the economic situation, and the more obvious lack of need for duplicated sales, marketing or HR departments, for example. One analyst is reported to have predicted up to 10,000 job losses. I would expect the culture shock to squeeze quite a few out of the door.

But if you’re a customer, you might prefer not to be locked in. If you’re a hardware partner of Oracle’s, you’re likely to be re-thinking that deal, big time. HP is in that boat, given that it co-developed servers for Oracle – the database company’s first venture into hardware – back in 2008. And if you either work for Sun or are part of the developer community in Sun’s orbit, you might well find yourself wondering where to go next, whether voluntarily or not.

My take is that most customers will stay put. It’s not the time to start launching into expensive new IT roll-outs. That’s not to say that those with an aversion to single-supplier deals won’t bail as soon as possible.

However, the pressure on the competition in the current climate is likely to result in more mergers and acquisitions, and a jungle populated by fewer but bigger beasts.

But who and which? Here are some questions: will IBM swallow EMC? Will Cisco buy Brocade? And could Microsoft finally buy Yahoo!? And how many more yachts will this deal enable Oracle CEO Larry Ellison to buy?

Where does the Sun-IBM deal failure leave Sun?

So Sun Microsystems turned down IBM’s offer to buy it — even though Big Blue’s $7 billion buy-out bid was twice the valuation of the troubled Silicon Valley stalwart.

We read on Bloomberg that the sticking point was a clause in the contracts of top Sun execs. The news service reports that: “chief executive officer Jonathan Schwartz and chairman Scott McNealy have contracts that mean they would receive three times their annual pay, including salary and bonus, should Sun be acquired.”

IBM reportedly didn’t think too much of that stipulation and would not honour it — even though its acquisition of the fourth-placed server vendor would have boosted its position against number one vendor HP.

We also read that “Sun’s board contended IBM wanted too much control over Sun’s projects and employees before the deal closed”, which is hardly surprising: coughing up $7 billion has a way of concentrating the mind.

And especially when it appears that some super-rich employees wanted to grow even richer than they already are. Top Sun execs get paid in millions of dollars: Bloomberg reports that Schwartz’s salary was $1 million last year and his target bonus was twice that amount. And company founder McNealy was awarded $6.45 million in compensation last year, including $1 million in cash for his “service as an employee of Sun”.

But in this day and age, exactly how much money does one already super-rich individual truly need?

There’s another factor. Even before the recession, Sun consistently failed to show a profit so IBM would be bonkers not to want to manage Sun closely. And Sun looks to be heading for its biggest loss since 2003.

Following its rejection of IBM, Sun’s share price dipped 23 percent, its biggest fall since 2002, according to Bloomberg.

So what are we to learn from this? Chatter among techies in the industry demonstrates tremendous loyalty to Sun and its technology. However, a company selling semi-proprietary kit — yes, I know that Solaris is now open, and that it uses Intel processors and so on, but that’s not where the bulk of its sales are — was always going to struggle now that hardware is commoditised and standardised.

Analysts agree.

“Sun can survive as an independent company, but the longer the recession goes on, the more likely it is the value of the franchise begins to fade,” said one.

“Sun made a horrible mistake. Wall Street analysts probably optimistically expect their revenue to decrease year-over-year for the next several years — they should have just taken that money and ran,” said another.

Is this the beginning of the end for Sun? Industry observers — including this one — have called this before and been wrong. Largely down to the company’s huge cash cache, Sun has continued to trade even as its accounts bleed red.

What’s different this time is that Sun’s top execs seem to have forgotten that we’re in the middle of a recession. It might be because Silicon Valley has its own mental micro-climate. I was there a couple of months back, talking to venture capitalists and heads of startups looking for funding, and the untrammelled optimism was palpable: I almost started sweeping it up off the floor.

But in the real world, there’s near-universal anger and disappointment at the shenanigans of the stupendously well-paid at the heads of companies. Keen to be seen as corporately and financially responsible, IBM is likely to have been sensitive to the appearance of funding what looks like plain greed.

Neither of the two parties has commented on their falling out. But if Sun is to survive, you’d have to hope that hubris doesn’t get in the way of deals with any future suitors.

If there are any.

New HP servers take battle to Cisco

HP has today launched a swathe of servers in multiple form factors — rack, blade and tower — driven by Intel’s latest processor architecture, codenamed Nehalem.

But there’s much more to it than that.

Time was when server companies, especially those such as HP, which analysts say has the biggest server market share, would boast and blag about how theirs were the biggest and fastest beasts in the jungle.

No longer. Instead, HP put heavy emphasis on its management capabilities. That’s a shot fired across the bows of network vendor Cisco, which just two weeks ago unveiled a new unified computing initiative, at whose core is a scheme to manage and automate the movement of virtual machines and applications across servers inside data centres. Oh yes, there’s a server in there too — a first for fast-diversifying Cisco.

But this is a sidetrack: back to HP’s launch of the ProLiant G6. Performance was mentioned once in the press release’s opening paragraph — they’re twice as quick, apparently — but when he spoke to me, European server VP Christian Keller focused almost entirely on manageability, and performance per watt.

“We have 32 sensors that give health information about temperatures and hotspots. Unlike our competitors, we don’t just control all six fans together — we can control them separately using advanced algorithms. These are based on computational fluid dynamics and are based in a chip, so it works even if the OS is changing — for example during virtualisation moves,” he said.

Keller went on to talk about how the servers’ power draw can be capped, again using hardware-based algorithms, which means that a server that’s been over-specified for the purposes of future-proofing won’t draw more power than it needs.

The result, Keller went on, is that “you can use the data centre better and pack more servers into the same space.” The bottom line is that the organisation reaps big total cost of ownership savings, he reckoned, although with finance very tight, he said that quick payback was top of mind for his customers.

“Customers are looking for faster payback today due to recession,” he said. “With HP, you need fewer servers to do the same amount of work and payback is achieved in around 12 months.” And there’s a bunch of slideware to back up his claims. You can get more on the products here.

Management software
HP’s keen to make more of its data centre management software — during a recent conversation, one HP exec said he reckoned the company had indulged in stealth marketing of its software portfolio.

And it’s true that HP’s new raft of software, much of it launched over six months ago and based on Systems Insight Manager, has barely been mentioned outside conversations with HP’s customers. It covers a wide range of functionality, enabling data centre managers to manage partitions within and across blades, which can be in the same chassis or in separate chassis — depending on what you want to do.

I saw a demo of the system and it was impressive. One of the core modules is the Capacity Advisor, which allows what-if planning so you can size your hardware requirements. It includes trending out into the future – a feature that was on HP’s HP-UX platform but is now available on x86. It not only allows the manager to size systems for both current and future use, it also automatically checks how well the sizing operation matches reality.

Virtualisation Manager adds a view of all resources and virtual machines, and can display application resource utilisation inside VMs, while Global Workload Manager allows you to change priorities depending on which application is the most critical. So backup gets resources when the payroll cheque run is finished, for example. There’s lots more to it, so you can find out more here.

This isn’t intended to be a serious review of HP’s system management software — I didn’t spend nearly enough time with it for that. However, amid the noise surrounding VMware and Microsoft, and a host of third parties vying for position as top dog in the data centre management space, and together with the brouhaha surrounding Cisco’s recent launch, HP has quietly got on with developing what looks like a seriously useful suite of software.

Apart from a press release six months ago, the company just hasn’t told many people about it.