Technology highlights 2013

I’ve been shamefully neglecting this blog recently, yet a lot of interesting new technologies and ideas have come my way. So by way of making amends, here’s a quick round-up of the highlights.

Nivio
This is a company that delivers a virtual desktop service with a difference. Virtual desktops have been a persistent topic of conversation among IT managers for years, yet delivery has always been some way off. A bit like fusion energy, only not as explosive.

The problem is that, unless you’re serving desktops to people who do a single task all day – call centre workers, say, but not most of us – users expect a certain level of performance and customisation from their desktops. If you’re going to take a desktop computer away from someone who uses it intensively as a tool, you’d better make sure that the replacement technology is just as interactive.

Desktops provided by terminal services have tended to be slow and a bit clunky – and there’s no denying that Nivio’s virtual desktop service, which I’ve tried, isn’t quite as snappy as having 3.4GHz of raw compute power under your fingertips.

On the other hand, there’s a load of upsides. From an IT perspective, you don’t need to provide the frankly huge amounts of bandwidth needed to service multiple desktops. Nor do you care what device the end user accesses the service with – so if you’re allowing people to bring their own devices into work, it will run on anything with a browser. I’ve seen a Windows desktop running on an iPhone – scary…

And you don’t need to buy applications. The service provides them all for you from its standard set of over 40 applications – and if you need one the company doesn’t currently offer, they’ll supply it. Nivio also handles data migration, patching, and the back-end hardware.

All you need to do is hand over $35 per month per user.

Quantum
The company best known for its tape backup products launched a new range of disk-based deduplication backup appliances.

The DXi6800 is, says Quantum’s Stéphane Estevez, three times more scalable than any other such device, allowing you to scale from 13TB to 156TB. Aimed at mid-sized as well as large enterprises, it includes an array of disks that you effectively switch on with the purchase of a new licence. Until then, they’re dormant, not spinning. “We are taking a risk of shipping more disks than the customer is paying for – but we know customer storage is always growing. You unlock the extra storage when you need it,” said Estevez.

It can handle up to 16TB/hour which is, reckons the company, four times faster than EMC’s DD670 – its main competitor – and all data is encrypted and protected by an electronic certificate so you can’t simply swap a disk into another Quantum library. And the management tools mean that you can manage multiple devices across datacentres.

Storage Fusion
If ever you wanted to know at a deep level how efficient your storage systems are, especially when it comes to virtual machine management, then Storage Fusion reckons it has the answers in the form of its storage analysis software, Storage Fusion Analyze.

I spoke to Peter White, Storage Fusion’s operations director, who reckoned that companies are wasting storage capacity by not over-provisioning enough, and by leaving old snapshots and storage allocated to servers that no longer exist.

“Larger enterprise environments have the most reclaimable storage because they’re uncontrolled,” White said, “while smaller systems are better controlled.”

Because the company’s software has analysed large volumes of storage, White was in a position to talk about trends in storage usage.

For example, most companies have 25% capacity headroom, he said. “Customers need that level of comfort zone. Partners and end users say that the reason is because the purchasing process to get disk from purchase order to installation can take weeks or even months, so there’s a buffer built in. Best practice is around that level but you could go higher.”

You also get what White called system losses, due to formatting inefficiencies and OS storage. “And generally processes are often broken when it comes to decommissioning – without processes, there’s an assumption of infinite supply which leads to infinite demand and a lot of wastage.”

The sister product, Storage Fusion Virtualize, “allows us to shine a torch into VMware environments,” White said. “It can see how VM storage is being used and consumed. It offers the same fast analysis, with no agents needed.”

Typical customers include not so much enterprises as systems integrators, service providers and consultants.

“We are complementary to main storage management tools such as those from NetApp and EMC,” White said. “Vendors take a global licence, and end users can buy via our partners – they can buy report packs to run it monthly or quarterly, for example.”

SolidFire
Another product aimed at service providers, SolidFire steps aside from the usual all solid-state disk (SSD) pitch. Yes, solid-state is very fast compared with spinning media, but the company’s claim is that it can deliver a guarantee not just of uptime but of performance.

If you’re a provider of storage services in the cloud, one of your main problems, said the company’s Jay Prassl, is the noisy neighbour, the one tenant in a multi-tenant environment who sucks up all the storage performance with a single database call. This leaves the rest of the provider’s customers suffering from a poor response, leading to trouble tickets and support calls, so adding to the provider’s costs.

The aim, said Prassl, is to help service providers offer guarantees to enterprises they currently cannot offer because the technology hasn’t – until now – allowed it. “The cloud provider’s goal is to compute all the customer’s workload but high-performance loads can’t be deployed in the cloud right now,” he said.

So the company has built SSD technology that, because of the way that data is distributed across multiple solid-state devices – I hesitate to call them disks because they’re not – offers predictable latency.

“Some companies manage this by keeping only a few tenants on a single box but it’s a huge problem when you have hundreds or thousands of tenants,” Prassl said. “So service providers can now write a service level agreement (SLA) around performance, and they couldn’t do that before.”

Key to this is the automated way that the system distributes the data around the company’s eponymous storage systems, according to Prassl. It then sets a level of IOPS that a particular volume can achieve, and the service provider can then offer a performance SLA around it. “What we do for every volume is dictate a minimum, maximum and a burst level of performance,” he said. “It’s not a bolt-on but an architecture at the core of our work.”
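To make the minimum/maximum/burst idea concrete, here’s a minimal sketch in Python of how per-volume IOPS limits might be modelled, with a token-bucket-style allowance funding the bursts. The class, names and numbers are all mine for illustration – this isn’t SolidFire’s API:

```python
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    """Hypothetical per-volume QoS policy: min is a guarantee the scheduler
    must honour, max is the steady-state ceiling, and burst is a short-term
    ceiling funded by credits banked while running under max."""
    min_iops: int
    max_iops: int
    burst_iops: int
    burst_credits: int = 0       # unused headroom banked for later bursts
    MAX_CREDITS = 60             # cap on bankable headroom (seconds' worth)

    def allowed_iops(self, requested: int) -> int:
        """Return the IOPS this volume may actually receive this second."""
        if requested <= self.max_iops:
            # Running under the ceiling: bank the headroom for later.
            self.burst_credits = min(self.MAX_CREDITS, self.burst_credits + 1)
            return requested
        if self.burst_credits > 0:
            # Spend a credit to burst above max, up to the burst ceiling.
            self.burst_credits -= 1
            return min(requested, self.burst_iops)
        return self.max_iops

# A noisy neighbour asking for 50,000 IOPS is clamped to its policy,
# leaving the other tenants' guaranteed minimums intact.
qos = VolumeQoS(min_iops=1_000, max_iops=5_000, burst_iops=10_000)
print(qos.allowed_iops(2_000))   # under max: gets 2000, banks a credit
print(qos.allowed_iops(50_000))  # bursts to 10000, spending the credit
```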

2012: the tech year in view (part 2)

Here’s part 2 of my round-up of some of the more interesting news stories that came my way in 2012. Part 1 was published on 28 December 2012.

Datacentre infrastructure
NextIO impressed with its network consolidation product, vNet. This device virtualises the I/O of all the data to and from servers in a rack, so that they can share the bandwidth resource, which is allocated according to need. One physical adapter can present itself as multiple virtual adapters, each of which looks like a dedicated physical adapter to the server – physical or virtual – that owns it. The main beneficiaries, according to the company, are cloud providers, who can then add more servers quickly and easily without having to physically reconfigure their systems and cables. According to the company, a typical virtualisation host can be integrated into the datacentre in minutes as opposed to hours.
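As a rough illustration of that I/O virtualisation idea – one physical adapter carved into virtual adapters, each presented to a server as if dedicated – here’s a toy sketch in Python. All the names and the 40Gbps figure are invented for the example, not NextIO’s interface:

```python
class PhysicalAdapter:
    """Toy model of I/O virtualisation: one physical adapter is carved
    into virtual adapters, each of which looks like a dedicated NIC/HBA
    to its server. Names and units are illustrative only."""
    def __init__(self, bandwidth_gbps: float):
        self.bandwidth_gbps = bandwidth_gbps
        self.vadapters: dict[str, float] = {}   # server -> allocated Gbps

    @property
    def allocated(self) -> float:
        return sum(self.vadapters.values())

    def add_vadapter(self, server: str, share_gbps: float) -> None:
        if self.allocated + share_gbps > self.bandwidth_gbps:
            raise ValueError("physical link oversubscribed")
        self.vadapters[server] = share_gbps

# One 40Gbps adapter shared by three servers, no recabling required:
pa = PhysicalAdapter(40.0)
pa.add_vadapter("web-01", 10.0)
pa.add_vadapter("db-01", 20.0)
pa.add_vadapter("esx-host-03", 10.0)
print(pa.vadapters)
```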

In the same part of the forest, the longer-established Xsigo launched a new management layer for its Data Center Fabric appliances, its connectivity virtualisation products. This allows you to see all I/O traffic across all the servers, any protocol, and with a granularity that ranges from specific ports to entire servers.

Nutanix came up with a twist on virtualisation by cramming all the pieces you need for a virtualisation infrastructure into a single box. The result, says the company, is a converged virtualisation appliance that allows you to build a datacentre with no need for separate storage systems. “Our mission is to make virtualisation simple by eliminating the need for network storage,” reckons the company. Its all-in-one appliances mean faster setup and reduced hardware expenditure, the company claims. However, like any do-it-all device, its desirability depends on how much you value the ability to customise over ease of use and setup. Most tend to prefer separates so they can pick and choose.

Cooling servers is a major problem: it costs money and wastes energy that could be more usefully employed doing computing. This is why Iceotope has developed a server that’s entirely enclosed and filled with an inert liquid: 3M Novec 7300. This convects heat away from heat-generating components and is, according to chemical giant 3M, environmentally friendly and thermally stable. The fluid needs no pumping, instead using convection currents to transport heat and dump it to a water-filled radiator. The water is pumped but, Iceotope says, you need only a 72W pump for a 20kW cabinet of servers, a far cry from a typical 1:1 ratio of cooling energy to compute power when using air as the transmission medium.
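The arithmetic behind that claim is worth spelling out. A quick calculation, using only the figures quoted above:

```python
# Worked comparison using Iceotope's quoted numbers.
compute_w = 20_000          # 20kW cabinet of servers
liquid_pump_w = 72          # quoted water-pump draw for the cabinet
air_cooling_w = compute_w   # typical ~1:1 cooling:compute ratio for air

print(f"air cooling overhead:    {air_cooling_w / compute_w:.1%}")    # 100.0%
print(f"liquid cooling overhead: {liquid_pump_w / compute_w:.2%}")    # 0.36%
print(f"energy saved per cabinet: {(air_cooling_w - liquid_pump_w) / 1000:.2f} kW")
```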

Networking
Vello Systems launched its Data Center Gateway incorporating VellOS, its operating system designed for software-defined networking (SDN) – probably the biggest revolution in network technology over the last decade. The box is among the first SDN products – as opposed to vapourware – to emerge. The OS can manage not just Vello’s own products but other SDN-compliant systems too.

Cloud computing
One of the highlights of my cloud computing year was a visit to Lille, to see one of OVH‘s datacentres. One of France’s biggest cloud providers, OVH is unusual in that it builds everything itself from standard components. You’ll find no HP, IBM or Dell servers here, just bare Supermicro motherboards in open trays, cooled by fresh air. The motivation, says the company, comes from the fact that there are no external investors and there is a high level of technical and engineering expertise at the top. Effectively, the company does it this way because it has the resources to do so, and “because we are techies and it’s one of our strong values.” The claimed benefit is lower costs for its customers.

I had an interesting discussion with Martino Corbelli, the chief customer officer at Star, a UK-based cloud services provider. He said that the UK’s mid-market firms are getting stuck in bad relationships with cloud services vendors because they lack both the management and negotiation skills required to handle issues and the budget to cover the cost of swapping out.

“The industry for managed services and cloud is full of examples of people who over-promise, under-deliver and don’t meet expectations,” he said, reckoning that discussions with potential customers now revolve more around business issues than technology. “Now it’s about the peer-to-peer relationship,” he said. “Can you trust them, are you on the same wavelength, do you feel that your CFO can call their CFO and talk to them as equals?”

We also saw the launch of new cloud computing providers and services from mid-market specialist Dedipower, CloudBees with a Java-based platform service, and Doyenz with a disaster recovery service aimed at smaller businesses.

Storage
Coraid boasted of attracting over 1,500 customers for its unique ATA-over-Ethernet (AoE) storage products. AoE runs storage traffic directly over native Ethernet, without the TCP/IP stack that protocols such as iSCSI sit on. Coraid reckons this reduces protocol overheads and so is three to five times faster than iSCSI. The company makes a range of storage systems but, although AoE is an open standard, no other company is designing and selling products with it.
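For a feel of where the overhead argument comes from, here’s an illustrative Python comparison of per-frame header costs. The iSCSI figures are the usual protocol minimums, the AoE header size is my approximation, and note the punchline in the comments: most of the real-world gap comes from skipping TCP/IP processing entirely, not from the header bytes themselves:

```python
# Per-frame payload comparison on a 9000-byte jumbo frame.
MTU = 9000                    # payload space above the Ethernet header
AOE_HDR = 32                  # approximate AoE command header (assumption)
ISCSI_HDRS = 20 + 20 + 48     # IPv4 + TCP + iSCSI basic header segment

print(f"AoE payload/frame:   {MTU - AOE_HDR} bytes")
print(f"iSCSI payload/frame: {MTU - ISCSI_HDRS} bytes")
# Header overhead is only ~1% either way on jumbo frames; the bigger win
# is that AoE avoids TCP segmentation, ACK traffic and reassembly work.
```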

WhipTail joined the growing list of vendors selling all-flash storage systems with its Accela products. Solid-state gives you huge performance advantages but the raw storage (as opposed to the surrounding support infrastructure) costs ten times as much as spinning disk, so the value proposition is that the added performance allows you to make more money.

Eventually, the bulk of storage will be solid-state, as the price comes down, with disk relegated to storing backups, archives and low-priority data, but that time has yet to come. It’s a delicate balancing operation for companies such as WhipTail and Violin Memory: they don’t want to be too far ahead of the mass market and don’t want to miss the boat when flash storage becomes mainstream.

2012: the tech year in view (part 1)

As 2012 draws to a close, here’s a round-up of some of the more interesting news stories that came my way this year. This is part 1 of 2 – part 2 will be posted on Monday 31 December 2012.

Storage
Virsto, a company making software that boosts storage performance by sequentialising the random data streams from multiple virtual machines, launched Virsto for vSphere 2.0. According to the company, this adds features for virtual desktop infrastructures (VDI), and it can lower the cost of providing storage for each desktop by 50 percent. The technology can save money because you need less storage to deliver sufficient data throughput, says Virsto.
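The underlying trick is log-structured: random writes from many VMs are appended to one sequential stream, with an index recording where each block landed, so the disk sees one big sequential write instead of the usual “I/O blender”. A deliberately tiny Python sketch of the idea – purely illustrative, not Virsto’s implementation:

```python
# Random writes from many VMs become appends to one sequential log;
# an index remembers where each (vm, block) lives.
log: list[bytes] = []                      # stands in for a sequential log device
index: dict[tuple[str, int], int] = {}     # (vm_id, block_no) -> log position

def write_block(vm_id: str, block_no: int, data: bytes) -> None:
    index[(vm_id, block_no)] = len(log)    # record where the block landed
    log.append(data)                       # always an append: sequential I/O

def read_block(vm_id: str, block_no: int) -> bytes:
    return log[index[(vm_id, block_no)]]

# Interleaved random writes from two VMs become one sequential stream:
write_block("vm-a", 907, b"a1")
write_block("vm-b", 13, b"b1")
write_block("vm-a", 2, b"a2")
print(read_block("vm-a", 907))  # b'a1'
```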

At the IPExpo show, I spoke with Overland, which has added a block-based product called SnapSAN to its portfolio. According to the company, the SnapSAN 3000 and 5000 offer primary storage using SSD for caching or auto-tiering. This “moves us towards the big enterprise market while remaining simple and cost-effective,” said a spokesman. Also, Overland’s new SnapServer DX series now includes dynamic RAID, which works somewhat like Drobo’s system in that you can install differently sized disks into the array and still use all the capacity.
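For single-disk redundancy, the Drobo-style arithmetic works out to roughly the total capacity minus the largest disk, which is held back so any one failure is survivable. A one-line sketch, on the assumption that Overland’s scheme behaves similarly:

```python
# Drobo-style usable capacity for a mixed-size array with single-disk
# redundancy: everything except the equivalent of the largest disk.
def usable_capacity_tb(disks_tb: list[float]) -> float:
    return sum(disks_tb) - max(disks_tb)

print(usable_capacity_tb([1.0, 2.0, 2.0, 3.0]))  # 5.0TB usable of 8.0TB raw
```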

Storage startup Tegile is one of many companies making storage arrays with both spinning and solid-state disks to boost performance – and to do so cost-effectively, the company claims. Tegile reduces data aggressively, using de-duplication and compression, and so cuts the cost of the SSD overhead. Its main competitor is Nimble Storage.
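As a toy illustration of those two reduction techniques – content-addressed de-duplication plus compression of whatever survives – here’s a Python sketch. Real arrays do this inline, at block granularity, with far more engineering:

```python
import hashlib
import zlib

store: dict[str, bytes] = {}   # chunk fingerprint -> compressed chunk

def ingest(chunk: bytes) -> str:
    """Store a chunk once, keyed by its content hash; duplicates are free."""
    fp = hashlib.sha256(chunk).hexdigest()
    if fp not in store:
        store[fp] = zlib.compress(chunk)
    return fp

# Fifty identical OS-image blocks and one unique block:
blocks = [b"OS image block" * 100] * 50 + [b"unique user data" * 100]
fingerprints = [ingest(b) for b in blocks]

raw = sum(len(b) for b in blocks)
stored = sum(len(v) for v in store.values())
print(f"{raw} bytes written, {stored} bytes stored "
      f"({raw / stored:.0f}:1 reduction)")
```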

Nimble itself launched a so-called ‘scale to fit’ architecture for its hybrid SSD-spinning disk arrays this year, adding a rack of expansion shelves that allows capacity to be expanded. It’s a unified approach, says the company, which means that adding storage doesn’t mean you need to perform a lot of admin moving data around.

Cloud computing
Red Hat launched OpenShift Enterprise, a cloud-based platform-as-a-service (PaaS). This is, says Red Hat, a solution for developers to launch new projects, including a development toolkit that allows you to quickly fire up new VM instances. The system is based on SELinux: you can fire up a container and get middleware components such as JBoss, plus PHP and a wide variety of other languages. The benefit, says the company, is that the system allows you to pool your development projects.

Red Hat also launched Enterprise Virtualization 3.1, a platform for hosting virtual servers with up to 160 logical CPUs and up to 2TB of memory per virtual machine. It adds command line tools for administrators, and features such as RESTful APIs, a new Python-based software development kit, and a bash shell. The open source system includes a GUI to allow you to manage hundreds of hosts with thousands of VMs, according to Red Hat.
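As a hedged sketch of what driving that RESTful API might look like: the /api/vms path and admin@internal credential below follow common oVirt/RHEV conventions, but treat every detail as an assumption to verify against your own installation’s documentation:

```python
# Hypothetical query of the manager's REST API for the list of VMs.
import requests

BASE = "https://rhevm.example.com"          # hypothetical manager host

resp = requests.get(
    f"{BASE}/api/vms",                      # assumed oVirt/RHEV-style path
    auth=("admin@internal", "password"),    # assumed credential format
    headers={"Accept": "application/xml"},
    verify=False,                           # demo only: skip TLS checks
)
resp.raise_for_status()
print(resp.text[:500])                      # first part of the VM list XML
```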

HP spoke to me at IPExpo about a new CGI rendering system that it’s offering as a cloud-based service. According to HP’s Bristol labs director, it’s 100 percent automated and autonomic. A graphics designer uses a framework to send a CGI job to a service provider, who renders the film frames. The service estimates the number of servers required, sets them up and configures them automatically in just two minutes, then tears them down after delivery of the video frames. The evidence that it works can apparently be seen in the animated film Madagascar where, to make the lion’s mane move realistically, calculations were needed for 50,000 individual hairs.
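A back-of-envelope version of the capacity estimate such a service has to make before spinning anything up might look like this. Every number below is invented; only the shape of the calculation matters:

```python
import math

frames = 240                 # a ten-second shot at 24fps
core_hours_per_frame = 6.0   # heavy fur/hair simulation (invented figure)
deadline_hours = 12.0
cores_per_server = 16

total_core_hours = frames * core_hours_per_frame
servers = math.ceil(total_core_hours / (deadline_hours * cores_per_server))
print(f"{servers} servers to render {frames} frames in {deadline_hours:.0f}h")
# -> 8 servers
```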

For the future, HP Labs is looking at using big data and analytics for security purposes and is looking at providing an app store for analytics as a service.

Security
I also spoke with Rapid7, an open-source security company that offers a range of tools for companies large and small to control and manage the security of their digital assets. It includes a vulnerability scanner, Nexpose, a penetration testing tool, Metasploit, and Mobilisafe, a tool for mobile devices that “discovers, identifies and eliminates risks to company data from mobile devices”, according to the company. Overall, the company aims to provide “solutions for comprehensive security assessments that enable smart decisions and the ability to act effectively”, a tall order in a crowded security market.

I caught up with Druva, a company that develops software to protect mobile devices such as smartphones, laptops and tablets. Given the explosive growth in the numbers of end-user owned devices in companies today, this company has found itself in the right place at the right time. New features added to its flagship product inSync include better usability and reporting, with the aim of giving IT admins a clearer idea of what users are doing with their devices on the company network.

Networking
Enterasys – once Cabletron for the oldies around here – launched a new wireless system, IdentiFi. The company calls it wireless with embedded intelligence offering wired-like performance but with added security. The system can identify issues of performance and identity, and user locations, the company says, and it integrates with Enterasys’ OneFabric network architecture that’s managed using a single database.

Management
The growth of virtualisation in datacentres has resulted in a need to manage the virtual machines, so a number of companies focusing on this problem have sprung up. Among them is vKernel, whose product vOPS Server aims to be a tool for admins that’s easy to use; experts should feel they have another pair of hands to help them get things done, as one company spokesman put it. The company, now owned by Dell, claims the largest feature set for virtualisation management when you include its vKernel and vFoglight products, which provide analysis, advice and automation of common tasks.

Happy birthday Simon the smartphone

[Image: the IBM Simon]

Today, 23 November 2012, is the 20th anniversary of the launch of the first smartphone. The IBM Simon was a handheld cellular phone and PDA that ended up selling some 50,000 units. This was impressive as, at the time, publicly available cellular networks were a rarity.

In fact, at the London launch of the device, I remember wondering how many people would buy one given the high costs of both a subscription and the phone. In the USA, BellSouth Cellular initially offered the Simon for US$899 with a two-year service contract or US$1099 without a contract.

As well as a touch screen, the widget included an address book, calendar, appointment scheduler, calculator, world time clock, electronic note pad, handwritten annotations, and standard and predictive stylus input screen keyboards.

Measuring 203mm by 63.5mm by 38mm, it had a massive 35mm by 115mm monochrome touch screen and weighed a stonking 510g, but was only on the market for about six months. The UK never saw it commercially available.

So while it never really took off, this was largely down to timing: it was ahead of its time and it was soon overtaken by smaller, less well-featured devices that were more affordable.

But when you contemplate which shiny shiny is your next object of desire, think about the Simon, and remember, Apple didn’t invent the smartphone: IBM did.

Precise Software adds new performance monitoring features

I don’t need to tell you about the technology industry’s love affair with cloud computing – since as an individual you’re likely to be way ahead of most enterprises in your seamless use of cloud already. After all, you probably use email, you store files on Dropbox, and you sync with Google or iCloud. That makes you a cloud computing user.

For a cloud provider however, extracting maximum value from expensive infrastructure is essential. And for that they need to be able to measure performance accurately – you can’t analyse what you can’t measure. And this is where Precise Software enters the picture.

Precise’s software uses analytics to measure the performance of applications; the latest embodiment is a new version of its flagship product, Precise 9.5, which it sells to large enterprises with their own datacentre and cloud facilities.

The problem datacentre managers are having is tracking data as it moves from virtual machines across the network to storage and back again.

Company spokesman Kevin Wood said: “Users want to track the data through from client to storage, to find why a virtual server is being starved of resources. Are resources being sucked up by another server, for example?”

The software is tailored to work with EMC and VMware‘s storage and hypervisor infrastructure, although versions supporting Microsoft’s Hyper-V and then Citrix Xen are planned for about six months’ time.

Precise is not alone in this area of the industry however, as the explosion of cloud computing is sucking in a growing number of companies keen to sell support products such as Precise’s. Additionally, the company’s focus on market leaders VMware and EMC may prove a barrier to many potential buyers, who are more likely to run heterogeneous environments.

The worst press release of 2010 – by a country mile

It’s an old story but it keeps on running. Companies employ PR companies to put themselves before the media. The main way they do that is through press releases.

So would you be happy if your PR company put out a release announcing an initiative while omitting not one but three key facts?

  1. Who was launching it
  2. Why they were launching it
  3. Why anyone else would care

How could they get it so wrong?

Here it is, in all its glory, with only the PR company’s name stripped out to protect its blushes. Though, under enough pressure, I might publish that too…

The Common Assurance Metric (CAM) launched today is a global initiative that aims to produce objective quantifiable metrics, to assure Information Security maturity in cloud, third party service providers, as well as internally hosted systems. This collaborative initiative has received strong support from Public and Private sectors, industry associations, and global key industry stakeholders.

There is currently an urgent need for customers of cloud computing and third party IT services to be able to make an objective comparison between providers on the basis of their security features. As ENISA’s work on cloud computing, has shown, security is the number one concern for many businesses and governments. Existing mechanisms to measure security are often subjective and in many cases are bespoke solutions. This makes quantifiable measurement of security profiles difficult, and imposes the need to apply a bespoke approach, impacting in time, and of course cost. The CAM aims to bridge the divide between what is available, and what is required. By using existing standards that are often industry specific, the CAM will provide a singular approach of benefit to all organisations regardless of geography or industry.

[Quotes about how wonderful it is removed from here]

The project team anticipate delivery of the framework in late 2010 followed by a process towards global adoption for organisations wishing to obtain an objective measurement of security provided by cloud providers, as well as the level of security for systems hosted internally.

You’ll notice other issues (polite word) in there too. Who is ENISA, mentioned in the second para but never explained? Why is the first sentence only barely comprehensible — or even grammatical — on the first read-through? The second sentence in the second para doesn’t belong there, it should be at the top of that para. Since when does the phrase “impacting in time” qualify as English? And as for the last sentence/para, how many times did you have to read it to extract what the hapless writer was driving at?

Finally, why do people still feel the need to double-space between sentences? I gave up typewriters and started using a word processor almost 30 years ago, and haven’t felt the need to do that since then…

It makes you wonder.

iPad? Just say no

If the world needed an iPad, why hasn’t one been invented before? Oh look: it has. Called the Newton when Apple launched it in 1992 – there were a couple of others released about the same time but the Newton got the headlines – it died in 1998 as not enough people bought it.

Will the iPad be different? Do you care?

Amid the inevitable hoopla and swooning going on in Applista diasporas at media outlets such as the Guardian and the BBC, let’s be clear: the iPad is a blown-up iPhone. And already we hear calls for there to be a cut-down version of the iPad so that you can carry it in your pocket. Thought that’s what an iPhone was…

The iPad’s remit seems to be more limited than the Newton’s. There’s no handwriting recognition for a start but it is very shiny, has bright colours and maybe the battery life is long enough to make it useful enough to carry around all day. I await review samples for verification. There’s no talk of local connectivity to either Mac or Windows, no talk of open access to all the applications you want, no talk of opening up the OS so that others can develop extensions or applications.

And for all of Jobs’ sneering at netbooks, mine works for hours on a single charge and runs Ubuntu quite happily – though I suspect that Windows 7 might actually be easier to use in terms of getting everything working. At least I have the choice.

As one blogger has already pointed out, this closed-world mentality could be the fatal flaw in the iPad’s shiny armour.

iPad? I don’t think so.

New HP servers take battle to Cisco

HP has today launched a swathe of servers in multiple form factors — rack, blade and tower — driven by Intel’s latest processor architecture, codenamed Nehalem.

But there’s much more to it than that.

Time was when server companies, especially those such as HP, which analysts say has the biggest server market share, would boast and blag about how theirs were the biggest and fastest beasts in the jungle.

No longer. Instead, HP put heavy emphasis on its management capabilities. That’s a shot fired across the bows of network vendor Cisco, which just two weeks ago unveiled a new unified computing initiative, at whose core is a scheme to manage and automate the movement of virtual machines and applications across servers inside data centres. Oh yes, there’s a server in there too — a first for fast-diversifying Cisco.

But this is a sidetrack: back to HP’s launch of the ProLiant G6. Performance was mentioned once in the press release’s opening paragraph — they’re twice as quick, apparently — but when he spoke to me, European server VP Christian Keller focused almost entirely on manageability, and performance per watt.

“We have 32 sensors that give health information about temperatures and hotspots. Unlike our competitors, we don’t just control all six fans together — we can control them separately using advanced algorithms. These are based on computational fluid dynamics and are based in a chip, so it works even if the OS is changing — for example during virtualisation moves,” he said.
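A crude sketch of what per-fan control buys you: each fan reacts to the sensors in its own airflow zone, so a hotspot speeds up one fan rather than all six. HP’s real logic is CFD-derived and runs in silicon; this simple proportional controller is only illustrative:

```python
def fan_speed_pct(zone_temps_c: list[float],
                  target_c: float = 60.0,
                  gain: float = 5.0) -> float:
    """Speed for one fan, driven by the sensors in its airflow zone,
    clamped between a 20% idle floor and 100%."""
    hottest = max(zone_temps_c)
    return max(20.0, min(100.0, 20.0 + gain * (hottest - target_c)))

# A hotspot near fan 2 speeds up only that fan:
zones = [[52.0, 55.0], [54.0, 71.0], [50.0, 53.0]]
print([fan_speed_pct(z) for z in zones])   # [20.0, 75.0, 20.0]
```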

Keller went on to talk about how the servers’ power draw can be capped, again using hardware-based algorithms, which means that a server that’s been over-specified for the purposes of future-proofing won’t draw more power than it needs.
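Power capping in miniature looks something like the sketch below. Illustrative only: the real mechanism throttles the processors’ performance states in firmware rather than applying a simple scaling factor:

```python
def apply_power_cap(draw_w: float, cap_w: float) -> float:
    """Return the throttle factor (1.0 = full speed) to stay under the cap."""
    return min(1.0, cap_w / draw_w)

# An over-specified server capped at 300W under a 450W peak load:
print(f"throttle to {apply_power_cap(450.0, 300.0):.0%} of full speed")
# -> throttle to 67% of full speed
```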

The result, Keller went on, is that “you can use the data centre better and pack more servers into the same space.” The bottom line is that the organisation reaps big total cost of ownership savings, he reckoned, although with finance very tight, he said that quick payback was top of mind for his customers.

“Customers are looking for faster payback today due to recession,” he said. “With HP, you need fewer servers to do the same amount of work and payback is achieved in around 12 months.” And there’s a bunch of slideware to back up his claims. You can get more on the products here.

Management software
HP’s keen to make more of its data centre management software — during a recent conversation, one HP exec said he reckoned the company had indulged in stealth marketing of its software portfolio.

And it’s true that HP’s new raft of software, much of it launched over six months ago and based on Systems Insight Manager, has barely been mentioned outside conversations with HP’s customers. It covers a wide range of functionality, enabling data centre managers to manage partitions within and across blades, which can be in the same chassis or in separate chassis — depending on what you want to do.

I saw a demo of the system and it was impressive. One of the core modules is the Capacity Advisor, which allows what-if planning so you can size your hardware requirements. It includes trending out to the future – a feature once found on HP’s HP-UX platform but now available on x86. It not only allows the manager to size systems for both current and future use, it automatically checks how well the sizing operation matches reality.
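The trending idea in miniature: fit a line to historical utilisation samples and extrapolate to the planning horizon. This is a hypothetical stand-in for Capacity Advisor’s own models, not a description of them:

```python
def linear_forecast(samples: list[float], months_ahead: int) -> float:
    """Least-squares line through monthly utilisation samples,
    extrapolated months_ahead past the last sample."""
    n = len(samples)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y + slope * ((n - 1) + months_ahead - mean_x)

cpu_util = [42.0, 45.0, 47.0, 52.0, 55.0]   # % utilisation, last 5 months
print(f"{linear_forecast(cpu_util, 6):.0f}% expected in 6 months")  # -> 75%
```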

Virtualisation Manager adds a view of all resources and virtual machines, and can display application resource utilisation inside VMs, while Global Workload Manager allows you to change priorities depending on which application is the most critical. So backup gets resources when the payroll cheque run is finished, for example. There’s lots more to it, so you can find out more here.

This isn’t intended to be a serious review of HP’s system management software — I didn’t spend nearly enough time with it for that. However, amid the noise surrounding VMware and Microsoft, and a host of third parties vying for position as top dog in the data centre management space, and together with the brouhaha surrounding Cisco’s recent launch, HP has quietly got on with developing what looks like a seriously useful suite of software.

Apart from a press release six months ago, the company just hasn’t told many people about it.