There’s been a bit of a glut of storage announcements recently, so here’s a quick round-up of the more interesting ones from recent weeks.
PernixData
This company is thinking ahead to a time when a large proportion of servers in datacentres will have flash memory installed inside them. Right now, most storage is configured as a storage pool, connected via a dedicated storage network, but this is sub-optimal for virtualised servers, which generate large numbers of IOPS.
So instead, companies such as Fusion-io have developed flash memory systems for servers, so that data is local and can be accessed much more quickly. This abandons one of the advantages of the storage network, namely storage sharing.
So PernixData has created FVP (Flash Virtualization Platform), a software shim that sits in the hypervisor and links the islands of data stored in flash memory inside each of the host servers. The way it works is to virtualise the server flash storage so it appears as a storage pool across physical hosts. Adding more flash to vSphere hosts – they have to be running VMware’s hypervisor – prompts FVP to expand the pool of flash. According to the company, it works irrespective of the storage vendor.
What this effectively does is to create a cache layer consisting of all the solid-state storage in the host pool that can boost the performance of reads and writes from and to the main storage network.
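To make the idea concrete, here is a toy sketch (my own illustration, not PernixData’s implementation) of a write-through cache layer pooled across hosts’ local flash, sitting in front of a slower shared array. All class and method names are invented for the example.

```python
class SharedArray:
    """Stands in for the slow, SAN-backed shared datastore."""
    def __init__(self):
        self.blocks = {}
        self.io_count = 0  # counts how often the slow tier is touched

    def read(self, lba):
        self.io_count += 1
        return self.blocks.get(lba)

    def write(self, lba, data):
        self.io_count += 1
        self.blocks[lba] = data


class FlashCachePool:
    """Pools each host's local flash into one logical cache layer."""
    def __init__(self, backing, hosts):
        self.backing = backing
        self.cache = {h: {} for h in hosts}  # per-host flash, pooled logically

    def _lookup(self, lba):
        # Any host's flash can satisfy a read, which is the pooling trick.
        for flash in self.cache.values():
            if lba in flash:
                return flash[lba]
        return None

    def read(self, host, lba):
        data = self._lookup(lba)
        if data is None:                 # miss: fetch from the slow tier
            data = self.backing.read(lba)
            self.cache[host][lba] = data  # keep a copy in local flash
        return data

    def write(self, host, lba, data):
        self.cache[host][lba] = data     # absorb the write in flash...
        self.backing.write(lba, data)    # ...then write through to the array
```

In this sketch, a block written by one host can be read by another straight out of the flash pool, without touching the shared array again; that cross-host sharing is what a purely local flash card cannot do.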
The company reckons that: “For the first time ever, companies can scale storage performance independent of storage capacity using server side flash.” And according to CEO Poojan Kumar: “We allow all virtual machines to use every piece of flash memory. The result is 17 times lower latency and 12 times more IOPS. It’s non-disruptive, it looks very simple and is easy to use.” It costs US$7,500 per physical host or US$10k for four hosts – a price point designed for smaller businesses.
It seems like a pretty good idea, and there’s some real-world testing info here.
Arista Networks
Also new on the hardware front are products from Arista Networks.
This company started life a few years ago with a set of high performance network switches that challenged the established players – such as Cisco and Juniper – by offering products that were faster, denser, and cheaper per port. Aimed at the high performance computing market, which includes users such as life-science research projects, geological-data analysis, and financial institutions, they were the beachhead that established the company’s reputation, something it found easy given that its founders included Jayshree Ullal (ex-Cisco senior vice-president) and Andy Bechtolsheim (co-founder of Sun Microsystems).
I recently spoke to Doug Gourlay, Arista’s vice-president of systems engineering, about the new kit, which Gourlay reckoned means that Arista “can cover 100% of the deployment scenarios that customers come to us with”. He sees the company’s strength as its software, which is claimed to be “self-healing and very reliable, with an open ecosystem and offering smart upgrades”.
The new products are the 7300 and 7250 switches, filling out the 7000 X Series which, the company claims, optimises costs, automates provisioning, and builds more reliable scale-out architectures.
The main use cases of the new systems are for those with large numbers of servers in small datacentres, and for dense, high performance computing render farms, according to Gourlay. They are designed for today’s flatter networks: where a traditional datacentre network used three layers, a modern fabric-type network uses just two, offering the fewest hops from any server to any other. In Arista-speak, the switches attaching directly to the servers and directing traffic between them are leaves, while the core datacentre network is the spine.
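The hop-count argument behind the two-layer design can be sketched in a few lines. This is an illustrative worst-case count only, with invented function names; real fabrics vary in how they are cabled.

```python
def leaf_spine_hops(leaf_a, leaf_b):
    """Switch hops between two servers in a two-tier leaf-spine fabric."""
    if leaf_a == leaf_b:
        return 1            # same leaf: one switch
    return 3                # leaf -> spine -> leaf

def three_tier_hops(access_a, access_b, agg_a, agg_b):
    """Worst-case switch hops in a classic access/aggregation/core design."""
    if access_a == access_b:
        return 1            # same access switch
    if agg_a == agg_b:
        return 3            # access -> aggregation -> access
    return 5                # access -> agg -> core -> agg -> access
```

The point of the flatter design is that the worst case between any two servers shrinks from five switch hops to three, which both cuts latency and makes it more uniform.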
The 7300 X series consists of three devices, with the largest, the 21U 7316, offering 16 line card slots with 2,048 10Gbps ports or 512 40Gbps ports. Claimed throughput is 40Tbps. The other two in the series, the 7308 and 7304, accommodate eight and four line cards respectively, with corresponding decreases in size (13U and 8U) and throughput (20Tbps and 10Tbps).
The 2U, fixed-configuration 7250QX-64 offers 64 40Gbps ports or 256 10Gbps ports, and a claimed throughput of up to 5Tbps. All the new switches offer reversible airflow for flexible rack positioning and a claimed latency of two microseconds. Gourlay claimed the 7250QX-64 offers the highest port density in the world.
Tarmin
Tarmin was punting its core product, Gridbank, at the SNW show. It’s an object storage system with bells on.
Organisations deploy object storage technology to manage very large volumes of unstructured data – typically at the petabyte scale and above. Such data is created not just by people but, increasingly, by machines: scientific instrumentation such as seismic and exploration equipment, genomic research tools, medical sensors, and industrial sensors and meters, to cite just a few examples.
Most object storage systems restrict themselves to managing the data on disk, leaving other specialist systems, such as analytics tools, to extract meaningful insights from the morass of bits. What distinguishes Tarmin is that Gridbank “takes an end to end approach to the challenges of gaining value from data,” according to CEO Shahbaz Ali.
He said: “Object technologies provide metadata but we go further – we have an understanding of the data which means we index the content. This means we can analyse a media file in one of the 500 formats we support, and can deliver information about that content.”
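The distinction Ali is drawing – metadata versus content – can be illustrated with a minimal sketch of an object store that also builds an inverted index over what objects contain. This is my own toy illustration of the concept, not Tarmin’s API; all names are invented.

```python
from collections import defaultdict

class ContentIndexedStore:
    """Toy object store that indexes content, not just metadata."""
    def __init__(self):
        self.objects = {}              # object_id -> (metadata, content)
        self.index = defaultdict(set)  # word -> set of object ids

    def put(self, object_id, metadata, content):
        self.objects[object_id] = (metadata, content)
        # A metadata-only system stops at the line above; content indexing
        # additionally tokenises the payload into a searchable index.
        for word in content.lower().split():
            self.index[word].add(object_id)

    def search_content(self, word):
        """Find objects by what they contain, not just how they are labelled."""
        return self.index.get(word.lower(), set())
```

A metadata-only store could tell you an object is a 2GB file created by a sensor; a content index can additionally tell you which objects mention, say, “seismic”, which is the kind of capability Ali is claiming for Gridbank’s 500 supported formats.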
In other words, said Ali: “Our key differentiator is that we’re not focused on the media like most storage companies, but the data – we aim to provide transparency and independence of data from media. We do data-defined storage.” He called this an integrated approach which means that organisations “don’t need an archiving solution, or a management solution” but can instead rely on Gridbank.
That all sounds well and good, but one of the biggest obstacles to adoption has to be the single-sourcing of a technology that aims to manage all your data. Tarmin also has very few reference sites (I could find just two on its website), so it appears that the number of organisations taking the Tarmin medicine is small.
There are also, of course, a number of established players in the markets that GridBank straddles, and it remains to be seen whether an end-to-end solution is what organisations want. Integrating best-of-breed products avoids proprietary vendor lock-in – something to which companies are more sensitive than ever – and is more likely to prove better for performance and flexibility.