I’m in love with my laptop

Yes, sad, isn’t it?

About a year ago, I bought an LG Gram 17 laptop. As you’ll see elsewhere in this blog, I was very pleased with it at the time, describing it after three months’ use as “the first laptop I’ve bought and used in at least 10 years that I’m entirely happy with.” So what’s it like after a year of use?

It had the latest CPU at the time of its launch, together with a 17-inch display, a 1TB solid state storage device and 16GB of memory. This specification seemed like enough at the time and – spoiler alert – it still does. The most stress it gets is some light gaming, when the fans will emit a noticeable but not overwhelming amount of white noise. And it can get a little bit hot. Again, not so much that it becomes difficult to handle. The rest of the time, it handles my demands easily.

It’s generally silent, cool and sips at the battery. The LG utility (hello bloatware!) keeps the battery from charging to 100 percent to avoid excessive wear and it still lasts for hours: I’ve not exhausted it, ever. It’s been a couple of hours since the last charge to 80 percent and, as I type this review, 60 percent and a reported four hours of life remain. Never thought I’d say nice things about bloatware…

I still welcome the way that the machine is utterly reliable, with no need to go poking around in the innards of Windows to keep it that way. It sleeps and hibernates as and when expected and returns from those states quickly and reliably. Reliable: there’s that word again. The in-built camera recognises my face and logs me in automatically and, again, quickly, except in conditions of low light when it struggles a bit.

Though it took me a little while, I’m now accustomed to the keyboard: the main keys are offset to the left of the large trackpad to make room for the number pad, which felt a bit weird at first. But the space that the machine’s large format gives the keyboard means I don’t have to use the Fn key to access keys such as Home, PgUp, PgDn and Del, each of which is separate. The keys themselves are a reasonable size with space between them. There’s a clicky feel to each keypress.

I’m a huge fan of the 17-inch display, which almost – but not quite – matches my widescreen desktop displays for usability. This was one of the main reasons I bought this machine. It does, though, struggle in bright sunshine.

One issue that it seems no laptop maker has resolved is physical wear and tear. Not that the LG has had a hard life: it travels rarely. But daily handling means the edges of the case where I pick it up have started to look a little worn. A wipe with some isopropyl alcohol tidies up most but not all of it. The keyboard and display in contrast still look brand new.

Other than that, there’s nothing to complain about so I stand by my original conclusion: this is the best laptop I’ve ever bought and, though LG won’t want to hear this, I plan to keep it until it gives up the ghost. And that’s something I’ve never said about any laptop.

The UK is becoming a failed state

It’s becoming clearer that the United Kingdom is heading towards failure – if it’s not already there. So it’s time to install the institutions that work for most of the other states with which the UK likes to compare itself, namely an elected head of state and a written constitution. Were these accompanied by a proportional method of electing the government, Britain could be dragged into the 21st century, away from its obsession with medieval methods of governance and procedure that are increasingly irrelevant, if not damaging, in the modern world.

The question is, will the money allow anything like this to happen? Because it is very much in the interests of the big money in Britain – and especially in England – for things to remain exactly as they are. Let’s keep tax breaks for owners of huge chunks of the British land and forests, let’s ensure that people remain obsessed with minutiae and let’s not talk about the big issues.

Pyramid of power

Which big issues? Governance and citizenship, key questions with which most democratic states have grappled before installing institutions that work, some better, some worse than others, to enact the wishes of citizens. These are not questions that Britons are ever asked to seriously consider, either by the educational system or the media.

For example, despite clear evidence that the British monarchy is an anachronism, parked at the pinnacle of a pyramid of power that takes in all the lords, viscounts, marquises, dukes and princes, and the entitled nabobs in the House of Lords – mostly unelected, of course – who make laws on behalf of less worthy folk, support for the institution remains undimmed, bolstered by the controllers of public debate who own most of the media.

When polled, the British return a healthy majority in favour of a monarchy. Yet the British monarchy spearheads the patrimony and privileges of an aristocracy that owns a third of all the land and half of rural land. It is keen to perpetuate a landed elite and the cultural circle associated with that continuation of aristo-oligarchy, as well as a social sphere with the wealth both to remain independent of the state and to lobby for its own interests, thus propping up an ancient class system antagonistic to liberal democratic governance.

Were the monarchy to divest itself of its inherited private wealth and economic interests, and to behave more as a figurehead institution wholly funded by the public, it might be perceived as modernised. However, it clearly has zero interest – either financially or intellectually – in pursuing that course.

Banana republic

Rather, we recently learnt that the monarch interferes with the wishes of the democratically elected government when it’s in her interests to do so. The British state, addicted to secrecy, was forced to admit this. And where there is one such admission, there may well be others.

This alone, had it happened on another continent, would be enough for learned observers to sagely aver that an unelected monarch behaving in this way is contrary to the many definitions of democracy that the self-appointed upholders of political probity propound.

But here in the UK? Well, we’re different, special…

With no written constitution – the UK is alone in lacking one among the states with which it likes to compare itself – there is no legal redress. The monarch can do what she likes. Instead, quiet words in the right ears, in private, will undoubtedly be deemed enough to put matters right; the unwashed masses may be informed in due time.

So why do people continue to support the monarchy? The first reason one hears is that the monarchy does no harm because it has no real power. We can put that one to bed with the revelations about interference with legislation.

The second is that the monarchy brings people together. It’s hard to gather evidence about this one way or the other, but evidence as to what divides the nation is freely available, and it’s powerful stuff. Specifically, the arguments over Brexit strongly suggest there is no universal vision for Britain – or rather, England. Instead, it’s crystal clear that half the country holds a vision that is inward-looking and xenophobic, while the other half sees the country as international and outward-looking. If the monarchy brings people together, it’s not working.

One also hears that the monarchy attracts tourism money, although the amounts, in normal, non-pandemic times, are paltry compared both to the cost of public services, such as the hard-pressed and politically emasculated NHS, and to the tax breaks for, and the avoidance and evasion practised by, the aforementioned rich and powerful.

The other argument is to point to the elected head of another state – Donald Trump being the obvious, if only the most recent, example – and say that ‘we’ don’t want that to happen, so let’s keep the Queen. This clearly misses the point: Trump has gone, voted out. We can’t vote out the Queen, whatever she or her successor does. They’re there for life, because of who their parents were.

Failed state

Yet none of these counter-arguments gains purchase in the minds of the British. There’s little support for a constitution from the two main parties, nor for proportional representation, and none for a republic. The electorate continues to vote for the Tory party, an organisation whose interests are orthogonal to its voters’ own and which works instead in the interests of those who fund it: large corporations and rich individuals whose return on that investment was recently calculated at 100:1, in the form of contracts and tax breaks.

Continued support for an unelected monarchy is also hard to disentangle from the notion of English exceptionalism which permeates the body politic and the media. It resonates with the Brexit debate and the tone in which it was conducted, and the clear evidence that only England voted – very narrowly – for leaving the EU, thereby becoming worse off by any measure.

To my mind, current circumstances make it very difficult, if not impossible, to turn this tanker around. The UK is run by the unelected, who govern in the interests of the rich and powerful, and who promulgate mythology about the state we are in, making it hard to move outside that hegemony. As a consequence, it’s very difficult not to conclude that the UK is heading towards becoming a failed state.

Cloud transfers made easy

Transfers made easy

A while back, I wrote about the problem of consumer trust in the cloud – in particular, what happens when your cloud provider decides to change the T&Cs to your detriment, and how this can erode the trust of consumers already alert to the technology industry’s much-publicised failures.

The issue that prompted it was Amazon’s massive reduction in the capacity of its cloud storage service, Cloud Drive, from unlimited to a maximum of 5GB. The unlimited service originally cost just £55 a year; under the new pricing, 15TB, for example, costs £1,500.

So at this point, unless you’re happy to pay that amount, two solutions suggest themselves. The first is to invest in a pile of very large hard disks – twice as many as you need because, you know, backups – and then become your own storage manager. Some excellent NAS devices and software packages such as FreeNAS make this process much easier than it used to be, but you’ll still need to manage the systems and/or buy the supporting hardware, and pay the power bill.

The alternative is to retain some trust in the cloud – while remaining wary. But this is only half the solution; I’ll get back to that later.

I’ve found another cloud provider, Google G Suite, which offers unlimited storage and a whole heap of business services for a reasonable £6 per month. Google requires you to own your own domain and to host your own website but, if you can satisfy those requirements, you’re in. Other cloud providers have offers too, but this was the best deal I could find.

Cloud-to-cloud transfer
So the problem then is how to transfer a large volume of data to the new cloud service. One way is to re-upload it, but this is very long-winded: over a 20Mbps fibre-to-the-cabinet (FTTC) connection it will take months; it can clog up your connection if you have other uses for that bandwidth; and for anyone on a metered broadband connection it will be expensive too. And if you don’t run a dedicated server, you’ll need a machine left on during all this time.
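
To put a rough number on “months” – my own back-of-envelope arithmetic, not a figure from any provider – here’s what re-uploading a 15TB library over a 20Mbps upstream link looks like:

```ts
// Illustrative only: 15TB pushed up a 20Mbps link, assuming the link runs
// flat out with zero protocol overhead (it won't).
const terabytes = 15;
const bits = terabytes * 1e12 * 8;  // 1.2e14 bits to move
const seconds = bits / (20 * 1e6);  // 6,000,000 seconds at 20Mbps
const days = seconds / 86_400;      // roughly 69 days
console.log(`${Math.round(days)} days of continuous uploading`); // "69 days"
```

And that’s with the connection saturated around the clock and nothing else allowed to use it.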

Cloud-to-cloud transfer services exist to solve this problem and, after some research, I found cloudHQ. For a reasonable fee – or for free if you blog about it (yes, that’s what I’m doing here) – cloudHQ will transfer data between a range of cloud services, including Google, Amazon (S3 and Cloud Drive), Gmail, Box, Basecamp, Office 365, Evernote and many more.

CloudHQ does more: it will back up and sync in real time, forward emails, save them as PDFs, act as a repository for large attachments, and provide a range of other email- and scheduling-related services for Google and other cloud providers.

The basic service is free but limited to 20GB and a maximum file size of 150MB; the next tier up, Premium, costs £19.80 a month and offers pretty much everything the power user could want.

Hybrid clouds and backup
So is cloudHQ the solution to the problem of cloud-to-cloud transfers? Yes, but putting your data in the cloud still leaves you with a single copy without a backup (I said I’d get back to this). So either you need another cloud service, in which case cloudHQ will keep them in sync, or you create a hybrid solution, where the primary data lives under your direct control and management, but the off-site backup lives in the cloud.

This hybrid setup is the one that businesses are increasingly opting for, and for good reason. And frankly, since your irreplaceable personal data – think photos and the like – is at risk unless you keep at least two copies, preferably three, using both local and cloud storage makes huge sense.

How Firefox just blew it

My Firefox browser – which I’ve been using since almost the day it arrived – is my primary research tool as a journalist. It’s the place I call home. And it’s just been upgraded: a big upgrade that, for me, will change the way it works, massively. I’m saying no.

Upgraded

The web is full of articles praising its developer, Mozilla, for updating it so it’s twice as fast. One article gushes: “Mozilla’s mission is to keep the web open and competitive, and Firefox is how Mozilla works to endow the web with new technology like easier payments, virtual reality and fast WebAssembly-powered games.” This is endorsed by a Gartner analyst; Gartner is the biggest analyst house in the technology industry and therefore the go-to source for those needing a quote.

If you’re waiting for a ‘but’, here it is. Frankly, I don’t care how much faster it is if it means that half the functionality I’m used to is stripped away. Because that’s what allowing my browser to upgrade to the latest, greatest version would mean.

Extensions

It’s all because Mozilla made the clever move to open up its browser very early on to third parties, who wrote extensions to add features and functionality. I loved that idea, embraced it wholeheartedly, and now run about 20 extensions.

The new Firefox – which, despite the apocalyptic scale of the change, moves only from version 56.0.2 to 57.0 – will no longer run the extensions that have been the most useful to me.

Software developers love adding new stuff and making things look new using the latest software tools, and Mozilla has been no slouch in this department. Fine for developers, perhaps, but for a user this constant change is a pain in the arse: it means re-learning how to use the software each time.

So Classic Theme Restorer (CTR) is particularly precious to me, as it enables Firefox to look and feel pretty much as it did when I first started using it.

CTR puts things such as toolbars and menus back where they were, so they work the way they have always worked – and, for that matter, the way most of my software works. But after the upgrade, CTR cannot work: the hooks the browser provided for it to do its stuff don’t exist in the new version.

Two other extensions are key from my point of view. One gives me tree-style tab navigation to the left of the browser window, not along the top where multiple tabs pretty soon get lost. And tab grouping, a feature that disappeared a few generations of browser ago but was replaced by a couple of extensions, means you can keep hundreds of tabs open, arranged neatly by topic or project. Who wouldn’t want this if they work in the browser all day?

Meanwhile, the developers of some other extensions have given up, due to the effort involved in completely re-writing their code, while others will no doubt get there in some form or other, eventually.

Messing with look and feel

This is a serious issue. Back in the day, one of the much-touted advantages of a graphical user interface was that all software worked the same, reducing training time: if you could use one piece of software, you could use them all. No more. Where did that idea go?

Mozilla clearly thinks performance – which can instead be boosted by adding a faster CPU – is paramount. Yes, it’s important but a browser is now a key tool, and removing huge chunks of functionality is poor decision-making.

I feel like my home is being dismantled around me. The walls have shifted so that the bedroom is now where the living room used to be, the front door is at the back, and I’ve no idea where the toilet is.

Some might argue that I should suck it up and move with the times. But I don’t use a browser to interact with the technology; I use it to capture information. Muscle memory does the job without my having to think about the browser’s controls or their placement. If the tool gets in the way and forces me to think about how it works, it’s a failure.

So version 57 is not happening here. Not yet, anyway.

Why some websites are deliberately designed to be insecure

Passwords remain the bane of our lives. People print them out, they re-use them or slightly change them for different services, or they simply use the same one for decades.

On the other side of the coin, for most users of a service, it’s a pain to remember a password and a bigger pain to change it – and then to have to remember a new one all over again as websites change, get hacked and/or alter their security policies.

But while passwords are imperfect, they’re the least worst option in most cases for identifying and authenticating a user, and the best way of making them more secure is to use each password for only one site, and make them long, complex, and hard to guess. However, some websites are purposely designed to be less secure by subverting those attempts. Here’s why.

2FA doesn’t work

Two-factor authentication (2FA) is often seen as a more secure method of authentication, but its coverage is patchy at best. For example, the division between enterprise and personal environments has today all but evaporated. In the course of their jobs, people increasingly access their personal services at work using their personal devices. And employers can’t mandate 2FA for access to Facebook, for example, which might well be the chosen method of communication of a key supplier, or a way of communicating with potential customers. All FB wants is a password, and it’s not alone.

Two-factor authentication is also less convenient and takes more time. You’re prepared to tolerate this when accessing your bank account because, well, money. For most other, less important services, adding barriers to access is likely to drive users into the arms of the competition.

Password persistence

So we’re stuck with passwords until biometrics become a pervasive reality. And maybe not even then – but that’s a whole other issue. The best solution I’ve come up with to the password problem is a password manager. Specifically, KeePass, which is a free, open-source, cross-platform solution with a healthy community of third-party developers of plug-ins and utilities.

You only have to remember one thing: a single master password (or a key file) unlocks the whole database and gets you access to everything. And as the website says: “The databases are encrypted using the best and most secure encryption algorithms currently known (AES and Twofish).”

It works on your phone, your PC, your Mac, your tablet – you name it, and it’ll generate highly secure passwords for you, customised to your needs. So what’s not to like?
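
To give a sense of what “highly secure” means in practice, here’s a minimal sketch of the idea – not KeePass’s own generator – assuming a runtime where the Web Crypto API is available (a browser, or a recent Node.js):

```ts
// Draw characters at random from a large alphabet using a cryptographic
// random source. 24 characters from ~74 symbols is roughly 150 bits of
// entropy - far beyond anything guessable or brute-forceable.
function generatePassword(length = 24): string {
  const charset =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!$%^&*()-_=+";
  const values = new Uint32Array(length);
  crypto.getRandomValues(values); // cryptographically strong randomness
  let out = "";
  for (const v of values) {
    out += charset[v % charset.length]; // modulo bias is negligible for a sketch
  }
  return out;
}

console.log(generatePassword()); // different every run
```

The point is that nobody – including you – ever needs to remember or even see the result; the password manager stores it and fills it in for you.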

Pasting problems

Here’s the rub: some websites think they’re being more secure by preventing you from pasting a password into their password entry fields. Some website security designers will argue that passwords used to access their service should not be stored in any form. But a password manager works by pasting passwords into the password login field.
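
For the avoidance of doubt, blocking paste isn’t some deep server-side protection: it’s typically a scrap of JavaScript in the page. A simplified sketch of the sort of thing such sites ship (my illustration, not any specific site’s code):

```ts
// Swallow paste events on the password field so Ctrl+V does nothing.
const field = document.querySelector<HTMLInputElement>('input[type="password"]');
field?.addEventListener("paste", (event) => {
  event.preventDefault(); // the pasted password never reaches the field
});
```

This, in essence, is the “non-standard browser behaviour” Troy Hunt complains about further down.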

The rationale for preventing password pasting is that malware can snoop the clipboard and pass that information back to the crooks. But this is using a sledgehammer to crack a nut, because KeePass uses an obfuscation method to ensure the clipboard can’t be sniffed. And it will clear the password from the clipboard after an interval you configure, so the exposure window can be very short; 10 seconds will do it.
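
The timing is the point. Here’s a rough sketch of the auto-clear idea – KeePass does this natively, with the obfuscation mentioned above on top; this browser-flavoured example just illustrates the window:

```ts
// Put a password on the clipboard, then wipe it after a configurable
// window (10 seconds here), so a snooper has very little time to grab it.
async function copyWithTimeout(password: string, windowMs = 10_000): Promise<void> {
  await navigator.clipboard.writeText(password);
  setTimeout(() => {
    void navigator.clipboard.writeText(""); // clipboard cleared
  }, windowMs);
}
```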

In addition, as Troy Hunt, a Microsoft MVP for Developer Security, points out: “the irony of this position is that [it] makes the assumption that a compromised machine may be at risk of its clipboard being accessed but not its keystrokes. Why pull the password from memory for the small portion of people that elect to use a password manager when you can just grab the keystrokes with malware?”

In other words, preventing pasting is counter-productive; it’s reducing security. Don’t believe me? Check out this scenario.

Insecure by design

So if you can’t paste a password in, what do you do? If you use a password manager – probably the most secure way of storing passwords today, and one that puts you way ahead of the game – you open up the entry for that service in KeePass, expose the password to any prying eye that happens to be passing, and type it in manually – long and complex as it probably is – character by character. That probably takes a few minutes.

Can you see anything wrong with that? If you’re sitting in a crowded coffee shop, for example?

Yup. A no-paste policy is annoying, slow, prone to mistakes, and highly insecure. Worse, it’s likely to be the security-conscious – those using password managers and the like – who are most affected. Even a simple file full of passwords – hopefully encrypted and tucked away in an obscure location – is likely to be more secure than the method many if not most people use: re-using common, easily memorable passwords.

I’ve had discussions about this with one major UK bank which implemented a no-paste policy and seems since to have reversed course – whether as a result of my intervention (and no doubt that of others too) I have no way of knowing.

Say no to no-paste

So if you encounter a website that does not allow you to paste in a password in a mistaken bid to add security, point out to them that, in effect, they’re forcing people to fall back on weak passwords they can remember – which is less secure.

As Troy Hunt says: “we’ve got a handful of websites forcing customers into creating arbitrarily short passwords then disabling the ability to use password managers to the full extent possible and to make it even worse, they’re using a non-standard browser behaviour to do it!”

Is the cloud letting consumers down?

The promise of cloud services has, by and large, been fulfilled. From the early days right up to the present, the big issue has been security: is your data safe?

What this question is really asking is whether you can retrieve your data quickly in the event of a technological melt-down. You know the kind of thing: an asteroid hits your business premises, a flood or fire makes your office unusable for weeks or months, or some form of weird glitch or malware makes your data unavailable, and you need to restore a backup to fix it.

All these scenarios are now pretty much covered by the main cloud vendors so, from a business perspective, what’s not to like?

Enter the consumer

Consumers – all of us, in other words – are also users of cloud services. Whether your phone uploads photos to the manufacturer’s cloud service, or you push terabytes of multimedia data up to a big provider’s facility, the cloud is integrated into everything that digital natives do.

The problem here is that, when it comes to cloud services, you get what you pay for. Enterprises will pay what it takes to get the level of service they want, whether it’s virtual machines for development purposes that can be quick and easy to set up and tear down, or business-critical applications that need precise configuration and multiple levels of redundancy.

Consumers, on the other hand, are generally unable to pay enterprise-level cash, but an increasing number have built large multimedia libraries and see the cloud as a great way of backing up their data. Cloud providers have responded to this demand in various ways, but the most common is the bait-and-switch offer.

Amazon’s policy changes provide the latest and arguably the most egregious example. In March 2015 it launched an unlimited data storage service – covering all data, not just photos as Google and others were already offering – for just £55 a year. Clearly many people saw this as a massive bargain and, although figures are not publicly available, many took it up.

Amazon dumps the deal

But in May 2017, just over two years later, Amazon announced that the deal was going to be changed, and subscribers would have to pay on a per-TB basis instead. This was after many subscribers – according to user forums – had uploaded dozens of terabytes over a period of months at painfully slow, asymmetrical data rates.

Now they are offered, on a take-it-or-leave-it basis, an expensive cloud service – costing perhaps three or four times more, depending on data volumes – and are left with a whole bunch of data that will be difficult to migrate. On Reddit, many said they have given up on cloud providers and are instead investing in local storage.

This isn’t the first time such a move has been made by a cloud provider: bait the users in, then once they’re committed, switch the deal.

Can you trust the cloud?

While cloud providers are of course perfectly at liberty to change their terms and conditions according to commercial considerations, it’s hard to think of any other consumer service whose provider would dare make such a major change to the T&Cs, for fear of a user backlash – especially one of the largest global providers.

The message that Amazon’s move transmits is that cloud providers cannot be trusted, and that a deal that looks almost too good to be true will almost certainly turn out to be just that, even when it’s offered by a very large service provider whom users might imagine to be more stable and reliable. That the switch comes at a time when storage costs continue to plummet makes it all the more surprising.

In its defence, Amazon said it will honour existing subscriptions until they expire, and only start deleting data 180 days after expiry.

That said, IT companies need to grow up. They’re not startups any more. If they offer a service and users take them up on it in all good faith – as Amazon’s commercial managers might have expected – they should deal with the consequences in a way that doesn’t risk destroying faith and trust in cloud providers.

It’s not just consumers who are affected. It shouldn’t be forgotten that business people are also consumers and the cloud purchasing decisions they make are bound to be influenced to a degree by their personal experiences as well as by business needs, corporate policy and so on.

So from the perspective of many consumers, the answer to the question of whether you can trust the cloud looks pretty equivocal. The data might still be there but you can’t assume the service will continue along the same or similar lines as those you originally signed up to.

Can you trust the cloud? Sometimes.

AVM Fritz!Box 4040 review


AVM’s Fritz!Box range of routers has long offered a wealth of features and is, in my experience, highly reliable.

The 4040 sits at the top end of the lower half of AVM’s product line-up. The top half includes DECT telephony features but if you’ve already got a working cordless phone system, you can live without that.

The 4040 looks like all the other Fritz!Box devices: a slim, streamlined red and silver case without massive protuberances that would persuade you to hide the device from view. A couple of buttons on the top control WPS and WLAN, while indicators show status; the Info light is moderately configurable, and it would be helpful if AVM broadened its possible uses.

At the back are four 1Gbps LAN ports, each of which can be individually downgraded to 100Mbps to save power, and a WAN port. A couple of USB ports are provided too, one 3.0, one 2.0.

The 4040 supports all forms of DSL, either directly or via an existing modem or dongle, and offers 802.11n and 802.11ac WLAN on both 2.4GHz and 5GHz. The higher-frequency network provides connectivity at up to a theoretical 867Mbps; I managed 650Mbps with my phone right next to the access point.

Power-saving modes are available for the wireless signal too – it automatically reduces the wireless transmitter power when all devices are logged off – providing a useful saving for a device you’re likely to leave switched on all the time.

Security is catered for by MAC address filtering on the wireless LAN, and by a stateful packet inspection firewall with port sharing to allow access from the Internet.

The software interface is supremely easy to use and handsome too. The overview screen gives an at-a-glance view of the status of the main features: the Internet connection, devices connected to the network, the status of all interfaces, and the NAS and media servers built into the router.

The NAS feature allows you to connect storage to the router (over USB only) and access it from anywhere via UPnP, FTP or SMB (Windows networking). Other features include Internet-only guest access, which disables access to the LAN, an IPSec VPN, and Wake on LAN over the Internet.

The Fritz!Box 4040 is the latest in a long line of impressive wireless routers, continuing AVM’s tradition of high quality hardware and software, and it’s good value at around £85.

How to stay safe on the Internet – trust no-one

Working as close to the IT industry as I do, I find it hard to avoid the blizzard of announcements and general excitement around the growth of the Internet of Things allied to location-based services. This, we are told, will be a great new way to market your goods and services to consumers.

You can get your sales assistants to greet shoppers by name! You can tell them about bargains by text as they walk past your store! You might even ring them up! Exclamation marks added for general effect.

But here’s the thing. Most people don’t trust big corporations any more, according to the recently published 2013 IT Risk/Reward Barometer. The international study finds: “Across all markets surveyed, the vast majority of consumers worry that their information will be stolen (US: 90%, Mexico: 91%, India: 88%, UK: 86%).”

As a result, blizzard marketing of the kind that triangulation technologies now permit makes people feel uneasy at best and downright annoyed at worst. People ask themselves questions about who has their data, how they got it, and what control they have over that data once it’s escaped into the ether.

From ISACA’s point of view, this is largely the fault of individuals who don’t control their passwords properly or otherwise secure their systems. It’s an auditing organisation, so that’s not an unusual position to adopt. But I think it goes further than that.

As the study also points out: “Institutional trust is a critical success factor in an increasingly connected world. […] Organisations have much work to do to increase consumer (and employee) trust in how personal information is used.”

In other words, companies need to work harder at winning your trust. Does that make you feel any better?

This is clearly not an issue that will ever be solved. For every ten organisations that are trustworthy and manage personal data responsibly – you do read that text-wall of privacy policy each time you log onto a new site, don’t you? – there will be one that doesn’t. Even if all companies were trustworthy, people would still make mistakes and hackers would still win the security battle from time to time, resulting in compromised personal data.

The only rational policy for the rest of us to adopt is to trust none of them, and that is what this study shows most people tend to do.

The least you should do is use long, complex passwords and change them regularly, using a password safe (eg KeePass) so you don’t have to commit them to memory – or worse, to bits of paper.

FYI, the study was conducted by ISACA, which describes itself as “an independent, nonprofit, global association” that “engages in the development, adoption and use of globally accepted, industry-leading knowledge and practices for information systems”.

Storage roundup with PernixData, Arista Networks and Tarmin

There’s been a bit of a glut of storage announcements recently, so here’s a quick round-up of the more interesting ones from recent weeks.

PernixData
This company is thinking ahead to a time when a large proportion of the servers in datacentres will have flash memory installed inside them. Right now, most storage is configured as a shared pool connected via a dedicated storage network, but this is sub-optimal for virtualised servers, which generate large numbers of IOPS.

So instead, companies such as Fusion-io have developed flash memory systems for servers, so that data is local and can be accessed much more quickly. This abandons one of the advantages of the storage network, namely storage sharing.

So PernixData has created FVP (Flash Virtualization Platform), a software shim that sits in the hypervisor and links the islands of data stored in flash memory inside each of the host servers. The way it works is to virtualise the server flash storage so it appears as a storage pool across physical hosts. Adding more flash to vSphere hosts – they have to be running VMware’s hypervisor – prompts FVP to expand the pool of flash. According to the company, it works irrespective of the storage vendor.

What this effectively does is to create a cache layer consisting of all the solid-state storage in the host pool that can boost the performance of reads and writes from and to the main storage network.
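
Here’s a minimal sketch of the read path of such a cache – my simplification, not PernixData’s code; FVP itself also accelerates writes and keeps the cache coherent across hosts:

```ts
// A toy read cache: serve blocks from local flash when possible, fall back
// to the shared array when not, and warm the flash tier on the way back.
type Block = Uint8Array;

interface Backend {
  read(addr: string): Promise<Block>;
}

class FlashReadCache {
  private flash = new Map<string, Block>();

  constructor(private backend: Backend) {}

  async read(addr: string): Promise<Block> {
    const hit = this.flash.get(addr);
    if (hit) return hit;                         // fast path: local flash
    const block = await this.backend.read(addr); // slow path: the storage network
    this.flash.set(addr, block);                 // cache it for next time
    return block;
  }
}
```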

The company reckons that: “For the first time ever, companies can scale storage performance independent of storage capacity using server side flash.” And according to CEO Poojan Kumar: “We allow all virtual machines to use every piece of flash memory. The result is 17 times lower latency and 12 times more IOPS. It’s non-disruptive, it looks very simple and is easy to use.” It costs US$7,500 per physical host or US$10k for four hosts – a price point designed for smaller businesses.

It seems like a pretty good idea, and there’s some real-world testing info here.

Arista Networks
Also new on the hardware front are products from Arista Networks.

This company started life a few years ago with a set of high-performance network switches that challenged the established players – such as Cisco and Juniper – by offering products that were faster, denser, and cheaper per port. Aimed at the high-performance computing market – users in areas such as the life sciences, geological data and financial services – they were the beachhead that established the company’s reputation, something it found easy given that its founders included Jayshree Ullal (ex-Cisco senior vice-president) and Andy Bechtolsheim (co-founder of Sun Microsystems).

I recently spoke to Doug Gourlay, Arista’s vice-president of systems engineering, about the new kit, which Gourlay reckoned means that Arista “can cover 100% of the deployment scenarios that customers come to us with”. He sees the company’s strength as its software, which is claimed to be “self-healing and very reliable, with an open ecosystem and offering smart upgrades”.

The new products are the 7300 and 7250 switches, filling out the 7000 X Series which, the company claims, optimises costs, automates provisioning, and builds more reliable scale-out architectures.

The main use cases for the new systems, according to Gourlay, are sites with large numbers of servers or smaller datacentres, and dense, high-performance computing render farms. They are designed for today’s flatter networks: where a traditional datacentre network used three layers, a modern fabric-style network uses just two, offering the fewest hops from any server to any other. In Arista-speak, the switches attaching directly to the servers and directing traffic between them are leaves, while the core datacentre network is the spine.
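
To make the hop-count point concrete – a simplification that ignores servers on the same leaf:

```ts
// Inter-switch hops between servers in different racks, worst case:
// leaf-spine is leaf -> spine -> leaf; three-tier is
// access -> aggregation -> core -> aggregation -> access.
function interSwitchHops(design: "leaf-spine" | "three-tier"): number {
  return design === "leaf-spine" ? 2 : 4;
}

console.log(interSwitchHops("leaf-spine")); // 2
console.log(interSwitchHops("three-tier")); // 4
```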

The 7300 X Series consists of three devices, the largest of which, the 21U 7316, offers 16 line-card slots with 2,048 10Gbps ports or 512 40Gbps ports, and a claimed throughput of 40Tbps. The other two in the series, the 7308 and 7304, accommodate eight and four line cards respectively, with corresponding decreases in size (13U and 8U) and throughput (20Tbps and 10Tbps).

The 2U, fixed-configuration 7250QX-64 offers 64 40Gbps ports or 256 10Gbps ports, and a claimed throughput of up to 5Tbps. All the systems offer reversible airflow for rack-positioning flexibility and a claimed latency of two microseconds. Gourlay claimed the 7250QX-64 offers the highest port density in the world.

Tarmin
Tarmin was punting its core product, GridBank, at the SNW show. It’s an object storage system with bells on.

Organisations deploy object storage technology to manage very large volumes of unstructured data – typically at the petabyte scale and above. Such data is created not just by people but, increasingly, by machines: scientific instrumentation, including seismic and exploration equipment, genomic research tools and medical sensors, and industrial sensors and meters, to cite just a few examples.

Most object storage systems restrict themselves to managing the data on disk, leaving other specialist systems, such as analytics tools, to extract meaningful insights from the morass of bits. What distinguishes Tarmin is that GridBank “takes an end to end approach to the challenges of gaining value from data,” according to CEO Shahbaz Ali.

He said: “Object technologies provide metadata but we go further – we have an understanding of the data which means we index the content. This means we can analyse a media file in one of the 500 formats we support, and can deliver information about that content.”

In other words, said Ali: “Our key differentiator is that we’re not focused on the media like most storage companies, but the data – we aim to provide transparency and independence of data from media. We do data-defined storage.” He called this an integrated approach which means that organisations “don’t need an archiving solution, or a management solution” but can instead rely on GridBank.
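
A toy sketch of the distinction Ali is drawing – my illustration, not Tarmin’s code: a typical object store keeps user-supplied metadata alongside each object, whereas content indexing also makes the payload itself searchable.

```ts
// Store objects with their metadata, and additionally build a crude
// full-text index over the content so it can be queried later.
interface StoredObject {
  id: string;
  metadata: Record<string, string>; // what a typical object store holds
  content: string;                  // what content indexing also inspects
}

class ContentIndexedStore {
  private objects = new Map<string, StoredObject>();
  private index = new Map<string, Set<string>>(); // term -> object ids

  put(obj: StoredObject): void {
    this.objects.set(obj.id, obj);
    for (const term of obj.content.toLowerCase().split(/\W+/)) {
      if (!term) continue;
      let ids = this.index.get(term);
      if (!ids) {
        ids = new Set();
        this.index.set(term, ids);
      }
      ids.add(obj.id);
    }
  }

  search(term: string): StoredObject[] {
    const ids = this.index.get(term.toLowerCase()) ?? new Set<string>();
    return [...ids].map((id) => this.objects.get(id)!);
  }
}
```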

All that sounds well and good but one of the biggest obstacles to adoption has to be the single sourcing of a technology that aims to manage all your data. It also has very few reference sites (I could find just two on its website) so it appears that the number of organisations taking the Tarmin medicine is small.

There are also, of course, a number of established players in the markets that GridBank straddles, and it remains to be seen whether an end-to-end solution is what organisations want, when integrating best-of-breed products avoids proprietary vendor lock-in – something to which companies are more sensitive than ever – and is likely to prove better for performance and flexibility.

Seagate’s new KOS disk drives aim to entice cloud builders

Among the most interesting conversations I had at the storage show SNW (aka Powering the Cloud) in Frankfurt this year was with Seagate’s European cloud initiatives director Joe Fagan, as we talked about the company’s proposed Kinetic Open Storage (KOS) drives.

The disk drive company is trying to move up the stack from what has become commodity hardware by converting its drives into servers. Kinetic drives will still attach via a SATA or SAS connector rather than an RJ45, but the data flowing through that connector will use IP rather than storage protocols; the physical connector stays the same for compatibility purposes.

The aim is to help builders of large-scale infrastructures, such as cloud providers, to build denser, object-based systems by putting the server on the storage, rather than, to paraphrase Fagan, spending the energy on a Xeon or two per server along with a bunch of other hardware. Seagate argues that KOS could eliminate a layer of hardware between applications and storage, so data will flow from the application servers directly to storage rather than, as now, being translated into a variety of protocols before it hits the disk.
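
What that might look like to an application is a key-value interface rather than blocks and filesystems. The interface below is hypothetical – invented for illustration, not Seagate’s published API – but it captures the shift Fagan describes:

```ts
// A hypothetical key-value drive client: the application (or the object-store
// layer above it) addresses the drive over IP with put/get/delete operations,
// with no filesystem, LUN or block translation in between.
interface KineticLikeDrive {
  put(key: string, value: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | null>;
  delete(key: string): Promise<void>;
}

async function storeObject(drive: KineticLikeDrive, key: string, data: Uint8Array) {
  await drive.put(key, data); // straight from the application to the drive
}
```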

Fagan said two cloud builders were interested in the technology.

Behind this is, of course, a bid to grab some of the cash that enterprises and consumers are spending on cloud applications and services.

There are a few ‘howevers’, as you might imagine. The first is that every disk drive will need an IP address. This has huge implications for the network infrastructure and for network managers. Suddenly, there will be a lot more IP addresses to deal with, they will have to be subnetted and VLANned – did I mention that Kinetic drives will use IPv4? – and all this assumes you can summon up enough v4 addresses to start with.

Another concern is that mechanical disk drives fail relatively frequently while servers don’t, as of course they have no moving parts. So when a drive fails – and in large-scale deployments they surely will – you have to throw away the internal server too. Could be expensive.

And finally, there’s also a huge amount of inertia in the shape of today’s installed systems and the expertise needed to manage and operate them.

Is that enough to halt the initiative? Seagate clearly hopes not, and hopes too that other drive makers will come on board and develop their own versions to help validate the concept. It has provided APIs to help app developers exploit the technology.

As ever, time will tell. But will you find these drives in a server near you any time soon? Don’t hold your breath.