How Firefox just blew it

As a journalist, I use my Firefox browser – which I’ve been using since almost the day it arrived – as my primary research tool. It’s the place I call home. And it’s just been upgraded. It’s a big upgrade that for me will change the way it works, massively. I’m saying no.

Upgraded

The web is full of articles praising its developer, Mozilla, for updating it so it’s twice as fast. One article enthuses that “Mozilla’s mission is to keep the web open and competitive, and Firefox is how Mozilla works to endow the web with new technology like easier payments, virtual reality and fast WebAssembly-powered games.” This is endorsed by a Gartner analyst; Gartner is the biggest, and therefore the go-to, analyst house in the technology industry for those needing a quote.

If you’re waiting for a ‘but’, here it is. Frankly, I don’t care how much faster it is if it means that half the functionality I’m used to is stripped away. Because that’s what allowing my browser to upgrade to the latest, greatest version would mean.

Extensions

It’s all because Firefox made the clever move to open up its browser very early on to third parties, who wrote extensions to add features and functionality. I loved that idea, embraced it wholeheartedly, and now run about 20 extensions.

The new Firefox – which despite its apocalyptic upgrade moves only from version 56.0.2 to 57.0 – will no longer run the extensions that have been most useful to me.

Software developers love adding new stuff and making things look new using the latest software tools. Mozilla has been no slouch in this department. Fine for developers perhaps, but as a user, this constant change is a pain in the arse, as it means I need to re-learn each time how to use the software.

So Classic Theme Restorer (CTR) is particularly precious to me, as it enables Firefox to look and feel pretty much as it did when I first started using it.

CTR puts things such as toolbars and menus back where they were, so they work the way they have always worked – and, for that matter, the way most of my software works. But after the upgrade, CTR cannot work, as the hooks the browser provided for it to do its stuff don’t exist in the new version.

Two other extensions are key from my point of view. One gives me tree-style tab navigation to the left of the browser window, not along the top where multiple tabs pretty soon get lost. And tab grouping, a feature that disappeared a few generations of browser ago but was replaced by a couple of extensions, means you can keep hundreds of tabs open, arranged neatly by topic or project. Who wouldn’t want this if they work in the browser all day?

Meanwhile, the developers of some other extensions have given up, due to the effort involved in completely re-writing their code, while others will no doubt get there in some form or other, eventually.

Messing with look and feel

This is a serious issue. Back in the day, one of the much-touted advantages of a graphical user interface was that all software worked the same, reducing training time: if you could use one piece of software, you could use them all. No more. Where did that idea go?

Mozilla clearly thinks performance – which can instead be boosted by adding a faster CPU – is paramount. Yes, it’s important but a browser is now a key tool, and removing huge chunks of functionality is poor decision-making.

I feel like my home is being dismantled around me. The walls have shifted so that the bedroom is now where the living room used to be, the front door is at the back, and I’ve no idea where the toilet is.

Some might argue that I should suck it up and move with the times. But I don’t use a browser to interact with the technology; I use it to capture information. Muscle memory does the job without my having to think about the browser’s controls or their placement. If the tool gets in the way and forces me to think about how it works, it’s a failure.

So version 57 is not happening here. Not yet, anyway.

AVM Fritz!Box 4040 review

AVM Fritz!Box 4040

AVM’s Fritz!Box range of routers has long offered a great set of features and is, in my experience, highly reliable.

The 4040 sits at the top end of the lower half of AVM’s product line-up. The top half includes DECT telephony features but if you’ve already got a working cordless phone system, you can live without that.

The 4040 looks like all the other Fritz!Box devices: a red and silver streamlined slim case without massive protuberances that would persuade you to hide the device from view. A couple of buttons on the top control WPS and WLAN, while indicators show status, with the Info light moderately configurable; it would be helpful if AVM broadened the possible uses of this indicator.

At the back are four 1Gbps LAN ports which you can downgrade individually for power-saving reasons to 100Mbps, and a WAN port. A couple of USB ports are provided too, one 3.0, one 2.0.

The 4040 supports all forms of DSL, either directly or via an existing modem or dongle, WLAN 802.11n and 11ac, both 2.4GHz and 5GHz. The higher frequency network provides connectivity at up to a theoretical 867Mbps; I managed to get 650Mbps with my phone right next to the access point.

Power-saving modes are available for the wireless signal too – it automatically reduces the wireless transmitter power when all devices are logged off – providing a useful saving for a device you’re likely to leave switched on all the time.

Security is catered for by MAC address filtering on the wireless LAN, and by a stateful packet inspection firewall with port sharing to allow access from the Internet.

The software interface is supremely easy to use, and handsome too. The overview screen gives an at-a-glance view of the main features: the Internet connection, the devices connected to the network, the status of all interfaces, and the NAS and media servers built into the router.

The NAS feature allows you to connect USB storage to the router and access it from anywhere via UPnP, FTP or SMB (Windows networking). Other features include Internet-only guest access, which disables access to the LAN, an IPSec VPN, and Wake on LAN over the Internet.

The Fritz!Box 4040 is the latest in a long line of impressive wireless routers, continuing AVM’s tradition of high quality hardware and software, and it’s good value at around £85.

Technology highlights 2013

I’ve been shamefully neglecting this blog recently, yet a lot of interesting new technologies and ideas have come my way. So by way of making amends, here’s a quick round-up of the highlights.

Nivio
This is a company that delivers a virtual desktop service with a difference. Virtual desktops have been a persistent topic of conversation among IT managers for years, yet delivery has always been some way off. A bit like fusion energy, only not as explosive.

The problem is that, unless you’re serving desktops to people who do a single task all day, which describes call centre workers but not most people, people expect a certain level of performance and customisation from their desktops. If you’re going to take a desktop computer away from someone who uses it intensively as a tool, you’d better make sure that the replacement technology is just as interactive.

Desktops provided by terminal services have tended to be slow and a bit clunky – and there’s no denying that Nivio’s virtual desktop service, which I’ve tried, isn’t quite as snappy as having 3.4GHz of raw compute power under your fingertips.

On the other hand, there’s a load of upsides. From an IT perspective, you don’t need to provide the frankly huge amounts of bandwidth needed to service multiple desktops. You don’t care what device the end user accesses the service with – so if you’re allowing people to bring and use their own devices at work, this will work with anything that runs a browser. I’ve seen a Windows desktop running on an iPhone – scary…

And you don’t need to buy applications. The service provides them all for you from its standard set of over 40 applications – and if you need one the company doesn’t currently offer, they’ll supply it. Nivio also handles data migration, patching, and the back-end hardware.

All you need to do is hand over $35 per month per user.

Quantum
The company best known for its tape backup products launched a new range of disk-based backup appliances.

The DXi6800 is, says Quantum’s Stéphane Estevez, three times more scalable than any other such device, allowing you to scale from 13TB to 156TB. Aimed at mid-sized as well as large enterprises, it includes an array of disks that you effectively switch on with the purchase of a new licence. Until then, they’re dormant, not spinning. “We are taking a risk of shipping more disks than the customer is paying for – but we know customer storage is always growing. You unlock the extra storage when you need it,” said Estevez.

It can handle up to 16TB/hour which is, reckons the company, four times faster than EMC’s DD670 – its main competitor – and all data is encrypted and protected by an electronic certificate, so you can’t simply swap it into another Quantum library. The management tools also let you manage multiple devices across datacentres.

Storage Fusion
If ever you wanted to know at a deep level how efficient your storage systems are, especially when it comes to virtual machine management, then Storage Fusion reckons it has the answers in the form of its storage analysis software, Storage Fusion Analyze.

I spoke to Peter White, Storage Fusion’s operations director, who reckoned that companies are wasting storage capacity through over-provisioning, and by leaving old snapshots and storage allocated to servers that no longer exist.

“Larger enterprise environments have the most reclaimable storage because they’re uncontrolled,” White said, “while smaller systems are better controlled.”

Because the company’s software has analysed large volumes of storage, White was in a position to talk about trends in storage usage.

For example, most companies have 25% capacity headroom, he said. “Customers need that level of comfort zone. Partners and end users say that the reason is because the purchasing process to get disk from purchase order to installation can take weeks or even months, so there’s a buffer built in. Best practice is around that level but you could go higher.”

You also get what White called system losses, due to formatting inefficiencies and OS storage. “And generally processes are often broken when it comes to decommissioning – without processes, there’s an assumption of infinite supply which leads to infinite demand and a lot of wastage.”

The sister product, Storage Fusion Virtualize “allows us to shine a torch into VMware environments,” White said. “It can see how VM storage is being used and consumed. It offers the same fast analysis, with no agents needed.”

Typical customers include not so much enterprises as systems integrators, service providers and consultants.

“We are complementary to main storage management tools such as those from NetApp and EMC,” White said. “Vendors take a global licence, and end users can buy via our partners – they can buy report packs to run it monthly or quarterly, for example.”

Solidfire
Another company aimed at service providers, SolidFire steps aside from the usual pitch for all-solid-state disks (SSDs). Yes, solid-state is very fast compared to spinning media, but the company claims to offer the ability to deliver a guarantee not just of uptime but of performance.

If you’re a provider of storage services in the cloud, one of your main problems, said the company’s Jay Prassl, is the noisy neighbour, the one tenant in a multi-tenant environment who sucks up all the storage performance with a single database call. This leaves the rest of the provider’s customers suffering from a poor response, leading to trouble tickets and support calls, so adding to the provider’s costs.

The aim, said Prassl, is to help service providers offer guarantees to enterprises they currently cannot offer because the technology hasn’t – until now – allowed it. “The cloud provider’s goal is to compute all the customer’s workload but high-performance loads can’t be deployed in the cloud right now,” he said.

So the company has built SSD technology that, because of the way that data is distributed across multiple solid-state devices – I hesitate to call them disks because they’re not – offers predictable latency.

“Some companies manage this by keeping few people on a single box but it’s a huge problem when you have hundreds or thousands of tenants,” Prassl said. “So service providers can now write a service level agreement (SLA) around performance, and they couldn’t do that before.”

Key to this is the automated way that the system distributes the data around the company’s eponymous storage systems, according to Prassl. It then sets a level of IOPS that a particular volume can achieve, and the service provider can then offer a performance SLA around it. “What we do for every volume is dictate a minimum, maximum and a burst level of performance,” he said. “It’s not a bolt-on but an architecture at the core of our work.”
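The min/max/burst model Prassl describes can be sketched with a standard token bucket, where the sustained refill rate caps steady-state IOPS and the bucket depth allows short bursts above it. This is an illustrative sketch only, not SolidFire’s actual implementation; the class and parameter names are hypothetical, and the minimum guarantee (which would need a scheduler prioritising starved volumes) is not modelled.

```python
# Hypothetical sketch of per-volume IOPS shaping with a maximum sustained
# rate and a burst allowance, in the spirit of the min/max/burst model
# described above. A plain token bucket: refill at max_iops, with
# burst_iops as the bucket depth.

class VolumeQoS:
    def __init__(self, max_iops, burst_iops):
        self.max_iops = max_iops        # sustained refill rate (tokens/sec)
        self.capacity = burst_iops      # bucket depth: short bursts above max
        self.tokens = float(burst_iops)
        self.last = 0.0

    def allow(self, now):
        """Admit one I/O at time `now` (seconds) if a token is available."""
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.max_iops)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # over budget: queue or reject

qos = VolumeQoS(max_iops=1000, burst_iops=1500)
# A back-to-back burst drains the bucket depth, then I/O is throttled
# until time passes and tokens refill at the sustained rate.
served = sum(qos.allow(0.0) for _ in range(2000))
print(served)   # 1500
```

The service provider’s SLA then falls out of the parameters: the burst figure bounds the best case, the max figure the steady state.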

2012: the tech year in view (part 1)

As 2012 draws to a close, here’s a round-up of some of the more interesting news stories that came my way this year. This is part 1 of 2 – part 2 will be posted on Monday 31 December 2012.

Storage
Virsto, a company making software that boosts storage performance by sequentialising the random data streams from multiple virtual machines, launched Virsto for vSphere 2.0. According to the company, this adds features for virtual desktop infrastructures (VDI), and it can lower the cost of providing storage for each desktop by 50 percent. The technology can save money because you need less storage to deliver sufficient data throughput, says Virsto.
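The general technique behind this kind of product is log-structuring: random writes from many VMs are appended to one sequential log, with an index mapping each logical block to its latest position. The sketch below is my own illustration of that principle, not Virsto’s implementation; all names are hypothetical.

```python
# Illustrative sketch of write sequentialisation: scattered writes from
# multiple VMs land append-only at the tail of a single log, turning
# random I/O into sequential I/O, while an index tracks the latest
# location of each (vm, block) pair.

class SequentialisingLog:
    def __init__(self):
        self.log = []        # backing store, written strictly append-only
        self.index = {}      # (vm_id, block_no) -> offset in the log

    def write(self, vm_id, block_no, data):
        # However random the logical address, the physical write is
        # always sequential: the next slot in the log.
        self.index[(vm_id, block_no)] = len(self.log)
        self.log.append(data)

    def read(self, vm_id, block_no):
        offset = self.index.get((vm_id, block_no))
        return None if offset is None else self.log[offset]

log = SequentialisingLog()
log.write("vm1", 7, b"alpha")
log.write("vm2", 3, b"beta")
log.write("vm1", 7, b"gamma")   # overwrite: appended, index updated
print(log.read("vm1", 7))       # b'gamma' -- latest version wins
```

A real implementation would also need garbage collection of superseded log entries, which is where much of the engineering effort goes.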

At the IPExpo show, I spoke with Overland, which has added a block-based product called SnapSAN to its portfolio. According to the company, the SnapSAN 3000 and 5000 offer primary storage using SSD for caching or auto-tiering. This “moves us towards the big enterprise market while remaining simple and cost-effective,” said a spokesman. Also, Overland’s new SnapServer DX series now includes dynamic RAID, which works somewhat like Drobo’s system in that you can install differently sized disks into the array and still use all the capacity.

Storage startup Tegile is one of many companies making storage arrays with both spinning and solid-state disks which, the company claims, boost performance cost-effectively. Tegile says it reduces data aggressively, using de-duplication and compression, and so cuts the cost of the SSD overhead. Its main competitor is Nimble Storage.
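The combination of de-duplication and compression can be illustrated in a few lines: hash each fixed-size block, store each unique block once, and compress what you store. This is a generic sketch of the technique, not Tegile’s implementation; block size, hash choice and names are all illustrative.

```python
import hashlib
import zlib

# Generic sketch of block-level de-duplication plus compression: each
# fixed-size block is stored at most once, keyed by its content hash,
# and compressed before storage.

BLOCK_SIZE = 4096

def dedupe_store(data, store):
    """Split data into blocks; store each unique block compressed.
    Returns the list of block hashes (the 'recipe') needed to rebuild."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:              # duplicate blocks cost nothing
            store[digest] = zlib.compress(block)
        recipe.append(digest)
    return recipe

def rebuild(recipe, store):
    return b"".join(zlib.decompress(store[d]) for d in recipe)

store = {}
data = b"A" * 8192 + b"B" * 4096            # two identical 'A' blocks
recipe = dedupe_store(data, store)
assert rebuild(recipe, store) == data
print(len(store))                           # 2 -- three blocks, two unique
```

The economics follow directly: every duplicate or compressible block is flash capacity you didn’t have to buy.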

Nimble itself launched a so-called ‘scale to fit’ architecture for its hybrid SSD-spinning disk arrays this year, adding a rack of expansion shelves that allows capacity to be expanded. It’s a unified approach, says the company, which means that adding storage doesn’t mean you need to perform a lot of admin moving data around.

Cloud computing
Red Hat launched OpenShift Enterprise, a cloud-based platform-as-a-service (PaaS). This is, says Red Hat, a solution for developers to launch new projects, including a development toolkit that allows you to quickly fire up new VM instances. Based on SELinux, the system lets you fire up a container and get middleware components such as JBoss and PHP, plus a wide variety of languages. The benefit, says the company, is that the system allows you to pool your development projects.

Red Hat also launched Enterprise Virtualization 3.1, a platform for hosting virtual servers with up to 160 logical CPUs and up to 2TB of memory per virtual machine. It adds command line tools for administrators, and features such as RESTful APIs, a new Python-based software development kit, and a bash shell. The open source system includes a GUI to allow you to manage hundreds of hosts with thousands of VMs, according to Red Hat.

HP spoke to me at IPExpo about a new CGI rendering system that it’s offering as a cloud-based service. According to HP’s Bristol labs director, it’s 100 percent automated and autonomic. It means that a graphics designer uses a framework to send a CGI job to a service provider who creates the film frame. The service works by estimating the number of servers required, sets them up and configures them automatically in just two minutes, then tears them down after delivery of the video frames. The evidence that it works can apparently be seen in the animated film Madagascar where, to make the lion’s mane move realistically, calculations were needed for 50,000 individual hairs.
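The workflow HP describes – estimate the servers a job needs, provision and configure them, render, then tear everything down – can be sketched abstractly. Everything below is hypothetical scaffolding for illustration; the function names, throughput figures and provisioning stand-ins are mine, not HP’s.

```python
import math

# Illustrative pseudo-workflow of an automated render service:
# size the fleet from the job, provision, render, tear down.

def estimate_servers(frames, frames_per_server_hour, deadline_hours):
    # Capacity planning: the minimum fleet that meets the deadline.
    return math.ceil(frames / (frames_per_server_hour * deadline_hours))

def run_render_job(frames, frames_per_server_hour=100, deadline_hours=2):
    n = estimate_servers(frames, frames_per_server_hour, deadline_hours)
    servers = [f"render-{i}" for i in range(n)]   # stand-ins for real VMs
    try:
        # ...the actual frame rendering would be dispatched here...
        return {"servers": n, "frames": frames}
    finally:
        servers.clear()                           # automatic teardown

print(run_render_job(1000))   # {'servers': 5, 'frames': 1000}
```

The interesting part of the real system is that the estimate, setup and teardown happen with no human in the loop, in the order of minutes.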

For the future, HP Labs is looking at using big data and analytics for security purposes and is looking at providing an app store for analytics as a service.

Security
I also spoke with Rapid7, an open-source security company that offers a range of tools for companies large and small to control and manage the security of their digital assets. It includes a vulnerability scanner, Nexpose, a penetration testing tool, Metasploit, and Mobilisafe, a tool for mobile devices that “discovers, identifies and eliminates risks to company data from mobile devices”, according to the company. Overall, the company aims to provide “solutions for comprehensive security assessments that enable smart decisions and the ability to act effectively”, a tall order in a crowded security market.

I caught up with Druva, a company that develops software to protect mobile devices such as smartphones, laptops and tablets. Given the explosive growth in the numbers of end-user owned devices in companies today, this company has found itself in the right place at the right time. New features added to its flagship product inSync include better usability and reporting, with the aim of giving IT admins a clearer idea of what users are doing with their devices on the company network.

Networking
Enterasys – once Cabletron for the oldies around here – launched a new wireless system, IdentiFi. The company calls it wireless with embedded intelligence offering wired-like performance but with added security. The system can identify issues of performance and identity, and user locations, the company says, and it integrates with Enterasys’ OneFabric network architecture that’s managed using a single database.

Management
The growth of virtualisation in datacentres has resulted in a need to manage the virtual machines, so a number of companies focusing on this problem have sprung up. Among them is vKernel, whose product vOPS Server aims to be a tool for admins that’s easy to use; experts should feel they have another pair of hands to help them do stuff, was how one company spokesman put it. The company, now owned by Dell, claims it has the largest feature set for virtualisation management when you include its vKernel and vFoglight products, which provide analysis, advice and automation of common tasks.

Technology predictions for 2013

The approaching end of the year marks the season of predictions for and by the technology industry for the next year, or three years, or decade. These are now flowing in nicely, so I thought I’d share some of mine.

Shine to rub off Apple
I don’t believe that the lustre attaching to everything Apple does will save it from competitors that can do pretty much everything it does, but without the smugness. Some of that lustre was deserved when Apple was the only company making smartphones, but this is no longer true. And despite the success of the iPhone 5, I wonder if its incremental approach – a slightly bigger screen and some nice-to-have features – will be enough to satisfy in the medium term. With no dictatorial obsessive at the top of a company organised around and for that individual’s modus operandi, can Apple make awesome stuff again, but in a more collective way?

We shall see, but I’m not holding my breath.

Touch screens
Conventional wisdom says that touchscreens only work when they are either horizontal or attached to a handheld device. It must be true: Steve Jobs said so. But have you tried using a touchscreen laptop? Probably not.

One reviewer has, though, and he makes a compelling case for them, suggesting that they don’t lead to gorilla arm, after all. I’m inclined to agree that a touchscreen laptop could become popular, as they share a style of interaction with users’ phones – and they’re just starting to appear. Could Apple’s refusal to make a touchscreen MacBook mean it’s caught wrong-footed on this one?

I predict that touchscreen laptops will become surprisingly popular.

Windows 8
Everyone’s got a bit of a downer on Windows 8. After all, it’s pretty much Windows 7 but with a touchscreen interface slapped on top. Doesn’t that limit its usefulness? And since enterprises are only now starting to upgrade from Windows XP to Windows 7 — and this might be the last refresh cycle that sees end users being issued with company PCs — doesn’t that spell the end for Windows 8?

I predict that it will be more successful than many think: not because it’s especially great – it certainly has flaws, especially when used with a mouse, which means learning how to use the interface all over again.

In large part, this is because the next version of Windows won’t be three years away or more, which has tended to be the release cycle of new versions. Instead, Microsoft is aiming for a series of smaller, point releases, much as Apple does but hopefully without the annoying animal names from which it’s impossible to derive an understanding of whether you’ve got the latest version.

So Windows Blue – the alleged codename – is the next version and will take into account lessons from users’ experiences with Windows 8, and take account of the growth in touchscreens by including multi-touch. And it will be out in 2013, probably the third quarter.

Bring your own device
The phenomenon whereby firms no longer provide employees with a computing device but instead allow them to bring their own, provided it fulfils certain security requirements, will blossom.

IT departments hate this bring your own device policy because it’s messy and inconvenient but they have no choice. They had no choice from the moment the CEO walked into the IT department some years ago with his shiny new iPhone – he was the first because he was the only one able to afford one at that point – and commanded them to connect it to the company network. They had to comply and, once that was done, the floodgates opened. The people have spoken.

So if you work for an employer, expect hot-desking and office downsizing to continue as the austerity resulting from the failed economic policies of some politicians continue to be pursued, in the teeth of evidence of their failure.

In the datacentre
Storage vendors will be snapped up by the deep-pocketed big boys – especially Dell and HP – as they seek to compensate for their mediocre financial performance by buying companies producing new technologies, such as solid-state disk caching and tiering.

Datacentres will get bigger as cloud providers amalgamate, and will more or less be forced to consider and adopt software-defined networking (SDN) to manage their increasingly complex systems. SDN promises to do that by virtualising the network, in the same way as the other major datacentre elements – storage and computing – have already been virtualised.

And of course, now that virtualisation is an entirely mainstream technology, we will see even bigger servers hosting more complex and mission-critical applications such as transactional databases, as the overhead imposed by virtualisation shrinks with each new generation of technology. What is likely to lag however is the wherewithal to manage those virtualised systems, so expect to see some failures as virtual servers go walkabout.

Security
Despite the efforts of technologists to secure systems – whether for individuals or organisations – security breaches will continue unabated. Convenience trumps security every time, experience teaches us. And this means that people will find increasingly ingenious ways around technology designed to stop them walking around with the company’s customer database on a USB stick in their pocket, or exposing the rest of the world to a nasty piece of malware because they refuse to update their operating system’s defences.

That is, of course, not news at all, sadly.

Happy birthday Simon the smartphone

IBM Simon

Today, 23 November 2012, is the 20th anniversary of the launch of the first smartphone. The IBM Simon was a handheld cellular phone and PDA that ended up selling some 50,000 units. This was impressive as, at the time, publicly available cellular networks were a rarity.

In fact, at the London launch of the device, I remember wondering how many people would buy one given the high costs of both a subscription and the phone. In the USA, BellSouth Cellular initially offered the Simon for US$899 with a two-year service contract or US$1099 without a contract.

As well as a touch screen, the widget included an address book, calendar, appointment scheduler, calculator, world time clock, electronic note pad, handwritten annotations, and standard and predictive stylus input screen keyboards.

Measuring 203mm by 63.5mm by 38mm, it had a massive 35mm by 115mm monochrome touch screen and weighed a stonking 510g, but was only on the market for about six months. The UK never saw it commercially available.

So while it never really took off, this was largely down to timing: it was ahead of its time and it was soon overtaken by smaller, less well-featured devices that were more affordable.

But when you contemplate which shiny shiny is your next object of desire, think about the Simon, and remember, Apple didn’t invent the smartphone: IBM did.

New developments in open source security

I just spent some time talking to Claudio Guarnieri, European security researcher for Rapid7, about some interesting new open source security developments. Guarnieri is responsible for Cuckoo Sandbox, a malware analysis system. His website reckons that “you can throw any suspicious file at it and in a matter of seconds Cuckoo will provide you back some detailed results outlining what such file did when executed inside an isolated environment.”

But he was also talking about USB threat-detection software which appears to be unique. Ghost USB Honeypot is a honeypot for malware that spreads via USB storage devices. The aim is to fool malware into infecting a fake device, at which point you can trap and/or analyse it.

It works by emulating a USB device so that, if a computer is infected by malware which propagates using USB flash drives, as so much of it does, the honeypot will trick the malware into infecting the emulated device, where it can be detected without compromising the host system. This kind of malware can be particularly difficult to detect because it can attack high-security machines that aren’t network-connected. Stuxnet was one such.

To anyone looking at it from user space or from higher levels in the kernel-mode storage architecture, the Ghost drive appears to be a real removable storage device; it strives to behave exactly like disk.sys, the operating system’s disk class driver. The key to its operation is that malware should not be able to detect that it’s not a real USB device.

You can drive it from a GUI or from the command line, and the aim is for companies to be able to deploy the software on standard client machines without the user having to get involved.

In fact, ideally, according to Ghost’s developer, Bonn University student Sebastian Poeplau, the best way to get this to work successfully is to hide the device from the user so they don’t try to write to it. That way, any write access can be assumed to be malware, and the data written is copied into an image file for later analysis. A video of a recent presentation Poeplau gave about the project, its rationale and how it works is available online.
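Ghost itself is a kernel-mode driver, but the detection principle is simple enough to sketch in user space: nothing legitimate should ever write to the decoy, so any change to its backing image counts as a malware indicator. The sketch below is my own illustration of that principle under those assumptions; the file name and functions are hypothetical, not part of Ghost.

```python
import hashlib
import pathlib

# User-space illustration of the Ghost detection principle: a decoy
# image that no legitimate process knows about. If its content hash
# ever changes, something wrote to it -- treated here as malware.

DECOY_IMAGE = pathlib.Path("ghost_decoy.img")

def fingerprint(path):
    """Hash the decoy's backing image."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def tampered(path, baseline):
    # Any divergence from the baseline means write access occurred;
    # the image would then be copied off for analysis.
    return fingerprint(path) != baseline

if __name__ == "__main__":
    DECOY_IMAGE.write_bytes(b"\x00" * 4096)     # pristine, empty decoy
    baseline = fingerprint(DECOY_IMAGE)
    print("suspicious write!" if tampered(DECOY_IMAGE, baseline) else "clean")
```

The real value of doing this at the driver level, as Ghost does, is that the malware sees a convincing removable drive rather than a mere file, and the user need never know the decoy exists.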