Why some websites are deliberately designed to be insecure

Passwords remain the bane of our lives. People print them out, they re-use them or slightly change them for different services, or they simply use the same one for decades.

On the other side of the coin, for most users of a service it’s a pain to remember a password and a bigger pain to change it – and then to have to remember a new one all over again when a website changes, gets hacked or alters its security policy.

But while passwords are imperfect, they’re the least worst option in most cases for identifying and authenticating a user, and the best way of making them more secure is to use each password for only one site, and make them long, complex, and hard to guess. However, some websites are purposely designed to be less secure by subverting those attempts. Here’s why.

2FA doesn’t work

Two-factor authentication (2FA) is often seen as a more secure method of authentication, but it’s patchy at best. For example, the division between enterprise and personal environments has today all but evaporated. In the course of their jobs, people increasingly access their personal services at work using their personal devices. And employers can’t mandate 2FA for access to Facebook, for example, which might well be the chosen method of communication of a key supplier, or a way of communicating with potential customers. All Facebook wants is a password, and it’s not alone.

Two-factor authentication is also less convenient and takes more time. You’re prepared to tolerate this when accessing your bank account because, well, money. For most other, less important services, adding barriers to access is likely to drive users into the arms of the competition.

Password persistence

So we’re stuck with passwords until biometrics become a pervasive reality. And maybe not even then – but that’s a whole other issue. The best solution I’ve come up with to the password problem is a password manager. Specifically, KeePass, which is a free, open-source, cross-platform solution with a healthy community of third-party developers of plug-ins and utilities.

You only have to remember a single master password (or select the key file) to unlock the whole database, and that gets you access to everything. And as the website says: “The databases are encrypted using the best and most secure encryption algorithms currently known (AES and Twofish).”

It works on your phone, your PC, your Mac, your tablet – you name it, and it’ll generate highly secure passwords for you, customised to your needs. So what’s not to like?

Pasting problems

Here’s the rub: some websites think they’re being more secure by preventing you from pasting a password into their password entry fields. Some website security designers will argue that passwords to access their service should not be stored in any form. But a password manager works by pasting passwords into the password login field.
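To illustrate the mechanism – a hypothetical sketch rather than any particular site’s code – blocking paste usually amounts to nothing more than a single event handler on the password field:

```typescript
// Hypothetical sketch of the paste-blocking anti-pattern. One event handler
// on the password field is all it takes to defeat a password manager that
// fills the field via the clipboard.
const passwordField = document.querySelector<HTMLInputElement>('#password');

passwordField?.addEventListener('paste', (event: ClipboardEvent) => {
  event.preventDefault(); // silently throws away the pasted password
  alert('Pasting is not permitted in this field.');
});
```

Note that this does nothing to stop malware from logging the keystrokes you type in instead.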

The rationale for preventing password pasting is that malware can snoop the clipboard and pass that information back to the crooks. But this is using a sledgehammer to crack a nut: KeePass uses an obfuscation method to ensure the clipboard can’t be sniffed, and it clears the password from the clipboard after a configurable interval, so the exposure window can be very short; 10 seconds will do it.

In addition, as Troy Hunt, a Microsoft MVP for Developer Security, points out: “the irony of this position is that [it] makes the assumption that a compromised machine may be at risk of its clipboard being accessed but not its keystrokes. Why pull the password from memory for the small portion of people that elect to use a password manager when you can just grab the keystrokes with malware?”

In other words, preventing pasting is counter-productive; it’s reducing security. Don’t believe me? Check out this scenario.

Insecure by design

So if you can’t paste a password in, what do you do? If you use a password manager – probably the most secure way of storing passwords today, and one that puts you way ahead of the game – you open up the entry for that service in KeePass, expose the password to any prying eye that happens to be passing, and type it in – long and complex as it is likely to be – manually, character by character. That probably takes a few minutes.

Can you see anything wrong with that? If you’re sitting in a crowded coffee shop, for example?

Yup. A no-paste policy is annoying, slow, prone to mistakes, and highly insecure. Worse, it’s likely to be the security-conscious – those using password managers and the like – who are most affected. Even a simple file full of passwords – hopefully encrypted – and tucked away in an obscure location is likely to be more secure than the method many if not most people use: re-using common, easily memorable passwords.

I’ve had discussions about this with one major UK bank which implemented a no-paste policy and seems since to have reversed course – whether as a result of my intervention (and no doubt that of others too) I have no way of knowing.

Say no to no-paste

So if you encounter a website that does not allow you to paste in a password in a mistaken bid to add security, point out that, in effect, it’s forcing people to fall back on weak, memorable passwords – which is less secure.

As Troy Hunt says: “we’ve got a handful of websites forcing customers into creating arbitrarily short passwords then disabling the ability to use password managers to the full extent possible and to make it even worse, they’re using a non-standard browser behaviour to do it!”

Is the cloud letting consumers down?

The promise of cloud services has, by and large, been fulfilled. But from the early days right up to the present, the big issue has been security: is your data safe?

What this question is really asking is whether you can retrieve your data quickly in the event of a technological melt-down. You know the kind of thing: an asteroid hits your business premises, a flood or fire makes your office unusable for weeks or months, or some form of weird glitch or malware makes your data unavailable, and you need to restore a backup to fix it.

All these scenarios are now pretty much covered by the main cloud vendors so, from a business perspective, what’s not to like?

Enter the consumer

Consumers – all of us, in other words – are also users of cloud services. Whether your phone uploads photos to the manufacturer’s cloud service, or you push terabytes of multimedia data up to a big provider’s facility, the cloud is integrated into everything that digital natives do.

The problem here is that, when it comes to cloud services, you get what you pay for. Enterprises will pay what it takes to get the level of service they want, whether it’s virtual machines for development purposes that can be quick and easy to set up and tear down, or business-critical applications that need precise configuration and multiple levels of redundancy.

Consumers on the other hand are generally unable to pay enterprise-level cash but an increasing number have built large multimedia libraries and see the cloud as a great way of backing up their data. Cloud providers have responded to this demand in various ways but the most common is a bait-and-switch offer.

Amazon’s policy changes provide the latest and arguably the most egregious example. In March 2015, it launched an unlimited data storage service – not just photos, as Google and others were already offering – all for just £55 a year. Clearly many people saw this as a massive bargain and, although figures are not publicly available, many took it up.

Amazon dumps the deal

But in May 2017, just over two years later, Amazon announced that the deal was going to be changed, and subscribers would have to pay on a per-TB basis instead. This was after many subscribers – according to user forums – had uploaded dozens of terabytes over a period of months at painfully slow, asymmetrical data rates.

Now they are offered, on a take-it-or-leave-it basis, a more expensive cloud service – costing perhaps three or four times as much, depending on data volumes – and are left with a whole bunch of data that will be difficult to migrate. On Reddit, many said they have given up on cloud providers and are instead investing in local storage.

This isn’t the first time such a move has been made by a cloud provider: bait the users in, then once they’re committed, switch the deal.

Can you trust the cloud?

While cloud providers are of course perfectly at liberty to change their terms and conditions according to commercial considerations, it’s hard to think of any other consumer service where such a major change to the T&Cs would be risked, for fear of user backlash – especially by one of the largest global providers.

The message that Amazon’s move transmits is that cloud providers cannot be trusted, and that a deal that looks almost too good to be true will almost certainly turn out to be just so, even when it’s offered by a very large service provider who users might imagine would be more stable and reliable. That the switch comes at a time when storage costs continue to plummet makes it all the more surprising.

In its defence, Amazon said it will honour existing subscriptions until they expire, and only start deleting data 180 days after expiry.

That said, IT companies need to grow up. They’re not startups any more. If they offer a service and users take them up on it in all good faith – as Amazon’s commercial managers might have expected – they should deal with it in a way that doesn’t risk destroying faith and trust in cloud providers.

It’s not just consumers who are affected. It shouldn’t be forgotten that business people are also consumers and the cloud purchasing decisions they make are bound to be influenced to a degree by their personal experiences as well as by business needs, corporate policy and so on.

So from the perspective of many consumers, the answer to the question of whether you can trust the cloud looks pretty equivocal. The data might still be there but you can’t assume the service will continue along the same or similar lines as those you originally signed up to.

Can you trust the cloud? Sometimes.

AVM Fritz!Box 4040 review

AVM’s Fritz!Box range of routers has long offered a great range of features and is, in my experience, highly reliable.

The 4040 sits at the top end of the lower half of AVM’s product line-up. The top half includes DECT telephony features but if you’ve already got a working cordless phone system, you can live without that.

The 4040 looks like all the other Fritz!Box devices: a red and silver streamlined slim case without massive protuberances that would persuade you to hide the device from view. A couple of buttons on the top control WPS and WLAN, while indicators show status, with the Info light moderately configurable; it would be helpful if AVM broadened the possible uses of this indicator.

At the back are four 1Gbps LAN ports which you can downgrade individually for power-saving reasons to 100Mbps, and a WAN port. A couple of USB ports are provided too, one 3.0, one 2.0.

The 4040 supports all forms of DSL, either directly or via an existing modem or dongle, and provides 802.11n and 802.11ac WLAN on both 2.4GHz and 5GHz. The higher-frequency network provides connectivity at up to a theoretical 867Mbps; I managed to get 650Mbps with my phone right next to the access point.

Power-saving modes are available for the wireless signal too – it automatically reduces the wireless transmitter power when all devices are logged off – providing a useful saving for a device you’re likely to leave switched on all the time.

Security is catered for by MAC address filtering on the wireless LAN, and by a stateful packet inspection firewall with port sharing to allow access from the Internet.

The software interface is supremely easy to use and handsome too. The overview screen gives an at-a-glance view of the status of the main features: the Internet connection, devices connected to the network, the status of all interfaces, and the NAS and media servers that are built into the router.

The NAS feature allows you to connect storage to the router – over USB only – and access it from anywhere via UPnP, FTP or SMB (Windows networking). Other features include Internet-only guest access which disables access to the LAN, an IPSec VPN, and Wake on LAN over the Internet.

The Fritz!Box 4040 is the latest in a long line of impressive wireless routers, continuing AVM’s tradition of high quality hardware and software, and it’s good value at around £85.

How to stay safe on the Internet – trust no-one

Working close to the IT industry as I do, I find it hard to avoid the blizzard of announcements and general excitement around the growth of the Internet of Things allied to location-based services. This, we are told, will be a great new way to market your goods and services to consumers.

You can get your sales assistants to greet shoppers by name! You can tell them about bargains by text as they walk past your store! You might even ring them up! Exclamation marks added for general effect.

But here’s the thing. Most people don’t trust big corporations any more, according to the recently published 2013 IT Risk/Reward Barometer report. As this international study finds: “Across all markets surveyed, the vast majority of consumers worry that their information will be stolen (US: 90%, Mexico: 91%, India: 88%, UK: 86%).”

As a result, blizzard marketing of the kind that triangulation technologies now permit makes people feel uneasy at best and downright annoyed at worst. People ask themselves questions about who has their data, how they got it, and what control they have over it once it has escaped into the ether.

From ISACA’s point of view, this is largely the fault of individuals who don’t control their passwords properly or otherwise secure their systems. It’s an auditing organisation, so that’s not an unusual position to adopt. But I think it goes further than that.

As the study also points out: “Institutional trust is a critical success factor in an increasingly connected world. […] Organisations have much work to do to increase consumer (and employee) trust in how personal information is used.”

In other words, companies need to work harder at winning your trust. Does that make you feel any better?

This is clearly not an issue that will ever be solved. For every ten organisations that are trustworthy and manage personal data responsibly – you do read that text-wall of a privacy policy each time you sign up to a new site, don’t you? – there will be one that doesn’t. Even if all companies were trustworthy, people would still make mistakes and hackers would win the security battle from time to time, resulting in compromised personal data.

The only rational policy for the rest of us to adopt is to trust none of them, and that is what this study shows most people tend to do.

The least you should do is use long, complex passwords and change them regularly, using a password safe (eg KeePass) so you don’t have to commit them to memory – or worse, to bits of paper.

FYI, the study was conducted by ISACA, which describes itself as “an independent, nonprofit, global association, ISACA engages in the development, adoption and use of globally accepted, industry-leading knowledge and practices for information systems.”

Storage roundup with PernixData, Arista Networks and Tarmin

There’s been a bit of a glut of storage announcements recently, so here’s a quick round-up of the more interesting ones from recent weeks.

PernixData
This company is thinking ahead to a time when a large proportion of the servers in datacentres will have flash memory installed inside them. Right now, most storage is configured as a storage pool connected via a dedicated storage network, but this is sub-optimal for virtualised servers, which generate large numbers of IOPS.

So instead, companies such as Fusion-io have developed flash memory systems for servers, so that data is local and can be accessed much more quickly. This abandons one of the advantages of the storage network, namely storage sharing.

So PernixData has created FVP (Flash Virtualization Platform), a software shim that sits in the hypervisor and links the islands of data stored in flash memory inside each of the host servers. The way it works is to virtualise the server flash storage so it appears as a storage pool across physical hosts. Adding more flash to vSphere hosts – they have to be running VMware’s hypervisor – prompts FVP to expand the pool of flash. According to the company, it works irrespective of the storage vendor.

What this effectively does is to create a cache layer consisting of all the solid-state storage in the host pool that can boost the performance of reads and writes from and to the main storage network.
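As a rough conceptual sketch of that caching idea – my own illustration, not PernixData’s code or API – think of a layer that serves reads and absorbs writes from pooled host flash, falling back to the shared array only on a miss:

```typescript
// Conceptual sketch only: a block cache fronting a slower backing store,
// loosely analogous to pooling server-side flash in front of a shared array.
// The names (FlashPool, BackingStore) are illustrative, not vendor APIs.
interface BackingStore {
  read(block: number): Promise<Uint8Array>;
  write(block: number, data: Uint8Array): Promise<void>;
}

class FlashPool {
  private cache = new Map<number, Uint8Array>(); // stands in for pooled host flash

  constructor(private backing: BackingStore) {}

  async read(block: number): Promise<Uint8Array> {
    const hit = this.cache.get(block);
    if (hit) return hit;                        // served from "flash", no trip to the array
    const data = await this.backing.read(block);
    this.cache.set(block, data);                // populate the cache on a miss
    return data;
  }

  async write(block: number, data: Uint8Array): Promise<void> {
    this.cache.set(block, data);                // absorb the write in flash
    await this.backing.write(block, data);      // write through to the main array
  }
}
```

The real product does this transparently inside the hypervisor and spans the flash of multiple hosts; the sketch just shows why reads and writes get faster when a flash layer sits in front of the storage network.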

The company reckons that: “For the first time ever, companies can scale storage performance independent of storage capacity using server side flash.” And according to CEO Poojan Kumar: “We allow all virtual machines to use every piece of flash memory. The result is 17 times lower latency and 12 times more IOPS. It’s non-disruptive, it looks very simple and is easy to use.” It costs US$7,500 per physical host or US$10k for four hosts – a price point designed for smaller businesses.

It seems like a pretty good idea, and there’s some real-world testing info here.

Arista Networks
Also new on the hardware front are products from Arista Networks.

This company started life a few years ago with a set of high-performance network switches that challenged the established players – such as Cisco and Juniper – by offering products that were faster, denser, and cheaper per port. Aimed at the high-performance computing market – users such as life-science projects, geological-data processing, and financial institutions – they were the beachhead that established the company’s reputation, something it found easy given that its founders included Jayshree Ullal (ex-Cisco senior vice-president) and Andy Bechtolsheim (co-founder of Sun Microsystems).

I recently spoke to Doug Gourlay, Arista’s vice-president of systems engineering, about the new kit, which Gourlay reckoned means that Arista “can cover 100% of the deployment scenarios that customers come to us with”. He sees the company’s strength as its software, which is claimed to be “self-healing and very reliable, with an open ecosystem and offering smart upgrades”.

The new products are the 7300 and 7250 switches, filling out the 7000 X Series which, the company claims, optimises costs, automates provisioning, and builds more reliable scale-out architectures.

The main use cases of the new systems are for those with large numbers of servers in smaller datacentres, and for dense, high-performance computing render farms, according to Gourlay. They are designed for today’s flatter networks: where a traditional datacentre network used three layers, a modern fabric-style network uses just two, offering the fewest hops from any one server to any other. In Arista-speak, the switches attaching directly to the servers and directing traffic between them are leaves, while the core datacentre network is the spine.

The 7300 X Series consists of three devices, with the largest, the 21U 7316, offering 16 line-card slots with 2,048 10Gbps ports or 512 40Gbps ports. Claimed throughput is 40Tbps. The other two in the series, the 7308 and 7304, accommodate eight and four line cards respectively, with decreases in size (21U and 8U) and throughput (20Tbps and 10Tbps).

The 2U, fixed configuration 7250QX-64 offers 64 40Gbps ports or 256 10Gbps ports, and a claimed throughput of up to 5Tbps. All systems and series offer reversible airflow for rack positioning flexibility and a claimed latency of two microseconds. Gourlay claimed this device offers the highest port density in the world.

Tarmin
Tarmin was punting its core product, GridBank, at the SNW show. It’s an object storage system with bells on.

Organisations deploy object storage technology to manage very large volumes of unstructured data – typically at the petabyte scale and above. Such data is created not just by workers but, increasingly, by machines. Machine-generated data comes from scientific instrumentation, including seismic and exploration equipment, genomic research tools, medical sensors, and industrial sensors and meters, to cite just a few examples.

Most object storage systems restrict themselves to managing the data on disk, leaving other specialist systems such as analytics tools to extract meaningful insights from the morass of bits. What distinguishes Tarmin is that GridBank “takes an end to end approach to the challenges of gaining value from data,” according to CEO Shahbaz Ali.

He said: “Object technologies provide metadata but we go further – we have an understanding of the data which means we index the content. This means we can analyse a media file in one of the 500 formats we support, and can deliver information about that content.”

In other words, said Ali: “Our key differentiator is that we’re not focused on the media like most storage companies, but the data – we aim to provide transparency and independence of data from media. We do data-defined storage.” He called this an integrated approach which means that organisations “don’t need an archiving solution, or a management solution” but can instead rely on Gridbank.

All that sounds well and good, but one of the biggest obstacles to adoption has to be single-sourcing a technology that aims to manage all your data. It also has very few reference sites (I could find just two on its website), so it appears that the number of organisations taking the Tarmin medicine is small.

There are also, of course, a number of established players in the markets that GridBank straddles, and it remains to be seen whether an end-to-end solution is what organisations want. Integrating best-of-breed products avoids proprietary vendor lock-in – to which companies are more sensitive than ever – and is more likely to prove better for performance and flexibility.

Seagate’s new KOS disk drives aim to entice cloud builders

Among the most interesting conversations I had at the storage show SNW (aka Powering the Cloud) in Frankfurt this year was with Seagate’s European cloud initiatives director Joe Fagan, as we talked about the company’s proposed Kinetic Open Storage (KOS) drives.

The disk drive company is trying to move up the stack from what has become commodity hardware by converting its drives into servers. Instead of attaching via a SATA or SAS connector, Kinetic drives will have – a SATA or SAS connector, not an RJ45. But the data flowing through that connector will use IP rather than storage protocols; the physical connector stays the same for compatibility purposes.

The aim is to help builders of large-scale infrastructures, such as cloud providers, to build denser, object-based systems by putting the server on the storage, rather than, to paraphrase Fagan, spending the energy on a Xeon or two per server along with a bunch of other hardware. Seagate argues that KOS could eliminate a layer of hardware between applications and storage, so data will flow from the application servers directly to storage rather than, as now, being translated into a variety of protocols before it hits the disk.
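To give a flavour of the idea – a hypothetical sketch, not Seagate’s actual Kinetic API – an application would address each drive directly over IP as a simple key-value store rather than issuing block commands through a storage server:

```typescript
// Hypothetical sketch of addressing a drive as a key-value store over IP.
// The interface names here are illustrative only, not Seagate's API.
interface KineticStyleDrive {
  put(key: string, value: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | undefined>;
  del(key: string): Promise<void>;
}

// The application server talks to the drive by its IP address: no block
// layer, no filesystem, no intermediate server translating protocols.
async function storeObject(drive: KineticStyleDrive, id: string, data: Uint8Array): Promise<void> {
  await drive.put(`object:${id}`, data);
}
```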

Fagan said two cloud builders were interested in the technology.

Behind this is, of course, a bid to grab some of the cash that enterprises and consumers are spending on cloud applications and services.

There are a few ‘howevers’, as you might imagine. Among the first is that every disk drive will need an IP address. This has huge implications for the network infrastructure and for network managers. Suddenly, there will be a lot more IP addresses to deal with, they will have to be subnetted and VLANned – did I mention that Kinetic drives will use IPv4? – and all this assumes you can summon up enough v4 addresses to start with.
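To put rough numbers on the addressing problem (the drive count below is my own assumption for illustration, not a Seagate figure):

```typescript
// Rough sizing of the addressing problem for a drive-per-IP design.
const drives = 100_000;              // an assumed estate for a cloud builder
const usablePerSlash24 = 254;        // usable IPv4 addresses in a /24
const subnetsNeeded = Math.ceil(drives / usablePerSlash24);
console.log(subnetsNeeded);          // ≈ 394 subnets to plan, route and VLAN
```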

Another concern is that mechanical disk drives fail relatively frequently, while the embedded server electronics, having no moving parts, don’t. So when a drive fails – and in large-scale deployments they surely will – you have to throw away the built-in server too. Could be expensive.

And finally, there’s also a huge amount of inertia in the shape of today’s installed systems and the expertise needed to manage and operate them.

Is that enough to halt the initiative? Seagate clearly hopes not, and hopes too that other drive makers will come on board and develop their own versions in order to help validate the concept. It has provided APIs to help app developers exploit the concept.

As ever, time will tell. But will you find these drives in a server near you any time soon? Don’t hold your breath.

Innergie mMini DC10 twin-USB charging car adapter

We all travel with at least two gadgets these days – or is it just me? What you too often don’t think about, though, is that each gadget adds to the task of battery management. The Innergie 2A adapter’s twin USB charging ports will help.

The company sent me a sample to try and I found the design to be clean and tidy, and it all works as expected. It’s also quite compact, measuring 70mm long from tip to tail, and protruding from the car’s power socket by just 28mm. This means it won’t take up too much precious space, an issue especially if the power socket is mounted in the glovebox.

When activated, the front lights up a pleasing blue, and the adapter will charge your USB devices at up to its maximum 2A. This means that if your device’s battery capacity is 2,000mAh, which is reasonably typical, it’ll take an hour (in theory) to recharge from empty.
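The arithmetic behind that claim is simple; in practice, conversion losses and the taper near full charge add a fair margin on top (that overhead is an assumption, not a manufacturer figure):

```typescript
// Idealised charge-time arithmetic for the figures quoted above.
function idealChargeHours(capacityMAh: number, chargeCurrentMA: number): number {
  return capacityMAh / chargeCurrentMA;
}

console.log(idealChargeHours(2000, 2000)); // 1 hour in theory; longer in practice
```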

Officially, it costs £19 (probably less on the street), and there’s more about it here.

Seagate launches new solid-state disks (SSD)

Seagate, the biggest maker of hard disks, recently launched a new range of solid-state drives as it aims to align itself better with current buying trends.

In particular, the company’s new 600 SSD is aimed at laptop users who want to speed up their boot and data-access times. This is Seagate’s first foray into this market segment.

Claiming a 4x boot-time improvement, Seagate also said that SSD-stored data is safer if the laptop is dropped. From my own experience of using SSDs in laptops over the last five years, I can confirm this – and that their lower power consumption helps to improve battery life too.

The 600 SSD offers up to 480GB of capacity in a 2.5in form factor and comes in multiple heights, including 5mm, which the company says makes it “ideal for most ultra-thin devices as well as standard laptop systems”. It’s compatible with the latest 6Gbps SATA interface.

The other new SSD systems are aimed at enterprises. The most interesting of these is the X8 Accelerator, which is the result of Seagate’s investment in Virident, a direct competitor to Fusion-io, probably the best-known maker of directly attached SSDs for servers. The Seagate product is also a PCIe card, with claimed performance of up to 1.1 million IOPS. The X8 offers up to 2.2TB in a half-height, half-length card.

Of the two other new drives, the 2.5-inch 480GB 600 Pro SSD and the 1200 Pro SSD, the first is targeted at cloud system builders, data centres, cloud service providers, content delivery networks, and virtualised environments, and is claimed to consume less power and so need less cooling. It consumes 2.8W, variable according to workload, which Seagate reckons is “the industry’s highest IOPS/watt”.

Up the performance scale is the 800GB 1200 Pro SSD, which is aimed at those needing high throughput. It attaches using dual-port 12Gbps SAS connectors and “uses algorithms that optimize performance for frequently accessed data by prioritizing which storage operations, reads or writes, occur first and optimizing where it is stored.”

Seagate said it buys its raw flash memory from Samsung and Toshiba but holds patents for its controller and system management technologies.

Hard disks and flash storage will co-exist – for the moment

When it comes to personal storage, flash is now the default technology. It’s in your phone, tablet, camera, and increasingly in your laptop too. Is this about to change?

I’ve installed solid-state disks in my laptops for the last three or so years simply because it means they fire up very quickly and – more importantly – battery life is extended hugely. My Thinkpad now works happily for four or five hours while I’m using it quite intensively, where three hours used to be about the maximum.

The one downside is the price of the stuff. It remains stubbornly stuck at 10x or more the price per GB of spinning disks. When you’re using a laptop as I do, with most of my data in the cloud somewhere and only a working set kept on the machine, a low-end flash disk is big enough and therefore affordable: 120GB will store Windows and around 50GB of data and applications.

From a company’s point of view, the equation isn’t so different. Clearly, the volumes of data to be stored are bigger but despite the blandishments of those companies selling all-flash storage systems, many companies are not seeing the benefits. That’s according to one storage systems vendor which recently announced the results of an industry survey.

Caveat: industry surveys are almost always skewed because of sample size and/or the types of questions asked, so the results need to be taken with a pinch – maybe more – of salt.

Tegile Systems reckons that 99 percent of SME and enterprise users who are turning to solid-state storage will overpay. They’re buying more than they need, the survey finds – at least according to the press release, which wastes no time in mentioning, in its second paragraph, that the company penning the release just happens to have the solution. So shameless!

Despite that, I think Tegile is onto something. Companies are less sensitive to the price per GB than they are to the price / performance ratio, usually expressed in IOPS, which is where solid-state delivers in spades. It’s much quicker than spinning disks at returning information to the processor, and it’s cheaper to run in terms of its demands on power and cooling.

Where the over-payment bit comes in is this (from the release): “More than 60% of those surveyed reported that these applications need only between 1,000 and 100,000 IOPS. Paying for an array built to deliver 1,000,000 IOPS to service an application that only needs 100,000 IOPS makes no sense when a hybrid array can service the same workload for a fraction of the cost.”

In other words, replacing spinning disks with flash means you’ve got more performance than you need, a claim justified by the assertion that only a small proportion of the data is being worked on at any one time. So, the logic goes, you store that hot data on flash for good performance but the rest can live on spinning disks, which are much cheaper to buy. In other words, don’t replace all your disks with flash, just a small proportion, depending on the size of your working data set.
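To put rough numbers on that trade-off (all figures below – prices, capacity and the 10% hot-data fraction – are assumptions for illustration, not from the survey):

```typescript
// Back-of-envelope comparison of all-flash vs hybrid capacity costs.
function arrayCost(totalGB: number, hotFraction: number,
                   flashPerGB: number, diskPerGB: number) {
  const allFlash = totalGB * flashPerGB;
  const hybrid = totalGB * hotFraction * flashPerGB +
                 totalGB * (1 - hotFraction) * diskPerGB;
  return { allFlash, hybrid };
}

// 100TB of data, 10% of it "hot", flash at $1/GB, disk at $0.10/GB.
console.log(arrayCost(100_000, 0.1, 1.0, 0.1));
// => { allFlash: 100000, hybrid: 19000 } – the hybrid array costs a fraction as much
```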

It’s a so-called hybrid solution. And of course Tegile recommends you buy its tuned-up, all-in-one hybrid arrays, which save you the trouble of building your own.

Tegile is not alone in the field, with Pure Storage having recently launched in Europe. Pure uses ordinary consumer-grade disks, which should make it even cheaper although price comparisons are invariably difficult due to the ‘how long is a piece of string?’ problem.

There are other vendors too but I’ll leave you to find out who they are.

From a consumer point of view though, where’s the beef? There’s a good chance you’re already using a hybrid system if you use a recent desktop or laptop, as a number of hard disk manufacturers have taken to front-ending their mechanisms with flash to make them feel more responsive from a performance perspective.

Hard disks are not going away, as their price per GB is falling just as quickly as flash’s, although the two technologies’ characteristics differ. There will, though, come a time when flash capacities are big enough for ordinary use – just like my laptop – and everyone will get super-fast load times and longer battery life.

Assuming that laptops and desktops survive at all. But that’s another story for another time.

Whom do you trust?

Keeping your data secure is something you need to be constantly aware of. Apart from the army of people out there who actively seek your credit card and other financial and personal details – not to mention the breadcrumbs that accumulate into a substantial loaf of data on social media – it’s too easy to give the stuff away all on your own.

It’s really all about trust. We’re not very good at choosing whom we trust, as we tend to trust people we know – or sometimes even just the people around us. As an example, I present a little scenario I encountered yesterday on a train.

The train divides en route, so to get to your destination, you need to be in the right portion of the train. An individual opposite me sat for 45 minutes through seemingly endless announcements – from the guard, from the scrolling dot-matrix screens, and from the irritatingly frequent automated system – all conveying the same information before, during and after the three or four stops before we arrived at the decision point about which part of the train to be in.

At the station where a decision had to be made, she leaned across and asked if she was in the right portion of the train for her destination.

Why? She would rather trust other passengers than the umpteen announcements. She’s not alone, as I’ve seen this happen countless times.

So it’s all about whom you trust. As passengers, we were trustworthy.

So presumably were the security researchers with clipboards standing at railway stations asking passengers for their company PC’s password in exchange for a cheap biro. They gathered plenty of passwords.

I recently left a USB phone charger in a hotel belonging to a major international chain. They said they would post it back if I sent them a scanned copy of my credit card to cover the postage. That they made the offer at all suggests there must be plenty of people willing to take the gamble that their email won’t be read by someone who shouldn’t see it. Not to mention what happens after the hotel has finished with the data: can they be sure the email would be securely deleted?

I declined the offer and suggested that this major chain could afford the £7 it would cost to pop it in the post. Still waiting, but not with bated breath. I don’t trust them.