Famous Hacks

Twitter, Facebook, NBC, the New York Times, Home Depot, Amazon, Staples, Sears, Neiman Marcus, Nordstrom and many, many more companies have concealed data breaches in recent times to protect their reputations. That roll call underlines a glut in hacking which small businesses seem to read as a signal that only big business is at risk.

However, the idea that “it won’t happen to me” just does not wash in 2015. True, by lagging behind the “chip and PIN” standards that so many countries adhere to, the US (which incidentally boasts more credit cards per head of population than any other nation) makes itself a soft target, and Europe perhaps attracts a little less attention as a result. When the US wakes up to its epidemic of credit card theft, hackers will move on.

For small businesses that still imagine it cannot happen to them, ignorance is bliss.

The truth is that you often do not know when the reaper has been, and if he has, he is probably still there. For now, the fashion is growing for watering hole attacks, where hackers probe large companies for soft spots that give them access to bulk customer records. The problem for hackers is what to do with such large databases. This is where people like you come in: hackers often store and/or process stolen data on third-party networks, so even if your own data records are not playing host to prying eyes, there are many more ways your IT might be helping “the dark side”.

The cost of cleaning up after high-profile hacks is enormous. Home Depot reported that its “hack” could cost up to £28 million. Do not think it cannot happen in North Wales on a scale that causes real damage: in 2002 an IT company in North Wales was taken for a substantial ransom. If there is any message in this, it is that weak targets get plucked first.

Below is a list of high-profile breaches. The regularity is remarkable. One conclusion we might draw is that the bigger the IT department and budget, the easier it is to hack. The more credible conclusion is that the press is only interested in high-profile hacks, and if this many IT departments are losing their shirts then, as Dirty Harry said, you have to ask yourself, “Do you feel lucky?”

As fibre broadband rolls out across the region and you are thinking about that cheap router, or whether you need that antivirus software now that Gmail does it for you, just bear in mind that you are not out of sight either.

Aug-2015: Hackers claim to have distributed the personal information of 33 million accounts via the dark web following an earlier attack.

Jun-2015: The US Federal Government’s Office of Personnel Management (OPM) discovered a breach of its systems affecting over 4 million past and present employees. The breach was discovered during an “aggressive effort” to update OPM’s security systems. The US Government alleged that the intrusion was orchestrated by China’s notorious PLA Unit 61398, which is believed to have systematically stolen hundreds of terabytes of data from at least 141 organisations around the world, according to BBC News. In the latest breach, the hackers targeted an OPM data centre housed at the Interior Department, according to the Washington Post. The database did not contain information on background investigations or employees applying for security clearances. OPM was reportedly hacked by the same group about a year previously: in the March 2014 breach, OPM officials discovered that hackers had breached an OPM system that manages sensitive data on federal employees applying for clearances, according to the Washington Post. That often includes financial data, information about family members and other sensitive details. Read more: OPM allegedly hacked by Chinese.

Apr-2015: France’s national TV network, TV5 Monde, disabled by Islamic State hackers

Feb-2015: Anthem hacked for 80 million user accounts

Jan-2015: US Military’s Central Command Twitter & Facebook feeds hacked

Jan-2015: Hackers steal more than £5m worth of bitcoin from Bitstamp

Dec-2014: Lizard Squad takes down Microsoft Xbox & Sony PlayStation

Nov-2014: Sony Pictures & “The Interview”

Oct-2014: Home Depot lapse compromises 56m credit cards

Sep-2014: Sears raided for undisclosed numbers of credit card records

Aug-2014: Celebgate – Apple’s iCloud targeted in password theft

Jul-2014: P F Chang’s POS machines hacked

Jun-2014: Domino’s Pizza hacked, with an estimated 650,000 French & Belgian customer records compromised

Feb-2014: Syrian Electronic Army hacks eBay & PayPal

Nov-2013: Retailer Target hacked for 40 million credit card numbers and 70 million customer account records

Jul-2013: Montana’s Dept of Health breached, losing 1.3 million patient records – perpetrators unknown


For backup, read “high availability”

Office 365 and similar technologies provide a workaround that renders the barbed issue of backups increasingly obsolete. In its place, high availability is the one overriding reason for businesses to relocate their email and data stores to solutions like Office 365.

The difficulty with backups is that the concept is fundamentally flawed; it is just that, until now, there has not been an alternative. Nor is this merely an end-user problem; even network engineers struggle. For example:

1. it is not in people’s genes to run backups reliably,

2. data stores are so large in today’s knowledge economy that it is impractical to execute time consuming backups,

3. backups are out of date as soon as they are completed because new data already needs appending,

4. we never know if backups will work,

5. if we adopt the conventional guidance to test backups, it means destroying a good data store to test an unknown quantity,

6. even if users escape the first five traps, since it takes so long to record complete backups, it follows that it takes at least as long to restore them. So, in a disaster recovery scenario, an organisation could waste weeks and possibly months restoring terabytes of data (see the rough sketch after this list). Even network administrators would have nothing to do but watch the wheels on the tapes go round and round.
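To put point 6 into numbers, here is a rough back-of-the-envelope sketch in Python. The 10 TB data store size and 160 MB/s restore speed are illustrative assumptions, not measurements:

# Rough estimate of how long a full restore takes.
# The figures below are illustrative assumptions, not benchmarks.

data_tb = 10                   # size of the data store in terabytes (assumed)
throughput_mb_per_s = 160      # assumed sustained restore speed in MB/s

data_mb = data_tb * 1_000_000  # terabytes to megabytes (decimal units)
seconds = data_mb / throughput_mb_per_s
hours = seconds / 3600

print(f"Restoring {data_tb} TB at {throughput_mb_per_s} MB/s takes roughly "
      f"{hours:.0f} hours ({hours / 24:.1f} days)")
# Roughly 17 hours for 10 TB at this speed, before verification, re-indexing
# and the inevitable failed tape are factored in.

Scale those assumptions up to a multi-terabyte estate on slower media and “weeks and possibly months” stops sounding like an exaggeration.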

A few years ago, large organisations started approaching this problem from another angle. The thinking was that backups themselves were not as important as keeping data available so that people could access information. Rather than “point-in-time” backups, engineers replicated or synchronised data stores in real time to other hardware, so that in the event of a server or data failure, users could fail over to the replicated services. This approach was labelled “high availability”. It gained traction as a model within organisational networks and, because the principle scales easily, it is an underlying reason for the success of today’s “cloud” services. Email is a critical application that depends on high availability. Here is an illustration of how network administrators span “Database Availability Groups” in Microsoft Exchange to replicate email databases across multiple servers and multiple locations so that there is no single point of failure:

microsoft exchange database availability groups
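The snippet below is a minimal sketch of the idea, written in Python purely for illustration (real Database Availability Groups are configured inside Exchange, not coded like this): every write is copied to several database copies on different servers, so losing one server does not lose the mail.

# Minimal sketch of the DAG idea: one active database copy plus passive copies
# on other servers, with every write replicated to all healthy copies.
# Illustrative only - real DAGs are an Exchange feature, not application code.

class DatabaseCopy:
    def __init__(self, server_name):
        self.server_name = server_name
        self.messages = []       # stand-in for the mailbox database
        self.healthy = True

    def write(self, message):
        self.messages.append(message)


class AvailabilityGroup:
    def __init__(self, copies):
        self.copies = copies     # the first healthy copy acts as "active"

    def deliver(self, message):
        # Replicate the write to every healthy copy, not just the active one.
        for copy in self.copies:
            if copy.healthy:
                copy.write(message)

    def active_copy(self):
        # If the active copy fails, service moves to the next healthy copy.
        return next(c for c in self.copies if c.healthy)


dag = AvailabilityGroup([DatabaseCopy("MBX01"), DatabaseCopy("MBX02"), DatabaseCopy("MBX03")])
dag.deliver("Invoice #123")
dag.copies[0].healthy = False            # simulate the active server failing
print(dag.active_copy().server_name)     # MBX02 takes over
print(dag.active_copy().messages)        # the message is still there

The point is not the code but the principle: no restore step is ever needed, because a full, current copy of the data already exists somewhere else.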

So what does this actually mean? In today’s terms, whereas a single point of failure like a server crash was once catastrophic, a server failure today usually means that someone – often an intern covering for a boss who has a more pressing date with a sand wedge – will get around to the “dud” server within a few days while the network’s remaining servers take up the slack. So sophisticated are today’s high availability services that servers diagnose their peer appliances and invoke automated action to ringfence corrupted devices, as sketched below.
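In spirit, that automated ringfencing is nothing more exotic than peers running continuous health checks on one another and pulling a failing member out of service. A toy Python sketch, with made-up server names and thresholds:

# Toy sketch of peer health monitoring: a node that misses too many
# heartbeats is "ringfenced" (taken out of rotation) automatically.
# Server names and the threshold are made up for the example.

MISSED_HEARTBEAT_LIMIT = 3

class Node:
    def __init__(self, name, reachable=True):
        self.name = name
        self.reachable = reachable       # pretend network probe result
        self.missed_heartbeats = 0
        self.ringfenced = False

def monitor(nodes):
    for node in nodes:
        if node.reachable:
            node.missed_heartbeats = 0
        else:
            node.missed_heartbeats += 1
            if node.missed_heartbeats >= MISSED_HEARTBEAT_LIMIT:
                node.ringfenced = True
                print(f"{node.name} ringfenced after {node.missed_heartbeats} missed heartbeats")

nodes = [Node("SRV01"), Node("SRV02"), Node("SRV03", reachable=False)]
for _ in range(MISSED_HEARTBEAT_LIMIT):
    monitor(nodes)

print([n.name for n in nodes if not n.ringfenced])   # ['SRV01', 'SRV02']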

This is good news for small business, which has neither the desire to hire the expertise required to do this in-house nor the money to invest in the hardware in the first place.

Instead, businesses are turning to solutions like Microsoft’s Office 365, whose gigantic economies of scale are equalled only by the sheer size of the multi-billion dollar investment behind them, providing data availability not in the order of two concurrent database copies per user, but ten.

How does Microsoft do this? Office 365’s solution is built around a globally distributed core of data centres. By “data centre”, Microsoft means a facility of at least a hundred thousand servers, and between ten and a hundred such facilities are online at any given time. Underneath these kingpins sits a hierarchy of edge nodes and metro solutions which gradually disseminate data towards the regions where users are located. Importantly, data transits Microsoft’s own fibre network until the last mile, which greatly improves prompt and clean delivery. This diagram shows how Microsoft’s topology works (click on the image to enlarge):


office 365's network topology

Office 365’s network stands apart from Microsoft’s own network, and the American, European, and Asian regions are interconnected.
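As a rough illustration of how that hierarchy behaves from a user’s point of view, a client is simply steered to whichever regional “front door” answers fastest. The region names and latency figures in this Python sketch are invented for the example and bear no relation to Microsoft’s actual routing, which relies on DNS and network-level techniques rather than application code:

# Invented latency measurements (milliseconds) to regional front doors.
measured_latency_ms = {
    "Europe West": 18,
    "Europe North": 27,
    "US East": 92,
    "Asia East": 210,
}

def pick_front_door(latencies):
    # Steer the client to the region with the lowest round-trip time.
    return min(latencies, key=latencies.get)

print(pick_front_door(measured_latency_ms))   # "Europe West"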

So, where does a user’s data reside and is there a risk of data bottlenecks?

Data does not look the way you might imagine it to by the time it reaches a data centre, so the point is less important than you might think. The biggest consideration – where users are geographically based – has less to do with the hops required for data to reach devices than with considerations like data protection laws and practices, which differ from region to region. Office 365 provides services like retention policies and data leakage prevention to help local network administrators provision email and data management for end users compliantly.
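As a toy illustration of what a retention policy boils down to, the sketch below checks whether an item is old enough to purge. The seven-year period and the tag name are invented for the example and are not Office 365 defaults:

# Toy retention check: an item may only be purged once it is older than the
# retention period attached to its tag. Figures and tag names are invented.
from datetime import date, timedelta

RETENTION_PERIODS = {"finance-records": timedelta(days=7 * 365)}

def can_purge(created: date, tag: str, today: date) -> bool:
    return today - created >= RETENTION_PERIODS[tag]

print(can_purge(date(2010, 3, 1), "finance-records", date(2015, 6, 1)))  # False
print(can_purge(date(2007, 3, 1), "finance-records", date(2015, 6, 1)))  # True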

So data is primarily situated within a user’s region for a variety of reasons, and to improve availability, tools are provided to minimise the information flowing between users and data centres. For instance, when working on a Word document, the entire file is not written to storage every time a user clicks “save”: only changes are synchronised. As data accumulates, it is replicated to a mirror within the primary data centre, where point-in-time backups are kept in any event, and further techniques are deployed to populate data between geographically remote locations. In this scenario, deleting documents by mistake is harder for end users to do because at least nine copies remain in place, and even when those disappear, there is still a point-in-time backup to recover from. This diagram illustrates the scheme (click on the image to enlarge):

office 365 logical architecture
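The “only changes are synchronised” behaviour works, in spirit, like the block-level comparison below. This is a simplified Python illustration only; the actual sync protocol behind Office 365 is considerably more sophisticated:

# Simplified sketch of block-level delta sync: split a file into fixed-size
# blocks, hash each block, and upload only the blocks whose hashes changed.
import hashlib

BLOCK_SIZE = 4096  # bytes

def block_hashes(data: bytes):
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def changed_blocks(old: bytes, new: bytes):
    old_hashes = block_hashes(old)
    new_hashes = block_hashes(new)
    # Any block that differs, or did not exist before, needs to be uploaded.
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]

previous = b"A" * 10000               # the version already held in the data centre
current = b"A" * 8192 + b"B" * 2000   # the user edits only the tail of the file

print(changed_blocks(previous, current))   # [2] - only one block is re-uploaded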

Office 365’s Exchange email services operate a little differently, but again with high availability in mind for such a critical application. It is worth mentioning that although Office 365 for Small and Medium Business still vexes users with building their own email archiving, Office 365 Enterprise plans include “in-place” archiving, which lets users take archiving off their desktop copies of Outlook altogether. I will cover the implications of that in another article.

The biggest worry for business users is the thought of surrendering data to a third party. Yet law practitioners and accountants, whose authorising bodies require high standards of compliance, already use Office 365 because in terms of document discovery, data retention and other factors it is difficult to match Office 365’s services. Indeed, Office 365’s Exchange mail services are, at the time of writing, the only commercial solution that satisfies Federal and EU requirements for departmental deployment “off the shelf”.

For more information about Office 365, Office 365 network administrator help, and trial subscriptions, please contact Steve Galloway on 07834 461 266 or Fred Dreiling on 07919 340 570.

