Public Cloud & Data Protection

The accelerating trend for organisations to move data, including customers’ personal data, to public cloud environments or other off-premises services raises an important question: who is responsible for the protection of a customer’s personal data?

The principle behind data privacy is that the information we occasionally give to others about ourselves is ours. If we give information to other entities, such as companies we buy goods and services from, that information should only be used for the purpose for which we gave it.

We could expand the scope of that remit, but let’s keep this simple. If we give our personal information to an appliance company when buying a washing machine, the company should only use that information to talk to us about washing machines, and when the washing machine is dead, the data should be deleted. If the washing machine company gives our information to another party without our permission, or uses it for another purpose, it is in breach of data protection laws.

The standard applies not only to large businesses but equally to small businesses that hold data about their customers. So the question is: when small businesses use external email suppliers and public cloud services, who is responsible for keeping this data secure?

To understand the answer, we need to understand two concepts – data privacy, and data security.

Data privacy is the concept we use to explain our entitlement; it is an academic, or intellectual, proposition. As property owners, our house is our castle. However, that intellectualisation does not mean our house is safe. To enforce the concept, we have to secure it with locks.

This leads us to data security. Data security is a physical process, like securing our house with locks or other systems; it provides the tools that give us the confidence to know our rights are protected. The terms data privacy and data security are used interchangeably, but they describe two different propositions, and this brings us to the crux of the issue of who is responsible for the data that individuals submit to a small business. Until now, many small business owners kept customer data on premises, so responsibility appeared clear cut: the data rests with the small business, therefore the business is responsible for it.

As small businesses use free email and storage services like GMail, Live, and Yahoo, and others move increasingly towards professional public cloud services, there is a tendency for small businesses to imagine that responsibility migrates with the data to the public services they use. Unfortunately, this is not the case.

In the washing machine example, the company that holds a person’s information is what UK law calls a “data controller”. Responsibility for the protection of that data rests exclusively with the data controller, even if the information is stored with a third party elsewhere.

By way of example, Theo Watson, an attorney for Microsoft, recently cited a case where an NHS trust awarded a contract to an IT firm to dispose of computer equipment. Unknown to the Trust, the IT firm subcontracted the disposal to a third party, who sold the computer equipment instead. The computers had not been purged, and sensitive patient information made its way into the public domain. UK authorities determined that the responsible party was neither the contractor nor the subcontractor, but the NHS. Although the NHS had delegated the job to a contractor, it was ultimately responsible for knowing what happened to the data it held on behalf of its patients. If that meant watching the hard drives being physically destroyed, it should have made sure that happened.

The responsibility of data controllers is absolute, and it draws the role of free email services and public cloud services into sharp focus. Small businesses often outsource IT work because they lack the expertise or the finances to handle IT in-house. Yet large suppliers like Google are quick to point out that breaches involving data they hold on behalf of small businesses are not their responsibility. So how does a small business protect itself?

In the final analysis, the answer lies in small business owners understanding the role they play in handling customer data, and in having confidence in suppliers who provide the security necessary to protect data privacy rights on their behalf as an incentive to encourage business, rather than as a contractual offering.

Premium suppliers like Microsoft are certified to ISO 27001, HIPAA, FISMA, and FERPA for their Office 365 solutions. Their participation in the “Safe Harbour” protocols enables the company to transfer data between EU and US jurisdictions within the confines of regional legal governance. At the time of writing, Office 365 is the only “off the shelf” email product that meets the regulatory governance required by US Federal Government department buyers and EU agencies. The same products are available for public purchase and include Microsoft’s extensive libraries of transport rules, which help not only with observance of data privacy but also with automated management of data leakage such as credit card numbers, National Insurance numbers, and financial services information. Consequently, IT pros recommend services like Microsoft Exchange and Office 365 because there is confidence in Microsoft’s efficacy as far as observance of data protection principles is concerned.

Rather than relying on ISO certifications alone, Microsoft makes notable efforts to inform users about issues which are increasingly relevant as businesses move services online. Valuable resources are available at its Office 365 Trust Centre.

On the other hand, services like GMail, Yahoo, and others appear to sail closer to the wind. One reason is that many of the services small businesses use were not designed for business use in the first place. Whatever Google’s relationship with regulators over its evolving data policies, the weight of litigation by British, French, other European, American, and further regulators hardly endorses Google’s ethical efficacy. Consequently, its growth in the business world is stunted compared with Office 365’s performance.

Regardless, responsibility for securing your customers’ personal information rests with you. In choosing your supplier, choose wisely, and choose an IT supplier who can inform you.

 

Exchange Email – data leakage & loss protection

From October 1st, ComStat can provide support to help organisations and users manage data leakage and data protection.

On a large scale, data leakage is a serious issue which finds its way into national headlines. American retailer Target faced enormous losses and serious reputational damage in November 2013 when it lost 40 million credit card numbers to hackers.

Small businesses may argue that they do not face such risks. However, small businesses are subject to the same data protection governance regarding due diligence for personal information, and even if a small business does not store credit card numbers electronically, users can still “leak” sensitive data to third parties in ways that come back to haunt the business.

ComStat network administrators have access to a large array of geographically relevant “policies” which can be established to monitor outgoing email for sensitive information such as credit card numbers, driving licences, and passwords – in fact, just about anything. On identification of an imminent “leak”, users are notified with a number of options:

1. Users can override and permit transit of email, although the event is logged,
2. Sensitive information can be masked by the system,
3. Sensitive information can be deleted,
4. Entire emails can be deleted with user notification.
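By way of illustration only – this is a minimal sketch, not ComStat’s actual implementation – a policy of this kind might detect and mask credit card numbers in outgoing text, using the standard Luhn checksum to reduce false positives:

```python
import re

# Matches 13-16 digit runs, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to tell real card numbers from random digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def mask_cards(text: str) -> tuple[str, int]:
    """Mask Luhn-valid card numbers in outgoing text; return (text, leak count)."""
    leaks = 0
    def repl(match: re.Match) -> str:
        nonlocal leaks
        if luhn_valid(match.group()):
            leaks += 1
            return "****-****-****-" + re.sub(r"\D", "", match.group())[-4:]
        return match.group()  # not a valid card number: leave it alone
    return CARD_RE.sub(repl, text), leaks
```

A real policy engine would also log the event and offer the override and deletion options above; this sketch covers only masking (option 2).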

ComStat’s engineers work with businesses on a strategy of using these kinds of tools to educate users about risk while enabling them to conduct their business with minimal obstruction.

In addition to monitoring email text, the services also extend to identifying attachments, which might comprise forms such as applications, patents, etc.

Data leakage and data protection issues are difficult to measure because the risk of loss is usually hard to quantify until a significant event occurs, by which time businesses can be exposed to substantial threat. As a lowest common denominator, however, businesses have a strict obligation to protect customer and third party personal information, and free email services like GMail, Yahoo, and Live increasingly fail to provide tools to manage the responsibilities European and UK law impose on businesses.

Although these services are aimed primarily at ComStat’s Exchange email users, the same tools are being expanded in 2014 and 2015 to encompass raw data storage such as document libraries, spreadsheets, PDFs, etc.

Please contact us to find out more about how our data protection services can help you.

Exchange Email – mobile device management

From Sept 25th, ComStat is providing management services for users and organisations who need help managing business information on mobile devices like laptops, tablets, and mobile phones.

While users increasingly connect to organisational data from multiple devices, protection of sensitive business and personal information has fallen behind that curve. Losing a mobile phone is one thing. Loss or theft of a mobile phone which holds business data is a potentially serious issue, and one which can put entities in breach of data protection laws.

exchange mdm

ComStat’s mobile device management services enable us to manage an organisation’s mobile “fleet” in a number of ways:

1. Controlling access to services by equipment brand, model, or user
2. Implementing selective or global PIN access to mobile devices
3. Temporarily restricting services from mobile devices
4. Wiping all information associated with user accounts.

For instance, if Alex loses a mobile phone in Frankfurt, he can probably get the SIM stopped rapidly. However, without management tools of some kind in place, whoever has custody of the phone has potential access to everything on Alex’s desktop at work. On notification of the loss, ComStat engineers can invoke any of the techniques above to restrict or stop all services associated with Alex’s account instantly.
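The scenario above can be sketched as a toy policy check. Device names and policy values here are hypothetical, and real MDM policies live in the management service, not in client code:

```python
from dataclasses import dataclass

# Hypothetical policy: only these models may connect (technique 1 above).
ALLOWED_MODELS = {"iPhone 12", "Galaxy S21"}

@dataclass
class Device:
    owner: str
    model: str
    has_pin: bool     # technique 2: PIN enforcement
    wiped: bool = False

def access_allowed(device: Device) -> bool:
    """A device may sync company data only if its model is approved,
    a PIN is set, and it has not been remotely wiped."""
    return device.model in ALLOWED_MODELS and device.has_pin and not device.wiped

def remote_wipe(device: Device) -> None:
    """Flag the device so its next sync attempt is refused and account
    data is erased (techniques 3 and 4 above)."""
    device.wiped = True
```

When Alex reports the loss, the administrator wipes the registered device, and every subsequent connection attempt fails the access check.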

The issue of “mobile” data protection is important for another reason. Entities who give you or your organisation access to their personal data expect a duty of care requiring the “custodian” to use the data for the purposes it was given and to protect it. Where mobile devices are lost, information which in law belongs to your customers and which falls into someone else’s hands may leave you or your organisation with reputational and potentially legal liability.

Please contact us for more information about data loss protection and mobile device management services.

Office 365’s data centres

Office 365’s low cost points and deceptive ease of use mask a multi-billion dollar global investment in fibre optics, hardware, and corporate-class software tools which makes Microsoft’s Office products the dominant choice for corporates and governments, and a compelling offering for small and medium businesses. It is not so much a matter of adulation for Microsoft as a matter of fact: the rest of Microsoft’s competition combined does not match its dominance in business email and productivity apps. So, what does Microsoft do with everyone’s data? Let’s take a look at a Microsoft data centre.

Microsoft’s Senior Operations Program Manager, Alistair Speirs, said in June 2014 that one reason clients decided to “onboard” their services from their own premises installations was nothing more complicated than the concern that adding more appliances to their server room created a real risk that the floor could collapse on the staff underneath. The reasons IT departments move to Microsoft’s cloud services are not always clear cut, then, but customers are relying on Microsoft’s provisioning for critical data management.

Alistair used this anecdote to explain Microsoft’s approach to building their system from the ground up. If it is already on the ground, it is hard for anything to fall through it. So much is common sense, and Office 365’s architecture really is rooted to the ground.

The data centres and server farms are built modularly: they are designed for ease of access and uniformity in all respects. Power and back-up power supplies “fail over”, so that redundant equipment is already operational in the event of a breakdown, and faulty units are simply swapped out and rotated so that repairs can be handled elsewhere.

data centre power supply

Modularity plays an important role in the server arrays, too. Microsoft takes delivery of completely configured server racks from hardware vendors. If more disk space is needed, engineers do not install more drives. Instead, the facility is “stamped out” and another module, or ITPAC, is installed. Here, engineers lower an air handling unit onto a pre-assembled rack at a facility in Quincy, in eastern Washington.

ITPAC cooling unit

When fully assembled, these installations are called “ITPACs”. ITPACs are built from four components: an IT load, an evaporative cooling unit, an air handling unit, and a mixing unit. The evaporative cooling unit has a mesh screen through which water slowly drips to keep humidity consistent. Air blows naturally through the evaporative cooling unit; the IT load draws that air through, and the air handling unit maintains a pressure difference between the outside and the inside of the data centre. Air is pulled naturally through the evaporative cooling units to cool the servers without powered fans. The air handling unit pulls air out and pushes some of it back to the mixing unit; that is how engineers control temperature. The attraction of this model is that air conditioning is no longer a consideration, and Microsoft’s server farms and data centres live outside. Below is the Quincy data centre in eastern Washington, where the climate is similar to Madrid’s. Once the concrete is laid, an ITPAC takes about four hours to install and needs only three connections: the ping, the pipe, and the power. It is remotely monitored, all operating from the unit itself. Microsoft’s installation incorporates some nifty tricks to mitigate energy consumption further and keep end users’ costs down. In this installation, one building block comprises about 25,000 servers.

an assembled ITPAC

Inside the data centre, the uniformity of the hardware is unmistakable. Access to hardware is restricted by “rack” and on need-to-know authorisation. For instance, engineers from Microsoft’s own corporate installations who do not work on Office 365 would not have access to these facilities.

server room rack

Lastly, in addition to uniform hardware and components, seemingly peripheral issues like co-ordination of cabling colour and cabling runs follow strict protocols, all in the interest of avoiding what engineers call configuration drift:

data centre cabling

Microsoft’s data centre in Quincy is part of a global network comprising a core of 10 to 100 data centres at the time of writing, with subsidiary “Edge Nodes” and other “Metro Solutions” providing a mechanism for delivering content to last-mile user end points within its Office 365 wide area network.

office 365’s global infrastructure

 

 

For backup, read “high availability”

Office 365 and similar technologies provide a workaround that renders the barbed issue of backups increasingly obsolete. In its place, high availability is the one overriding reason for businesses to relocate their email and data stores to solutions like Office 365.

The difficulty with backups is that the concept is fundamentally flawed; it is just that, until now, there has not been an alternative. Nor is this only an end user problem; even network engineers struggle. For example:

1. it is not in people’s genes to run backups reliably,

2. data stores are so large in today’s knowledge economy that it is impractical to execute time consuming backups,

3. backups are out of date as soon as they are completed because new data already needs appending,

4. we never know if backups will work,

5. if we adopt the conventional guidance to test backups, it means destroying a good data store to test an unknown quantity,

6. even if users escape the first five traps, since it takes so long to record complete backups, it follows that it takes at least as long to restore them. So, in the event of disaster recovery, an organisation could waste weeks and possibly months restoring terabytes of data. Even network administrators would have nothing to do but watch the wheels on the tapes go round and round.

A few years ago, large organisations started approaching this problem from another angle. The thinking was that backups themselves were not as important as the availability of the data people need to access. Rather than taking “point-in-time” backups, engineers replicated or synchronised data stores in real time to other hardware, so that in the event of a server or data failure, users could fail over to replicated services running in real time. This approach was labelled “high availability”. It gained traction as a model within organisational networks, and because the principle scales easily, it is an underlying reason for the success of today’s “cloud” services. Email is a critical application that depends on high availability. Here is an illustration of how network administrators span “Database Availability Groups” in Microsoft Exchange to replicate email databases across multiple servers and multiple locations so that there is no single point of failure:

microsoft exchange database availability groups
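The replication principle behind a Database Availability Group can be sketched in a few lines. This is a toy model of the idea, not Exchange’s actual implementation:

```python
class MailboxCopy:
    """One copy of a mailbox database on one server."""
    def __init__(self, server: str):
        self.server = server
        self.messages: list[str] = []
        self.healthy = True

class AvailabilityGroup:
    """Toy DAG: every write is replicated to all healthy copies, and reads
    fail over to the first healthy copy, so no single server is critical."""
    def __init__(self, servers: list[str]):
        self.copies = [MailboxCopy(s) for s in servers]

    def deliver(self, message: str) -> None:
        # Replicate every delivery to all healthy copies in real time.
        for copy in self.copies:
            if copy.healthy:
                copy.messages.append(message)

    def active(self) -> MailboxCopy:
        # Fail over: serve users from the first healthy copy available.
        for copy in self.copies:
            if copy.healthy:
                return copy
        raise RuntimeError("no healthy database copy available")
```

If one server fails, the group simply serves mail from the next healthy copy, which already holds everything delivered so far.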

So what does this actually mean? In today’s terms, whereas a single point of failure like a server crash was once catastrophic, a server failure today usually means someone – usually an intern covering for a boss who has a more critical date with a sand wedge – will get around to the “dud” server within a few days while the network’s other servers take up the slack. So sophisticated are today’s high availability services that servers diagnose their peer appliances and take automated action to ring-fence corrupted devices.

This is good news for small businesses, which have neither the desire to hire the required expertise in-house nor the money to invest in the hardware in the first place.

Instead, businesses are turning to solutions like Microsoft’s Office 365, whose gigantic economies of scale are equalled only by the sheer size of the multi-billion dollar investment that provides data availability not in the order of two concurrent databases for users, but ten.

How does Microsoft do this? Office 365’s solution is built around a globally distributed core of data centres. By “data centre”, Microsoft means a facility of at least a hundred thousand servers; between ten and a hundred such facilities are online at any given time. Underneath these kingpins sits a hierarchy of edge nodes and metro solutions which gradually disseminate data towards the regions where users are localised. Importantly, data transits Microsoft’s own fibre network until the last mile, which greatly improves prompt and clean delivery. This diagram shows how Microsoft’s topology works (click on the image to enlarge):

 

office 365's network topology

Office 365’s network stands apart from Microsoft’s own network, and the American, European, and Asian regions are interconnected.

So, where does a user’s data reside and is there a risk of data bottlenecks?

By the time it reaches a data centre, data does not look like you would imagine, so the point is less important than you might think. The biggest consideration – where users are geographically based – has less to do with the hops required for data to reach devices than with considerations like data protection laws and practices, which differ from region to region. Office 365 provides services like retention policies and data leakage management to help local network administrators provision email and data management for end users compliantly.

So data is primarily situated within a user’s region for a variety of reasons, and to improve availability, tools are provided to minimise information flow between users and data centres. For instance, when working on a Word document, the entire file is not written to storage every time a user clicks “save”: only the changes are synchronised. Once data starts compiling, it is replicated to a mirror within the primary data centre, where point-in-time backups are kept in any event, and techniques are deployed to populate data between geographically remote locations. In this scenario, deleting documents by mistake is harder for end users to do, because at least nine copies remain in place, and even if those disappear, there is still a point-in-time backup to recover from. This diagram illustrates the scheme (click on the image to enlarge):

office_365_logical_2
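The “only changes are synchronised” idea can be illustrated with a simple block-hashing sketch, similar in spirit to rsync-style delta sync. This is an illustration of the general technique, not Microsoft’s actual sync protocol:

```python
import hashlib

BLOCK = 4  # tiny block size for the example; real systems use kilobytes

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block of a file."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes) -> list[int]:
    """Indices of the blocks that must be re-sent after an edit."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]
```

Editing one block of a large document means only that block crosses the wire on “save”, which is why the information flow between users and data centres stays small.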

Office 365’s Exchange email services operate a little differently, but again with high availability in mind for such a critical application. It is worth mentioning that although Office 365 for Small and Medium Business still vexes users with building their own email archiving, Office 365 Enterprise solutions include “inline” archiving which enables users to completely remove archiving from their desktop versions of Outlook. I will cover the implications of that in another article.

The biggest worry for business users is the thought of surrendering data to a third party. Yet, law practitioners and accountants whose authorising bodies require high standards of efficacy already use Office 365 because in terms of document discovery, data retention, and other factors, it is difficult to match Office 365’s services. Indeed, Office 365’s Exchange mail services are the only commercial solution at time of writing that satisfies Federal and EU requirements for departmental deployment “off the shelf”.

For more information about Office 365, Office 365 network administrator help, and trial subscriptions, please contact Steve Galloway on 07834 461 266 or Fred Dreiling on 07919 340 570.

 

 

Exchange Server Deployment Assistant

Microsoft’s Exchange Server Deployment Assistant helps engineers prepare for migration of Exchange Server environments to current versions of Exchange. Migration has always been an obstacle for organisations and engineers alike, and even in 2014, organisations were running platforms dating as far back as Exchange 2010, 2007, and 2003.

exchange deployment assistant

It is understandable why earlier versions of Exchange pose difficult choices – in 2003, nobody understood how cloud-based infrastructures would develop commercially. Exchange 2010 was the first platform designed with consideration for future cloud developments. Whereas backups were once a major consideration, the evolution of Exchange’s Database Availability Groups (DAGs) means that, with mail databases replicating across multiple servers, backup practices that sag under ever-increasing data volumes have given way to the alternate pursuit of high availability services which make single points of failure a minimal risk.

Plotting a migration path is not for the faint-hearted. Neither Exchange 2003 nor Exchange 2007 can be migrated directly to 2013. Exchange 2007 needs a path of some description via 2010, and problems with 2003 migration can be alleviated with some nifty tricks in Exchange 2013 Online, porting via a 2010 Client Access Server (CAS) and then conducting the migration to Exchange 2013 Online.

Microsoft’s Exchange Server Deployment Assistant gives both engineers and IT advisors an invaluable roadmap for bringing services into line with today’s powerful functionality.

‘Appy Days for Office 365

Office 365 users can tap into the Office Store’s app inventory to customise Office 365.

The facility enables authorised users to install apps from the Office 365 store.

Popular apps include Microsoft’s “Bing Maps”, which detects addresses in email content and gives users options to open maps within Outlook Web Access (OWA). Another app which admins love is a tool for rendering email headers, which for some reason Microsoft has made so difficult for engineers to access in later versions of Outlook.

The real value comes for organisations whose admins can install apps within their Office 365 environment, and either make apps optionally available to end users or push apps directly to end user accounts. This “server room” capability hints at Office 365’s more extensive features available to administrators, who have access to Exchange 2013’s full suite of management tools, ranging from user account management to archiving policies and even options for managing, restricting, or wiping data on users’ connected mobile phones and tablets following loss or theft.

For a thirty-day trial of Office 365, or for a demonstration of services, please contact Steve Galloway on 07834 461 266.

Message Header Screen shots

The Message Header Analyzer runs as a drop-down window in OWA’s email reading pane. Users click open the tool for a fully featured report on transport, anti-spam data, and other headers which help engineers isolate delivery issues. This screen shot shows summary header information – click on the images to see full-resolution detail:

The Message Header Analyzer for OWA is a fully specified tool for examining various header types not normally available. This image shows the summary header.

Message Header Analyzer also reports on header information not usually available in mainstream services like GMail, Windows Live, and Yahoo. Data is broken down into categories to help engineers understand current or potentially developing spam problems and transit information. In this screen shot, we have opened the “Original Headers” tab to capture raw data which is meaningful to engineers when troubleshooting. Click on the images to see full-resolution detail:

appy_days_message_header_2
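For readers curious what engineers do with raw header data, here is a short sketch using Python’s standard email module to pull out the fields an analyser reports on. The message itself is made up: the addresses, host names, and IDs are illustrative only:

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

# A fabricated raw message for illustration.
RAW = """\
Received: from mail.example.com (mail.example.com [203.0.113.5])
Date: Mon, 23 Jun 2014 10:15:00 +0000
From: alice@example.com
To: bob@example.co.uk
Subject: Quarterly figures
Message-ID: <abc123@example.com>

(body omitted)
"""

msg = message_from_string(RAW)
sender = msg["From"]
hops = msg.get_all("Received") or []          # each mail server adds one Received line
sent_at = parsedate_to_datetime(msg["Date"])  # timezone-aware datetime
```

In a delivery investigation, the Received lines are read bottom-up to trace each hop a message took, and the Message-ID ties together log entries across servers.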

 

 
