WordPress, contact forms, & the smoking gun

Website contact forms lead owners into dangerous territory for a variety of reasons, not least of which is that they are the first port of call for “bots” – or even a plain old mischievous pair of hands – to infiltrate.

The problem with contact forms is that there is no mechanism to report the very kind of failure that is so troublesome. Contact forms can seemingly operate fine for months, and users assume that the reason the contact form is quiet is that the website is quiet. There is, however, a darker reason.

More often than not, the contact form is being used, but the website owner receives nothing.

There is a pattern in this. Firstly, if contact forms were really that unreliable, people would not use them; the truth is that well designed and widely subscribed plugins do their job very well. Secondly, email usually leaves the website intact. We know this because emails that fail at the website are invariably well documented and reported to both sender and receiver. Thirdly, the problem more often than not coincides with GMail, Live (Hotmail), Yahoo, and the other bulk email services that small business users operate to keep costs down, and it is here, at the perimeter of the website owner’s provider, where the smoking gun smoulders.

By way of illustration, the flowchart below portrays how Microsoft’s Exchange Online service manages incoming email. GMail and other providers use similar processes.

[Flowchart: Exchange Online’s incoming email filtering process]

Users might be surprised at the scale of the processes illustrated. Notably, the flowchart focuses on the recipient’s services: a successfully delivered email must transit 17 different tests once it reaches the end user’s supplier. This is partly because systems like Exchange Online give end users scope to manually tailor the filtering to their own needs. GMail and the others use similar approaches, but critically their tools for customising filtering to end user needs range from minimal to none. When an email is sent from a WordPress contact form plugin, it is already authenticated, so it does not go through a testing process at the sending end. It just “goes”, and responsibility for handling it falls to other parties from that point. At the receiving end, it takes only a few emails – as few as 3 or 4 within the space of a day at Hotmail – to trigger a block at the first evaluation. When this happens, end users receive no notice, not even a delivery to their own spam box.
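The authentication handover described above is where things typically unravel: a web server sending mail “from” a free-mail address is usually not listed in that provider’s SPF policy, so the receiver’s very first test fails. The sketch below is a heavily simplified illustration of that one test, not a full SPF evaluation per RFC 7208 (it handles only literal ip4: mechanisms, and all IP addresses are invented for illustration):

```python
# Simplified sketch of the SPF test a receiving mail server applies.
# Real SPF evaluation (RFC 7208) also resolves "include:", "a", and
# "mx" mechanisms via live DNS lookups; this toy version handles only
# literal "ip4:" mechanisms and the trailing "all" qualifier.

import ipaddress

def check_spf(sending_ip: str, spf_record: str) -> str:
    """Return 'pass', 'softfail', 'fail', or 'neutral' for a sending
    IP checked against a simplified SPF TXT record."""
    ip = ipaddress.ip_address(sending_ip)
    for mechanism in spf_record.split()[1:]:        # skip the "v=spf1" tag
        if mechanism.startswith("ip4:"):
            if ip in ipaddress.ip_network(mechanism[4:], strict=False):
                return "pass"
        elif mechanism in ("~all", "?all"):
            return "softfail"                       # often quarantined silently
        elif mechanism in ("-all", "all"):
            return "fail"                           # rejected outright
    return "neutral"

# A web host at 203.0.113.10 sends a contact-form email "from" an
# address whose SPF policy lists only 198.51.100.0/24. The message
# soft-fails and is likely to be junked without notice.
record = "v=spf1 ip4:198.51.100.0/24 ~all"
print(check_spf("198.51.100.7", record))   # pass
print(check_spf("203.0.113.10", record))   # softfail
```

The fix in practice is to add the web server to the domain’s SPF record, or better, to send the form’s email through an authenticated mailbox on a service the owner controls.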

Contact form email outages pose serious commercial risks to owners. Site visitors assume their message got through and that nobody cared, and where contact forms support event calendaring or tangible purchases, the consequences can be terminal for organisers or sellers. Whatever the purpose of the form, the reputational damage to an otherwise efficient back office is difficult to undo once done, and as long as users rely on free email services there is little that website designers and network engineers can do to circumvent the issue – which unfortunately tends to breed misplaced attitudes towards the IT providers. The answer lies in email: upgrading services and, just as importantly, managing the associated email accounts to keep addresses “clean”.

Contact forms are so widely used because publishing an email address in recognisable characters on a web page or blog is the easiest way to expose that address to abuse. Contact forms mask the website owner’s email address so that it is invisible to public scrutiny. Some will argue that their inboxes are well managed by, for instance, GMail, so that spam is not an issue for them. This is dangerous territory, however. Once an email address is in the open, it becomes one of the very metrics that security services use to measure the trustworthiness of email, and apart from the spam that floods a “contaminated” address, it is not unknown for a site owner’s contact form to lose functionality because the owner’s own antivirus tools have blacklisted the very email address the owner relies on.
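Where an address must appear on a page at all, one common (if imperfect) masking trick is to encode each character as an HTML entity, so the text is invisible to naive scrapers while browsers render it normally. A contact form, which never publishes the address, is stronger. A minimal sketch, with an invented address:

```python
# Encode an email address as HTML character entities. Browsers render
# the entities as the original text; simple harvesting bots that grep
# raw HTML for "user@domain" patterns miss it. Determined scrapers
# can decode entities, so this only raises the bar.

def obfuscate_email(address: str) -> str:
    return "".join(f"&#{ord(c)};" for c in address)

print(obfuscate_email("info@example.co.uk"))
# &#105;&#110;&#102;&#111;&#64;&#101;... (one entity per character)
```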

Once, users could rely on freely available services like GMail, Yahoo, Hotmail, etc. There is a widening gap, however, between the reliability of email traffic delivered to “subscription” users who are given dedicated email services and tools (e.g. configurable antivirus/spam and even “connectors” which effectively tunnel emails between trading partners or configured resources like contact forms) and “free” services, whose solutions are provided “as is” and which do not provide adequate tools for customisation of email services.

Others point to social media, which is fine if everybody subscribes to Facebook, for example. Really, businesses need a minimum variety of contact points, and contact forms are not easily left out of the solution.

Hard-pressed website owners might also remember the days when people just picked up the phone and called someone. True. One reason we use the web, though, is to reach a geographically broader audience, and some network engineers would argue that people actually do call – at 4:00am!

It is frustrating for users that advances in web authoring tools like WordPress empower small business users, while the increasing complexities of email and other technologies still make it difficult for those same users to compete on an even playing field with larger organisations. Small business solutions are judged by the same standard that large organisations achieve with hefty investments. Although some users rely on freely available niche providers, some of whom are pretty good, the question looms: how long can a loss-sustaining business model last in the first place, and what happens if it goes down, taking an end user’s services with it?

In terms of email, the idea of a two-tier “Internet” is already here. The good news for small businesses committed to online services is that solutions like Exchange Online are available at a fraction of the price that corporates and government departments have paid over the last two decades to develop these services. Exchange Online starts at £2.50 per month, for instance, although it still needs some professional support to pull all the levers.

In a world of choice, website owners can continue to persevere with freely available email and many will, partly because they do not know their email provider is trashing their contact form email. In this case, though, their Gmail account is not serving much purpose, either.

For advice with issues about contact forms, please contact either Fred Dreiling or Steve Galloway using our contact page.

Onboarding, and the case for supported services

“There is an awkward inverse in the relationship between technology, cost, and risk. While ever more sophisticated services come to market at ever lower costs, the inherent risks associated with these software solutions increase while user awareness remains behind the curve”.

Steve Galloway discusses the evolving nature of professional IT network support/administration and the changing risks that small businesses face.

Where “fast” broadband and “cloud” were once the buzzwords of marketing types in search of fertile sales pastures, today “onboarding” is the fashionably coined phrase, describing the process by which businesses migrate their data and other IT resources to mature cloud solutions like Office 365. Two years ago onboarding was a leading edge concept in commercial terms. The principle was not, in fact, a new concept to IT; the only changes were those in bandwidth and computing availability that made the services possible.

Today, onboarding’s trickle has turned into a tidal swell. The economies of scale that cloud services bring to businesses large and small are compelling, yet organisations struggle to keep up with toxic hazards lurking in data protection laws and proliferating accessibility.

This is not just a problem for big business. It is worth sitting up for: apart from legal liabilities which small businesses face for negligent treatment of customers’ personal data, the financial and reputational consequences following “leakage” of a firm’s own sensitive data can be ruinous, and the source could be as seemingly innocuous as your own mobile phone.

Historically, the network engineer’s bêtes noires were the office printer, and email. Even in the era of “cloud”, when the paperless office’s Promised Land has finally dawned, printers are still the benchmark that bookkeepers and office managers use to gauge network performance. Office printers still vex users and support teams alike: printers jam when you need them most, and for all the remote tools in the world you still have to hear them and see the printer output to know that they are fixed.

Yet printers stand a distant second to email. There are two ends of this spectrum for engineers. The first is the impossible support request along the lines of “somebody (read: unknown person) sent me an email and I didn’t get it (potentially as long as a year or more ago)”. The second is the ominous portent that heralds a black day for server room teams: “an email I just sent to (an important trading partner) has just bounced with the message ‘return to sender, IP address blacklisted’”. Somewhere in between lies the website contact form that quietly ceases to work.

Workgroup printers were designed with critical performance in mind, but machines are just that, and it is perhaps easier for users to live with frustrating printer failures. After all, mechanical failure is a fact of life. Ditto snail mail: if the postman was late, the staff I remember employing in the 1980s went to the pastry shop and fired up the percolator for a morning of “R and R”, and if I am brave I will admit a few transgressions of my own. After all, how could anyone work without the morning post?

That culture changed forever with email. Email is not really more important than snail mail, yet something does appear to award it a significantly higher value. Perhaps it is a sense of entitlement. Perhaps it is email’s ethereal or “virtual” state, leading us to imagine that it is not “machinery” capable of failing.

The problem with email really goes back to its genesis: email was not designed to be a critical business tool. It just happened to become the most transformative business tool of the “Knowledge Revolution”. Email is fundamentally flawed for the business purposes we use it for today because, by definition, it is an unreliable service. Its design is founded on the quantum-like proposition that, in chaos, all paths between two points are possible. “Quantum” is an interesting term. It is a scientific proposition, but some scientists fall out with it because conventional science prefers the idea that predictable things happen in predictable circumstances, and that if the world could be ordered there would be answers for all circumstances. Since quantum theory says there is no such order in the first place, it understandably nags at a discipline that requires order. The world is not very predictable, though, and when one stops to imagine how many thousands or millions of random routes the fragments that comprise one email take before reconstituting at the other end, it is amazing that email works at all. Sometimes things do break down, and when it comes to email the user reaction is more often than not disproportionately robust.

So, email is a broken wheel in many respects, but by the time anyone realised how important it was, and despite other technologies like social networking, the world was stuck with something we love to hate – unless that hate is directed at a network engineer by proxy. Computer science has instead tried to construct orderly solutions to encapsulate a chaotic process, at incredible cost. Ultimately, in the debate over the virtues of science versus philosophy, chaos prevails, and the “quantumistas” and philosophers have devised some clarification: it is possible for the same thing to materialise elsewhere almost instantly, but not always. Is that not email?

Whatever the case, the reality is that in answering an email user’s defensively simple demand (“I just want my email to work”), an industry as ubiquitous as the energy industry itself has evolved in a fraction of the time to try to make it possible – and email is not the only issue in the context of this article. The problem for small business is that the tools big business has used in the past to make email viable are only now becoming available to small businesses, thanks to the economies of scale that come with cloud computing, and with them come some serious headaches.

Which brings us to the point. In bringing big business “reliability” to email and other chaotic Internet services for the small business market, the solutions themselves pose evolving threats to small businesses that only big businesses have typically feared – namely the risk of loss or leakage of an organisation’s sensitive data and its customers’ personal data.

Businesses face any number of risks, from malicious hacking to internal espionage, and even more worldly risks like flooding and storms. We live in a chaotic world. In terms of IT, though, there is a more pervasive problem for organisations, and it comes from an unexpected corner – mobile devices like tablets and phones. By connecting users with laptops, tablets, and mobile phones to organisational email and electronic document libraries, small businesses unwittingly double and triple the size of their IT networks at a stroke, incrementally increasing the risk of data protection breaches. Even a doubling of a conventional network would once have prompted a planned development. Instead, individuals now own enough devices to constitute a small business network in their own right:

[Image: one user’s multiple connected devices]

Giving employees 24/7 email and document access is a compelling proposition whichever way one considers it. So, let’s consider what happens when a boss asks his IT engineer to connect a manager’s personally owned mobile phone to company email services. Even if the company has good control of its IT policies and complies with its obligations under data protection laws, by connecting third party assets to the company network it is now at risk of compromise: the phone may be left at a restaurant, or the manager may install an app which at face value has nothing to do with business processes but which subsequently trawls the business information on the phone – leading to data protection breaches and potentially catastrophic consequences. These things happen in unpredictable ways. US retailer Target’s headline-busting customer credit card theft in 2013 is well documented: the loss happened not by hacking the company’s “conventional” and well ordered IT network, but by compromising its cash tills.

Another pertinent example is the enormous institutional damage caused by the systematic “hacking” of mobile phones by journalists over several years in Britain. So simple was the technique that real hackers are (probably) offended that it could even be called hacking in the first place. The point is just that: overwhelmingly, personal and business information is wide open to abuse via employee-connected devices, whether the family computer at home, a portable tablet, or simply a mobile phone. As businesses adopt new technologies such as voice services, the nature of the risk changes again and extends beyond just email and contact lists.

Small business users often tell me, “but those cases are examples of big businesses getting what they deserve – it won’t happen to me”. It may seem a feasible argument, but the victim in both cases is the public, and unfortunately the risk of small businesses ceding data to malicious parties through compromise is no less. While big business devotes resources to governance policies and managing peripheral equipment, small business usually makes no effort at all, and regularly gets caught out in the process.

Small business users will alternately say that responsibility lies with the cloud providers they migrate services to; they even cite this as a reason for onboarding. In the UK, there is already a track record of legal precedent to say that that does not wash either. Legal responsibility for customer data remains absolutely with the business that takes custody of it in the first place – the “data controller” in UK parlance, the “responsible party” elsewhere.

What should small businesses do? What about professional services like Office 365? At the point of sale customers are told that there are tools for managing conventional and portable devices – indeed, a major selling point of these kinds of services is the ability to connect users with as many as 5 devices to their centralised service. How does a network newcomer sort that out, though? When it is all businesses can do to get email off the flight deck in the first place, who spends time on academic issues like lost phones?

Moreover, in buying mature and proven services like Office 365, how many organisations have asked staff to modify mobile phone voicemail PIN numbers following the historic phone hacking scandal mentioned above? How many power users have developed a tested policy for dealing with broader issues concerning their organisation’s lost laptops and mobile devices?

The answer to all this and more is the old fashioned concept of the network admin who has been around the block a couple of times. The likelier truth is that by the time a power user understands how to remotely lock an iPad or mobile phone after it is lost (a telco SIM block does not stop a wifi connected device from continuing to collect business emails and sync documents), the damage has long been done, and the user’s business ends up underwriting reputational damage, statutory censure, and legal liability. Worse than that is the enemy within: handsets, tablets, and any other connected device not subject to governance leave your business open to an ongoing risk with every app a user runs without the organisation’s knowledge. In Hollywood terms, the threat is clear and present.

There is a beguiling appeal to the unbelievably low price points for sophisticated business services. It can appear that such low prices somehow circumvent the risks we face in the real world, as if the risks are no longer really there. For example, Microsoft’s hosted Exchange email service offers tools that, as recently as 5 years ago, only big businesses with immense IT departments could have afforded to fund. The application costs as little as £2.50 per month per user at the time of writing. Importantly, though, it opens a Pandora’s box: the expertise required to operate Exchange safely is still commensurate with the engineering skill needed to manage a sizeable IT department.

Today, there is an awkward inverse in the relationship between technology, cost, and risk. It is a difficult reality that is standing the world on its head far beyond the IT world itself. Increasingly sophisticated services come to market at ever lower costs, providing services which drive labour costs down. Although these technologies reduce a lot of operational costs too, other risks persist and broader exposure increases yet more risks. The inherent risks associated with evolving technologies like “onboarding” increase while user awareness remains behind the curve, and so the case for professionally consolidating and supporting devices within an organisation’s conventional and extended IT network remains stronger than ever.

[Chart: decreasing costs versus increasing risk]

Decreasing software costs do not diminish risks, which increase with the proliferation of mobile devices in business networks.

ComStat is a certified Microsoft partner and accredited network administrator, capable of supporting organisations small and large. For advice about establishing and managing a “compliant” network policy, please call Steve Galloway on 07834 461 266.

Public Cloud & Data Protection

The accelerating trend for organisations to move data, including customers’ personal data, to public cloud environments or other off-premises services raises an important question about who is responsible for the protection of a customer’s personal data.

The principle behind data privacy is that the information we occasionally give to others about ourselves is ours. If we give information to other entities, like companies we buy goods and services from, that information should only be used for the purpose we gave it for.

Optionally, we can expand the scope of that remit, but let’s keep this simple. If we give our personal information to an appliance company when buying a washing machine, the company should only use that information to talk to us about washing machines, and when the washing machine is dead, the data should be deleted too. If the washing machine company gives our information to another party without our permission, or uses it for another purpose, then it is in breach of data protection laws.

The standard applies not only to large businesses, but equally to small businesses who hold data about their customers. So the question is, when small businesses use external email suppliers and public cloud services, who is responsible for keeping this data secure?

To understand the answer, we need to understand two concepts – data privacy, and data security.

Data privacy is a concept: it is the idea we use to explain our entitlement. It is an academic, or intellectual, proposition. As property owners, our house is our castle. However, this intellectualisation does not mean that our house is safe. To enforce the concept, we have to secure it with locks.

This leads us to data security. This is a physical process, like securing our house with locks or other systems. Data security provides the tools that give us the confidence to know that our rights are protected. Data privacy and data security are used interchangeably, but they really describe two different propositions, and this brings us to the crux of the issue of who is responsible for protecting the data that individuals submit to the small business owner. Until now, many small business owners maintained customer data on premises, so the responsibility appeared clear cut: the data rested with the small business, therefore the business was responsible for it.

As small businesses use free email and storage services like GMail, Live, and Yahoo, and others increasingly move towards professional public cloud services, there is a tendency for small businesses to imagine that responsibility migrates with the data to the public services they use. Unfortunately, this is not the case.

In the case of the washing machine example, the company who holds a person’s information is what is called in the UK a “data controller”. Responsibility for protection of that data rests exclusively with the data controller, even if that information is stored with a third party elsewhere.

By way of example, Theo Watson, an attorney for Microsoft, recently cited a case where an NHS trust awarded a contract to an IT firm to dispose of computer equipment. Unknown to the Trust, the IT firm subcontracted the disposal to a third party who sold the computer equipment instead. However, the computers had not been purged and sensitive patient information made its way into the public domain. UK authorities determined that the responsible party was neither the contractor, nor the subcontractor, but the NHS. Although the NHS had delegated a job to a contractor, the NHS was ultimately responsible for knowing what happened to the data it held on behalf of its patients. If that meant watching hard drives being physically destroyed, it should have made sure that that happened.

The responsibility of data controllers is absolute, and it draws the role of free email services and public cloud services into sharp resolution. Small businesses often outsource IT work because they have neither the expertise nor the finances to handle IT in-house. Yet large suppliers like Google will be quick to point out that breaches arising from data they hold on behalf of small businesses are not their responsibility. So, how does a small business protect itself?

In the final analysis, the answer lies in small business owners understanding the role they play in handling customer data, and in choosing suppliers who provide the security necessary to protect data privacy rights on their behalf as an incentive to win business, rather than as a mere contractual offering.

Premium suppliers like Microsoft are certified to ISO 27001, HIPAA, FISMA, and FERPA for their Office 365 solutions. Their participation in the “Safe Harbour” protocols enables the company to transfer data between EU and US jurisdictions within the confines of regional legal governance. At the time of writing, Office 365 is the only email product supplied “off the shelf” that meets the regulatory governance required by US Federal Government buyers and EU agencies. The same products are available for public purchase and include Microsoft’s extensive libraries of transport rules, which help not only with observance of data privacy but also with automated management of data leakage such as credit card numbers, National Insurance numbers, and financial services information. Consequently, IT pros recommend services like Office 365’s Exchange email because there is confidence in Microsoft’s efficacy as far as observance of data protection principles is concerned.
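Microsoft’s transport rule engine is proprietary, but the kind of data leakage detection described – spotting a credit card number before a message leaves the organisation – can be sketched as a pattern match backed by a checksum. This is an illustration of the general technique, not Microsoft’s implementation:

```python
# Illustrative sketch of DLP-style content detection: a regex finds
# candidate 16-digit numbers, and the standard Luhn checksum discards
# false positives (random digit runs) before a message is flagged.

import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used to validate card numbers: double every
    second digit from the right, subtract 9 from results over 9,
    and require the total to be divisible by 10."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_card_number(text: str) -> bool:
    """Flag text containing a plausible 16-digit card number,
    allowing spaces or hyphens between digit groups."""
    candidates = re.findall(r"\b(?:\d[ -]?){15}\d\b", text)
    return any(luhn_valid(re.sub(r"[ -]", "", c)) for c in candidates)

print(contains_card_number("Order ref 1234 5678"))                  # False
print(contains_card_number("Card: 4111 1111 1111 1111 exp 12/25"))  # True
```

A real transport rule would then act on the match: block the message, encrypt it, or route it to a compliance mailbox.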

Beyond ISO certifications and the like, Microsoft makes notable efforts to inform users about issues which are increasingly relevant as businesses move services online. Valuable resources are available at their Office 365 Trust Centre.

On the other hand, services like GMail, Yahoo, and others appear to sail closer to the wind. One reason is that many of the services small businesses use were not designed for business use in the first place. Whatever Google’s relationship with regulators over its evolving data policies, the weight of litigation by British, French, European, American, and other regulators hardly endorses Google’s ethical efficacy. Consequently, its growth in the business world is stunted compared with Office 365’s performance.

Regardless, responsibility for the execution of securing your customers’ personal information rests with you. In choosing your supplier, choose wisely, and choose an IT supplier who can inform you.

 

Domain names and zone records explained

Increasingly, small business users need their domain names to handle web and email services which are catered for by different providers. Without the knowledge to leverage their domain name records, users often default their email to free services like “Live” and “GMail” so that web designers can at least manage their website. These kinds of services, while fine for residential use, do not provide the reliability, resilience, and efficacy of professional solutions like Office 365. For instance, users frequently find that Yahoo, Live, and GMail treat incoming email from a website contact form as spam, and unknowingly lose important communications.

So, how do domain names work, and how can domains be put to work to manage business’ needs for web sites, email, messaging services, online document management systems, etc. under one roof?

DNS

One way to think of a domain name is by comparing it to a phone book, where the domain name is the title of a phone book which lists a variety of entries that point to addresses. In the same way that phone books help us find phone numbers for people or organisations, a domain name lists records which computers need to connect to web sites, email servers, and other things. Domain names hold this list in a “zone record”. Once a domain name and its “authoritative” record is established, copies of the zone record are distributed automatically around the Internet to make it easier for users to find and connect to that domain name’s services. If records need to be amended, then copies of the amended zone record are redistributed. This is DNS.
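The phone-book analogy can be made concrete with a toy lookup table. The sketch below models a zone as a simple dictionary mapping a record name and type to an address; the names and addresses are invented for illustration, and real DNS resolution of course involves recursive lookups across name servers:

```python
# A toy "zone record" as a dictionary: each (name, type) entry maps
# to an address, just as a phone book maps names to numbers.

zone = {
    ("www",  "A"):  "198.51.100.7",          # the website
    ("mail", "A"):  "198.51.100.7",          # the mail server (same box)
    ("@",    "MX"): "mail.example.co.uk.",   # "deliver mail via 'mail'"
}

def lookup(name: str, record_type: str) -> str:
    """Resolve a name the way a resolver consults a zone's records."""
    return zone.get((name, record_type), "NXDOMAIN")

print(lookup("www", "A"))    # 198.51.100.7
print(lookup("ftp", "A"))    # NXDOMAIN
```

The distributed copies mentioned above behave like reprints of the phone book: once the authoritative copy changes, the reprints are refreshed automatically.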

Zone records

Small businesses used to organise their websites and email with a single web server sourced from a retail provider. The zone record below is typical of this kind of deployment. The “www” record points to the website, and we can deduce that mail services are handled by the same server because “www” and “mail” records point to the same address (see bottom of image).

[Image: a sample zone record]
As email becomes more difficult to manage, small businesses are having to separate mail services from their websites so that email can be handled by dedicated email providers, like Microsoft’s Office 365 Exchange email service. Another reason why domain names are becoming more difficult to handle is that businesses are using more externally based “cloud” services like document management, instant messaging, and video conferencing, all of which often need customised entries in a domain name’s zone record. So zone records can become complex, and their scope, already beyond the reach of most in-house management, is becoming much trickier to handle.
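Continuing the toy-dictionary sketch, the split looks something like this: the website record stays with the web host, while the MX record hands mail to a dedicated provider. The Office 365 entries below follow Microsoft’s published patterns (an MX of the form <domain>.mail.protection.outlook.com, an SPF include, and an autodiscover CNAME), but the domain and IP address are invented for illustration:

```python
# Before: one retail server handles both web and mail.
web_only_zone = {
    ("www", "A"):  "198.51.100.7",
    ("@",   "MX"): "mail.example.co.uk.",   # same box handles mail
}

# After: the website is unchanged, but mail and its supporting
# records now point at a dedicated email provider.
split_zone = {
    ("www", "A"):   "198.51.100.7",         # website unchanged
    ("@",   "MX"):  "example-co-uk.mail.protection.outlook.com.",
    ("@",   "TXT"): "v=spf1 include:spf.protection.outlook.com -all",
    ("autodiscover", "CNAME"): "autodiscover.outlook.com.",
}

# The web designer only ever touches the "www" record; the remaining
# entries belong to the email provider's requirements.
print(split_zone[("@", "MX")])
```

This is also why an error in one record can take down a service the editor never intended to touch: everything shares the same zone.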

Conceptually, domain names and zone records are not difficult. Since these records exist in a real-time operational state, though, amendments which are incorrectly made can cause catastrophic disruption to email and web services. Professional guidance is recommended for dealing with these services.

Problems also arise when multiple parties need access to a domain name to manage specific services like web hosting and email. A web designer might manage a website, while a network engineer provides Exchange email via Office 365. Domain name registrars, however, only recognise one administrator per domain name. So, who gets the key? Web designers know what kind of records they need, for example, but are not necessarily concerned about the other services sharing the zone.

DNS Management

To deal with these unusual problems, ComStat uniquely provides specialist services not just for its own customers but also for third party engineers who need access to customers’ zone records for their project work, via web based access to a centralised management control panel. ComStat’s service enables customers to keep their domain name portfolios from fragmenting while enabling authorised parties to manage records collaboratively. In addition to conventional records, ComStat’s zone management panel, below, provides for advanced services like IPv6, SPF, SOA, and TXT records.

[Image: ComStat’s zone record control panel]

For more information about ComStat’s domain name management services, please contact Steve Galloway on +44 (07834) 461 266, or send him a message via our contact form.

Office 365’s data centres

Office 365’s low price points and deceptive ease of use mask a multi-billion dollar global investment in fibre optics, hardware, and corporate class software tools which makes Microsoft’s Office products the dominant solution of choice for corporates and governments, and a compelling offering for small and medium business. It is not so much a matter of adulation for Microsoft as a matter of fact: the rest of Microsoft’s competition combined does not match its dominance in business email and productivity apps. So, what does Microsoft do with everyone’s data? Let’s take a look at a Microsoft data centre.

Microsoft’s Senior Operations Program Manager, Alistair Speirs, said in June 2014 that one reason clients decided to “onboard” services from their own premises was as simple as the concern that adding more appliances to the server room posed a real risk of the floor collapsing on the staff underneath. The reasons IT departments move to Microsoft’s cloud services are not always clear cut, then, but whatever they are, customers are relying on Microsoft’s provisioning for critical data management.

Alistair used this anecdote to explain Microsoft’s approach to building their system from the ground up. If it is already on the ground, it is hard for anything to fall through it. So much is common sense, and Office 365’s architecture really is rooted to the ground.

The data centres and server farms are built modularly: they are designed for ease of access and uniformity in all respects. Power and power supply back-up services are designed for “failover”, so that redundant equipment is already operational in the event of a breakdown, and faulty units are simply swapped out and rotated so that repairs can be handled elsewhere.

data centre power supply

Modularity plays an important role in the server arrays, too. Microsoft takes delivery of completely configured server racks from hardware vendors. If more disk space is needed, engineers do not install more drives. Instead, the facility is “stamped out” and another module, or ITPAC, is installed. Here, engineers lower an air handling unit on top of a pre-assembled rack at a facility in Quincy, eastern Washington.

ITPAC cooling unit

When fully assembled, these installations are called “ITPACs”. ITPACs are built from four components: an IT load, an evaporative cooling unit, an air handling unit, and a mixing unit. The evaporative cooling unit has a mesh screen through which water slowly drips to keep humidity consistent. Air blows naturally through the evaporative cooling unit; the IT load draws that air through, and the air handling unit maintains a pressure difference between the outside and the inside of the data centre. Air is pulled naturally through the evaporative cooling units to cool the servers without powered fans. The air handling unit pulls air out and pushes some of it back to the mixer, and that is how engineers control temperature.

The attraction of this model is that air conditioning is no longer a consideration, and Microsoft’s server farms and data centres live outside. Below is the Quincy data centre in eastern Washington, where the climate is similar to Madrid’s. Once the concrete is laid, an ITPAC takes about four hours to install and needs only three connections: the ping, the pipe, and the power. It is remotely monitored, all operating from the unit itself. Microsoft’s installation incorporates some nifty tricks to mitigate energy consumption further and keep end users’ costs down. In this installation, one building block comprises about 25,000 servers.

an assembled ITPAC

Inside the data centre, the uniformity of hardware is unmistakable. Access to hardware is restricted by “rack” and on a need-to-know basis. For instance, non-Office 365 engineers from Microsoft’s own corporate installations would not have access to these kinds of facilities.

server room rack

Lastly, in addition to uniform hardware and components, seemingly peripheral issues like co-ordination of cabling colour and cabling runs follow strict protocols, all in the interest of avoiding what engineers call configuration drift:

data centre cabling

Microsoft’s data centre in Quincy is part of a global network that comprises a core of between 10 and 100 data centres at the time of writing, with subsidiary “Edge Nodes” and other “Metro Solutions” providing a mechanism for delivering content to last mile user end points within its Office 365 wide area network.

Office 365’s global infrastructure


For backup, read “high availability”

Office 365 and similar technologies provide a workaround that renders the thorny issue of backups increasingly obsolete. In its place, high availability is the one overriding reason for businesses to relocate their email and data stores to solutions like Office 365.

The difficulty with backups is that the concept is fundamentally flawed; it is just that, until now, there has not been an alternative. Nor is this only an end user problem; even network engineers struggle. For example:

1. it is not in people’s genes to run backups reliably,

2. data stores are so large in today’s knowledge economy that it is impractical to execute time consuming backups,

3. backups are out of date as soon as they are completed because new data already needs appending,

4. we never know if backups will work,

5. if we adopt the conventional guidance to test backups, it means destroying a good data store to test an unknown quantity,

6. even if users escape the first five traps, since it takes so long to record complete backups, it follows that it takes at least as long to restore them. So, in the event of disaster recovery, an organisation could potentially waste weeks, and possibly months, restoring terabytes of data. Even network administrators would have nothing to do but watch the wheels on the tapes go round and round.
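The last point is easy to put in rough numbers. The throughput figure below is an illustrative assumption for a single tape or disk stream, not a measurement, but the arithmetic makes the scale of the problem clear:

```python
# Rough, illustrative arithmetic: how long does a full restore take?
# The sustained throughput figure used below is an assumption for the
# sake of example, not a benchmark of any particular backup system.

def restore_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to restore data_tb terabytes at a sustained
    throughput of throughput_mb_s megabytes per second."""
    data_mb = data_tb * 1_000_000        # 1 TB = 1,000,000 MB (decimal)
    seconds = data_mb / throughput_mb_s
    return seconds / 3600

# e.g. 10 TB restored as a single stream at a sustained 160 MB/s
hours = restore_hours(10, 160)
print(f"{hours:.1f} hours")  # roughly 17 hours, before any verification
```

And that is the optimistic case: a single uninterrupted pass with no tape changes, verification, or re-runs.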

A few years ago, large organisations started approaching this problem from another angle. The thinking was that backups themselves were not as important as the availability of data: what matters is that people can access information. Rather than taking “point-in-time” backups, engineers replicated or synchronised data stores in real time to other hardware, so that in the event of a server or data failure, users could fall back on replicated services running in real time. This approach was labelled “high availability”. It gained traction as a model within organisational networks, and because the principle scales easily, it is an underlying reason for the success of today’s “cloud” services. Email is a critical application that depends on high availability services. Here is an illustration of how network administrators use “Database Availability Groups” in Microsoft Exchange to replicate email databases across multiple servers and multiple locations so that there is no single point of failure:

Microsoft Exchange Database Availability Groups
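The principle can be sketched in a few lines. This is a toy model, not Exchange’s actual arbitration logic (real DAGs use far more sophisticated health checks), but it shows the idea: a database has several copies, and when the active copy fails, service falls back to a healthy replica.

```python
# Toy sketch of database-availability-group style failover.
# Server names are hypothetical; real Exchange DAGs use richer health
# monitoring and quorum-based arbitration than this simple first-healthy rule.

class DatabaseCopy:
    def __init__(self, server: str, healthy: bool = True):
        self.server = server
        self.healthy = healthy

def active_copy(copies: list[DatabaseCopy]) -> DatabaseCopy:
    """Return the first healthy copy: there is no single point of
    failure as long as at least one replica survives."""
    for copy in copies:
        if copy.healthy:
            return copy
    raise RuntimeError("all database copies are down")

copies = [DatabaseCopy("MBX01"), DatabaseCopy("MBX02"), DatabaseCopy("MBX03")]
copies[0].healthy = False             # the primary server fails
print(active_copy(copies).server)     # service fails over to MBX02
```

The point is that failure handling becomes routine: users keep working against a surviving copy while someone eventually attends to the dead server.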

So what does this actually mean? In today’s terms, whereas a single point of failure like a server crash was once catastrophic, a server failure today usually means someone – usually an intern covering for a boss who has a more critical date with a sand wedge – will get around to the “dud” server within a few days while the network’s other servers take up the slack. So sophisticated are today’s high availability services that servers diagnose their peer appliances and invoke automated action to ringfence corrupted devices.

This is good news for small business, which has neither the desire to hire the expertise required to do this in-house, nor the money to invest in the hardware to do it in the first place.

Instead, businesses are turning to solutions like Microsoft’s Office 365, whose gigantic economies of scale are equalled only by the sheer size of the multi-billion dollar investment that provides data availability not in the order of two concurrent databases for users, but ten.

How does Microsoft do this? Office 365’s solution is built around a globally distributed core of data centres. By “data centre”, Microsoft means a facility of at least a hundred thousand servers. At this level, between ten and a hundred such facilities are online at any given time. Underneath these kingpins sits a hierarchy of edge nodes and metro solutions which gradually disseminate data towards the regions where users are localised. Importantly, data transits Microsoft’s own fibre network until the last mile, which greatly improves prompt and clean delivery. This diagram shows how Microsoft’s topology works (click on the image to enlarge):


Office 365’s network topology

Office 365’s network stands apart from Microsoft’s own network, and the American, European, and Asian regions are interconnected.

So, where does a user’s data reside and is there a risk of data bottlenecks?

Data does not look like you might imagine by the time it reaches a data centre, so the point is not as important as you might think. The biggest consideration – where users are geographically based – has less to do with the hops required for data to reach devices than with considerations like data protection laws and practices, which differ from region to region. Office 365 provides services like retention policies and data leakage prevention to help local network administrators provision email and data management for end users compliantly.

So data is primarily situated within a user’s region for a variety of reasons, and to improve availability, tools are provided to minimise the flow of information between users and data centres. For instance, when working on a Word document, the entire file is not written to storage every time a user clicks “save”: only the changes are synchronised. Once data starts compiling, it is replicated to a mirror within the primary data centre, where point-in-time backups are kept in any event, and techniques are deployed to populate data between geographically remote locations. In this scenario, deleting documents by mistake is harder for end users to do, because at least nine copies remain in place, and even when those disappear, there is still a point-in-time backup to recover from. This diagram illustrates the scheme (click on the image to enlarge):

office_365_logical_2
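The “only the changes are synchronised” idea can be sketched with simple block hashing. This is a toy model for illustration only; Microsoft’s actual differential sync protocols are proprietary and more sophisticated:

```python
# Toy sketch of delta synchronisation: split a file into fixed-size blocks,
# hash each block, and upload only the blocks whose hashes changed.
import hashlib

BLOCK_SIZE = 4096  # illustrative block size, not a real protocol parameter

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> list[int]:
    """Indices of blocks that differ; only these need re-uploading."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

doc_v1 = b"a" * 4096 + b"b" * 4096 + b"c" * 4096
doc_v2 = b"a" * 4096 + b"X" * 4096 + b"c" * 4096  # user edits one block
print(changed_blocks(doc_v1, doc_v2))  # only block 1 needs synchronising
```

A three-block document with one edited block transfers a third of the data, and the saving grows with document size, which is what keeps the flow between users and data centres small.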

Office 365’s Exchange email services operate a little differently, but again with high availability in mind for such a critical application. It is worth mentioning that although Office 365 for Small and Medium Business still vexes users with building their own email archiving, Office 365 Enterprise solutions include “inline” archiving which enables users to completely remove archiving from their desktop versions of Outlook. I will cover the implications of that in another article.

The biggest worry for business users is the thought of surrendering data to a third party. Yet, law practitioners and accountants whose authorising bodies require high standards of efficacy already use Office 365 because in terms of document discovery, data retention, and other factors, it is difficult to match Office 365’s services. Indeed, Office 365’s Exchange mail services are the only commercial solution at time of writing that satisfies Federal and EU requirements for departmental deployment “off the shelf”.

For more information about Office 365, Office 365 network administrator help, and trial subscriptions, please contact Steve Galloway on 07834 461 266 or Fred Dreiling on 07919 340 570.
