Make sure disaster recovery is done the right way

The threat of natural disasters or other business interruptions such as power outages and viruses means that companies need robust backup and disaster recovery solutions for their data environment. Often, however, backup and disaster recovery services are conflated, and businesses end up with solutions that don't necessarily offer all the functionality they actually need. To ensure the enterprise IT environment is fully recoverable in the wake of a disaster, companies can benefit from working with a managed services provider to develop a customized plan that fits their needs.

One common misconception about disaster recovery is that it offers nothing appreciably different from a backup or cloud storage solution, a recent MSP Mentor article explained. Most companies already have some form of backup solution, perhaps hosted in the cloud, which may make a separate recovery service seem superfluous.

However, simply relying on backup storage doesn't account for the need to get key applications running again, and it can quickly become expensive and difficult to manage as data volumes grow, Sundar Raman, CTO of Perpetuuiti Technosoft Services, noted in a recent interview with CIOL. This complexity can make shortcuts even more tempting.

"CIOs tasked with addressing business continuity (BC) and disaster recovery issues are keen to achieve quick wins, and the 'tick box' audit approach, which tries to copy successful strategies used elsewhere, is often adopted without consideration of the suitability," Raman explained.

To combat this problem, a dedicated managed services provider can craft a solution tailored to a company's specific requirements. By determining the best plan to meet recovery time objectives for various applications and data while working within a manageable budget, companies can establish a disaster recovery plan that delivers more than basic backup without overextending themselves.
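
To make that planning concrete, here is a minimal sketch of what an RTO audit might look like. All application names, targets and strategies below are illustrative assumptions, not figures from the article:

```python
# Map each application to a recovery time objective (RTO) and the recovery
# time its current strategy can actually deliver, then flag the gaps.
APPS = {
    # app: (rto_minutes, achievable_minutes, strategy)
    "order-processing": (15, 10, "hot standby at a secondary site"),
    "email": (240, 120, "cloud-based restore"),
    "reporting": (1440, 2880, "off-site tape restore"),  # misses its RTO
}

def audit_recovery_plan(apps):
    """Return the applications whose strategy cannot meet the stated RTO."""
    return {app: info for app, info in apps.items() if info[1] > info[0]}

for app, (rto, actual, strategy) in audit_recovery_plan(APPS).items():
    print(f"{app}: {strategy} delivers {actual} min against a {rto} min RTO")
```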

How companies protect data centers against the threat of physical intruders

The range of threats impacting business data is diverse, but while substantial attention gets paid to protecting systems from hackers, the actual infrastructure that houses sensitive information can be an attack vector as well. Companies have grown increasingly aware of the threats posed by a physical intruder in the data center, and certain best practices have emerged around physical security as a result. Leading enterprise data centers and colocation facilities use solutions such as surveillance, security checks, hardened exteriors and mantraps to protect themselves from these threats.

"Companies spend multi-millions of dollars on network security," Enterprise Storage Forum contributor Christine Taylor wrote in a recent article. "Yet if an attacker, disaster, or energy shortage takes down your data center then what was it all for? Don't leave your data center gaping open, and make very sure that your data center provider isn't either."

Limiting physical access by outsiders to the data center is key, as it is a sensitive environment that can be easily sabotaged, whether deliberately or inadvertently. One initial protection many data centers use is a hardened exterior with extra-thick walls and windows (and no windows at all in the server room), Taylor wrote. This precaution helps protect against both physical attacks such as explosives and natural disasters. Similar protections include crash barriers or landscaping features around the data center that help conceal it and shield it from an event like a vehicle crash.

Security checks and mantraps
Another basic security practice is 24/7 surveillance with cameras that pan to cover the entire premises, ideally backed by an on-site security guard. During business hours, security guards can also perform checks on visitors. In a recent column for TechRepublic, contributor Michael Kassner described a visit to an enterprise data center for which he was required to show two forms of ID and hand over his electronic devices to prevent him from taking pictures.

He then faced internal physical barriers in the form of a turnstile and mantraps, which are essentially airlocks designed to prevent more than one person from passing through a door at once. The ones Kassner encountered had sensitive weight scales to detect whether more than one person was coming through, as well as whether a person had carried something in but not back out, or vice versa. Mantraps and turnstiles prevent tailgating – the practice of following an approved employee through a secure entrance.
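
For illustration, the core of that weight check might reduce to logic like the following sketch; both thresholds are invented, and real mantrap systems are considerably more sophisticated:

```python
SINGLE_OCCUPANT_MAX_KG = 150.0   # assumed ceiling for one person plus gear
CARRY_TOLERANCE_KG = 2.0         # assumed allowance for normal variation

def mantrap_alerts(entry_kg, exit_kg):
    """Return alerts for one person's entry/exit pass through a mantrap."""
    alerts = []
    if entry_kg > SINGLE_OCCUPANT_MAX_KG or exit_kg > SINGLE_OCCUPANT_MAX_KG:
        alerts.append("possible tailgating: weight exceeds one occupant")
    if abs(entry_kg - exit_kg) > CARRY_TOLERANCE_KG:
        alerts.append("carry discrepancy: item taken in but not out, or vice versa")
    return alerts

print(mantrap_alerts(entry_kg=86.5, exit_kg=79.0))  # flags a carry discrepancy
```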

As companies make data center decisions, choosing a provider that can offer these robust solutions for protecting physical infrastructure is essential. Just as businesses need to secure their digital perimeter, they should look to achieve best practices for locking down their physical perimeter as well.

Disaster recovery services, cybersecurity critical to protecting electric grid from attacks

Over the past few years, the utilities industry has made a concentrated effort to make key infrastructure "smarter." The integration of data-capturing devices and automated, software-based management systems has the potential to create smart electric grids that can more effectively use and distribute power, reducing energy costs and environmental impact in the process.

However, turning power grids into connected devices has potentially harrowing implications – a concentrated cyberattack could cause lengthy and widespread outages, not only withholding electricity from businesses and residences but also disrupting communications, healthcare systems and the economy. According to many cybersecurity researchers, such an attack is less a question of "if" than of "when."

Ramping up disaster recovery services and cybersecurity protocols is key to shielding the smart electric grid from a devastating attack. While the federal government tries to increase the efficacy and stringency of its own security measures, it's important that utility companies – from national generators to local distributors – build up their own prevention and backup systems, according to a recent white paper by the three co-chairs of the Bipartisan Policy Center's Electric Grid Cybersecurity Initiative. This effort will require a hybrid system that responds to both physical and cybersecurity threats. 

"Managing cybersecurity risks on the electric grid raises challenges unlike those in more traditional business IT networks and systems," the report stated. "[I]t will be necessary to resolve differences that remain between the frameworks that govern cyber attack response and traditional disaster response."

Disaster recovery efforts need to include digital backup systems as robust as their physical counterparts. Electric grids require dependable failover technology that can shift to a secondary backup network if the primary one is taken offline for any reason. As the Baker Institute pointed out in a recent Forbes article, the measure of a disaster recovery system's effectiveness is whether the grid can be restarted following a major breach, disruption or cyberattack. Without a system that can effectively monitor, prevent and immediately respond to such threats, the smart electric grid could put many key infrastructure systems in danger.
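
As a rough sketch of that failover pattern, the code below polls a primary network path and shifts to a secondary one after repeated failed checks. The endpoints and the switch_traffic_to() hook are hypothetical stand-ins for whatever monitoring and orchestration a grid operator actually runs:

```python
import socket
import time

PRIMARY = ("primary.grid.example", 443)     # hypothetical endpoints
SECONDARY = ("secondary.grid.example", 443)
FAILURES_BEFORE_FAILOVER = 3

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def switch_traffic_to(endpoint):
    print(f"failing over to {endpoint[0]}")  # stand-in for real orchestration

def monitor(poll_seconds=5):
    """Fail over after FAILURES_BEFORE_FAILOVER consecutive failed checks."""
    failures = 0
    while True:
        failures = 0 if is_reachable(*PRIMARY) else failures + 1
        if failures >= FAILURES_BEFORE_FAILOVER:
            switch_traffic_to(SECONDARY)
            break
        time.sleep(poll_seconds)
```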

Disaster-recovery-as-a-service market emphasizes changing priorities

Disaster recovery, once a relative afterthought or nonentity in business planning, is now a central consideration. Advanced threats and high-profile data breaches have helped to convince organizations that it's time to stop dragging their feet and start taking disaster recovery more seriously. The rapid rise of the market for disaster-recovery-as-a-service highlights an important shift in priorities.

According to TechNavio, the global market for DRaaS is expected to grow at a compound annual growth rate of 54.6 percent between 2014 and 2018. Demand for hybrid and cloud-based disaster recovery has driven investment, especially among small- and medium-sized businesses that have found that flying under the radar by virtue of their size no longer shields them from the consequences of information security compromises.
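
For context, a quick back-of-the-envelope calculation shows what that growth rate implies for the size of the market:

```python
# Compounding 54.6 percent annual growth over the four periods from 2014
# to 2018 gives the projected multiple of the 2014 market size.
cagr = 0.546
periods = 2018 - 2014
multiple = (1 + cagr) ** periods
print(f"~{multiple:.1f}x the 2014 market size by 2018")  # ~5.7x
```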

Larger organizations have also realized that IT departments are generally unable to maintain complete oversight and disaster recovery protection amid data deluges and rapid network expansion. To cite one sector, the banking industry has begun to invest heavily in the cloud to reduce the resources it must devote to application updates, software patches and IT infrastructure, according to Bank Systems & Technology.

The report did note that relying too heavily on a generic cloud solution or paying insufficient attention to backup data could diminish the return on a DRaaS investment. A company is better served by a multi-service provider that focuses on customization, specificity and addressing its pain points, so it can avoid the data integrity and governance issues that stem from a lackluster vendor.

Optimizing data center strategies for financial services firms

Data center investment strategies are the lifeblood of financial services organizations. While finance firms have long used proprietary or third-party data centers for information storage and business continuity, big data has given rise to a new set of complications and considerations, not the least of which are the regulatory and compliance measures that restrict information storage and archival practices. New technologies, rising costs and data management challenges are straining traditional data center models, and financial services firms need to adapt.

Data management in finance is a problem with several moving parts that affect one another. Accumulating and storing data is a relatively straightforward issue, albeit a resource-intensive one. Under the traditional model, a firm would procure additional servers for its onsite facility or enlarge its third-party data center investment, either through colocation or by leasing the provider's equipment.

The deluge of data can make this approach prohibitively costly, forcing organizations to rethink their infrastructure approach, Wall Street & Technology editorial director Greg MacSweeney wrote. Firms with proprietary data centers now stand to save significantly by outsourcing their storage, architecture and management demands. A third-party data center can provide state-of-the-art server hardware, but more importantly it has the infrastructure to deploy next-generation network solutions such as virtualization, which drastically reduces the physical equipment needed to contain rising petabytes of data and information-crunching applications.
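
As a rough illustration of that consolidation effect, consider the arithmetic below; the 10:1 ratio is a commonly cited ballpark for virtualization, not a figure from the article:

```python
# Hypothetical consolidation math: how many virtualized hosts are needed
# to absorb a fleet of physical servers at an assumed consolidation ratio.
workloads = 200                  # hypothetical physical servers to virtualize
consolidation_ratio = 10         # assumed VMs per physical host
hosts_needed = -(-workloads // consolidation_ratio)  # ceiling division
print(f"{workloads} workloads -> {hosts_needed} virtualized hosts")
```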

Working with a third-party data center provider also helps businesses tackle faster-moving targets – data integrity and compliance. Data quality and validation are “small data” issues that grow more problematic as firms accumulate information from a wider source pool, said software developer Oleg Komissarov, according to a recent FierceFinanceIT article.

Keeping data clean, complete and consistent is a tough task that requires powerful tools and a dedicated team. A managed data center services provider can help offer this level of attention. It can also help in compliance efforts, as any blind spots or inconsistency in information or reporting leave the door open for compliance issues to crop up. As big data expands and accelerates, financial services firms need their data centers to stay one step ahead.

Managed services can help organizations avoid top 10 business hazards

Managed services enable businesses to more successfully navigate a threat-laden enterprise landscape. Although an organization’s biggest IT, operations and security anxieties vary by region, industry and company size, what they’re most afraid of is generally the same across the board – lost profitability, client churn and a tarnished reputation.

In the Twitter age, no confirmed threat goes unpublished or unanalyzed, and it’s difficult for an organization to escape blame even if it’s only affected as a byproduct of another incident. The woes of retailer Target, which reported a 22 percent decrease in its client base in January following a massive data breach during the 2013 holiday season, serve to underscore consumer response to an enterprise that demonstrates less-than-exemplary information security, data management and business continuity.

According to a recent Business Continuity Institute study of nearly 700 enterprise respondents in 82 different countries, the top 10 most common perceived threats to disaster recovery and business continuity are:

  1. Unplanned IT outages
  2. Cyberattacks
  3. Data breaches
  4. Adverse weather effects
  5. Utility supply interruptions
  6. Fires
  7. Security compromises
  8. Health or safety incidents
  9. Acts of terrorism
  10. New laws or regulations

How managed services assuage anxiety
Managed services offer vast potential to mitigate problems in many areas because a provider's solutions are customized to the needs of the company. The above list covers incidents stemming from a company's location, industry, employee behavior and general security management. Overseeing prevention and contingency plans that effectively respond to all of these hazards is time-consuming, resource-intensive and costly. While it's impossible to prevent adverse weather or control regulatory measures, it is possible to keep these threats from doing any real damage.

Managed services are scalable, so a provider's level of involvement can correspond exactly to a company's anxieties and potential hazards. One organization may simply require online backup services via an offsite server to strengthen its data loss prevention efforts. Another may want to virtualize nearly all of its infrastructure so its employees can stay connected and productive through a wave of bad weather. And as a company's needs change over time, it doesn't have to rearrange its entire back-end infrastructure to keep danger at bay.

Differentiating effective IT business continuity from disaster recovery

With constant threats posed by extreme weather and external attackers, companies have increasingly recognized the importance of protecting their IT assets in the wake of a disaster. But the nature of that protection plan is often up for debate. Recovering from disaster means leveraging tools like online backup services at the very least. However, true resilience in the face of a disaster requires a more all-encompassing business continuity approach.

The plan goes beyond data protection and recovery
While backing up data so it can be restored in the wake of an outage is the bedrock of any business continuity plan, it's only half the battle. Depending on a business's approach, its backup solution may do it little good in the event of an actual disaster. For instance, some businesses relying on off-site tape storage have found themselves unable to restore their files at a secondary location after a storm because they couldn't physically travel to the tape storage facility due to flooding, industry expert Jarrett Potts explained in a column for Data Center Knowledge. Having a plan that encompasses the full recovery process is essential.

"IT disaster recovery plans are very important when one considers how intertwined organizations are with technology, but it is important to note that IT disaster recovery plans are not, by themselves, a complete business continuity strategy," Continuity Central contributor Michael Bratton explained in a recent article.

The solution is oriented toward application uptime
A key differentiator between disaster recovery and business continuity is that the latter focuses on keeping core business operations running. As Bratton noted, this approach goes beyond IT alone. From a tech perspective, however, it primarily means keeping critical applications running with as little interruption as possible. Through technologies like virtualization and a distributed network of colocation facilities, businesses can establish a flexible application hosting model that can weather unexpected events. The exact nature of the plan is likely to vary from company to company, so working with a third-party solution provider to develop a custom response can also be beneficial.

Real-world business continuity: The soaring costs of downtime

Many organizations approach business continuity as an afterthought. When a company is building up its hardware footprint and application investments in support of a growing business model, contingency plans are often relegated to the back seat, where they linger. These organizations discover the costs of prolonged downtime, and the difficulty of righting the ship, only in the aftermath of an unplanned event. One recent report offers some fairly chilling statistics about the widespread shortcomings and expensive consequences of ignoring business continuity planning.

The Ponemon Institute report on the cost of data center outages in 2013 found that organizations lose $7,900 per minute of downtime. The mean cost of a single data center outage was $627,418, and the maximum amount lost to a single incident was more than $1.7 million. The total and per-minute costs correlated with the size of the facility and the duration of the outage, while IT equipment failure represented the most expensive root cause of unplanned data center downtime. Financial hits were worse for companies in data center-dependent industries such as e-commerce, financial services and telecommunications.
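
Taken together, the report's two headline numbers imply an average outage length of just under 80 minutes:

```python
# Divide the mean cost of an outage by the per-minute cost of downtime.
cost_per_minute = 7_900
mean_outage_cost = 627_418
print(f"~{mean_outage_cost / cost_per_minute:.0f} minutes per average outage")  # ~79
```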

Costs can quickly escalate as a business recovers from an unplanned incident. From detection and containment to lost revenue and diminished productivity, the expenditures can be immense. An organization will suffer more for each area of its business continuity planning that is lackluster or poorly thought out.

These findings convey the importance of having an effective business continuity approach in place. The approach is twofold – prevention and recovery. Eliminating root causes of downtime is vital, especially expensive ones like IT equipment failure, which can be managed more effectively. Visibility and redundancy can help streamline efforts to get systems back on track following a surprise incident.

Virtualization can be a great asset to both aspects of business continuity planning, as a recent CIO.com webinar pointed out. It provides a more manageable, agile environment for continuity efforts, mitigates hardware vulnerabilities by slashing equipment needs and helps a company access its safely stored systems and applications immediately following an unplanned occurrence.

Managed services key to making disaster recovery planning stick

Managed services can help organizations eliminate one of their biggest pain points – disaster recovery. Establishing and upholding continuity and contingency plans can be complicated and resource-intensive. Many businesses, especially fledgling ones, push disaster recovery planning to the back burner. Over time, that lack of attention puts organizations at risk.

According to a recent study by The Disaster Recovery Preparedness Council, many organizations are woefully unprepared for disaster to strike. Its global survey of more than 1,000 organizations, from small businesses to large corporations, found that a whopping 73 percent of organizations do not have adequate disaster recovery plans in place. Its other findings include:

  • 64 percent of respondents said that their organizations' disaster recovery efforts are underfunded.
  • More than 60 percent do not have fully documented plans.
  • Among the 40 percent that do have documented plans, 23 percent have never actually tested them to see if they work.
  • Of respondents that experienced outages, almost 30 percent lost data center functionality for days or weeks at a time.

Since there's no way of knowing when and how a potential disaster may occur, companies are gambling with their future every day they don't do something about their disaster recovery and business continuity planning efforts. Being proactive is the only way to successfully combat the effects of unplanned events.

Managed services can help organizations establish a meaningful, up-to-date disaster recovery system. Providers can deliver concentrated data backup and system recovery services beyond those a business has the budget or time to maintain, MSPmentor noted. Keeping systems current, especially when an organization adds a new application or hardware, is key to eliminating the vulnerabilities that stem from outdated disaster recovery plans.

Proactive risk mitigation is equally important. Managed services providers can help organizations develop recovery time objectives for business-critical applications and conduct automated recovery testing. Having a dedicated IT staff on hand means companies don't have to venture into the difficult science of disaster recovery and business continuity planning alone.
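
A minimal sketch of what such an automated recovery test might look like, assuming hypothetical RTO targets and a stand-in restore hook:

```python
import time

RTO_MINUTES = {"crm": 30, "billing": 60}  # illustrative targets

def run_recovery_test(app, restore_fn):
    """Time a restore of `app` and report whether it meets the app's RTO."""
    start = time.monotonic()
    restore_fn(app)                        # hook into real backup tooling
    elapsed_min = (time.monotonic() - start) / 60
    met = elapsed_min <= RTO_MINUTES[app]
    print(f"{app}: restored in {elapsed_min:.1f} min "
          f"({'meets' if met else 'misses'} the {RTO_MINUTES[app]} min RTO)")

run_recovery_test("crm", restore_fn=lambda app: time.sleep(1))  # dummy restore
```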

How to choose a colocation provider

Colocation is an advantageous infrastructure model for any company concerned about supporting its data storage needs. Among the variety of data center, server placement and management options available, it's the one that directly marries an organization's desire to maintain control over its equipment with its need for better network and security support.

In a colocation environment, an organization leases data center space for servers it owns. The data center provider offers server racks, power, bandwidth and physical security. The organization retains control over server management, unless it chooses to outsource these needs to the provider as well. 

Simple, right? Not quite. Because the colocation business is booming, it attracts plenty of upstart providers, and not all of them offer the same level of service. Moreover, one provider's solutions may be right for one organization and match up poorly with another's needs. Misfiring on this selection can be costly, not only in wasted capital expenses but also down the road if business continuity is affected, according to ComputerWeekly.

Determining the most pressing concerns is a company's first step. For example, a company headquartered in an area more susceptible to natural disasters should look for a colocation facility in a safer location. Connectivity is another issue. While every business wants to stay online, some may be able to accept less than 99.999 percent uptime ("five-nines" uptime) in exchange for a more cost-effective colocation plan, while a financial services firm or federal entity may need to pay a premium to ensure servers are always available. It's simply a matter of weighing financial costs against the price of availability.
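
For perspective, those availability levels translate into annual downtime budgets as follows:

```python
# Annual downtime allowed at each availability level.
minutes_per_year = 365 * 24 * 60
for availability in (0.999, 0.9999, 0.99999):
    allowed = minutes_per_year * (1 - availability)
    print(f"{availability * 100:.3f}% uptime -> {allowed:,.1f} minutes down/year")
```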

Security is a near-universal concern, and many organizations face added complications related to industry compliance, according to Data Center Knowledge contributor Bill Kleyman. A company needs to make sure its colocation provider is certified for adherence to the relevant compliance standards. A variety of physical and facility safeguards can provide additional protection, which may be the way to go if a company's colocation center is in a more heavily populated area.