
3 advantages of server virtualization


Server virtualization is quickly becoming the preferred deployment model for corporate data centers, as companies look to tap into the benefits of managing servers on a software level. Switching to virtualization means that the workloads happening on servers are not tied to a specific piece of physical hardware and that multiple virtual workloads can occur simultaneously on the same piece of machinery. The immediate benefits of virtualization include higher server utilization rates in the data center and lower costs, but there are more sophisticated advantages as well. Three of these are:

1. Improved disaster recovery and business continuity: With virtualization, the information on a server is not tied to a specific piece of hardware, which means a hardware failure doesn’t have to be catastrophic. Instead, data and software are backed up to multiple machines, and it’s easier to reboot systems at a new location, a recent Firmology article explained. The result is faster recovery times. Also, since virtual machines can run on a wide variety of underlying hardware, companies can use older machines for their recovery systems to reduce costs.

2. Easier IT management: With virtualization, IT employees are spared much of the grueling maintenance and provisioning work that physical servers require, a recent VMware white paper noted. Considering that routine tasks like adding new server workloads and launching new applications account for at least half of employees’ time in nine out of 10 IT departments, the potential productivity gains are substantial. Adding new servers and carrying out maintenance can be done with a few clicks of a mouse.

“These tools eliminate the need for IT workers to manually perform routine maintenance and troubleshooting on multiple physical machines,” the white paper stated. “In fact, these tools not only make it easy to pinpoint IT issues, but they can also proactively detect and resolve these issues without intervention.”

3. More agile business processes: The business world changes fast, and companies need to be able to respond accordingly. As opposed to traditional deployment schedules, which required planning for hardware purchases and installation, virtual infrastructure allows companies to scale rapidly, adding new virtual servers on demand, the white paper said. Additionally, it’s much easier to change how virtual resources are allocated, giving businesses the ability to shift strategies on the go.
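The utilization and consolidation gains described above can be made concrete with a toy placement model. This is a minimal sketch with invented numbers, not a description of any vendor's scheduler: it packs lightly loaded virtual workloads onto shared physical hosts using first-fit placement.

```python
# Hypothetical illustration: consolidating virtual workloads (each of which
# would otherwise occupy its own underutilized physical server) onto shared
# hosts via first-fit placement. All numbers are invented for the example.

def place_workloads(workloads, host_capacity):
    """Assign each workload (CPU demand) to the first host with room."""
    hosts = []  # each entry is the total demand placed on that host
    for demand in workloads:
        for i, used in enumerate(hosts):
            if used + demand <= host_capacity:
                hosts[i] += demand
                break
        else:
            hosts.append(demand)  # no host had room: provision another

    return hosts

# Ten workloads that would each tie up a dedicated physical server
# at 10-30 percent utilization on its own.
workloads = [15, 25, 30, 20, 10, 25, 15, 30, 20, 10]
hosts = place_workloads(workloads, host_capacity=80)

print(len(hosts))  # physical hosts needed after consolidation: 3
print(sum(workloads) / (len(hosts) * 80))  # average utilization: ~0.83
```

Ten single-purpose servers collapse onto three hosts running at roughly 83 percent utilization in this toy example, which is the mechanism behind the "higher server utilization rates and lower costs" claim.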

For companies looking to tap into the advantages of a virtual infrastructure, it can be valuable to work with a managed services provider that has a background in server virtualization deployments. With this external expertise, businesses can move toward a computing model that’s more disaster resilient, more agile and easier to manage.

How can companies improve the disaster resilience of their data center infrastructure?

According to a recent benchmark survey by the Disaster Recovery Preparedness Council, nearly three quarters of companies worldwide are failing in terms of disaster readiness, struggling with downtime for critical applications or even entire data center environments. Close to 20 percent of companies reported losses of over $50,000 stemming from outages. Companies can protect themselves against this possibility by investing in resilient data center solutions from a colocation provider focused on business continuity.

"Reliability starts with high industry standards in a checklist of requirements: climate-controlled environments, intelligent security structure and state-of-the-art equipment, technologies and design," BizTimes.com contributor Kevin Knuese wrote in a recent article.

He noted that companies should look for data center solutions with redundant networking and power supplies, as well as redundant cooling systems and all-around state-of-the-art technology. Additional data center features such as 24/7 monitoring and physical security safeguards meant to withstand both break-ins and natural disasters such as floods and earthquakes are important as well. A hosting provider based in the Midwest can be particularly reliable due to the reduced likelihood of certain natural disasters like earthquakes, hurricanes and mudslides that are more common on the coasts.

A provider that offers backup and business continuity services is also important, Knuese wrote. Executives can sometimes be skeptical of "disaster recovery," seeing it as an alarmist term and frustrating cost driver, according to industry expert Steve Kahan. However, the argument for a reliable data center and backup solution is more clear-cut, as such technology eliminates many common IT headaches. As a result, a colocation provider with business continuity services can be key for maintaining brand credibility from the IT side.

"Some audiences are more responsive when the conversation is focused on the crucial role that IT plays in ensuring 'business continuity' or the operational costs triggered by an 'extended outage,'" Kahan wrote for DRBenchmark.org. "Here's one more suggestion: think of disaster preparedness as 'an investment in brand security,' a way to protect your company's reputation."

Moving toward the virtual data center

Virtualization – the process of abstracting hardware functions to a software level – is one of the signature advancements of modern computing, allowing companies to consolidate their server footprints and increase the flexibility of their infrastructure. With virtualization, businesses can quickly create new virtual servers and move workloads from one physical location to another on a software level. As server virtualization becomes increasingly standard in the data center, companies are beginning to look at other forms of virtualization that can also be applied to increase flexibility, such as storage virtualization and network virtualization. With virtualization in all its forms becoming more important for managing a data center, companies are turning to managed services partners to help.

InformationWeek’s 2013 Virtualization Management Survey found that 72 percent of companies reported extensive use of server virtualization, and just 4 percent had no plans for use. In comparison, only 22 percent reported extensive use of storage virtualization, with 28 percent saying they had no plans for use, and a mere 11 percent reported extensive network virtualization, with 44 percent saying they had no plans for use. The main drivers for virtualization included operational flexibility and agility (56 percent) and business continuity (55 percent).

“Undoubtedly, a fully virtualized data operation offers many advantages,” ITBusinessEdge’s Arthur Cole wrote in a recent column. “Aside from the lower capital and operating costs, it will be much easier to support mobile communications, collaboration, social networking and many of the other trends that are driving the knowledge workforce to new levels of productivity.”

The evolving virtual data center
At the same time, Cole cautioned, much of the virtual technology that extends beyond server virtualization is still in its early phases. As a result, companies may encounter challenges as they look to enjoy the management benefits of abstracting elements of their data centers. A trusted data center partner can help businesses evaluate and implement emerging technologies, and even oversee transitions such as server virtualization and consolidation.

The standard for what counts as a virtualized data center is set to evolve in the coming years as more physical components are virtualized, and businesses will want to be at the cutting edge of whatever emerges. By outsourcing some infrastructure management tasks to a trusted third-party provider, they can ensure they are adopting these innovations even if they do not have the in-house technical expertise or capital to make the changes. To keep close tabs on the move toward the virtual data center, a managed services and IT consulting partnership is essential.

Recognize the business advantages of data colocation

For many companies, it can be tempting to approach data storage in a fairly insular manner, keeping files on-premise so they can be easily accessed and IT can maintain absolute control. But businesses are increasingly jettisoning their expensive storage and server infrastructure in favor of switching to an outsourced colocation model. By making IT a fixed operational expenditure rather than a massive capital expenditure, companies can remain more flexible. Colocation data centers also provide numerous IT benefits in terms of disaster resilience and collaboration.

"Hosting your own infrastructure can require significant capital investment in real estate," IT executive James Carnie told ComputerWeekly in a recent feature about the merits of different spending models.

In addition to real estate costs, companies investing in their own infrastructure face massive hardware expenses, and they also must accurately anticipate future expansion to know how much equipment to buy during purchase cycles. Additionally, owners must pay for ongoing maintenance, which leads to unexpected costs as issues arise. In contrast, a colocation model shifts businesses to a planning approach built around fixed monthly costs, IT executive Akshay Kalle wrote in a recent column for the Globe & Mail.

"Managed services models reduce the considerable costs of storage, upgrades, data recovery, converting big capital outlays and unpredictable maintenance costs in time and materials, into predictable monthly fees with clear expectations and guarantees," Kalle explained.

Colocation also simplifies the challenge of dealing with disasters by moving data off-site to a resilient facility, and, by centralizing business information, it enables easier audits, Kalle added. Centralization and virtualization also foster collaboration: By moving data to shared resources in a data center rather than letting it languish on desktops, companies can simplify file sharing and other collaborative processes among their employees.

Make sure disaster recovery is done the right way

The threat of natural disasters or other business interruptions such as power outages and viruses means that companies need robust backup and disaster recovery solutions for their data environment. Often, however, backup and disaster recovery services are conflated, and businesses end up with solutions that don't necessarily offer all the functionality they actually need. To ensure the enterprise IT environment is fully recoverable in the wake of a disaster, companies can benefit from working with a managed services provider to develop a customized plan that fits their needs.

One common misconception about disaster recovery is that it offers nothing appreciably different from a backup or cloud storage solution, a recent MSP Mentor article explained. Most companies already have some form of backup solution, perhaps hosted in the cloud, which may make a separate recovery service seem superfluous.

However, simply relying on backup storage doesn't account for the need to get key applications running again, and it can quickly become expensive or difficult to manage as the volume of data increases, Sundar Raman, CTO of Perpetuuiti Technosoft Services, noted in a recent interview with CIOL. This complexity can make shortcuts even more tempting.

"CIOs tasked with addressing business continuity (BC) and disaster recovery issues are keen to achieve quick wins, and the 'tick box' audit approach, which tries to copy successful strategies used elsewhere, is often adopted without consideration of the suitability," Raman explained.

To combat this problem, companies can benefit from working with a dedicated managed service provider to craft a customized solution that fits their specific needs. By determining the best plan to meet recovery time objectives for various applications and data while also working within a manageable budget, companies can establish a disaster recovery plan that gives them more than basic backup without overextending themselves.
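The gap between "we have backups" and "we can recover" is usually expressed through two standard metrics: the recovery time objective (RTO, how long an application may stay down) and the recovery point objective (RPO, how much data loss is tolerable). The sketch below, with hypothetical applications and targets, shows the kind of check a recovery plan has to pass that a backup product alone never evaluates.

```python
# Sketch: checking per-application recovery objectives against what a
# candidate backup/recovery scheme actually delivers. The applications,
# targets and numbers are hypothetical.

# Per-application targets, in hours: RTO (max downtime), RPO (max data loss).
targets = {
    "payments": {"rto": 1,  "rpo": 0.25},
    "email":    {"rto": 8,  "rpo": 4},
    "archive":  {"rto": 72, "rpo": 24},
}

# What a nightly-backup-plus-manual-restore scheme delivers: worst-case
# data loss equals the backup interval, and restores take several hours.
delivered = {"rto": 6, "rpo": 24}

def gaps(targets, delivered):
    """Return the applications whose targets the scheme cannot meet."""
    return sorted(
        app for app, t in targets.items()
        if delivered["rto"] > t["rto"] or delivered["rpo"] > t["rpo"]
    )

print(gaps(targets, delivered))  # applications needing a stronger tier
```

Here the nightly-backup scheme fails the payments application on downtime and the email application on data loss, while the archive tier is adequately covered; a customized plan spends recovery budget only where a check like this finds gaps.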

How companies protect data centers against the threat of physical intruders

The range of threats impacting business data is diverse, but while substantial attention gets paid to protecting systems from hackers, the actual infrastructure that houses sensitive information can be an attack vector as well. Companies have grown increasingly aware of the threats posed by a physical intruder in the data center, and certain best practices have emerged around physical security as a result. Leading enterprise data centers and colocation facilities use solutions such as surveillance, security checks, hardened exteriors and mantraps to protect themselves from these threats.

"Companies spend multi-millions of dollars on network security," Enterprise Storage Forum contributor Christine Taylor wrote in a recent article. "Yet if an attacker, disaster, or energy shortage takes down your data center then what was it all for? Don't leave your data center gaping open, and make very sure that your data center provider isn't either."

Limiting physical access by outsiders to the data center is key, as it is a sensitive environment that can be easily sabotaged – either knowingly or unknowingly. One initial protection many data centers use is to have a hardened exterior with extra thick walls and windows (as well as no windows in the server room), Taylor wrote. This precaution helps protect against both physical attacks such as explosives and natural disasters. Related protections include crash barriers or landscaping features around the data center that help conceal it and shield it from an event such as a vehicle crash.

Security checks and mantraps
Another basic security practice is to use 24/7 surveillance with cameras that move and cover the entire premises, ideally backed by an on-premise security guard. During business hours, security guards can also perform security checks on visitors. In a recent column for TechRepublic, contributor Michael Kassner described a visit to an enterprise data center for which he was required to show two forms of ID and turn over his electronic devices to prevent him from taking pictures.

He then faced some internal physical barriers in the form of a turnstile and mantraps, which are essentially airlocks designed to prevent more than one person from passing through a door at once. The ones Kassner encountered had sensitive weight scales to detect whether more than one person was coming through, as well as whether a person had carried something in but not out, or vice versa. Mantraps and turnstiles prevent tailgating, the practice of following an approved employee through a secure entrance.

As companies make data center decisions, choosing a provider that can offer these robust solutions for protecting physical infrastructure is essential. Just as businesses need to secure their digital perimeter, they should look to achieve best practices for locking down their physical perimeter as well.

Disaster recovery services, cybersecurity critical to protecting electric grid from attacks

Over the past few years, the utilities industry has made a concentrated effort to make key infrastructure "smarter." The integration of data-capturing devices and automated, software-based management systems has the potential to create smart electric grids that can more effectively use and distribute power, reducing energy costs and environmental impact in the process.

However, turning power grids into connected devices has potentially harrowing implications – a concentrated cyberattack could cause lengthy and widespread outages, not only withholding electricity from businesses and residences, but disrupting communications, healthcare systems and the economy. According to many cybersecurity researchers, the likelihood of a potential problem occurring is less of an "if" and more of a "when." 

Ramping up disaster recovery services and cybersecurity protocols is key to shielding the smart electric grid from a devastating attack. While the federal government tries to increase the efficacy and stringency of its own security measures, it's important that utility companies – from national generators to local distributors – build up their own prevention and backup systems, according to a recent white paper by the three co-chairs of the Bipartisan Policy Center's Electric Grid Cybersecurity Initiative. This effort will require a hybrid system that responds to both physical and cybersecurity threats. 

"Managing cybersecurity risks on the electric grid raises challenges unlike those in more traditional business IT networks and systems," the report stated. "[I]t will be necessary to resolve differences that remain between the frameworks that govern cyber attack response and traditional disaster response."

Disaster recovery efforts need to include backup digital systems that rival physical ones. Electric grids require faultless failover technology that can depend on a secondary backup network if the primary one is taken offline for any reason. As the Baker Institute pointed out in a recent Forbes article, the measure of a disaster recovery system's effectiveness is based on whether the grid can be restarted following a major breach, disruption or cyberattack. Without a system that can effectively monitor, prevent and immediately respond to such threats, the smart electric grid could be putting many key infrastructure systems in danger.

Disaster-recovery-as-a-service market emphasizes changing priorities

Disaster recovery, once a relative afterthought or nonentity in business planning, is now a central consideration. Advanced threats and high-profile data breaches have helped to convince organizations that it's time to stop dragging their feet and start taking disaster recovery more seriously. The rapid rise of the market for disaster-recovery-as-a-service highlights an important shift in priorities.

According to TechNavio, the global market for DRaaS is expected to rise at a compound annual growth rate of 54.6 percent between 2014 and 2018. Demand for hybrid and cloud-based disaster recovery has driven investment, especially among small and medium-sized businesses that have found that "flying under the radar" by virtue of their size is no longer a viable way to avoid the consequences of information security compromises.
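A 54.6 percent compound annual growth rate compounds quickly. Treating 2014-2018 as four compounding years, the arithmetic works out to roughly a 5.7x expansion of the market over the forecast period:

```python
# Compound annual growth: the forecast's 54.6% CAGR applied over the four
# compounding years of the 2014-2018 span.

cagr = 0.546
years = 4
growth_factor = (1 + cagr) ** years

print(round(growth_factor, 2))  # ~5.71x the starting market size
```

In other words, a market growing at that rate roughly quintuples in four years, which is why the figure is read as a shift in priorities rather than incremental adoption.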

Larger organizations have also realized that IT departments are generally unable to maintain complete oversight and disaster recovery protection amid data deluges and rapid network expansion. To cite one sector, the banking industry has begun to invest heavily in the cloud to relieve the amount of resources it has to spend on application updates, software patches and IT infrastructure, according to Bank Systems & Technology.

The report did note that relying too much on a generic cloud solution or paying insufficient attention to backup data could diminish the effectiveness of DRaaS investment. A company is better served by using a multi-service provider that focuses on customization, specificity and addressing pain points. This way, it can avoid any data integrity or governance issues stemming from a lackluster vendor. 

Optimizing data center strategies for financial services firms


Data center investment strategies are critical to the lifeblood of financial services organizations. While finance firms have long used proprietary or third-party data centers for information storage and business continuity, big data has given rise to a new set of complications and considerations. Not the least of these are a variety of regulatory and compliance measures that place restrictions on information storage and archival practices. New technologies, rising costs and data management issues are exposing the limits of traditional data center models, and financial services firms need to adapt.

Data management in finance is a problem with several moving parts that impact each other. Accumulating and storing data is a relatively straightforward issue, albeit a resource-intensive one. Under the traditional model, a firm would procure additional servers for its onsite facility or enlarge its third-party data center investment, either through colocation or leasing the provider’s equipment.

The deluge of data can make this approach prohibitively costly, forcing organizations to rethink their infrastructure approach, Wall Street & Technology editorial director Greg MacSweeney wrote. Firms with proprietary data centers now stand to save significantly by outsourcing their storage, architecture and management demands. A third-party data center can provide state-of-the-art server hardware, but more importantly has the infrastructure to deploy next-gen network solutions such as virtualization, which drastically reduces the amount of physical equipment needed to contain rising petabytes of data and information-crunching applications.

Working with a third-party data center provider also helps businesses tackle more rapidly moving targets – data integrity and compliance. Data quality and validation are some “small data” issues that grow more problematic as firms accumulate more information from a wider source pool, said software developer Oleg Komissarov, according to a recent FierceFinanceIT article.

Keeping data clean, complete and consistent is a tough task that requires powerful tools and a dedicated team. A managed data center services provider can help offer this level of attention. It can also help in compliance efforts, as any blind spots or inconsistency in information or reporting leave the door open for compliance issues to crop up. As big data expands and accelerates, financial services firms need their data centers to stay one step ahead.

Managed services can help organizations avoid top 10 business hazards

Managed services enable businesses to more successfully navigate a threat-laden enterprise landscape. Although an organization’s biggest IT, operations and security anxieties vary by region, industry and company size, the biggest fears – lost profitability, client churn and a tarnished reputation – are generally the same across the board.

In the Twitter age, no confirmed threat goes unpublished or unanalyzed, and it’s difficult for an organization to escape blame even if it’s only affected as a byproduct of another incident. The woes of retailer Target, which reported a 22 percent decrease in its client base in January following a massive data breach during the 2013 holiday season, underscore how consumers respond to an enterprise that demonstrates less-than-exemplary information security, data management and business continuity.

According to a recent Business Continuity Institute study of nearly 700 enterprise respondents in 82 different countries, the top 10 most common perceived threats to disaster recovery and business continuity are:

  1. Unplanned IT outages
  2. Cyberattacks
  3. Data breaches
  4. Adverse weather effects
  5. Utility supply interruptions
  6. Fires
  7. Security compromises
  8. Health or safety incidents
  9. Acts of terrorism
  10. New laws or regulations

How managed services assuage anxiety
Managed services offer vast potential for companies to mitigate potential problems in many areas because a provider’s solutions are customized to the needs of the company. The above list covers incidents stemming from a company’s location, industry, employee behavior and general security management. Overseeing prevention and contingency plans that effectively respond to all of these potential hazards is time-consuming, resource-intensive and costly. While it’s impossible to prevent adverse weather or control regulatory measures, it’s possible to keep these threats from doing any real damage.

Managed services are scalable, so the amount of a provider’s involvement can correspond exactly to a company’s anxieties and potential hazards. One organization may simply require online backup services via an offsite server in order to increase its data loss prevention activities. Another may want to virtualize nearly all of its infrastructure so its employees can stay connected and productive during a wave of bad weather. As a company’s needs change over time, it doesn’t have to rearrange its entire back-end infrastructure in order to keep danger at bay.