Posts

The essential components for complete ransomware protection

For criminals, ransomware is big business.

The methodology is simple: attackers target a company with malware which encrypts their data, then send a request for money, usually in the form of Bitcoin or another difficult-to-trace cryptocurrency. Should the company refuse to pay up, their data will remain encrypted and inaccessible. Or it might even be shared publicly on the internet.

Given the potential damage, both financial and reputational, that might result, it’s no wonder that many companies choose to pay the ransom.

Kaspersky Lab noted a thirteen-fold increase in ransomware attacks in the first quarter of 2017 compared to the previous year. With the average cost of a ransomware attack sitting at over $1,000, the danger is a significant one . . . and no company is safe.

Victims range from small businesses to huge organizations, such as the UK’s National Health Service and aeronautical engineering firm Boeing. Whatever the size of your company, protecting data against ransomware is every bit as essential as physically protecting your premises from burglars.

Here are four things you can do to ensure that you are effectively protected against ransomware.

Backup everything, often

A robust backup plan can make all the difference to a company hit by a ransomware attack.

Rolling back to a previous version may make it possible to avoid paying the ransom and resume normal operations. But beware: ransomware is becoming increasingly sophisticated, and many newer strains are designed to seek out backups and encrypt those as well.

To avoid this worst-case scenario, ensure that you employ a backup solution with versioning, or one that is isolated from your primary systems, such as an offsite or cloud backup.
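As an illustrative sketch of the versioning idea (not any particular vendor’s product), each backup run can write into its own timestamped snapshot folder, so an encrypted current copy never overwrites older, clean versions:

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot_backup(source_dir: str, backup_root: str) -> Path:
    """Copy source_dir into a new timestamped folder under backup_root.

    Each run creates a fresh snapshot, so ransomware that encrypts the
    live files cannot touch copies made before the infection.
    """
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / stamp
    shutil.copytree(source_dir, dest)
    return dest
```

Real backup products add compression, encryption, and retention policies on top of this, but the principle is the same: keep multiple historical copies out of the ransomware’s reach.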

Train your staff

Every staff member in your organization is a potential entry point for malware. Many attacks still succeed largely due to human error.

Indeed, the “WannaCry” attack that struck Boeing was reportedly transmitted by means of a zipped file attached to an email. For the malware to take effect, an employee within the organization had to unzip and run the file.

Train your employees to identify fake emails and encourage a culture of double-checking the origin of any suspicious attachments. Also, establish robust procedures for employees to follow when they think they might have exposed a device to malware. A swift response can isolate the machine in question and potentially save thousands of dollars in damages.

Stay up to date

There are many reasons to keep your operating systems, browsers and plugins up to date. Ransomware prevention is just one of them.

Many ransomware attackers gain entry to a system via weaknesses inherent in out-of-date plugins and other tech. By recommending (or, better yet, enforcing) updates, you can stay ahead of the criminals and keep your sensitive data secure.

Employ ransomware protection

Last, but by no means least, you should ensure that every machine (even personal devices used for work purposes) in your organization is running malware protection software from a reputable provider. While no program can prevent every single attack, most will be able to guard against a whole raft of common exploits.

If the worst does happen . . .

If you are subject to a ransomware attack and cannot recover your data from backup, your options are limited.

Paying the ransom might seem like the most sensible course of action, but there have been numerous cases in which doing so didn’t yield a decryption key. If that happens, you’ve only added an extra cost to an already-expensive situation.

An expert might be able to help you mitigate the damage, but it is vastly preferable to avoid attacks in the first place. The time to act is now—protect your data and ensure that your company doesn’t end up on the long list of ransomware victims.

The biggest cybersecurity breaches of 2017 and what we can learn from them

If we’ve learned anything from the biggest cybersecurity breaches of 2017, it’s this: no one is immune from online threats. Not even the largest companies with millions in technology resources, serious cybersecurity measures and strong reputations as household names.

2017 came and went with multiple significant cybersecurity breaches involving major organizations. And the bad news doesn’t stop there. Cybercriminals aren’t going anywhere. Cybersecurity breaches are still very much a thing.

The average cost of a data breach will exceed $150 million by 2020, as more business infrastructure gets connected. – Juniper Research

Here are three of the biggest cybersecurity breaches of 2017, what happened, and what we can learn from them.

Equifax

One of the worst breaches of all time happened in 2017 at Equifax. Equifax, as you almost certainly know, is one of the three largest credit agencies in the United States, and the data that was compromised is extremely sensitive.

Stolen information included customers’ names, dates of birth, credit card numbers, addresses, driver’s license numbers, and Social Security numbers. That’s pretty much everything a cybercriminal needs to engage in identity theft.

Verizon

In July of 2017, Verizon had a major cybersecurity breach that affected over 14 million subscribers.

A third-party analytics provider, NICE Systems, was using Amazon’s S3 cloud platform to store “customer call data” from telecom providers including Verizon. – Forbes

While this breach was claimed to have been brief, the 14 million affected had their data exposed, including their names, addresses, phone numbers, and most importantly, their plain text PINs. Again, this is prime information for identity theft.

This happened because some of Verizon’s security settings simply weren’t configured correctly.

Instead of a private security setting, the information was made public. Anyone with the public link could see the Verizon data, which was stored on an Amazon S3 storage server—a commonly used cloud storage for data.
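Misconfigurations like this one are detectable programmatically. The helper below is a hedged sketch: the grant structure mirrors the shape of the "Grants" list returned by AWS’s GetBucketAcl API, but the function itself is illustrative rather than part of any AWS SDK:

```python
# URI that AWS uses to represent "everyone on the internet" in bucket ACLs.
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(acl_grants: list) -> list:
    """Return the permissions granted to the public in an S3-style ACL.

    `acl_grants` mirrors the "Grants" list returned by the S3
    GetBucketAcl call: each grant has a "Grantee" dict and a
    "Permission" string. An empty result means no world-readable
    or world-writable grants were found.
    """
    return [
        g["Permission"]
        for g in acl_grants
        if g.get("Grantee", {}).get("URI") == ALL_USERS_URI
    ]
```

A scan like this, run regularly against every bucket, would have flagged the Verizon data as publicly readable long before a researcher found it.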

Uber

While Uber’s security breach wasn’t at the same level as the Equifax or Verizon cybersecurity breaches, it was still embarrassing and alarming. In this case, the worst of it was how Uber managed things in the aftermath of the cybersecurity breach.

Uber paid a 20-year-old hacker $100,000 to keep quiet after he managed to get his hands on the personal data of 57 million users.

Instead of being transparent about the leak, Uber tried to conceal it. Not only is that illegal in California, where the company is headquartered, but it further erodes customer confidence. Any company that falls prey to a cybersecurity breach will take a hit to its reputation. But if you continue to mishandle things, your reputation can suffer even more.

Just ask the folks at Uber.

What we have learned

One of the major takeaways here is that while the cyberattacks have grown sophisticated and complex, there’s a lot companies of all sizes can do to be proactive. The threat is valid, but if you address potential vulnerabilities in a timely manner, you’ll be able to avoid making these kinds of headlines.

For instance, the Equifax attack was due to a flaw in Apache Struts, a framework used to build web applications. And here’s the kicker: the vulnerability that led to the breach was identified months earlier, but not all of Equifax’s machines were updated. That gap is what let the hackers in.

The Uber fiasco illustrates another compelling point. If you do suffer a cyberattack, there are good ways to handle the situation and bad ways to handle it. Restoring customer trust is critical, so it’s best to be transparent and take full responsibility.

Protecting your company from a cybersecurity breach

Your company’s critical data must be protected, not only for your customers and their peace of mind, but for the sake of your business as well. You need to stay ahead of ever-changing threats. Cybercriminals are constantly changing their tactics, and you have to constantly adjust your protection just to keep pace.

Know where your data is stored, how it’s protected, how often that protection is updated, and utilize data analytics to strategically update your protection as needed.

Cybersecurity breaches are on the rise. Companies must take proactive steps in order to keep their data secure.

 

The CIO’s guide to lowering IT costs and boosting performance

There’s one question that haunts every single business leader, regardless of industry, business size, mission statement or product. How do you lower costs without sacrificing performance? If you can answer that question effectively, you’re set up for ROI and stability. If you can’t, you won’t be a business leader for long.

To complicate matters, the answer will vary for different departments within your organization. The strategies that lower IT costs may or may not work when you turn to HR or accounting. Some techniques are universal, and some are functionality-specific.

In this whitepaper, we’re going to focus on trimming your company’s IT costs.

But before we dive in, a word of caution: there are no magic bullets here. The suggestions outlined below aren’t even particularly innovative or unique. Instead, they’re solid. When combined, they’re sure to make a difference in your technology budget.

If you’re serious about reducing your IT costs, this is how you can do it.

Learn to be proactive

We begin with an underlying philosophical approach. Stop waiting for network problems to pop up before you address them. Get out in front of potential technical issues by becoming a proactive organization.

The primary advantage of getting proactive is a reduction in downtime. Few things will drive IT costs up like downtime. The hourly cost of downtime varies, of course, with estimates soaring as high as $100,000 per hour in some cases.

There are two things you can do to stop downtime before it starts.


Infrastructure monitoring and alerting

The only way to know if your IT network is healthy is to monitor it. If there are warning signs, alerts should trigger appropriate preventative action. If you’re unfamiliar with monitoring and alerting, Network World has a great introductory article on the subject.
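At its core, alerting is just a comparison of current readings against limits you set in advance. A minimal, tool-agnostic sketch (the metric names and thresholds here are invented for illustration):

```python
def check_thresholds(metrics: dict, thresholds: dict) -> list:
    """Compare current metric readings against alert thresholds.

    Returns a human-readable alert line for every metric at or above
    its limit; an empty list means the network looks healthy.
    """
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value >= limit:
            alerts.append(f"ALERT: {name} = {value} (limit {limit})")
    return alerts

# Example readings an agent might report for one server.
readings = {"cpu_percent": 97, "disk_used_percent": 61, "mem_percent": 48}
limits = {"cpu_percent": 90, "disk_used_percent": 85}
```

In practice you’d wire the output to email, SMS or a paging service rather than just returning strings, but the collect-compare-alert loop is the heart of every monitoring tool.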

Patching and updating

Software patches are critical for network health. They include everything from security updates to bug fixes. They’re easy to overlook, though, because they rarely feel urgent and they seem so frequent. We strongly encourage you to make them a priority if you’re interested in lowering potential IT costs.

Tackle IT projects strategically

No organizational project should ever begin without clear objectives. That’s particularly true for IT projects where timelines, budgets and organizational impact can easily get out of hand—if you don’t have a solid game plan.

We recommend a balanced approach. Yes, upfront IT costs are a consideration. However, you should also think about productivity, integration, efficiency, reporting, training and employee satisfaction before you undertake a new IT project.

For example, there are compelling reasons to move from a PBX phone system to a hosted voice solution, but there’s more to the decision than the math. Also consider how your staff, customers and processes will be affected by such a foundational change.

Utilize outsourced support

While many CIOs are hesitant to embrace outsourced IT support, there’s a strong case to be made for the change. Not only that, but you don’t have to treat it as an all-or-nothing decision.

Why not have both in-house and outsourced IT support? Just make sure you divide tasks between the two in ways that make strategic sense. Some tasks, due to security, compliance or other business needs, are better kept in-house. And some tasks can be effectively managed by an outsourced firm at a fraction of the cost.

Additionally, keep in mind that even a world-class outsourced IT support provider will need your organization to play an active role. Take the time to find the best way to work with your IT support provider and don’t forget to bring your employees into the loop.

Take cybersecurity seriously

It’s difficult to overstate the importance of cybersecurity. In the last year alone, the headlines have been littered with horror stories of data breach. It only takes one cybersecurity lapse to compromise your company’s data and devastate your reputation.

Just one.


While it’s possible to handle network security on your own, we highly recommend partnering with a managed IT services provider for the best possible protection. Cybersecurity is a complex, multi-layered issue. This is one area where it’s simply pragmatic to trust an expert.

The moderate IT cost of cybersecurity protection from an MSP far outweighs the negative impact of a successful cyberattack.

Get your employees up to speed

We’ve touched on this idea a couple of times already, but it deserves its own section. If you’re not convinced, consider this. 100% of government IT workers surveyed report that they believe employees to be the single greatest threat to cybersecurity.

You read that right. 100%.

That doesn’t mean most employees mean to pose a risk. In many cases, employees simply don’t know the best practices necessary to maintain network security. The same goes for every other factor that can drive up IT costs, from downtime to productivity.

Employees need to know how to protect data, utilize available IT tools, and interact productively with IT support to lower IT costs.

Prepare a worst-case-scenario plan

Finally, few things will unexpectedly add to your IT costs like a disaster. Disasters include things like floods, hurricanes, tornadoes and fires, as well as smaller downtime-causing incidents like power outages and equipment failure.

In other words, a “disaster” is anything that takes your IT network offline.

How you react in the face of a disaster, regardless of scale, will either set you apart from the competition or bury you beneath them. The deciding factor is typically your level of preparation. Smart CIOs make sure their companies have a complete backup and disaster recovery plan.

Everyone in your organization, from your IT support (in-house or outsourced) to customer service and sales should be familiar with your backup and disaster recovery plan. The less time you spend offline, the lower the impact on your reputation and your revenue.

Wrapping up

It’s not that difficult to lower IT costs while simultaneously boosting organizational performance. All that’s required is a strategic approach that includes all of the above areas. If you cover these bases, your company will operate more efficiently without incurring unnecessary expenses.

That’s a major win for any CIO.

3 things you might be forgetting about disaster recovery

When things are good, it’s hard to imagine how the world could ever wrong you. But when something goes wrong, it’s nearly impossible to see the sun through the clouds. Disasters happen without warning, and they can cripple a company if you’re not ready.

This is the entire reasoning behind investing in a disaster recovery plan. These procedures help companies get back on their feet after a major catastrophe, and they’re often the reason businesses don’t go belly-up following such an event.

That said, a large number of companies aren’t properly prepared for the worst. They may have disaster recovery solutions, but they haven’t fully worked them out. This can be just as dangerous as not having any plan at all, and we would like to rectify these issues by discussing some aspects of disaster recovery that you may not be considering.

1. You need to test constantly

“Everyone has a plan until they get punched in the face.” That’s a quote from Mike Tyson, and it’s just as true in boxing as it is in disaster recovery planning. Actually coming up with a plan is great and puts you ahead of the companies that haven’t, but it’s impossible to know if your procedure will work until you’ve put it through its paces.

“A huge portion of organizations just aren’t putting any priority into testing.”

Sadly, a huge portion of organizations just aren’t putting any priority into testing. Some test once or twice and think they’re done, while others literally never test at all. Therefore, it’s up to you to ensure that your company’s plan actually works.

TechTarget recommends starting with a test that checks data recovery, application recovery and communications. That last aspect is the most important, as not being able to discuss issues with your team can lead to widespread panic and confusion. The site states that these tests should happen on a “regular basis” all throughout the year, so don’t think you can do it once and be done.

Finally, you’ll want to examine audit logs to see exactly what worked and what needs some more tweaking. With enough patience and testing, you can come up with a procedure that will hopefully see you through the worst disasters.
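One way to make that audit mechanical is to compare checksums of restored files against the originals after each test run. A minimal sketch using only Python’s standard library:

```python
import hashlib
from pathlib import Path

def file_digest(path) -> str:
    """SHA-256 hex digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_dir: str, restored_dir: str) -> list:
    """List files that are missing or corrupted in the restored copy."""
    failures = []
    for src in Path(original_dir).rglob("*"):
        if src.is_file():
            dst = Path(restored_dir) / src.relative_to(original_dir)
            if not dst.is_file() or file_digest(src) != file_digest(dst):
                failures.append(str(src.relative_to(original_dir)))
    return failures
```

An empty list from `verify_restore` means every file came back intact; anything else tells you exactly which files need attention before a real disaster exposes the gap.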

2. What about your employees?

Although most people think of data systems and downtime when discussing disaster recovery, it’s important to realize there is a much more human element to this process that you’ll want to consider. Specifically, you need to figure out what your employees will be doing during such an event.

Of course, the first step is to make sure everyone is alive and well following a catastrophe. After this, you’ll need to think about where these people can work. Will they be able to simply log in from home? Do they need access to data systems stored in the office? Do they have all the equipment they need at home?

After considering this, TechTarget asks administrators to consider the possibility of employees being displaced from their homes. In such situations, work is the last thing on an employee’s mind. While there’s nothing wrong with that, it’s up to you to figure out what the next step is. TechTarget recommends gaining access to trained psychological professionals in order to help workers mentally readjust.

“When an employee loses their home, they generally don’t worry about work.”

3. Your workers are a major threat

Clearly, your employees are a valuable asset. That said, they’re also often the ones most responsible for disasters in the workplace. According to a 2014 report from IBM, 95 percent of data security disasters can be traced back to human error.

Although you trust your employees, this statistic shows that the best way to avoid a disaster may be to better train your employees. Exactly what that means depends on your industry and what employees have access to, but the point is that thinking about external factors like tornadoes and earthquakes while ignoring human error can have disastrous results.

Top things to consider in a colocation site

More data is being generated, collected and analyzed than ever before. Data storage options are also becoming major centerpieces for business continuity and disaster recovery strategies. As time progresses, it will be significantly more difficult for in-house IT to manage it all. Colocation has become an answer for organizations to achieve security, easy access and ample data storage alongside optimal uptime levels. Let's take a look at the top considerations in a colocation site:

1. Location

Where you decide to colocate is a major decision. Kansas City Business Journal contributor Dan Kurtz suggested choosing a facility close to your company's headquarters or near the majority of your employees. Having a colo facility in close proximity allows leaders to check on their systems and manage them appropriately. It will also help provide the connectivity and latency that users require. The facility should also be located away from areas prone to severe weather, with drainage that directs water away from the building. Details like these will enable organizations to avoid disaster and drive continuous operations.

The facility's location could impact your decision.

2. Security

Your colocation site should give you peace of mind that your data is protected. Data Center Journal noted that there should be multiple levels of security externally as well as internally. This could include monitoring systems, physical barriers and layered security zones. Keycard access, staffed checkpoints and alarm systems should all be standard features. Guards can constantly monitor visitor access and ensure that no unauthorized personnel are able to access your hardware or data. Ask what types of safeguards are in place as well as what Tier compliance the site has. These considerations could make a big difference in where you decide to colocate and what vendor you choose.

"Compare vendor quotes for comparable facilities and support services."

3. Pricing

The cost associated with colocation services can be a major factor in your decision. TechTarget contributor Julius Neudorfer noted that while this shouldn't be the crux of your choice, you should compare vendor quotes for comparable facilities and support services. The amount of power and cooling required will play a big part in your price, and each provider will have its own formula for supplying these utilities. Carefully consider your options based on the solutions provided, history of success and industry costs. These factors will help narrow down your options to the best colocation facility for your requirements.

As data becomes more of a priority for businesses, it will be important to store, manage and protect this asset effectively. It's often time-consuming and expensive to build and manage a data center on your own, but with colocation, you can have a data center without all the cost. The facility itself is governed by the provider, while you maintain your hardware. It will be important to look at the facility's location, security capabilities and service pricing compared to other vendors to guide you to the best solution. For more information on choosing a colocation site, contact ISG today.

How the cloud speeds up the disaster recovery process

If a critical system goes down or your data is lost, how long would it take your organization to restore operations? For many businesses, it will come down to what disaster recovery efforts are in place, and if these initiatives are successful in practice.

Unfortunately, a number of companies are not ready for emergency situations, and it can take a significant amount of time to restore operations. The 2014 State of Global Disaster Recovery Preparedness report found that nearly 25 percent of respondents lost most or all data center functions for hours or even days, with losses ranging from thousands to millions of dollars. This isn't even considering the reputation and customer losses that downtime incurs. Implementing cloud solutions can significantly speed up the disaster recovery process and improve your operations in a few key ways:

1. Accessible from anywhere

Backing up critical files and assets provides a layer of flexibility to ensure that you can access and restore systems quickly. In data loss situations, the cloud provides instant access to the necessary files, helping you avoid the heavy fines industry governing bodies can levy when information can't be recovered. This also minimizes productivity deficits and missed revenue opportunities. Accessibility to this essential data will help streamline recovery while reducing potential costs.

Cloud assets are available anywhere with an internet connection, speeding up recovery time.

What happens if work machines malfunction or the power goes out in your facility? You can no longer operate at that location and must wait for the issue to be fixed. The cloud makes it possible to conduct business outside of the office while parts are ordered to repair hardware or power is restored. However, as Ars Technica noted, this measure is only a short-term stopgap for many organizations. Your cloud disaster recovery plan must anticipate region-wide outages or other events to ensure that you're ready to cope with them if they occur.

2. Ease of use

Tape and disks have been used for system backups for decades. While these methods have their place in disaster recovery strategies, their age is starting to show, particularly when compared with cloud benefits. Tape and disks must be kept under particular conditions and are susceptible to environmental damage and deterioration. Backing up to and restoring data from these devices can also take a significantly long time and impede your operations.

Cloud backups run in the background on a scheduled basis, recording and saving changes to every essential document. This ensures that organizations have the most recent version of data on hand upon restoration. According to an infographic by ERS Computer Solutions, 52 percent of companies are moving to the cloud for disaster recovery efforts due to its ease of use, leaving the complexity of traditional solutions behind. In fact, 32 percent of respondents using cloud for disaster recovery are able to recover within 24 hours, compared with only 23 percent of those that don't leverage the cloud. An additional 20 percent of cloud users are able to restore operations within a few hours, while only 9 percent of non-cloud users could say the same.

"If an emergency happens, how do you know that your strategy will work?"

3. Automated testing

Many organizations believe that because they have a disaster recovery plan in place, that's good enough. However, if an emergency happens, how do you know that your strategy will work? Are you certain that your backup methods have been recording and restoring the right pieces of information? When disaster strikes, if you don't have the necessary information on hand or if your backups aren't working, it will take a lot of money and a significant amount of time to restore everything – if it can be restored at all.

You might be saying, "But we don't have time to test our plan every time a change is made." With the cloud, you can easily automate your disaster recovery testing to eliminate the guesswork and ensure a predictable, reliable recovery program, according to IT Biz Advisor. Evaluating your plan with automation will increase visibility into service-level agreements, adhere to regulatory requirements and reduce potential costs of a disaster.
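The shape of an automated drill is straightforward: run each recovery check, record pass/fail, and compare the measured recovery time against your service-level target. A hedged sketch (the check names and the single pass/fail summary are illustrative, not from any specific DR product):

```python
from datetime import timedelta

def evaluate_drill(results: dict, recovery_time: timedelta,
                   rto: timedelta) -> dict:
    """Summarize an automated disaster recovery drill.

    `results` maps check names (e.g. "database_restore") to booleans.
    The drill passes only if every check succeeded AND the measured
    recovery time stayed within the recovery time objective (RTO).
    """
    failed = [name for name, ok in results.items() if not ok]
    within_rto = recovery_time <= rto
    return {
        "passed": not failed and within_rto,
        "failed_checks": failed,
        "within_rto": within_rto,
    }
```

Run on a schedule, a summary like this turns "we think our plan works" into a recorded, repeatable result you can show against your service-level agreements.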

Disaster recovery can be a tricky pursuit, but with the cloud, organizations can be better prepared for an emergency. Cloud-based solutions are available anywhere and easy to use, driving faster restoration capabilities. Contact ISG today to find out more about how the cloud can improve your disaster recovery strategy.

Is your disaster recovery strategy foolproof?

No one wants to imagine what it would be like if an emergency situation impacted his or her business. Unfortunately, this is exactly what organizational leaders must do if they hope to get through various scenarios and recover quickly. According to a survey by Nationwide Insurance, more than 75 percent of small business owners don't have a disaster plan. To make matters worse, 52 percent estimate that it would take a minimum of three months to restore operations following a disaster. Here are a few tips to ensure your disaster recovery strategy is foolproof:

1. Take stock of hardware and software

If a machine or application goes offline, how would that impact your ability to operate? Business leaders must evaluate each piece of hardware and software to determine what items are mission-critical to support. This level of detail will help prioritize what elements are restored first and which ones can wait. CIO suggested keeping vendor contact information on hand at all times to quickly reach out for guidance. Managed service providers typically offer round-the-clock assistance, ensuring peace of mind during pivotal situations.

Businesses must take stock of essential hardware and software.

Keep in mind that any infrastructure changes must be reflected within your disaster recovery strategy. If these adjustments aren't accounted for, your business could be left without essential functions and prolong recovery time. Evaluate and adapt your plan every six months to accommodate any modifications.

2. Determine your disaster tolerance

Not all scenarios are the same. They all have different implications, severity levels and means of recovering. As The Business Journals contributor Heinan Landa noted, there are five event levels, ranging from inconvenient to catastrophic. It will be important to determine your tolerance threshold for each category based on how much downtime you can afford and your tolerance for lost data. This evaluation will help determine the best course to take to recover quickly and how your employees should respond to particular situations.

The tolerance level should take into account the variety of situations that could affect your specific business. These scenarios may stem from your industry, location, dependence on technology and budget. Set thresholds that reflect these characteristics to ensure you have the processes and solutions in place to restore operations quickly.
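In practice, these thresholds are often expressed as two figures per event level: the maximum downtime you can absorb (recovery time objective, or RTO) and the maximum data loss you can accept (recovery point objective, or RPO). A hedged sketch of such a tolerance table (the levels and figures are invented for illustration):

```python
from datetime import timedelta

# Illustrative tolerance table: event level -> (max downtime, max data loss).
TOLERANCE = {
    "inconvenient": (timedelta(hours=24), timedelta(hours=24)),
    "serious":      (timedelta(hours=4),  timedelta(hours=1)),
    "catastrophic": (timedelta(hours=1),  timedelta(minutes=15)),
}

def within_tolerance(level: str, downtime: timedelta,
                     data_loss: timedelta) -> bool:
    """True if an incident stayed inside the thresholds for its level."""
    max_down, max_loss = TOLERANCE[level]
    return downtime <= max_down and data_loss <= max_loss
```

Writing the table down this explicitly forces the conversation about what your business can actually afford at each severity level.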

"Running a drill provides critical insight into your DR plan."

3. Test your strategy

While it's great to have a disaster recovery plan established, you can't set it and forget about it. Effective disaster recovery requires training employees and testing the strategy regularly to identify any gaps. Of the organizations that have a DR plan, 40 percent test it once annually, according to CIO Insight. Another 22 percent test it rarely and 6 percent don't test it at all. These numbers are continuing to improve, but a number of businesses still aren't testing out their DR strategy as much as they should be.

Running a drill provides critical insight into just how effective your DR plan is. If there are any issues or gaps, make changes to the strategy to cover them – and then run the test again. Evaluating your plan on a regular basis will account for infrastructure and personnel changes and provide peace of mind that any adjustments to the plan will be effective.

Disaster recovery isn't the coolest topic for business leaders, but it's one of the most vital discussions to have to protect your organization. By taking inventory of critical systems, determining your disaster tolerance and testing your strategy regularly, you will be able to foolproof your DR plan. You can have peace of mind that you're prepared for emergency events and are able to restore operations quickly.

Will your DR solution come through in the clutch?

Customers value an organization's reliability and ease of access, so whenever unplanned downtime occurs, it not only costs businesses in lost sales, it also damages their reputation. To prevent this type of situation, many companies leverage a disaster recovery solution to get them back online as quickly as possible. However, are you confident that your DR solution will come through in the clutch? Let's take a look at how businesses can ensure that their DR plan works effectively when they need it most.

Document the plan

It's important to have the DR strategy fully documented for training purposes and to guide employees during difficult situations. When there's chaos in the office, it can help to have a policy ready to show workers what steps need to be taken to mitigate the problem. However, only 60 percent of companies actually have a documented DR plan, according to Zetta's "2016 State of Disaster Recovery" report. Of those that are confident in their DR plan, 78 percent have a formally documented plan. It's important to establish this type of mindset to help teams calmly and effectively handle unexpected events.

Organizations should routinely test their DR solution.

Test regularly

Once you have a plan in place, your work is just beginning. Even if you believe that your solution is going to be effective, how can you know that for sure? For example, if backups are a part of your plan, what happens if they malfunction or don't have the information that you require? It's vital to routinely test your DR solution down to the finest details to identify any holes or factors that hadn't been considered. As TechTarget contributor George Crump noted, you won't be able to do a complete test every time because it can be expensive and time-consuming. However, partial testing should be done on a quarterly basis, and a full-scale test should be executed once a year.

Testing is an important part of maintaining a DR solution to ensure that it stays in sync with the production environment. If new hardware or personnel are added into the mix, for example, the DR plan must reflect these changes as soon as possible. Testing offers a chance to review what items have changed since the previous test and allows decision-makers to update the plan. This will address any configuration changes, preventing data loss and other operational failures.

Utilize capable tools

"When downtime occurs, you'll want a DR solution that you know you can rely on."

When downtime occurs, you'll want a DR solution that you know you can rely on. If it lacks functionality or is too complex, it could just create more bottlenecks and make it challenging to restore operations quickly. Zetta's report found that 37 percent of respondents believe their DR solution is simply too difficult to use. It's important to have a tool that not only meets your needs but is also user-friendly. Choosing such a solution will help employees catch on quickly and effectively guide them through difficult situations.

When it comes to tools, there are a wide variety of options to choose from. However, it's important to get a solution that integrates well with other programs. TechTarget contributor Jon Toigo noted that businesses might be looking at storage hardware, continuous data protection, data backup, virtualization and cloud tools. Develop a DR strategy with testing in mind, particularly how all of these solutions fit together and the best way that they would be evaluated. There might be a tool that has a number of these features, making it easy to test and perfect for your DR needs.

Disaster can strike at any time and can come in a number of forms. With the right tools and vendor support, plan documentation and strategy testing, you can ensure that your DR solution comes through in the clutch.