Is a hybrid cloud solution right for your company?

Over the last decade, many companies have been shifting IT responsibilities to the cloud, a model that allows users and hardware to share data across vast distances. Cloud programs frequently take the form of infrastructure as a service. A company that can't afford in-house servers or a full-sized IT team can use cloud solutions to work around those hardware and personnel limitations.

Large companies like Amazon, Microsoft and Google all offer cloud services, propelling the space forward and innovating constantly. However, there are still limitations when it comes to cloud adoption. For as convenient as these services are, they are designed for broad, general-purpose use. Organizations that specialize in certain tasks may find a cloud solution limited in its capabilities.

Those businesses wishing to support service-oriented architecture may wish to consider a hybrid cloud solution, an approach becoming widespread across enterprise applications. As its name suggests, a hybrid cloud solution combines the power of a third-party cloud provider with the versatility of in-house software. While this sounds like an all-around positive, these solutions are not for every organization.

"Before businesses discuss a hybrid solution, they need three separate components."

Why technical prowess matters for hybrid cloud adoption
TechTarget listed three essentials for any company attempting to implement a hybrid cloud solution. Organizations must:

  1. Have on-premises private cloud hardware, including servers, or else a signed agreement with a private cloud provider.
  2. Support a strong and stable wide area network connection.
  3. Have purchased an agreement with a public cloud platform such as AWS, Azure or Google Cloud.

Essentially, before businesses can discuss a hybrid solution, they need all three separate components. An office with its own server room will still struggle with a hybrid cloud solution if its WAN cannot reliably link the private system with the third-party cloud provider. And here is the crux: companies without skilled IT staff need to think long and hard about what that connection would entail.
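Skilled staff can at least sanity-check that WAN link before committing to it. As a minimal illustration (not any provider's tooling; the endpoint in the comment is a placeholder), the sketch below measures TCP connect latency and jitter, two basics that determine whether the private and public halves of a hybrid cloud can talk reliably:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> list[float]:
    """Measure TCP connect round-trip times to a remote endpoint, in milliseconds."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        # A plain TCP connect approximates one network round trip.
        with socket.create_connection((host, port), timeout=5):
            pass
        rtts.append((time.perf_counter() - start) * 1000)
    return rtts

# Demo against a local listener; in practice you would point this at your
# provider's endpoint, e.g. tcp_rtt_ms("cloud.example.com", 443).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(8)
rtts = tcp_rtt_ms("127.0.0.1", listener.getsockname()[1])
print(f"median RTT: {statistics.median(rtts):.1f} ms")
print(f"jitter (max - min): {max(rtts) - min(rtts):.1f} ms")
listener.close()
```

A link that shows high or wildly varying round-trip times in a check like this is a warning sign before any hybrid workload is moved onto it.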

Compatibility is a crucial issue. Businesses can have the most sophisticated, tailored in-house cloud solution in the world but, if it doesn't work with the desired third-party cloud software, the application will be next to useless. Nor is it just a matter of software. Before a hybrid cloud solution can be considered feasible, equipment like servers, load balancers and the local area network all need to be examined to see how well they will function with the proposed solution.

After this preparation is complete, organizations will need to deploy a hypervisor to run and maintain virtual machines. Once this is accomplished, a private cloud software layer will be needed to enable many essential cloud capabilities. Then the whole interface will need to be reworked with the average user in mind to create a seamless experience.

In short: in-house, skilled IT staff are essential to successfully utilizing a hybrid cloud solution. If businesses doubt the capabilities of any department, or question whether they have enough personnel to begin with, it may be better to hold off on hybrid cloud adoption.

A poorly implemented solution could cause delays, lost data and, worst of all, potentially disastrous network data breaches.

Cloud technology has been designed to keep business data secure. Poorly installing a hybrid solution could weaken this stability.

The potential benefits of the hybrid cloud
However, if created the right way, a hybrid cloud solution brings a wide array of advantages to many enterprises, particularly those working with big data. According to the Harvard Business Review, hybrid cloud platforms can bring the best of both solutions, including unified visibility into resource utilization. This improved overview will empower companies to track precisely which employees are using what and for how long. Workload analysis reports and cost optimization will ultimately be improved as organizations can better direct internal resources and prioritize higher-performing workers.

Overall platform features and computing needs will also be fully visible, allowing businesses to scale with greater flexibility. This is especially helpful for enterprises that see "rush periods" near the end of quarter/year. As the need rises, the solution can flex right along with it.

Hybrid cloud services are also easier to manage. If implemented properly, IT teams can harmonize the two infrastructures into one consistent interface. This will mean that employees only need to become familiar with one system, rather than learning different apps individually.

Companies processing big data can segment processing needs, according to the TechTarget report. Information like accumulated sales, test and business data can be retained privately while the third party solution runs analytical models, which can scale larger data collections without compromising in-office network performance.
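That segmentation can be expressed as a simple placement policy. The sketch below is purely illustrative (the job names and the 100-CPU-hour burst threshold are invented for the example): sensitive data stays on the private side, while large non-sensitive analytics jobs burst to the public cloud:

```python
def place_workload(sensitive: bool, cpu_hours: float) -> str:
    """Toy placement policy for a hybrid cloud: sensitive data never leaves
    the private side; large non-sensitive jobs burst to the public cloud."""
    if sensitive:
        return "private-cloud"
    return "public-cloud" if cpu_hours > 100 else "private-cloud"

jobs = [
    ("sales-record-retention", True, 5),        # private: sensitive data
    ("quarterly-analytics-model", False, 400),  # public: big, non-sensitive
    ("nightly-report", False, 2),               # private: too small to burst
]
for name, sensitive, cpu_hours in jobs:
    print(f"{name} -> {place_workload(sensitive, cpu_hours)}")
```

Real policies would weigh cost, latency and compliance as well, but the principle is the same: the hybrid model lets the rule itself decide where each workload runs.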

As The Practical Guide to Hybrid Cloud Computing noted, this type of solution allows businesses to tailor their capabilities and services in a way that directly aligns with desired company objectives, all while ensuring that such goals remain within budget.

Organizations with skilled, fully formed IT teams should consider hybrid cloud solutions. While not every agency needs this specialized, flexible data infrastructure, many businesses stand ready to reap considerable rewards from the hybrid cloud.

3-2-1 Backup Rules Best Practices

Companies that back up to tape as their offsite backup often aren’t aware of what recovering from tape looks like until they unfortunately have to live through it. Depending on the nature of the failure and the extent of the data involved, that type of recovery can take days to restore “business as usual” functionality.


What Backup Is… and What It Isn’t

Data backups are critical for data protection and recovery, but they should not be a substitute for other important parts of your IT strategy:

  • Backup is for data protection and targeted item recovery.
  • It is not an archive. Archives ideally will be indexed for search, have a managed retention policy, and be stored on less expensive storage media.
  • It is not disaster recovery. It is nearly impossible to test a full environment recovery scenario when relying on this method, and it often requires 100% more equipment overhead to keep empty equipment on standby, providing no usefulness or return on investment.
  • It is not a failover solution. Recovery times with this method should be measured in weeks, not hours.

Snapshots are not backup:

  • Snapshots can be used as one part of a backup strategy, but provide no protection on their own in scenarios where the storage devices have failed or are no longer available.
  • Snapshots are usually not very granular and are commonly the recovery method of last resort.
  • Snapshots are not disaster recovery on their own, only a part of a comprehensive plan.

An untested data recovery plan is both useless and a waste of the time spent creating it:

  • Make time for testing; it will always be worth it.
  • Do not let a single person be the point of failure. Involve many members of the team in the process so that, when the time comes to execute your plan, it does not have to wait for the only one who knows how.
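The 3-2-1 rule in this section's title is simple to state: keep at least three copies of your data, on at least two different storage media, with at least one copy offsite. As a minimal sketch (the inventory fields and location names are invented for illustration), checking a backup inventory against the rule might look like:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g. "backup-nas", "tape-vault", "cloud-bucket"
    medium: str     # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """3 copies of the data, on at least 2 distinct media, 1 of them offsite."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

inventory = [
    BackupCopy("prod-server", "disk", False),   # the live data is copy #1
    BackupCopy("backup-nas", "disk", False),
    BackupCopy("cloud-bucket", "object-storage", True),
]
print(satisfies_3_2_1(inventory))  # True: 3 copies, 2 media, 1 offsite
```

The point of automating the check is the same as the testing advice above: a rule nobody verifies is a rule that quietly stops being true.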







The presidential debate and the future of American cybersecurity

Cybersecurity is becoming less of an individual problem and more of an issue that entire states need to deal with. Due to the importance of this issue, both presidential candidates were asked in the recent debate to discuss the current state of cybersecurity within the U.S. as well as what they plan to do when they get into the Oval Office. Their responses – as well as their previous actions – could very well foretell the future of America’s cybersecurity efforts.

Both candidates need to study up

During the debate, moderator Lester Holt asked the candidates about their opinions concerning the current state of U.S. cybersecurity. Hillary Clinton was quick to jump on Russia as a major antagonist. In fact, she went so far as to blame Putin himself for the hack levied against the Democratic National Committee. She also took a very hard line against anyone considering a cyberattack against America, saying that the U.S. would not “sit idly by” and allow foreign entities to breach private American data.

That said, Clinton has certainly had trouble with cybersecurity in the past. She set up her own private email server against State Department regulations, which was eventually compromised by a hacker.

Clinton has been hacked before. A hacker was able to gain access to Clinton’s private email server.

Donald Trump was also adamant that America needs to improve its defenses, although his response was slightly different. As Government Technology’s Eyragon Eidam pointed out, Trump brought up the uncertainty of cyberattacks like the one that befell the DNC. When discussing this attack, the candidate said it could have been anyone from Russia to Iran or even “somebody sitting on their bed that weighs 400 pounds.”

While it’s certainly true that America’s enemies are no longer visible on a map, broadly painting hackers as obese people downplays the importance of this issue.

New federal CISO’s job hangs in the balance

Although both of the candidates will continue to duke it out, the current president has decided to take action. President Obama has created the position of federal chief information security officer, and he’s appointed retired Brigadier General Gregory J. Touhill to the post. Touhill has more than 30 years of experience in the U.S. military, much of which was spent in IT. He’s also been awarded the Bronze Star Medal, according to his biography on the Air Force’s website. The person in this position is tasked with creating a uniform cybersecurity plan for federal government organizations.

“The federal CISO is an appointed position.”

While it’s certainly good to see the White House attempting to tackle the widespread security problems present across the government, the federal CISO is an appointed position. This means the current president is allowed to choose who can fulfill the role, which puts Touhill in a tenuous position. The next president will enter office on January 20, 2017, which means Touhill has around four months to implement some changes.

Whether the next president keeps Touhill will depend entirely on who wins. If Trump is voted into office, he’ll most likely want a fresh slate and appoint his own CISO. There’s a good chance that Clinton will do the same – however, she’s probably Touhill’s only hope at job security. He’ll have to make some huge leaps in the next few months if he hopes to impress.

Could a network assessment have saved Southwest from major downtime?

Southwest Airlines has been having a pretty turbulent few weeks. First, starting on July 20, the organization had one of the largest IT outages ever to affect a major airline. Now, two unions associated with the company are demanding that CEO Gary Kelly step down or be fired, according to David Koenig of The Tribune of San Luis Obispo.

Although it was originally estimated that the downtime cost as little as $5 million, one Southwest representative stated that it’s most likely going to be “into the tens of millions.” With so much money being lost to a technical failure, the question remains: How did this happen, and was it preventable?

One router started all the trouble

Koenig reported that all of these IT issues stemmed from a single router. Basically, this piece of equipment failed in an unpredictable way, which eventually led to other systems being knocked offline. Southwest has not disclosed the specific details, but the scale of this particular outage suggests that the network associated with this router was not properly set up.

“Networks need redundant paths so that a single failure doesn’t become an outage.”

As their name implies, these devices route information to its intended destination. Data is generally bounced between multiple locations before arriving where it’s being sent. Generally, this means a network has multiple redundant paths, so no single device failure can cause an outage. If it’s true that one router’s failure caused this event, then Southwest most likely had a poorly engineered network. FlightStats stated that around 8,000 flights were affected in this incident, and a single router simply should not have the ability to affect that many planes.

The conclusion to be made here is that Southwest should have tested its network more rigorously. Network assessments are incredibly important for determining weak points within a particular IT system, such as how a single router could affect thousands of flights. Simple tests such as these could have easily uncovered this point of failure, allowing Southwest to take action to mitigate the risks of such a catastrophic outage.
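One way assessors find such weak points is to model the network as a graph and look for articulation points: nodes whose failure disconnects everything behind them. A minimal sketch using the classic depth-first-search algorithm (the topology below is invented, not Southwest's actual network):

```python
def articulation_points(adj: dict[str, list[str]]) -> set[str]:
    """Return nodes whose removal would partition the network (Tarjan's DFS)."""
    disc, low, ap = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge: reachable ancestor
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # No path from v's subtree bypasses u: u is a cut vertex.
                if parent is not None and low[v] >= disc[u]:
                    ap.add(u)
        if parent is None and children > 1:    # root with 2+ DFS subtrees
            ap.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return ap

# A topology where every path between sites runs through one core router.
topology = {
    "office-a": ["core-router"],
    "office-b": ["core-router"],
    "core-router": ["office-a", "office-b", "datacenter"],
    "datacenter": ["core-router"],
}
print(articulation_points(topology))  # {'core-router'}
```

If an assessment like this flags a device, the fix is a redundant path around it, so that one failure no longer strands every site behind the router.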

Network assessments can prevent more than downtime

Although downtime is certainly something businesses should work to avoid, it isn’t the only problem that network assessments can unveil. These tests also help companies determine their preparedness in terms of cybersecurity. Perhaps the best recent example of this is the massive heist levied against Bangladesh Bank.

At its most basic, the scheme worked like this: Hackers gained access to a global banking network and tricked financial institutions into sending money to fraudulent accounts. When all was said and done, the criminals involved got away with $81 million, according to Serajul Quadir of Reuters. After some investigation, it was discovered that the bank was relying on $10 network switches for the banking system. On top of that, Bangladesh Bank had no firewall protecting private financial data.

This is one of the biggest heists in history. Hackers got away with millions from Bangladesh Bank.

IT companies are generally surprised to hear when small businesses don’t have firewalls, so the thought of an institution handling billions of dollars lacking this most basic of cybersecurity tools is simply mind-boggling. To top it off, the heist could have been so much worse. The criminals were originally trying to get closer to $1 billion, but their plans were foiled when they accidentally misspelled the name of a financial institution.

Simple mistakes such as those made by Bangladesh Bank are exactly what network assessments are designed to catch. IT employees at these organizations often need to focus on keeping systems running, and cybersecurity can sometimes take a backseat. As this incident shows, this can often have disastrous results, and companies need to be aware of the consequences of letting something like this go under the radar.

Let ISG Technology help preserve your company’s image

Clearly, missing even the smallest detail in your network’s setup could seriously affect both your company’s finances and its client-facing image. No one wants to put their money in a bank that can’t keep it safe, and consumers certainly don’t want to spend money on an airline that has a history of leaving passengers stranded. As such, it might be time to have your company’s IT infrastructure checked out by an experienced professional.

ISG Technology’s experts have spent years investigating and solving some of the most complex network problems out there, and we can help make sure your company’s name isn’t dragged through the mud. If you’d like to find out how you can benefit from a free consultation, contact one of our representatives today.

Schedule Your Free Consultation with ISG

Colocation – 8 Terms to Know

Colocation continues to evolve every year as needs for storing mission-critical information change. For many companies, balancing the profitability of IT with constant repairs, downtime, and continuously improving security has become overwhelming. As such, colocation is in demand, simply because it makes good business sense.

When determining if colocation is the best solution for your company and how it aligns with your company’s long-term strategy, you may come across a few new terms. To help you during the discovery process, we created the following list of 8 key colocation terms that you can share with your team:

1. Hybrid Colocation – the act of storing data both on and off-site.

2. Rack Space – the amount of physical space you will need to house your servers off-site.
3. Cabinet Space – a cabinet is the term commonly used to reference one full rack (42-47 U).  Half and full racks as well as space by the unit can be rented at most colocation facilities to house your company’s servers.
4. Cage Space – provides an added layer of physical security.  The additional layer of protection provides you with the peace of mind that no one will have access to your highly sensitive data.
5. Uptime – refers to the availability of your servers and is often measured as a percentage.  A data center’s estimated uptime is categorized by tiers.  Tiers range from 1-4, or roughly 99.671% – 99.995% expected uptime.  What is your uptime?
6. N+1 Redundancy – maintaining one independent backup component beyond what is required, so that a single failure does not make your data unavailable.  A common example is a backup generator.
7. Service Level Agreement (SLA) – a contract outlining what level of service the provider will deliver and what consequences there will be for not abiding by those commitments.  Addresses: performance, reliability, and support.
8. SSAE 16 SOC 2 – a detailed auditing report defined by the AICPA, designed specifically to evaluate a data center’s security, availability, processing integrity, confidentiality, and privacy.  It also replaces the use of SAS 70.
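The tier percentages in term 5 translate directly into hours of allowed downtime per year, which is often the easiest way to compare facilities. A quick sketch using the commonly cited Uptime Institute tier figures (the 365-day year is a simplification):

```python
def annual_downtime_hours(uptime_pct: float) -> float:
    """Maximum downtime per (non-leap) year implied by an uptime percentage."""
    return (1 - uptime_pct / 100) * 365 * 24

# Uptime Institute Tiers I-IV expected availability
for tier, pct in [("I", 99.671), ("II", 99.741), ("III", 99.982), ("IV", 99.995)]:
    print(f"Tier {tier}: {pct}% uptime -> {annual_downtime_hours(pct):.1f} hours/year down")
```

Run this and the gap becomes concrete: a Tier I facility may be down more than a full day per year, while a Tier IV facility allows well under an hour.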

To learn more about Colocation, download our free white paper: 4 Factors to Consider with Colocation.


ISG Offers Veeam Cloud Connect Replication

ISG Technology Expanding Partnership With Veeam
ISG Technology’s Cloud Services business unit, which provides cloud and hosted solutions for small-to-midsized companies throughout the Midwest and beyond, is pleased to announce yet another expansion of its Cloud & Service Provider Gold partnership with Veeam®, innovative provider of solutions that deliver Availability for the Always-On Enterprise™. In addition to its status as a provider of cloud backup repositories using Veeam Cloud Connect, ISG Technology now also supports Cloud Connect Replication. What this means for Veeam clients is that VMs can be replicated to the ISG Cloud via a standard Internet connection, providing an offsite cloud environment to assist in executing disaster recovery plans.

ISG Technology continues to provide enterprise-class solutions that help clients meet long-term business objectives through technology. According to Matt Brickey, Vice President of ISG’s Cloud & Hosting Solutions, “Our relationship with Veeam provides a winning scenario – both for ISG and for our clients. Developing Veeam-powered solutions enables us to provide large-scale, multi-tenant Backup-as-a-Service and DR-as-a-Service products while ensuring the best combination of simplicity and value for our clients.”

If you are interested in hearing more about cloud backup and replication opportunities – whether you own your own Veeam licensing or you would like to explore a fully hosted solution – contact your Account Executive or a Cloud Specialist at cloud@isgtech.com.

Lessons learned from the Bangladesh Bank hack

Years ago, bank robberies were a very physical affair. Criminals donned ski masks and shot automatic weapons in the air, shouting for tellers to step away from the silent alarm buttons. That said, it would appear thieves have decided that this is just a little too much work. Hacking banks in order to steal money allows for the same reward without having to deal with a hostage negotiator.

In fact, the recent cyberattack levied against Bangladesh Bank shows just how lucrative these schemes can be. The hackers involved in this scenario made off with around $81 million, which is more loot than any ski-masked thug could ever carry away. However, perhaps the most interesting part of this whole debacle is that this is nowhere near what the culprits originally intended to get. Investigators have discovered that the original plan was to take close to $1 billion when all was said and done, according to Ars Technica.

Unfortunately for the individuals involved, a simple typo wrecked what could have been the biggest bank heist of all time. In a transaction meant for the Shalika Foundation, “Foundation” was misspelled as “Fandation,” which tipped employees off that something was afoot. Regardless, this was still a massive undertaking that demands intense review.

“Bangladesh Bank isn’t completely free of blame.”

How did they get in?

To understand how this whole scheme began, it’s important to comprehend how Bangladesh Bank sends and receives funds. Institutions like this rely on SWIFT software, which basically creates a private network between a large number of financial organizations. This lets them send money to each other without having to worry about hackers – or so the banks thought.

Gaining access to the transactions within this network was basically impossible unless someone could compromise a bank’s internal IT systems. This is exactly what the criminals did.

However, Bangladesh Bank isn’t completely free of blame here. The only reason that hackers were able to gain entry was because the financial institution was relying on old second-hand switches that cost about $10 each. Considering how much was at stake, pinching pennies in such a crucial department seems incredibly irresponsible in hindsight. What’s more, the bank didn’t even have a firewall set up to keep intruders out.

Once hackers bypassed this low level of security, they were given free rein to do as they pleased. Accessing Bangladesh Bank’s network allowed them to move on to SWIFT, as the cheap switches didn’t keep these two separate. However, the really interesting part of this whole criminal act was how they took the money without anyone noticing.

Why weren’t they discovered sooner?

In order to make off with the cash, the criminals had to access a piece of software called Alliance Access, which is used to send money. The hackers used it to issue fraudulent transfer requests. However, Alliance Access also records transactions. This was a big problem for the thieves, as they couldn’t keep operating if the records revealed the theft.

To fix this, the hackers inserted malware that disrupted the software’s ability to properly record the money being moved. On top of that, this malicious code also modified confirmation messages about the transactions. This allowed the criminals to continue operating in obscurity, racking up millions of dollars without anyone being the wiser. In fact, they would have gotten close to $1 billion had one of these altered reports not contained a spelling error.
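A general defense against this kind of tampering (this is an illustration of the principle, not a description of SWIFT's actual controls) is to authenticate each confirmation message with a keyed hash, so that any alteration is detectable. A minimal sketch using Python's standard library; the key and message contents are invented:

```python
import hashlib
import hmac

# Hypothetical shared secret, held somewhere the compromised host can't reach.
SECRET_KEY = b"shared-secret-held-off-host"

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag for a confirmation message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Constant-time check that the message matches its tag."""
    return hmac.compare_digest(sign(message), signature)

confirmation = b"TRANSFER 20,000,000 USD to Shalika Foundation"
tag = sign(confirmation)

# Malware altering even one character invalidates the tag.
tampered = b"TRANSFER 20,000,000 USD to Shalika Fandation"
print(verify(confirmation, tag))  # True
print(verify(tampered, tag))      # False
```

With integrity checks like this in place, a modified confirmation fails verification immediately instead of circulating unnoticed for days.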

A small error cost these hackers hundreds of millions. The hackers could have made so much more money if they’d checked their spelling.

However, understanding so much about how Bangladesh Bank’s system worked has pointed investigators to the notion that this was an inside job. In fact, The Hill reported that “people familiar with the matter” know that a major suspect is a person who works at the bank. No one has been named yet, but getting an employee in on the job certainly makes sense.

Network assessments are a must

Regardless of whether or not this turns out to be an inside job, the fact still remains that Bangladesh Bank was incredibly vulnerable to a hack like this. Relying on cheap network switches is bad enough, but not having any sort of firewall is a major hazard that modern institutions simply cannot allow.

This is why every company should consider receiving a network assessment from ISG Technology. Our skilled experts know how to spot glaring vulnerabilities such as these, and can suggest fixes to ensure the security of private data.