Cities increasingly utilizing the cloud for disaster recovery services

With state and local governments increasingly feeling the pressure to streamline IT operations to control costs and enhance performance, a growing number of cities are beginning to pursue the most up-to-date tools and hardware architectures to modernize their data centers.

At the same time as there is an emphasis on physical devices, city IT managers and CIOs are also utilizing the cloud in their data center renovations. Instead of using tight budgets on new data center facilities, cities are able to implement pay-as-you-go cloud services to consolidate data and programs from different government agencies in an effective way. Many local agencies are employing the cloud to handle spikes in data center workloads, or as a backup service or a disaster recovery utility.

Under the supervision of CIO Vijay Sammeta, the city of San Jose is implementing plans to use the cloud as a backup mechanism for the city’s critical IT infrastructure. Over the next 12 to 18 months, San Jose will be transitioning virtual machines to the cloud and using the technology to manage various applications, as well as for backup and disaster recovery services.

“When you think about all the components of a highly available service delivery stack: network, servers, database and the applications, it starts [to] make a lot of sense to simply let someone else worry about that and just build redundancy to the Internet,” said Sammeta.

The cloud as an alternative to physical facilities
The city of Asheville, North Carolina, has also turned to the cloud for its disaster recovery plan. The city was set to build a $200,000 disaster recovery center as part of a fire station construction project, but the project never came together, so Asheville needed a plan B. Utilizing the cloud allows the city to enter disaster recovery mode only when it is critically necessary. The ability to scale for need saves Asheville thousands of dollars a year compared with the cost of maintaining hardware in a physical facility. With the new system, the city is also able to bring a number of previously uncovered applications into the disaster recovery plan.

In Michigan, Oakland County is using the cloud to supplement its overworked data center facilities, according to CIO Phil Bertolini. Implementing a cloud infrastructure allows the county to transition some systems to the cloud, taking computing pressure off of the data centers’ servers. The town of Newington, Massachusetts is also getting in on the cloud craze, implementing services to extend the city’s business continuity and disaster recovery capabilities.

FBI in search of cloud storage services

The FBI announced this month that it is seeking ideas and suggestions from the private sector about how to construct and implement a large-scale cloud infrastructure. The agency's Criminal Justice Information Services Division, which manages the criminal background check system, crime statistics and fingerprint services, is hoping to transition its systems and databases to a cloud environment.

Experts say the move could help cut costs and make the agency's operations more efficient. According to industry expert Trey Hodgkins, the FBI could enhance its mission by transferring services and applications to a cloud platform. In an interview with Federal Times, Hodgkins said that FBI systems and databases would be able to run more efficiently and at a lower cost than legacy systems that frequently run into trouble when trying to connect to new technology.

"Building a cloud infrastructure gives the FBI the flexibility to decide how much they want to use and what controls and authentications they want to deploy," Hodgkins said.

The cloud environment employed by the FBI must span two data centers at least 1,500 miles apart, be able to scale to 2.3 petabytes of data and replicate data between the two facilities. The platform should also support a wide range of capabilities, including pay-as-you-go pricing, scalability and the ability to access all stored information securely and in real time. The agency also requires the infrastructure to include virtualization, rapid elasticity, resource pooling, continuous monitoring and centrally managed multi-site operations.
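
As a rough illustration only, the requirements described above could be captured in a declarative configuration along the following lines; the structure and field names are assumptions for the sake of example, not drawn from any actual FBI solicitation.

```python
# Hypothetical sketch: the FBI's stated cloud requirements expressed as a
# declarative configuration object. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CloudRequirements:
    min_site_separation_miles: int = 1_500      # two data centers at least 1,500 miles apart
    storage_capacity_petabytes: float = 2.3     # must scale to 2.3 petabytes
    cross_site_replication: bool = True         # replicate data between the two facilities
    pay_as_you_go: bool = True
    rapid_elasticity: bool = True
    resource_pooling: bool = True
    continuous_monitoring: bool = True
    centrally_managed_multisite: bool = True
    required_capabilities: list = field(default_factory=lambda: [
        "virtualization",
        "scalability",
        "secure real-time access to stored information",
    ])

print(CloudRequirements())
```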

The FBI is hoping to make a five-year commitment with a contractor to help create and run the public cloud system.

New study finds companies increasingly utilizing cloud for disaster recovery

As technology becomes more prevalent in business and companies increasingly rely on massive amounts of data to complete work, the need for a secure backup service and disaster recovery plan is greater than ever. In a recent webinar sponsored by Microsoft, Forrester analyst Noel Yuhanna recommended that enterprises strategically implement public cloud services for disaster recovery to ensure business continuity.

According to Yuhanna, more than 70 percent of enterprises currently have to manage at least two terabytes of data, but at the rate new information is being created that could become petabytes in just a few years. In the webinar, Yuhanna praised the cloud for its ability to automate the data backup process and include encryption while not requiring staff to manage the day-to-day operations of the servers and storage platform.
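
As a minimal sketch of the kind of automated, encrypted backup Yuhanna describes, the following assumes an S3-compatible object store accessed through the boto3 library; the bucket and path names are hypothetical.

```python
# Minimal sketch: upload a nightly database dump to object storage with
# provider-managed encryption at rest. Bucket and path names are hypothetical.
import datetime
import boto3

def backup_database_dump(dump_path: str, bucket: str = "example-db-backups") -> str:
    """Upload a database dump with server-side encryption enabled."""
    s3 = boto3.client("s3")
    key = f"backups/{datetime.date.today().isoformat()}/{dump_path.split('/')[-1]}"
    s3.upload_file(
        Filename=dump_path,
        Bucket=bucket,
        Key=key,
        ExtraArgs={"ServerSideEncryption": "AES256"},
    )
    return key

# A scheduler (cron, Windows Task Scheduler, etc.) would run this nightly,
# so no staff have to manage the day-to-day backup operation by hand.
```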

Forrester recently conducted a survey of more than 200 database backup and operations professionals on three continents and found that 15 percent of companies are currently utilizing the cloud for database backups. This number has doubled in the last year, according to Yuhanna. The report also found that users were driven to the cloud for backup and disaster recovery services due to the need for constant application availability, cost savings and organizational agility.

Cloud offers multiple DR benefits
The cloud is ideally suited for disaster recovery because it is able to replicate data that resides in a physical location without having to create a redundant facility to house it. It is also a cost-effective option, as backups and archived data often sit unused for years at a time with few updates and don’t need to be stored in an expensive physical facility. The cloud therefore creates a dual benefit of storing information in a cost-effective environment that is also offsite in case of a disaster.

The Forrester survey also discovered that the key reasons companies utilized the cloud for backup and disaster recovery services were the ability to save money on data storage and administrative costs and provide more frequent backups.

“You could almost be guaranteed that if you decide to put some data in the cloud that, whether it’s an archive or backup, the next year it’s going to be cheaper to store it there,” explained Forrester principal analyst Dave Bartoletti.

Finally, the report found that 57 percent of respondents reported the use of cloud backup and disaster recovery services actually helped to improve their company’s service level agreements, as processes and systems become more reliable with the cloud.

Cloud computing in education market grows as schools see benefits

As the benefits of cloud computing become increasingly obvious, a growing number of industries are beginning to embrace it. The newest sector adopting cloud services is education, as the cloud offers students and teachers the ability to access a variety of applications and resources easily and economically. Added security, cost-effectiveness and ease of use are also driving the adoption of cloud computing in educational institutions of all levels and sizes, along with built-in disaster recovery services and the promise of stronger communication and collaboration between students.

Due to the increase in adoption of cloud technology by schools, the global cloud computing in education market is growing at a rapid pace. MarketsandMarkets recently released a report projecting the global market will grow to more than $12 billion by 2019, an increase of $7 billion over five years. The study found that reduced costs, increased flexibility and enhanced infrastructure scalability were major market drivers, as was the growing need for schools to be technologically advanced. 
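
Taken at face value, those figures imply a market of roughly $5 billion in 2014 growing at a compound annual rate of about 19 percent; the quick back-of-the-envelope check below is derived from the numbers above, not taken from the report itself.

```python
# Back-of-the-envelope arithmetic implied by the cited figures.
end_value = 12e9                 # "more than $12 billion by 2019"
start_value = end_value - 7e9    # "an increase of $7 billion over five years"
years = 5
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied 2014 market size: ${start_value / 1e9:.1f}B")
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 19%
```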

"The significant production of inexpensive computers, Internet broadband connectivity, and loaded learning content has created a worldwide trend in which Information and Communication Technology is being used to alter the education process," the report stated. "Cloud computing is beginning to play a key role in this revolution."

The report went on to say that North America is expected to remain the leader of the market, but the Asia-Pacific and European regions are projected to show the most significant traction.

A separate survey from a technology provider found that almost 50 percent of respondents in higher education made adopting cloud computing a priority because their employees were increasingly utilizing cloud applications and mobile devices in their work. The study also found that IT professionals in higher education expect to save an average of 20 percent over the next three years due to the implementation of cloud services.

Schools see multiple benefits with cloud 
A major reason many schools are adopting cloud technology is the ability to cut spending, not just on the cloud services themselves but through reduced costs for office supplies like paper and ink. With cloud-based services, teachers are able to make lesson plans, homework and reading available online instead of having to print and copy hundreds of pages each semester.

Another major benefit of the cloud is that disaster recovery and online backup services are built right into the infrastructure, which comes in very handy as students increasingly complete work electronically. Schools are also benefiting from the large amounts of cloud storage available. Student records can be encrypted and kept in the cloud, making them easy to share with necessary parties while at the same time improving security.

With the cloud, students are able to collaborate more easily and effectively as they can work on and edit documents simultaneously. Sharing and transmitting documents is also made easier which improves the ability to receive feedback and improve work. The increased accessibility offered by the cloud also allows students to work on assignments from anywhere with an Internet connection.

New tests discover 'no-wait data center' technology

Researchers from the Massachusetts Institute of Technology recently announced that they have created what they are calling a 'no-wait data center'. According to ZDNet, the researchers were able to conduct experiments in which network transmission queue length was reduced by more than 99 percent. The technology, dubbed FastPass, will be fully explained in a paper being presented in August at a conference for the Association for Computing Machinery special interest group on data communication.

The MIT researchers were able to use one of Facebook's data centers to conduct testing, which showed reductions in latency that effectively eliminated normal request queues. The report states that even in heavy traffic, the latency of an average request dropped from 3.65 microseconds to just 0.23 microseconds.

While the system's increased speed is a benefit, the aim is not to use it for increased processing speeds, but to simplify applications and switches to shrink the amount of bandwidth needed to run a data center. Because of the minuscule queue length, researchers believe FastPass could be used in the construction of highly scalable, centralized systems to deliver faster, more efficient networking models at decreased costs.

Centralizing traffic flow to make quicker decisions
In current network models, packets spend much of their time waiting for switches to decide when each packet can move on to its destination, and the switches have to make those decisions with limited information. Instead of this traditional decentralized model, FastPass works on a centralized system and utilizes an arbiter to make all routing decisions. This allows network traffic to be analyzed holistically and routing decisions to be made based on the information derived from that analysis. In testing, researchers found that a single eight-core arbiter was able to handle 2.2 terabits of traffic per second.

The arbiter is able to process requests more quickly because it divides the processing power needed to calculate transmission timing among its cores. FastPass arranges workloads by time slot and assigns requests to the first available server, passing the rest of the work on to the next core, which follows the same process.

"You want to allocate for many time slots into the future, in parallel," explained Hari Balakrishnan, an MIT professor of electrical engineering and computer science. According to Balakrishnan, each core searches the entire list of transmission requests, picks one to assign and then modifies the list. All of the cores work on the same list simultaneously, efficiently scheduling traffic.

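The sketch below illustrates this style of centralized timeslot allocation in simplified, single-threaded form; it is an interpretation of the description above, not the researchers' actual implementation, which splits the allocation work across the arbiter's cores.

```python
# Simplified sketch of FastPass-style timeslot allocation: in each timeslot,
# a source sends at most one packet and a destination receives at most one.
# The request format (src, dst, packet count) is an assumption for illustration.
from collections import deque

def allocate_timeslots(requests, num_timeslots):
    """Return a schedule mapping each timeslot to a list of (src, dst) sends."""
    schedule = {t: [] for t in range(num_timeslots)}
    pending = deque(requests)
    for t in range(num_timeslots):
        busy_src, busy_dst = set(), set()
        remaining = deque()
        while pending:
            src, dst, count = pending.popleft()
            if src in busy_src or dst in busy_dst:
                remaining.append((src, dst, count))   # conflicts wait for a later slot
                continue
            schedule[t].append((src, dst))
            busy_src.add(src)
            busy_dst.add(dst)
            if count > 1:
                remaining.append((src, dst, count - 1))
        pending = remaining
    return schedule

if __name__ == "__main__":
    demo = [("A", "C", 2), ("B", "C", 1), ("A", "D", 1)]
    for slot, sends in allocate_timeslots(demo, 4).items():
        print(slot, sends)
```
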
Arbiter provides benefits for all levels
Network architects will be able to use FastPass to make packets arrive on time and eliminate the need to overprovision data center links for traffic that can arrive in unpredictable bursts. Similarly, developers of distributed applications can benefit from the technology by using it to split up problems and send them to different servers around the network for answers.

"Developers struggle a lot with the variable latencies that current networks offer," said the report's co-author Jonathan Perry. "It's much easier to develop complex, distributed programs like the one Facebook implements."

While the technology's inventors admit that processing requests in such a manner seems counterintuitive, they were able to show that using the arbiter dramatically improved overall network performance even after the lag necessary for the cores to make scheduling decisions.

The FastPass software is slated to be released as open source code, but the MIT researchers warn that it is not yet production-ready. They believe the technology will begin to appear in data centers sometime in the next two years.

Companies look to cloud for data center efficiency

As companies continue to see the value of collecting information on clients and business processes, the amount of data is increasing at an enormous rate. According to Smart Data Collective, 90 percent of the data being stored today was created within just the past two years. With companies' information growing, the need for larger, more energy-hungry data centers also increases.

As The New York Times reported, one large-scale data center can consume as much energy as a small town. And while data centers eat up an incredible amount of energy, only between 6 and 12 percent of it goes to actual computation. The rest is largely spent keeping servers idling in case of usage spikes.
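
To put those percentages in concrete terms, the arithmetic below assumes a hypothetical facility drawing 10 megawatts; only the 6 to 12 percent utilization range comes from the reporting above.

```python
# Back-of-the-envelope illustration: the 10 MW facility draw is an assumption,
# the 6-12 percent computation share comes from the figures cited above.
facility_draw_mw = 10.0
for compute_share in (0.06, 0.12):
    compute_mw = facility_draw_mw * compute_share
    overhead_mw = facility_draw_mw - compute_mw
    print(f"At {compute_share:.0%} utilization: {compute_mw:.1f} MW of computation, "
          f"{overhead_mw:.1f} MW keeping idle servers and overhead running")
```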

Because of the dramatic amount of energy used by data centers and the cost to companies, many organizations are trying to manage their information stockpiles while dealing with the financial and environmental ramifications that come with them. An article by Smart Data Collective contributor Cameron Graham noted that data centers are responsible for almost 20 percent of technology’s carbon footprint. The environmental impact of such facilities is leading companies to look for ways to operate more efficiently and sustainably.

Schneider Electric recently released a survey of business leaders that found data center efficiency will be one of the most popular techniques for energy management employed by organizations in the next five years. While businesses often look to make physical improvements to their data center in an effort to increase efficiency, many companies are now utilizing either colocation or cloud providers that employ energy efficient and sustainable practices.

Fixed costs associated with a data center’s cooling, hardware and power can be reduced with the help of cloud computing, which in turn allows a company to increase agility and growth. Adopting a virtualized environment, be it by the transition of applications into the cloud or server virtualization, helps companies to consolidate their systems and reduce their overall IT electrical load. Capital costs can also be shifted into operational expenses and help organizations find savings in a variety of sectors.

Asheville, NC moves disaster recovery to the cloud

Jonathan Feldman, the CIO for the city of Asheville, North Carolina, made a big splash recently when he decided to migrate the city's disaster recovery operations to the cloud.

When Feldman took over as CIO, he was dismayed to find out that Asheville's disaster recovery facility was located two blocks away from City Hall. The city had already started using the cloud to host some geographic applications, as well as IT development and testing environments, and Feldman was interested in finding a way to expand the use of their cloud infrastructure. Using a cloud disaster recovery platform, Feldman was able to use a pre-built automation tool that would essentially run the city's disaster recovery program on its own and ensure business continuity.

"I was not comfortable with us coming up with a home-brewed automation system to do something as critical as disaster recovery," said Feldman in an interview with SaaS In The Enterprise. "We don't do it enough to be a core competency for us."

With Asheville's disaster recovery operations off site and in the cloud, the city no longer has to worry about losing both primary systems and their backups at the same time if a storm were to knock out power. Utilizing a cloud-based platform also allows the city to only pay for disaster recovery when they need it, instead of paying around the clock for a physical facility.

Feldman started small with the migration to the cloud, transitioning only important but non-essential applications first with plans to grow capacity once the platform has been proven. The new disaster recovery system was designed to test one system each quarter, with each test taking between one and four hours to complete.
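
A sketch of what such an automated quarterly drill might look like is below, assuming a hypothetical DR provider client exposing failover, health-check and failback operations; none of these names reflect Asheville's actual tooling.

```python
# Hypothetical sketch of a quarterly disaster recovery drill: fail one system
# over to the cloud, verify it is serving, then fail back. The dr_client
# object and its methods are illustrative assumptions, not a real API.
import time

def run_quarterly_dr_test(dr_client, system_name: str, timeout_hours: float = 4.0) -> bool:
    deadline = time.time() + timeout_hours * 3600
    dr_client.failover(system_name)               # bring the cloud replica online
    try:
        while time.time() < deadline:
            if dr_client.health_check(system_name):
                return True                        # replica is handling traffic
            time.sleep(60)
        return False                               # exceeded the one-to-four-hour window
    finally:
        dr_client.failback(system_name)            # always return to the primary site
```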

"We're able to failover pretty quickly, and failover very inexpensively, and have a high degree of confidence because of automation," said Feldman. "When we do disaster recovery, we know it's actually going to work. Between that and the geographic dispersion, that's huge." 

BYOD policies support majority of Americans who can't go 24 hours without their phone

A recent survey from Bank of America found that 96 percent of Americans between the ages of 18 and 24 consider mobile phones to be very important. While that may not be so surprising, it is notable that only 90 percent of respondents in the same group said the same about deodorant. The report, based on interviews with 1,000 adults who own smartphones, found that the devices ranked as more important than almost anything else, including toothbrushes, television and coffee.

The survey also discovered that 35 percent of Americans check their smartphones constantly throughout the day. Forty-seven percent of respondents said they wouldn't be able to last an entire day without their mobile phone, and 13 percent went so far as to say they couldn't even last an hour.

As the Bank of America report shows, people are more attached to their devices than ever. Millennials are especially dependent on their phones and tablets, and they also make up the biggest portion of new workers. Companies can increasingly benefit from implementing BYOD policies, as employees who have grown accustomed to a particular phone expect to be able to continue using it at work. Allowing workers to keep their own devices increases productivity, since they aren't constantly checking an alternate phone, and boosts employee satisfaction.

Using colocation in the era of the cloud

Colocation facilities have long been vital resources for organizations that require high-performing data centers but prefer to entrust infrastructure management to a third-party provider. In addition to sparing IT departments the headaches of maintaining servers, switches and other equipment, colocation produces tangible benefits such as:

  • Redundant power supplies: Individual endpoint failures or even natural disasters won’t compromise uptime.
  • Streamlined IT costs: Colocation removes many of IT’s considerable expenditures on equipment and personnel.
  • Cutting-edge performance: A colo facility typically has access to best-of-breed IP services and equipment, more often than not enabling better speed and reliability than the client could achieve strictly in-house.

Accordingly, in North America, the colocation and managed hosting services market is primed for strong expansion. TechNavio recently projected that it would increase at a 13.6 percent compound annual growth rate from 2013 to 2018.

Reduction of capital and operating expenditures is expected to be a key driver of colocation uptake. But what is colocation’s place in an IT landscape increasingly dominated by cloud services?

Finding the right colocation provider in the era of cloud computing
Cloud computing has fundamentally changed IT by giving developers, testers and operations teams access to unprecedented amounts of on-demand resources. Organizations have more options than ever for scaling their businesses, and the cloud has already enabled the success of blockbuster services such as Netflix and Instagram.

Colocation can play an important part as companies modernize their infrastructure and take advantage of remote resources. Many IT departments are in the midst of migrating some on-premises systems to the cloud, creating mixed environments known as hybrid clouds. Colocation providers can step up to the plate and supply the security, flexibility and know-how needed to evolve IT for the cloud age.

To that end, buyers should look for experienced managed services providers adept at handling a variety of infrastructure. Although colocation has been around since before the cloud entered the mainstream, cutting-edge offerings can provide a level of usability on par with the public cloud through top-flight service management.

“[C]olocation providers need to offer more than just remote hands,” wrote Keao Caindec, chief marketing officer at 365 Data Centers, for Data Center Knowledge. “They need to offer basic managed services such as firewall management, server management, backup and recovery services as well as other managed IT operations services for the dedicated infrastructure of each client.”