Posts

Companies look to cloud for data center efficiency


As companies continue to see the value of collecting information on clients and business processes, the amount of data is increasing at an enormous rate. According to Smart Data Collective, 90 percent of the data being stored today was created within just the past two years. As companies’ stores of information grow, so does the need for larger, more power-hungry data centers.

As The New York Times reported, one large-scale data center can consume energy equivalent to a small town’s use. And while data centers eat up an incredible amount of energy, only between 6 and 12 percent of it is used for actual computation. The rest is largely spent keeping servers idling in case of usage spikes.

Because of the dramatic amount of energy used by data centers and the cost to companies, many organizations are trying to manage their information stockpiles while dealing with the financial and environmental ramifications that come with them. An article by Smart Data Collective contributor Cameron Graham noted that data centers are responsible for almost 20 percent of technology’s carbon footprint. The environmental impact of such facilities is leading companies to operate in a more efficient and sustainable way.

Schneider Electric recently released a survey of business leaders that found data center efficiency will be one of the most popular techniques for energy management employed by organizations in the next five years. While businesses often look to make physical improvements to their data center in an effort to increase efficiency, many companies are now utilizing either colocation or cloud providers that employ energy efficient and sustainable practices.

Fixed costs associated with a data center’s cooling, hardware and power can be reduced with the help of cloud computing, which in turn allows a company to increase agility and growth. Adopting a virtualized environment, whether by moving applications into the cloud or by virtualizing servers, helps companies consolidate their systems and reduce their overall IT electrical load. Capital costs can also be shifted into operational expenses, helping organizations find savings across the business.
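To make the consolidation math concrete, here is a minimal back-of-envelope sketch in Swift. Every figure in it (server count, wattage, consolidation ratio, electricity rate) is an illustrative assumption rather than a benchmark, and cooling overhead is ignored entirely.

    // Rough estimate of the electrical-load reduction from server consolidation.
    // All inputs below are illustrative assumptions.
    let physicalServersBefore = 100.0
    let wattsPerServer = 400.0           // assumed average draw per physical server
    let consolidationRatio = 10.0        // assumed workloads hosted per virtualization host
    let dollarsPerKWh = 0.10             // assumed electricity rate
    let hoursPerYear = 24.0 * 365.0

    let hostsAfter = (physicalServersBefore / consolidationRatio).rounded(.up)
    let kilowattHoursSaved = (physicalServersBefore - hostsAfter) * wattsPerServer * hoursPerYear / 1_000.0

    print("Physical hosts after consolidation: \(Int(hostsAfter))")
    print("Estimated annual energy avoided: \(Int(kilowattHoursSaved)) kWh")
    print("Estimated annual savings: $\(Int(kilowattHoursSaved * dollarsPerKWh))")

With those assumed inputs, 100 physical servers collapse to 10 virtualization hosts, avoiding roughly 315,000 kWh per year, or about $31,500 at the assumed rate.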

Asheville, NC moves disaster recovery to the cloud

Jonathan Feldman, the CIO for the city of Asheville, North Carolina, made a big splash recently when he decided to migrate the city's disaster recovery operations to the cloud.

When Feldman took over as CIO, he was dismayed to find out that Asheville's disaster recovery facility was located two blocks away from City Hall. The city had already started using the cloud to host some geographic applications, as well as IT development and testing environments, and Feldman was interested in finding a way to expand the use of their cloud infrastructure. Using a cloud disaster recovery platform, Feldman was able to use a pre-built automation tool that would essentially run the city's disaster recovery program on its own and ensure business continuity.

"I was not comfortable with us coming up with a home-brewed automation system to do something as critical as disaster recovery," said Feldman in an interview with SaaS In The Enterprise. "We don't do it enough to be a core competency for us."

With Asheville's disaster recovery operations off site and in the cloud, the city no longer has to worry about losing both primary systems and their backups at the same time if a storm were to knock out power. Utilizing a cloud-based platform also allows the city to only pay for disaster recovery when they need it, instead of paying around the clock for a physical facility.

Feldman started small with the migration to the cloud, transitioning only important but non-essential applications first with plans to grow capacity once the platform has been proven. The new disaster recovery system was designed to test one system each quarter, with each test taking between one and four hours to complete.

"We're able to failover pretty quickly, and failover very inexpensively, and have a high degree of confidence because of automation," said Feldman. "When we do disaster recovery, we know it's actually going to work. Between that and the geographic dispersion, that's huge." 

BYOD policies support majority of Americans who can't go 24 hours without their phone

A recent survey from Bank of America found that 96 percent of Americans between the ages of 18 and 24 consider mobile phones to be very important. While that may not be so surprising, only 90 percent of respondents in the same group said the same about deodorant. The report, based on interviews with 1,000 smartphone-owning adults, found that the devices ranked as more important than almost anything else, including toothbrushes, television and coffee.

The survey also discovered that 35 percent of Americans check their smartphones constantly throughout the day. Forty-seven percent of respondents said they wouldn't be able to last an entire day without their mobile phone, and 13 percent went so far as to say they couldn't even last an hour.

As the Bank of America report shows, people are more attached to their devices than ever. Millennials are especially dependent on their phones and tablets, and they are also the group making up the biggest portion of new workers. Companies are increasingly able to benefit from implementing BYOD policies, as employees who have grown accustomed to a particular phone expect to be able to continue using it at work. Allowing workers to keep their own devices increases productivity, since they aren't constantly switching to a separate work phone, and it boosts employee satisfaction as well.

Using colocation in the era of the cloud

Colocation facilities have long been vital resources for organizations that require high-performing data centers but prefer to entrust infrastructure management to a third-party provider. In addition to sparing IT departments the headaches of maintaining servers, switches and other equipment, colocation produces tangible benefits such as:

  • Redundant power supplies: Individual endpoint failures or even natural disasters won’t compromise uptime.
  • Streamlined IT costs: Colocation removes many of IT’s considerable expenditures on equipment and personnel.
  • Cutting-edge performance: A colo facility typically has access to best-of-breed IP services and equipment, more often than not enabling better speed and reliability than the client could achieve strictly in-house.

Accordingly, in North America, the colocation and managed hosting services market is primed for strong expansion. TechNavio recently projected that it would increase at a 13.6 percent compound annual growth rate from 2013 to 2018.

Reduction of capital and operating expenditures is expected to be a key driver of colocation uptake. But what is colocation’s place in an IT landscape increasingly dominated by cloud services?

Finding the right colocation provider in the era of cloud computing
Cloud computing has fundamentally changed IT by giving developers, testers and operations teams access to unprecedented amounts of on-demand resources. Organizations have more options than ever for scaling their businesses, and the cloud has already enabled the success of blockbuster services such as Netflix and Instagram.

Colocation can play an important part as companies modernize their IT and take advantage of remote infrastructure. Many IT departments are in the midst of migrating some on-premises systems to the cloud, creating mixed environments known as hybrid clouds. Colocation providers can step to the plate and supply the security, flexibility and know-how needed for evolving IT for the cloud age.

To that end, buyers should look for experienced managed services providers adept at handling a variety of infrastructure. Although colocation has been around since before the cloud entered the mainstream, cutting-edge offerings can deliver a level of usability on par with public cloud, via top-flight service management.

“[C]olocation providers need to offer more than just remote hands,” wrote Keao Caindec, chief marketing officer at 365 Data Centers, for Data Center Knowledge. “They need to offer basic managed services such as firewall management, server management, backup and recovery services as well as other managed IT operations services for the dedicated infrastructure of each client.”

Cloud or on-premises? Finding the right UC solution

"Cloud or on-premises?' has become a defining question for IT departments. With the rise of high-speed Internet and managed services, many of the routine chores of IT – everything from storing application data to deploying new servers – can now be easily performed at a colocation facility or via a cloud service provider's resources.

Leading consumer services such as Netflix and Airbnb have gone exclusively cloud to reduce infrastructure management and reach users at scale. Still, despite the cloud's rapid ascent, on-premises IT is alive and well, and in many scenarios it's the right choice.

Unified communications: cloud or on-premises?
Take unified communications, which can be deployed either way. Having the entire UC system hosted by a third party confers many advantages, including:

  • Lower cost of ownership: The platform is managed by the provider and requires no capital expenditures.
  • Compatibility with multiple devices: Hosted solutions can be accessed by smartphones and tablets in addition to in-office PCs.
  • Intuitive operating systems: It's easy to customize settings for each device and tweak the OS to company requirements.

These benefits can ease a small or mid-size business's transition from legacy telephony to comprehensive UC, with data and voice efficiently flowing over the same network. However, companies with high call volumes may need something more. To that end, on-premises UC provides the performance and reliability needed to deal with numerous concurrent conversations.

Never miss a call
Such a UC system can be designed and installed by an experienced provider to ensure its integrity. When put to the test, it can send calls to different devices, adequately distributing the incoming volume and relieving pressure on agents. For the business at large, this means quicker, yet higher-quality, responses to sales queries and support requests.
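As a rough illustration of that routing behavior, here is a minimal sketch in Swift of a failover chain that tries each twinned endpoint in priority order. The endpoint names and reachability checks are hypothetical placeholders rather than any vendor's API.

    // Hypothetical "never miss a call" routing: try each twinned endpoint in
    // priority order and fall back to voicemail if none is reachable.
    struct Endpoint {
        let name: String
        let isReachable: () -> Bool
    }

    func route(callFrom caller: String, through endpoints: [Endpoint]) -> String {
        for endpoint in endpoints where endpoint.isReachable() {
            return "Call from \(caller) delivered to \(endpoint.name)"
        }
        return "Call from \(caller) sent to voicemail"
    }

    let chain = [
        Endpoint(name: "desk phone", isReachable: { false }),      // assumed offline
        Endpoint(name: "twinned mobile", isReachable: { true }),
        Endpoint(name: "softphone app", isReachable: { true })
    ]
    print(route(callFrom: "+1 555 0100", through: chain))          // delivered to twinned mobile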

"The implementation of UC also ensures that employees need never miss a call again, as incoming calls can be routed to desk phones, twinned devices or apps," stated Jon Nowell, head of product management at TalkTalk Business, according to Information Age. "Reliability and dependability are major aspects of a business's reputation and are equally important to companies of all sizes."

Accordingly, the CAPEX of an on-premises system more than pays for itself through a dependable communications infrastructure. As always, though, the "right" choice of cloud or on-premises depends on the organization. Some will want UC installed on-site, but, as indicated by a recent MarketsandMarkets report predicting that the UC-as-a-service market will top $23 billion by 2019, cloud solutions are also on the upswing.

Determining bandwidth requirements in the data center

How much bandwidth does a data center really need? It depends on how many workloads and virtual machines are in regular operation, as well as what the facility is designed to support. For example, a data center providing resources to a public cloud requires much more bandwidth than one that is simply powering internal systems and operations shielded by the company firewall. The increasing uptake of remote data centers and colocation arrangements, in tandem with server virtualization, has added to organizations' bandwidth considerations.

How virtualization complicates bandwidth requirements
Server and desktop virtualization have made companies less reliant on physical infrastructure and the specific sites that house it. Here's how they work:

  • With desktop virtualization, or VDI, desktop operating systems are hosted centrally and delivered to endpoints (even aging ones), simplifying management of both software and hardware while reducing costs.
  • Server virtualization involves a single physical server being turned into multiple virtual devices. Each instance is isolated, and the end user usually cannot see the technical details of the underlying infrastructure.

By getting more out of IT assets via virtualization, companies have reshaped IT operations. More specifically, they have spread out their infrastructure across multiple sites and put themselves in position to move toward cloud computing.

With increased reliance on virtualization, organizations have looked to ensure that remote facilities receive the bandwidth needed to provide software, instances and data to users. However, liabilities still go overlooked, jeopardizing reliability – especially when data centers are too far apart from each other.

Ensuring low latency is just one piece of the data center optimization puzzle, though. Sufficient bandwidth must also be supplied to support the organization's particular workloads. In the past, Microsoft has advised Exchange users to think beyond round trip latency.

"[R]ound trip latency requirements may not be the most stringent network bandwidth and latency requirement for a multi-data center configuration," advised Microsoft's Exchange Server 2013 documentation. "You must evaluate the total network load, which includes client access, Active Directory, transport, continuous replication and other application traffic, to determine the necessary network requirements for your environment."

Knowing how much bandwidth is needed
Figuring out bandwidth requirements is a unique exercise for each enterprise. In a blog post, data center networking expert Ivan Pepelnjak looked at the nitty-gritty of assessing bandwidth-related needs, homing in on some of the problems that reveal a need to rethink how bandwidth is allocated and utilized.
These issues include:

  • Over-reliance on slow legacy equipment
  • Oversubscription to services
  • Miscalculation of how much traffic each virtual machine generates 

In addition, data center operators sometimes overlook bottlenecks such as slow interactions between virtual machines and storage. If VMs have to frequently access data stored on an HDD, for example, quality of service may degrade. Networks may require extra bandwidth in order to avoid data transfer hiccups.
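For a first-pass sanity check of those requirements, a back-of-envelope estimate can help before any detailed traffic analysis. The minimal Swift sketch below assumes values for the VM count, per-VM traffic, peak factor and replication overhead; each should be replaced with measured figures for the facility in question.

    // Rough aggregate bandwidth estimate for a virtualized facility.
    // Every input below is an illustrative assumption, not a measurement.
    let virtualMachines = 200.0
    let averageMbpsPerVM = 5.0         // assumed steady-state traffic per VM
    let peakToAverageRatio = 3.0       // assumed headroom for usage spikes
    let replicationOverhead = 1.25     // assumed continuous replication and backup traffic

    let requiredMbps = virtualMachines * averageMbpsPerVM * peakToAverageRatio * replicationOverhead
    print("Estimated peak requirement: \(requiredMbps / 1_000.0) Gbps")    // 3.75 Gbps with these inputs

That estimate can then be checked against uplink capacity and switch oversubscription ratios to see whether any of the issues listed above apply.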

HealthKit, healthcare and managing BYOD

As smartphones become faster and increasingly capable of running sophisticated applications and services, health care organizations are faced with a dilemma. Do they allow doctors, nurses and staff to participate in bring-your-own-device policies and potentially unlock productivity gains that enable higher-quality care? Or do they hold back out of legitimate concerns about data security and compliance with regulations?

The growing interest of technology firms in health care tracking only complicates the situation. Individuals may now use devices such as wristbands, in addition to smartphones, to record and share health information, making it critical for providers to keep tabs on BYOD activity to ensure compliance.

HealthKit and the larger issue of sharing health information
At this year's Worldwide Developers Conference, Apple announced HealthKit, a platform built into iOS that underscores how healthcare on mobile devices is rapidly evolving and sparking questions about how sensitive data is handled. HealthKit isn't a discrete solution but a system of APIs that would allow, say, an application that tracks steps to share its information with medical software that could provide actionable advice.
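To give a sense of what that sharing looks like at the code level, below is a minimal Swift sketch that reads the day's cumulative step count through standard HealthKit APIs. It assumes an iOS app with the HealthKit entitlement and user authorization; actually forwarding the value to a provider's system, and the compliance questions that raises, is out of scope here.

    // Minimal HealthKit sketch: read today's step count so another application
    // could act on it. Illustrative only; error handling is kept to a minimum.
    import HealthKit

    let healthStore = HKHealthStore()

    func fetchTodaysSteps(completion: @escaping (Double) -> Void) {
        guard HKHealthStore.isHealthDataAvailable(),
              let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount) else {
            completion(0)
            return
        }
        let readTypes: Set<HKObjectType> = [stepType]
        healthStore.requestAuthorization(toShare: nil, read: readTypes) { authorized, _ in
            guard authorized else { completion(0); return }
            let start = Calendar.current.startOfDay(for: Date())
            let predicate = HKQuery.predicateForSamples(withStart: start, end: Date(),
                                                        options: .strictStartDate)
            let query = HKStatisticsQuery(quantityType: stepType,
                                          quantitySamplePredicate: predicate,
                                          options: .cumulativeSum) { _, result, _ in
                completion(result?.sumQuantity()?.doubleValue(for: .count()) ?? 0)
            }
            healthStore.execute(query)
        }
    }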

Major health care organizations are already on board. The Mayo Clinic created an application that monitors vital signs and then relays anomalous readings to a physician. Given the already considerable presence of mobile applications in health care, HealthKit could give hospital and clinic staff additional tools for providing efficient care.

At the same time, HealthKit turns any iOS device into a potential compliance pain point. Data that is stored on an iPhone, for example, would not fall under the purview of the Health Insurance Portability and Accountability Act, but if it is shared with a provider or one of their business associates, HIPAA would likely apply. Stakeholders will need time to adjust to the nuances of how healthcare applications interact with each other in the HealthKit ecosystem.

"The question would be whether the app is being used by a doctor or other health care provider. For example, is it on their tablet or smartphone?," asked Adam Greene of Davis Wright Tremaine LLP, according Network World. "Where the app is used by a patient, even to share information with a doctor, it generally will not fall under HIPAA. Where the app is used on behalf of a healthcare provider or health plan, it generally would fall under HIPAA."

Tracking and securing privileged health information
HealthKit is just one platform on a single OS, but it is part of a broader shift in data control, away from centralized IT departments and organizations and toward end users. For healthcare, this change is particularly challenging since providers have to ensure that the same compliance measures are enforced, even as BYOD and cloud storage services become fixtures of everyday operation.

A recent Ponemon Institute survey of more than 1,500 IT security practitioners found that almost 60 percent of respondents were most concerned about where sensitive data was located. BYOD complicates compliance, and healthcare organizations will have to ensure that they have well-defined policies in place for governing security responsibilities.

"People trained in security also view IT as accountable for the security domain," Larry Ponemon, chair of the Ponemon Institute, stated in a Q&A session on Informatica's website. "But in today's world of cloud and BYOD, it's really a shared responsibility with IT serving as an advisor, but not necessarily having sole accountability and responsibility for many of these information assets."

It's no longer enough to rely on IT alone to enforce measures. Security teams and IT must work together and implement BYOD security as well as network monitoring to ensure that only authorized devices can connect to the system, and that data is safely shared.

Virtualization, open source switches changing the face of data centers

Data center technology moves quickly. With the emergence of wide-scale cloud computing over the past decade, enterprises have constructed new facilities and adopted cutting-edge equipment to keep up with demand and/or worked with managed services providers to receive capacity through colocation sites.

Virtualization drives strong growth of data center networking market
Last year, MarketsandMarkets estimated that the data center networking market alone could top $21 billion by 2018 as virtualization and specific technologies such as 40 Gigabit Ethernet continue to gain traction. Rather than rely on legacy physical switches that are challenging to upgrade and scale, enterprises are turning to virtual alternatives.

Virtualizing the network makes equipment and services much easier to modify. Since the fundamental advantage of cloud computing is the ability to get resources on demand, such extensibility is critical for helping companies keep pace with changing requirements.

"Virtualization being a disruptive technology is one of the major driving factors in [the] data center networking market," MarketsandMarkets analyst Neha Sinha told Network Computing. "The adoption of high-performance virtual switches is critical to support increasing number of virtual machines used in multi-tenant data centers. The virtual switches include programmatically managed and extensible capabilities to connect the virtual machines to both physical and virtual networks."

Down the road, such interest in mixing and matching legacy, physical and virtual assets may lead organizations to take up software-defined networking. This practice entails managing network services in software, through centralized controllers, rather than configuring each device individually.

However, SDN is still over the horizon for many companies right now. Both the use case and the underlying technology are not widely understood. Plus, enterprises are still trying to accrue enough personnel expertise in areas such as server virtualization to give them a solid foundation for future modifications of their networks and data centers.

Facebook announces open source data center switch
The demand for higher data center efficiency is unabating, and tech giants such as Facebook are looking to get in on the action. PCWorld reported that the social network has confirmed an open source switch, released through the Open Compute Project, that could challenge longstanding incumbents such as Cisco.

Facebook's switch is a top-of-rack appliance that connects servers to other data center infrastructure. It has 16 individual 40 Gigabit Ethernet ports. The device is designed to give developers and data center operators maximum flexibility, and it may contribute to broader efforts to open up data center infrastructure.

Banks, other organizations use UC to improve client service and user experience

The unified communications market is changing. Feature-rich Internet messaging and voice-over-IP telephony were once mostly the domain of CIOs and IT departments, but these services are entering the mainstream, driven by employees’ uptake of mobile hardware through BYOD initiatives and easy-to-use applications, as well as the subsequent entry of these endpoints into the workplace. Costs have declined and the underlying technology has been simplified, making UC, whether delivered through the cloud or on-premises infrastructure, an increasingly attractive option.

“[F]ocus has shifted to the end-user experience, including ease of use, as well as the business value of UC,” observed COMMfusion president Blair Pleasant in an article for No Jitter. “There’s a growing realization that the user experience must be intuitive, relevant to the user’s work and tools, and competitive with the experiences delivered by consumer devices and apps. It’s no longer about getting the ‘latest and greatest’ – it’s delivering intuitive and contextual UC solutions, and the business results that are achieved by simplifying collaboration and meetings and enhancing the mobile experience.”

Unified communications market reshaped by consumer focus
The shift toward an intuitive UC user experience comes at just the right time, as UC begins displacing legacy systems. In the past, communications infrastructure was too limited, costly and complex to cater to the end user. Much of IT’s time was devoted to simply maintaining the status quo, with little left over for improving usability or refining the user interface.

With the emergence of cloud computing as well as flexible, highly capable on-premises solutions, all of that has changed. Third-party hosting companies now steward UC technology, optimizing it for day-to-day use by their clients. At the same time, organizations with large call volumes increasingly utilize on-site UC – with installation help from managed services providers – for maximum stability and cost-effectiveness. Either way, businesses and their clients now benefit from amenities such as:

  • Contextual services: Relevant call histories, emails, texts and documents can be retrieved for each conversation.
  • Embedded technologies: Computer telephony integration is built into most contact center solutions, and UC is moving in the same direction. It is no longer a standalone service so much as fundamental communications infrastructure.
  • Video meeting rooms: Video conferencing enables better remote collaboration, and with VMRs it is possible for users to connect using a client of their choice, whether they are inside or outside the company firewall.

All of these features add up to a rich UC experience for users and tangible benefits for the organization. Banks, for instance, have deployed wide-area networks and contact centers to better support UC and improve interactions with clients. According to AllAfrica, Comnavig ICT Advisers CEO Olufemi Adeagbo recently identified a well-designed, technologically sound contact center – with features such as UC and video conferencing – as the only way to ensure that business opportunities are realized and brand reputation maintained.

“Imagine a car sale opportunity that is lost because the advertised mobile number is off, unavailable or cannot be answered,” stated Adeagbo. “Imagine the dormant account the bank does not proactively place a call about to understand the issue and reactivate.”

How BYOD can be made easier through desktop virtualization


Bring your own device policies, already buoyed by rapid uptake of smartphones and tablets, may gather additional momentum as prominent technology vendors devote attention to making mobile hardware valuable in the workplace. Dropbox for Business has made several big acquisitions related to BYOD, with the aim of helping businesses transition to multi-device, highly consumerized IT environments. Meanwhile, Apple has included advanced support for email, device enrollment and calendar collaboration in iOS 8, making the mobile OS more amenable to BYOD than ever.

It’s clear that BYOD isn’t going away. However, organizations are still adjusting to the new pressures that the phenomenon places on data control, security and compliance. While major firms continue to work on BYOD-centric solutions, enterprises have to assess their mobility needs and decide whether to implement measures such as desktop virtualization to enable BYOD.

Virtualization makes BYOD more secure for leading steel producer
The central issue with any BYOD policy is the transfer of control – over hardware, software and data – from the IT department to employees, who may be less scrupulous in terms of what applications they use. For example, files that should remain behind the company firewall may be shared with consumer-facing cloud services. Mobile devices enable such habits, even as they hold potential to enhance collaboration and remote work.

Fortunately, desktop virtualization facilitates a middle ground between BYOD adoption and enterprise security. Rather than let each endpoint have its own OS and applications, IT departments distribute a single desktop experience via a virtual machine. Devices connect to the VM securely and gain access to approved software. Data is not retained on user hardware after a session ends.

Essar Group, a conglomerate involved in steel, oil and telecom services, turned to desktop virtualization to standardize and secure its employees’ mobile experience when working with company assets. Ultimately, it moved 5,000 users to its new virtualized platform.

“Security of data was the primary point of scope for looking for [a] desktop virtualization solution,” Jayantha Prabhu, CTO at the Essar Group, told Dataquest. “We had a good experience of the ability to control the data at the disposal of the employee when we deployed the same for some of our teams which handled data which was very critical both from a confidentiality and a brand perspective. We had around 3,000 BlackBerry users and more than 2,000 people with tablets, and with all the applications being accessed on the tablets, it was tough to ensure security of critical information.”

Desktop virtualization is a powerful tool for securing data and controlling mobile devices, but its benefits don’t stop there. Other perks include:

  • Reduced power consumption through the use of thin clients (machines that depend on a server for most or all of their software).
  • Centralized management of software and devices, with much more efficient patch distribution and application upgrades.
  • Support for remote collaboration since users can get the same experience from any Internet-enabled device.

With a broad set of advantages for organizations in finance, healthcare, education and other sectors, desktop virtualization is a practical, versatile way to incorporate BYOD while maintaining the integrity of company data.