Posts Tagged ‘cloud data protection’

The Cloud Is the Perfect Vehicle for Data…and Data Protection

Brian Heckert

Principal Content Editor, Dell EMC
My first long-term exposure to technology was the typewriter. I still love that invention, which really sparked my interest in writing. For the past 20 years, I have worked in high tech as a content development specialist, marketing writer, and documentation editor. Prior to working in the software industry, I was a journalist, photographer, photo editor, and military fire fighter. After hours, I enjoy spending time with family, reading, and hiking in the mountains.

The end of cloud computing? Don’t hold your breath!

Recently I watched a video about how cloud computing has run its course. The headline—The End of Cloud Computing—grabbed my attention (that was the point, of course). But there’s just one thing: it’s not true!

The premise is that many of the devices, large and small, that we will depend on daily will need to collect real-world data in real time. That means a lot of data, very fast. For example, to operate safely, self-driving cars need information, and lots of it. While they’re maneuvering, self-driving cars gather incredible amounts of information, more than 1 GB per second, and use it quickly to ensure maximum safety for everyone on the road. The process requires sensors in the car to collect data about road conditions, make inferences about those conditions, and then act with extreme agility.
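To put that data rate in perspective, here is a minimal back-of-the-envelope sketch in Python; the 1 GB-per-second figure comes from the paragraph above, while the two-hour commute is purely an assumption for illustration.

```python
# Rough arithmetic only: assumes ~1 GB/second of sensor data (figure cited above)
# and a hypothetical two-hour daily round-trip commute.
GB_PER_SECOND = 1
SECONDS_PER_HOUR = 3600

tb_per_hour = GB_PER_SECOND * SECONDS_PER_HOUR / 1000   # ~3.6 TB per driving hour
tb_per_commute = tb_per_hour * 2                         # ~7.2 TB per two-hour commute

print(f"~{tb_per_hour:.1f} TB generated per hour of driving")
print(f"~{tb_per_commute:.1f} TB generated per two-hour commute")
```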

The process of sensing, inferring, and then acting quickly and accurately makes a lot of sense for a self-driving vehicle when we consider that a wrong “decision” by the vehicle could cause an accident, resulting in damaged property, or worse, bodily injury to vehicle occupants or pedestrians. That means data needs to be onboard the vehicle, which becomes a “moving” data center.

But here’s the thing: not all data centers need to be moving. While an “onboard” data center makes sense for a self-driving car, there are vast amounts of information that reside comfortably in the cloud. And that isn’t going to change. These days, most of our devices and the ways we use them depend on a data-gathering process that occurs centrally in the cloud: the device pings the cloud, and the requested information is returned. For example, when you do a Google search or use your favorite app, the cloud is the perfect vehicle from which to grab the necessary data.

And what about business-critical data? The cloud is the perfect vehicle for enterprise IT. In fact, today many organizations are given mandates to store a certain percentage of the business’s data in the cloud. Why? It’s economical and it’s safe—practical reasons that reduce TCO. These days, who doesn’t want to reduce TCO? (more…)

Leveraging the Cloud for Data Protection Analytics

Tyler Stone

Dell EMC eCDM Product Marketing
Tyler is currently a software engineer in the Engineering Leadership Development Program at Dell EMC. He has spent the past three years at EMC across multiple roles in storage, IT, and data protection. Tyler has a love of technology and innovation, and something about data centers has always sparked his sense of wonder and amazement (as he says, maybe it’s the flashing lights). When his head isn’t up in the cloud, he is out travelling, taking pictures, or playing golf. Follow him on Twitter @tyler_stone_

 

Trying to think of a New Year’s resolution? You could try that no-carb diet again, or head to the gym in preparation for the summer season. However, if you’re anything like me, those lifestyle-focused resolutions probably won’t last a month, let alone the year. So let’s think in terms of data protection: if your New Year’s resolution is focused on protecting your business, it should probably have something to do with reducing risk, reducing operating expenses, and/or reducing capital expenses. Especially if your business is a large, global enterprise, managing these three metrics pervades every aspect of your job.

You wouldn’t be alone – IT leaders in large enterprises are struggling to keep up. It is extremely difficult to supervise backup operations and predict the needs of protection infrastructure in data centers across multiple geographies. With potentially hundreds of production databases supporting every part of the business, how can IT management accurately determine that copies of these databases are meeting retention policies and SLAs? Without aggregated metrics, it is hard to determine whether data is being effectively protected in a given location. In addition, the distributed nature of the infrastructure makes it difficult to anticipate the capacity needs and performance bottlenecks within a business’s protection ecosystem.

If you want to work towards achieving your New Year’s resolution, you will need a simple and centralized way to view and analyze systems’ health across all data centers. The best way to meet those requirements is through a cloud analytics solution for data protection.

Cloud and analytics go together like icing and cake. The cloud offers infinitely scalable resources that are not limited to specific locations, installations, or hardware; these qualities make the cloud a perfect candidate for global-scope analytics applications. When these concepts are applied to data protection, something magical happens: IT leaders gain the ability to observe their protection systems’ health across all data centers and anticipate the needs of their global data protection infrastructure. With this ability, they can proactively address potential risks and identify opportunities to increase efficiency, ultimately lowering costs throughout their data protection environment.

How, you ask? Well, an ideal cloud analytics offering for data protection would provide global map views, health scores, and capacity analysis. These key features allow IT management to effectively identify and address problems with their data management operations and protection infrastructure. Global map views would allow users to see every data center in their enterprise, with the ability to drill down into each site for additional context on the exact infrastructure details. From there, users can see throughput bottlenecks, unprotected data copies, and deduplication ratios, among other metrics. (more…)
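As a rough sketch of how such aggregated metrics might roll up into a per-site health score, consider the following Python example. The metric names, weights, and sample figures are hypothetical illustrations of the idea, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    """Hypothetical per-data-center metrics a protection analytics tool might collect."""
    name: str
    copies_total: int         # database copies discovered at the site
    copies_in_policy: int     # copies meeting retention policy and SLA
    dedup_ratio: float        # e.g. 12.0 means 12:1
    capacity_used_pct: float  # percentage of protection storage consumed

def health_score(m: SiteMetrics) -> float:
    """Blend policy compliance and capacity headroom into a single 0-100 score."""
    compliance = m.copies_in_policy / m.copies_total if m.copies_total else 1.0
    headroom = max(0.0, 1.0 - m.capacity_used_pct / 100.0)
    return round(100 * (0.7 * compliance + 0.3 * headroom), 1)

# Illustrative sample sites only.
sites = [
    SiteMetrics("Austin", copies_total=480, copies_in_policy=462,
                dedup_ratio=14.2, capacity_used_pct=71.0),
    SiteMetrics("Cork", copies_total=310, copies_in_policy=270,
                dedup_ratio=9.8, capacity_used_pct=93.0),
]

for s in sites:
    print(f"{s.name}: health {health_score(s)}, dedup {s.dedup_ratio}:1")
```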

Ransomware Hits Light-rail System, Resulting in Lost Revenue

Brian Heckert

Principal Content Editor, Dell EMC

Ransomware really gets around, faster than even the best form of mass transportation can move busy commuters to work.

Recently, ransomware caused the San Francisco Municipal Transportation Agency (SFMTA) light-rail system to lose revenue when the organization shut down ticket machines and fare gates as a precaution against the malware attack. According to the SFMTA site, the ransomware infected approximately 900 office computers. However, another source claimed that more than 2,000 computers were infected, including office admin desktops, CAD workstations, email and print servers, employee laptops, payroll systems, SQL databases, lost-and-found property terminals, and station kiosk PCs.

The ransomware scrambled the data on infected hard drives, posted a message on corresponding computers (“You Hacked, ALL Data Encrypted, Contact For Key (cryptom27@yandex.com) ID:601.”), then demanded a 100 Bitcoin ransom (approximately US$75,000) before the cybercriminals would agree to hand over a master decryption key that would allow the SFMTA to decipher the data ransomed on the infected hard drives.

Ransomware is a threat that already costs businesses millions of dollars each year, and unfortunately it is becoming both more prevalent and more sophisticated. Hundreds of millions of new malware variants appear each year: 431 million were added in 2015 alone, according to the Internet Security Threat Report.

Using a variety of attacks, criminals can inject malware into your network, which then holds your data or other systems hostage until you pay a ransom. Ransomware gains access to a computer system through a network’s weakest link, which is typically a user’s email or social networking site. Once a user clicks on a malicious link or opens an infected attachment, the malware spreads quickly throughout the system.

When a file or other data is held for ransom, the affected organization must meet the financial demands of the cybercriminal in exchange for a decryption key to “unlock” the ransomed data. If you don’t pay the ransom, you forfeit access to your computer and the data that’s on it. You also cut off access for everyone who relies on shared documents and data, compounding the impact. You might think that’s the worst case. Not so. (more…)

Cloud Adoption: Strategy vs. Reality

Vladimir Mandic

Chief Technology Officer & Distinguished Engineer, Data Protection Cloud, Core Technologies Division, Dell EMC
Vladimir has been driving technical innovation and change within EMC for the past 10 years, first in the area of data protection software and, currently, in cloud technologies. Prior to that, he’s had rich industry experience as a solution integrator and in the service provider space. When not working on technology innovation, he may be difficult to locate due to his passion for world travel.

Myths About Migrating to the Cloud

Myth 1: Cloud Bursting
One of the original, highly publicized use cases for the public cloud was bursting. The story made sense: as your demand for compute increased, you would use the public cloud to increase the capacity of your private infrastructure. Like so many good stories, bursting didn’t really happen. In fact, bursting is one of the least common public cloud use cases.
Why did bursting not become more widespread? Enterprises are either keeping applications on-premises in newly designed IaaS private clouds or they are moving them to the public cloud. It’s an OR function, not an AND one. Furthermore, the decision almost always happens per application: you evaluate your future application needs and decide where it makes more sense to run the application. Bursting across environments is just too complex.

Myth 2: Multi-Cloud
Most enterprises have neither a comprehensive hybrid cloud nor an end-to-end multi-cloud strategy that covers their entire IT environment. Frequently there is a general desire for a multi-cloud strategy to minimize dependency on a single cloud provider. But that strategy, too, turns out to be a per-application choice rather than a centralized plan.
Organizations choose to run some applications in the private cloud and some in different public clouds. Every cloud has very different functionality, interfaces, and cost optimizations. And each time an application developer chooses an environment, it’s because that cloud was the optimal choice for that application. As a result, application mobility becomes a myth; it’s something desired, but very few are willing to settle for the lowest common denominator across those choices just to enable application mobility.
Even if customers wanted to and could move the application, it’s unlikely to happen. Moving large amounts of data between environments is challenging, inefficient, and costly. So, once the choice of a cloud provider is made, the application stays where it is, at least until the next tech refresh cycle when per-application considerations can be re-evaluated.

Cloud Adoption for Legacy Applications
While so much of the focus has been on creating new applications, enterprises are also migrating traditional workloads. So what are the stages of cloud adoption?

  • Step 1: Infrastructure as a Service. Treat the cloud like a typical infrastructure; in other words, think of servers and storage as you currently think of them. Applications are installed on top of the infrastructure. Because everything is relatively generic, the choice of a cloud provider is not too critical.
    But as applications start to move, a new way of thinking evolves; you start looking at the infrastructure as services instead of servers.
  • Step 2: Software as a Service. Some legacy applications are swapped for new ones that run as a service. In this case, you don’t care where your SaaS service runs as long as it’s reliable. The choice of a cloud provider is even less relevant; it’s about choice of the SaaS solution itself.
  • Step 3: Rewrite the Application. Some applications are redesigned to be cloud-native. In some cases, the cloud is an excuse to rewrite decades of old COBOL code that nobody understands. In other cases, features of the cloud enable an application to scale more, run faster, and deliver better services. Not all applications should be rewritten.

The Core Issue: Data
When thinking about moving applications, what’s left is the actual data, and that is where company value truly resides. Some data moves with the application it belongs to, but not all data is structured within an application. And that is the last challenge of cloud adoption: looking at how data services can enable global, timely, and secure access to all data, whether it resides inside an application or outside of it.

The Role of IT
Just what is the role of the central IT organization, and is there a single strategy for IT? Not really.
The word “strategy” comes not from having a single plan that covers all applications, but from a comprehensive evaluation that should be done before choices are made and from having a unified set of services that ensure security, availability, and reliability of all those different environments.

Consider how IT organizations are evolving to become service brokers. For example, sometimes:

  • It makes sense to build a private cloud based on new converged (or hyper-converged) infrastructure.
  • They may go with a software-defined data center (SDDC), but that is more often the case when they have to deal with unknown external consumers rather than explicit requirements.
  • IT organizations will broker services from public cloud providers such as AWS, Azure, GCE, or Virtustream.

The alternative is so-called “shadow IT,” where each application team attempts to manage its own applications without understanding the global impact of its choices. In such scenarios, security is typically the first thing to go, and data protection follows closely.

I’ve written before about how, with the move to the public cloud, responsibility for infrastructure availability shifts to the cloud provider. But that does not negate the need for a comprehensive data protection strategy.

You still need to protect your data, on-premises or in the cloud, from external threats such as ransomware, from internally caused data corruption events (the application is frequently the cause of corruption, not just infrastructure failures), and from the common, and sometimes embarrassing, “threat” of “I deleted the wrong data and I need it back.”

Companies weigh the costs and benefits of any investment. There are places where different types of infrastructure deliver the right answer. For IT to remain relevant, it needs to support different types of environments. IT’s future is in delivering better on-premises services, becoming a service broker, and ensuring that data is securely stored and protected.

Conclusion
The cloud is real and it is part of every IT team’s life. IT can be instrumental in the successful adoption of the cloud, as long as IT teams approach it with calmness, reason, and an open mind. The goal isn’t to design the world’s greatest hybrid cloud architecture. It’s about choice and designing for application services instead of looking at servers and storage separately from the applications. There will be well-designed private clouds and public clouds that are better fits for specific applications. But the applications will dictate what works best for them; they will not accept a least-common-denominator hybrid cloud.
In the end, hybrid cloud is not a goal in itself; it is a result of a well-executed strategy for applications and data.

Data Sovereignty in the Cloud

Mat Hamlin

Director of Products for Spanning by Dell EMC
Mat is the Director of Products for Spanning by Dell EMC. He is responsible for the overall direction and strategy for Spanning's suite of SaaS backup and recovery solutions. His career in technology spans five startups and two large organizations, all in Austin, TX. Mat started out in product support and training, then moved into engineering leadership, and for the past nine years he has focused on product management and product marketing. Prior to joining Spanning, Mat served as Sr. Product Manager for SailPoint Technologies and Sun Microsystems, contributing to their market-leading enterprise identity management solutions.

The requirement to comply with data protection and privacy laws, like the EU’s General Data Protection Regulation (GDPR) and Australia’s privacy laws, drives the need to evaluate where enterprise organizations store their data in cloud data centers. If your organization hosts its own data centers, this can be challenging if you are multinational, but it can be just as difficult when you rely on SaaS providers to manage your data, since control of your data’s destination is largely out of your hands.

If you’re using a SaaS application, such as Office 365 or Salesforce, and are backing up your data with a third-party backup provider, there are many factors to consider as you evaluate your data protection strategy. Understanding the regulations and requirements first and then considering how the providers handle your data are both important.

What privacy laws apply to my organization?
As you build a cloud and data protection strategy, start by evaluating the privacy laws that apply to your data and corporate policies, and compare that against your SaaS provider’s offering, including the primary data storage location and their replication strategy.

My strong suggestion is that you work directly with your audit, compliance and legal teams to ensure you fully understand the regulations that could be applied to you directly or indirectly through business relationships with organizations in other regions.

Generally, global privacy and data protection laws provide strong frameworks and mechanisms to transfer personal data to other countries and economic regions if required, but the regulations are typically strict and the penalties can be costly. As a result, many organizations decide to enforce data governance policies that ensure data remains within defined boundaries. (more…)
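As a minimal illustration of what enforcing such a boundary might look like, here is a Python sketch that flags SaaS backup providers whose primary or replica storage regions fall outside an allowed set; the provider names, regions, and policy here are hypothetical assumptions, not real provider data.

```python
# Minimal data-residency policy check. The allowed regions and provider
# locations below are illustrative assumptions only.
ALLOWED_REGIONS = {"eu-west", "eu-central"}   # e.g. a GDPR-driven boundary

saas_backup_providers = {
    "crm-backup": {"primary": "eu-west", "replicas": ["eu-central"]},
    "mail-backup": {"primary": "eu-west", "replicas": ["us-east"]},
}

def residency_violations(providers, allowed):
    """Return providers that store or replicate data outside the allowed boundary."""
    violations = {}
    for name, locations in providers.items():
        regions = {locations["primary"], *locations["replicas"]}
        outside = regions - allowed
        if outside:
            violations[name] = sorted(outside)
    return violations

for provider, regions in residency_violations(saas_backup_providers, ALLOWED_REGIONS).items():
    print(f"{provider}: data leaves the allowed boundary via {', '.join(regions)}")
```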
