Author Archive

Leveraging the Cloud for Data Protection Analytics

Tyler Stone

Dell EMC eCDM Product Marketing
Tyler is currently a software engineer in the Engineering Leadership Development Program at Dell EMC. He has spent the past three years at EMC across multiple roles in storage, IT, and data protection. Tyler has a love of technology and innovation, and something about data centers has always sparked his sense of wonder and amazement (as he says, maybe it’s the flashing lights). When his head isn’t up in the cloud, he is out travelling, taking pictures, or playing golf. Follow him on Twitter @tyler_stone_


Trying to think of a New Year’s resolution? You could try that no-carb diet again, or head to the gym in preparation for the summer season. However, if you’re anything like me, those lifestyle-focused resolutions probably won’t last into the new month, let alone the new year. So let’s think in terms of data protection: if your New Year’s resolution is focused on protecting your business, it should probably have something to do with reducing risk, reducing operating expenses, and/or reducing capital expenses. Especially if your business is a large, global enterprise, managing these three metrics pervades every aspect of your job.

You wouldn’t be alone – IT leaders in large enterprises are struggling to keep up. It is extremely difficult to supervise backup operations and predict the needs of protection infrastructure in data centers across multiple geographies. With potentially hundreds of production databases supporting every part of the business, how can IT management accurately determine that copies of these databases are meeting retention policies and SLAs? Without aggregated metrics, it is hard to determine whether data is being effectively protected in a given location. In addition, the distributed nature of the infrastructure makes it difficult to anticipate the capacity needs and performance bottlenecks within a business’s protection ecosystem.

If you want to work towards achieving your New Year’s resolution, you will need a simple and centralized way to view and analyze systems’ health across all data centers. The best way to meet those requirements is through a cloud analytics solution for data protection.

Cloud and analytics go together like icing and cake. The cloud offers infinitely scalable resources that are not limited to locations, installations, or hardware; these qualities make the cloud a perfect candidate for global-scope analytics applications. When these concepts are applied to data protection, something magical happens: IT leaders develop the ability to observe their protection systems’ health across all data centers and anticipate the needs of their global data protection infrastructure. With this ability, they are able to proactively address potential risks and identify opportunities to increase efficiency and ultimately lower costs throughout their data protection environment.

How, you ask? Well, an ideal cloud analytics offering for data protection would provide global map views, health scores, and capacity analysis. These key features allow IT management to effectively identify and address problems with their data management operations and protection infrastructure. Global map views would allow users to view every data center in their enterprise, with the ability to drill down into each site for additional context on the exact infrastructure details. From there, users can see throughput bottlenecks, unprotected data copies, and deduplication ratios, among other metrics.
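To make the health-score idea concrete, here’s a minimal sketch of how per-site protection metrics might be rolled up into a single score. The metric names, weights, and thresholds are illustrative assumptions for this post, not an actual Dell EMC API:

```python
# Hypothetical sketch: aggregate per-site protection metrics into a
# 0-100 health score. Metric names and weights are assumptions made
# for illustration, not part of any real product.

def site_health_score(metrics: dict) -> float:
    """Combine a site's protection metrics into a single 0-100 score."""
    # Fraction of copies meeting their protection SLA (0.0 - 1.0).
    sla_compliance = metrics["compliant_copies"] / max(metrics["total_copies"], 1)
    # Headroom left on protection storage (0.0 = full, 1.0 = empty).
    capacity_headroom = 1.0 - metrics["used_capacity_pct"] / 100.0
    # Throughput utilization above 90% is treated as a bottleneck.
    throughput_ok = 1.0 if metrics["throughput_util_pct"] < 90 else 0.5

    # Weighted blend; SLA compliance dominates the score.
    score = 100 * (0.6 * sla_compliance
                   + 0.25 * capacity_headroom
                   + 0.15 * throughput_ok)
    return round(score, 1)

boston = {"compliant_copies": 180, "total_copies": 200,
          "used_capacity_pct": 70, "throughput_util_pct": 95}
print(site_health_score(boston))  # → 69.0
```

A real offering would of course compute this continuously from telemetry rather than from a hand-built dictionary, but the drill-down story is the same: the map view shows the score, and clicking through exposes the underlying metrics that produced it.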

Beyond the Buzzword: Modern Data Management


When I think of technology buzzwords, I think of really big, complex concepts that slowly grow into all-encompassing terms to make those concepts easy to reference. Due to the complexity of some technology topics, however, buzzwords often create an opportunity for misconceptions. I’ve had people ask me if I work “for the cloud” when I talk about what I do, which is close – yet so far away.

In an earnest attempt to combat misconceptions, I want to discuss a term near and dear to my heart: modern data management. On the surface, it looks like this phrase implies that we want to manage data… modernly…? Well, that’s not wrong; but there’s a lot more behind this buzz-phrase.

First, we need to understand a broader trend in the industry: self-service. Basically, businesses no longer depend on IT to solve infrastructure problems, thanks to new technologies that enable increased agility. Before self-service, if an application administrator or DBA needed a copy of a production database, they would have to place a request with a storage or backup administrator and wait for the request to be fulfilled. Now, application owners and DBAs have easy access to copy creation tools from their native utilities, making it extremely easy to create copies themselves for any desired use case. Self-service is the foundation for modern data management.

While self-service is necessary to keep up with the demands of agile businesses, it raises some concerns. Specifically, if application owners and DBAs are managing their own copies, how can managers enforce a specific set of rules or policies to ensure compliance with business objectives, such as protection SLAs or retention policies? This trend toward self-service has led to silos of storage managed by decentralized teams, making it difficult for IT managers and protection administrators to monitor their copy ecosystem and ensure SLA compliance.

Moreover, native utilities have no way to monitor or enforce SLAs or protection policies. Copy creation tools simply relay that the copy was created successfully. Even if application, database, and backup administrators create copies to align with protection SLAs, they can’t be expected to constantly check the environment to ensure that nothing has happened to those copies. Administrators are left to assume that their copies are sitting healthy on storage, even though they could be missing their SLAs.

Okay, so how does modern data management solve these problems? Well, modern data management provides a comprehensive view of data generated in the modern enterprise to enable SLA-based copy data lifecycle management, thus ensuring enterprise compliance and security. In order to fulfill this mission, modern data management consists of two additional components that build on self-service: intelligent copy oversight and analytics.


Intelligent copy oversight, such as Dell EMC Enterprise Copy Data Management, provides managers with a view of all copies in a data center. This component allows managers to create service level objectives (SLOs) and apply them to protectable storage assets. Copies of those assets are then continually monitored for compliance, and SLA enforcement can be automated to ensure alignment with business objectives. With intelligent copy oversight, managers have a clear way to view and manage their copy ecosystem.
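The monitoring half of that loop can be sketched in a few lines. This is a hypothetical illustration of SLO-based compliance checking, not eCDM’s actual API: given the copies that exist for each asset, flag any asset whose newest copy is older than its SLO allows.

```python
# Hypothetical sketch of SLO-based copy compliance monitoring.
# Asset names and the data shape are illustrative assumptions.
from datetime import datetime, timedelta

def non_compliant_assets(copies_by_asset, slo_max_age: timedelta,
                         now: datetime):
    """Return assets with no copy newer than `now - slo_max_age`."""
    violations = []
    for asset, copy_times in copies_by_asset.items():
        newest = max(copy_times, default=None)
        if newest is None or now - newest > slo_max_age:
            violations.append(asset)
    return violations

now = datetime(2017, 1, 10, 12, 0)
copies = {
    "crm-db":   [now - timedelta(hours=6), now - timedelta(hours=30)],
    "erp-db":   [now - timedelta(hours=36)],   # stale copy
    "web-logs": [],                            # never protected
}
# Against a 24-hour SLO, erp-db and web-logs are out of compliance.
print(non_compliant_assets(copies, timedelta(hours=24), now))
```

The “intelligent” part is what happens next: rather than just reporting the violations, an automated enforcement step could trigger new copy creation or alert the responsible administrator.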

C-D-M and Y-O-U: Excess Copy Data is a Problem, but How Do You Solve It?


If you’ve been following EMC’s latest announcements, one of the numbers you’ve seen repeated over and over… and over is $50 billion, the amount that the “copy data problem” is expected to cost customers globally over the next three years. Given such an outrageous number, it’s hard not to take a closer look at what’s causing this major cost overrun. I’ll save you the Google search and tell you right now: your numerous data copies are taking up valuable space on your storage, and the decentralized self-service methods of monitoring, managing, and protecting these copies are costing you a lot of time and money due to lack of oversight.

You can’t expect your DBAs and application owners to deviate from native copy creation processes, and you can’t get rid of every copy in your data center. Copies are vital to supporting nearly every task that shouldn’t be done with production data – operations, analytics, dev/test, data protection, and more. But how effective are you at managing those copies? Can you effectively mitigate the risk associated with self-service copy creation? Do you have the right number of copies on the right storage? Copy management solutions provide a central way to supervise copy creation and administration, which means you get to reclaim control of your copy data. With the right copy management solution, application owners and DBAs can continue to create copies while providing you with a way to oversee copy orchestration and ensure that copies are on the right storage to meet SLAs and mitigate risk.

Okay, so you get it – copy data management is relevant and important to enable self-service, ensure business compliance, and mitigate security and data protection risks. Now here’s the important question: which copy data management solution is best for you?
