C-D-M and Y-O-U: Excess Copy Data is a Problem, but How Do You Solve It?

Tyler Stone

EMC eCDM Product Marketing
Tyler is currently a software engineer in the Engineering Leadership Development Program at EMC. He has spent his three years at EMC across multiple roles in storage, IT, and data protection. Tyler has a love of technology and innovation, and something about data centers has always sparked his sense of wonder and amazement (as he says, maybe it’s the flashing lights). When his head isn’t up in the cloud, he is out travelling, taking pictures, or playing golf. Follow him on Twitter @tyler_stone_

If you’ve been following EMC’s latest announcements, one of the numbers you’ve seen repeated over and over… and over is $50 billion, the amount that the “copy data problem” is expected to cost customers globally over the next three years. Given such an outrageous number, it’s hard not to take a closer look at what’s causing this major cost overrun. I’ll save you the Google search and tell you right now: your numerous data copies are taking up valuable space on your storage, and the decentralized self-service methods of monitoring, managing, and protecting these copies are costing you a lot of time and money due to lack of oversight.

You can’t expect your DBAs and application owners to deviate from native copy creation processes, and you can’t get rid of every copy in your data center. Copies are vital to supporting nearly every task that shouldn’t be done with production data – operations, analytics, dev/test, data protection, and more. But how effective are you at managing those copies? Can you effectively mitigate the risk associated with self-service copy creation? Do you have the right number of copies on the right storage? Copy management solutions provide a central way to supervise copy creation and administration, which means you get to reclaim control of your copy data. With the right copy management solution, application owners and DBAs can continue to create copies while you retain a way to oversee copy orchestration and ensure that copies are on the right storage to meet SLAs and mitigate risk.

Okay, so you get it – copy data management is relevant and important to enable self-service, ensure business compliance, and mitigate security and data protection risks. Now here’s the important question: which copy data management solution is best for you?

Traditional Copy Data Management
To date, most copy data management solutions have followed the same traditional approach and architecture. Basically, traditional copy data management consists of a server and storage. When installed, the CDM product is inserted into the copy data path, and it copies your production database to the product’s own storage, creating a “gold” or “master” copy. The master copy is the copy from which all other copies are derived, and it is kept up to date with your production database through a scheduled synchronization process.
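To make that flow concrete, here is a minimal, hypothetical sketch of the traditional model: a master copy held on the CDM product’s own storage, refreshed from production on a schedule, with every other copy derived from it. The class and method names are illustrative only, not any vendor’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class Copy:
    name: str
    source: str
    created: datetime

@dataclass
class MasterCopy:
    """The 'gold' copy kept on the CDM product's own storage (illustrative)."""
    last_synced: datetime
    derived: List[Copy] = field(default_factory=list)

    def needs_sync(self, interval: timedelta, now: datetime) -> bool:
        # The master is only as fresh as its last scheduled synchronization.
        return now - self.last_synced >= interval

    def sync_from_production(self, now: datetime) -> None:
        # In the traditional model this is a pull of the production database
        # over the copy data path onto the product's secondary storage.
        self.last_synced = now

    def derive_copy(self, name: str, now: datetime) -> Copy:
        # Dev/test, analytics, and other copies are carved from the master,
        # not taken from production directly.
        copy = Copy(name=name, source="master", created=now)
        self.derived.append(copy)
        return copy

# Example: an hourly sync schedule, with a dev/test copy derived afterwards.
master = MasterCopy(last_synced=datetime(2016, 1, 1, 0, 0))
now = datetime(2016, 1, 1, 1, 30)
if master.needs_sync(timedelta(hours=1), now):
    master.sync_from_production(now)
master.derive_copy("dev-test-01", now)
```

The drawbacks listed next follow directly from this design: every copy operation funnels through a single appliance and its storage.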

However, this architecture has some drawbacks:

  • Introduces a bottleneck in the copy data path
  • Requires reworking operational workflows and adding hardware
  • Stores copies on a secondary storage device not designed for protection
  • Makes the centralized copy management hardware a single point of failure

The limitations of traditional copy data management offerings have prompted the development of a new architecture.

Modern Copy Data Management
Modern copy data management, like EMC Enterprise Copy Data Management (eCDM), lets you non-disruptively discover copies across both primary and protection storage in your data center. It embraces the decentralized ways copies are created and the various underlying storage technologies that enable efficient copy creation, orchestration, and protection, and it monitors each copy’s lifecycle across the data center. Using the same solution, you can then create customized service plans, automate SLO compliance, and make informed decisions about your copy data. Through this model, storage administrators, backup administrators, application owners, and DBAs can continue to create copies however they wish, while you retain the global oversight needed to ensure compliance with business and regulatory objectives.
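As a rough illustration of what a service plan and an SLO compliance check could look like in this model, here is a minimal, hypothetical sketch. The plan fields, tier names, and application names are assumptions for illustration, not eCDM’s actual interface.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class DiscoveredCopy:
    app: str
    storage_tier: str   # e.g. "primary-flash" or "protection" (illustrative labels)
    created: datetime

@dataclass
class ServicePlan:
    """A hypothetical service plan: which tier copies must land on and how fresh they must be."""
    app: str
    required_tier: str
    max_copy_age: timedelta

    def compliant(self, copies: List[DiscoveredCopy], now: datetime) -> bool:
        # Compliant if at least one discovered copy for this application sits on
        # the required tier and is newer than the allowed age.
        return any(
            c.app == self.app
            and c.storage_tier == self.required_tier
            and now - c.created <= self.max_copy_age
            for c in copies
        )

# Example: verify a database has a protection copy less than 24 hours old.
now = datetime(2016, 1, 2, 8, 0)
discovered = [
    DiscoveredCopy("erp-db", "protection", datetime(2016, 1, 2, 1, 0)),
    DiscoveredCopy("erp-db", "primary-flash", datetime(2016, 1, 1, 9, 0)),
]
plan = ServicePlan("erp-db", "protection", timedelta(hours=24))
print(plan.compliant(discovered, now))  # True
```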

Now that I’ve described the solutions, you can decide – copy data is a problem, but how will you solve it?

Data Center Modernization 101

Sarah Werner

EMC Marketing Intern
As an undergraduate at the University of Massachusetts Amherst majoring in chemical engineering, I have used this summer internship to explore the business side of the IT industry. Specifically, I have been tasked with spearheading important competitive projects. In my short time at EMC, I’ve been able to draw on scientific methods learned in my engineering classes and apply them to solving real-world business problems.


EMC declared 2016 the Year of All-Flash – now is the time to modernize your data center by switching to all-flash storage arrays! Of course, choosing the right all-flash array is just the beginning of the journey, and there are other steps to take in order to maximize the value that all-flash can deliver to your business. Here are some important considerations for your data center modernization efforts:

Step 1:  Make the Move

After finding the ideal all-flash appliance for your business, prepare your old arrays for retirement by migrating all of your data over. This sounds like a daunting process – one that traditionally takes 6 to 9 months to complete – but it can be made easier with the non-disruptive online tech refresh that a solution like VPLEX provides. This reduces the time to value of the new flash arrays from months to days by migrating data without any required downtime. Your data and applications remain accessible and your business keeps running, all while you modernize your data center.
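For intuition, here is a highly simplified, hypothetical sketch of the general pattern behind non-disruptive online migration – mirror new writes to both arrays while a background task copies existing data, then cut over. It is illustrative only and does not represent VPLEX’s actual implementation.

```python
# Two arrays modeled as simple block maps (purely illustrative).
old_array = {i: f"block-{i}" for i in range(8)}   # existing production data
new_array = {}                                    # the new all-flash array

def host_write(block: int, data: str) -> None:
    # During migration, every new write lands on both arrays,
    # so applications never have to pause.
    old_array[block] = data
    new_array[block] = data

def background_copy() -> None:
    # Copy any block that has not yet reached the new array.
    for block, data in old_array.items():
        new_array.setdefault(block, data)

# The host keeps writing while the copy runs...
host_write(2, "updated-block-2")
background_copy()

# ...and once the new array holds everything, reads cut over with no downtime.
assert new_array == old_array
active_array = new_array
```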

Step 2:  Keep All-Flash Always On

Congratulations! Your data and applications have been moved in record-breaking time, and your new all-flash environment is already delivering much greater performance. It’s not difficult to love the speed and power of flash, but that doesn’t mean flash arrays are any less susceptible to disaster situations that can result in data loss. Power outages, fire, flooding, or human error can all put your critical data – and by extension, your business – at risk, leading to lost revenue and customer dissatisfaction. Luckily, preventive measures can be taken to ensure your data is continuously available despite unforeseen disasters. Data sent down a particular host’s data path can be copied between arrays, whether or not they’re in the same data center. This keeps your flash environment always on even if one array fails, and your data and applications remain safe and available. Learn more about continuous availability and disaster recovery solutions.

Don’t forget the network: make sure your SAN environment doesn’t become a bottleneck for blazing-fast flash performance. When implementing high-availability replication technologies, also evaluate the communication link between data centers to make sure it can handle the replication traffic.


Step 3: Accelerate IT Operations

Before you know it, your data center will require changing configurations, moving applications, load balancing servers, using more cloud storage, and so on. No business is immune to frequent adjustments, and its IT specialists shouldn’t struggle to meet new demands because of the limitations of technology. Evaluate your current level of IT infrastructure agility: how much effort and downtime is involved in moving applications for load balancing, consolidating data centers for better resource utilization, or moving data back and forth between your data center and the public cloud? Once again, server and storage virtualization go a long way toward making these tasks easy and non-disruptive, and they help create infrastructure that can turn on a dime.

Moving to all-flash storage is a great step toward modernizing your data center and delivering uncompromised application performance. It is also a great opportunity to take a deeper look at your entire infrastructure to identify and eliminate sources of inefficiency. With the right technology in place, you can realize the full advantage of all-flash storage and create data centers that are well protected, always available, and agile to operate.

 

Note:  This blog was co-written by Parasar Kodati

SaaS is Changing Everything – Including Data Loss Risk from Admin Error

Lori Witzel

Product Marketing Manager, Spanning by EMC
Lori Witzel is a Salesforce MVP, has worked with and for SaaS companies since 2005, and has been sharing info with, listening to, and learning from tech users ever since. She is currently PMM for Spanning Backup for Salesforce, as well as PMM for Spanning Backup for Google Apps. Prior to Spanning Backup, Lori worked for various early-stage Cloud start-ups, mid-sized middleware providers, and ed tech firms, and she’s always eager to learn more. Lori's profile on LinkedIn: https://www.linkedin.com/in/loriwitzel

Software-as-a-Service (SaaS) has a history unlike that of on-premises software, and the people who manage and administer SaaS applications reflect that difference. When it comes to data protection, that difference matters, as you’ll learn.


What is SaaS, and does it REALLY differ from on-premises or from hosted applications?
SaaS isn’t just some software sitting on a vendor-managed server in the cloud – it’s significantly different from its predecessors, hosted and on-premises applications, in its delivery and its architecture.

  • A SaaS application is by definition cloud-based and multi-tenant, sharing IT resources securely in the cloud among multiple applications and tenants (businesses, organizations, schools). Multi-tenancy is the technical architecture that differentiates SaaS from hosted/ASP applications (a minimal sketch of one common multi-tenant pattern follows this list). The customer accesses the application through a web browser and is responsible only for managing the data and metadata (customizations) of their instance.
  • A hosted application is almost always a single-instance, single-tenant adaptation of an on-premises application. The customer may lease or own physical or virtualized servers upon which the application is installed, and will access it through a web browser or a thin client. The customer may be responsible for managing the servers, and is responsible for managing application upgrades and maintenance.
  • On-premises applications are installed on and operated from a customer’s in-house (on-premises) servers and computing infrastructure. The customer is responsible for application security, availability to the organization, and management.
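As promised above, here is a minimal sketch of one common way multi-tenancy is implemented: a shared schema in which every row carries a tenant identifier and every query is scoped to it. It is illustrative only and does not describe Force.com’s internals; the table and tenant names are made up.

```python
import sqlite3

# One shared table serves every tenant; a tenant_id column keeps each
# organization's rows logically isolated even though the infrastructure is shared.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (tenant_id TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    [("acme", "Road Runner Ltd"), ("acme", "Coyote Inc"), ("globex", "Hank's Gadgets")],
)

def accounts_for(tenant_id: str) -> list:
    # Every query is scoped to the caller's tenant, so one tenant can
    # never see another tenant's rows.
    rows = conn.execute(
        "SELECT name FROM accounts WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [name for (name,) in rows]

print(accounts_for("acme"))    # ['Road Runner Ltd', 'Coyote Inc']
print(accounts_for("globex"))  # ["Hank's Gadgets"]
```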

How Did SaaS Come to PaaS?
In 1999, salesforce.com was founded, offering the first true multi-tenant architecture in a commercial software application. Its SaaS applications, such as Sales Cloud and Service Cloud, were developed on its Force.com Platform-as-a-Service (PaaS). By forgoing conventional application development platforms and creating its own platform, salesforce.com freed itself from some of the performance limitations inherent in a standard relational database.

The salesforce.com achievement in creating a PaaS to enable SaaS, enabling it to scale up to support hundreds of thousands of intra- and inter-enterprise tenants (different departments, different organizations), was, to quote Computerworld, “complex, commendable and quite revolutionary.”

Modern Networks for Today and Tomorrow

Doug Fierro

Senior Director, Connectrix and Storage Networking at EMC
Doug Fierro is responsible for the Connectrix Business Unit which drives strategy and product delivery of storage networking technologies at EMC. This includes responsibility for delivering storage networking technologies that EMC sells within the Connectrix and EMC Select product lines, or qualifies within end-to-end solutions. These storage networking technologies include Fibre Channel, Ethernet, iSCSI, FCoE, Network Virtualization and WAN acceleration. Doug has 30 years of technical and marketing experience in the computer and storage industry, which includes 20 years at EMC.


“Why should I buy this now?” Have you ever thought that while looking at the latest products, whether that’s new technology, entertainment systems, sports equipment or anything else?

Do you wonder, “Will it provide more value, better performance, or just give me something to brag about with my friends?”

Ultimately, you’re trying to decide “Is it worth the investment now, or should I wait?”

That decision process happens all the time when purchasing new products, especially if they are the latest technology. And it is happening again in the Fibre Channel technology race, as the industry is entering the next generation of product cycles, anchored by the latest 32Gb speed, and supported by new capabilities that extend Fibre Channel for the next generation of IT.

When is Enough…Enough???
When your business stops growing, when your customers stop requiring new levels of service, and when your budgets are infinite. Since that is not happening anytime soon, this is a great time to think ahead, position for today and tomorrow, and let new technology provide a way to solve challenges and add value to your business.

EMC and our close partner Brocade are giving you that opportunity today. With the latest EMC Connectrix B-Series products, we have worked with Brocade to deliver an industry-leading series of 32Gb Fibre Channel systems that are available now and will help you solve your most critical IT issues today and into the 2020s.

Keeping up with the All Flash Arrays
It is no secret that All Flash storage arrays are changing IT. What has been a bit of a mystery is how your FC storage network can enhance your All Flash array experience. It is amazing how many customers are introducing All Flash arrays into their environment and connecting them to five-year-old (or older) FC networks. The result is similar to putting a board under the gas pedal of a Formula 1 car and expecting it to accelerate to top-end speed. It is just not going to happen.

Updating your FC SAN now will help you get the greatest value from your newest All Flash arrays. Doing that with the newest Connectrix B-Series will position you to experience top-end performance now and for many years to come, so you are ready to keep updating your storage environment no matter what new arrays you introduce into your data center.

That is future-proofing with a purpose…

A Variety of Webinars to Fit Your Needs

Lauren Simpson

Principal Product Marketing Manager
Lauren is a Principal Product Marketing Manager at EMC working on the Experiential Marketing team. She helps drive customer-facing activities including engagement campaigns, events, webinar programs, and tradeshows. Outside of work, she enjoys traveling and spending time with her husband and two children.


Are you wondering where your use case and product questions can be addressed? The Core Technologies team here at EMC has many ways to engage with you, at your convenience, to meet your needs. The most recent way to stay connected is our newly launched webinar series – a three-part program with different content and levels of complexity. There is a webinar to address each facet of your IT challenges, covering everything from cyber attacks, data duplication, complex application management, and data sprawl to the inability to support multiple virtual machines. If you have IT challenges, we have a solution to fit your needs.


EMC Tech Talks
The tech talk series provides you with an inside look at managing your experience with our products. Each session focuses primarily on one product at a time, given the technical nature of the content. We use simulators, GUIs, and animated videos to drive the 30-minute conversation and address your questions as they arise.

 

“Modernize Without Compromise” and “Are You Protected?”
These two series highlight how to efficiently store, protect, and manage your information wherever it lives. Each session is approximately 60 minutes in length, and they all offer the opportunity to ask questions via chat.

 

 

Click here to be added to the webinar distribution list, ensuring that you receive the most up-to-date information on scheduling, topics, and more.

To view the upcoming schedule or all of our previously recorded events, click on one of the series names below:

We look forward to having you join our upcoming events!

EMC Unity All-Flash Storage Accelerates Cyber Investigations

Matthew Edman

Ph.D., Director at Berkeley Research Group LLC, and Guest Blogger
Matthew J. Edman, Ph.D., is a Director at Berkeley Research Group LLC where he specializes in cybersecurity and investigations. Dr. Edman previously worked as a lead cybersecurity engineer for a federally funded research and development center, where he provided specialized computer network security research and development to federal law enforcement on a number of high-profile cases, including the investigation into and seizure of Silk Road--the notorious $1.2 billion underground drug market. Dr. Edman has a B.S. in computer science from Baylor University, and an M.S. and Ph.D. in computer science from Rensselaer Polytechnic Institute, where his research areas included novel techniques for cryptographic security and authentication in wireless networks, and the design, implementation, and analysis of anonymous communication systems on the Internet.


Cyber security is on everyone’s mind these days, and for good reason. We’ve all heard or read about high-profile hacks where sensitive personal records were breached or millions of dollars were stolen. The business impact on the targeted companies can be tremendous, ranging from bad press and a tarnished reputation to lost revenue and hefty regulatory fines. The worst cases have even put firms out of business.

Within BRG’s Global Cyber Security and Investigations practice, we have built a diverse and experienced group that helps organizations assess unique security risks and deploys fast, effective response teams when breaches occur. Our practice includes veteran FBI agents and federal prosecutors who have conducted some of the most high-profile cyber investigations in recent history, computer scientists who have developed innovative and state-of-the-art investigative tools and techniques, and security engineers who have years of experience analyzing and securing corporate IT infrastructure across myriad industries.

We know first-hand from our real-world experience how important it is to systematically analyze the facts and electronic evidence to identify threat actors, mitigate data loss, maintain business continuity, and ultimately mount a legal response. In every case, our response time is one of the most critical client considerations.

Think about it. If your company is in the middle of getting hacked, the management team wants answers, not excuses while their incident response team is still uploading and processing evidence. When we built out our cyber security lab infrastructure, we wanted a storage system that could live up to our clients’ understandably high expectations. That’s why we chose EMC Unity’s all-flash storage platform. We looked at several other vendors and a number of storage technologies, but EMC Unity’s all-flash storage provided superior performance, expandability, and simplicity at a favorable price point.

Blazing-fast storage is critical, especially when we are dealing with massive datasets, but so is expandability. Our work regularly requires analysis of increasingly large amounts of data, including, for example: an analysis of hundreds of millions of database records related to a virtual currency-based money laundering operation; an investigation into criminal activity targeting a popular website that generated nearly 10 terabytes of logs each day; and a collection and review of nearly 20 terabytes of Microsoft Exchange data in connection with the investigation of a case of insider theft. As our practice continues to expand rapidly, we needed a storage platform that could easily grow along with our evidence storage requirements while still providing the performance to conduct high-speed investigative analytics for our clients.

Flexibility is also key. A complete incident response will leverage a diverse set of tools depending on the unique circumstances of the investigation. We may be using industry-standard forensic tools such as FTK or EnCase one day, running analytics against multi-terabyte Microsoft SQL Server or Cassandra databases the next, or analyzing evidence with our own in-house investigative tools. Some of our tools require block-level storage, while others need file-level storage—and extremely fast access times are necessary in all cases. Our environment is also highly virtualized, so we required a storage platform that would integrate seamlessly with VMware.

One other major consideration was manageability. We are focused on constantly delivering high-quality, rapid results for our clients, and we are not able to do that effectively if we are distracted supporting our technology. We needed a storage solution with a streamlined user interface that is easy to learn and quick to deploy, and that requires little day-to-day maintenance from our engineers.

EMC Unity provided an affordable solution that met or exceeded our requirements, which is why it is the core storage environment for our lab infrastructure. Ultimately, our cyber security and investigative work comes down to response time and accuracy. The faster we can provide actionable results to our clients, the quicker and more effective they are when responding to security incidents. That’s how we measure success.

 
