Posts Tagged ‘cloud storage’

Tips for Running a Database as a Service

Yoav Eilat

Director of Product Marketing, Dell EMC
Yoav is Director of Product Marketing at Dell EMC, driving the marketing efforts for database and application solutions. He joined the EMC XtremIO team from Oracle, where he spent several years in the applications, middleware and enterprise management product teams. Yoav has an extensive background in enterprise software and data center technologies, and holds a B.Sc. in mathematics and computer science from Tel Aviv University and an MBA from the University of California, Berkeley.


Database as a Service (DBaaS) is becoming another one of those industry buzzwords that can mean almost anything. Obviously it has something to do with running databases in a cloud model. But technology vendors don’t hesitate to apply that term to any product that’s even remotely related to that topic. Database software? Yep, that’s DBaaS. Storage arrays for your database? That’s DBaaS too. A coffee machine? Probably!

For a serious discussion about DBaaS, it's useful to look at the state of databases today. Data is the foundation on which modern businesses are built, and much of it lives in commonly used databases such as Oracle Database or Microsoft SQL Server. Database sprawl and the resultant explosive growth of database copies represent an enormous challenge for enterprise IT teams. In an IDC survey, 77% of enterprise IT decision makers said they have more than 200 instances of Oracle Database or Microsoft SQL Server in their data centers.


Source: IDC Data Management Survey for EMC, November 2015

In the same survey, more than 80% said they have more than 10 copies of each given production instance, typically for development, testing, data center operations, analytics, data protection or disaster recovery. While database copies are critical for these business activities, database administrators have often been reluctant to expand the number of database copies, due to the hardware, software and administrative costs involved.

And it's not just about costs: these databases are typically not standardized, comprising a wide range of versions, patch levels and configurations. This sprawl and lack of standardization make it challenging to manage governance and compliance, and to meet service-level agreements. Inefficient management tools and a lack of visibility into the copy infrastructure can exacerbate these challenges.

So how can databases be made available to critical business activities while keeping costs under control and delivering quick service and fast time to market? How do you set up an efficient cloud environment that will reduce complexity, ensure data availability and accelerate business processes? Let's go through a checklist for making sure your DBaaS initiative is a success.

Cloud Adoption: Strategy vs. Reality

Vladimir Mandic

Chief Technology Officer & Distinguished Engineer, Data Protection Cloud, Core Technologies Division, Dell EMC
Vladimir has been driving technical innovation and change within EMC for the past 10 years, first in the area of data protection software and, currently, in cloud technologies. Prior to that, he gained rich industry experience as a solution integrator and in the service provider space. When not working on technology innovation, he may be difficult to locate due to his passion for world travel.


Myths About Migrating to the Cloud

Myth 1: Cloud Bursting
One of the original highly publicized use cases for public cloud was bursting. The story made sense: as your demand for compute increased, you would use the public cloud to increase the capacity of your private infrastructure. Like so many good stories, bursting didn't really happen. In fact, bursting is one of the least common public cloud use cases.
Why didn't bursting become more widespread? Enterprises are either keeping applications on-premises in newly designed IaaS private clouds or moving them to the public cloud. It's an OR function, not an AND one. Furthermore, the decision almost always happens per application: you evaluate your future application needs and decide where it makes more sense to run the application to meet them. Bursting across environments is just too complex.

Myth 2: Multi-Cloud
Most enterprises have neither a comprehensive hybrid cloud nor an end-to-end multi-cloud strategy that covers their entire IT environment. Frequently there is a general desire for a multi-cloud strategy to minimize dependency on a single cloud provider. But that strategy again turns out to be a per-application choice rather than a centralized plan.
Organizations choose to run some applications in the private cloud and some in different public clouds. Every cloud has very different functionality, interfaces, and cost optimizations. And each time an application developer chooses an environment, it's because that cloud was the optimal choice for that application. As a result, application mobility becomes a myth; it's something desired, but very few are willing to settle for the lowest common denominator between different choices just to enable application mobility.
Even if customers wanted to and could move the application, it’s unlikely to happen. Moving large amounts of data between environments is challenging, inefficient, and costly. So, once the choice of a cloud provider is made, the application stays where it is, at least until the next tech refresh cycle when per-application considerations can be re-evaluated.

Cloud Adoption for Legacy Applications
While so much of the focus has been on creating new applications, enterprises are also migrating traditional workloads. So what are the stages of cloud adoption?

  • Step 1: Infrastructure as a Service. Treat the cloud like a typical infrastructure; in other words, think of servers and storage as you currently think of them. Applications are installed on top of the infrastructure. Because everything is relatively generic, the choice of a cloud provider is not too critical.
    But as applications start to move, a new way of thinking evolves; you start looking at the infrastructure as services instead of servers.
  • Step 2: Software as a Service. Some legacy applications are swapped for new ones that run as a service. In this case, you don’t care where your SaaS service runs as long as it’s reliable. The choice of a cloud provider is even less relevant; it’s about choice of the SaaS solution itself.
  • Step 3: Rewrite the Application. Some applications are redesigned to be cloud-native. In some cases, the cloud is an excuse to rewrite decades of old COBOL code that nobody understands. In other cases, features of the cloud enable an application to scale more, run faster, and deliver better services. Not all applications should be rewritten.

The Core Issue: Data
When thinking about moving applications, what's left is the actual data, and that is where company value truly resides. Some data moves with the application it belongs to, but not all data is structured within an application. And that is the final challenge of cloud adoption: looking at how data services can enable global, timely, and secure access to all data, whether it resides inside an application or outside of it.

The Role of IT
Just what is the role of the central IT organization, and is there a single strategy for IT? Not really.
The word “strategy” comes not from having a single plan that covers all applications, but from a comprehensive evaluation that should be done before choices are made and from having a unified set of services that ensure security, availability, and reliability of all those different environments.

Consider how IT organizations are evolving to become service brokers. For example, sometimes:

  • It makes sense to build a private cloud based on new converged (or hyper-converged) infrastructure.
  • It may make sense to go with a software-defined data center (SDDC), though that is more often the case when IT has to serve unknown external consumers rather than explicit requirements.
  • IT organizations will broker services from public cloud providers such as AWS, Azure, GCE, or VirtuStream.

The alternative is so-called "shadow IT," where each application team attempts to manage its own applications without understanding the global impact of its choices. In such scenarios, security is typically the first thing to go, and data protection follows closely.

I've written before about how, with a move to the public cloud, responsibility for infrastructure availability shifts to the cloud provider. But that does not negate the need for a comprehensive data protection strategy.

You still need to protect your data, on-premises or in the cloud, from external threats such as ransomware, from internally caused data corruption (the application is frequently the cause of corruption, not just infrastructure failures), and from the common (and sometimes embarrassing) "threat" of "I deleted the wrong data and I need it back."

Companies weigh the costs and benefits of any investment. There are places where different types of infrastructure deliver the right answer. For IT to remain relevant, it needs to support different types of environments. IT’s future is in delivering better on-premises services, becoming a service broker, and ensuring that data is securely stored and protected.

Conclusion
The cloud is real and it is part of every IT team's life. IT can be instrumental in the successful adoption of the cloud, as long as the team approaches it with calmness, reason, and an open mind. The goal isn't to design the world's greatest hybrid cloud architecture. It's about choice, and about designing for application services instead of looking at servers and storage separately from the applications. There will be well-designed private clouds and public clouds that are better fits for specific applications. But the applications will dictate what works best for them; they will not accept a lowest-common-denominator hybrid cloud.
In the end, hybrid cloud is not a goal in itself; it is a result of a well-executed strategy for applications and data.

Tales from EMC World 2016: Building a Modern Data Center

Last year we built a data center as part of our efforts to display our full portfolio of products at EMC World. We never would have imagined the level of interaction and interest we received, with hundreds of customers coming through the exhibit every day. Our customers, partners, and internal EMC folks love technology, and there is no better way to get a 'feel' for it than actually touching it.

This year we decided to do the same thing, showcasing some of the most exciting technology advances in years. We organized our live modern data center around five key pillars:


Flash – everyone understands the benefits of flash from a performance perspective, mainly delivering predictable, low response times. But supply-side innovation is allowing us to embed much denser 3D-NAND technologies, delivering unprecedented density and lowering CAPEX and OPEX in ways not possible before. All-flash arrays make more sense than ever, and we showcased the coolest kids on the block:

  • Unity All Flash systems, combining the benefits of flash with unprecedented efficiency and simplicity. Unity is built with end-to-end simplicity in mind, bringing innovations like an HTML5-based GUI and a REST API to simplify operations (see the sketch after this list for what driving an array through a REST API can look like). We also previewed CloudIQ, a cloud-based analytics dashboard for managing larger topologies much more proactively.
  • VMAX All Flash, combining the benefits of flash with unmatched data services and availability features like SRDF. Through the integration with CloudArray, customers can be strategic about their hybrid cloud strategy: flash where you need it and cloud where you don't.
  • XtremIO is the uncontested all-flash array market leader, delivering integrated copy data management capabilities that let customers leverage flash in ways they never could before. Being able to deliver copies on demand means better business agility. Being able to do so without tradeoffs in performance and efficiency is the hallmark of the XtremIO architecture and something competitors struggle to match.
  • One interesting addition to our data center this year was DSSD, which helps our customers get business outcomes faster than ever before by essentially stripping code out of the I/O path while preserving the benefits of shared storage. Server-side flash has often been used, but it leads to stranded storage and the need to shuffle data around, limited capacity, and no enterprise features to secure the data set. Compare that to DSSD D5, which can provide 144TB of capacity and deliver 10 million IOPS at microsecond response times, all in 5U.
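
As a rough illustration of the REST-driven operations mentioned above, here is a minimal Python sketch that polls a storage array's management API for pool capacity. The host, credentials, endpoint path and JSON field names are hypothetical placeholders, not the documented Unity or CloudIQ API; consult the vendor's REST reference for the real resource names and authentication scheme.

    # Minimal sketch: polling a storage array's REST API for pool capacity.
    # NOTE: the host, credentials, endpoint path and JSON field names below are
    # hypothetical placeholders for illustration only.
    import requests

    ARRAY = "https://array.example.com"          # hypothetical management address
    AUTH = ("monitor_user", "monitor_password")  # hypothetical read-only account

    def get_pool_summary():
        """Fetch the list of storage pools and report how full each one is."""
        resp = requests.get(
            f"{ARRAY}/api/storage-pools",  # hypothetical endpoint
            auth=AUTH,
            timeout=10,
        )
        resp.raise_for_status()
        for pool in resp.json().get("pools", []):
            used_pct = 100.0 * pool["used_bytes"] / pool["total_bytes"]
            print(f"{pool['name']}: {used_pct:.1f}% used")

    if __name__ == "__main__":
        get_pool_summary()

The same pattern of authenticated HTTP calls returning JSON is what lets dashboards and automation scripts treat the array as just another service.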


Storage Networking Performance Matters For Mainframe FICON Environments

Deirdre Wassell

Director, Dell EMC Connectrix Product Marketing
From mainframe operations, to systems programming, to storage product management, to technical product marketing, Deirdre Wassell’s career reflects her prodigious interest in technology.

A flashback to working in a house of cards…

My first role in IT was working as a Computer Operator in the data center of a reinsurance company. I was responsible for running the company's reporting programs on the UNIVersal Automatic Computer, or "UNIVAC," mainframe. To run a program on the UNIVAC, I'd go to the "Program Closet" and grab the pertinent program, which was a stack of punched cards, then feed the cards through the UNIVAC card reader, and the report would be created. Fun!

Not fun…sometimes a card, which represented a line of code or an instruction, would get damaged and I'd have to recreate it using a keypunch machine. If the card was severely damaged, I'd have to go to the "Source Code Closet," which contained the Master Program Decks. Using the card from the Master Program Deck, I'd carefully replicate the damaged card by typing the instruction on the keypunch machine, and then I'd rerun the operating deck through the card reader to produce the report. Memories…

Flash-forward to 2016: the cards are gone and the mainframe game has changed
Mainframe customers are expanding the role of their highly secure environments into repositories of enterprise data for web servers and web-based application services. Every day, more and more transactions originate on mobile devices and end up at mainframes, where retail purchases are recorded. The number of these transactions, added to traditional online transactions, runs into the millions daily and is growing with no end in sight.

Bottom line: the mainframe has evolved to become the platform of record for 3rd-platform applications.

5 Reasons to Consider Cloud-Ready Storage

Nicos Vekiarides

Vice President of Cloud Technology
Co-founder and former CEO of TwinStrata, Nicos Vekiarides is now Vice President of Cloud Technology following the acquisition of TwinStrata by EMC. In his 20+ years of experience, Nicos has led teams focused on revolutionizing storage virtualization, data replication and cloud storage.


With proclamations of 2016 as the year of all-flash storage, you may be tempted to think flash drives are the main consideration when choosing a storage array. However, a storage array has a variety of attributes that influence the purchasing decision, including a trusted brand, interoperability, availability, copy services and many others.

Cloud-readiness, or the array's native ability to attach to cloud/object storage, is an attribute that is gaining prominence in the selection criteria. This is for good reason: cloud storage can balance the continuous need for on-premises and off-premises storage capacity with economics that make it viable.
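
To make "attach to cloud/object storage" concrete, here is a minimal Python sketch that pushes a cold snapshot file to an S3-compatible object store and lists what the cloud tier holds. This is roughly the kind of movement a cloud-ready array performs natively and transparently; the bucket name and file paths are hypothetical, and the example uses the standard boto3 SDK rather than any array-specific integration.

    # Minimal sketch: tiering a cold snapshot to S3-compatible object storage.
    # The bucket name, object key and local path are hypothetical; a cloud-ready
    # array performs this kind of movement natively rather than via a script.
    import boto3

    s3 = boto3.client("s3")  # credentials are read from the environment or ~/.aws

    BUCKET = "example-array-cold-tier"             # hypothetical bucket
    SNAPSHOT = "/backups/db_snapshot_2016-06.img"  # hypothetical cold data

    def tier_to_cloud(local_path, bucket, key):
        """Upload a local file to object storage, then list the cloud tier."""
        s3.upload_file(local_path, bucket, key)
        listing = s3.list_objects_v2(Bucket=bucket)
        for obj in listing.get("Contents", []):
            print(obj["Key"], obj["Size"])

    if __name__ == "__main__":
        tier_to_cloud(SNAPSHOT, BUCKET, "snapshots/db_snapshot_2016-06.img")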


If you are wondering whether cloud-readiness should be part of your selection criteria, consider the following.
