Author Archive

Continuous Availability without Any Compromise

Parasar Kodati

Product Marketing Manager
Parasar Kodati has more than ten years of experience in product management and marketing spanning scientific computing, embedded software development, and data acquisition technologies. When not working he may be found plotting with his mischievous daughter, cooking Indian street food or reading eastern philosophy.

Everyone needs some summer downtime. If you have not taken yours yet, I hope you have plans to do so soon. When it comes to business, however, the last thing you need on vacation is a frantic phone call telling you that your business has taken some downtime of its own. While downtime in IT systems is unavoidable, the real question is how much resilience is built into the system architecture. Availability is one of the hallmarks of a modern data center and should be considered in any data center modernization effort. Continuous availability, unlike other loosely used definitions of high availability, is about delivering a zero-RTO service level so that business-critical applications are never down. A continuous availability technology does this by capturing every IO for instant replication across distance, creating active-active data centers that offer the highest disaster resilience and also eliminate planned downtime. Let us look at some of the considerations for choosing the right continuous availability technology for the most demanding all-flash workloads.
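To make the active-active, zero-RPO idea concrete, here is a minimal Python sketch of a write path that acknowledges the host only after every IO has landed at both sites. The site names and the write_block stand-in are hypothetical illustrations, not the VPLEX implementation or its API.

```python
# Minimal sketch: synchronous mirrored writes across two active-active sites.
# Site names and write_block are illustrative stand-ins, not a real array API.
import concurrent.futures

SITES = ["site_a", "site_b"]  # an active-active pair within synchronous distance

def write_block(site: str, lba: int, data: bytes) -> bool:
    """Stand-in for issuing the IO to the array at `site`; True on success."""
    return True

def mirrored_write(lba: int, data: bytes) -> bool:
    """Acknowledge the host only after BOTH sites have committed the block.

    Because every IO is captured at both sites before the host sees
    success, either site can serve the workload if the other fails --
    the property that makes zero data loss and zero downtime possible.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(SITES)) as pool:
        results = list(pool.map(lambda s: write_block(s, lba, data), SITES))
    return all(results)

if __name__ == "__main__":
    assert mirrored_write(lba=42, data=b"payload")
```

The trade-off in this design is that every write pays the inter-site round trip, which is why the distance between active-active data centers is bounded by synchronous latency.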

Performance that Maximizes All-Flash Storage Availability
IT organizations around the world are rapidly adopting all-flash arrays to consolidate their business-critical applications, gaining uncompromising performance and data services with significantly better total economic value. A zero-RTO availability technology needs to be an extremely efficient data mover, replicating at a speed that matches flash performance. Needless to say, a technology that consumes array cycles for replication is eating into that flash performance. This is why a dedicated availability solution like VPLEX has become the obvious choice for more than half of the Global Fortune 500. At the same time, you don't need the availability technology to duplicate functionality the all-flash array already has. Instead, the replication technology should maximize the availability of the flash storage that is running more and more business-critical workloads.

Future Ready Scale
As the famous ice hockey player Wayne Gretzky once said, "Skate to where the puck is going to be, not where it has been." Availability technology that sits in the data path has to be ready to handle at least twice the IOPS and workloads it is being purchased for today. The hardware platform needs enough room to grow, both in terms of a scale-out architecture and in terms of spare compute power that future software releases can exploit with increasingly parallel algorithms.

While linear scaling in performance is appreciated, the licensing costs for a growing environment quickly become unattractive. This is where customers should look for licensing models that are more favorable for growing environments.

Get a Snow Blower for Your Cold Data

I think it is safe to say that this past winter in the Boston area has been more like spring. Unlike my three-year-old, who is fond of using her shovel (the blue one pictured below), my back and I did not miss the snow. Nevertheless, I was prepared with my arsenal to tackle snow on the car, the driveway, and the backyard. Here they are, basking in the 70s, in a picture taken on March 10.

[Photo: the snow-clearing arsenal, including the blue shovel]

Cold Data is Accumulating Much Faster than You Think
Cold data is something like a more typical New England snowfall: we clear the car and the driveway, but we can't get rid of the snow completely. So what is cold data? Data that is rarely accessed is referred to as cold data; other names include static data or inactive data. While the chances of cold data being accessed are very slim, it just sits around occupying valuable primary storage (which is rapidly becoming all-flash!). For certain applications, data starts to lose value quickly but still needs to be stored for a certain time frame. A simple example is archive mailboxes. Another is the rapid accumulation of device data, say from IoT sensors. Archival data that is beyond short-term backup presents the same challenge. Whatever the context may be, cold data is simply a consequence of rapid data growth.
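As a rough illustration of how you might spot cold data in the first place, here is a small Python sketch that flags files by last-access time. The 90-day threshold and the scan path are assumptions for the example, and note that access times can be unreliable on file systems mounted with noatime.

```python
# Illustrative sketch: flag "cold" files by last-access time.
# The threshold and root path are assumptions, not recommendations.
import time
from pathlib import Path

COLD_AFTER_DAYS = 90  # assumed cutoff for "rarely accessed"

def find_cold_files(root: str):
    """Yield (path, days_idle) for files not accessed in COLD_AFTER_DAYS days."""
    now = time.time()
    for path in Path(root).rglob("*"):
        if path.is_file():
            days_idle = (now - path.stat().st_atime) / 86400
            if days_idle >= COLD_AFTER_DAYS:
                yield path, round(days_idle)

if __name__ == "__main__":
    for path, idle in find_cold_files("/mnt/primary"):  # hypothetical mount
        print(f"{path} idle for {idle} days")
```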

Pain Points when Dealing with Cold Data
Now, just like we shovel away snow, storage administrators spend a lot of time and effort moving this cold data to lower-tier arrays or to tape. Time and effort aside, in the rare event that access to this data is needed, retrieval is once again a challenge, with time-consuming processes to load the data back onto the storage network. If you add up the cost of the primary storage being consumed by inactive data, the time and resources it takes to move cold data out of primary storage, and the recovery costs, you will quickly see the inefficiencies of cold data management chewing up your resources.

El Niño Effects from California to Chennai

The El Niño effect dramatically amplified the seasonal rainfall for Chennai, causing some of the worst flooding in the southeastern Indian city's history. The disruption was so bad that it made a material difference to the quarterly financials of IT giants like TCS. On the other side of the globe, southern California started seeing the effects and continues to prepare for "potentially destructive" rainfall.

The question of protecting information infrastructure continues to rise with the increasing digitalization of the economy. The good news is that natural disasters need not be IT disasters. Our customers are asking us more and more for help protecting their information in a way that gives absolute confidence in their ability to recover. They are also reporting incidents where their investments saved the day, not just when a natural disaster hit but also in IT disasters caused by facility issues, power failures, software bugs, or manual errors. Recent incidents include a major provincial hospital in North China, where a faulty UPS forced an array failure that could have caused the loss of valuable patient medical records, and an IT service provider serving banks and financial institutions in Kentucky, where a hardware failure put 356TB of data at risk of being lost forever. How did these companies recover their data? They were smart in the way they planned for a natural disaster. These are the steps they took:

  • Formed cross-functional teams representing application owners, DBAs, and storage architects to review data protection and availability requirements for different workloads. A lack of coordination often results in accidental architectures that are hard to maintain and can lead to poor visibility and gaps in protection coverage
  • Identified potential disasters, failure modes, and response strategies for workloads of varying degrees of criticality
  • If they were in an area prone to natural disasters, they considered the minimum distance at which a second data center needed to be located for business continuity
  • Clarified metrics around both recovery time (RTO) and data loss tolerance (RPO) and the impact those metrics had on their business. What would the impact be of their entire sales force not accessing key applications for an hour, or for half a day? How would that change if it was the end of the quarter or the end of the year? (A back-of-the-envelope model of this calculation follows the list below.)
  • Mapped protection strategies to the different workloads in a way that justified the ROI of the different technologies
  • Invested in vendors and technologies capable of achieving their data protection objectives today and of adapting to their changing business needs
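
To illustrate the RTO/RPO impact question from the list above, here is a hedged back-of-the-envelope model in Python. All dollar figures and rates are placeholders for the method, not benchmarks from the incidents described.

```python
# Hypothetical model: what does one outage cost, given RTO and RPO?
# Every number below is a placeholder to illustrate the calculation.

def downtime_cost(rto_hours: float, revenue_per_hour: float,
                  rpo_hours: float, rework_cost_per_hour: float) -> float:
    """Estimate the cost of a single outage.

    rto_hours: how long the application is down (recovery time)
    rpo_hours: how much recent data is lost and must be re-created
    """
    lost_revenue = rto_hours * revenue_per_hour
    rework = rpo_hours * rework_cost_per_hour
    return lost_revenue + rework

# The same one-hour outage, on an ordinary day versus at quarter-end
# when each hour of the sales pipeline is worth far more:
normal = downtime_cost(rto_hours=1, revenue_per_hour=50_000,
                       rpo_hours=0.5, rework_cost_per_hour=10_000)
quarter_end = downtime_cost(rto_hours=1, revenue_per_hour=250_000,
                            rpo_hours=0.5, rework_cost_per_hour=10_000)
print(f"normal hour: ${normal:,.0f}  quarter-end hour: ${quarter_end:,.0f}")
```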

In these examples, and in the diagram below, the customers relied on VPLEX to save the day, and their data.
[Diagram: VPLEX saves the day]

Three New Year’s Resolutions for IT Leaders

The second half of the decade is around the corner. It's that time of the year to reflect, refresh, and even resolve. Weight loss and working out never go out of fashion, but I thought I would put myself in the shoes of IT leadership. There is so much pressure on IT to embrace every new paradigm or fad that it can be hard to stay focused on fundamentals, and those fundamentals don't change no matter where technology goes. Here is my take on some resolutions for 2016.

Look for opportunities to disrupt yourself
Technology continues to make the competitive arena a level playing field. It is a good time to envision what your business can do with the information you have in terms of acquiring, serving and retaining your customers and employees. Think about the infrastructure that you need for rearchitecting your IT to achieve those goals. What areas need retooling? Where is the biggest bang for the buck?

Spend less time firefighting
In an increasingly digital world, the value of information continues to grow, and so does the cost of firefighting after an IT infrastructure disaster (caused by man, machine, or nature) to minimize data loss and downtime. With the right protection and availability strategy in place, we have heard many customer stories where the business was delighted at the level of availability IT was able to provide in the face of severe natural disasters.

Is Your Data Center Ready for a Natural Disaster?

The agility of a business is increasingly dependent on the agility of its IT. Mission critical is no longer limited to medical records and financial transactions. A few hours of downtime for your CRM system could mean missing your quarterly numbers or a landmark deal. Downtime for your website could shake the confidence of investors. No wonder data centers have grown to be one of the most important assets of an organization. All this means tighter SLAs and greater expectations to meet, and of course with the same budget and resources.

An important step in building a case for data protection investments is to map out the different planned and unplanned causes of downtime so that you can identify where the biggest gaps in your infrastructure are. Here is one example where the frequency and severity of each downtime cause is mapped out; a sketch of how such a mapping might be scored follows the chart. Today I would like to talk about three stories from the top left corner of the chart, where the frequency of the events is low but the impact is huge.

[Chart: downtime causes mapped by frequency and severity]
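
For illustration, here is a small Python sketch of how a frequency-versus-severity mapping like the one above might be scored to rank protection gaps. The causes and the numbers are made up for the example, not data from the chart.

```python
# Illustrative sketch: rank downtime causes by expected annual impact.
# Frequencies and impact scores below are invented for the example.

CAUSES = {
    # cause: (events_per_year, impact_score on a 1-10 scale)
    "natural disaster": (0.05, 10),
    "power/facility":   (0.5,   8),
    "hardware failure": (2.0,   6),
    "software bug":     (4.0,   4),
    "manual error":     (6.0,   3),
}

def annualized_risk(frequency: float, impact: float) -> float:
    """Simple expected-impact score: frequency times severity."""
    return frequency * impact

ranked = sorted(CAUSES.items(),
                key=lambda kv: annualized_risk(*kv[1]), reverse=True)
for cause, (freq, impact) in ranked:
    print(f"{cause:18s} risk={annualized_risk(freq, impact):5.2f}")
```

Even this crude scoring shows why the low-frequency, high-impact corner deserves attention: a rare event with a large enough impact can dominate the expected cost.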

Business continuity amidst Alberta’s worst floods
The summer of 2013 saw the worst flooding in recorded history in Alberta, Canada. At $1.7 billion in damages, the flooding is Canada's costliest natural disaster. As the flood water rose, the IT personnel at one of Canada's premier law firms were forced to abandon their data center. Before they evacuated, however, they were able to instantly migrate their mission-critical workloads to their remote data center, and their business continued to operate without disruption.
