
Tales from EMC World 2016: Building a Modern Data Center

Last year we built a data center as part of our efforts to display our full portfolio of products at EMC World. We could never have imagined the level of interaction and interest we received, with hundreds of customers coming through the exhibit every day. Our customers, partners, and internal EMC folks love technology, and there is no better way to get a ‘feel’ for it than actually touching it.

This year we decided to do the same thing, showcasing some of the most exciting technology advances in years. We organized our live modern data center around five key pillars:


Flash – everyone understands the benefits of Flash from a performance perspective, mainly delivering predictably low response times. But supply-side innovation now allows us to embed much denser 3D-NAND technologies, delivering unprecedented capacity density and lowering CAPEX and OPEX in ways not possible before. All Flash arrays make more sense than ever, and we showcased the coolest kids on the block:

  • Unity All Flash systems combine the benefits of flash with unprecedented efficiency and simplicity. Unity is built with end-to-end simplicity in mind, bringing innovations like an HTML5-based GUI and a REST API to simplify operations. We also previewed CloudIQ, a cloud-based analytics dashboard for managing larger topologies much more proactively.
  • VMAX All Flash combines the benefits of flash with proven data services and availability features like SRDF. Through integration with CloudArray, customers can be strategic about their hybrid cloud strategy: flash where you need it and cloud where you don’t.
  • XtremIO is the uncontested all-flash array market leader, delivering integrated copy data management capabilities that allow customers to leverage flash in ways they never could before. Being able to deliver copies on demand means better business agility. Being able to do so without tradeoffs in performance and efficiency is the hallmark of the XtremIO architecture and something competitors struggle to match.
  • One interesting addition to our data center this year was DSSD, which helps our customers get business outcomes faster than ever before by essentially stripping code out of the IO path while preserving the benefits of shared storage. Server-side flash has often been used, but it leads to stranded storage, the need to shuffle data around, limited capacity, and no enterprise features to secure the data set. Compare that to DSSD D5, which can provide 144TB of capacity and deliver 10 million IOPS at microsecond response times, all in 5U.


The Right Ingredients For Staying Ahead of The Bad Guys

John McDonald

John McDonald is a Senior Architect in EMC's Trust Solutions Group, where he is responsible for developing and communicating trust-based solutions that encompass all of EMC's, RSA's and VMware's products. He has over 30 years of experience in the IT industry in general and IT Security in particular, and has worked extensively as a consultant, developer and evangelist across all industries and virtually all major areas of IT and security technology. He has spoken at dozens of industry and vendor IT and Security events, and has written over 20 whitepapers for EMC and RSA. John is also a CISSP and has held certifications in several other areas, including disaster recovery, Microsoft technology and project management.



One of the common threads you hear about in major data breaches these days is that the victim’s security team had alerts or events that should have clued them in to the fact that an attack was underway. In today’s complex security infrastructures it’s not unusual for security operators/analysts to receive tens of thousands of alerts per day! Security monitoring and incident response need to transition from a basic rules-driven, eyes-on-glass SIEM capability to a big data and data science solution. I frequently speak with customers about how IT Security needs to be able to handle far more information than current SIEM tools can support, and one question that always comes up is “what information needs to be collected and why?” So here we go.

To start with, you still need to collect all of those alerts and events from your existing security tools. While maintaining eyes-on-glass analysis of each individual alert from every tool isn’t feasible, a security analytics tool can analyze and correlate those events into a group of related activities, helping an analyst understand the potential impact of a sequence of related events instead of having to slice and dice the events manually.
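The correlation step described above can be sketched as a toy grouping pass. Everything here is illustrative (the field names, the sample alerts, and the 15-minute window are assumptions, not any particular SIEM’s data model):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alert records; a real tool would ingest these from SIEM feeds.
alerts = [
    {"host": "web01", "time": datetime(2016, 5, 2, 9, 0), "signature": "port scan"},
    {"host": "web01", "time": datetime(2016, 5, 2, 9, 4), "signature": "brute-force login"},
    {"host": "db02",  "time": datetime(2016, 5, 2, 14, 0), "signature": "malware beacon"},
]

def correlate(alerts, window=timedelta(minutes=15)):
    """Group alerts that hit the same host within a rolling time window."""
    by_host = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["time"]):
        by_host[a["host"]].append(a)
    groups = []
    for host, items in by_host.items():
        current = [items[0]]
        for a in items[1:]:
            if a["time"] - current[-1]["time"] <= window:
                current.append(a)   # close enough in time: same activity group
            else:
                groups.append(current)
                current = [a]
        groups.append(current)
    return groups

groups = correlate(alerts)
# web01's two alerts land in one group; db02's alert stands alone.
```

An analyst then reviews a handful of activity groups rather than thousands of raw alerts; production systems would correlate on many more dimensions (user, destination, attack stage) than just host and time.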

The second type of information is infrastructure context – what’s in the environment, how it’s configured, how it’s all related, and what its impact is. The analytics system needs to understand which applications are running on which servers, connected to which networks and which storage. With access to these relationships, the analytics tool can identify the broad-based impact of an attack on a file server by understanding all of the applications that access that file server, and weight the alert accordingly. Which brings up another critical point – assets need to be classified based on their potential impact to the organization (aka security classification). If the tool identifies suspicious sequences of activity on both a SharePoint site used to exchange recipes and an Oracle database containing credit card numbers, but doesn’t understand the relative value of each impacted asset, it can only present both alerts as being of equal impact and let the operator decide which one to handle first. So a consolidated, accurate, up-to-date, and classified system-of-record view of your environment is critical.
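The SharePoint-versus-Oracle triage decision above reduces to weighting raw alert severity by asset criticality. A minimal sketch, assuming a hard-coded classification table (a real deployment would pull this from a CMDB or other system of record):

```python
# Hypothetical asset classifications on a 1-10 scale; names are illustrative.
ASSET_CRITICALITY = {
    "sharepoint-recipes": 1,    # low business impact
    "oracle-cardholder": 10,    # regulated data, high business impact
}

def weighted_score(alert):
    """Scale raw severity by the impacted asset's criticality.
    Unknown assets get a middle-of-the-road default of 5."""
    return alert["severity"] * ASSET_CRITICALITY.get(alert["asset"], 5)

# Two alerts with identical raw severity...
alerts = [
    {"asset": "sharepoint-recipes", "severity": 7},
    {"asset": "oracle-cardholder", "severity": 7},
]

# ...but the database alert rises to the top of the triage queue.
triage_order = sorted(alerts, key=weighted_score, reverse=True)
```

The point is not the arithmetic but the dependency: without a classified, up-to-date asset inventory, no scoring function can tell these two alerts apart.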

Event logs from all of those infrastructure components are the third type of information: not just security events but ‘normal’ activity events as well. This means all possible event logs from operating systems, databases, applications, storage arrays, etc. Given that targeted attacks today can almost always succeed in getting into your infrastructure, these logs can help the analytics tool identify suspicious types of activity occurring inside your infrastructure, even if the events don’t fall into the traditional bucket of security events. Here’s an example – a storage administrator makes an unscheduled snapshot of a LUN containing a database with sensitive data, then mounts it on an unsecured server and proceeds to dump the contents of the LUN onto a USB device. The storage array logs show that someone made an unauthorized complete copy of all of your sensitive data, but if you weren’t collecting and analyzing the logs from that storage array you would never know it happened.
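The rogue-snapshot scenario above is a sequence-detection problem over ‘normal’ activity logs. A toy sketch, assuming simplified records and action names (these are illustrative, not any real array’s log format):

```python
from datetime import datetime, timedelta

# Hypothetical, simplified storage log records.
logs = [
    {"user": "stor-admin", "action": "snapshot_create", "object": "LUN-42",
     "time": datetime(2016, 5, 2, 2, 10)},
    {"user": "stor-admin", "action": "lun_mount", "object": "LUN-42",
     "time": datetime(2016, 5, 2, 2, 15)},
    {"user": "stor-admin", "action": "usb_write", "object": "LUN-42",
     "time": datetime(2016, 5, 2, 2, 40)},
]

# Individually benign actions; suspicious only in this order, close together.
SUSPICIOUS_SEQUENCE = ["snapshot_create", "lun_mount", "usb_write"]

def find_sequence(logs, pattern, window=timedelta(hours=1)):
    """Flag (user, object) pairs whose actions match the pattern in order,
    all within the given time window."""
    progress = {}  # (user, object) -> (next pattern index, window start)
    hits = []
    for rec in sorted(logs, key=lambda r: r["time"]):
        key = (rec["user"], rec["object"])
        idx, start = progress.get(key, (0, rec["time"]))
        if rec["time"] - start > window:      # window expired: start over
            idx, start = 0, rec["time"]
        if rec["action"] == pattern[idx]:
            idx += 1
            if idx == len(pattern):           # full sequence observed
                hits.append(key)
                idx, start = 0, rec["time"]
        progress[key] = (idx, start)
    return hits

suspicious = find_sequence(logs, SUSPICIOUS_SEQUENCE)
```

Note that none of the three actions would register as a "security event" on its own; only collecting routine operational logs makes the pattern visible.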

The fourth type of information a security analytics tool needs is threat intelligence: what the bad guys are doing in the world outside your environment. A comprehensive threat intelligence feed into the security analytics tool will allow it to identify attempted communications with known command-and-control systems or drop sites, new attack tools and techniques, recently identified zero-day vulnerabilities, compromised identities, and a host of other potentially relevant information. A subscription-based feed is a great way to address this.
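At its simplest, applying threat intelligence means matching observed activity against a set of indicators. A minimal sketch using IP indicators (the addresses come from the reserved documentation ranges, and the field names are assumptions; real feeds carry far richer indicators than bare IPs):

```python
# Hypothetical indicator set; in practice this is refreshed continuously
# from a subscription feed rather than hard-coded.
KNOWN_C2_IPS = {"203.0.113.7", "198.51.100.23"}

# Observed outbound connections from internal hosts.
outbound = [
    {"src": "10.1.2.3", "dst": "93.184.216.34"},
    {"src": "10.1.2.9", "dst": "203.0.113.7"},   # matches an indicator
]

# Any connection to a known command-and-control address gets flagged.
flagged = [conn for conn in outbound if conn["dst"] in KNOWN_C2_IPS]
```

The value of the subscription model is freshness: an indicator set that lags the attackers by weeks flags nothing, which is why the matching itself is trivial but the feed behind it is not.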

The final type of information an analytics tool needs is network packets. Being able to identify a sequence of events that points to an infected server is only the first step – the analyst then needs to determine when the infection occurred and go back and replay the network session that initiated the infection to identify exactly what happened. Think in terms of a crime investigation – with a lot of effort and time the CSIs may be able to partially piece together what occurred based on individual clues, but being able to view a detailed replay of the network activities that led up to the infection is like having a complete video recording of the crime as it happened. Again, the goal is to provide the analyst and incident responder with complete information when the alert is raised, instead of having to spend hours manually digging for individual bits.
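The retrieval step the analyst performs above amounts to querying the packet-capture store by host and time window. A toy sketch over flow-level summaries (a full-packet-capture system would return reassembled sessions, not dicts; all names here are illustrative):

```python
from datetime import datetime

# Hypothetical index of captured sessions.
captured = [
    {"host": "web01", "start": datetime(2016, 5, 2, 8, 55), "peer": "203.0.113.7"},
    {"host": "web01", "start": datetime(2016, 5, 2, 9, 1),  "peer": "198.51.100.23"},
    {"host": "db02",  "start": datetime(2016, 5, 2, 9, 3),  "peer": "192.0.2.10"},
]

def sessions_for(host, t0, t1):
    """Pull every captured session touching `host` inside [t0, t1],
    oldest first, so the analyst can replay what led to the infection."""
    return sorted(
        (s for s in captured if s["host"] == host and t0 <= s["start"] <= t1),
        key=lambda s: s["start"],
    )

# "The alert fired on web01 around 09:00 - show me the preceding traffic."
replay = sessions_for("web01",
                      datetime(2016, 5, 2, 8, 50),
                      datetime(2016, 5, 2, 9, 5))
```

This is the "video recording" in the analogy: the capture must already exist when the alert fires, because the initiating session cannot be recorded retroactively.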

The volume of information and amount of effort necessary to quickly identify and respond to security incidents in today’s environment is huge, which is why big-data and data science-based tools are absolutely critical to staying ahead of the bad guys.

 
