AWS re:Invent Andy Jassy Keynote – IT Application Services

Amazon Web Services (AWS) CEO Andy Jassy's re:Invent keynote was a mixture of new and updated core IT compute functions and the announcement of several new IT services that can be used to create new customer experiences. I reviewed the core IT compute function announcements in my previous post here. I believe this year AWS's focus has matured from being a pure IT infrastructure (compute, network, storage) service provider to providing easily consumed application infrastructure services that will accelerate the development of new classes of applications focused on improving customer experiences.

AWS is on pace to release 1,000 new services and functions in 2016. Andy noted that on average an AWS user has three new services or functions available to them each day. The pace of AWS innovation is impressive. This year AWS began introducing more application services that will change how customers interact with businesses.


Over the past few years we have seen an explosion in the amount of digital data that is generated and stored by applications. Many of those same businesses have struggled to use this data to improve efficiency and customer experience. Big Data analytics has required specialized skill sets and complex new tools to analyze the new data. AWS introduced a set of new data analytics services to make it easier for most businesses to analyze large volumes and varieties of data quickly. These services simplify modern data analytics by eliminating the need for complex tool setup and operations.

In 2015, AWS launched its MySQL-compatible relational database service, Aurora. Aurora has become the fastest growing AWS service. Andy reported that 14,000 databases have been migrated from commercial relational databases (SQL Server, Oracle) to Aurora. This year PostgreSQL compatibility was added in addition to the existing MySQL support.
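To give a sense of how little setup these managed database services require, here is a minimal sketch, using the boto3 RDS API, of standing up a MySQL-compatible Aurora cluster. The cluster name, credentials, and instance class are illustrative only.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a MySQL-compatible Aurora cluster, then add one instance to it.
rds.create_db_cluster(
    DBClusterIdentifier="orders-aurora",
    Engine="aurora",                       # MySQL-compatible edition
    MasterUsername="admin",
    MasterUserPassword="replace-with-a-real-secret",
)
rds.create_db_instance(
    DBInstanceIdentifier="orders-aurora-1",
    DBClusterIdentifier="orders-aurora",
    DBInstanceClass="db.r3.large",         # illustrative instance class
    Engine="aurora",
)
```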

Unstructured data (files) is the fastest growing data type for enterprise IT. Many customer invoices, manufacturing manifests, and transaction logs are stored as unstructured file data. AWS customers typically store their unstructured data in the Simple Storage Service (S3). A new service, Athena, enables standard SQL queries against data stored on S3. Traditionally, analyzing this data required migrating it to a specialized file system such as HDFS and setting up complex, specialized analytics software. With the Athena service no setup is required to analyze data stored on S3. This will let businesses analyze large structured and unstructured data sets with a familiar tool (SQL), and create new data-driven efficiencies and customer experiences faster and more cost effectively.
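As a rough illustration, the sketch below submits a standard SQL query to Athena against data already sitting in S3 using boto3; the database, table, and result bucket names are assumptions for the example.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Kick off a standard SQL query against data already stored in S3.
query = athena.start_query_execution(
    QueryString=(
        "SELECT customer_id, SUM(total) AS spend "
        "FROM orders WHERE year = '2016' GROUP BY customer_id"
    ),
    QueryExecutionContext={"Database": "sales_logs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = query["QueryExecutionId"]

# Athena runs asynchronously; wait for the query to finish before reading results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```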

Image Processing

As more customer experiences move online, it is critical to be able to apply image analysis within applications. Today image analysis requires new machine learning tools and specialized artificial intelligence expertise. AWS announced the availability of their new Rekognition service that makes it easier to add image analysis to your applications. The service leverages the learnings from analyzing billions of Amazon Prime Photos images. It will allow businesses to quickly and cost effectively analyze images to identify familiar customer faces and sentiment, so that immediate actions can be taken based on customer preferences to improve their experience.
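A hedged sketch of what adding image analysis looks like through the Rekognition API via boto3; the bucket and photo key are made up, and the call simply detects faces and reports the most confident emotion for each.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Analyze a photo already stored in S3; bucket and key are illustrative.
response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-store-cameras", "Name": "entrance/frame-0042.jpg"}},
    Attributes=["ALL"],  # include emotions, age range, and other attributes
)

for face in response["FaceDetails"]:
    emotions = sorted(face["Emotions"], key=lambda e: e["Confidence"], reverse=True)
    print("Top emotion:", emotions[0]["Type"], round(emotions[0]["Confidence"], 1))
```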

Voice Processing

With the introduction of voice assistant technology such as Apple Siri, Amazon Alexa, and Google Home, it has become clear that customers prefer to communicate with businesses via voice rather than just a screen and keyboard. The problem is that integrating voice communication into applications has required new and complex machine learning tools, infrastructure, and skill sets. AWS announced the availability of their conversational interface service, Amazon Lex, and their text-to-speech service, Polly. These fully managed services leverage the deep learning capabilities of automatic speech recognition (ASR) and natural language understanding (NLU), built on the massive amount of data and infrastructure supporting the Amazon Alexa products. They will allow businesses to create a new category of speech-integrated applications quickly and cost effectively, and will accelerate the development of new customer engagement and experiences. Capital One demonstrated their use of the Lex and Polly services to create new customer engagement here.
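A minimal sketch of the round trip these services enable, assuming a Lex bot named AccountBalanceBot has already been built in the console (the bot, alias, and user ID are illustrative); the text reply from Lex is handed to Polly to produce speech.

```python
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")
polly = boto3.client("polly", region_name="us-east-1")

# Send a customer utterance to a (hypothetical) Lex bot and read back the intent.
lex_reply = lex.post_text(
    botName="AccountBalanceBot",   # assumed bot, built in the Lex console
    botAlias="prod",
    userId="customer-1234",
    inputText="What is my checking account balance?",
)
print(lex_reply.get("intentName"), lex_reply.get("message"))

# Turn the bot's text response into speech with Polly.
speech = polly.synthesize_speech(
    Text=lex_reply.get("message", "Sorry, I did not understand that."),
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("reply.mp3", "wb") as out:
    out.write(speech["AudioStream"].read())
```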

IoT Processing

Gartner, Inc. forecasts that 6.4 billion connected things will be in use worldwide in 2016, up 30 percent from 2015, and that the number will reach 20.8 billion by 2020. In 2016, 5.5 million new things will be connected every day. Taking full advantage of all the new data generated by these connected devices is a huge challenge and opportunity. New custom software must be developed to collect, filter, aggregate, analyze, act on, and ultimately push the data to the cloud for deeper analytics. Much of this new custom software does not run on the nearly limitless compute and storage capacity of the cloud, and IoT software development has required specialized tools and technologies. AWS announced the Greengrass service, which is designed to extend the AWS Lambda programming tool set and technology to small, simple, field-based devices. This service will simplify and accelerate new IoT software development by providing common programming tools (AWS Lambda) and technology for software running at the device, in the field, and in the cloud. Businesses collecting, acting on, and analyzing data from millions or billions of devices will create new competitive advantages in product production and service efficiency. More information on AWS Greengrass is available here.
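A hedged sketch of the kind of Lambda-style function Greengrass can run at the edge, assuming the Greengrass Core SDK is installed on the device; the device ID, topic, and event shape are illustrative. It aggregates local sensor readings and publishes only a summary upstream.

```python
# Sketch of an edge handler: filter noisy sensor readings locally and
# forward only aggregates to the cloud, instead of every raw reading.
import json

import greengrasssdk  # Greengrass Core SDK, available on the device

iot = greengrasssdk.client("iot-data")

def handler(event, context):
    # 'event' is assumed to be a batch of local sensor readings.
    readings = [r["temperature"] for r in event.get("readings", [])]
    if not readings:
        return None
    summary = {
        "device": event.get("device_id", "unknown"),
        "count": len(readings),
        "avg_temperature": sum(readings) / len(readings),
        "max_temperature": max(readings),
    }
    # Publish only the aggregate upstream.
    iot.publish(topic="factory/telemetry/summary", payload=json.dumps(summary))
    return summary
```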

These new AWS services will allow businesses to collect and analyze data about their consumers faster and more deeply to better anticipate needs. Integrating image and voice processing will create new, more intimate customer experiences. AWS is leading the way in making these technologies readily available, which will accelerate the development of new application categories in 2017. A recording of Andy Jassy's complete keynote address is available here.

AWS re:Invent Andy Jassy Keynote - Core IT Compute Features

Amazon Web Services (AWS) CEO Andy Jassy's re:Invent keynote was a mixture of his view on enterprise IT and many new service announcements. AWS is now a $13-billion-dollar business, growing at 55% year over year. Andy reported that AWS is the fastest growing large enterprise IT company in the world, and that the next four fastest growing enterprise IT companies use AWS as their primary cloud provider. AWS used this re:Invent conference to announce their readiness to support enterprise IT and software-as-a-service workloads in addition to their success supporting startup businesses. AWS now serves more than 1 million active non-Amazon customers and delivers 300 million hours of EC2 service monthly. AWS has commoditized most of the common IT infrastructure services as robust, easily accessible cloud services. In this post I will recap the announcements of new and enhanced core enterprise IT compute features. In my next post I will recap how AWS is pivoting from a low cost IT infrastructure provider to a provider of value added business IT services.

The first new capability announcements were a group of new EC2 instance types mostly taking advantage of Moore’s law. The enhanced instance type announcements were:

  • T2 – new instance types T2.xlarge & T2.2xlarge – 2x memory – for larger in-memory processing workloads
  • R4 – memory intensive workloads – 2x everything
    • Capacity per instance
    • Memory speed, using DDR4
    • L3 cache capacity
    • vCPUs
  • I3 – replaces the I2 instance type for I/O intensive workloads
    • 9x IOPS
    • 2x memory capacity
    • 3x storage capacity, using NVMe to replace SSD media
    • 2x vCPUs
  • C5 – replaces the C4 instance type for compute intensive workloads like AI and transcoding
    • Featuring Intel Skylake CPUs
    • 2x CPU capacity
    • 2x CPU speed – moving from Haswell to Skylake Intel CPUs
    • 3x storage capacity, moving from SSD to NVMe media
    • 4x more memory

In addition, two new instance families were announced, introducing GPUs for machine learning and artificial intelligence processing, and field-programmable gate arrays (FPGAs) for offloading repetitive processing tasks.

  • New P2 instance type – includes everything needed to leverage GPUs; full and shared GPUs (spot market) are available
  • New F1 instance type – includes everything needed to develop FPGA acceleration; an FPGA Marketplace is ready to go at launch

The AWS EC2 catalogue is broad enough to meet the majority of enterprise workload types, and each of these instances can be launched in a few minutes. AWS has also recognized a class of use cases, such as simple web, application, or basic Linux servers, that needs only a minimal instance with basic storage, networking, and an operating system. For this use case AWS announced the launch of their Lightsail service. With a few menu-driven clicks you can now launch a functioning virtual private server (VPS). The current Lightsail offering does not yet provide an easy upgrade path to the other EC2 instance types.
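For comparison with the menu-driven console flow, here is a minimal sketch of launching a Lightsail VPS through the boto3 API; the instance name, blueprint, and bundle IDs are illustrative and should be looked up with get_blueprints() and get_bundles().

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# Launch a small VPS; blueprint and bundle IDs below are illustrative.
lightsail.create_instances(
    instanceNames=["simple-web-01"],
    availabilityZone="us-east-1a",
    blueprintId="amazon_linux",   # base OS image
    bundleId="nano_1_0",          # smallest CPU/RAM/storage bundle
)

# Check the state of the new VPS.
print(lightsail.get_instance(instanceName="simple-web-01")["instance"]["state"]["name"])
```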

The AWS compute service portfolio is broad enough to serve most enterprise IT workloads. One of the major challenges enterprise IT has always dealt with is optimizing and reducing the cost of operating existing applications. Most applications created in the past decade run on VMware virtual servers, and migrating those workloads to AWS has been disruptive and complex. AWS and VMware have announced a new partnership which will enable VMware workloads to run on dedicated bare metal EC2 instances by mid 2017. This will allow enterprise IT to take advantage of AWS scale and cost advantages without having to modify their existing applications or operational procedures.

During this part of his keynote Andy Jassy clearly made the case that enterprise IT should fully embrace AWS as its primary cloud provider. Andy's argument is that the many startups threatening to disrupt established businesses today, and the software-as-a-service (SaaS) providers such as Workday and Salesforce that enterprises already rely on, are all running on AWS cloud services, so you need to as well.

AWS re:Invent - Compute, Network, and Data Center Design Principles

Amazon Web Services distinguished engineer, James Hamilton was the first keynote presenter at this year's re:Invent conference. James' presentation featured a review of AWS data center and infrastructure design principles. James presented a compelling case that AWS cloud infrastructure is enterprise ready.

Data Center Design

Today AWS deploys enough new server capacity every day to support all of Amazon's needs in 2005, when Amazon was an $8.49 billion business. The growth of AWS capacity is being supported by expansion of existing regions and the addition of four new regions next year. While the size and number of AWS regions are growing, the size of each data center remains relatively conservative. Each AWS data center is architected to support 50-80K servers and consume 25-32MW of electricity. AWS has maintained this data center size to limit the fault zone and maintain cost efficiency. AWS designs the infrastructure and runs their data centers with an overhead of only 10-12%, which helps keep their cost to serve low. Each region consists of two to five availability zones, and each availability zone consists of one to eight data centers. The AWS data center architecture is highly optimized for enterprise IT availability and cost. James has blogged about optimizing data center design costs here.

Network Design

Network design is another area where AWS has taken a unique approach. James stated that early on the scale of AWS "broke the standard vertical network router and switch architectures" and they had to build their own web-scale network. AWS designs all its networking hardware and writes all its networking software. This allows them to minimize costs and maximize agility since everything is optimized for their single purpose. They have found that this approach actually improved their network reliability and enabled them to introduce new capabilities faster. Two interesting topics James discussed were AWS's bet on 25GbE instead of the 40GbE standard used in most enterprise IT data centers, and their use of network ASICs.

The case for 25GbE is based on the cost of optics. The AWS networking architecture is built on 100GbE, and 100GbE is four 25 Gbps lanes. Most enterprise IT data center designs leverage 40GbE, which is four 10 Gbps lanes. Minimizing the cost of optics in web-scale data center designs has major cost and efficiency benefits, since the same four-lane optics deliver 2.5x the bandwidth. From an engineering perspective it is also simple to design and maintain a 25GbE top-of-rack switch that aggregates into a single 100GbE data center switch.

The second interesting network design principle James shared was AWS's use of network ASICs. It is often stated that web-scale IT service providers like AWS leverage commodity hardware. While they do rely on built-to-spec hardware, they are also embracing custom silicon to provide a competitive advantage. AWS acquired Annapurna Labs in January of 2015, giving AWS the ability to design and optimize the network silicon in addition to the hardware and software. The silicon design capability is being used to build specialized application-specific integrated circuits (ASICs) that offload repetitive tasks from the server CPUs. Offloading repetitive tasks to custom network ASICs reduces power consumption for the task while improving performance. This is another example of how AWS uses its ability to build purpose-built infrastructure to provide differentiation.

Server Design

AWS has long been optimizing their infrastructure through server design. James shared AWS's design philosophy, which is based on simplicity and on optimizing for power and cooling efficiency over density. Power and cooling costs are more expensive than data center space, as James has explained in his blog posts here. James shared an older AWS server design for comparison purposes and highlighted its efficiency advantage over traditional enterprise IT designs and denser commercial server models.


AWS has committed to powering their data centers with 100% renewable energy. James reported that AWS has reached 40% renewable energy today and expects to reach 50% by the end of 2017. Meeting these goals is complicated by their explosive infrastructure growth. New AWS projects will generate 2.6 million MWh of energy annually using a combination of solar and wind generation farms. This type of sustainable energy commitment is critical to our environment. Although power consumption by data centers has plateaued in the past few years, data centers still consume 2% of all US electricity according to US Department of Energy estimates.

I thought it was interesting that AWS chose to kick off re:Invent with an overview of their data center and infrastructure design. Many people believe infrastructure no longer matters and offers little differentiation. After James' presentation I think you will agree that infrastructure done right can provide differentiation, and that hardware design is still important to a well-run enterprise IT environment. A recording of James' keynote is available here and his presentation is available here.

AWS re:Invent Kickoff

Business digital transformation is accelerating as companies race to engage their customers and deliver products and services via technology. The best IT organizations are transforming their focus from a pure technology support role to helping the business envision the new possible. Cloud computing has become the preferred way to deliver technology infrastructure services, and over the past year cloud computing technology has continued to mature to enterprise grade. Solutions such as the Dell Technologies Enterprise Hybrid Cloud are now on their fourth major version, and converged infrastructure and public cloud service sales are growing at double-digit rates. In addition, over the past two months VMware has announced the capability to run their Cloud Foundation compute (vSphere), network (NSX), and storage (vSAN) stack on IBM and Amazon cloud services.

This has led me to attend my first AWS re:Invent conference starting today. I am excited to learn more about the VMware Cloud Foundation on AWS offering and several of the new AWS services including:

  • Lambda architecture
  • Serverless architectures
  • Database service transition from relational architectures
  • Machine Learning/Artificial Intelligence
  • IoT services

Many of the enterprise IT organizations I am working with are creating a bifurcated cloud strategy where all new application development is designed and deployed in clouds. Existing applications that can be transferred to cloud services are moving quickly to cloud infrastructure services without major transformations. This allows IT teams to get out of traditional infrastructure and data center management work. The resources freed up from traditional IT and data center management tasks will be applied to modernizing existing applications and creating new custom software to deliver new products and services.

My schedule for today is:


GPS01  --  Global Partner Summit Keynote

ARC205  --  Born in the Cloud; Built Like a Startup

ARC202  --  Accenture Cloud Platform Serverless Journey

BDM201  --  Big Data Architectural Patterns and Best Practices on AWS

DEV205  --  Monitoring, Hold the Infrastructure: Getting the Most from AWS Lambda

DAT306  --  ElastiCache Deep Dive: Best Practices and Usage Patterns

BDM306  --  Netflix: Using Amazon S3 as the fabric of our big data ecosystem

GA02  --  Tuesday Night Live with James Hamilton

I will be posting my thoughts here throughout the week.

Get Ready for the Cloud Foundry Summit Europe

The main Cloud Foundry European user conference, Cloud Foundry Summit, is scheduled for next week in Frankfurt, Germany (9/26-9/28). This is the second year of the event, and with the continued momentum of the Cloud Foundry project and its adoption as the premiere modern application development platform, over 600 attendees are expected this year. Leading into this year's summit, a new release of the Cloud Foundry platform (v242) shipped on 9/13 with major improvements to log aggregation and container management.

The first day of the event is dedicated to training for application development and operations practitioners, including an "unconference" with a couple of hours of lightning talks from the Cloud Foundry user community. The second day kicks off with a keynote from Cloud Foundry CEO Sam Ramji and is followed by a number of great breakouts on the status of the Cloud Foundry technology projects and successful users. The third day is packed with more great breakout sessions and concludes with a chat between Sam Ramji and Cloud Foundry board chairman John Roese, reflecting on the experiences of the past year and their aspirations for the next. This year all the keynotes and lightning talks will be live streamed. You can find the schedule and register for the live stream here.

In addition to Sam and John's talks, I am looking forward to seeing the work from Brian Gallagher's Dojo team. Brian led the creation of the first foundation-member-sponsored Dojo, and his team has made a number of great code contributions and provided leadership for key infrastructure projects that make it simpler to deploy and run Cloud Foundry. I recently had an opportunity to talk to John and Brian about their plans for the upcoming Cloud Foundry Summit, including how to get a free summit pass and an invite to the DellEMC customer appreciation party.




VMware Embraces Multi-Cloud

VMware recently hosted their annual user conference, VMworld. VMworld has always been a special event because of the strong technology ecosystem and user community that has developed around VMware products, especially their vSphere technology. Over the past few years much of the talk at VMworld focused on enabling enterprise IT to build cloud infrastructures as the natural next step from the VMware virtualized data center. Over the past five years VMware has invested heavily through acquisition (i.e. Integrien, DynamicOps, Desktone) and organic development (i.e. vCloud Air) in products that let enterprise IT mature their virtualized data center into an on-premises cloud infrastructure. In parallel, public clouds (i.e. AWS, Azure, Google, and Salesforce) have emerged. Standing up cloud infrastructures is no longer the challenge; provisioning, managing, and monitoring workloads across all these different cloud services is the biggest challenge, since each of these clouds has proprietary interfaces and APIs. As a result, cloud silos have emerged, isolating workloads and data sets. Enterprise IT is managing a portfolio of cloud services to support their application workloads and needs tools and solutions that allow them to efficiently manage, connect, and secure workloads running across multiple clouds. I have been referring to these types of services as a set of Cloud Interworking functions.

Many customers run traditional application workloads on VMware clouds today. These clouds are architected and optimized for their applications and workflows. Many customers would like the option of running these workloads in service provider clouds to realize cost and scale benefits. VMware introduced the Cloud Foundation solution and its SDDC Manager, which provide a common VMware provisioning, management, and monitoring experience across multiple cloud provider infrastructures. IBM is the first to offer this capability on their IBM Cloud public cloud infrastructure. This will help eliminate the cloud silos that are created when trying to run VMware clouds on multiple cloud infrastructure providers.


More about the VMware Cloud Foundation offering on IBM Cloud can be found here. Additional public cloud providers such as Virtustream have announced the intention to offer VMware Cloud Foundation services as well.

In addition, VMware positioned NSX as the best way to provide secure and manageable inter-cloud network connectivity. One of the major challenges in moving existing workloads to public clouds is that the network domain architectures differ from one cloud service to the next. Software defined networking deployed across cloud services allows workloads to run without modification on multiple cloud service providers. Software defined networking with NSX also provides a single management, monitoring, and discovery interface for your network across clouds, and software defined micro-segmentation services allow you to implement finer security granularity across all the cloud services supporting your application workloads. This year Rajiv Ramaswami gave a great talk on the challenges with cross-cloud service networking and the value of the VMware NSX solution here.

The third pillar of VMware's multi-cloud solution strategy is providing an enterprise grade digital workspace experience for end users. Enterprises need a way to manage the distribution of application access to end users across many types of devices and locations in a timely manner while maintaining data security and governance. This challenge is growing as the velocity of new application creation increases and the number and type of new devices accelerates. VMware announced the expansion of their partnership with IBM to provide hosted desktop and application services, as well as progress on their new analytics application. Providing a consistent, automated, secure solution to manage application access, regardless of which cloud the application is running in and across a variety of end user devices and network types, is critical for enterprise IT today. This can now more easily be provided through a combination of cloud service providers. More information on this capability is available here and here.

  Cloud interworking

VMware's pivot to provide solutions that simplify and automate cloud interworking services will be a milestone in cloud adoption by enterprise IT. VMware is enabling simplified provisioning, management, and monitoring of workloads across multiple cloud providers. With NSX software defined networking, enterprise IT can now manage, monitor, and secure their application communication across multiple cloud providers for the first time. Simplifying how end users connect to your applications via a consistent and secure digital workspace, across a variety of end user devices and network locations, is critical. I believe this is the year enterprise IT will focus more on using cloud services than on how to build them. These cloud interworking services will expand the choices for workload placement based on cost, location, and availability. The speed with which businesses can consume cloud services in 2016 will accelerate the new products, services, and customer experiences that differentiate them from their competitors.

VMworld 2016

VMworld 2016 kicks off this weekend. The theme of this year's event is be_TOMORROW, which certainly reflects the state of business and the IT industry. I think it is also reflective for VMware, since their products were responsible for the last big IT technology shift, virtualization, but new products are needed for the next wave of cloud native applications. VMworld has always been the event where the next wave of new IT cloud technologies is introduced, and this year VMware and its partners will be making their case for their role in tomorrow's IT ecosystem. I expect to see continued maturation of VMware's software defined data center (SDDC) offering, automating storage, network, and compute. Last year they shared their vision for cross-cloud management and realigned their management products into new suite bundles. This year I expect major enhancements to be announced to deliver on their management vision, along with new SDDC bundles with converged and hyper-converged appliances.

The event is expected to attract over 24,000 IT professionals with over 400 technical breakout sessions and, in my opinion, the best hands-on labs of any event. One of the reasons so many of us attend year after year is the opportunity to network in person. This year I will again be participating in the v0dgeball tournament on Sunday afternoon. It is a fun way to see friends from across the IT ecosystem and find out about all the new startup companies as well. The v0dgeball event starts at 3pm on Sunday and admission is free. All the proceeds benefit the Wounded Warrior Project. More information about the event is available here.

My company EMC will again have a big presence at VMworld and will be introducing enhancements to our hybrid cloud offerings and tighter integration with our converged infrastructure offerings. It has become clear that customers are expecting the hybrid cloud offerings to include data protection and security services that are simpler and easier to use. We will be introducing some great integration work our EMC and VMware engineers have completed to simplify the deployment and management of these services. EMC will be at booth #1223 in the Solutions Exchange during the week with a number of great presentations and our engineering experts to answer your questions. More information on Everything VMworld by EMC is available here.

I am excited by the product announcement previews I've seen this week as part of the EMC Elect and Cisco Champions programs. I think this will be an exciting week, and I will be blogging the highlights and most interesting announcements during the conference. I look forward to seeing all my friends and making some new ones this year.


Cloud Inter-Working – Distributed Data Access

In my previous post, Cloud Interworking Services, I described a new set of IT infrastructure services that enable reliable and secure inter-cloud access. In this post I am going to describe inter-cloud data access by your applications. As more applications leverage cloud infrastructure services, data sets are being distributed across several clouds. Most applications will need access to data sets stored in one or more cloud infrastructure services different from where they are running. For example, when developing a new customer engagement mobile application that runs in your private cloud, you may need access to data stored in a SaaS cloud and to SAP application data running at Virtustream. A well architected cloud infrastructure needs to enable frictionless data access by the new mobile application. Application access to any of your data sets is a basic requirement to compete in the digital economy. The faster IT can iterate on application development, the faster the business will deliver customer value.

Application access to data sets created and maintained remotely is not a new challenge for IT. At the beginning of this decade the industry began using storage virtualization technologies to enable data sets to be accessible in multiple data centers. Products like EMC VPLEX, Hitachi USP V, and NetApp V-Series provide these capabilities. These storage virtualization technologies were primarily designed to enable rapid restart business continuity between sites hundreds of miles apart. It is not easy for multiple applications to access the same data sets simultaneously without implementing a complex distributed lock manager to keep the data sets in a consistent state. I have seen many customers successfully create snapshot copies of the data so other applications can access read-only copies of transactional data sets for analytics processing. Storage virtualization is limited by distance and network latency, typically not exceeding 50ms or roughly 100 miles, and it is mostly limited to block storage protocols, which limits application access.

More recently, storage gateway technologies have been introduced to place data sets in the most cost effective cloud service while maintaining application access over traditional block storage and file protocols. Typically these storage gateways cache the most frequently accessed data locally to minimize access latency and pull data that is not cached locally, transparently to applications. The challenge with most storage gateway technologies is that the data is not easily accessible to applications running anywhere but the source site. The storage gateway products I see most often are EMC CloudArray and Panzura.

Both storage virtualization and gateway technologies fall short of allowing IT to provide ubiquitous access to data sets across multiple cloud services. In order to decouple data and applications a new architecture is required. New applications should access all data through standard APIs rather than traditional storage protocols, and data sets must be accessible independent of any single application and cloud infrastructure. Application architectures for modern mobile, web, and social applications follow The Twelve-Factor App architecture, where data sources are treated as backing services that are attached at run time. For example, a modern 12-factor app should be able to attach and detach any number of MySQL databases and object stores the same way each time, regardless of which cloud infrastructure the application or data set is operating in.
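A minimal sketch of the backing-services idea, assuming the database URL, object store endpoint, and bucket name are supplied by the deployment environment rather than hard-coded (the environment variable names and driver choice are illustrative); the same code then runs unchanged on any cloud.

```python
import os

import boto3
import psycopg2  # any database driver works; chosen only for the example

# Attach to whichever database the deployment environment points at.
db = psycopg2.connect(os.environ["DATABASE_URL"])

# Attach to whichever S3-compatible object store the environment points at.
s3 = boto3.client("s3", endpoint_url=os.environ.get("OBJECT_STORE_URL"))

with db.cursor() as cur:
    cur.execute("SELECT count(*) FROM customers")
    print("customers:", cur.fetchone()[0])

# Detach/attach works the same way in any environment that sets these variables.
s3.upload_file("report.csv", os.environ["REPORT_BUCKET"], "reports/latest.csv")
```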

For existing data sets that are tightly coupled to applications, new data fabrics will be necessary to virtualize access to data sources. For example, if you want an application to perform data analytics against data sets in a SQL database and an HDFS file system, your application will need to rely on a data fabric product like Pivotal HAWQ to access the two different data formats and execute a SQL query. New applications will leverage data fabric APIs to access legacy data sources such as ERP databases. These modern data fabrics manage metadata describing data sets, including location and format. Since new applications are creating more unstructured data (i.e. audio, video, images) in addition to traditional structured data (spreadsheets, SQL databases), applications will need a data fabric to manage access consistently regardless of format.

Application access to all your data sets is critical to developing and operating new software. While we have been making IT infrastructures more flexible with storage virtualization and gateways, the new data fabrics are critical to enabling the consumption of cloud infrastructure. For companies to successfully compete in the digital economy, they need to be able to quickly develop new custom software delivering differentiated products and customer experiences. To get that application development speed and scale, these applications need to be deployed on cloud infrastructures with a robust inter-cloud data service.

Cloud Interworking Services

In my previous post, Cloud Is Not A Place, I presented my case that enterprise IT needs four types of cloud services to support their application workloads. Many enterprise IT customers I work with are adopting a bi-modal IT strategy. One mode of cloud services supports their traditional 3-tier client-server applications such as SAP R/3, Oracle ERP, SharePoint, and SQL Server based applications. Most of these traditional systems are their systems of record. The second mode of cloud services is optimized for modern mobile, web, social, and big data applications, such as SaaS offerings and custom developed web portal systems. Many of these applications are their systems of customer engagement.

Many application workloads can be supported by just one of these cloud types, but every enterprise IT application portfolio requires a combination of more than one. For example, many businesses run SAP for ERP and Salesforce for CRM; these two application workloads will be supported by different cloud types. As you add more application workloads you must deal with applications that need access to data sets generated by other applications, which may not run on the same cloud type. You will also see opportunities to use one cloud type for primary data and other cloud types for redundancy and protection. Frictionless access between these different cloud services is critical.

A new class of cloud services, which I call Cloud Interworking services, is needed. These services are critical to maximizing application workload placement and interoperability. I believe Cloud Interworking services will enable enterprise IT organizations to provide the most differentiated and cost effective IT services for their businesses.

We have identified three basic Cloud Interworking services that modern enterprise IT needs to support:

  • Data Set Access – access data sets easily from any cloud
  • Data Security – encryption of data in transit and at rest
  • Data Protection – data copies that can be used to restore failed data access requests

In my next series of posts I will discuss how these capabilities can be implemented today. These Cloud Interworking services will enable enterprise IT infrastructure teams to become their company's cloud portfolio manager. As the cloud portfolio manager they will be able to reduce friction with their application development teams while reducing costs and improving agility.

EMCWorld 2016: Future of Data Center Services with SUPERNAP

Many customers I have been meeting with recently are looking to get IT out of the data center business. Data centers are viewed as expensive and difficult to maintain for many businesses. Many are leveraging public cloud providers as a means to accomplish the goal of zero data centers, but are concerned about losing the advantages of IT infrastructure architecture control. One of the best things about attending EMC World is the opportunity to connect with other leaders in the IT industry. As part of the EMC Elect community I had the opportunity to visit the Switch SUPERNAP data centers in Las Vegas.

Our visit included data center facility tours and a presentation of SUPERNAP's capabilities. SUPERNAP's Missy Young started the presentation with a review of SUPERNAP's history. In 2000, Rob Roy founded Switch in Las Vegas to offer advanced managed technology services for startups and large enterprise customers. In 2002 Rob was able to acquire a Nevada based former Enron facility with the largest fiber optic capability in the country, which would offer customers unprecedented network capacity, performance, and redundancy. In 2006, Rob created the SUPERNAP data center business and ecosystem. SUPERNAP provides companies with the data center space to house compute and storage, combined with the Switch network capacity, performance, and redundancy. Today SUPERNAP operates data center services in northern and southern Nevada as well as internationally.

In addition to providing customers with the world's only co-location data center services certified Gold Tier IV for both facility and operations, SUPERNAP is leveraging over 200 inventions patented by Rob Roy to improve the cost effectiveness and environmental sustainability of data center services. The SUPERNAP data centers do not use traditional raised floor or power designs. For data center cooling they leverage their patented SUPERNAP T-SCIF (Thermal Separate Compartment in Facility) system, which is designed to keep 100% of the equipment heat separate from the data center air. The heat from each rack is captured, moved to the ceiling compartments using natural air pressure, and then vented outside while cool air is continually added to the building. This Switch T-SCIF heat containment cabinet platform not only cools the data center efficiently, it also allows SUPERNAP customers to fully utilize their rack space without worrying about equipment cooling limitations. SUPERNAP can provide over 40 kW of power capacity to each rack, which is 30-50% more than many of the top enterprise data centers I have seen. This can result in big savings for customers paying for data center services by the rack.


When you arrive at the SUPERNAP facilities you are immediately impressed with the size and scale of the space. Once inside the exterior wall surrounding each of the buildings, you continue to experience their commitment to physical security as armed guards meet you at the entrance and escort you throughout the facility. The tour of the facility helped create perspective on the size of their installed base. Each building is divided into four modular sections built out as the space is sold. During the tour we were able to see the unique power distribution, cooling, and roofing design that supports the Gold Tier IV classification.

The other big advantage SUPERNAP offers their customers is aggregating network bandwidth purchasing power through their Core Cooperative. Customers running at SUPERNAP data centers can typically reduce their network costs by 30-60% and improve their redundancy by participating in the Core Cooperative. Due to tax agreements customers often have much lower taxes on data center services and equipment.

SUPERNAP is expanding their service to the eastern region of the United States with their announced plans to build a data center campus in Grand Rapids, Michigan. This will provide a data center service alternative for east coast companies with benefits similar to those of the Nevada based services.

Data center hosting and co-location services have been offered for many years by many regional and national providers. Typically customers have used these services as an alternative to investing in their own data centers, but usually at a higher cost. With the networking and proprietary data center design technology that SUPERNAP uses, their customers realize the benefits of world class data center services at a fraction of their current data center cost while maintaining IT system architectural and operational control. Based on the growth of SUPERNAP capacity in both Nevada and Michigan, I think many more businesses will consider this option for hosting their IT infrastructure in the future.