Innovation

Influencer Engagement - Dell EMC Elect Review

Disclosure: I have been recognized as a Dell EMC Elect member for the past three years.

Mark Browne recently joined our weekly CTO Ambassador Tech Talk to update the team on the new Dell EMC Elect program. For those of you inside the Dell EMC firewall, a recording of the presentation is available here. The Dell EMC Elect program recognizes IT professionals and subject matter experts whose opinions are trusted and valued by the enterprise IT community. Dell EMC Elect members provide independent product and services information based on their experiences working with Dell EMC products. Most often this information is shared via their social media platforms (e.g., personal blogs, Facebook, and Twitter). We believe a large majority of our customers turn to people they know and respect for referrals above any other source, so it is important that we enable this enterprise IT influencer community.

The Dell EMC Elect program supports this influencer community by providing access to Dell EMC thought leaders, early access to information on new product and services launches, and opportunities to provide feedback on our strategy and plans. This elevated level of access creates a symbiotic relationship between IT influencers, Dell EMC, and our customers. Dell EMC Elect members get the opportunity to understand our strategy and learn about our newest products and features early, which gives them time to form opinions they can share as soon after general availability as possible. As experts respected by the IT community, their opinions help the community make more informed buying decisions.

Mark also discussed the challenge of identifying Dell EMC influencers from across the globe. Each December there is an open nomination process, and anyone can submit a nomination for the Dell EMC Elect recognition. This year there were 600 nominations, and 153 people were selected by a team of Dell EMC Elect governors who evaluated their influence, knowledge, and previous contributions to the enterprise IT community on behalf of Dell EMC. The Dell EMC Elect come from a variety of backgrounds, including customers, partners, employees, and independents. It is a global community with representatives from North America, Europe, the Middle East, Africa, Asia Pacific & Japan, and Latin America. A complete list of the Dell EMC Elect members is available here, and there is a list of the Elect members under the @DellEMC Twitter handle. You can follow the Dell EMC Elect chatter on Twitter by filtering on the hashtag #dellemcelect and on their Dell EMC community network page here. As Mark pointed out, it is not too early to start thinking about people you will nominate for 2018 Dell EMC Elect recognition. The nomination process will start in December and be announced on Twitter with the #DellEMCElect hashtag and on the community network page.

IT professionals are relying more and more on respected industry peers for advice. It is critical for suppliers like Dell EMC to engage with these influencers so they have the information and access to make informed recommendations. Equally important is the feedback the influencer community shares about products, solutions, partnerships, and support. I think the Dell EMC Elect program does an excellent job facilitating two-way collaboration with the IT influencer community. I would encourage you to get to know the Dell EMC Elect members. They are easy to find on social media and at most IT industry conferences, and they are a great resource for you.


AWS re:Invent Andy Jassy Keynote – IT Application Services

Amazon Web Services (AWS) CEO Andy Jassy's re:Invent keynote was a mixture of new and updated core IT compute functions and the announcement of several new IT services that can be used to create new customer experiences. I reviewed the core IT compute function announcements in my previous post here. I believe this year AWS's focus has matured from being a pure IT infrastructure (compute, network, storage) service provider to also providing easily consumed application infrastructure services that will accelerate the development of new classes of applications focused on improving customer experiences.

AWS is on pace to release 1,000 new services and functions in 2016. Andy noted that on average an AWS user has three new services or functions available to them each day. The pace of AWS innovation is impressive. This year AWS has begun introducing more application services that will change how customers interact with businesses.

Analytics

Over the past few years we have seen an explosion in the amount of digital data generated and stored by applications, yet many of the same businesses have struggled to use that data to improve efficiency and customer experience. Big data analytics has required specialized skill sets and complex new tools. AWS introduced a set of new data analytics services to make it easier for most businesses to analyze large volumes and varieties of data quickly. These services simplify modern data analytics by eliminating the need for complex tool setup and operations.

In 2015, AWS launched a standard relational database service, Aurora. Aurora has become the fastest growing AWS service; Andy reported that 14,000 databases have been migrated from commercial relational databases (SQL Server, Oracle) to Aurora. This year PostgreSQL compatibility was added in addition to MySQL support.
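
Because Aurora is wire-compatible with the engines it emulates, existing database drivers work unchanged. As a minimal sketch, the Python snippet below connects to an Aurora PostgreSQL cluster with the standard psycopg2 driver; the endpoint, database name, and credentials are hypothetical placeholders.

import psycopg2  # standard PostgreSQL driver; Aurora PostgreSQL is wire-compatible

conn = psycopg2.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical cluster endpoint
    port=5432,
    dbname="orders",
    user="app_user",
    password="********",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()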

Unstructured data (files) is the fastest growing data type for enterprise IT. Most customer invoices, manufacturing manifests, and transaction logs are stored as unstructured file data, and AWS customers typically store it in the Simple Storage Service (S3). A new service, Athena, enables standard SQL queries against data stored in S3. Traditionally, analyzing unstructured data required migrating it to a specialized file system such as HDFS and setting up complex, specialized analytics software. With Athena, no setup is required to analyze data stored in S3. This will let businesses analyze large structured and unstructured data sets with a familiar tool (SQL) and create new data-driven efficiencies and customer experiences faster and more cost effectively.
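
To give a feel for how little setup Athena needs, here is a minimal sketch that submits a standard SQL query over S3-resident data using the boto3 SDK. The database, table, and bucket names are hypothetical, and it assumes a table has already been defined over the S3 data (for example with a CREATE EXTERNAL TABLE statement).

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Ten largest customers by invoice total, computed directly over files in S3.
query = """
SELECT customer_id, SUM(amount) AS total
FROM invoices
GROUP BY customer_id
ORDER BY total DESC
LIMIT 10
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "sales"},                        # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical results bucket
)
print("Query execution id:", execution["QueryExecutionId"])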

Image Processing

As more customer experiences move online, it is increasingly important for applications to analyze images. Today image analysis requires new machine learning tools and specialized artificial intelligence expertise. AWS announced the availability of their new Rekognition service, which makes it easier to add image analysis to your applications. The service leverages the learnings from analyzing billions of Amazon Prime Photos. It will allow businesses to quickly and cost effectively analyze images to identify familiar customer faces and sentiment, so immediate actions can be taken based on customer preferences to improve their experience.
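
As a minimal sketch of what that looks like in practice, the snippet below uses the boto3 SDK to detect faces and their most likely emotion in an image already stored in S3; the bucket and object names are hypothetical.

import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Analyze an image that is already stored in S3 (bucket and key are hypothetical).
response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "my-retail-images", "Name": "store-entrance.jpg"}},
    Attributes=["ALL"],  # include emotions, age range, and other facial attributes
)

for face in response["FaceDetails"]:
    # Each face carries a list of emotions with confidence scores; report the top one.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(top_emotion["Type"], round(top_emotion["Confidence"], 1))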

Voice Processing

With the introduction of voice assistant technologies such as Apple Siri, Amazon Alexa, and Google Home, it has become clear that customers prefer to communicate with businesses by voice rather than only through a screen and keyboard. The problem is that integrating voice communication into applications has required new and complex artificial intelligence tools, infrastructure, and skill sets. AWS announced the availability of their conversational interface service, Amazon Lex, and their text-to-speech service, Amazon Polly. These fully managed services expose the automatic speech recognition (ASR) and natural language understanding (NLU) deep learning capabilities, along with the massive amount of data and infrastructure, that support the Amazon Alexa products. They will allow businesses to create a new category of speech-integrated applications quickly and cost effectively, accelerating the development of new customer engagements and experiences. Capital One demonstrated their use of the Lex and Polly services to create new customer engagement here.
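
A minimal sketch of the two halves of such a voice interface, using the boto3 SDK: Lex turns the customer's utterance into a reply, and Polly turns that reply into audio. The bot name, alias, user id, and voice are hypothetical placeholders.

import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")
polly = boto3.client("polly", region_name="us-east-1")

# Send the customer's utterance to a (hypothetical) Lex bot and get a text reply.
lex_response = lex.post_text(
    botName="AccountBot",
    botAlias="prod",
    userId="customer-42",
    inputText="What is my checking account balance?",
)
reply = lex_response.get("message", "Sorry, I did not understand that.")

# Turn the reply back into speech with Polly and save it as an MP3 file.
speech = polly.synthesize_speech(Text=reply, OutputFormat="mp3", VoiceId="Joanna")
with open("reply.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())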

IoT Processing

Gartner, Inc. forecasts that 6.4 billion connected things will be in use worldwide in 2016, up 30 percent from 2015, and that the number will reach 20.8 billion by 2020. In 2016, 5.5 million new things will get connected every day. Taking full advantage of all the new data generated by these connected devices is a huge challenge and opportunity. New custom software must be developed to collect, filter, aggregate, analyze, act on, and ultimately push the data to the cloud for deeper analytics. Much of this new software runs on field devices rather than on the nearly limitless compute and storage capacity of the cloud, and IoT software development has required specialized tools and technologies. AWS announced the Greengrass service, which is designed to extend the AWS Lambda programming tool set and technology to small, simple, field-based devices. This will simplify and accelerate new IoT software development by providing common programming tools (AWS Lambda) and technology for software running at the device, in the field, and in the cloud. Businesses that can collect, analyze, and act on data from millions or billions of devices will create new competitive advantages in product production and service efficiency. More information on AWS Greengrass is available here.
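
To make the programming model concrete, here is a minimal sketch of a Lambda-style function that could be deployed to a Greengrass core device, assuming the Greengrass Core SDK for Python is available on the device. The topic names, field names, and threshold are hypothetical; the idea is simply to filter readings at the edge and forward only the exceptions toward the cloud.

import json

import greengrasssdk  # Greengrass Core SDK, available to Lambda functions on the device

iot = greengrasssdk.client("iot-data")

def handler(event, context):
    # 'event' is the locally published sensor message routed to this function.
    reading = json.loads(event) if isinstance(event, str) else event
    # Act locally and forward only the exceptions toward the cloud.
    if reading.get("temperature_c", 0) > 90:  # hypothetical threshold
        iot.publish(
            topic="plant/line1/alerts",  # hypothetical topic
            payload=json.dumps({"device": reading.get("device_id"), "reading": reading}),
        )
    return {"processed": True}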

These new AWS services will allow businesses to collect and analyze data about their customers faster and more deeply to better anticipate needs. Integrating image and voice processing will create new, more intimate customer experiences. AWS is leading the way in making these technologies readily available, which will accelerate the development of new application categories in 2017. A recording of Andy Jassy's complete keynote address is available here.


AWS re:Invent Andy Jassy Keynote - Core IT Compute Features

Amazon Web Services (AWS) CEO Andy Jassy's re:Invent keynote was a mixture of his view on enterprise IT and many new service announcements. AWS is now a $13 billion business, growing at 55% year over year. Andy reported that AWS is now the fastest growing large enterprise IT company in the world, and the next four fastest growing enterprise IT companies are using AWS as their primary cloud provider. AWS used this re:Invent conference to announce its readiness to support enterprise IT and software-as-a-service workloads in addition to its success supporting startup businesses. AWS now serves more than 1 million active non-Amazon customers and delivers 300 million hours of EC2 service monthly. AWS has commoditized most of the common IT infrastructure services as robust, easily accessible cloud services. In this post I will recap the announcements of new and enhanced core enterprise IT compute features. In my next post I will recap how AWS is pivoting from a low-cost IT infrastructure provider to a provider of value-added business IT services.

The first new capability announcements were a group of new EC2 instance types, mostly taking advantage of Moore's law. The enhanced instance types were (a launch sketch follows the list):

  • T2 – new instance types T2.xlarge & T2.2xlarge – 2x memory – for larger in-memory processing workloads
  • R4 – memory intensive workloads – 2x everything:
    • Instance storage capacity
    • Memory speed, using DDR4
    • L3 cache capacity
    • vCPUs
  • I3 – replaces the I2 instance type for I/O intensive workloads
    • 9x IOPS
    • 2x memory capacity
    • 3x storage capacity, using NVMe in place of SSD media
    • 2x vCPUs
  • C5 – replaces the C4 instance type for compute intensive workloads such as AI and transcoding
    • Featuring Skylake CPUs
    • 2x CPU capacity
    • 2x CPU speed – moving from Haswell to Skylake Intel CPUs
    • 3x storage capacity, moving to NVMe from SSD media
    • 4x more memory
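
As referenced above, launching any of these instance types is just an API call away. Here is a minimal sketch using the boto3 SDK to start one of the new memory-optimized R4 sizes; the AMI, key pair, and subnet identifiers are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI
    InstanceType="r4.xlarge",             # one of the new memory-optimized R4 sizes
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                 # hypothetical key pair
    SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet
)
print("Launched:", response["Instances"][0]["InstanceId"])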

In addition, two new instance types were announced, introducing GPUs for machine learning and artificial intelligence processing and field-programmable gate arrays (FPGAs) for offloading repetitive processing tasks.

  • New P1 instance – includes everything needed to leverage GPUs; full and shared GPUs (via the spot market) are available
  • New F1 instance type – includes everything needed to develop FPGA acceleration; an FPGA marketplace is ready to go at launch

The AWS EC2 catalogue is broad enough to meet the majority of enterprise workload types, and each of these instances can be launched in a few minutes. AWS has also recognized a class of use cases, such as simple web, application, or basic Linux servers, that needs only a minimal set of features: basic storage, networking, and an operating system. For this use case AWS announced the launch of their Lightsail service. With a few menu-driven clicks you can now launch a functioning virtual private server (VPS). Typical VPS services do not offer an easy upgrade path to other instance types the way the Lightsail offering does.
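
The same VPS can also be created programmatically. A minimal sketch using the boto3 Lightsail client follows; the instance name, blueprint id, and bundle id are hypothetical placeholders standing in for whatever image and size you would pick in the menus.

import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

lightsail.create_instances(
    instanceNames=["simple-web-01"],   # hypothetical instance name
    availabilityZone="us-east-1a",
    blueprintId="ubuntu_16_04",        # assumed basic Linux blueprint id
    bundleId="nano_1_0",               # assumed smallest CPU/memory/storage bundle
)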

AWS's compute service portfolio is broad enough to service most enterprise IT workloads. One of the major challenges enterprise IT has always dealt with is optimizing and reducing the cost of operating existing applications. Most applications created in the past decade run on VMware virtual servers, and migrating those workloads to AWS has been disruptive and complex. AWS and VMware have announced a new partnership that will enable VMware workloads to run on dedicated bare-metal EC2 instances by mid-2017. This will allow enterprise IT to take advantage of AWS scale and cost advantages without having to modify their existing applications or operational procedures.

During this part of his keynote Andy Jassy was clearly making the case that enterprise IT should fully embrace AWS as its primary cloud provider. His argument is that the startups threatening to disrupt established businesses and the software-as-a-service (SaaS) providers, such as Workday and Salesforce, that enterprises rely on today are already running on AWS cloud services, so enterprises need to be there as well.


AWS re:Invent - Compute, Network, and Data Center Design Principles

Amazon Web Services distinguished engineer James Hamilton was the first keynote presenter at this year's re:Invent conference. James' presentation featured a review of AWS data center and infrastructure design principles, and he presented a compelling case that AWS cloud infrastructure is enterprise ready.

Data Center Design


Today AWS deploys more server capacity every day than was needed to support all of Amazon in 2005, when Amazon was an $8.49 billion business. The growth of AWS capacity is being supported by the expansion of existing regions and the addition of four new regions next year. While the size and number of AWS regions are growing, the size of each data center remains relatively conservative. Each AWS data center is architected to support 50-80K servers and consume 25-32MW of electricity. AWS has maintained this data center size to limit the fault zone and maintain cost efficiency. AWS designs the infrastructure and runs its data centers with an overhead of only 10-12%, which helps keep the cost to serve low. Each region consists of two to five availability zones, each made up of one to eight data centers. AWS data center architecture is highly optimized for enterprise IT availability and cost. James has blogged about optimizing data center design costs here.

Network Design

Network design is another area where AWS has taken a unique approach. James stated that early on the scale of AWS "broke the standard vertical network router and switch architectures" and they had to build their own web-scale network. AWS designs all its networking hardware and writes all its networking software, which lets it minimize costs and maximize agility since everything is optimized for a single purpose. They have found that this approach actually improved their network reliability and enabled them to introduce new capabilities faster. Two interesting topics James discussed were AWS's bet on 25GbE instead of the 40GbE standard used in most enterprise IT data centers and their use of network ASICs.

The case for 25GbE is based on the cost of optics. AWS networking architecture is built on 100GbE, and 100GbE is four 25Gbps waves. Most enterprise IT data center designs leverage 40GbE, which is four 10Gbps waves. Minimizing the cost of the optics in web-scale data center designs has major cost and efficiency benefits, and from an engineering perspective it is simple to design and maintain a 25GbE top-of-rack switch that aggregates into a single 100GbE data center switch.

The second interesting network design principle James shared was AWS's use of network ASICs. It is often stated that web-scale IT service providers like AWS leverage commodity hardware. While they do rely on built-to-spec hardware, they are also embracing custom silicon to provide a competitive advantage. AWS acquired Annapurna Labs in January of 2015, giving AWS the ability to design and optimize the network silicon in addition to the hardware and software. The silicon design capability is being used to build specialized application-specific integrated circuits (ASICs) that offload repetitive tasks from the hardware, which reduces power consumption for those tasks while improving performance. It is another example of how AWS uses its ability to build purpose-built infrastructure to differentiate.

Server Design

AWS has long been optimizing their infrastructure based on server design. James shared AWS's design philosophy, which is based on simplicity and optimizing for power and cooling efficiency over density. Power and cooling costs are more expensive than data center space, as James has explained in his blog posts here. James shared an older AWS server design for comparison purposes and highlighted the efficiency difference compared to traditional enterprise IT designs and denser commercial server models.

Sustainability

AWS has committed to powering its data centers with 100% renewable energy (https://aws.amazon.com/about-aws/sustainability/). James reported that AWS has reached 40% renewable energy today and expects to reach 50% by the end of 2017. Meeting these goals is complicated by their explosive infrastructure growth. New AWS projects will generate 2.6 million MWh of energy annually using a combination of solar and wind generation farms. This type of sustainable energy commitment is critical to our environment. Although power consumption by data centers has plateaued in the past few years, data centers still consume 2% of all US electricity according to US Department of Energy estimates.

I thought it was interesting that AWS chose to kick off re:Invent with an overview of their data center and infrastructure design. Many people believe infrastructure no longer matters and offers little differentiation. After James' presentation I think you will agree that infrastructure done right can provide differentiation and that hardware design is still important to a well-run enterprise IT environment. A recording of James' keynote is available here and his presentation is available here.


AWS re:Invent Kickoff

Business digital transformation is accelerating as companies race to engage their customers and deliver products and services via technology. The best IT organizations are transforming their focus from a pure technology support role to helping the business envision the new possible. Cloud computing has become the preferred way to deliver technology infrastructure services, and over the past year cloud computing technology has continued to mature to enterprise grade. Solutions such as Dell Technologies' Enterprise Hybrid Cloud are now on their fourth major version, and converged infrastructure and public cloud service sales are growing at double-digit rates. In addition, over the past two months VMware has announced the capability to run their Cloud Foundation compute (vSphere), network (NSX), and storage (vSAN) stack on IBM and Amazon cloud services.

This has led me to attend my first AWS re:Invent conference starting today. I am excited to learn more about the VMware Cloud Foundation on AWS offering and several of the new AWS services including:

  • Lambda architecture
  • Serverless architectures
  • Database service transition from relational architectures
  • Machine Learning/Artificial Intelligence
  • IoT services

Many of the enterprise IT organizations I am working with are creating a bifurcated cloud strategy where all new application development is designed and deployed in clouds. Existing applications that can be transferred to cloud services are moving quickly to cloud infrastructure services without major transformations. This allows IT teams to get out of traditional infrastructure and data center management work. The resources freed up from traditional IT and data center management tasks will be applied to modernizing existing applications and creating new custom software to deliver new products and services.

My schedule for today is:

 

GPS01  --  Global Partner Summit Keynote

ARC205  --  Born in the Cloud; Built Like a Startup

ARC202  --  Accenture Cloud Platform Serverless Journey

BDM201  --  Big Data Architectural Patterns and Best Practices on AWS

DEV205  --  Monitoring, Hold the Infrastructure: Getting the Most from AWS Lambda

DAT306  --  ElastiCache Deep Dive: Best Practices and Usage Patterns

BDM306  --  Netflix: Using Amazon S3 as the fabric of our big data ecosystem

GA02  --  Tuesday Night Live with James Hamilton

I will be posting my thoughts here throughout the week.


IoT - Winning the IT Gold Rush

The Internet of Things (IoT) is the new IT "gold rush". IoT promises to revolutionize everything we do: the way we live, learn, heal, work, get around, and eat. Every technology company is positioning new products and services to enable IoT for businesses, which is creating a lot of confusion and unrealistic expectations. The smart business leaders I'm talking to today are leveraging the patterns of previous technology revolutions to guide their IoT strategy. Technology revolutions tend to take a decade or more to generate meaningful revenue, but once we reach the tipping point, those that are not prepared will become irrelevant quickly. The companies preparing smartly today will reap the IoT rewards of tomorrow.

I believe there are three waves of IoT adoption:

  • Wave 1 – IoT infrastructure – installing the modern compute, network connectivity, and data storage capability
  • Wave 2 – IoT applications – building the new applications that will enable new products and services leveraging the IoT infrastructure of Wave 1
  • Wave 3 – IoT-enabled transformation of industries leveraging the applications and infrastructure of Waves 1 & 2

Today we are clearly in the first wave of IoT adoption. Businesses are adding IT capability for IoT workloads. Two major IT trends that I see from customers are:

  • new capability to handle the volume, variety, and velocity of IoT data
  • new data analytics capability

The smart businesses I'm working with are not building this capability as a new technology silo; instead they are integrating these new capabilities with their existing IT infrastructure. If you look at the adoption patterns of the three main technology disruptions of the past twenty years (internet, mobile, and cloud), each continued to leverage the capability and data of the previous generation. Today almost all of the most successful new mobile applications access existing customer relationship data. The smart businesses are adding new flash and NVM media, capable of ingesting and processing data 10 to 100x faster, to their existing architectures. When a single wind turbine generates 400 data points a second, a wind farm will easily overwhelm the IT infrastructure of most enterprises today. But if you incrementally add new data media like flash and NVM and combine it with access to traditional product maintenance records, you can start to greatly improve the output and reduce the maintenance costs of your product.
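
A quick back-of-the-envelope sketch in Python shows why a single farm adds up so fast. Only the 400 samples per second per turbine comes from the example above; the farm size and per-sample size are hypothetical assumptions I am using purely to illustrate the scale.

SAMPLES_PER_TURBINE_PER_SEC = 400   # from the turbine example above
TURBINES_IN_FARM = 100              # assumed farm size
BYTES_PER_SAMPLE = 64               # assumed encoded sample size

samples_per_sec = SAMPLES_PER_TURBINE_PER_SEC * TURBINES_IN_FARM
bytes_per_day = samples_per_sec * BYTES_PER_SAMPLE * 86_400

print(f"{samples_per_sec:,} samples/sec")                  # 40,000 samples/sec
print(f"{bytes_per_day / 1e9:.0f} GB/day of raw telemetry")  # roughly 221 GB/day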

The second major enterprise infrastructure trend I am seeing for IoT is the investment in next generation data analytics capability. From a technology perspective, smart businesses are gathering their data in virtual repositories called data lakes. The variety of data and minimal up-front structure differentiate them from traditional data warehouses, where the analytics processing is often predetermined. New roles such as chief data officer, chief data analytics officer, and data scientist are being created to better understand the business's data assets and govern them. The CTO of a major US health care provider told me last week that he has an unlimited amount of data available, but the winners will be the ones who can mine more data, faster, for actionable information. Imagine if a patient with a chronic disease like high blood pressure could be monitored 24x7x365 by a consumer-priced wearable device that collects vital signs every minute. Using the minute-by-minute data, your healthcare provider could compare your information to thousands of others to optimize your maintenance care continuously, and the same device would alert your doctor immediately if your vital signs indicated the need for immediate attention. The IT capabilities required, including network connectivity, data processing speeds, and data science knowledge, are being created right now.

I work for Dell EMC, and we are focused on augmenting our products to enable the new IoT infrastructure capabilities needed. Our CTO, John Roese, recently presented our strategy at IoT Solutions World Congress. During this interview at the conference he summarized Dell Technologies' IoT vision of providing the new IoT infrastructure capabilities that enable the second wave of adoption.

 

With any new technology it is easy to get caught up in the hype and excitement of the possibilities. The smart businesses will apply the learnings of the past to be prepared for the inevitable IoT tipping point, and they are investing now to add the new IT infrastructure capabilities needed for the next waves of adoption. The second and third waves will come faster. Businesses that have the necessary IoT capability and can efficiently access their existing systems of record and data will be the most successful.


Get Ready for the Cloud Foundry Summit Europe

The main Cloud Foundry European user conference, Cloud Foundry Summit, is scheduled for next week in Frankfurt, Germany (9/26-9/28). This is the second year of the event, and with the continued momentum of the Cloud Foundry project and its adoption as the premier modern application development platform, over 600 attendees are expected this year. Leading into this year's summit, a new release of the Cloud Foundry platform (v242) shipped on 9/13 with major improvements to log aggregation and container management.

The first day of the event is dedicated to training for application development and operations practitioners, including an "unconference" with a couple of hours of lightning talks from the Cloud Foundry user community. The second day kicks off with a keynote from Cloud Foundry CEO Sam Ramji and is followed by a number of great breakouts on the status of the Cloud Foundry technology projects and successful users. The third day is packed with more great breakout sessions and concludes with a chat between Sam Ramji and Cloud Foundry board chairman John Roese reflecting on the experiences of the past year and their aspirations for the next. This year all the keynotes and lightning talks will be live streamed. You can find the schedule and register for the live stream here.

In addition to Sam and John's talks, I am looking forward to seeing the work from Brian Gallagher's Dojo team. Brian led the creation of the first foundation-member-sponsored Dojo, and his team has made a number of great code contributions and provided leadership for key infrastructure projects that make it simpler to deploy and run Cloud Foundry. I recently had an opportunity to talk to John and Brian about their plans for the upcoming Cloud Foundry Summit, including how to get a free summit pass and an invite to the Dell EMC customer appreciation party.

 

 

 


VMware Embraces Multi-Cloud

VMware recently hosted their annual user conference, VMworld. VMworld has always been a special event because of the strong technology ecosystem and user community that has developed around VMware products, especially their vSphere technology. Over the past few years much of the talk at VMworld focused on enabling enterprise IT to build cloud infrastructures as the natural next step of the VMware virtualized data center. Over the past five years VMware has been investing heavily through acquisition (e.g., Integrien, DynamicOps, Desktone) and organic development (e.g., vCloud Air) in products that help enterprise IT mature its virtualized data center into an on-premises cloud infrastructure. In parallel, public clouds (e.g., AWS, Azure, Google, and Salesforce) have emerged. Standing up cloud infrastructures is no longer the challenge; provisioning, managing, and monitoring workloads across all these different cloud services is the biggest challenge, since each of these clouds has proprietary interfaces and APIs. As a result, cloud silos have emerged, isolating workloads and data sets. Enterprise IT is managing a portfolio of cloud services in support of their application workloads and needs tools and solutions that allow them to efficiently manage, connect, and secure workloads running across multiple clouds. I have been referring to these types of services as a set of Cloud Interworking functions.

Many customers are running traditional application workloads on VMware clouds today, architected and optimized for their applications and workflows. Many would like the option of running these workloads in service provider clouds to realize cost and scale benefits. VMware introduced the Cloud Foundation solution and its SDDC Manager, which provide a common VMware provisioning, management, and monitoring experience across multiple cloud provider infrastructures. IBM is the first to offer this capability on its IBM Cloud public cloud infrastructure. This will help eliminate the cloud silos that are created when trying to leverage VMware clouds on multiple cloud infrastructure providers.


More about the VMware Cloud Foundation offering on IBM Cloud can be found here. Additional public cloud providers such as Virtustream have announced the intention to offer VMware Cloud Foundation services as well.

In addition, VMware positioned NSX as the best way to provide secure and manageable inter-cloud network connectivity. One of the major challenges to moving existing workloads to public clouds is that the network domain architectures differ from one cloud service to the next. Software-defined networking deployed across cloud services allows workloads to run without modification on multiple cloud service providers. In addition, software-defined networking with NSX provides a single management, monitoring, and discovery interface for your network across clouds, and software-defined micro-segmentation services let you implement finer security granularity across all the cloud services supporting your application workloads. This year Rajiv Ramaswami gave a great talk on the challenges of cross-cloud networking and the value of the VMware NSX solution here.

The third pillar of VMware's multi-cloud solution strategy is providing an enterprise-grade digital workspace experience for end users. Enterprises need a way to manage the distribution of application access to end users across many types of devices and locations in a timely manner while maintaining data security and governance. This challenge is becoming greater as the velocity of new application creation increases and the number and types of new devices accelerate. VMware announced the expansion of their partnership with IBM to provide hosted desktop and application services, and the progress of their work with Salesforce.com and its new analytics application. Providing a consistent, automated, and secure way to manage application access, regardless of which cloud the application is running in and across a variety of end user devices and network types, is critical for enterprise IT today, and it can now more easily be provided through a combination of cloud service providers. More information on this capability is available here and here.


The pivot by VMware to provide solutions that simplify and automate cloud interworking services will be a milestone in cloud adoption by enterprise IT. VMware is enabling simplified provisioning, management, and monitoring of workloads across multiple cloud providers. With NSX software-defined networking, enterprise IT can now manage, monitor, and secure their application communication across multiple cloud providers for the first time. Simplifying how end users connect to your applications via a consistent and secure digital workspace, across a variety of end user devices and network locations, is equally critical. I believe this is the year enterprise IT will focus more on using cloud services than on how to build them. These cloud interworking services will expand the choices for workload placement based on cost, location, and availability. The speed at which businesses can consume cloud services in 2016 will accelerate the new products, services, and customer experiences that differentiate them from their competitors.


Cloud Inter-Working – Distributed Data Access

In my previous post, Cloud Interworking Services, I described a new set of IT infrastructure services that enable reliable and secure inter-cloud access. In this post I am going to describe inter-cloud data access by your applications. As more applications leverage cloud infrastructure services, data sets are being distributed across several clouds. Most applications will need access to data sets stored in one or more cloud infrastructure services different from where they are running. For example, when developing a new customer engagement mobile application that runs in your private cloud, you may need access to data stored in the Salesforce.com cloud and to SAP application data running at Virtustream. A well-architected cloud infrastructure needs to enable frictionless data access by the new mobile application. Application access to any of your data sets is a basic requirement to compete in the digital economy. The faster IT can iterate on application development, the faster the business will deliver customer value.

Application access to data sets created and maintained remotely is not a new challenge for IT. Starting at the beginning of this decade, the industry began using storage virtualization technologies to enable data sets to be accessible in multiple data centers. Products like EMC VPLEX, Hitachi USP V, and NetApp V-Series provide these capabilities. These storage virtualization technologies were primarily designed to enable rapid-restart business continuity between clouds up to hundreds of miles apart. It is not easy for multiple applications to access the same data sets simultaneously without implementing a complex distributed lock manager to keep the data sets in a consistent state. I have seen many customers successfully create snapshot copies of the data so that other applications can access read-only copies of transactional data sets for analytics processing. Storage virtualization is limited by distance and network latency, typically not exceeding 50ms or roughly 100 miles, and it is mostly limited to block storage protocols, which restricts application access.

More recently, storage gateway technologies have been introduced to place data sets in their most cost-effective cloud service while maintaining application access over traditional block storage and file protocols. Typically these storage gateways cache the most frequently accessed data locally to minimize access latency and pull the data they don't have cached locally, transparently to applications. The challenge with most storage gateway technologies is that the data is not easily accessible by applications running anywhere but the source site. The storage gateway products I see most often are EMC CloudArray and Panzura.

Neither storage virtualization nor gateway technologies allow IT to provide ubiquitous access to data sets across multiple cloud services. In order to decouple data from applications, a new architecture is required. New applications should access all data through standard APIs rather than traditional storage protocols, and data sets must be accessible independent of any single application and cloud infrastructure. Architectures for modern mobile, web, and social applications follow The Twelve-Factor App pattern, where data sources are treated as backing services that are attached at run time. For example, a modern 12-factor app should be able to attach to and detach from any number of MySQL databases and object stores the same way each time, regardless of which cloud infrastructure the application or data set is operating in.
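
As a minimal sketch of the backing-services idea, the Python snippet below reads its database binding from the environment at run time, so the same code runs unchanged whichever cloud it lands in. The environment layout follows Cloud Foundry's VCAP_SERVICES convention; the service and credential names are hypothetical.

import json
import os

from sqlalchemy import create_engine, text

# The platform injects bindings at run time; nothing about the database is hard-coded.
services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
mysql_uri = services["p-mysql"][0]["credentials"]["uri"]  # e.g. mysql://user:pass@host:3306/db

engine = create_engine(mysql_uri)
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())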

For existing data sets that are tightly coupled to applications, new data fabrics will be necessary to virtualize access to data sources. For example, if you want an application to perform data analytics against data sets in a SQL database and an HDFS file system, your application will need to rely on a data fabric product like Pivotal HAWQ to access the two different data formats and execute a single SQL query. New applications will leverage data fabric APIs to access legacy data sources such as ERP databases. These modern data fabrics manage metadata describing data sets, including location and format. Since new applications are creating more unstructured data (e.g., audio, video, images) in addition to traditional structured data (spreadsheets, SQL databases), applications will need a data fabric to manage access consistently regardless of format.
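
A rough sketch of what that looks like from the application's point of view, assuming the fabric exposes a PostgreSQL-compatible SQL endpoint (as HAWQ does): one query joins an HDFS-backed external table with a relational table, and the fabric resolves where the data actually lives. The host, table, and column names are hypothetical.

import psycopg2  # HAWQ speaks the PostgreSQL wire protocol

conn = psycopg2.connect(host="hawq-master.example.com", port=5432,
                        dbname="analytics", user="analyst", password="********")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT c.region, SUM(e.clicks) AS clicks
        FROM  web_events_hdfs e   -- external table over HDFS files
        JOIN  customers c         -- relational table loaded from the ERP system
          ON  c.customer_id = e.customer_id
        GROUP BY c.region
    """)
    for region, clicks in cur.fetchall():
        print(region, clicks)
conn.close()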

Application access to all your data sets is critical to developing and operating new software. While we have been making IT infrastructures more flexible with storage virtualization and gateways, the new data fabrics are critical to enabling the consumption of cloud infrastructure. In order for companies to compete successfully in the digital economy, they need to be able to quickly develop new custom software that delivers differentiated products and customer experiences. To get that application development speed and scale, these applications need to be deployed in cloud infrastructures with robust inter-cloud data services.


Cloud Interworking Services




In my previous post, Cloud Is Not A Place, I presented my case that enterprise IT needs four types of cloud services to support its application workloads. Many enterprise IT customers I work with are adopting a bi-modal IT strategy. One mode of cloud services supports their traditional 3-tier client-server applications such as SAP R/3, Oracle ERP, SharePoint, and SQL Server-based applications; most of these traditional systems are their systems of record. The second mode of cloud services is optimized for modern mobile, web, social, and big data applications such as Salesforce.com and custom-developed web portal systems; many of these applications are their systems of customer engagement.

Many application workloads can be supported by just one of these cloud types, but every enterprise IT application portfolio requires a combination of more than one. For example, many businesses run SAP for ERP and use Salesforce.com for CRM; these two application workloads will be supported by different cloud types. As you add more application workloads, you must deal with applications that need access to data sets generated by other applications, which may not run on the same cloud type. You will also see opportunities to leverage one cloud type for primary data and other cloud types for redundancy and protection. Frictionless access between these different cloud services is critical.

A new class of cloud services I call Cloud Interworking services is needed. These Cloud Interworking services are critical to maximizing application workload placement and inter-operability. I believe these Cloud Interworking services will enable enterprise IT organizations to provide the most differentiated and cost effective IT services for their businesses.

We have identified three basic Cloud Interworking services that modern enterprise IT needs to support:

  • Data Set Access – access data sets easily from any cloud
  • Data Security – encryption of data in transit and at rest (sketched below)
  • Data Protection – data copies that can be used to restore failed data access requests
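
As a concrete illustration of the data security service referenced above, here is a minimal sketch using an object store as the backing cloud: the SDK talks to the service over HTTPS (encryption in transit) and asks the store to encrypt the object at rest. It uses the AWS S3 API via boto3 purely as an example; the bucket and key names are hypothetical.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # the SDK uses HTTPS endpoints by default

s3.put_object(
    Bucket="interworking-demo",              # hypothetical bucket
    Key="records/2016/invoice-0001.json",    # hypothetical object key
    Body=b'{"invoice": 1, "amount": 42.50}',
    ServerSideEncryption="AES256",           # at-rest encryption handled by the store
)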

In my next series of posts I am going to discuss how these capabilities can be implemented today. These Cloud Interworking services will enable enterprise IT infrastructure teams to become their company's cloud portfolio manager. As the cloud portfolio manager, they will be able to reduce friction with their application development teams while reducing costs and improving agility.