AWS re:Invent Kickoff

Business digital transformation is accelerating as companies race to engage their customers and deliver products and services via technology. The best IT organizations are expanding their focus from a technology support role to helping the business envision what is newly possible. Cloud computing has become the preferred way to deliver technology infrastructure services, and over the past year the technology has continued to mature to enterprise grade. Solutions such as Dell Technologies Enterprise Hybrid Cloud are now on their fourth major version, and converged infrastructure and public cloud service sales are growing at double-digit rates. In addition, over the past two months VMware has announced the capability to run its Cloud Foundation compute (vSphere), network (NSX), and storage (vSAN) stack on IBM and Amazon cloud services.

This has led me to attend my first AWS re:Invent conference starting today. I am excited to learn more about the VMware Cloud Foundation on AWS offering and several of the new AWS services including:

  • Lambda architecture
  • Serverless architectures
  • Database service transition from relational architectures
  • Machine Learning/Artificial Intelligence
  • IoT services

Many of the enterprise IT organizations I work with are creating a bifurcated cloud strategy: all new application development is designed for and deployed in clouds, while existing applications that can be transferred are moving quickly to cloud infrastructure services without major transformation. This allows IT teams to get out of traditional infrastructure and data center management work, and the resources freed up will be applied to modernizing existing applications and creating new custom software that delivers new products and services.

My schedule for today is:

 

GPS01  --  Global Partner Summit Keynote

ARC205  --  Born in the Cloud; Built Like a Startup

ARC202  --  Accenture Cloud Platform Serverless Journey

BDM201  --  Big Data Architectural Patterns and Best Practices on AWS

DEV205  --  Monitoring, Hold the Infrastructure: Getting the Most from AWS Lambda

DAT306  --  ElastiCache Deep Dive: Best Practices and Usage Patterns

BDM306  --  Netflix: Using Amazon S3 as the fabric of our big data ecosystem

GA02  --  Tuesday Night Live with James Hamilton

I will be posting my thoughts here throughout the week.


IoT - Winning the IT Gold Rush

The Internet of Things (IoT) is the new IT "gold rush". IoT promises to revolutionize everything we do: the way we live, learn, heal, work, get around, and eat. Every technology company is positioning new products and services to enable IoT for businesses, which is creating a lot of confusion and unrealistic expectations. The smart business leaders I'm talking to today are using the patterns of previous technology revolutions to guide their IoT strategy. Technology revolutions tend to take a decade or more to generate meaningful revenue, but once they reach the tipping point, those that are not prepared quickly become irrelevant. The companies preparing smartly today will reap the IoT rewards of tomorrow.

I believe there are three waves of IoT adoption:

  • Wave 1 – IoT infrastructure: installing the modern compute, network connectivity, and data storage capability
  • Wave 2 – IoT applications: building the new applications that will enable new products and services, leveraging the IoT infrastructure of Wave 1
  • Wave 3 – IoT-enabled transformation of industries, leveraging the applications and infrastructure of Waves 1 and 2

Today we are clearly in the first wave of IoT adoption. Businesses are adding IT capability for IoT workloads. Two major IT trends I see from customers are:

  • new capability to handle the volume, variety, and velocity of IoT data
  • new data analytics capability

The smart businesses I'm working with are not building this capability as a new technology silo; instead they are integrating these new capabilities with their existing IT infrastructure. If you look at the adoption patterns of the three main technology disruptions of the past twenty years (internet, mobile, and cloud), each continued to leverage the capability and data of the previous generation. Today almost all of the most successful new mobile applications access existing customer relationship data. Smart businesses are adding new flash and non-volatile memory (NVM) media, capable of ingesting and processing data 10 to 100x faster, to their architectures. When a single wind turbine generates 400 data points a second, a wind farm will easily overwhelm the IT infrastructure of most enterprises today. But if you incrementally add new media like flash and NVM and combine it with access to traditional product maintenance records, you can greatly improve your product's output and reduce its maintenance costs.
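To make the wind farm example concrete, here is a back-of-the-envelope ingest estimate. The 400 points-per-second figure comes from the example above; the farm size and per-point payload size are my own illustrative assumptions, not measured values:

```python
# Back-of-the-envelope IoT ingest estimate for a wind farm.
# Assumptions (illustrative only):
#   - 400 data points per turbine per second (from the example above)
#   - 100 turbines in the farm (assumed)
#   - ~64 bytes per data point: timestamp + sensor id + value + metadata (assumed)

POINTS_PER_SEC_PER_TURBINE = 400
TURBINES = 100
BYTES_PER_POINT = 64

points_per_sec = POINTS_PER_SEC_PER_TURBINE * TURBINES
bytes_per_sec = points_per_sec * BYTES_PER_POINT
gb_per_day = bytes_per_sec * 86_400 / 1e9

print(f"{points_per_sec:,} points/sec")     # 40,000 points/sec
print(f"{bytes_per_sec / 1e6:.2f} MB/sec")  # 2.56 MB/sec
print(f"{gb_per_day:.0f} GB/day")           # 221 GB/day
```

Even with these modest assumptions, a single farm produces hundreds of gigabytes of time-series data per day, which is why the ingest and processing speed of the underlying media matters so much.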

The second major enterprise infrastructure trend I am seeing for IoT is investment in next-generation data analytics capability. From a technology perspective, smart businesses are gathering their data in virtual repositories called data lakes. The variety of data and its minimal structure differentiate data lakes from traditional data warehouses, where the analytics processing is often predetermined. New roles such as chief data officer, chief data analytics officer, and data scientist are being created to better understand and govern the business's data assets. The CTO of a major US health care provider told me last week that he has an unlimited amount of data available, but the winners will be those who can mine more data, faster, for actionable information. Imagine if a patient with a chronic disease like high blood pressure could be monitored 24x7x365 by a consumer-priced wearable device that collects vital signs every minute. Using that minute-by-minute data, your healthcare provider could compare your information to thousands of other patients to continuously optimize your maintenance care, and the same device would alert your doctor immediately if your vital signs indicated the need for urgent attention. The IT capabilities this requires, including network connectivity, data processing speeds, and data science knowledge, are being created right now.
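The immediate-alert half of that scenario can be sketched in a few lines. This is a toy illustration only: the thresholds are invented for the example and are not medical guidance, and a real system would also involve the population-comparison analytics described above:

```python
# Toy continuous vital-sign monitor: flag any per-minute reading that falls
# outside a simple alert range. Thresholds are assumed for illustration.

ALERT_RANGE = {"systolic_bp": (90, 180), "heart_rate": (40, 130)}  # assumed values

def check(reading: dict) -> list:
    """Return the names of any vitals outside their alert range."""
    alerts = []
    for vital, (lo, hi) in ALERT_RANGE.items():
        value = reading.get(vital)
        if value is not None and not (lo <= value <= hi):
            alerts.append(vital)
    return alerts

minute_readings = [
    {"systolic_bp": 128, "heart_rate": 72},   # normal
    {"systolic_bp": 195, "heart_rate": 88},   # should trigger an alert
]
for reading in minute_readings:
    print(reading, "->", check(reading) or "ok")
```

The hard part is not the per-reading check but sustaining it at scale: one reading per minute per patient across millions of patients is exactly the ingest-and-analytics workload the data lake investments are meant to serve.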

I work for Dell EMC, and we are focused on augmenting our products to enable the new IoT infrastructure capabilities needed. Our CTO, John Roese, recently presented our strategy at IoT Solutions World Congress. During an interview at the conference he summarized Dell Technologies' IoT vision: provide the new IoT infrastructure capabilities that enable the second wave of adoption.

 

With any new technology it is easy to get caught up in the hype and excitement of the possibilities. The smart businesses will apply the lessons of the past to be prepared for the inevitable tipping point of IoT, investing now to add the new IT infrastructure capabilities needed for the next waves of adoption. The second and third waves will come faster. Businesses that have the necessary IoT capability and can efficiently access their existing systems of record and data will be the most successful.


Get Ready for the Cloud Foundry Summit Europe

The main Cloud Foundry European user conference, Cloud Foundry Summit, is scheduled for next week in Frankfurt, Germany (9/26-9/28). This is the second year of the event, and with the continued momentum of the Cloud Foundry project and its adoption as the premier modern application development platform, organizers are expecting over 600 attendees this year. Leading into this year's summit, a new release of the Cloud Foundry platform (v242) shipped on 9/13 with major improvements to log aggregation and container management.

The first day of the event is dedicated to training for application development and operations practitioners, including an "unconference" with a couple of hours of lightning talks from the Cloud Foundry user community. The second day kicks off with a keynote from Cloud Foundry CEO Sam Ramji, followed by a number of great breakouts on the status of the Cloud Foundry technology projects and successful users. The third day is packed with more great breakout sessions and concludes with a chat between Sam Ramji and Cloud Foundry board chairman John Roese, reflecting on the experiences of the past year and their aspirations for the next. This year all the keynote and lightning talks will be live streamed. You can find the schedule and register for the live stream here.

In addition to Sam and John's talks, I am looking forward to seeing the work from Brian Gallagher's Dojo team. Brian led the creation of the first foundation-member-sponsored Dojo, and his team has made a number of great code contributions and provided leadership for key infrastructure projects that make it simpler to deploy and run Cloud Foundry. I recently had an opportunity to talk to John and Brian about their plans for the upcoming Cloud Foundry Summit, including how to get a free summit pass and an invite to the Dell EMC customer appreciation party.



VMware Embraces Multi-Cloud

Last week VMware hosted their annual user conference, VMworld. VMworld has always been a special event because of the strong technology ecosystem and user community that has developed around VMware products, especially vSphere. Over the past few years much of the talk at VMworld focused on enabling enterprise IT to build cloud infrastructures as the natural next step of the VMware virtualized data center. Over the past five years VMware has invested heavily through acquisition (e.g. Integrien, DynamicOps, Desktone) and organic development (e.g. vCloud Air) in products that help enterprise IT mature their virtualized data centers into on-premises cloud infrastructure. In parallel, public clouds (e.g. AWS, Azure, Google, and Salesforce) have emerged. Standing up cloud infrastructures is no longer the challenge; provisioning, managing, and monitoring workloads across all these different cloud services is, since each cloud has proprietary interfaces and APIs. As a result, cloud silos have emerged, isolating workloads and data sets. Enterprise IT is managing a portfolio of cloud services to support their application workloads and needs tools and solutions that allow them to efficiently manage, connect, and secure workloads running across multiple clouds. I have been referring to these types of services as a set of cloud interworking functions.

Many customers run traditional application workloads on VMware clouds today, architected and optimized for their applications and workflows. Many would like the option of running these workloads in service provider clouds to realize cost and scale benefits. VMware introduced the Cloud Foundation solution and its SDDC Manager, which provide a common VMware provisioning, management, and monitoring experience across multiple cloud provider infrastructures. IBM is the first to offer this capability, on its IBM Cloud public cloud infrastructure. This will help eliminate the cloud silos created when trying to leverage VMware clouds on multiple cloud infrastructure providers.


More about the VMware Cloud Foundation offering on IBM Cloud can be found here. Additional public cloud providers such as Virtustream have announced their intention to offer VMware Cloud Foundation services as well.

In addition, VMware positioned NSX as the best way to provide secure and manageable inter-cloud network connectivity. One of the major challenges in moving existing workloads to public clouds is that network domain architectures differ from one cloud service to the next. Software-defined networking deployed across cloud services allows workloads to run without modification in multiple cloud service providers, and NSX provides a single management, monitoring, and discovery interface for your network across clouds. Software-defined micro-segmentation services let you implement finer-grained security across all the cloud services supporting your application workloads. This year Rajiv Ramaswami gave a great talk on the challenges of cross-cloud networking and the value of the VMware NSX solution here.

The third pillar of VMware's multi-cloud solution strategy is providing an enterprise-grade digital workspace experience for end users. Enterprises need a way to manage the distribution of application access to end users across many types of devices and locations in a timely manner while maintaining data security and governance. This challenge is growing as the velocity of new application creation increases and the number and type of new devices accelerates. VMware announced the expansion of their partnership with IBM to provide hosted desktop and application services, and the progress of their work with Salesforce.com on its new analytics application. Providing a consistent, automated, secure way to manage application access, regardless of which cloud the application runs in and across a variety of end-user devices and network types, is critical for enterprise IT today, and it can now more easily be provided through a combination of cloud service providers. More information on this capability is available here and here.


VMware's pivot to provide solutions that simplify and automate cloud interworking services will be a milestone in cloud adoption by enterprise IT. VMware is enabling simplified provisioning, management, and monitoring of workloads across multiple cloud providers, and with NSX software-defined networking, enterprise IT can now manage, monitor, and secure application communication across multiple cloud providers for the first time. Simplifying how end users connect to applications, via a consistent and secure digital workspace across a variety of devices and network locations, is equally critical. I believe this is the year enterprise IT will focus more on using cloud services than on how to build them. These cloud interworking services will expand the choice of workload placement based on cost, location, and availability, and the speed at which businesses can consume cloud services in 2016 will accelerate the new products, services, and customer experiences that differentiate them from their competitors.


VMworld 2016


VMworld 2016 kicks off this weekend. The theme of this year's event is be_TOMORROW, which certainly reflects the state of business and the IT industry. I think it's also fitting for VMware: their products were responsible for the last big IT technology shift, virtualization, but new products are needed for the next wave of cloud-native applications. VMworld has always been the event where the next wave of new IT cloud technologies is introduced, and this year VMware and its partners will be making their case for their role in tomorrow's IT ecosystem. I expect to see continued maturation of VMware's software-defined data center (SDDC) offering, automating storage, network, and compute. Last year they shared their vision for cross-cloud management and realigned their management products into new suite bundles. This year I expect major enhancements delivering on that management vision, and new SDDC bundles with converged and hyper-converged appliances.

The event is expected to attract over 24,000 IT professionals, with over 400 technical breakout sessions and, in my opinion, the best hands-on labs of any event. One of the reasons so many of us attend year after year is the opportunity to network in person. This year I will again be participating in the v0dgeball tournament on Sunday afternoon. It is a fun way to see friends from across the IT ecosystem and find out about all the new startup companies as well. The v0dgeball event starts at 3pm on Sunday and admission is free; all proceeds benefit the Wounded Warrior Project. More information about the event is available here.

My company EMC will again have a big presence at VMworld and will be introducing enhancements to our hybrid cloud offerings and tighter integration with our converged infrastructure offerings. It has become clear that customers expect hybrid cloud offerings to include data protection and security services that are simpler and easier to use, and we will be introducing some great integration work our EMC and VMware engineers have completed to simplify the deployment and management of these services. EMC will be at booth #1223 in the Solutions Exchange during the week with a number of great presentations and our engineering experts on hand to answer your questions. More information on Everything VMworld by EMC is available here.

I am excited about the product announcement previews I've seen this week as part of the EMC Elect and Cisco Champions programs. I think this will be an exciting week, and I will be blogging the highlights and most interesting announcements during the conference. I look forward to seeing all my friends and making some new ones this year.

 


Cloud Inter-Working – Distributed Data Access

In my previous post, Cloud Interworking Services, I described a new set of IT infrastructure services that enable reliable and secure inter-cloud access. In this post I am going to describe inter-cloud data access by your applications. As more applications leverage cloud infrastructure services, data sets are being distributed across several clouds. Most applications will need access to data sets stored in one or more cloud infrastructure services different from where they are running. For example, when developing a new customer engagement mobile application that runs in your private cloud, you may need access to data stored in the Salesforce.com cloud and SAP application data running at Virtustream. A well-architected cloud infrastructure needs to enable frictionless data access by the new mobile application. Application access to any of your data sets is a basic requirement to compete in the digital economy: the faster IT can iterate on application development, the faster the business will deliver customer value.

Application access to data sets created and maintained remotely is not a new challenge for IT. At the beginning of this decade the industry began using storage virtualization technologies to make data sets accessible in multiple data centers; products like EMC VPLEX, Hitachi USP V, and NetApp V-Series provide these capabilities. These storage virtualization technologies were primarily designed to enable rapid-restart business continuity between sites up to hundreds of miles apart. It is not easy for multiple applications to access the same data sets simultaneously without implementing a complex distributed lock manager to keep the data in a consistent state. I have seen many customers successfully create snapshot copies of the data so other applications can access read-only copies of transactional data sets for analytics processing. Storage virtualization is limited by distance and network latency, typically not exceeding 50ms or about 100 miles, and it is mostly limited to block storage protocols, which limits application access.
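The distance limit is ultimately physics. Light in fiber travels at roughly two-thirds the speed of light, about 5 microseconds per kilometer one way, and synchronous storage replication pays that delay on every acknowledged write. A quick sketch of the propagation-delay floor (real links add switching, routing, and protocol overhead on top of this):

```python
# Minimum round-trip propagation delay for a synchronous storage stretch,
# using the common rule of thumb of ~5 microseconds per km one way in fiber.

def round_trip_ms(distance_km: float, us_per_km: float = 5.0) -> float:
    """Round-trip propagation delay in milliseconds (floor; ignores equipment)."""
    return 2 * distance_km * us_per_km / 1000

for miles in (10, 50, 100):
    km = miles * 1.609
    print(f"{miles:>3} miles: >= {round_trip_ms(km):.2f} ms round trip")
```

Propagation alone is small at 100 miles (~1.6 ms round trip), but every synchronous write pays it plus array and network overhead, which is why practical deployments cap distance well before the raw latency budget is exhausted.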

More recently, storage gateway technologies have been introduced that place data sets in their most cost-effective cloud service while maintaining application access over traditional block and file storage protocols. Typically these gateways cache the most frequently accessed data locally to minimize access latency and pull data they don't have cached, transparently to applications. The challenge with most storage gateway technologies is that the data is not easily accessible to applications running anywhere but the source site. The storage gateway products I see most often are EMC CloudArray and Panzura.

Neither storage virtualization nor gateway technologies allow IT to provide ubiquitous access to data sets across multiple cloud services. To decouple data and applications, a new architecture is required: new applications should access all data through standard APIs rather than traditional storage protocols, and data sets must be accessible independent of any single application or cloud infrastructure. Architectures for modern mobile, web, and social applications follow The Twelve-Factor App model, where data sources are treated as backing services that are attached at run time. For example, a modern 12-factor app should be able to attach and detach any number of MySQL databases and object stores the same way each time, regardless of which cloud infrastructure the application or data set is operating in.
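A minimal sketch of that attach-at-run-time pattern: the backing service's location lives in the environment, not in the code, so pointing the app at a database in a different cloud is a configuration change only. The environment variable name and URL below are illustrative, not from any specific platform:

```python
# 12-Factor-style backing-service attachment: the database location is read
# from the environment at run time, so the same code runs unchanged
# whichever cloud hosts the application or the data.
import os
from urllib.parse import urlparse

def attach_backing_service(env_var: str = "DATABASE_URL") -> dict:
    """Parse a backing-service URL from the environment. Nothing about the
    service's location is hard-coded; re-attaching is a config change."""
    url = urlparse(os.environ[env_var])
    return {
        "scheme": url.scheme,
        "host": url.hostname,
        "port": url.port,
        "database": url.path.lstrip("/"),
    }

# Hypothetical value a platform would inject:
os.environ["DATABASE_URL"] = "mysql://db.example.internal:3306/orders"
cfg = attach_backing_service()
print(cfg["host"], cfg["database"])  # db.example.internal orders
```

Detaching one MySQL database and attaching another, in any cloud, changes only the injected URL; the application code is identical each time, which is exactly the portability the paragraph above describes.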

For existing data sets that are tightly coupled to applications, new data fabrics will be necessary to virtualize access to data sources. For example, if you want an application to perform analytics against data sets in a SQL database and an HDFS file system, your application will need a data fabric product like Pivotal HAWQ to access the two different data formats and execute a SQL query. New applications will leverage data fabric APIs to access legacy data sources such as ERP databases. These modern data fabrics manage metadata describing data sets, including location and format. Since new applications are creating more unstructured data (e.g. audio, video, images) in addition to traditional structured data (spreadsheets, SQL databases), applications will need a data fabric to manage access consistently regardless of format.
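The metadata role a data fabric plays can be sketched as a catalog that maps logical dataset names to physical location and format. This is a hypothetical illustration of the concept, not the API of HAWQ or any real product; the dataset names and locations are invented:

```python
# Conceptual sketch of a data-fabric metadata catalog: applications ask for
# data by logical name; the fabric, not the app, knows where each data set
# lives and how it is encoded. All entries are invented for illustration.

CATALOG = {
    "customer_orders": {"location": "postgres://erp-db/orders",  "format": "sql-table"},
    "clickstream":     {"location": "hdfs://lake/events/2016/",  "format": "parquet"},
}

def resolve(dataset: str) -> dict:
    """Return location/format metadata for a logical dataset name."""
    return CATALOG[dataset]

for name in ("customer_orders", "clickstream"):
    meta = resolve(name)
    print(f"{name}: {meta['format']} at {meta['location']}")
```

Because the application only ever holds the logical name, a dataset can move between clouds or change storage format and only the catalog entry changes, which is what makes consistent access "regardless of format" possible.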

Application access to all your data sets is critical to developing and operating new software. While storage virtualization and gateways have made IT infrastructures more flexible, the new data fabrics are critical to enabling the consumption of cloud infrastructure. To compete successfully in the digital economy, companies need to quickly develop new custom software that delivers differentiated products and customer experiences, and to get that development speed and scale, these applications need to be deployed in cloud infrastructures with robust inter-cloud data services.


Cloud Interworking Services




In my previous post, Cloud Is Not A Place, I presented my case for enterprise IT needing four types of cloud services to support their application workloads. Many enterprise IT customers I work with are adopting a bi-modal IT strategy. One mode of cloud services supports their traditional 3-tier client-server applications such as SAP R/3, Oracle ERP, SharePoint, and SQL Server-based applications; most of these traditional systems are their systems of record. The second mode is optimized for modern mobile, web, social, and big data applications such as Salesforce.com and custom-developed web portal systems; many of these are their systems of customer engagement.

Many application workloads can be supported by just one of these cloud types, but every enterprise IT application portfolio requires a combination of more than one. For example, many businesses run SAP for ERP and use Salesforce.com for CRM; these two workloads will be supported by different cloud types. As you add more application workloads, you must deal with applications that need access to data sets generated by other applications, which may not run on the same cloud type. You will also see opportunities to use one cloud type for primary data and others for redundancy and protection. Frictionless access between these different cloud services is critical.

A new class of cloud services I call Cloud Interworking services is needed. These Cloud Interworking services are critical to maximizing application workload placement and inter-operability. I believe these Cloud Interworking services will enable enterprise IT organizations to provide the most differentiated and cost effective IT services for their businesses.

We have identified three basic Cloud Interworking services that modern enterprise IT needs to support:

  • Data Set Access – access data sets easily from any cloud
  • Data Security – encryption of data in transit and at rest
  • Data Protection – data copies that can be used to restore failed data access requests

In my next series of posts I am going to discuss how these capabilities can be implemented today. These Cloud Interworking services will enable enterprise IT infrastructure teams to become their company's cloud portfolio manager. As the cloud portfolio manager, they will be able to reduce friction with their application development teams while lowering costs and improving agility.


EMCWorld 2016: Future of Data Center Services with SUPERNAP

Many customers I have been meeting with recently are looking to get their IT out of the data center business. Data centers are viewed as expensive and difficult to maintain for many businesses. Many are leveraging public cloud providers as a means to the goal of zero data centers, but are concerned about losing the advantages of IT infrastructure architecture control. One of the best things about attending EMC World is the opportunity to connect with other leaders in the IT industry, and as part of the EMC Elect community I had the opportunity to visit the Switch SUPERNAP data centers in Las Vegas.

Our visit included data center facility tours and a presentation of the SUPERNAP capabilities. SUPERNAP's Missy Young started with a review of the company's history. In 2000, Rob Roy founded Switch in Las Vegas to offer advanced managed technology services for startups and large enterprise customers. In 2002 he acquired a Nevada-based former Enron facility with the largest fiber optic capability in the country, offering customers unprecedented network capacity, performance, and redundancy. In 2006 he created the SUPERNAP data center business and ecosystem. SUPERNAP provides companies with data center space to house compute and storage, combined with the Switch network's capacity, performance, and redundancy. Today SUPERNAP operates data center services in northern and southern Nevada as well as internationally.

In addition to providing the world's only data center services certified Gold Tier IV for both facility and operations, SUPERNAP leverages over 200 inventions patented by Rob Roy to improve the cost effectiveness and environmental sustainability of its data centers. The SUPERNAP data centers do not use traditional raised-floor or power designs. For cooling they use their patented SUPERNAP T-SCIF (Thermal Separate Compartment in Facility) system, designed to keep 100% of the equipment heat separate from the data center air: the heat from each rack is captured, moved to ceiling compartments using natural air pressure, and vented outside while cool air is continually added to the building. This T-SCIF heat-containment cabinet platform not only cools the data center efficiently, it also lets SUPERNAP customers fully utilize their rack space without worrying about equipment cooling limitations. SUPERNAP can provide over 40 kW of power capacity to each rack, 30-50% more than many of the top enterprise data centers I have seen. This can result in big savings for customers paying for data center services by the rack.


When you arrive at the SUPERNAP facilities you are immediately impressed with the size and scale of the space. Once inside the exterior wall surrounding each of the buildings, you continue to experience their commitment to physical security as armed guards meet you at the entrance and escort you throughout the facility. The tour helped put the size of their installed base in perspective. Each building is divided into four modular sections built out as the space is sold. During the tour we were able to see the unique power distribution, cooling, and roofing designs that support the Gold Tier IV classification.

The other big advantage SUPERNAP offers customers is aggregated network bandwidth purchasing power through their Core Cooperative. Customers running at SUPERNAP data centers can typically reduce their network costs by 30-60% and improve their redundancy by participating in the Core Cooperative. Due to tax agreements, customers often also pay much lower taxes on data center services and equipment.

SUPERNAP is expanding their service to the eastern region of the United States with announced plans to build a data center campus in Grand Rapids, Michigan (https://www.supernap.com/news/switch-confirms-plans-for-massive-michigan-data-center.html). This will provide a data center service alternative for east coast companies, with benefits similar to those of the Nevada-based services.

Data center hosting and co-location services have been offered for many years by regional and national providers. Typically customers have used these services as an alternative to investing in their own data centers, but usually at a higher cost. With SUPERNAP's networking and proprietary data center design technology, customers realize the benefits of world-class data center services at a fraction of their current data center cost while maintaining IT system architectural and operational control. Based on the growth of SUPERNAP capacity in both Nevada and Michigan, I think many more businesses will consider this option for hosting their IT infrastructure in the future.


EMC World 2016


EMC World 2016 is upon us again and final preparations are underway. The theme of this year's conference is Modernize, which reflects the major challenge for enterprise IT: modernize, automate, and transform operations to enable the business transformation needed to compete in the digital economy. EMC World starts officially on Monday, May 2.

This year the EMC CTO team has been heavily involved in the content planning for EMC World. We have eight great breakout sessions focused on modern management and operations, storage architectures, IoT, and cloud-native application infrastructure solutions:


In addition to the breakout sessions, our global CTO, John Roese, will host a round table on Tuesday (5/3) with 25 CIOs and CTOs to discuss EMC's technology vision.

 

On Wednesday (5/4) John will be hosting a meet-up with the EMC Elect, CTO Ambassadors, and Cisco Champions.

 

John is also hosting an interesting Guru Session on Wednesday (5/4) at 3pm with famed strategy consultant Geoffrey Moore, the best-selling author of Crossing the Chasm and Escape Velocity. John and Geoffrey will discuss the challenges enterprise IT faces in transforming to compete in the digital economy.

 

This year all the daily keynote sessions will be webcast from the emc.com website. I am looking forward to all the great content and will recap the highlights in blog posts throughout next week.


Cloud Is Not A Place

Enterprise IT organizations are being challenged to transform to enable their business to compete in the digital economy. IT is being asked to reduce the cost of operating the traditional application portfolio and enable new mobile, web, social, and analytics applications, all without compromising data security and compliance requirements. These competing imperatives are forcing enterprise IT to embrace modern cloud infrastructure. The challenge many are struggling with is finding a public cloud service that can meet all their needs: traditional client-server applications alongside development of new applications on much more agile, flexible, and less expensive infrastructure. In addition, many are expanding their use of software as a service (SaaS) beyond CRM and payroll to HR, collaboration, and office productivity. The question is: how do you find one cloud provider to meet all these workload needs? I believe this is the wrong question. Cloud is not a place but an operating model. Enterprise IT will need to manage a portfolio of cloud services optimized for multiple groups of applications with diverse workload requirements.


At a high level, determining the IT cloud services needed for your application workloads is based on two dimensions:

  • Application architecture: traditional client-server, or modern mobile, web, and social
    applications
  • Application locality: can it run off-premises, or must it run on-premises?

This creates four categories of cloud services. The lower-left quadrant is optimized to serve traditional client-server applications like SAP R/3 and Oracle ERP. The lower-right quadrant is a new type of off-premises cloud service provider that offers application expertise in addition to the price advantages of public cloud; EMC Virtustream and Oracle Cloud are examples of these providers. The upper-left quadrant covers cloud services optimized for modern mobile, web, and social application architectures that you want to run on-premises. The upper-right quadrant holds general-purpose public cloud providers and software-as-a-service providers. Each of these cloud types is architected to minimize the cost of running the target workloads while providing just the services the applications need. For example, an Oracle database application requires a highly resilient storage infrastructure: if your tablespace storage suddenly becomes unavailable, it is going to be a really bad day. For a Hadoop-based application, if a data node suddenly becomes unavailable, existing and new requests are re-routed to other copies of the data with minimal user impact. You need to make sure your application workloads are mapped to the appropriate cloud service.
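The two-dimension model above can be sketched as a simple lookup. This is a minimal illustration only; the function name and quadrant labels are hypothetical, not an EMC tool or taxonomy.

```python
# Sketch of the 2x2 workload-mapping model: two dimensions --
# application architecture and required locality -- determine
# which of four cloud service categories fits a workload.
# All names and labels here are illustrative assumptions.

def classify_workload(architecture: str, locality: str) -> str:
    """Map (architecture, locality) to a cloud service category.

    architecture: "traditional" (client-server) or "modern" (mobile/web/social)
    locality:     "on-premises" or "off-premises"
    """
    quadrants = {
        ("traditional", "on-premises"):  "enterprise hybrid cloud (e.g. SAP R/3, Oracle ERP)",
        ("traditional", "off-premises"): "application-specialist cloud (e.g. Virtustream, Oracle Cloud)",
        ("modern", "on-premises"):       "on-premises cloud-native platform",
        ("modern", "off-premises"):      "general-purpose public cloud / SaaS",
    }
    try:
        return quadrants[(architecture, locality)]
    except KeyError:
        raise ValueError(f"unknown workload profile: {architecture}, {locality}")

print(classify_workload("traditional", "off-premises"))
# -> application-specialist cloud (e.g. Virtustream, Oracle Cloud)
```

The point of the model is that each quadrant carries different resilience and cost assumptions, so a portfolio, not a single provider, is the natural outcome.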

There has never been a single IT infrastructure architecture that serves all application workloads. The best IT organizations offer a portfolio of infrastructure services with differentiated service levels, flexibility, performance, and cost characteristics. I have described a model that has enabled many of my customers to start thinking about their cloud service needs. As this portfolio of cloud services is built out, new IT services, organizational roles, and skill sets emerge. In future blog posts I will discuss the new cloud inter-working services, organizational roles, and skill sets needed.


EMC CTO Ambassadors

When I joined the EMC Office of the CTO in 2014, after many years as a field engineer and an EMC customer, many people were interested in our opinion on the future of IT technology. As an EMC field engineer I worked with many customers designing technology solutions that would need to support their core business for the next 10 years using currently available products. We often discussed how a solution could be designed to accommodate new technologies that we knew were on the near-term horizon. The challenge was:

  • How do we identify the new technologies we should be considering?
  • What is the informed opinion on when those technologies will be commercially viable?

When I joined the EMC CTO office we had no formal process for sharing our knowledge and points of view with our field engineers, let alone our customers. After a bit of research I found this was not unique to the enterprise IT product industry. As I thought about this problem and the value our customers would receive, I proposed we create a team of technologists with whom we would share the EMC Office of the CTO research results and our educated points of view on the future of IT technology. To my delight I found the support of one of EMC's leading technologists, Steve Todd. He encouraged me to present a plan to EMC's CTO, John Roese. With John's support I began recruiting CTO Ambassadors who would learn about our research findings and John's points of view on IT trends. Working with Steve, we created the first messages for the CTO Ambassadors, and we launched the program by the end of 2014, leveraging the EMC Executive Briefing program to engage with our customers.

We quickly realized there was a wealth of great feedback and ideas shared during these CTO Ambassador vision meetings, so we added a CTO Ambassador to each meeting to create meeting feedback reports. On a quarterly basis the team reviewed these reports, and we discovered trends that we used as input to future messages and projects. The meeting feedback has been invaluable in sharpening our focus on the topics most important to our customers and the industry.

Recently I was able to capture EMC Global CTO John Roese's feedback on the CTO Ambassador program.

In addition to scaling the program to 90+ volunteer CTO Ambassadors globally in 2015, we wanted to provide a single publicly accessible portal to share more details about our research projects and points of view. We recently launched our Innovation @EMC portal, newsletter, and CTOAmbassador Twitter handle. I will be talking more about this soon.

I am excited that the EMC CTO Ambassador program has been successful in exposing the most important work led by the CTO Office through our local technologists across the globe. The CTO Ambassadors have hosted over 200 customer meetings since the program started. In the beginning it was challenging to convince some of EMC's leading technology thought leaders of the value of supporting the program and sharing their work and points of view, but with the great customer feedback and ideas we have collected for them it is getting easier. Many of the EMC employees who have volunteered their time are now seen as EMC and industry thought leaders; many have been recognized as principal engineers and have been nominated by their peers for EMC R&R rewards. If you are coming to an EMC Executive Briefing, be sure to ask to meet with our CTO Ambassadors to learn more about EMC's technology vision. We would love to hear your feedback.