The Evolution of Cloud Connectivity
“Intelligence is based on how efficient a species became at doing the things they need to survive.” ― Charles Darwin
“My theory of evolution is that Darwin was adopted.” ― Steven Wright
Yesterday
In case you missed it, the first phase of cloud computing has left the building. Thousands of companies are in the cloud. Practically all organizations, regardless of size, already have production applications in a public, off-premises cloud or a private cloud. Yep. Been there, done that.
And the vast majority of these applications use the classic “SaaS-style” public cloud model. Someone develops a useful service and hosts it on Amazon Web Services (AWS), Microsoft Azure, IBM Cloud Marketplace, Google Cloud Platform (GCP) or one of several other cloud vendors. Access to this external service goes through a well-defined API, typically a simple REST call (or a convenient library wrapper around one). The request originates from a web browser, a native app on a mobile device or some server-side application and traverses the web. Using only port 443 or 80, it connects through a series of firewalls to the actual service running in the external cloud environment. A process in the service provider’s computing environment handles the request and returns a result to the client application.
Conventional SaaS-style Access
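To make the conventional model concrete, here is a minimal sketch using only the Python standard library. The host, resource path, and bearer token are hypothetical; the point is that the whole exchange is a single HTTPS request on port 443, which is why it sails through ordinary corporate firewalls.

```python
import urllib.request


def build_service_request(host, resource, token):
    """Build a plain HTTPS REST request to a (hypothetical) cloud service.

    Everything rides over standard port 443, so no special firewall
    rules are needed on either side.
    """
    url = f"https://{host}/{resource}"
    return urllib.request.Request(
        url,
        headers={
            "Accept": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="GET",
    )


# Illustrative values only: this endpoint does not actually exist.
req = build_service_request("api.example.com", "v1/orders/42", "demo-token")
print(req.full_url)      # https://api.example.com/v1/orders/42
print(req.get_method())  # GET
```

In a real client you would pass `req` to `urllib.request.urlopen` (or use a library such as `requests`); the sketch stops before the network call so it stays self-contained.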
Only the Beginning
However, this scenario greatly simplifies real-world service access. Quite honestly, it is the hello-world of cloud connectivity models.
Today’s enterprise is a federation of companies with vast collections of dynamic services that are enabled and disabled frequently, under ever-changing authentication and access-control rules. To survive in this environment, a modern enterprise needs an intimate yet secure ecosystem of partners, suppliers and customers. So unlike the rudimentary connectivity case, the typical production application is composed of many dozens, perhaps hundreds, of services, some internal to the enterprise and some residing in a collection of external cloud infrastructures or data centers. For example, the incredibly successful Amazon ecommerce website performs 100–150 internal service calls just to gather the data for a single personalized web experience.
Many of these external services, whether hosted by a cloud vendor or in another company’s data center, often need to reach back into the originating infrastructure to access internal services and data to complete their tasks. Some services go further still and need access to information across cloud, network and company boundaries.
This ain’t your father’s cloud infrastructure.
Get off My Cloud
A particular use case arises when a service running in a cloud environment, e.g., AWS, needs to authenticate its callers against the enterprise’s internal credentials. One solution is to replicate a duplicate or subset of those credentials (usually housed in an LDAP repository such as Active Directory) directly in the public cloud. However, this is redundant and introduces potentially dangerous authentication-synchronization and data-management issues. Unsurprisingly, this scenario of accessing authentication or entitlement information residing in an internal directory turns out to be quite common for practically all service access.
Another example involves powerful cloud-based analytics or business intelligence services. In many cases such off-premises analytics-as-a-service providers need access to real-time data feeds that reside on a customer’s premises. That customer may not want to push its private real-time stream into the cloud environment for a variety of reasons: security, unnecessary data synchronization, additional management overhead, and so on.
The architectural solutions for both of these use cases involve either negotiating with the enterprise customer to create a REST API and deploy a family of application servers (extremely complex and highly improbable), or, more typically, setting up a virtual private network (VPN) to achieve a real-time, “fat-pipe” connection.
Old-School Approach to Application Connectivity
Nothing Else Matters
While the technical aspects of setting up a legacy-style VPN are relatively straightforward, a lengthy period of corporate signoffs and inter-company negotiations often precedes the technical work. For some companies this approval period runs many weeks; at large corporations, getting sign-off for yet another VPN can take several months. This painfully long lead time hurts business agility and the all-important time-to-revenue.
In addition, VPN access operates at the low-level TCP layer of the network stack. Despite various access-control systems, the open nature of a VPN represents a security risk, potentially giving unauthorized (and authorized) users free rein over many internal enterprise services. VPN implementations also vary: some are proprietary and can cause interoperability issues among VPN vendors, especially for VPNs that extend access to mobile devices.
What a Wonderful World
Ideally you would completely eliminate any legacy VPN requirement, removing significant and unnecessary friction from the sales and deployment process. You’d want an agile, on-demand connection that links applications directly, Application-to-Application (A2A), via a “white list” approach. And to help future-proof your infrastructure and accelerate operations, a container deployment approach based on the popular Docker platform would be both useful and attractive to your developers.
Do You Believe in Magic
In December 2011, the Internet standards bodies formally approved a mechanism for a persistent connection over the web (the IETF published the protocol as RFC 6455, and the W3C standardized the corresponding browser API) that uses no additional ports, consequently maintaining your friendships in the InfoSec group. This standard is called “WebSocket” and is effectively a “TCP for the Web”.
Like most innovations being used for the first time, WebSocket was initially treated as a mere replacement for inelegant browser push mechanisms (long-polling AJAX and the like) for sending data from a server to a user.
But by using the WebSocket protocol and its standardized API as the foundation for wide-area, TCP-style distributed computing, we get a phenomenally powerful innovation. Wrap basic WebSocket functionality in the necessary enterprise-grade security and reliability envelope, and applications can easily, and most importantly securely, access services on demand through the firewall. This enhanced approach avoids the awkward conversion of an enterprise application protocol into coarse-grained HTTP semantics, and because the connection persists after a single handshake, performance is rarely an issue.
WebSocket for App-to-App (A2A) Communication
This LAN is Your LAN
If you’re looking for a way for an external cloud application to access an internal, on-premises service in an on-demand, Application-to-Application manner, Kaazing WebSocket Intercloud Connect (KWIC… yep, yet another caffeine-induced acronym) provides exactly that. It’s based on the open-source Kaazing Gateway and works with any TCP-based protocol. You can see an example of KWIC used for LDAP access in the AWS Marketplace (and if you don’t need support, KWIC is totally free…).