How to Replace Your Legacy VPN in Minutes

KWICies #007 – When You Gotta Go, You Gotta Go

“Intelligence is the ability to adapt to change” –
Stephen Hawking

 

“The lion and the calf shall lie down together but the calf won’t get much sleep”
Woody Allen

An Enterprise Parable

The alluring sound of the cloud siren beckons you to write a killer service using the latest server-side technologies to solve a huge problem in your industry. You haul out the buzzword-compliant big guns, i.e., big data, machine intelligence and Docker microservices. Your team builds this awesome, mission-critical, revenue-generating web service. All this gleaming power is now ready to be used by your Fortune 500 customers.

You pause and secretly sneer with glee.

The management gods smile upon you. It’s an evil smile, but what the heck, at least those bastards are smiling.

You present to your first customers.

“… and it’s scalable and secure. It’s accessed with a simple HTTP call. You merely upload your users’ credentials into our repository in the cloud and we take care of authentication. Thank you. Payments to ACME Software are due 30 days ARO”.

But the customers don’t rejoice. In fact, they remain stone-faced, stand up silently and exit single file.

The management gods drop their smiles and their lit cigars. You reach into your pocket and solemnly crinkle the envelope holding the down payment on your new digs in the Hamptons.

Cue the sad music.

But I like the Beach…

What Goes On in the Datacenter Stays in the Datacenter

Most companies have legal constraints, regulatory restrictions or corporate mandates on what types of systems can physically exist in an off-premises cloud. Very often, critical user credentials and other types of “family jewel” information cannot leave the premises of the enterprise. In many geographic locations, particularly in Europe, there are stringent rules on where data must reside. Data protection directives and privacy laws can be quite strict and heavily enforced.

In addition to data protection and privacy constraints, many corporations simply may not want their highly sensitive services running in a public cloud. It may not even be for data sensitivity reasons; companies are very reluctant to create subsets of data, since doing so involves costly data synchronization and maintenance. So there are a variety of reasons that certain systems and services must remain on-premises.

Preferable off-premises islands of information

However, these powerful SaaS-style systems of engagement running in a public cloud commonly require access to the on-premises systems of record.

For example, a CRM system running in Microsoft Azure may need to authenticate users against an on-premises LDAP service. A portfolio reconciliation system running in Google GCP needs real-time market data feeds originating from several on-premises sources. A managed machine intelligence service running in Amazon AWS requires event-based data from your supply chain partners.

These are not unusual on-prem/off-prem scenarios. Au contraire, mon ami: most SaaS services have this requirement.

Historically, there were three basic solutions for this type of requirement:

  1. You ask your customers to create a conventional REST-style web service that allows your external cloud service to call them. Your champions at the customer hear you, feel a stabbing sensation in their stomachs and wince politely. This approach is quite painful and very costly for their IT departments: it means designing, deploying and maintaining an application server that takes several months to develop.
  1. You ask your customers to open incoming non-standard TCP ports for your SaaS cloud service. This is potentially a humongous security hole. You are destined to be a case study in the future. Prepare your CV.
  1. You ask your customers to install a legacy VPN. Your customers are familiar with VPNs. They know they can install an expensive hardware VPN device from a large networking company with a lengthy maintenance agreement. Or they can deploy a software SSL VPN, perhaps even an open-source one with the fugly user interface and confusing administrative dashboard. All your customers need is to get approval from their InfoSec and Operations teams. And signoff from your own InfoSec team. And from their Managing Director. And their CTO. And their CIO. Should be simple, right?

The usual 30-year-old answer to this scenario is to set up a legacy VPN to connect the two systems.

Do I really have a choice?

However, there are many downsides to setting up traditional or cloud-based VPNs:

  • The on-boarding process can be onerous, especially between external organizations, despite the straightforward technology setup.
  • They are not easy to manage in an agile, constantly changing federated environment, which is the norm.
  • VPNs may require additional infrastructure for mobile devices that experience disconnects, cross-application network connection retries, additional security, etc.
  • Even a single VPN can be difficult for a business unit to deploy, maintain and secure. In a business-driven cloud services world, this reduces agility for the revenue generators in an enterprise.
  • They typically allow low-level, potentially dangerous access, especially if home computers are used to access corporate assets.
  • VPN access control commonly uses the hard-to-manage blacklist security model.
  • They present huge attack surfaces with many vectors for hackers to exploit. Some researchers have even discovered that many VPN products leak low-level IP data.
  • VPN vendor hardware and software are not always interoperable or compatible. A particular VPN architecture may not be suitable across multiple VPN vendors.
  • VPN products typically offer poor user experiences.
  • TCP and Web VPN requirements are not necessarily the same, which drives up costs.
  • Do legacy VPNs fit in a multi-cloud, on-demand and microservices world? All connectivity must be uber convenient and on-demand.

And as the Internet-of-Things (IoT) and Web-of-Things (WoT) wave matures over the next 5-10 years, VPNs are simply too clumsy, inconvenient and heavyweight to handle agile remote connectivity for the many billions of devices to come.

And these devices will arrive in huge waves. The connectivity and data volumes are large now, but when IP is implemented over Bluetooth LE, the connectivity fabric will spread faster than a Lady Gaga video on YouTube. You don’t have to be Nostradamus to predict a coming discontinuity: far greater data volumes, far more machine intelligence applications and a far greater need for secure connectivity.

Tweet: “Customers definitely want secure connectivity with all these apps, but they also want convenience.”

Is Elon Musk driving that Really Smart Car?

Enter WebSocket

We’ve talked about WebSocket in detail back in KWICies #003. To quickly review, our fearless hero WebSocket is an official IETF wire protocol (RFC 6455, December 2011) with an (essentially) official W3C JavaScript API to use it (note: the W3C specifies only the JavaScript API). WebSocket is a peer protocol to HTTP; in other words, both HTTP and WebSocket (and their TLS/SSL-encrypted versions) are physically implemented “on top of” TCP.

The Web is now a humongous collection of APIs and Services

Unlike HTTP, WebSocket is a persistent (and full-duplex) connection between two endpoints. A persistent connection means event-based programming is now finally possible over the web. Btw, if you really want to be hip, replace “event-based” with “reactive”; you’ll make the application server developers swoon during your next corporate presentation.
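
To make the “event-based over the web” point concrete, here is a minimal sketch of a client using the open-source Python websockets library; the endpoint URL and the subscribe message are assumptions for illustration only.

```python
# One persistent, full-duplex WebSocket connection: the client can send
# upstream at any time while reacting to whatever the server pushes down.
# (Hypothetical endpoint and message format -- illustration only.)
import asyncio
import websockets

async def listen_for_events():
    async with websockets.connect("wss://example.com/market-data") as ws:
        await ws.send("subscribe:EURUSD")      # send on the same connection...
        async for event in ws:                 # ...while reacting to pushed events
            print("server pushed:", event)     # no polling, no request/response cycle

asyncio.run(listen_for_events())
```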

HTTP is clearly an excellent protocol for document upload and download, and we have certainly tweaked it over the past 5-7 years to do things it was never intended to do. And HTTP remains the protocol of choice if you need caching of static entities. But it was never intended for asynchronous distributed computing.

On the other hand, WebSocket can be thought of as a “TCP for the web” (though that’s not literally true at the wire level). As a persistent and full-duplex connection, WebSocket allows all sorts of additional protocols to be implemented over the web, e.g., messaging, events, telemetry, data acquisition and more. And like TCP, WebSocket is a low-level transport; many other types of higher-level application protocols and APIs can be implemented over WebSocket. As a matter of fact, any TCP-based application protocol can use WebSocket as a transport to traverse the web.
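
To illustrate that last point, here is a rough sketch (the gateway URL and port numbers are invented, and error handling is omitted) of raw TCP riding over WebSocket: a small local listener accepts an ordinary TCP connection, relays its bytes as WebSocket frames to a remote gateway, and relays returning frames back as bytes.

```python
# Sketch of "any TCP protocol over WebSocket" -- local side.
# A plain TCP listener forwards bytes over a WebSocket tunnel (and back).
# The gateway URL and port numbers are assumptions for illustration only.
import asyncio
import websockets

LOCAL_PORT = 1389                               # local stand-in for the real service
GATEWAY = "wss://gateway.example.com/tunnel"    # hypothetical WebSocket gateway

async def relay(reader, writer):
    async with websockets.connect(GATEWAY) as ws:

        async def tcp_to_ws():
            while data := await reader.read(4096):
                await ws.send(data)             # raw TCP bytes -> WebSocket frames

        async def ws_to_tcp():
            async for frame in ws:
                writer.write(frame)             # WebSocket frames -> raw TCP bytes
                await writer.drain()

        await asyncio.gather(tcp_to_ws(), ws_to_tcp())

async def main():
    server = await asyncio.start_server(relay, "127.0.0.1", LOCAL_PORT)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

Point any ordinary TCP client at 127.0.0.1:1389 and, as far as it can tell, it is talking to a normal TCP service; the WebSocket hop is invisible to it.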

Similar to any other wire protocol, WebSocket does not have to be used with a browser (e.g., Slack’s native client uses WebSocket under the hood), but there are certainly a lot of examples of WebSocket use in a browser (Google docs, Trello, BrowserQuest, etc.).

And since WebSocket is like a TCP, you can envision other non-browser use cases like… wait for it… replacing many VPN scenarios. The Kaazing KWIC software leverages this new communication model by securely converting TCP to WebSocket on one side and reversing the process on the other. Literally in a few minutes you can have secure hybrid cloud service connectivity using the WebSocket-powered KWIC software, without the pain and administrative headaches of a legacy VPN.
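
Conceptually, the other half of that conversion pairs with the sketch above: a WebSocket endpoint near the target service unwraps the frames back into an ordinary TCP connection. To be clear, this is an illustration of the idea, not the actual KWIC implementation; the internal host name and port numbers are assumptions.

```python
# Sketch of the service-side half of a TCP <-> WebSocket bridge (illustrative
# only, not Kaazing's actual KWIC code). Incoming WebSocket frames are unwrapped
# into a plain TCP connection to an internal service; bytes flow back the same way.
import asyncio
import websockets

TARGET_HOST, TARGET_PORT = "ldap.internal.example.com", 389   # hypothetical on-prem service

async def tunnel(ws):
    reader, writer = await asyncio.open_connection(TARGET_HOST, TARGET_PORT)

    async def ws_to_tcp():
        async for frame in ws:
            writer.write(frame)                 # WebSocket frames -> raw TCP bytes
            await writer.drain()

    async def tcp_to_ws():
        while data := await reader.read(4096):
            await ws.send(data)                 # raw TCP bytes -> WebSocket frames

    await asyncio.gather(ws_to_tcp(), tcp_to_ws())

async def main():
    # One well-known WebSocket port (front it with TLS in practice) carries the
    # tunneled traffic instead of a pile of ad-hoc inbound TCP ports.
    async with websockets.serve(tunnel, "0.0.0.0", 8443):
        await asyncio.Future()                  # run forever

asyncio.run(main())
```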

If you need on-demand, program-to-program service connectivity for your modern applications, why are you still dealing with old-school, 30-year-old VPN technology?

Frank Greco