< Table of Contents

Service Reference

This document describes all of the elements and properties you can use to configure Kaazing Gateway services.

Overview

You can use the optional service element to configure one or more services running on Kaazing Gateway.

Structure

The Gateway configuration file (gateway-config.xml) defines the service configuration element and its subordinate elements and properties, which are contained in the top-level gateway-config element:

service

Each service can contain any of the subordinate elements listed in the following table.

Note: Subordinate elements must be specified in the order shown in the following table.

Subordinate Element Description
name The name of the service. You must provide a name, and it can be any name you choose.
description An optional description of the service.
accept The URLs on which the service accepts connections.
connect The URL of a back-end service or message broker to which the proxy service or broadcast service connects.
balance The URI that is balanced by a balancer service. See balancer service for details.
type The type of service. One of the following: amqp.proxy, balancer, broadcast, directory, echo, http.proxy, ibmmq, jms, jms.proxy, kerberos5.proxy, management.jmx, proxy, turn.proxy, turn.rest, or update.check.
properties The service type-specific properties.
accept-options Options for the accept element. See accept-options.
connect-options Options for the connect element. See connect-options.
realm-name The name of the security realm used for authorization. If you do not include a realm name, then authentication and authorization are not enforced for the service.
authorization-constraint The user roles that are authorized to access the service. See authorization-constraint.
mime-mapping Mappings of file extensions to MIME types. Each mime-mapping entry defines the HTTP Content-Type header value to be returned when a client or browser requests a file that ends with the specified extension. See mime-mapping.
cross-site-constraint The cross-origin sites (and their methods and custom headers) allowed to access the service. See cross-site-constraint.

Supported URL Schemes:

When specifying URLs for the accept or connect elements, you can use tcp://{hostname}:{port} to make a basic TCP connection, or specify any of the supported protocol schemes:

Notes:

accept

Required? Yes; Occurs: At least once.

The URLs on which the service accepts connections (see Supported URL schemes).

Example

<service>
  <accept>ws://balancer.example.com:8081/echo</accept>
  .
  .
  .
</service>

Notes

connect

Required? Yes; Occurs: At least once.

The URL of a back-end service or message broker to which the proxy service (for example, proxy or amqp.proxy service) or broadcast service connects (see Supported URL schemes).

Example

<service>
 <accept>ws://example.com</accept>
 <connect>tcp://192.0.2.11:5943</connect>
...
</service>

Notes

balance

Required? Optional; Occurs: zero or more.

A URI that matches the accept URI in a balancer service. The balance element is added to a service in order for that service to be load balanced between cluster members.

Example

The following example shows a Gateway with a balancer service and an Echo service that contains a balance element. Note that the accept URI in the balancer service matches the balance URI in the Echo service.

Balancer Service

<service>
  <name>balancer service</name>
  <accept>ws://balancer.example.com:8081/echo</accept>

  <type>balancer</type>

  <accept-options>
    <ws.bind>192.168.2.10:8081</ws.bind>
  </accept-options>

</service>

Gateway Service Participating in Load Balancing

<service>
  <name>Echo service</name>
  <accept>ws://node1.example.com:8081/echo</accept>
  <balance>ws://balancer.example.com:8081/echo</balance>

  <type>echo</type>

  <cross-site-constraint>
    <allow-origin>http://directory.example.com:8080</allow-origin>
  </cross-site-constraint>
</service>

Notes

type

The type of service. For each service that you configure, specify one of the service types in the following table to customize the Gateway for your environment.

Type of Service Description
balancer Configures load balancing using either the built-in load balancing features of the Gateway or a third-party load balancer. When you configure a balancer service, the Gateway balances load requests for any other Gateway service type. Services running on Kaazing Gateway support peer load balancer awareness with the balance element for a cluster of Gateways. See the Configure the Gateway for High Availability topic that describes Gateway clusters and load balancing in detail.
broadcast Configures the Gateway to accept connections initiated by the back-end server or broker and broadcast (or relay) messages that are sent along that connection to clients.
directory Specifies the directory path of your static files relative to GATEWAY_HOME/web, where GATEWAY_HOME is the directory where you installed Kaazing Gateway. Note: An absolute path cannot be specified.
echo Receives a string of characters through a WebSocket and returns the same characters to the sender. The service echoes any input. This service is used primarily for validating the basic Gateway configuration. The echo service runs on a separate port to verify cross-origin access.
kerberos5.proxy Connects the Gateway to the Kerberos Key Distribution Center.
management.jmx Tracks and monitors user sessions and configuration data using JMX Managed Beans.
amqp.proxy Enables the use of the Advanced Message Queuing Protocol (AMQP), an open standard for messaging middleware originally designed by the financial services industry to provide an interoperable protocol for managing the flow of enterprise messages. To guarantee messaging interoperability, AMQP defines both a wire-level protocol and a model, the AMQP Model, of messaging capabilities. An example of a message broker that provides built-in support for AMQP is RabbitMQ.
proxy Enables a client to make a WebSocket connection to a back-end server or broker that cannot natively accept WebSocket connections.
jms Allows you to configure the Gateway to connect to any back-end JMS-compliant message broker. The jms service offloads connections and topic subscriptions using a single connection between the Gateway and your JMS-compliant message broker.
jms.proxy Establishes a connection between the Gateway and the next Gateway for each client connection. The benefit of using the jms.proxy service is that you can control security independently per connection and fail fast when a user does not authenticate correctly. In addition, in Enterprise Shield™ configurations, delta messages can be passed through from the jms service on the internal Gateway through a DMZ Gateway that is running the jms.proxy service.

balancer

Use the balancer service to balance load for requests for any other Gateway service type.

Example

The following example shows a Gateway with a balancer service and an Echo service that contains a balance element. Note that the accept URI in the balancer service matches the balance URI in the Echo service.

Balancer Service

<service>
  <accept>ws://balancer.example.com:8081/echo</accept>

  <type>balancer</type>

  <accept-options>
    <ws.bind>192.168.2.10:8081</ws.bind>
  </accept-options>

  <cross-site-constraint>
    <allow-origin>http://directory.example.com:8080</allow-origin>
  </cross-site-constraint>
</service>

Gateway Service Participating in Load Balancing

<service>
  <name>Echo service</name>
  <accept>ws://node1.example.com:8081/echo</accept>
  <balance>ws://balancer.example.com:8081/echo</balance>

  <type>echo</type>

  <cross-site-constraint>
    <allow-origin>http://directory.example.com:8080</allow-origin>
  </cross-site-constraint>
</service>

Notes

broadcast

Use the broadcast service to relay information from a back-end service or message broker. The broadcast service has the following property:

Property Description
accept The URL of the broadcast service to which a back-end service or message broker connects.

Examples

<service>
  <name>broadcast service</name>
  <accept>sse://localhost:8000/sse</accept>
  <accept>sse+ssl://localhost:9000/sse</accept>

  <type>broadcast</type>

  <properties>
    <accept>udp://localhost:50505</accept>
  </properties>

  <cross-site-constraint>
    <allow-origin>http://localhost:8000</allow-origin>
  </cross-site-constraint>

  <cross-site-constraint>
    <allow-origin>https://localhost:9000</allow-origin>
  </cross-site-constraint>
</service>
<service>
  <name>broadcast service</name>
  <accept>sse://www.example.com:8000/sse</accept>
  <accept>sse+ssl://www.example.com:9000/sse</accept>
  <connect>tcp://news.example.com:50505/</connect>
  <type>broadcast</type>

  <cross-site-constraint>
    <allow-origin>http://www.example.com:8000</allow-origin>
  </cross-site-constraint>
  <cross-site-constraint>
    <allow-origin>https://www.example.com:9000</allow-origin>
  </cross-site-constraint>
</service>

directory

Use the directory service to expose directories or files hosted on the Gateway. The directory service has the following properties:

Note: The properties must be specified in the order shown in the following table.

Property Required or Optional? Description
directory Required The path to the directory to be exposed on the Gateway.
options Optional Enables directory browsing of the files and folders in the location specified by the directory property. To enable directory browsing, enter the value indexes, for example, <options>indexes</options>. Omitting the options property disables directory browsing. If a browsed directory contains the configured welcome-file, the welcome file is served instead.
welcome-file Optional The default file served when a client requests the directory (for example, index.html).
error-pages-directory Optional The path to the directory containing the 404.html file. By default, the Gateway includes a 404.html file in GATEWAY_HOME/error-pages. See the Notes for more information.
location Optional Add this element to configure Cache-Control for resources hosted by the directory service. location scopes the cache-control setting to specific resources, enabling you to specify different Cache-Control settings for the locations served by the directory service. The patterns and cache-control elements are child elements of location. See HTTP Caching with the Directory Service and Cache-Control Examples.
patterns Optional Add this element to tell the Gateway which file types and names to apply Cache-Control directives to. For example, **/* specifies all files and **/*.html specifies HTML files. The patterns element uses the Apache DirectoryScanner syntax. Note: Because patterns applies to a URL path, the separator is / not \. The patterns element contains one or more whitespace-separated patterns. See HTTP Caching with the Directory Service and Cache-Control Examples.
cache-control Optional Add this element to configure the caching behavior for the resources matched by the patterns value. The syntax is <cache-control>value</cache-control>, where value is a string containing valid Cache-Control directives, as specified by RFC 7234, section 5.2.2 Response Cache-Control Directives. The directive names used in cache-control must be those specified in RFC 7234, such as max-age, public, no-cache, and so on, and they must be comma-separated as specified in RFC 7234. Example: <cache-control>max-age=60, public, no-store</cache-control>. See HTTP Caching with the Directory Service and Cache-Control Examples.
symbolic-links Optional Enables the use of symbolic links in the directory specified by the directory property. To enable symbolic links, use the value <symbolic-links>follow</symbolic-links>. If symbolic-links is omitted, symbolic links are not allowed. You can also disable symbolic links explicitly by using <symbolic-links>restricted</symbolic-links>. This setting does not affect a symbolic link that points to a location under the value of directory; it affects only a symbolic link that points to a target outside the value of directory. See the sketch after this table for an example.
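
The following minimal sketch shows a directory service that follows symbolic links. The URI and file names are illustrative only; the property values come from the table above.

<service>
  <accept>http://localhost:8000/</accept>

  <type>directory</type>

  <properties>
    <directory>/</directory>
    <welcome-file>index.html</welcome-file>
    <symbolic-links>follow</symbolic-links>
  </properties>
</service>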

HTTP Caching with the Directory Service

HTTP/1.0 provided the Expires header as a simple way for an origin server to mark a response with a time before which a client cache could validly return the response. The server might then either respond with a 304 (Not Modified) status code, implying that the cache entry is valid, or it might send a normal 200 (OK) response to replace the cache entry. The problem with this mechanism is that neither origin servers nor clients can give full and explicit instructions to caches, leading to incorrect caching of some responses that should not have been cached, and to failure to cache some responses that could have been cached.

In RFC 7234, HTTP/1.1 adds the new `Cache-Control` header to make caching requirements more explicit than in HTTP/1.0. The Cache-Control header allows an extensible set of directives to be transmitted in both requests and responses. See Cache-Control Examples.

Here are a few examples of Cache-Control directives that are used in the cache-control property:

max-age

You use this Cache-Control directive to state that the resource included in the response is to be considered stale after its age is greater than the specified number of seconds. For example, <cache-control>max-age=0</cache-control> means the resource expires immediately.

You can specify your preferred time interval syntax in milliseconds, seconds, minutes, or hours (spelled out or abbreviated). For example, all of the following are valid: 1800s, 1800sec, 1800 secs, 1800 seconds, 1800seconds, 3m, 3min, or 3 minutes. If you do not specify a time unit then seconds are assumed.

You can also use the formula [m+]N [time unit]. m represents the last modified date. N can be 0 or a positive integer. 0 signifies the resource expires immediately. [time unit] is optional, and defaults to seconds if omitted. For example: max-age=m+2 hours instructs the client to expire the file two hours after the file was last modified.

There is an important difference between expiry of a file based on when the client downloaded the resource and expiry based on the last modified date of the resource. Here is the distinction:

See Cache-Control Examples.

public

A Cache-Control directive stating that any cache may store the response, even if the response would normally be non-cacheable or cacheable only within a private cache. See Cache-Control Examples.

Note that this does not guarantee privacy. Only cryptographic mechanisms can provide true privacy. See Secure Network Traffic with the Gateway.

no-store

A Cache-Control directive stating that a cache must not store any part of either the immediate request or response. This directive applies to both private and shared caches. See Cache-Control Examples.

Cache-Control Examples

Directory Service Example

The following is an example of a service element of type directory that accepts connections on www.example.com:

<service>
  <accept>http://www.example.com:80/</accept>
  <accept>https://www.example.com:443/</accept>

  <type>directory</type>

  <properties>
    <directory>/</directory>
    <welcome-file>index.html</welcome-file>
    <error-pages-directory>/error-pages</error-pages-directory>
    <location>
      <patterns>**/*</patterns>
      <cache-control>max-age=1 year</cache-control>
    </location>
  </properties>
</service>
Cache-Control Examples

You can use any valid Cache-Control directives, as specified by RFC 7234, section 5.2.2 Response Cache-Control Directives. In the following examples, only a few of the available directives are used.

Examples for max-age:

The resources expire immediately and the client MUST validate the resource with the Gateway before using it:

<location>
  <patterns>**/*</patterns>
  <cache-control>max-age=0</cache-control>
</location>

Do not cache the resource at all, but validate the response with the Gateway before processing it (see RFC 7234, section 5.2.2.2 no-cache):

<cache-control>no-cache</cache-control>

The resources expire after one minute:

<cache-control>max-age=60</cache-control>

or

<cache-control>max-age=60 seconds</cache-control>

or

<cache-control>max-age=1 minute</cache-control>

The resources expire after one hour:

<cache-control>max-age=1 hour</cache-control>

or

<cache-control>max-age=60 minutes</cache-control>

The following resources expire in one year:

<cache-control>max-age=1 year</cache-control>

Expire two hours after the file was last modified (note that whitespace is allowed between m+ and N, and also between the m and the + of m+. The m can be upper or lowercase):

<cache-control>max-age=m+2 hours</cache-control>

or

<cache-control>max-age=m+ 2 hours</cache-control>

or

<cache-control>max-age=m + 2 hours</cache-control>

Examples of pattern matching for Cache-Control

You can configure a file type pattern for the Gateway to use with cache control. Here are some examples.

Files in specific folders should be cached for a year:

<properties>
  <location>
    <patterns>
      **/images/*
      **/css/*
      **/js/*
    </patterns>
    <cache-control>max-age=1 year</cache-control>
  </location>
</properties>

Images should be cached for a year and HTML files expire immediately:

<properties>
  <!-- Default all files to a far future expires -->
  <location>
    <patterns>**/*.gif</patterns>
    <cache-control>max-age=1 year</cache-control>
  </location>

  <!-- Always reload HTML files -->
  <location>
    <patterns>**/*.html</patterns>
    <cache-control>max-age=0</cache-control>
  </location>
</properties>

Here is an example of conflicting pattern matching, where the pattern for the first location appears to conflict with the pattern matching of the second location:

<properties>
  <!-- Default all files to a far future expires -->
  <location>
    <patterns>**/*</patterns>
    <cache-control>max-age=1 year</cache-control>
  </location>

  <!-- Always reload HTML files -->
  <location>
    <patterns>**/*.html</patterns>
    <cache-control>no-cache</cache-control>
  </location>
</properties>

There is no validation of conflicting directives on the Gateway. Recipients of the HTTP response can handle the headers as required by RFC 7234, or as they choose.

Directory Service Examples

<service>
  <accept>http://localhost:8000/</accept>
  <accept>https://localhost:9000/</accept>

  <type>directory</type>
  <properties>
    <directory>/</directory>
    <welcome-file>index.md</welcome-file>
    <error-pages-directory>/error-pages</error-pages-directory>
  </properties>
</service>

Notes

echo

This type of service receives a string of characters through a WebSocket connection and returns, or echoes, the same characters to the sender.

Example

<service>
  <name>Echo service</name>
  <accept>ws://localhost:8001/echo</accept>
  <accept>wss://localhost:9001/echo</accept>

  <type>echo</type>

  <cross-site-constraint>
    <allow-origin>http://localhost:8000</allow-origin>
  </cross-site-constraint>

  <cross-site-constraint>
    <allow-origin>https://localhost:9000</allow-origin>
  </cross-site-constraint>
</service>

Notes

kerberos5.proxy

Use the kerberos5.proxy service to connect the Gateway to the Kerberos Key Distribution Center.

Example

<service>
  <accept>ws://localhost:8000/kerberos5</accept>
  <connect>tcp://kdc.example.com:88</connect>

  <type>kerberos5.proxy</type>

  <cross-site-constraint>
    <allow-origin>*</allow-origin>
  </cross-site-constraint>
</service>

Notes

management.jmx

Use the management.jmx service type to track and monitor user sessions and configuration data using JMX Managed Beans.

Example

<service>
  <name>JMX Management</name>
  <description>JMX Management Service</description>

  <type>management.jmx</type>

  <properties>
    <connector.server.address>jmx://${gateway.hostname}:2020/</connector.server.address>
  </properties>

  <!-- secure monitoring using a security realm -->
  <realm-name>demo</realm-name>

  <!-- configure the authorized user roles -->
  <authorization-constraint>
    <require-role>ADMINISTRATOR</require-role>
  </authorization-constraint>
</service>

Notes

http.proxy

The http.proxy service includes properties and functionality for eliminating the public exposure of internal network and host information.

Before reviewing the contents of this section, we recommend that you review HTTP headers.

This section includes:

Note: Both accept and connect URIs must end with a forward slash (http://www.example.com/path/) or an error will be thrown at startup. If the URI does not have a path (http://www.example.com), then both the accept and connect URIs can end without a forward slash (/). Web browsers will automatically append the forward slash to the URI (http://www.example.com/).

Replacing the Host in the Header

When proxying client requests to backend servers, the http.proxy service replaces the host name in the HTTP Host request header with the host name in the connect URI.

For example, in the following http.proxy configuration, the Host name in the client HTTP request header will be replaced with corp.example.com before sending the request to the backend server.

Client request Host header:

GET /example.html HTTP/1.1
Host: example.com
...

Gateway http.proxy service:

<service>
 ...
 <accept>http://example.com/example.html/</accept>
 <connect>http://corp.example.com/</connect>
 <type>http.proxy</type>
  ...
</service>

Request Host header sent to backend server by the Gateway:

GET /example.html HTTP/1.1
Host: corp.example.com
...

Prepending Paths

For the http.proxy service, the path from the connect element is prepended to the path of the incoming request. For example, consider the following http.proxy service configuration:

<service>
 ...
 <accept>http://maven.example.com/</accept>
 <connect>http://artifactory.example.com/artifactory/public-repository/</connect>
 <type>http.proxy</type>
  ...
</service>

In this example, a client request for http://maven.example.com/ is proxied to http://artifactory.example.com/artifactory/public-repository/. Likewise, for a request for http://maven.example.com/corp/index.html, the connect path is prepended to the request path and the request is proxied to http://artifactory.example.com/artifactory/public-repository/corp/index.html. To prevent attackers from accessing other URLs, any request that uses the parent directory symbol ../ in a way that would take the logical path above the connect URL path returns a 404 error. For example, the request http://maven.example.com/../private-repository/corp/super-secret-key returns a 404 error.

Note: If an accept URI ends with a forward slash (http://maven.example.com/), the connect URI must end with a forward slash (http://artifactory.example.com/artifactory/public-repository/) or an error will be thrown at startup. If an accept URI does not end with a forward slash, the connect URI must not end with a forward slash or an error will be thrown at startup.

Redirection Properties

The following properties enable you to control the internal information exposed as part of HTTP redirects.

http.maximum.redirects

Required? Optional

If set, http.maximum.redirects configures the Gateway to follow HTTP 302 redirects when it is connecting to a server. The number specified for http.maximum.redirects is the maximum number of HTTP 302 redirects the Gateway will follow before returning the 302 directly to the client. This connect-options element is useful for any service where the Gateway is using HTTP on an outbound connection, such as http.proxy using HTTP or WS in a connect.

Example

<service>
 <accept>http://${gateway.hostname}:${gateway.base.port}/remoteService</accept>
 <connect>http://internal.example.com:1080</connect>

 <type>http.proxy</type>

 <connect-options>
  <http.maximum.redirects>3</http.maximum.redirects>
  ...
 </connect-options>

...
</service>
rewrite-location

Required? Optional

Sets the http.proxy service to rewrite the HTTP Location headers of redirect responses. There are two values, enabled and disabled. The default is enabled. When enabled, rewrite-location uses the location mapping configured in location-mapping.

location-mapping

Required? Optional

When rewrite-location is set to enabled, location-mapping sets the proxy service mapping configuration for rewriting the HTTP Location headers. You can specify location-mapping multiple times. For example:

<service>
   ...
   <type>http.proxy</type>
   <accept>http://example.com:8000/foo/</accept>
   <connect>http://internal.example.com:7233/bar/</connect>
   
   <properties>
      <rewrite-location>enabled</rewrite-location>
      <location-mapping>
          <from>http://localhost:8080/</from>
          <to>http://localhost:8110/</to>
      </location-mapping>
      <location-mapping>
          <from>http://internal.example.com:7233/bar/</from>
          <to>http://example.com:8000/foo/</to>
      </location-mapping>
      ...
   </properties>
...
</service>

In this example, if the response code is 301 or 302 and the server returns the Location http://internal.example.com:7233/bar/index.html, the Gateway rewrites the Location as http://example.com:8000/foo/index.html.

Forwarding Properties

The following property enables you to control the internal information exposed as part of HTTP forwarding.

use-forwarded

Required? Optional

You can use use-forwarded in the http.proxy service to modify the Forwarded HTTP header fields in a client request sent to the backend server. This is a security precaution. It prevents internal host information from being sent to external clients in the HTTP response.

This setting accepts the following values:

Cookie Properties

The following properties help you control the Domain attribute of the Set-Cookie header in responses to clients.

rewrite-cookie-domain

Required? Optional

Sets the proxy service to rewrite the Domain attribute of the Set-Cookie header in responses. There are two values, enabled and disabled. The default is disabled. When the value is enabled, the Domain attribute is rewritten using the mapping in cookie-domain-mapping.

cookie-domain-mapping

Required? Optional

When rewrite-cookie-domain is set to enabled, cookie-domain-mapping sets the mapping configuration for rewriting the Domain attribute of the Set-Cookie header. cookie-domain-mapping can be specified multiple times. The proxy service uses the first matching mapping to rewrite the Domain attribute. For example:

<service>
   ...
   <type>http.proxy</type>
   <accept>http://example.com:8000/foo/</accept>
   <connect>http://internal.example.com:7233/bar/</connect>
   <properties>
      <rewrite-cookie-domain>enabled</rewrite-cookie-domain>
      <cookie-domain-mapping>
          <from>internal.example.com</from>
          <to>example.com</to>
      </cookie-domain-mapping>
      ...
   </properties>
   ...
</service>

Using the example above, if the backend server returned Domain=internal.example.com, the Gateway will rewrite the Domain attribute in the Set-Cookie header as Domain=example.com.

Using http.proxy with Enterprise Shield

For information on using http.proxy with Enterprise Shield, see Use Case 4: Using http.proxy with Enterprise Shield.

proxy, amqp.proxy, and jms.proxy

Use the proxy, amqp.proxy, or jms.proxy service to enable a client to make a WebSocket connection to a back-end service or message broker that cannot natively accept WebSocket connections.

The following descriptions will help you understand when and how to configure properties for the proxy service and amqp.proxy service. See the jms.proxy reference for details about that feature.

connect.strategy

Required? Optional

Creates either an immediate, prepared, or deferred connection with a backend service. By default, a proxy connection to the backend service is initiated when a new client connection request arrives. You can use connect.strategy to change this default behavior.

There are three allowed values: immediate, prepared, and deferred.
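
The following minimal sketch shows a proxy service that prepares back-end connections before clients connect. The hostnames and the prepared.connection.count value are illustrative, and the sketch assumes connect.strategy is set alongside the other service properties described in this section.

<service>
  <name>proxy service</name>
  <accept>ws://www.example.com:80/service</accept>
  <connect>tcp://internal.example.com:port-number</connect>

  <type>proxy</type>

  <properties>
    <connect.strategy>prepared</connect.strategy>
    <prepared.connection.count>10</prepared.connection.count>
  </properties>
</service>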

maximum.pending.bytes

Required? Optional

The size of data the service buffers for one client connection before slowing incoming and outgoing data. The value must be a positive integer, either with no unit or appended with kB or MB (the unit is case insensitive) to indicate kilobytes or megabytes. If no unit is specified, the default unit is bytes. If you do not specify this property, its default value is 64kB. For example, 128kB and 1MB are valid values.

The Gateway uses this buffer when the speed of the data coming into the service is faster than the speed of the data being consumed by the receiving end, which is either the client or the back-end service or message broker. The buffer stores the data up to the limit you specify in this property per client connection, then slows the incoming data, until either the client or the back-end service or message broker (whichever is consuming the data) has consumed more than 50% of the outgoing data flow.

For example, suppose you set this property to 128kB. If the back-end service or message broker sends 256kB of data to a client and the client has only consumed 128kB, the remaining 128kB (the limit you set in the property) is buffered. At this time, the Gateway suspends reading the data from the back-end service or message broker; as the client consumes the buffered data, the size of the buffered data decreases. When the buffered data falls below 64kB, the Gateway resumes reading the data from the back-end service or message broker.

maximum.recovery.interval

Required? Optional

The maximum interval (in seconds) between attempts of the service to establish a connection to the back-end service or message broker specified by the connect element.

If the back-end service or message broker becomes unavailable or the Gateway cannot establish a connection to it, the Gateway triggers a recovery. The Gateway attempts to reestablish a connection to the back-end service or message broker. Initially, the interval between attempts is short, but grows until it reaches the specified value. From that point on, the Gateway attempts to reestablish a connection only at the interval specified.

During this recovery phase, the Gateway unbinds the service, and clients attempting to connect to this service receive a “404 Not Found” error. Once the back-end service or message broker recovers and the Gateway establishes a connection, the Gateway binds the service and clients can connect to it. See the “Examples” section below for a code snippet using this property.

prepared.connection.count

Required? Optional

You may use this property in either of the following use cases:

virtual.host

Required? Optional

Specifies the AMQP virtual host to which the Gateway can proxy clients that are connected to this service.

After the Gateway authenticates the client, the virtual host is injected into the AMQP protocol and messages can be exchanged. This ensures the target virtual host comes from a validated and trusted source such as the Gateway, rather than relying on what is set by the client, which can be manipulated.

You may choose to configure a virtual host when you want to:

See the “Examples” section below for a code snippet using this property.

proxy, amqp.proxy Service Examples

For jms.proxy examples, see jms.proxy.

<service>
  <name>proxy service</name>
  <accept>ws://${gateway.hostname}:${gateway.base.port}/remoteService</accept>
  <accept>wss://${gateway.hostname}:${gateway.base.port}/remoteService</accept>
  <connect>tcp://internal.example.com:port-number</connect>

  <type>proxy</type>

  <properties>
    <maximum.pending.bytes>128kB</maximum.pending.bytes>
    <maximum.recovery.interval>30</maximum.recovery.interval>
    <prepared.connection.count>10</prepared.connection.count>
  </properties>

  <cross-site-constraint>
    <allow-origin>http://${gateway.hostname}:${gateway.base.port}</allow-origin>
  </cross-site-constraint>
  <cross-site-constraint>
    <allow-origin>https://${gateway.hostname}:${gateway.base.port}</allow-origin>
  </cross-site-constraint>
</service>
<service>
  <name>proxy service</name>
  <accept>ws://www.example.com:80/service</accept>
  <connect>tcp://internal.example.com:port-number</connect>

  <type>proxy</type>

</service>
<!-- This service connects only to the AMQP virtual host vhost1. -->
<service>
  <name>AMQP proxy service</name>
  <accept>ws://${gateway.hostname}:${gateway.port}/app1</accept>
  <connect>pipe://vhost1</connect>

  <type>amqp.proxy</type>

  <properties>
    <virtual.host>/vhost1</virtual.host>
  </properties>
</service>

  <!-- This service connects only to the AMQP virtual host vhost2.-->
<service>
  <accept>ws://${gateway.hostname}:${gateway.port}/app2</accept>
  <connect>pipe://vhost2</connect>

  <type>amqp.proxy</type>

  <properties>
    <virtual.host>/vhost2</virtual.host>
  </properties>
</service>

  <!-- Proxy service accepts on named pipes to connect to the AMQP broker.-->
<service>
  <accept>pipe://vhost1</accept>
  <accept>pipe://vhost2</accept>
  <connect>tcp://${gateway.hostname}:5672</connect>

  <type>proxy</type>

</service>

Notes

turn.proxy

Use the turn.proxy service to connect WebRTC clients and peers with a TURN server while protecting the TURN server’s information from public exposure.

The TURN server allows a host behind a NAT (the TURN client) to request that the TURN server act as a relay for connections to a peer in a WebRTC session. When a peer sends a packet to the relay address, the server relays the packet to the client. When the client sends a data packet to the server, the server relays it to the appropriate peer using the relayed transport address as the source.

If desired, the turn.proxy service on the Gateway can mask the relay addresses and TURN server address to ensure that private addresses are not exposed.

Property Required? Description
key.alias Required The alias of the entry in the Gateway keystore that contains the shared secret.
key.algorithm Optional The algorithm used for the HMAC computation of the password using the username and the shared secret. If this property is omitted the default value is HmacSHA1.
mapped.address Optional The IP address and port that overrides the STUN MAPPED-ADDRESS attribute and is xored with the magic cookie attribute XOR-MAPPED-ADDRESS. If this property is omitted, the STUN address value in the transport is used. Setting mapped.address to an invalid URI ensures that local network characteristics are not shared with connecting peers.
masking.key Optional This XOR cipher key is used to mask the outgoing XOR-RELAYED-ADDRESS and unmask the XOR-PEER-ADDRESS in order to protect the TURN server's IP addresses from exposure. For addresses in the form IPv4:PORT, the mask is performed by XORing each byte of the expression with each byte of the IPv4 address. For IPv6:PORT, the mask is repeated 4 times and each byte of the resulting expression is XORed with each byte of the address. The masking.key format is 0x%08X, for example, 0x01100011.

Example:

<service>
  <name>turn.proxy</name>
  <description>TURN Proxy Service</description>
  <!--
  enter the hostname and port from the url property in turn.rest,
  without the transport suffix
  -->
  <accept>tcp://${gateway.hostname}:22000</accept>    
  <!-- enter the URI for the TURN server -->
  <connect>tcp://coturn:3478</connect>

  <type>turn.proxy</type>

  <properties>
    <!-- the alias used when adding the shared key password to the keystore -->
    <key.alias>turnshared</key.alias>
    <!-- the mapped.address used by STUN -->
    <mapped.address>192.0.2.15:3478</mapped.address>
  </properties>
</service>
Notes:

turn.rest

Use the turn.rest service to enable WebRTC clients and peers to locate a TURN server, and to ensure that the ephemeral TURN credentials required by the TURN server are shared between the Gateway and the TURN server.

The REST API for TURN Services is a standard API for obtaining access to TURN services via ephemeral (time-limited) credentials. The credentials are stored in the keystore of the Gateway, sent to the TURN server over TCP, and checked by the TURN server using the standard TURN protocol.

The use of ephemeral credentials ensures that access to the TURN server can be controlled even if the credentials are discovered by the user or a man-in-the-middle, as is the case in WebRTC, where TURN credentials must be specified in JavaScript.

Note: While traditional TURN authentication requires double-encrypted transports (one for authentication and one for the end-to-end connection) the turn.rest service does not have this performance drawback.

Also, the benefit of using the turn.rest service is that you can control security independently per connection, and enable a fail-fast when a user fails to authenticate correctly.

Property Required? Description
key.alias Required The alias attached to the shared key from the TURN server. The alias is used by the turn.proxy and turn.rest services to obtain access to the key.
key.algorithm Required The algorithm used to encrypt the shared key credentials. The default is HmacSHA1.
credentials.generator Required The credential generator class used to generate the credentials. You may implement and reference your own class. Kaazing includes a class that integrates with coTURN named org.kaazing.gateway.service.turn.rest.DefaultCredentialsGenerator.
credentials.ttl Optional The time for which the credentials are valid. For more information, see the Response section of the TURN REST API. The default value is 86400 seconds. You can specify your TTL syntax in milliseconds, seconds, minutes, or hours (spelled out or abbreviated). For example, all of the following are valid: 1800s, 1800sec, 1800 secs, 1800 seconds, 1800seconds, 3m, 3min, or 3 minutes. If you do not specify a time unit then seconds are assumed.
username.separator Required The separator used in the one-time-use credentials; it defaults to :. When setting up coTURN, the separator is specified with the parameter --rest-api-separator=:.
url Required The URI used to contact the turn.proxy service. The URI entered here should correspond to the URI in the turn.proxy accept element, with the suffix ?transport=scheme where the scheme can be tcp or udp. For example, if the turn.rest url is <url>turn:${gateway.hostname}:22000?transport=tcp</url>, then the turn.proxy accept is <accept>tcp://${gateway.hostname}:22000</accept>. If the scheme is omitted, both TCP and UDP are tried. When you use the turn.rest service, the Gateway does attempt to use STUN before TURN per the ICE framework, but if you want to use an open source STUN server instead of the Gateway turn.rest service, the value for url is (note the omitted scheme): <url>stun:${gateway.hostname}:22000</url>.

Example:

<service>
  <name>turn.rest</name>
  <description>TURN Rest Service</description>
  <!-- ensure that HTTPS and the turn.rest suffix are used -->
  <accept>https://${gateway.hostname}:18032/turn.rest</accept>

  <type>turn.rest</type>

  <properties>
    <!-- the alias used when adding the shared key password to the keystore  -->
    <key.alias>turnshared</key.alias>
    <!-- enter the algorithm used to encrypt the credentials -->
    <key.algorithm>HmacSHA1</key.algorithm>
    <!-- enter the credential generator class used to generate the credentials -->
    <credentials.generator>class:org.kaazing.gateway.service.turn.rest.DefaultCredentialsGenerator</credentials.generator>
    <!-- enter the time the credentials are valid for -->
    <credentials.ttl>22400</credentials.ttl>            
    <!-- enter the username separator used in the credentials -->
    <username.separator>:</username.separator>
    <!-- 
    enter the hostname and port from the turn.proxy service accept URI,
    and include the transport suffix
    -->
    <url>turn:${gateway.hostname}:22000?transport=tcp</url>        
  </properties>

  <!-- specify a real name for the security realm used -->
  <realm-name>demo</realm-name>    
  <!-- restrict cross site constraints before running in production -->
  <authorization-constraint>
    <require-role>AUTHORIZED</require-role>
  </authorization-constraint>

  <!-- for testing, allow any origin -->
  <cross-site-constraint>
    <allow-origin>*</allow-origin>
  </cross-site-constraint>
</service>
Note: For information on configuring the turn.rest service as part of a WebRTC deployment, see Deploy WebRTC using the Gateway.

update.check

The Update Check service checks whether new versions of the Gateway are available when the Gateway is started. New versions include patches, minor releases, and major releases. The Update Check service is enabled by default.

Example

The following XML is the complete configuration for the service. No other entries are valid.

<service>
 <name>Update Checker</name>
 <description>Checks to see if newer versions of the Gateway are available</description>
 <type>update.check</type>
</service>
Notes:

Pipe scheme

The pipe:// scheme is a URI scheme internal to the Gateway and used to connect one service to another service running in the same Gateway instance. Essentially, the pipe scheme is a named, logical channel between two services on the local Gateway.

The format of the pipe:// scheme is pipe://string, such as pipe://jms-common. The URI must conform to the standard URI syntax. Any values entered after the pipe:// scheme and string, such as a path, are invalid. The URI <accept>pipe://customera/app1</accept> is invalid. If a path is used, the Gateway will respond with an error message.

The pipe:// scheme is available to the accept and connect elements. It is often used with Enterprise Shield, with the virtual.host property (to segregate applications using the same AMQP broker), and with protocol.transport (as pipe.transport).

Let’s look at an example using tiered connection speeds.

You could offer different connection speeds by defining a separate jms.proxy service for each tier and then pipe the client connections into a single jms service. Here’s an example of the configuration for the jms.proxy service for the bottom tier:

<service>
  <name>JMS Minimum Level</name> <!-- the name of the service -->
  <accept>ws://example.com:8001/jms-slow</accept> <!-- clients in the bottom tier connect using this URI -->
  <connect>pipe://jms-common</connect> <!-- connections are piped to the JMS service with the accept pipe://jms-common -->

  <type>jms.proxy</type> <!-- this jms.proxy service is used for the bottom tier only -->

  <accept-options>
    <tcp.maximum.outbound.rate>1kB/s</tcp.maximum.outbound.rate> <!-- this element sets the outbound speed for the bottom tier -->
  </accept-options>

  <cross-site-constraint>
    <allow-origin>*</allow-origin>
  </cross-site-constraint>
</service>

Here’s an example of the configuration for the jms.proxy service for the middle tier:

<service>
  <name>JMS Medium Level</name> <!-- the name of the service -->
  <accept>ws://example.com:8001/jms-medium</accept> <!-- clients in the middle tier connect using this URI -->
  <connect>pipe://jms-common</connect> <!-- connections are piped to the JMS service with the accept pipe://jms-common -->

  <type>jms.proxy</type> <!-- this jms.proxy service is used for the middle tier only -->

  <accept-options>
    <tcp.maximum.outbound.rate>20MB/s</tcp.maximum.outbound.rate> <!-- this element sets the outbound speed for the middle tier -->
  </accept-options>

  <cross-site-constraint>
    <allow-origin>*</allow-origin>
  </cross-site-constraint>
</service>

To accept the pipes from the JMS Minimum and Medium Level jms.proxy services, the JMS service has a <accept>pipe://jms-common</accept> in its configuration:

<service>
  <name>JMS Common</name> <!-- the name of the service -->
  <accept>pipe://jms-common</accept> <!-- matches the pipe URI in the connect elements of the jms.proxy services -->
  <accept>ws://example.com:8001/jms</accept> <!-- normal, non-tiered connections will connect here -->

  <type>jms</type>

  <properties>
    <connection.factory.name>ConnectionFactory</connection.factory.name>
    <context.lookup.topic.format>dynamicTopics/%s</context.lookup.topic.format>
    <context.lookup.queue.format>dynamicQueues/%s</context.lookup.queue.format>

    <env.java.naming.factory.initial>
      org.apache.activemq.jndi.ActiveMQInitialContextFactory
    </env.java.naming.factory.initial>
    <env.java.naming.provider.url>
      tcp://localhost:61616
    </env.java.naming.provider.url>
  </properties>

  <cross-site-constraint>
    <allow-origin>*</allow-origin>
  </cross-site-constraint>
</service>

properties

The service’s type-specific properties.

Example

<service>
  <accept>http://${gateway.hostname}:${gateway.extras.port}/</accept>

  <type>directory</type>

  <properties>
    <directory>/</directory>
    <welcome-file>index.md</welcome-file>
  </properties>
    .
    .
    .
</service>

Notes

accept-options and connect-options

Required? Optional; Occurs: zero or one

Use the accept-options element to add options to all accepts for the service (see also the accept element).

You can configure accept-options on the service or the service-defaults elements:

Use the connect-options element to add options to all connections for the service (see also the connect element).

Option accept-options connect-options Description
protocol.bind yes no Binds the URL(s) on which the service accepts connections (defined by the accept element). Set protocol to one of the following: ws, wss, http, https, ssl, tcp, udp. See protocol.bind.
protocol.transport yes yes Specifies the URI for use as a transport layer (defined by the accept element). Set protocol.transport to any of the following: http.transport, ssl.transport, tcp.transport, pipe.transport. See protocol.transport.
ws.maximum.message.size yes no Specifies the maximum incoming WebSocket message size allowed by the Gateway. See ws.maximum.message.size.
http.keepalive no yes Enables or disables HTTP keep-alive (persistent) connections, allowing you to reuse the same TCP connection for multiple HTTP requests or responses. This improves HTTP performance, especially for services like http.proxy. http.keepalive is enabled by default. See http.keepalive.
http.keepalive.connections no yes Specifies the maximum number of idle keep-alive connections to upstream servers that can be cached. The connections time out based on the setting for the http.keepalive.timeout configuration option. See http.keepalive.connections.
http.keepalive.timeout yes yes Specifies how long the Gateway waits, after responding to an HTTP or HTTPS request, for a subsequent request before closing the connection. The value for http.keepalive.timeout should be greater than or equal to the value for ws.inactivity.timeout to prevent emulated connections from terminating prematurely. See http.keepalive.timeout.
http.maximum.redirects yes yes Sets a limit on the number of redirects the Gateway will follow from a back-end origin server. Works with the proxy service. If the number of redirects is exceeded the proxy service sends an error code to the Kaazing client. See http.maximum.redirects.
ssl.ciphers yes yes Lists the cipher strings and cipher suite names used by the secure connection. See ssl.ciphers.
ssl.protocols yes yes Lists the TLS/SSL protocol names on which the Gateway can accept connections. See ssl.protocols and socks.ssl.protocols.
ssl.encryption yes yes Signals Kaazing Gateway to enable or disable encryption on incoming traffic.
ssl.verify-client yes no Signals Kaazing Gateway to require a client to provide a digital certificate that the Gateway can use to verify the client’s identity.
socks.mode This feature is available in Kaazing Gateway - Enterprise Edition. yes yes The mode that you can optionally set to forward or reverse to tell the Gateway how to interpret SOCKS URIs to initiate the connection. See socks.mode.
socks.timeout This feature is available in Kaazing Gateway - Enterprise Edition. no yes Specifies the length of time (in seconds) to wait for SOCKS connections to form. If the connection does not succeed within the specified time, then the connection fails and is closed and the client must reconnect. For more information, see socks.timeout.
socks.ssl.ciphers This feature is available in Kaazing Gateway - Enterprise Edition. yes yes Lists the cipher strings and cipher suite names used by the secure SOCKS connection.
socks.ssl.protocols This feature is available in Kaazing Gateway - Enterprise Edition. yes yes Lists the TLS/SSL protocol names on which the Gateway can accept connections for Enterprise Shield™ configurations that are running the SOCKS protocol over SSL. See ssl.protocols and socks.ssl.protocols.
socks.ssl.verify-client This feature is available in Kaazing Gateway - Enterprise Edition. yes yes A connect mode you can set to required, optional, or none to verify how to secure the SOCKS proxy against unauthorized use by forcing the use of TLS/SSL connections with a particular certificate. When required, the DMZ Gateway expects the internal Gateway to prove its trustworthiness by presenting certificates during the TLS/SSL handshake.
socks.retry.maximum.interval This feature is available in Kaazing Gateway - Enterprise Edition. yes no The maximum interval the Gateway waits before retrying if an attempt to connect to the SOCKS proxy fails. The Gateway initially retries after waiting for 500ms; the subsequent wait intervals are 1s, 2s, 4s, and so on up to the value of socks.retry.maximum.interval. After the maximum interval is reached, the Gateway continues to attempt to reconnect to the SOCKS proxy at the maximum interval.
tcp.maximum.outbound.rate This feature is available in Kaazing Gateway - Enterprise Edition. yes no Specifies the maximum bandwidth rate at which bytes can be written from the Gateway (outbound) to each client session. This option controls the rate of outbound traffic being sent per client connection for clients connecting to a service (see tcp.maximum.outbound.rate).
ws.inactivity.timeout yes yes Specifies the maximum number of seconds that the network connection can be inactive (seconds is the default time interval syntax). The Gateway drops the connection if it cannot communicate with the client in the number of seconds specified (see ws.inactivity.timeout). You can specify your preferred time interval syntax in milliseconds, seconds, minutes, or hours (spelled out or abbreviated). For example, all of the following are valid: 1800s, 1800sec, 1800 secs, 1800 seconds, 1800seconds, 3m, 3min, or 3 minutes. If you do not specify a time unit then seconds are assumed. The value for http.keepalive.timeout should be greater than or equal to the value for ws.inactivity.timeout to prevent emulated connections from terminating prematurely.
http.server.header yes no Controls the inclusion of the HTTP Server header. By default, the Gateway writes an HTTP Server header. See http.server.header.
http.max.authentication.attempts no yes Specifies how many times an HTTP connector will attempt to authenticate when it is challenged (the default value is 0). This connect-options property must be set for the Gateway to respond to HTTP 401 status codes. For more information, see Respond to Challenges on HTTP 401 Status Code.
ws.version (deprecated) no yes The ws.version element has been deprecated.

protocol.bind

Required? Optional; Occurs: zero or more; Where protocol can be ws, wss, http, https, socks, ssl, tcp, or udp

Use the protocol.bind element to configure network protocol bindings for your Gateway services. Configure protocol.bind as an accept-option or a connect-option to bind the URI or URIs on which the Gateway accepts or makes connections. The Gateway binds the public URI in the accept or connect element to the URI, port, or IP address specified in the protocol.bind element.

Specify any of the following protocol schemes:

Note: For TCP and UDP URLs in accept, tcp.bind, udp.bind, and protocol.transport elements, you can use the name of a network interface in place of a hostname or IP address. For example, <accept>tcp://@eth0:8123</accept> (Linux/Mac) and <tcp.bind>[@Local Area Connection]:8123</tcp.bind> (Windows). Use square brackets around a subinterface name or when the name contains spaces (<tcp.bind>[@eth0:1]:8123</tcp.bind>). Binding to an interface binds to all IP addresses defined on that interface (IPv4, IPv6).

By using interface names instead of IP addresses, the Gateway configuration file, gateway-config.xml, can be copied between Gateway cluster members without the need to update each cluster member’s configuration with its IP address.

See the Configure the Gateway on an Internal Network document for more information about configuring the protocol.bind element.

Example: Binding to Specific Ports

The following example shows external addresses (that users will see) for the WebSocket (ws) and WebSocket Secure (wss) protocols on localhost:8000 and localhost:9000. Internally, however, these addresses are bound to ports 8001 and 9001 respectively.

<service>
  <name>Echo Config</name>
  <accept>ws://localhost:8000/echo</accept>
  <accept>wss://localhost:9000/echo</accept>

  <type>echo</type>

  <accept-options>
    <ws.bind>8001</ws.bind>
    <wss.bind>9001</wss.bind>
  </accept-options>
</service>
Example: Binding a Public URI to IP Addresses in a Cluster Configuration

In the following example, the ws.bind and wss.bind elements in accept-options are used to bind the public URI in the accept elements to the local IP address of the cluster member. This allows the accept URIs in the balancer service to be identical on every cluster member. Only the ws.bind and wss.bind elements need to be unique on each cluster member (they contain the local IP address of that cluster member).

<service>
  <accept>ws://balancer.example.com:8081/echo</accept>
  <accept>wss://balancer.example.com:9091/echo</accept>

  <type>balancer</type>

  <accept-options>
    <ws.bind>192.168.2.10:8081</ws.bind>
    <wss.bind>192.168.2.10:9091</wss.bind>
  </accept-options>
</service>

protocol.transport

Required? Optional; Occurs: zero or more; Where protocol can be http, ssl, tcp, pipe, and socks.

Use the protocol.transport accept-option or connect-option to replace the default transport with a new transport. This allows you to change the behavior of the connection without affecting the protocol stack above the transport. For example, a TCP transport normally connects to a remote IP address and port number. However, you could replace that, for instance, with an in-memory (pipe) transport that communicates with another service in the same Gateway.

Specify any of the following transports:

Example: Configuring the Transport in accept-options

In the following example, the HTTP transport is replaced with a new (socks+ssl) transport that is capable of doing a reverse connection using the SOCKS protocol over TLS/SSL.

<service>
  <accept>wss://gateway.example.com:443/path</accept>
  <connect>tcp://internal.example.com:1080</connect>
    .
    .
    .
  <accept-options>
    <http.transport>socks+ssl://gateway.dmz.net:1080</http.transport>
    <socks.mode>reverse</socks.mode>
    <socks.retry.maximum.interval>1 second</socks.retry.maximum.interval>
  </accept-options>
</service>
Example: Configuring the Transport in connect-options

In the following example, the socks+ssl transport performs a reverse connection using the SOCKS protocol over TLS/SSL.

<service>
  <accept>wss://gateway.example.com:443/path</accept>
  <connect>wss://gateway.example.com:443/path</connect>
     .
     .
     .
  <connect-options>
    <http.transport>socks+ssl://gateway.dmz.net:1080</http.transport>
    <socks.mode>reverse</socks.mode>
    <socks.timeout>2 seconds</socks.timeout>
    <ssl.verify-client>required</ssl.verify-client>
  </connect-options>
</service>

ws.maximum.message.size

Required? Optional; Occurs: zero or one

Configures the maximum message size the service can accept from a WebSocket client connection.

Although ws.maximum.message.size is optional, you should configure this element to keep clients from accidentally or deliberately causing the Gateway to spend resources processing large messages. Setting this element helps prevent denial of service attacks because it limits the size of messages incoming to the Gateway from a client.

The actual maximum message size that the Gateway can handle is influenced by the JVM settings (such as maximum heap size), available memory on the system, network resources, available disk space and other operating system resources. The maximum message size is also influenced by the configuration and capabilities of back-end services to which the Gateway might be forwarding these messages. The best way to determine the true maximum message size for your environment and use case is to perform some testing.

If you do not specify ws.maximum.message.size in the gateway-config.xml file, then the maximum size of incoming messages defaults to 128k.

If you specify ws.maximum.message.size in the gateway-config.xml file, then specify a positive integer. You can append k, K, m, or M to indicate kilobytes or megabytes (the unit is case insensitive). If a unit is not included, then the value is interpreted as bytes. For example, 64k, 2M, and 131072 (bytes) are all valid values.

If an incoming message from a client exceeds the value of ws.maximum.message.size, then the Gateway closes the connection to that client and records a message in the Gateway log.

Example

The following example sets a maximum incoming message limit of 64 kilobytes:

<service>
  <accept>ws://localhost:8000/echo</accept>
  <accept>wss://localhost:9000/echo</accept>
  <accept-options>
    <ssl.encryption>disabled</ssl.encryption>
    <ws.bind>8001</ws.bind>
    <wss.bind>9001</wss.bind>
    <ws.maximum.message.size>64k</ws.maximum.message.size>
  </accept-options>
</service>

http.keepalive

Required? Optional; Occurs: zero or one

Use the http.keepalive element in connect-options to enable or disable HTTP keep-alive (persistent) connections, allowing you to reuse the same TCP connection for multiple HTTP requests or responses. This improves HTTP performance, especially for services such as the http proxy. http.keepalive is enabled by default.
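
Example

A minimal sketch showing only the relevant connect-option; the disabled value is assumed here, matching the enabled/disabled convention used by other options in this document:

<service>
  ...
  <connect-options>
    <http.keepalive>disabled</http.keepalive>
  </connect-options>
  ...
</service>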

http.keepalive.connections

Required? Optional; Occurs: zero or one

Use the http.keepalive.connections element in connect-options to specify the maximum number of idle keep-alive connections to upstream servers that can be cached.

The connection times out based on the setting of the http.keepalive.timeout configuration option. The best practice is to specify a value small enough to allow upstream servers to process new incoming connections as well. The following example caches a single idle connection until it is reused or times out.
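
Example

A minimal sketch showing only the relevant connect-option:

<service>
  ...
  <connect-options>
    <http.keepalive.connections>1</http.keepalive.connections>
  </connect-options>
  ...
</service>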

http.keepalive.timeout

Required? Optional; Occurs: zero or one

Use the http.keepalive.timeout element in either accept-options or connect-options to set the number of seconds the Gateway waits, after responding to a request, for a subsequent request on an HTTP or HTTPS connection before closing the connection. The default value is 30 seconds.

Typically, you specify the http.keepalive.timeout element to conserve resources because it avoids idle connections remaining open. You can specify your preferred time interval syntax in milliseconds, seconds, minutes, or hours (spelled out or abbreviated). For example, all of the following are valid: 1800s, 1800sec, 1800 secs, 1800 seconds, 1800seconds, 3m, 3min, or 3 minutes. If you do not specify a time unit then seconds are assumed.

Important: The value for http.keepalive.timeout should be greater than or equal to the value for ws.inactivity.timeout to prevent emulated connections from terminating prematurely.

Example

The following example shows a service element with an HTTP or HTTPS connection time limit of 120 seconds:

<service>
  <accept>ws://localhost:8000/echo</accept>
  <accept>wss://localhost:9000/echo</accept>
  . . .
  <accept-options>
    <http.keepalive.timeout>120 seconds</http.keepalive.timeout>
  </accept-options>
</service>
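
Per the note above, a sketch (the values are illustrative, and both options are placed in accept-options here) that keeps http.keepalive.timeout at or above ws.inactivity.timeout:

<service>
  ...
  <accept-options>
    <ws.inactivity.timeout>60 seconds</ws.inactivity.timeout>
    <http.keepalive.timeout>120 seconds</http.keepalive.timeout>
  </accept-options>
  ...
</service>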

ssl.ciphers

Required? Optional; Occurs: zero or one; Values: cipher strings and cipher suite names for OpenSSL and Java 7.

Use ssl.ciphers to list the encryption algorithms used by TLS/SSL on the secure connection (WSS, HTTPS or SSL). By default (or if you do not specify this element on a secure connection), the Gateway uses HIGH,MEDIUM,!ADH,!KRB5.

Examples
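
A minimal sketch (the hostname and port are placeholders) that explicitly restricts a secure accept to the default cipher string described above:

<service>
  <accept>wss://gateway.example.com:9000/echo</accept>

  <type>echo</type>

  <accept-options>
    <ssl.ciphers>HIGH,MEDIUM,!ADH,!KRB5</ssl.ciphers>
  </accept-options>
</service>
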
Notes

ssl.protocols and socks.ssl.protocols

Required? Optional; Occurs: zero or one; Values: SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2

Specify a comma-separated list of the TLS/SSL protocol names on which the Gateway can accept or make connections. The list of protocols you specify is negotiated during the TLS/SSL handshake when the connection is created. See How TLS/SSL Works with the Gateway to learn more about secure communication between clients and the Gateway. See the Java documentation for a list of valid protocol names.

The ssl.protocols and socks.ssl.protocols elements are optional, and in general, there is no need to configure either element except to prevent usage of specific TLS/SSL protocols for which a vulnerability has been discovered. A good example is the POODLE attack that exploited a vulnerability in SSLv3.

If you configure these elements, then you must explicitly name the TLS/SSL protocols you want to enable. If you do not configure the ssl.protocols or socks.ssl.protocols element, or you configure either element but do not specify any protocols, then the default value is taken from the underlying JVM. The protocol values are case-sensitive.

Typically, you configure the ssl.protocols or socks.ssl.protocols in the accept-options for inbound requests from clients. You might also specify these elements in the connect-options for an Enterprise Shield™ configuration, although this is less common because Gateway-to-Gateway communication usually occurs in a controlled environment and the TLS/SSL protocol you use is controlled. The ssl.protocols and socks.ssl.protocols elements are more useful in the accept-options when accepting requests from clients that are not in your direct control.

Note: These elements were introduced in Kaazing Gateway release 4.0.6 and can be used for configurations running Kaazing Gateway 4.0.6 or later releases. For configurations running Kaazing Gateway 4.0.5 or earlier releases, you can disable the SSLv3 protocol by disabling SSLv3 ciphers with <ssl.ciphers>!SSLv3</ssl.ciphers>. See ssl.ciphers for more information.

If you configure the ssl.protocols or the socks.ssl.protocols element to enable SSLv3, but disable SSLv3 cipher suites with the ssl.ciphers or socks.ssl.ciphers elements, then the connection does not occur and the Gateway will not accept SSLv3 connections. Similarly, if you enable TLSv1 with the ssl.protocols or the socks.ssl.protocols element, but disable the TLSv1 ciphers, then the handshake will not succeed and the connection cannot go through.

Example: Simple Configuration Using ssl.protocols to Accept TLSv1, TLSv1.2, and TLSv1.1 Connections

The following example shows a proxy service. Because the accept URL uses the wss:// scheme, this is a secure connection. The ssl.protocols element in the following example indicates that the Gateway accepts only the TLSv1, TLSv1.2, and TLSv1.1 protocols from clients over this secure connection.

<service>
  <name>DMZ Gateway</name>
  <accept>wss://example.com:443/myapp</accept>
    ...
  <type>proxy</type>

  <properties>
    ...
  </properties>

  <accept-options>
    <ssl.protocols>TLSv1,TLSv1.2,TLSv1.1</ssl.protocols>
    ...
  </accept-options>

</service>
Example: Enterprise Shield™ Configuration Using socks.ssl.protocols to Accept Reverse Connections on TLSv1.2

This example shows a proxy service in the DMZ configured for Enterprise Shield™, for which the connect behavior is reversed: instead of connecting to another host, the Gateway accepts connections. Thus, the setting is configured as connect-options in this example. For more information about Enterprise Shield™ and forward and reverse connectivity, see Configure Enterprise Shield™ for Kaazing Gateway.

Because this configuration connects a Gateway to another Gateway in a controlled data center, the example only configures the TLSv1.2 protocol for secure connections. For this type of topology we don’t expect to make any other kinds of connections.

The prefix for this example is socks.ssl, rather than just ssl, to explicitly reference the SSL layer that transports the SOCKS protocol.

<service>
  <name>DMZ Gateway</name>
  <accept>wss://example.com:443/myapp</accept>
  <connect>wss://example.com:443/myapp</connect>
    ...
  <type>proxy</type>

  <properties>
    ...
  </properties>

  <connect-options>
    <http.transport>socks://internal.example.com:1080</http.transport>
    <socks.mode>reverse</socks.mode>
    <socks.transport>ssl://internal.example.com:1080</socks.transport>
    <socks.ssl.protocols>TLSv1.2</socks.ssl.protocols>
  </connect-options>
     ...
</service>
Example: Enterprise Shield™ Configuration Using ssl.protocols and socks.ssl.protocols

This example combines the previous two examples to show an Enterprise Shield™ configuration in which ssl.protocols is specified in the accept-options, and socks.ssl.protocols is specified in the connect-options.

On the frontplane, the Gateway accepts connections from clients using only the TLSv1, TLSv1.2, and TLSv1.1 protocols. On the backplane, the Gateway accepts (reverse) connections from another Gateway using only the TLSv1.2 protocol.

<service>
  <name>DMZ Gateway</name>
  <accept>wss://example.com:443/myapp</accept>
  <connect>wss://example.com:443/myapp</connect>
    ...
  <type>proxy</type>

  <properties>
    ...
  </properties>

  <accept-options>
    <ssl.protocols>TLSv1,TLSv1.2,TLSv1.1</ssl.protocols>
    ...
  </accept-options>

  <connect-options>
    <http.transport>socks://internal.example.com:1080</http.transport>
    <socks.mode>reverse</socks.mode>
    <socks.transport>ssl://internal.example.com:1080</socks.transport>
    <socks.ssl.protocols>TLSv1.2</socks.ssl.protocols>
  </connect-options>
     ...

</service>

ssl.encryption

Required? Optional; Occurs: zero or one; Values: enabled, disabled

This element allows you to enable or disable TLS/SSL encryption on incoming traffic to the Gateway, turning off TLS/SSL certificate verification for an HTTPS or WSS accept. By default (or if you do not specify this element), encryption is enabled for HTTPS and WSS.

For example, if the Gateway is deployed behind a TLS/SSL offloader (a network device designed specifically for handling a company’s TLS/SSL certificate traffic), where the incoming traffic to the TLS/SSL offloader is secured over HTTPS and the outgoing traffic from the TLS/SSL offloader to the Gateway is not secure, you can disable encryption so that the Gateway accepts the unsecured traffic on a connection that uses HTTPS/WSS. Basically, the Gateway trusts traffic from the TLS/SSL offloader and therefore the Gateway does not need to verify the connection itself.

You can include the accept-options element on a service that accepts over HTTPS or WSS, then disable encryption by setting the ssl.encryption element to disabled. Even when encryption is disabled, the Gateway returns the response as HTTPS/WSS. If you do not include these elements or set the ssl.encryption element to enabled, the Gateway treats incoming traffic on HTTPS or WSS as secure and handles the TLS/SSL certificate verification itself.

See Secure Network Traffic with the Gateway for more information about HTTPS/WSS.

Example: Using ssl.encryption in accept-options

The following example shows a service element containing the accept-options and ssl.encryption elements, which signal the Gateway to listen on address www.example.com, with encryption disabled. The example uses the proxy service, which is common, but not required. See the type element for a list of service types.

<service>
  <accept>wss://www.example.com/remoteService</accept>
  <connect>tcp://localhost:6163</connect>

  <type>proxy</type>
   .
   .
   .
  <accept-options>
    <ssl.encryption>disabled</ssl.encryption>
  </accept-options>
</service>

Alternatively, you can use an IP address in the configuration parameters, and you can also specify an IP address and port for the external address. Typically, when you disable encryption on incoming traffic because the Gateway is behind a TLS/SSL offloader, you also have a network mapping section that maps www.example.com to the internal address gateway.dmz.net:9000.

Example: Using ssl.encryption in connect-options

The following example for an Enterprise Shield™ topology shows a service element containing several connect-options including an ssl.encryption option that disables encryption.

<service>
  <accept>wss://dmz.example.com:443/remoteService</accept>
  <connect>tcp://internal.example.com:8010</connect>

  <type>proxy</type>

  <properties>
    <prepared.connection.count>1</prepared.connection.count>
  </properties>

  <accept-options>
    <ssl.ciphers>DEFAULT</ssl.ciphers>
    <ssl.verify-client>none</ssl.verify-client>
  </accept-options>

  <connect-options>
    <tcp.transport>socks+ssl://dmz.example.com:1443</tcp.transport>
    <socks.mode>reverse</socks.mode>
    <socks.ssl.ciphers>NULL</socks.ssl.ciphers>
    <ssl.encryption>disabled</ssl.encryption>
    <socks.ssl.verify-client>required</socks.ssl.verify-client>
  </connect-options>
</service>
Notes

ssl.verify-client

Required? Optional; Occurs: zero or one; Values: required, optional, none

By default, when the Gateway accepts a secure URI (for example, WSS, HTTPS, SSL), the Gateway provides its digital certificate to connecting clients but does not require that the clients provide a certificate of their own — the Gateway trusts all clients. For added security, implement a mutual verification pattern where, in addition to the Gateway presenting a certificate to the client, the client also presents a certificate to the Gateway so that the Gateway can verify the client’s authenticity.

To configure this, use the ssl.verify-client option on an accept to specify that the Gateway requires a client to provide a digital certificate that the Gateway can use to verify the client’s identity. This configuration ensures that both the clients and the Gateway are verified via TLS/SSL before transmitting data, establishing a mutually verified connection.

If you configure the ssl.verify-client option with the value … Then …
required A client certificate is required. The Gateway requires that the client connecting to the Gateway over the secure URI in the accept must provide a digital certificate to verify the client’s identity. After the Gateway has verified the client certificate, then the client can connect to the Gateway service.
optional The client certificate is not required, but if a client provides a certificate, the Gateway attempts to verify it. If the client provides a certificate and verification fails, then the client is not allowed to connect.
none The client recognizes that a certificate is not required and it does not send a certificate. All clients can connect to the secure service on the Gateway.
Example

In the following example, the Gateway accepts on a secure URI (wss://) and requires that all clients connecting to the Gateway on that URI provide a digital certificate verifying their identity.

<service>
  <accept>wss://example.com:443</accept>
  <connect>tcp://server1.corp.example.com:5050</connect>

  <type>proxy</type>

  <accept-options>
    <ssl.verify-client>required</ssl.verify-client>
  </accept-options>
</service>
Notes

socks.mode

Required? Optional; Occurs: zero or one

Use the socks.mode in accept-options or connect-options to initiate the Gateway connection using the SOCKet Secure (SOCKS) protocol in one of the following modes:

For more information about Enterprise Shield™ and forward and reverse connectivity, see Configure Enterprise Shield™ with the Gateway.

Example

The following example shows a service element with the socks.mode set to reverse. This configuration causes the Gateway to interpret the SOCKS URI as a connect URI:

<service>
  <accept>pipe://pipe-1</accept>
  <connect>tcp://broker.example.com:8010/</connect>

  <type>proxy</type>

  <accept-options>
    <pipe.transport>socks+ssl://dmz.example.com:1443</pipe.transport>
    <socks.mode>reverse</socks.mode>
    <socks.retry.maximum.interval>45 seconds</socks.retry.maximum.interval>
  </accept-options>
</service>
Example

The following example shows a connect-options element with the socks.mode set to reverse.

<service>
  <accept>tcp://dmz.example.com:8000/</accept>
  <connect>pipe://pipe-1</connect>

  <type>proxy</type>

  <connect-options>
    <pipe.transport>socks+ssl://dmz.example.com:1443</pipe.transport>
    <socks.mode>reverse</socks.mode>
  </connect-options>
</service>

socks.timeout

Required? Optional; Occurs: zero or one

Use the socks.timeout connect-option to specify the length of time (in seconds) to wait for a SOCKS connection to form before closing the connection. If you do not specify socks.timeout for your Gateway configuration, then a timeout is not enforced.

Note the following behavior for reverse and forward SOCKS connections:

Example

The following example shows a socks.timeout that is set to 10 seconds. If the forward connection is not formed within 10 seconds, then the connection is closed and the client must initiate another connection.

<service>
  <accept>wss://www.example.com:443/remoteService</accept>
  <connect>tcp://localhost:6163</connect>

  <type>proxy</type>

  <accept-options>
    <ssl.ciphers>DEFAULT</ssl.ciphers>
    <ssl.verify-client>none</ssl.verify-client>
  </accept-options>

  <connect-options>
    <pipe.transport>socks+ssl://dmz.example.com:1443</pipe.transport>
    <socks.mode>reverse</socks.mode>
    <socks.timeout>10 sec</socks.timeout>
  </connect-options>
</service>

socks.ssl.ciphers

Required? Optional; Occurs: zero or one; Values: cipher strings and cipher suite names for OpenSSL and Java 7.

Use socks.ssl.ciphers to list the encryption algorithms used by TLS/SSL on the secure connection (WSS, HTTPS or SSL). By default (or if you do not specify this element on a secure connection), the Gateway uses HIGH,MEDIUM,!ADH,!KRB5.

Example for SOCKS Ciphers

The following example shows a proxy service for the DMZ Gateway in an Enterprise Shield™ topology. The Gateway receives secure client connections (wss://) and specifies the ciphers used on the accept URI (DEFAULT), but does not require mutual verification from the clients (ssl.verify-client). In addition, the internal Gateway connects over SOCKS and TLS/SSL (socks+ssl://) to the DMZ Gateway, specifies the ciphers used (NULL), and requires mutual verification (socks.ssl.verify-client). For more information about forward and reverse connectivity, see Configure Enterprise Shield™ with the Gateway.

<service>
  <accept>wss://dmz.example.com:443/remoteService</accept>
  <connect>tcp://internal.example.com:8000</connect>

  <type>proxy</type>

  <properties>
    <prepared.connection.count>1</prepared.connection.count>
  </properties>

  <accept-options>
    <ssl.ciphers>DEFAULT</ssl.ciphers>
    <ssl.verify-client>none</ssl.verify-client>
  </accept-options>

  <connect-options>
    <tcp.transport>socks+ssl://dmz.example.com:1443</tcp.transport>
    <socks.mode>reverse</socks.mode>
    <socks.ssl.ciphers>NULL</socks.ssl.ciphers>
    <socks.ssl.verify-client>required</socks.ssl.verify-client>
  </connect-options>
</service>
Notes

socks.ssl.verify-client

Required? Optional; Occurs: zero or one; Values: required, optional, none

In an Enterprise Shield™ topology over socks+ssl://, the DMZ Gateway provides the internal Gateway with a digital certificate that the internal Gateway uses to verify the DMZ Gateway’s identity before establishing the secure connection. For added security, you can use the socks.ssl.verify-client option on the DMZ Gateway to require that the internal Gateway provide a digital certificate to establish a secure connection. This configuration ensures that both the DMZ Gateway and internal Gateway are verified via TLS/SSL before transmitting data, establishing mutual verification.

If you configure the socks.ssl.verify-client option with the value … Then …
required A certificate is required. The DMZ Gateway requires that the client connecting from the internal Gateway over the SOCKS transport must provide a digital certificate to verify the client’s identity. After the DMZ Gateway has verified the client certificate, then the reverse connection can be formed.
optional A certificate is not required, but if a client provides a certificate then the DMZ Gateway attempts to verify it. If the verification fails, then the client is not allowed to connect.
none The client recognizes that a certificate is not required and it does not send a certificate. All clients can connect to the secure service on the DMZ Gateway.

For more information, see Configure Enterprise Shield™ with the Gateway.

Example

In the following example, the DMZ Gateway accepts on a WebSocket URI and connects over a named pipe. The DMZ Gateway also listens for connections on port 1443, specified as the pipe.transport URI over SOCKS and TLS/SSL (socks+ssl://). To increase security, socks.ssl.verify-client is set to required, which specifies that the internal Gateway must provide a digital certificate to the DMZ Gateway.

<service>
  <accept>wss://dmz.example.com:443/remoteService</accept>
  <connect>pipe://pipe-1</connect>

  <type>proxy</type>

  <properties>
    <prepared.connection.count>1</prepared.connection.count>
  </properties>

  <accept-options>
    <ssl.ciphers>DEFAULT</ssl.ciphers>
    <ssl.verify-client>none</ssl.verify-client>
  </accept-options>

  <connect-options>
    <pipe.transport>socks+ssl://dmz.example.com:1443</pipe.transport>
    <socks.mode>reverse</socks.mode>
    <socks.ssl.ciphers>NULL</socks.ssl.ciphers>
    <socks.ssl.verify-client>required</socks.ssl.verify-client>
  </connect-options>
</service>
Notes

socks.retry.maximum.interval

Required? Optional; Occurs: zero or one

Use the socks.retry.maximum.interval accept-option in an Enterprise Shield™ topology to set the maximum interval of time that the internal Gateway waits to retry a reverse connection to the DMZ Gateway after a failed attempt. The internal Gateway initially retries after waiting for 500ms; the subsequent wait intervals are as follows: 1s, 2s, 4s, and so on up to the value of socks.retry.maximum.interval. Once the maximum interval is reached, the Gateway continues to reconnect to the SOCKS proxy at the maximum interval. If no maximum is specified, then the default retry interval is 30 seconds. For more information about configuring the SOCKS proxy, see Configure Enterprise Shield™ with the Gateway.

Example

The following example shows a service element containing a SOCKS proxy connection retry interval time limit of 60 seconds:

<service>
  <accept>pipe://pipe-1</accept>
  <connect>tcp://broker.example.com:8010/</connect>

  <type>proxy</type>

  <accept-options>
    <pipe.transport>socks+ssl://dmz.example.com:1443</pipe.transport>
    <socks.mode>reverse</socks.mode>
    <socks.retry.maximum.interval>60 seconds</socks.retry.maximum.interval>
  </accept-options>
</service>

tcp.maximum.outbound.rate

Required? Optional; Occurs: zero or one

Use the tcp.maximum.outbound.rate accept option to specify the maximum bandwidth rate at which bytes can be written from the Gateway to a client session. This option delays outbound messages as a way to control the maximum rate, per client session, at which the Gateway can send data to clients connecting to a service.

You must specify the value of tcp.maximum.outbound.rate as a positive integer, either with no specified unit or appended with a unit of measurement from the following table. (See the NIST Reference for more information about these units.) Do not use spaces between the numeric portion and the unit (for example, 40MB/s is supported but 40 MB/s is not supported).

Unit Abbreviation Bytes per Second per Unit Notes
byte per second B/s 1 Example: 512B/s
kilobyte per second kB/s 1000 (10^3) Decimal kilobytes per second. Example: 1000kB/s
kibibyte per second KiB/s 1024 (2^10) Binary kilobytes per second. Example: 1KiB/s
megabyte per second MB/s 1,000,000 (10^6) Decimal megabytes per second. Example: 1MB/s
mebibyte per second MiB/s 1,048,576 (2^20) Binary megabytes per second. Example: 512MiB/s
Example

The following example shows a portion of a Gateway configuration file containing three services, each with a different bandwidth constraint: VIP, premium, and freemium. The VIP service has the best bandwidth at 1 megabyte per second, the premium service is slower at 1 kibibyte per second, and the freemium service is the slowest at only 512 bytes per second. The example shows these variations configured for the proxy service, which is common, but not required. See the type element for a list of service types.

<service>
  <accept>ws://service.example.com/vip</accept>
  <type>proxy</type>
  <accept-options>
    <tcp.maximum.outbound.rate>1MB/s</tcp.maximum.outbound.rate>
  </accept-options>
</service>

<service>
  <accept>ws://service.example.com/premium</accept>
  <type>proxy</type>
  <accept-options>
    <tcp.maximum.outbound.rate>1KiB/s</tcp.maximum.outbound.rate>
  </accept-options>
</service>

<service>
  <accept>ws://service.example.com/freemium</accept>
  <type>proxy</type>
  <accept-options>
    <tcp.maximum.outbound.rate>512B/s</tcp.maximum.outbound.rate>
  </accept-options>
</service>
Notes

ws.inactivity.timeout

Required? Optional; Occurs: zero or one

Specifies the maximum number of seconds that the network connection can be inactive (seconds is the default time interval syntax). You can specify your preferred time interval syntax in milliseconds, seconds, minutes, or hours (spelled out or abbreviated). For example, all of the following are valid: 1800s, 1800sec, 1800 secs, 1800 seconds, 1800seconds, 3m, 3min, or 3 minutes. If you do not specify a time unit, then seconds are assumed. An inactive connection can result from a network failure (such as a lost cellular or Wi-Fi connection) that prevents network communication from being received on any established connection. When ws.inactivity.timeout is set to a nonzero time interval, the Gateway drops the connection if it cannot communicate with the client within the specified number of seconds.

Important: The value for http.keepalive.timeout should be greater than or equal to the value for ws.inactivity.timeout to prevent emulated connections from terminating prematurely.

Some use cases for the ws.inactivity.timeout property include:

Example

In the following example, the ws.inactivity.timeout property specifies that if the Gateway cannot communicate with a client for five seconds, then the connection to that client is dropped.

<service>
  <accept>ws://gateway.example.com/echo</accept>
  <connect>ws://internal.example.com/echo</connect>

  <type>echo</type>

  <accept-options>
    <ws.inactivity.timeout>5s</ws.inactivity.timeout>
  </accept-options>
   .
   .
   .
</service>
Notes

http.server.header

Required? Optional; Occurs: zero or more; Values: enabled or disabled

Enables or disables the inclusion of the HTTP server header. By default, the Gateway writes an HTTP server header. In general, there is no need to configure this accept option unless you want to obscure server header information.

This setting is ignored for services that do not accept HTTP or WebSocket connections.

Hint: Instead of specifying this setting on every service, consider adding it using the service-defaults element to globally apply the setting across all services running on the Gateway.

Example
<service>
  ...
  <accept-options>
    <http.server.header>disabled</http.server.header>
  </accept-options>
    ...
</service>
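
Following the hint above, a sketch that applies the setting globally (placement of the option under accept-options within service-defaults is assumed here):

<service-defaults>
  <accept-options>
    <http.server.header>disabled</http.server.header>
  </accept-options>
</service-defaults>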

ws.version (deprecated)

Required? Optional; Occurs: zero or more; Where version can be rfc6455 or draft-75

The ws.version element has been deprecated. If you are using an existing configuration that includes the ws.version element, you can continue to use it. However, if the scheme of the URI inside the connect element is ws:// or wss://, then the WebSocket version defaults to rfc6455 and there is no need to explicitly set ws.version.

The ws.version element was used to tell the Gateway which version of the WebSocket protocol to use for the service connections. You would specify this element only if the scheme of the URI inside the connect element is ws: or wss: (to indicate that the WebSocket protocol was being used). If you did not specify the ws.version in connect-options, then the WebSocket version defaults to rfc6455.

Example

The following example shows addresses for the WebSocket (ws) and WebSocket Secure (wss) protocols and uses WebSocket version draft-75 to connect to a service running on release 3.2 of the Gateway. The example uses the proxy service, which is common, but not required. See the type element for a list of service types.

<service>
  <accept>ws://${gateway.hostname}:8000/proxy</accept>
  <connect>wss://${gateway.hostname}:5566/data</connect>
  <connect-options>
    <ws.version>draft-75</ws.version>
  </connect-options>
</service>

realm-name

The name of the security realm used for authorization.

Example

<service>
  <accept>wss://localhost:9000/kerberos5</accept>
  <connect>tcp://kerberos.example.com:88</connect>
  <type>kerberos5.proxy</type>
  <realm-name>demo</realm-name>
    .
    .
    .
</service>

Notes

auth-constraint

This element has been deprecated. Use the authorization-constraint element instead. 

authorization-constraint

Required? Optional; Occurs: zero or more

Use the authorization-constraint element to configure the user roles that are authorized to access the service. authorization-constraint contains the following subordinate elements:

Subordinate Element Description
require-role The name of the user role to be included in the authorization-constraint, or * to indicate any valid user.
require-valid-user Grants access to any user whose credentials have been successfully authenticated.

Example

The following example of a proxy service element is configured with an authorization-constraint. The example uses the proxy service, which is common, but not required. See the type element for a list of service types.

<service>
  <accept>ws://localhost:8000/remoteService</accept>
  <connect>tcp://localhost:6163</connect>

  <type>proxy</type>

  <authorization-constraint>
    <require-role>AUTHORIZED</require-role>
  </authorization-constraint>
</service>
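
A variant sketch, based on the table above, that admits any authenticated user by using the * wildcard role:

<authorization-constraint>
  <require-role>*</require-role>
</authorization-constraint>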

mime-mapping

Required? Optional; Occurs: zero or more

The mime-mapping element defines the way the Gateway maps a file extension to a MIME type. See the main description for mime-mapping (service-defaults). You can override the default configuration or add a new MIME type mapping for a particular service by adding a mime-mapping element to the service entry. Within a service, mime-mapping elements must appear immediately before any cross-site-constraint elements.

Example

The following example shows a directory service that includes two mime-mapping elements for files with the PNG and HTML extensions. The Gateway sets the content (MIME) type for files with the PNG extension to a PNG image and for files with the HTML extension to an HTML text file:

<service>
  <accept>ws://localhost:8000</accept>
  <accept>wss://localhost:9000</accept>

  <type>directory</type>

  <accept-options>
    <ws.bind>8001</ws.bind>
    <wss.bind>9001</wss.bind>
  </accept-options>

  <mime-mapping>
    <extension>png</extension>
    <mime-type>image/png</mime-type>
  </mime-mapping>
  <mime-mapping>
    <extension>html</extension>
    <mime-type>text/html</mime-type>
  </mime-mapping>

  <cross-site-constraint>
    <allow-origin>http://localhost:8000</allow-origin>
  </cross-site-constraint>
  <cross-site-constraint>
    <allow-origin>https://localhost:9000</allow-origin>
  </cross-site-constraint>
</service>

Notes

cross-site-constraint

Required? Optional; Occurs: zero or more

Use cross-site-constraint to configure how a cross-origin site is allowed to access a service. cross-site-constraint contains the following subordinate elements:

Note: You must specify the properties for the cross-site-constraint element in the order shown in the table.

Subordinate Element Description
allow-origin Specifies the cross-origin site or sites that are allowed to access this service. To allow access to a specific cross-origin site, specify the protocol scheme, fully qualified host name, and port number of the cross-origin site in the format <scheme>://<hostname>:<port>. For example: <allow-origin>http://localhost:8000</allow-origin>. To allow access from all cross-origin sites, including connections to gateway services from pages loaded from the file system rather than a web site, specify the value *. For example: <allow-origin>*</allow-origin>. Specifying * may be appropriate for services that restrict HTTP methods or custom headers, but not the origin of the request.
allow-methods A comma-separated list of methods that can be invoked by the cross-origin site. For example: <allow-methods>POST,DELETE</allow-methods>.
allow-headers A comma-separated list of custom header names that can be sent by the cross-origin site when it accesses the service. For example, <allow-headers>X-Custom</allow-headers>.
maximum-age Specifies the number of seconds that the results of a preflight request can be cached in a preflight result cache. See the W3C Access-Control-Max-Age response header for more information. For example, <maximum-age>1 second</maximum-age>.

Example

The following example of a proxy service element includes a cross-site-constraint, allowing access to the back-end service or message broker by the site http://localhost:8000 (note the different port number).

<service>
  <accept>ws://localhost:8001/remoteService</accept>
  <connect>tcp://localhost:6163</connect>

  <type>proxy</type>

  <authorization-constraint>
    <require-role>AUTHORIZED</require-role>
  </authorization-constraint>

  <cross-site-constraint>
    <allow-origin>http://localhost:8000</allow-origin>
  </cross-site-constraint>
</service>
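
A fuller sketch (values taken from the table above) that also restricts the allowed methods and custom headers and sets a preflight cache age, with the subordinate elements in the required order:

<cross-site-constraint>
  <allow-origin>http://localhost:8000</allow-origin>
  <allow-methods>POST,DELETE</allow-methods>
  <allow-headers>X-Custom</allow-headers>
  <maximum-age>1 second</maximum-age>
</cross-site-constraint>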

Notes

Summary

In this document, you learned about the Gateway service element and how to specify it in your Gateway configuration file. For more information about the location of the configuration files and starting the Gateway, see Setting Up the Gateway. For more information about Kaazing Gateway administration, see the documentation.