August 5, 2020
BlazeDS and LCDS: Channels, Channels Everywhere (Redux)

I originally posted this back in January, but with LCDS 2.6 now released, it's worth noting again. Here it is, for anyone who missed it:

=================================================

BlazeDS, and the released 2.6 update to Adobe LiveCycle Data Services, add several channel/endpoint options to the mix already available for communication between Flex RIAs and backend server technologies. With all the available channels and endpoints (and more coming!), the question of how to decide which one (or which set) to use for a particular application becomes relevant.

Seth Hodgson, Engineer Extraordinaire on the LiveCycle Data Services team, compiled the following helpful discussion and general guidelines. They will probably make it into the product documentation for the next release, but I thought I'd share them now to help anyone navigate the available connectivity options.

So...thanks Seth!

Ok, on to the post....

Now, if you’re just doing remote procedure calls (RemoteObject or proxied HTTPService or WebService) or Data Management without auto-sync, then use the AMFChannel. Simple, performant call/response semantics over HTTP.

The HTTPChannel (probably should have been named the AMFXChannel) is exactly the same as the AMFChannel behaviorally, but serializes data in an XML format called AMFX. This channel only exists for customers who require all data sent over the wire to be non-binary for auditing purposes. There’s no other reason to use this channel over the AMFChannel for RPC-based apps.

So if your app is only making RPC calls to the server, the answer is simple. You can stop reading here and get back to developing your app using one of the two options above (most likely the AMFChannel).

Real-time data push to web clients doesn't have as simple an answer as the RPC scenario, unfortunately. There are a variety of trade-offs, pros, and cons to take into consideration. Although the answer isn't as simple, it's still fairly prescriptive based on the needs of your app, and the pros and cons below should help you select the right channel(s) and get back to work. One thing to remember: if your application uses real-time data push but also does some RPC, you do not need to use separate channels. All of the channels below can send RPC invocations to the server just fine. Use a single ChannelSet (possibly containing just a single channel) for all of your RPC, messaging and data management components.
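For example, sharing one ChannelSet across component types might look like the following ActionScript sketch (the channel id and endpoint URL are placeholders; match them to whatever your services-config.xml defines):

```actionscript
import mx.messaging.ChannelSet;
import mx.messaging.channels.AMFChannel;

// One ChannelSet for RPC, messaging and data management components.
// "my-amf" and the URL below are example values, not required names.
var cs:ChannelSet = new ChannelSet();
cs.addChannel(new AMFChannel("my-amf",
    "http://localhost:8400/myapp/messagebroker/amf"));

remoteObject.channelSet = cs;  // RPC component
consumer.channelSet = cs;      // messaging component
```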

We'll start with recommendations in my order of preference for BlazeDS and follow that with recommendations for LCDS.

BlazeDS
1. AMFChannel/Endpoint configured for Long Polling (no fallback needed)

The channel issues polls to the server in the same fashion as our traditional polling, but if no data is available to return immediately the server “parks” the poll request until data arrives for the client or the configured server wait interval elapses.

The client can be configured to issue its next poll immediately following a poll response making this channel configuration feel very “real-time”.
A reasonable server wait time would be 1 minute. This eliminates the majority of busy polling from clients without being so long that you’re keeping server sessions alive indefinitely or running the risk of a network component between the client and server timing out the connection.

Pros:

- Valid HTTP request/response pattern over standard ports that nothing in the network path will have trouble with.

Cons:

- When many messages are being “pushed” to the client, this configuration has the overhead of a poll roundtrip for every pushed message (or small batch of messages that queue between polls). Most applications are not pushing data frequently enough for this to be a problem.

- The Servlet API uses blocking IO, so you must define an upper bound for the number of long poll requests parked on the server at any single instant. If your number of clients exceeds this limit, the excess clients will devolve to simple polling on the default 3 second interval with no server wait. Say your app server request handler thread pool has a size of 500. You could set the upper bound for waited polls to 250, 300 or 400, depending on the relative amount of non-poll requests you expect to service concurrently.
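Putting this together, a long-polling channel definition in services-config.xml might look like the following sketch (the id and URL are placeholders; the property values mirror the 1 minute wait and thread-pool sizing discussed above):

```xml
<channel-definition id="my-amf-longpoll" class="mx.messaging.channels.AMFChannel">
    <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amflongpolling"
              class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <polling-enabled>true</polling-enabled>
        <!-- Issue the next poll immediately after a poll response returns -->
        <polling-interval-millis>0</polling-interval-millis>
        <!-- Park each poll on the server for up to 1 minute -->
        <wait-interval-millis>60000</wait-interval-millis>
        <!-- Upper bound on parked polls; size relative to the request thread pool -->
        <max-waiting-poll-requests>300</max-waiting-poll-requests>
    </properties>
</channel-definition>
```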


2. StreamingAMFChannel/Endpoint (in a ChannelSet followed by the polling AMFChannel below for fallback)

Because HTTP connections are not duplex, this channel sends a request to “open” an HTTP connection between the server and client, over which the server will write an infinite response of pushed messages. This channel uses a separate transient connection from the browser’s connection pool for each send it issues to the server. The streaming connection is used purely for messages pushed from the server down to the client. Each message is pushed as an HTTP response chunk (HTTP 1.1 Transfer-Encoding: chunked).

Pros:

- No polling overhead associated with pushing messages to the client.

- Uses standard HTTP ports, so firewalls don’t interfere, and all requests/responses are HTTP, so packet-inspecting proxies won’t drop the packets.

Cons:

- Holding on to the “open” request on the server and writing an infinite response is not “nice” HTTP behavior. HTTP proxies that buffer responses before forwarding them can effectively swallow the stream. Assign the channel’s ‘connect-timeout-seconds’ property a value of 2 or 3 to detect this and trigger fallback to the next channel in your ChannelSet.

- No support for HTTP 1.0 clients (not that this matters; does anyone still use a 1.0 client?). If the client is 1.0, the open request is faulted and the client falls back to the next channel in its ChannelSet.

- The Servlet API uses blocking IO, so like long polling above, you need to set a configured upper bound on the number of streaming connections you’ll allow. Clients that exceed this limit will not be able to open a streaming connection and will fall back to the next channel in their ChannelSet.
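A streaming channel definition with the fallback behavior described above might be sketched like this (ids and URLs are placeholders; ‘connect-timeout-seconds’ is set low so a buffering proxy triggers fallback quickly):

```xml
<channel-definition id="my-streaming-amf" class="mx.messaging.channels.StreamingAMFChannel">
    <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/streamingamf"
              class="flex.messaging.endpoints.StreamingAMFEndpoint"/>
    <properties>
        <!-- Detect proxies that buffer the infinite response; fall back quickly -->
        <connect-timeout-seconds>3</connect-timeout-seconds>
        <!-- Upper bound on open streaming connections (Servlet blocking IO) -->
        <max-streaming-clients>300</max-streaming-clients>
    </properties>
</channel-definition>

<!-- In the destination, list the streaming channel first, a polling channel second -->
<destination id="my-destination">
    <channels>
        <channel ref="my-streaming-amf"/>
        <channel ref="my-amf-poll"/>
    </channels>
</destination>
```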


3. AMFChannel/Endpoint with simple polling enabled (no Long Polling) and piggybacking enabled (no fallback needed)

Same as our traditional simple polling support, but with piggybacking enabled: when the client sends a message to the server between its regularly scheduled poll requests, the channel piggybacks a poll request along with the message being sent, and the server piggybacks any pending messages for the client along with the response.

Pros:

- Valid HTTP request/response pattern over standard ports that nothing in the network path will have trouble with.

- User experience is more “real-time” than with just simple polling on an interval.

- Doesn’t have the thread resource constraints of long polling and streaming caused by the Servlet API’s blocking IO.

Cons:

- Less “real-time” than long polling or streaming; the client must interact with the server to receive pushed data faster than the channel's configured polling interval.
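A sketch of the simple-polling-plus-piggybacking configuration (id and URL are placeholders; the 3 second interval matches the default mentioned above):

```xml
<channel-definition id="my-amf-poll" class="mx.messaging.channels.AMFChannel">
    <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amfpolling"
              class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <polling-enabled>true</polling-enabled>
        <!-- Regular poll interval; no server wait (not long polling) -->
        <polling-interval-millis>3000</polling-interval-millis>
        <!-- Piggyback polls on client sends, and pushed messages on responses -->
        <piggybacking-enabled>true</piggybacking-enabled>
    </properties>
</channel-definition>
```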



LiveCycle Data Services
1. RTMPChannel/Endpoint (in a ChannelSet with fallback to NIO AMFChannel configured to Long Poll)

The RTMPChannel creates a single duplex socket connection to the server and gives the server the best notification of the Player being shut down. If the direct connect attempt fails, the Player will attempt a CONNECT tunnel through an HTTP proxy if one is defined by the browser (resulting in a direct, tunneled duplex socket connection to the server). Worst case, it falls back to adaptive HTTP requests that "tunnel" RTMP data back and forth between client and server, or it fails to connect entirely.

Pros:

- Single, stateful duplex socket that gives clean, immediate notification when a client is closed. The HTTP-based channels/endpoints generally don't receive notification of a client going away until the HTTP session on the server times out; that's not great for a call-center application where you need to know whether reps are online or not.

- The Player's internal fallback to the HTTP CONNECT trick to traverse an HTTP proxy, if one is configured in the browser, gives the same pro as above, and is a technique that just isn't possible from ActionScript or JavaScript.

Cons:

- RTMP generally uses a non-standard port, so it is often blocked by client firewalls. Network components that do stateful packet inspection may also drop RTMP packets, killing the connection. Fallback to HTTP CONNECT through a proxy or adaptive HTTP tunnel requests is difficult in our deployment scenario within a Java servlet container, which generally already has the standard HTTP ports bound; this requires non-trivial networking configuration to route these requests to the RTMPEndpoint.
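To get the fallback order described above, list the RTMP channel first in the destination's channels with a long-polling channel second. A sketch (ids, URL, and port are placeholder values; the RTMP endpoint class is LCDS-only):

```xml
<channel-definition id="my-rtmp" class="mx.messaging.channels.RTMPChannel">
    <!-- RTMP typically binds a non-standard port -->
    <endpoint url="rtmp://{server.name}:2038" class="flex.messaging.endpoints.RTMPEndpoint"/>
</channel-definition>

<!-- Fallback order: RTMP first, long-polling AMF second -->
<destination id="my-destination">
    <channels>
        <channel ref="my-rtmp"/>
        <channel ref="my-amf-longpoll"/>
    </channels>
</destination>
```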


2. NIO AMFChannel/Endpoint configured for Long Polling (no fallback needed)

Behaviorally the same as the servlet-based AMFChannel/Endpoint but uses an NIO server and minimal HTTP stack to support scaling up to thousands of connections.

Pros:

- The same pros as mentioned above, along with much better scalability and no configured upper bound on the number of parked poll requests.

Cons:

- Because the servlet pipeline is not being used, this endpoint requires more network configuration to route requests to it on a standard HTTP port if you need to concurrently service HTTP servlet requests. However, it can share the same port as any other LCDS NIO AMF/HTTP endpoint for the app.


3. NIO StreamingAMFChannel/Endpoint (in a ChannelSet followed by the polling AMFChannel below for fallback)

Behaviorally the same as the servlet-based StreamingAMFChannel/Endpoint but uses an NIO server and minimal HTTP stack to support scaling up to thousands of connections.

Pros:

- The same pros as mentioned above, along with much better scalability and no configured upper bound on the number of streaming connections.

Cons:

- Because the servlet pipeline is not being used, this endpoint requires more network configuration to route requests to it on a standard HTTP port if you need to concurrently service HTTP servlet requests. However, it can share the same port as any other LCDS NIO AMF/HTTP endpoint for the app.


4. NIO AMFChannel/Endpoint with simple polling enabled (no Long Polling) and piggybacking enabled (no fallback needed)

Same as the description above.

Pros:

- Same as the pros above, and it shares the same FlexSession as other LCDS NIO AMF/HTTP endpoints.

Cons:

- Same cons as above; because the servlet pipeline is not being used, this endpoint requires more network configuration to route requests to it on a standard HTTP port if you need to concurrently service HTTP servlet requests. However, it can share the same port as any other LCDS NIO AMF/HTTP endpoint for the app.


The NIO AMF/HTTP endpoints use the same client-side Channel classes as their servlet-based endpoint counterparts; they just scale better than the servlet-based endpoints. If the web app is not servicing general servlet requests, you can configure the servlet container to bind non-standard HTTP/S ports, leaving 80 and 443 free for your LCDS NIO endpoints. Because LCDS is a super-set of BlazeDS, you still have access to the servlet-based endpoints if you want to use them instead.

Reasons to use the servlet-based endpoints could be that you need to include 3rd-party servlet filter processing of requests/responses, or that you need to access data structures in the application server's HttpSession. (The NIO HTTP endpoints are not part of the servlet pipeline, so while they provide a FlexSession in the same manner that RTMP connections do, these session instances are disjoint from the J2EE HttpSession.)

Hope that helps!

Damon
