Tigase Bosh connection optimisation


Good day,

I have made an XMPP client with Strophe [http://strophe.im/], and I have a Tigase server on another computer.
I saw in Firebug that when my client connects to the server, 8 XML HTTP requests are sent just for the connection. To understand why, I searched on Google and found this: http://xmpp.org/extensions/xep-0206.html

But 8 requests * 1000 users = a lot of requests and lag for nothing.

That's why I looked into whether it's possible to optimize this, and I found that if I skip steps on the client side, the server doesn't follow.

After discovering all this, I decided to change the server code.

Do you have another solution to optimize the first 8 BOSH HTTP requests?
If I change the server source code, what can I do to skip some steps of the connection?

PS: For example, the step where the server sends the mechanisms could be skipped, because I already know which mechanisms are available...


Sorry, I forgot to sign my last post.


Well, there are a few ways to improve it.
The simplest and most effective is to use HTTP keep-alive connections. This eliminates the lag completely, and the load on the server is then effectively the same as with standard XMPP connections.
Of course, Tigase fully supports HTTP keep-alive, so you just have to make sure it is also supported on the client side.

For many reasons HTTP keep-alive may not always be possible, so every request or response is, or can be, made on a separate HTTP connection. This is where we see the lag and the higher load on the server. (Although 1000 users is still not a problem.)
I have run BOSH tests, and on a decent dedicated server 60k online BOSH clients are handled without noticeable delays, with resource usage on the server staying within acceptable ranges.

If you really need/want to improve things a bit, then I think the simplest way would be to do something similar to what ejabberd does: introduce a small delay in the Bosh component (BoshSession, I guess) before sending any packet to the client. If the server waits a bit (100 ms, 1 sec, ...), there is a good chance that more data will arrive for the same client, and all the packets can then be combined into a single body element in one response.
This is where I would look for the first optimisations.
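The batching idea above could be sketched roughly like this. Note this is a minimal, hypothetical illustration: PacketBatcher is not an actual Tigase class, and real code would hook into BoshSession and the component's existing timers; it only shows the "wait a bit, then combine pending stanzas into one body element" logic.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: buffers outgoing stanzas for a short delay,
// then combines everything pending into a single BOSH <body/> response.
public class PacketBatcher {
    private final long delayMs;                       // how long to wait for more packets
    private final List<String> pending = new ArrayList<>();
    private long firstQueuedAt = -1;

    public PacketBatcher(long delayMs) {
        this.delayMs = delayMs;
    }

    // Queue a stanza instead of sending it to the client immediately.
    public synchronized void queue(String stanza, long nowMs) {
        if (pending.isEmpty()) {
            firstQueuedAt = nowMs;                    // start the delay window
        }
        pending.add(stanza);
    }

    // Called periodically (e.g. from a timer); once the delay window has
    // elapsed, returns all pending stanzas wrapped in one body element,
    // or null if there is nothing due to send yet.
    public synchronized String flushIfDue(long nowMs) {
        if (pending.isEmpty() || nowMs - firstQueuedAt < delayMs) {
            return null;
        }
        StringBuilder body =
            new StringBuilder("<body xmlns='http://jabber.org/protocol/httpbind'>");
        for (String stanza : pending) {
            body.append(stanza);
        }
        body.append("</body>");
        pending.clear();
        firstQueuedAt = -1;
        return body.toString();
    }
}
```

The trade-off is the one mentioned above: a larger delay (100 ms, 1 sec, ...) means fewer HTTP round trips but slightly later delivery, so the delay should stay well below the client's BOSH wait timeout.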