A way to limit the number of packets sent in a single polling request #1531
Comments
Hi! You are right, I was indeed able to reproduce the issue. Note: this does not apply to the WebSocket transport, as each packet is sent in its own frame, but since the client connects first in HTTP long-polling, this is a real issue. If I understand correctly, there are two possible solutions: either cap the number of packets sent in a single request, or cap the total payload size of a request.
I'd rather go for the 2nd solution, what do you think? Related: socketio/socket.io#3946
The 2nd solution would be ideal; I suggested it as the second option because I thought it could be significantly more complicated, in particular because of the difficulty of predicting the size of the payload before serialization. I don't know in detail how this works in socket.io, but in my experience this tends to become quite complicated once you start dealing with charsets, compression, etc. So yes, the 2nd option is overall better; but if that turns out to be unfeasible, or to require too much effort, the 1st option would work as well for my specific situation. And as a user of the library, an imperfect but workable solution available sooner is more valuable than a better solution in 6 months or more. 😄

Regarding your note about long-polling: our application has some users on long-polling only, typically because they are unable to upgrade to WebSocket due to firewall or proxy restrictions. For these users, this issue can happen at any time; in fact, this is how I noticed the problem.
So that clients in HTTP long-polling can decide how many packets they have to send to stay under the maxHttpBufferSize value. This is a backward-compatible change which should not mandate a new major revision of the protocol (we stay in v4), as we only add a field in the JSON-encoded handshake data:

```
0{"sid":"lv_VI97HAXpY6yYWAAAC","upgrades":["websocket"],"pingInterval":25000,"pingTimeout":5000,"maxPayload":1000000}
```

Related: socketio/socket.io-client#1531
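For illustration, the new field can be read straight out of the handshake's "open" packet. The sketch below is hypothetical (the real engine.io-client has its own parser), but the `0` type prefix followed by a JSON body matches the handshake data shown above.

```javascript
// Illustrative sketch, not the actual engine.io-client code: parse the
// "open" packet of the handshake and read the new "maxPayload" field.
function parseHandshake(openPacket) {
  // An engine.io "open" packet is the type character "0" followed by JSON.
  if (openPacket[0] !== "0") throw new Error("not an open packet");
  return JSON.parse(openPacket.slice(1));
}

const handshake = parseHandshake(
  '0{"sid":"lv_VI97HAXpY6yYWAAAC","upgrades":["websocket"],"pingInterval":25000,"pingTimeout":5000,"maxPayload":1000000}'
);
console.log(handshake.maxPayload); // 1000000
```

Since the field is simply absent when talking to an older server, a client can fall back to its previous behaviour when `handshake.maxPayload` is `undefined`, which is what makes the change backward compatible.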
The server will now include a "maxPayload" field in the handshake details, allowing the clients to decide how many packets they have to send to stay under the maxHttpBufferSize value.

Related:
- socketio/socket.io-client#1531
- socketio/engine.io@088dcb4
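A minimal sketch of how a client could use the advertised `maxPayload` to chunk its write buffer across several polling requests. The names and the size heuristic are illustrative, not the actual engine.io-client internals; a real implementation would also have to account for base64 encoding of binary packets.

```javascript
// Hypothetical sketch, not the real engine.io-client code: split the write
// buffer into payloads whose encoded size stays under the server-advertised
// maxPayload.
function splitIntoPayloads(packets, maxPayload) {
  const payloads = [];
  let current = [];
  let currentSize = 0;
  for (const packet of packets) {
    // +1 accounts for the record separator between packets in a payload
    // (an assumption of this sketch).
    const size = Buffer.byteLength(packet, "utf8") + 1;
    if (current.length > 0 && currentSize + size > maxPayload) {
      payloads.push(current);
      current = [];
      currentSize = 0;
    }
    // A single oversized packet still goes out alone: we cannot split one
    // packet, so the server would reject it in that edge case.
    current.push(packet);
    currentSize += size;
  }
  if (current.length > 0) payloads.push(current);
  return payloads;
}
```

With this approach the client never builds a request body larger than `maxPayload`, so the server's `maxHttpBufferSize` check can only fail for a single packet that is itself too large.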
Here we go! So, we went with the 2nd solution. I have a few other changes to make; I hope to release this in a few days.
I just upgraded to version 4.5.0 and tested, it works great! Thanks for fixing this.
Awesome, thanks for the feedback ❤️
**Is your feature request related to a problem? Please describe.**

With transport polling, the number of packets in the write buffer can grow very large, to the point that the next request becomes larger than the `maxHttpBufferSize` setting on the server. When this happens, the request fails and the client gets disconnected.

**Describe the solution you'd like**

It would be useful to have a way to limit the number of packets sent in a single request to the server. This could be a new option, e.g. `maxPacketsPerRequest`, setting the maximum number of packets to send in a single request. At the moment of sending the request, the client takes from the buffer and sends a number of packets up to `maxPacketsPerRequest`; if the buffer contains more packets than this value, the remaining ones are left in the buffer and sent at the next cycle.

Alternatively, the limit could be based on the size of the request: the client takes packets from the buffer up to the point where the resulting request reaches a maximum payload size. The limit could then be set to the same value as `maxHttpBufferSize` on the server, ensuring that the client never sends too large a request. This is probably harder to build though, because you need to predict the serialised size of every packet.