
The current default for the F5 HTTP/2 profile's Concurrent Streams Per Connection setting is 10, which seems a bit conservative. The IETF recommends that this value be no smaller than 100, so as not to unnecessarily limit parallelism: https://www.rfc-editor.org/rfc/rfc7540#section-6.5.2

NGINX, for example, has a default of 128, while Citrix NetScaler uses 100 as the default maximum number of concurrent HTTP/2 streams in a connection. The same goes for Tomcat and Apache.

So, should we tune this value up from 10 to, say, 100? What effects will that have on the appliance? And should we then tune any of the other default parameters for better performance as well?

1 Answer


So, should we tune this value up from 10 to say 100?

I would definitely set it to 100+, as performance-wise this is a much better value for making the most of HTTP/2 parallelism.

What effects will that have on the appliance?

HTTP/2 will perform better when loading websites that fetch many resources in parallel, but security-wise, a malicious client may be able to exhaust BIG-IP resources faster if it can hold a connection carrying up to 100 parallel streams rather than just 10.

Also, should we then also tune any of the other default params for better performance?

It depends on your app requirements and your environment. For example, on a reliable network you might want to increase the Frame Size, which specifies the maximum payload size of HTTP/2 data frames, or reduce the Connection Idle Timeout to 60s (default = 300s) to avoid connections sitting idle unnecessarily.
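As a rough tmsh sketch of what that tuning might look like (the profile and virtual server names here are placeholders, and the attribute names are assumed from the GUI labels, so confirm them with tmsh list ltm profile http2 all-properties on your TMOS version before applying anything):

    # Hypothetical names/attributes -- verify against your TMOS version first
    tmsh create ltm profile http2 http2_tuned \
        concurrent-streams-per-connection 100 \
        frame-size 16384 \
        connection-idle-timeout 60

    # Attach it to (or swap it in place of the default http2 profile on) the virtual server
    tmsh modify ltm virtual my_https_vs profiles add { http2_tuned }

    tmsh save sys config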

Have a look at my article as there's a description of each setting:

Overview of the BIG-IP HTTP/2 profile: https://support.f5.com/csp/article/K04412053

There's also one I wrote for DevCentral as a general overview: HTTP/2 Protocol in Plain English: https://devcentral.f5.com/s/articles/http-2-protocol-in-plain-english-using-wireshark-33639

Cheers, Rodrigo.

  • After googling a bit about F5 and HTTP/2 we also found github.com/andydavies/… which in principle shows the same type of waterfalls that we experience on the F5 with the default setting for concurrent streams. In addition, there also seems to be an issue with stream prioritisation. Do you know if the BBR congestion control / tcp_notsent_lowat issue mentioned in the referenced article would be a problem on the BIG-IP as well? As I understand it, the BIG-IP kernel is CentOS based, so it might be affected.
    – flalar
    Commented Jan 23, 2020 at 20:17
  • The article talks about TCP buffer delays in the network path due to lack of prioritisation. BIG-IP does pass priority frames back and forth (client <-> BIG-IP <-> server), but TCP buffer behaviour is NOT based on HTTP/2 priority. However, we do support the TCP BBR congestion control algorithm (on the TCP profile). I'd also have a go at enabling the "auto send buffer" option on the TCP profile and see how it goes. I believe it enables the Westwood+ congestion control algorithm, which is more aggressive and might reduce latency (there's a rough tmsh sketch after these comments). Commented Jan 31, 2020 at 9:16
  • Note that F5 itself has indicated that the HTTP/2 profile is 2x-3x slower than HTTP/1.1: support.f5.com/csp/article/K34767034
    – Josh M.
    Commented Oct 28, 2022 at 15:30
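Regarding the TCP-side tuning mentioned in the comments above, a rough sketch of the congestion-control change could look like this. The profile and virtual server names are placeholders, and the available congestion-control values depend on your TMOS version, so treat the syntax as an assumption to verify (e.g. with tmsh list ltm profile tcp all-properties) rather than confirmed commands:

    # Hypothetical names -- confirm available congestion-control values on your version
    tmsh create ltm profile tcp tcp_bbr_clientside \
        defaults-from tcp \
        congestion-control bbr

    # Apply it as the client-side TCP profile on the virtual server
    tmsh modify ltm virtual my_https_vs profiles add { tcp_bbr_clientside { context clientside } }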
