I have been doing a bit of digging into this, and I realize you are already setting this value today, but I want to double-check that my understanding is correct: `client_body_timeout` belongs to `ngx_http_core_module` and applies on the client side of the request workflow. That brings us to the question of where it should be set.
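For reference, the NGINX docs list `http`, `server`, and `location` as the valid contexts for `client_body_timeout`, so it can be scoped anywhere from a global default down to a single route. A minimal sketch (the upstream, paths, and values are illustrative):

```nginx
http {
    client_body_timeout 60s;              # default scope: applies to every server below

    upstream grpc_backend {
        server 10.0.0.1:50051;            # hypothetical gRPC backend
    }

    server {
        listen 443 ssl http2;             # gRPC requires HTTP/2

        location /my.grpc.StreamService/ {
            client_body_timeout 3600s;    # raised only for the streaming endpoint
            grpc_pass grpc://grpc_backend;
        }
    }
}
```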
Currently, there is no "native" way in NIC to configure the `client_body_timeout` directive.

When using NGINX for a gRPC request stream, or a bi-directional (request and response) stream, it is extremely common to need to set the `grpc_send_timeout`, `grpc_read_timeout`, and `client_body_timeout` directives higher than the default 60s, to ensure that infrequent gRPC keepalives and occasional traffic flow do not cause NGINX to prematurely close the connection with a 408 (Request Time-out).

This has been brought up and discussed in a variety of forums, including GitLab issues and Stack Overflow, and even the community ingress-nginx controller documents these expectations: https://kubernetes.github.io/ingress-nginx/examples/grpc/#notes-on-using-responserequest-streams
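To make the failure mode concrete, this is roughly the shape of configuration a long-lived gRPC stream ends up needing (a sketch; the 4h value, location path, and upstream name are illustrative, and all three directives need to be raised together, since whichever one is still at its default will be the one that closes a quiet stream):

```nginx
location /my.grpc.StreamService/ {
    grpc_pass grpc://grpc_backend;   # hypothetical upstream

    # Each directive bounds the idle time between two successive I/O
    # operations on its leg of the proxied stream.
    client_body_timeout 4h;   # client -> NGINX request stream (expiry yields the 408)
    grpc_read_timeout   4h;   # NGINX <- backend response stream
    grpc_send_timeout   4h;   # NGINX -> backend request stream
}
```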
Note that some discussions also brought up the `client_header_timeout` and `grpc_socket_keepalive` directives; while they may be related and may be required for some of these gRPC streaming situations, my particular issue only requires setting a higher `client_body_timeout`.
NIC supports defining `grpc_send_timeout` and `grpc_read_timeout` today, both as a global default configuration and per upstream. Unfortunately, `client_body_timeout` can only be set using one of the snippets solutions, as sketched below.
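For reference, this is roughly what the snippets workaround looks like (a sketch assuming the `server-snippets` ConfigMap key; the ConfigMap name is illustrative, and depending on the NIC version, snippets may first need to be enabled via the `-enable-snippets` command-line flag):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config        # hypothetical; must match the ConfigMap NIC is started with
  namespace: nginx-ingress
data:
  server-snippets: |
    client_body_timeout 4h;
```

A VirtualServer's server-snippets field can be used the same way when the override should apply to a single host rather than globally.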
It would be great to have that configuration also available in the Upstream definition, and ideally also as a global default in the ConfigMap configuration.

Thanks!