Previously, when requests were issued to an HTTP/2 downstream connection
but it turned out that the connection was down, the handlers of those
requests were deleted.  In some situations we only learn that the
connection is down when we write something to the network, so we would
like to handle this situation in a more robust manner.  With this change,
if a certain number of seconds have passed since the last network
activity, we first issue a PING frame to the downstream connection before
issuing a new HTTP request.  If writing the PING frame fails, it means
the connection was lost.  In this case, instead of deleting the handler,
the pending requests are migrated to a new HTTP/2 downstream connection,
so that they can continue without affecting the upstream connection.

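A minimal sketch of the probing logic, assuming a hypothetical
Http2DownstreamSession wrapper and idle threshold (illustrative names,
not the actual nghttpx types):

    #include <nghttp2/nghttp2.h>
    #include <chrono>

    // Hypothetical wrapper around the downstream nghttp2 session.
    struct Http2DownstreamSession {
      nghttp2_session *session;
      std::chrono::steady_clock::time_point last_activity;
    };

    // Called before submitting a new request.  If the connection has been
    // idle for too long, write a PING frame first; a failed write means
    // the connection is gone and pending requests should be migrated
    // instead of having their handlers deleted.
    bool probe_connection(Http2DownstreamSession &dconn,
                          std::chrono::seconds idle_threshold) {
      auto now = std::chrono::steady_clock::now();
      if (now - dconn.last_activity < idle_threshold) {
        return true;
      }
      nghttp2_submit_ping(dconn.session, NGHTTP2_FLAG_NONE, nullptr);
      if (nghttp2_session_send(dconn.session) != 0) {
        return false; // connection lost; caller migrates pending requests
      }
      dconn.last_activity = now;
      return true;
    }
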
This commit limits the number of concurrent HTTP/1 downstream
connections to the same host.  By default, it is limited to 8
connections.  The --backend-connections-per-frontend option was replaced
with --backend-http1-connections-per-host, which changes the maximum
number of connections per host.  This limitation only kicks in when h2
proxy mode is used (-s option).

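A rough sketch of the per-host accounting (hypothetical names; the real
connection pool is more involved):

    #include <cstddef>
    #include <map>
    #include <string>

    // Default limit from this commit.
    constexpr std::size_t max_http1_conns_per_host = 8;

    std::map<std::string, std::size_t> http1_conns_per_host;

    // Returns true if another HTTP/1 connection to |host| may be opened;
    // otherwise the request waits for an existing connection to free up.
    bool can_open_http1_connection(const std::string &host) {
      return http1_conns_per_host[host] < max_http1_conns_per_host;
    }
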
This commit adds functionality to customize the access logging format in
nghttpx.  The format variables are inspired by nginx.  The default format
is the combined format.

This is not obvious, but it makes intermediaries flush and forward the
DATA frame boundary without excessive buffering.  Since the frontend and
backend use different TCP connections, this may not work.  This is still
experimental.

Use the same behaviour the current Google server does: start with a
1300-byte TLS record size and, after transmitting 1MiB, change the record
size to 16384.  After 1 second of idle time, reset it to 1300.  This only
applies to HTTP/2 and SPDY upstream connections.

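A sketch of that policy (the names and the exact idle check are
illustrative, not the actual implementation):

    #include <chrono>
    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t small_record_size = 1300;
    constexpr std::size_t large_record_size = 16384;
    constexpr std::uint64_t warmup_bytes = 1 << 20; // 1MiB

    struct TLSRecordSizer {
      std::uint64_t bytes_sent = 0;
      std::chrono::steady_clock::time_point last_write =
          std::chrono::steady_clock::now();

      std::size_t next_record_size() {
        auto now = std::chrono::steady_clock::now();
        if (now - last_write >= std::chrono::seconds(1)) {
          // Idle for a second: go back to small records.
          bytes_sent = 0;
        }
        last_write = now;
        return bytes_sent < warmup_bytes ? small_record_size
                                         : large_record_size;
      }

      void on_write(std::size_t n) { bytes_sent += n; }
    };
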
Previously, read and write timeouts worked independently.  While we are
writing a response to the client, the read timeout still ticks (e.g., on
an HTTP/2 or tunneled HTTPS connection), so a read timeout may occur
during a long download.  This commit fixes the issue, but only for the
upstream part; we need a similar fix for the downstream.

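One way to express the upstream fix, as a sketch with hypothetical state
(the actual code drives this through the event loop's timers):

    #include <algorithm>
    #include <chrono>

    struct UpstreamConn {
      std::chrono::steady_clock::time_point last_read;
      std::chrono::steady_clock::time_point last_write;
    };

    // Treat traffic in either direction as activity, so that a long
    // write-only download does not trip the read timeout.
    bool timed_out(const UpstreamConn &conn, std::chrono::seconds timeout) {
      auto last_activity = std::max(conn.last_read, conn.last_write);
      return std::chrono::steady_clock::now() - last_activity >= timeout;
    }
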
Previously we emptied request headers after they were sent to the
downstream in order to free memory.  But it turns out that we use request
headers when rewriting the Location response header field.  A user also
reported that request headers are useful for adding new features.  This
commit defers the deletion of request headers to the point when response
headers are deleted (which is after the response headers are sent to the
upstream client).

h2-14 now allows extensions to define new error codes.  To allow
application callbacks to access such error codes, we now use uint32_t as
the error_code type for structs and function parameters.  Previously we
treated an unknown error code as INTERNAL_ERROR, but this change removes
that behaviour, and unknown error codes are passed to the application
callback as-is.

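For example, a stream-close callback now receives the raw 32-bit code, so
extension-defined values arrive unmodified (the body is illustrative):

    #include <nghttp2/nghttp2.h>
    #include <cstdio>

    // error_code is uint32_t, so codes outside the ones defined by the
    // spec are no longer collapsed into NGHTTP2_INTERNAL_ERROR before the
    // application sees them.
    static int on_stream_close(nghttp2_session *session, int32_t stream_id,
                               uint32_t error_code, void *user_data) {
      (void)session;
      (void)user_data;
      std::fprintf(stderr, "stream %d closed with error 0x%08x\n",
                   stream_id, error_code);
      return 0;
    }
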
Previously we only updated the consumed flow control window when the
number of bytes read in the nghttp2 and spdylay callbacks was 0.  Now we
notify the nghttp2 library of the consumed bytes even if the number of
bytes read is greater than 0.  This change also uses the newly added
spdylay_session_consume() API, so we now require spdylay >= 1.3.0.

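Sketch of the nghttp2 side, assuming automatic WINDOW_UPDATE is turned
off and a hypothetical forward_to_upstream() helper:

    #include <nghttp2/nghttp2.h>

    // Hypothetical: forwards the chunk and returns how many bytes were
    // actually processed right away.
    size_t forward_to_upstream(int32_t stream_id, const uint8_t *data,
                               size_t len);

    static int on_data_chunk_recv(nghttp2_session *session, uint8_t flags,
                                  int32_t stream_id, const uint8_t *data,
                                  size_t len, void *user_data) {
      (void)flags;
      (void)user_data;
      size_t nread = forward_to_upstream(stream_id, data, len);
      if (nread > 0) {
        // Report every consumed byte, not only the case where nothing was
        // forwarded, so WINDOW_UPDATE reflects what was really processed.
        nghttp2_session_consume(session, stream_id, nread);
      }
      return 0;
    }
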
This option limits the number of backend connections per frontend.  This
is meaningful for the combination of an HTTP/2 or SPDY frontend and an
HTTP/1 backend.

The --no-location-rewrite option disables Location header rewriting in
--http2-bridge, --client and default modes.  This option is useful when
connecting an nghttpx proxy running with --http2-bridge to a backend
nghttpx running in http2-proxy mode.

This change rewrites the logging system of nghttpx.  Previously, the
access log and error log were written to stderr or syslog, and there was
no option to redirect them elsewhere.  With this change, the file paths
of the access log and error log can be configured separately, and logging
to a regular file is now supported.  To support log rotation, when
nghttpx receives the SIGUSR1 signal, it closes the current log files and
reopens them under the same names.  The format of the access log has
changed and now looks like Apache's, although not all columns are
supported yet.

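A minimal sketch of the reopen-on-SIGUSR1 behaviour (the path and names
are illustrative; the real code reopens both log files from its
configuration):

    #include <atomic>
    #include <csignal>
    #include <cstdio>

    static const char *access_log_path = "/path/to/access.log"; // illustrative
    static std::FILE *access_log = nullptr;
    static std::atomic<bool> reopen_requested{false};

    static void on_sigusr1(int) { reopen_requested = true; }

    // Called from the event loop: close the current file and reopen it
    // under the same name, so an external tool can rotate the old file
    // away.
    static void maybe_reopen_log_files() {
      if (!reopen_requested.exchange(false)) {
        return;
      }
      if (access_log) {
        std::fclose(access_log);
      }
      access_log = std::fopen(access_log_path, "a");
    }

    static void setup_logging() {
      std::signal(SIGUSR1, on_sigusr1);
      access_log = std::fopen(access_log_path, "a");
    }
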
If a SPDY or HTTP/2 upstream is used together with an HTTP/2 downstream,
only call {spdylay,nghttp2}_resume_data when a complete DATA frame has
been read from the backend, to avoid transmitting too-small DATA frames
to the upstream.

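Sketch of the backend side with hypothetical lookups from the backend
stream to the frontend session/stream: resume the deferred upstream DATA
only from the frame-complete callback, not from every data chunk.

    #include <nghttp2/nghttp2.h>

    // Hypothetical mappings; the real code goes through Downstream objects.
    nghttp2_session *get_upstream_session(void *user_data);
    int32_t get_upstream_stream_id(int32_t backend_stream_id,
                                   void *user_data);

    // Fires once per completely received frame on the backend connection.
    static int on_backend_frame_recv(nghttp2_session *session,
                                     const nghttp2_frame *frame,
                                     void *user_data) {
      (void)session;
      if (frame->hd.type == NGHTTP2_DATA) {
        // A whole DATA frame is buffered now, so waking the upstream
        // stream will not produce a tiny DATA frame.
        nghttp2_session_resume_data(
            get_upstream_session(user_data),
            get_upstream_stream_id(frame->hd.stream_id, user_data));
      }
      return 0;
    }
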
Profiling and benchmarking showed that calling evbuffer_add() repeatedly
is very costly.  To avoid this, we buffer up small writes into one large
chunk and call evbuffer_add() fewer times.

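A sketch of the coalescing, with an illustrative batch size:

    #include <event2/buffer.h>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Accumulate small writes locally and hand them to evbuffer_add() in
    // one call instead of many.
    struct WriteBatcher {
      static constexpr std::size_t chunk_size = 16384; // illustrative
      std::uint8_t buf[chunk_size];
      std::size_t len = 0;

      void add(struct evbuffer *out, const std::uint8_t *data,
               std::size_t n) {
        if (len + n > chunk_size) {
          flush(out);
        }
        if (n > chunk_size) {
          // Large writes bypass the local buffer.
          evbuffer_add(out, data, n);
          return;
        }
        std::memcpy(buf + len, data, n);
        len += n;
      }

      void flush(struct evbuffer *out) {
        if (len > 0) {
          evbuffer_add(out, buf, len);
          len = 0;
        }
      }
    };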