This might help throughput, but it interferes with stream priority.
The throughput issue is generally caused by the small buffer size used
to store the response body, which was 16K.  We increased it to 128K to
compensate for this change.
For HTTP/2, we do this validation in libnghttp2.  http-parser does
this partially when it parses the URI, but it does nothing for the
Host header field.  libspdylay does not perform any validation.  So we
do some additional validation for the HTTP/1 and SPDY cases.
Integration tests were also added to make sure they work.
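As a minimal sketch of what such an authority check could look like
(a hypothetical helper, not the actual nghttpx code), rejecting
characters outside the RFC 3986 authority grammar:

  #include <cctype>
  #include <string>

  // Hypothetical validator: accept only characters that can appear in
  // the RFC 3986 authority component (reg-name, IP literal, port).
  bool validate_authority(const std::string &host) {
    if (host.empty()) {
      return false;
    }
    for (unsigned char c : host) {
      if (std::isalnum(c) || c == '.' || c == '-' || c == ':' ||
          c == '[' || c == ']' || c == '_' || c == '~' || c == '%') {
        continue;
      }
      return false;
    }
    return true;
  }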
Header field related functions are now gathered into the FieldStore
class.  This commit only handles the request side.  A subsequent
commit will do the same thing for the response side.
This commit enables HTTP/2 server push from an HTTP/2 backend to be
relayed to an HTTP/2 frontend.  To use this feature, --http2-bridge or
--client is required.  Server push via the Link header field continues
to work.
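As a rough usage sketch (the hosts, ports, and certificate files are
placeholders, not a definitive command line):

  $ nghttpx --http2-bridge -f'*,443' -b'backend.example.com,443' \
      server.key server.crt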
This change is required to expose the path attribute to mruby scripts.
It is desirable to construct the URI from its parts.  Just checking
the method and whether the path is "*" is awkward.
There are many requests whose meaning changes when we rewrite the
path.  This is due to bad percent-encoding in the URI; reserved
characters are used without percent-encoding.  This seems to be common
in ad services, and I suspect more cases will come.  In a reverse
proxying situation, a sane service most likely encodes the URI
properly, so this is probably not an issue.
The -b option syntax is now <HOST>,<PORT>[;<PATTERN>[:...]].  The
optional <PATTERN>s specify the request host and path the backend is
used for.  A <PATTERN> can contain a path, a host + path, or a host.
The matching rule is closely modeled on ServeMux in the Go programming
language.
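For illustration (hypothetical hosts, ports, and paths), the three
pattern forms look like:

  -b'127.0.0.1,8080;/blog/'             (path only)
  -b'127.0.0.1,8081;example.com/api/'   (host + path)
  -b'127.0.0.1,8082;example.com'        (host only)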
Currently, we use the same number of HTTP/2 sessions per worker as the
number of given backend addresses.  A new option to specify the number
of HTTP/2 sessions per worker will follow.
Previously, when requests were issued to an HTTP/2 downstream
connection but it turned out that the connection was down, the
handlers of those requests were deleted.  In some situations, we only
know the connection is down when we write something to the network, so
we'd like to handle this kind of situation in a more robust manner.
With this change, if a certain number of seconds has passed since the
last network activity, we first issue a PING frame to the downstream
connection before issuing a new HTTP request.  If writing the PING
frame fails, it means the connection was lost.  In this case, instead
of deleting the handler, pending requests are migrated to a new HTTP/2
downstream connection, so that they can continue without affecting the
upstream connection.
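A rough sketch of the idea (the function and parameter names are
placeholders, not the actual nghttpx code; nghttp2_submit_ping() and
nghttp2_session_send() are the real library calls used to queue and
flush frames):

  #include <ev.h>
  #include <nghttp2/nghttp2.h>

  // Hypothetical liveness check before reusing an idle HTTP/2
  // downstream connection: queue a PING and try to write it out.  If
  // this returns false, pending requests would be migrated to a new
  // downstream connection instead of deleting their handlers.
  bool connection_still_alive(nghttp2_session *session,
                              struct ev_loop *loop,
                              ev_tstamp last_activity,
                              ev_tstamp threshold) {
    if (ev_now(loop) - last_activity <= threshold) {
      return true;  // recent activity; assume the connection is fine
    }
    nghttp2_submit_ping(session, NGHTTP2_FLAG_NONE, nullptr);
    // A failed write means the connection was lost.
    return nghttp2_session_send(session) == 0;
  }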
This commit limits the number of concurrent HTTP/1 downstream
connections to the same host.  By default, it is limited to 8
connections.  The --backend-connections-per-frontend option was
replaced with --backend-http1-connections-per-host, which changes the
maximum number of connections per host.  This limitation only kicks in
when HTTP/2 proxy mode is used (-s option).
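For example (the certificate arguments are placeholders), raising the
per-host limit in HTTP/2 proxy mode might look like:

  $ nghttpx -s --backend-http1-connections-per-host=16 \
      server.key server.crt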
This commit adds functionality to customize the access logging format
in nghttpx.  The format variables are inspired by nginx.  The default
format is the combined format.
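As an illustration only (treat the option name and the variable names
as assumptions here; check the help output for the exact spelling), a
custom format might be passed like:

  $ nghttpx --accesslog-format='$remote_addr [$time_local] "$request" $status' ...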
This is not obvious, but it makes intermediaries flush and forward at
DATA frame boundaries without excessive buffering.  Since we have
different TCP connections on the frontend and backend, this may not
work.  This is still experimental.
Use the same behaviour as the current Google server: start with a TLS
record size of 1300 bytes, and after transmitting 1MiB, change the
record size to 16384.  After 1 second of idle time, reset to 1300.
This only applies to HTTP/2 and SPDY upstream connections.
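A minimal sketch of that policy (hypothetical names, not the actual
nghttpx code):

  #include <cstddef>

  // Record size scheduler following the description above: 1300 bytes
  // initially, 16384 after 1MiB has been sent, and back to 1300 after
  // 1 second of idle time.
  size_t next_tls_record_size(size_t &bytes_sent_since_reset,
                              double seconds_since_last_write) {
    if (seconds_since_last_write >= 1.0) {
      // Idle for a second or more: start small again.
      bytes_sent_since_reset = 0;
    }
    return bytes_sent_since_reset >= (1 << 20) ? 16384 : 1300;
  }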
Previously, read and write timeouts worked independently.  While we
are writing a response to the client, the read timeout still ticks
(e.g., for an HTTP/2 or tunneled HTTPS connection), so a read timeout
may occur during a long download.  This commit fixes this issue, but
only for the upstream part.  We need a similar fix for the downstream.
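A rough sketch of the idea on top of libev (the function name is a
placeholder; ev_timer_again() is the real libev call): restart the
read timeout whenever we make write progress, so a long download no
longer lets the read timer expire.

  #include <ev.h>

  // Hypothetical hook called whenever bytes are written to the client:
  // treat write progress as activity for the read timeout as well.
  void refresh_read_timeout(struct ev_loop *loop, ev_timer *read_timeout) {
    ev_timer_again(loop, read_timeout);
  }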
Previously, we emptied request headers after they were sent downstream
in order to free memory.  But it turns out that we use request headers
when rewriting the Location response header field.  A user also
reported that request headers are useful for adding new features.
This commit defers the deletion of request headers to the point when
response headers are deleted (which is after response headers are sent
to the upstream client).
h2-14 now allows extensions to define new error codes.  To allow
application callbacks to access such error codes, we use uint32_t as
the error_code type for structs and function parameters.  Previously,
we treated an unknown error code as INTERNAL_ERROR, but this change
removes that behaviour, and unknown error codes are passed to
application callbacks as-is.
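For example, a stream-close callback now sees the raw code (a sketch;
the logging is illustrative, but the callback signature matches the
nghttp2 API):

  #include <cstdio>
  #include <nghttp2/nghttp2.h>

  // error_code is uint32_t, so extension-defined codes arrive
  // unchanged instead of being collapsed to INTERNAL_ERROR.
  static int on_stream_close(nghttp2_session *session, int32_t stream_id,
                             uint32_t error_code, void *user_data) {
    std::fprintf(stderr, "stream %d closed with error code %u\n",
                 stream_id, error_code);
    return 0;
  }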