Given the present typical SSL session sizes, on 32-bit platforms it is
now beneficial to store all data in a single allocation, since the rbtree
node + session id + ASN.1 representation of a session takes 256 bytes of
shared memory (36 + 32 + 150 = 218 bytes plus the SNI server name, rounded
up to 256 by the slab allocator).
Storing all data in a single allocation is beneficial for SNI names up to
about 40 characters long and makes it possible to store about 4000 sessions
in one megabyte (instead of about 3000 sessions now). This also slightly
simplifies the code.
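A minimal sketch of the combined layout under these assumptions (the
structure, field names and allocation helper here are illustrative, not
the exact ngx_ssl_sess_id_t definition):

    #include <ngx_config.h>
    #include <ngx_core.h>

    typedef struct {
        ngx_rbtree_node_t   node;        /* keyed on a hash of the session id */
        ngx_queue_t         queue;       /* expiration queue link             */
        time_t              expire;
        size_t              len;         /* length of the ASN.1 data          */
        u_char              id[32];      /* session id                        */
        u_char              session[1];  /* ASN.1 data (and SNI name) follow  */
    } example_sess_id_t;

    static example_sess_id_t *
    example_alloc_session(ngx_slab_pool_t *shpool, size_t asn1_len)
    {
        /* one slab allocation instead of separate node and data allocations */
        return ngx_slab_alloc_locked(shpool,
                   offsetof(example_sess_id_t, session) + asn1_len);
    }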
Session ids are not expected to be longer than 32 bytes, but this is
theoretically possible with TLSv1.3, where session ids are essentially
arbitrary and sent as session tickets. Since a fixed 32-byte buffer is
used for session ids on 64-bit platforms, an explicit length check was
added to make sure such ids fit into the buffer.
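A sketch of the shape of such a check (the function name is illustrative;
SSL_SESSION_get_id() is the OpenSSL call that returns the id and its
length):

    #include <openssl/ssl.h>

    /* refuse to cache sessions whose id does not fit the fixed 32-byte buffer */
    static int
    example_session_id_fits(const SSL_SESSION *sess)
    {
        unsigned int  len;

        (void) SSL_SESSION_get_id(sess, &len);

        return (len <= 32);
    }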
Session cache allocations might still fail if the new session differs in
size from the least recently used one (which is freed when the first
allocation attempt fails). In particular, it might not be possible to
allocate space for sessions with client certificates, since they are
noticeably bigger than normal sessions.
To ensure such allocation failures won't clutter logs, the logging level
was changed to "warn", and logging is now limited to at most one warning
per second.
OpenSSL tries to save TLSv1.3 sessions into the session cache even when
using tickets for stateless session resumption, "because some applications
just want to know about the creation of a session". To avoid trashing the
session cache with useless data, such sessions are no longer saved.
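A sketch of the kind of check this implies in the new-session callback
(assuming tickets are in use whenever SSL_OP_NO_TICKET is not set on the
connection):

    #include <openssl/ssl.h>

    static int
    example_should_cache_session(SSL *ssl_conn)
    {
    #ifdef TLS1_3_VERSION
        /* with TLSv1.3 and tickets enabled, stateful caching is useless */
        if (SSL_version(ssl_conn) == TLS1_3_VERSION
            && (SSL_get_options(ssl_conn) & SSL_OP_NO_TICKET) == 0)
        {
            return 0;
        }
    #endif

        return 1;
    }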
The variables have the prefix $proxy_protocol_tlv_ and are accessible both
by TLV type name and by numeric type, for example $proxy_protocol_tlv_alpn
and $proxy_protocol_tlv_0x01.
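A hypothetical name-to-type mapping illustrating the two forms (both
variables above refer to the same TLV); the type values are the PP2_TYPE_*
constants from the PROXY protocol v2 specification:

    typedef struct {
        const char    *name;
        unsigned char  type;
    } example_tlv_entry_t;

    static const example_tlv_entry_t  example_tlv_entries[] = {
        { "alpn",      0x01 },   /* PP2_TYPE_ALPN      */
        { "authority", 0x02 },   /* PP2_TYPE_AUTHORITY */
        { "unique_id", 0x05 },   /* PP2_TYPE_UNIQUE_ID */
        { "ssl",       0x20 },   /* PP2_TYPE_SSL       */
        { "netns",     0x30 },   /* PP2_TYPE_NETNS     */
        { NULL,        0x00 }
    };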
Previously, all received user input was logged. If a multi-line text was
received from a client and logged, it could reduce log readability and also
make it harder to parse the nginx log with scripts. The change brings to
the PROXY protocol the same behaviour that already exists for the HTTP
request line in ngx_http_log_error_handler().
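A sketch of the approach, assuming a buffer [buf, last) holding the
received header (the function name is illustrative):

    #include <ngx_config.h>
    #include <ngx_core.h>

    static void
    example_log_broken_header(ngx_connection_t *c, u_char *buf, u_char *last)
    {
        u_char  *p;

        /* log only up to the first CR or LF, as done for the request line */
        for (p = buf; p < last; p++) {
            if (*p == CR || *p == LF) {
                break;
            }
        }

        ngx_log_error(NGX_LOG_ERR, c->log, 0,
                      "broken header: \"%*s\"", (size_t) (p - buf), buf);
    }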
SSL_sendfile() expects an integer file descriptor as an argument, but
nginx uses OS file handles (HANDLE) to work with files on Windows, and
passing a HANDLE instead of an integer results in a build failure. Since
SSL_sendfile() is not expected to work on Windows anyway, the code is now
disabled on Windows with appropriate compile-time checks.
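A sketch of such a guard (EXAMPLE_SSL_SENDFILE is a hypothetical macro
name; BIO_get_ktls_send is defined by OpenSSL builds that provide kernel
TLS, which SSL_sendfile() relies on):

    #if (defined BIO_get_ktls_send && !NGX_WIN32)
    /* SSL_sendfile() takes an integer descriptor, usable on this platform */
    #define EXAMPLE_SSL_SENDFILE  1
    #else
    /* on Windows nginx works with HANDLEs, so SSL_sendfile() is left out */
    #define EXAMPLE_SSL_SENDFILE  0
    #endif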
Multiple C4306 warnings (conversion from 'type1' to 'type2' of greater size)
appear during 64-bit compilation with MSVC 2010 (and older) due to extensively
used constructs like "(void *) -1", so they were disabled.
In newer MSVC versions C4306 warnings were replaced with C4312 ones, and
these are not generated for such trivial type casts.
In 2014ed60f17f, "#if SSL_CTRL_SET_ECDH_AUTO" test was incorrectly used
instead of "#ifdef SSL_CTRL_SET_ECDH_AUTO". There is no practical
difference, since SSL_CTRL_SET_ECDH_AUTO evaluates to a non-zero numeric
value when defined, but it is still better to properly test whether the
macro is defined.
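For reference, the difference in question (ssl->ctx being the SSL_CTX
stored in ngx_ssl_t):

    /* before: tests the macro's numeric value; works only because the
       value happens to be non-zero whenever it is defined */
    #if SSL_CTRL_SET_ECDH_AUTO
        SSL_CTX_set_ecdh_auto(ssl->ctx, 1);
    #endif

    /* after: tests whether the macro is defined at all */
    #ifdef SSL_CTRL_SET_ECDH_AUTO
        SSL_CTX_set_ecdh_auto(ssl->ctx, 1);
    #endif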
The SSL_R_BAD_RECORD_TYPE ("bad record type") errors are reported by
OpenSSL 1.1.1 or newer when using TLSv1.3 if the client sends a record
with an unknown or unexpected type. These errors are now logged at the
"info" level.
Since the introduction of FIONREAD usage in efd71d49bde0, rev->available
might become 0 when reading exactly rev->available bytes. On the next call
of ngx_readv_chain() on systems with EPOLLRDHUP this resulted in an
immediate return without any action, that is, with rev->ready still set,
which in turn resulted in no timers being set in the event pipe, leading
to socket leaks.
The fix is to reset rev->ready in ngx_readv_chain() when returning due to
rev->available being 0 with EPOLLRDHUP, much like it is already done in
ngx_unix_recv(). This ensures that if rev->available becomes 0, the
appropriate EPOLLRDHUP-specific handling will happen on the next
ngx_readv_chain() call on systems with EPOLLRDHUP support.
While here, also synced ngx_readv_chain() to match ngx_unix_recv() and
reset rev->ready when returning due to rev->available being 0 with kqueue.
This is a mostly cosmetic change, as rev->ready is reset anyway when
rev->available is set to 0.
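A simplified sketch of the EPOLLRDHUP branch in ngx_readv_chain() after
the fix:

    #if (NGX_HAVE_EPOLLRDHUP)

        if ((ngx_event_flags & NGX_USE_EPOLL_EVENT) && ngx_use_epoll_rdhup) {

            if (rev->available == 0 && !rev->pending_eof) {
                rev->ready = 0;    /* previously left set, leading to leaks */
                return NGX_AGAIN;
            }
        }

    #endif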
Some servers might emit a Content-Range header on 200 responses, and this
does not seem to contradict RFC 9110: as per RFC 9110, the Content-Range
header has no meaning for status codes other than 206 and 416. Previously
this resulted in duplicate Content-Range headers in nginx responses handled
by the range filter. The fix is to clear any pre-existing Content-Range
headers.
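A simplified sketch of the fix, using the usual way of removing a response
header by zeroing its hash:

    if (r->headers_out.content_range) {
        r->headers_out.content_range->hash = 0;
        r->headers_out.content_range = NULL;
    }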
Starting with OpenSSL 1.1.1, various additional errors can be reported
by OpenSSL in case of client-related issues, most notably during TLSv1.3
handshakes. In particular, SSL_R_BAD_KEY_SHARE ("bad key share"),
SSL_R_BAD_EXTENSION ("bad extension"), SSL_R_BAD_CIPHER ("bad cipher"),
SSL_R_BAD_ECPOINT ("bad ecpoint"). These are now logged at the "info"
level.
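A sketch of how such reason codes can be mapped to the "info" level,
assuming n and level as in ngx_ssl_connection_error() (abridged; the
#ifdef guards keep it building with OpenSSL versions lacking some of the
codes):

    n = ERR_GET_REASON(ERR_peek_error());

    switch (n) {

    #ifdef SSL_R_BAD_KEY_SHARE
    case SSL_R_BAD_KEY_SHARE:
    #endif
    #ifdef SSL_R_BAD_EXTENSION
    case SSL_R_BAD_EXTENSION:
    #endif
    #ifdef SSL_R_BAD_CIPHER
    case SSL_R_BAD_CIPHER:
    #endif
    #ifdef SSL_R_BAD_ECPOINT
    case SSL_R_BAD_ECPOINT:
    #endif
        level = NGX_LOG_INFO;
        break;

    default:
        break;
    }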
To ensure optimal use of memory, SSL contexts for proxying are now
inherited from previous levels as long as relevant proxy_ssl_* directives
are not redefined.
Further, when no proxy_ssl_* directives are redefined in a server block,
plcf->upstream.ssl is now preserved in the "http" section configuration so
that it can be inherited by all servers.
Similar changes were made in the uwsgi, grpc, and stream proxy modules.
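A simplified sketch of the merge step (type and field names are
illustrative): a level that did not redefine any proxy_ssl_* directive
simply shares the parent's context:

    typedef struct {
        ngx_ssl_t  *ssl;              /* upstream SSL context               */
        unsigned    ssl_redefined:1;  /* any proxy_ssl_* seen at this level */
    } example_conf_t;

    static void
    example_merge_ssl(example_conf_t *conf, example_conf_t *prev)
    {
        if (!conf->ssl_redefined) {
            /* reuse the parent's context instead of creating a new one */
            conf->ssl = prev->ssl;
        }
    }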
Similar to 70e65bf8dfd7, the change is made to ensure that the ability to
cancel resolver tasks is fully controlled by the caller. As mentioned in the
referenced commit, it is safe to make this timer cancelable because resolve
tasks can have their own timeouts that are not cancelable.
The scenario where this may become a problem is a periodic background resolve
task (not tied to a specific request or a client connection), which receives a
response with a short TTL that is large enough to warrant a fallback to a
TCP query.
With each event loop wakeup, we either have a previously set write timer
instance or schedule a new one. The non-cancelable write timer can delay or
block graceful shutdown of a worker even if the ngx_resolver_ctx_t->cancelable
flag is set by the API user, and there are no other tasks or connections.
We use the resolver API in this way to maintain the list of upstream server
addresses specified with the 'resolve' parameter, and there could be third-party
modules implementing similar logic.
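For reference, marking an nginx timer cancelable takes one extra flag
(wev and timeout stand in for the resolver's write event and its
retransmission timeout):

    ngx_add_timer(wev, timeout);

    /* a cancelable timer does not prevent graceful worker shutdown */
    wev->cancelable = 1;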
When we generate the last_buf buffer for a UDP upstream recv error, it
does not contain any data from the wire. ngx_stream_write_filter attempts
to forward it anyway, which is incorrect (e.g., a UDP upstream ECONNREFUSED
ends up being translated into an empty packet sent to the client).
This happens because we mark the buffer as both 'flush' and 'last_buf', and
ngx_stream_write_filter has special handling for flush with certain types of
connections (see d127837c714f, 32b0ba4855a6). The flags are meant to be
mutually exclusive, so the fix is to ensure that flush and last_buf are not set
at the same time.
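A sketch of the resulting invariant for the buffer generated on a recv
error (b being that buffer):

    /* signal end of stream downstream, but do not force an empty packet
       onto the wire: flush and last_buf are not set together any more */
    b->last_buf = 1;
    b->flush = 0;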
Reproduction:
    stream {
        upstream unreachable {
            server 127.0.0.1:8880;
        }
        server {
            listen 127.0.0.1:8998 udp;
            proxy_pass unreachable;
        }
    }
    1 0.000000000 127.0.0.1 → 127.0.0.1 UDP 47 45588 → 8998 Len=5
    2 0.000166300 127.0.0.1 → 127.0.0.1 UDP 47 51149 → 8880 Len=5
    3 0.000172600 127.0.0.1 → 127.0.0.1 ICMP 75 Destination unreachable (Port unreachable)
    4 0.000202400 127.0.0.1 → 127.0.0.1 UDP 42 8998 → 45588 Len=0
Fixes d127837c714f.
Both "count" and "duration" variables are 32-bit, so their product might
potentially overflow. It is used to reduce 64-bit start_time variable,
and with very large start_time this can result in incorrect seeking.
Found by Coverity (CID 1499904).
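A minimal illustration of the fix pattern: widen one operand so the
product is computed in 64 bits before it is subtracted:

    #include <stdint.h>

    static uint64_t
    example_seek_back(uint64_t start_time, uint32_t count, uint32_t duration)
    {
        /* (uint64_t) count * duration cannot wrap, unlike count * duration */
        return start_time - (uint64_t) count * duration;
    }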
Now, if the directive is given an empty string, such a configuration
cancels loading of certificates, in particular if they would otherwise be
inherited from the previous level. This restores the previous behaviour,
before variables support in certificates was introduced (3ab8e1e2f0f7).
Previously, if caching was disabled due to Expires in the past, nginx
failed to cache the response even if it was cacheable as per subsequently
parsed Cache-Control header (ticket #964).
Similarly, if caching was disabled due to Expires in the past,
"Cache-Control: no-cache", or "Cache-Control: max-age=0", the response was
not cached even if it was cacheable as per a subsequently parsed
X-Accel-Expires header.
The fix is to avoid disabling caching immediately after parsing Expires in
the past or Cache-Control, but rather to set flags which are later checked
by ngx_http_upstream_process_headers() (and cleared by "Cache-Control:
max-age" and X-Accel-Expires).
Additionally, X-Accel-Expires no longer prevents parsing of cache control
extensions, notably stale-while-revalidate and stale-if-error. This
ensures that the order of the X-Accel-Expires and Cache-Control headers is
not important.
Prodded by Vadim Fedorenko and Yugo Horie.
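A simplified, hypothetical sketch of the flag-based approach described
above: header parsers only record what they saw, and cacheability is
decided once all headers, including X-Accel-Expires, have been processed:

    typedef struct {
        unsigned  expired:1;          /* Expires in the past                 */
        unsigned  no_cache:1;         /* Cache-Control: no-cache / max-age=0 */
        unsigned  x_accel_expires:1;  /* X-Accel-Expires provided a validity */
    } example_cache_flags_t;

    static int
    example_response_cacheable(example_cache_flags_t f)
    {
        /* X-Accel-Expires (or a later max-age) overrides the earlier flags */
        if (f.x_accel_expires) {
            return 1;
        }

        return !(f.expired || f.no_cache);
    }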
If a module adds multiple WWW-Authenticate headers (ticket #485) to the
response, linked in r->headers_out.www_authenticate, all headers are now
cleared if another module later allows access.
This change is a nop for standard modules, since the only access module which
can add multiple WWW-Authenticate headers is the auth request module, and
it is checked after other standard access modules. Though this might
affect some third party access modules.
Note that if a 3rd party module adds a single WWW-Authenticate header and
is not yet modified to set the header's next pointer to NULL, an attempt to
clear such a header with this change will result in a segmentation fault.
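A sketch of clearing the whole list rather than a single header (using the
h->next linkage introduced for multi-value headers):

    ngx_table_elt_t  *h;

    for (h = r->headers_out.www_authenticate; h; h = h->next) {
        h->hash = 0;    /* unset the header */
    }

    r->headers_out.www_authenticate = NULL;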
When using auth_request with an upstream server which returns 401
(Unauthorized), multiple WWW-Authenticate headers from the upstream server
response are now properly copied to the response.
When using proxy_intercept_errors and an error page for error 401
(Unauthorized), multiple WWW-Authenticate headers from the upstream server
response are now properly copied to the response.
Duplicate instances of most known upstream response headers are now
ignored with a warning.
If the header syntax permits multiple values, such headers are now properly
linked into lists, notably Vary and WWW-Authenticate. This makes it
possible to further handle such lists where it makes sense.
With this change, duplicate Content-Length and Transfer-Encoding headers
are now rejected. Further, responses with invalid Content-Length or
Transfer-Encoding headers are now rejected, as well as responses with both
Content-Length and Transfer-Encoding.
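A sketch of linking such headers while keeping the list NULL-terminated
(ph would point to a list head such as u->headers_in.www_authenticate):

    static void
    example_link_multi_header(ngx_table_elt_t **ph, ngx_table_elt_t *h)
    {
        while (*ph) {
            ph = &(*ph)->next;
        }

        *ph = h;
        h->next = NULL;
    }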
The h->next pointer is now properly set to NULL in all cases where known
output headers are added.
Note that there are 3rd party modules which might not do this, and it
might be risky to rely on this for arbitrary headers.
Since introduction of offset handling in ngx_http_upstream_copy_header_line()
in revision 573:58475592100c, the ngx_http_upstream_copy_content_encoding()
function is no longer needed, as its behaviour is exactly equivalent to
ngx_http_upstream_copy_header_line() with appropriate offset. As such,
the ngx_http_upstream_copy_content_encoding() function was removed.
Further, the u->headers_in.content_encoding field is not used anywhere,
so it was removed as well.
Further, Content-Encoding handling no longer depends on NGX_HTTP_GZIP,
as it can be used even without any gzip handling compiled in (for example,
in the charset filter).
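A simplified, hypothetical table entry illustrating why the dedicated
handler became redundant: the generic copy handler plus an offset into
ngx_http_headers_out_t already stores the header pointer into
r->headers_out.content_encoding (the real ngx_http_upstream_header_t
entries carry more fields):

    typedef struct {
        ngx_str_t    name;
        ngx_int_t  (*copy_handler)(ngx_http_request_t *r, ngx_table_elt_t *h,
                                   ngx_uint_t offset);
        ngx_uint_t   offset;
    } example_header_t;

    static example_header_t  example_headers[] = {
        { ngx_string("Content-Encoding"),
          ngx_http_upstream_copy_header_line,
          offsetof(ngx_http_headers_out_t, content_encoding) },
    };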