Tuning NGINX for Performance


NGINX is well known as a high-performance load balancer, cache, and web server,
powering over 40% of the busiest websites in the world. Most default
NGINX and Linux settings work well for most use cases, but some tuning
can be necessary to achieve optimal performance. This
blog post will discuss some of the NGINX and Linux settings to consider
when tuning a system. There are many settings available, but for this
post we will cover the few settings recommended for most users to
consider adjusting. The settings not covered in this post are ones that
should only be considered by those with a deep understanding of NGINX
and Linux, or after a recommendation by the NGINX support or
professional services teams. The NGINX professional services team has worked with
some of the world’s busiest websites to tune NGINX for the maximum
level of performance, and is available to work with any customer who
needs to get the most out of their system.

Introduction

A basic understanding of the NGINX architecture and configuration
concepts is assumed. This post does not attempt to duplicate the NGINX
documentation, but provides an overview of the various options with
links to the relevant documentation.

A good rule to follow when tuning is to change one setting at a time,
and set it back to the default value if it does not result in a
positive change in performance.

We will start with a discussion of Linux tuning, since some of these
values can affect the values you use in your NGINX configuration.

Linux Configuration

Modern Linux kernels (2.6+) do a good job of sizing the various
settings, but there are some that you may want to change. If the
operating system settings are too low, error messages in the kernel log
help indicate that you need to adjust them. There are many possible
Linux settings, but we will cover those most likely to need tuning
for normal workloads. Please refer to the Linux documentation
for details on adjusting these settings.

The Backlog Queue

The following settings relate directly to connections and how they
are queued. If you have a high rate of incoming connections and you are
getting uneven levels of performance (for example, some connections
appear to be stalling), then changing these settings can help.

net.core.somaxconn – The
size of the queue for connections waiting for acceptance by NGINX.
NGINX accepts connections very quickly, so this value generally does not
need to be very large and the default can be quite low, but
increasing it can be a good idea if your website experiences heavy traffic.
Error messages in the kernel log indicate that the value is too small;
increase it until the errors stop. Note: if you set this to a value
greater than 512, change the backlog parameter of the listen directive in the NGINX configuration to match.
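
For example, a minimal sketch pairing the kernel setting with a matching listen backlog (the value 4096 is illustrative, not a recommendation):

# /etc/sysctl.conf (illustrative value)
net.core.somaxconn = 4096

# nginx.conf – match the backlog parameter on the listen directive
server {
    listen 80 backlog=4096;
}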

net.core.netdev_max_backlog – The
rate at which packets are buffered by the network card before being
handed off to the CPU. For machines with a high amount of bandwidth, it
might need to be increased. Check the kernel log for errors related to this
setting, and consult the network card documentation for advice on
changing it.
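
As a rough illustration, the setting can be raised at runtime like this (the value shown is arbitrary; consult your network card documentation for an appropriate one):

# runtime change; add to /etc/sysctl.conf to persist across reboots
sudo sysctl -w net.core.netdev_max_backlog=65535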

File Descriptors

File descriptors are operating system resources used to handle things
such as connections and open files. NGINX can use up to two file
descriptors per connection. For example, if it is proxying, there is
generally one file descriptor for the client connection and another for
the connection to the proxied server, though this ratio is much lower if
HTTP keepalives are used. For a system serving a large number of
connections, these settings may need to be adjusted:

fs.file-max – The system-wide limit for file descriptors

nofile – The user file descriptor limit, set in the /etc/security/limits.conf file
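
A sketch of adjusting both limits, with purely illustrative values (the nginx user name is an assumption based on a typical installation):

# /etc/sysctl.conf – system-wide limit
fs.file-max = 200000

# /etc/security/limits.conf – per-user limit for the nginx user
nginx soft nofile 100000
nginx hard nofile 100000

NGINX also provides the worker_rlimit_nofile directive, which changes the limit for worker processes from within the NGINX configuration:

# nginx.conf (main context)
worker_rlimit_nofile 100000;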

Ephemeral Ports

When NGINX is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral, port.

net.ipv4.ip_local_port_range – The
start and end of the range of port values. If you see that you are
running out of ports, you can increase this range. A common setting is
ports 1024 to 65000.

net.ipv4.tcp_fin_timeout – The
time a port must be inactive before it can be reused for another
connection. The default is often 60 seconds, which can usually be safely
reduced to 30 or even 15 seconds.
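
For example, both settings might be changed in /etc/sysctl.conf as follows (values are illustrative) and applied with sysctl -p:

net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30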


NGINX Configuration

The following are some NGINX directives that can impact
performance. As stated above, we will only discuss those
directives that we recommend most users look at adjusting. Any directive
not mentioned here is one we recommend leaving unchanged without
direction from the NGINX team.

Worker Processes

NGINX can run multiple worker processes, each capable of processing a
large number of connections. You can control how many worker processes
are run and how connections are handled with the following directives:

worker_processes – The
number of NGINX worker processes. In most cases, running one worker
process per CPU core works well, and this can be achieved by setting this
directive to auto. There are times when you may want to
increase this number, such as when the worker processes have to do a lot
of disk I/O. The default is 1.

worker_connections – The
maximum number of connections that can be processed at one time by each
worker process. The default is 512, but most systems can handle a
larger number. The appropriate setting depends on the size of the server
and the nature of the traffic, and can be discovered through testing.
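
A minimal sketch combining the two directives (the connection count is illustrative and should be validated by testing):

# nginx.conf
worker_processes auto;

events {
    worker_connections 1024;
}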

Keepalives

Keepalive connections can have a major impact on performance by
reducing the CPU and network overhead needed for opening and closing
connections. NGINX terminates all client connections and has separate
and independent connections to the upstream servers. NGINX supports
keepalives for the client and upstream servers. The following directives
deal with client keepalives:

keepalive_requests – The
number of requests a client can make over a single keepalive
connection. The default is 100, but a much higher value can be
especially useful for testing when the load-generating tool is sending
many requests from a single client.

keepalive_timeout – How long an idle keepalive connection remains open.
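
A minimal client-side sketch, with illustrative values:

http {
    keepalive_requests 1000;
    keepalive_timeout  75s;
}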

The following directive deals with upstream keepalives:

keepalive – The
number of idle keepalive connections to an upstream server that remain
open for each worker process. There is no default value.

To enable keepalive connections to the upstream, you must also include the following directives:

proxy_http_version 1.1;
proxy_set_header Connection "";
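
Putting these together, a sketch of a complete upstream keepalive configuration (the upstream name and server address are placeholders):

upstream backend {
    server 10.0.0.1:8080;
    keepalive 32;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}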

Access Logging

Logging every request takes both CPU and I/O cycles, and one way to
reduce the impact is to enable access log buffering. This causes NGINX
to buffer a series of log entries and write them to the file together
instead of performing a separate write operation for each one. Access log buffering
is enabled by setting the buffer=size option of the access_log directive. You can tell NGINX to write the entries in the buffer after a specified amount of time with the flush=time option.
With these two options included, NGINX writes entries to the log file
when the next log entry will not fit into the buffer or when the entries in
the buffer are older than the specified time, respectively. Log entries
are also written when a worker process is reopening log files or
shutting down. It is also possible to disable access logging completely.
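
For example (the path and values are illustrative):

access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

# or disable access logging entirely
access_log off;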

Sendfile

Sendfile is
an operating system feature that can be enabled in NGINX. It can provide
faster TCP data transfers by copying data from one
file descriptor to another within the kernel, often achieving zero-copy. NGINX can use it
to write cached or on-disk content down a socket, without any context
switching to user space, making it extremely fast with low CPU
overhead. Because the data never touches user space, it is not possible
to insert filters that need to access the data into the processing
chain, so you cannot use any of the NGINX filters that change the
content, for example the gzip filter. It is disabled by default.
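
Enabling it is a single directive; tcp_nopush is often paired with it to send the response headers and the start of the file in one packet:

http {
    sendfile on;
    tcp_nopush on;   # takes effect only when sendfile is on
}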

Limits

NGINX and NGINX Plus allow you to set various limits that help to
prevent clients from consuming too many resources, which can adversely
affect the performance of your system as well as user experience and
security. The following are some of these directives:

limit_conn and limit_conn_zone – Limit
the number of connections NGINX allows, for example from a single
client IP address. Setting them can help prevent individual clients from
opening too many connections and consuming too many resources.

limit_rate – Limit
the amount of bandwidth allowed for a client on a single connection.
Setting it can prevent the system from being overloaded by certain
clients and can help to ensure that all clients receive good quality of
service.

limit_req and limit_req_zone – Limit the rate of requests being processed by NGINX. As with limit_rate,
setting them can help prevent the system from being overloaded by
certain clients and can help to ensure that all clients receive good
quality of service. They can also be used to improve security,
especially for login pages, by limiting the request rate so that it is
adequate for a human user but too slow for programs trying to access
your application.

max_conns – For
a server in an upstream group, set the maximum number of simultaneous
connections it accepts. This can help prevent the upstream servers from
being overloaded. The default is zero, meaning that there is no limit.

queue (NGINX Plus only) – If max_conns is set for any upstream server, governs what happens when a request
cannot be processed because there are no available servers in the
upstream group and some of those servers have reached the max_conns limit. This directive can be set to the number of requests to queue and
for how long. If this directive is not set, no queueing occurs.
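
A combined sketch of these directives, with hypothetical zone names, address, and values chosen purely for illustration:

# define shared memory zones keyed on the client IP address
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_req_zone  $binary_remote_addr zone=login:10m rate=2r/s;

server {
    limit_conn perip 10;     # at most 10 connections per client IP
    limit_rate 128k;         # cap bandwidth per connection

    location /login/ {
        limit_req zone=login burst=5;   # throttle login attempts
    }
}

upstream backend {
    server 10.0.0.1:8080 max_conns=100;
    queue 100 timeout=30s;   # NGINX Plus only
}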

Additional Considerations

Some additional features of NGINX that can be used to increase the
performance of a web application don’t really fall under the heading of
tuning, but are worth mentioning because their impact can be
considerable. We will discuss two of these features.

Caching

By enabling caching on an NGINX instance that is load balancing a set
of web or application servers, you can dramatically improve the
response time to clients while at the same time dramatically reducing
the load on the backend servers. Caching is a subject of its own and
will not be covered here. For information, see NGINX Content Caching in the NGINX Admin Guide.

Compression

Compressing responses to clients can greatly reduce their size,
requiring less bandwidth. Because compressing data consumes CPU
resources, it is most useful when there is value in reducing bandwidth
usage. It is important to note that you should not enable compression
for objects that are already compressed, such as JPEG files. For more
information, see Compression and Decompression in the NGINX Admin Guide.
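
A minimal sketch (the MIME-type list is illustrative; responses with the text/html type are always compressed when gzip is on):

gzip on;
gzip_types text/plain text/css application/json application/javascript;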
