Channel: Nginx Forum - Other discussion

pread() error (no replies)

Hi,

I have a video on demand server.
12x1 TB SATA2 RAID10 with 1Gbps connection, Debian 7 (ext4)

I have high disk utilization, so I wanted to make some changes to improve performance for large video files.

I made the following changes:

sendfile off;
aio on;
directio 512k;
output_buffers 1 1m;

The disk utilization and the load average went down and the memory usage went up, which is no problem because I have enough RAM.
So far so good. The only problem is that I now often get the following error:

[crit] 26649#26649: *39145 pread() "/opt/nginx/html/XXX/91.mp4" failed (22: Invalid argument), request: "GET /XXX/XX.mp4?start=2189.223&start_sec=2189.223 HTTP/1.1"

The client then gets a 5xx response back.

I use a native HTML5 player and the kernel player for playback. The kernel player adds a query string to the request, which I don't use. Only these requests produce the error, and not every time.
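For reference, here is roughly how these directives could sit together in the mp4 location. This is only a sketch: the paths are illustrative, and the mp4 directive assumes the server handles the start= argument via ngx_http_mp4_module, which the post does not confirm.

# Sketch only; paths and sizes are illustrative, not taken from the real config.
location ~ \.mp4$ {
    root /opt/nginx/html;

    mp4;                      # assumes ngx_http_mp4_module is compiled in (for the start= argument)
    sendfile off;
    aio on;
    directio 512k;            # use O_DIRECT for files of 512k and larger
    output_buffers 1 1m;
}

If the error persists, directio_alignment (512 bytes by default) is the other directive in this area; whether it is relevant to the EINVAL from pread() here is only a guess.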

Best Regards
Wolfgang

Nginx 1.6.3 compile bug with openssl 1.0.2 (with ALPN enabled) ? (no replies)

$
0
0
I cannot compile the nginx 1.6.3 branch against OpenSSL 1.0.2d. There is a compile error:

src/http/modules/ngx_http_ssl_module.c: In function 'ngx_http_ssl_alpn_select':
src/http/modules/ngx_http_ssl_module.c:351:5: error: 'c' undeclared (first use in this function)
src/http/modules/ngx_http_ssl_module.c:351:5: note: each undeclared identifier is reported only once for each function it appears in
make[1]: *** [objs/src/http/modules/ngx_http_ssl_module.o] Error 1
make: *** [build] Error 2

Looking at the code there:

#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation

static int
ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, const unsigned char **out,
    unsigned char *outlen, const unsigned char *in, unsigned int inlen,
    void *arg)
{
    unsigned int             srvlen;
    unsigned char           *srv;
#if (NGX_DEBUG)
    unsigned int             i;
#endif
#if (NGX_HTTP_SPDY)
    ngx_http_connection_t   *hc;
#endif
#if (NGX_HTTP_SPDY || NGX_DEBUG)
    ngx_connection_t        *c;

    c = ngx_ssl_get_connection(ssl_conn);
#endif

#if (NGX_DEBUG)
    for (i = 0; i < inlen; i += in[i] + 1) {
        ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0,
                       "SSL ALPN supported by client: %*s", in[i], &in[i + 1]);
    }
#endif

#if (NGX_HTTP_SPDY)
    hc = c->data;

    if (hc->addr_conf->spdy) {
        srv = (unsigned char *) NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE;
        srvlen = sizeof(NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1;

    } else
#endif
    {
        srv = (unsigned char *) NGX_HTTP_NPN_ADVERTISE;
        srvlen = sizeof(NGX_HTTP_NPN_ADVERTISE) - 1;
    }

    if (SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen,
                              in, inlen)
        != OPENSSL_NPN_NEGOTIATED)
    {
        return SSL_TLSEXT_ERR_NOACK;
    }

    ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0,
                   "SSL ALPN selected: %*s", *outlen, *out);

    return SSL_TLSEXT_ERR_OK;
}

#endif

What happens if you have TLSEXT_TYPE_application_layer_protocol_negotiation (OpenSSL built with ALPN enabled) but neither NGX_DEBUG nor NGX_HTTP_SPDY, which is the most likely situation? Don't you get an "undeclared variable" error in

ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0,
               "SSL ALPN selected: %*s", *outlen, *out);

because the "c" pointer is never declared?

Am I right?

Blacklist in nginx (no replies)

The goal is to block a list of IP addresses with as little overhead as possible. I have a list of 400,000 IPs that I need to block in nginx. I can use the geo module and define a variable like:
geo $bl {
    ***
}

if ($bl = 1) {
    return 403;
}

Alternatively, I could have a developer write a custom module that works like the geo module and sets a variable based on whether the IP is in the blacklist or not.

Now I have some questions:
Which is fastest (geo or a custom module)?
How much request-processing time do 400,000 IPs in the geo module add?
How many IPs can I add to the geo module?
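For reference, a minimal sketch of the geo-based variant; the blacklist file path is hypothetical and would contain one entry per line (for example "203.0.113.0/24 1;"):

geo $bl {
    default 0;
    include /etc/nginx/blacklist.conf;   # hypothetical path holding the ~400,000 entries
}

server {
    listen 80;

    if ($bl = 1) {
        return 403;
    }

    location / {
        root /var/www/html;              # placeholder
    }
}

The geo module builds its lookup structure once when the configuration is loaded, so the per-request lookup cost stays small even with a very large list; the main price is memory and slower configuration reloads.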

php upload program gives 404 not found message (no replies)

The attached program (index.php) results in a 404 Not Found message. The exact message in the nginx error log is as follows:

2015/11/16 11:44:53 [error] 17711#17711: *815 FastCGI sent in stderr: "PHP message: PHP Parse error: syntax error, unexpected '{', expecting ',' or ';' in /var/www/workspace/index.php on line 11" while reading response header from upstream, client: 127.0.0.1, server: wip, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "wip"

At this point, I cannot find the uploaded file anywhere and suspect that no file was uploaded anywhere. I suspect the problem has to do with the location of $_FILE, but I am at a loss to identify or locate any file or directory where the file was supposed to be uploaded.

Can anyone tell me where to find the uploaded file and/or if it got uploaded?

FreeBSD only use one worker process? (no replies)

Hello, I am testing nginx 1.9.3/1.9.6 on FreeBSD 10.2.

I have set worker_processes to 8 (= my number of CPU cores), and nginx does spawn 8 worker processes.
However, nginx only uses one worker process while I run ApacheBench and monitor with top -P.
I observed that the busy worker process has a higher PRI, but I thought nginx should balance the load across its workers.
Is this expected on FreeBSD for some reason, such as its scheduler, or have I set something up wrong in my nginx.conf?

Below is the ps -aux output captured during the ApacheBench test.

# ps -aux | grep nginx
ray 4870 49.0 0.2 32344 12676 - R 5:43PM 0:39.52 nginx: worker process (nginx)
root 4869 0.0 0.0 24152 3468 - Is 5:43PM 0:00.00 nginx: master process ./sbin/ngi
ray 4871 0.0 0.1 28248 7560 - I 5:43PM 0:00.01 nginx: worker process (nginx)
ray 4872 0.0 0.1 28248 7560 - I 5:43PM 0:00.01 nginx: worker process (nginx)
ray 4873 0.0 0.1 28248 7560 - I 5:43PM 0:00.01 nginx: worker process (nginx)
ray 4874 0.0 0.1 28248 7560 - I 5:43PM 0:00.01 nginx: worker process (nginx)
ray 4875 0.0 0.1 28248 7560 - I 5:43PM 0:00.01 nginx: worker process (nginx)
ray 4876 0.0 0.1 28248 7560 - I 5:43PM 0:00.01 nginx: worker process (nginx)
ray 4877 0.0 0.1 28248 7560 - I 5:43PM 0:00.01 nginx: worker process (nginx)

You can see only one process working.


In addition, I have tested the maximum number of connections to confirm this problem.
Generally, the maximum number of connections should be worker_processes * worker_connections, but I could only reach the worker_connections limit (set to 10000), according to the nginx stub_status module:

Active connections: 9985
server accepts handled requests
5205900 5205900 5103413
Reading: 0 Writing: 1 Waiting: 9984

Below is my nginx.conf:

user ray;
worker_processes 8;
worker_rlimit_nofile 1000000;
error_log logs/error.log warn;

events {
use kqueue;
accept_mutex off;
worker_connections 10000;
}

http {
include mime.types;
default_type application/octet-stream;

access_log off;
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
keepalive_requests 10000;


gzip on;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
server {
listen 80 backlog=4096 reuseport so_keepalive=30::10;
server_name localhost;

#charset koi8-r;

#access_log logs/host.access.log main;

location / {
root /mnt/html;
index index.html index.htm;
}
location /nginx_status {
# Turn on stats
stub_status on;
access_log off;
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /mnt/html;
}
}
}
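A variation that might be worth testing, on the assumption that the imbalance is related to the reuseport listener combined with accept_mutex off (SO_REUSEPORT on FreeBSD does not necessarily spread new connections across workers the way it does on Linux). The values below are copied from the config above; this is a test sketch, not a confirmed fix.

events {
    use kqueue;
    accept_mutex on;          # let the workers take turns accepting connections
    worker_connections 10000;
}

# inside the existing server block:
# listen 80 backlog=4096 so_keepalive=30::10;   # i.e. the same listen line without "reuseport"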

502 Bad Gateway (1 reply)

Hi All,

I have spent a lot of time trying to find a fix for this error. I have noticed that it happens only when I try to upload a zip file larger than 56 KB.

The server works fine for uploads up to 55 KB. I have tried changing many parameters to fix this, but nothing works.

I am using nginx as a reverse proxy in front of Mojolicious.

My configs and errors:

upstream ppp {
server 127.0.0.1:8080 max_fails=10 fail_timeout=60s;
}
server {
listen 80;
server_name localhost;
client_max_body_size 90000M;
access_log /var/log/nginx/ppp.log main;
error_log /var/log/nginx/ppp.error.log;
location / {
proxy_pass http://ppp;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "http";
proxy_buffers 32 16k;
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
}
}


user nginx;
worker_processes 4;
worker_rlimit_nofile 8192;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;


events {
worker_connections 4096;
multi_accept on;
}


http {
include /etc/nginx/mime.types;
default_type application/octet-stream;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
#tcp_nopush on;

keepalive_timeout 65;

#gzip on;
# ADDED
include /etc/nginx/conf.d/*.conf;

}



### Errors


Failed upload file 'Automation.zip' because of "Bad Gateway". Response: "<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.8.0</center>
</body>
</html>
"



###


2015/11/20 10:13:18 [error] 6281#0: *3 upstream prematurely closed connection while reading response header from upstream, client: 10.x.x.x.x, server: localhost, request: "POST / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "10.y.y.y"
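Not an answer, but since the failure tracks the upload size and the upstream is the side closing the connection, one way to narrow it down is to change how nginx handles the request body before handing it to Mojolicious. A sketch with illustrative values (proxy_request_buffering exists from nginx 1.7.11 onwards):

location / {
    client_max_body_size    90000m;
    client_body_buffer_size 1m;        # keep bodies of the failing size in memory
    proxy_request_buffering on;        # the default; try "off" to stream the body to the upstream instead
    proxy_http_version      1.1;
    proxy_pass              http://ppp;
}

Separately, the config above sends Connection "upgrade" on every request; the usual pattern derives that header from $http_upgrade via a map so that plain POSTs are not marked as upgrades. Whether that contributes here is only a guess.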

NGINX TOMCAT + HTTPS (no replies)

Hi all,


I have a problem with NGINX :D


My problem:
NGINX is my reverse proxy; when I access my URL, the NGINX IP is replaced by that of Tomcat.


On my local network:
https://@ip_nginx/app ==> https://@ip_tomcat:8443/app ==> the web page is OK, but I see the IP of my Tomcat server.
I would like the redirection to happen without exposing the IP of my Tomcat server.


From the Internet:
https://toto/app ==> https://@ip_tomcat:8443/app
This doesn't work, because my browser ends up trying to reach a private IP.


My config:
Tomcat 8:
port redirection from 8080 to 8443, with a certificate;
my config file "server.txt" is attached.


Nginx 1.9.6:
my site config file "site.txt" is attached.
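Since those attachments are not reproduced here, the following is only a generic sketch of the usual pattern: terminate TLS on nginx, proxy to Tomcat, and rewrite any Location headers that would otherwise expose the Tomcat address. Addresses and certificate paths are placeholders.

server {
    listen 443 ssl;
    server_name toto;

    ssl_certificate     /etc/nginx/ssl/toto.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/toto.key;

    location /app {
        proxy_pass https://ip_tomcat:8443/app;     # placeholder for the Tomcat address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Rewrite redirects issued by Tomcat so the browser never sees the backend address:
        proxy_redirect https://ip_tomcat:8443/ https://$host/;
    }
}

Note that proxy_redirect only rewrites Location response headers; if the application itself generates absolute links containing the Tomcat address, configuring the Tomcat connector's proxyName/proxyPort (or generating relative links) is the usual complement.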

Thanks for your help.

NGINX issues with cookie names (no replies)

Somewhere in our organization, a cookie is being set that has a %40 (a URL-encoded @ sign) in its name. When nginx encounters this cookie, it responds to users with an "Invalid Request" message. If I manipulate the name of the cookie and remove the encoded at sign, everything works as expected.

Is there any way to configure nginx to ignore this one cookie?
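There is no directive for skipping a single cookie, but one workaround (a rough sketch, with an illustrative regex, since the exact cookie name is not shown) is to rebuild the Cookie header without the offending entry before proxying:

location / {
    set $cookies $http_cookie;

    # Drop the first cookie whose name contains a literal "%40"; the regex is illustrative
    # and may leave a stray "; " when that cookie is the first one in the header.
    if ($cookies ~ "^(.*?)(?:;\s*)?[^;=]*%40[^;=]*=[^;]*(.*)$") {
        set $cookies "$1$2";
    }

    proxy_set_header Cookie $cookies;
    proxy_pass http://backend;        # placeholder upstream
}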

Latest release is broken? (1 reply)

Hello!

I always install NGINX from source on Ubuntu machines.
Today it seems that the new Release file created for both mainline and stable is malformed.

I tried with both mainline and stable; during apt-get update I get: "Unable to find expected entry 'nginx/source/Sources' in Release file (Wrong sources.list entry or malformed file)"

Is something broken on my side, or is the Release file currently broken?
I tried installing on multiple fresh machines.
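For what it's worth, the error is specifically about the source-package index: a deb-src line is what makes apt look for 'nginx/source/Sources' in the Release file. A minimal sources.list sketch (file name and distro codename are placeholders):

# /etc/apt/sources.list.d/nginx.list  -- placeholder path and codename
deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx

# The deb-src line is what requires 'nginx/source/Sources' to be listed in the Release file;
# commenting it out avoids the error if the repository does not publish a source index.
# deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx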

speed issues - how to diagnose it correctly? (1 reply)

I have a Linux setup with nginx passing requests from an external IP to an internal application. When I access it, the app responds very slowly, taking tens of seconds. However, in the application's output log I see that it processes requests quite quickly: building a page in 0.6 s and serving linked resources (CSS, JPG) in under 0.001 s.

It often ends with nginx returning a 504. Sometimes the page loads only partially; sometimes the load completes, but it takes far too long.

When a 504 occurs, there is just the 504 entry in the nginx log, as if it were not nginx's fault but the application's.

What else, apart from reading the logs, can I do to diagnose this?
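One cheap diagnostic (a sketch, not a fix) is to make nginx log its own timing per request, so you can compare the total time nginx spent ($request_time) with the time the backend took ($upstream_response_time). Paths and the upstream address are placeholders:

http {
    log_format timing '$remote_addr "$request" $status '
                      'req_time=$request_time upstream_time=$upstream_response_time';

    server {
        listen 80;
        access_log /var/log/nginx/timing.log timing;   # placeholder path

        location / {
            proxy_pass http://127.0.0.1:8000;          # placeholder upstream
        }
    }
}

If $upstream_response_time stays small while $request_time is large, the time is being spent in nginx or on the connection to the client; if both are large, the backend or the link to it is the place to look.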

Nginx 1.8.0 - checking generally security (no replies)

Hello,

I am using a DigitalOcean LAMP stack on Ubuntu 14.04.2 and nginx 1.8.0.

In /var/log/nginx I see many error log files (1, 2, 4, 5, etc.); is that normal? Also, the owner and group of these files is www-data:adm.
I am checking security in general...

Thanks a lot.
George

Every other 60 second timeouts (4 replies)

Hi,

We're using NGINX as a reverse proxy and are having difficulty getting consistent throughput. The service we have stood up behind the proxy is a NodeJS service running on CoreOS on an AWS EC2 instance, sitting behind an ELB. For the proxy, we're using NGINX as part of Kong running in a CoreOS Docker container. The version is openresty/1.9.3.1. We're using a simple proxy_pass to route the traffic through.

We have a few of these NGINX instances, all configured the same. They'll work great for a while (1-4 days), and then the traffic passing through enters a pattern where every other call proxies through instantly, while the remaining calls are delayed in NGINX for 60 seconds before making it out to the upstream service and getting a response back to return to the caller.

I've configured other test upstream services to see whether the problem is tied to the service being fronted; however, as soon as I apply config changes to NGINX, the problem goes away for a while. That makes this a slow process and hard to validate once we think we have a fix, as we basically need to wait 2-3x the usual time-to-failure to see whether it fails again.
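One hypothesis worth checking (my assumption, not something established in this thread): proxy_pass to a hostname is resolved only when the configuration is loaded, and the default proxy_connect_timeout is 60s, so if the upstream sits behind an ELB whose IP addresses rotate, some of the cached addresses can go stale and every request routed to a stale one hangs for roughly 60 seconds; reloading the config (as when you apply changes) re-resolves the name and hides the problem for a while. A sketch that forces runtime resolution; names and the resolver address are placeholders, and with Kong this would have to be applied through its config templates:

resolver 10.0.0.2 valid=30s;                            # placeholder: the VPC DNS resolver

server {
    listen 80;

    location / {
        set $backend_host my-service-elb.example.com;   # placeholder ELB hostname
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_connect_timeout 5s;                       # fail fast instead of the 60s default
        proxy_pass http://$backend_host;
    }
}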

Has anyone seen this before or know how to fix it?

Thanks!
-Brian

Reverse proxy issues (1.4.6, 1.8.0) (1 reply)

I'm recreating the topic: https://forum.nginx.org/read.php?15,263481

I've done some tests and more or less localized the problem; it seems to point to a particular issue, but I can't edit the old topic's subject.

I'm deploying an ASP.NET 5 app on Ubuntu 14.04, using nginx (tried versions 1.4.6 and 1.8.0) as reverse proxy like this:

server {
    listen 8080;
    server_name localhost;

    proxy_connect_timeout 5;
    proxy_send_timeout 5;
    proxy_read_timeout 5;
    send_timeout 5;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:5002;
    }
}

If I access the app directly at http://127.0.0.1:5002 from a local browser, it works correctly and responds fast. However, when accessing 192.168.0.1:8080 or, locally, 127.0.0.1:8080, it doesn't respond correctly.

According to the app log, the requested page is actually rendered quite fast; the application then starts passing it to nginx, but suddenly stops transmitting data without warning (or nginx stops reading the data while acting as if it is still listening). The transmission is always cut off only near the end, so when there is more data, part of it does make its way through nginx; when there is only a very small amount of data, almost nothing gets through at all, just the response headers. Nginx seems unaware that the transmission has stopped and simply waits for it to continue; once it hits the timeout, it sends on whatever data it has received up to that moment. For small, simple template pages that is just the response headers. For a larger page of a couple of kilobytes, most of the page's HTML is transmitted, but the page is incomplete (no closing body and html tags and some other markup).

Sometimes it is just a 504 error, as no data from the app was transmitted in response.
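For comparison, here is a minimal location sketch along the lines of the proxy settings the ASP.NET 5 / Kestrel documentation recommended at the time; whether the missing pieces (HTTP/1.1 toward the backend and the Connection header) are actually the cause of the truncated responses is an assumption on my part:

location / {
    proxy_pass         http://127.0.0.1:5002;
    proxy_http_version 1.1;                     # talk HTTP/1.1 to Kestrel
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection keep-alive;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_cache_bypass $http_upgrade;
}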

nginx V1.8.0 : Size of URL (8 replies)

If I'm posting here, it's because I'm going crazy.

I use nginx as a reverse proxy. Everything works fine, but as soon as the URL gets too long (nothing crazy, around 1000 characters) it no longer works.

Below is the log message for a URL that works (OK), and the message when it blocks (KO).

OK :
2015/12/29 18:12:41 [debug] 1036#2784: *23 WSARecv: fd:420 rc:0 1378 of 131072

KO :
2015/12/29 18:03:05 [debug] 1036#2784: *2 WSARecv: fd:408 rc:-1 0 of 131072
2015/12/29 18:03:05 [info] 1036#2784: *2 WSARecv() failed (10054: FormatMessage() error:(15105)) while waiting for request, client: 2.12.36.49, server: 0.0.0.0:80

Here is the configuration I use:

http {
include C:/nginx-1.8.0/conf/mime.types;
default_type application/octet-stream;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log C:/nginx-1.8.0/logs/access_main.log main;

sendfile on;
tcp_nopush on;

#keepalive_timeout 0;
keepalive_timeout 65;

tcp_nodelay on;
gzip on;
gzip_comp_level 5;
gzip_http_version 1.0;
gzip_min_length 0;
gzip_types text/plain text/html text/css image/x-icon application/x-javascript;
gzip_vary on;

include C:/nginx-1.8.0/conf.d/*.conf;

server {
listen 80;
server_name localhost;
access_log C:/nginx-1.8.0/logs/web2print.access.log;
error_log C:/nginx-1.8.0/logs/korben.nginx_error.log debug;

location /web2print/ {
proxy_pass http://192.100.100.82:6080/web2print/;
}

}
}



If you could help me, I'd appreciate it.

Thank you

Gorki

Files treated differently. Why? (3 replies)

file 1: https://textfiles.meulie.net/survival/tenherbs.txt
file 2: https://textfiles.meulie.net/survival/1hrrads.txt

When trying to open file 1 in a browser, I get prompted to download it. When trying to open file 2, it displays in the browser.

I've checked both with Redbot, and can't see any difference:
file 1: https://redbot.org/?uri=https%3A%2F%2Ftextfiles.meulie.net%2Fsurvival%2Ftenherbs.txt
file 2: https://redbot.org/?uri=https%3A%2F%2Ftextfiles.meulie.net%2Fsurvival%2F1hrrads.txt

No special config in the site-config either. What can explain this difference in treatment?

Regards,
Evert

X-Accel-Redirect bracket character not url encoded (1.9.4) (1 reply)

Hello,

I'm noticing an issue where square bracket characters are not URL-encoded when passed to the upstream server. My config is:

upstream avatar {
    server localhost:8879;
}

location /avatar-internal/ {
    internal;
    proxy_pass http://avatar/;
}

I'm passing a URL-encoded value in the X-Accel-Redirect header,
e.g. /avatar-internal/file%5D.jpg?x=1

but nginx sends the URL
file].jpg?x=1

instead of the correct
file%5D.jpg?x=1

It does not re-encode the [ and ] characters, but it seems to re-encode other special characters like ? and #. I'm using version 1.9.4. This seems like a bug.

nginx load balancer config? - help! :) (no replies)

Hi! I'm new to NGINX and I would like to set up a load balancer for our web servers. This is what my /etc/nginx/sites-available/default looks like:

upstream server {
    server 192.168.1.101;
    server 192.168.1.102;
}

server {
    listen 80;

    location / {
        proxy_set_header X*Forwarded*For $proxy_add_x_forwarded_for;
        proxy_pass http://server;
    }
}

But if I run nginx -t, I get this error:

nginx: [emerg] unknown directive "upstream server" in /etc/nginx/sites-enabled/default:1
nginx: configuration file /etc/nginx/nginx.conf test failed

I searched the internet, and people fix this kind of issue by putting the block inside http {} or by adding this line to nginx.conf: include /etc/nginx/sites-enabled/*;
I have tried several solutions found on the internet and still get this error. I even created a new VM for a fresh setup, still the same. BTW, I was following this example: https://www.youtube.com/watch?v=SpL_hJNUNEI
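When nginx prints both words inside the quoted directive name, as in unknown directive "upstream server", it often means the character between them is not a plain ASCII space (for example a non-breaking space picked up while copy-pasting), so nginx reads the two words as a single unknown token. Below is a retyped sketch of the same file with plain spaces, a different (hypothetical) upstream name, and the conventional X-Forwarded-For header; it must end up inside the http {} context, which files under /etc/nginx/sites-enabled/* normally are:

upstream backend {
    server 192.168.1.101;
    server 192.168.1.102;
}

server {
    listen 80;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;
    }
}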

Inputs will be greatly appreciated. Thank you!

nginx-upsync-module (2 replies)

Hi nginx users,

I have made a new nginx module for syncing upstreams from Consul, etcd, and the like.

For now it only supports Consul. I think it is cool; the GitHub address is https://github.com/weibocom/nginx-upsync-module .

If you are interested in the module, please try it; any feedback is welcome.

Thanks

[newbie] nginx extremely slow(1 reqs/sec) when serving static files (no replies)

Hi there,

I use nginx to serve static files and Django (gunicorn), and it turns out to be extremely slow (~1 request/second, I can hardly believe it) even for static files. I've tried almost all the methods I could find, including disabling iptables, disabling gunicorn, adding a file cache, increasing worker_rlimit_nofile, and disabling the access log, but performance does not improve.

As the throughput is so low, I believe it must be a silly problem, but I could not figure it out after fighting with it for over a day. Any help or suggestions are highly appreciated.

A. Here is the basic environment of my server:
A1.ping
PING myserver.com (182.92.**.**): 56 data bytes
64 bytes from 182.92.**.**: icmp_seq=0 ttl=42 time=7.652 ms

A2. top
top - 00:28:52 up 1:08, 2 users, load average: 0.03, 0.03, 0.05
Tasks: 115 total, 1 running, 114 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.7 us, 0.2 sy, 0.0 ni, 99.0 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1882832 total, 1383480 free, 163208 used, 336144 buff/cache
KiB Swap: 2097148 total, 2097148 free, 0 used. 1577200 avail Mem

A3. ulimit -n
65535
cat /proc/sys/fs/file-max
185146
cat /proc/sys/fs/file-nr
1024 0 185146
less /etc/sysctl.conf
net.ipv4.tcp_max_syn_backlog = 32768
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 10000 65000
net.core.somaxconn = 32768
net.core.netdev_max_backlog = 32768
net.ipv4.tcp_max_orphans = 32768
...

A4. iptables // even when I disable iptables, throughput does not increase
sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 127.0.0.1 127.0.0.1 tcp dpt:3306
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1234 state NEW,ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 state NEW,ESTABLISHED

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:1234 state ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:80 state ESTABLISHED

A5. nginx.conf
user nginx;
worker_processes auto; // with 2 cpu cores
worker_cpu_affinity 01 10;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
worker_rlimit_nofile 65535;

events {
worker_connections 16384;
use epoll;
multi_accept on;
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent ($request_time-$upstream_response_time) "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;

open_file_cache max=65535 inactive=5m;
open_file_cache_valid 60s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

include /etc/nginx/mime.types;
default_type application/octet-stream;

include /etc/nginx/conf.d/*.conf; // where contains gzip.conf
include /etc/nginx/servers/*.conf; // where contains server.conf
}

A6. server.conf
server {
listen 80 default backlog=8192;
server_name myhost.com;
root /www/myhost/latest;
access_log /var/log/nginx/myhost.log main;

location / {
proxy_pass http://0.0.0.0:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header HOST $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_redirect off;
}

location ~ /images {
if ($http_origin ~ "myhost.com$") {
add_header "Access-Control-Allow-Origin" $http_origin;
add_header 'Access-Control-Allow-Methods' 'GET';
}
root /storage;
expires max;
}

location ~ /.ht {
deny all;
}
}

B. When I run ab tests with 2 or 50 concurrent connections, the performance is poor:
ab -s 30000 -n 50 -c 50 http://myserver.com/images/test.jpg

Requests per second: 0.97 [#/sec] (mean)
Transfer rate: 174.02 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 5 9 6.4 8 70
Processing: 8219 23303 10201.3 22035 73695
Waiting: 7 794 2325.1 76 13760
Total: 8227 23311 10200.9 22044 73700

B1. netstat -an|awk '/^tcp/{++S[$NF]}END{for (a in S)print a,S[a]}'
LISTEN 4
SYN_RECV 1
ESTABLISHED 67 // changes over time, but TIME_WAIT has a maximum value of 1 or 2;

B2. top
nginx memory and CPU usage look fine

B3. If I enable the stub_status module, it also seems fine:
Active connections: 55
server accepts handled requests
164 164 235
Reading: 0 Writing: 40 Waiting: 15 // changes over time

C. Useful information I could find:

C1. access log files:
117.114.129.170 - - [16/Jan/2016:23:24:29 +0800] "GET /images/test.jpg HTTP/1.0" 200 219868 (37.269--) "-" "ApacheBench/2.3" "-"
Where 37.269 is the $request_time configured above.
From B1 I am sure that all connections are established (not in TIME_WAIT state), so the delay seems to be on the nginx side.

C2. I enabled debug level for the error log:
There seem to be a lot of "Resource temporarily unavailable" errors.

2016/01/16 23:47:51 [debug] 1270#0: accept() not ready (11: Resource temporarily unavailable)
2016/01/16 23:47:51 [debug] 1270#0: *468 recv() not ready (11: Resource temporarily unavailable)

C3. And here is a typical log for one request, which I understand little of:
2016/01/16 21:42:01 [debug] 15027#0: *907 accept: 117.114.**.** fd:21
2016/01/16 21:42:01 [debug] 15027#0: posix_memalign: 00007F7B51D3A9C0:256 @16
2016/01/16 21:42:01 [debug] 15027#0: *907 event timer add: 21: 60000:1452951781579
2016/01/16 21:42:01 [debug] 15027#0: *907 reusable connection: 1
2016/01/16 21:42:01 [debug] 15027#0: *907 epoll add event: fd:21 op:1 ev:80002001
2016/01/16 21:42:01 [debug] 15027#0: accept() not ready (11: Resource temporarily unavailable)
2016/01/16 21:42:01 [debug] 15027#0: worker cycle
2016/01/16 21:42:01 [debug] 15027#0: accept mutex locked
2016/01/16 21:42:01 [debug] 15027#0: epoll timer: 44947
2016/01/16 21:42:01 [debug] 15027#0: epoll: fd:21 ev:0001 d:00007F7B48FF1880
2016/01/16 21:42:01 [debug] 15027#0: *907 post event 00007F7B4FE4A420
2016/01/16 21:42:01 [debug] 15027#0: *907 delete posted event 00007F7B4FE4A420
2016/01/16 21:42:01 [debug] 15027#0: *907 http wait request handler
2016/01/16 21:42:01 [debug] 15027#0: *907 malloc: 00007F7B51DDC200:1024
2016/01/16 21:42:01 [debug] 15027#0: *907 recv: fd:21 104 of 1024
2016/01/16 21:42:01 [debug] 15027#0: *907 reusable connection: 0
2016/01/16 21:42:01 [debug] 15027#0: *907 posix_memalign: 00007F7B51DC2520:4096 @16
2016/01/16 21:42:01 [debug] 15027#0: *907 http process request line
2016/01/16 21:42:01 [debug] 15027#0: *907 http request line: "GET /images/test.jpg HTTP/1.0"
2016/01/16 21:42:01 [debug] 15027#0: *907 http uri: "/images/test.jpg"
2016/01/16 21:42:01 [debug] 15027#0: *907 http args: ""
2016/01/16 21:42:01 [debug] 15027#0: *907 http exten: "jpg"
2016/01/16 21:42:01 [debug] 15027#0: *907 http process request header line
2016/01/16 21:42:01 [debug] 15027#0: *907 http header: "Host: static.tongshijia.com"
2016/01/16 21:42:01 [debug] 15027#0: *907 http header: "User-Agent: ApacheBench/2.3"
2016/01/16 21:42:01 [debug] 15027#0: *907 http header: "Accept: */*"
2016/01/16 21:42:01 [debug] 15027#0: *907 http header done
2016/01/16 21:42:01 [debug] 15027#0: *907 event timer del: 21: 1452951781579
2016/01/16 21:42:01 [debug] 15027#0: *907 generic phase: 0
2016/01/16 21:42:01 [debug] 15027#0: *907 rewrite phase: 1
2016/01/16 21:42:01 [debug] 15027#0: *907 test location: "/"
2016/01/16 21:42:01 [debug] 15027#0: *907 test location: "50x.html"
2016/01/16 21:42:01 [debug] 15027#0: *907 test location: ~ "/static/imager"
2016/01/16 21:42:01 [debug] 15027#0: *907 test location: ~ "/static"
2016/01/16 21:42:01 [debug] 15027#0: *907 test location: ~ "/images"
2016/01/16 21:42:01 [debug] 15027#0: *907 using configuration "/images"
2016/01/16 21:42:01 [debug] 15027#0: *907 http cl:-1 max:1048576
2016/01/16 21:42:01 [debug] 15027#0: *907 rewrite phase: 3
2016/01/16 21:42:01 [debug] 15027#0: *907 posix_memalign: 00007F7B51DE08E0:4096 @16
2016/01/16 21:42:01 [debug] 15027#0: *907 http script var
2016/01/16 21:42:01 [debug] 15027#0: *907 http script regex: "tongshijia.com$"
2016/01/16 21:42:01 [notice] 15027#0: *907 "tongshijia.com$" does not match "", client: 117.114.129.170, server: static.tongshijia.com, request: "GET /images/test.jpg HTTP/1.0", host: "static.tongshijia.com"
2016/01/16 21:42:01 [debug] 15027#0: *907 http script if
2016/01/16 21:42:01 [debug] 15027#0: *907 http script if: false
2016/01/16 21:42:01 [debug] 15027#0: *907 post rewrite phase: 4
2016/01/16 21:42:01 [debug] 15027#0: *907 generic phase: 5
2016/01/16 21:42:01 [debug] 15027#0: *907 generic phase: 6
2016/01/16 21:42:01 [debug] 15027#0: *907 generic phase: 7
2016/01/16 21:42:01 [debug] 15027#0: *907 generic phase: 8
2016/01/16 21:42:01 [debug] 15027#0: *907 access phase: 9
2016/01/16 21:42:01 [debug] 15027#0: *907 access phase: 10
2016/01/16 21:42:01 [debug] 15027#0: *907 post access phase: 11
2016/01/16 21:42:01 [debug] 15027#0: *907 content phase: 12
2016/01/16 21:42:01 [debug] 15027#0: *907 content phase: 13
2016/01/16 21:42:01 [debug] 15027#0: *907 content phase: 14
2016/01/16 21:42:01 [debug] 15027#0: *907 content phase: 15
2016/01/16 21:42:01 [debug] 15027#0: *907 content phase: 16
2016/01/16 21:42:01 [debug] 15027#0: *907 content phase: 17
2016/01/16 21:42:01 [debug] 15027#0: *907 http filename: "/storage/images/test.jpg"
2016/01/16 21:42:01 [debug] 15027#0: *907 add cleanup: 00007F7B51DC34F8
2016/01/16 21:42:01 [debug] 15027#0: *907 cached open file: /storage/images/test.jpg, fd:18, c:5, e:0, u:5
2016/01/16 21:42:01 [debug] 15027#0: *907 http static fd: 18
2016/01/16 21:42:01 [debug] 15027#0: *907 http set discard body
2016/01/16 21:42:01 [debug] 15027#0: *907 xslt filter header
2016/01/16 21:42:01 [debug] 15027#0: *907 HTTP/1.1 200 OK
Server: nginx
.........
2016/01/16 21:42:01 [debug] 15027#0: *907 write new buf t:1 f:0 00007F7B51DE0BC8, pos 00007F7B51DE0BC8, size: 329 file: 0, size: 0
2016/01/16 21:42:01 [debug] 15027#0: *907 http write filter: l:0 f:0 s:329
2016/01/16 21:42:01 [debug] 15027#0: *907 http output filter "/images/test.jpg?"
2016/01/16 21:42:01 [debug] 15027#0: *907 http copy filter: "/images/test.jpg?"
2016/01/16 21:42:01 [debug] 15027#0: *907 image filter
2016/01/16 21:42:01 [debug] 15027#0: *907 xslt filter body
2016/01/16 21:42:01 [debug] 15027#0: *907 http postpone filter "/images/test.jpg?" 00007FFF6F320C20
2016/01/16 21:42:01 [debug] 15027#0: *907 write old buf t:1 f:0 00007F7B51DE0BC8, pos 00007F7B51DE0BC8, size: 329 file: 0, size: 0
2016/01/16 21:42:01 [debug] 15027#0: *907 write new buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 219868
2016/01/16 21:42:01 [debug] 15027#0: *907 http write filter: l:1 f:0 s:220197
2016/01/16 21:42:01 [debug] 15027#0: *907 http write filter limit 0
2016/01/16 21:42:01 [debug] 15027#0: *907 tcp_nopush
2016/01/16 21:42:01 [debug] 15027#0: *907 writev: 329
2016/01/16 21:42:01 [debug] 15027#0: *907 sendfile: @0 219868
2016/01/16 21:42:01 [debug] 15027#0: *907 sendfile: 16807, @0 16807:219868
2016/01/16 21:42:01 [debug] 15027#0: *907 http write filter 00007F7B51DE0DA0
2016/01/16 21:42:01 [debug] 15027#0: *907 http copy filter: -2 "/images/test.jpg?"
2016/01/16 21:42:01 [debug] 15027#0: *907 http finalize request: -2, "/images/test.jpg?" a:1, c:1

Feel free to tell me if more information is needed, and again thanks a lot for any help or suggestions.

regards,
aqingsir

Nginx 1.8.1 update question (no replies)

Hi,

I am running nginx on CentOS 6.7, 32-bit.

I am using the official nginx repo and recently updated nginx to v1.8.1 (update nginx-1.8.1-1.el6.ngx.i386).
Then I found out that the index.html I had in the default location (/usr/share/nginx/html/index.html) got overwritten by the default nginx index.html file.

I guess some might argue that I should use a virtual host, but since I am only hosting a single site I didn't feel I needed one (though I may do so in the future ;))

But apart from that remark (virtual host usage), is it expected behaviour that this file gets overwritten when updating versions?

Thanks for your input on this.

Yves