
Tornado, Nginx, Apache ab - apr_socket_recv: Connection reset by peer (104)

I am running nginx and Tornado on c1.medium instances.

When I run ab, the output below is what I get. Nginx will not keep up. I have tried modifying the nginx config file, with no results. If I hit a single port directly, bypassing nginx, e.g.

`http://127.0.0.1:8050/pixel?tt=ff`

then it is fast. See the bottom. This must be an nginx problem, so how can I fix it? The nginx conf file is also shown below.

[email protected]:/etc/service# ab -n 10000 -c 50 http://127.0.0.1/pixel?tt=ff 
This is ApacheBench, Version 2.3 <$Revision: 655654 $> 
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ 
Licensed to The Apache Software Foundation, http://www.apache.org/ 
Benchmarking 127.0.0.1 (be patient) 
Completed 1000 requests 
Completed 2000 requests 
Completed 3000 requests 
Completed 4000 requests 
Completed 5000 requests 
Completed 6000 requests 
Completed 7000 requests 
Completed 8000 requests 
Completed 9000 requests 
apr_socket_recv: Connection reset by peer (104) 
Total of 9100 requests completed 

This should fly, but it still does not.

I have set the following parameters:

ulimit is at 100000 

# General gigabit tuning: 
net.core.rmem_max = 16777216 
net.core.wmem_max = 16777216 
net.ipv4.tcp_rmem = 4096 87380 16777216 
net.ipv4.tcp_wmem = 4096 65536 16777216 
net.ipv4.tcp_syncookies = 1 
# this gives the kernel more memory for tcp 
# which you need with many (100k+) open socket connections 
net.ipv4.tcp_mem = 50576 64768 98152 
net.core.netdev_max_backlog = 2500 
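To apply settings like the ones above without a reboot, they can be set with `sysctl -w` (or appended to `/etc/sysctl.conf` and loaded with `sysctl -p`). A sketch, assuming root on Ubuntu; the values simply mirror the list above:

```shell
# Apply the TCP tuning above at runtime (requires root).
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.core.netdev_max_backlog=2500

# To persist across reboots, put the same keys in /etc/sysctl.conf
# and reload them with:
#   sysctl -p

# Raise the open-file limit for the current shell (the 100000 above):
ulimit -n 100000
```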

Here is my nginx conf:

user www-data; 
worker_processes 1; # 2*number of cpus 
pid /var/run/nginx.pid; 
worker_rlimit_nofile 32768; 
events { 
     worker_connections 30000; 
     multi_accept on; 
     use epoll; 
} 

http { 
     upstream frontends { 
      server 127.0.0.1:8050; 
      server 127.0.0.1:8051; 
     } 
     sendfile on; 
     tcp_nopush on; 
     tcp_nodelay on; 
     keepalive_timeout 65; 
     types_hash_max_size 2048; 
     # server_tokens off; 
     # server_names_hash_bucket_size 64; 
     # server_name_in_redirect off; 

     include /etc/nginx/mime.types; 
     default_type application/octet-stream; 

     # Only retry if there was a communication error, not a timeout 
    # on the Tornado server (to avoid propagating "queries of death" 
    # to all frontends) 
    proxy_next_upstream error; 

     server {
          listen 80;
          server_name 127.0.0.1;
          ## For tornado
          location / {
               proxy_pass_header Server;
               proxy_set_header Host $http_host;
               proxy_redirect off;
               proxy_set_header X-Real-IP $remote_addr;
               proxy_set_header X-Scheme $scheme;
               proxy_pass http://frontends;
          }
     }
}
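A quick way to validate an nginx configuration before reloading it (this catches truncated blocks and missing braces; requires root and an installed nginx):

```shell
# Parse-check the configuration, and reload only if it is clean.
nginx -t && nginx -s reload
```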

If I run ab directly, bypassing nginx:

ab -n 100000 -c 1000 http://127.0.0.1:8050/pixel?tt=ff 



[email protected]:/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbTornadoServer# ab -n 100000 -c 1000 http://127.0.0.1:8050/pixel?tt=ff 
This is ApacheBench, Version 2.3 <$Revision: 655654 $> 
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ 
Licensed to The Apache Software Foundation, http://www.apache.org/ 

Benchmarking 127.0.0.1 (be patient) 
Completed 10000 requests 
Completed 20000 requests 
Completed 30000 requests 
Completed 40000 requests 
Completed 50000 requests 
Completed 60000 requests 
Completed 70000 requests 
Completed 80000 requests 
Completed 90000 requests 
Completed 100000 requests 
Finished 100000 requests 


Server Software:  TornadoServer/2.2.1 
Server Hostname:  127.0.0.1 
Server Port:   8050 

Document Path:   /pixel?tt=ff 
Document Length:  42 bytes 

Concurrency Level:  1000 
Time taken for tests: 52.436 seconds 
Complete requests:  100000 
Failed requests:  0 
Write errors:   0 
Total transferred:  31200000 bytes 
HTML transferred:  4200000 bytes 
Requests per second: 1907.08 [#/sec] (mean) 
Time per request:  524.363 [ms] (mean) 
Time per request:  0.524 [ms] (mean, across all concurrent requests) 
Transfer rate:   581.06 [Kbytes/sec] received 

Connection Times (ms) 
       min mean[+/-sd] median max 
Connect:  0 411 1821.7  0 21104 
Processing: 23 78 121.2  65 5368 
Waiting:  22 78 121.2  65 5368 
Total:   53 489 1845.0  65 23230 

Percentage of the requests served within a certain time (ms) 
    50%  65 
    66%  69 
    75%  78 
    80%  86 
    90% 137 
    95% 3078 
    98% 3327 
    99% 9094 
100% 23230 (longest request) 
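The summary numbers can be sanity-checked from the raw figures: requests per second is total requests over wall time, and mean time per request multiplies the inverse of that by the concurrency level. A quick check with awk, using the values from the run above:

```shell
# Recompute ab's derived metrics from the run above.
awk 'BEGIN {
    n = 100000      # Complete requests
    t = 52.436      # Time taken for tests (seconds)
    c = 1000        # Concurrency level
    printf "req/s:  %.2f\n", n / t             # ~1907, as reported
    printf "ms/req: %.3f\n", c * t * 1000 / n  # ~524.36 ms mean
}'
```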


2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1" 
2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1" 
2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1" 
2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1" 
2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1" 
2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1" 
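Errors like these mean nginx could not reach either backend at all. Two quick checks: summarize the error log per upstream, and verify something is actually listening on the upstream ports. A sketch; the heredoc reproduces two of the lines above, and on a live box the pipeline would read `/var/log/nginx/error.log` instead:

```shell
# Count "Connection refused" errors per upstream in an nginx error log.
cat <<'EOF' | grep -o 'upstream: "[^"]*"' | sort | uniq -c
2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"
EOF

# And check whether the Tornado backends are listening at all:
#   ss -tln '( sport = :8050 or sport = :8051 )'
```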

Output when using the -v 10 option on ab:

GIF89a 
LOG: Response code = 200 
LOG: header received: 
HTTP/1.1 200 OK 
Date: Wed, 16 May 2012 21:56:50 GMT 
Content-Type: image/gif 
Content-Length: 42 
Connection: close 
Etag: "d5fceb6532643d0d84ffe09c40c481ecdf59e15a" 
Server: TornadoServer/2.2.1 
Set-Cookie: rtbhui=867bccde-2bc0-4518-b422-8673e07e19f6; Domain=rtb.rtbhui.com; expires=Fri, 16 May 2014 21:56:50 GMT; Path=/ 

I found my problem ... sadly..haha ... my chef run was restarting every supervisor-managed process every 30 seconds. I had a bug there. Fixed that, so the problem is solved. – Tampa

Answer


I had the same problem using ApacheBench against a Sinatra app running on WEBrick. I found the answer here.

It is actually a problem in Apache itself.

The bug was removed in later versions of Apache. Try downloading them here.
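As a workaround on an older ab, the `-r` flag tells ab not to exit on socket receive errors, so the run at least completes and the failures show up in the "Failed requests" count instead:

```shell
# -r: do not abort the whole run on apr_socket_recv errors.
ab -r -n 10000 -c 50 http://127.0.0.1/pixel?tt=ff
```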


I had the same problem, and while searching for information in the logs I found these lines:

Oct 15 10:41:30 bal1 kernel: [1031008.706185] nf_conntrack: table full, dropping packet. 
Oct 15 10:41:31 bal1 kernel: [1031009.757069] nf_conntrack: table full, dropping packet. 
Oct 15 10:41:32 bal1 kernel: [1031009.939489] nf_conntrack: table full, dropping packet. 
Oct 15 10:41:32 bal1 kernel: [1031010.255115] nf_conntrack: table full, dropping packet. 

In my particular case, the conntrack module was in use because iptables runs on the same server, which also acts as the firewall.

One fix is to unload the conntrack module; another, easier one, is to apply these two lines in the firewall rules:

iptables -t raw -I PREROUTING -p tcp -j NOTRACK 
iptables -t raw -I OUTPUT -p tcp -j NOTRACK 
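If connection tracking cannot be disabled (for instance because stateful firewall rules depend on it), a commonly suggested alternative, sketched here with an illustrative size rather than a recommendation, is to enlarge the conntrack table instead:

```shell
# Alternative to NOTRACK: raise the conntrack table size (requires root).
sysctl -w net.netfilter.nf_conntrack_max=262144
# The hash table is usually scaled with it (typically max/8):
echo 32768 > /sys/module/nf_conntrack/parameters/hashsize
```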

Can you share which logs you were looking at? –
