Change website

From 16 January 2015, all post content will be moved to our official website, which hosts much more content.

You can access the website here: http://justox.com

Thanks for visiting!

Showing posts with label APACHE. Show all posts

Friday 27 December 2013

Fixing the Nginx 413 Request Entity Too Large Error

I'm running nginx as a front end to a PHP-based Apache+mod_fastcgi server. My app lets users upload images up to 2MB in size. When users try to upload image files larger than about 1.5MB through the nginx reverse proxy, they get the following error on screen:
Nginx 413 Request Entity Too Large
How do I fix this problem and allow image uploads up to 2MB in size using the nginx web server, working as a reverse proxy or stand-alone, on Unix-like operating systems?
You need to configure both nginx and PHP to allow the larger upload size.
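A minimal sketch of the nginx side, sized for the 2MB example above (adjust the value to your needs):

```nginx
# nginx side (valid in http, server, or location context):
# raise the request-body limit from the 1MB default to 2MB
client_max_body_size 2M;
```

On the PHP side, upload_max_filesize and post_max_size in php.ini must also be at least 2M; reload both nginx and PHP (FastCGI or Apache) after changing them.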

Thursday 19 December 2013

cPanel: Installing Mod_Python on Apache 2

Mod_python is one of the trickier things to install on cPanel servers. Below are the two methods I've used to get mod_python up and running on Apache 2.

Dynamic PHP Extensions Not Loading

I recently saw an issue on one of our servers where we were trying to enable Zend Optimizer and IonCube Loaders, but they simply wouldn't show up on a phpinfo page despite showing up on the command line:
-bash-3.2# php -v
PHP 4.4.9 (cli) (built: May 4 2010 13:55:07)
Copyright (c) 1997-2008 The PHP Group
Zend Engine v1.3.0, Copyright (c) 1998-2004 Zend Technologies
    with the ionCube PHP Loader v3.3.14, Copyright (c) 2002-2010, by ionCube Ltd., and
    with Zend Optimizer v3.3.3, Copyright (c) 1998-2007, by Zend Technologies

After toggling around with this and finally getting cPanel involved, one of their techs (Kyle P.) figured out that the problem was PHP being built with the versioning extension, which can apparently prevent dynamic modules from loading when PHP is invoked as a DSO (and likely as CGI, though we couldn't reproduce that). The cPanel documentation also recommends against it:
“Versioning – The PHP versioning option was intended to allow the same sort of functionality that the concurrent DSO patches allow. It does not work well and is not recommended by cPanel or the PHP developers.”
Quite honestly, I had never used versioning on a server and I knew it wasn't recommended, but at least now we know why!

PHP 500 Internal Server Errors

500 Internal Server Errors are one of the most common PHP issues I see customers experience, occurring mostly on servers with suPHP or PHP running as CGI. These errors can be caused by something on the server or by an issue on the user's site. Here's what you should do if you see them:
Check the logs
You can solve most problems quickly just by looking at the logs:
/usr/local/apache/logs/error_log
/usr/local/apache/logs/suphp.log
Here are some common errors:
SoftException in Application.cpp:357: UID of script "/home/user/public_html/test.php" is smaller than min_uid
SoftException in Application.cpp:422: Mismatch between target UID (511) and UID (510) of file "/home/user/public_html/test.php"
SoftException in Application.cpp:264: File "/home/user/public_html/test.php" is writeable by others
These are all permission/ownership issues, indicating that the owner of the PHP file being called is incorrect, or that its permissions are higher than what suphp.conf allows.
Invalid directives in .htaccess
If you’re running PHP in CGI or suPHP mode, you can’t use php_flag or php_value directives in .htaccess. You either need to use htscanner so Apache can parse those commands, or make PHP-related changes in a php.ini within the user’s account. Check the Apache error log in /usr/local/apache/logs/error_log for something like this:
/home/user/public_html/.htaccess: Invalid command 'php_flag', perhaps misspelled or defined by a module not included in the server configuration
If the error log indicates a problem with .htaccess, remove the directives it points to, make sure your syntax is correct, and confirm the directives are in the right places.
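For example, a hypothetical .htaccess line such as "php_flag display_errors on" would move into the user's php.ini as:

```ini
; php.ini equivalent of the .htaccess directive "php_flag display_errors on"
display_errors = On
```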
Incorrect ownership or permissions
PHP scripts and their immediate parent folders usually have permission limits when PHP runs in CGI/suPHP mode. By default, PHP files and their parent folders cannot have group- or ‘other’-writable permissions, and cannot be owned by a system user other than the one that owns the home folder they are located in. Additionally, cPanel’s implementation of suPHP does not allow PHP to execute via the browser from locations outside a user’s home folder. The first thing to check is that the PHP script and its parent folder(s) are not writable by ‘group’ or ‘other’, and are not owned by a different system user. You can usually spot this by tailing the suPHP log in /usr/local/apache/logs/suphp.log, or wherever suphp.conf sets the log location.
You can adjust suPHP’s permissions allowances in /opt/suphp/etc/suphp.conf to allow ‘group’ and ‘other’ writable permissions if it’s necessary by modifying these values:
allow_file_group_writeable=false
allow_file_others_writeable=false
allow_directory_group_writeable=false
allow_directory_others_writeable=false

If the problem is that the script’s UID falls below min_uid (such as when a PHP file is owned by root), you can also modify the “min_uid” and “min_gid” values to be more permissive. Any change to suphp.conf requires a restart of Apache.
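A hedged shell sketch of the permission cleanup described above. The temp directory stands in for a real /home/<user> path so the demo is safe to run; on a live server you would use the actual account path and also chown the files to the account owner:

```shell
# stand-in for /home/<user> so this demo touches nothing real
USERHOME=$(mktemp -d)
mkdir -p "$USERHOME/public_html"
touch "$USERHOME/public_html/test.php"
chmod 777 "$USERHOME/public_html/test.php"   # deliberately group/other writable

# suPHP rejects group/other-writable files and directories,
# so clamp files to 0644 and directories to 0755
find "$USERHOME/public_html" -type f -exec chmod 644 {} \;
find "$USERHOME/public_html" -type d -exec chmod 755 {} \;

PERMS=$(stat -c '%a' "$USERHOME/public_html/test.php")
echo "$PERMS"   # prints 644
```

This uses GNU stat; on non-Linux systems the stat flags differ.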
SuPHP binary missing its setuid (“s”) permission
Take a look at the suphp binary. It should look a bit like this, and in most shells, will be highlighted in red:
-rwsr-xr-x 1 root root 341K Mar 30 12:25 /opt/suphp/sbin/suphp*
If it’s missing the ‘s’ in the permissions column, you need to re-add the setuid bit so that users on the system can execute it properly:
chmod +s /opt/suphp/sbin/suphp

Wildcard SSL Installation Script

As administrators, we eventually come to the realization that when you have a wildcard SSL certificate for 40 subdomains, you can’t practically have separate IPs and cPanel accounts for all of them. If you have a wildcard SSL certificate for all your subdomains, you can easily install the certificate on a single IP address for all of them. For this particular scenario to work:

Install Tomcat 7 on a cPanel Server

cPanel has promised that Tomcat 7 will be supported in a future EasyApache release. Until then, you can easily get Tomcat 7 support with just a little manual intervention. I will mention that right now Tomcat 7 is not supported by cPanel, so there’s no guarantee that their integrated Tomcat features will work as expected.

Enable custom php.ini in litespeed cpanel server

This document lists the options for using a per-user php.ini in a hosting control panel environment (cPanel/WHM, Plesk, DirectAdmin, etc.) with LiteSpeed Web Server (LSWS).
Put something like “PHPRC=$VH_ROOT” in the lsphp environment (Web Admin Console → Server → External App → lsphp5 → Edit → Environment).

PHP Security Vulnerability

There is a vulnerability in certain CGI-based setups (Apache+mod_php and nginx+php-fpm are not affected) that has gone unnoticed for at least 8 years.
Some systems support a method for supplying a [sic] array of strings to the CGI script. This is only used in the case of an `indexed’ query. This is identified by a “GET” or “HEAD” HTTP request with a URL search string not containing any unencoded “=” characters.
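The description matches the php-cgi query-string flaw (CVE-2012-1823). Until PHP is patched, one widely circulated mod_rewrite workaround, adapted from the advisory for that issue (verify it against your own setup before relying on it), rejects requests whose query string looks like option injection into an “indexed” query:

```apache
# block "indexed" queries that smuggle php-cgi options such as -s or -d
RewriteEngine On
RewriteCond %{QUERY_STRING} ^[^=]*$
RewriteCond %{QUERY_STRING} %2d|\- [NC]
RewriteRule .? - [F,L]
```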

cHtaccess-V1.0-cPanel addon for creating htaccess rules.

Here I am introducing a new tool through which you can handle many of the things we use day to day in web hosting, such as generating .htaccess rules and setting up htpasswd-protected directories. Generating .htaccess rules via this addon is no big deal.

Apache module used to inject malicious content: Linux/Chapro.A

The antivirus company ESET has reported the detection of a new piece of malware, Linux/Chapro.A, used by hackers to launch attacks on visitors to sites hosted on compromised Linux servers.

Mod_rpaf for apache2.4

In a reverse-proxy setup, Nginx/Varnish/ApacheBooster listens for the traffic and forwards anything that needs PHP processing to Apache. As a result, Apache always sees an IP belonging to the frontend server (localhost / 127.0.0.1 / a local IP / the server’s main IP), not the real IP of the user. To let Apache know the real user IP, we need to install mod_rpaf; however, Apache 2.4 is not supported by the current mod_rpaf version, so you need to apply the following patch to make it work with Apache 2.4.

--- mod_rpaf-2.0.c.org  2012-05-17 12:05:34.082130109 +0900
+++ mod_rpaf-2.0.c      2012-05-17 12:16:41.648138252 +0900
@@ -147,8 +147,8 @@
 
 static apr_status_t rpaf_cleanup(void *data) {
     rpaf_cleanup_rec *rcr = (rpaf_cleanup_rec *)data;
-    rcr->r->connection->remote_ip   = apr_pstrdup(rcr->r->connection->pool, rcr->old_ip);
-    rcr->r->connection->remote_addr->sa.sin.sin_addr.s_addr = apr_inet_addr(rcr->r->connection->remote_ip);
+    rcr->r->connection->client_ip   = apr_pstrdup(rcr->r->connection->pool, rcr->old_ip);
+    rcr->r->connection->client_addr->sa.sin.sin_addr.s_addr = apr_inet_addr(rcr->r->connection->client_ip);
     return APR_SUCCESS;
 }
 
@@ -161,7 +161,7 @@
     if (!cfg->enable)
         return DECLINED;
 
-    if (is_in_array(r->connection->remote_ip, cfg->proxy_ips) == 1) {
+    if (is_in_array(r->connection->client_ip, cfg->proxy_ips) == 1) {
         /* check if cfg->headername is set and if it is use
            that instead of X-Forwarded-For by default */
         if (cfg->headername && (fwdvalue = apr_table_get(r->headers_in, cfg->headername))) {
@@ -180,11 +180,11 @@
                 if (*fwdvalue != '\0')
                     ++fwdvalue;
             }
-            rcr->old_ip = apr_pstrdup(r->connection->pool, r->connection->remote_ip);
+            rcr->old_ip = apr_pstrdup(r->connection->pool, r->connection->client_ip);
             rcr->r = r;
             apr_pool_cleanup_register(r->pool, (void *)rcr, rpaf_cleanup, apr_pool_cleanup_null);
-            r->connection->remote_ip = apr_pstrdup(r->connection->pool, ((char **)arr->elts)[((arr->nelts)-1)]);
-            r->connection->remote_addr->sa.sin.sin_addr.s_addr = apr_inet_addr(r->connection->remote_ip);
+            r->connection->client_ip = apr_pstrdup(r->connection->pool, ((char **)arr->elts)[((arr->nelts)-1)]);
+            r->connection->client_addr->sa.sin.sin_addr.s_addr = apr_inet_addr(r->connection->client_ip);
             if (cfg->sethostname) {
                 const char *hostvalue;
                 if (hostvalue = apr_table_get(r->headers_in, "X-Forwarded-Host")) {

Save the patch above to a text file and apply it with the following command:
patch -p1 < patch_mod_rpaf

Update: Mod_Pagespeed

We’ve upgraded the Mod_Pagespeed EasyApache build to the stable version. You can find the source on GitHub.

Installation instructions

  1. Clone the installation scripts onto your cPanel server:
    $> git clone http://github.com/pagespeed/cpanel.git /var/cpanel/easy/apache/custom_opt_mods/Cpanel/
    
  2. Create Speed.pm.tar.gz
    $> cd /var/cpanel/easy/apache/custom_opt_mods/Cpanel/Easy && tar -zcvf Speed.pm.tar.gz pagespeed
    
  3. Log into your cPanel WHM > EasyApache and look for the “mod_pagespeed” option. Alternatively, you can run the EasyApache installer from the command line (/scripts/easyapache). Rebuild the Apache server, restart it, and you’re good to go!

Install ApacheBooster V2.1

We’ve upgraded ApacheBooster to 2.1. We have made several configuration changes, added some extra features, and fixed some major security vulnerabilities.

1) Fixed WHM API bugs
2) Removed deprecated scripts
3) Upgraded incrond, nginx, and varnish to stable versions
4) Fixed cPanel hook functions
5) Custom PCRE-8.33, a possible fix for “epoll_wait() failed (4: Interrupted system call)”
6) Increased system file descriptors to avoid nginx “[emerg]” failed errors
7) Added mod_reverseproxy instead of mod_remoteip, a possible fix for Apache server status
        https://github.com/Prajithp/mod_reverseproxy

Wednesday 18 December 2013

Nginx Hotlink Protection And How To Use It

Source Wikipedia :
Hotlinking is a term used on the Internet that refers to the practice of displaying an image on a website by linking to the same image on another website, rather than saving a copy of it on the website on which the image will be shown.
Hotlinking can be a major source of bandwidth leeching for some sites. Here is a small config snippet you can add to prevent it.

server {
    location ~ \.(mp4|mp3|wav|avi)$ {
        valid_referers blocked example.com *.example.com;
        if ($invalid_referer) {
            return   403;
        }
    }
}
To specify more file types, separate them with a pipe (“|”).
You will also notice that I didn’t put “none” in valid_referers; this is because a null referrer can be faked, so anyone could access your protected files and leech them again without your approval.
You can also apply these directives to a folder, e.g. location /videos/ { .. }, so you don’t need to specify extensions; this protects every file inside that directory.
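Spelled out, the directory-wide variant looks like this (same example.com referrers as before):

```nginx
# protect everything under /videos/ regardless of file extension
location /videos/ {
    valid_referers blocked example.com *.example.com;
    if ($invalid_referer) {
        return 403;
    }
}
```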
Notice :
Consider keeping “none” out of valid_referers: as noted above, a null referrer is easy to fake, though be aware that some browsers (such as IE) sometimes send no referrer and will then be blocked too.

Nginx Download File Trigger

A few days ago I had a chance to work with Nginx in a slightly different way: the task was to determine whether a user had successfully downloaded a file. This was needed because the file should be erased from the server after download; PHP would have been a poor solution, and the client also told me they didn’t want to use any additional applications, e.g. programming languages.

As you can see, we had one Nginx server already set up to meet their requirements, and they had one web service for deleting files, e.g. a counter, as shown below.
Let’s wget some file from server
wget http://www.codestance.com/somefile.tar
--2013-07-03 16:04:14--  http://www.codestance.com/somefile.tar
Resolving www.codestance.com... 93.139.143.110
Connecting to www.codestance.com|93.139.143.110|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 46397440 (44M) [text/plain]
Saving to: "somefile.tar"

100%[======================================================================>] 46,397,440   273M/s   in 0.2s

2013-07-03 16:04:14 (273 MB/s) - "somefile.tar"
By checking the nginx access log we can see that the client downloaded the file, but we cannot tell whether the download was successful or cancelled. The log file simply doesn’t carry that kind of information.
93.139.143.110 - - [03/Jul/2013:16:04:14 +0000] "GET /somefile.tar HTTP/1.1" 200 46397440 "-" "Wget/1.12 (linux-gnu)" "-"
Now let’s add some configuration to the nginx vhost (I added a simple PHP script that points to /download, and also one RESTful server):
location /download {
    post_action /adcounter;
}

location /adcounter {
    proxy_pass http://www.codestance.com/api/v1/adcounter?FileName=$request&ClientIP=$remote_addr&bytes_sent=$body_bytes_sent&status=$request_completion&params=$args;
    internal;
}
With the example above, whether the user downloaded the file successfully or cancelled, an internal subrequest is made to our counting API, or to any other app that needs this information. Note the internal directive.
Let’s download same file again
wget http://www.codestance.com/somefile.tar
--2013-07-03 16:12:53--  http://www.codestance.com/somefile.tar
Resolving www.codestance.com... 93.139.143.110
Connecting to www.codestance.com|93.139.143.110|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 46397440 (44M) [text/plain]
Saving to: "somefile.tar"

100%[======================================================================>] 46,397,440   298M/s   in 0.2s

2013-07-03 16:12:53 (298 MB/s) - "somefile.tar"
We will check our access log again:
93.139.143.110 - - [03/Jul/2013:16:12:53 +0000] "GET /api/v1/adcounter?FileName=GET /somefile.tar HTTP/1.1&ClientIP=93.139.143.110&bytes_sent=46397440&status=OK&params= HTTP/1.0" 200 10 "-" "Wget/1.12 (linux-gnu)"
And there we go: the access log is quite clear about it, showing status OK, so now we know that the client downloaded the file.
Now let’s start the download again, but press Ctrl+C to break it, and then check what the access log shows us.
93.139.143.110 - - [03/Jul/2013:16:18:01 +0000] "GET /api/v1/adcounter?FileName=GET /somefile.tar HTTP/1.1&ClientIP=93.139.143.110&bytes_sent=27838464&status=&params= HTTP/1.0" 200 10 "-" "Wget/1.12 (linux-gnu)"
You see that the status is blank? This means the file download was interrupted somehow: perhaps the connection broke or the user cancelled the download.
So, you can use Nginx’s post_action to define a subrequest that fires upon completion of another request, whether or not it was successful.
Notice:
Thanks to Zippo: on Nginx 1.2.1 and later, this should be set up as FileName=$uri instead of $request.
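Spelled out, the adjusted location block (same hypothetical counting API as above, for nginx 1.2.1+) would be:

```nginx
location /adcounter {
    # nginx >= 1.2.1: pass the plain URI rather than the full request line
    proxy_pass http://www.codestance.com/api/v1/adcounter?FileName=$uri&ClientIP=$remote_addr&bytes_sent=$body_bytes_sent&status=$request_completion&params=$args;
    internal;
}
```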
I hope you will find this helpful.
Happy Hacking!

Install and Configure PHP-FPM on Nginx

PHP-FPM (FastCGI Process Manager) is an alternative FastCGI implementation with some additional features useful for websites of any size, especially high-load websites. It makes it particularly easy to run PHP on Nginx.
Included features – from original website :
- Adaptive process spawning
- Basic statistics
- Advanced process management with graceful stop/start
- Ability to start workers with different uid/gid/chroot/environment and different php.ini
- Stdout & stderr logging
- Emergency restart in case of accidental opcode cache destruction
- Accelerated upload support
- Support for a “slowlog”
- Enhancements to FastCGI, such as fastcgi_finish_request() – a special function to finish request & flush all data while continuing to do something time-consuming
..and much more..
Notice :
PHP-FPM was not designed with virtual hosting in mind (large numbers of pools); however, it can be adapted to any usage model.
Let’s start with installation (Ubuntu/Debian) :
sudo apt-get update
sudo apt-get install php5-fpm

# check if everything is working
php5-fpm -v
You can also install other required PHP packages :
sudo apt-get install php5-common
sudo apt-get install php5-curl
sudo apt-get install php5-mysql
...
Notice :
This configuration is based on this Nginx test server.
We can now continue configuring PHP-FPM, but first make a backup of your original www.conf file:
sudo nano /etc/php5/fpm/pool.d/www.conf
# pool name
[www1]
# user of the FPM processes
user = www-data
# group of the FPM processes
group = www-data
# address on which FastCGI requests are accepted; this can also be a unix socket. Mandatory for each pool.
# unix sockets are said to be a bit faster than TCP, but honestly I didn't notice it; what I did notice on unix sockets was a bunch of errors.
listen = "127.0.0.1:9000"
# how the process manager controls the number of children. This option is mandatory.
# static - a fixed number of child processes (pm.max_children)
# ondemand - processes spawn on demand, as opposed to dynamic, where some are started when the service starts
# dynamic - processes are spawned as needed, governed by these directives: max_children, start_servers, min_spare_servers, max_spare_servers
pm = dynamic
pm.max_children = 4096
pm.start_servers = 20
pm.min_spare_servers = 5
pm.max_spare_servers = 128
# number of requests each child should execute before respawning.
pm.max_requests = 1024
# uri on which you can check your fpm status
pm.status_path = /php-status
# log file for slow requests
slowlog = /var/log/php-fpm.slow.log
listen.backlog = -1
Once this is done, you can copy and paste the pool, but no more than four times; it’s not good to have more than four pools, because these settings are tuned for a single heavy-load web site:
[www1]
user = www-data
group = www-data
listen = "127.0.0.1:9000"
pm = dynamic
pm.max_children = 4096
pm.start_servers = 20
pm.min_spare_servers = 5
pm.max_spare_servers = 128
pm.max_requests = 1024
pm.status_path = /php-status
slowlog = /var/log/php-fpm.slow.log
listen.backlog = -1

[www2]
user = www-data
group = www-data
listen = "127.0.0.1:9001"
pm = dynamic
pm.max_children = 4096
pm.start_servers = 20
pm.min_spare_servers = 5
pm.max_spare_servers = 128
pm.max_requests = 1024
pm.status_path = /php-status
slowlog = /var/log/php-fpm.slow.log
listen.backlog = -1

[www3]
user = www-data
group = www-data
listen = "127.0.0.1:9002"
pm = dynamic
pm.max_children = 4096
pm.start_servers = 20
pm.min_spare_servers = 5
pm.max_spare_servers = 128
pm.max_requests = 1024
pm.status_path = /php-status
slowlog = /var/log/php-fpm.slow.log
listen.backlog = -1

[www4]
user = www-data
group = www-data
listen = "127.0.0.1:9003"
pm = dynamic
pm.max_children = 4096
pm.start_servers = 20
pm.min_spare_servers = 5
pm.max_spare_servers = 128
pm.max_requests = 1024
pm.status_path = /php-status
slowlog = /var/log/php-fpm.slow.log
listen.backlog = -1
What I noticed is that with only one pool it would always break at certain moments under heavy load, then work normally again after a few seconds; so I added more than one pool, but as I said above, four pools at most.
After adding the pools, we need to set up upstreams in the Nginx config with sudo nano /etc/nginx/nginx.conf:
upstream php {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}
These are our upstream FastCGI servers, which will be used round-robin to process requests.
On the Nginx vhost side, we need to change the parameter that passes requests to the upstream servers (pools):
location ~ \.php$ {
    fastcgi_pass php;
}
Now restart Nginx and PHP-FPM and you are done.
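Each pool above sets pm.status_path = /php-status. One hedged sketch for exposing it through Nginx, restricted to localhost (the exact fastcgi params can vary by distro, so verify against your setup):

```nginx
# expose the FPM status page defined by pm.status_path, localhost only
location = /php-status {
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_NAME /php-status;
    fastcgi_param SCRIPT_FILENAME /php-status;
    fastcgi_pass php;   # the upstream defined in nginx.conf
}
```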
Keep in mind that this guide was made for our test server, so you may need to play with these numbers; depending on your CPU and memory, you can lower max_children and max_spare_servers.
Happy hacking!

110 Connection Timeout Error On Nginx

Nginx timeout errors are not uncommon.
While running maintenance on a few nginx servers today I saw an error like this one; I’ve actually seen it many times, just not on my own configs. After digging around for a while, I found that this server could not respond within 60 seconds, which is the default. Nginx’s directive for the read timeout is called proxy_read_timeout; it determines how long nginx waits for the response to a request. This is not a permanent solution, but it is a quick fix.

server {
    location / {
        proxy_set_header  X-Real-IP  $remote_addr;
        proxy_set_header  X-Real-Host  $host;
        proxy_read_timeout 120;
    }
}

Boosting PHP Apps Performance with APC

APC (Alternative PHP Cache) is an open source opcode caching mechanism.
I won’t argue over which is better, XCache or APC; I’ve found they each behave a little differently in different environments and with different scripts, and that’s probably not what you came here to read about anyway. We will look at how to speed things up.
Personally, I don’t like APC much, but my clients do, so I needed to find new ways of optimizing caching on high-load servers that also run memcached and other boys and girls.

As observed on most servers, configuring APC with too little memory can drag down performance and cause high execution times. If you install APC without tuning it, there can be a lot of fragmentation, which is very bad, with a cache hit ratio no better than 20%.
Say we have WordPress installed on a server with APC, with 20-30 plugins enabled and 2-3 themes; then 30MB of APC is not enough. After increasing apc.shm_size to 128M we had a 99% cache hit ratio, and we went from a rate of 200 requests per second to 300, which is good.
APC Fragmentation Lookup
After configuring APC to do its job properly, we significantly improved memory usage, since PHP no longer had to compile raw source on every request; everything was already in the opcode cache.
If you have more than 8GB of RAM then, depending on what you host on the server, you can give apc.shm_size at least 1GB.
APC config preview :
apc.enabled = 1
apc.shm_segments = 1
apc.shm_size = 128M
apc.optimization = 0
apc.num_files_hint = 512
apc.user_entries_hint = 1024
apc.ttl = 0
apc.user_ttl = 0
apc.gc_ttl = 600
apc.cache_by_default = 0
apc.slam_defense = 0
apc.use_request_time = 1
apc.mmap_file_mask = /tmp/apc-accountname.XXXXXX
;OR apc.mmap_file_mask = /dev/zero
apc.file_update_protection = 2
apc.enable_cli = 0
apc.max_file_size = 8M
apc.stat = 1
apc.write_lock = 1
apc.report_autofilter = 0
apc.include_once_override = 0
apc.rfc1867 = 0
apc.rfc1867_prefix = "upload_"
apc.rfc1867_name = "APC_UPLOAD_PROGRESS"
apc.rfc1867_freq = 0
apc.localcache = 1
apc.localcache.size = 512
apc.coredump_unmap = 0
apc.stat_ctime = 0

Nginx Tuning For Best Performance

For this configuration you can use any web server you like; I decided to use nginx because I work with it the most.
Generally, a properly configured nginx can handle up to 400,000-500,000 requests per second (clustered); the most I’ve seen is 50,000-80,000 requests per second (non-clustered) at 30% CPU load. Granted, that was a dual Intel Xeon with HT enabled, but it can work without problems on slower machines.
You must understand that this config is used in a testing environment, not in production, so you will need to find the best way to adopt these features for your own servers.
First, you will need to install nginx. My preferred way is to compile it from source, but for now we will use apt-get:
apt-get install nginx
Back up your original configs, and then you can start reconfiguring. Open /etc/nginx/nginx.conf with your favorite editor.
# set worker processes based on your CPU cores; nginx does not benefit from setting more than that
worker_processes auto; # recent versions calculate this automatically, thanks to Diego :)

# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FD's then OS settings will be used which is by default 2000
worker_rlimit_nofile 100000;

# only log critical errors
error_log /var/log/nginx/error.log crit;

# provides the configuration file context in which the directives that affect connection processing are specified.
events {
    # determines how many clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each thread, essential for linux
    use epoll;

    # accept as many connections as possible, may flood worker connections if set too low
    multi_accept on;
}

# cache information about FDs, frequently accessed files
# can boost performance, but you need to test those values
open_file_cache max=200000 inactive=20s; 
open_file_cache_valid 30s; 
open_file_cache_min_uses 2;
open_file_cache_errors on;

# to boost IO on HDD we can disable access logs
access_log off;

# copies data between one FD and other from within the kernel
# faster than read() + write()
sendfile on;

# send headers in one piece; it's better than sending them one by one
tcp_nopush on;

# don't buffer data sent, good for small data bursts in real time
tcp_nodelay on;

# server will close connection after this time
keepalive_timeout 30;

# number of requests client can make over keep-alive -- for testing
keepalive_requests 100000;

# allow the server to close connection on non responding client, this will free up memory
reset_timedout_connection on;

# request timed out -- default 60
client_body_timeout 10;

# if client stop responding, free up memory -- default 60
send_timeout 2;

# reduce the data that needs to be sent over network
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
Now you can save the config and run the command below:
/etc/init.d/nginx start|restart
If you wish to test config first you can run
/etc/init.d/nginx configtest
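As a worked example of the max-clients formula from the comments in the events block above (max clients = worker_connections * worker_processes), assuming a hypothetical 4-core machine where worker_processes auto resolves to 4:

```shell
# 4 worker processes * the worker_connections 4000 used above
WORKER_PROCESSES=4
WORKER_CONNECTIONS=4000
MAX_CLIENTS=$((WORKER_PROCESSES * WORKER_CONNECTIONS))
echo "$MAX_CLIENTS"   # prints 16000
```

Remember that the real ceiling is also bounded by the sockets available on the system (~64k).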

Just For Security Reason

server_tokens off;

Nginx Simple DDoS Defense

This is far from a complete DDoS defense, but it can slow down some small DDoS attacks. These settings too are from a test environment; you should tune the values for your own.

# limit the number of connections per single IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

# limit the number of requests for a given session
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

# zone which we want to limit by upper values, we want limit whole server
server {
    limit_conn conn_limit_per_ip 10;
    limit_req zone=req_limit_per_ip burst=10 nodelay;
}

# if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
client_body_buffer_size  128k;

# headerbuffer size for the request header from client, its set for testing purpose
client_header_buffer_size 3m;

# maximum number and size of buffers for large headers to read from client request
large_client_header_buffers 4 256k;

# read timeout for the request body from client, its set for testing purpose
client_body_timeout   3m;

# how long to wait for the client to send a request header, its set for testing purpose
client_header_timeout 3m;
Now you can do again test config
/etc/init.d/nginx configtest
And then reload or restart your nginx
/etc/init.d/nginx restart|reload
You can test this configuration with Tsung, and when you’re satisfied with the result you can hit Ctrl+C, because it can run for hours.
Happy Hacking!
UPDATE:
This configuration was tested on an mp3 search engine and our email platform. You must understand that this config covers only nginx.conf; you can override every parameter inside your vhost depending on what system you run, e.g. WordPress, Joomla, Drupal.
Benchmarks were made with ab and Tsung, run directly against these scripts with simple working vhost configurations.