New Nginx Conf with Rails Caching

Posted by ezmobius Tue, 12 Sep 2006 16:02:00 GMT

[UPDATE] The config file has been updated and commented so it is easier to figure out. It also sets the right headers when it proxies to mongrel and doesn't choke on the foo.js?394732323 URLs that Rails generates for static assets.

OK, this is very sweet. We have a new Nginx conf file that works perfectly with Rails page caching: nginx serves all static files and all Rails-cached files. Fast.

I want to thank Alexey Kovyrin and James Cox for their help in getting this config perfected. This makes nginx truly one of the best options for fronting a cluster of mongrels.

Might as well get the latest version while we’re at it.

curl -O http://sysoev.ru/nginx/nginx-0.4.0.tar.gz
tar -xvzf nginx-0.4.0.tar.gz
cd nginx-0.4.0
./configure --sbin-path=/usr/local/sbin --with-http_ssl_module
make
sudo make install

Now for the new config file. Here you go folks, get it while it's hot!

nginx.conf
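
For a quick look before you download it, here is a minimal sketch of the page-caching idiom the linked config is built around. This is not the linked file itself; the upstream name, ports, and paths are placeholders:

upstream mongrel {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

server {
    listen 80;
    server_name example.com;
    root /var/www/app/public;

    location / {
        # serve a static file directly if one exists on disk
        if (-f $request_filename) {
            break;
        }
        # serve the Rails page-cache file (foo -> foo.html) if one exists
        if (-f $request_filename.html) {
            rewrite (.*) $1.html break;
        }
        # otherwise hand the request to the mongrel cluster
        proxy_set_header Host $http_host;
        proxy_pass http://mongrel;
    }
}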

28 comments

Comments

  1. patrick said about 2 hours later:
    If only it had lighty's X-Sendfile support...I searched through the nginx source a few weeks ago and couldn't find anything. Anybody know if it exists?
  2. Ezra said about 3 hours later:
    It does have sendfile support. Do you mean the ability for the mongrel backend to tell nginx to use sendfile to serve a static file from somewhere? I'll play some more and see if I can get that working as well.
  3. patrick said about 6 hours later:
Yes. I have files outside the web root that I want to send to the client. Currently I'm using lighty's X-Sendfile header to do that, and it works great. But I'm stuck in FastCGI land until lighty 1.5 ships or nginx adds this feature. Thanks! nginx looks very promising and I successfully played around with it based on your previous blog posts.
  4. Davy said about 9 hours later:
    Ezra, first off thanks for all the useful information that you blog about! I'm excited to try out nginx even though my site is pretty low traffic. One quick question though, does nginx support any type of virtual hosts? What are the options for serving up multiple rails sites on one machine with nginx?
  5. Davy said about 10 hours later:
Never mind on that last question! I just installed it and found the vhost part in the config.
  6. Ezra said about 10 hours later:
Yeah, nginx supports virtual hosts and all the stuff you expect from a webserver. Curiously, the only thing notably absent is a plain old CGI interface. I will post another config with some vhosts defined in it. Basically, if you look at the linked config, the server {} block is one vhost. So you make another server block for each vhost and change the domain it looks for. Like this:
        server {
            listen       80;
            server_name   example.com  www.example.com;
            root /Users/ez/nginx/public;
            # the rest of the config.
         }
    
  7. Al said about 16 hours later:
There is an *undocumented* header field 'X-Accel-Redirect' which looks similar. The main differences are: 1) you set a URI, not a path, and 2) the location must be marked as internal in nginx:
        location /protected/ {
            internal;
            root /some/path;
        }
    Regards, Aleks
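    A minimal sketch of how the two pieces fit together (the paths and filename here are made up for illustration):
        # nginx: an internal-only location; clients can't request it directly
        location /protected/ {
            internal;
            root /some/path;
        }
        # The backend (e.g. a Rails controller) sets the header on its
        # response and nginx then serves the file itself:
        #   response.headers['X-Accel-Redirect'] = '/protected/report.pdf'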
  8. rick said about 22 hours later:
    Great stuff, Ezra. I managed to get this 95% working on my custom mephisto setup too. I'll post the config once it's 100%. One odd thing I noticed, it uses the name of the "pack of mongrels" (upstream cluster) as the request.host in rails. For instance, if you were to use home_url using that config, you'd get http://mongrel/. Is there a way to pass the true domain on? For now I'm either using absolute paths without the host name (home_path route), or setting the domain name as the name of the upstream cluster.
  9. Kevin Marsh said 1 day later:
I, too, jumped on the nginx bandwagon and have found it pretty pleasant thus far. rick: I'm having the same issue, only with subdomains. DHH's Account Location plugin uses request.subdomains, which don't get passed to Rails by nginx. Naming the upstream cluster to match is impossible, because the magic happens with a wildcard DNS entry (*.example.com). I'm looking for a solution for this. And, to make Capistrano's disable_web command happy, add this to your conf, before all the other 'location' directives:
        if (-f /var/www/example.com/shared/system/maintenance.html) {
            rewrite  ^(.*)$  /system/maintenance.html  last;
            break;
        }
    
  10. Ezra said 1 day later:
    Kevin- Check out the config file again. I updated it to forward the correct headers to mongrel so account_location works fine.
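    The relevant bit is the header forwarding in the proxy block; a sketch (the upstream name "mongrel" is a placeholder, the directives are standard nginx):
        location / {
            # forward the original Host so request.host and
            # request.subdomains come out right in Rails
            proxy_set_header  Host             $http_host;
            proxy_set_header  X-Real-IP        $remote_addr;
            proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_pass        http://mongrel;
        }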
  11. Kevin Marsh said 1 day later:
    Ezra: Odd, did a quick 'mental-diff' between yours and mine and couldn't find any discrepancies. For the heck of it I thought I'd wipe out mine and start fresh from yours--works like a charm! Thanks!
  12. Al said 1 day later:
Ezra: have you tried 'proxy_pass_header Host;' instead of 'proxy_set_header Host $http_host;'? The main difference is, as far as I have seen in the source, that with _pass nginx doesn't create an internal array. Regards, Aleks
  13. Al said 1 day later:
Ezra: I misread the source; 'proxy_pass_header' only applies from backend to client.
  14. ed said 4 days later:
Has anyone gotten file uploading to work? I can upload small (<1MB) files without issue, but not larger ones (5MB, etc.). I'm using the above conf file (nginx configured without SSL, though) and discovered the client_max_body_size config var, which I set to 1024g, but I still get upstream timeout errors:
        2006/09/16 01:04:29 [warn] 3376#0: *64 a client request body is buffered to a temporary file /usr/local/nginx/client_body_temp/00000000001, client: 24.124.123.211, server: serverside.com, URL: "/tracks/create?track_counter=1&upload_id=1158389209048", upstream: "http://127.0.0.1:11000", host: "thedomain.com:4000", referrer: "http://thedomain.com:4000/tracks/new?list_id=3"
        2006/09/16 01:08:11 [error] 3376#0: *64 upstream timed out (110: Connection timed out) while sending request to upstream, client: 24.124.123.211, server: serverside.com, URL: "/tracks/create?track_counter=1&upload_id=1158389209048", upstream: "http://127.0.0.1:11007/tracks/create?track_counter=1&upload_id=1158389209048", host: "thedomain.com:4000", referrer: "http://thedomain.com:4000/tracks/new?list_id=3"
    That last timeout error keeps showing up and the file never completes uploading. Thoughts?
  15. Davy said 8 days later:
    ed: I was able to change the client_max_body_size to 10m and successfully upload a file larger than 1 meg.
  16. ed said 8 days later:
    Davy, where did you put that line, exactly? thanks
  17. al said 15 days later:
I'm not Davy, but you can add this directive in the following contexts:
    http, server, location
    More here: http://wiki.codemongers.com/NginxHttpCoreModule#client_max_body_size
    There is now a wiki for the English docs ;-) http://wiki.codemongers.com/
    Hth
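    So the placement looks like this (10m is just an example value; putting it in the server block scopes it to that vhost):
        server {
            # allow request bodies (uploads) up to 10 MB
            client_max_body_size 10m;
            # ... rest of the server block
        }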
  18. Labrat said 16 days later:
I can't get mongrel + apache to play nice with send_file/acts_as_attachment, so I tried switching to nginx as described in this post. Unfortunately, I can't get past the "Welcome to nginx!" page. Can't figure out what I'm doing wrong...
  19. Ezra said 17 days later:
    I have added the client_max_body_size to the linked config file. It turns out you have to put it inside the server block with a rails config.
  20. Labrat said 17 days later:
Sorry, I figured it out (I needed to kill nginx and restart). It's really great. The only problem is that I can't get file uploading via acts_as_attachment to work in production...
  21. ed said 18 days later:
al and ezra, thanks for the info. Have either of you had timeout problems uploading large files? If so, is it the client_body_timeout setting that fixes it? Seems logical... I'm getting timeouts when uploading even medium-sized files (5MB):
        [error] 6155#0: *62 upstream timed out (110: Connection timed out) while sending request to upstream, client: 24.124.123.211, server: edhickey.com, URL: "/tracks/create?list_id=3&upload_id=1159411079", upstream: "http://127.0.0.1:11000/tracks/create?list_id=3&upload_id=1159411079", host: "edhickey.com:4000", referrer: "http://edhickey.com:4000/tracks/new?list_id=3"
  22. Yan said 27 days later:
Has anyone had experience with a non-standard public caching path? In my Rails app I'm caching to public/cache so that sweeping is easier. I tried adding a rewrite rule to nginx to compensate for this, after the standard html rewrite rule:
        if (-f $document_root/cache/$request_filename.html) {
            rewrite (.*) /cache/$1.html;
            break;
        }
    It doesn't seem to be working. I'm still playing with it, though...
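    One guess at the problem (untested against this setup): $request_filename already expands to a full filesystem path including the document root, so prefixing it with $document_root produces a path that never exists. Testing against $uri instead would look like:
        if (-f $document_root/cache$uri.html) {
            rewrite (.*) /cache/$1.html break;
        }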
  23. Dan Kubb said 58 days later:

    Ezra, there's a small bug in the log_format part of your config file. There needs to be a dollar sign ($) in front of the word "http_x_forwarded_for" for it to be merged into the logs.

Thanks for spreading the word about Nginx! I've just started testing it and it's blazingly fast. It's also probably the easiest web server (next to Mongrel) that I've ever set up.

  24. Ezra said 60 days later:
Thanks Dan, it's fixed now. Yeah, nginx has really proven itself to be completely rock solid, fast as hell, and a complete lightweight when it comes to resource consumption. nginx++
  25. Kyle said 63 days later:
    Hey thanks for this info. This seems like it is very production worthy and I plan on trying it out on my VPS soon. I will definitely use it in production if I can get SSL to work on it and everything else goes well. I found a great article on how to do different things with the conf file here: http://zh.stikipad.com/notes/show/nginx
  26. Tom Fakes said 71 days later:
I recently updated my action_cache plugin to support the X-Accel-Redirect header in nginx. It's pretty sweet! Check it out here: http://blog.craz8.com/articles/2006/11/13/rails-action_cache-now-supports-nginx-and-x-accel-redirect The key point is that you get almost the speed of page caching, but you can still do things like access control in before_filters.
  27. wireless said 100 days later:
For Capistrano's disable_web command you need to add an extra configuration block
    location /system/maintenance.html {
    }
    
    after
    if (-f /var/www/example.com/shared/system/maintenance.html) {
        rewrite  ^(.*)$  /system/maintenance.html  last;
        break;
    }
    
  28. Dan Kubb said 205 days later:

    Ezra, do you know if it's possible to further refine the Nginx config file so that it only does the "-f $request_filename" check when the request is a GET or HEAD request?

    The idea would be to use page caching for GET/HEAD requests provided the file exists. If the file didn't exist OR the request was a POST, PUT, DELETE, etc., Nginx would proxy the request to Mongrel.
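
    nginx can't nest if blocks, but the two conditions can be chained through a variable set by the rewrite module; a sketch (untested, and the variable name is arbitrary):

        # rewrite to the cached page only when the method is GET/HEAD
        # AND the cached file actually exists
        set $page_cache "";
        if ($request_method ~ ^(GET|HEAD)$) {
            set $page_cache "M";
        }
        if (-f $request_filename.html) {
            set $page_cache "${page_cache}F";
        }
        if ($page_cache = "MF") {
            rewrite (.*) $1.html break;
        }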
