
ngx_slowfs_cache's People

Contributors

piotrsikora

ngx_slowfs_cache's Issues

Core dump on FreeBSD 10.2 and nginx 1.8.0

Hi,
Sadly, we have a persistent core dump on FreeBSD 10.2 with nginx 1.8.0:

root@leo:/usr/ports/www/nginx/work/ngx_slowfs_cache-1.10 # nginx -V
nginx version: nginx/1.8.0
built with OpenSSL 1.0.1p-freebsd 9 Jul 2015
TLS SNI support enabled
configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --with-file-aio --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --with-http_geoip_module --with-http_secure_link_module --with-http_stub_status_module --with-pcre --add-module=/usr/ports/www/nginx/work/ngx_slowfs_cache-1.10 --with-http_ssl_module

root@leo:/usr/ports/www/nginx/work/ngx_slowfs_cache-1.10 # prove t/big_file.t .... Bailout called. Further testing stopped: TEST 1: prepare - Cannot start nginx using command "nginx -p /usr/ports/www/nginx/work/ngx_slowfs_cache-1.10/t/servroot/ -c /usr/ports/www/nginx/work/ngx_slowfs_cache-1.10/t/servroot/conf/nginx.conf > /dev/null".
FAILED--Further testing stopped: TEST 1: prepare - Cannot start nginx using command "nginx -p /usr/ports/www/nginx/work/ngx_slowfs_cache-1.10/t/servroot/ -c /usr/ports/www/nginx/work/ngx_slowfs_cache-1.10/t/servroot/conf/nginx.conf > /dev/null".
root@leo:/usr/ports/www/nginx/work/ngx_slowfs_cache-1.10 # ll
total 9276
-rw-r--r-- 1 root wheel 2288 Mar 7 2013 CHANGES
-rw-r--r-- 1 root wheel 1548 Mar 7 2013 LICENSE
-rw-r--r-- 1 root wheel 5643 Mar 7 2013 README.md
-rw-r--r-- 1 root wheel 163 Mar 7 2013 config
-rw------- 1 root wheel 9396224 Aug 17 09:32 nginx.core
-rw-r--r-- 1 root wheel 35647 Mar 7 2013 ngx_http_slowfs_module.c
drwxr-xr-x 3 root wheel 512 Aug 17 09:32 t/

incompatibility with open_file_cache option

Config:

# run with nginx -p . -c nginx.conf
daemon off;
worker_processes 1;
pid nginx.pid;

events {
  use epoll;
  worker_connections 2000;
}

error_log stderr;

http {
    slowfs_cache_path tmpfs/cache levels=1:2 keys_zone=memcache:100m inactive=1h max_size=100M;
    slowfs_temp_path tmpfs/tmp 1 2;

    access_log /dev/stdout;
    open_file_cache max=100;

    server {
        listen 8010;

        location / {
            root                  /path/to/files;
            slowfs_cache          memcache;
            slowfs_cache_key      $uri;
            slowfs_cache_valid    1m;
            slowfs_cache_min_uses 1;
            slowfs_big_file_size  128k;
        }
    }
}

two files in /path/to/files:

$ ls -l
-rw-rw-r-- 1 yura yura 96432450 Oct 12 12:22 aaaa1.txt
-rw-rw-r-- 1 yura yura 96432450 Oct 12 11:54 aaaa.txt

Request the first file, then the second, wait until the cache file for the first file has disappeared, then request the first file again:

$ curl 'http://localhost:8010/aaaa.txt' > /dev/null
$ curl 'http://localhost:8010/aaaa1.txt' > /dev/null
$ # wait here till cache for aaaa.txt disappeared
$ curl 'http://localhost:8010/aaaa.txt' > /dev/null

nginx writes the following to its log (stdout with this config):

127.0.0.1 - - [12/Oct/2012:15:37:00 +0400] "GET /aaaa.txt HTTP/1.1" 200 96432450 "-" "curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3"
127.0.0.1 - - [12/Oct/2012:15:37:05 +0400] "GET /aaaa1.txt HTTP/1.1" 200 96432450 "-" "curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3"
2012/10/12 15:37:09 [alert] 27573#0: *3 read() has read only 0 of 96432450 bytes (2: No such file or directory) while populating cache, client: 127.0.0.1, server: , request: "GET /aaaa.txt HTTP/1.1", host: "localhost:8010"
2012/10/12 15:37:09 [alert] 27573#0: *3 http file cache copy: "/path/to/files/aaaa.txt" failed (2: No such file or directory) while populating cache, client: 127.0.0.1, server: , request: "GET /aaaa.txt HTTP/1.1", host: "localhost:8010"
127.0.0.1 - - [12/Oct/2012:15:37:09 +0400] "GET /aaaa.txt HTTP/1.1" 200 96432450 "-" "curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3"
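
One possible workaround to try (a sketch, not from the report; it assumes the failure comes from a stale open_file_cache entry for the evicted cache file) is to keep open_file_cache away from the slowfs_cache location, or to make it revalidate quickly:

    location / {
        root                  /path/to/files;
        slowfs_cache          memcache;
        slowfs_cache_key      $uri;
        slowfs_cache_valid    1m;

        # either disable the descriptor/stat cache for this location ...
        open_file_cache       off;
        # ... or keep it but revalidate entries almost immediately:
        # open_file_cache_valid 1s;
    }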

Seg fault on CentOS 5 & nginx 1.8.0

Hi,
I used your config example to set up caching for an NFS share. Startup fails with:

/etc/init.d/nginx: line 64: 14456 Segmentation fault $nginx -t -c $NGINX_CONF_FILE

[root@img2 nginx]# nginx -V
nginx version: nginx/1.8.0
built by gcc 4.1.2 20080704 (Red Hat 4.1.2-55)
built with OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
TLS SNI support disabled
configure arguments: --user=nginx --group=nginx --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_sub_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-mail --with-file-aio --with-mail_ssl_module --with-ipv6 --with-http_spdy_module --add-module=/usr/src/user/ngx_slowfs_cache-1.10

I don't have previous experience with ngx_slowfs_cache. I wanted to use it because we had to switch from local storage to an NFS-mounted share. Could you have a look at this problem?

Thanks

Mirek

module ignores ETag

The ETag header was introduced in nginx 1.3.3. Since the slowfs cache module's source is based on a copy of the nginx http cache code, it should not drop it.

Crash when cache->file.name is not populated

Sometimes ngx_http_file_cache_open() returns before calling ngx_http_file_cache_name(), which leads to a crash when ngx_http_file_cache_update() is called later.

The request should be declined instead of calling ngx_http_slowfs_static_send() when cache->file.name is empty, perhaps with something like:

    rc = ngx_http_slowfs_cache_send(r);
-    if (rc == NGX_DECLINED) {
+    if ((rc == NGX_DECLINED) && (r->cache->file.name.len > 0)) {
        rc = ngx_http_slowfs_static_send(r);
    }

    return rc;
}

doesn't meet max_size limit

We use this module (version 1.10) with nginx 1.4.3 and these settings:

slowfs_cache_path /var/spool/slowfs/cache levels=1:2 keys_zone=slowfs_cache:128M max_size=40G inactive=3d;

...
    location / {
        log_not_found off;
        slowfs_cache_valid 7d;
        error_page 404 = @bild_fallback;
    }
    location /staticweb/images/product/ {
        slowfs_cache_valid 7d;
    }
...

But the cache folder grows beyond the max_size of 40 GB. We have to shut down nginx and do a manual cleanup to work around this.
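
For context, max_size is enforced by the nginx cache manager process, which evicts entries periodically rather than at write time, so the on-disk size can overshoot the limit. Until the underlying issue is fixed, a possible mitigation (a sketch, not from the report; the new max_size and inactive values are only illustrative) is to leave headroom below the real disk capacity and let entries expire sooner, giving the manager more chances to purge:

    slowfs_cache_path /var/spool/slowfs/cache levels=1:2
                      keys_zone=slowfs_cache:128M
                      max_size=35G inactive=1d;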

POST to site root ('/') returns "405 not allowed"

Changing the order of the URI checks in ngx_http_slowfs_handler() seems to solve the problem; here is the version that works for me:

static ngx_int_t
ngx_http_slowfs_handler(ngx_http_request_t *r)
{
    ngx_http_slowfs_loc_conf_t  *slowcf;
    ngx_int_t                    rc;

    /* check for a directory URI first, so a request for "/" is declined */
    if (r->uri.data[r->uri.len - 1] == '/') {
        return NGX_DECLINED;
    }

    /* the method check (originally the first check) now comes second */
    if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) {
        return NGX_HTTP_NOT_ALLOWED;
    }

    .....

$slowfs_cache_status is almost always HIT

My configuration looks like this:

    location / {
        slowfs_cache_valid 7d;
        add_header X-Cache-Status $slowfs_cache_status;
    }

The X-Cache-Status header is almost always HIT, even on the first request for a given URL. Is this a known bug?

This is my test case:

curl -I http://10.172.252.136:8081/r/t/jd/27489985056993_image146x114.jpg
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 28 Oct 2013 10:53:17 GMT
Content-Type: image/jpeg
Content-Length: 3406
Last-Modified: Wed, 14 Aug 2013 00:41:35 GMT
Connection: keep-alive
X-Cache-Status: HIT
Accept-Ranges: bytes

~/slowfs: grep -r 16419733655777_image166x130.jpg *
Binary file cache/e8/c0/bce8007ecc71ec6f0c2d8c439784c0e8 matches
~/slowfs: head -3 cache/e8/c0/bce8007ecc71ec6f0c2d8c439784c0e8 | tail -1
KEY: 10.172.252.136/r/t/jd/27489985056993_image146x114.jpg
~/slowfs: ls -la cache/e8/c0/bce8007ecc71ec6f0c2d8c439784c0e8
-rw------- 1 nginx nginx 3507 Oct 28 11:48 cache/e8/c0/bce8007ecc71ec6f0c2d8c439784c0e8

# ok lets remove the cache file

~/slowfs: rm cache/e8/c0/bce8007ecc71ec6f0c2d8c439784c0e8

# double check

~/slowfs: grep -r 16419733655777_image166x130.jpg *
... nothing found

# now the cache should be really clean?!

curl -I http://10.172.252.136:8081/r/t/jd/27489985056993_image146x114.jpg
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 28 Oct 2013 10:54:29 GMT
Content-Type: image/jpeg
Content-Length: 3406
Last-Modified: Wed, 14 Aug 2013 00:41:35 GMT
Connection: keep-alive
X-Cache-Status: HIT
Accept-Ranges: bytes

# another hit but a new cache file... so this should be a MISS

~/slowfs: grep -r 16419733655777_image166x130.jpg *
Binary file cache/6a/c6/653ea58bcd99cfbb3795208f66e6c66a matches
~/slowfs: ls -la cache/6a/c6/653ea58bcd99cfbb3795208f66e6c66a
-rw------- 1 nginx nginx 3168 Oct 28 11:54 cache/6a/c6/653ea58bcd99cfbb3795208f66e6c66a
~/slowfs: head -3 cache/6a/c6/653ea58bcd99cfbb3795208f66e6c66a | tail -2
KEY: 10.172.252.136/h/h/hh/16419733655777_image166x130.jpg
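
One way to watch this behaviour per request, without adding response headers, is to log the variable directly; a small sketch (standard log_format/access_log directives placed in the http block, the log path is just an example):

    log_format slowfs '$remote_addr "$request" $status slowfs=$slowfs_cache_status';
    access_log /var/log/nginx/slowfs-status.log slowfs;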

ignore long locked inactive cache entry 0cc175..., count: 1048575

With nginx 1.2.3, "ignore long locked inactive cache entry [entry_id], count: 1048575" messages appear in the error log after several minutes.

The if statement around this counter increment in ngx_http_slowfs_static_send() needs to be removed, and the call to ngx_http_slowfs_cache_update() that precedes this code should be moved below it:

                if (old_status == NGX_HTTP_CACHE_EXPIRED) {
                    /*
                     * Expired cached files don't increment counter,
                     * because ngx_http_file_cache_exists isn't called.
                     */
                    ngx_shmtx_lock(&c->file_cache->shpool->mutex);
                    c->node->count++;
                    ngx_shmtx_unlock(&c->file_cache->shpool->mutex);
                }

The first call to ngx_http_file_cache_open() always calls ngx_http_file_cache_exists(), but the subsequent call from ngx_http_slowfs_static_send() does not. The counter should always be incremented before the call to ngx_http_slowfs_cache_update(), so that it is never 0.

Still lots of RPC calls

Hi,

I'm testing your module to avoid problems when the NFS server is down. The config is quite simple, like the sample in the README file.

And it's working:

find /data/cache1/nginx/nfscache/ -type f | wc -l ; du -hs /data/cache1/nginx/nfscache/

17985
257M /data/cache1/nginx/nfscache/

Logging displays lots of HITs.

But the RPC calls didn't decrease as expected.

Is that normal? Does the module do a "stat" or something on every hit?

Warm cache without returning data to requestor?

I'm using this config to warm the cache:

  location /:cache/warm {
    rewrite ^/:cache/warm(/.*)$ $1;
    root /home/web;
    slowfs_cache        sites;
    slowfs_cache_key    $1;
    slowfs_cache_valid  15d;
    break;
  }

This works great, but it also returns the entire file, so warming up a lot of large files is slow.

I tried adding return 200 "ok"; to the location block:

  location /:cache/warm {
    rewrite ^/:cache/warm(/.*)$ $1;
    root /home/web;
    slowfs_cache        sites;
    slowfs_cache_key    $1;
    slowfs_cache_valid  15d;
    return 200 "ok";
    break;
  }

But when I do this, the file doesn't get cached.

Is there a sneaky way to cache the file without serving it?
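
Since return 200 short-circuits the content handler that performs the copy (which matches the observation above), the body apparently has to be generated for the file to be cached. One compromise (a sketch, not from the issue; the allow/deny guard is only illustrative) is to restrict the warming location to local clients, so the full body only ever travels over the loopback interface and can simply be discarded there:

  location /:cache/warm {
    # hypothetical guard: only local warming clients may use this location
    allow 127.0.0.1;
    deny  all;

    rewrite ^/:cache/warm(/.*)$ $1;
    root /home/web;
    slowfs_cache        sites;
    slowfs_cache_key    $1;
    slowfs_cache_valid  15d;
    break;
  }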

cache manager does not free the cache

Hello

I use slowfs and like it very much; it saves me a lot of nerves :) Thanks for the great software.

I've run into a problem: it looks like the cache manager does not empty the cache for me. I have set a limit of 1800m on a tmpfs whose size is 2G, but nginx always fills the full 2 GB and writes alerts to the logs (it can't copy the file because there is no space left on the device).

How can I check whether this is a bug or my settings? Maybe I can turn on some debug information for the cache manager to see why it doesn't want to delete files?

Thanks for any advice,
Andrey.
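
Regarding the debugging question above: if the nginx binary was built with --with-debug, the cache manager's activity can be made visible by raising the error log level (a minimal sketch; the log path is just an example):

    # requires an nginx binary compiled with --with-debug
    error_log /var/log/nginx/error.log debug;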
