
disklrucache's Introduction

Disk LRU Cache

A cache that uses a bounded amount of space on a filesystem. Each cache entry has a string key and a fixed number of values. Each key must match the regex [a-z0-9_-]{1,120}. Values are byte sequences, accessible as streams or files. Each value must be between 0 and Integer.MAX_VALUE bytes in length.

The cache stores its data in a directory on the filesystem. This directory must be exclusive to the cache; the cache may delete or overwrite files from its directory. It is an error for multiple processes to use the same cache directory at the same time.

This cache limits the number of bytes that it will store on the filesystem. When the number of stored bytes exceeds the limit, the cache will remove entries in the background until the limit is satisfied. The limit is not strict: the cache may temporarily exceed it while waiting for files to be deleted. The limit does not include filesystem overhead or the cache journal, so space-sensitive applications should set a conservative limit.

Clients call edit to create or update the values of an entry. An entry may have only one editor at one time; if a value is not available to be edited then edit will return null.

  • When an entry is being created it is necessary to supply a full set of values; the empty value should be used as a placeholder if necessary.
  • When an entry is being edited, it is not necessary to supply data for every value; values default to their previous value.

Every edit call must be matched by a call to Editor.commit or Editor.abort. Committing is atomic: a read observes the full set of values as they were before or after the commit, but never a mix of values.
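For illustration, a minimal write path might look like the following sketch; the directory, sizes, and key are assumptions, and the calls shown (open, edit, set, commit, abort) are the public API as used in the issues further down:

// Sketch: open a cache and atomically write one entry (imports from java.io assumed).
static void writeExample(File directory) throws IOException {
    DiskLruCache cache = DiskLruCache.open(
            directory, 1 /* appVersion */, 1 /* valueCount */, 10 * 1024 * 1024 /* maxSize */);
    DiskLruCache.Editor editor = cache.edit("my-key"); // key must match [a-z0-9_-]{1,120}
    if (editor == null) return; // another editor currently holds this entry
    try {
        editor.set(0, "hello"); // value index 0; a new entry must set all values
        editor.commit();        // readers now see all values, or none
    } catch (IOException e) {
        editor.abort();         // every edit ends in commit or abort
    }
}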

Clients call get to read a snapshot of an entry. The read will observe the value at the time that get was called. Updates and removals after the call do not impact ongoing reads.
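A matching read, as a sketch (same assumed key):

// Sketch: read a consistent snapshot of an entry.
static String readExample(DiskLruCache cache) throws IOException {
    DiskLruCache.Snapshot snapshot = cache.get("my-key");
    if (snapshot == null) return null; // entry absent or not yet readable
    try {
        return snapshot.getString(0); // sees the values as of the get() call
    } finally {
        snapshot.close(); // snapshots hold open streams; always close them
    }
}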

This class is tolerant of some I/O errors. If files are missing from the filesystem, the corresponding entries will be dropped from the cache. If an error occurs while writing a cache value, the edit will fail silently. Callers should handle other problems by catching IOException and responding appropriately.

Note: This implementation specifically targets Android compatibility.

Download

Download the latest .jar or grab via Maven:

<dependency>
  <groupId>com.jakewharton</groupId>
  <artifactId>disklrucache</artifactId>
  <version>2.0.2</version>
</dependency>

or Gradle:

compile 'com.jakewharton:disklrucache:2.0.2'

Snapshots of the development version are available in Sonatype's snapshots repository.

If you would like to compile your own version, the library can be built by running mvn clean verify. The output JAR will be in the target/ directory. (Note: this requires Maven to be installed.)

License

Copyright 2012 Jake Wharton
Copyright 2011 The Android Open Source Project

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

disklrucache's People

Contributors

acherkashyn, blangel, divankov, gubatron, jakewharton, jonasfa, kevinsawicki, sjudd, swankjesse, wavesonics


disklrucache's Issues

Max size doesn't take the minimum 4 KB block size into consideration.

I am using this cache to store many, many small files. I set a max of 256 MB, but the cache grows to more than 450 MB. That's because each small file takes at least 4 KB on disk. Since this is a disk cache, I would assume that setting a max would take block size into consideration.
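As a back-of-the-envelope sketch of that overhead (a 4096-byte block size is assumed):

// Sketch: logical bytes vs. on-disk bytes when the filesystem allocates 4 KB blocks.
static long onDiskBytes(long valueLength, long blockSize) {
    if (valueLength == 0) return 0;
    return ((valueLength + blockSize - 1) / blockSize) * blockSize; // round up to whole blocks
}
// e.g. 100,000 files of 500 bytes count as ~50 MB to the cache,
// but occupy onDiskBytes(500, 4096) * 100,000, i.e. ~410 MB of disk.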

Concurrent edit experience is cumbersome

We're using DiskLruCache in a server-side app that downloads images from a remote server. When two requests simultaneously want to store the same image in the cache, the code gets very ugly very fast. The problem is that edit() returns null, and we have no mechanism to wait for the other editor's download to complete.

It would be handy if there were an API to await a snapshot that is currently being created.

I think it would be relatively straightforward to implement this on top of Guava's LoadingCache, but it would be simpler to build the blocking mechanics directly into DiskLruCache.
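With the current API, the workaround amounts to polling; a sketch (the back-off interval is arbitrary):

// Sketch: wait out a concurrent editor, then return a snapshot
// (or null if the entry simply doesn't exist).
static DiskLruCache.Snapshot awaitSnapshot(DiskLruCache cache, String key)
        throws IOException, InterruptedException {
    while (true) {
        DiskLruCache.Snapshot snapshot = cache.get(key);
        if (snapshot != null) return snapshot;
        DiskLruCache.Editor editor = cache.edit(key);
        if (editor != null) {
            editor.abort(); // nobody is editing, so the entry is genuinely absent
            return null;
        }
        Thread.sleep(50); // another editor is active; back off and retry
    }
}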

Multi-thread: Infinite loop in trimToSize

Hello guys,

I use DiskLruCache in a multi-threaded environment. I get an infinite loop in the trimToSize method:

while (size > maxSize) {
    Map.Entry<String, Entry> toEvict = lruEntries.entrySet().iterator().next();
    remove(toEvict.getKey());
}

The remove call fails to remove the entry, as another thread is editing the toEvict entry. Shouldn't we check whether the remove worked and, for example, silently return if it did not?
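The suggested guard might look like this sketch, relying on remove() returning false when it cannot remove the entry:

while (size > maxSize) {
    Map.Entry<String, Entry> toEvict = lruEntries.entrySet().iterator().next();
    if (!remove(toEvict.getKey())) {
        return; // entry is mid-edit; trimming will be retried on a later write
    }
}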

Can't delete a cache that has old, now invalid keys

I'm building an admin option to clear the cache. In other situations, I've done this by deleting the cache using DiskLruCache#delete and then re-opening it. That is failing in this case because the cache has keys that were created before the [a-z0-9_-]{1,64} key validation pattern existed. This is a problem because deleting the cache causes it to be trimmed to size, which removes keys, and the remove method validates each key first.

It doesn't look like there's any way to remove invalid keys. Would a reasonable workaround be to just delete the contents of the cache using IoUtils.deleteContents(cache.getDirectory()) and let the cache sort things out?
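That workaround might be sketched as follows (closing the cache first is an assumption, and deleteContents here is a stand-in for the library's internal helper):

// Sketch: wipe a cache whose journal still contains legacy, now-invalid keys.
static DiskLruCache nuke(DiskLruCache cache, File dir, int appVersion, int valueCount,
        long maxSize) throws IOException {
    cache.close();       // stop the journal writer before touching files
    deleteContents(dir); // remove every file, whatever its key used to be
    return DiskLruCache.open(dir, appVersion, valueCount, maxSize); // fresh journal
}

static void deleteContents(File dir) throws IOException {
    File[] files = dir.listFiles();
    if (files == null) throw new IOException("not a readable directory: " + dir);
    for (File file : files) {
        if (file.isDirectory()) deleteContents(file);
        if (!file.delete()) throw new IOException("failed to delete file: " + file);
    }
}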

IllegalStateException in Editor#newOutputStream()

I am using the latest code from GitHub and am seeing an IllegalStateException from the following code in DiskLruCache.java, lines 770-772:

if (entry.currentEditor != this) {
    throw new IllegalStateException();
}

This happens quite rarely (I saw it for the first time after several months, with 200 people using my app), and I have no indication of what may cause it: no comments, no exception message. That's why I chose to open an issue. I'd appreciate any help, and would also suggest better error reporting.

Hey! 10 of my 61 tests failed...

I have exported and uploaded the test results. Maybe something happened, or it is an OS security policy. But some tests show that something is wrong; for example:

DiskLruCache.Editor creator = cache.edit("k1");
creator.set(0, "A");
DiskLruCache.Editor updater = cache.edit("k1");
updater.set(0, "C");
DiskLruCache.Snapshot snapshot = cache.get("k1");
assertThat(snapshot.getString(0)).isEqualTo("C");

It failed.

Maybe you can check all the test results...

testResults.pdf

Thanks.

DiskLruCache and MediaPlayer

Hi,

I'd like to use DiskLruCache to store media files that I would play in a MediaPlayer later.
My problem is that the possible inputs to set the data source of a MediaPlayer are:

  • A Uri
  • A FileDescriptor
  • A path

The DiskLruCache.Snapshot only returns an InputStream, so the only solution for me is to write the stream to a temp File and create a new FileInputStream. That's not really something I want to do 😄
After a quick look under the hood, I saw that you are actually using FileInputStream, so I suppose I could just cast the stream, but that seems like a bad solution since the API doesn't specify it: it could break at any moment.
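For reference, the temp-file workaround might be sketched like this (the prefix and buffer size are arbitrary; imports from java.io assumed):

// Sketch: copy a snapshot's stream to a temp file so MediaPlayer can take a path.
static String toPlayablePath(DiskLruCache.Snapshot snapshot, File tempDir) throws IOException {
    File tmp = File.createTempFile("media", null, tempDir);
    InputStream in = snapshot.getInputStream(0);
    OutputStream out = new FileOutputStream(tmp);
    try {
        byte[] buffer = new byte[8 * 1024];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
    } finally {
        out.close();
        snapshot.close(); // also closes the snapshot's input stream
    }
    return tmp.getAbsolutePath(); // usable with MediaPlayer.setDataSource(String)
}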

Would it be a bad thing to add a getFileInputStream or a getUri method to the DiskLruCache.Snapshot class? I'd prefer to discuss this with you before working on a pull request.

Thanks for your help!

Two level cache

Hi Jake,
How can I extend this cache into two levels?
I mean a memory cache whose items, when evicted, go down into the disk cache. (One approach is sketched below.)

Thanks!
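One way to layer a memory level on top is sketched here with Android's LruCache; the byte[] value type and sizes are assumptions, and diskCache is assumed to be an already-open DiskLruCache with valueCount 1:

// Sketch: entries evicted from the memory level are demoted to the disk level.
class TwoLevelCache {
    private final DiskLruCache diskCache;
    private final LruCache<String, byte[]> memoryCache;

    TwoLevelCache(DiskLruCache diskCache, int memoryBytes) {
        this.diskCache = diskCache;
        this.memoryCache = new LruCache<String, byte[]>(memoryBytes) {
            @Override protected int sizeOf(String key, byte[] value) {
                return value.length;
            }

            @Override protected void entryRemoved(
                    boolean evicted, String key, byte[] oldValue, byte[] newValue) {
                if (evicted) writeToDisk(key, oldValue); // demote, don't discard
            }
        };
    }

    private void writeToDisk(String key, byte[] value) {
        try {
            DiskLruCache.Editor editor = diskCache.edit(key);
            if (editor == null) return; // another writer holds this key
            OutputStream out = editor.newOutputStream(0);
            try {
                out.write(value);
            } finally {
                out.close();
            }
            editor.commit();
        } catch (IOException ignored) {
            // a failed demotion only costs a future disk miss
        }
    }
}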

Silent commit failure due to ignore renameTo return value

DiskLruCache.Editor editor = getDiskCache().edit(key);
try {
    File file = editor.getFile(0);
    new FileOutputStream(file).write(data); // note: the stream is never closed or flushed
    editor.commit();
} finally {
    editor.abortUnlessCommitted();
}

Now, commit calls completeEdit, which contains the line dirty.renameTo(clean).
File.renameTo has a return value, which is discarded. The rename may fail silently due to a clumsy way of writing the file (as seen above), yet the caller will think the commit was committed. Later, when getDiskCache().get(key) is called, it sees that the file is missing for the wrong reason.

renameTo fails because there is still an open handle for the file in the process, and the OS (Windows) may prevent the rename and return false. (Side note: strangely, in the past I've been able to rename running and locked .exe files without any problem from non-Java apps.)

The line was mentioned in #67: if you check the return value, the operation will still be atomic and should throw an IOException, as you do in 4c31913, which is a fix for #32.

There are also three other File.delete calls whose return values are not checked, mostly related to backup files; these are still worth checking in the code (or intentionally ignoring with a comment).
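A checked rename, in the spirit of the referenced fix, might look like this sketch:

// Sketch: fail loudly instead of silently when the filesystem refuses a rename.
private static void renameTo(File from, File to) throws IOException {
    if (to.exists() && !to.delete()) {
        throw new IOException("failed to delete " + to);
    }
    if (!from.renameTo(to)) {
        throw new IOException("failed to rename " + from + " to " + to);
    }
}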

This issue was discovered when trying to build: https://github.com/sjudd/glide/tree/3.0a

Last DiskLruCache.get() in loop returns null on existing key

Hi!

I'm loading images into a ListView from DiskLruCache. The ListAdapter.getView() method creates an AsyncTask which loads the image from DiskLruCache. And there is very strange behavior: DiskLruCache.get() returns null for the last ListView element in a bunch of lazily loaded elements.

To simplify the situation, imagine a loop:

for (i...) {
    Snapshot value = DiskLruCache.get("key" + i);
}

For the last "i", the value is null. I checked that on the UI thread as well, but got the same result.

If it's not a bug but my mistake, maybe someone could help me with the issue?

Thanks!

P.S. DDMS shows that the cache entry exists. Clearing and refilling the cache brings the same result.
P.P.S. It does not depend on the item count. It can be a different entry from time to time, but it is always the last in the bunch of get() calls.

Add Buffersize in BufferedInputStream constructor

This is a minor feature request. When the BufferedInputStream is created, the buffer-size argument is missing. Because of that, Android prints the following warning in the logs; when I cache a lot of images, it fills up my log window. Please add the buffer-size argument to the constructor.

08-22 08:55:07.558: I/global(1158): Default buffer size used in BufferedInputStream constructor. It would be better to be explicit if an 8k buffer is required.
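The requested change is a one-liner at the call site; a sketch:

// Sketch: pass an explicit buffer size so Dalvik stops logging the warning.
private static final int IO_BUFFER_SIZE = 8 * 1024;

InputStream in = new BufferedInputStream(snapshot.getInputStream(0), IO_BUFFER_SIZE);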

Cache is getting cleared after force-stopping the app?

The DiskLruCache is getting cleared after force-stopping the app. Is this how it works in general, or is something in my code causing the issue?

I am building the cache, and one of the test cases is to force-stop the Android app. By the time I come back, all of the cache has been deleted.

Any ideas?

Android aggressive cache clearing leads to crash

I have the cache in a subdirectory of the application cache directory. On devices with low internal storage (the Nexus One, for example), this eventually leads to low-storage situations where Android just clears the whole cache directory, including subdirectories.

This produces:

java.io.FileNotFoundException: /data/data/<package>/cache/images/-1944396650.0.tmp (No such file or directory)
at org.apache.harmony.luni.platform.OSFileSystem.open(Native Method)
at dalvik.system.BlockGuard$WrappedFileSystem.open(BlockGuard.java:232)
at java.io.FileOutputStream.<init>(FileOutputStream.java:94)
at java.io.FileOutputStream.<init>(FileOutputStream.java:66)
at <package>.DiskLruCache$Editor.newOutputStream(DiskLruCache.java:686)

It's thrown because the images directory is now gone. I thought listening for ACTION_DEVICE_STORAGE_LOW could solve this, but that is broadcast after Android has already cleared the directory.

It would be nice if it were possible to just reset the cache when the whole directory has gone missing.
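A possible guard, sketched here (the field names and constants are assumptions):

// Sketch: reopen the cache when Android has wiped its directory out from under us.
private synchronized DiskLruCache openedCache() throws IOException {
    if (!cacheDir.exists()) { // the whole directory was cleared by the OS
        try {
            cache.close(); // drop the now-stale journal writer
        } catch (IOException ignored) {
        }
        cache = DiskLruCache.open(cacheDir, APP_VERSION, VALUE_COUNT, MAX_SIZE);
    }
    return cache;
}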

A resource was acquired at attached stack trace but never released.

This version still has a leakage issue:

09-09 21:47:38.549 31155 31164 E StrictMode: A resource was acquired at attached stack trace but never released. See java.io.Closeable for information on avoiding resource leaks.
09-09 21:47:38.549 31155 31164 E StrictMode: java.lang.Throwable: Explicit termination method 'close' not called
09-09 21:47:38.549 31155 31164 E StrictMode: at dalvik.system.CloseGuard.open(CloseGuard.java:184)
09-09 21:47:38.549 31155 31164 E StrictMode: at java.io.FileInputStream.<init>(FileInputStream.java:80)
09-09 21:47:38.549 31155 31164 E StrictMode: at com.htc.lib2.photoplatformcachemanager.DiskLruCache.get(DiskLruCache.java:423)
...

Files disappear after a while

While using the DiskLruCache, I see that files are erased from the cache after a few days.
Is there a configurable lifetime for the cached files somewhere? Am I missing anything?

Get entry count/iterator

It would be great to have a way to know the count of the entries, and even better, an iterator over the snapshots like the one in the OkHttp implementation.

Journal size improvement

After opening and closing an Android app that uses Alexander Blom's webimageloader library several times, the startup time keeps increasing. In fact, the journal file is never rebuilt and keeps getting bigger (in my case almost 1 MB, i.e. 5 seconds of processing!).

After hours of investigation, I discovered that the journal-reading code (readJournalLine, shown below) doesn't count the redundant operations, so redundantOpCount is 0 after each startup. Therefore, the journalRebuildRequired method always returns false and the journal is never rebuilt.

    private void readJournalLine(String line) throws IOException {
        int firstSpace = line.indexOf(' ');
        if (firstSpace == -1) {
            throw new IOException("unexpected journal line: " + line);
        }

        int keyBegin = firstSpace + 1;
        int secondSpace = line.indexOf(' ', keyBegin);
        final String key;
        if (secondSpace == -1) {
            key = line.substring(keyBegin);
            if (firstSpace == REMOVE.length() && line.startsWith(REMOVE)) {
                //___ ADD THIS ____
                redundantOpCount++;
                //_________________
                lruEntries.remove(key);
                return;
            }
        } else {
            key = line.substring(keyBegin, secondSpace);
        }

        Entry entry = lruEntries.get(key);
        if (entry == null) {
            entry = new Entry(key);
            lruEntries.put(key, entry);
        }

        if (secondSpace != -1 && firstSpace == CLEAN.length() && line.startsWith(CLEAN)) {
            String[] parts = line.substring(secondSpace + 1).split(" ");
            entry.readable = true;
            entry.currentEditor = null;
            entry.setLengths(parts);

            //___ ADD THIS ____
            //if previous read or dirty
            if (lruEntries.get(key) != null)
                redundantOpCount++;
            //_________________
        } else if (secondSpace == -1 && firstSpace == DIRTY.length() && line.startsWith(DIRTY)) {
            entry.currentEditor = new Editor(entry);
        } else if (secondSpace == -1 && firstSpace == READ.length() && line.startsWith(READ)) {
            //___ ADD THIS ____
                redundantOpCount++;
            //_________________
            // this work was already done by calling lruEntries.get()
        } else {
            throw new IOException("unexpected journal line: " + line);
        }
    }

Request for new release 2.0.3

Would it be possible to have a new release? I am particularly interested in this commit, #65,
which increases the cache key length to 120.

Simplify interface and make lib truly asynchronous

I propose to radically simplify the interface of the DiskLruCache class along the lines of TMCache (https://github.com/tumblr/TMCache):

  1. Simple get/put methods that block (and do not throw) if another thread modifies the cache.
  2. No edit() method that throws pesky exceptions.
  3. The ability to put and get values to/from the cache asynchronously by supplying callbacks that are called on a background thread with the results (the cache must maintain, or receive via ExecutorService, a thread pool). This would allow using the cache to load complex images from disk and generate thumbnails for them on background threads.
  4. Complex entries with indexes inside the cache are NOT needed because they complicate the cache logic; simple key/value storage is enough.
  5. Make the cache use the getCacheDir() directory by default.

Proposed interface:

public interface Cache<Key, Value>
{
    interface GetResult<Key, Value>
    {
        void cacheDidGetObject(Cache cache, Key key, Value object);
    }

    interface PutResult<Key, Value>
    {
        void cacheDidPutObject(Cache cache, Key key, Value object);
    }

    Value get(Key key); // Blocks until value is retrieved.
    void get(Key key, Cache.GetResult<Key, Value> callback);

    void put(Key key, Value value); // Blocks if another thread puts value with same key.
    void put(Key key, Value value, Cache.PutResult<Key, Value> callback);
} // Cache

Journal READ Corruption Issue

I'm seeing unpredictable but consistent corruption of journal files.  The point of corruption is always a READ line in the journal.  I'm trying to wrap my head around how this could happen and wondering if anyone has suggestions.

I've created an object cache that wraps the DiskLruCache.  Here are my put and get methods:

public Object get(String key) throws IOException, ClassNotFoundException {
    Object result = null;
    String sanitizedKey = getDiskHashKey(key);
    if (cache != null) {
        DiskLruCache.Snapshot snapshot = cache.get(sanitizedKey);
        if (snapshot != null) {
            InputStream inputStream = snapshot.getInputStream(0);
            InputStream buffer = new BufferedInputStream(inputStream);
            ObjectInput input = new ObjectInputStream(buffer);
            result = input.readObject();
            snapshot.close(); // note: not reached if readObject throws
        }
    }
    return result;
}

public void put(String key, Object value) throws IOException {
    String sanitizedKey = getDiskHashKey(key);
    DiskLruCache.Editor creator = cache.edit(sanitizedKey);
    if (creator != null) {
        try {
            ObjectOutputStream objOutStream = new ObjectOutputStream(
                    new BufferedOutputStream(creator.newOutputStream(0)));
            objOutStream.writeObject(value);
            objOutStream.close();
            creator.commit();
        } catch (Exception e) {
            e.printStackTrace();
            creator.abort();
        }
    }
}

My caches are on the order of 3,000 objects when I start seeing the behavior. This object cache is accessed by multiple threads, but it looks to me like the DiskLruCache itself is doing the necessary synchronization. I've seen the cache get wiped out several times, and I've managed to capture a cache in a state where it has an invalid entry:

READ 2013-reg-c8d10fe6-8c07-4b33-8d03-b7addf69038d
READ 2013-reg-191ff930-47e3-48b0-92f2-1da067e982d4
READ 2013-reg-b6ff7fd9-7da5-4f71-946f-e5d560060794
RREAD 2013-pst-72fe30af-b568-48cd-8582-bfbaa0e2ae6b
READ 2013-pst-5440007e-4f48-46c0-bec2-3ab4779a4c3e
READ 2013-pst-0117fbeb-a140-43ba-8dd2-4b7fc0f55211

In every case where the cache has become corrupted, the behavior is similar: part of a READ entry is written over with another READ. This isn't at the very end of the cache state; it's about 85% of the way through the cache.

Uppercase letters in key names

Hi Jake,
I know the regex is strict ([a-z0-9_-]{1,64}), but I'm wondering why it doesn't accept uppercase letters.
Since filenames accept them without any issue, why not extend the regex? Would [a-zA-Z0-9_-]{1,64} work for you?
I was looking for a "no, because..." but didn't find one... is there an explanation for disallowing uppercase?
I didn't see an exception in logcat, nor an app crash, but that could be due to some wrongly copy-pasted code...

I get this error: "Caused by: java.io.IOException: failed to delete file: /mnt/sdcard/Android/data/mypackage/cache/media/361bd6dab3696905ef0139a9e9969820a9821883b4b8d0d014bdf3f7f313eede.0"

details:
at com.jakewharton.disklrucache.Util.deleteContents(Util.java:62)
at com.jakewharton.disklrucache.DiskLruCache.delete(DiskLruCache.java:654)
at com.jakewharton.disklrucache.DiskLruCache.open(DiskLruCache.java:236)

    android_ver = 2.3.5, avail_mem = 50.109375M, display = 480x320, start at 2013-11-13T06:30:55.000+08:00, crash at 2013-11-13T07:08:08.000+08:00

Old Image is always shown

When an image changes on the server, DiskLruCache will still load the old image saved in the file system.

Scenario

  1. Load an image URL provided by the server (the image is saved to DiskLruCache)
  2. Reload the same URL (the image is loaded now from DiskLruCache)
  3. Change the image from server
  4. Reload the image URL (the image is still loaded from DiskLruCache, but the image has changed on the server)

Actual results
The old image is shown.

Expected results
The new image should be shown.

Additional information
The Cache-Control and expiration HTTP headers are correct, so Volley could reload the image from the server.
(I think DiskLruCache is simply not designed for this need.)

Feature Request: make DiskLruCache.Editor Closeable

In order to always match an edit to a commit() or abort(), even in case of exceptions, it would be nice to be able to do this:

Editor e = cache.edit(key);
try {
  // edit values
  e.commit();
} finally {
  e.close(); // will abort(), unless the editor has already been committed or aborted
}

I can submit a pull request if preferred.

Serializing an ArrayList works in a JUnit test, but not in other methods

See https://gist.github.com/elliottsj/8198330

The JUnit test runs fine and creates the file DiskLruCacheTest/test-store-array-list.0, but when the same code is put into the main method, the program hangs for a while and does not create the file DiskLruCache/test-store-array-list.0.

I'm using JDK 1.7.0_40 in the gist, but I'm having the same issue in my Android app (API 19) where ArrayLists are not being written to the cache.

DiskLruCache.size() does not seem to reflect size correctly.

When putting several items into the cache in sequence, the value returned by size() after each commit and flush does not accurately reflect the size. As an example, I have a cache size of 1.5 MB (which is correctly returned by getMaxSize()); however, when adding 30-50 KB images, I begin seeing incorrect values returned by size() after just a few images, and after about 7 the size does not change anymore, including when relaunching the app. The files were written to disk, however, as they can be found in the directory.

Feature Request: support changing the cache size

I think that right now it's impossible to change the cache size in an exception-safe way:

// MyClass.java
private synchronized void updateCacheSize(int maxSize) throws IOException {
  diskLruCache.close();

  // If this throws, we remain with a closed cache
  diskLruCache = DiskLruCache.create(cacheDirectory, appVersion, valueCount, maxSize);

  diskLruCache.flush(); // cause the cache to trim its size
}

Looking at the code, it seems that updating maxSize and calling trimToSize() should work safely, and leave us with a consistent cache if trimToSize() throws:

// DiskLruCache.java
public synchronized void setMaxSize(int maxSize) {
  int oldMaxSize = this.maxSize;
  this.maxSize = maxSize;
  try {
    trimToSize();
  } catch (IOException ignored) {
    this.maxSize = oldMaxSize;
  }
}

This code isn't perfect, since the cache doesn't really return to its pre-trimmed contents, but it does let us change the size without risking a closed cache instance.

Total number of files limit of FAT filesystem

In the FAT filesystem, which is the default filesystem for external SD cards, we can only store around 15-20k files in a single directory. In most cases, this shouldn't be a problem. But when I need to cache a lot of small images with DiskLruCache, it becomes an issue.

Hence, I would like to know if it's possible for DiskLruCache to create sub-directories within its cache directory so we can solve this issue for good.

FAT filesystem limit information: http://stackoverflow.com/questions/2651907/is-there-a-limit-for-the-number-of-files-in-a-directory-on-an-sd-card
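Pending such a feature, one workaround is to shard entries across several independent caches, each in its own directory; a sketch (the array and directory layout are assumptions):

// Sketch: spread entries over N DiskLruCache instances to stay under FAT's
// per-directory file limit. `caches` is assumed to hold N open instances,
// with caches[i] rooted at <baseDir>/shard-i.
static DiskLruCache shardFor(DiskLruCache[] caches, String key) {
    int shard = (key.hashCode() & 0x7fffffff) % caches.length; // non-negative index
    return caches[shard];
}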

Support absolute expiration

I intend to use DiskLruCache to cache all HTTP requests from my application. Every request to a server will be cached for re-use.

It would be great if we could configure an absolute expiration for the cache, so that after a set amount of time an entry is cleared automatically, fresh data is fetched, and the result is cached again.
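DiskLruCache has no built-in time-to-live, but one can be layered on top by storing a deadline alongside each value; a sketch (the valueCount of 2 and the index layout are assumptions):

// Sketch: each entry stores two values: index 0 = payload, index 1 = expiry millis.
// The cache must be opened with valueCount = 2.
static final int VALUE_DATA = 0;
static final int VALUE_EXPIRES = 1;

static void putWithTtl(DiskLruCache cache, String key, String data, long ttlMillis)
        throws IOException {
    DiskLruCache.Editor editor = cache.edit(key);
    if (editor == null) return; // another editor is active
    editor.set(VALUE_DATA, data);
    editor.set(VALUE_EXPIRES, Long.toString(System.currentTimeMillis() + ttlMillis));
    editor.commit();
}

static String getUnlessExpired(DiskLruCache cache, String key) throws IOException {
    DiskLruCache.Snapshot snapshot = cache.get(key);
    if (snapshot == null) return null;
    try {
        long expires = Long.parseLong(snapshot.getString(VALUE_EXPIRES));
        if (System.currentTimeMillis() > expires) {
            cache.remove(key); // stale: evict so fresh data is fetched and re-cached
            return null;
        }
        return snapshot.getString(VALUE_DATA);
    } finally {
        snapshot.close();
    }
}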

Validate key Issue

I am having an issue with key validation between my app and beta testers, and I am unsure why validateKey would find my keys invalid.

See below:

[screenshot attached: 2015-09-26, 12:59:38 pm]

Here are some of the keys my users are getting:

54 | 00:00:02:227 | D/asset manager key: a9e9b46-e476-fdd1rx0y0w500h325r_330000_000-mp4
59 | 00:00:07:132 | D/asset manager key: 5f54b52-2ef9-3420rx0y0w135h156r_00cc00_120-mp4
64 | 00:00:08:665 | D/asset manager key: 6c705f7-c0b8-1235rx0y0w395h314r_000033_240-mp4
69 | 00:00:08:984 | D/asset manager key: 7cdb2b3-f0f7-2c76rx0y0w500h153r_cccccc_000-mp4
74 | 00:00:09:477 | D/asset manager key: 5f54b52-2ef9-3420rx0y0w135h156r_00cc00_120-mp4
79 | 00:00:14:999 | D/asset manager key: a91dcba-07a4-58ccrx0y0w227h199r_663300_040-mp4
84 | 00:00:18:176 | D/asset manager key: 5aad806-70a0-6426rx0y0w500h200r_333300_060-mp4
89 | 00:00:25:499 | D/asset manager key: 8e7b5b5-eb57-2f6erx0y0w500h333r_cccc99_060-mp4
94 | 00:00:26:262 | D/asset manager key: a1fa904-3f87-4ee7rx0y0w498h281r_330066_280-mp4
99 | 00:00:29:370 | D/asset manager key: a1fa904-b01f-8ca8rx0y0w404h273r_663333_000-mp4
104 | 00:00:33:981 | D/asset manager key: 0d8298c-7922-8914rx0y0w242h140r_3366cc_220-mp4
109 | 00:00:36:839 | D/asset manager key: 2577a4b-88ea-8282rx0y0w500h372r_333300_060-mp4
114 | 00:00:40:294 | D/asset manager key: 6c20180-1454-b7e4rx0y0w500h273r_330066_280-mp4
119 | 00:00:44:582 | D/asset manager key: toilet_flush_ani_ef1405f6-c75a-aa6f-a5f29f1085e8a151-gif
124 | 00:00:45:149 | D/asset manager key: a5097fb-7f16-fea5rx0y0w299h296r_996633_040-mp4
129 | 00:00:48:491 | D/asset manager key: 25773d1-7499-5fecrx0y0w500h210r_996633_040-mp4
134 | 00:00:52:509 | D/asset manager key: 2e78ac2-e675-95b5rx0y0w500h281r_cccccc_000-mp4
139 | 00:00:55:684 | D/asset manager key: a91dcba-c029-01a1rx0y0w380h177r_663300_040-mp4
144 | 00:00:56:848 | D/asset manager key: a215a90-996d-850erx0y0w500h250r_cccccc_000-mp4
196 | 00:00:57:476 | D/asset manager key: 82369cc-0a2f-4be8rx0y0w280h210r_330000_000-mp4
201 | 00:00:58:393 | D/asset manager key: a1fa904-09fe-25berx0y0w320h240r_330000_000-mp4
206 | 00:01:05:755 | D/asset manager key: a3f8891-f017-cf00rx0y0w500h250r_cc9933_040-mp4
211 | 00:01:07:640 | D/asset manager key: 6890e6e-4c44-fa27rx0y0w500h325r_006600_120-mp4
216 | 00:01:10:147 | D/asset manager key: a215a90-2e33-ab4erx0y0w500h375r_996633_040-mp4
221 | 00:01:12:938 | D/asset manager key: 2e78ac2-5541-5c23rx0y0w500h243r_cccccc_000-mp4
226 | 00:01:14:499 | D/asset manager key: 02a9b27-3e20-4a6drx0y0w325h429r_cccccc_000-mp4
231 | 00:01:16:610 | D/asset manager key: a9534a9-2abc-572frx0y0w500h349r_333300_060-mp4
236 | 00:01:18:263 | D/asset manager key: a9f17f7-dceb-46fbrx0y0w50h50r_cc9933_040-mp4

And here is the code I am using to generate these keys:

String key = url.replace(".", "-")
        .replaceAll("^[a-z0-9_-]", "") // note: '^' outside a character class anchors the match; [^a-z0-9_-] was likely intended
        .toLowerCase();

if (key.length() > 63)
    key = key.substring(key.length() - 63, key.length());

Log.d(TAG, "key: " + key);

DiskLruCache.Snapshot snapshot = mDiskCache.get(key);
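A sanitizer that conforms to the documented pattern might be sketched as follows (names hypothetical; a java.util.Locale import is assumed):

// Sketch: normalize an arbitrary URL into a key matching [a-z0-9_-]{1,64}.
static String toCacheKey(String url) {
    String key = url.toLowerCase(Locale.US)
            .replaceAll("[^a-z0-9_-]", "_"); // '^' inside the class negates it
    return key.length() <= 64 ? key : key.substring(key.length() - 64);
}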

Possible faulty unit test: shrinkMaxSizeEvicts

Other people who have forked from this master have also seen this test failure after not changing anything of substance.

The method behind the failing assert, ThreadPoolExecutor.getTaskCount(), states that it "Returns the approximate total number of tasks that have ever been scheduled for execution".

It looks to me like the test is meant to prove that only one evict was caused by the shrink; is there a better way we can test this?

Add call to force immediate reduction in cache size

We use this cache to store images and allow the user to adjust the cache size. If there is a sufficiently large difference between current storage and the new cache size, then the cache is at least partially reduced immediately. However, when the difference does not meet the rebuild requirements of half the journal size or 2000 ops, it seems the size will not be adjusted. It would be nice to have a call like DiskLruCache.setMaxSizeImmediate(size), or just a flag indicating whether the new size should be immediately enforced.

Auto flush

Manually flushing is a pain. The cache should automatically flush the journal after completing a new entry.

Lost cache in some condition

I found an issue: if you create a new cache entry and the app crashes just then, the entry will be lost.

I checked the journal file and found that the new entry is marked "DIRTY", and even though I called commit, there is no "CLEAN" following that "DIRTY". If the app crashes at this point, the journal is left with the "DIRTY" record but not the "CLEAN" record it should have. When you launch the app again, any "DIRTY" entry with no "CLEAN" following it is deleted, and you have lost the entry.

This scenario cannot be covered by Android test cases.

I found this may be caused by the journal being opened with a BufferedWriter, which has an 8K buffer internally. So the latest "CLEAN" state is actually in the buffer, not on disk, when the crash happens.

Adding the following line in the completeEdit method seems to solve this:
journalWriter.flush();
