Comments (24)
Thanks for your inputs, Theodor, much appreciated. I learned a lot!
I cannot think of other race conditions, so I've released a new version. I will focus on more tests from now on to try to come up with unlikely scenarios. I will also re-test passing on the struct releaser instead of IDisposable. Maybe I'll release a whole different project to benchmark them side by side. Closing, as the issue itself seems to be fixed and there are no other known race conditions.
from asynckeyedlock.
Credited as contributor @ https://github.com/MarkCiliaVincenti/AsyncKeyedLock/blob/master/CONTRIBUTORS.md
Thanks again!
Hi Theodor,
First of all I'd like to say that I love your posts on Stack Overflow, and secondly thanks for this, much appreciated!
I think what you're saying is theoretically possible. Off the top of my head I think a fix would be as simple as an if statement in TryIncrement that checks whether the counter is 0 and if pooling is enabled. If both conditions are met, it returns false, so then it will instead do a TryAdd.
Any thoughts on this? I need to also think about what happens if pooling is not enabled. I think the issue could exist there too, so maybe just check if the counter is 0 and always return false.
I will look at it more closely tomorrow, as I am on my smartphone at the moment.
Hi Mark! Yep, maybe this issue can be solved by adding state about whether the releaser is active (in the dictionary) or inactive (removed from the dictionary), but it will be tricky. The ConcurrentDictionary<K,V> class is great and straightforward when the values are immutable, becomes tricky when the values are mutable, and becomes extra tricky when the mutable values can be pooled and reused. I have implemented the medium-trickery case in this StackOverflow answer (it's about a ConcurrentDictionary with multiple values per key), but implementing the extra-trickery case would be at the limits of my mental abilities. 😃
Maybe I'm missing something but why do you think it is tricky?
I'm proposing changing:
[MethodImpl(MethodImplOptions.AggressiveInlining)]
internal bool TryIncrement()
{
    if (Monitor.TryEnter(this))
    {
        ++_referenceCount;
        Monitor.Exit(this);
        return true;
    }
    return false;
}
to:
[MethodImpl(MethodImplOptions.AggressiveInlining)]
internal bool TryIncrement()
{
    if (Monitor.TryEnter(this))
    {
        if (_referenceCount == 0) // race condition
        {
            Monitor.Exit(this);
            return false;
        }
        ++_referenceCount;
        Monitor.Exit(this);
        return true;
    }
    return false;
}
If it managed to get through Monitor.TryEnter, then it means that there are no threads concurrently decrementing or removing from the dictionary.
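A self-contained sketch of this fixed pattern (class and field names are assumed for illustration, not the library's actual types; a try/finally is used so the monitor is always released) might look like:

```csharp
using System;
using System.Threading;

// Illustrative sketch only. A reference count of 0 means the releaser was
// already released and removed from the dictionary, so TryIncrement must
// fail and the caller falls back to TryAdd with a fresh releaser.
public sealed class ReleaserSketch
{
    private int _referenceCount;

    public ReleaserSketch(int initialCount) => _referenceCount = initialCount;

    public bool TryIncrement()
    {
        if (Monitor.TryEnter(this))
        {
            try
            {
                if (_referenceCount == 0) // racing with a concurrent release
                    return false;
                ++_referenceCount;
                return true;
            }
            finally
            {
                Monitor.Exit(this); // released on every path
            }
        }
        return false;
    }
}
```

The try/finally variant keeps the enter/exit pairing obvious; the behavior is otherwise the same as the version above.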
OK managed to simulate the race condition and the fix works. Thanks for your input, @theodorzoulias!
Fixed and released in https://github.com/MarkCiliaVincenti/AsyncKeyedLock/releases/tag/6.0.3
Thanks Theodor!
I think that there is still a race condition, but a more subtle one. It is possible that when thread-B is resumed by the OS, the releaser has _referenceCount > 0, but is now used by another key. This implies that while thread-B was suspended, the releaser was released, stored in the pool, and then taken from the pool to serve a different key.
I see. This is incredibly unlikely to happen though. It needs to be suspended exactly after the TryGetValue and before the Monitor.TryEnter. Any fix I can think of would involve so much extra overhead (e.g. generating a Guid for every pooled releaser, or comparing the key) that using a pool could possibly end up harming performance. I will give it more thought.
Yep, this should be very unlikely to happen in practice. But for library-grade components, correctness is supposed to be a paramount quality. Sacrificing correctness for performance is not a path that many people would be willing to take!
Yes, of course, but I need to see how to fix it with minimal impact.
So it's either comparing the keys, which could be expensive depending on the type, or else keeping a key-use counter that is incremented by 1 whenever an object is taken from the pool, but this would mean more memory used. I would also need to check whether this value is int.MaxValue and, if so, reset it to 1 instead.
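The use-counter idea could be sketched roughly like this (all names are hypothetical; the real releaser would also carry the key and the semaphore). A caller captures the generation at TryGetValue time, and TryIncrement rejects the object if it has since been recycled through the pool:

```csharp
using System;
using System.Threading;

// Hypothetical sketch of the "key use counter" idea: every time a releaser
// leaves the pool its generation is bumped, so a caller that captured the
// generation before being suspended can detect that the object was recycled.
public sealed class GenerationReleaser
{
    private int _referenceCount = 1;
    private int _generation = 1;

    public int Generation => _generation;

    // Called (under the appropriate lock) when the object is taken
    // from the pool to serve a possibly different key.
    public void OnTakenFromPool()
    {
        // Reset to 1 instead of overflowing past int.MaxValue.
        _generation = _generation == int.MaxValue ? 1 : _generation + 1;
    }

    public bool TryIncrement(int expectedGeneration)
    {
        if (Monitor.TryEnter(this))
        {
            try
            {
                if (_referenceCount == 0 || _generation != expectedGeneration)
                    return false; // recycled since the caller's TryGetValue
                ++_referenceCount;
                return true;
            }
            finally
            {
                Monitor.Exit(this);
            }
        }
        return false;
    }
}
```

The extra int per releaser is the memory cost mentioned above; in exchange, no key comparison is needed on the hot path.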
You may want to benchmark your implementation and compare it with a simpler Dictionary<K,V>-based implementation, like Stephen Cleary's, to see if going with the ConcurrentDictionary<K,V> is worth the trouble.
Btw you can reduce allocations by returning a concrete struct, for example Task<Releaser> instead of Task<IDisposable>, from the Wait/WaitAsync methods. The IDisposable causes boxing, while the concrete struct does not. Returning ValueTask<Releaser> should be even better.
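As a rough sketch of this suggestion (type and method names are assumed for illustration, not AsyncKeyedLock's actual API), a struct releaser returned as ValueTask<Releaser> could look like:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical struct releaser: releasing the semaphore on Dispose.
public readonly struct Releaser : IDisposable
{
    private readonly SemaphoreSlim _semaphore;
    public Releaser(SemaphoreSlim semaphore) => _semaphore = semaphore;
    public void Dispose() => _semaphore.Release();
}

public sealed class KeyedLockSketch
{
    private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);

    // Returning ValueTask<Releaser> (a concrete struct) avoids the boxing
    // that Task<IDisposable> would cause, and avoids a Task allocation
    // when the semaphore is acquired synchronously.
    public async ValueTask<Releaser> LockAsync()
    {
        await _semaphore.WaitAsync().ConfigureAwait(false);
        return new Releaser(_semaphore);
    }
}
```

Usage is `using (await locker.LockAsync()) { ... }`; disposing a struct in a using statement does not box it, because the compiler calls Dispose through a constrained call.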
The ValueTask<Releaser> should minimize the allocations in case the semaphore can be acquired immediately/synchronously, in other words in case the SemaphoreSlim.WaitAsync returns a completed task. Otherwise, in case the semaphore cannot be acquired synchronously, the ValueTask<Releaser> offers no advantage. It is just overhead, and the allocations are the same. So the best choice depends on the expected usage of the component. Low contention: ValueTask<Releaser>. High contention: Task<Releaser>.
I meant using Releaser instead of IDisposable. I saw no advantage using the former.
I think I fixed all race conditions in 6.0.4-rc now (also in master). The performance hit though is very painful.
Regarding the IDisposable dilemma, check out this StackOverflow question: "If my struct implements IDisposable will it be boxed when used in a using statement?"
I am reviewing now the latest changes, to see if I can find any remaining race condition.
I think that I've found another one! I just noticed that the AsyncKeyedLockPool<TKey>.GetObject(TKey key) method changes the Key of the releaser without locking on the releaser object. So this is possible:
- When thread-B is resumed by the OS, the releaser has a non-zero _referenceCount and the correct _key, so the TryIncrement call succeeds.
- Another thread-C had previously called _pool.GetObject(key); for a different key, and was suspended by the OS just before the item.Key = key; command.
- Thread-C is resumed by the OS, changes the Key of the releaser, and uses the same SemaphoreSlim with thread-B for waiting on a different key.

This implies, as previously, that while thread-B was suspended, the releaser was released, stored in the pool, and then taken from the pool to serve a different key.
I don't think I understood yours, but I think it may be related to this:
Thread A goes through TryGetValue for key "ABC" and is suspended right before entering the monitor.
Thread B goes through release for the same key "ABC", obtains the monitor, removes from the dictionary and puts the object back into the pool, setting the ReferenceCounter to 1.
Thread A resumes, obtains the lock, checks that the referenceCount > 0 and that the key matches (and so far it does), and continues making use of an object that is no longer in the dictionary and is dangerously sitting in the pool.
These race conditions are getting tricky.
Regarding the IDisposable dilemma, check out this StackOverflow question: "If my struct implements IDisposable will it be boxed when used in a using statement?" I am reviewing now the latest changes, to see if I can find any remaining race condition.
I'm not sure what you're trying to tell me here. I tried changing the releaser from a struct to a class and using that instead of IDisposable. I ran heavy benchmarks, and any combination I tried resulted in inferior performance.
I don't think I understood yours but I think it may be related to this?
Thread A goes through TryGetValue for key "ABC" and is suspended right before entering the monitor. Thread B goes through release for the same key "ABC", obtains the monitor, removes from dictionary and puts the object back into the pool, setting the ReferenceCounter to 1. Thread A resumes, obtains the lock, checks that the referenceCount > 0, that the key matches (and so far it does) and continues making use of an object that is not only in the dictionary but is also dangerously in the pool.
These race conditions are getting tricky.
I fixed this, and I'm not sure if I also fixed the race condition you last mentioned because I cannot quite understand what you mean.
Using a class as a releaser invariably results in allocations. Using a struct as a releaser results in allocations only if the releaser is exposed as IDisposable. I don't know if you are aiming at reducing the allocations. In case you aren't, just ignore my IDisposable-related remarks. :-)
These race conditions are getting tricky.
Yes indeed! And to be honest I don't have a solution to offer. ConcurrentDictionary + mutable values + pooling = headache!
I had to implement a boolean on the releaser called IsPooled. While it's in the pool, or about to be written to the pool, it's set to true (inside a lock). Inside the lock in TryIncrement, it checks that IsPooled is false (it shouldn't be true there). When we're getting from the pool, we're setting IsPooled to false and setting the key on the releaser before adding to the dictionary.
I think there's now an extreme case left.
Thread A is active processing task "ABC"
Thread B is fired for task "ABC", does the TryGetValue and gets suspended
Thread A finishes processing, removes "ABC" from the dictionary and puts the releaser (still held by B) back into the pool
At this point in time, if Thread B resumes, it will find IsPooled == true, and all would be fine.
Scenario 1:
Thread C could start, also for key "ABC", doesn't find it in the dictionary and gets the same pooled object still held by B. Now IsPooled is set to false. If Thread B resumes, it enters TryIncrement and all's OK still, fortunately.
Scenario 2:
Thread C could also start, also for key "ABC", and gets a different object from the pool. Still no problem: if Thread B resumes, it enters TryIncrement, finds IsPooled == true, and that's sorted.
Scenario 3:
Thread C could also start, but for key "DEF", doesn't find it in the dictionary and gets the same pooled object still held by B. Here what happens is that Thread C will set IsPooled to false and change the key, so when Thread B resumes it goes inside TryIncrement and will find IsPooled == false. In order to fix this I need to put back the code where I also check that the key matches.
Is what you were telling me covered by these scenarios? Or were you talking about something else entirely?
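Combining the defenses discussed in this thread (reference count, IsPooled flag, key match) into one guard might look roughly like this, with all names hypothetical rather than the library's actual code:

```csharp
using System;
using System.Threading;

// Hypothetical sketch: TryIncrement succeeds only if the releaser is still
// referenced, is not sitting in (or headed to) the pool, and is still
// associated with the key the caller looked up in the dictionary.
public sealed class PooledReleaser
{
    private int _referenceCount = 1;
    public bool IsPooled;      // guarded by the monitor on this object
    public string Key = "";    // guarded by the monitor on this object

    public bool TryIncrement(string expectedKey)
    {
        if (Monitor.TryEnter(this))
        {
            try
            {
                if (_referenceCount == 0 || IsPooled || Key != expectedKey)
                    return false; // stale releaser: recycled or repurposed
                ++_referenceCount;
                return true;
            }
            finally
            {
                Monitor.Exit(this);
            }
        }
        return false;
    }
}
```

The key comparison covers Scenario 3 above, where the pooled object was handed out again for a different key before the suspended thread resumed.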
Using a class as a releaser invariably results in allocations. Using a struct as a releaser results in allocations only if the releaser is exposed as IDisposable. I don't know if you are aiming at reducing the allocations. In case you aren't, just ignore my IDisposable-related remarks. :-)
These race conditions are getting tricky.
Yes indeed! And to be honest I don't have a solution to offer. ConcurrentDictionary + mutable values + pooling = headache!
I'll benchmark again after ensuring correctness because I'm pretty sure I had different results, maybe I am missing something.
Sorry for not being very clear at describing the scenario. But it's not an easy scenario to describe. It causes me as many headaches to describe it as it causes you to decipher my description! After adding the IsPooled flag, that scenario is probably invalid anyway.
My contribution here was intended to alert you to the plethora of possibilities for subtle race conditions in your project, so that you can either:
- add enough defensive mechanisms that would make race conditions statistically improbable,
- figure out a solution whose correctness can be mathematically proven, or
- prove mathematically that race conditions are unavoidable with this design, and so another design should be chosen.