Comments (15)
It should be fine since a specific task will run on one thread; however, keep in mind that any continuations added using .then may run on a different thread.
from asyncplusplus.
Hmm, I just found that external linkage of thread_id always results in 0.
namespace async::detail {
    extern __thread std::size_t thread_id;
}
I modified the definition to
LIBASYNC_EXPORT THREAD_LOCAL std::size_t thread_id;
What else did I miss?
Have you tried using std::this_thread::get_id() instead?
Hmm, std::this_thread::get_id() doesn't index from 0, and I don't know if it's even deterministic. Does default_scheduler guarantee unique ids?
Here is what I'm trying to do.
...
std::vector<std::vector<int>> labels;
async::parallel_for(async::irange(0, N), [&](size_t i) {
    ...
    labels[async::detail::thread_id].push_back(i);
});
You could try using std::unordered_map<std::thread::id, std::vector<int>>. This is the "correct" way to do it since it doesn't rely on internal implementation details of the scheduler.
Using the internal thread_id is actually a bad idea since it is only defined for threads in the thread pool. However, parallel_for will run a portion of the work directly in the calling thread, which doesn't have a thread_id (it just gets the default value of 0 since it isn't initialized). This means that your code has a race condition since two threads will have the same thread_id value: one inside the thread pool and one outside.
Oh, I see! Hmm, so how can I cache the thread id to avoid the std::this_thread::get_id() cost?
get_id is a very cheap function to call; it's only 2-3 instructions. If you're really worried about performance, you can cache the value locally by adding this line in your loop:
static __thread std::thread::id thread_id = std::this_thread::get_id();
But I don't think you'll gain much, if anything, from it in terms of performance, so I wouldn't bother.
Thanks! I'll do some profiling :)
btw, it seems I should write it like this:
static thread_local std::thread::id thread_id = std::this_thread::get_id();
Oops, std::unordered_map isn't thread-safe either, so this doesn't work.
After looking at this for a bit, I think the "proper" way to solve your problem is to use async::parallel_map_reduce:
struct Labels {
    std::vector<int> data;  // labels data
};

Labels initial_labels;  // empty vector
Labels r = async::parallel_map_reduce(input_data, initial_labels,
    [](size_t i) -> Labels {
        Labels l;
        l.data.push_back(static_cast<int>(i));  // Labels containing just i
        return l;
    },
    [](Labels x, Labels y) -> Labels {
        // Combine labels by concatenating them
        x.data.insert(x.data.end(), y.data.begin(), y.data.end());
        return x;
    });
Well, I used to do exactly the same thing. But the reduce process incurs too many copies, and it happens too early. I want to combine them in another specific thread :)
I'm afraid that I don't have a good solution for you. What you probably want is something like combinable, but Async++ doesn't have this functionality.
Yeah, I think this is a bad idea after all :)