resource-disaggregation / jiffy
Virtual Memory Abstraction for Serverless Architectures
License: Apache License 2.0
macOS Catalina
Apple clang version 11.0.0 (clang-1100.0.33.16)
cmake .. -DBUILD_JAVA_CLIENT=OFF
make -j 64
[ 0%] Built target boost_ep
[ 1%] Built target zlib_ep
[ 2%] Built target libevent_ep
[ 3%] Built target openssl_ep
[ 3%] Building Python client
[ 6%] Built target curl_ep
[ 8%] Built target catch_ep
[ 9%] Built target jemalloc_ep
[ 18%] Built target awssdk_ep
[ 18%] Built target thrift_ep
running build
running build_py
[ 18%] Built target pyclient
[ 38%] Built target jiffy_client
[ 87%] Built target jiffy
[ 88%] Linking CXX executable directoryd
[ 88%] Linking CXX executable storaged
Undefined symbols for architecture x86_64:
"Aws::S3::S3Client::S3Client(Aws::Client::ClientConfiguration const&, Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy, bool)", referenced from:
void jiffy::persistent::s3_store_impl::write_impl<jiffy::storage::string_array>(jiffy::storage::string_array const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::write_impl<jiffy::storage::file_block>(jiffy::storage::file_block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::write_impl<std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > > >(std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::read_impl<jiffy::storage::string_array>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, jiffy::storage::string_array&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::read_impl<jiffy::storage::file_block>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, jiffy::storage::file_block&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::read_impl<std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > >&) in persistent_store.cpp.o
(maybe you meant: Aws::S3::S3Client::S3Client(Aws::Client::ClientConfiguration const&, Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy, bool, Aws::S3::US_EAST_1_REGIONAL_ENDPOINT_OPTION))
Undefined symbols for architecture x86_64:
"Aws::S3::S3Client::S3Client(Aws::Client::ClientConfiguration const&, Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy, bool)", referenced from:
void jiffy::persistent::s3_store_impl::write_impl<jiffy::storage::string_array>(jiffy::storage::string_array const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::write_impl<jiffy::storage::file_block>(jiffy::storage::file_block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::write_impl<std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > > >(std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::read_impl<jiffy::storage::string_array>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, jiffy::storage::string_array&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::read_impl<jiffy::storage::file_block>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, jiffy::storage::file_block&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::read_impl<std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > >&) in persistent_store.cpp.o
(maybe you meant: Aws::S3::S3Client::S3Client(Aws::Client::ClientConfiguration const&, Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy, bool, Aws::S3::US_EAST_1_REGIONAL_ENDPOINT_OPTION))
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [directory/directoryd] Error 1
make[1]: *** [directory/CMakeFiles/directoryd.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [storage/storaged] Error 1
make[1]: *** [storage/CMakeFiles/storaged.dir/all] Error 2
[ 88%] Linking CXX executable jiffy_tests
Undefined symbols for architecture x86_64:
"Aws::S3::S3Client::S3Client(Aws::Client::ClientConfiguration const&, Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy, bool)", referenced from:
void jiffy::persistent::s3_store_impl::write_impl<jiffy::storage::string_array>(jiffy::storage::string_array const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::write_impl<jiffy::storage::file_block>(jiffy::storage::file_block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::write_impl<std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > > >(std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::read_impl<jiffy::storage::string_array>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, jiffy::storage::string_array&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::read_impl<jiffy::storage::file_block>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, jiffy::storage::file_block&) in persistent_store.cpp.o
void jiffy::persistent::s3_store_impl::read_impl<std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::unordered_map<jiffy::storage::byte_string, jiffy::storage::byte_string, jiffy::storage::hash_type, jiffy::storage::equal_type, std::__1::allocator<std::__1::pair<jiffy::storage::byte_string const, jiffy::storage::byte_string> > >&) in persistent_store.cpp.o
(maybe you meant: Aws::S3::S3Client::S3Client(Aws::Client::ClientConfiguration const&, Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy, bool, Aws::S3::US_EAST_1_REGIONAL_ENDPOINT_OPTION))
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [libjiffy/jiffy_tests] Error 1
make[1]: *** [libjiffy/CMakeFiles/jiffy_tests.dir/all] Error 2
make: *** [all] Error 2
This looks like an error caused by a newer version of the AWS SDK for C++; downgrading to version 1.6.53 resolves the issue.
We want a trigger mechanism that operates at key-space granularity and, when triggered, launches a new CloudFunction task.
The current implementation assumes that users always know the blocks are sufficient for their job; there is no exception handler when the add_block function fails at the directory server.
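One way to surface this failure to callers is a thin wrapper that converts a failed allocation into a typed exception. This is a minimal sketch; `directory_client.add_block` and `BlockAllocationError` are hypothetical names, not jiffy's actual API.

```python
class BlockAllocationError(Exception):
    """Raised when the directory server cannot allocate more blocks."""

def add_block_checked(directory_client, path):
    # `directory_client.add_block` is a hypothetical stand-in for the
    # directory-server call; any failure surfaces as a typed error
    # instead of silently assuming blocks are always sufficient.
    try:
        return directory_client.add_block(path)
    except RuntimeError as e:
        raise BlockAllocationError(f"no free blocks for {path}") from e
```

Callers can then catch `BlockAllocationError` and back off or fail cleanly rather than proceeding with missing blocks.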
When running 'make test', it gets stuck at fifo_queue_auto_scale_test until timeout.
Run 'make test' and see where it gets stuck.
It gets stuck at fifo_queue_auto_scale_test until timeout.
1: 2020-03-25 14:58:27 INFO jiffy::auto_scaling::auto_scaling_service_handler::auto_scaling(...) ===== Fifo queue auto_scaling ======
1: 2020-03-25 14:58:27 INFO jiffy::auto_scaling::auto_scaling_service_handler::auto_scaling(...) Start 1585162707258148
1: 2020-03-25 14:58:27 INFO jiffy::auto_scaling::auto_scaling_service_handler::auto_scaling(...) Add_replica_chain: 1585162707259560
1: 2020-03-25 14:58:27 INFO jiffy::auto_scaling::auto_scaling_service_handler::auto_scaling(...) Update_partition: 1585162707259869
1: 2020-03-25 14:58:27 INFO jiffy::auto_scaling::auto_scaling_service_handler::auto_scaling(...) A 1585162707258148 1721 1412 309
//Stuck here.
Ubuntu 18.04
The current implementation is a naive loop over the un-batched commands. There are two challenges in designing the batch command logic: 1. We cannot redo the entire batch, so we need to track which individual commands need a redo. 2. We need to make sure the results sent back to the client follow the order of the client's requests.
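The two challenges above can be sketched as follows, assuming a hypothetical single-command executor `run_one` and a hypothetical `TransientError` that marks a command as eligible for redo (neither name comes from jiffy's codebase):

```python
class TransientError(Exception):
    """Illustrative failure that makes a single command eligible for redo."""

def run_batch(run_one, commands):
    # Challenge 1: track per-command state so only failed commands are
    # redone, never the whole batch.
    # Challenge 2: results are indexed by the command's original
    # position, so the response order matches the request order.
    results = [None] * len(commands)
    pending = list(enumerate(commands))
    while pending:
        still_failed = []
        for idx, cmd in pending:
            try:
                results[idx] = run_one(cmd)
            except TransientError:
                still_failed.append((idx, cmd))  # redo only this one
        pending = still_failed
    return results
```

A real implementation would also need a retry budget and a way to distinguish retryable from permanent failures.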
We want to replace the cuckoo hashmap, since its memory allocator does not satisfy the project's needs.
Running the tests requires a large number of open file descriptors.
The Linux default is 1024; better to set it to 65536 or larger.
Need to mention this in the docs.
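For reference, the limit can be checked and raised programmatically (equivalent to `ulimit -n 65536`); this snippet uses only the standard `resource` module and caps the request at the hard limit, which an unprivileged process cannot exceed:

```python
import resource

# Raise the soft open-file-descriptor limit toward 65536, capped by
# the hard limit (unprivileged processes cannot go beyond it).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = 65536 if hard == resource.RLIM_INFINITY else min(65536, hard)
if soft < target:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
print("open-fd soft limit:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

Raising the hard limit itself requires root (e.g. via /etc/security/limits.conf).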
The block server already exposes primitives to lock individual blocks and issue specific queries that can only be executed while the block is locked. The client libraries should be able to extend this to support:
Currently the Python key-value interface returns a special bytes
value if an error occurs, e.g.,
>>> kv.get("yyy")
b'!key_not_found'
>>> kv.put("yyy", b"!key_not_found")
b'!ok'
>>> kv.get("yyy")
b'!key_not_found'
With this error-reporting mechanism, the caller is more likely to miss an error (if they don't explicitly check the return value). Furthermore, the caller can't tell if a return value of b'!key_not_found'
is an error message or the actual value under the key.
It might be a good idea to raise exceptions instead.
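A minimal sketch of the exception-raising approach: a wrapper that translates the sentinel byte strings into exceptions. The sentinel table and names here are illustrative, and note that this wrapper inherits the same ambiguity described above unless the wire protocol marks errors out of band.

```python
class KVError(Exception):
    pass

class KeyNotFoundError(KVError):
    pass

# Hypothetical sentinel-to-exception table; b'!key_not_found' could
# still be a legitimate stored value, so a real fix needs the protocol
# to distinguish errors from data.
_ERROR_SENTINELS = {
    b'!key_not_found': KeyNotFoundError,
}

def checked(response):
    """Raise if the response is a known error sentinel, else pass it through."""
    if response in _ERROR_SENTINELS:
        raise _ERROR_SENTINELS[response](response.decode())
    return response
```

With this, `checked(kv.get("yyy"))` would raise `KeyNotFoundError` instead of returning a value the caller might mistake for data.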
Configuration options num_blocks, num_block_groups and num_servers are confusing and inconsistent.
Configuration options for auto_scaling_port, service_port, and management_port are inconsistent in configuration files and in code.
Don't have the permission to modify this.
The current Python client test only supports Python 2.7, not Python 3.6+.
Here is the error log I get when building it.
============================= test session starts ==============================
2: platform darwin -- Python 3.6.5, pytest-4.2.0, py-1.5.3, pluggy-0.8.1 -- /Users/YupengTANG/anaconda3/bin/python
2: cachedir: .pytest_cache
2: rootdir: /Users/YupengTANG/Documents/GitHub/jiffy-test-fix/jiffy/build/pyjiffy, inifile: setup.cfg
2: plugins: cov-2.6.1, remotedata-0.2.1, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
2: collecting ... collected 8 items
2:
2: test/test_client.py::TestClient::test_chain_replication ERROR [ 12%]
2: test/test_client.py::TestClient::test_close ERROR [ 25%]
2: test/test_client.py::TestClient::test_create ERROR [ 37%]
2: test/test_client.py::TestClient::test_failures ERROR [ 50%]
2: test/test_client.py::TestClient::test_lease_worker ERROR [ 62%]
2: test/test_client.py::TestClient::test_notifications ERROR [ 75%]
2: test/test_client.py::TestClient::test_open ERROR [ 87%]
2: test/test_client.py::TestClient::test_sync_remove ERROR [100%]
2:
2: ==================================== ERRORS ====================================
2: _____________ ERROR at setup of TestClient.test_chain_replication ______________
2:
2: item = <TestCaseFunction test_chain_replication>
2:
2: def pytest_runtest_setup(item):
2:
2: > remote_data = item.get_marker('remote_data')
2: E AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'
2:
2: /Users/YupengTANG/anaconda3/lib/python3.6/site-packages/pytest_remotedata/plugin.py:59: AttributeError
2: ___________________ ERROR at setup of TestClient.test_close ____________________
2:
2: item = <TestCaseFunction test_close>
2:
2: def pytest_runtest_setup(item):
2:
2: > remote_data = item.get_marker('remote_data')
2: E AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'
2:
2: /Users/YupengTANG/anaconda3/lib/python3.6/site-packages/pytest_remotedata/plugin.py:59: AttributeError
2: ___________________ ERROR at setup of TestClient.test_create ___________________
2:
2: item = <TestCaseFunction test_create>
2:
2: def pytest_runtest_setup(item):
2:
2: > remote_data = item.get_marker('remote_data')
2: E AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'
2:
2: /Users/YupengTANG/anaconda3/lib/python3.6/site-packages/pytest_remotedata/plugin.py:59: AttributeError
2: __________________ ERROR at setup of TestClient.test_failures __________________
2:
2: item = <TestCaseFunction test_failures>
2:
2: def pytest_runtest_setup(item):
2:
2: > remote_data = item.get_marker('remote_data')
2: E AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'
2:
2: /Users/YupengTANG/anaconda3/lib/python3.6/site-packages/pytest_remotedata/plugin.py:59: AttributeError
2: ________________ ERROR at setup of TestClient.test_lease_worker ________________
2:
2: item = <TestCaseFunction test_lease_worker>
2:
2: def pytest_runtest_setup(item):
2:
2: > remote_data = item.get_marker('remote_data')
2: E AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'
2:
2: /Users/YupengTANG/anaconda3/lib/python3.6/site-packages/pytest_remotedata/plugin.py:59: AttributeError
2: _______________ ERROR at setup of TestClient.test_notifications ________________
2:
2: item = <TestCaseFunction test_notifications>
2:
2: def pytest_runtest_setup(item):
2:
2: > remote_data = item.get_marker('remote_data')
2: E AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'
2:
2: /Users/YupengTANG/anaconda3/lib/python3.6/site-packages/pytest_remotedata/plugin.py:59: AttributeError
2: ____________________ ERROR at setup of TestClient.test_open ____________________
2:
2: item = <TestCaseFunction test_open>
2:
2: def pytest_runtest_setup(item):
2:
2: > remote_data = item.get_marker('remote_data')
2: E AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'
2:
2: /Users/YupengTANG/anaconda3/lib/python3.6/site-packages/pytest_remotedata/plugin.py:59: AttributeError
2: ________________ ERROR at setup of TestClient.test_sync_remove _________________
2:
2: item = <TestCaseFunction test_sync_remove>
2:
2: def pytest_runtest_setup(item):
2:
2: > remote_data = item.get_marker('remote_data')
2: E AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'
2:
2: /Users/YupengTANG/anaconda3/lib/python3.6/site-packages/pytest_remotedata/plugin.py:59: AttributeError
2: =========================== 8 error in 0.23 seconds ============================
2/3 Test #2: PythonClientTest .................***Failed 5.39 sec
Currently the system will break down when blocks are insufficient
Getting an OpenSSL error when building from scratch.
Found a similar problem on GitHub: starting with Mojave, the headers are no longer installed under /usr/include by default (see Command Line Tools -> New Features in the release notes).
Running open /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg
solves the problem.
Webpage: neovim/neovim#9050
There are also some errors for Boost, and adding a new define solves the problem. The error file is generated by CMake.
Found the solution here: https://github.com/jerrymarino/iCompleteMe/issues/7. I just added a #define BOOST_NO_AUTO_PTR to /memorymux/build/external/boost/include/boost/get_pointer.hpp and /memorymux/build/external/boost/include/boost/smart_ptr/shared_ptr.hpp. The likely reason is that this version of Boost is not compatible with the latest clang defaults; updating Boost can resolve the problem.
We want to store the B-Tree slot_begin and slot_end in its metadata, so we need to make it a map<std::string, std::string>
instead of a single string. The B-Tree range function does not support auto scaling; an implementation needs to be added for it.
We also want to add a script for users to generate the B-Tree automatically, as follows:
1. Currently there is separate code for generating B-Tree keys; a script should generate the keys automatically.
2. Add a reference to the repo of the alpha words.
We already have support for flushing data on lease-expiry; this can be easily extended to support persistence for in-memory data by periodically flushing data even when leases have not expired.
TODO:
Currently, failures during auto-scaling may result in the system reaching an inconsistent state. This is somewhat complicated due to the interaction of chain replication with auto-scaling, but with reasonable assumptions, should be achievable.
Document various methods, classes and packages for:
Jenkins is not working right now.
Currently, we are not actually using the default partition: whenever we destroy a partition and build a default partition on top of it, the client map is removed, so the default partition's "!block_moved" message could not reach the requesting client anyway. Instead, we send a failure message (seq -1) to end the connection.
This approach conflicts with fault tolerance, since we cannot differentiate an adversary sending a message with seq -1 from us deliberately ending the connection.
The latency for auto_scaling is too high; we need new protocols to make it work efficiently.
The directory file size tracker accesses memory that is not allocated. We need to figure out why and fix it.
Currently, batched commands of only one type (get, put, remove, update) are supported. This can easily be extended to support batching together commands of different types.
This will require support at the block server (to support such batched requests) and the client (to partition the query by block and forward the request to corresponding block servers).
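The client-side half can be sketched as follows: partition mixed-type commands by their target block while remembering each command's position, then stitch the per-block responses back into request order. `block_for_key` is a hypothetical stand-in for the client's key-to-block mapping.

```python
from collections import defaultdict

def partition_by_block(commands, block_for_key):
    # Group (op, key, ...) commands by target block, tagging each with
    # its original position so responses can be reordered later.
    per_block = defaultdict(list)
    for pos, cmd in enumerate(commands):
        per_block[block_for_key(cmd[1])].append((pos, cmd))
    return per_block

def reassemble(tagged_results, n):
    # Merge per-block (position, result) pairs back into request order.
    out = [None] * n
    for pos, res in tagged_results:
        out[pos] = res
    return out
```

The block-server half would then accept one heterogeneous batch per block and return results tagged with these positions.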
1. Currently there is separate code for generating B-Tree keys; a script should generate the keys automatically.
2. Add a reference to the repo of the alpha words.
Better fault tolerance for: (1) auto-scaling; (2) error catching in the replica chain (the operation error in hash_table_partition::run_command).
The current C client wraps the C++ client implementation, which leads to unwanted overhead. Extending the native C Thrift interface would be more efficient.
1. It is not clear what the API for seek should look like: should it be seek(position, size), or seek(position), which only reads out a single string? If it is seek(position, size), it would be hard to handle the case where size exceeds 128MB.
2. It is not clear how the readnext pointer interacts with the dequeue pointer. If we readnext and then dequeue, should the dequeue start from the original dequeue point? And if we dequeue and then readnext, should we start from the new dequeue point?
In the directory server, there are synchronization issues when one thread tries to delete a data block while another thread tries to read the data blocks (a vector).
Whenever a chain is deleted during hash table merging, we want to end all connected clients via sub_map and client_map. For client_map this is implemented by sending sequence number -1; similar logic needs to be added for sub_map.
Currently, fault detection is not implemented, although fault recovery is. Adding support for the former should be straightforward at the directory server:
File seek only changes the read position; writes still append at the end.
File and fifo_queue strings should not be bigger than 6.8MB (with a 128MB block). When the client wants to reach the newly allocated block, it won't be ready, because auto_scaling hasn't been triggered yet.
If a string is bigger than 128MB (the block size), we won't be able to read it all, since reads currently only support crossing one block.
We need to investigate whether data content containing "!" would affect the code and the metadata used by the clients.
The current auto_scaling test is too simple.
(Provide details regarding your OS type and version.)
Ubuntu 18.04.5
(Provide details regarding the type and version of your C/C++ compiler, Java compiler (only for Java client build issues) and Python interpreter (only for Python client build issues).)
g++ (Ubuntu 8.4.0-1ubuntu1~18.04) 8.4.0
cmake version 3.20.3
(Provide details regarding the CMake arguments used for Confluo's build.)
Simply followed your quick start; no extra arguments.
(Provide the verbatim logs for CMake and make commands on a fresh build. To obtain a fresh build, remove the contents of the out-of-source build directory before running cmake and make.)
cmake ..
CMake Warning (dev) at /usr/local/share/cmake-3.20/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to find_package_handle_standard_args
(THRIFT)
does not match the name of the calling package (Thrift). This can lead to
problems in calling code that expects find_package
result variables
(e.g., _FOUND
) to follow a certain pattern.
Call Stack (most recent call first):
cmake-modules/FindThrift.cmake:91 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
cmake-modules/ThriftExternal.cmake:18 (find_package)
cmake-modules/Dependencies.cmake:15 (include)
CMakeLists.txt:25 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /usr/local/share/cmake-3.20/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to find_package_handle_standard_args
(NUMA) does
not match the name of the calling package (Numa). This can lead to
problems in calling code that expects find_package
result variables
(e.g., _FOUND
) to follow a certain pattern.
Call Stack (most recent call first):
cmake-modules/FindNuma.cmake:43 (find_package_handle_standard_args)
cmake-modules/Dependencies.cmake:33 (find_package)
CMakeLists.txt:25 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /usr/local/share/cmake-3.20/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to find_package_handle_standard_args
(memkind)
does not match the name of the calling package (Memkind). This can lead to
problems in calling code that expects find_package
result variables
(e.g., _FOUND
) to follow a certain pattern.
Call Stack (most recent call first):
cmake-modules/FindMemkind.cmake:16 (find_package_handle_standard_args)
cmake-modules/MemkindExternal.cmake:18 (find_package)
cmake-modules/Dependencies.cmake:34 (include)
CMakeLists.txt:25 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Error at libjiffy/CMakeLists.txt:285 (add_dependencies):
The dependency target "boost_ep" of target "jiffy_tests" does not exist.
CMake Error at directory/CMakeLists.txt:10 (add_dependencies):
The dependency target "boost_ep" of target "directoryd" does not exist.
CMake Error at storage/CMakeLists.txt:12 (add_dependencies):
The dependency target "boost_ep" of target "storaged" does not exist.
CMake Generate step failed. Build files cannot be regenerated correctly.
When a namespace is pinned to memory, it is no longer managed via leases and cannot be evicted. This can be achieved by simply setting the lease duration to infinity (or UINT64_MAX).
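The idea is small enough to model directly: an infinite lease value marks the namespace as never evictable. `lease_durations` here is a hypothetical map of namespace to lease duration, used only to illustrate the scheme.

```python
PINNED = 2**64 - 1  # UINT64_MAX: an effectively infinite lease

def pin_namespace(lease_durations, namespace):
    # Pin by assigning an infinite lease; the eviction path below then
    # skips the namespace. `lease_durations` is an illustrative map,
    # not jiffy's actual lease table.
    lease_durations[namespace] = PINNED

def is_evictable(lease_durations, namespace):
    return lease_durations.get(namespace, 0) != PINNED
```

The appeal of this approach is that pinning needs no new code path: the existing lease-expiry machinery simply never fires for a pinned namespace.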
The fifo queue read-next pointer is currently independent of the queue head pointer; the queue head pointer should update the read-next pointer whenever a message gets dequeued.
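One interpretation of the proposed coupling, sketched as a toy in-memory queue: dequeue advances the head and drags the read-next pointer forward so it never points before the head. This is an illustration of the invariant, not jiffy's fifo_queue implementation.

```python
class FifoQueue:
    """Toy model: dequeue keeps read_next >= head (hypothetical semantics)."""

    def __init__(self):
        self.items, self.head, self.read_next = [], 0, 0

    def enqueue(self, x):
        self.items.append(x)

    def dequeue(self):
        x = self.items[self.head]
        self.head += 1
        # Proposed coupling: dequeuing drags read_next forward so it
        # can never point at an already-dequeued message.
        self.read_next = max(self.read_next, self.head)
        return x

    def readnext(self):
        x = self.items[self.read_next]
        self.read_next += 1
        return x
```

Under this model the second open question above resolves naturally: after a dequeue, readnext resumes from whichever pointer is further ahead.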
macOS Catalina
Apple clang version 11.0.3
cmake .. -DBUILD_JAVA_CLIENT=OFF
make
In file included from src/jemalloc_cpp.cpp:9:
In file included from include/jemalloc/internal/jemalloc_preamble.h:21:
include/jemalloc/internal/../jemalloc.h:215:28: error: exception specification in declaration does not match previous declaration
void JEMALLOC_NOTHROW *je_malloc(size_t size)
^
include/jemalloc/internal/../jemalloc.h:66:21: note: expanded from macro 'je_malloc'
# define je_malloc malloc
^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/malloc/_malloc.h:40:7: note: previous declaration is here
void *malloc(size_t __size) __result_use_check __alloc_size(1);
^
Worked around by switching to gcc.