
speedb-io / speedb

A RocksDB-compliant, high-performance, scalable, embedded key-value store

Home Page: https://www.speedb.io/

License: Apache License 2.0

PowerShell 0.06% CMake 0.41% Makefile 0.70% Python 1.76% Shell 0.93% C++ 84.16% C 2.66% Java 9.24% Dockerfile 0.01% Assembly 0.05% Perl 0.01%
embedded kafka-streams key-value-store kvs performance rocksdb scale speedb storage-engine

speedb's People

Contributors

adamretter, agiardullo, ajkr, akankshamahajan15, cbi42, dhruba, emayanke, fyrz, haoboxu, hx235, igorcanadi, islamabdelrahman, jay-zhuang, lightmark, liukai, ltamasi, maysamyabandeh, mdcallag, miasantreble, mrambacher, pdillinger, riversand963, rven1, sagar0, siying, udi-speedb, yhchiang, yuslepukhin, yuval-ariel, zhichao-cao


speedb's Issues

Revert the offsetof() changes to customizable_test

As part of #7 (and later #27 as well) I made some changes to customizable_test because it was failing to build using Clang 12 and 13 with the following error:

options/customizable_test.cc:230:7: error: offset of on non-standard-layout type 'struct SimpleOptions' [-Werror,-Winvalid-offsetof]
     {offsetof(struct SimpleOptions, b), OptionType::kBoolean,
      ^                              ~
/usr/lib/clang/12.0.1/include/stddef.h:104:24: note: expanded from macro 'offsetof'
#define offsetof(t, d) __builtin_offsetof(t, d)
                       ^                     ~
options/customizable_test.cc:233:20: error: offset of on non-standard-layout type 'struct SimpleOptions' [-Werror,-Winvalid-offsetof]
                   offsetof(struct SimpleOptions, cu),
                   ^                              ~~
/usr/lib/clang/12.0.1/include/stddef.h:104:24: note: expanded from macro 'offsetof'
#define offsetof(t, d) __builtin_offsetof(t, d)
                       ^                     ~
options/customizable_test.cc:236:20: error: offset of on non-standard-layout type 'struct SimpleOptions' [-Werror,-Winvalid-offsetof]
                   offsetof(struct SimpleOptions, cs),
                   ^                              ~~
/usr/lib/clang/12.0.1/include/stddef.h:104:24: note: expanded from macro 'offsetof'
#define offsetof(t, d) __builtin_offsetof(t, d)
                       ^                     ~
options/customizable_test.cc:239:21: error: offset of on non-standard-layout type 'struct SimpleOptions' [-Werror,-Winvalid-offsetof]
                    offsetof(struct SimpleOptions, cp),
                    ^                              ~~
/usr/lib/clang/12.0.1/include/stddef.h:104:24: note: expanded from macro 'offsetof'
#define offsetof(t, d) __builtin_offsetof(t, d)
                       ^                     ~
4 errors generated.

I also opened facebook/rocksdb#8525 to track the issue on RocksDB's side.

Updating to Clang 14 fixed the issue on my system, which suggests that this was likely a compiler bug that has since been fixed, so let's revert the changes back to the original version.
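
For context, a minimal sketch of what those Clang versions rejected (a hypothetical struct, not the actual test code): offsetof is only conditionally supported on non-standard-layout types, and Clang 12/13 diagnosed this under -Winvalid-offsetof, which -Werror turns into an error.

#include <cstddef>

struct NonStdLayout {
 public:
  int a;
 private:
  int b_;  // mixed access control makes the type non-standard-layout
 public:
  int b() const { return b_; }
};

// Clang 12/13: error under -Werror,-Winvalid-offsetof; Clang 14 accepts it.
constexpr std::size_t kOffA = offsetof(NonStdLayout, a);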

Speedb Namespace

Namespace requirements:

  1. New APIs will go in the include/speedb directory, in the SPEEDB_NAMESPACE.
  • User impact: No impact until they use the new API, and then they will have to follow its semantics.
  2. The existing RocksDB code will not change (it stays in include/rocksdb and still says ROCKSDB_NAMESPACE).
  • No user impact.
  3. The ROCKSDB_NAMESPACE will default to the SPEEDB_NAMESPACE, but can be set to RocksDB (or anything else) in compatibility mode (see the sketch after this list).
  • User impact: If their code uses the ROCKSDB_NAMESPACE macro, no impact. If it hard-codes "rocksdb", they will need to rebuild, overriding the ROCKSDB namespace macro.
  • Testing: Need to test when ROCKSDB != SPEEDB, and maybe when SPEEDB != "speedb" and when SPEEDB == "rocksdb".
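
A minimal sketch of how the macro defaulting could be wired (a hypothetical header, not the actual build configuration):

// speedb/namespace.h (sketch)
#ifndef SPEEDB_NAMESPACE
#define SPEEDB_NAMESPACE speedb
#endif

// Unless the builder overrides it for compatibility mode (e.g. by passing
// -DROCKSDB_NAMESPACE=rocksdb), the RocksDB namespace aliases Speedb's.
#ifndef ROCKSDB_NAMESPACE
#define ROCKSDB_NAMESPACE SPEEDB_NAMESPACE
#endif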

spdb memtable

Why:
Improve write performance (for both single and multiple threads)
Allow parallelism when working with the memtable

What:
Inserting data into the memtable without sorting it first eliminates the locking constraints of the write process.
Since the data is inserted unsorted, a method for fast reads is needed: generate 1M buckets and add each key to one of them based on its hash, so that statistically there will be about one key per bucket.

Who:
Write-heavy and read-while-write oriented workloads.
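
A minimal sketch of the bucketed idea (illustrative only; HashBucketMemtable and all details below are hypothetical, not the actual Speedb implementation):

#include <forward_list>
#include <functional>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch (not the actual Speedb code): keys are appended
// unsorted to one of N buckets chosen by hash, so concurrent writers only
// contend on a per-bucket lock instead of on a global sorted structure.
class HashBucketMemtable {
 public:
  explicit HashBucketMemtable(size_t num_buckets = 1 << 20)  // ~1M buckets
      : buckets_(num_buckets) {}

  void Put(const std::string& key, const std::string& value) {
    Bucket& b = BucketFor(key);
    std::lock_guard<std::mutex> guard(b.mu);
    b.entries.emplace_front(key, value);  // no sorting on the write path
  }

  bool Get(const std::string& key, std::string* value) {
    Bucket& b = BucketFor(key);
    std::lock_guard<std::mutex> guard(b.mu);
    // Statistically each bucket holds about one key, so this scan is short.
    for (const auto& e : b.entries) {
      if (e.first == key) { *value = e.second; return true; }
    }
    return false;
  }

 private:
  struct Bucket {
    std::mutex mu;
    std::forward_list<std::pair<std::string, std::string>> entries;
  };
  Bucket& BucketFor(const std::string& key) {
    return buckets_[std::hash<std::string>{}(key) % buckets_.size()];
  }
  std::vector<Bucket> buckets_;
};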

spdb write flow

Why:
Improve performance of multithreaded writes

What:
Redesign the write flow to spend less time on locks, mainly in a multithreaded environment.
The write flow feature improves the overall performance by:

  • Reducing locks during writes
  • Making WAL usage more efficient by consolidating WAL writes
  • Improving memtable switch latency

Who:
Multithreaded write environments

Speedb Artifacts

  • Artifacts from a Speedb build (JARs, libraries, etc.) will include the Speedb name (e.g. libspeedb.a)
  • Installation packages (CMake packages, Maven artifacts, apt/yum installers, etc.) will include "Speedb"
  • The usage/version output of the library and executables will refer to Speedb

Clean memory manager

Why:

  • Improve the index, filter, and data caching method while keeping the best performance
  • Dynamically switch between pinning and LRU
  • Enjoy pinning performance while possible, and avoid OOM failures by switching to LRU when needed
  • Report usage to the memory quota enforcer

What:
Hold as much metadata as possible in the cache by dynamically switching between pinned and LRU, while prioritizing upper LSM levels.
Create a read-intensity grade for each CF, based on monitoring of memory consumption. If there is not enough space to pin the full index/filter data, move lower levels and the least read-intensive CFs into the LRU, and evict them from the bottom up.
Once a threshold is reached, evict pinned data to the LRU, starting with the least read-intensive CFs and the lower levels.
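
A rough sketch of the prioritization idea (hypothetical types and ordering, not the actual design):

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch: each pinned block is graded by how read-intensive its
// CF is and how high it sits in the LSM tree; under memory pressure, the
// lowest-graded blocks are demoted from "pinned" to LRU first.
struct PinnedBlock {
  uint64_t cf_read_intensity;  // higher = CF is read more often
  int lsm_level;               // 0 = top; deeper levels are demoted first
  size_t bytes;
};

// Returns the blocks to demote, lowest grade first, until bytes_to_free
// is covered.
std::vector<PinnedBlock> PickBlocksToDemote(std::vector<PinnedBlock> pinned,
                                            size_t bytes_to_free) {
  std::sort(pinned.begin(), pinned.end(),
            [](const PinnedBlock& a, const PinnedBlock& b) {
              // Deeper levels and colder CFs go first.
              if (a.lsm_level != b.lsm_level) return a.lsm_level > b.lsm_level;
              return a.cf_read_intensity < b.cf_read_intensity;
            });
  std::vector<PinnedBlock> demote;
  size_t freed = 0;
  for (const auto& blk : pinned) {
    if (freed >= bytes_to_free) break;
    demote.push_back(blk);
    freed += blk.bytes;
  }
  return demote;
}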

Investigate flaky unit tests

At the moment, there are several unit tests failing in 'main'.
4 of them are mentioned here:
#9 (comment)
Since those 4 are also failing in RocksDB 7.2.2, we should decide whether to fix them or remove them.
EnvTest.GenerateRawUniqueIdTrackEnvDetailsOnly
EventListenerTest.OnBlobFileOperationTest
EventListenerTest.OnFileOperationTest
EventListenerTest.ReadManifestAndWALOnRecovery

An additional one:
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from PerfContextTest
[ RUN      ] PerfContextTest.DBMutexLockCounter
db/perf_context_test.cc:611: Failure
Expected: (get_perf_context()->db_mutex_lock_nanos) > (0), actual: 0 vs 0
terminate called after throwing an instance of 'testing::internal::GoogleTestFailureException'
  what():  db/perf_context_test.cc:611: Failure
Expected: (get_perf_context()->db_mutex_lock_nanos) > (0), actual: 0 vs 0
Aborted (core dumped)

As long as those tests keep failing, we will not be able to make the automation process work as expected.

DB Bench to report accurate response time when using rate limit

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

dynamic memtable size and flush speed

Why:
Improve write performance by actively changing the memtable size, flush schedules, and speed of writes.

What:
Introduce a new concept: memtable-to-L0 flush speed, in MB/s.
Introduce a decision algorithm based on:
Current disk IO vs. disk capabilities - system wide (% of possible)
Current memory utilization vs. quota - system wide and CF specific (% of dirty allocated vs. taken)
CF-specific usage per allocation
Current user write rate
In the algorithm, the user write rate should be the goal, and flushes / memtable sizes / flush speed should be determined by the utilization.
The algorithm should take into consideration under-utilized CFs and flushes that can be slowed down.
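
A rough sketch of what such a decision function might weigh (the names, weights, and thresholds below are hypothetical, not the actual algorithm):

#include <algorithm>

// Hypothetical sketch: pick a target memtable-to-L0 flush speed (MB/s) from
// system-wide utilization signals, with the user's write rate as the goal.
double TargetFlushSpeedMBps(double user_write_rate_mbps,
                            double disk_io_utilization,     // 0.0 - 1.0
                            double memory_quota_utilization /* 0.0 - 1.0 */) {
  // Baseline: keep up with what the user is writing.
  double target = user_write_rate_mbps;

  // Under memory pressure, flush faster than the incoming rate to drain
  // dirty memory before the quota is hit.
  if (memory_quota_utilization > 0.8) {
    target *= 1.0 + (memory_quota_utilization - 0.8) * 5.0;  // up to 2x
  }

  // If the disk is already saturated, cap the flush speed so it doesn't
  // starve user I/O; under-utilized CFs' flushes would be slowed down first.
  double disk_headroom = std::max(0.0, 1.0 - disk_io_utilization);
  return std::min(target, user_write_rate_mbps * (1.0 + disk_headroom));
}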

Testing:
Heavy (maximum) writes to one CF while writing 3 MB/s to 3 other CFs
Verify full-system performance is better than RocksDB
Verify heavy-CF performance is better than RocksDB
Verify small-CF performance does not degrade over time

OSS: delaying writes in a more granular way

Objectives

  • Slow down writes gradually to prevent stalls
  • Stable performance over time

Product Requirements

Existing

The delayed write option in RocksDB is used to slow down writes when flush or compaction can’t keep up with the incoming write rate. This mechanism ensures the write rate stays aligned with the HW capabilities. It monitors three parameters to identify stalls (the number of unflushed memtables, the number of files in L0, and the urgency level of pending flushes). If one of these thresholds is exceeded, delayed write limits the write rate to the constant delayed_write_rate value.

New

When delayed write is enabled and active, the write rate should be decreased gradually to moderate the write stalls. The mechanism currently does not take memory limitations into consideration and does not stop writes when a memory limit is reached. This might cause an OOM error, which can be fixed with a memory monitoring feature.
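
A toy sketch of "gradual" rather than constant-rate delaying (the ramp shape and trigger names are hypothetical):

#include <cstdint>

// Hypothetical sketch: instead of clamping to a constant delayed_write_rate
// the moment a threshold is crossed, scale the allowed rate down smoothly as
// the pending work (e.g. L0 files) grows past its slowdown trigger.
uint64_t GradualDelayedRate(uint64_t full_rate_bytes_per_sec,
                            int l0_files, int l0_slowdown_trigger,
                            int l0_stop_trigger) {
  if (l0_files <= l0_slowdown_trigger) return full_rate_bytes_per_sec;
  if (l0_files >= l0_stop_trigger) return 0;  // hard stop
  // Linear ramp between the slowdown and stop triggers.
  double headroom = double(l0_stop_trigger - l0_files) /
                    double(l0_stop_trigger - l0_slowdown_trigger);
  return static_cast<uint64_t>(full_rate_bytes_per_sec * headroom);
}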

Tests

  • High write workload vs. RocksDB
  • See decreased performance instead of stalls
  • We can test it on HDD with smaller tests
  • Look at the memory consumption during the test

Tracked Tasks

Smaller flushes

Why:
Stabilizing write performance.
A slight increase in write performance due to the stabilization.

What:
Reduce the number of immutable memtables per flush.
Flushes will happen in smaller steps that finish faster, allowing a proper write delay configuration to maintain more stable performance.

Who:
Write-intensive workloads that suffer from jittery write delays.

Hierarchical map

Why: Improve seek and reduce CPU usage.
When there are many levels and many SST files in each level, the seek operation is slow. This is relevant when hybrid compaction is enabled.

What: Seek only in the relevant SSTs that fall within the seek criteria range, and avoid opening SSTs that are not in the required range.

Who: Hybrid compaction, when too many levels and too many SST files/holes exist

Technical details:
The Hmap is a linked list of overlapping ranges between levels. The linked list should be updated after every completed flush/compaction.
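
A minimal sketch of what such a structure could look like (hypothetical field names; the real design may differ):

#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch: each node records a key range and which levels/SSTs
// overlap it, so a seek can jump straight to the SSTs whose range covers the
// target key instead of probing every level. Rebuilt/patched after each
// flush/compaction.
struct HmapNode {
  std::string range_start;
  std::string range_end;
  std::vector<int> overlapping_levels;     // levels with data in this range
  std::vector<uint64_t> sst_file_numbers;  // SSTs to consult for this range
  std::unique_ptr<HmapNode> next;          // sorted by range_start
};

// Walk the list to the first node whose range may contain `key`; only the
// SSTs listed there need to be opened.
const HmapNode* FindRange(const HmapNode* head, const std::string& key) {
  for (const HmapNode* n = head; n != nullptr; n = n->next.get()) {
    if (key >= n->range_start && key < n->range_end) return n;
  }
  return nullptr;
}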

Cost:

  • Memory for the linked list. Every pointer is 8 bytes.
  • Efficiency: building and updating the linked list takes system resources.

Note: This feature should work even when the ORC has not completed (hence, there are still many levels, holes, and SST files).
We need to finalize the criteria for when to start/stop the Hmap.

Snapshot optimization

Why:
Improve performance during read-heavy snapshot workloads

What:
In a read-only workload, when taking a snapshot for a read, don’t create a new snapshot if nothing has changed; reuse the last one.
In a read-only workload, don’t take the mutex when the last snapshot is still valid.
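
A sketch of the reuse idea (hypothetical names; the real change would live inside the DB's snapshot-taking path):

#include <atomic>
#include <cstdint>
#include <memory>
#include <mutex>

// Hypothetical sketch: cache the last snapshot together with the sequence
// number it was taken at. If the DB's sequence number hasn't advanced, the
// cached snapshot is still an accurate view, so it can be handed back
// without taking the DB mutex or allocating a new snapshot object.
struct CachedSnapshot {
  uint64_t seq = 0;
};

class SnapshotCache {
 public:
  std::shared_ptr<CachedSnapshot> Get(std::atomic<uint64_t>& db_seq) {
    uint64_t current = db_seq.load(std::memory_order_acquire);
    std::lock_guard<std::mutex> guard(mu_);  // local lock, not the DB mutex
    if (last_ && last_->seq == current) {
      return last_;  // unchanged since the last snapshot: reuse it
    }
    auto snap = std::make_shared<CachedSnapshot>();
    snap->seq = current;
    last_ = snap;
    return last_;
  }

 private:
  std::mutex mu_;
  std::shared_ptr<CachedSnapshot> last_;
};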

Smaller compaction

Why:
Stabilizing write performance.
A slight increase in write performance due to the stabilization.

What:
Always take at most 50% (or less) of the L0 files to be compacted with L1.
Compaction steps will be smaller and finish faster, freeing up both L0 and L1 for flush/compaction.

Who:
Write-intensive workloads that suffer from jittery performance due to L0 flushes.

PBS - Probabilistic search implementation in speedb

Why:
Reduce CPU usage and improve the performance of read and seek operations, especially when data resides in memory.

What:
Accelerate the index search by using a more sophisticated searching algorithm than binary search.
By utilizing a probabilistic search algorithm (to be explained in a separate scientific paper), save on search time and CPU cycles (perform fewer probes than binary search does).
In order to select between binary search and probabilistic search, another step needs to be added to SST creation, so that no time is wasted “trying” the right search algorithm.
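
The paper's algorithm isn't described here, but interpolation search is a well-known member of this family and serves as an illustrative stand-in: it guesses a position from the key distribution instead of always splitting in half, giving O(log log n) expected probes on uniformly distributed keys. A minimal sketch:

#include <cstdint>
#include <vector>

// Interpolation search over sorted integer keys: estimate where the target
// should sit based on its value relative to the range endpoints. This is an
// illustrative example, not the algorithm from the (unpublished) paper.
int InterpolationSearch(const std::vector<uint64_t>& keys, uint64_t target) {
  if (keys.empty()) return -1;
  size_t lo = 0, hi = keys.size() - 1;
  while (lo <= hi && target >= keys[lo] && target <= keys[hi]) {
    if (keys[hi] == keys[lo]) break;  // avoid division by zero
    // Linear estimate of the target's position within [lo, hi].
    size_t pos = lo + static_cast<size_t>(
        (double)(target - keys[lo]) / (keys[hi] - keys[lo]) * (hi - lo));
    if (keys[pos] == target) return static_cast<int>(pos);
    if (keys[pos] < target) {
      lo = pos + 1;
    } else {
      if (pos == 0) break;
      hi = pos - 1;
    }
  }
  return (lo < keys.size() && keys[lo] == target) ? static_cast<int>(lo) : -1;
}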

Update README description

The README file should include the following sections:

  • What is Speedb - Project description
  • Base performance benchmarks
  • Installation
  • How to use
  • How to contribute
  • License

Avoid flooding disk I/O on file purges

With the changes in #5, when we need to purge a large number of files, large files whose blocks are not allocated contiguously, or files on bitmap-based filesystems (even when each file has a contiguous range of blocks), we may incur a high load on the disk when purging obsolete files, which would eat into the bandwidth available to user operations and compactions.

Change the implementation of the default Env's DeleteFile() function to delete by truncating in modestly sized chunks (preliminary tests showed that around 500MiB might be a good place to start and tune from there), allowing the disk to recover in between.
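
A POSIX sketch of chunked deletion (the chunk size and sleep interval are placeholders to tune, per the preliminary tests; this is not the actual Env change):

#include <sys/stat.h>
#include <unistd.h>

#include <algorithm>
#include <chrono>
#include <thread>

// Hypothetical sketch: shrink the file from the end in fixed-size chunks,
// pausing between truncations so the filesystem's block-freeing work doesn't
// monopolize disk bandwidth, then unlink the (now empty) file.
bool DeleteFileInChunks(const char* path, off_t chunk_bytes = 500 << 20) {
  struct stat st;
  if (stat(path, &st) != 0) return false;
  for (off_t size = st.st_size; size > 0;) {
    size = std::max<off_t>(0, size - chunk_bytes);
    if (truncate(path, size) != 0) return false;
    // Let the disk recover before freeing the next chunk of blocks.
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
  return unlink(path) == 0;
}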

This needs to have extensive performance tests in conjunction with #5.

Memory quota enforcement

Why:
Enforce the soft and hard limits on memory, and issue the relevant stalls/delays/evictions based on the other memory management capabilities, to make sure we don't go OOM (Out Of Memory).

What:
Respect and hard-enforce the memory quota assignment (cache size).
Accept 2 parameters, soft and hard:

  • Soft - as today
  • Hard - enforce

Use the clean and dirty memory management features to report into the quota enforcer.
Use the clean and dirty memory to manage and stay within the quota.
Initiate flushes / dynamic pinning to stay within the quota.
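
A sketch of how the two limits could drive different reactions (hypothetical interface; the actual enforcer design is open):

#include <cstddef>

// Hypothetical sketch: below the soft limit nothing happens; between the
// soft and hard limits the enforcer reacts proactively (flushes, delays,
// demoting pinned blocks to LRU); at the hard limit it stalls writes.
enum class QuotaAction { kNone, kFlushAndDelay, kStallWrites };

QuotaAction EnforceQuota(size_t used_bytes, size_t soft_limit,
                         size_t hard_limit) {
  if (used_bytes >= hard_limit) return QuotaAction::kStallWrites;
  if (used_bytes >= soft_limit) return QuotaAction::kFlushAndDelay;
  return QuotaAction::kNone;
}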

Port db_stress changes from Speedb to the OSS version

In Speedb (proprietary) we made some changes to db_stress to fix bugs and support desired behaviours, so we need to port them to the OSS version.

Expected behavior

db_stress is behaving as in Speedb (proprietary), without bugs.

Actual behavior

There are some outstanding bugs and missing features that need to be merged.

Steps to reproduce the behavior

Compare with the Speedb (proprietary) version.

UBSan errors: perf_level and iostats_context

Running make blackbox_ubsan_crash_test on RocksDB v7.2.2 results in the following errors:
monitoring/perf_step_timer.h:19:31: runtime error: load of null pointer of type 'PerfLevel'
file/writable_file_writer.cc:523:9: runtime error: member access within null pointer of type 'struct IOStatsContext'

It seems these variables are not initialized when reached.

Accessing the members through a getter instead of directly fixes these errors, e.g. GetPerfLevel() instead of perf_level directly.

perf_context is declared and defined in a very similar way (though with thread_local instead of __thread), but no error is raised from it. That's probably just a coincidence: the first access to perf_context is through get_perf_context(), and the rest already see an initialized member.
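
For illustration, a simplified sketch of the pattern the getters rely on (not the actual RocksDB declarations): routing access through a function gives the thread-local one well-defined initialization point before its first use.

enum PerfLevelSketch { kDisable = 1, kEnableCount = 2 };

// A function-local thread_local is guaranteed to be initialized the first
// time the function is called, so callers can never observe it
// uninitialized - unlike a directly accessed global, which is what the
// UBSan reports above point at.
PerfLevelSketch& GetPerfLevelSketch() {
  static thread_local PerfLevelSketch level = kDisable;
  return level;
}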

Expected behavior

no errors

Actual behavior

errors

Steps to reproduce the behavior

make blackbox_ubsan_crash_test

Add build related fixes from Speedb

Add build fixes from Speedb into OSS

Expected behavior

The code compiles.

Actual behavior

The code breaks the build.

Steps to reproduce the behavior

Compile with GCC on CentOS 7.9, with GCC 12, and with Clang 13.

set purges as Pri::LOW instead of HIGH

Reason for the change:

Purges are deletions of obsolete compaction files. By doing the cleanup of these files in the HIGH priority thread pool, the flushes are hindered. We’d like to avoid disturbing the flushes and instead do this cleaning job at LOW priority.

Other notes:

This change is also favored when using the SPDB-680 (delete a file in chunks) feature, since with it the deletion takes longer, which would further stall the flushes.

Also, consider changing the priority here as well, in db/db_impl/db_impl_compaction_flush.cc:

void DBImpl::BGWorkPurge(void* db) {
  IOSTATS_SET_THREAD_POOL_ID(Env::Priority::HIGH);  // <- candidate for LOW as well
  TEST_SYNC_POINT("DBImpl::BGWorkPurge:start");
  reinterpret_cast<DBImpl*>(db)->BackgroundCallPurge();
  TEST_SYNC_POINT("DBImpl::BGWorkPurge:end");
}

Dirty Memory Manager

Why:

Improve the performance of multiple column families during writes
Improve the memory utilization of multiple CFs by distributing memory dynamically
Eliminate IO stalls due to flushes
Provide a framework for memory quota enforcement

What:
Proactively trigger flushes on different CFs in order to free up used memory.
Better utilize overall memory usage and improve write performance.

Who (is it for):
Multiple DBs or multiple CFs where the DBs/CFs are not evenly used (some CFs/DBs are more write-intensive than others)

Import range query improvements

Why:
Seek is a commonly used operation with RocksDB. Improving it can significantly improve the overall performance of the application.
Improve short-seek performance with a large number of levels that don’t have much overlap.

What:
Improve the creation of range filters to hold fewer irrelevant items, reducing the seek times.

test issue

Expected behavior

Actual behavior

Steps to reproduce the behavior

db_bench: many default values are not the default in Speedb

These params are:

  • table_cache_numshardbits is 4, while the default is 6
  • a new table reader for compaction inputs
  • an 8MB block cache
  • index shortening mode
  • enable_pipelined_write - true in db_bench and false otherwise
  • delayed_write_rate = 8MB, while the default value is 0

and possibly more.

Fix by making db_bench take the default values.

Update project license to comply with Apache 2.0

  • LICENSE file
  • NOTICE file
  • COPYING file
  • Copyright headers
    -- Copyright [yyyy] SpeeDB Ltd.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


  • README mention

Building Speedb fails with Clang 14

Building Speedb on my machine using Clang 14 fails with the following errors:

/home/isaac/projects/speedb-io/options/db_options.h:120:21: error: definition of implicit copy constructor for 'MutableDBOptions' is deprecated because it has a user-declared copy assignment operator [-Werror,-Wdeprecated-copy]
  MutableDBOptions& operator=(const MutableDBOptions&) = default;
                    ^
/home/isaac/projects/speedb-io/db/compaction/compaction_job.cc:439:7: note: in implicit copy constructor for 'rocksdb::MutableDBOptions' first required here
      mutable_db_options_copy_(mutable_db_options),
      ^

To Reproduce

mkdir build
cd build
CC=clang CXX=clang++ cmake .. -GNinja
ninja

Expected behavior

The build completes successfully.

Environment (please complete the following information):

  • OS (e.g. output of uname -omsr and distribution name and version): Arch Linux with Linux 5.18.10-arch1-1 x86_64 GNU/Linux
  • Compiler (e.g. output of gcc --version):
clang version 14.0.6
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
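
For reference, a minimal sketch that triggers the same -Wdeprecated-copy diagnostic (not the Speedb code; one possible fix is shown in the comment):

// clang++ -std=c++17 -Werror -Wdeprecated-copy fails the same way:
struct MutableOptionsLike {
  // A user-declared copy assignment operator deprecates the *implicit*
  // copy constructor.
  MutableOptionsLike& operator=(const MutableOptionsLike&) = default;
  // Possible fix: declare the copy constructor explicitly as well, e.g.
  //   MutableOptionsLike(const MutableOptionsLike&) = default;
};

int main() {
  MutableOptionsLike a;
  MutableOptionsLike b(a);  // the implicit copy constructor is used here
  (void)b;
}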

Paired Block Bloom Filter Algorithm

Why :

Reduce the false positive rate while using the same amount of memory.

What:

Develop a filter which is fast and low on CPU consumption on the one hand, but has a better memory footprint vs. FPR trade-off on the other.

Technical detail:

In the traditional Bloom filter there is a trade-off between memory usage and performance. The RocksDB blocked Bloom filter takes less time but consumes extra memory.

The Ribbon filter, on the other hand, takes ~30% less memory but is much slower than the Bloom filter (by a factor of 4).

The idea is to improve the Bloom filter's memory consumption while keeping it highly performant.

Who:

The proposed filter should be most beneficial when there is a need for a very small FPR. Typically this happens when the penalty of a false positive is very large compared to the filter test time (database on disk), and when true positives are rare.

Integrate a new type of filter policy: Paired Block Bloom Filter
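
For orientation, a minimal cache-local ("blocked") Bloom filter probe in the spirit of the design discussed above, using the usual double-hashing probe sequence; this illustrates the speed/memory trade-off and is not the paired-block algorithm itself:

#include <cstdint>
#include <vector>

// Minimal blocked Bloom filter sketch: all probes for a key land inside a
// single 512-bit (64-byte) block, so each lookup touches one cache line.
// This is the fast-but-memory-hungrier design whose memory/FPR trade-off the
// paired-block approach aims to improve.
class BlockedBloomSketch {
 public:
  BlockedBloomSketch(size_t num_blocks, int num_probes)
      : bits_(num_blocks * 8, 0), num_probes_(num_probes) {}

  void Add(uint64_t h) { Probe<true>(h); }
  bool MayContain(uint64_t h) { return Probe<false>(h); }

 private:
  template <bool kSet>
  bool Probe(uint64_t h) {
    // Pick a block from the high bits, then double-hash within the block.
    size_t block = (h >> 32) % (bits_.size() / 8);
    uint64_t delta = h * 0x9E3779B97F4A7C15ull;  // second hash for probing
    for (int i = 0; i < num_probes_; ++i) {
      uint32_t bit = static_cast<uint32_t>(h & 511);  // 512 bits per block
      uint64_t& word = bits_[block * 8 + (bit >> 6)];
      uint64_t mask = uint64_t{1} << (bit & 63);
      if (kSet) {
        word |= mask;
      } else if (!(word & mask)) {
        return false;
      }
      h += delta;  // double hashing: h_i = h + i * delta
    }
    return true;
  }

  std::vector<uint64_t> bits_;
  int num_probes_;
};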

OSS: port the existing delayed write rate mechanism (with gate flag)

As part of the OSS plan, let's extract the relevant parts of the delayed write mechanism from the proprietary version and port it (as is) to the open source version, gated behind a flag to decide whether to use the RocksDB mechanism (based on the WriteController logic) or use the Speedb one. If all tests pass, we'll set the default value of the flag so that the Speedb logic is used.

Note that a flag needs to be added to db_bench as well in order to allow setting the new option in performance tests and allow comparing runs with and without it.

Add construction of memtable factory from plugins in tools

Currently db_bench, db_stress, and related tools don't allow providing plugin URIs for the memtable factory. This precludes the use of the new memtable (#22) in these tools during testing. Add support for providing a memtable factory URI so that plugins can be used.
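
A sketch of what the wiring might look like inside the tools, assuming the Customizable-based MemTableRepFactory::CreateFromString API available in recent RocksDB (the --memtable_factory_uri flag name is hypothetical):

#include <memory>
#include <string>

#include "rocksdb/convenience.h"
#include "rocksdb/memtablerep.h"
#include "rocksdb/options.h"

// Sketch: turn a URI string (e.g. from a hypothetical --memtable_factory_uri
// db_bench flag) into a memtable factory via the object registry, so plugin
// memtables can be exercised by the tools.
rocksdb::Status SetMemtableFactoryFromUri(const std::string& uri,
                                          rocksdb::Options* options) {
  rocksdb::ConfigOptions config_options;
  std::shared_ptr<rocksdb::MemTableRepFactory> factory;
  rocksdb::Status s = rocksdb::MemTableRepFactory::CreateFromString(
      config_options, uri, &factory);
  if (s.ok()) {
    options->memtable_factory = factory;
  }
  return s;
}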

Delayed write: needs_flush_speedup doesn't actually work

needs_flush_speedup is considered in a branch of the code that's effectively dead code.

void DBImpl::MaybeScheduleFlushOrCompaction() {
  ...
  bool is_flush_pool_empty =
      env_->GetBackgroundThreads(Env::Priority::HIGH) == 0;
  if (!is_flush_pool_empty) {
    ...
  } else {
    // this branch is effectively never reached (see below)
    while (unscheduled_flushes_ > 0 &&
           (bg_flush_scheduled_ + bg_compaction_scheduled_ <
                bg_job_limits.max_flushes ||
            (needs_flush_speedup_ &&
             bg_flush_scheduled_ <= bg_job_limits.max_flushes))) {
      bg_flush_scheduled_++;
      FlushThreadArg* fta = new FlushThreadArg;
      fta->db_ = this;
      fta->thread_pri_ = Env::Priority::LOW;
      env_->Schedule(&DBImpl::BGWorkFlush, fta, Env::Priority::LOW, this,
                     &DBImpl::UnscheduleFlushCallback);
      --unscheduled_flushes_;
    }
  }
}

The reason is that env_->GetBackgroundThreads() returns the pool capacity, not the number of available slots, so the flush pool will never actually be considered empty. (The intention in RocksDB here was to support the case where flushes are supposed to have the same priority as compactions and thus max_background_flushes is set to 0, but our use of needs_flush_speedup needs to know whether there are high-priority job slots available, regardless of capacity allocation.)

Expected behavior

Available capacity is considered for scheduling based on flush urgency.

Actual behavior

Allocated capacity is considered.

Steps to reproduce the behavior

N/A

Live Config Changes

What
Provide an interface to change configurable parameters on the fly, even if the application does not support it

Why

  • Improve the user experience when tuning parameters during PoCs, testing, and customer environments.
  • Changing parameters without restarting the database

Who

  • Testing
  • Customers (production)
  • POCs

Functional Requirements:

  • The user should be able to change only mutable parameters with this method. This method cannot be used to change immutable parameters.
    Example: The user shall be able to load a config file, with a format similar to the Options file, and to call the appropriate RocksDB update option (pending an RnD decision).
  • The new option(s) should be loaded on the fly, without the need to reload/recompile the code.
  • When changing a parameter, the old and new values should be printed to the log.
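
RocksDB already exposes the mutable-parameter entry points this would build on; a minimal sketch using the real DB::SetOptions / DB::SetDBOptions APIs (the config-file parsing layer on top is the part pending the RnD decision):

#include <cassert>

#include "rocksdb/db.h"

// Sketch: changing mutable options on a live DB. SetOptions covers mutable
// CF options; SetDBOptions covers mutable DB-wide options. Immutable options
// are rejected with a non-OK status, matching the requirement above.
void LiveTuneExample(rocksdb::DB* db) {
  // Mutable CF option: grow the memtable to 128 MiB on the fly.
  rocksdb::Status s = db->SetOptions({{"write_buffer_size", "134217728"}});
  assert(s.ok());

  // Mutable DB option: raise the background job limit.
  s = db->SetDBOptions({{"max_background_jobs", "8"}});
  assert(s.ok());
}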

unit tests: SPDB-556 introduced a new error in customizable_test

error:

[ RUN      ] CustomizableTest.PrepareOptionsTest
options/customizable_test.cc:667: Failure
base->ConfigureFromString(prepared, "unique={id=P; can_prepare=true}")
NotFound: Missing configurable object: pointer
terminate called after throwing an instance of 'testing::internal::GoogleTestFailureException'
  what():  options/customizable_test.cc:667: Failure
base->ConfigureFromString(prepared, "unique={id=P; can_prepare=true}")
NotFound: Missing configurable object: pointer
Received signal 6 (Aborted)

Steps to reproduce the behavior

./customizable_test

Import shorten memtable switch latency

Improve memtable switch latency with memtables that require a large upfront allocation, such as the hash-table-based memtable.

Expected behavior

Actual behavior

Steps to reproduce the behavior

Memory Monitor

Why:
Have a single component that knows the memory status of the system.
Know the status of each CF, as reporting and decision support for other features.

What:
Track both clean and dirty memory consumption per CF.
The memory monitor should register each CF during build and remove it during destroy.
The memory monitor should be able to calculate, when needed, the clean, dirty, and flush-pending memory size for each CF.
The memory monitor should have a method to report individual CF status and system-wide status for clean, dirty, and their sum.
User interface for getting this report: logs.

Who:
Reporting and monitoring.
Decision support for other features (clean / dirty / quota managers).
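
A sketch of the interface this describes (hypothetical names, for illustration only):

#include <cstdint>
#include <map>
#include <mutex>
#include <string>

// Hypothetical sketch of the monitor's shape: CFs register on creation,
// report their usage, and consumers (clean/dirty/quota managers, logs) pull
// per-CF or system-wide totals.
class MemoryMonitorSketch {
 public:
  struct CfUsage {
    uint64_t clean_bytes = 0;
    uint64_t dirty_bytes = 0;
    uint64_t flush_pending_bytes = 0;
  };

  void RegisterCf(const std::string& cf) {
    std::lock_guard<std::mutex> g(mu_);
    usage_[cf] = CfUsage{};
  }
  void UnregisterCf(const std::string& cf) {
    std::lock_guard<std::mutex> g(mu_);
    usage_.erase(cf);
  }
  void Report(const std::string& cf, const CfUsage& u) {
    std::lock_guard<std::mutex> g(mu_);
    usage_[cf] = u;
  }
  CfUsage SystemWide() {
    std::lock_guard<std::mutex> g(mu_);
    CfUsage total;
    for (const auto& kv : usage_) {
      total.clean_bytes += kv.second.clean_bytes;
      total.dirty_bytes += kv.second.dirty_bytes;
      total.flush_pending_bytes += kv.second.flush_pending_bytes;
    }
    return total;
  }

 private:
  std::mutex mu_;
  std::map<std::string, CfUsage> usage_;
};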

Compaction threads - research on changing the default values

Changing the number of compaction threads from 1 to 8 showed dramatic performance changes, especially (but not only) in the write tests.
We would like research to be done on:

  1. What is the advantage of more compaction threads?
  2. What is the disadvantage?

We want to understand the theory behind it before we start the testing, to pinpoint our testing efforts (and the release in which this will be active) accordingly.

Here is the graph:
[graph image]

Performance - new test plan

Currently we have one set of performance tests, comprising 4 benchmarks with the same 11 tests each.
We mainly use this set, which is good and comprehensive but does not necessarily fit all our purposes.
We are creating four levels of testing:

  1. Regression - after every push to the main branch.
  2. Daily - every night (assuming there was at least one push to the main branch since the last one).
  3. Weekly - every weekend.
  4. Release - before we release a new version.

The Daily level should be bigger, broader, and more comprehensive than the Regression level. The same goes for Weekly vs. Daily and Release vs. Weekly.

The Regression level should be short, no longer than 2 hours, and give RnD the very basic confirmation that nothing was broken and no degradation was found. In any case, that same night the Daily run will take place for a more profound verification.
Daily can be several hours - in any case, no more than 10 hours.
Weekly should be longer and might include huge DBs and many threads/CFs, features, and important non-default configurations.
Release is the biggest one. It should include all our features - including all that are set to false by default, different non-default configurations, everything mentioned in https://speedb.atlassian.net/browse/SPDB-552, and other edge cases.

We would like to rethink the necessity of the current 11 tests for each of the current benchmarks (maybe the tests should differ between benchmarks) and the ones to come, and consider adding new ones (e.g. delete). In addition, we should consider having images instead of the "fillrandom" test.

The outcome of this ticket should be a document covering all four levels - what should be tested in each level - broken down into specific benchmarks and tests.
