surrealdb / surrealdb
24.9K 158.0 761.0 20.85 MB

A scalable, distributed, collaborative, document-graph database, for the realtime web

Home Page: https://surrealdb.com

License: Other

Dockerfile 0.04% HTML 0.01% Rust 99.67% Makefile 0.02% Nix 0.26% Shell 0.01%
database distributed distributed-database document-database realtime-database cloud-database collaborative hacktoberfest backend-as-a-service database-as-a-service

surrealdb's Introduction



SurrealDB is the ultimate cloud database for tomorrow's applications

Develop easier.   Build faster.   Scale quicker.


         

     

Blog   Github	  LinkedIn   Twitter   Youtube   Dev   Discord   StackOverflow


  What is SurrealDB?

SurrealDB is an end-to-end cloud-native database designed for modern applications, including web, mobile, serverless, Jamstack, backend, and traditional applications. With SurrealDB, you can simplify your database and API infrastructure, reduce development time, and build secure, performant apps quickly and cost-effectively.

Key features of SurrealDB include:

  • Reduces development time: SurrealDB simplifies your database and API stack by removing the need for most server-side components, allowing you to build secure, performant apps faster and cheaper.
  • Real-time collaborative API backend service: SurrealDB functions as both a database and an API backend service, enabling real-time collaboration.
  • Support for multiple querying languages: SurrealDB supports SQL querying from client devices, GraphQL, ACID transactions, WebSocket connections, structured and unstructured data, graph querying, full-text indexing, and geospatial querying.
  • Granular access control: SurrealDB provides row-level permissions-based access control, giving you the ability to manage data access with precision.

View the features, the latest releases, and documentation.

  Features

  • Database server, or embedded library
  • Multi-row, multi-table ACID transactions
  • Single-node, or highly-scalable distributed mode
  • Record links and directed typed graph connections
  • Store structured and unstructured data
  • Incrementally computed views for pre-computed advanced analytics
  • Realtime API layer, and security permissions built in
  • Store and model data in any way with tables, documents, and graph
  • Simple schema definition for frontend and backend development
  • Connect and query directly from web-browsers and client devices
  • Use embedded JavaScript functions for custom advanced functionality

  Documentation

For guidance on installation, development, deployment, and administration, see our documentation.

  Installation

SurrealDB is designed to be simple to install and simple to run - using just one command from your terminal. In addition to traditional installation, SurrealDB can be installed and run with Homebrew, Docker, or using any other container orchestration tool such as Docker Compose, Docker Swarm, Rancher, or in Kubernetes.

 Install on macOS

The quickest way to get going with SurrealDB on macOS is to use Homebrew. This will install both the command-line tools, and the SurrealDB server as a single executable. If you don't use Homebrew, follow the instructions for Linux below to install SurrealDB.

brew install surrealdb/tap/surreal

 Install on Linux

The easiest and preferred way to get going with SurrealDB on Unix operating systems is to install and use the SurrealDB command-line tool. Run the following command in your terminal and follow the on-screen instructions.

curl --proto '=https' --tlsv1.2 -sSf https://install.surrealdb.com | sh

If you want a binary newer than what's currently released, you can install the nightly one.

curl --proto '=https' --tlsv1.2 -sSf https://install.surrealdb.com | sh -s -- --nightly

 Install on Windows

The easiest and preferred way to get going with SurrealDB on Windows is to install and use the SurrealDB command-line tool. Run the following command in your terminal and follow the on-screen instructions.

iwr https://windows.surrealdb.com -useb | iex

 Run using Docker

Docker can be used to manage and run SurrealDB database instances without the need to install any command-line tools. The SurrealDB docker container contains the full command-line tools for importing and exporting data from a running server, or for running a server itself.

docker run --rm --pull always --name surrealdb -p 8000:8000 surrealdb/surrealdb:latest start

For just getting started with a development server running in memory, you can pass the container a basic initialization to set the user and password as root and enable logging.

docker run --rm --pull always --name surrealdb -p 8000:8000 surrealdb/surrealdb:latest start --log trace --user root --pass root memory
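Once the container is up, you can check that it is accepting queries by posting SurrealQL to the HTTP endpoint. The following is a minimal sketch using the root credentials set above; the namespace and database names are illustrative:

curl -X POST \
     -u "root:root" \
     -H "NS: test" \
     -H "DB: test" \
     -H "Content-Type: application/json" \
     -d "CREATE person SET name = 'Tobie', skills = ['Rust', 'Go'];" \
     http://localhost:8000/sql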

  Getting started

Getting started with SurrealDB is as easy as starting up the SurrealDB database server, choosing your platform, and integrating its SDK into your code. You can easily get started with your platform of choice by reading one of our tutorials.

Server side code
Client side apps
  • Getting started with Javascript
  • Getting started with Ember.js (coming soon)
  • Getting started with React.js (coming soon)
  • Getting started with Angular.js (coming soon)
  • Getting started with Vue.js (coming soon)
  • Getting started with Svelte (coming soon)

  Quick look

With strongly-typed data types, data can be fully modelled right in the database.

UPDATE person SET
	waist = <int> "34.59",
	height = <float> 201,
	score = <decimal> 0.3 + 0.3 + 0.3 + 0.1
;

Store dynamically computed fields which are calculated when retrieved.

CREATE person SET
	birthday = "2007-06-22",
	can_drive = <future> { time::now() > birthday + 18y }
;

Easily work with unstructured or structured data, in schema-less or schema-full mode.

-- Create a schemafull table
DEFINE TABLE user SCHEMAFULL;

-- Specify fields on the user table
DEFINE FIELD name ON TABLE user TYPE object;
DEFINE FIELD name.first ON TABLE user TYPE string;
DEFINE FIELD name.last ON TABLE user TYPE string;
DEFINE FIELD email ON TABLE user TYPE string ASSERT string::is::email($value);

-- Add a unique index on the email field preventing duplicate values
DEFINE INDEX email ON TABLE user COLUMNS email UNIQUE;

-- Create a new event whenever a user changes their email address
DEFINE EVENT email ON TABLE user WHEN $before.email != $after.email THEN (
	CREATE event SET user = $value, time = time::now(), value = $after.email, action = 'email_changed'
);

Connect records together with fully directed graph edge connections.

-- Add a graph edge between user:tobie and article:surreal
RELATE user:tobie->write->article:surreal
	SET time.written = time::now()
;

-- Add a graph edge between specific users and developers
LET $from = (SELECT users FROM company:surrealdb);
LET $devs = (SELECT * FROM user WHERE tags CONTAINS 'developer');
RELATE $from->like->$devs UNIQUE
	SET time.connected = time::now()
;

Query data flexibly with advanced expressions and graph queries.

-- Select a nested array, and filter based on an attribute
SELECT emails[WHERE active = true] FROM person;

-- Select all 1st, 2nd, and 3rd level people who this specific person record knows, or likes, as separate outputs
SELECT ->knows->(? AS f1)->knows->(? AS f2)->(knows, likes AS e3 WHERE influencer = true)->(? AS f3) FROM person:tobie;

-- Select all person records (and their recipients), who have sent more than 5 emails
SELECT *, ->sent->email->to->person FROM person WHERE count(->sent->email) > 5;

-- Select other products purchased by people who purchased this laptop
SELECT <-purchased<-person->purchased->product FROM product:laptop;

-- Select products purchased by people in the last 3 weeks who have purchased the same products that we purchased
SELECT ->purchased->product<-purchased<-person->(purchased WHERE created_at > time::now() - 3w)->product FROM person:tobie;

Store GeoJSON geographical data types, including points, lines and polygons.

UPDATE city:london SET
	centre = (-0.118092, 51.509865),
	boundary = {
		type: "Polygon",
		coordinates: [[
			[-0.38314819, 51.37692386], [0.1785278, 51.37692386],
			[0.1785278, 51.61460570], [-0.38314819, 51.61460570],
			[-0.38314819, 51.37692386]
		]]
	}
;

Write custom embedded logic using JavaScript functions.

CREATE film SET
	ratings = [
		{ rating: 6, user: user:bt8e39uh1ouhfm8ko8s0 },
		{ rating: 8, user: user:bsilfhu88j04rgs0ga70 },
	],
	featured = function() {
		return this.ratings.filter(r => {
			return r.rating >= 7;
		}).map(r => {
			return { ...r, rating: r.rating * 10 };
		});
	}
;

Specify granular access permissions for client and application access.

-- Specify access permissions for the 'post' table
DEFINE TABLE post SCHEMALESS
	PERMISSIONS
		FOR select
			-- Published posts can be selected
			WHERE published = true
			-- A user can select all their own posts
			OR user = $auth.id
		FOR create, update
			-- A user can create or update their own posts
			WHERE user = $auth.id
		FOR delete
			-- A user can delete their own posts
			WHERE user = $auth.id
			-- Or an admin can delete any posts
			OR $auth.admin = true
;

  Why SurrealDB?

Database, API, and permissions

SurrealDB combines the database layer, the querying layer, and the API and authentication layer into one platform. Advanced table-based and row-based customisable access permissions allow for granular data access patterns for different types of users. There's no need for custom backend code and security rules with complicated database development.


Tables, documents, and graph

As a multi-model database, SurrealDB enables developers to use multiple techniques to store and model data, without having to choose a method in advance. With the use of tables, SurrealDB has similarities with relational databases, but with the added functionality and flexibility of advanced nested fields and arrays. Inter-document record links allow for simple to understand and highly-performant related queries without the use of JOINs, eliminating the N+1 query problem.
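For example, a record link stored in a field can be traversed directly at query time with no JOIN. This is a minimal sketch; the table and field names are illustrative:

-- Store a record link from an article to its author
CREATE author:tobie SET name = 'Tobie';
CREATE article:surreal SET title = 'SurrealDB', author = author:tobie;

-- Follow the link with a simple path expression instead of a JOIN
SELECT title, author.name FROM article;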


Advanced inter-document relations and analysis. No JOINs. No pain.

With full graph database functionality, SurrealDB enables more advanced querying and analysis. Records (or vertices) can be connected to one another with edges, each with its own record properties and metadata. Simple extensions to traditional SQL queries allow for multi-table, multi-depth document retrieval, efficiently in the database, without the use of complicated JOINs and without bringing the data down to the client.


Simple schema definition for frontend and backend development

With SurrealDB, specify your database and API schema in one place, and define column rules and constraints just once. Once a schema is defined, database access is automatically granted to the relevant users. No more custom API code, and no more GraphQL integration. Simple, flexible, and ready for production in minutes not months.


Connect and query directly from web-browsers and client devices

Connect directly to SurrealDB from any end-user client device. Run SurrealQL queries directly within web-browsers, ensuring that users can only view or modify the data that they are allowed to access. Highly-performant WebSocket connections allow for efficient bi-directional queries, responses and notifications.


Query the database with the tools you want

Your data, your choice. SurrealDB is designed to be flexible to use, with support for SurrealQL, GraphQL (coming soon), CRUD support over REST, and JSON-RPC querying and modification over WebSockets. With direct-to-client connection with in-built permissions, SurrealDB speeds up the development process, and fits in seamlessly into any tech stack.


Realtime live queries and data changes direct to application

SurrealDB keeps every client device in-sync with data modifications pushed in realtime to the clients, applications, end-user devices, and server-side libraries. Live SQL queries allow for advanced filtering of the changes to which a client subscribes, and efficient data formats, including DIFFing and PATCHing enable highly-performant web-based data syncing.


Scale effortlessly to hundreds of nodes for high-availability and scalability

SurrealDB can be run as a single in-memory node, or as part of a distributed cluster - offering highly-available and highly-scalable system characteristics. Designed from the ground up to run in a distributed environment, SurrealDB makes use of special techniques when handling multi-table transactions, and document record IDs - with no use of table or row locks.
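For example, the same surreal binary can be started as a single in-memory node or pointed at a TiKV cluster for distributed operation. This sketch follows the start commands shown elsewhere in this document; the TiKV address is illustrative:

# Single node, in-memory
surreal start --user root --pass root memory

# Distributed mode, backed by a TiKV cluster
surreal start --user root --pass root tikv://127.0.0.1:2379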


Extend your database with JavaScript functions

Embedded JavaScript functions allow for advanced, custom functionality, with computation logic being moved to the data layer. This improves upon the traditional approach of moving data to the client devices before applying any computation logic, ensuring that only the necessary data is transferred remotely. These advanced JavaScript functions, with support for the ES2020 standard, allow any developer to analyse the data in ever more simple-yet-advanced ways.


Designed to be embedded or to run distributed in the cloud

Built entirely in Rust as a single library, SurrealDB is designed to be used as both an embedded database library with advanced querying functionality, and as a database server which can operate in a distributed cluster. With low memory usage and CPU requirements, the system requirements have been specifically thought through for running in all types of environment.
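As an embedded library, usage looks roughly like the following. This is a sketch based on the beta-era Rust API (Datastore::new and transaction) that appears in the issues further down; it assumes the surrealdb and tokio crates as dependencies, and the "memory" connection string is illustrative:

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open an embedded, in-memory datastore. File-backed engines use the
    // same API with a different connection string, e.g. "rocksdb:/path".
    let ds = surrealdb::Datastore::new("memory").await?;

    // Start a read-write transaction, write a key, and commit it.
    let mut tx = ds.transaction(true, false).await?;
    tx.put("some-key", "some-value").await?;
    tx.commit().await?;
    Ok(())
}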


  Community

Join our growing community around the world, for help, ideas, and discussions regarding SurrealDB.

  Contributing

We would love for you to get involved with SurrealDB development! If you wish to help, you can learn more about how you can contribute to this project in the contribution guide.

  Security

For security issues, view our vulnerability policy, view our security policy, and kindly email us at [email protected] instead of posting a public issue on GitHub.

  License

Source code for SurrealDB is variously licensed under a number of different licenses. A copy of each license can be found in each repository.

For more information, see the licensing information.

surrealdb's People

Contributors

al1l, arriqaaq, celebratevc, delskayn, dhghomon, ekwuno, emmanuel-keller, finnbear, gguillemas, github-actions[bot], hchockarprasad, jpteb, kearfy, lunchspider, martasanchez, maxwellflitton, morigs, mumoshu, naisofly, odonno, phughk, raphaeldarley, roynrishingha, rushmorem, ryanrussell, sgirones, silvergasp, timpratim, tobiemh, tsunyoku


surrealdb's Issues

Bug: Permission example from README doesn't work

Describe the bug

When I try to use the example for the granular permissions I get a parse error. Maybe I am doing something wrong, but I just copied that SurrealQL code.

Steps to reproduce

I used the docker version of SurrealDB.
Execute this SurrealQL query from the README.

DEFINE TABLE post SCHEMALESS
	PERMISSIONS
		FOR select
			-- Published posts can be selected
			WHERE published = true
			-- A user can select all their own posts
			OR user = $auth.id
		FOR create
			-- A user can create or update their own posts
			WHERE user = $auth.id
		FOR delete
			-- A user can delete their own posts
			WHERE user = $auth.id
			-- Or an admin can delete any posts
			OR $auth.admin = true
;

The Server responds with

{
	"code": 400,
	"details": "Request problems detected",
	"description": "There is a problem with your request. Refer to the documentation for further information.",
	"information": "There was a problem with the database: Parse error on line 8 at character 2 when parsing 'FOR create\n\t\t\t-- A user can create or update their own posts\n\t\t\tWHERE user = $auth.id\n\t\tFOR delete\n\t'"
}

Expected behaviour

I would expect that the example from the README would work, but maybe I am missing something. The DEFINE statement has almost no documentation yet. Maybe there's just an error in the README code.

SurrealDB version

1.0.0-beta.7

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Can it provide native snowflake-id

Is your feature request related to a problem?

N/A

Describe the solution

If the database provides Twitter-style snowflake IDs, then applications will not need to implement them themselves.
This would be very friendly to application developers.

Alternative methods

The snowflake ID could easily be declared in SQL, such as below:

DEFINE FIELD id ON TABLE user TYPE snowflake(begin_time, node_id, sequence);

SurrealDB version

1.0.0-beta.6

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Incorporate cargo-hack in the CI

Is your feature request related to a problem?

Currently running commands like

cargo test --no-default-features --features kv-mem --package surrealdb

result in test failures like

...
running 5 tests
Error: InvalidScript { message: "Embedded functions are not enabled." }
test script_function_simple ... FAILED
test script_function_module_os ... FAILED
Error: InvalidScript { message: "Embedded functions are not enabled." }
test script_function_types ... FAILED
Error: InvalidScript { message: "Embedded functions are not enabled." }
test script_function_arguments ... FAILED
Error: InvalidScript { message: "Embedded functions are not enabled." }
test script_function_context ... FAILED
...
failures:
    script_function_arguments
    script_function_context
    script_function_module_os
    script_function_simple
    script_function_types

test result: FAILED. 0 passed; 5 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.01s

This makes it harder to test only a subset of features during development, which can lengthen the test-debug cycle. Also, not testing all the various combinations of features to make sure they compile and run properly may mean that some configurations do not work.

Describe the solution

Incorporate a tool like cargo hack in the CI. It can test all the different combinations of features to make sure they compile and run successfully.

--feature-powerset

Perform for the feature powerset which includes --no-default-features and default features of the package.

This is useful to check that every combination of features is working properly.

It also has a GitHub Action.
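For illustration, the local equivalent of what such a CI job would run (using the cargo-hack flags quoted above) might look like:

cargo install cargo-hack
cargo hack check --feature-powerset
cargo hack test --feature-powerset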

Alternative methods

We could add cargo hack check --feature-powerset and cargo hack test --feature-powerset to CONTRIBUTING.md and ask engineers to run them before pushing. The advantage of doing it this way is that the CI won't be impacted. However, there is no guarantee that engineers will comply.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Unsoundness when dropping RocksDB `Datastore` before corresponding `Transactions`

Describe the bug

Safe code (no unsafe) making use of surrealdb::Datastore::new("rocksdb:...") and surrealdb::Datastore::transaction can trigger a segmentation fault, and the only warning was an internal comment in the relevant source code.

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

Note

This is probably not an issue for the vast majority of users, as Datastores generally last for the lifetime of a process. As a result, I don't recommend sacrificing time/effort/features to fix it in the near future. I might think of a solution that sacrifices a negligible amount of performance or introduces a negligible memory leak, and submit it in a PR.

A few ideas to fix it

  • Making at least one of the APIs unsafe
  • Making at least one of the APIs private, and ensuring that no other APIs can expose the unsoundness
  • Box::leak to get a real, safe 'static reference, at the expense of a potential memory leak, and optionally a way (unsafe or otherwise) to de-allocate
  • Some form of reference counting (possibly involving the upstream RocksDB integration crate)

If you want me to implement the Box::leak idea (the straightforward one), as opposed to waiting and thinking of a better solution, I can submit a PR.
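As a self-contained illustration of the reference-counting idea above (not SurrealDB's actual types, just the shape of the fix): if the transaction holds an Arc to the datastore it came from, the datastore cannot be dropped while any transaction is still alive.

use std::sync::Arc;

struct Datastore;

struct Transaction {
    // Keeps the datastore alive for as long as this transaction exists.
    _ds: Arc<Datastore>,
}

impl Datastore {
    fn transaction(self: Arc<Self>) -> Transaction {
        Transaction { _ds: self }
    }
}

fn main() {
    let tx = {
        let ds = Arc::new(Datastore);
        // The clone of the Arc moves into the transaction.
        Arc::clone(&ds).transaction()
        // `ds` goes out of scope here, but the datastore stays alive because
        // the transaction still holds a reference count to it.
    };
    drop(tx);
}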

Steps to reproduce

Cargo.toml

[package]
name = "surrealdb_rocksdb_unsound"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = {version = "1.20", features = ["full"]}
surrealdb = "1.0.0-beta.7"

src/main.rs

use surrealdb::Transaction;

#[tokio::main]
async fn main() {
    let mut transaction = get_transaction().await;
    println!("{:?}", transaction.put("uh", "oh").await.unwrap());
}

async fn get_transaction() -> Transaction {
    let datastore = surrealdb::Datastore::new("rocksdb:/tmp/rocks.db").await.unwrap();
    datastore.transaction(true, false).await.unwrap()
}

Expected behaviour

Either:

  1. The above safe code shouldn't compile (because the APIs are made private, made sound in a breaking change, or marked unsafe)
  2. The above safe code should just work™

At minimum, the unsoundness should be documented. Right now, the main clue is in an internal comment:

// The database reference must always outlive
// the transaction. If it doesn't then this
// is undefined behaviour. This unsafe block
// ensures that the transaction reference is
// static, but will cause a crash if the
// datastore is dropped prematurely.
let tx = unsafe {
	std::mem::transmute::<
		rocksdb::Transaction<'_, OptimisticTransactionDB>,
		rocksdb::Transaction<'static, OptimisticTransactionDB>,
	>(tx)
};

SurrealDB version

surrealdb = "1.0.0-beta.7"

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Prevent any unauthorised user/session from running any query

Describe the bug

I have started SurrealDB with a root user and password. The server runs well, but when I hit the REST API without including the username and password, I can still run queries.

Output from surreal :

[2022-09-01 09:28:19] INFO  surrealdb::iam Root authentication is enabled
[2022-09-01 09:28:19] INFO  surrealdb::iam Root username is 'userdb'
[2022-09-01 09:28:19] INFO  surrealdb::dbs Database strict mode is enabled
[2022-09-01 09:28:19] INFO  surrealdb::kvs Connecting to kvs store at tikv://192.23.192.212:2079
[2022-09-01 09:28:19] INFO  surrealdb::kvs Connected to kvs store at tikv://192.23.192.212:2079
[2022-09-01 09:28:19] INFO  surrealdb::net Starting web server on 0.0.0.0:8000
[2022-09-01 09:28:19] INFO  surrealdb::net Started web server on 0.0.0.0:8000

Steps to reproduce

Start surreal db with :

surreal start --strict --log trace --user userdb --pass userdb123 tikv://192.23.192.212:2079

Select from the HTTP REST API without a username and password:

curl -X POST \
         -H "NS: myapplication" \
         -H "DB: myapplication" \
         -H "Content-Type: application/json" \
         -d "SELECT * FROM time::day('2021-11-01T08:30:17+00:00');" \
         http://192.23.192.210:8000/sql

It works, with this result:

[{"time":"957.355µs","status":"OK","result":[1]}]

If I query a table, the authentication works, for example:

curl -X POST \
         -H "NS: myapplication" \
         -H "DB: myapplication" \
         -H "Content-Type: application/json" \
         -d "SELECT * FROM person WHERE age > 18" \
         http://192.23.192.210:8000/sql

The output is :

{"code":403,"details":"Authentication failed","description":"Your authentication details are invalid. Reauthenticate using valid authentication parameters.","information":"There was a problem with authentication"}%

Expected behaviour

Request Not Authenticated

SurrealDB version

1.0.0-beta.7

Contact Details

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Turn this repo into a Nix flake

Is your feature request related to a problem?

Building this repo from source is a bit of a challenge right now, even for Rust developers. This is especially true if one wants to use TiKV as their KV store.

Describe the solution

Nix flakes are the easiest way of distributing software from source that I know of. If someone has a flake-enabled (flakes are currently still an experimental feature) Nix package manager, all they will need to do to install surreal on their system is nix profile install github:surrealdb/surrealdb. This will download, compile and install the latest commit along with all its dependencies. One can target specific versions, branches or even commits. They don't need to install it in their environment either. nix run github:surrealdb/surrealdb is equivalent to cargo run except it will download not only this repo but also all its dependencies.

To turn the repo into a Nix flake, all we need to do is add two files to the root: flake.nix and flake.lock.

Alternative methods

Keep installing dependencies manually.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: The math::sum() SQL function should return a number when not used as an aggregate function

Describe the bug

When using the math::sum() function on a field which is a number, and when not using a GROUP BY clause to aggregate the record values, the function should return a number instead of NONE.

Steps to reproduce

Running the following query...

INSERT INTO player (agility, strength, scores) VALUES (10, 10, [97, 83, 79]);
INSERT INTO player (agility, strength, scores) VALUES (10, 50, [87, 90, 88]);
SELECT math::sum(strength) AS strength FROM player;

currently returns...

[
  {
    "time": "203.041µs",
    "status": "OK",
    "result": [
      {
        "strength": null
      },
      {
        "strength": null
      }
    ]
  }
]

Expected behaviour

If the value is not an array of numbers, and a GROUP BY clause is not being used, we would expect the function to return the field value as a number...

[
  {
    "time": "203.041µs",
    "status": "OK",
    "result": [
      {
        "strength": 10
      },
      {
        "strength": 50
      }
    ]
  }
]

SurrealDB version

surreal 1.0.0-beta.6 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Allow multiple different table types in FIELD TYPE

Is your feature request related to a problem?

One should be able to define link constraints to multiple tables. Currently it's only possible to create a link constraint on a single type.

DEFINE FIELD link ON entity TYPE record(person);

Describe the solution

We should be able to constrain record types to multiple types (in a polymorphic way).

DEFINE FIELD link ON entity TYPE record(person, organisation, cause);

Alternative methods

No alternative methods exist.

SurrealDB version

surreal 1.0.0-beta.5 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Add a method for retrieving the currently selected NS and DB

Is your feature request related to a problem?

The original points for this issue can be seen in #32.

Currently there is no way of finding out which Namespace or Database is currently selected.

Describe the solution

We should have a new statement type or a function for retrieving the current Namespace and current Database.
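A hypothetical sketch of what the function-based option could look like (these function names are purely illustrative and do not exist at the time of writing):

SELECT * FROM session::ns();
SELECT * FROM session::db();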

Alternative methods

It is not yet possible to get the currently selected NS or DB.

SurrealDB version

surreal 1.0.0-beta.6 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Disable root authentication when no password is set

Is your feature request related to a problem?

When no root password is set with the command-line arguments -p or --pass, then instead of setting a randomly-generated password for the root user, we should disable root authentication altogether.

Describe the solution

Disable root authentication if the root password is not specified.

Alternative methods

Not applicable.

SurrealDB version

surreal 1.0.0-beta.2 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Unable to install windows-amd64 binary version of SurrealDB

Describe the bug

OS: Windows 10 x64

PS C:\WINDOWS\system32> iwr https://windows.surrealdb.com -useb | iex

 .d8888b.                                             888 8888888b.  888888b.
d88P  Y88b                                            888 888  'Y88b 888  '88b
Y88b.                                                 888 888    888 888  .88P
 'Y888b.   888  888 888d888 888d888  .d88b.   8888b.  888 888    888 8888888K.
    'Y88b. 888  888 888P'   888P'   d8P  Y8b     '88b 888 888    888 888  'Y88b
      '888 888  888 888     888     88888888 .d888888 888 888    888 888    888
Y88b  d88P Y88b 888 888     888     Y8b.     888  888 888 888  .d88P 888   d88P
 'Y8888P'   'Y88888 888     888      'Y8888  'Y888888 888 8888888P'  8888888P'

Fetching the latest database version...
Fetching the host system architecture...
Installing surreal-v1.0.0-beta.5
 for windows-amd64...
Invoke-WebRequest : The remote server returned an error: (403) Forbidden.
At line:54 char:5
+     Invoke-WebRequest $DownloadUrl -OutFile $Executable -UseBasicPars ...
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebExc
   eption
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

PS C:\WINDOWS\system32>

Steps to reproduce

Run the installation script

iwr https://windows.surrealdb.com -useb | iex

Expected behaviour

It's supposed to install SurrealDB.

SurrealDB version

1.0.0-beta.5

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Unable to assign a value to a field after creating index on it

Describe the bug

An error occurs when attempting to modify a field after creating an index on that field

Steps to reproduce

DEFINE TABLE PERSON;
CREATE person:bob SET age = 23;
UPDATE person set age = 10;
DEFINE INDEX age_idx ON person COLUMNS age;
UPDATE person SET age = 30;

Upon executing the last statement, I get this:

[
  {
    "detail": "There was a problem with a datastore transaction: Value being checked was not correct",
    "status": "ERR",
    "time": "190.625µs"
  }
]

Expected behaviour

The age field is successfully assigned the new value (30)

SurrealDB version

surreal 1.0.0-beta.6 for macos on aarch64

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Implement config definition caching within a transaction

Is your feature request related to a problem?

Currently within a transaction, TABLE definitions, EVENT definitions, FIELD definitions, INDEX definitions, and foreign TABLE AS definitions are fetched for every record when reading or writing records.

Ideally we should only fetch the definitions once, and then use the cached values when fetching them for subsequent record processing.

Describe the solution

Use an in-transaction cache to store and cache configuration table records once they have been retrieved for the first time within a transaction.

This can be used when retrieving:

DEFINE TABLE statements for the document table
DEFINE EVENT statements for the document events
DEFINE FIELD statements for the document fields
DEFINE INDEX statements for the document indexes
DEFINE TABLE AS statements for the document foreign tables
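A rough, self-contained sketch of the in-transaction cache idea (not SurrealDB's actual types): definitions are read from the key-value store the first time they are needed within a transaction, and reused from the cache afterwards.

use std::collections::HashMap;

struct Transaction {
    // Maps a definition key (e.g. "db:test/tb:person/fields") to the
    // DEFINE statements retrieved for it within this transaction.
    cache: HashMap<String, Vec<String>>,
}

impl Transaction {
    fn definitions(&mut self, key: &str) -> &Vec<String> {
        if !self.cache.contains_key(key) {
            // First access within this transaction: read from the datastore.
            let fetched = self.fetch_from_kv(key);
            self.cache.insert(key.to_string(), fetched);
        }
        // Subsequent accesses reuse the cached value.
        self.cache.get(key).unwrap()
    }

    fn fetch_from_kv(&self, _key: &str) -> Vec<String> {
        // Placeholder for the real key-value range scan.
        vec!["DEFINE FIELD name ON person TYPE string".to_string()]
    }
}

fn main() {
    let mut tx = Transaction { cache: HashMap::new() };
    // Only the first call hits the datastore; the second uses the cache.
    tx.definitions("db:test/tb:person/fields");
    tx.definitions("db:test/tb:person/fields");
}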

Alternative methods

No alternative methods.

SurrealDB version

surreal 1.0.0-beta.5 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: In strict mode If we try to use a non-existent namespace then SDB should return an error

Describe the bug

In strict mode, if we try to use a non-existent namespace then SDB should return an error and not status=OK.

Steps to reproduce

info for kv; use ns test;
[
    {
        "time": "1.28µs",
        "status": "OK",
        "result": null
    }
]

Expected behaviour

[
    {
        "time": "1.28µs",
        "status": "ERR",
        "detail": "The namespace test does not exists"
    }
]

SurrealDB version

v1.0.0-beta.4

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Aliased field is not output when fetching a multi-yield path expression with a final alias

Describe the bug

When fetching a multi-yield path expression, if the last path part uses an AS field name, then the overall aliased field name is ignored and not output.

Steps to reproduce

Run the following SQL on a blank database:

CREATE person:1, person:2, person:3 RETURN NONE;
RELATE person:1->like->person:2;
RELATE person:3->like->person:2;
SELECT ->?->(person AS a)<-like<-(person AS b)->(like AS c)->(person AS d) AS people FROM person:1;

The result of the final SELECT query is:

{
	"a": [
		"person:2"
	],
	"b": [
		"person:3",
		"person:1"
	],
	"c": [
		"like:cgxc3q8m0470vkcunu5b",
		"like:qw1zus1o0xg607vw339l"
	],
	"d": [
		"person:2",
		"person:2"
	]
}

Expected behaviour

Run the following SQL on a blank database:

CREATE person:1, person:2, person:3 RETURN NONE;
RELATE person:1->like->person:2;
RELATE person:3->like->person:2;
SELECT ->?->(person AS a)<-like<-(person AS b)->(like AS c)->(person AS d) AS people FROM person:1;

The result of the final SELECT query should be:

{
	"a": [
		"person:2"
	],
	"b": [
		"person:3",
		"person:1"
	],
	"c": [
		"like:cgxc3q8m0470vkcunu5b",
		"like:qw1zus1o0xg607vw339l"
	],
	"d": [
		"person:2",
		"person:2"
	],
        "people": [
		"person:2",
		"person:2"
	]
}

SurrealDB version

surreal 1.0.0-beta.5 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Duplicated SCHEMAFULL FIELDS in export cli

Describe the bug

Duplicated SCHEMAFULL FIELDS in export file

Steps to reproduce

Create a simple SCHEMAFULL table with SDBQL

# tag:define
DEFINE TABLE tag SCHEMAFULL;
DEFINE FIELD name ON tag TYPE string;
DEFINE FIELD meta_data ON tag TYPE object;
DEFINE FIELD created_at ON tag TYPE datetime VALUE time::now();
DEFINE INDEX idx_name ON tag COLUMNS name UNIQUE;
# tag:create
CREATE tag:red CONTENT { name: "Red" };
CREATE tag:green CONTENT { name: "Green" };
CREATE tag:blue CONTENT { name: "Blue" };
CREATE tag:white CONTENT { name: "White" };
CREATE tag:black CONTENT { name: "Black" };
# tag:info
INFO FOR TABLE tag;

Export it, and note that the FIELDs are duplicated

-- ------------------------------
-- TABLE: tag
-- ------------------------------

DEFINE TABLE tag SCHEMAFULL;

DEFINE FIELD created_at ON tag TYPE datetime VALUE time::now();
DEFINE FIELD meta_data ON tag TYPE object;
DEFINE FIELD name ON tag TYPE string;

DEFINE FIELD created_at ON tag TYPE datetime VALUE time::now();
DEFINE FIELD meta_data ON tag TYPE object;
DEFINE FIELD name ON tag TYPE string;

Expected behaviour

Expected the exported file to contain no duplicate FIELDs

-- ------------------------------
-- TABLE: tag
-- ------------------------------

DEFINE TABLE tag SCHEMAFULL;

DEFINE FIELD created_at ON tag TYPE datetime VALUE time::now();
DEFINE FIELD meta_data ON tag TYPE object;
DEFINE FIELD name ON tag TYPE string;

SurrealDB version

surreal 1.0.0-beta.5 for linux on x86_64

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Remember previously specified NS and DB in command-line REPL

Is your feature request related to a problem?

The original points for this issue can be seen in #32.

Once the --ns and --db arguments are made optional in the SurrealDB command-line REPL (#34), then if the Namespace or Database has not yet been specified, each query must include a USE NS ... DB ... statement before any other statement can be run.
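For example, without a remembered Namespace and Database, every request has to be prefixed along these lines (a sketch using the USE syntax shown elsewhere in this document):

USE NS test DB test; SELECT * FROM person;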

Describe the solution

The SurrealDB command-line should remember the last specified Namespace or Database which have been set with USE NS and USE DB.

Alternative methods

No alternative methods.

SurrealDB version

surreal 1.0.0-beta.6 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Support more formats for latitude and longitude

Is your feature request related to a problem?

Currently these only support floats. We may want to support more formats like:-

40° 26′ 46″ N 79° 58′ 56″ W
N 40° 26′ 46″ W 79° 58′ 56″
40° 26.767' N 79° 58.933' W
40° 26′ 46″ 79° 58′ 56″, 40° 26′ 46″, 79° 58′ 56″, ...
N 40° 26.767' W 79° 58.933'
40° 26.767' 79° 58.933', 40° 26.767', 79° 58.933', ...
N 40.446° W 79.982°
40.446° N 79.982° W
40.446° 79.982°, 40.446,79.982, etc.

Describe the solution

From Tobie's comment:

We would need to write it in the parser (as opposed to deferring to that crate) though, as that will be more performant. Similar to how we are parsing datetimes/durations...

Alternative methods

Using the linked crate latlon. See Tobie's comment above on this.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Log a message on server startup with root authentication status

Is your feature request related to a problem?

When the server starts up, currently there is no obvious way of knowing whether root authentication is enabled or disabled.

Describe the solution

We should log whether root authentication is enabled or not when the server is started.

Alternative methods

Not applicable.

SurrealDB version

surreal 1.0.0-beta.2 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

SurrealQL statements for backup/restore and import/export

Is your feature request related to a problem?

Currently we can only perform Backup/Restore and Import/Export with the CLI, and there is no way to initiate these tasks with SurrealQL.

Describe the solution

I suggest adding SurrealQL statements for performing backup/restore and import/export data.

BACKUP INTO '{collectionURI}';
RESTORE DATABASE bank FROM LATEST IN '{collectionURI}';
RESTORE TABLE bank.customers FROM LATEST IN '{collectionURI}';

Reference: https://www.cockroachlabs.com/docs/stable/backup.html

These statements will enable us to initiate and schedule backup/restore from code.

Alternative methods

CLI can be used to perform backup/restore.

SurrealDB version

v1.0.0-beta.4

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: improved error message for uniqueness conflicts

Is your feature request related to a problem?

When a new record contains a duplicate on a field defined as unique, the error message returns the ID of the new record. While this works out fine for the id field, on a field with a UNIQUE index this new ID doesn't always match the ID in the database and may even be a newly generated random ID.

> DEFINE TABLE foo SCHEMAFULL;
[{"time":"84.158µs","status":"OK","result":null}]

> DEFINE FIELD email_address ON foo TYPE string ASSERT is::email($value);
[{"time":"152.971µs","status":"OK","result":null}]

> DEFINE INDEX email_address_idx ON foo FIELDS email_address UNIQUE;
[{"time":"105.3µs","status":"OK","result":null}]

> CREATE foo SET email_address = "[email protected]";
[{"time":"778.488µs","status":"OK","result":[{"email_address":"[email protected]","id":"foo:ni00u6941w0xij9b6aev"}]}]

> CREATE foo SET email_address = "[email protected]";
[{"time":"214.592µs","status":"ERR","detail":"Database index `email_address_idx` already contains `foo:1z1vsan748ov7jfyhing`"}]

# Notice that the ID already in the database is `foo:ni00u6941w0xij9b6aev` not `foo:1z1vsan748ov7jfyhing`
# which is being reported in the error message.

Describe the solution

I think

Database index `email_address_idx` contains a duplicate of a value in `foo:ni00u6941w0xij9b6aev`

or something like that would be more accurate and makes it easier to debug down the line if someone does not handle this error right away and only notices it in the logs. In a case like this, the new ID will not even be in the database by the time one discovers this. This would make it impossible to track it down.

Alternative methods

It would be nice if it said

Database index `email_address_idx` already contains `[email protected]`

but that might lead to gigantic error messages if the field value is large.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: `type::table` doesn't extract the table from the output of `type::thing`

Describe the bug

type::table doesn't extract the table from the output of type::thing

Steps to reproduce

SELECT * FROM <string> type::table(type::thing(1, 2));
[{"time":"24.292µs","status":"OK","result":["`1:2`"]}]

Expected behaviour

SELECT * FROM <string> type::table(type::thing(1, 2));
[{"time":"24.292µs","status":"OK","result":["1"]}]

SurrealDB version

surreal 1.0.0-beta.6 for linux on x86_64

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Add initial implementation using FoundationDB as a backing store

Is your feature request related to a problem?

The original points for this issue can be seen in #25.

Describe the solution

With the addition of a key-value store implementation for FoundationDB, SurrealDB will be able to run on a thoroughly tested and scalable key-value store, as an alternative to TiKV.

Alternative methods

Currently the only other method for using distributed key-value backed storage is TiKV.

SurrealDB version

surreal 1.0.0-beta.6 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: unknown variant `Polygons`

Describe the bug

Selecting a record that contains a MultiPolygon crashes the database.

Steps to reproduce

Run the following documented query:

UPDATE university:oxford SET locations = {
	type: "MultiPolygon",
	coordinates: [
		[
			[ [10.0, 11.2], [10.5, 11.9], [10.8, 12.0], [10.0, 11.2] ]
		],
		[
			[ [9.0, 11.2], [10.5, 11.9], [10.3, 13.0], [9.0, 11.2] ]
		]
	]
};

Select the record or export the database:

SELECT * FROM university:oxford;

The database crashes with the following error message:

thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Syntax("unknown variant `Polygons`, expected one of `Point`, `Line`, `Polygon`, `MultiPoint`, `MultiLine`, `MultiPolygon`, `Collection`")', lib/src/sql/value/value.rs:97:60

Expected behaviour

Selects should return the record successfully and exporting should work as expected. I have already identified the source of the bug and will prepare and submit a pull request shortly.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: CORS headers are not set on HTTP response when error occurs

Describe the bug

When a request is made which causes an error response (400 or 500 status codes), then the server does not set any CORS headers on the HTTP response. This means that client browsers can not read the error response returned.

Steps to reproduce

Make a POST request to the /sql HTTP endpoint, with a query that has a syntax error. The server returns a 400 error because the request failed, however the CORS headers are not present in the response.
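A minimal reproduction sketch, following the curl pattern used elsewhere in this document and assuming a local server on port 8000 (the exact response body will vary):

curl -i -X POST \
     -H "NS: test" \
     -H "DB: test" \
     -H "Content-Type: application/json" \
     -d "SELECT FROM" \
     http://localhost:8000/sql

# The 400 response arrives without any Access-Control-Allow-Origin header,
# so a browser-based client is unable to read the error body.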

Expected behaviour

The server should respond with the correct CORS headers regardless of whether the request succeeded or failed.

SurrealDB version

surreal 1.0.0-beta.2 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Health Check Endpoint

Is your feature request related to a problem?

When running in production, especially in a Kubernetes environment, it is fairly common practice to include a /health or /health_check endpoint that is publicly available. This way the system can monitor the health status of the running instances.

Describe the solution

Add a REST endpoint, /health, that returns an HTTP status of 200 when all is OK and 500 otherwise.
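Checking the proposed endpoint would then be as simple as the following (hypothetical, since the endpoint is exactly what this issue is requesting):

curl -i http://localhost:8000/health
# Expected: HTTP/1.1 200 OK when healthy, 500 otherwise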

Alternative methods

/health is the de facto standard.

SurrealDB version

all

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Don't block the async runtime for expensive functions

Describe the bug

A few of the so-called synchronous functions are very computationally intensive. Some of them, namely the argon2, pbkdf2, and scrypt hash functions, are computationally intensive by design (to prevent brute force).

Futures are not supposed to synchronously perform too much computation, as they run on threads of the async executor. See the docs on this subject:

An implementation of poll should strive to return quickly, and should not block. Returning quickly prevents unnecessarily clogging up threads or event loops. If it is known ahead of time that a call to poll may end up taking awhile, the work should be offloaded to a thread pool (or something similar) to ensure that poll can return quickly.
(https://docs.rs/futures/latest/futures/future/trait.Future.html)

And yet, the expensive functions are called synchronously:

// Attempts to run any function
pub async fn run(ctx: &Context<'_>, name: &str, args: Vec<Value>) -> Result<Value, Error> {
	match name {
		v if v.starts_with("http") => {
			// HTTP functions are asynchronous
			asynchronous(ctx, name, args).await
		}
		_ => {
			// Other functions are synchronous
			synchronous(ctx, name, args)
		}
	}
}

Steps to reproduce

Performance would severely degrade when one or more of the expensive functions are running.

Expected behaviour

// Attempts to run any function
pub async fn run(ctx: &Context<'_>, name: &str, args: Vec<Value>) -> Result<Value, Error> {
	match name {
		v if v.starts_with("http") => {
			// HTTP functions are asynchronous
			asynchronous(ctx, name, args).await
		}
		v if v.starts_with("crypto") && (v.ends_with("argon2") || ...others...) => {
			// Computationally expensive functions are dispatched to a thread pool
			tokio::task::spawn_blocking(|| synchronous(ctx, name, args)).await
		}
		_ => {
			// Other functions are synchronous
			synchronous(ctx, name, args)
		}
	}
}

The reason this isn't a PR is that the tokio dependency doesn't seem to be available. It could maybe be replicated with std::thread::spawn and an spsc channel; let me know what you think.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Limit number of concurrent futures when fetching remote records

Is your feature request related to a problem?

Currently when processing linked records, we process all records concurrently using try_join_all!, with no limit to the number of futures processed at one time.

let futs = v.iter().map(|v| v.get(ctx, opt, txn, path));
try_join_all(futs).await.map(Into::into)

The files which are affected are:

let futs = v.iter().map(|v| v.get(ctx, opt, txn, path));
try_join_all(futs).await.map(Into::into)

let futs = v.iter().map(|v| v.get(ctx, opt, txn, path));
try_join_all(futs).await.map(Into::into)

let futs = v.iter_mut().map(|v| v.set(ctx, opt, txn, path, val.clone()));
try_join_all(futs).await?;

let futs = v.iter_mut().map(|v| v.set(ctx, opt, txn, path, val.clone()));
try_join_all(futs).await?;

let futs = v.iter_mut().map(|v| v.del(ctx, opt, txn, path));
try_join_all(futs).await?;

let futs = v.iter_mut().map(|v| v.del(ctx, opt, txn, path));
try_join_all(futs).await?;

Describe the solution

We should look into replacing try_join_all with a buffered stream. Something like the following...

futures::stream::iter(futures).buffer_unordered(10);
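For reference, a self-contained sketch of that buffered approach (not the actual SurrealDB code paths; the limit of 10 mirrors the snippet above):

use futures::executor::block_on;
use futures::stream::{self, StreamExt, TryStreamExt};

// Stand-in for the per-record fetch; in SurrealDB this would be the
// `v.get(...)` / `v.set(...)` calls shown above.
async fn fetch(id: u32) -> Result<u32, String> {
    Ok(id * 2)
}

// Process at most 10 fetches concurrently instead of all of them at once.
// Note: buffer_unordered yields results in completion order, not input order.
async fn fetch_all(ids: Vec<u32>) -> Result<Vec<u32>, String> {
    stream::iter(ids)
        .map(fetch)                // build the futures lazily
        .buffer_unordered(10)      // poll at most 10 of them at a time
        .try_collect::<Vec<u32>>() // stop at the first error
        .await
}

fn main() {
    let result = block_on(fetch_all((1..=100).collect()));
    println!("fetched {} records", result.unwrap().len());
}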

Alternative methods

join_all! and try_join_all! now make use of FuturesOrdered for performance reasons if the number of futures is large.

Perhaps we therefore don't need to do anything here...

SurrealDB version

surreal 1.0.0-beta.5 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Range queries on record ids should be ergonomic and fast

Is your feature request related to a problem?

SurrealDB is built upon various ordered and, to the extent they are distributed, range-partitioned key-value stores such as TiKV. This has the potential to make range queries on keys (record ids) very performant. However, SurQL lacks a dedicated syntax or performance guarantees for such queries.

Consider the following timeseries records grouped by game (the first index of each record id is the game name, and the second index is a timestamp of days since product launch):

[
  {
    id: ["Chess", 1],
    players: 50
  },
  {
    id: ["Chess", 2],
    players: 15
  },
  ...296 records omitted...
  {
    id: ["Chess", 299],
    players: 15
  },
  {
    id: ["Chess", 300],
    players: 15
  },
  {
    id: ["Tetris", 1],
    players: 10
  },
  {
    id: ["Tetris", 2],
    players: 12
  },
  ...296 records omitted...
  {
    id: ["Tetris", 299],
    players: 26
  },
  {
    id: ["Tetris", 300],
    players: 23
  }
]

Assuming each node can handle 150 records, a likely partitioning into four nodes would result in the following ranges:

  1. Node 1 gets keys ["Chess", 1] to ["Chess", 150]
  2. Node 2 gets keys ["Chess", 151] to ["Chess", 300]
  3. Node 3 gets keys ["Tetris", 1] to ["Tetris", 150]
  4. Node 4 gets keys ["Tetris", 151] to ["Tetris", 300]

A common query pattern will be to chart the data for a particular game for the last 90 days. Using the game Tetris as an example, that means getting all records between ["Tetris", 210] (inclusive) and ["Tetris", 300] (inclusive). Luckily, these records all reside on Node 4, so the underlying KV-store can retrieve them in a single access (side note: if we were querying many more records, we might hit multiple nodes, but the ordering would make their disk accesses much more efficient and the number of nodes hit would be relatively minimal).

Describe the solution

Idea 1 (.. and ..= to signify Range<Key> and RangeInclusive<Key>, respectively):

SELECT id, players FROM metrics:["Tetris", 210]..["Tetris", 301]
SELECT id, players FROM metrics:["Tetris", 210]..=["Tetris", 300]

Or, if preferable, idea 1.5:

SELECT id, players FROM metrics:["Tetris", 210]..metrics:["Tetris", 301]
SELECT id, players FROM metrics:["Tetris", 210]..=metrics:["Tetris", 300]

Alternative methods

Idea 2 (support normal-SQL's BETWEEN ... AND ... syntax, and make sure it is optimized to use a range lookup from the underlying KV-store):

SELECT id, players FROM metrics WHERE id BETWEEN ["Tetris", 210] AND ["Tetris", 300]

Idea 3 (no new syntax, just make sure the following optimizes to use a range lookup from the underlying KV-store):

SELECT id, players FROM metrics WHERE ["Tetris", 210] <= id AND id <= ["Tetris", 300]

Non-solution: Changing the schema to use a random record id and to have an index on game name and timestamp would throw spatial-locality and, by extension, query performance out the window. Executing SELECT id, players FROM metrics WHERE name = "Tetris" AND timestamp BETWEEN 210 AND 300, assuming the existence of an ordered index on (name, timestamp), would play nicely with that index but then do 90 random accesses to fetch the actual records.

See also: https://discord.com/channels/902568124350599239/902568124350599242/1012746600315105401

SurrealDB version

surreal 1.0.0-beta.6 for linux on x86_64

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: CONTRIBUTING.md seems outdated

Describe the bug

The commands in that document contain -vvv, which is not (or is no longer) supported. Furthermore, running without specifying root authentication seems to lead only to authentication failures.

Steps to reproduce

$ cargo run -- -vvv start memory
    Finished dev [unoptimized + debuginfo] target(s) in 0.17s
     Running `target/debug/surreal -vvv start memory`
error: Found argument '-v' which wasn't expected, or isn't valid in this context

	If you tried to supply `-v` as a value rather than a flag, use `-- -v`

USAGE:
    surreal [SUBCOMMAND]

For more information try --help

Expected behaviour

The commands in that document should just run.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: `datetime` fields can't be set to `NONE`

Describe the bug

Fields defined with a type of datetime cannot be set to NONE. They always default to time::now() and revert to that value even when explicitly set to NONE.

Steps to reproduce

> DEFINE TABLE foo;
[{"time":"49.646µs","status":"OK","result":null}]

> DEFINE FIELD deleted_at ON foo TYPE datetime;
[{"time":"173.153µs","status":"OK","result":null}]

> CREATE foo;
[{"time":"198.576µs","status":"OK","result":[{"deleted_at":"2022-08-29T16:25:02.478436711Z","id":"foo:hx4rbszo4xajq8ilip2f"}]}]

# Notice deleted_at is set to time::now() instead of NONE ^^^

> UPDATE foo:hx4rbszo4xajq8ilip2f SET deleted_at = NONE;
[{"time":"150.578µs","status":"OK","result":[{"deleted_at":"2022-08-29T16:25:32.007420801Z","id":"foo:hx4rbszo4xajq8ilip2f"}]}]

# Again here, instead of setting the value to NONE, it regenerates time::now() and uses that new value instead

# Even if you explicitly set the default value to NONE
> DEFINE FIELD deleted_at ON foo TYPE datetime VALUE $value OR NONE;
[{"time":"26.726µs","status":"OK","result":null}]

# It will still override it with time::now()
> CREATE foo;
[{"time":"128.641µs","status":"OK","result":[{"deleted_at":"2022-08-29T16:44:10.777821074Z","id":"foo:qged50ou8ygw2bv49me1"}]}]

Expected behaviour

  • It should default to NONE when there is no explicit default value defined and there is no ASSERT $value != NONE
  • When explicitly set to NONE, it should update to that value if the field accepts NONE, otherwise it should return an error instead of calling time::now().

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Non-UTF-8 strings return error code 400

Describe the bug

Strings that are not valid UTF-8 cause the database to return the following error:

There is a problem with your request. Refer to the documentation for further information.

Steps to reproduce

> SELECT * FROM is::uuid("67e55044-10b1-426f-9247-bb680e5\0e0c8");
{
    "code":400,
    "details":"Request problems detected",
    "description":"There is a problem with your request. Refer to the documentation for further information.",
    "information":"There was a problem with the database: Parse error on line 1 at character 16 when parsing '::uuid(\"67e55044-10b1-426f-9247-bb680e5\\0e0c8\");'"
}

This is not limited to UUIDs, nor to validation functions.

Expected behaviour

Non-UTF-8 strings should be handled gracefully. In this particular case, the string should be forwarded to is::uuid, which would then return false.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Ensure multi-yield path expression alias outputs are flattened

Describe the bug

When performing a graph traversal query with multi-yield path alias expressions, each additional output field is embedded within an array, instead of being output as a flat result.

Steps to reproduce

Run the following SQL on a blank database:

CREATE person:1, person:2, person:3 RETURN NONE;
RELATE person:1->like->person:2;
RELATE person:3->like->person:2;
SELECT ->?->(person AS a)<-like<-(person AS b)->(like AS c)->person AS people FROM person:1;

The result of the final SELECT query is:

{
    "a": [
        "person:2"
    ],
    "b": [
        [
            "person:3",
            "person:1"
        ]
    ],
    "c": [
        [
            [
                "like:1fm28zli7kd49uzzl42q"
            ],
            [
                "like:ybvh0zbpfzbbsi1mcbai"
            ]
        ]
    ],
    "people": [
        [
            [
                [
                    "person:2"
                ]
            ],
            [
                [
                    "person:2"
                ]
            ]
        ]
    ]
}

Expected behaviour

Run the following SQL on a blank database:

CREATE person:1, person:2, person:3 RETURN NONE;
RELATE person:1->like->person:2;
RELATE person:3->like->person:2;
SELECT ->?->(person AS a)<-like<-(person AS b)->(like AS c)->person AS people FROM person:1;

The result of the final SELECT query should be:

{
    "time": "4.921291ms",
    "status": "OK",
    "result": [
        {
            "a": [
                "person:2"
            ],
            "b": [
                "person:1",
                "person:3"
            ],
            "c": [
                "like:fy41kwb25ik92zxvej9g",
                "like:vpf4yciv2067xtkidpq4"
            ],
            "people": [
                "person:2",
                "person:2"
            ]
        }
    ]
}

SurrealDB version

surreal 1.0.0-beta.5 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: fields with a `UNIQUE` constraint should allow multiple `NONE` values

Describe the bug

If you define a UNIQUE index on a field, the index will only accept a single record with a NONE value for that field.

Steps to reproduce

> DEFINE TABLE foo SCHEMAFULL;
[{"time":"27.67µs","status":"OK","result":null}]

> DEFINE FIELD name ON foo TYPE string;
[{"time":"27.043µs","status":"OK","result":null}]

> DEFINE FIELD national_id ON foo TYPE string;
[{"time":"46.241µs","status":"OK","result":null}]

> DEFINE INDEX national_id_idx ON foo FIELDS national_id UNIQUE;
[{"time":"69.913µs","status":"OK","result":null}]

# The first person with no national_id will be accepted
> CREATE foo SET name = "John Doe";
[{"time":"266.761µs","status":"OK","result":[{"id":"foo:yki2758ba7dwzastfsk1","name":"John Doe","national_id":"NONE"}]}]

# Any subsequent records without a national_id will be rejected
> CREATE foo SET name = "Jane Doe";
[{"time":"168.721µs","status":"ERR","detail":"Database index `national_id_idx` already contains `foo:4vgq4l8u4y211c3ekgn9`"}]

Expected behaviour

I would expect NONE values to be ignored by the unique index, so that multiple records can omit the field.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: add HTTP compression to web requests

Is your feature request related to a problem?

Currently, after a recent update, no web requests make use of HTTP compression, as the warp library does not support setting the output compression based on the Accept-Encoding request header.

Describe the solution

Currently there is no standard solution until seanmonstar/warp#513 is added to the warp library.

Alternative methods

We could implement request compression by taking a similar approach to casper-network/casper-node#2077.

SurrealDB version

surreal 1.0.0-beta.5 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Don't clone values by default when retrieving values

Is your feature request related to a problem?

When fetching nested fields, array values, and remote records, we currently clone every value that is fetched. This has a negative impact on performance.

Describe the solution

Instead of using clone() by default when fetching or comparing fields, we should return Cow<'a, Value> values, so that a value is only cloned when it needs to be updated or written to.

We would also need to ensure that all Value types are based on Cow<'a, Value> values, as opposed to owned Value values.

So the following:

pub async fn get(&self, ctx: &Runtime, opt: &Options, txn: &Transaction, path: &[Part]) -> Result<Self, Error>;

would become:

pub async fn get(&'a self, ctx: &Runtime, opt: &Options, txn: &Transaction, path: &[Part]) -> Result<Cow<'a, Self>, Error>;

In addition, compute() functions would need to return Cow<'a, Value> values.

pub(crate) async fn compute(&'a self, ctx: &Context<'_>, opt: &Options, txn: &Transaction, doc: Option<&Value>) -> Result<Cow<'a, Value>, Error>
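
As an illustration of the intent, here is a minimal standalone sketch (using a simplified stand-in Value type, not SurrealDB's actual types) showing how Cow defers cloning until a mutation actually happens:

use std::borrow::Cow;

// Simplified stand-in for SurrealDB's Value type, purely for illustration.
#[derive(Clone, Debug)]
struct Value(String);

// Returning Cow::Borrowed means the read path performs no allocation at all.
fn get(doc: &Value) -> Cow<'_, Value> {
    Cow::Borrowed(doc)
}

fn main() {
    let doc = Value("hello".to_string());

    // Reading: no clone is made.
    let fetched = get(&doc);
    println!("read: {:?}", fetched);

    // Writing: to_mut() clones only at the point the value must be owned.
    let mut fetched = get(&doc);
    fetched.to_mut().0.push_str(" world");
    println!("written: {:?}", fetched);
}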

Alternative methods

The functionality currently works correctly without this modification, but query performance will improve significantly once this change is made.

SurrealDB version

surreal 1.0.0-beta.5 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Docker container fails to start or run

Describe the bug

When starting the Docker container, it fails to run and instead the following error is returned:

docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/surreal": permission denied: unknown.

Steps to reproduce

Run the following command:

docker run --rm -p 8000:8000 surrealdb/surrealdb:latest start

Expected behaviour

The container should run without issue and start the SurrealDB server.

SurrealDB version

surreal 1.0.0-beta.2 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: our validation and parser functions are not robust enough

Describe the bug

Currently these functions accept some invalid inputs and reject some valid ones. We implement them using regular expressions. While this is OK for some applications, for a database management system it is very important that we be as correct as possible, especially since these functions are used to constrain the data that will be stored in the database.

Steps to reproduce

is::domain

# Accepts invalid input

> SELECT * FROM is::domain("example-.com");
[{"time":"370.978µs","status":"OK","result":[true]}]

# Domain labels cannot end with a "-"
# Rejects valid input

> SELECT * FROM is::domain("食狮.**");
[{"time":"263.639µs","status":"OK","result":[false]}]

# That is a valid internationalised domain name

is::email

# Accepts invalid input

> SELECT * FROM is::email("[email protected]");
[{"time":"4.776857ms","status":"OK","result":[true]}]

# Email addresses cannot contain empty labels in the local-part
# Rejects valid input

> SELECT * FROM is::email("user@[fd79:cdcb:38cc:9dd:f686:e06d:32f3:c123]");
[{"time":"315.276µs","status":"OK","result":[false]}]

# IP addresses are valid email hosts

Of all the inputs here, email addresses are probably the hardest to get right.

is::uuid

# Returns 400 when input is not valid UTF-8

> SELECT * FROM is::uuid("67e55044-10b1-426f-9247-bb680e5\0e0c8");
{"code":400,"details":"Request problems detected","description":"There is a problem with your request. Refer to the documentation for further information.","information":"There was a problem with the database: Parse error on line 1 at character 16 when parsing '::uuid(\"67e55044-10b1-426f-9247-bb680e5\\0e0c8\");'"}

# This is not unique to UUIDs. It happens to all the functions I tested.
# Rejects valid input

> SELECT * FROM is::uuid("67e55044-10b1-426f-9247-bb680e5fe0c8");
[{"time":"351.335µs","status":"OK","result":[false]}]

# This is a valid UUIDv4

Expected behaviour

All parser and validation functions should parse correctly and return correct results.

Proposal

We could try to fix our regular expressions, but it would be very hard to get them right and a pain to maintain them. Instead, I would like to propose that we delegate this functionality to external crates.

  • We already depend directly on the uuid crate. We can use that to validate UUIDs.
  • We already depend indirectly on the semver crate.
  • For domain names and email addresses, I propose we use the addr crate. I maintain the addr crate but that is not the reason I'm nominating it. It is small, fast and well-tested. It even supports no_std. It avoids heap allocations even when the std feature is enabled.

All these crates are lightweight, popular, fast and well tested. After these changes, the only new crates added to Cargo.lock would be addr, which is 92.6 kB on crates.io, and psl-types, which is just 7.96 kB. As a bonus, keeping the tests for this functionality in the external crates, rather than importing them, will make our own test suite run faster.
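
As a rough sketch of what the delegation could look like (the uuid call is a real API; the exact addr function names here are assumptions on my part and may differ):

// Sketch only: delegate validation to dedicated crates instead of regular expressions.

pub fn is_uuid(input: &str) -> bool {
    // uuid::Uuid::parse_str is part of the uuid crate's stable API.
    uuid::Uuid::parse_str(input).is_ok()
}

pub fn is_domain(input: &str) -> bool {
    // Assumed addr API; the exact parser entry point may be named differently.
    addr::parse_domain_name(input).is_ok()
}

pub fn is_email(input: &str) -> bool {
    // Assumed addr API as well.
    addr::parse_email_address(input).is_ok()
}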

@tobiemh Let me know if you would like to go ahead with this. I will be happy to put together a PR right away.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Build fails on Linux while compiling gRPC (grpcio-sys)

Describe the bug

While trying to build with cargo build, the build fails:

chris@cb ~/s/surrealdb (main)> cargo build
   Compiling proc-macro2 v1.0.43
   Compiling unicode-ident v1.0.3
   Compiling version_check v0.9.4
   Compiling quote v1.0.21
   Compiling syn v1.0.99
   Compiling cfg-if v1.0.0
   Compiling autocfg v1.1.0
   Compiling libc v0.2.132
   Compiling memchr v2.5.0
   Compiling cc v1.0.73
   Compiling once_cell v1.13.1
   Compiling typenum v1.15.0
   Compiling log v0.4.17
   Compiling futures-core v0.3.23
   Compiling pin-project-lite v0.2.9
   Compiling pkg-config v0.3.25
   Compiling bytes v1.2.1
   Compiling futures-io v0.3.23
   Compiling fastrand v1.8.0
   Compiling bitflags v1.3.2
   Compiling futures-sink v0.3.23
   Compiling serde_derive v1.0.143
   Compiling either v1.8.0
   Compiling serde v1.0.143
   Compiling anyhow v1.0.62
   Compiling hashbrown v0.12.3
   Compiling subtle v2.4.1
   Compiling lazy_static v1.4.0
   Compiling scopeguard v1.1.0
   Compiling itoa v1.0.3
   Compiling fnv v1.0.7
   Compiling pin-utils v0.1.0
   Compiling futures-channel v0.3.23
   Compiling futures-task v0.3.23
   Compiling futures-util v0.3.23
   Compiling regex-syntax v0.6.27
   Compiling futures v0.1.31
   Compiling libm v0.2.5
   Compiling base64 v0.13.0
   Compiling ppv-lite86 v0.2.16
   Compiling remove_dir_all v0.5.3
   Compiling percent-encoding v2.1.0
   Compiling smallvec v1.9.0
   Compiling matches v0.1.9
   Compiling cpufeatures v0.2.2
   Compiling byteorder v1.4.3
   Compiling opaque-debug v0.3.0
   Compiling glob v0.3.0
   Compiling tinyvec_macros v0.1.0
   Compiling unicode-segmentation v1.9.0
   Compiling unicode-bidi v0.3.8
   Compiling parking v2.0.0
   Compiling waker-fn v1.1.0
   Compiling cache-padded v1.2.0
   Compiling httparse v1.7.1
   Compiling multimap v0.8.3
   Compiling bindgen v0.57.0
   Compiling fixedbitset v0.4.2
   Compiling openssl-probe v0.1.5
   Compiling fixedbitset v0.2.0
   Compiling same-file v1.0.6
   Compiling parking_lot_core v0.8.5
   Compiling event-listener v2.5.3
   Compiling ryu v1.0.11
   Compiling shlex v0.1.1
   Compiling rustc-hash v1.1.0
   Compiling lazycell v1.3.0
   Compiling peeking_take_while v0.1.2
   Compiling getrandom v0.1.16
   Compiling proc-macro-hack v0.5.19
   Compiling async-task v4.3.0
   Compiling semver v1.0.13
   Compiling wasm-bindgen-shared v0.2.82
   Compiling openssl v0.10.41
   Compiling crc32fast v1.3.2
   Compiling protobuf v2.27.1
   Compiling foreign-types-shared v0.1.1
   Compiling mime v0.3.16
   Compiling try-lock v0.2.3
   Compiling async-trait v0.1.57
   Compiling cpuid-bool v0.2.0
   Compiling httpdate v1.0.2
   Compiling const_fn v0.4.9
   Compiling serde_json v1.0.83
   Compiling adler v1.0.2
   Compiling native-tls v0.2.10
   Compiling bumpalo v3.11.0
   Compiling crossbeam-utils v0.8.11
   Compiling tower-service v0.3.2
   Compiling spin v0.5.2
   Compiling strsim v0.10.0
   Compiling encoding_rs v0.8.31
   Compiling untrusted v0.7.1
   Compiling atomic-waker v1.0.0
   Compiling wasm-bindgen v0.2.82
   Compiling curl v0.4.44
   Compiling ident_case v1.0.1
   Compiling stable_deref_trait v1.2.0
   Compiling io-lifetimes v0.7.3
   Compiling isahc v0.9.14
   Compiling base64ct v1.5.1
   Compiling relative-path v1.7.2
   Compiling alloc-no-stdlib v2.0.3
   Compiling http-types v2.12.0
   Compiling bytes v0.5.6
   Compiling hex v0.4.3
   Compiling prometheus v0.12.0
   Compiling ipnet v2.5.0
   Compiling infer v0.2.3
   Compiling rustix v0.35.9
   Compiling paste v1.0.8
   Compiling linux-raw-sys v0.0.46
   Compiling utf-8 v0.7.6
   Compiling safemem v0.3.3
   Compiling time-macros v0.2.4
   Compiling num_threads v0.1.6
   Compiling endian-type v0.1.2
   Compiling minimal-lexical v0.2.1
   Compiling any_ascii v0.1.7
   Compiling robust v0.2.3
   Compiling quick-error v1.2.3
   Compiling iana-time-zone v0.1.45
   Compiling futures-timer v3.0.2
   Compiling arc-swap v1.5.1
   Compiling urlencoding v2.1.0
   Compiling os_str_bytes v6.3.0
   Compiling deunicode v1.3.1
   Compiling utf8parse v0.2.0
   Compiling scoped-tls v1.0.0
   Compiling trice v0.1.0
   Compiling unicode-width v0.1.9
   Compiling half v1.8.2
   Compiling termcolor v1.1.3
   Compiling textwrap v0.15.0
   Compiling libloading v0.7.3
   Compiling instant v0.1.12
   Compiling geographiclib-rs v0.2.1
   Compiling tracing-core v0.1.29
   Compiling thread_local v1.1.4
   Compiling itertools v0.9.0
   Compiling itertools v0.10.3
   Compiling form_urlencoded v1.0.1
   Compiling tinyvec v1.6.0
   Compiling pem v1.1.0
   Compiling concurrent-queue v1.2.4
   Compiling nibble_vec v0.1.0
   Compiling value-bag v1.0.0-alpha.9
   Compiling generic-array v0.14.6
   Compiling nom v5.1.2
   Compiling standback v0.2.17
   Compiling unicase v2.6.0
   Compiling time v0.2.27
   Compiling proc-macro-error-attr v1.0.4
   Compiling cookie v0.14.4
   Compiling proc-macro-error v1.0.4
   Compiling openssl-src v111.22.0+1.1.1q
   Compiling cmake v0.1.48
   Compiling http v0.2.8
   Compiling slab v0.4.7
   Compiling lock_api v0.4.7
   Compiling tokio v1.20.1
   Compiling indexmap v1.9.1
   Compiling num-traits v0.2.15
   Compiling num-integer v0.1.45
   Compiling async-io v1.8.0
   Compiling num-bigint v0.4.3
   Compiling hash32 v0.2.1
   Compiling walkdir v2.3.2
   Compiling async-lock v2.5.0
   Compiling foreign-types v0.3.2
   Compiling miniz_oxide v0.5.3
   Compiling heck v0.3.3
   Compiling clang-sys v1.3.3
   Compiling alloc-stdlib v0.2.1
   Compiling lexical-sort v0.3.1
   Compiling clap_lex v0.2.4
   Compiling fuzzy-matcher v0.3.7
   Compiling radix_trie v0.2.1
   Compiling libz-sys v1.1.8
   Compiling libnghttp2-sys v0.1.7+1.45.0
   Compiling curl-sys v0.4.56+curl-7.83.1
   Compiling ring v0.16.20
   Compiling rquickjs-sys v0.1.6
   Compiling boringssl-src v0.2.0
   Compiling openssl-sys v0.9.75
   Compiling brotli-decompressor v2.3.2
   Compiling async-channel v1.7.1
   Compiling unicode-normalization v0.1.21
   Compiling aho-corasick v0.7.18
   Compiling futures-lite v1.12.0
   Compiling twoway v0.1.8
   Compiling buf_redux v0.8.4
   Compiling nom v7.1.1
   Compiling flate2 v1.0.24
   Compiling http-body v0.4.5
   Compiling headers-core v0.2.0
   Compiling rustc_version v0.4.0
   Compiling sluice v0.5.5
   Compiling which v4.2.5
   Compiling tempfile v3.3.0
   Compiling mime_guess v2.0.4
   Compiling spin v0.9.4
   Compiling spinning_top v0.2.4
   Compiling bitmaps v2.1.0
   Compiling idna v0.2.3
   Compiling flume v0.9.2
   Compiling prost-build v0.9.0
   Compiling prost-build v0.7.0
   Compiling petgraph v0.6.2
   Compiling petgraph v0.5.1
   Compiling heapless v0.7.16
   Compiling regex v1.6.0
   Compiling async-executor v1.4.1
   Compiling blocking v1.2.0
   Compiling brotli v3.3.4
   Compiling num_cpus v1.13.1
   Compiling getrandom v0.2.7
   Compiling socket2 v0.4.4
   Compiling procfs v0.9.1
   Compiling atty v0.2.14
   Compiling time v0.3.13
   Compiling time v0.1.44
   Compiling dirs-sys-next v0.1.2
   Compiling nix v0.24.2
   Compiling sized-chunks v0.6.5
   Compiling crypto-common v0.1.6
   Compiling block-buffer v0.10.2
   Compiling digest v0.9.0
   Compiling block-buffer v0.9.0
   Compiling cipher v0.2.5
   Compiling universal-hash v0.4.1
   Compiling crypto-mac v0.10.1
   Compiling aead v0.3.2
   Compiling inout v0.1.3
   Compiling rand_core v0.6.3
   Compiling nanorand v0.7.0
   Compiling rand_core v0.5.1
   Compiling colored v1.9.3
   Compiling clap v3.2.17
   Compiling approx v0.5.1
   Compiling rmp v0.8.11
   Compiling float_next_after v0.1.5
   Compiling parking_lot v0.11.2
   Compiling dirs-next v2.0.0
   Compiling digest v0.10.3
   Compiling sha2 v0.9.9
   Compiling sha-1 v0.9.8
   Compiling polyval v0.4.5
   Compiling hmac v0.10.1
   Compiling aes-soft v0.6.4
   Compiling ctr v0.6.0
   Compiling cipher v0.4.3
   Compiling rand_chacha v0.3.1
   Compiling password-hash v0.4.2
   Compiling rand_xoshiro v0.6.0
   Compiling rand_chacha v0.2.2
   Compiling cexpr v0.4.0
   Compiling dmp v0.1.1
   Compiling ghash v0.3.1
   Compiling hkdf v0.10.0
   Compiling hmac v0.12.1
   Compiling sha-1 v0.10.0
   Compiling sha2 v0.10.2
   Compiling blake2 v0.10.4
   Compiling md-5 v0.10.1
   Compiling aes v0.6.0
   Compiling salsa20 v0.10.2
   Compiling rand v0.7.3
   Compiling rand v0.8.5
   Compiling imbl v1.0.1
   Compiling headers v0.3.7
   Compiling rstar v0.9.3
   Compiling aes-gcm v0.8.0
   Compiling pbkdf2 v0.11.0
   Compiling argon2 v0.4.1
   Compiling scrypt v0.10.0
   Compiling fd-lock v3.0.6
   Compiling sct v0.6.1
   Compiling webpki v0.21.4
   Compiling toml v0.5.9
   Compiling nanoid v0.4.0
   Compiling wasm-bindgen-backend v0.2.82
   Compiling darling_core v0.14.1
   Compiling wasm-bindgen-macro-support v0.2.82
   Compiling rquickjs-core v0.1.6
   Compiling ctor v0.1.23
   Compiling thiserror-impl v1.0.32
   Compiling futures-macro v0.3.23
   Compiling tokio-macros v1.8.0
   Compiling tracing-attributes v0.1.22
   Compiling prost-derive v0.7.0
   Compiling prost-derive v0.9.0
   Compiling pin-project-internal v1.0.12
   Compiling derive-new v0.5.9
   Compiling openssl-macros v0.1.0
   Compiling time-macros-impl v0.1.2
   Compiling async-recursion v1.0.0
   Compiling surrealdb-derive v0.3.0
   Compiling wasm-bindgen-macro v0.2.82
   Compiling darling_macro v0.14.1
   Compiling grpcio-sys v0.8.1
   Compiling time-macros v0.1.1
   Compiling darling v0.14.1
   Compiling mio v0.8.4
   Compiling want v0.3.0
   Compiling polling v2.2.0
   Compiling kv-log-macro v1.0.7
   Compiling rustls v0.19.1
   Compiling fail v0.4.0
   Compiling multipart v0.18.0
   Compiling fern v0.6.1
   Compiling rustyline v10.0.0
   Compiling pin-project v1.0.12
   Compiling flume v0.10.14
   Compiling js-sys v0.3.59
   Compiling tracing v0.1.36
   Compiling thiserror v1.0.32
   Compiling prost v0.9.0
   Compiling async-global-executor v2.2.0
   Compiling proc-macro-crate v1.2.1
   Compiling simple_asn1 v0.6.2
   Compiling tracing-futures v0.2.5
   Compiling async-std v1.12.0
   Compiling prost v0.7.0
   Compiling prost-types v0.9.0
   Compiling rquickjs-macro v0.1.6
   Compiling prost-types v0.7.0
   Compiling grpcio-compiler v0.10.0
   Compiling web-sys v0.3.59
   Compiling protobuf-build v0.12.3
   Compiling futures-executor v0.3.23
   Compiling futures v0.3.23
   Compiling tikv-client-proto v0.1.0
   Compiling tokio-util v0.7.3
   Compiling echodb v0.3.0
   Compiling tokio-util v0.6.10
   Compiling tokio-rustls v0.22.0
   Compiling async-compression v0.3.14
   Compiling tokio-stream v0.1.9
   Compiling rquickjs v0.1.6
   Compiling h2 v0.3.14
   Compiling url v2.2.2
   Compiling serde_urlencoded v0.7.1
   Compiling serde_qs v0.8.5
   Compiling geo-types v0.7.6
   Compiling bigdecimal v0.3.0
   Compiling storekey v0.3.0
   Compiling chrono v0.4.22
   Compiling uuid v1.1.2
   Compiling rmp-serde v1.1.0
   Compiling serde_cbor v0.11.2
   Compiling tungstenite v0.14.0
   Compiling geo v0.22.1
   Compiling jsonwebtoken v8.1.1
   Compiling tokio-tungstenite v0.15.0
   Compiling tokio-native-tls v0.3.0
   Compiling hyper v0.14.20
   Compiling http-client v6.5.3
   Compiling surf v2.3.2
   Compiling hyper-tls v0.5.0
   Compiling warp v0.3.2
   Compiling reqwest v0.11.11
error: failed to run custom build command for `grpcio-sys v0.8.1`

Caused by:
  process didn't exit successfully: `/home/chris/src/surrealdb/target/debug/build/grpcio-sys-1ce1c20ccd61db10/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-changed=grpc_wrap.cc
  cargo:rerun-if-changed=grpc
  cargo:rerun-if-env-changed=UPDATE_BIND
  cargo:rerun-if-env-changed=CARGO_CFG_TARGET_OS
  cargo:rerun-if-env-changed=GRPCIO_SYS_USE_PKG_CONFIG
  cargo:rerun-if-env-changed=CARGO_CFG_TARGET_OS
  cargo:rerun-if-env-changed=CARGO_CFG_TARGET_OS
  cargo:rerun-if-env-changed=CARGO_CFG_TARGET_OS
  cargo:rerun-if-env-changed=CXX
  OPT_LEVEL = Some("0")
  TARGET = Some("x86_64-unknown-linux-gnu")
  HOST = Some("x86_64-unknown-linux-gnu")
  CC_x86_64-unknown-linux-gnu = None
  CC_x86_64_unknown_linux_gnu = None
  HOST_CC = None
  CC = None
  CFLAGS_x86_64-unknown-linux-gnu = None
  CFLAGS_x86_64_unknown_linux_gnu = None
  HOST_CFLAGS = None
  CFLAGS = None
  CRATE_CC_NO_DEFAULTS = None
  DEBUG = Some("true")
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  cargo:rustc-link-search=native=/home/chris/src/surrealdb/target/debug/build/libz-sys-a9b86f3dea7681f0/out/build
  cargo:rustc-link-search=native=/home/chris/src/surrealdb/target/debug/build/libz-sys-a9b86f3dea7681f0/out/lib
  CMAKE_TOOLCHAIN_FILE_x86_64-unknown-linux-gnu = None
  CMAKE_TOOLCHAIN_FILE_x86_64_unknown_linux_gnu = None
  HOST_CMAKE_TOOLCHAIN_FILE = None
  CMAKE_TOOLCHAIN_FILE = None
  CMAKE_GENERATOR_x86_64-unknown-linux-gnu = None
  CMAKE_GENERATOR_x86_64_unknown_linux_gnu = None
  HOST_CMAKE_GENERATOR = None
  CMAKE_GENERATOR = None
  CMAKE_PREFIX_PATH_x86_64-unknown-linux-gnu = None
  CMAKE_PREFIX_PATH_x86_64_unknown_linux_gnu = None
  HOST_CMAKE_PREFIX_PATH = None
  CMAKE_PREFIX_PATH = Some("/home/chris/src/surrealdb/target/debug/build/libz-sys-a9b86f3dea7681f0/out/build")
  CMAKE_x86_64-unknown-linux-gnu = None
  CMAKE_x86_64_unknown_linux_gnu = None
  HOST_CMAKE = None
  CMAKE = None
  running: "cmake" "/home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/grpcio-sys-0.8.1/grpc" "-DgRPC_INSTALL=false" "-DgRPC_BUILD_CSHARP_EXT=false" "-DgRPC_BUILD_CODEGEN=false" "-DgRPC_BENCHMARK_PROVIDER=none" "-DgRPC_SSL_PROVIDER=package" "-DgRPC_ZLIB_PROVIDER=package" "-DCMAKE_INSTALL_PREFIX=/home/chris/src/surrealdb/target/debug/build/grpcio-sys-f8d84882fc745783/out" "-DCMAKE_C_FLAGS= -ffunction-sections -fdata-sections -fPIC -m64" "-DCMAKE_C_COMPILER=/usr/bin/cc" "-DCMAKE_CXX_FLAGS= -ffunction-sections -fdata-sections -fPIC -m64" "-DCMAKE_CXX_COMPILER=/usr/bin/c++" "-DCMAKE_ASM_FLAGS= -ffunction-sections -fdata-sections -fPIC -m64" "-DCMAKE_ASM_COMPILER=/usr/bin/cc" "-DCMAKE_BUILD_TYPE=Debug"
  -- The C compiler identification is GNU 11.2.0
  -- The CXX compiler identification is GNU 11.2.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Looking for pthread.h
  -- Looking for pthread.h - found
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
  -- Found Threads: TRUE
  -- Looking for res_servicename in resolv
  -- Looking for res_servicename in resolv - not found
  -- Looking for gethostbyname in nsl
  -- Looking for gethostbyname in nsl - found
  -- Looking for gethostbyname in socket
  -- Looking for gethostbyname in socket - not found
  -- Looking for socket in socket
  -- Looking for socket in socket - not found
  -- Looking for clock_gettime in rt
  -- Looking for clock_gettime in rt - found
  -- Looking for include file sys/types.h
  -- Looking for include file sys/types.h - found
  -- Looking for include file sys/socket.h
  -- Looking for include file sys/socket.h - found
  -- Looking for include file arpa/inet.h
  -- Looking for include file arpa/inet.h - found
  -- Looking for include file arpa/nameser_compat.h
  -- Looking for include file arpa/nameser_compat.h - found
  -- Looking for include file arpa/nameser.h
  -- Looking for include file arpa/nameser.h - found
  -- Looking for include file assert.h
  -- Looking for include file assert.h - found
  -- Looking for include file errno.h
  -- Looking for include file errno.h - found
  -- Looking for include file fcntl.h
  -- Looking for include file fcntl.h - found
  -- Looking for include file inttypes.h
  -- Looking for include file inttypes.h - found
  -- Looking for include file limits.h
  -- Looking for include file limits.h - found
  -- Looking for include file malloc.h
  -- Looking for include file malloc.h - found
  -- Looking for include file memory.h
  -- Looking for include file memory.h - found
  -- Looking for include file netdb.h
  -- Looking for include file netdb.h - found
  -- Looking for include file netinet/in.h
  -- Looking for include file netinet/in.h - found
  -- Looking for include file netinet/tcp.h
  -- Looking for include file netinet/tcp.h - found
  -- Looking for include file net/if.h
  -- Looking for include file net/if.h - found
  -- Looking for include file signal.h
  -- Looking for include file signal.h - found
  -- Looking for include file socket.h
  -- Looking for include file socket.h - not found
  -- Looking for include file stdbool.h
  -- Looking for include file stdbool.h - found
  -- Looking for include file stdint.h
  -- Looking for include file stdint.h - found
  -- Looking for include file stdlib.h
  -- Looking for include file stdlib.h - found
  -- Looking for include file strings.h
  -- Looking for include file strings.h - found
  -- Looking for include file string.h
  -- Looking for include file string.h - found
  -- Looking for include file stropts.h
  -- Looking for include file stropts.h - not found
  -- Looking for include file sys/ioctl.h
  -- Looking for include file sys/ioctl.h - found
  -- Looking for include file sys/param.h
  -- Looking for include file sys/param.h - found
  -- Looking for include file sys/select.h
  -- Looking for include file sys/select.h - found
  -- Looking for include file sys/stat.h
  -- Looking for include file sys/stat.h - found
  -- Looking for include file sys/time.h
  -- Looking for include file sys/time.h - found
  -- Looking for include file sys/uio.h
  -- Looking for include file sys/uio.h - found
  -- Looking for include file time.h
  -- Looking for include file time.h - found
  -- Looking for include file dlfcn.h
  -- Looking for include file dlfcn.h - found
  -- Looking for include file unistd.h
  -- Looking for include file unistd.h - found
  -- Looking for include files winsock2.h, windows.h
  -- Looking for include files winsock2.h, windows.h - not found
  -- Looking for 3 include files winsock2.h, ..., windows.h
  -- Looking for 3 include files winsock2.h, ..., windows.h - not found
  -- Looking for include files winsock.h, windows.h
  -- Looking for include files winsock.h, windows.h - not found
  -- Looking for include file windows.h
  -- Looking for include file windows.h - not found
  -- Performing Test HAVE_SOCKLEN_T
  -- Performing Test HAVE_SOCKLEN_T - Success
  -- Performing Test HAVE_TYPE_SOCKET
  -- Performing Test HAVE_TYPE_SOCKET - Failed
  -- Performing Test HAVE_BOOL_T
  -- Performing Test HAVE_BOOL_T - Success
  -- Performing Test HAVE_SSIZE_T
  -- Performing Test HAVE_SSIZE_T - Success
  -- Performing Test HAVE_LONGLONG
  -- Performing Test HAVE_LONGLONG - Success
  -- Performing Test HAVE_SIG_ATOMIC_T
  -- Performing Test HAVE_SIG_ATOMIC_T - Success
  -- Performing Test HAVE_STRUCT_ADDRINFO
  -- Performing Test HAVE_STRUCT_ADDRINFO - Success
  -- Performing Test HAVE_STRUCT_IN6_ADDR
  -- Performing Test HAVE_STRUCT_IN6_ADDR - Success
  -- Performing Test HAVE_STRUCT_SOCKADDR_IN6
  -- Performing Test HAVE_STRUCT_SOCKADDR_IN6 - Success
  -- Performing Test HAVE_STRUCT_SOCKADDR_STORAGE
  -- Performing Test HAVE_STRUCT_SOCKADDR_STORAGE - Success
  -- Performing Test HAVE_STRUCT_TIMEVAL
  -- Performing Test HAVE_STRUCT_TIMEVAL - Success
  -- Looking for AF_INET6
  -- Looking for AF_INET6 - found
  -- Looking for O_NONBLOCK
  -- Looking for O_NONBLOCK - found
  -- Looking for FIONBIO
  -- Looking for FIONBIO - found
  -- Looking for SIOCGIFADDR
  -- Looking for SIOCGIFADDR - found
  -- Looking for MSG_NOSIGNAL
  -- Looking for MSG_NOSIGNAL - found
  -- Looking for PF_INET6
  -- Looking for PF_INET6 - found
  -- Looking for SO_NONBLOCK
  -- Looking for SO_NONBLOCK - not found
  -- Looking for CLOCK_MONOTONIC
  -- Looking for CLOCK_MONOTONIC - found
  -- Performing Test HAVE_SOCKADDR_IN6_SIN6_SCOPE_ID
  -- Performing Test HAVE_SOCKADDR_IN6_SIN6_SCOPE_ID - Success
  -- Performing Test HAVE_LL
  -- Performing Test HAVE_LL - Success
  -- Looking for bitncmp
  -- Looking for bitncmp - not found
  -- Looking for closesocket
  -- Looking for closesocket - not found
  -- Looking for CloseSocket
  -- Looking for CloseSocket - not found
  -- Looking for connect
  -- Looking for connect - found
  -- Looking for fcntl
  -- Looking for fcntl - found
  -- Looking for freeaddrinfo
  -- Looking for freeaddrinfo - found
  -- Looking for getaddrinfo
  -- Looking for getaddrinfo - found
  -- Looking for getenv
  -- Looking for getenv - found
  -- Looking for gethostbyaddr
  -- Looking for gethostbyaddr - found
  -- Looking for gethostbyname
  -- Looking for gethostbyname - found
  -- Looking for gethostname
  -- Looking for gethostname - found
  -- Looking for getnameinfo
  -- Looking for getnameinfo - found
  -- Looking for getservbyport_r
  -- Looking for getservbyport_r - found
  -- Looking for gettimeofday
  -- Looking for gettimeofday - found
  -- Looking for if_indextoname
  -- Looking for if_indextoname - found
  -- Looking for inet_net_pton
  -- Looking for inet_net_pton - not found
  -- Looking for inet_ntop
  -- Looking for inet_ntop - found
  -- Looking for inet_pton
  -- Looking for inet_pton - found
  -- Looking for ioctl
  -- Looking for ioctl - found
  -- Looking for ioctlsocket
  -- Looking for ioctlsocket - not found
  -- Looking for IoctlSocket
  -- Looking for IoctlSocket - not found
  -- Looking for recv
  -- Looking for recv - found
  -- Looking for recvfrom
  -- Looking for recvfrom - found
  -- Looking for send
  -- Looking for send - found
  -- Looking for setsockopt
  -- Looking for setsockopt - found
  -- Looking for socket
  -- Looking for socket - found
  -- Looking for strcasecmp
  -- Looking for strcasecmp - found
  -- Looking for strcmpi
  -- Looking for strcmpi - not found
  -- Looking for strdup
  -- Looking for strdup - found
  -- Looking for stricmp
  -- Looking for stricmp - not found
  -- Looking for strncasecmp
  -- Looking for strncasecmp - found
  -- Looking for strncmpi
  -- Looking for strncmpi - not found
  -- Looking for strnicmp
  -- Looking for strnicmp - not found
  -- Looking for writev
  -- Looking for writev - found
  -- Looking for __system_property_get
  -- Looking for __system_property_get - not found
  -- Found OpenSSL: /home/chris/src/surrealdb/target/debug/build/openssl-sys-53d5f7ede8b04507/out/openssl-build/install/lib/libcrypto.a (found version "1.1.1q")
  -- Found ZLIB: /home/chris/src/surrealdb/target/debug/build/libz-sys-a9b86f3dea7681f0/out/lib/libz.a (found version "1.2.11")
  -- Configuring done
  -- Generating done
  -- Build files have been written to: /home/chris/src/surrealdb/target/debug/build/grpcio-sys-f8d84882fc745783/out/build
  running: "cmake" "--build" "." "--target" "grpc" "--config" "Debug" "--parallel" "32"
  [  0%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/bitstate.cc.o
  [  0%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_log_severity.dir/log_severity.cc.o
  [  0%] Building CXX object third_party/abseil-cpp/absl/numeric/CMakeFiles/absl_int128.dir/int128.cc.o
  [  0%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_civil_time.dir/internal/cctz/src/civil_time_detail.cc.o
  [  0%] Building C object CMakeFiles/address_sorting.dir/third_party/address_sorting/address_sorting.c.o
  [  0%] Building C object CMakeFiles/address_sorting.dir/third_party/address_sorting/address_sorting_posix.c.o
  [  0%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/compile.cc.o
  [  0%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_spinlock_wait.dir/internal/spinlock_wait.cc.o
  [  0%] Building C object CMakeFiles/address_sorting.dir/third_party/address_sorting/address_sorting_windows.c.o
  [  0%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/filtered_re2.cc.o
  [  0%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time_zone.dir/internal/cctz/src/time_zone_format.cc.o
  [  0%] Building CXX object third_party/abseil-cpp/absl/hash/CMakeFiles/absl_city.dir/internal/city.cc.o
  [  0%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time_zone.dir/internal/cctz/src/time_zone_if.cc.o
  [  0%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/mimics_pcre.cc.o
  [  1%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time_zone.dir/internal/cctz/src/time_zone_info.cc.o
  [  1%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time_zone.dir/internal/cctz/src/time_zone_fixed.cc.o
  [  3%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/onepass.cc.o
  [  1%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/dfa.cc.o
  [  3%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/parse.cc.o
  [  3%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time_zone.dir/internal/cctz/src/time_zone_libc.cc.o
  [  3%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time_zone.dir/internal/cctz/src/time_zone_impl.cc.o
  [  3%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/decode_fast.c.o
  [  3%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/nfa.cc.o
  [  3%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/decode.c.o
  [  3%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_exponential_biased.dir/internal/exponential_biased.cc.o
  [  3%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time_zone.dir/internal/cctz/src/time_zone_lookup.cc.o
  [  3%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time_zone.dir/internal/cctz/src/time_zone_posix.cc.o
  [  3%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/def.c.o
  [  3%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time_zone.dir/internal/cctz/src/zone_info_source.cc.o
  [  3%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/perl_groups.cc.o
  [  3%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/prefilter.cc.o
  [  3%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares__close_sockets.c.o
  [  3%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/prefilter_tree.cc.o
  [  3%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/prog.cc.o
  [  3%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/encode.c.o
  [  3%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares__get_hostent.c.o
  [  3%] Linking C static library libaddress_sorting.a
  [  3%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/re2.cc.o
  [  3%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/json_decode.c.o
  [  5%] Linking CXX static library libabsl_spinlock_wait.a
  [  5%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/json_encode.c.o
  [  6%] Linking CXX static library libabsl_city.a
  [  6%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares__read_line.c.o
  [  6%] Linking CXX static library libabsl_exponential_biased.a
  [  6%] Built target address_sorting
  [  6%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares__timeval.c.o
  [  6%] Built target absl_spinlock_wait
  [  6%] Linking CXX static library libabsl_log_severity.a
  [  6%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/msg.c.o
  [  6%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/reflection.c.o
  [  6%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/regexp.cc.o
  [  6%] Built target absl_city
  [  6%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/set.cc.o
  [  6%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/table.c.o
  [  6%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/text_encode.c.o
  [  6%] Built target absl_exponential_biased
  [  6%] Building C object CMakeFiles/upb.dir/third_party/upb/upb/upb.c.o
  [  6%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_android.c.o
  [  6%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/stringpiece.cc.o
  [  6%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_cancel.c.o
  [  6%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/simplify.cc.o
  [  6%] Built target absl_log_severity
  [  6%] Building C object CMakeFiles/upb.dir/src/core/ext/upb-generated/google/protobuf/descriptor.upb.c.o
  [  6%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_data.c.o
  [  6%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_destroy.c.o
  [  6%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_expand_name.c.o
  [  6%] Linking CXX static library libabsl_civil_time.a
  [  6%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/unicode_casefold.cc.o
  [  8%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_expand_string.c.o
  [  8%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_fds.c.o
  [  8%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_free_hostent.c.o
  [  8%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/tostring.cc.o
  [ 10%] Linking C static library libupb.a
  [ 10%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_raw_logging_internal.dir/internal/raw_logging.cc.o
  [ 10%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_getenv.c.o
  [ 10%] Linking CXX static library libabsl_int128.a
  [ 12%] Building CXX object third_party/re2/CMakeFiles/re2.dir/re2/unicode_groups.cc.o
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_free_string.c.o
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_gethostbyaddr.c.o
  [ 12%] Building CXX object third_party/re2/CMakeFiles/re2.dir/util/rune.cc.o
  [ 12%] Built target absl_civil_time
  [ 12%] Building CXX object third_party/re2/CMakeFiles/re2.dir/util/strutil.cc.o
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_gethostbyname.c.o
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_getnameinfo.c.o
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_getsock.c.o
  [ 12%] Built target absl_int128
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_init.c.o
  [ 12%] Built target upb
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_parse_aaaa_reply.c.o
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_library_init.c.o
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_nowarn.c.o
  [ 12%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_options.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_create_query.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_llist.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_mkquery.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_parse_a_reply.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_parse_mx_reply.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_parse_naptr_reply.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_parse_soa_reply.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_parse_srv_reply.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_parse_ns_reply.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_platform.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_parse_ptr_reply.c.o
  [ 13%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_parse_txt_reply.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_process.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_query.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_send.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_strerror.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_strdup.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_search.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_strcasecmp.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_strsplit.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_version.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_writev.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/ares_timeout.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/inet_net_pton.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/bitncmp.c.o
  [ 15%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/inet_ntop.c.o
  [ 17%] Building C object third_party/cares/cares/CMakeFiles/c-ares.dir/windows_port.c.o
  [ 17%] Linking CXX static library libabsl_raw_logging_internal.a
  [ 17%] Linking C static library lib/libcares.a
  [ 17%] Built target absl_raw_logging_internal
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings_internal.dir/internal/escaping.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_base.dir/internal/spinlock.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_base.dir/internal/cycleclock.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings_internal.dir/internal/ostringstream.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_base.dir/internal/sysinfo.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings_internal.dir/internal/utf8.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/debugging/CMakeFiles/absl_debugging_internal.dir/internal/address_is_readable.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/types/CMakeFiles/absl_bad_variant_access.dir/bad_variant_access.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_base.dir/internal/thread_identity.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_throw_delegate.dir/internal/throw_delegate.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/debugging/CMakeFiles/absl_debugging_internal.dir/internal/elf_mem_image.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/types/CMakeFiles/absl_bad_optional_access.dir/bad_optional_access.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/debugging/CMakeFiles/absl_debugging_internal.dir/internal/vdso_support.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_base.dir/internal/unscaledcycleclock.cc.o
  [ 17%] Built target c-ares
  [ 17%] Linking CXX static library libabsl_time_zone.a
  [ 17%] Built target absl_time_zone
  [ 17%] Linking CXX static library libabsl_bad_optional_access.a
  [ 17%] Linking CXX static library libabsl_strings_internal.a
  [ 17%] Linking CXX static library libabsl_bad_variant_access.a
  [ 17%] Built target absl_bad_optional_access
  [ 17%] Linking CXX static library libabsl_throw_delegate.a
  [ 17%] Built target absl_strings_internal
  [ 17%] Linking CXX static library libabsl_debugging_internal.a
  [ 17%] Built target absl_bad_variant_access
  [ 17%] Built target absl_throw_delegate
  [ 17%] Built target absl_debugging_internal
  [ 17%] Building CXX object third_party/abseil-cpp/absl/debugging/CMakeFiles/absl_stacktrace.dir/stacktrace.cc.o
  [ 17%] Linking CXX static library libabsl_base.a
  [ 17%] Built target absl_base
  [ 17%] Building CXX object third_party/abseil-cpp/absl/base/CMakeFiles/absl_malloc_internal.dir/internal/low_level_alloc.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/charconv.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/ascii.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/debugging/CMakeFiles/absl_demangle_internal.dir/internal/demangle.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/str_cat.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/internal/memutil.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/match.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/numbers.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/internal/charconv_bigint.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/str_split.cc.o
  [ 17%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/escaping.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/str_replace.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/internal/charconv_parse.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/string_view.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_strings.dir/substitute.cc.o
  [ 18%] Linking CXX static library libabsl_stacktrace.a
  [ 18%] Linking CXX static library libre2.a
  [ 18%] Linking CXX static library libabsl_demangle_internal.a
  [ 18%] Built target absl_stacktrace
  [ 18%] Built target absl_demangle_internal
  [ 18%] Built target re2
  [ 18%] Linking CXX static library libabsl_malloc_internal.a
  [ 18%] Built target absl_malloc_internal
  [ 18%] Building CXX object third_party/abseil-cpp/absl/synchronization/CMakeFiles/absl_graphcycles_internal.dir/internal/graphcycles.cc.o
  [ 18%] Linking CXX static library libabsl_strings.a
  [ 18%] Built target absl_strings
  [ 18%] Building CXX object third_party/abseil-cpp/absl/debugging/CMakeFiles/absl_symbolize.dir/symbolize.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_cord.dir/cord.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time.dir/civil_time.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_str_format_internal.dir/internal/str_format/arg.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_str_format_internal.dir/internal/str_format/bind.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time.dir/clock.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/hash/CMakeFiles/absl_hash.dir/internal/hash.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_str_format_internal.dir/internal/str_format/extension.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_str_format_internal.dir/internal/str_format/float_conversion.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time.dir/duration.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time.dir/format.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/time/CMakeFiles/absl_time.dir/time.cc.o
  [ 18%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_str_format_internal.dir/internal/str_format/output.cc.o
  [ 20%] Building CXX object third_party/abseil-cpp/absl/strings/CMakeFiles/absl_str_format_internal.dir/internal/str_format/parser.cc.o
  [ 20%] Linking CXX static library libabsl_hash.a
  [ 20%] Linking CXX static library libabsl_symbolize.a
  [ 20%] Built target absl_hash
  [ 20%] Built target absl_symbolize
  [ 22%] Linking CXX static library libabsl_time.a
  [ 22%] Built target absl_time
  [ 22%] Linking CXX static library libabsl_str_format_internal.a
  [ 22%] Built target absl_str_format_internal
  [ 22%] Linking CXX static library libabsl_cord.a
  [ 22%] Built target absl_cord

  --- stderr
  CMake Warning at cmake/protobuf.cmake:51 (message):
    gRPC_PROTOBUF_PROVIDER is "module" but PROTOBUF_ROOT_DIR is wrong
  Call Stack (most recent call first):
    CMakeLists.txt:254 (include)


  CMake Warning:
    Manually-specified variables were not used by the project:

      CMAKE_ASM_COMPILER
      CMAKE_ASM_FLAGS


  gmake: warning: -j32 forced in submake: resetting jobserver mode.
  /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/grpcio-sys-0.8.1/grpc/third_party/abseil-cpp/absl/synchronization/internal/graphcycles.cc: In member function ‘void absl::lts_2020_09_23::synchronization_internal::GraphCycles::RemoveNode(void*)’:
  /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/grpcio-sys-0.8.1/grpc/third_party/abseil-cpp/absl/synchronization/internal/graphcycles.cc:451:26: error: ‘numeric_limits’ is not a member of ‘std’
    451 |   if (x->version == std::numeric_limits<uint32_t>::max()) {
        |                          ^~~~~~~~~~~~~~
  /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/grpcio-sys-0.8.1/grpc/third_party/abseil-cpp/absl/synchronization/internal/graphcycles.cc:451:49: error: expected primary-expression before ‘>’ token
    451 |   if (x->version == std::numeric_limits<uint32_t>::max()) {
        |                                                 ^
  /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/grpcio-sys-0.8.1/grpc/third_party/abseil-cpp/absl/synchronization/internal/graphcycles.cc:451:52: error: ‘::max’ has not been declared; did you mean ‘std::max’?
    451 |   if (x->version == std::numeric_limits<uint32_t>::max()) {
        |                                                    ^~~
        |                                                    std::max
  In file included from /usr/include/c++/11/algorithm:62,
                   from /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/grpcio-sys-0.8.1/grpc/third_party/abseil-cpp/absl/synchronization/internal/graphcycles.cc:38:
  /usr/include/c++/11/bits/stl_algo.h:3467:5: note: ‘std::max’ declared here
   3467 |     max(initializer_list<_Tp> __l, _Compare __comp)
        |     ^~~
  gmake[3]: *** [third_party/abseil-cpp/absl/synchronization/CMakeFiles/absl_graphcycles_internal.dir/build.make:76: third_party/abseil-cpp/absl/synchronization/CMakeFiles/absl_graphcycles_internal.dir/internal/graphcycles.cc.o] Error 1
  gmake[2]: *** [CMakeFiles/Makefile2:3153: third_party/abseil-cpp/absl/synchronization/CMakeFiles/absl_graphcycles_internal.dir/all] Error 2
  gmake[2]: *** Waiting for unfinished jobs....
  gmake[1]: *** [CMakeFiles/Makefile2:848: CMakeFiles/grpc.dir/rule] Error 2
  gmake: *** [Makefile:247: grpc] Error 2
  thread 'main' panicked at '
  command did not execute successfully, got: exit status: 2

  build script failed, must exit now', /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/cmake-0.1.48/src/lib.rs:975:5
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...

Steps to reproduce

cargo build on Ubuntu

Expected behaviour

Build should work.

SurrealDB version

surreal for linux

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Improve syntax for defining embedded JavaScript functions

Is your feature request related to a problem?

The syntax for defining an embedded JavaScript function is a little obtuse and obscure.

CREATE event SET name = fn::future -> () => {
    return 'my js function';
};

Describe the solution

Why not define embedded JavaScript functions just like in JavaScript...

CREATE event SET name = function() {
    return 'my js function';
};

Alternative methods

We'll also enable the ability to define functions using a shortened syntax (similar to Rust)...

CREATE event SET name = fn() {
    return 'my js function';
};

SurrealDB version

surreal 1.0.0-beta.2 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Zero-copy datastore key serialisation and deserialisation

Is your feature request related to a problem?

Currently, when creating Vec<u8> keys for use in the datastore, we pass in references, which are then cloned before being serialized. In addition, when deserializing a datastore key, the value is cloned to create owned data.

This is unnecessary as the datastore key is never used or held beyond the end of a local function.

Describe the solution

Zero-copy serialization and deserialization will ensure that we are not unnecessarily cloning &str values when serializing, and not cloning the data once again when deserializing.

With this improvement, writing to and reading from the key-value store should be quicker, with less memory allocation.

For this to work, we need to make the storekey deserializer accept borrowed data, as can be seen in rmp-serde: https://github.com/3Hren/msgpack-rust/blob/master/rmp-serde/src/decode.rs#L909-L1003

// Key struct whose fields borrow from the raw key bytes, so decoding does not
// allocate owned Strings. (BASE, serialize, deserialize and Error refer to the
// existing key module internals.)
#[derive(Clone, Debug, Eq, PartialEq, PartialOrd, Serialize, Deserialize)]
pub struct Ns<'a> {
    kv: &'a str,
    _a: &'a str,
    ns: &'a str,
}

impl<'a> Into<Vec<u8>> for Ns<'a> {
    fn into(self) -> Vec<u8> {
        self.encode().unwrap()
    }
}

// Decoding has to start from a borrowed slice rather than an owned Vec<u8>,
// otherwise the &'a str fields would outlive the buffer they point into.
impl<'a> From<&'a [u8]> for Ns<'a> {
    fn from(val: &'a [u8]) -> Self {
        deserialize(val).unwrap()
    }
}

pub fn new<'a>(ns: &'a str) -> Ns<'a> {
    Ns::new(ns)
}

impl<'a> Ns<'a> {
    pub fn new(ns: &'a str) -> Ns<'a> {
        Ns {
            kv: BASE,
            _a: "!ns",
            ns,
        }
    }
    pub fn encode(&self) -> Result<Vec<u8>, Error> {
        Ok(serialize(self)?)
    }
    // The decoded key borrows from `v`, so it cannot outlive the raw key bytes.
    pub fn decode(v: &[u8]) -> Result<Ns, Error> {
        Ok(deserialize(v)?)
    }
}
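
For illustration only, here is a minimal, self-contained sketch of the owned-versus-borrowed distinction described above. It does not use the real storekey format or API; the key layout and function names are made up for the example, but it shows why a decoder that borrows from the input buffer avoids the extra allocations.

#[derive(Debug)]
struct NsOwned {
    kv: String,
    ns: String,
}

#[derive(Debug)]
struct NsBorrowed<'a> {
    kv: &'a str,
    ns: &'a str,
}

// Hypothetical key layout for this sketch only: "<kv>\0!ns\0<ns>".
fn decode_borrowed(buf: &[u8]) -> Option<NsBorrowed<'_>> {
    let mut parts = buf.split(|b| *b == 0);
    let kv = std::str::from_utf8(parts.next()?).ok()?;
    let _marker = parts.next()?; // the "!ns" marker, ignored here
    let ns = std::str::from_utf8(parts.next()?).ok()?;
    // No allocation: kv and ns point straight into buf.
    Some(NsBorrowed { kv, ns })
}

fn decode_owned(buf: &[u8]) -> Option<NsOwned> {
    // Same parse, but every field is copied into a fresh String.
    let b = decode_borrowed(buf)?;
    Some(NsOwned { kv: b.kv.to_owned(), ns: b.ns.to_owned() })
}

fn main() {
    let key = b"*\0!ns\0test".to_vec();
    // The borrowed value only lives as long as `key`, which is fine because
    // datastore keys are never held beyond the end of a local function.
    println!("{:?}", decode_borrowed(&key));
    println!("{:?}", decode_owned(&key));
}

The same principle applies to the storekey decoder: as long as the decoded struct never outlives the raw key bytes, every &str field can borrow instead of being cloned.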

Alternative methods

No alternative methods.

SurrealDB version

surreal 1.0.0-beta.5 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: LET statement variables do not stay across queries in the CLI REPL

Describe the bug

The value of a LET statement is lost as soon as it has been executed. Running

-- Define the parameter
LET $name = "tobie";

And then

-- Use the parameter
CREATE person SET name = $name;

results in the name field not being set:

> LET $name = "tobie";
[{"time":"17.363µs","status":"OK","result":null}]
> CREATE person:3 SET name = $name;
[{"time":"116.22µs","status":"OK","result":[{"id":"person:3"}]}]

Interestingly, however, pasting both statements into the REPL together works as expected.
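
For reference, here is a minimal sketch showing both statements sent as a single request through the embedded Datastore API, where the parameter does stay in scope. The "memory" datastore path and the exact execute signature are assumptions based on the beta-era API shown in the other issues in this document, so treat it as an illustration rather than a canonical reproduction.

use surrealdb::{Datastore, Session};

#[tokio::main]
async fn main() -> Result<(), surrealdb::Error> {
    // Both statements are sent as one request, so the LET parameter
    // is still in scope when the CREATE statement runs.
    let sql = "
      USE NS test DB test;
      LET $name = 'tobie';
      CREATE person SET name = $name;
    ";
    // "memory" is assumed here purely for the example.
    let dbs = Datastore::new("memory").await?;
    let ses = Session::for_kv();
    // The final flag is assumed to be strict mode, mirroring the call used elsewhere.
    let results = dbs.execute(sql, &ses, None, false).await?;
    for result in results {
        match result.output() {
            Ok(record) => println!("{record}"),
            Err(error) => eprintln!("{error}"),
        }
    }
    Ok(())
}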

Steps to reproduce

Run

> LET $name = "tobie";
[{"time":"17.363µs","status":"OK","result":null}]
> CREATE person:3 SET name = $name;
[{"time":"116.22µs","status":"OK","result":[{"id":"person:3"}]}]

Expected behaviour

The value is stored in the variable.

SurrealDB version

surreal 1.0.0-beta.6 for linux on x86_64

Contact Details

[email protected]

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Support Server-Sent Events (SSE) protocol in addition to WebSockets (WS)

Is your feature request related to a problem?

SSE may be preferable to WS for some users, as it provides a few benefits:

  • WebSockets may be blocked by certain WAFs / company firewalls
  • SSE does not require a 101 HTTP upgrade request, so the handshake is simpler
  • SSE supports cookies and headers, and does not need a separate custom "sign-in" / "connection_init" frame like a WS connection does
    • Additionally, this means you can allow / deny the connection without having to initialize it on the server
  • SSE / Fetch are stateless APIs, unlike WS, which requires additional server-side work to track connection state

Reference:
https://www.the-guild.dev/blog/graphql-over-sse
https://wundergraph.com/blog/deprecate_graphql_subscriptions_over_websockets

Describe the solution

Support for SSE connections in addition to WS connections when making LIVE / realtime queries
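
To make the comparison concrete, here is a rough sketch of what consuming such a stream could look like from a client, using nothing but a plain HTTP request. SurrealDB does not currently expose an SSE endpoint, so the /sse path below is purely hypothetical; the point is that the whole exchange is an ordinary HTTP response, so cookies and headers work as usual and no 101 upgrade handshake is involved.

use std::io::{BufRead, BufReader, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // A single long-lived HTTP request; no protocol upgrade required.
    let mut stream = TcpStream::connect("localhost:8000")?;
    write!(
        stream,
        "GET /sse HTTP/1.1\r\n\
         Host: localhost:8000\r\n\
         Accept: text/event-stream\r\n\
         Connection: keep-alive\r\n\r\n"
    )?;

    // Response headers and blank separator lines are skipped; SSE events
    // arrive as "data: ..." lines on the same connection.
    for line in BufReader::new(stream).lines() {
        let line = line?;
        if let Some(payload) = line.strip_prefix("data: ") {
            println!("live query update: {payload}");
        }
    }
    Ok(())
}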

Alternative methods

N/A

SurrealDB version

1.0.0-beta.6

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Make --ns and --db arguments optional in command-line REPL

Is your feature request related to a problem?

The original points for this issue can be seen in #32.

If SurrealDB is started for the first time, or if no Namespace or Database exists yet, then it doesn't make sense to have to specify a Namespace or Database using the --ns or --db arguments when running the command-line REPL with surreal sql.

Describe the solution

The --ns and --db arguments should be optional.

Alternative methods

Currently the arguments are required.

SurrealDB version

surreal 1.0.0-beta.6 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: There was a problem with a datastore transaction: PessimisticLock error: ResolveLockError

Describe the bug

When using the TiKV store, the database sometimes runs into ResolveLockError and, when that happens, it appears to get stuck in that state. Even restarting TiKV doesn't seem to make the error go away.

Steps to reproduce

The easiest way to trigger this error is by embedding the database in Rust. I was able to trigger it via the /sql endpoint too, but in that case the database seems to recover somehow. I'm able to consistently reproduce it by running the following Rust code repeatedly:

use surrealdb::{Datastore, Session};

#[tokio::main]
async fn main() -> Result<(), surrealdb::Error> {
    // A single transaction that defines a namespace and then a database.
    let sql = "
      BEGIN TRANSACTION;
        DEFINE NAMESPACE foo;
        USE NS foo;
        DEFINE DATABASE bar;
      COMMIT TRANSACTION;
    ";
    // Connect to the local TiKV placement driver started below.
    let dbs = Datastore::new("tikv://127.0.0.1:2379").await?;
    let ses = Session::for_kv();
    let results = dbs.execute(sql, &ses, None, true).await?;
    // Print the outcome of each statement in the transaction.
    for result in results {
        match result.output() {
            Ok(record) => println!("{record}"),
            Err(error) => eprintln!("{error}"),
        }
    }
    Ok(())
}

Put the above code in main.rs.

Start TiKV...

$ tiup playground --mode tikv-slim
...
Playground Bootstrapping...
Start pd instance:v6.2.0
Start tikv instance:v6.2.0
...

Run the Rust code repeatedly...

$ for i in `seq 1 3`; do cargo run; done
Aug 30 15:02:55.547 INFO connect to tikv endpoint: "127.0.0.1:20160"
NONE
NONE
NONE
Aug 30 15:02:56.638 INFO connect to tikv endpoint: "127.0.0.1:20160"
The query was not executed due to a failed transaction
The query was not executed due to a failed transaction
There was a problem with a datastore transaction: PessimisticLock error: ResolveLockError
Aug 30 15:02:57.799 INFO connect to tikv endpoint: "127.0.0.1:20160"
The query was not executed due to a failed transaction
The query was not executed due to a failed transaction
There was a problem with a datastore transaction: PessimisticLock error: ResolveLockError

Once it runs into that error, even spinning up SurrealDB

surreal start --log trace --user root --pass root tikv://127.0.0.1:2379

and running queries via the REPL doesn't work

$ surreal sql --conn http://localhost:8000 --user root --pass root --ns foo --db bar

> DEFINE TABLE foo_bar SCHEMAFULL;
[{"time":"15.826006ms","status":"ERR","detail":"There was a problem with a datastore transaction: Failed to resolve lock"}]

Expected behaviour

If I don't spawn a new process each time, but instead move the loop into main.rs, it takes a bit longer to run into this; until then it keeps returning NONE with no errors. That's what I expect to happen: it should never return ResolveLockError, no matter how many times you run it, and TiKV must remain in a good state.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Encryption At Rest

Is your feature request related to a problem?

This is a security feature: it prevents an attacker who gains access to the underlying storage from directly reading the contents of the database without first decrypting it.

Describe the solution

Please see https://discord.com/channels/902568124350599239/970336107206176768/1014654616442511523
Apparently this was supported in the now-defunct Golang version of SurrealDB.

Alternative methods

TiKV, which SurrealDB supports as a KV backend, supports encryption at rest (see https://docs.pingcap.com/tidb/stable/encryption-at-rest#tikv-encryption-at-rest)

See: https://discord.com/channels/902568124350599239/970336107206176768/1014635769865977937

SurrealDB version

v1.0.0-beta.7

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: rustfmt.toml contains unstable options

Describe the bug

The unstable options in .rustfmt.toml are currently not being used, as the CI is testing on the stable channel. These options lead to warning messages saying that unstable features are only available in the nightly channel. This can be confusing and may prompt developers to run cargo +nightly fmt, which currently results in almost the entire codebase being reformatted.

Steps to reproduce

$ cargo fmt
...
Warning: can't set `reorder_impl_items = true`, unstable features are only available in nightly channel.
...

Expected behaviour

Running cargo fmt on a formatted codebase shouldn't print any warnings.

SurrealDB version

surreal 1.0.0-beta.7 for linux on x86_64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Cannot DEFINE NAMESPACE / DATABASE without first selecting a namespace and database

Describe the bug

Attempting to DEFINE NAMESPACE or DEFINE DATABASE without first selecting the NAMESPACE and DATABASE to use results in a failure to execute the query. The database appears to need both a NAMESPACE and a DATABASE selected before running any queries.

Steps to reproduce

DEFINE NAMESPACE test;
[
    {
        "time": "76.958µs",
        "status": "ERR",
        "detail": "Specify a namespace to use"
    }
]

USE NAMESPACE test;
DEFINE NAMESPACE test;
[
    {
        "time": "76.958µs",
        "status": "ERR",
        "detail": "Specify a database to use"
    }
]

USE NAMESPACE test DATABASE test;
DEFINE NAMESPACE test;
[
    {
        "time": "48.875µs",
        "status": "OK",
        "result": null
    }
]

Expected behaviour

The database should allow the user to create the NAMESPACE or DATABASE if they have the correct permissions, and without needing to first select the NAMESPACE or DATABASE.

SurrealDB version

surreal 1.0.0-beta.3 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Docker image not fetchable

Describe the bug

It appears the Docker image is not public?

Steps to reproduce

docker run --pull --rm -p 8000:8000 surrealdb/surrealdb:latest start
docker: Error response from daemon: No such image: surrealdb/surrealdb:latest.
See 'docker run --help'.

Expected behaviour

docker run --pull --rm -p 8000:8000 surrealdb/surrealdb:latest start

To download the image.

SurrealDB version

N/A

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature: Improve syntax for defining future expressions

Is your feature request related to a problem?

The syntax for defining a future expression is a little obtuse and obscure.

CREATE event SET start_time = fn::future -> {
    time::now() + 1w
};

Describe the solution

We could simplify this and make the syntax similar to the casting expressions...

CREATE event SET start_time = <future> {
    time::now() + 1w
};

Alternative methods

No other alternatives for this syntax improvement.

SurrealDB version

surreal 1.0.0-beta.2 for macos on aarch64

Contact Details

No response

Is there an existing issue for this?

  • I have searched the existing issues

Code of Conduct

  • I agree to follow this project's Code of Conduct
