
sqlx's Introduction

SQLx

🧰 The Rust SQL Toolkit


Built with ❤️ by The LaunchBadge team

Have a question? Be sure to check the FAQ first!

SQLx is an async, pure Rust SQL crate featuring compile-time checked queries without a DSL.

  • Truly Asynchronous. Built from the ground-up using async/await for maximum concurrency.

  • Compile-time checked queries (if you want). See SQLx is not an ORM.

  • Database Agnostic. Support for PostgreSQL, MySQL, MariaDB, SQLite†.

    • MSSQL was supported prior to version 0.7, but has been removed pending a full rewrite of the driver as part of our SQLx Pro initiative.
  • Pure Rust. The Postgres and MySQL/MariaDB drivers are written in pure Rust using zero unsafe†† code.

  • Runtime Agnostic. Works on different runtimes (async-std / tokio / actix) and TLS backends (native-tls, rustls).

† The SQLite driver uses the libsqlite3 C library as SQLite is an embedded database (the only way we could be pure Rust for SQLite is by porting all of SQLite to Rust).

†† SQLx uses #![forbid(unsafe_code)] unless the sqlite feature is enabled. The SQLite driver directly invokes the SQLite3 API via libsqlite3-sys, which requires unsafe.


  • Cross-platform. Being native Rust, SQLx will compile anywhere Rust is supported.

  • Built-in connection pooling with sqlx::Pool.

  • Row streaming. Data is read asynchronously from the database and decoded on demand.

  • Automatic statement preparation and caching. When using the high-level query API (sqlx::query), statements are prepared and cached per connection.

  • Simple (unprepared) query execution including fetching results into the same Row types used by the high-level API. Supports batch execution and returns results from all statements.

  • Transport Layer Security (TLS) where supported (MySQL, MariaDB and PostgreSQL).

  • Asynchronous notifications using LISTEN and NOTIFY for PostgreSQL.

  • Nested transactions with support for save points.

  • Any database driver for changing the database driver at runtime. An AnyPool connects to the driver indicated by the URL scheme.
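
A minimal sketch of the Any driver, assuming the any feature plus at least one concrete driver feature are enabled (since 0.7 the compiled-in drivers must also be registered once at startup):

use sqlx::any::{install_default_drivers, AnyPoolOptions};

async fn connect(url: &str) -> Result<sqlx::AnyPool, sqlx::Error> {
    // Register the drivers compiled into this build (required once, as of 0.7).
    install_default_drivers();

    // The concrete driver is selected from the URL scheme,
    // e.g. `postgres://`, `mysql://`, or `sqlite://`.
    AnyPoolOptions::new()
        .max_connections(5)
        .connect(url)
        .await
}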

Install

SQLx is compatible with the async-std, tokio, and actix runtimes, and with the native-tls and rustls TLS backends. When adding the dependency, you must choose a runtime feature; a TLS backend feature is optional.

# Cargo.toml
[dependencies]
# PICK ONE OF THE FOLLOWING:

# tokio (no TLS)
sqlx = { version = "0.7", features = [ "runtime-tokio" ] }
# tokio + native-tls
sqlx = { version = "0.7", features = [ "runtime-tokio", "tls-native-tls" ] }
# tokio + rustls
sqlx = { version = "0.7", features = [ "runtime-tokio", "tls-rustls" ] }

# async-std (no TLS)
sqlx = { version = "0.7", features = [ "runtime-async-std" ] }
# async-std + native-tls
sqlx = { version = "0.7", features = [ "runtime-async-std", "tls-native-tls" ] }
# async-std + rustls
sqlx = { version = "0.7", features = [ "runtime-async-std", "tls-rustls" ] }

Cargo Feature Flags

For backward-compatibility reasons, the runtime and TLS features can either be chosen together as a single feature, or separately.

For forward compatibility, you should use the separate runtime and TLS features as the combination features may be removed in the future.

  • runtime-async-std: Use the async-std runtime without enabling a TLS backend.

  • runtime-async-std-native-tls: Use the async-std runtime and native-tls TLS backend (SOFT-DEPRECATED).

  • runtime-async-std-rustls: Use the async-std runtime and rustls TLS backend (SOFT-DEPRECATED).

  • runtime-tokio: Use the tokio runtime without enabling a TLS backend.

  • runtime-tokio-native-tls: Use the tokio runtime and native-tls TLS backend (SOFT-DEPRECATED).

  • runtime-tokio-rustls: Use the tokio runtime and rustls TLS backend (SOFT-DEPRECATED).

    • Actix-web is fully compatible with Tokio and so a separate runtime feature is no longer needed.
  • tls-native-tls: Use the native-tls TLS backend (OpenSSL on *nix, SChannel on Windows, Secure Transport on macOS).

  • tls-rustls: Use the rustls TLS backend (cross-platform backend, only supports TLS 1.2 and 1.3).

  • postgres: Add support for the Postgres database server.

  • mysql: Add support for the MySQL/MariaDB database server.

  • mssql: Add support for the MSSQL database server (removed in 0.7 pending a driver rewrite; see the note above).

  • sqlite: Add support for the self-contained SQLite database engine.

  • any: Add support for the Any database driver, which can proxy to a database driver at runtime.

  • derive: Add support for the derive macro family: FromRow, Type, Encode, and Decode.

  • macros: Add support for the query*! macros, which allow compile-time checked queries.

  • migrate: Add support for migration management and the migrate! macro, which allows compile-time embedded migrations (see the sketch after this list).

  • uuid: Add support for UUID (in Postgres).

  • chrono: Add support for date and time types from chrono.

  • time: Add support for date and time types from the time crate (an alternative to chrono; preferred by the query! macros if both are enabled).

  • bstr: Add support for bstr::BString.

  • bigdecimal: Add support for NUMERIC using the bigdecimal crate.

  • rust_decimal: Add support for NUMERIC using the rust_decimal crate.

  • ipnetwork: Add support for INET and CIDR (in Postgres) using the ipnetwork crate.

  • json: Add support for JSON and JSONB (in Postgres) using the serde_json crate.

  • Offline mode is now always enabled. See sqlx-cli/README.md.
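
As referenced from the migrate bullet above, a minimal sketch of embedded migrations, assuming the migrate feature is enabled and a migrations/ directory exists at the crate root:

use sqlx::PgPool;

async fn run_migrations(pool: &PgPool) -> Result<(), sqlx::migrate::MigrateError> {
    // `migrate!` embeds the SQL files from `./migrations` into the binary
    // at compile time; `run` applies any that have not been applied yet.
    sqlx::migrate!("./migrations").run(pool).await
}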

SQLx is not an ORM!

SQLx supports compile-time checked queries. It does not, however, do this by providing a Rust API or DSL (domain-specific language) for building queries. Instead, it provides macros that take regular SQL as input and ensure that it is valid for your database. The way this works is that SQLx connects to your development DB at compile time to have the database itself verify (and return some info on) your SQL queries. This has some potentially surprising implications:

  • Since SQLx never has to parse the SQL string itself, any syntax that the development DB accepts can be used (including things added by database extensions)
  • Due to the different amount of information databases let you retrieve about queries, the extent of SQL verification you get from the query macros depends on the database

If you are looking for an (asynchronous) ORM, you can check out ormx or SeaORM, both of which are built on top of SQLx.

Usage

See the examples/ folder for more in-depth usage.

Quickstart

use sqlx::postgres::PgPoolOptions;
// use sqlx::mysql::MySqlPoolOptions;
// etc.

#[async_std::main] // Requires the `attributes` feature of `async-std`
// or #[tokio::main]
// or #[actix_web::main]
async fn main() -> Result<(), sqlx::Error> {
    // Create a connection pool
    //  for MySQL/MariaDB, use MySqlPoolOptions::new()
    //  for SQLite, use SqlitePoolOptions::new()
    //  etc.
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect("postgres://postgres:password@localhost/test").await?;

    // Make a simple query to return the given parameter (use a question mark `?` instead of `$1` for MySQL/MariaDB)
    let row: (i64,) = sqlx::query_as("SELECT $1")
        .bind(150_i64)
        .fetch_one(&pool).await?;

    assert_eq!(row.0, 150);

    Ok(())
}

Connecting

A single connection can be established using any of the database connection types and calling connect().

use sqlx::{Connection, SqliteConnection};

let conn = SqliteConnection::connect("sqlite::memory:").await?;

Generally, you will want to instead create a connection pool (sqlx::Pool) for the application to regulate how many server-side connections it's using.

let pool = MySqlPool::connect("mysql://user:pass@host/database").await?;
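
Most query APIs accept &Pool directly, but a connection can also be checked out explicitly; a short sketch:

// The connection is returned to the pool when `conn` is dropped.
let mut conn = pool.acquire().await?;

sqlx::query("SELECT 1").execute(&mut *conn).await?;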

Querying

In SQL, queries can be separated into prepared (parameterized) or unprepared (simple). Prepared queries have their query plan cached, use a binary mode of communication (lower bandwidth and faster decoding), and utilize parameters to avoid SQL injection. Unprepared queries are simple and intended only for use where a prepared statement will not work, such as various database commands (e.g., PRAGMA or SET or BEGIN).

SQLx supports all operations with both types of queries. In SQLx, a &str is treated as an unprepared query, and a Query or QueryAs struct is treated as a prepared query.

// low-level, Executor trait
conn.execute("BEGIN").await?; // unprepared, simple query
conn.execute(sqlx::query("DELETE FROM table")).await?; // prepared, cached query

Prefer to use the high-level query interface whenever possible. To make this easier, there are finalizers on the type that avoid the need to wrap with an executor.

sqlx::query("DELETE FROM table").execute(&mut conn).await?;
sqlx::query("DELETE FROM table").execute(&pool).await?;

The execute query finalizer returns the number of affected rows, if any, and drops all received results. In addition, there are fetch, fetch_one, fetch_optional, and fetch_all to receive results.
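
For example, with Postgres, where execute resolves to a PgQueryResult (the other drivers have analogous result types); the users table and banned column here are illustrative:

let result = sqlx::query("DELETE FROM users WHERE banned = TRUE")
    .execute(&pool)
    .await?;

println!("deleted {} rows", result.rows_affected());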

The Query type returned from sqlx::query will return Row<'conn> from the database. Column values can be accessed by ordinal or by name with row.get(). As the Row retains an immutable borrow on the connection, only one Row may exist at a time.

The fetch query finalizer returns a stream-like type that iterates through the rows in the result sets.

// provides `try_next`
use futures::TryStreamExt;

let mut rows = sqlx::query("SELECT * FROM users WHERE email = ?")
    .bind(email)
    .fetch(&mut conn);

while let Some(row) = rows.try_next().await? {
    // map the row into a user-defined domain type
    let email: &str = row.try_get("email")?;
}

To assist with mapping the row into a domain type, one of two idioms may be used:

let mut stream = sqlx::query("SELECT * FROM users")
    .map(|row: PgRow| {
        // map the row into a user-defined domain type
    })
    .fetch(&mut conn);

#[derive(sqlx::FromRow)]
struct User { name: String, id: i64 }

let mut stream = sqlx::query_as::<_, User>("SELECT * FROM users WHERE email = ? OR name = ?")
    .bind(user_email)
    .bind(user_name)
    .fetch(&mut conn);

Instead of a stream of results, we can use fetch_one or fetch_optional to request one required or optional result from the database.
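
For example, reusing the User type from the snippet above (the user_id binding is assumed):

let user: Option<User> = sqlx::query_as::<_, User>("SELECT * FROM users WHERE id = ?")
    .bind(user_id)
    .fetch_optional(&mut conn)
    .await?;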

Compile-time verification

We can use the sqlx::query! macro to achieve compile-time syntactic and semantic verification of the SQL, with an output to an anonymous record type where each SQL column is a Rust field (using raw identifiers where needed).

let countries = sqlx::query!(
        "
SELECT country, COUNT(*) as count
FROM users
WHERE organization = ?
GROUP BY country
        ",
        organization
    )
    .fetch_all(&pool) // -> Vec<{ country: String, count: i64 }>
    .await?;

// countries[0].country
// countries[0].count

Differences from query():

  • The input (or bind) parameters must be given all at once (and they are compile-time validated to be the right number and the right type).

  • The output type is an anonymous record. In the above example the type would be similar to:

    { country: String, count: i64 }
  • The DATABASE_URL environment variable must be set at build time to a database which it can prepare queries against; the database does not have to contain any data but must be the same kind (MySQL, Postgres, etc.) and have the same schema as the database you will be connecting to at runtime.

    For convenience, you can use a .env file [1] to set DATABASE_URL so that you don't have to pass it every time:

    DATABASE_URL=mysql://localhost/my_database
    

The biggest downside to query!() is that the output type cannot be named (due to Rust not officially supporting anonymous records). To address that, there is a query_as!() macro that is mostly identical except that you can name the output type.

// no traits are needed
struct Country { country: String, count: i64 }

let countries = sqlx::query_as!(Country,
        "
SELECT country, COUNT(*) as count
FROM users
WHERE organization = ?
GROUP BY country
        ",
        organization
    )
    .fetch_all(&pool) // -> Vec<Country>
    .await?;

// countries[0].country
// countries[0].count

To avoid needing a development database around to compile the project even when no modifications (to the database-accessing parts of the code) have been made, you can enable "offline mode" to cache the results of SQL query analysis using the sqlx command-line tool. See sqlx-cli/README.md.
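
Roughly, with the command-line tool installed, the workflow looks like this (DATABASE_URL must be set for the prepare run; the generated metadata files should be checked into source control):

# one-time setup
cargo install sqlx-cli

# cache the query metadata for offline builds
cargo sqlx prepare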

Compile-time verified queries do quite a bit of work at compile time. Incremental actions like cargo check and cargo build can be significantly faster when the macros are built with optimizations, which you can enable by putting the following in your Cargo.toml (more information in the Profiles section of The Cargo Book):

[profile.dev.package.sqlx-macros]
opt-level = 3

[1] The dotenv crate itself appears abandoned as of December 2021, so we now use the dotenvy crate instead. The file format is the same.

Safety

This crate uses #![forbid(unsafe_code)] to ensure everything is implemented in 100% Safe Rust.

If the sqlite feature is enabled, this is downgraded to #![deny(unsafe_code)] with #![allow(unsafe_code)] on the sqlx::sqlite module. There are several places where we interact with the C SQLite API. We try to document each call for the invariants we're assuming. We absolutely welcome auditing of, and feedback on, our unsafe code usage.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any Contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.


sqlx's Issues

mysql query error

dependencies

[dependencies.sqlx]
git = 'https://github.com/launchbadge/sqlx.git'
features = ['mysql']

When I try the query:

println!("123");
let pool = sqlx::MySqlPool::new("mysql://xxx:xxx@xxx:xxx/rimer").await?;
let mut conn = pool.acquire().await?;
println!("456");
sqlx::query(r"DELETE FROM rimer_trigger_job WHERE job_id = ?")
    .bind(job_id)
    .fetch_one(&mut *conn)
    .await?;
println!("789");

I got the following error:

123
456
thread 'test_schedule_job' panicked at 'called `Result::unwrap()` on an `Err` value: NotFound', src/libcore/result.rs:1165:5

MySQL password is not percent-decoded

I'm currently unable to authenticate with a MySQL server because the password contains the @ character and it is not properly percent decoded before being sent to the server. Looking at diesel, it does the following before sending the password off to the server:

let password = match url.password() {
    Some(password) => Some(decode_into_cstring(password)?),
    None => None,
};

(decode_into_cstring here)

Relevant line: https://github.com/launchbadge/sqlx/blob/master/sqlx-core/src/mysql/connection.rs#L554

Bringing in percent-encoding, this could be changed to something like:

let password = url.password().map(percent_decode_str).unwrap_or_default();

then adjusting the further calls down to use password.as_ref() or equivalent. I can most likely make this PR myself relatively soon, but if someone wants to do it before I get a chance, feel free!

Postgres support for UUID?

I was looking through the source code a bit, but wasn't certain if UUID is a natively supported field.

I have some basic tables that look like this:

CREATE TYPE operation AS ENUM ('create', 'update', 'delete');

CREATE TABLE IF NOT EXISTS events (
    event_id    uuid PRIMARY KEY NOT NULL,  -- id of event
    namespace   uuid NOT NULL,              -- user's "namespace" where their events are stored
    event_time  bigint NOT NULL,            -- when event occurred
    op          operation NOT NULL,         -- operation that occurred
    acted_on    uuid NOT NULL               -- key of item that was acted on
);

I'm trying to do a test query on it to see if the whole setup works. I have a local instance of the db set up, and the DATABASE_URL properly set, but when I compile I get an error from some pretty simple code:

async fn get_all_in_ns(pool: PgPool) -> anyhow::Result<()> {
    let rows = sqlx::query!(
        r#"SELECT
             event_id, namespace, event_time, op, acted_on                                                                                                                 
             FROM events
             WHERE event_time > 5"#,
    )
        .fetch(&mut pool)
        .await
        .unwrap();
    Ok(())
}

-> "unknown field type ID: 2950"

I haven't seen any other errors like this, and my assumption is that sqlx doesn't support postgres UUIDs, but I'm not certain.

Could someone advise? If I can use statically typed sql queries, that would be a big win. We use UUIDs extensively from uuid-ossp
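
For reference, Postgres type OID 2950 is uuid; as noted in the feature list above, UUID support is gated behind the uuid feature flag, e.g. (feature names follow the install section above; older releases used different feature sets):

sqlx = { version = "0.7", features = [ "runtime-tokio", "postgres", "uuid" ] }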

buffer reset the indexes error

dependencies

[dependencies.sqlx]
git = 'https://github.com/launchbadge/sqlx.git'
features = ['mysql']

code

sqlx::query(
        "SELECT ... from ...
        where id = ?",
    )
    .bind(id)
    .fetch_optional(&mut *conn)
    .await?
    .map(|row| .......)

error backtrace

thread 'test_schedule_job' panicked at 'assertion failed: `(left == right)`
  left: `8174`,
 right: `8192`', /Users/liangyongrui/.cargo/git/checkouts/sqlx-f05f33ba4f5c3036/5c532a8/sqlx-core/src/io/buf_stream.rs:77:17
stack backtrace:
   0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
   1: core::fmt::write
   2: std::io::Write::write_fmt
   3: std::panicking::default_hook::{{closure}}
   4: std::panicking::default_hook
   5: std::panicking::rust_panic_with_hook
   6: std::panicking::continue_panic_fmt
   7: std::panicking::begin_panic_fmt
   8: sqlx_core::io::buf_stream::BufStream<S>::peek::{{closure}}
   9: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
  10: std::future::set_task_context
  11: <std::future::GenFuture<T> as core::future::future::Future>::poll
  12: std::future::poll_with_tls_context::{{closure}}
  13: std::future::get_task_context
  14: std::future::poll_with_tls_context
  15: sqlx_core::mysql::connection::MySqlConnection::try_receive::{{closure}}
  16: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
  17: std::future::set_task_context
  18: <std::future::GenFuture<T> as core::future::future::Future>::poll
  19: std::future::poll_with_tls_context::{{closure}}
  20: std::future::get_task_context
  21: std::future::poll_with_tls_context
  22: sqlx_core::mysql::connection::MySqlConnection::receive::{{closure}}
  23: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
  24: std::future::set_task_context
  25: <std::future::GenFuture<T> as core::future::future::Future>::poll
  26: std::future::poll_with_tls_context::{{closure}}
  27: std::future::get_task_context
  28: std::future::poll_with_tls_context
  29: sqlx_core::mysql::executor::<impl sqlx_core::mysql::connection::MySqlConnection>::prepare_with_cache::{{closure}}
  30: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
  31: std::future::set_task_context
  32: <std::future::GenFuture<T> as core::future::future::Future>::poll
  33: std::future::poll_with_tls_context::{{closure}}
  34: std::future::get_task_context
  35: std::future::poll_with_tls_context
  36: sqlx_core::mysql::executor::<impl sqlx_core::mysql::connection::MySqlConnection>::fetch::{{closure}}
  37: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
  38: std::future::set_task_context
  39: <std::future::GenFuture<T> as core::future::future::Future>::poll
  40: <async_stream::async_stream::AsyncStream<T,U> as futures_core::stream::Stream>::poll_next
  41: <core::pin::Pin<P> as futures_core::stream::Stream>::poll_next
  42: <S as futures_core::stream::TryStream>::try_poll_next
  43: futures_util::stream::try_stream::TryStreamExt::try_poll_next_unpin
  44: <futures_util::stream::try_stream::try_next::TryNext<St> as core::future::future::Future>::poll
  45: std::future::poll_with_tls_context::{{closure}}
  46: std::future::get_task_context
  47: std::future::poll_with_tls_context
  48: sqlx_core::executor::Executor::fetch_optional::{{closure}}
  49: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
  50: std::future::set_task_context
  51: <std::future::GenFuture<T> as core::future::future::Future>::poll
  52: <core::pin::Pin<P> as core::future::future::Future>::poll
  53: std::future::poll_with_tls_context::{{closure}}
  54: std::future::get_task_context
  55: std::future::poll_with_tls_context
  56: sqlx_core::query::Query<DB,P>::fetch_optional::{{closure}}
  57: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
  58: std::future::set_task_context
  59: <std::future::GenFuture<T> as core::future::future::Future>::poll
  60: std::future::poll_with_tls_context::{{closure}}
  61: std::future::get_task_context
  62: std::future::poll_with_tls_context
...

buf_stream.rs 77:17

            // If we are out of space to write to in the read buffer,
            // we reset the indexes
            if self.rbuf.len() < (self.rbuf_windex + cnt) {
                // TODO: This assumes that all data is consumed when we need to re-allocate
                debug_assert_eq!(self.rbuf_rindex, self.rbuf_windex);

                self.rbuf_rindex = 0;
                self.rbuf_windex = 0;
            }

Support for MySQL

I'm new to this library and tried to write a MySQL example.
Cargo.toml:
async-std = { version = "1.4.0", features = [ "attributes" ] }
sqlx = {version = "0.1.3", features = [ "mysql" ]}
anyhow = "1.0.26"

#[async_std::main]
async fn main() -> anyhow::Result<()> {

    let pool_result = MySqlPool::new(
        "mysql://root:123456@localhost:3306/test"
    ).await;
    let mut pool = match pool_result {
        Ok(p) => p,
        Err(_) => panic!("pool create faild")
    };

    let user_id = 1u32;
    let stat = sqlx::query("SELECT * FROM hello WHERE id = $1").bind(user_id);

    // blocked here, and will never return
    let row_result = stat.fetch_one(&mut pool).await;

    // can not reach here
    let row = match row_result {
        Ok(data) => data,
        Err(_) => panic!("panic at row")
    };
    Ok(())
}

From this example, either MySQL support is still incomplete or there is a bug in this library.
I'm opening this issue to request better MySQL support.

Macro fails on bytes data

For a table called my_table with id -> Integer, value -> Integer, and data -> BYTEA (Postgres):

the following macro seems to fail when trying to insert a record (without BYTEA it seemed to work)

Fails with: "error: "unknown type param ID: 17""

        let id: i32 = 2;
        let value: i32 = 100;
        let v: Vec<u8> = vec![0, 2, 4, 6];

        let row = sqlx::query(
            r#"
        INSERT INTO transactions ( id, value, data )
        VALUES ( $1, $2, $3 )
        RETURNING id, value, signature_or_message
                "#,
        )
        .bind(id)
        .bind(value)
        .bind(v)
        .fetch_one(&mut tx)
        .await
        .unwrap();


        tx.commit().await.unwrap();

This works fine:


        let id: i32 = 2;
        let value: i32 = 100;
        let v: Vec<u8> = vec![0, 2, 4, 6];
        let rec = sqlx::query!(
            r#"
 INSERT INTO transactions ( id , value, signature_or_message)
 VALUES ( $1, $2 , $3)
 RETURNING id, value, signature_or_message
         "#,
            0,
            0,
            v
        )
        .fetch_one(&mut tx)
        .await
        .unwrap();

        tx.commit().await.unwrap();

Unify the Row types to remove code duplication and make easier-to-use

  • Row::get is a pretty common method when using the query API. Requiring a trait import is less-than-ideal.
  • PgRow and MySqlRow are nearly identical in implementation.

Proposal:

  • PgRow and MySqlRow become PgDataRow and MySqlDataRow, which only expose getting values by index, and only as byte slices.

  • Row becomes a struct that contains the once duplicated functionality.

pub struct Row<R: DataRow> {
  inner: R,
  shared: Arc<SharedRow>,
}

struct SharedRow<DB: Database> {
  column_names: Arc<HashMap<Box<str>, usize>>,
  column_types: Arc<[DB::TypeId]>,
}

  • We can add runtime type checking to Row as well (see #8)

SQL Template Strings

One of my favorite libraries from the node world is sql-template-strings. It allows users to take something like this:

pg.query('SELECT author FROM books WHERE name = $1 AND author = $2', [book, author])

And instead write this:

pg.query(SQL`SELECT author FROM books WHERE name = ${book} AND author = ${author}`)

Notice the placement of the book and author variables -- they are placed inline inside the query. sql-template-strings handles creating a prepared statement and ordering the parameters for query execution.

sqlx reminds me a lot of the first snippet, with the primary additional benefit being that we can get type-safety inferred from the query (which on its own is very cool!). I don't know a lot about Rust macros, but I was wondering if it would be possible to take the sqlx macro a step further and additionally implement sql-template-strings-style inline parameters.

Edit:

Here's another example from the sql-template-strings README showing the benefit when a lot of parameters are used:

db.query(
  'INSERT INTO books (name, author, isbn, category, recommended_age, pages, price) VALUES (?, ?, ?, ?, ?, ?, ?)',
  [name, author, isbn, category, recommendedAge, pages, price]
)
// is better written as
db.query(SQL`
  INSERT
  INTO    books
          (name, author, isbn, category, recommended_age, pages, price)
  VALUES  (${name}, ${author}, ${isbn}, ${category}, ${recommendedAge}, ${pages}, ${price})
`)

unknown type param ID when adding column to database

I'm not really sure whether this is a silly user error or something coming from sqlx (thanks a bunch for this library by the way, really enjoying it!). I have a table called kudos that looks like this:

CREATE TABLE kudos (
    id UUID PRIMARY KEY,
    count INTEGER DEFAULT 1 NOT NULL
);

and in some server code a query running

let res = sqlx::query_as!(
        Kudos,
        r#"
INSERT INTO kudos (id) 
VALUES ($1) 
ON CONFLICT (id) DO NOTHING 
RETURNING *
        "#,
        id,

and it works great. However, I wanted to add a column to keep track of the URL for a given id, so I updated the table like so:

@@ -1,4 +1,5 @@
 CREATE TABLE kudos (
     id UUID PRIMARY KEY,
+    url VARCHAR UNIQUE NOT NULL,
     count INTEGER DEFAULT 1 NOT NULL
 );

and the corresponding query:

     let res = sqlx::query_as!(
         Kudos,
         r#"
- INSERT INTO kudos (id) 
+ INSERT INTO kudos (id, url) 
- VALUES ($1) 
+ VALUES ($1, $2) 
  ON CONFLICT (id) DO NOTHING 
  RETURNING *
         "#,
         id,
         req.path

However, doing this gives me an error.

cargo check
    Checking program v0.1.0 (/path/to/thing)
error: "unknown type param ID: 1043"
  --> src/handlers.rs:53:15
   |
53 |       let res = sqlx::query_as!(
   |  _______________^
54 | |         Thing,
55 | |         r#"
56 | | INSERT INTO kudos (id, url) 
...  |
62 | |         req.path
63 | |     )
   | |_____^
   |
   = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

I've confirmed in psql that the same query works fine by copy-pasting and replacing the placeholders. I'm at a bit of a loss as to what could be the cause of the error.

Support JSON for Postgres

Support taking and returning serde_json::Value (maybe others as well?) in queries.

I think this would be a Postgres-only feature, right?

Derive macro for FromRow

We've been talking about this for a while, we just punted on it for 0.1.

It'd be extremely useful for working with query_as().

Add (and test) explicit support for PgBouncer

Notable differences

  • Prepared statements are invalid after a SYNC (so they can be used but may not be cached).

  • Apparently does not support the startup parameter IntervalStyle. That may need to be deferred to a SET after connect.

Support running macros in "decoupled" mode

This was a popular request in the original Reddit announcement: https://www.reddit.com/r/rust/comments/egpw7g/announcing_sqlx_a_fully_asynchronous_pure_rust/fc8ldpw/?context=1000

The idea is to allow expansion of query!(), et al, without having to connect directly to the database by running the PREPAREs for all queries in a separate step and caching the result.

The solution in my head looks something like this:

  • We create a Cargo subcommand, cargo-sqlx, which links sqlx-core internally.
  • The user runs cargo sqlx prepare in their project which invokes cargo check with QUERY_PREPARE_OUT=$OUT_DIR/sqlx set in the environment
    • By being a subcommand we can know which cargo they're using so we build with the right toolchain, e.g. if the user runs cargo +nightly sqlx prepare when their default toolchain is stable
  • When query!() and friends see QUERY_PREPARE_OUT in the environment they don't perform normal expansion:
    • Instead they run PREPARE on their query strings and output the hash of the query and the prepare result as randomly named JSON files to QUERY_PREPARE_OUT
    • Then emit panics instead of their normal expansion so compilation can still finish.
  • The subcommand then gathers the files inside QUERY_PREPARE_OUT and writes them to a single JSON file in a predetermined location in the project (TBD)
    • These separate scatter/gather steps exist because the macro expansions may run in parallel, so we don't want them all to clobber the same file.
    • This new file is intended to be checked-in to source control where it can be used even without a database connection, e.g. in CI.
    • If two queries have the same hash, the subcommand deduplicates the entries by asserting that the PREPARE data is the same.
  • If the file exists when query!() is later expanded normally, the macro checks in the file for the hash of its query string and uses that data instead of connecting to the database.
    • If the hash isn't found it emits a compile error saying to run cargo sqlx prepare again.
    • We can also support cargo sqlx prepare --check which does the opposite; it reruns PREPARE for the cached query strings and asserts that the result is the same.

We can then later extend the cargo-sqlx command to add migrations. With migrations in place we have a stronger guarantee that the schema we compiled and cached the PREPARE data against is the same as the one we are running against.

Questions:

  • Is prepare the right name for the sub-subcommand?
  • What should the cached file be named and where should it go?
  • Should we support a sqlx.toml to change the location and possibly other behaviors?
  • Should the macro use an environment variable like QUERY_PREPARE_OUT or look at a cfg flag? I'm thinking the former might fail to trigger rebuilds when set if Cargo doesn't know to watch for it but the latter might trigger a rebuild of the whole dep graph.
    • We could also just touch src/ to trigger a rebuild, no?
  • Do we want to support checking the cached file at runtime to ensure the PREPARE data we saved still matches what we get back from the database server?

Can't get value from MySQL row

Hello,

I'm trying to use sqlx with MySQL but I can't seem to get the value out of the row.

let rec = sqlx::query(
        "
SELECT username, email
FROM users
WHERE id = $1
LIMIT 1
        "
    ).bind(user_id)
        .fetch_one(&mut pool)
        .await
        .unwrap();

    #[derive(serde::Serialize)]
    struct GetCurrentUserResponseBody {
        user: models::user::User,
    }

    println!("{}", rec.get(0));
    println!("{}", rec.get("username"));
    format!("{}", rec.len);

    Response::new(200)
        .body_json(&GetCurrentUserResponseBody {
            user: models::user::User {
                username: rec.username,
                email: rec.email,
                token,
            },
        })
        .unwrap()
error[E0599]: no method named `get` found for type `sqlx_core::mysql::row::MySqlRow` in the current scope
  --> src/routes/user.rs:54:24
   |
54 |     println!("{}", rec.get(0));
   |                        ^^^ method not found in `sqlx_core::mysql::row::MySqlRow`
   |
   = help: items from traits can only be used if the trait is in scope
   = note: the following trait is implemented but not in scope; perhaps add a `use` for it:
           `use sqlx_core::row::Row;`

error[E0599]: no method named `get` found for type `sqlx_core::mysql::row::MySqlRow` in the current scope
  --> src/routes/user.rs:55:24
   |
55 |     println!("{}", rec.get("username"));
   |                        ^^^ method not found in `sqlx_core::mysql::row::MySqlRow`
   |
   = help: items from traits can only be used if the trait is in scope
   = note: the following trait is implemented but not in scope; perhaps add a `use` for it:
           `use sqlx_core::row::Row;`

error[E0609]: no field `len` on type `sqlx_core::mysql::row::MySqlRow`
  --> src/routes/user.rs:54:23
   |
54 |     format!("{}", rec.len);
   |                       ^^^ unknown field

error[E0609]: no field `username` on type `sqlx_core::mysql::row::MySqlRow`
  --> src/routes/user.rs:59:31
   |
59 |                 username: rec.username,
   |                               ^^^^^^^^ unknown field

error[E0609]: no field `email` on type `sqlx_core::mysql::row::MySqlRow`
  --> src/routes/user.rs:60:28
   |
60 |                 email: rec.email,
   |                            ^^^^^ unknown field

What am I doing wrong? I even tried the get method but nothing is working.

PS: Is there anywhere to get more direct support, instead of creating issues for this kind of problem? A community channel, maybe?
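
As the compiler hints above suggest, the fix is to bring the Row trait into scope; a minimal corrected sketch (note also that row length is the len() method, and MySQL uses ? placeholders rather than $1):

use sqlx::Row; // brings `get`/`try_get`/`len` into scope

let username: String = rec.get("username");
let email: String = rec.get("email");
println!("{} columns", rec.len());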

"failed to connect to database: Unknown or incorrect time zone: \'UTC\'"

Hi,

Thanks for your lib.

I got this error while running the example with tide using mysql features.

"failed to connect to database: Unknown or incorrect time zone: 'UTC'"

I fixed it with this command:

mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql -p

found here https://dba.stackexchange.com/a/120946

Just to let you know; maybe it needs something on your end, maybe not.
MySQL/MariaDB: mysql Ver 15.1 Distrib 10.3.18-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2

Regards

Should PgTypeMetadata be public?

Hello,
I'm trying to encode my custom types by implementing the HasSqlType and Encode traits:


impl HasSqlType<MyType> for Postgres {
    fn metadata() -> PgTypeMetadata {
        PgTypeMetadata::binary(25, 1009)
    }
}

impl Encode<Postgres> for MyType {
    fn encode(&self, buf: &mut Vec<u8>) {
        match self {
           MyType::Something => buf.extend_from_slice(b"Something"),
           MyType::Other => buf.extend_from_slice(b"Other"),
        }
    }
}

But it appears that the PgTypeMetadata type is not exposed as public.

Should it be, or is there another way to do this?

Missing feature flag for tokio in `macros` crate

When using the runtime-tokio and macros features, a compile-time error is generated:

error[E0599]: no method named `threaded_scheduler` found for type `tokio::runtime::builder::Builder` in the current scope
  --> /home/isaac/.cargo/registry/src/github.com-1ecc6299db9ec823/sqlx-macros-0.2.1/src/lib.rs:32:14
   |
32 |             .threaded_scheduler()
   |              ^^^^^^^^^^^^^^^^^^ method not found in `tokio::runtime::builder::Builder`

A work-around is to include the following dependency in Cargo.toml:

tokio = { version = "0.2.9", default-features = false, features = [ "rt-threaded" ] }

Make PgTypeInfo publicly-constructible to support custom types

In PostgreSQL, custom types can be created with CREATE TYPE (e.g. enums). To interface with such types using sqlx, one would have to (amongst other things) implement HasSqlType<CustomType> for Postgres, which in turn requires being able to construct a PgTypeInfo. PgTypeInfo currently doesn't have a public constructor or all-public fields though.

panic when decoding row

Simply reproducing the case with this table:

CREATE TABLE `accounts` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `field2` varchar(30) NOT NULL DEFAULT '',
  `field3` varchar(255) NOT NULL DEFAULT '',
  `field4` tinyint(4) NOT NULL DEFAULT '0',
  `field5` timestamp NULL DEFAULT NULL,
  `field6` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `field7` tinyint(4) NOT NULL DEFAULT '0',
  `field8` text,
  `field9` tinyint(4) NOT NULL DEFAULT '0',
  `field10` tinytext,
  `field11` tinytext,
  `field12` timestamp NOT NULL DEFAULT '2000-01-01 00:00:00',
  `field13` tinyint(4) DEFAULT NULL,
  `field14` int(11) DEFAULT '0',
  `field15` int(11) DEFAULT NULL,
  `field16` tinyint(4) NOT NULL DEFAULT '10',
  `field17` varchar(64) DEFAULT NULL,
  `field18` int(11) NOT NULL DEFAULT '0',
  `field19` int(11) NOT NULL DEFAULT '0',
  `field20` timestamp NULL DEFAULT NULL,
  `field21` timestamp NULL DEFAULT NULL,
  `field22` int(3) DEFAULT NULL,
  `field23` int(6) NOT NULL DEFAULT '0',
  `field24` int(6) NOT NULL DEFAULT '0',
  `field25` bigint(20) NOT NULL DEFAULT '0',
  `field26` int(11) DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `field2` (`field2`),
  KEY `ranking1` (`id`,`field7`,`field13`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC

insert into accounts (field2) values ('rust');
let row = sqlx::query("SELECT * FROM accounts")
        .fetch_one(&mut conn)
        .await?;

problem: thread 'main' panicked at 'index out of bounds: the len is 58 but the index is 235', cargo/registry/src/github.com-1ecc6299db9ec823/sqlx-core-0.2.0/src/mysql/protocol/row.rs:107:65

Add support for COPY ... FROM STDIN and COPY ... TO STDOUT statements in PostgreSQL

Hi!

COPY is the most efficient way to import data to PostgreSQL database (it is less flexible than INSERT, but has significantly less overhead).
https://www.postgresql.org/docs/current/sql-copy.html
https://www.postgresql.org/docs/current/populate.html#POPULATE-COPY-FROM

It would be great to have a low-level interface to stream data to/from PostgreSQL using COPY ... FROM STDIN and COPY ... TO STDOUT statements. For example, similar to https://docs.rs/tokio-postgres/0.5.1/tokio_postgres/struct.Client.html#method.copy_in and https://docs.rs/tokio-postgres/0.5.1/tokio_postgres/struct.Client.html#method.copy_out.

pg query! err

When there are more than 10 parameters, I get an error:

error[E0271]: type mismatch resolving <sqlx_core::postgres::database::Postgres as sqlx_core::database::Database>::Arguments == (&std::string::String, &std::string::String, &i32, &i32, &i32, &i32, &std::string::String, &std::string::String, &i64, &i32)

expected struct sqlx_core::postgres::arguments::PgArguments, found tuple

Add a top-level derive for a new SQL Type ( general ideas, needs refinement )

The following are ideas and not final.

Tuple Struct (of 1)

#[derive(sqlx::Type)]
// transparent indicates that this type is purely for 
// documentation and it should act exactly as if it were the wrapped type in all aspects
#[sqlx(transparent)]
struct MyType(i32);

Enum

#[derive(sqlx::Type)]
// specify the OID explicitly for the type (optionally the OID for when its used as an array)
#[sqlx(postgres(oid = 1212, array_oid = 21341))]
// by default we encode and decode as a string and rely on first-class support
// for enumerations in the database server
#[sqlx(repr(i32))]
// by default we should snake_case the type name and look up its OID
// you should be able to use a container-level attribute to rename the type name 
#[sqlx(rename = "myenum")]
enum MyEnum {
  TheFoo,
  // By default we should snake_case rename but also allow renames
  #[sqlx(rename = "foothe")]
  FooThe
}

Struct

// In Postgres you can define your own composites or structs
// We should be able to support this
#[derive(sqlx::Type)]
struct Foo { a: i32 }

// Additionally, perhaps something like the following for strongly typed JSON
// from Postgres or MySQL
#[derive(sqlx::Type)]
#[sqlx(repr(json))]
struct Bar { b: i64 }

Basic idea of implementation is these derives would implement HasSqlType and add the Encode and Decode derives. Can proc derives be recursive (e.g., can this derive generate more derive macro invocations)?

`Row.try_get()` should check the decoding type against the type reported by the server

If there is a schema mismatch between when query!()'s checks were run and when the query is actually executed, we can have unexpected (but still technically safe) behavior in most of our Decode impls due to various assumptions (#7).

Since Postgres and MySQL both report to us the data types in the result, we should check that the type we've been asked to decode matches via <DB as HasSqlType<T>>::metadata().

When I connect to MySQL, I get "Access denied for user"

When I execute the program:

 sqlx::mysql::connect("mysql://rimer_test:xxxxxxx@xxxxxxxx:3306/rimer").await?,

DatabaseError { message: "Access denied for user 'rimer_test'@'106.37.187.148' (using password: YES)

but I can connect when I use another MySQL client:

use mysql_async::Pool;
Pool::new("mysql://rimer_test:[email protected]:3306/rimer");

Add support for transactions in code

Transactions can of course be done manually by obtaining a connection and executing BEGIN and then COMMIT/ROLLBACK, but having support for it in the crate makes it easier to work with.

Obvious options are:

An RAII struct that takes ownership of a connection, starts a transaction on creation and either commits or rolls back on drop.

A method on the pool that takes a closure, and calls the closure with a connection that is the transaction. This could then use a Result return from the closure to indicate commit or rollback, which makes the code easier to follow but does hide some details away.
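
For reference, the RAII option is close to the Transaction API that SQLx ended up shipping; a hedged sketch:

// `begin` issues BEGIN; dropping `tx` without committing rolls the transaction back.
let mut tx = pool.begin().await?;

sqlx::query("DELETE FROM users WHERE id = ?")
    .bind(user_id)
    .execute(&mut *tx)
    .await?;

tx.commit().await?;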

Audit for panics in `Decode` implementations

A lot of our Decode impls, primarily for integers, will panic if passed the wrong sized byte slices, which could happen if the query returned unexpected datatypes (e.g. due to a mismatch between compile time and runtime).
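
A hypothetical sketch of the fallible alternative (the function name and error type here are illustrative, not SQLx internals):

use std::convert::TryInto;

fn decode_i32(buf: &[u8]) -> Result<i32, String> {
    // Return an error instead of panicking on a wrong-sized slice.
    let bytes: [u8; 4] = buf
        .try_into()
        .map_err(|_| format!("expected 4 bytes for a 32-bit integer, got {}", buf.len()))?;
    Ok(i32::from_be_bytes(bytes))
}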

Better error message for unsupported types in query!

See #16


Current

unknown field type ID: 2950

Proposal

conditionally supported column type: UUID (2950); hint: try enabling the uuid feature

unknown column type: 3112; hint: custom types are not currently supported, try casting to a supported type with ::text

Usage with tokio

First off, the project looks awesome! I'm currently using mysql_async with tokio and not completely satisfied. I'd love to give this crate a try, but I saw you're using async_std. My question is: how much do you rely on it?

I would like to keep everything running on a single threadpool which I can control. I'm using other libraries bound to tokio, so I can't switch over. Would it be possible to decouple from async_std or perhaps provide some feature flags to support tokio as well? Again, it depends on what functionality you're using it for.

Derive macros for Encode and Decode

It's possible (I assume - I've not tried it yet!) to use custom types as inputs and outputs from queries by implementing Encode and Decode. This can then help to keep domain code cleaner, since the domain code itself often doesn't need the raw values but the database does.

Implementing these traits isn't too hard, but a derive macro for them just makes it a bit simpler.

For example:

#[derive(Encode, Decode, Debug)]
pub struct Username(String);

MySQL: ColumnDefinition.type is insufficient to tell the difference between character strings and binary strings

MySQL does not use the column type to differentiate between binary strings and character strings (see the enum_field_types enum): https://dev.mysql.com/doc/refman/8.0/en/c-api-data-structures.html


Instead, String and Vec<u8> can both potentially decode from any of MYSQL_TYPE_STRING, MYSQL_TYPE_VAR_STRING, or MYSQL_TYPE_BLOB.

We can tell the difference between binary and character strings by one of the following:

  • if the field is one of the above types, check if ColumnDefinition.flags has FieldFlags.BINARY set
  • ColumnDefinition.character_set will be 0x3f for binary strings and all others are some sort of text encoding

On that second element, we can support multiple character sets quite easily in the future:

  • utf8, utf8mb3 and utf8mb4 all can be decoded as UTF-8 (though utf8 and utf8mb3 forbid 4-byte sequences)
  • ucs2, utf16 and utf16le can all be decoded with String::from_utf16()
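
A hypothetical sketch of that detection rule (struct and constant names are illustrative, not SQLx's actual protocol types):

// Column metadata as sent in the MySQL protocol's column definition packet.
struct ColumnDefinition {
    character_set: u16, // collation id
    flags: u16,         // field flags bitmask
}

const BINARY_COLLATION: u16 = 0x3f; // the `binary` character set id
const BINARY_FLAG: u16 = 0x0080;    // the BINARY bit in the field flags

fn is_binary_string(col: &ColumnDefinition) -> bool {
    col.character_set == BINARY_COLLATION || (col.flags & BINARY_FLAG) != 0
}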

Rewrite the query! macro to undo stringly typed programming

Right now the sql!() macro accepts a string argument that essentially forces a user to write code in a string. In a language like rust where one can accept rust source tokens themselves as input, this is a missed opportunity:

  • Strings don't allow for internal syntactic or semantic code highlighting.
    The larger the query, the more desirable this becomes.
  • Programmers make more mistakes writing code in strings than regular code, but I suppose those are currently caught by virtue of macros working at compile time.
  • Conceptually it's a lot nicer to have a proc_macro that accepts code rather than a string

failed to connect to database: unsupported startup parameter: IntervalStyle

I'm having an issue connecting to my postgres database. I get the error "failed to connect to database: unsupported startup parameter: IntervalStyle". Running on a hosted postgres (v11) with tls. Any ideas as to why I'm getting this?

main.rs

#[async_std::main]
async fn main() -> Res<()> {
    let mut pool = PgPool::new(&env::var("DATABASE_URL")?).await?;

    let rows = sqlx::query!(
        "SELECT title
        FROM product
        LIMIT 10",
    )
    .fetch_all(&mut pool)
    .await?;

    for row in rows {
        println!("row {:?}", row.get("title"));
    }

    Ok(())
}

Cargo.toml

[dependencies]
async-std = { version = "1.4.0", features = ["attributes"] }
sqlx = { version = "0.2.0", features = ["postgres", "tls"] }

Use #![forbid(unsafe_code)] instead of #![deny(unsafe_code)]

#![deny(unsafe_code)] can be reversed with #![allow(unsafe_code)], while #![forbid(unsafe_code)] can't. With #![forbid(unsafe_code)] you just have to put it at the top of the crate root (above the imports, at the very top of the file). So with #![deny(unsafe_code)] you might be using unsafe code unknowingly.
