launchbadge / sqlx

🧰 The Rust SQL Toolkit. An async, pure Rust SQL crate featuring compile-time checked queries without a DSL. Supports PostgreSQL, MySQL, and SQLite.

License: Apache License 2.0
MySQL does not use the column type to differentiate between binary strings and character strings (see the enum_field_types enum): https://dev.mysql.com/doc/refman/8.0/en/c-api-data-structures.html

Instead, both String and Vec<u8> can potentially decode from any of MYSQL_TYPE_STRING, MYSQL_TYPE_VAR_STRING, or MYSQL_TYPE_BLOB.

We can tell the difference between binary and character strings by one of the following:

- ColumnDefinition.flags has FieldFlags.BINARY set
- ColumnDefinition.character_set is 0x3f for binary strings; all other values indicate some sort of text encoding

On that second point, we can support multiple character sets quite easily in the future:

- utf8, utf8mb3, and utf8mb4 can all be decoded as UTF-8 (though utf8 and utf8mb3 forbid 4-byte sequences)
- ucs2, utf16, and utf16le can all be decoded with String::from_utf16()
Now that proc macros can emit macro_rules! definitions, #[proc_macro_hack] should no longer be necessary: https://www.reddit.com/r/rust/comments/emwjs1/heres_a_pr_showing_how_easy_it_is_to_remove_a/

We're already wrapping in macro_rules! facades to make the documentation nicer anyway.
This would bump the minimum Rust version to 1.40 and so should be part of the 0.2 release.
When using the runtime-tokio and macros features, a compile-time error is generated:
error[E0599]: no method named `threaded_scheduler` found for type `tokio::runtime::builder::Builder` in the current scope
--> /home/isaac/.cargo/registry/src/github.com-1ecc6299db9ec823/sqlx-macros-0.2.1/src/lib.rs:32:14
|
32 | .threaded_scheduler()
| ^^^^^^^^^^^^^^^^^^ method not found in `tokio::runtime::builder::Builder`
A work-around is to include the following dependency in Cargo.toml:
tokio = { version = "0.2.9", default-features = false, features = [ "rt-threaded" ] }
TODO: fill this out when I'm not on a phone
Notable differences
Prepared statements are invalid after a SYNC (so they can be used but may not be cached).
Apparently does not support the startup parameter IntervalStyle. That may need to be deferred to a SET after connect.
One of my favorite libraries from the Node world is sql-template-strings. It allows users to take something like this:
pg.query('SELECT author FROM books WHERE name = $1 AND author = $2', [book, author])
And instead write this:
pg.query(SQL`SELECT author FROM books WHERE name = ${book} AND author = ${author}`)
Notice the placement of the book and author variables -- they are placed inline inside the query. sql-template-strings handles creating a prepared statement and ordering the parameters for query execution.

sqlx reminds me a lot of the first snippet, with the primary additional benefit being that we can get type-safety inferred from the query (which on its own is very cool!). I don't know a lot about Rust macros, but I was wondering if it would be possible to take the sqlx macro a step further and additionally implement sql-template-strings-style inline parameters.
Edit: Here's another example from the sql-template-strings README showing the benefit when a lot of parameters are used:
db.query(
'INSERT INTO books (name, author, isbn, category, recommended_age, pages, price) VALUES (?, ?, ?, ?, ?, ?, ?)',
[name, author, isbn, category, recommendedAge, pages, price]
)
// is better written as
db.query(SQL`
INSERT
INTO books
(name, author, isbn, category, recommended_age, pages, price)
VALUES (${name}, ${author}, ${isbn}, ${category}, ${recommendedAge}, ${pages}, ${price})
`)
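To illustrate, here is roughly the transformation such a macro would perform at expansion time. rewrite_template is a hypothetical helper written for this sketch only -- it is not part of sqlx or sql-template-strings, and the actual placeholder syntax an sqlx macro would use is an open question:

```rust
// Turn inline `${expr}` placeholders into numbered `$1..$n` bind
// parameters and return the expressions in bind order -- the same
// rewriting an sql-template-strings-style macro would do.

fn rewrite_template(template: &str) -> (String, Vec<String>) {
    let mut sql = String::new();
    let mut params = Vec::new();
    let mut rest = template;
    while let Some(start) = rest.find("${") {
        sql.push_str(&rest[..start]);
        let after = &rest[start + 2..];
        let end = after.find('}').expect("unclosed ${...} placeholder");
        params.push(after[..end].to_string());
        sql.push_str(&format!("${}", params.len()));
        rest = &after[end + 1..];
    }
    sql.push_str(rest);
    (sql, params)
}

fn main() {
    let (sql, params) =
        rewrite_template("SELECT author FROM books WHERE name = ${book} AND author = ${author}");
    assert_eq!(sql, "SELECT author FROM books WHERE name = $1 AND author = $2");
    assert_eq!(params, vec!["book", "author"]);
    println!("{}", sql);
}
```

A real implementation would of course work on Rust tokens inside the macro rather than on strings, binding each captured expression in order.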
I'm not really sure whether this is a silly user error or something coming from sqlx (thanks a bunch for this library by the way, really enjoying it!). I have a table called kudos that looks like this:
CREATE TABLE kudos (
id UUID PRIMARY KEY,
count INTEGER DEFAULT 1 NOT NULL
);
and in some server code a query running
let res = sqlx::query_as!(
Kudos,
r#"
INSERT INTO kudos (id)
VALUES ($1)
ON CONFLICT (id) DO NOTHING
RETURNING *
"#,
id,
)
and it works great. However, I wanted to add a column to keep track of the URL for a given id, so I updated the table like so:
@@ -1,4 +1,5 @@
CREATE TABLE kudos (
id UUID PRIMARY KEY,
+ url VARCHAR UNIQUE NOT NULL,
count INTEGER DEFAULT 1 NOT NULL
);
and the corresponding query:
let res = sqlx::query_as!(
Kudos,
r#"
- INSERT INTO kudos (id)
+ INSERT INTO kudos (id, url)
- VALUES ($1)
+ VALUES ($1, $2)
ON CONFLICT (id) DO NOTHING
RETURNING *
"#,
id,
req.path
)
However, doing this gives me an error.
cargo check
Checking program v0.1.0 (/path/to/thing)
error: "unknown type param ID: 1043"
--> src/handlers.rs:53:15
|
53 | let res = sqlx::query_as!(
| _______________^
54 | | Thing,
55 | | r#"
56 | | INSERT INTO kudos (id, url)
... |
62 | | req.path
63 | | )
| |_____^
|
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
I've confirmed in psql that the same query works fine by copy-pasting and replacing the placeholders. I'm at a bit of a loss as to what could be the cause of the error.
It's possible (I assume - I've not tried it yet!) to use custom types as inputs and outputs from queries by implementing Encode and Decode. This can then help to keep domain code cleaner, since the domain code itself often doesn't need the raw values but the database does.
Implementing these traits isn't too hard, but a derive macro for them just makes it a bit simpler.
For example:
#[derive(Encode, Decode, Debug)]
pub struct Username(String);
I have also posted a question that details the scenario:
https://stackoverflow.com/questions/59751778/method-not-found-in-where-it-seems-like-method-is-defined
I don't want to define what anything but 1-arity tuple structs means at this time. We are going to explore derives for custom types in the near future.
From #5

This would require proc_macro::SourceFile to be stabilized (or a similar API): rust-lang/rust#54725

Probably would require fixing rust-lang/rust#62892

We could make it work on nightly using --cfg procmacro2_semver_exempt, but that's a half-measure.
The following are ideas and not final.
Tuple Struct (of 1)
#[derive(sqlx::Type)]
// transparent indicates that this type is purely for
// documentation and it should act exactly as if it were the wrapped type in all aspects
#[sqlx(transparent)]
struct MyType(i32);
Enum
#[derive(sqlx::Type)]
// specify the OID explicitly for the type (and optionally the OID for when it's used as an array)
#[sqlx(postgres(oid = 1212, array_oid = 21341))]
// by default we encode and decode as a string and rely on first-class support
// for enumerations in the database server
#[sqlx(repr(i32))]
// by default we should snake_case the type name and look up its OID
// you should be able to use a container-level attribute to rename the type name
#[sqlx(rename = "myenum")]
enum MyEnum {
TheFoo,
// By default we should snake_case rename but also allow renames
#[sqlx(rename = "foothe")]
FooThe
}
Struct
// In Postgres you can define your own composites or structs
// We should be able to support this
#[derive(sqlx::Type)]
struct Foo { a: i32 }
// Additionally, perhaps something like the following for strongly typed JSON
// from Postgres or MySQL
#[derive(sqlx::Type)]
#[sqlx(repr(json))]
struct Bar { b: i64 }
The basic idea of the implementation is that these derives would implement HasSqlType and add the Encode and Decode derives. Can derives be recursive (e.g., can this derive generate more derive macro invocations)?
query!("SELECT 1 as \"'1\"")
lifetimes cannot start with a number
dependencies
[dependencies.sqlx]
git = 'https://github.com/launchbadge/sqlx.git'
features = ['mysql']
When I run this query:
println!("123");
let pool = sqlx::MySqlPool::new("mysql://xxx:xxx@xxx:xxx/rimer").await?;
let mut conn = pool.acquire().await?;
println!("456");
sqlx::query(r"DELETE FROM rimer_trigger_job WHERE job_id = ?")
.bind(job_id)
.fetch_one(&mut *conn)
.await?;
println!("789");
I got the following error:
123
456
thread 'test_schedule_job' panicked at 'called `Result::unwrap()` on an `Err` value: NotFound', src/libcore/result.rs:1165:5
A lot of our Decode impls, primarily for integers, will panic if passed wrong-sized byte slices, which could happen if the query returned unexpected data types (e.g. due to a mismatch between compile time and runtime).
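A sketch of what a non-panicking decode could look like. decode_i32 is illustrative, not sqlx's actual API; big-endian byte order is assumed, as in the Postgres binary format:

```rust
use std::convert::TryInto;

// Instead of indexing into the slice (which panics if the server sent
// fewer bytes than expected), use try_into and surface an error.

fn decode_i32(buf: &[u8]) -> Result<i32, String> {
    let bytes: [u8; 4] = buf
        .try_into()
        .map_err(|_| format!("expected 4 bytes for i32, got {}", buf.len()))?;
    Ok(i32::from_be_bytes(bytes))
}

fn main() {
    assert_eq!(decode_i32(&[0, 0, 0, 42]), Ok(42));
    assert!(decode_i32(&[0, 42]).is_err()); // short buffer: error, not panic
    println!("ok");
}
```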
bool (#35)

For a table called my_table with id -> Integer, value -> Integer, and data -> BYTEA (Postgres),
the following macro seems to fail when trying to insert a record (without BYTEA it seemed to work)
Fails with: "error: "unknown type param ID: 17""
let id : i32 = 2;
let value : i32 = 100;
let v: Vec<u8> = vec![0, 2, 4, 6];
let row = sqlx::query( r#"
INSERT INTO transactions ( id , value, data)
VALUES ( $1, $2 , $3)
RETURNING id, value, signature_or_message
"#,)
.bind(id).bind(value).bind(v)
.fetch_one(&mut tx)
.await.unwrap();
tx.commit().await.unwrap();
This works fine:
let id : i32 = 2;
let value : i32 = 100;
let v: Vec<u8> = vec![0, 2, 4, 6];
let rec = sqlx::query!(
r#"
INSERT INTO transactions ( id , value, signature_or_message)
VALUES ( $1, $2 , $3)
RETURNING id, value, signature_or_message
"#,
0,
0,
v
)
.fetch_one(&mut tx)
.await
.unwrap();
tx.commit().await.unwrap();
I'm currently unable to authenticate with a MySQL server because the password contains the @ character and it is not properly percent-decoded before being sent to the server. Looking at diesel, it does the following before sending the password off to the server:
let password = match url.password() {
Some(password) => Some(decode_into_cstring(password)?),
None => None,
};
(decode_into_cstring here)
Relevant line: https://github.com/launchbadge/sqlx/blob/master/sqlx-core/src/mysql/connection.rs#L554
Bringing in the percent-encoding crate, this could be changed to something like

let password = url.password().map(percent_decode_str).unwrap_or_default();

then adjusting the calls further down to use password.as_ref() or equivalent. I can most likely make this PR myself relatively soon, but if someone wants to do it before I get a chance, feel free!
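For illustration, a minimal hand-rolled percent-decoder showing the behavior in question; the real fix should of course use the percent-encoding crate's percent_decode_str as suggested above:

```rust
// Decode %XX escapes in a URL component. An `@` in a password must be
// written as %40 in the connection URL, so the driver has to decode it
// before sending the password to the server.

fn percent_decode(input: &str) -> String {
    let bytes = input.as_bytes();
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'%' && i + 2 < bytes.len() {
            // Try to parse the two hex digits after '%'.
            if let Ok(b) = u8::from_str_radix(
                std::str::from_utf8(&bytes[i + 1..i + 3]).unwrap_or(""),
                16,
            ) {
                out.push(b);
                i += 3;
                continue;
            }
        }
        out.push(bytes[i]);
        i += 1;
    }
    String::from_utf8_lossy(&out).into_owned()
}

fn main() {
    assert_eq!(percent_decode("p%40ssw0rd"), "p@ssw0rd");
    assert_eq!(percent_decode("plain"), "plain");
    println!("ok");
}
```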
First off, the project looks awesome! I'm currently using mysql_async with tokio and am not completely satisfied. I'd love to give this crate a try, but I saw you're using async_std. My question is: how much do you rely on it?

I would like to keep everything running on a single threadpool which I can control. I'm using other libraries bound to tokio, so I can't switch over. Would it be possible to decouple from async_std, or perhaps provide some feature flags to support tokio as well? Again, it depends on what functionality you're using it for.
Hello,
I'm trying to encode my custom types by implementing the HasSqlType and Encode traits:
impl HasSqlType<MyType> for Postgres {
fn metadata() -> PgTypeMetadata {
PgTypeMetadata::binary(25, 1009)
}
}
impl Encode<Postgres> for MyType {
fn encode(&self, buf: &mut Vec<u8>) {
match self {
MyType::Something => buf.extend_from_slice(b"Something"),
MyType::Other => buf.extend_from_slice(b"Other"),
}
}
}
But it appears that the PgTypeMetadata type is not exposed as public. Should it be, or is there another way to do this?
Hello,
I'm trying to use sqlx with MySQL, but I can't seem to get it to work to get the values out of the row.
let rec = sqlx::query(
"
SELECT username, email
FROM users
WHERE id = $1
LIMIT 1
"
).bind(user_id)
.fetch_one(&mut pool)
.await
.unwrap();
#[derive(serde::Serialize)]
struct GetCurrentUserResponseBody {
user: models::user::User,
}
println!("{}", rec.get(0));
println!("{}", rec.get("username"));
format!("{}", rec.len);
Response::new(200)
.body_json(&GetCurrentUserResponseBody {
user: models::user::User {
username: rec.username,
email: rec.email,
token,
},
})
.unwrap()
error[E0599]: no method named `get` found for type `sqlx_core::mysql::row::MySqlRow` in the current scope
--> src/routes/user.rs:54:24
|
54 | println!("{}", rec.get(0));
| ^^^ method not found in `sqlx_core::mysql::row::MySqlRow`
|
= help: items from traits can only be used if the trait is in scope
= note: the following trait is implemented but not in scope; perhaps add a `use` for it:
`use sqlx_core::row::Row;`
error[E0599]: no method named `get` found for type `sqlx_core::mysql::row::MySqlRow` in the current scope
--> src/routes/user.rs:55:24
|
55 | println!("{}", rec.get("username"));
| ^^^ method not found in `sqlx_core::mysql::row::MySqlRow`
|
= help: items from traits can only be used if the trait is in scope
= note: the following trait is implemented but not in scope; perhaps add a `use` for it:
`use sqlx_core::row::Row;`
error[E0609]: no field `len` on type `sqlx_core::mysql::row::MySqlRow`
--> src/routes/user.rs:54:23
|
54 | format!("{}", rec.len);
| ^^^ unknown field
error[E0609]: no field `username` on type `sqlx_core::mysql::row::MySqlRow`
--> src/routes/user.rs:59:31
|
59 | username: rec.username,
| ^^^^^^^^ unknown field
error[E0609]: no field `email` on type `sqlx_core::mysql::row::MySqlRow`
--> src/routes/user.rs:60:28
|
60 | email: rec.email,
| ^^^^^ unknown field
What am I doing wrong? I even tried the get method but nothing is working.
PS: Is there anywhere to get more direct support, instead of creating issues for this kind of problem? A community channel maybe?
A conversation on Discord suggests that it doesn't work at the moment
In PostgreSQL, custom types can be created with CREATE TYPE (e.g. enums). To interface with such types using sqlx, one would have to (amongst other things) implement HasSqlType<CustomType> for Postgres, which in turn requires being able to construct a PgTypeInfo. PgTypeInfo currently doesn't have a public constructor or all-public fields, though.
If there is a schema mismatch between when query!()'s checks were run and when the query is actually executed, we can have unexpected (but still technically safe) behavior in most of our Decode impls due to various assumptions (#7).

Since Postgres and MySQL both report to us the data types in the result, we should check that the type we've been asked to decode matches via <DB as HasSqlType<T>>::metadata().
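A sketch of that check under a deliberately simplified trait. ExpectedTypeId stands in for HasSqlType here; the oids used (23 = int4, 25 = text) are the standard built-in Postgres oids:

```rust
// Before decoding a column into T, compare the type id the server
// reported against the one we expect for T. On mismatch, return an
// error instead of decoding garbage.

trait ExpectedTypeId {
    const OID: u32;
}

struct I32Type;
impl ExpectedTypeId for I32Type {
    const OID: u32 = 23; // int4
}

fn check_type<T: ExpectedTypeId>(reported_oid: u32) -> Result<(), String> {
    if reported_oid == T::OID {
        Ok(())
    } else {
        Err(format!(
            "type mismatch: expected oid {}, server reported {}",
            T::OID,
            reported_oid
        ))
    }
}

fn main() {
    assert!(check_type::<I32Type>(23).is_ok());
    assert!(check_type::<I32Type>(25).is_err()); // schema drifted since compile time
    println!("ok");
}
```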
dependencies
[dependencies.sqlx]
git = 'https://github.com/launchbadge/sqlx.git'
features = ['mysql']
code
sqlx::query(
"SELECT ... from ...
where id = ?",
)
.bind(id)
.fetch_optional(&mut *conn)
.await?
.map(|row| .......)
error backtrace
thread 'test_schedule_job' panicked at 'assertion failed: `(left == right)`
left: `8174`,
right: `8192`', /Users/liangyongrui/.cargo/git/checkouts/sqlx-f05f33ba4f5c3036/5c532a8/sqlx-core/src/io/buf_stream.rs:77:17
stack backtrace:
0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
1: core::fmt::write
2: std::io::Write::write_fmt
3: std::panicking::default_hook::{{closure}}
4: std::panicking::default_hook
5: std::panicking::rust_panic_with_hook
6: std::panicking::continue_panic_fmt
7: std::panicking::begin_panic_fmt
8: sqlx_core::io::buf_stream::BufStream<S>::peek::{{closure}}
9: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
10: std::future::set_task_context
11: <std::future::GenFuture<T> as core::future::future::Future>::poll
12: std::future::poll_with_tls_context::{{closure}}
13: std::future::get_task_context
14: std::future::poll_with_tls_context
15: sqlx_core::mysql::connection::MySqlConnection::try_receive::{{closure}}
16: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
17: std::future::set_task_context
18: <std::future::GenFuture<T> as core::future::future::Future>::poll
19: std::future::poll_with_tls_context::{{closure}}
20: std::future::get_task_context
21: std::future::poll_with_tls_context
22: sqlx_core::mysql::connection::MySqlConnection::receive::{{closure}}
23: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
24: std::future::set_task_context
25: <std::future::GenFuture<T> as core::future::future::Future>::poll
26: std::future::poll_with_tls_context::{{closure}}
27: std::future::get_task_context
28: std::future::poll_with_tls_context
29: sqlx_core::mysql::executor::<impl sqlx_core::mysql::connection::MySqlConnection>::prepare_with_cache::{{closure}}
30: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
31: std::future::set_task_context
32: <std::future::GenFuture<T> as core::future::future::Future>::poll
33: std::future::poll_with_tls_context::{{closure}}
34: std::future::get_task_context
35: std::future::poll_with_tls_context
36: sqlx_core::mysql::executor::<impl sqlx_core::mysql::connection::MySqlConnection>::fetch::{{closure}}
37: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
38: std::future::set_task_context
39: <std::future::GenFuture<T> as core::future::future::Future>::poll
40: <async_stream::async_stream::AsyncStream<T,U> as futures_core::stream::Stream>::poll_next
41: <core::pin::Pin<P> as futures_core::stream::Stream>::poll_next
42: <S as futures_core::stream::TryStream>::try_poll_next
43: futures_util::stream::try_stream::TryStreamExt::try_poll_next_unpin
44: <futures_util::stream::try_stream::try_next::TryNext<St> as core::future::future::Future>::poll
45: std::future::poll_with_tls_context::{{closure}}
46: std::future::get_task_context
47: std::future::poll_with_tls_context
48: sqlx_core::executor::Executor::fetch_optional::{{closure}}
49: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
50: std::future::set_task_context
51: <std::future::GenFuture<T> as core::future::future::Future>::poll
52: <core::pin::Pin<P> as core::future::future::Future>::poll
53: std::future::poll_with_tls_context::{{closure}}
54: std::future::get_task_context
55: std::future::poll_with_tls_context
56: sqlx_core::query::Query<DB,P>::fetch_optional::{{closure}}
57: <std::future::GenFuture<T> as core::future::future::Future>::poll::{{closure}}
58: std::future::set_task_context
59: <std::future::GenFuture<T> as core::future::future::Future>::poll
60: std::future::poll_with_tls_context::{{closure}}
61: std::future::get_task_context
62: std::future::poll_with_tls_context
...
buf_stream.rs 77:17
// If we are out of space to write to in the read buffer,
// we reset the indexes
if self.rbuf.len() < (self.rbuf_windex + cnt) {
// TODO: This assumes that all data is consumed when we need to re-allocate
debug_assert_eq!(self.rbuf_rindex, self.rbuf_windex);
self.rbuf_rindex = 0;
self.rbuf_windex = 0;
}
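A small model of the reset logic quoted above, with a sketch of one possible fix: slide the unconsumed bytes to the front of the buffer instead of asserting that everything was consumed. BufModel is illustrative only, not sqlx's actual type:

```rust
// The debug_assert in sqlx fires when the buffer is full but not all
// data was consumed (rindex != windex). Preserving the unconsumed
// bytes avoids that assumption.

struct BufModel {
    rbuf: Vec<u8>,
    rindex: usize, // read position
    windex: usize, // write position
}

impl BufModel {
    fn make_room(&mut self, cnt: usize) {
        if self.rbuf.len() < self.windex + cnt {
            // Instead of debug_assert_eq!(self.rindex, self.windex),
            // move any unconsumed bytes to the front and adjust indexes.
            self.rbuf.copy_within(self.rindex..self.windex, 0);
            self.windex -= self.rindex;
            self.rindex = 0;
        }
    }
}

fn main() {
    // Full 8-byte buffer with 2 unconsumed bytes at the end.
    let mut b = BufModel { rbuf: vec![0; 8], rindex: 6, windex: 8 };
    b.rbuf[6] = 0xAA;
    b.rbuf[7] = 0xBB;
    b.make_room(4); // would trip the debug_assert in the original code
    assert_eq!((b.rindex, b.windex), (0, 2));
    assert_eq!(&b.rbuf[0..2], &[0xAA, 0xBB]); // data preserved
    println!("ok");
}
```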
Currently, in v0.1.0 MySQL only works with "empty" passwords.
#![deny(unsafe_code)] can be reversed with #![allow(unsafe_code)], while #![forbid(unsafe_code)] can't. With #![forbid(unsafe_code)] you just have to put it at the top of each file (above the imports, so at the very top of the file). So with #![deny(unsafe_code)] you might be using unsafe code unknowingly.
I'm new to this library and am trying to write a MySQL example.
Cargo.toml:
async-std = { version = "1.4.0", features = [ "attributes" ] }
sqlx = {version = "0.1.3", features = [ "mysql" ]}
anyhow = "1.0.26"
#[async_std::main]
async fn main() -> anyhow::Result<()> {
let pool_result = MySqlPool::new(
"mysql://root:123456@localhost:3306/test"
).await;
let mut pool = match pool_result {
Ok(p) => p,
Err(_) => panic!("pool create failed")
};
let user_id = 1u32;
let stat = sqlx::query("SELECT * FROM hello WHERE id = $1").bind(user_id);
// blocked here, and will never return
let row_result = stat.fetch_one(&mut pool).await;
// can not reach here
let row = match row_result {
Ok(data) => data,
Err(_) => panic!("panic at row")
};
Ok(())
}
From this example, maybe MySQL support is poor, or maybe there is a bug in this library. I'm opening this issue to request better MySQL support.
When I execute this program:
sqlx::mysql::connect("mysql://rimer_test:xxxxxxx@xxxxxxxx:3306/rimer").await?,
DatabaseError { message: "Access denied for user 'rimer_test'@'106.37.187.148' (using password: YES)
but I can connect when I use another MySQL client:
use mysql_async::Pool;
Pool::new("mysql://rimer_test:[email protected]:3306/rimer");
Hi,
Thanks for your lib.
I got this error while running the example with tide using mysql features.
"failed to connect to database: Unknown or incorrect time zone: 'UTC'"
I fixed it with this command:
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql -p
found here https://dba.stackexchange.com/a/120946
Just to let you know; maybe it needs something on your end, maybe not.
Mysql/MariaDB : mysql Ver 15.1 Distrib 10.3.18-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
Regards
Small nit-- just thought others would find the correction helpful. Thanks for the neat library! :)
When there are more than 10 parameters, I get an error:
error[E0271]: type mismatch resolving <sqlx_core::postgres::database::Postgres as sqlx_core::database::Database>::Arguments == (&std::string::String, &std::string::String, &i32, &i32, &i32, &i32, &std::string::String, &std::string::String, &i64, &i32)
expected struct sqlx_core::postgres::arguments::PgArguments, found tuple
Transactions can of course be done manually by obtaining a connection and executing BEGIN and then COMMIT/ROLLBACK, but having support for it in the crate makes it easier to work with.
Obvious options are:
An RAII struct that takes ownership of a connection, starts a transaction on creation and either commits or rolls back on drop.
A method on the pool that takes a closure, and calls the closure with a connection that is the transaction. This could then use a Result return from the closure to indicate commit or rollback, which makes the code easier to follow but does hide some details away.
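The RAII option can be sketched against a mock connection that just records the SQL it executes. MockConn and this Transaction type are illustrative only, not sqlx's API:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Mock connection that logs executed statements.
#[derive(Default)]
struct MockConn {
    log: Vec<String>,
}

impl MockConn {
    fn execute(&mut self, sql: &str) {
        self.log.push(sql.to_string());
    }
}

// RAII guard: BEGIN on creation, ROLLBACK on drop unless committed.
struct Transaction {
    conn: Rc<RefCell<MockConn>>,
    committed: bool,
}

impl Transaction {
    fn begin(conn: Rc<RefCell<MockConn>>) -> Self {
        conn.borrow_mut().execute("BEGIN");
        Transaction { conn, committed: false }
    }

    fn commit(mut self) {
        self.conn.borrow_mut().execute("COMMIT");
        self.committed = true; // Drop will see this and skip the rollback
    }
}

impl Drop for Transaction {
    fn drop(&mut self) {
        if !self.committed {
            self.conn.borrow_mut().execute("ROLLBACK");
        }
    }
}

fn main() {
    let conn = Rc::new(RefCell::new(MockConn::default()));
    {
        let _tx = Transaction::begin(conn.clone()); // dropped without commit
    }
    Transaction::begin(conn.clone()).commit();
    assert_eq!(conn.borrow().log, vec!["BEGIN", "ROLLBACK", "BEGIN", "COMMIT"]);
    println!("ok");
}
```

The nice property of the RAII form is that an early return or panic still rolls the transaction back.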
Simply reproducing the case, with this table:
CREATE TABLE `accounts` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`field2` varchar(30) NOT NULL DEFAULT '',
`field3` varchar(255) NOT NULL DEFAULT '',
`field4` tinyint(4) NOT NULL DEFAULT '0',
`field5` timestamp NULL DEFAULT NULL,
`field6` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`field7` tinyint(4) NOT NULL DEFAULT '0',
`field8` text,
`field9` tinyint(4) NOT NULL DEFAULT '0',
`field10` tinytext,
`field11` tinytext,
`field12` timestamp NOT NULL DEFAULT '2000-01-01 00:00:00',
`field13` tinyint(4) DEFAULT NULL,
`field14` int(11) DEFAULT '0',
`field15` int(11) DEFAULT NULL,
`field16` tinyint(4) NOT NULL DEFAULT '10',
`field17` varchar(64) DEFAULT NULL,
`field18` int(11) NOT NULL DEFAULT '0',
`field19` int(11) NOT NULL DEFAULT '0',
`field20` timestamp NULL DEFAULT NULL,
`field21` timestamp NULL DEFAULT NULL,
`field22` int(3) DEFAULT NULL,
`field23` int(6) NOT NULL DEFAULT '0',
`field24` int(6) NOT NULL DEFAULT '0',
`field25` bigint(20) NOT NULL DEFAULT '0',
`field26` int(11) DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `field2` (`field2`),
KEY `ranking1` (`id`,`field7`,`field13`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC
insert into accounts (field2) values ('rust');
let row = sqlx::query("SELECT * FROM accounts")
.fetch_one(&mut conn)
.await?;
problem: thread 'main' panicked at 'index out of bounds: the len is 58 but the index is 235', cargo/registry/src/github.com-1ecc6299db9ec823/sqlx-core-0.2.0/src/mysql/protocol/row.rs:107:65
Row::get is a pretty common method when using the query API, and requiring a trait import for it is less-than-ideal. Also, PgRow and MySqlRow are nearly identical in implementation.

Proposal:

PgRow and MySqlRow become PgDataRow and MySqlDataRow, which only expose getting values by index and only return byte slices. Row becomes a struct that contains the once-duplicated functionality.
pub struct Row<R: DataRow> {
inner: R,
shared: Arc<SharedRow>,
}
struct SharedRow<DB: Database> {
column_names: Arc<HashMap<Box<str>, usize>>,
column_types: Arc<[DB::TypeId]>,
}
Row as well (see #8)

See #16
Current

unknown field type ID: 2950

Proposal

conditionally supported column type: UUID (2950); hint: try enabling the uuid feature

unknown column type: 3112; hint: custom types are not currently supported, try casting to a supported type with ::text
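The proposed hints could be sketched as a plain mapping function. describe_unknown_type is hypothetical; 2950 is the real built-in Postgres oid for uuid, and the message text just mirrors the proposal above:

```rust
// Map an unrecognized column type oid to a friendlier error message.

fn describe_unknown_type(oid: u32) -> String {
    match oid {
        2950 => "conditionally supported column type: UUID (2950); \
                 hint: try enabling the `uuid` feature"
            .to_string(),
        _ => format!(
            "unknown column type: {}; hint: custom types are not currently \
             supported, try casting to a supported type with `::text`",
            oid
        ),
    }
}

fn main() {
    assert!(describe_unknown_type(2950).contains("uuid"));
    assert!(describe_unknown_type(3112).contains("::text"));
    println!("ok");
}
```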
Support taking and returning serde_json::Value (maybe others as well?) in queries. I think this would be a Postgres-only feature, right?
I was looking through the source code a bit, but wasn't certain if UUID is a natively supported field.
I have some basic tables that look like this:
CREATE TYPE operation AS ENUM ('create', 'update', 'delete');
CREATE TABLE IF NOT EXISTS events (
event_id uuid PRIMARY KEY NOT NULL, -- id of event
namespace uuid NOT NULL, -- user's "namespace" where their events are stored
event_time bigint NOT NULL, -- when event occurred
op operation NOT NULL, -- operation that occurred
acted_on uuid NOT NULL -- key of item that was acted on
);
I'm trying to do a test query on it to see if the whole setup works. I have a local instance of the db set up, and the DATABASE_URL properly set, but when I compile I get an error from some pretty simple code:
async fn get_all_in_ns(pool: PgPool) -> anyhow::Result<()> {
let rows = sqlx::query!(
r#"SELECT
event_id, namespace, event_time, op, acted_on
FROM events
WHERE event_time > 5"#,
)
.fetch(&mut pool)
.await
.unwrap();
Ok(())
}
-> "unknown field type ID: 2950"
I haven't seen any other errors like this, and my assumption is that sqlx doesn't support Postgres UUIDs, but I'm not certain. Could someone advise? If I can use statically typed SQL queries, that would be a big win. We use UUIDs extensively via uuid-ossp.
This was a popular request in the original Reddit announcement: https://www.reddit.com/r/rust/comments/egpw7g/announcing_sqlx_a_fully_asynchronous_pure_rust/fc8ldpw/?context=1000
The idea is to allow expansion of query!(), et al., without having to connect directly to the database, by running the PREPAREs for all queries in a separate step and caching the result.
The solution in my head looks something like this:
- A new cargo subcommand, cargo-sqlx, which links sqlx-core internally.
- The user runs cargo sqlx prepare in their project, which invokes cargo check with QUERY_PREPARE_OUT=$OUT_DIR/sqlx set in the environment.
  - We need to figure out which cargo they're using so we build with the right toolchain, e.g. if the user runs cargo +nightly sqlx prepare when their default toolchain is stable.
- When query!() and friends see QUERY_PREPARE_OUT in the environment, they don't perform normal expansion: instead they run PREPARE on their query strings and output the hash of the query and the prepare result as randomly named JSON files in QUERY_PREPARE_OUT.
- cargo-sqlx then collects these files from QUERY_PREPARE_OUT and writes them to a single JSON file in a predetermined location in the project (TBD), deduplicating queries whose PREPARE data is the same.
- When query!() is later expanded normally, the macro checks that file for the hash of its query string and uses the cached data instead of connecting to the database.
  - If the query isn't found there, the user needs to run cargo sqlx prepare again.
- We also add cargo sqlx prepare --check, which does the opposite; it reruns PREPARE for the cached query strings and asserts that the result is the same.

We can then later extend the cargo-sqlx command to add migrations. With migrations in place, we have a stronger guarantee that the schema we compiled and cached the PREPARE data against is the same as the one we are running against.
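The cache-lookup step at the heart of this could be sketched as follows. This assumes queries are keyed by a hash of the query string; DefaultHasher stands in for whatever stable hash the real implementation would use, and the cached JSON payload here is a made-up placeholder:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hash a query string to use as the cache key.
fn query_hash(sql: &str) -> u64 {
    let mut h = DefaultHasher::new();
    sql.hash(&mut h);
    h.finish()
}

// Look up the cached PREPARE data for a query. On a miss, the macro
// would error and tell the user to run `cargo sqlx prepare` again.
fn lookup<'a>(cache: &'a HashMap<u64, String>, sql: &str) -> Option<&'a String> {
    cache.get(&query_hash(sql))
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert(
        query_hash("SELECT 1"),
        r#"{"columns":[{"name":"?column?","type_oid":23}]}"#.to_string(),
    );
    assert!(lookup(&cache, "SELECT 1").is_some());
    assert!(lookup(&cache, "SELECT 2").is_none()); // needs a fresh prepare run
    println!("ok");
}
```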
Questions:
- Is prepare the right name for the sub-subcommand?
- Should we support a sqlx.toml to change the location and possibly other behaviors?
- Should the macros look for QUERY_PREPARE_OUT in the environment or look at a cfg flag? I'm thinking the former might fail to trigger rebuilds when set if Cargo doesn't know to watch for it, but the latter might trigger a rebuild of the whole dep graph.
  - cargo-sqlx could just touch src/ to trigger a rebuild, no?
- How do we ensure the PREPARE data we saved still matches what we get back from the database server? We've been talking about this for a while; we just punted on it for 0.1.
It'd be extremely useful for working with query_as().
Hi!
COPY is the most efficient way to import data into a PostgreSQL database (it is less flexible than INSERT, but has significantly less overhead).
https://www.postgresql.org/docs/current/sql-copy.html
https://www.postgresql.org/docs/current/populate.html#POPULATE-COPY-FROM
It would be great to have a low-level interface to stream data to/from PostgreSQL using COPY ... FROM STDIN and COPY ... TO STDOUT statements -- for example, similar to https://docs.rs/tokio-postgres/0.5.1/tokio_postgres/struct.Client.html#method.copy_in and https://docs.rs/tokio-postgres/0.5.1/tokio_postgres/struct.Client.html#method.copy_out.
I'm having an issue connecting to my Postgres database. I get the error "failed to connect to database: unsupported startup parameter: IntervalStyle". Running on a hosted Postgres (v11) with TLS. Any ideas as to why I'm getting this?
main.rs
#[async_std::main]
async fn main() -> Res<()> {
let mut pool = PgPool::new(&env::var("DATABASE_URL")?).await?;
let rows = sqlx::query!(
"SELECT title
FROM product
LIMIT 10",
)
.fetch_all(&mut pool)
.await?;
for row in rows {
println!("row {:?}", row.get("title"));
}
Ok(())
}
Cargo.toml
[dependencies]
async-std = { version = "1.4.0", features = ["attributes"] }
sqlx = { version = "0.2.0", features = ["postgres", "tls"] }
Right now the sql!() macro accepts a string argument, which essentially forces the user to write code in a string. In a language like Rust, where a macro can accept source tokens themselves as input, this is a missed opportunity.