
tiberius's Introduction


Quickstart   •   Website   •   Docs   •   Examples   •   Blog   •   Discord   •   Twitter

What is Prisma?

Prisma is a next-generation ORM that consists of these tools:

  • Prisma Client: Auto-generated and type-safe query builder for Node.js & TypeScript
  • Prisma Migrate: Declarative data modeling & migration system
  • Prisma Studio: GUI to view and edit data in your database

Prisma Client can be used in any Node.js or TypeScript backend application (including serverless applications and microservices). This can be a REST API, a GraphQL API, a gRPC API, or anything else that needs a database.

The Prisma ORM can also be further extended with additional Prisma products.

Getting started

The fastest way to get started with Prisma is by following the Quickstart (5 min).

The Quickstart is based on a preconfigured SQLite database. You can also get started with your own database (PostgreSQL or MySQL) by following one of the database setup guides in the documentation.

How Prisma works

This section provides a high-level overview of how Prisma works and its most important technical components. For a more thorough introduction, visit the Prisma documentation.

The Prisma schema

Every project that uses a tool from the Prisma toolkit starts with a Prisma schema file. The Prisma schema allows developers to define their application models in an intuitive data modeling language. It also contains the connection to a database and defines a generator:

// Data source
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

// Generator
generator client {
  provider = "prisma-client-js"
}

// Data model
model Post {
  id        Int     @id @default(autoincrement())
  title     String
  content   String?
  published Boolean @default(false)
  author    User?   @relation(fields: [authorId], references: [id])
  authorId  Int?
}

model User {
  id    Int     @id @default(autoincrement())
  email String  @unique
  name  String?
  posts Post[]
}

In this schema, you configure three things:

  • Data source: Specifies your database connection (via an environment variable)
  • Generator: Indicates that you want to generate Prisma Client
  • Data model: Defines your application models

The Prisma data model

On this page, the focus is on the data model. You can learn more about Data sources and Generators on the respective docs pages.

Functions of Prisma models

The data model is a collection of models. A model has two major functions:

  • Represent a table in the underlying database
  • Provide the foundation for the queries in the Prisma Client API

Getting a data model

There are two major workflows for "getting" a data model into your Prisma schema:

  • Generating the data model by introspecting a database
  • Manually writing the data model and mapping it to the database with Prisma Migrate

Once the data model is defined, you can generate Prisma Client, which exposes CRUD and more queries for the defined models. If you're using TypeScript, you'll get full type-safety for all queries (even when retrieving only a subset of a model's fields).


Accessing your database with Prisma Client

Generating Prisma Client

The first step when using Prisma Client is installing its npm package:

npm install @prisma/client

Note that the installation of this package invokes the prisma generate command which reads your Prisma schema and generates the Prisma Client code. The code will be located in node_modules/.prisma/client, which is exported by node_modules/@prisma/client/index.d.ts.

After you change your data model, you'll need to manually re-generate Prisma Client to ensure the code inside node_modules/.prisma/client gets updated:

npx prisma generate

Refer to the documentation for more information about "generating the Prisma client".

Using Prisma Client to send queries to your database

Once the Prisma Client is generated, you can import it in your code and send queries to your database. This is what the setup code looks like.

Import and instantiate Prisma Client

You can import and instantiate Prisma Client as follows:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

or

const { PrismaClient } = require('@prisma/client')

const prisma = new PrismaClient()

Now you can start sending queries via the generated Prisma Client API. Here are a few sample queries. Note that all Prisma Client queries return plain old JavaScript objects.

Learn more about the available operations in the Prisma Client docs or watch this demo video (2 min).

Retrieve all User records from the database

// Run inside `async` function
const allUsers = await prisma.user.findMany()

Include the posts relation on each returned User object

// Run inside `async` function
const allUsers = await prisma.user.findMany({
  include: { posts: true },
})

Filter all Post records that contain "prisma"

// Run inside `async` function
const filteredPosts = await prisma.post.findMany({
  where: {
    OR: [{ title: { contains: 'prisma' } }, { content: { contains: 'prisma' } }],
  },
})

Create a new User and a new Post record in the same query

// Run inside `async` function
const user = await prisma.user.create({
  data: {
    name: 'Alice',
    email: 'alice@prisma.io',
    posts: {
      create: { title: 'Join us for Prisma Day 2021' },
    },
  },
})

Update an existing Post record

// Run inside `async` function
const post = await prisma.post.update({
  where: { id: 42 },
  data: { published: true },
})

Usage with TypeScript

Note that when using TypeScript, the result of this query will be statically typed so that you can't accidentally access a property that doesn't exist (and any typos are caught at compile-time). Learn more about leveraging Prisma Client's generated types on the Advanced usage of generated types page in the docs.

Community

Prisma has a large and supportive community of enthusiastic application developers. You can join us on Discord and here on GitHub.

Security

If you have a security issue to report, please contact us at security@prisma.io.

Support

Ask a question about Prisma

You can ask questions and initiate discussions about Prisma-related topics in the prisma repository on GitHub.

👉 Ask a question

Create a bug report for Prisma

If you see an error message or run into an issue, please make sure to create a bug report! You can find best practices for creating bug reports (like including additional debugging output) in the docs.

👉 Create bug report

Submit a feature request

If Prisma currently doesn't have a certain feature, be sure to check out the roadmap to see if this is already planned for the future.

If the feature on the roadmap is linked to a GitHub issue, please make sure to leave a 👍 reaction on the issue and ideally a comment with your thoughts about the feature!

👉 Submit feature request

Contributing

Refer to our contribution guidelines and Code of Conduct for contributors.



tiberius's Issues

SQL Server: trustServerCertificate=false doesn't work

If you try to run

npx prisma db push  --preview-feature --force

You get:

Environment variables loaded from prisma/.env
Prisma schema loaded from prisma/schema.prisma
Datasource "ms"
warn "nativeTypes" is a preview feature. There may be bugs and it's not recommended to use it in production environments.
Error: P1011: Error opening a TLS connection: The TLS settings didn't allow the connection to be established. Please review your connection string. Note that TLS encryption is currently not supported on macOS devices. You can subscribe to the following issue: https://github.com/prisma/prisma-engines/issues/1140 (error: `error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1915: (unable to get local issuer certificate)`)

This is problematic because when you provision a database from Azure, this is the default configuration:

jdbc:sqlserver://{host}:{port};database={database};user={user};password={your_password_here};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;

Setting trustServerCertificate=true fixes the problem.

Version:

Environment variables loaded from prisma/.env
@prisma/cli          : 2.15.0-dev.71
@prisma/client       : 2.15.0-dev.71
Current platform     : darwin
Query Engine         : query-engine 94e94e327ccbd6d374c9a079ed59f89b077348e4 (at node_modules/@prisma/engines/query-engine-darwin)
Migration Engine     : migration-engine-cli 94e94e327ccbd6d374c9a079ed59f89b077348e4 (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core 94e94e327ccbd6d374c9a079ed59f89b077348e4 (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary        : prisma-fmt 94e94e327ccbd6d374c9a079ed59f89b077348e4 (at node_modules/@prisma/engines/prisma-fmt-darwin)
Studio               : 0.337.0
Preview Features     : microsoftSqlServer, nativeTypes

Possibility to use Tiberius with different runtimes

It would be great if we were able to use Tiberius with runtimes other than Tokio. This means no spawning, and it also means it should be possible to inject the underlying transport into the connection. The TcpStream and TlsStream must be choosable by the user, as long as the streams implement AsyncRead and AsyncWrite from the futures crate. The Tokio counterparts can deal with the abstractions by using Tokio's compat module.

For making it easier for the user, we could provide separate crates:

  • tiberius-tokio
  • tiberius-async-std

Or additionally this can be handled with feature flags.

The other part needing replacement is the use of tokio_codec. futures_codec is a good replacement using the standard AsyncRead and AsyncWrite.

One should also consider replacing the bytes crate while doing the refactoring, since bytes is quite unsafe and triggers segmentation faults when used incorrectly. One crate to experiment with is byte_pool, a safe replacement for bytes. Be aware of writing the corresponding byte extensions for SqlReadBytes.

Transactions

We need a higher-level system for managing transactions. It is possible to trigger transactions with the normal BEGIN/COMMIT/ROLLBACK mechanism, but an abstraction that handles proper cleanup, committing, rolling back and so on, and offers the same query mechanisms as the Client would be useful.

The transaction should be started from the Client. When started, Rust's ownership system should prevent the usage of Client until the Transaction is dropped. The Transaction should provide at least the basics: execute and query, and additionally rollback and commit methods.

When the Transaction is dropped, it should know whether the user finished it properly using the rollback or commit method, and if not, message the client to trigger a rollback before allowing more queries to be handled.

The last part can be done in many ways; the tricky thing is that we'd want to issue the ROLLBACK query from the Drop trait, which is not asynchronous by design. The Transaction should have a mutable reference to the Client, so one way of doing this would be to set a flag in the Client marking a dirty state, and the client (or Connection) takes care of triggering the ROLLBACK call before executing the next query.
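Here is a rough sketch of what such an API could look like. Everything below is hypothetical: none of these types or methods exist in Tiberius yet, and the real Client would do actual I/O where the comments indicate.

pub struct Client {
    // Set when a transaction was dropped without commit/rollback; the
    // client would issue a ROLLBACK before running the next query.
    dirty: bool,
}

pub struct Transaction<'a> {
    client: &'a mut Client,
    finished: bool,
}

impl Client {
    // Borrows the client mutably, so the client cannot be used again
    // until the Transaction is dropped.
    pub fn begin_transaction(&mut self) -> Transaction<'_> {
        // Would send BEGIN TRAN over the wire here.
        Transaction { client: self, finished: false }
    }
}

impl<'a> Transaction<'a> {
    pub fn commit(mut self) {
        // Would send COMMIT here.
        self.finished = true;
    }

    pub fn rollback(mut self) {
        // Would send ROLLBACK here.
        self.finished = true;
    }
}

impl<'a> Drop for Transaction<'a> {
    fn drop(&mut self) {
        if !self.finished {
            // Drop is synchronous, so we only flag the dirty state here
            // and let the client clean up before its next query.
            self.client.dirty = true;
        }
    }
}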

More examples?

I'm largely a noob to rust (coming from a c# background, been doing rust for 4 weeks) but I'm trying to start with the basics, so I'm doing something simple like dump part of a table to a CSV file. So far I can connect to the db and create a string representing a CSV header:

    let stream = client.query("SELECT * FROM [master].[dbo].[MSreplication_options]", &[&1i32]).await?;
    let rows = stream.into_first_result().await?;

    let col_names: Vec<&str> = rows.first().unwrap().columns().iter().map(|c| c.name()).collect();

    let file = File::create("test.csv")?;
    let mut file = LineWriter::new(file);

    file.write_all(col_names.join(",").as_ref())?;

No problem.

However when I try to read the values from each row, that's where I get lost. I noticed the get() method and tried to serialize from that, but I can't figure out how to get a string out of that. Any assistance? I know the below code doesn't work because it needs me to specify a type, but I'm not quite sure how.

    for row in rows {
        let mut rowtext: Vec<&str> = Vec::new();
        for i in 0..(row.len() -1) {
            rowtext.push(row.get(i).unwrap_or_default());
        }
        file.write_all(rowtext.join(",").as_ref())?;
    }

Where I'm stuck is the rowtext.push() bit. I imagine that I need a turbofish type annotation here, but I can't figure it out from the documentation. Any help?
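For reference, a sketch of one way this can be done, continuing the snippet above and assuming every column can be read as a string (other column types would need a match on the column type). The turbofish on get tells Tiberius which FromSql type to decode into:

for row in rows {
    let mut rowtext: Vec<&str> = Vec::new();
    // 0..row.len() covers every column, including the last one.
    for i in 0..row.len() {
        // `&str` implements FromSql, so the turbofish requests a borrowed
        // string; unwrap_or_default turns NULL into an empty string.
        rowtext.push(row.get::<&str, _>(i).unwrap_or_default());
    }
    file.write_all(rowtext.join(",").as_bytes())?;
    file.write_all(b"\n")?;
}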

Prevent corrupting connections when cancelling futures

This has not bubbled up yet, but I highly suspect we're going to hit problems when cancelling futures reading or writing data with Tiberius.

This means that if we select between a Tiberius future and another future, such as a timer, while we're reading or writing data on the wire, the operation is stopped when the timer is ready. At this point the wire is in a dysfunctional state, where either it still has partial data from the previous request pending, or we wrote some of the data, but not all.

So if we implement a service that, for example, reads input from a search field and cancels the query whenever the user presses more keys to replace it with a new one, the connection probably ends up in a state where we must reconnect for it to work again. The sketch after the list below illustrates the hazard.

Some ways of fixing this:

  • Mark a connection as dirty until all data has been read and written
  • Implement the streams so they just peek the data until we know the headers, and only then reading the full packet
  • And then, somehow before every new query, check the wire status, and if it's dirty, find a way to clean it before triggering a new query.
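To make the hazard concrete, here is a sketch of the kind of race that can leave the wire dirty, assuming a connected client and the tokio runtime; the table name is purely illustrative:

use std::time::Duration;

// If the timer wins this race, the query future is dropped mid-read and
// any unread result packets stay on the wire, corrupting later queries
// on this connection.
tokio::select! {
    res = client.query("SELECT * FROM big_table", &[]) => {
        let _rows = res?.into_first_result().await?;
    }
    _ = tokio::time::sleep(Duration::from_millis(100)) => {
        // Cancelled: the connection may now need a reconnect.
    }
}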

Implement proper connection string parsing

Our current ADO.net connection string parsing is quite naive and will break on things like a ; character inside one of the values. This needs a proper tokenizer and parser. We should additionally support JDBC connection strings.

The work is already started for a new parser crate implemented by @yoshuawuyts, so this is a ticket to replace our connection string handling with this crate.

Impossible to do an "IN" prepared statement query?

Hey guys, a very simple question: how can we achieve something like this?
It looks like the (@P1) is not accepted, and I do not know how to make it print the query after it is generated.

//Would like to use a prepared statement like this
pub const QUERY_LIST_CMDB_CI_SERVERS: &str = r#"
    SELECT
    cmdb.sys_id,
    cmdb.name,
    cmdb.u_tech_owner,
    u.email as 'pri_owner_email',
    cmdb.dv_u_tech_owner,
    cmdb.u_active,
    cmdb.short_description,
    cmdb.dv_u_cbt_owner,
    cmdb.u_cbt_owner
    FROM  cmdb_ci cmdb
    LEFT JOIN sys_user u on u.sys_id=cmdb.u_tech_owner 
    WHERE cmdb.name IN (@P1)
    OR cmdb.sys_id IN (@P1)
    AND cmdb.u_active = 'true'
    AND (cmdb.sys_class_name = 'cmdb_ci_voice_hardware' 
    OR cmdb.sys_class_name LIKE '%server%')
    AND cmdb.sys_class_name NOT LIKE '%web_server%'
    AND cmdb.sys_class_name NOT IN (
    'cmdb_ci_app_server_websphere',
    'cmdb_ci_app_server_jboss',
    'cmdb_ci_app_server_tomcat',
    'cmdb_ci_appl_license_server',
    'cmdb_ci_app_server_weblogic'
    );
"#;
// Instead of doing a format like this:

pub fn query_list_cmdb_ci_servers(keys: Vec<String>) -> String {
    format!(
        r#"
    SELECT
    cmdb.sys_id,
    cmdb.name,
    cmdb.u_tech_owner,
    u.email as 'pri_owner_email',
    cmdb.dv_u_tech_owner,
    cmdb.u_active,
    cmdb.short_description,
    cmdb.dv_u_cbt_owner,
    cmdb.u_cbt_owner
    FROM  cmdb_ci cmdb
    LEFT JOIN sys_user u on u.sys_id=cmdb.u_tech_owner 
    WHERE cmdb.name IN ({0})
    OR cmdb.sys_id IN ({0})
    AND cmdb.u_active = 'true'
    AND (cmdb.sys_class_name = 'cmdb_ci_voice_hardware' 
    OR cmdb.sys_class_name LIKE '%server%')
    AND cmdb.sys_class_name NOT LIKE '%web_server%'
    AND cmdb.sys_class_name NOT IN (
    'cmdb_ci_app_server_websphere',
    'cmdb_ci_app_server_jboss',
    'cmdb_ci_app_server_tomcat',
    'cmdb_ci_appl_license_server',
    'cmdb_ci_app_server_weblogic'
    );
    "#,
        keys.join(",")
    )
}
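A sketch of the usual workaround: build one numbered placeholder per key and pass the keys as parameters, instead of formatting the values into the SQL string. The query text is shortened here for brevity, and the function name is made up:

use tiberius::ToSql;

// Builds "IN (@P1,@P2,...)" with one placeholder per key, plus the
// matching parameter slice to pass to Client::query.
pub fn in_clause_query(keys: &[String]) -> (String, Vec<&dyn ToSql>) {
    let placeholders: Vec<String> = (1..=keys.len())
        .map(|i| format!("@P{}", i))
        .collect();

    let sql = format!(
        "SELECT cmdb.sys_id, cmdb.name FROM cmdb_ci cmdb WHERE cmdb.name IN ({})",
        placeholders.join(",")
    );

    let params: Vec<&dyn ToSql> = keys.iter().map(|k| k as &dyn ToSql).collect();
    (sql, params)
}

The pair can then be passed on as client.query(&sql, &params).await?.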

Support integratedsecurity on Unix and MacOS

.NET Core supports integratedsecurity on Unix and MacOS via Kerberos through GSSAPI. It seems to all be open source under an MIT license now, so an implementation could be based on .NET Core. 😄

future cannot be sent between threads safely

I had problems with the "old tiberius" when I tried to use it with async web frameworks like actix or warp. I ended up having to run the tiberius stuff with something like tokio::task::spawn_local and use channels to connect the web part with the tiberius part. Which was a pain.

Maybe I'm not doing something correctly or maybe using something like deadpool could help.

[dependencies]
futures = "0.3.5"
tiberius = "0.4.0-alpha.6"
tokio = { version = "0.2", features = [ "process", "rt-util", "macros", "rt-core" ] }

use futures::stream::StreamExt;
use futures::stream::TryStreamExt;
use tiberius::{AuthMethod, Client};

async fn something(mut conn: Client) -> Vec<i8> {
    let mut stream = conn
        .query("SELECT @P1 AS first; SELECT @P2 AS second", &[&1i32, &2i32])
        .await
        .unwrap();

    let first_result: Vec<Option<i32>> = stream
        .by_ref()
        .map_ok(|x| x.get::<i32, _>(0))
        .try_collect()
        .await
        .unwrap();

    assert!(stream.next_resultset());

    Vec::new()
}

#[tokio::main]
async fn main() {
    let mut builder = Client::builder();
    builder.host("localhost");
    builder.port(1433);
    builder.database("master");
    builder.authentication(AuthMethod::sql_server("SA", "<YourStrong@Passw0rd>"));
    builder.trust_cert();

    let mut conn = builder.build().await.unwrap();

    let f = something(conn);

    tokio::spawn(f);
}
error: future cannot be sent between threads safely
   --> src/main.rs:36:5
    |
36  |     tokio::spawn(f);
    |     ^^^^^^^^^^^^ future returned by `something` is not `Send`
    |
   ::: /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/task/spawn.rs:127:21
    |
127 |         T: Future + Send + 'static,
    |                     ---- required by this bound in `tokio::task::spawn::spawn`
    |
    = help: the trait `std::marker::Sync` is not implemented for `dyn tiberius::to_sql::ToSql`
note: future is not `Send` as this value is used across an await
   --> src/main.rs:6:22
    |
6   |       let mut stream = conn
    |  ______________________^
7   | |         .query("SELECT @P1 AS first; SELECT @P2 AS second", &[&1i32, &2i32])
    | |                                                               ----- has type `&dyn tiberius::to_sql::ToSql` which is not `Send`
8   | |         .await
    | |______________^ await occurs here, with `&1i32` maybe used later
9   |           .unwrap();
    |                    - `&1i32` is later dropped here

error: future cannot be sent between threads safely
   --> src/main.rs:36:5
    |
36  |     tokio::spawn(f);
    |     ^^^^^^^^^^^^ future returned by `something` is not `Send`
    |
   ::: /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/task/spawn.rs:127:21

    |
127 |         T: Future + Send + 'static,
    |                     ---- required by this bound in `tokio::task::spawn::spawn`
    |
    = help: the trait `std::marker::Send` is not implemented for `dyn futures_core::stream::Stream<Item = std::result::Result<tiberius::tds::stream::token::ReceivedToken, tiberius::error::Error>>`
note: future is not `Send` as this value is used across an await
   --> src/main.rs:11:42
    |
6   |       let mut stream = conn
    |           ---------- has type `tiberius::result::QueryResult<'_>` which is not `Send`
...
11  |       let first_result: Vec<Option<i32>> = stream
    |  __________________________________________^
12  | |         .by_ref()
13  | |         .map_ok(|x| x.get::<i32, _>(0))
14  | |         .try_collect()
15  | |         .await
    | |______________^ await occurs here, with `mut stream` maybe used later
...
21  |   }
    |   - `mut stream` is later dropped here

QueryResult loses the column metadata when a WARNING is received

sql:

CREATE PROCEDURE p_test
AS
BEGIN
	SET NOCOUNT ON;
	DECLARE @tmp TABLE (col int);
	INSERT INTO @tmp
	SELECT SUM(col) AS col FROM (SELECT NULL AS col UNION SELECT 1) t;
	SELECT SUM(col) AS col FROM @tmp;
END
GO

rust:

let rs = client.simple_query("EXEC p_test").await?;
println!("{:?}",rs.columns()); //None

This procedure will cause a warning:

Warning: Null value is eliminated by an aggregate or other SET operation.

The QueryResult doesn't receive the column metadata in this case.

no `TcpStream` in `net`

Hello, please help with a starting example.

I tried an example from the README and from the examples folder.
And I keep getting this error:

     | use tokio::net::TcpStream;
     |     ^^^^^^^^^^^^^^^^^^^^^ no `TcpStream` in `net`

Ubuntu 20.04
rustc 1.52.1

My Cargo.toml:

[dependencies]
tiberius = "0.5.13"
tokio = {version = "1.6", feature = ["full"]}
once_cell = "1.7.2"

[dependencies.tokio-util]
version = "0.6.7"
features = ["compat"]

Conversion error for nullable int field

I have a table like this:

CREATE TABLE Test (
	id int IDENTITY(1,1) NOT NULL,
	value int NULL
)

Filled it with values:

INSERT INTO Test (value) VALUES (1)
INSERT INTO Test (value) VALUES (NULL)

and trying to read:

    let result = con.simple_query("select * from Test").await.unwrap();
    let rows: Vec<Row> = result.into_first_result().await.unwrap();
    info!("{:?}", rows);
    let a: Option<i32> = rows[0].try_get("value").unwrap();
    info!("a = {:?}", a);
    let b: Option<i32> = rows[1].try_get("value").unwrap();
    info!("b = {:?}", b);

Surprisingly it fails with output:

[2020-08-25][12:07:21][startup][INFO] [Row { columns: [Column { name: "id", column_type: Int4 }, Column { name: "value", column_type: Intn }], data: TokenRow { data: [I32(Some(1)), I32(Some(1))] } }, Row { columns: [Column { name: "id", column_type: Int4 }, Column { name: "value", column_type: Intn }], data: TokenRow { data: [I32(Some(2)), U8(None)] } 
}]
[2020-08-25][12:07:21][startup][INFO] a = Some(1)
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Conversion("cannot interpret U8(None) as an i32 value")'

So for some reason the value in the second row is parsed as U8(None), not I32(None).
Is it a bug? If not, could you suggest some workaround?

Is there any way to batch the results?

Can we support batch reads of the results? I typically deal with millions of rows from query output. It would be great if there is a way to batch the results so that I can start processing as soon as a batch is available.
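For what it's worth, the query result already arrives as a stream of rows, so rows can be processed as they are decoded instead of collecting millions of them first. A sketch, assuming a connected client and the same stream-based API used elsewhere in these issues, with process standing in for your own handler:

use futures::TryStreamExt;

// Handle rows incrementally as they come off the wire, instead of
// buffering the whole result set in memory.
let mut stream = client.query("SELECT * FROM huge_table", &[]).await?;
while let Some(row) = stream.try_next().await? {
    process(row)?;
}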

Connecting to Azure SQL with tokio (on Mac) gives a Tls("connection closed via error") error.

this is my code

let mut config = Config::from_ado_string("server=xxx.database.windows.net,1433;UID=xxx;PWD=xxx;TrustServerCertificate=true")?;
let tcp = TcpStream::connect(config.get_addr()).await?;
tcp.set_nodelay(true)?;
let mut client = Client::connect(config, tcp.compat_write()).await.expect("Failed!");

and I get:

thread 'main' panicked at 'Failed!: Tls("connection closed via error")', src/main.rs:80:22

TLS problem

"Error forming TLS connection: a certificate chain processed, but terminated in a root certificate which is not trusted. (os error -2146762487)"

This error occurred when I compiled an example from github/tiberius.
How can I solve it?

`Config::from_jdbc_string` does not parse `databaseName`

for example:

#[async_std::main]
async fn main() -> anyhow::Result<()> {
    let jdbc = "jdbc:sqlserver://127.0.0.1;databaseName=rezerv;TrustServerCertificate=true";
    let config = Config::from_jdbc_string(jdbc)?;
    println!("{:?}", config);
    Ok(())
}
$ cargo run --example jdbc_parse
    Finished dev [unoptimized + debuginfo] target(s) in 0.05s
     Running `target/debug/examples/jdbc_parse`
Config { host: Some("127.0.0.1"), port: None, database: None, instance_name: None, encryption: NotSupported, trust_cert: true, auth: SqlServer(SqlServerAuth { user: "", password: "<HIDDEN>" 

Using Tiberius with named pipes

Mio 0.7 has support for Windows named pipes. The author of mysql_async (blackbeam) has a PR open against Tokio to support them in the runtime.

I wrote a small example, trying them out with Tiberius, and they actually work!

When the support lands to Tokio, we should update our documentation to reflect these changes.

#132

Is String not implemented for FromSql?

    |
367 |             my_string_field: row.get(12),
    |                           ^^^ the trait `FromSql<'_>` is not implemented for `std::string::String`

my_string_field is a struct field of String type. Row::get() only seems to fail on String types. Other fields are OK. I'm pretty newb to Rust so any input would be appreciated. As far as I know, I wouldn't be able to implement this myself because both are foreign types?
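That is expected: FromSql is implemented for the borrowed &str (the row owns the string data), not for the owned String. A minimal sketch of the usual pattern, assuming column 12 is an (n)varchar:

// Borrow the value as &str from the row, then convert it to an owned
// String; unwrap or handle None as fits a non-optional struct field.
let my_string_field: Option<String> = row.get::<&str, _>(12).map(str::to_string);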

Public Errors

Hi!

I'm really impressed by tiberius (it looks like the only really working MSSQL library; sqlx still doesn't support chrono date types, and odbc doesn't work, at least in my hands). So thank you very much for your work.

But for now, I'm trying to do some conversions from your tiberius::Error to my own error type, and I can't, because tiberius::Error is public only within its own crate. I would be glad to understand the motivation behind this decision; or maybe I'm doing something entirely wrong, and you could help me find the right way to do it.

Numeric values

This is quite buggy in its current state: we cannot write Numeric values at all, only read them. There is a crate, rust-decimal, that does the same as, and more than, the Numeric type in Tiberius. We should integrate it instead and gate it behind a feature flag.

Connection pooling

Hey guys,

Is there any example of how to create an mssql connection pool, rather than initiating a new connection per query?

If so, how can I do it?

Async/await TODO

Things to do before hitting the publish button for the async/await version of Tiberius. Started from a state where the transport layer and quite a few basic types already work and are usable.

  • date/datetimeoffset/datetime2/smalldatetime/datetime/time types
  • text/ntext types
  • binary/varbinary
  • image
  • xml
  • User-defined types
  • Transaction system. Maybe the connection should finally be Arc<Mutex<Connection>> in the Client, so a transaction can roll back on Drop if needed. (#28)
  • Guid type
  • Execute doesn't really need to be a stream. The results should be collected into a vector for better error handling and provided as an iterator or via the total method.
  • ADO.NET connection strings
  • Testing SSPI windows login (on windows)
  • Windows authentication
  • Testing SQL browser mode (on windows)
  • Testing TLS
  • Testing Azure
  • Builder and connection should handle all possible features and options.
  • AppVeyor setup for this fork.
  • Work on tokio, async-std and smol (#29)
  • Fix numeric handling (#31)

Integrated authentication throwing error

Hello!

I am trying out Tiberius with integrated authentication on RHEL. The connection fails with the error below:

Error: Token error: ' 'Logon failed for login 'DOMAIN\USERNAME' due to trigger execution.' on server SERVERNAME executing  on line 1 (code: 17892, state: 1, class: 14)

Could someone help?

trivial README upgrade

Hi there, I realise this is trivial in the grand scheme of things, but your README contains: "Documentation (deprecared, compile from this branch)" which I think is now redundant given that the old async branch is your new master branch?

If so, I'll submit a PR removing it

Replace all BytesMut/Bytes with a Vec

We should not rely on the bytes crate. A vector would be good enough, safer, and in some cases might even be faster.

Tasks

  • Get rid of futures-codec
  • No more Sink for Connection, use AsyncWrite
  • For encoding, use Cursor and byteorder
  • For convenience, maybe implement a trait/struct that allows ergonomic little-endian writes (see the sketch after this list)!
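For the encoding bullet points, a small illustration of what the Cursor-plus-byteorder combination looks like (the field layout here is arbitrary, not actual TDS wire format):

use std::io::Cursor;
use byteorder::{LittleEndian, WriteBytesExt};

// Encode a token byte followed by two little-endian fields into a plain
// Vec<u8>, with no bytes/BytesMut involved.
fn encode_example(token: u8, len: u16, value: i32) -> std::io::Result<Vec<u8>> {
    let mut cursor = Cursor::new(Vec::new());
    cursor.write_u8(token)?;
    cursor.write_u16::<LittleEndian>(len)?;
    cursor.write_i32::<LittleEndian>(value)?;
    Ok(cursor.into_inner())
}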

Is there any connection retry mechanism builtin?

Sometimes we get disconnected from SQL Server. Is there any built-in retry mechanism, so that the client would fail the current operation and reconnect automatically, letting subsequent operations succeed?

cannot interpret I32(None) as an u8 value

I have a struct where the last field (15th field) is an Option<u8> (confirmed correct). In the database, the corresponding column is a tinyint NULL (confirmed correct)

When I start streaming data from the SQL server I receive this error: Conversion error: cannot interpret I32(None) as an u8 value

For testing outside of a struct, if I put this code in: println!("{:?}", row.get::<u8, _>(15)); I receive the error Conversion error: cannot interpret I32(None) as an u8 value on the very first row, which has a value of null for the 15th column.

If I change the above code to an i32: println!("{:?}", row.get::<i32, _>(15)); the first row with the None now works. However, I receive the error Conversion error: cannot interpret U8(Some(1)) as i32 value when it encounters the first Some() value in the stream.

Haha. I think Tiberius is confused?? The server is MSSQL 2012. The column really is a u8 both in the struct and in the database.

  14: tiberius::row::Row::get<u8,usize><unknown>
      at C:\Users\User\.cargo\registry\src\github.com-1ecc6299db9ec823\tiberius-0.5.5\src\row.rs:369

Issue using COUNT(...) in GCP

Hey,

This may not be a Tiberius-specific issue, but I have a Rust server hosted on GCP's serverless service. I've discovered that all queries using the COUNT(...) function cause it to fail, so I get an anonymous error in GCP which forces the service to restart automatically. I was wondering whether you might have an idea of the underlying issue. Could it be some form of unsupported feature in the cloud?

How to disable dbg!("foo"), can you remove it first

How to disable dbg!("foo"), can you remove it?

When I execute one SQL query in a loop, there is too much dbg log output:

[/root/.cargo/registry/src/gitea.xxxx.com-349729298cb642f1/tiberius-0.5.9/src/result.rs:231] "foo" = "foo"
[/root/.cargo/registry/src/gitea.xxxx.com-349729298cb642f1/tiberius-0.5.9/src/result.rs:231] "foo" = "foo"
[/root/.cargo/registry/src/gitea.xxxx.com-349729298cb642f1/tiberius-0.5.9/src/result.rs:231] "foo" = "foo"
[/root/.cargo/registry/src/gitea.xxxx.com-349729298cb642f1/tiberius-0.5.9/src/result.rs:231] "foo" = "foo"
[/root/.cargo/registry/src/gitea.xxxx.com-349729298cb642f1/tiberius-0.5.9/src/result.rs:231] "foo" = "foo"
[/root/.cargo/registry/src/gitea.xxxx.com-349729298cb642f1/tiberius-0.5.9/src/result.rs:231] "foo" = "foo"
[/root/.cargo/registry/src/gitea.xxxx.com-349729298cb642f1/tiberius-0.5.9/src/result.rs:231] "foo" = "foo"
[/root/.cargo/registry/src/gitea.xxxx.com-349729298cb642f1/tiberius-0.5.9/src/result.rs:231] "foo" = "foo"
[/root/.cargo/registry/src/gitea.xxxx.com-349729298cb642f1/tiberius-0.5.9/src/result.rs:231] "foo" = "foo"

Statements

For better performance with lots of repetitive queries, a proper statement system would be nice. Whenever we execute a query, the server responds with a handle that can be reused with different parameters in the future. This means the query is parsed only once, saving the parsing time in the subsequent calls.

The work required:

  • query and execute should be able to take either a string or a statement as a parameter.
  • prepare function to create a new statement
  • A way of removing dropped statements from the server session

A good example of how this could work can be found in steffengy's version.

Options for an API:

a) The user keeps statements in memory and uses them when fit.
b) We keep statements in memory inside Tiberius, hashing the incoming queries and reusing an already stored statement when possible.

If taking approach b, we should be able to turn the feature off by configuration. A sketch of what approach a could look like follows.
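A sketch of option a from the caller's perspective. This API is entirely hypothetical; neither prepare nor a statement-accepting query exists in Tiberius yet:

// Hypothetical: parse once on the server, reuse the handle afterwards.
let stmt = client.prepare("SELECT * FROM users WHERE id = @P1").await?;

// Each call reuses the server-side handle with new parameters, so the
// query text is not parsed again.
let first = client.query(&stmt, &[&1i32]).await?;
let second = client.query(&stmt, &[&2i32]).await?;

// Dropping the statement would queue an unprepare for the session.
drop(stmt);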

Other connection methods than TCP

Currently our documentation talks about using the server with a TCP client. In general, though, our implementation does not dictate the use of TCP, only a socket that implements AsyncRead and AsyncWrite. So if anybody knows good named pipe implementations for Windows that implement the async IO traits, this is the right place to talk about them.

There's work in miow for NamedPipe that implements Read and Write. This needs an IOCP event loop that translates it to the async ecosystem. @yoshuawuyts is looking into implementing that.

The other protocol is the Local Procedure Call (LPC) or the asynchronous version of it (ALPC). This is a non-documented API and I have no idea if anybody is working on implementing Rust bindings to this API. Any tips are appreciated on this ticket. This is also the default connection method on Windows platforms.

In the end, if we have a working socket with the right traits, it should Just Work (tm) with Tiberius without making any changes to the crate.

Connection pool

Is there a possibility with tiberius to maintain a pool of connections to the database, rather than firing up a new connection every time we need to make a query, or using one connection for all queries?

Someone attempted to create one with r2d2 but it didn't work out.

Add bulk loading

Running INSERT INTO table VALUES (...) is really slow, especially when inserting against an Azure server. You can batch inserts with INSERT INTO table VALUES ({},{}),({},{}), but that is limited to 1000 values. You can also use INSERT INTO table SELECT a,b FROM (VALUES {},{},{},{}) sub(a,b) as mentioned in steffengy#93, but you are limited to 2100 values in the query.

A solution to this would be to implement something similar to what Tedious (NodeJS library) implements: https://tediousjs.github.io/tedious/bulk-load.html
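Until bulk load exists, the interim workaround is chunking. A sketch that splits rows into statements of at most 1000 value tuples; the table name is made up, and real code should prefer parameters where the 2100 limit allows:

// Produce one INSERT statement per chunk of at most 1000 rows. Values
// are formatted as literals here only because they are plain integers;
// untrusted strings must never be interpolated like this.
fn batched_inserts(rows: &[(i32, i32)]) -> Vec<String> {
    rows.chunks(1000)
        .map(|chunk| {
            let values: Vec<String> = chunk
                .iter()
                .map(|(a, b)| format!("({},{})", a, b))
                .collect();
            format!("INSERT INTO my_table VALUES {}", values.join(","))
        })
        .collect()
}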

Add Serde (de)serialization

We could maybe have a nice interface for people using serde structs in their code. Let's see a few examples:

#[derive(Deserialize)]
struct Cat {
    id: i64,
    name: String
}

Now, using the struct, we could deserialize rows directly into it in an efficient manner:

let mut client = tiberius::Client::connect(config, tcp.compat_write()).await?;
let mut stream = client.query("SELECT id, name FROM cats", &[&1i32, &2i32, &3i32]).await?;

let cats = stream
    .by_ref()
    .map_ok(|row| row.deserialize::<Cat>())
    .try_collect()
    .await?;

The question is: should Tiberius do this, or should it be left to higher-level crates, such as Diesel?

Missing feature (DNS) for Tokio

When trying to build a project using tiberius = { version = "0.4", features = ["sql-browser-tokio"] } I get an error saying:

[screenshot of the compile error]

Which seems to be fixed by enabling the Tokio feature "dns":

[screenshot of the updated Cargo.toml]

So should this line be updated to: features = ["dns", "udp"], or is there a better way to fix this?

Cannot execute "CREATE Schema [cot-ci] AUTHORIZATION dbo" using client.execute

I cannot execute a statement like CREATE Schema [cot-ci] AUTHORIZATION dbo

When using SQL Profiler, I saw that it used sp_executesql to execute the query, but when I use a C# System.Data.SqlClient.SqlCommand, it does not use the stored proc at all.

I think it should work with sp_executesql though, but I'm not sure why it does not.

Problem with blob (type=Image) query

It is fantastic to see new progress on Tiberius. I had tried to use a much earlier version, but the lack of blob type support was a show stopper.

I have tried using the sample code from the documentation, against both the tokio and the async-std executors. My database is Microsoft SQL Server 2016 (SP2-CU8) (KB4505830) - 13.0.5426.0 (X64).

A simple SELECT query worked fine with both executors. However, when I tried another query, retrieving from another table a single row with two columns, a row index (integer key column) and the corresponding BLOB (Image type), the result was corrupted:

let mut stream = client.simple_query(_query2).await?;
assert!(stream.next_resultset());
println!("{:?}",stream.columns().unwrap());

returns the name of the index and BLOB column

[Column { name: "i", column_type: Int4 }, Column { name: "琀攀猀琀搀愀琀愀开戀氀漀戀", column_type: Image }]

The types are correct for both columns, but the column name for the BLOB column should be 'b'; it is corrupted. Also, if I continue on and try to extract the rows, I get "Error: Protocol error: invalid token type 0".

I am not overriding the default tiberius features and get the same results with the latest stable rust on both Windows and Ubuntu 64bit. Any suggestions for how to debug this?

I tried running the same query with freetds "tsql" and it returned the correct data.

On Handling Redirects

[screenshot: Azure SQL firewall settings]

Currently we are able to connect to Azure SQL with the firewall settings pictured above. If the connection policy is set to redirect, we'll error out with Routing, giving a new host and port. Connecting to this new host, though, gives an error saying the server name cannot be determined. If the original server was foobar.database.windows.net, the user was naturally user@foobar. But if the redirect goes to f02f9939c13c.tr2996.northeurope1-a.worker.database.windows.net, the original username doesn't work anymore. We can try the obvious user@host variants for the new hostname (such as user@f02f9939c13c), but all of these will give the same error.

I'm leaving this for now as an issue, if anybody else hits the same problem and finds some nice solution. For now, the only workaround is to set the connection policy to proxy.

Column Encryption

SQL Server supports Column Encryption (CEK), and we should probably implement some kind of support for it in Tiberius.

The client should take the needed keys, and transparently encrypt/decrypt columns if needed.

Executing more than 254 queries on a single connection panics with "attempt to add with overflow"

Issue: Executing more than 254 queries on a connection panics with an arithmetic overflow.

Repro:


#[test]
fn repro() {
    use smol::Async;
    use std::net::TcpStream;
    use tiberius::{Client, Config};

    smol::run(async {
        let conn_config = Config::from_ado_string(TEST_MSSQL_CONN_STR).unwrap();

        let stream = Async::<TcpStream>::connect(conn_config.get_addr())
            .await
            .unwrap();

        let mut connection = Client::connect(conn_config, stream).await.unwrap();

        for i in 1..255 {
            println!("OK - {}", i);
            connection.simple_query("SELECT 1").await.unwrap();
        }
    });
}

Result:

  • thread panicked at 'attempt to add with overflow', src\tds\context.rs:31:9 (after 254 iterations)

Cause:

The issue happens due to id in the next_packet_id method being u8 and overflowing:

#src\tds\context.rs:29-34:

pub fn next_packet_id(&mut self) -> u8 {
    let id = self.packet_id;
    self.packet_id += 1;
    id
}

This one appears to be pretty straightforward to fix. I could have submitted a PR fixing it, but I'm not familiar enough with the library yet to go around possibly breaking stuff.
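For reference, a wrapping version would avoid the panic. This is only a sketch of the obvious fix, not an actual patch; TDS packet ids are expected to wrap around within a byte:

pub fn next_packet_id(&mut self) -> u8 {
    let id = self.packet_id;
    // Wrap from 255 back to 0 instead of panicking in debug builds.
    self.packet_id = self.packet_id.wrapping_add(1);
    id
}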

Also, thanks for the great work bringing this project back to life.

Query result is not read from stored proc

My stored procedure looks like this:

CREATE PROCEDURE [dbo].[sp_NewItem] 
  @Param1 varchar(20),
  @Param2 datetime
AS
  insert into Items (Param1, Param2)
  values (@Param1, @Param2)

  return SCOPE_IDENTITY()

The code which calls that stored procedure is

        let result = client.query(
                "declare @rc int; exec @rc = sp_NewItem @P1, @P2; select @rc as Id", 
                &[&param1, &param2])
            .await?
            .into_row()
            .await?;

and the result set is empty.
The workaround I found is using 'nocount' outside a procedure call. Like this:

"set nocount on; declare @rc int; exec @rc = sp_NewItem @P1, @P2; select @rc as Id", 

So it seems that it reads the number of affected rows from the stream and stops. I also tried updating the SP to use 'select' instead of 'return', but that doesn't change much.
This proc is from an old legacy database, but I've never had this problem before when working with other drivers.
What do you think, is that a bug?
