
charybdis's Introduction

Rust ORM for ScyllaDB and Apache Cassandra

⚠️ This project is currently in an early stage of development. Feedback and contributions are welcome!



Charybdis is an ORM layer on top of the ScyllaDB Rust driver, focused on ease of use and performance.

Usage considerations:

  • Provides an expressive API for CRUD & complex query operations on a model as a whole
  • Provides an easy way to work with a subset of model fields via the automatically generated partial_<model>! macro
  • Provides an easy way to run complex queries via the automatically generated find_<model>! macro
  • The automatic migration tool analyzes project files and runs migrations according to differences between the model definitions and the database

Performance considerations:

  • It uses prepared statements (shard/token aware) and binds values
  • It expects a CachingSession as the session argument for operations
  • Queries are macro-generated str constants (no concatenation at runtime)
  • The find_<model>! macro lets us run complex queries that are generated at compile time as &'static str
  • Although it has an expressive API, it is a thin layer on top of scylla_rust_driver and does not introduce any significant overhead
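The CachingSession mentioned above comes from the scylla crate. As a minimal connection sketch (the node address, keyspace name and cache size below are placeholders, not taken from this README):

```rust
use scylla::{CachingSession, SessionBuilder};

#[tokio::main]
async fn main() {
    // plain scylla session; node address and keyspace are placeholders
    let session = SessionBuilder::new()
        .known_node("127.0.0.1:9042")
        .use_keyspace("my_keyspace", false)
        .build()
        .await
        .expect("failed to connect to scylla");

    // wrap it in a CachingSession so prepared statements are cached;
    // 1000 is the prepared-statement cache capacity
    let session: CachingSession = CachingSession::from(session, 1000);
}
```

All charybdis operations shown below take a reference to this CachingSession.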

Table of Contents

Charybdis Models

Define Tables

use charybdis::macros::charybdis_model;
use charybdis::types::{Text, Timestamp, Uuid};

#[charybdis_model(
    table_name = users,
    partition_keys = [id],
    clustering_keys = [],
    global_secondary_indexes = [username],
    local_secondary_indexes = [],
    static_columns = []
)]
pub struct User {
    pub id: Uuid,
    pub username: Text,
    pub email: Text,
    pub created_at: Timestamp,
    pub updated_at: Timestamp,
    pub address: Address,
}

Define UDT

 use charybdis::macros::charybdis_udt_model;
 use charybdis::types::Text;
 
 #[charybdis_udt_model(type_name = address)]
 pub struct Address {
     pub street: Text,
     pub city: Text,
     pub state: Option<Text>,
     pub zip: Text,
     pub country: Text,
 }

🚨 UDT fields must be in the same order as they are in the database.

Note that in order for migrations to correctly detect changes, type_name has to match the struct name. So if we have a struct ReorderData, we have to use #[charybdis_udt_model(type_name = reorderdata)] - without underscores.
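As a sketch, a UDT whose struct name contains multiple words would therefore be declared like this (the note field is hypothetical):

```rust
use charybdis::macros::charybdis_udt_model;
use charybdis::types::Text;

// type_name is the lowercased struct name, with no underscores
#[charybdis_udt_model(type_name = reorderdata)]
pub struct ReorderData {
    pub note: Text,
}
```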

Define Materialized Views

use charybdis::macros::charybdis_view_model;
use charybdis::types::{Text, Timestamp, Uuid};

#[charybdis_view_model(
    table_name=users_by_username,
    base_table=users,
    partition_keys=[username],
    clustering_keys=[id]
)]
pub struct UsersByUsername {
    pub username: Text,
    pub id: Uuid,
    pub email: Text,
    pub created_at: Timestamp,
    pub updated_at: Timestamp,
}

The resulting auto-generated migration query will be:

CREATE MATERIALIZED VIEW IF NOT EXISTS users_by_username
AS SELECT created_at, updated_at, username, email, id
FROM users
WHERE username IS NOT NULL AND id IS NOT NULL
PRIMARY KEY (username, id)

Automatic migration

  • charybdis-migrate enables automatic migration to the database without the need to write migrations by hand. It iterates over project files and generates migrations based on differences between the model definitions and the database. It supports the following operations:

    • Create new tables
    • Create new columns
    • Drop columns
    • Change field types (the column is dropped and recreated when the --drop-and-replace flag is passed)
    • Create secondary indexes
    • Drop secondary indexes
    • Create UDTs
    • Create materialized views
    • Table options
        #[charybdis_model(
            table_name = commits,
            partition_keys = [object_id],
            clustering_keys = [created_at, id],
            global_secondary_indexes = [],
            local_secondary_indexes = [],
            table_options = r#"
                CLUSTERING ORDER BY (created_at DESC) 
                AND gc_grace_seconds = 86400
            "#
        )]
        #[derive(Serialize, Deserialize, Default)]
        pub struct Commit {...}
      • ⚠️ If the table already exists, table options will result in an ALTER TABLE query that omits the CLUSTERING ORDER and COMPACT STORAGE options, as those can only be set at table creation.

    Model dropping is not supported. If you remove a model, you need to drop the table manually.

  • Running migration

    cargo install charybdis-migrate
    
    migrate --hosts <host> --keyspace <your_keyspace> --drop-and-replace (optional)

    ⚠️ If you are working with existing datasets, before running a migration you need to make sure that your model definitions match the database with respect to table names, column names, column types, partition keys, clustering keys and secondary indexes, so you don't alter the structure accidentally. If the structure matches, no migrations will be run. As mentioned above, if there is no model definition for a table, it will not be dropped. In the future, a modelize command will be added that generates src/models files from an existing data source.

  • Global secondary indexes

    If we have the model:

    #[charybdis_model(
        table_name = users,
        partition_keys = [id],
        clustering_keys = [],
        global_secondary_indexes = [username]
    )]

    the resulting query will be: CREATE INDEX ON users (username);

  • Local secondary indexes

    Indexes that are scoped to the partition key

    #[charybdis_model(
        table_name = menus,
        partition_keys = [location],
        clustering_keys = [name, price, dish_type],
        global_secondary_indexes = [],
        local_secondary_indexes = [dish_type]
    )]

    the resulting query will be: CREATE INDEX ON menus((location), dish_type);

Basic Operations:

For each operation you need to bring the respective trait into scope. They are defined in the charybdis::operations module.

Insert

  • use charybdis::{CachingSession, Insert};
    
    #[tokio::main]
    async fn main() {
      let session: &CachingSession; // init scylla session
      
      // init user
      let user: User = User {
        id,
        email: "[email protected]".to_string(),
        username: "charybdis".to_string(),
        created_at: Utc::now(),
        updated_at: Utc::now(),
        address: Address {
            street: "street".to_string(),
            state: Some("state".to_string()), // state is Option<Text> in the UDT
            zip: "zip".to_string(),
            country: "country".to_string(),
            city: "city".to_string(),
        },
      };
    
      // create
      user.insert().execute(session).await;
    }

Find

  • Find by primary key

      let user = User {id, ..Default::default()};
      let user = user.find_by_primary_key().execute(&session).await?;
  • Find by partition key

      let users = User {id, ..Default::default()}.find_by_partition_key().execute(&session).await;
  • Find by primary key associated function

    let users = User::find_by_primary_key_value(val: User::PrimaryKey).execute(&session).await;
  • Available find functions

    use scylla::CachingSession;
    use charybdis::errors::CharybdisError;
    use charybdis::macros::charybdis_model;
    use charybdis::stream::CharybdisModelStream;
    use charybdis::types::{Date, Text, Uuid};
    
    #[charybdis_model(
        table_name = posts,
        partition_keys = [date],
        clustering_keys = [category_id, title],
        global_secondary_indexes = [category_id],
        local_secondary_indexes = [title]
    )]
    pub struct Post {
        pub date: Date,
        pub category_id: Uuid,
        pub title: Text,
    }
    
    impl Post {
        async fn find_various(db_session: &CachingSession) -> Result<(), CharybdisError> {
           let date = Date::default();
           let category_id = Uuid::new_v4();
           let title = Text::default();
        
           let posts: CharybdisModelStream<Post> = Post::find_by_date(date).execute(db_session).await?;
           let posts: CharybdisModelStream<Post> = Post::find_by_date_and_category_id(date, category_id).execute(db_session).await?;
           let posts: Post = Post::find_by_date_and_category_id_and_title(date, category_id, title.clone()).execute(db_session).await?;
        
           let post: Post = Post::find_first_by_date(date).execute(db_session).await?;
           let post: Post = Post::find_first_by_date_and_category_id(date, category_id).execute(db_session).await?;
        
           let post: Option<Post> = Post::maybe_find_first_by_date(date).execute(db_session).await?;
           let post: Option<Post> = Post::maybe_find_first_by_date_and_category_id(date, category_id).execute(db_session).await?;
           let post: Option<Post> = Post::maybe_find_first_by_date_and_category_id_and_title(date, category_id, title.clone()).execute(db_session).await?;
        
           // find by local secondary index
           let posts: CharybdisModelStream<Post> = Post::find_by_date_and_title(date, title.clone()).execute(db_session).await?;
           let post: Post = Post::find_first_by_date_and_title(date, title.clone()).execute(db_session).await?;
           let post: Option<Post> = Post::maybe_find_first_by_date_and_title(date, title.clone()).execute(db_session).await?;
    
          // find by global secondary index
          let posts: CharybdisModelStream<Post> = Post::find_by_category_id(category_id).execute(db_session).await?;
          let post: Post = Post::find_first_by_category_id(category_id).execute(db_session).await?;
          let post: Option<Post> = Post::maybe_find_first_by_category_id(category_id).execute(db_session).await?;
        
          Ok(())
        }
    }
  • Custom filtering:

    Let's use our Post model as an example:

    #[charybdis_model(
        table_name = posts, 
        partition_keys = [category_id], 
        clustering_keys = [date, title],
        global_secondary_indexes = []
    )]
    pub struct Post {...}

    We get an automatically generated find_post! macro that follows the convention find_<struct_name>!. It can be used to create custom queries.

    The following will return a stream of Post models; the query is constructed at compile time as a &'static str.

    // automatically generated macro rule
    let posts = find_post!("category_id in ? AND date > ?", (category_vec, date))
        .execute(session)
        .await?;

    We can also use the find_first_post! macro to get a single result:

    let post = find_first_post!("category_id in ? AND date > ? LIMIT 1", (category_vec, date))
        .execute(session)
        .await?;

    If we just need the Query and not the result, we can use the find_post_query! macro:

    let query = find_post_query!("date = ? AND category_id in ?", (date, category_vec));

Update

  • let user = User::from_json(json);
    
    user.username = "scylla".to_string();
    user.email = "[email protected]".to_string();
    
    user.update().execute(&session).await;
  • Collection:

    • Let's use our User model as an example:
      #[charybdis_model(
          table_name = users,
          partition_keys = [id],
          clustering_keys = [],
      )]
      pub struct User {
          id: Uuid,
          tags: Set<Text>,
          post_ids: List<Uuid>,
      }
    • push_<field_name> and pull_<field_name> methods are generated for each collection field.
      let user: User;
      
      user.push_tags(vec![tag]).execute(&session).await;
      user.pull_tags(vec![tag]).execute(&session).await;
      
      user.push_post_ids(vec![post_id]).execute(&session).await;
      user.pull_post_ids(vec![post_id]).execute(&session).await;
  • Counter

    • Let's define post_counter model:
      #[charybdis_model(
          table_name = post_counters,
          partition_keys = [id],
          clustering_keys = [],
      )]
      pub struct PostCounter {
          id: Uuid,
          likes: Counter,
          comments: Counter,
      }
    • We can use increment_<field_name> and decrement_<field_name> methods to update counter fields.
      let post_counter: PostCounter;
      post_counter.increment_likes(1).execute(&session).await;
      post_counter.decrement_likes(1).execute(&session).await;
      
      post_counter.increment_comments(1).execute(&session).await;
      post_counter.decrement_comments(1).execute(&session).await;

Delete

  • let user = User::from_json(json);
    
    user.delete().execute(&session).await;
  • Macro generated delete helpers

    Let's use our Post model as an example:

    #[charybdis_model(
        table_name = posts,
        partition_keys = [date],
        clustering_keys = [category_id, title],
        global_secondary_indexes = []
    )]
    pub struct Post {
        date: Date,
        category_id: Uuid,
        title: Text,
        id: Uuid,
        ...
    }

    We have macro-generated delete functions for up to 3 fields of the primary key.

    Post::delete_by_date(date: Date).execute(&session).await?;
    Post::delete_by_date_and_category_id(date: Date, category_id: Uuid).execute(&session).await?;
    Post::delete_by_date_and_category_id_and_title(date: Date, category_id: Uuid, title: Text).execute(&session).await?;
  • Custom delete queries

    We can use delete_post! macro to create custom delete queries.

    delete_post!("date = ? AND category_id in ?", (date, category_vec)).execute(&session).await?

Configuration

Every operation returns CharybdisQuery that can be configured before execution with method chaining.

let user: User = User::find_by_id(id)
    .consistency(Consistency::One)
    .timeout(Some(Duration::from_secs(5)))
    .execute(&app.session)
    .await?;
    
let result: QueryResult = user.update().consistency(Consistency::One).execute(&session).await?;

Supported configuration options:

  • consistency
  • serial_consistency
  • timestamp
  • timeout
  • page_size

Batch

CharybdisModelBatch operations are used to perform multiple operations in a single batch.

  • Batch Operations

    let users: Vec<User>;
    let batch = User::batch();
    
    // inserts
    batch.append_inserts(users);
    
    // or updates
    batch.append_updates(users);
    
    // or deletes
    batch.append_deletes(users);
    
    batch.execute(&session).await?;
  • Chunked Batch Operations

    Chunked batch operations are used to operate on large amounts of data in chunks.

      let users: Vec<User>;
      let chunk_size = 100;
    
      User::batch().chunked_inserts(&session, users, chunk_size).await?;
      User::batch().chunked_updates(&session, users, chunk_size).await?;
      User::batch().chunked_deletes(&session, users, chunk_size).await?;
  • Batch Configuration

    Batch operations can be configured before execution with method chaining.

    let batch = User::batch()
        .consistency(Consistency::One)
        .retry_policy(Some(Arc::new(DefaultRetryPolicy::new())))
        .chunked_inserts(&session, users, 100)
        .await?;

    We could also use method chaining to append operations to batch:

    let batch = User::batch()
        .consistency(Consistency::One)
        .retry_policy(Some(Arc::new(DefaultRetryPolicy::new())))
        .append_update(&user_1)
        .append_update(&user_2)
        .execute(data.db_session())
        .await?;
  • Statements Batch

    We can use batch statements to perform collection operations in a batch:

    let batch = User::batch();
    let users: Vec<User>;
    
    for user in users {
        batch.append_statement(User::PUSH_TAGS_QUERY, (vec![tag], user.id));
    }
    
    batch.execute(&session).await;

Partial Model:

  • Use the auto-generated partial_<model>! macro to run operations on a subset of the model fields. This macro generates a new struct with the same structure as the original model, but only with the provided fields. The macro is automatically generated by #[charybdis_model] and follows the convention partial_<struct_name>!.

    // auto-generated macro - available in crate::models::user
    partial_user!(UpdateUsernameUser, id, username);

    Now we have a new struct UpdateUsernameUser that is equivalent to the User model, but only with the id and username fields.

    let mut update_user_username = UpdateUsernameUser {
        id,
        username: "updated_username".to_string(),
    };
    
    update_user_username.update().execute(&session).await?;
  • Partial Model Considerations:

    • partial_<model>! requires #[derive(Default)] on the native model
    • partial_<model>! requires the complete primary key in its definition
    • All derives defined below the #[charybdis_model] macro are automatically added to the partial model.
    • The partial_<model>! struct has the same field attributes as the native model, so if we have #[serde(rename = "rootId")] on a native model field, it will be present on the partial model field.
    • partial_<model>! should be defined in the same file as the native model, so it can reuse the imports required by the native model
  • As Native

    In case we need to run operations on native model, we can use as_native method:

    let native_user: User = update_user_username.as_native().find_by_primary_key().execute(&session).await?;
    // action that requires native model
    authorize_user(&native_user);

    as_native works by returning a new instance of the native model with the fields taken from the partial model; the remaining fields get default values.

  • The recommended naming convention is purpose + original struct name, e.g. UpdateAddressUser or UpdateDescriptionPost.

Callbacks

Callbacks are a convenient way to run additional logic on a model before or after certain operations. E.g.

  • we can use before_insert to set default values and/or validate model before insert.
  • we can use after_update to update other data sources, e.g. elastic search.

Implementation:

  1. Let's say we define a custom extension that will be used to update an elastic document on every post update:
    pub struct AppExtensions {
        pub elastic_client: ElasticClient,
    }
  2. Now we can implement ExtCallbacks, which will utilize this extension:
    #[charybdis_model(...)]
    pub struct Post {}
    
    impl ExtCallbacks for Post {
        type Extention = AppExtensions;
        type Error = AppError; // From<CharybdisError>
        
       // use before_insert to set default values
        async fn before_insert(
            &mut self,
            _session: &CachingSession,
            extension: &AppExtensions,
        ) -> Result<(), AppError> {
            self.id = Uuid::new_v4();
            self.created_at = Utc::now();
            
            Ok(())
        }
        
        // use before_update to set updated_at
        async fn before_update(
            &mut self,
            _session: &CachingSession,
            extension: &AppExtensions,
        ) -> Result<(), AppError> {
            self.updated_at = Utc::now();
            
            Ok(())
        }
    
        // use after_update to update elastic document
        async fn after_update(
            &mut self,
            _session: &CachingSession,
            extension: &AppExtensions,
        ) -> Result<(), AppError> {
            extension.elastic_client.update(...).await?;
    
            Ok(())
        }
        
        // use after_delete to delete elastic document
        async fn after_delete(
            &mut self,
            _session: &CachingSession,
            extension: &AppExtensions,
        ) -> Result<(), AppError> {
            extension.elastic_client.delete(...).await?;
    
            Ok(())
        }
    }
  • Possible Callbacks:

    • before_insert
    • before_update
    • before_delete
    • after_insert
    • after_update
    • after_delete
  • Triggering Callbacks

    In order to trigger callbacks we use the <operation>_cb methods: insert_cb, update_cb and delete_cb, from the corresponding traits. This gives a clear distinction between a plain insert and an insert with callbacks (insert_cb). Just as with main operations, we can configure the callback operation query before execution.
     use charybdis::operations::{DeleteWithCallbacks, InsertWithCallbacks, UpdateWithCallbacks};
    
     post.insert_cb(app_extensions).execute(&session).await;
     post.update_cb(app_extensions).execute(&session).await;
     post.delete_cb(app_extensions).consistency(Consistency::All).execute(&session).await;

Collections

For each collection field, we get the following:

  • PUSH_<field_name>_QUERY static str
  • PUSH_<field_name>_IF_EXISTS_QUERY static str
  • PULL_<field_name>_QUERY static str
  • PULL_<field_name>_IF_EXISTS_QUERY static str
  • push_<field_name> method
  • push_<field_name>_if_exists method
  • pull_<field_name> method
  • pull_<field_name>_if_exists method
  1. Model:

    #[charybdis_model(
        table_name = users,
        partition_keys = [id],
        clustering_keys = []
    )]
    pub struct User {
        id: Uuid,
        tags: Set<Text>,
        post_ids: List<Uuid>,
        books_by_genre: Map<Text, Frozen<List<Text>>>,
    }
  2. Generated Collection Queries:

    The generated queries expect the value as the first bind value and the primary key fields as the following bind values.

    impl User {
        const PUSH_TAGS_QUERY: &'static str = "UPDATE users SET tags = tags + ? WHERE id = ?";
        const PUSH_TAGS_IF_EXISTS_QUERY: &'static str = "UPDATE users SET tags = tags + ? WHERE id = ? IF EXISTS";
    
        const PULL_TAGS_QUERY: &'static str = "UPDATE users SET tags = tags - ? WHERE id = ?";
        const PULL_TAGS_IF_EXISTS_QUERY: &'static str = "UPDATE users SET tags = tags - ? WHERE id = ? IF EXISTS";
         
        const PUSH_POST_IDS_QUERY: &'static str = "UPDATE users SET post_ids = post_ids + ? WHERE id = ?";
        const PUSH_POST_IDS_IF_EXISTS_QUERY: &'static str = "UPDATE users SET post_ids = post_ids + ? WHERE id = ? IF EXISTS";
    
        const PULL_POST_IDS_QUERY: &'static str = "UPDATE users SET post_ids = post_ids - ? WHERE id = ?";
        const PULL_POST_IDS_IF_EXISTS_QUERY: &'static str = "UPDATE users SET post_ids = post_ids - ? WHERE id = ? IF EXISTS";
    
        const PUSH_BOOKS_BY_GENRE_QUERY: &'static str = "UPDATE users SET books_by_genre = books_by_genre + ? WHERE id = ?";
        const PUSH_BOOKS_BY_GENRE_IF_EXISTS_QUERY: &'static str = "UPDATE users SET books_by_genre = books_by_genre + ? WHERE id = ? IF EXISTS";
        
        const PULL_BOOKS_BY_GENRE_QUERY: &'static str = "UPDATE users SET books_by_genre = books_by_genre - ? WHERE id = ?";
        const PULL_BOOKS_BY_GENRE_IF_EXISTS_QUERY: &'static str = "UPDATE users SET books_by_genre = books_by_genre - ? WHERE id = ? IF EXISTS";
    }

    Now we can use these constants within batch operations.

    let batch = User::batch();
    let users: Vec<User>;
    
    for user in users {
        batch.append_statement(User::PUSH_TAGS_QUERY, (vec![tag], user.id));
    }
    
    batch.execute(&session).await;
  3. Generated Collection Methods:

    push_<field_name> and pull_<field_name> methods are generated for each collection field.

    let user = User::new();
    
    user.push_tags(tags: HashSet<T>).execute(&session).await;
    user.push_tags_if_exists(tags: HashSet<T>).execute(&session).await;
    
    user.pull_tags(tags: HashSet<T>).execute(&session).await;
    user.pull_tags_if_exists(tags: HashSet<T>).execute(&session).await;
    
    
    user.push_post_ids(ids: Vec<T>).execute(&session).await;
    user.push_post_ids_if_exists(ids: Vec<T>).execute(&session).await;
    
    user.pull_post_ids(ids: Vec<T>).execute(&session).await;
    user.pull_post_ids_if_exists(ids: Vec<T>).execute(&session).await;
    
    user.push_books_by_genre(map: HashMap<K, V>).execute(&session).await;
    user.push_books_by_genre_if_exists(map: HashMap<K, V>).execute(&session).await;
    
    user.pull_books_by_genre(map: HashMap<K, V>).execute(&session).await;
    user.pull_books_by_genre_if_exists(map: HashMap<K, V>).execute(&session).await;

Ignored fields

We can ignore fields by using #[charybdis(ignore)] attribute:

#[charybdis_model(...)]
pub struct User {
    id: Uuid,
    #[charybdis(ignore)]
    organization: Option<Organization>,
}

So the field organization will be ignored in all operations, and its default value will be used when deserializing from other data sources. It can be used to hold data that is not persisted in the database.

charybdis's People

Contributors

danielhe4rt · goranbrkuljan · mykaul


charybdis's Issues

Enable delete by LSI

I'm pretty sure scylla supports this. I want to delete a row by its partition key and a field included in an LSI. The macros don't generate this.

[Idea] Charybdis Migrate: Keyspaces

I was wondering if there's any possibility to create Keyspaces directly from a file inside models/ folder. Something like: src/models/keyspaces.rs

Also, set it as the default for each model, like:

#[charybdis_model(
keyspace_name = "leaderboard"
table_name = song_leaderboard,
partition_keys = [song_id, modifiers, difficulty, instrument],
clustering_keys = [player_id, score],
global_secondary_indexes = [],
local_secondary_indexes = [],
table_options = "
  CLUSTERING ORDER BY (player_id ASC, score DESC)
",
)]

Even if it's just ONE keyspace, it would be better than re-running these migrations from scratch...

Thanks in advance!

Section to add: Good practices with X Framework

Hey! I was wondering whether it would be a good option to add a simple section explaining how Charybdis works together with other frameworks.

Since I'm doing that project using DataTransferObjects, I realized that I can just skip primitive types and go straight to Charybdis type for validation on the DTO's.

use charybdis::types::{Frozen, Set, Text};
use serde::{Deserialize, Serialize};
use validator::Validate;

use crate::models::leaderboard::Leaderboard;

#[derive(Deserialize, Debug, Validate)]
pub struct SubmissionDTO {
    pub song_id: String, // primitive
    pub player_id: Text, // charybdis type
    pub modifiers: Frozen<Set<Text>>, // charybdis type
    pub score: i32,
    pub difficulty: String,
    pub instrument: String,
}

AFAIK they have the same behavior based on what you sent on the request, right? And also since it's been decoded with serde it should work to all web frameworks.

Failed to type check Rust type uuid::Uuid against CQL type Timeuuid: expected one of the CQL types: [Uuid]

This test fails with the following error:

called `Result::unwrap()` on an `Err` value: QueryError("INSERT INTO test (id) VALUES (:id)", BadQuery(SerializationError(SerializationError(BuiltinSerializationError { rust_name: "test::Test", kind: ColumnSerializationFailed { name: "id", err: SerializationError(BuiltinTypeCheckError { rust_name: "uuid::Uuid", got: Timeuuid, kind: MismatchedType { expected: [Uuid] } }) } }))))
#[cfg(test)]
mod test {
    use charybdis::operations::Insert;
    use charybdis::types::{Timeuuid, Uuid};
    use charybdis_macros::charybdis_model;
    use scylla::{CachingSession, SessionBuilder};
    use std::str::FromStr;

    #[derive(Debug, Default)]
    #[charybdis_model(
        table_name = test,
        partition_keys = [id],
        clustering_keys = [],
    )]
    struct Test {
        id: Timeuuid,
    }

    #[tokio::test]
    async fn test() {
        let session = SessionBuilder::new()
            .known_node("localhost")
            .use_keyspace("test", true)
            .build()
            .await
            .unwrap();

        let session = CachingSession::from(session, 100);
        let record = Test {
            id: Uuid::from_str("b01e22a8-0205-11ef-8f3a-42048a2ee1aa").unwrap(),
        };

        record.insert().execute(&session).await.unwrap();
    }
}

Community resources

It would be neat to have a Discord server or, less preferably, a Slack community.

Move charybdis::types into a separate feature/package

Hello!
First of all thanks for a great project.
I am using Leptos and I want to use same structs in WASM frontend. I cannot do that because Charybdis in an ssr-only dependency and charybdis::types is unavailable.

Missing find by secondary index

Hi, not sure what's going on here but it looks like in this specific struct that the macros aren't generating the expected find_by_email associated function.

#[charybdis_model(
    table_name = chefs,
    partition_keys = [id],
    clustering_keys = [],
    global_secondary_indexes = [email],
    local_secondary_indexes = [],
    static_columns = []
)]
#[derive(Default)]
pub struct Chef {
    pub id: Uuid,
    pub email: Text,
    pub completed_signup: bool,
    pub first_name: Text,
    pub last_name: Text,
}

Composed Partition Keys in Materialized Views

Hey! I'm trying to create a MV with two keys in the partition key but seems to not work. Maybe it's a syntax error. It's suppose to work?

#[charybdis_view_model(
table_name=timeline_liked,
base_table=timeline,
partition_keys=[(username, liked)], // <- 
clustering_keys=[created_at, tweet_id],
table_options = "CLUSTERING ORDER BY (created_at DESC)"
)]
pub struct TimelineLiked {
    pub username: Text,
    pub tweet_id: Uuid,
    pub author: Text,
    pub text: Text,
    pub liked: Boolean,
    pub bookmarked: Boolean,
    pub retweeted: Boolean,
    pub created_at: Timeuuid
}

The output:

(screenshot of the output omitted)

The expected query to be executed:

CREATE MATERIALIZED VIEW IF NOT EXISTS timeline_liked AS
    SELECT
        text,
        created_at,
        tweet_id,
        retweeted,
        bookmarked,
        author,
        liked,
        username
    FROM 
        timeline
    WHERE
        username IS NOT NULL AND
        liked IS NOT NULL AND
        created_at IS NOT NULL AND
        tweet_id IS NOT NULL
PRIMARY KEY ((username, liked), created_at, tweet_id) WITH CLUSTERING ORDER BY (created_at DESC);

Empty set fails to deserialise back into empty hashset

When a column with a Set type is inserted and defaulted (to become an empty hash set), it is stored in Scylla as null. When reading this, the crate fails to convert the null into an empty hashset.

Apologies for bad image

(screenshot omitted)

failed to find meals: NextRowError("SELECT chef, id, name, description, ingredients, allergens, dietary_requirements, images, added_on FROM meals WHERE chef = ?", FromRowError(BadCqlVal { err: ValIsNull, column: 7 }))

TLS support

Hiya, does this support using TLS to connect to scylla? If not, can we add a feature to enable that functionality?

Handling Counters

Hey!

There's any example on how to increment/decrement using counters? I tried to look at the docs or even inspecting the charybdis counter type but with no luck.

Find macro for materialised views

Hi,

I have a PurchasesByChef MV. I want to query based on a date < ... but can't do so like with normal tables. Can this be implemented? date is a cluster key on both Purchases and the MV.

However, there is a find_purchases_by_chef_query macro but I'm not quite sure how to use these.

Update with a variable set of fields using MaybeUnset

Here's how I am doing it with the native driver:

use scylla::frame::value::MaybeUnset::{Set, Unset};
    scylla
        .query(
            "UPDATE places
            SET name = ?, phone = ?, email = ?, address = ?, google_maps_link = ?, short_description = ?, description = ?, tags = ?
            WHERE id = ? IF owner_telegram_id = ?",
            (
                name.map_or(Unset, Set),
                phone.map_or(Unset, Set),
                email.map_or(Unset, Set),
                address.map_or(Unset, Set),
                google_maps_link.map_or(Unset, Set),
                short_description.map_or(Unset, Set),
                description.map_or(Unset, Set),
                tags.map_or(Unset, |x| {
                    Set(x.split(',')
                        .map(|s| s.trim().to_string())
                        .collect::<Vec<String>>())
                }),
                id.clone(),
                telegram_id.clone(),
            ),
        )
        .await?;

Is there a Charybdis alternative to this?

How to use `Time` and `Duration`?

Hi, sorry I know I'm opening a lot of issues. Does charybdis support the use of the Time and Duration scylla types? If so how do I use them?

I know Time is a chrono::DateTime<Utc> even though i think it should be the NaiveTime type that the scylla crate uses because Time only specifies a time in a day, not a date.

Duration is just a string and not sure how to work with it

Raw Query Dump

Hey Goran!

There's any way to dump the raw query before it get executed for debugging purposes?

Update field using GSI

Hiya,

Thanks for all your work as usual. I'm trying to update a field on a row to be true by using the GSI to lookup the row. I can't use the partial macro as it requires the primary key. How would I do this?

Custom types for migrations

Do you plan on adding some sort of support for custom types for migrations?

I think just the ability to map a custom type to a CqlType would be the easiest work around, maybe a field attribute?

#[charybdis_model(...)] pub struct Example { #[charybdis(cql_type = Varint)] pub foo: MyCustomType, #[charybdis(cql_type = Blob)] pub bar: OtherCustomType, }

Can't find rows from just partition key

I'm pretty sure this should be possible, but I have a table:

#[charybdis_model(
    table_name = meals,
    partition_keys = [chef], // place dishes of the same chef together
    clustering_keys = [id],
    global_secondary_indexes = [],
    local_secondary_indexes = [],
    static_columns = []
)]
#[derive(Default, Serialize, Deserialize)]
pub struct Meal {
    pub id: Uuid,
    pub chef: Uuid,
    ...
}

I'm pretty sure I should be able to look up all meals by a chef, but for some reason I cannot do so?

error[E0599]: no function or associated item named `find_by_chef` found for struct `Meal` in the current scope
  --> src/chef/meals/get.rs:52:34
   |
52 |     let mut stream = match Meal::find_by_chef(query.chef).execute(&db).await {
   |                                  ^^^^^^^^^^^^ function or associated item not found in `Meal`

Screenshot 2024-04-19 at 18 59 30

Project Status

Hey! Just wondering if the message below still a thing:

Warning

WIP: This project is currently in an experimental stage. It depends on Rust: 1.75.0 which is currently Beta and it's scheduled for stable release on Dec 28.

Also about the compatibility issue that you mentioned on our discussion on Twitter, I just saw that the driver maintainers already implemented the fix for that (scylladb/scylla-rust-driver#899).

Charybdis Sample Project

Hey, how's going?

There's any sample project using Charybdis that I can inspect? Like a todo-list or something related?

I really loved the simplicity on the usage and want to know more about, but I'm still a Rust newbie... Maybe we can bring more improvements on the base doc together as well!

Cheers.

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.