
rust-cryptoki's Introduction

Cryptoki Rust Wrapper

The cryptoki crate provides an idiomatic interface to the PKCS #11 API. The cryptoki-sys crate provides the direct FFI bindings.

Community

Come and ask questions or talk with the Parsec Community in our Slack channel or biweekly meetings. See the Community repository for more information on how to join.

Contributing

Please check the Contribution Guidelines to know more about the contribution process.

History

This repository is based on this original PR on rust-pkcs11. Read the PR discussion for more information.

License

The software is provided under Apache-2.0. Contributions to this project are accepted under the same license.

Copyright 2021 Contributors to the Parsec project.

rust-cryptoki's People

Contributors

a1ien, arjennienhuis, baloo, bobo1239, daxpedda, ellerh, firstyear, gowthamsk-arm, heiher, hug-dev, ionut-arm, jeamland, jhagborgftx, jippeholwerda, kakemone, marcsvll, mike-boquard, nickray, nk185, palfrey, robertdrazkowskigl, sbihel, subject38, sven-bg, tgonzalezorlandoarm, vkkoskie, wiktor-k, ximon18


rust-cryptoki's Issues

Treat CK*_VENDOR_DEFINED correctly

All enum types with an associated VENDOR_DEFINED constant describe that value in roughly the same way in the standard. Using key types as an example: "Key types CKK_VENDOR_DEFINED and above are permanently reserved for token vendors. For interoperability, vendors should register their key types through the PKCS process."

The intent is that vendors who aren't concerned with interoperability or standardization are permitted to safely use that entire range of values. This crate largely omits these values, but not consistently. In the two places where it does use them, they aren't treated properly:

  1. CKM_VENDOR_DEFINED is recognized for the Display trait, but only as a single value. All other in-range values are treated as errors.
  2. CKR_VENDOR_DEFINED is reported as an error return value. All other in-range values are converted to GeneralError.

In both cases, the underlying value is made visible in printed strings but lost within the type system and to the user.

Proposed changes:

  • All enums documented to support vendor extensions should at least recognize them properly, even if full support isn't provided.
  • Where defined, all values x that satisfy VENDOR_DEFINED <= x <= ULONG::MAX should be propagated back up to the user to respond to. For Rv, this would likely be a sibling variant to Ok and Error(RvError). For all other enums it would be an additional discriminant carrying the non-standard value (i.e., VendorDefined(Ulong)).
  • All stringification sites should include the value associated with vendor-defined values (e.g., "CKM_VENDOR_DEFINED(0x{%08x})"). A sketch of this shape follows below.
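A minimal sketch (toy types, not the crate's actual definitions) of what preserving the vendor-defined range could look like for one such enum:

type Ulong = u64; // stand-in for the crate's Ulong

#[derive(Debug, PartialEq)]
enum KeyType {
    Rsa,
    Aes,
    // Everything in [CKK_VENDOR_DEFINED, Ulong::MAX] is preserved instead of becoming an error.
    VendorDefined(Ulong),
}

impl std::fmt::Display for KeyType {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            KeyType::Rsa => write!(f, "CKK_RSA"),
            KeyType::Aes => write!(f, "CKK_AES"),
            // Stringification keeps the concrete value visible to the user.
            KeyType::VendorDefined(v) => write!(f, "CKK_VENDOR_DEFINED(0x{v:08x})"),
        }
    }
}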

PIN Handling

This issue is opened to continue the discussion started in #49 with regard to how user PINs are handled by the cryptoki library wrapper.

Currently, when a user wishes to log in, they must first call set_pin on the Pkcs11 struct, which stores the PIN in a HashMap keyed by Slot. Then the user may call login on the created Session object.

Any other function that directly handles pins (those that expose the C_InitToken, C_InitPIN and C_SetPIN functionality) wraps the provided PIN within a Secret structure and passes a pointer to the appropriate PKCS#11 function. Upon completion of the function, the Secret structure will drop and zeroize the memory.

My first proposition is to drop the pins HashMap member in the Pkcs11 struct and provide a pin parameter to Session::login.

First, if the user wishes to change which user is logged in (say, going from CKU_USER to CKU_SO), they must call set_pin to change the PIN and then call login. This requires the implementer to track which PIN is currently stored by the library to ensure they are logging in with the correct one.

Secondly, and this is a personal preference, I do not believe the library should be storing the user's passwords. The user should control where their sensitive data is stored and be responsible for it, rather than the library.

Thirdly, and this is one of those weird PKCS#11 corner cases, there are situations where you may not want to provide a password to C_Login at all. For example, if the CKF_PROTECTED_AUTHENTICATION_PATH flag is set in the token flags, that indicates there is a separate way of logging into the token (e.g. a fingerprint reader, smart card reader, etc.). In this situation, you must pass a NULL_PTR to C_Login to take advantage of that alternative authentication path.
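A minimal illustration of the pointer mapping this implies at the FFI boundary (an illustrative helper, not part of the crate):

use std::ptr;

// Some(pin) passes the PIN bytes and their length; None passes a null pointer
// and zero length, so a token with CKF_PROTECTED_AUTHENTICATION_PATH can use
// its own PIN pad/reader. The length would be cast to CK_ULONG at the call site.
fn pin_to_raw(pin: Option<&str>) -> (*mut u8, usize) {
    match pin {
        Some(p) => (p.as_ptr() as *mut u8, p.len()),
        None => (ptr::null_mut(), 0),
    }
}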

My second proposition is to drop Secret and instead pass the pointer from the provided str slice directly to the appropriate C_ function. If this isn't possible, then use Secret whenever a PIN has to be copied into memory controlled by the library.

Resizing/Truncating returned lists

Pkcs11::get_slots_with_token, Pkcs11::get_all_slots and Pkcs11::get_mechanism_list currently truncate returned lists to account for changed sizes between querying the size and getting the list, a typical race condition.

There are two problems with this currently:

  • Vec::resize is used instead of Vec::truncate for Pkcs11::get_slots_with_token and Pkcs11::get_all_slots.
  • Lists can also increase in size between the two calls, which currently returns an error; it might be better to retry in a loop until the correct size is established (see the sketch below).
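A rough sketch of the suggested retry loop (illustrative only, not the crate's code):

struct BufferTooSmall;

// Re-query the length whenever the token reports the buffer was too small,
// and truncate (not resize) if fewer items came back than were allocated.
fn get_list_with_retry<T: Default + Clone>(
    query_len: impl Fn() -> usize,
    fill: impl Fn(&mut [T]) -> Result<usize, BufferTooSmall>,
) -> Vec<T> {
    loop {
        let mut buf = vec![T::default(); query_len()];
        match fill(buf.as_mut_slice()) {
            Ok(returned) => {
                buf.truncate(returned); // the list shrank between the two calls
                return buf;
            }
            Err(BufferTooSmall) => continue, // the list grew: ask for the size again
        }
    }
}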

Thread safety?

I noticed the tests are #[serial]. How do I use this library in a multi-threaded Rust app safely? It would be nice to be able to amortize the cost of slot/token/login/key stuff.

Pack structs in C bindings

This creates a separate issue out of the latter part of this comment and the one that follows.

When bindings are generated from the C headers in cryptoki-sys, they use whatever structure alignment and packing the target platform assumes, unless packing is made explicit in the headers for bindgen to read. Currently, packing is only specified for Windows and left implicit everywhere else.

You can confirm that forcing 1-byte alignment for structs on other platforms does indeed produce bindings that differ in size and field offsets, i.e. compact packing is not the implicit default.

Meanwhile, the PKCS#11 standard (both 2.x and 3.x, Section 2.1) is very clear that

Cryptoki structures are packed to occupy as little space as is possible. Cryptoki structures SHALL be packed with 1-byte alignment.

This would seem to imply that packing should be explicit for all target bindings. But when this is done, several problems arise:

  1. Rust assumes a >1-byte minimum alignment for struct members, which makes referencing anything beyond the 0th item undefined behavior. Each such instance of this (hundreds in the auto-generated tests) produces a valid, unsuppressible compiler warning. This is a known issue for bindgen that doesn't appear to be nearing a solution any time soon.
  2. Tests written for this crate using SoftHSM seg fault. Whether this is at the rust level (dereferencing unaligned addresses) or at the C level (mismatch with struct packing internal to SoftHSM) is unclear. The latter doesn't seem like it should be the case, but the fact that tests are currently passing would seem to be an endorsement of implicit, non-compact alignment.

So, something is incorrect here, but what exactly that is needs to be investigated. Even if it turns out the way the bindings are currently generated is correct, that fact should still be documented conspicuously to avoid further misconception.
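For reference, a minimal reproduction of the alignment hazard from item 1, independent of the generated bindings:

// With 1-byte packing, fields can sit below their type's natural alignment.
#[repr(C, packed)]
struct PackedExample {
    tag: u8,
    count: u32, // offset 1, below u32's natural alignment of 4
}

fn read_count(e: &PackedExample) -> u32 {
    // Writing `&e.count` here is flagged by the compiler as an unaligned
    // reference to a packed field; copying the value out by value is fine.
    e.count
}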

Digest

Hi cryptoki!
Is it possible to perform a digest operation (ex. SHA256) on an HSM using cryptoki? I see a TODO in the code.

Is anyone actively working to implement this?
Thanks!
-F

`pkcs11.open_session_no_callback` against Luna Network HSM crashed with SIGSEGV

It doesn't seem like rust-cryptoki can be used to open a PKCS#11 session to the Thales eLab (i.e. a Luna Network HSM) without ending up in (signal: 11, SIGSEGV: invalid memory reference).

Sample unit test:

extern crate cryptoki;

use cryptoki::Pkcs11;
use cryptoki::types::slot_token::Slot;
use cryptoki::types::locking::CInitializeArgs;
use cryptoki::types::Flags;
use std::env;

fn open_session() -> (Pkcs11, Slot) {
    let pkcs11 = Pkcs11::new(
        env::var("PKCS11_HSM_MODULE")
            .unwrap_or_else(|_| "/usr/safenet/lunaclient/lib/libCryptoki2_64.so".to_string()),
    )
    .unwrap();

    // initialize the library
    pkcs11.initialize(CInitializeArgs::OsThreads).unwrap();

    // find a slot, get the first one
    let slot = pkcs11.get_slots_with_token().unwrap().remove(0);

    println!("slot: {}", slot.id());

    // set flags
    let mut flags = Flags::new();
    let _ = flags.set_rw_session(true).set_serial_session(true);

    {
        pkcs11.open_session_no_callback(slot, flags).unwrap();
    }

    (pkcs11, slot)
}

#[test]
#[serial]
fn test_open_session() {
    open_session();
}

Test run and failure

# Centos 7
$ cargo test -- --nocapture
running 1 test
slot: 0
error: test failed, to rerun pass '--bin xks-proxy'

Caused by:
  process didn't exit successfully: `/local/centos/ThalesElab/rust/target/debug/deps/my_project-be90c88b811a4002 --nocapture` (signal: 11, SIGSEGV: invalid memory reference)

Note

  1. I can open the session in pure C without issue.
  2. Switching between rust stable vs nightly build doesn't seem to make any difference.

Also, changing the libloading dependency to open the .so file from:

Library::open(Some(filename), RTLD_LAZY | RTLD_LOCAL)

to:

Library::open(Some(filename), RTLD_NOW)

doesn't seem to make any difference.

Missing constants for x86_64-unknown-linux-gnu

The latest version of cryptoki-sys does not include the CKF_EC_F_2M constant, so compilation of cryptoki v0.4.1 as a dependency fails.

My system:

Linux blade 6.2.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Fri, 03 Mar 2023 15:58:31 +0000 x86_64 GNU/Linux

Steps to reproduce

cargo new cryptoki-repro
cd cryptoki-repro
cargo add cryptoki@0.4.1
cargo build

Fails with message:

error[E0425]: cannot find value `CKF_EC_F_2M` in this scope
   --> /home/jck/.cargo/registry/src/github.com-1ecc6299db9ec823/cryptoki-0.4.1/src/mechanism/mechanism_info.rs:26:25
    |
26  |         const EC_F_2M = CKF_EC_F_2M;
    |                         ^^^^^^^^^^^ help: a constant with a similar name exists: `CKF_EC_F_P`
    |
   ::: /home/jck/.cargo/registry/src/github.com-1ecc6299db9ec823/cryptoki-sys-0.1.5/src/bindings/x86_64-unknown-linux-gnu.rs:581:1
    |
581 | pub const CKF_EC_F_P: CK_FLAGS = 1048576;
    | ------------------------------ similarly named constant `CKF_EC_F_P` defined here

error[E0425]: cannot find value `CKF_EC_ECPARAMETERS` in this scope
  --> /home/jck/.cargo/registry/src/github.com-1ecc6299db9ec823/cryptoki-0.4.1/src/mechanism/mechanism_info.rs:27:33
   |
27 |         const EC_ECPARAMETERS = CKF_EC_ECPARAMETERS;
   |                                 ^^^^^^^^^^^^^^^^^^^ not found in this scope

error[E0425]: cannot find value `CKF_ERROR_STATE` in this scope
  --> /home/jck/.cargo/registry/src/github.com-1ecc6299db9ec823/cryptoki-0.4.1/src/slot/token_info.rs:35:29
   |
35 |         const ERROR_STATE = CKF_ERROR_STATE;
   |                             ^^^^^^^^^^^^^^^ not found in this scope

For more information about this error, try `rustc --explain E0425`.
error: could not compile `cryptoki` due to 3 previous errors

I can see that latest main branch code contains those constants so perhaps all that is to be done is tagging a new version?

RSA OAEP interface is unsound

The following code compiles without error or warning:

use cryptoki::context::{CInitializeArgs, Pkcs11};
use cryptoki::mechanism::{Mechanism, MechanismType};
use cryptoki::mechanism::rsa::{PkcsMgfType, PkcsOaepParams, PkcsOaepSourceType};
use cryptoki::object::Attribute;
use cryptoki::session::UserType;
use std::error::Error;

const PKCS11_LIB_PATH: &str = "/usr/lib/softhsm/libsofthsm2.so";
const SLOT_ID: u64 = 1359578051; // from softhsm2-util
const PIN: &str = "testpin";

fn main() -> Result<(), Box<dyn Error>> {
    let mut context = Pkcs11::new(PKCS11_LIB_PATH)?;
    context.initialize(CInitializeArgs::OsThreads)?;
    let session = context.open_rw_session(SLOT_ID.try_into()?)?;
    session.login(UserType::User, Some(PIN))?;

    let pub_key_template = [Attribute::ModulusBits(2048.into())];
    let (pubkey, _privkey) = session.generate_key_pair(&Mechanism::RsaPkcsKeyPairGen,
                                                       &pub_key_template, &[])?;

    let encrypt_mechanism = Mechanism::RsaPkcsOaep(PkcsOaepParams {
        hash_alg: MechanismType::SHA1,
        mgf: PkcsMgfType::MGF1_SHA1,
        source: PkcsOaepSourceType::DATA_SPECIFIED,
        source_data: 0xBADC0DE as _, // uh oh!
        source_data_len: 1.into(),
    });

    session.encrypt(&encrypt_mechanism, pubkey, b"Hello, world!")?;

    Ok(())
}

This is almost certainly UB (although in this example, libsofthsm returns CKR_ARGUMENTS_BAD, only because it does not support non-null source_data). Rust code should not expose an interface that may dereference a user-supplied raw pointer without marking that interface unsafe.

I don't care much about this parameter, but the same problem exists for other, not-yet-supported mechanisms whose parameters contain pointers (notably, AES-GCM and AES-CCM). I do care about safe interfaces for those, and it would make sense to treat them all consistently.

Possible solutions

No matter what, we can't let the user supply arbitrary pointers, without making them call an unsafe function. The internals of PkcsOaepParams should be private, with one or more safe constructors (and possibly an unsafe constructor taking a raw pointer).

Don't support source_data at all

I don't see any tests for this feature. libsofthsm2 doesn't support it. Is anyone using this thing anyway? Maybe we just force it to be null.

On the other hand, we'll need to figure out the same thing for AES-GCM and AES-CCM, where we do care about non-null parameters.

Make PkcsOaepParams own its source data

This is ergonomic, although inefficient if source_data is large and must be shared. The spec allows source_data to be huge (as large as the hash function supports), but again, is anyone doing that?

Add a lifetime parameter to Mechanism and PkcsOaepParams

This allows for more flexibility, although it clutters the interface of anything that uses a Mechanism. Luckily, any variant which does not use the lifetime parameter can be inferred 'static.

This is what I personally would like most, but I understand it's an invasive change.
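For illustration, a rough shape of the lifetime-based option combined with a safe constructor (toy types, not a final API proposal):

// The params borrow their source data, so no raw pointer crosses the safe API;
// `None` maps to a null pointer and zero length at the FFI boundary.
pub struct OaepParams<'a> {
    hash_alg: u64,
    mgf: u64,
    source_data: Option<&'a [u8]>,
}

impl<'a> OaepParams<'a> {
    /// Safe constructor: callers can only hand in a real slice (or nothing).
    pub fn new(hash_alg: u64, mgf: u64, source_data: Option<&'a [u8]>) -> Self {
        Self { hash_alg, mgf, source_data }
    }
}

A value built with None can simply be OaepParams<'static>, which keeps the common no-source-data case unchanged.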

Provide an unsafe constructor taking a pointer and length

This isn't great if it's the only way, but it does provide a useful backdoor if we go with forcing ownership.

Segmentation fault on parsing `Date`

When requesting StartDate or EndDate attributes on a key we get a segmentation fault. For example, running the following unit test:

#[test]
#[serial]
fn aes_key() -> Result<()> {
    let (pkcs11, slot) = init_pins();

    // set flags
    let mut flags = SessionFlags::new();
    let _ = flags.set_rw_session(true).set_serial_session(true);

    // open a session
    let session = pkcs11.open_session_no_callback(slot, flags)?;

    // log in the session
    session.login(UserType::User, Some(USER_PIN))?;

    // get mechanism
    let mechanism = Mechanism::AesKeyGen;

    // pub key template
    let key_template = vec![
        Attribute::Class(ObjectClass::SECRET_KEY),
        Attribute::Token(true),
        Attribute::Sensitive(true),
        Attribute::ValueLen(16.into()),
        Attribute::KeyType(KeyType::AES),
        Attribute::Label(b"testAES".to_vec()),
        Attribute::Private(true),
    ];

    // generate a key pair
    let key = session.generate_key(&mechanism, &key_template)?;

    let attributes_result =
        session.get_attributes(key, &[AttributeType::EndDate, AttributeType::StartDate]);

    match attributes_result {
        Ok(attributes) => println!("working with version: {:?}", attributes),
        Err(e) => println!("error getting attributes: {:?}", e),
    }
    Ok(())
}

results (for me) in a SIGSEGV.

`clone()` and `is_initialized()`

Looking at the logic of initialize(), is_initialized() and finalize(), I found out that Pkcs11::initialized can get out of sync with Pkcs11::impl_.

    #[test]
    fn test_clone_initialize() {
        let mut lib = Pkcs11::new(config::pkcs11_lib()).unwrap();
        let mut clone = lib.clone();
        lib.initialize(CInitializeArgs::OsThreads).unwrap();
        if !clone.is_initialized() {
            clone.initialize(CInitializeArgs::OsThreads).unwrap();
        }
    }

I'm not sure how to fix this. I'm not sure why initialized is in Pkcs11 and not in Pkcs11Impl or even stored at all.

I'm also not sure why Pkcs11 should be cloned and not borrowed.

UserNotLoggedIn calling decrypt after login....

I'm not terribly familiar with this stuff, but this seems pretty straight-forward, lol.

I have an app successfully using this module on my machine (sigh...).
I can get session, inspect slots, find tokens, login, and perform crypto operations just fine.

On another user's machine, I can do all of the above, but when I try to call decrypt using the private key it throws a UserNotLoggedIn even though I successfully logged in immediately BEFORE calling decrypt.
The login threw no error. I saw their software come up and prompt for the pin, yet I still get this error...

Has anyone ever seen this before?

PKCS OAEP padding always returns: Pkcs11(ArgumentsBad)

This OAEP code compiles but then throws the Pkcs11(ArgumentsBad) error for any parameters set.
I am using softhsm2.
Here is the code:

use cryptoki::context::{CInitializeArgs, Pkcs11};
use cryptoki::mechanism::{Mechanism, MechanismType};
use cryptoki::mechanism::rsa::{PkcsMgfType, PkcsOaepParams, PkcsOaepSource};
use cryptoki::object::Attribute;
use cryptoki::session::UserType;
use std::error::Error;
use cryptoki::types::AuthPin;
const PKCS11_LIB_PATH: &str = "PATH";
const SLOT_ID: u64 = SLOT;
fn main() -> Result<(), Box<dyn Error>> {
    let context = Pkcs11::new(PKCS11_LIB_PATH)?;
    context.initialize(CInitializeArgs::OsThreads)?;
    let session = context.open_rw_session(SLOT_ID.try_into()?)?;
    session.login(UserType::User, Some(&AuthPin::new("PIN".into()))).expect("Invalid PIN");
    let pub_key_template = [Attribute::ModulusBits(2048.into())];
    let (publickey, _privkey) = session.generate_key_pair(&Mechanism::RsaPkcsKeyPairGen,
                                                       &pub_key_template, &[])?;
    let oaep=PkcsOaepParams::new(MechanismType::SHA1 ,PkcsMgfType::MGF1_SHA1, PkcsOaepSource::empty());
    let encrypt_mechanism: Mechanism = Mechanism::RsaPkcsOaep(oaep);

    session.encrypt(&encrypt_mechanism, publickey, b"Test")?;
    Ok(())
}

Which results in the error: Error: Pkcs11(ArgumentsBad).
I tried the same code without OAEP and it works, so I presume it is an issue with the OAEP padding.
Thank you

Signing with RSA-PSS does not hash the message with the given function

I have code that looks like this:

let mechanism = Mechanism::RsaPkcsPss(PkcsPssParams {
    hash_alg: MechanismType::SHA256,
    mgf: PkcsMgfType::MGF1_SHA256,
    s_len: Ulong::from(32)
});
let label = Attribute::Label(keyname.to_string().into_bytes());
let key = session
    .find_objects(&[label])
    .unwrap()
    .into_iter()
    .nth(0)
    .unwrap();
let signature = session.sign(&mechanism, key, data);

Here, data is a &[u8]. This code only works if data has length 20, 28, 32, 48, or 64. As it turns out, these are the digest lengths for SHA1, SHA224, SHA256, SHA384, and SHA512 respectively, and it works for all five of these sizes regardless of the mechanism specifying SHA256. I expected this code to use the mechanism provided to perform the corresponding hash function on the input data. Is this an incorrect assumption on my part?
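For reference, PKCS #11's bare CKM_RSA_PKCS_PSS mechanism signs an already-computed digest, which is why only inputs of 20/28/32/48/64 bytes are accepted; the combined CKM_SHAxxx_RSA_PKCS_PSS mechanisms hash the message on the token first. Assuming the crate version in use exposes the combined variant, the on-token hashing form would look roughly like this:

// Hedged sketch: with the combined mechanism the token hashes `data` itself,
// so `data` can be arbitrary-length message bytes rather than a digest.
let mechanism = Mechanism::Sha256RsaPkcsPss(PkcsPssParams {
    hash_alg: MechanismType::SHA256,
    mgf: PkcsMgfType::MGF1_SHA256,
    s_len: Ulong::from(32),
});
let signature = session.sign(&mechanism, key, data);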

Underlying library access / vendor extensions

First of all, thanks again for this great crate / API!
Second, merry Christmas to those of you who celebrate!

My HSM vendor (entrust) has extensions to the standard PKCS#11 to support K/N authentication mechanisms with the HSM.
Those are documented here:
https://nshielddocs.entrust.com/api-generic/12.80/pkcs11#nshield-specific-pkcs11-api-extensions

My current idea is to implement a cryptoki::context::Pkcs11::unsafe_deref() method that would return the underlying libloading::Library so I could grab the missing symbols from there (they are exposed in the symbols of the ELF; no idea how they could be exposed via C_GetFunctionList).
I'll take care of having a cbindgen wrapper for those methods.
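For discussion, a rough illustration of what the caller side could look like if the Library were exposed; the symbol name and signature below are placeholders, not Entrust's actual API:

use libloading::{Library, Symbol};
use std::os::raw::c_ulong;

// Resolve a vendor-specific entry point straight from the loaded module.
// Safety: the caller must supply the correct signature for the symbol.
unsafe fn resolve_vendor_symbol(lib: &Library) -> Result<(), libloading::Error> {
    let _vendor_fn: Symbol<unsafe extern "C" fn(c_ulong) -> c_ulong> =
        lib.get(b"C_VendorDefinedExample\0")?;
    Ok(())
}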

Opening this issue to gather ideas/suggestions on that, and to see if you would be open to the use case and a PR.

It doesn't look like this is related to #105 (but I might be missing something), feel free to close as duplicate if so.

Add `is_initialized()` to `Pkcs11`

I recently created a PR for the Krill project to switch from the pkcs11 crate to the cryptoki crate and in doing so I discovered a missing piece of functionality: the cryptoki crate has no equivalent of the pkcs11 crate's `is_initialized()` function.

Of course I can track whether or not I have called initialize myself, but that's extra work. Alternatively, add an fn initialize_if_needed(), or make fn initialize() tolerate being already initialized; either would save the caller from having to check and then call initialize to avoid getting an already-initialized error.

Signing and Verifying

I'm kind of at a loss for this, but maybe I'm missing something silly...

I'm trying to sign and then verify the signature

	let sig = session.sign(&Mechanism::RsaPkcs, key_handle, &digest)?;
	eprintln!("verify: {:?}", session.verify(&Mechanism::RsaPkcs, cert_handle, &digest, &sig));

I keep getting verify: Err(Pkcs11(KeyTypeInconsistent))

The cert is queried by looking for all X.509 certs capable of signing.
I get the Id attribute of the cert, then look for Attribute::Private(true) and Attribute::Id(id) to find the key.

I've looked at pkcs11 examples in c and such and it looks like this should work? Is my card just messed up?

Provide attribute type in return from `get_attribute_info`

Currently get_attribute_info takes a slice of AttributeTypes and returns a Vec<AttributeInfo>. However, other than indexing into the slice and assuming the corresponding attribute information occupies the same index, the returned data does not correlate the attribute length to said attribute.

I can think of two solutions (but my limited Rust experience tells me there's probably better ways of doing this):

  1. Modify AttributeInfo to contain the AttributeType by modifying the enum variants to be either:
pub enum AttributeInfo {
    Unavailable(AttributeType),
    Available(AttributeType, usize),
}

or

pub enum AttributeInfo {
    Unavailable { attribute_type: AttributeType },
    Available { attribute_type: AttributeType, size: usize },
}

(Note: `type` is a reserved keyword in Rust, hence the longer field name.)
  2. Return a map which is keyed on the attribute, such as:
pub fn get_attribute_info(
    &self,
    object: ObjectHandle,
    attributes: &[AttributeType],
) -> Result<HashMap<AttributeType, AttributeInfo>>

Furthermore, get_attribute_info does not report whether C_GetAttributeValue returned CKR_ATTRIBUTE_SENSITIVE or CKR_ATTRIBUTE_INVALID. Operationally, these may not be the most helpful returns (frankly, the spec for C_GetAttributeValue leaves a lot to be desired when it comes to determining why an attribute is not returned), but I believe the wrapper library should not be masking away that information.

This part of the issue may warrant a separate ticket/discussion, but one possibility would be returning the CKR as part of a tuple (along with the attribute information) in the successful part of the Result enum.

`#[hsm_test]` attribute/macro

Hi there!

I ended up creating an #[hsm_test] macro to be able to write:

#[cfg(test)]
mod tests {
    use cryptoki::{context::Pkcs11, slot::Slot};
    use softhsm_test::hsm_test;

    #[hsm_test]
    fn test_foo(pkcs11: Pkcs11, slot: Slot) {
        assert!(false, "oh no!");
    }
}

This will run the tests in a dedicated subprocess, set up a dedicated softhsm configuration in a test dir, and update environment variables before running the test.

Do you have any interest in such a contribution? (This will be at least one extra crate, because one of them is a proc-macro.)

Improvement to unreleased open_session change?

Regarding the unreleased "Replace SessionFlags with bool to open session" change, I have a concern.

With the new code one ends up with a call like:

ctx.open_session(slot, true)?;

The reader is left wondering what that true does, so I modified my code like so:

const READ_WRITE: bool = true;
let session_handle = ctx.open_session(slot, READ_WRITE)?;

However, this issue wouldn't exist if an enum were passed in rather than a bool.
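For instance, a minimal sketch of the enum alternative (illustrative, not the crate's API):

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum SessionType {
    ReadOnly,
    ReadWrite,
}

// The call site then documents itself:
// let session_handle = ctx.open_session(slot, SessionType::ReadWrite)?;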

Calling @vkkoskie as the original author of the change and who is thus perhaps best placed to comment on it.

Add support for SHA-based KDFs for ECDH

PKCS11 defines the following EC KDFs:

  • CKD_NULL
  • CKD_SHA1_KDF
  • CKD_SHA224_KDF
  • CKD_SHA256_KDF
  • CKD_SHA384_KDF
  • CKD_SHA512_KDF

It looks like the only currently supported KDF is CKD_NULL:

impl EcKdfType {
    /// The null transformation. The derived key value is produced by
    /// taking bytes from the left of the agreed value. The new key
    /// size is limited to the size of the agreed value.
    pub const NULL: EcKdfType = EcKdfType { val: CKD_NULL };
}

Supported targets might not need an exact target triple check

When we select the bindings that we want, we only check target_arch and target_os. If those two parameters are the only things determining the value of the bindings, then we can simplify this check in build.rs:

        let supported_platforms = vec![
            "x86_64-unknown-linux-gnu".to_string(),
            "aarch64-unknown-linux-gnu".to_string(),
            "armv7-unknown-linux-gnueabi".to_string(),
            "armv7-unknown-linux-gnueabihf".to_string(),
            "arm-unknown-linux-gnueabi".to_string(),
        ];
        let target = std::env::var("TARGET").unwrap();

        // check if target is in the list of supported ones or panic with nice message
        if !supported_platforms.contains(&target) {
            panic!("Compilation target ({}) is not part of the supported targets ({:?}). Please compile with the \"generate-bindings\" feature or add support for your platform :)", target, supported_platforms);
        }

by parsing the target triple as <arch><sub>-<vendor>-<sys>-<abi> (see here) and only checking for the tuple (arch, os).

That would allow the pre-generated bindings to be used by many more targets.

edit: exact same issue on the tss-esapi crate.
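A sketch of the relaxed check in build.rs, using the target components Cargo already exposes to build scripts (the exact (arch, os) allow-list below is illustrative):

fn main() {
    let arch = std::env::var("CARGO_CFG_TARGET_ARCH").unwrap();
    let os = std::env::var("CARGO_CFG_TARGET_OS").unwrap();

    // armv7-* and arm-* targets both report target_arch = "arm".
    let supported = matches!(
        (arch.as_str(), os.as_str()),
        ("x86_64", "linux") | ("aarch64", "linux") | ("arm", "linux")
    );

    if !supported {
        panic!(
            "Compilation target ({}/{}) has no pre-generated bindings. Please compile with the \
             \"generate-bindings\" feature or add support for your platform :)",
            arch, os
        );
    }
}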

Function name as part of errors

What would people think about including (the name of) the called function in error values?

E.g. most error values could include a field of type cryptoki::context::Function. The error message could then look like Function::Login failed: RvError::PinIncorrect: The specified PIN is ....

Without the function name, error messages are less useful and callers of the library can't simply propagate errors. Instead they will likely create their own error types that include the function name in some way or another.
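As a starting point, a rough sketch of what carrying the function name could look like (types are illustrative, not the crate's):

#[derive(Debug, Clone, Copy)]
pub enum Function {
    Initialize,
    Login,
    Encrypt,
    // ... one variant per PKCS #11 entry point
}

#[derive(Debug)]
pub struct CallError {
    pub function: Function, // which C_* call failed
    pub rv: u64,            // the raw CK_RV it returned
}

impl std::fmt::Display for CallError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{:?} failed with CK_RV 0x{:08x}", self.function, self.rv)
    }
}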

How to test for supported functions?

I recently created a PR for the Krill project to switch from the pkcs11 crate to the cryptoki crate and in doing so I discovered a difference: the pkcs11 crate requires lots of the pointers returned by C_GetFunctionList() to be non-null when the library is loaded, while cryptoki delays validation of function pointers until they are invoked.

If I want to verify as early as possible, and without actually invoking the functions, which functions a loaded library supports, how can I do that with cryptoki?

finalize() without drop()?

The only way I can find to "gracefully" stop another thread that is blocking on C_WaitForSlotEvent is to call C_Finalize.

Pkcs11::finalize() is a no-op and Pkcs11Impl::finalize() is only called on drop(). I cannot drop the library if it is being used in another thread.

Is it ok to let Pkcs11::finalize() call Pkcs11Impl::finalize()?

We also need to fix Pkcs11::is_initialized() but that is a separate issue.

Session Pool Management

Hi, this issue is just to say I wrote a session pool management crate based on r2d2: https://github.com/spruceid/r2d2-cryptoki.

I believe it's the right way of handling concurrent requests for service HSMs like cloud-hosted ones, but I thought maybe you'd have an opinion.

On a side note, I added a Zeroize wrapper for pins to ensure users understand how sensitive they are, and unless you think it's unnecessary I will create a PR as it would belong in this crate.

Expose more fine-grained control over `find_objects`

Currently, the only way to search for objects is through Session::find_objects, which returns a vector of all objects. If there may be a huge number of objects, it's desirable to be able to find only a bounded number of objects. This is possible with PKCS#11, but the functionality is not exposed by this crate.

Further, it may be desirable to iterate through objects incrementally, i.e. calling C_FindObjects multiple times, with some processing in between each call. Also, the user may care whether the library makes several small calls to C_FindObjects, or one large one, especially if the HSM is being accessed over a network.

I believe all of these needs may be met safely by the following API:

impl Session {
    // ...

    pub fn find_objects_init<'a>(&'a mut self, template: &[Attribute]) -> Result<FindObjects<'a>> {
        // call `C_FindObjectsInit`
    }
}

pub struct FindObjects<'a> {
    session: &'a mut Session, // exclusive borrow to ensure at most one object search per session
}

impl<'a> FindObjects<'a> {
    pub fn find_next(&mut self, object_count: usize) -> Result<Option<Vec<ObjectHandle>>> {
        // call `C_FindObjects` once, with a buffer of size `object_count`.
        // return Ok(Some(...)) if we found objects, or Ok(None) if the search is over
    }
}

impl<'a> Drop for FindObjects<'a> { ... } // call `C_FindObjectsFinal` on drop

Although find_next could return just Result<Vec<ObjectHandle>>, I think returning an Option is more ergonomic, since

  • It's consistent with iterators and streams, which return None when finished
  • It reminds the user they need to check for termination
  • It allows for looping over all objects with while let notation (see the usage sketch below)
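A usage sketch of the proposed API (names as proposed above, not yet in the crate):

let mut search = session.find_objects_init(&template)?;
while let Some(handles) = search.find_next(16)? {
    for handle in handles {
        // process a bounded batch; other work can happen between C_FindObjects calls
    }
}
// C_FindObjectsFinal runs when `search` is dropped here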

I believe the current implementation of Session::find_objects, as well as any other versions we might want to make (e.g. Session::find_objects_bounded(template, object_count)), can be implemented in terms of this one.

I will make a PR with something like this soon, but first I would like to collect some feedback about the API.

Remove psa_crypto dependency

The conversions impl TryFrom<psa_crypto::types::algorithm::Algorithm> for Mechanism and pub fn from_psa_crypto_hash should be removed from cryptoki crate as ideally cryptoki should really not depend on psa-crypto. Instead we should have these functions implemented in the parsec service.

bindgen_test_layout_max_align_t test fails on i686 on cryptoki-sys crate

When running the tests with bindgen for i686, the tests fail due to the size of max_align_t:

test bindgen_test_layout_max_align_t ... FAILED
failures:
---- bindgen_test_layout_max_align_t stdout ----
thread 'bindgen_test_layout_max_align_t' panicked at 'assertion failed: `(left == right)`
  left: `16`,
 right: `24`: Size of: max_align_t', /builddir/build/BUILD/cryptoki-sys-0.1.0/target/release/build/cryptoki-sys-3b2c9ff8efdc70bf/out/pkcs11_bindings.rs:3:71904
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
    bindgen_test_layout_max_align_t
test result: FAILED. 22 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Set homepage in GitHub project info

Hi,

Please set homepage field in GitHub project info. I think it should point to https://docs.rs/cryptoki/ since I check it semi-frequently and I'm too lazy to type that and a link would be so much useful 🙏

Thanks!

`get_attributes()` with AttributeType::Class fails for private key with YubiHSM2 Nano

Hi,

Before I start asking questions again first let me say thanks for the cryptoki crate!

When I migrated my simple keyls tool from the Rust pkcs11 crate to the cryptoki crate I encountered a strange problem that I hadn't had before. When fetching attributes for a key I can request the class attribute, but only for a public key; when doing so for a private key I get a "Feature not supported" error.

See the old code vs the new code.

Am I doing something wrong?

Thanks,

Ximon

EDDSA contrib

Hello Cryptoki team!
Thanks for working on a Rust abstraction for PKCS#11!

Would you be open to a PR integrating EDDSA support? A project of mine needs it and might attempt to add it to your library since it doesn't seem supported at the moment.

Thanks!

Solution for `Session` lifetimes

The Session struct contains an explicit lifetime requirement on an instance of the Pkcs11 struct. Overall, this makes sense: you do not want an instance of a Session to outlive the instance of Pkcs11. However, this explicit lifetime requirement presents some interesting problems during implementation.

Take, for example, a certificate authority that utilizes a PKCS#11 compliant token containing a private key used to sign incoming requests. This certificate authority is multi-threaded so it can handle multiple requests at once. Each thread must open a session on the token in order to access the underlying object and use it to sign the request. And in order to use this object, that session needs to be logged in.

One way of handling this is to simply log in each session when it's created. If the session is already logged in, the login call will fail with CKR_USER_ALREADY_LOGGED_IN. That's annoying, but it can be handled in code. However, some tokens can't be logged in simply by password: some require smart cards inserted into the token, some require biometrics, etc. The point is that sometimes login is interactive. And once the final session closes, that token is logged out and a user is required to log back in.

To get around this, many CA applications store a 'management' session. This session is opened shortly after the application is started and the PKCS#11 library is initialized. This session is logged in and lives for as long as the application runs. The threads simply need to open a new session, do what they need to do, close the session and end.

The issue now is: say this CA stores the instance of Pkcs11 in a struct. Where can the instance of the 'management' session live?

struct MyCA {
  p11: Pkcs11,
  mgmt_session: Session,
}

This won't work, due to the explicit lifetime requirement of Session. This stack overflow question goes into more detail.

One possible solution is to add functions to the crate that allow the user to create the session without having to hold onto it. Something like create_management_session(slot: Slot, read_write: bool, user: UserType, pin: Option<&str>) and close_management_session(slot: Slot).

Add a testing infrastructure

The few tests in cryptoki/src/lib.rs do not even compile. They were written with the rust-pkcs11 infrastructure in mind. We need to check whether they are still valid or need to be modified, and make cargo test pass locally and on the CI.

More generally, we need to add more tests and have a proper strategy for it!

Force cache flush?

I recently created a PR for the Krill project to switch from the pkcs11 crate to the cryptoki crate and in doing so I discovered one piece of "functionality" which I couldn't (easily) keep:

From: https://github.com/NLnetLabs/krill/pull/754/files#diff-753b1eac0564748e7eb4cbe23d53f883084d5e4b7140a81efed610ebc0d3af67R322

        fn force_cache_flush(context: ThreadSafePkcs11Context) {
            // Finalize the PKCS#11 library so that we re-initialize it on next use, otherwise it just caches (at
            // least with SoftHSMv2 and YubiHSM) the token info and doesn't ever report the presence of the token
            // even when it becomes available.
            let _ = context.write().unwrap().finalize();
        }

So I'm not sure if this is even legal per the PKCS#11 specification. What was happening was that Krill lazily connects to the "HSM" via the PKCS#11 library and if unable to connect then for some classes of error it will retry, for others it will declare the backend unusable.

With SoftHSMv2 and the YubiHSM PKCS#11 libraries I saw that if the token or slot were not initialized/ready/reachable when Krill first attempted to use the "HSM" that I could resolve the underlying issue but Krill would still be unable to connect until restarted. This seems to be because of some sort of caching in the PKCS#11 library being used. I discovered that calling C_Finalize() and then C_Initialize() again would "flush" this cache and permit Krill to connect to the HSM without requiring a restart of Krill.

With the significantly different interface offered by the cryptoki crate (at the current tip of main) compared to the pkcs11 crate I wasn't easily able to keep this functionality, and I doubted whether I should even try.

What's your view on this?

Wrapper for C_WaitForSlotEvent

I might need to use C_WaitForSlotEvent. I wrote this and it seems to work:

    fn wait_for_slot_event_impl(&self, dont_block: bool) -> Result<Slot> {
        let flags = if dont_block { CKF_DONT_BLOCK } else { 0 };
        unsafe {
            let mut slot: CK_SLOT_ID = 0;
            let wait_for_slot_event = get_pkcs11!(self, C_WaitForSlotEvent);
            let rv = wait_for_slot_event(flags, &mut slot, std::ptr::null_mut());
            Rv::from(rv).into_result()?;
            Ok(Slot::new(slot))
        }
    }

    /// wait for slot events (insertion or removal of a token)
    pub fn wait_for_slot_event(&self) -> Result<Slot> {
        self.wait_for_slot_event_impl(false)
    }

    /// get the latest slot event (insertion or removal of a token)
    pub fn get_slot_event(&self) -> Result<Option<Slot>> {
        match self.wait_for_slot_event_impl(true) {
            Err(Error::Pkcs11(RvError::NoEvent)) => Ok(None),
            Ok(slot) => Ok(Some(slot)),
            Err(x) => Err(x),
        }
    }

Is that something I should make a pull request for?

Add Parsec copyright

We need to add on top of all files the standard Parsec copyright:

// Copyright 2021 Contributors to the Parsec project.
// SPDX-License-Identifier: Apache-2.0

Module tree structure makes docs difficult to navigate

The overall structure of the cryptoki crate's module tree consists of several relatively long branches with often-empty intermediate modules. The actual content is largely buried in distant leaves in counterintuitive locations (e.g., Session declared among data types). When I first encountered this crate, I genuinely assumed someone was just reserving the name and it was mostly empty, because the places I expected things to be (e.g., definitions of object types) were empty. That's a shame, because the crate exists for the same reason I was looking in the first place: the current pkcs11 crate isn't idiomatic and exposes an unsafe, easy-to-misuse interface.

This crate shows a lot of promise toward addressing that problem, but its documentation structure (which is a direct result of its internal structure) is (IMO) a serious barrier to it being adopted widely.

I'll be submitting a PR shortly that attempts to address this issue.

test slot::token_info::test::debug_info fails on 32-bit architectures.

Debian CI discovered that the tests for cryptoki fail on 32-bit architectures:

243s   left: `"TokenInfo {\n    label: \"Token Label\",\n    manufacturer_id: \"Manufacturer ID\",\n    model: \"Token Model\",\n    serial_number: \"Serial Number\",\n    flags: (empty),\n    max_session_count: Max(\n        100,\n    ),\n    session_count: None,\n    max_rw_session_count: Infinite,\n    rw_session_count: Some(\n        1,\n    ),\n    max_pin_len: 16,\n    min_pin_len: 4,\n    total_public_memory: Some(\n        0,\n    ),\n    free_public_memory: Some(\n        1234567890,\n    ),\n    total_private_memory: None,\n    free_private_memory: None,\n    hardware_version: Version {\n        major: 0,\n        minor: 255,\n    },\n    firmware_version: Version {\n        major: 255,\n        minor: 0,\n    },\n    utc_time: Some(\n        UtcTime {\n            year: 1970,\n            month: 1,\n            day: 1,\n            hour: 0,\n            minute: 0,\n            second: 0,\n        },\n    ),\n}"`,
243s  right: `"TokenInfo {\n    label: \"Token Label\",\n    manufacturer_id: \"Manufacturer ID\",\n    model: \"Token Model\",\n    serial_number: \"Serial Number\",\n    flags: (empty),\n    max_session_count: Max(\n        100,\n    ),\n    session_count: None,\n    max_rw_session_count: Infinite,\n    rw_session_count: Some(\n        1,\n    ),\n    max_pin_len: 16,\n    min_pin_len: 4,\n    total_public_memory: Some(\n        34359738368,\n    ),\n    free_public_memory: Some(\n        1234567890,\n    ),\n    total_private_memory: None,\n    free_private_memory: None,\n    hardware_version: Version {\n        major: 0,\n        minor: 255,\n    },\n    firmware_version: Version {\n        major: 255,\n        minor: 0,\n    },\n    utc_time: Some(\n        UtcTime {\n            year: 1970,\n            month: 1,\n            day: 1,\n            hour: 0,\n            minute: 0,\n            second: 0,\n        },\n    ),\n}"`', src/slot/token_info.rs:551:9

The culprit seems to be:

total_public_memory: Some(32 << 30),

total_public_memory holds a native-width (CK_ULONG) value, so on 32-bit architectures 32 << 30 results in zero. Rust does not consider this to be an overflow due to the way overflow is defined for shift operators (which is different from how it is defined for arithmetic operators).
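A minimal illustration of the wrap-around (only the shift amount is checked for overflow; value bits shifted out are silently discarded):

let on_64_bit: u64 = 32u64 << 30; // 34_359_738_368, the value the test expects
let on_32_bit: u32 = 32u32 << 30; // 0: 2^35 does not fit in 32 bits, and no panic occurs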

In Debian I simply patched the test to use a smaller value.

https://salsa.debian.org/rust-team/debcargo-conf/-/blob/c320f4577a9de19ce5517940deee4bf39c80cd21/src/cryptoki/debian/patches/fix-tests-32-bit.patch

Current `pkcs11.h` is not up-to-date

The pkcs11.h header file in cryptoki-sys is not in conformance with PKCS#11 v2.40 + errata 01. It should be updated to match the current spec. We have two options:

  1. Go through the file and add in the missing fields
  2. Download latest header files from oasis
    2a. Will need to either modify pkcs11.h or add in our own header files for the different architectures (really the big one is Windows with its pragma directives).

Give Wiktor and Mike the `Write` role

Hi @wiktor-k, @mjb3279,

You both helped us a lot with this crate over the past few weeks/days. You have shown your interest in increasing the functionality of the cryptoki crate and making it better overall for everyone. Your PRs are always high-quality 💯.
As discussed on the community channel, we would like to give you the Write role on this repository.

You can find a definition of the Write role here but basically that means that you will be able to approve/merge pull-requests and tinker with the CI.

Would you accept 😄?

Vendored mechanisms

Hello,

I am working with a PKCS11 interface requiring the use of vendored mechanisms. I did not find a way to do so with this crate and was wondering if this could be added. Besides this issue (#54), which focuses on errors, it seems to me the subject has not been discussed, at least publicly.

Do you plan implementing this feature or would you accept a PR for it?

In case you are open to the idea, I already looked at the code and it seems to me this boils down to providing a means to extend the Mechanism enum. Because the rust-cryptoki library cannot know vendor mechanisms in advance, I could not think of a simple way to ensure the library user constructs correct PKCS11 mechanisms.

I think a simple solution would be to offer an additional VendoredMechanism variant in the Mechanism enum which would expose the inner pieces of a CK_MECHANISM: a MechanismType and some arbitrary bytes as parameters.

In a working PoC, I implemented this:

// In src/mechanism/vendored.rs
pub struct VendoredMechanism {
    pub mech_type: MechanismType,
    pub params: Vec<CK_BYTE>,
}

// In src/mechanism/mod.rs
pub enum Mechanism {
    ...
    VendoredMechanism(vendored::VendoredMechanism),
}

impl Mechanism {
    /// Get the type of a mechanism
    pub fn mechanism_type(&self) -> MechanismType {
        match self {
            // ... existing variants ...
            Mechanism::VendoredMechanism(mech) => mech.mech_type,
        }
    }
}

impl From<&Mechanism> for CK_MECHANISM {
    fn from(mech: &Mechanism) -> Self {
        let mechanism = mech.mechanism_type().into();
        match mech {
            // ... existing variants ...
            Mechanism::VendoredMechanism(mech) => CK_MECHANISM {
                mechanism,
                pParameter: mech.params.as_ptr() as *mut c_void,
                ulParameterLen: mech.params.len() as u64,
            },
        }
    }
}

Which can be used as so:

pub const SOME_VENDORED_MECH: MechanismType = MechanismType {
    val: SOME_VENDORED_MECH_VALUE,
};

pub struct SomeVendoredMech {
    pub field1: u8,
    pub field2: u8,
}

impl From<&SomeVendoredMech> for VendoredMechanism {
    fn from(mech: &SomeVendoredMech) -> Self {
        VendoredMechanism {
            mech_type: SOME_VENDORED_MECH,
            params: vec![
                mech.field1,
                mech.field2,
            ],
        }
    }
}

I know this code is not that great; it's just to start the discussion and suggest a simple way to go. I look forward to your input!

bug: `is_fn_supported()` always returns `true`

is_fn_supported() returns true for all libraries and functions I tested with. This makes sense when reading the spec:

Both versions of the spec say:

Every function in the Cryptoki API MUST have an entry point defined in the Cryptoki library’s CK_FUNCTION_LIST structure. If a particular function in the Cryptoki API is not supported by a library, then the function pointer for that function in the library’s CK_FUNCTION_LIST structure should point to a function stub which simply returns CKR_FUNCTION_NOT_SUPPORTED.

CKA_PUBLIC_KEY_INFO getting TypeInvalid

I hope you can give me a pointer. I'm using a smartcard that has some certs on it for smime.

I can use the ActivClient smartcard tool on windows to poke around at the details, and I've been using this library fairly successfully pointing to their driver so far... until this.

I can see when I inspect the certs in the ActivClient program that they have a SubjectKeyIdentifier

When I try to get the attribute from any of the objects returned, it always says TypeInvalid...
Is this a driver problem? Am I misunderstanding something here?

Document test dependencies/setup in contributor docs

This project's tests rely on external libraries not contained within it. Testing changes from a development environment other than github CI is not possible without additional setup work, which is not documented.

It is possible to extract details of the correct setup from the contents of .github, but that's just as likely to be a deterrent. It's also specific to Ubuntu, which leaves users on other platforms guessing at how to translate it.

Possible solutions in order of preference:

  • All test dependencies are contained in the project and work "out of the box". This may not be possible due to the licensing constraints of SoftHSM, but it's the ideal to aim for.
  • The project provides test setup scripts to collect test dependencies post-clone.
  • The project provides detailed documentation of how to set up test dependencies

In both of the latter two cases, you'd want to provide a space that could accumulate and organize that content for an expanding assortment of platforms. You may also want to create a specific, in-tree location for them that can be listed in .gitignore to prevent them from accidentally being committed.

Test fails on 32 bit platforms

The following test fails on 32-bit platforms like i686/armv7:

     Running `/usr/bin/rustc --crate-name basic --edition=2018 tests/basic.rs --error-format=json --json=diagnostic-rendered-ansi --emit=dep-info,link -C opt-level=3 -C embed-bitcode=no --test --cfg 'feature="generate-bindings"' --cfg 'feature="psa-crypto"' --cfg 'feature="psa-crypto-conversions"' -C metadata=583ee0dc93216851 -C extra-filename=-583ee0dc93216851 --out-dir /builddir/build/BUILD/cryptoki-0.1.1/target/release/deps -L dependency=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps --extern cryptoki=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/libcryptoki-6bad776c5f6e9b13.rlib --extern cryptoki_sys=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/libcryptoki_sys-fe431e2ca4694bb5.rlib --extern derivative=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/libderivative-f9299b8cc86001df.so --extern hex=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/libhex-b0aa3e37c6be2f6a.rlib --extern libloading=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/liblibloading-7e373c2941651059.rlib --extern log=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/liblog-6938a0e0f6e5755d.rlib --extern num_traits=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/libnum_traits-c7ca9deddf3a72ee.rlib --extern psa_crypto=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/libpsa_crypto-71db317aab3c8e33.rlib --extern secrecy=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/libsecrecy-c0451c9c4cd66ea8.rlib --extern serial_test=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/libserial_test-dbc26b919cafbd54.rlib --extern serial_test_derive=/builddir/build/BUILD/cryptoki-0.1.1/target/release/deps/libserial_test_derive-0641132783ca7cb9.so -Copt-level=3 -Cdebuginfo=2 -Clink-arg=-Wl,-z,relro,-z,now -Ccodegen-units=1 --cap-lints=warn`
error[E0277]: the trait bound `Ulong: From<u64>` is not satisfied
  --> tests/basic.rs:40:32
   |
40 |         Attribute::ModulusBits(modulus_bits.into()),
   |                                ^^^^^^^^^^^^^^^^^^^ the trait `From<u64>` is not implemented for `Ulong`
   |
   = help: the following implementations were found:
             <Ulong as From<u32>>
   = note: required because of the requirements on the impl of `Into<Ulong>` for `u64`
error[E0277]: the trait bound `Ulong: From<u64>` is not satisfied
  --> tests/basic.rs:93:32
   |
93 |         Attribute::ModulusBits(modulus_bits.into()),
   |                                ^^^^^^^^^^^^^^^^^^^ the trait `From<u64>` is not implemented for `Ulong`
   |
   = help: the following implementations were found:
             <Ulong as From<u32>>
   = note: required because of the requirements on the impl of `Into<Ulong>` for `u64`

Add some deny

We usually have in our repos:

#![deny(
    nonstandard_style,
    const_err,
    dead_code,
    improper_ctypes,
    non_shorthand_field_patterns,
    no_mangle_generic_items,
    overflowing_literals,
    path_statements,
    patterns_in_fns_without_body,
    private_in_public,
    unconditional_recursion,
    unused,
    unused_allocation,
    unused_comparisons,
    unused_parens,
    while_true,
    missing_debug_implementations,
    missing_docs,
    trivial_casts,
    trivial_numeric_casts,
    unused_extern_crates,
    unused_import_braces,
    unused_qualifications,
    unused_results,
    missing_copy_implementations
)]

We can review that list and add it in all our crate roots: cryptoki/src/lib.rs, cryptoki-sys/build.rs, cryptoki-sys/src/lib.rs.

Add Wycheproof-based tests

For many crypto operations we could just use Wycheproof to verify that we're doing the right thing. We don't necessarily need to go through all the tests, since we're not verifying a crypto implementation, just a wrapping layer, but it's still good to use official test vectors and have them maintained for us.
