
cargo-lambda's Introduction

cargo-lambda

crates.io Build Status

cargo-lambda is a Cargo subcommand to help you work with AWS Lambda.

Get started by looking at the documentation.

License

This project is released under the MIT License.

cargo-lambda's People

Contributors

calavera, cb-rnag, developerc286, dgame, ecklf, ernyoke, greenwoodcm, hakuyume, jackos, maiconpavi, mbergkvist, messense, nicklimmm, oeed, quiibz, renovate-bot, renovate[bot], rgreinho, rlukata, rnag, rootkc, sdd, sgasse, signalwhisperer, simonrw, simonvandel, sondrelg, taylor1791, tomcraven, yerke


cargo-lambda's Issues

Make `cargo lambda new` work in an existing project

That subcommand is great for quickly scaffolding new functions; however, it only works for new projects.

It'd be very nice if it also worked on existing projects, where it'd add the function code to an existing package.

Add debug logs to all crates

Only the build and watch commands have debug logs at the moment. It'd be useful to add debug logs to all subcommands.

env_logger spams hyper logs when turned on

I'm not sure if this is a problem with lambda_http or cargo-lambda.

With this code:

use dotenv::dotenv;
use lambda_http::{
    http::{Method, StatusCode},
    service_fn,
    tower::ServiceBuilder,
    Request, Response,
};
use lambda_http::{Body, IntoResponse, RequestExt};
use serde_json::json;
use tower_http::cors::{Any, CorsLayer};

async fn function_handler(
    _request: Request,
) -> Result<Response<Body>, lambda_http::Error> {
    Ok(Response::builder()
        .status(StatusCode::CREATED)
        .header("Content-Type", "application/json")
        .body("".into())
        .unwrap())
}

#[tokio::main]
async fn main() -> Result<(), lambda_http::Error> {
    env_logger::init();
    dotenv().ok();

    let cors_layer = CorsLayer::new()
        .allow_methods(vec![Method::GET, Method::POST, Method::OPTIONS])
        .allow_origin(Any);

    let handler = ServiceBuilder::new()
        .layer(cors_layer)
        .service(service_fn(function_handler));

    lambda_http::run(handler).await?;

    Ok(())
}

and running cargo lambda watch, my terminal gets spammed:

[2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000)
[2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes
[2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers
[2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty
[2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000)
(the same five lines repeat continuously)
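A common workaround (standard env_logger behavior, not a cargo-lambda option) is to set a per-module filter in RUST_LOG so hyper's output is raised to warn while your own code stays at debug:

```shell
# env_logger parses comma-separated directives from RUST_LOG; the
# "hyper=warn" directive applies only to the hyper crate's log targets.
export RUST_LOG="debug,hyper=warn"
echo "$RUST_LOG"
# then run the server as usual:
# cargo lambda watch
```

tracing-subscriber's EnvFilter understands the same directive syntax, so the filter keeps working if the project later moves off env_logger.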

Running Without `cargo-watch`

Hi! This is an awesome project!

Would it be possible to add a "no cargo-watch" mode? That is, just a standalone cargo lambda server without the auto-rebuild? I'd like to use the emulated Lambda control plane in integration tests but it's tricky to do so when the server is tracking file changes.

Create IAM role before deploying when none is provided

If cargo lambda deploy doesn't include the --iam-role flag, try to create a role with a basic policy for the function to work. This would mimic the behavior that Lambda provides when you create a function in the AWS Console.
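For reference, the trust policy such an auto-created role would need is the standard Lambda assume-role document, sketched below; the permissions policy (e.g. basic CloudWatch Logs access) is a separate document attached to the role.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```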

Installation error due to nightly features

Suddenly, installing with cargo stopped working because a dependency uses a nightly feature:

   Compiling cargo-platform v0.1.2
   Compiling serde_urlencoded v0.7.1
   Compiling kstring v2.0.0
error: `MaybeUninit::<T>::assume_init` is not yet stable as a const fn
   --> /home/dcalaver/.cargo/registry/src/github.com-1ecc6299db9ec823/kstring-2.0.0/src/string.rs:844:17
    |
844 |                 std::mem::MaybeUninit::uninit().assume_init()
    |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error: could not compile `kstring` due to previous error
warning: build failed, waiting for other jobs to finish...
error: failed to compile `cargo-lambda v0.6.0`, intermediate artifacts can be found at `/tmp/cargo-install94SN9D`

Caused by:
  build failed

Fix dependencies installation with Pip

When you run pip install cargo-lambda, Zig should be installed too, but it currently is not. I think we need a setup.py file with the install_requires directive to install Zig.

Invalid entrypoint loading asset from bootstrap.zip in aws-cdk

Hello, I'm receiving an error when I upload a Rust lambda to AWS using aws-cdk. It says Runtime.InvalidEntrypoint. I do Code.fromAsset(...path) and I see the bootstrap file in the console. I think the issue may be with what I'm setting the handler to in the CDK code. What's the recommended pattern for this?

Missing exec flag on the binary created with `--output-format zip`

The binary in the zip archive created by cargo lambda build --release --output-format zip is missing the exec flag.

When a Lambda is deployed directly with the zip file produced by the above command, an intermittent error occurs on the provided.al2 runtime.

Error: fork/exec /var/task/bootstrap: permission denied Runtime.InvalidEntrypoint

OS: macOS Monterey (v12.4)
Rust: v1.61.0
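Until the build command sets the flag itself, a manual workaround is to mark the binary executable before zipping it yourself. A self-contained sketch (demo/bootstrap stands in for the real target/lambda/<function>/bootstrap path):

```shell
# Create a stand-in for the built binary, then set the execute bit that
# the provided.al2 runtime expects on /var/task/bootstrap.
mkdir -p demo
touch demo/bootstrap
chmod +x demo/bootstrap
test -x demo/bootstrap && echo "exec bit set"
```

When re-zipping, use a zip tool that preserves unix permissions, otherwise the bit is lost again.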

Dependency Dashboard

This issue provides visibility into Renovate updates and their statuses.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.


  • Check this box to trigger a request for Renovate to run again on this repository

Add `new` command

It'd be interesting to have a command that creates the scaffolding for lambda functions, with some way to ask for the event that the function receives.

Like cargo new, cargo lambda new would create a project locally, but the source would already have lambda code, and it would include the right dependencies.

Add a subcommand that installs things

The build and start commands install dependencies like Zig and cargo-watch on demand. Some people may want to install everything ahead of time, so they don't have to wait for dependencies on their first build or start.

We could have a subcommand like init or install-deps that does that.

Improve the ergonomics of `cargo lambda new` type arguments

Per the conversation in #221, we are adding support for cargo lambda new --type extension to allow creating extensions and log extensions in addition to the default functions. This causes some ergonomic weirdness. For example, cargo lambda new supports arguments like --event-type and --http, both of which only make sense if the user is generating a function crate (not either type of extension). For now we manually validate that the known incompatible arguments are not provided at the same time, but we can probably improve on this so that only the appropriate arguments are presented to the user. If done correctly, this could also clean up the process of asking the user for arguments that were not specified. Some options:

  1. new subcommands like cargo lambda new-extension and cargo lambda new-function that take in the appropriate arguments only
  2. clap provides some support for ArgGroup, which lets you model arguments that are interdependent. This could help express things like "you can't specify --http and --type extension", but I'm not too familiar with ArgGroup so I'd need to do more research there.
  3. a new unified "template parameters" argument. This might have usability challenges (losing some of the clap help and some of the type validation) but could make the tool more generic and more extensible to custom templates.

Deploy doesn't work in workspace

Hey, it seems like cargo-lambda can't deploy things in a workspace?

tree:

├── Cargo.lock
├── Cargo.toml
├── crates
│   ├── app_1
│   │   ├── Cargo.toml
│   │   └── src
│   ├── app_2
│   │   ├── Cargo.toml
│   │   └── src
│   ├── app_3
│   │   ├── Cargo.toml
│   │   ├── migrations
│   │   └── src
├── src
│   ├── config.rs
│   ├── lib.rs
│   ├── notifier.rs

app_2 is the only one that has a lambda and depends on lambda_http etc.

root Cargo.toml:

[package]
name = "project"
version = "0.1.0"
edition = "2021"

[profile.dev.package."*"]
opt-level = 2

[workspace]
members = ["crates/*"]

Running cargo lambda deploy --iam-role arn:aws:iam::610581918146:role/AWS_LAMBDA_FULL_ACCESS app_2 gives me Error: × binary file for app_2 not found, use cargo lambda build to create it

If I take app_2, move it to another folder outside of this workspace, and build it there, then cargo-lambda is able to deploy it.

Loading sensitive/secret environment variables

First of all, thank you to the creators and maintainers of this project!

I am developing a Lambda which requires access to a third-party API token, and it seems that at the moment there is no way to add environment variables to the cargo lambda watch environment other than using the metadata section in Cargo.toml. This is problematic because I don't want to have to add API tokens or other secrets to my Cargo.toml which is included in my Git repo. It would be great if there was an extra CLI flag (--forward-env-vars or something) that forwarded environment variables from the system to the Lambda environment, or if there was another way of adding them manually (CLI flags or a .env file of sorts). I may attempt to implement this and open a PR - hopefully this feature is something that others would like to see as well!
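For context, the Cargo.toml mechanism the issue refers to looks roughly like this (key names per cargo-lambda's metadata support, with the bin-scoped form mentioned in another issue above; the binary name and values are placeholders). Because this file is committed, it is unsuitable for secrets, which motivates the flags proposed here:

```toml
# Variables applied to every function in the package.
[package.metadata.lambda.env]
API_BASE_URL = "https://api.example.com"

# Variables scoped to one binary; "my-function" is illustrative.
[package.metadata.lambda.bin.my-function.env]
LOG_LEVEL = "debug"
```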

One-shot mode

It would be cool to have a feature that allows starting the lambda server and invoking all in one go. This would be useful in automated testing, in order to avoid running cargo lambda watch and then waiting until the server is ready before running cargo lambda invoke.

This could take the form of a new flag in cargo lambda watch, such as --one-shot, which would take the arguments of cargo lambda invoke and execute them after starting the server. I'm not sure how easy this would be to accomplish with clap, though.
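The two-step flow such a flag would fold into one command can be sketched as follows (the function name and payload are illustrative, and the fixed sleep is a deliberately crude readiness wait):

```shell
# Start the dev server in the background, wait for it to come up,
# invoke once, then shut the server down.
cargo lambda watch &
WATCH_PID=$!
sleep 5   # polling the server port would be more robust than a sleep
cargo lambda invoke my-function --data-ascii '{"command": "test"}'
kill "$WATCH_PID"
```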

Let me know your thoughts.

Create native installers

The initial installation experience with cargo install is suboptimal. This command takes several minutes because it needs to download and compile the code in the host machine. It'd be much better if we had native installers. This is a list of installers that we should probably support:

  • Homebrew (linux and mac)
  • Scoop (windows)
  • NPM (any OS with Node installed, good for people that use the AWS CDK)
  • PiP (any OS with Python installed, good for people that use the AWS SAM cli)

Reload environment variables on Cargo.toml edit

When editing the [package.metadata.lambda.bin..env] section of a Cargo.toml file, it seems the change does not cause the running cargo lambda watch invocation to reload the environment variables in the running lambda.

Would that be possible to do?

My current workaround is ctrl-c'ing the cargo lambda watch process on every edit, which quickly gets tedious.

Reuse compiled code made by `cargo build`

Problem

In my GitHub Actions I was trying to reuse the compiled code made by cargo build, because it was used in the unit tests step and cached, but I noticed that cargo build and cargo lambda build don't share the same compiled code, making my deploy step slow even with the unit tests cache.

Things I tried and didn't work

  • Manually setting RUSTFLAGS to -C strip=symbols -C target-cpu=haswell and using --target x86_64-unknown-linux-gnu in the cargo build step; from what I saw in the source code, cargo lambda build sets those env vars and arguments.
  • Building with cargo lambda build --tests and executing the resulting binary: I couldn't find the binary. It is supposed to be located at target/release/deps/$CRATE_NAME-$HASH, but the command doesn't build it.

Workaround

Use two caches: one for cargo lambda build to use in the deploy step, and another one for cargo build to use in the unit test step.

Proposed solution

Implement a command like cargo lambda test that would behave similar to cargo test, but with the use of zig and specific cargo/rustc configuration for lambda builds.

`cargo lambda watch` inserts extra / before the path

When running a lambda using cargo lambda watch it invokes the lambda with an extra / at the beginning of the path, like this:
http://localhost:9000/lambda-url/my-binary/ => http://localhost:9000//
http://localhost:9000/lambda-url/my-binary/some/path => http://localhost:9000//some/path

Looks like this line is what causes the issue:

let path = format!("/{}", path);

Also, for anyone who runs into the same issue: axum issues a 308 redirect to https if the path begins with //, which means this double-slash issue causes a 308 permanent redirect to https when your lambda uses axum for routing, breaking things further.
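A minimal sketch of the guard that would avoid the duplication (the real fix belongs at the format! call quoted above; this standalone helper is hypothetical):

```rust
// Prepend "/" only when the incoming path doesn't already start with
// one, so "/some/path" is not turned into "//some/path".
fn normalize_path(path: &str) -> String {
    if path.starts_with('/') {
        path.to_string()
    } else {
        format!("/{}", path)
    }
}

fn main() {
    assert_eq!(normalize_path("some/path"), "/some/path");
    assert_eq!(normalize_path("/some/path"), "/some/path");
    assert_eq!(normalize_path(""), "/");
}
```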

Allow invoke to send requests to a different server address

Right now, the invoke command sends all requests to localhost because the start command is supposed to be running on the same host. It'd be useful to allow setting the server address, in case you want to test requests against a remote host.

"Invalid cross-device link" when running `cargo lambda new blah` on Manjaro

   ~/projects  cargo lambda new sa-solver-sns-result-ws-dispatcher
? Is this function an HTTP function? No
? AWS Event type that this function receives sns::SnsEvent
Error:
× failed to move package: from "/tmp/.tmpX6qXqg" to "sa-solver-sns-result-ws-dispatcher"
╰─▶ Invalid cross-device link (os error 18)

~/projects  cargo lambda new sa-solver-sns-result-ws-dispatcher
? Is this function an HTTP function? No
? AWS Event type that this function receives sns::SnsEvent
Error:
× failed to move package: from "/tmp/.tmpX6qXqg" to "sa-solver-sns-result-ws-dispatcher"
╰─▶ Invalid cross-device link (os error 18)

   ~/projects  uname -a  1 ✘  18s   base 
Linux gauss 5.10.105-1-MANJARO #1 SMP PREEMPT Fri Mar 11 14:12:33 UTC 2022 x86_64 GNU/Linux
   ~/projects  cargo --version  ✔  base 
cargo 1.62.0-nightly (dba5baf 2022-04-13)

Error with cargo lambda build on Windows

Hello,

I have tried following the readme on how to get up and running with this tool, but I have hit this problem on two separate Windows 10 computers. When I get to the lambda build command, I get the error below.

PS C:\temp\test_fn> cargo lambda build --release
Error: 
  × The prompt configuration is invalid: Available options can not be empty

I tried to run this with various other switches but I cannot get it to build. Other commands like cargo lambda watch were working correctly.

I tried to search to see if others had the issue but could not find any, apologies if it is already logged.

Thank you,
Adam

`cargo lambda build --release` no matching package named `lambda_http` found

$ cargo lambda build --release
error: no matching package named `lambda_http` found
location searched: registry `crates-io`
required by package `my-lambda v0.1.0`

Hi folks! I might be doing something bone-headed. I am seeing this error when I try to run the command above after running cargo lambda new my-lambda and changing into the created package directory.

But then, the weird thing is that I can do this:

$ mkdir scratch
$ cd scratch
$ cargo init --lib
     Created library package
$ vim Cargo.toml
$ cat Cargo.toml
[package]
name = "scratch"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
lambda_http = "0.6.0"

$ cargo build
    Updating crates.io index
  Downloaded itoa v1.0.3
  Downloaded 1 crate (10.5 KB) in 1.11s
   Compiling proc-macro2 v1.0.43
   Compiling quote v1.0.21
   Compiling unicode-ident v1.0.2
   Compiling syn v1.0.99
   Compiling serde_derive v1.0.141
   Compiling serde v1.0.141
   Compiling autocfg v1.1.0
   Compiling libc v0.2.126
   Compiling cfg-if v1.0.0
   Compiling log v0.4.17
   Compiling futures-core v0.3.21
   Compiling pin-project-lite v0.2.9
   Compiling itoa v1.0.3
   Compiling once_cell v1.13.0
   Compiling fnv v1.0.7
   Compiling futures-task v0.3.21
   Compiling memchr v2.5.0
   Compiling futures-util v0.3.21
   Compiling pin-utils v0.1.0
   Compiling httparse v1.7.1
   Compiling futures-channel v0.3.21
   Compiling matches v0.1.9
   Compiling try-lock v0.2.3
   Compiling ryu v1.0.11
   Compiling tower-service v0.3.2
   Compiling percent-encoding v2.1.0
   Compiling serde_json v1.0.82
   Compiling httpdate v1.0.2
   Compiling tower-layer v0.3.1
   Compiling encoding_rs v0.8.31
   Compiling base64 v0.13.0
   Compiling mime v0.3.16
   Compiling tracing-core v0.1.29
   Compiling form_urlencoded v1.0.1
   Compiling tokio v1.20.1
   Compiling num-traits v0.2.15
   Compiling num-integer v0.1.45
   Compiling want v0.3.0
   Compiling mio v0.8.4
   Compiling socket2 v0.4.4
   Compiling num_cpus v1.13.1
   Compiling time v0.1.44
   Compiling tokio-macros v1.8.0
   Compiling tracing-attributes v0.1.22
   Compiling pin-project-internal v1.0.11
   Compiling async-stream-impl v0.3.3
   Compiling async-stream v0.3.3
   Compiling pin-project v1.0.11
   Compiling tracing v0.1.36
   Compiling tower v0.4.13
   Compiling bytes v1.2.1
   Compiling query_map v0.5.0
   Compiling chrono v0.4.19
   Compiling serde_urlencoded v0.7.1
   Compiling http v0.2.8
   Compiling http-body v0.4.5
   Compiling http-serde v1.1.0
   Compiling aws_lambda_events v0.6.3
   Compiling hyper v0.14.20
   Compiling tokio-stream v0.1.9
   Compiling lambda_runtime_api_client v0.6.0
   Compiling lambda_runtime v0.6.0
   Compiling lambda_http v0.6.0
   Compiling scratch v0.1.0 (/private/tmp/scratch)
    Finished dev [unoptimized + debuginfo] target(s) in 28.00s


In other words, lambda seems to work OK outside of cargo-lambda.

Here is the Cargo.toml for my lambda package:

$ cat my-lambda/Cargo.toml
[package]
name = "my-lambda"
version = "0.1.0"
edition = "2021"

# Starting in Rust 1.62 you can use `cargo add` to add dependencies
# to your project.
#
# If you're using an older Rust version,
# download cargo-edit(https://github.com/killercup/cargo-edit#installation)
# to install the `add` subcommand.
#
# Running `cargo add DEPENDENCY_NAME` will
# add the latest version of a dependency to the list,
# and it will keep the alphabetic ordering for you.

[dependencies]
lambda_http = { version = "0.6.0", default-features = false, features = ["apigw_http"] }
lambda_runtime = "0.6.0"
tokio = { version = "1", features = ["macros"] }
tracing = { version = "0.1", features = ["log"] }
tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt"] }

Did I do something wrong during cargo-lambda setup? I am pretty tired at the moment after an all-nighter and might be missing something obvious.

I will check back after I pass out asleep for a little while and see if I can solve 🤣🙏

Otherwise, I am looking forward to using cargo-lambda! It seems great! Thank you!!

Support for features in cargo lambda watch

Features are not working with cargo lambda watch. Is there any alternative if I want to have different features for different lambdas (or, indeed, to have lambda binaries and non-lambda binaries in the same project)?

Add more information to the version flag

We currently use Clap's built-in --version flag to display information about cargo-lambda. It'd be useful to add more information to that flag, like the git SHA and the date when the binary was built.

Support XRay

Make the dev server send XRay information to the functions.

How to define endpoint route?

I am trying to migrate from serverless-rust to cargo-lambda, deploying incrementally. One thing I cannot find in the README is how to define routes and endpoints with cargo-lambda. How can I provide the following information?

This would be the config for serverless

functions:
  # handler value syntax is `{cargo-package-name}.{bin-name}`
  # or `{cargo-package-name}` for short when you are building a
  # default bin for a given package.
  hello:
    handler: hello
    events:
      - httpApi:
          path: /v1/hello
          method: get
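With cargo-lambda there is no route descriptor equivalent to the serverless events block: API Gateway (or function URL) routing is configured on the AWS side, and a single HTTP function typically dispatches on the request path itself. A hedged sketch with plain strings (a real handler would read the method and path from lambda_http's Request instead):

```rust
// Dispatch on method + path inside the function, instead of declaring
// routes in a config file. The route shown is the example's /v1/hello.
fn route(method: &str, path: &str) -> &'static str {
    match (method, path) {
        ("GET", "/v1/hello") => "hello handler",
        _ => "not found",
    }
}

fn main() {
    assert_eq!(route("GET", "/v1/hello"), "hello handler");
    assert_eq!(route("POST", "/v1/hello"), "not found");
}
```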

Help: cargo lambda build vs. openssl

Attempting to cargo lambda build a project that transitively depends on openssl fails with,

build/expando.c:2:10: fatal error: 'openssl/opensslconf.h' file not found                                                                                                                                 
#include <openssl/opensslconf.h>                                                                                                                                                                          
         ^~~~~~~~~~~~~~~~~~~~~~~                                                                                                                                                                          
1 error generated.                                                                                                                                                                                        

Attempting to build with --arm64 instead fails with:

pkg-config has not been configured to support cross-compilation.

Install a sysroot for the target platform and configure it via
PKG_CONFIG_SYSROOT_DIR and PKG_CONFIG_PATH, or install a
cross-compiling wrapper for pkg-config and set it via
PKG_CONFIG environment variable.

Is there any way to prepare my build environment so that libraries expected on the target will be available at build time, or some documentation I need to read?

For what it's worth, I can use cross build together with an awsl2-derived docker container to build the code, and that works. I was hoping that cargo lambda might simplify the workflow by avoiding the need to set up a custom docker image.
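One commonly used escape hatch, assuming the transitive dependency goes through the openssl crate, is its vendored feature, which compiles OpenSSL from source for the target instead of probing the host with pkg-config:

```toml
[dependencies]
# Compile OpenSSL for the cross target; this needs a C compiler that can
# target the platform (the Zig toolchain cargo-lambda uses may serve).
openssl = { version = "0.10", features = ["vendored"] }
```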

`cargo deploy` to deploy several functions at once

It would be nice to be able to deploy several functions at once (as is done in this example: https://github.com/softprops/serverless-aws-rust-multi). It would also be nice to be able to add some sort of prefix/suffix to function names.

I tried the serverless plugin first, but it looks abandoned and doesn't work - it tries to compile my code with an ancient version of Rust and fails. cargo lambda at least compiles and builds fine, and deploying a single function works well. However, scaling it to several functions means boilerplate.

It would be nice to have some sort of descriptor as part of the build file...

What do you think? Perhaps I can try to contribute when I have some free time.

Error in Linking with openSSL on Mac

Hi, I am observing an error while building my code, which has a transitive dependency on openssl-sys via the reqwest crate. Without cargo lambda, I can build and run programs with dependencies on reqwest.

OS: Mac Monterey 12.4
Rust: version 1.62.1
Cargo: version 1.62.1

Cargo.toml Dependencies

aws_lambda_events = { version = "0.6.3", default-features = false, features = ["cloudwatch_events"] }

lambda_runtime = "0.6.0"
tokio = { version = "1", features = ["macros"] }
tracing = { version = "0.1", features = ["log"] }
tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt"] }

log = "0.4.17"
simple_logger = "2.2.0"

reqwest = { version = "0.11.11", features = ["json"] }
serde_json = "1.0.82"
serde = "1.0.140"
simple-error = "0.2.3"

I am setting the OPENSSL_LIB_DIR and OPENSSL_INCLUDE_DIR to point to my [email protected] installed via homebrew, like so

export OPENSSL_LIB_DIR=/opt/homebrew/opt/[email protected]/lib/
export OPENSSL_INCLUDE_DIR=/opt/homebrew/opt/[email protected]/include/

I also tried setting export OPENSSL_DIR=/opt/homebrew/opt/[email protected]/

Note: I have [email protected] also installed via homebrew.

I get this error; it looks like a linker error to me.


 ld.lld: error: undefined symbol: SSL_do_handshake
>>> referenced by mod.rs:3228 (/Users/abhi/.cargo/registry/src/github.com-1ecc6299db9ec823/openssl-0.10.41/src/ssl/mod.rs:3228)
          >>>               reqwest-628e4659c1aa0421.reqwest.1be7fdc3-cgu.1.rcgu.o:(openssl::ssl::SslStream$LT$S$GT$::do_handshake::h0050cc50cd28c4b5) in archive /Users/abhi/myproject/target/x86_64-unknown-linux-gnu/debug/deps/libreqwest-628e4659c1aa0421.rlib
          >>> referenced by mod.rs:3228 (/Users/abhi/.cargo/registry/src/github.com-1ecc6299db9ec823/openssl-0.10.41/src/ssl/mod.rs:3228)
          >>>               reqwest-628e4659c1aa0421.reqwest.1be7fdc3-cgu.1.rcgu.o:(openssl::ssl::SslStream$LT$S$GT$::do_handshake::hcabf8f3c773d8442) in archive /Users/abhi/myproject/target/x86_64-unknown-linux-gnu/debug/deps/libreqwest-628e4659c1aa0421.rlib
          >>> did you mean: _SSL_do_handshake
          >>> defined in: /opt/homebrew/opt/[email protected]/lib/libssl.a

...and many more undefined symbols.

I also get unsupported linker arg warnings:

 = note: warning: unsupported linker arg: -znoexecstack
          warning: unsupported linker arg: -zrelro
          warning: unsupported linker arg: -znow

Can you please help me fix this error?
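A frequently suggested workaround for this class of linker error is to avoid linking OpenSSL at all by switching reqwest to its rustls backend, which is pure Rust and cross-compiles without a system OpenSSL (a sketch; adjust the feature list to what the project actually needs):

```toml
[dependencies]
# Drop the default native-tls (OpenSSL) backend in favor of rustls.
reqwest = { version = "0.11.11", default-features = false, features = ["json", "rustls-tls"] }
```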

Add `start-api` subcommand

cargo lambda start-api could start an http gateway that forwards requests to the lambda functions in target/lambda.
