cargo-lambda is a Cargo subcommand to help you work with AWS Lambda.
Get started by looking at the documentation.
This project is released under the MIT License.
Cargo Lambda is a Cargo subcommand to help you work with AWS Lambda.
Home Page: https://www.cargo-lambda.info
License: MIT License
That subcommand is great to quickly scaffold new functions; however, it only works for new projects.
It'd be very nice if it also worked on existing projects, where it'd add the function code to an existing package.
Only the build and watch commands have debug logs at the moment. It'd be useful to add more debug logs in all subcommands.
I'm not sure if this is a problem with lambda_http or cargo-lambda.
With this code:
use dotenv::dotenv;
use lambda_http::{
    http::{Method, StatusCode},
    service_fn,
    tower::ServiceBuilder,
    Request, Response,
};
use lambda_http::{Body, IntoResponse, RequestExt};
use serde_json::json;
use tower_http::cors::{Any, CorsLayer};

async fn function_handler(
    _request: Request,
) -> Result<Response<Body>, lambda_http::Error> {
    Ok(Response::builder()
        .status(StatusCode::CREATED)
        .header("Content-Type", "application/json")
        .body("".into())
        .unwrap())
}

#[tokio::main]
async fn main() -> Result<(), lambda_http::Error> {
    env_logger::init();
    dotenv().ok();

    let cors_layer = CorsLayer::new()
        .allow_methods(vec![Method::GET, Method::POST, Method::OPTIONS])
        .allow_origin(Any);

    let handler = ServiceBuilder::new()
        .layer(cors_layer)
        .service(service_fn(function_handler));

    lambda_http::run(handler).await?;
    Ok(())
}
and running cargo lambda watch
I will get my terminal spammed:
[2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000)
[2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes
[2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers
[2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty
[2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000)
[2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000)
[2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes
[2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers
[2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty
[2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000)
When you run cargo lambda build --target ..., it'd be interesting to check whether the toolchain is already installed, and install it if it's missing. We could also validate that the flag value is one of the only two toolchains that Lambda supports.
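That validation could be a simple allowlist check. A minimal sketch (the exact target triples cargo-lambda would accept are an assumption here, not taken from its source):

```rust
// Illustrative sketch only: validate a --target value against the two
// architectures Lambda runs on. The triples below are an assumption.
const SUPPORTED_TARGETS: [&str; 2] = [
    "x86_64-unknown-linux-gnu",
    "aarch64-unknown-linux-gnu",
];

fn is_supported_target(target: &str) -> bool {
    SUPPORTED_TARGETS.contains(&target)
}

fn main() {
    assert!(is_supported_target("aarch64-unknown-linux-gnu"));
    assert!(!is_supported_target("x86_64-pc-windows-msvc"));
}
```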
They should be --data-ascii, --data-file, and --data-example.
The server can already take that information from a http header. The Invoke command should expose a way to provide that information and serialize it in request headers.
Hi! This is an awesome project!
Would it be possible to add a "no cargo-watch" mode? That is, just a standalone cargo lambda server without the auto-rebuild? I'd like to use the emulated Lambda control plane in integration tests, but it's tricky to do so when the server is tracking file changes.
If cargo lambda deploy doesn't include the --iam-role flag, try to create a role with a basic policy for the function to work. This would mimic the behavior that Lambda provides when you create a function in the AWS Console.
Suddenly, installing with cargo stopped working because a dependency is using a nightly feature:
Compiling cargo-platform v0.1.2
Compiling serde_urlencoded v0.7.1
Compiling kstring v2.0.0
error: `MaybeUninit::<T>::assume_init` is not yet stable as a const fn
--> /home/dcalaver/.cargo/registry/src/github.com-1ecc6299db9ec823/kstring-2.0.0/src/string.rs:844:17
|
844 | std::mem::MaybeUninit::uninit().assume_init()
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error: could not compile `kstring` due to previous error
warning: build failed, waiting for other jobs to finish...
error: failed to compile `cargo-lambda v0.6.0`, intermediate artifacts can be found at `/tmp/cargo-install94SN9D`
Caused by:
build failed
Make the dev server compatible with the extensions API, so people can develop extensions as well as functions with cargo lambda start.
Some people might want to use precompiled binaries instead of compiling this tool on every install.
When you run pip install cargo-lambda, Zig should be installed too, but it currently is not. I think we need a setup.py file with the install_requires directive to install Zig.
Add a --zip flag to the build command to generate compressed files ready to upload to Lambda.
When a function starts, there is no way to add custom environment variables. We should find a way to pass them from the parent environment.
Hello, I'm receiving an error when I upload a Rust lambda to AWS using aws-cdk. It says Runtime.InvalidEntrypoint. I do Code.fromAsset(...path) and I see the bootstrap file in the console. I think the issue may be with what I'm setting the handler to in the CDK code. What's the recommended pattern for this?
The binary in the zip archive created by cargo lambda build --release --output-format zip is missing the exec flag. When a Lambda is deployed directly with the zip file produced by the above command, an intermittent error occurs on the provided.al2 runtime:
Error: fork/exec /var/task/bootstrap: permission denied Runtime.InvalidEntrypoint
OS: macOS Monterey (v12.4)
Rust: v1.61.0
This issue provides visibility into Renovate updates and their statuses.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
It'd be interesting to have a command that creates the scaffolding for lambda functions, with some way to ask for the event that the function receives. Like cargo new, cargo lambda new would create a project locally, but the source would already have lambda code, and it would include the right dependencies.
The build and start commands install dependencies like Zig and cargo-watch on demand. Perhaps some people want to install everything ahead of time, so they don't have to wait for dependencies on their first build or start. We could have a subcommand like init or install-deps that does that.
Per the conversation in #221, we are adding support for cargo lambda new --type extension to allow creating extensions and log extensions in addition to the default functions. This causes some ergonomic weirdness. For example, cargo lambda new supports arguments like --event-type and --http, both of which only make sense if the user is generating a function crate (not either type of extension). For now we manually validate that the set of known incompatible arguments is not provided at the same time. But we can probably improve on this so that only the appropriate arguments are presented to the user. If done correctly, this could also clean up the process of asking the user for arguments that were not specified. Some options:
- Separate cargo lambda new-extension and cargo lambda new-function subcommands that take in only the appropriate arguments.
- clap provides some support for ArgGroup, which lets you model arguments that are interdependent. This could help express things like "you can't specify --http and --type extension", but I'm not too familiar with ArgGroup, so I'd need to do more research there.
- … clap help and some of the type validation) but could make the tool more generic and more extensible to custom templates.

Hey, it seems like cargo-lambda can't deploy things in a workspace?
tree:
├── Cargo.lock
├── Cargo.toml
├── crates
│ ├── app_1
│ │ ├── Cargo.toml
│ │ └── src
│ ├── app_2
│ │ ├── Cargo.toml
│ │ └── src
│ ├── app_3
│ │ ├── Cargo.toml
│ │ ├── migrations
│ │ └── src
├── src
│ ├── config.rs
│ ├── lib.rs
│ ├── notifier.rs
app_2 is the only one that has a lambda and depends on lambda_http etc.
root Cargo.toml:
[package]
name = "project"
version = "0.1.0"
edition = "2021"
[profile.dev.package."*"]
opt-level = 2
[workspace]
members = ["crates/*"]
Running cargo lambda deploy --iam-role arn:aws:iam::610581918146:role/AWS_LAMBDA_FULL_ACCESS app_2 gives me:
Error: × binary file for app_2 not found, use `cargo lambda build` to create it
If I take app_2, move it to another folder outside of this workspace, and build it there, then cargo-lambda is able to deploy it.
It'd be interesting to add a flag that takes a list of extension ARNs to associate with the function that you try to deploy:
cargo lambda deploy --extension ARN-1 --extension ARN-2...
First of all, thank you to the creators and maintainers of this project!
I am developing a Lambda which requires access to a third-party API token, and it seems that at the moment there is no way to add environment variables to the cargo lambda watch environment other than using the metadata section in Cargo.toml. This is problematic because I don't want to have to add API tokens or other secrets to my Cargo.toml, which is included in my Git repo. It would be great if there was an extra CLI flag (--forward-env-vars or something) that forwarded environment variables from the system to the Lambda environment, or if there was another way of adding them manually (CLI flags or a .env file of sorts). I may attempt to implement this and open a PR - hopefully this feature is something that others would like to see as well!
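The forwarding part of such a flag could be as simple as copying an allowlisted subset of the host environment into the variables handed to the function process. A hypothetical sketch (the flag name and function below are illustrative, not cargo-lambda's actual API):

```rust
use std::collections::HashMap;

// Hypothetical sketch of what a --forward-env-vars style flag could do:
// copy only allowlisted variables from the host environment into the map
// of variables passed to the function's environment.
fn forward_vars(allowlist: &[&str]) -> HashMap<String, String> {
    std::env::vars()
        .filter(|(name, _)| allowlist.contains(&name.as_str()))
        .collect()
}

fn main() {
    std::env::set_var("THIRD_PARTY_API_TOKEN", "secret");
    let vars = forward_vars(&["THIRD_PARTY_API_TOKEN"]);
    assert_eq!(
        vars.get("THIRD_PARTY_API_TOKEN").map(String::as_str),
        Some("secret")
    );
}
```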
Provide a way to run an extension or layer at the same time a function is running.
It would be cool to have a feature that allows starting the lambda server and invoking, all in one go. This would be useful in automated testing, in order to avoid running cargo lambda watch and then waiting until the server is ready before running cargo lambda invoke.
This could take the form of a new flag in cargo lambda watch, such as --one-shot, which takes the arguments of cargo lambda invoke, executing them after starting the server. I'm not sure how easy this would be to accomplish using clap, though.
Let me know your thoughts.
The initial installation experience with cargo install is suboptimal. This command takes several minutes because it needs to download and compile the code on the host machine. It'd be much better if we had native installers. This is a list of installers that we should probably support:
When editing the [package.metadata.lambda.bin..env] section of a Cargo.toml file, it seems that the change does not cause the cargo lambda watch invocation to reload the environment variables in the running lambda. Would that be possible to do?
My current workaround is ctrl-c'ing the cargo lambda watch process on every edit, which quickly gets boring.
In my GitHub Actions I was trying to reuse the compiled code made by cargo build, because it was used in the unit tests step and cached, but I noticed that cargo build and cargo lambda build don't share the same compiled code, making my deploy step slow even with the unit tests cache.
Things I tried:
- Setting RUSTFLAGS to -C strip=symbols -C target-cpu=haswell and using --target x86_64-unknown-linux-gnu in the cargo build step. The reason is, from what I saw in the source code, the cargo lambda build command was setting those envs and args.
- Running cargo lambda build --tests and executing the resulting binary: I didn't find the binary. It is supposed to be located in target/release/deps/$CRATE_NAME-$HASH, but the command doesn't build this binary.
- Using two caches, one for cargo lambda build to use in the deploy step, and another one for cargo lambda build to use in the unit test step.
Implement a command like cargo lambda test that would behave similarly to cargo test, but with the use of Zig and the specific cargo/rustc configuration for lambda builds.
When running a lambda using cargo lambda watch, it invokes the lambda with an extra / at the beginning of the path, like this:
http://localhost:9000/lambda-url/my-binary/ => http://localhost:9000//
http://localhost:9000/lambda-url/my-binary/some/path => http://localhost:9000//some/path
Looks like this line is what causes the issue:
Also, for anyone who runs into the same issue: axum also issues a 308 redirect to https if the path begins with //, which means this double-slash issue causes a 308 permanent redirect to https when your lambda is using axum for routing, which will obviously break things further here.
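A fix on the cargo-lambda side could collapse the duplicated leading slash when rewriting the request path. A minimal sketch of that normalization (illustrative only, not the actual cargo-lambda code):

```rust
// Collapse any run of leading slashes into a single one, so the function
// receives "/some/path" instead of "//some/path". Illustrative sketch,
// not the actual cargo-lambda implementation.
fn normalize_path(path: &str) -> String {
    format!("/{}", path.trim_start_matches('/'))
}

fn main() {
    assert_eq!(normalize_path("//some/path"), "/some/path");
    assert_eq!(normalize_path("//"), "/");
    assert_eq!(normalize_path("/already/fine"), "/already/fine");
}
```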
Right now, the invoke command sends all requests to localhost because the start command is supposed to be running on the same host. It'd be useful to allow setting the server address, in case you want to test requests against a remote host.
While trying to integrate cargo-lambda into SAM, I bumped into an error in Windows that I cannot reproduce locally:
https://ci.appveyor.com/project/AWSSAMCLI/aws-lambda-builders/builds/43271090/job/3ovwam50va3735y2
The error looks related to this issue about long list of arguments, but I cannot verify it: ziglang/zig#10881
It might be a good idea to set up AppVeyor with some integration tests to see if there is a way to reproduce it.
Clap has a way to add a version flag automatically from your package version.
~/projects cargo lambda new sa-solver-sns-result-ws-dispatcher
? Is this function an HTTP function? No
? AWS Event type that this function receives sns::SnsEvent
Error:
× failed to move package: from "/tmp/.tmpX6qXqg" to "sa-solver-sns-result-ws-dispatcher"
╰─▶ Invalid cross-device link (os error 18)
~/projects uname -a
Linux gauss 5.10.105-1-MANJARO #1 SMP PREEMPT Fri Mar 11 14:12:33 UTC 2022 x86_64 GNU/Linux
~/projects cargo --version
cargo 1.62.0-nightly (dba5baf 2022-04-13)
Because I manage to break the Windows builds more than I'd like 😅
Hello,
I have tried following the readme on how to get up and running with this tool, but I have hit this problem on two separate Windows 10 computers. When I get to the lambda build command, I get the below error.
PS C:\temp\test_fn> cargo lambda build --release
Error:
× The prompt configuration is invalid: Available options can not be empty
I tried to run this with various other switches, but I cannot get it to build. Other commands like cargo lambda watch were working correctly.
I tried to search to see if others had the issue but could not find any; apologies if it is already logged.
Thank you,
Adam
Error when running cargo lambda start:
error: no bin target named `bootstrap`
Yet, I can fix this by adding
[[bin]]
name = "bootstrap"
path = "src/main.rs"
to Cargo.toml
Do you think we should put bootstrap into target/lambda directly?
$ cargo lambda build --release
error: no matching package named `lambda_http` found
location searched: registry `crates-io`
required by package `my-lambda v0.1.0`
Hi folks! I might be doing something bone-headed. I am seeing this error when I try to run the command above after running cargo lambda new my-lambda and changing into the created package directory.
But then, the weird thing is that I can do this:
kleb@klebs-MacBook-Pro /tmp [9:56:10]
> $ mkdir scratch
kleb@klebs-MacBook-Pro /tmp [9:56:12]
> $ cd scratch
kleb@klebs-MacBook-Pro /tmp/scratch [9:56:15]
> $ cargo init --lib
Created library package
kleb@klebs-MacBook-Pro /tmp/scratch [9:56:17]
> $ vim Cargo.toml [±master ●]
kleb@klebs-MacBook-Pro /tmp/scratch [9:56:27]
> $ cat Cargo.toml [±master ●]
[package]
name = "scratch"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
lambda_http = "0.6.0"
kleb@klebs-MacBook-Pro /tmp/scratch [9:56:29]
> $ cargo build [±master ●]
Updating crates.io index
Downloaded itoa v1.0.3
Downloaded 1 crate (10.5 KB) in 1.11s
Compiling proc-macro2 v1.0.43
Compiling quote v1.0.21
Compiling unicode-ident v1.0.2
Compiling syn v1.0.99
Compiling serde_derive v1.0.141
Compiling serde v1.0.141
Compiling autocfg v1.1.0
Compiling libc v0.2.126
Compiling cfg-if v1.0.0
Compiling log v0.4.17
Compiling futures-core v0.3.21
Compiling pin-project-lite v0.2.9
Compiling itoa v1.0.3
Compiling once_cell v1.13.0
Compiling fnv v1.0.7
Compiling futures-task v0.3.21
Compiling memchr v2.5.0
Compiling futures-util v0.3.21
Compiling pin-utils v0.1.0
Compiling httparse v1.7.1
Compiling futures-channel v0.3.21
Compiling matches v0.1.9
Compiling try-lock v0.2.3
Compiling ryu v1.0.11
Compiling tower-service v0.3.2
Compiling percent-encoding v2.1.0
Compiling serde_json v1.0.82
Compiling httpdate v1.0.2
Compiling tower-layer v0.3.1
Compiling encoding_rs v0.8.31
Compiling base64 v0.13.0
Compiling mime v0.3.16
Compiling tracing-core v0.1.29
Compiling form_urlencoded v1.0.1
Compiling tokio v1.20.1
Compiling num-traits v0.2.15
Compiling num-integer v0.1.45
Compiling want v0.3.0
Compiling mio v0.8.4
Compiling socket2 v0.4.4
Compiling num_cpus v1.13.1
Compiling time v0.1.44
Compiling tokio-macros v1.8.0
Compiling tracing-attributes v0.1.22
Compiling pin-project-internal v1.0.11
Compiling async-stream-impl v0.3.3
Compiling async-stream v0.3.3
Compiling pin-project v1.0.11
Compiling tracing v0.1.36
Compiling tower v0.4.13
Compiling bytes v1.2.1
Compiling query_map v0.5.0
Compiling chrono v0.4.19
Compiling serde_urlencoded v0.7.1
Compiling http v0.2.8
Compiling http-body v0.4.5
Compiling http-serde v1.1.0
Compiling aws_lambda_events v0.6.3
Compiling hyper v0.14.20
Compiling tokio-stream v0.1.9
Compiling lambda_runtime_api_client v0.6.0
Compiling lambda_runtime v0.6.0
Compiling lambda_http v0.6.0
Compiling scratch v0.1.0 (/private/tmp/scratch)
Finished dev [unoptimized + debuginfo] target(s) in 28.00s
kleb@klebs-MacBook-Pro /tmp/scratch [9:57:00]
> $
In other words, lambda seems to work OK outside of cargo-lambda.
Here is the Cargo.toml for my lambda package:
:!/bin/cat my-lambda/Cargo.toml
[package]
name = "my-lambda"
version = "0.1.0"
edition = "2021"
# Starting in Rust 1.62 you can use `cargo add` to add dependencies
# to your project.
#
# If you're using an older Rust version,
# download cargo-edit(https://github.com/killercup/cargo-edit#installation)
# to install the `add` subcommand.
#
# Running `cargo add DEPENDENCY_NAME` will
# add the latest version of a dependency to the list,
# and it will keep the alphabetic ordering for you.
[dependencies]
lambda_http = { version = "0.6.0", default-features = false, features = ["apigw_http"] }
lambda_runtime = "0.6.0"
tokio = { version = "1", features = ["macros"] }
tracing = { version = "0.1", features = ["log"] }
tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt"] }
Did I do something wrong during cargo-lambda setup? I am pretty tired at the moment after an all-nighter and might be missing something obvious.
I will check back after I pass out asleep for a little while and see if I can solve 🤣🙏
Otherwise, I am looking forward to using cargo-lambda! It seems great! Thank you!!
Features are not working with cargo lambda watch. Is there any alternative if I want to have different features for different lambdas (or indeed, to have lambda binaries and non-lambda binaries in the same project)?
We currently use Clap's built-in --version flag to display information about cargo-lambda. It'd be useful to add more information to that output, like the git SHA and the date when the binary was built.
Make the dev server send XRay information to the functions.
Running cargo install cargo-watch to run the development server takes several minutes. Waiting for it to be installed is not a great experience.
We should install it from their pre-built releases: https://github.com/watchexec/cargo-watch/releases
It'd be enough to pin to a specific version and always download that one, so we don't have to deal with GitHub's API and its rate limits for unauthenticated requests.
Error handling in the start command is a mix of miette, anyhow, error type traits, and unwraps. It'd be nice to make the code a little more cohesive and handle errors in just one way.
Remove the usage of this fork, and just use the main crate: https://crates.io/crates/tokio-graceful-shutdown-without-anyhow
I am trying to migrate from serverless-rust to cargo-lambda, for deploying incrementally. One thing I cannot find in the README is the way to define a route and endpoint with cargo-lambda. How can I provide the following information?
This would be the config for serverless functions:
# handler value syntax is `{cargo-package-name}.{bin-name}`
# or `{cargo-package-name}` for short when you are building a
# default bin for a given package.
hello:
  handler: hello
  events:
    - httpApi:
        path: /v1/hello
        method: get
Is there any way to redirect stdout/stderr to CloudWatch? I can't see any prints from the app in the Lambda logs or CloudWatch.
cargo lambda deploy should take the content of target/lambda and upload it to Lambda.
It should probably need a way to know if the function already exists, and if so, it'd change the function's version.
Attempting to cargo lambda build a project that transitively depends on openssl fails with:
build/expando.c:2:10: fatal error: 'openssl/opensslconf.h' file not found
#include <openssl/opensslconf.h>
^~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
Attempting to build with --arm64 instead fails with:
pkg-config has not been configured to support cross-compilation.
Install a sysroot for the target platform and configure it via
PKG_CONFIG_SYSROOT_DIR and PKG_CONFIG_PATH, or install a
cross-compiling wrapper for pkg-config and set it via
PKG_CONFIG environment variable.
Is there any way to prepare my build environment so that libraries expected on the target will be available at build time, or some documentation I need to read?
For what it's worth, I can use cross build ... together with an awsl2-derived docker container to build the code, and that works. I was hoping that cargo lambda might simplify the workflow by avoiding the need to set up a custom docker image.
It would be nice to have the feature to deploy several functions at once (as is done in this example: https://github.com/softprops/serverless-aws-rust-multi). It would also be nice to have the ability to put some sort of prefix/suffix on function names.
I tried the serverless plugin first, but it looks abandoned and doesn't work - it tries to compile my code with an ancient version of Rust and fails. cargo lambda at least compiles and builds fine, and deploying just one function works fine. However, scaling it to several functions becomes boilerplate.
It would be nice to have some sort of descriptor as part of a build file...
What do you think? Perhaps I can try to contribute when I have some free time.
Hi, I am observing an error while building my code, which has a transitive dependency on openssl-sys via the reqwest crate. Without cargo lambda, I can build and run programs with dependencies on reqwest.
OS: Mac Monterey 12.4
Rust: version 1.62.1
Cargo: version 1.62.1
Cargo.toml Dependencies
aws_lambda_events = { version = "0.6.3", default-features = false, features = ["cloudwatch_events"] }
lambda_runtime = "0.6.0"
tokio = { version = "1", features = ["macros"] }
tracing = { version = "0.1", features = ["log"] }
tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt"] }
log = "0.4.17"
simple_logger = "2.2.0"
reqwest = { version = "0.11.11", features = ["json"] }
serde_json = "1.0.82"
serde = "1.0.140"
simple-error = "0.2.3"
I am setting the OPENSSL_LIB_DIR and OPENSSL_INCLUDE_DIR to point to my [email protected] installed via homebrew, like so
export OPENSSL_LIB_DIR=/opt/homebrew/opt/[email protected]/lib/
export OPENSSL_INCLUDE_DIR=/opt/homebrew/opt/[email protected]/include/
I also tried setting export OPENSSL_DIR=/opt/homebrew/opt/[email protected]/
Note: I have [email protected] also installed via homebrew.
I face this error; it looks like a linker error to me:
ld.lld: error: undefined symbol: SSL_do_handshake
>>> referenced by mod.rs:3228 (/Users/abhi/.cargo/registry/src/github.com-1ecc6299db9ec823/openssl-0.10.41/src/ssl/mod.rs:3228)
>>> reqwest-628e4659c1aa0421.reqwest.1be7fdc3-cgu.1.rcgu.o:(openssl::ssl::SslStream$LT$S$GT$::do_handshake::h0050cc50cd28c4b5) in archive /Users/abhi/myproject/target/x86_64-unknown-linux-gnu/debug/deps/libreqwest-628e4659c1aa0421.rlib
>>> referenced by mod.rs:3228 (/Users/abhi/.cargo/registry/src/github.com-1ecc6299db9ec823/openssl-0.10.41/src/ssl/mod.rs:3228)
>>> reqwest-628e4659c1aa0421.reqwest.1be7fdc3-cgu.1.rcgu.o:(openssl::ssl::SslStream$LT$S$GT$::do_handshake::hcabf8f3c773d8442) in archive /Users/abhi/myproject/target/x86_64-unknown-linux-gnu/debug/deps/libreqwest-628e4659c1aa0421.rlib
>>> did you mean: _SSL_do_handshake
>>> defined in: /opt/homebrew/opt/[email protected]/lib/libssl.a
…and many more undefined symbols.
I also get unrecognized linker flag warnings:
= note: warning: unsupported linker arg: -znoexecstack
warning: unsupported linker arg: -zrelro
warning: unsupported linker arg: -znow
Can you please help me fix this error?
cargo lambda start-api could start an HTTP gateway that forwards requests to the lambda functions in target/lambda.