
ssh-rs's Introduction

Dev

cargo clippy --all-features --tests -- -D clippy::all
cargo +nightly clippy --all-features --tests -- -D clippy::all

cargo fmt -- --check

cargo build-all-features
cargo test-all-features -- --nocapture

./async-ssh2-lite/tests/run_integration_tests.sh
SSH_SERVER_HOST=127.0.0.1 SSH_SERVER_PORT=22 SSH_USERNAME=xx SSH_PASSWORD=xxx SSH_PRIVATEKEY_PATH=~/.ssh/id_rsa cargo test -p async-ssh2-lite --features _integration_tests,async-io,tokio -- --nocapture

SSH_SERVER_HOST=127.0.0.1 SSH_SERVER_PORT=22 SSH_USERNAME=xx SSH_PRIVATEKEY_PATH=~/.ssh/id_rsa cargo test -p async-ssh2-lite --features _integration_tests,async-io,tokio -- integration_tests::session__scp_send_and_scp_recv --nocapture

ssh-rs's People

Contributors

andras-pinter, benesch, fasterthanlime, guswynn, mihaigalos, priexp, sathiyan-sivathas, vkill


ssh-rs's Issues

Missing file when sending files with SFTP session

Hello,

Following the post I sent on the Rust community forum, I am opening an issue.

I see strange behaviour when sending files to an SFTP server in async mode: sometimes, one or more of the sent files are never written.

This is my Cargo.toml:

futures = "0.3.15"
async-io = "1.6.0"
async-ssh2-lite = "0.2.1"
clap = "2.33.3"
async-std = "1.9.0"

This is my simple SFTP client in Rust:

use async_io::Async;
use async_ssh2_lite::{AsyncSession, AsyncSftp};
use async_std::fs::read;
use clap::{crate_authors, crate_description, crate_name, crate_version, App, Arg};
use futures::executor::block_on;
use futures::future::join_all;
use futures::AsyncWriteExt;
use std::io;
use std::net::{TcpStream, ToSocketAddrs};
use std::path::{Path, PathBuf};

async fn send(sftp: &AsyncSftp<TcpStream>, host: &str, input: &str) -> io::Result<()> {
    let basename = PathBuf::from(input).file_name().unwrap().to_os_string();
    let basename = Path::new(basename.as_os_str());
    // Read input content
    let content = read(input).await?;
    let mut f = sftp.create(basename).await?;
    f.write(content.as_ref()).await?;
    f.close().await?;
    println!("[{}] <- {}", host, basename.display());
    Ok(())
}

async fn run(host: &str, inputs: Vec<&str>) -> io::Result<()> {
    let addr = host.to_socket_addrs()?.next().unwrap();
    let stream = Async::<TcpStream>::connect(addr).await?;

    println!("Starting SFTP session at {}", host);
    let mut session = AsyncSession::new(stream, None)?;
    session.handshake().await?;
    // session.userauth_password("user", "password").await?;
    session
        .userauth_pubkey_file(
            "user",
            Some(Path::new("/home/user/.ssh/id_rsa.pub")),
            Path::new("/home/user/.ssh/id_rsa"),
            None,
        )
        .await?;

    let sftp = session.sftp().await?;
    let funcs = inputs.iter().map(|input| send(&sftp, host, *input));
    let result = join_all(funcs).await;

    println!("result = {:?}", result);
    session
        .disconnect(None, "closed by application", None)
        .await?;
    Ok(())
}

fn main() {
    // Arguments and options
    let matches = App::new(crate_name!())
        .version(crate_version!())
        .author(crate_authors!())
        .about(crate_description!())
        .arg(
            Arg::with_name("host")
                .short("h")
                .long("host")
                .default_value("127.0.0.1:22")
                .help("Server address")
                .takes_value(true),
        )
        .arg(Arg::with_name("inputs").required(true).min_values(1))
        .get_matches();

    let hosts = matches.value_of("host").unwrap();
    let inputs: Vec<_> = matches.values_of("inputs").unwrap().collect();
    block_on(run(hosts, inputs)).expect("Error");
}

Inside ~/tmp/to_inject:

$ ls -l ~/tmp/to_inject                                                        
total 20
-rw-r--r--. 1 user user 2 Jul 23 09:58 file_1
-rw-r--r--. 1 user user 3 Jul 23 09:58 file_2
-rw-r--r--. 1 user user 4 Jul 23 09:58 file_3
-rw-r--r--. 1 user user 5 Jul 23 09:58 file_4
-rw-r--r--. 1 user user 6 Jul 23 09:58 file_5

I run cargo run -- -h 127.0.0.1:3373 ~/tmp/to_inject/*:

Starting SFTP session at 127.0.0.1:3373
[127.0.0.1:3373] <- file_1
[127.0.0.1:3373] <- file_2
[127.0.0.1:3373] <- file_3
[127.0.0.1:3373] <- file_4
[127.0.0.1:3373] <- file_5
result = [Ok(()), Ok(()), Ok(()), Ok(()), Ok(())]

At the destination folder, sometimes (roughly 1 time in 5), not all files are present:

-rw-r--r--. 1 user user 4 Jul 23 12:09 file_3
-rw-r--r--. 1 user user 5 Jul 23 12:09 file_4
-rw-r--r--. 1 user user 6 Jul 23 12:09 file_5

This is the log of the Python sftpserver, but I observed the same behaviour with sshd:

$ sftpserver --level=DEBUG --port=3373 --keyfile=/home/user/.ssh/id_rsa
DEBUG:paramiko.transport:starting thread (server mode): 0x617e8580
DEBUG:paramiko.transport:Local version/idstring: SSH-2.0-paramiko_2.7.2
DEBUG:paramiko.transport:Remote version/idstring: SSH-2.0-libssh2_1.9.0_DEV
INFO:paramiko.transport:Connected (version 2.0, client libssh2_1.9.0_DEV)
DEBUG:paramiko.transport:kex algos:['ecdh-sha2-nistp256', 'ecdh-sha2-nistp384', 'ecdh-sha2-nistp521', 'diffie-hellman-group-exchange-sha256', 'diffie-hellman-group16-sha512', 'diffie-hellman-group18-sha512', 'diffie-hellman-group14-sha256', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1', 'diffie-hellman-group-exchange-sha1'] server key:['ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'ssh-rsa', 'ssh-dss'] client encrypt:['aes128-ctr', 'aes192-ctr', 'aes256-ctr', 'aes256-cbc', '[email protected]', 'aes192-cbc', 'aes128-cbc', 'blowfish-cbc', 'arcfour128', 'arcfour', 'cast128-cbc', '3des-cbc'] server encrypt:['aes128-ctr', 'aes192-ctr', 'aes256-ctr', 'aes256-cbc', '[email protected]', 'aes192-cbc', 'aes128-cbc', 'blowfish-cbc', 'arcfour128', 'arcfour', 'cast128-cbc', '3des-cbc'] client mac:['hmac-sha2-256', 'hmac-sha2-512', 'hmac-sha1', 'hmac-sha1-96', 'hmac-md5', 'hmac-md5-96', 'hmac-ripemd160', '[email protected]'] server mac:['hmac-sha2-256', 'hmac-sha2-512', 'hmac-sha1', 'hmac-sha1-96', 'hmac-md5', 'hmac-md5-96', 'hmac-ripemd160', '[email protected]'] client compress:['none'] server compress:['none'] client lang:[''] server lang:[''] kex follows?False
DEBUG:paramiko.transport:Kex agreed: ecdh-sha2-nistp256
DEBUG:paramiko.transport:HostKey agreed: ssh-rsa
DEBUG:paramiko.transport:Cipher agreed: aes128-ctr
DEBUG:paramiko.transport:MAC agreed: hmac-sha2-256
DEBUG:paramiko.transport:Compression agreed: none
DEBUG:paramiko.transport:kex engine KexNistp256 specified hash_algo <built-in function openssl_sha256>
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Auth request (type=publickey) service=ssh-connection, username=user
DEBUG:paramiko.transport:Auth request (type=publickey) service=ssh-connection, username=user
INFO:paramiko.transport:Auth granted (publickey).
DEBUG:paramiko.transport:[chan 0] Max packet in: 32768 bytes
DEBUG:paramiko.transport:[chan 0] Max packet out: 32768 bytes
DEBUG:paramiko.transport:Secsh channel 0 (session) opened.
DEBUG:paramiko.transport:Starting handler for subsystem sftp
DEBUG:paramiko.transport.sftp:[chan 0] Started sftp server on channel <paramiko.Channel 0 (open) window=2097152 -> <paramiko.Transport at 0x617e8580 (cipher aes128-ctr, 128 bits) (active; 1 open channel(s))>>
DEBUG:paramiko.transport.sftp:[chan 0] Request: open
DEBUG:paramiko.transport.sftp:[chan 0] Request: write
DEBUG:paramiko.transport.sftp:[chan 0] Request: open
DEBUG:paramiko.transport.sftp:[chan 0] Request: close
DEBUG:paramiko.transport.sftp:[chan 0] Request: write
DEBUG:paramiko.transport.sftp:[chan 0] Request: open
DEBUG:paramiko.transport.sftp:[chan 0] Request: close
DEBUG:paramiko.transport.sftp:[chan 0] Request: write
DEBUG:paramiko.transport.sftp:[chan 0] Request: open
DEBUG:paramiko.transport.sftp:[chan 0] Request: close
DEBUG:paramiko.transport.sftp:[chan 0] Request: write
DEBUG:paramiko.transport.sftp:[chan 0] Request: open
DEBUG:paramiko.transport.sftp:[chan 0] Request: close
DEBUG:paramiko.transport.sftp:[chan 0] Request: write
DEBUG:paramiko.transport.sftp:[chan 0] Request: close
INFO:paramiko.transport:Disconnect (code 11): closed by application
DEBUG:paramiko.transport.sftp:[chan 0] EOF -- end of session

In this log, we can see five open, write, and close requests.

The log of pyinotify in the destination folder:

$ pyinotify -v .                                                                                                                                        
[2021-07-23 12:09:16,250 pyinotify DEBUG] Start monitoring ['.'], (press c^c to halt pyinotify)
[2021-07-23 12:09:16,251 pyinotify DEBUG] New <Watch wd=1 path=. mask=4095 proc_fun=None auto_add=None exclude_filter=<function <lambda> at 0x128cb18> dir=True >
[2021-07-23 12:09:23,454 pyinotify DEBUG] Event queue size: 64
[2021-07-23 12:09:23,454 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x100 name=file_4 wd=1 >
[2021-07-23 12:09:23,454 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x20 name=file_4 wd=1 >
<Event dir=False mask=0x100 maskname=IN_CREATE name=file_4 path=. pathname=/home/user/tmp/dir/file_4 wd=1 >
<Event dir=False mask=0x20 maskname=IN_OPEN name=file_4 path=. pathname=/home/user/tmp/dir/file_4 wd=1 >
[2021-07-23 12:09:23,455 pyinotify DEBUG] Event queue size: 32
[2021-07-23 12:09:23,455 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x2 name=file_4 wd=1 >
<Event dir=False mask=0x2 maskname=IN_MODIFY name=file_4 path=. pathname=/home/user/tmp/dir/file_4 wd=1 >
[2021-07-23 12:09:23,456 pyinotify DEBUG] Event queue size: 64
[2021-07-23 12:09:23,456 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x100 name=file_3 wd=1 >
[2021-07-23 12:09:23,456 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x20 name=file_3 wd=1 >
<Event dir=False mask=0x100 maskname=IN_CREATE name=file_3 path=. pathname=/home/user/tmp/dir/file_3 wd=1 >
<Event dir=False mask=0x20 maskname=IN_OPEN name=file_3 path=. pathname=/home/user/tmp/dir/file_3 wd=1 >
[2021-07-23 12:09:23,457 pyinotify DEBUG] Event queue size: 32
[2021-07-23 12:09:23,457 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x8 name=file_4 wd=1 >
<Event dir=False mask=0x8 maskname=IN_CLOSE_WRITE name=file_4 path=. pathname=/home/user/tmp/dir/file_4 wd=1 >
[2021-07-23 12:09:23,458 pyinotify DEBUG] Event queue size: 64
[2021-07-23 12:09:23,458 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x2 name=file_3 wd=1 >
[2021-07-23 12:09:23,458 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x20 name=file_3 wd=1 >
<Event dir=False mask=0x2 maskname=IN_MODIFY name=file_3 path=. pathname=/home/user/tmp/dir/file_3 wd=1 >
<Event dir=False mask=0x20 maskname=IN_OPEN name=file_3 path=. pathname=/home/user/tmp/dir/file_3 wd=1 >
[2021-07-23 12:09:23,459 pyinotify DEBUG] Event queue size: 128
[2021-07-23 12:09:23,459 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x8 name=file_3 wd=1 >
[2021-07-23 12:09:23,459 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x2 name=file_3 wd=1 >
[2021-07-23 12:09:23,459 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x2 name=file_4 wd=1 >
[2021-07-23 12:09:23,460 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x20 name=file_4 wd=1 >
<Event dir=False mask=0x8 maskname=IN_CLOSE_WRITE name=file_3 path=. pathname=/home/user/tmp/dir/file_3 wd=1 >
<Event dir=False mask=0x2 maskname=IN_MODIFY name=file_3 path=. pathname=/home/user/tmp/dir/file_3 wd=1 >
<Event dir=False mask=0x2 maskname=IN_MODIFY name=file_4 path=. pathname=/home/user/tmp/dir/file_4 wd=1 >
<Event dir=False mask=0x20 maskname=IN_OPEN name=file_4 path=. pathname=/home/user/tmp/dir/file_4 wd=1 >
[2021-07-23 12:09:23,461 pyinotify DEBUG] Event queue size: 160
[2021-07-23 12:09:23,461 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x8 name=file_3 wd=1 >
[2021-07-23 12:09:23,461 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x2 name=file_4 wd=1 >
[2021-07-23 12:09:23,462 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x100 name=file_5 wd=1 >
[2021-07-23 12:09:23,462 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x20 name=file_5 wd=1 >
[2021-07-23 12:09:23,462 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x8 name=file_4 wd=1 >
<Event dir=False mask=0x8 maskname=IN_CLOSE_WRITE name=file_3 path=. pathname=/home/user/tmp/dir/file_3 wd=1 >
<Event dir=False mask=0x2 maskname=IN_MODIFY name=file_4 path=. pathname=/home/user/tmp/dir/file_4 wd=1 >
<Event dir=False mask=0x100 maskname=IN_CREATE name=file_5 path=. pathname=/home/user/tmp/dir/file_5 wd=1 >
<Event dir=False mask=0x20 maskname=IN_OPEN name=file_5 path=. pathname=/home/user/tmp/dir/file_5 wd=1 >
<Event dir=False mask=0x8 maskname=IN_CLOSE_WRITE name=file_4 path=. pathname=/home/user/tmp/dir/file_4 wd=1 >
[2021-07-23 12:09:23,463 pyinotify DEBUG] Event queue size: 64
[2021-07-23 12:09:23,463 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x2 name=file_5 wd=1 >
[2021-07-23 12:09:23,463 pyinotify DEBUG] <_RawEvent cookie=0 mask=0x8 name=file_5 wd=1 >
<Event dir=False mask=0x2 maskname=IN_MODIFY name=file_5 path=. pathname=/home/user/tmp/dir/file_5 wd=1 >
<Event dir=False mask=0x8 maskname=IN_CLOSE_WRITE name=file_5 path=. pathname=/home/user/tmp/dir/file_5 wd=1 >

Only three of the five files leave a trace.
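One thing worth checking in the client above (an observation, not a confirmed diagnosis): `AsyncWriteExt::write`, like `std::io::Write::write`, may report success after writing only part of the buffer, whereas `write_all` retries until every byte is written. A std-only sketch of the difference, using a hypothetical `ShortWriter` that accepts at most 2 bytes per call:

```rust
use std::io::{self, Write};

/// Hypothetical writer that accepts at most 2 bytes per `write` call,
/// mimicking a short write on a congested SFTP channel.
struct ShortWriter {
    buf: Vec<u8>,
}

impl Write for ShortWriter {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        let n = data.len().min(2); // pretend the transport is congested
        self.buf.extend_from_slice(&data[..n]);
        Ok(n)
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let payload = b"hello world";

    // A single `write` may report success after a *partial* write.
    let mut short = ShortWriter { buf: Vec::new() };
    let n = short.write(payload)?;
    assert_eq!(n, 2); // only 2 of the 11 bytes made it

    // `write_all` loops until the whole buffer has been written.
    let mut full = ShortWriter { buf: Vec::new() };
    full.write_all(payload)?;
    assert_eq!(full.buf, payload.to_vec());

    Ok(())
}
```

If partial writes turn out to be the cause, replacing `f.write(content.as_ref())` with `f.write_all(content.as_ref())` in the `send` function would be worth trying.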

Update async-io version to 2.0

Hi, since async-io has been updated to 2.3, could you please update the dependency in the Cargo.toml file, or re-export Async<T> from this crate?

internal error: entered unreachable code when using tokio integration

I'm using an AsyncSession<tokio::net::TcpStream> concurrently across multiple tokio tasks to open channels and execute commands over them.
However, I noticed some of those tasks crashing due to this error:

thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: ', /home/nett/.cargo/registry/src/github.com-1ecc6299db9ec823/async-ssh2-lite-0.4.1/src/session_stream/impl_tokio.rs:39:21

or this one

thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: ', /home/nett/.cargo/registry/src/github.com-1ecc6299db9ec823/async-ssh2-lite-0.4.1/src/session_stream/impl_tokio.rs:81:17

What is going on here?

Deadlock: Session always polls for write readiness, never read readiness

The Problem

Async::write_with is used throughout the codebase, but the corresponding Async::read_with function does not appear anywhere in this library. I don't think that it is safe to assume that the underlying libssh2 session is always blocked on write readiness. libssh2 and the ssh2 Rust wrapper provide a method to discover which direction the session is blocked on. async-ssh2-lite should use this to discover the direction (read, write, or both) and wait on readiness accordingly.

As is, I suspect that AsyncSession will spin, continuously calling into ssh2, in cases where the session is write-ready but not read-ready and is blocked on read. Alternatively, we may deadlock if we are blocked on both read and write readiness, and writing does not become unblocked until a read is performed. I am not sure whether this happens in practice, but it seems theoretically possible.

In any case, the current implementation is inefficient at best, and incorrect at worst.

A Solution

AsyncSession should call Session::block_directions to figure out which direction libssh2 is blocked on and wait for readiness accordingly.
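The dispatch this implies can be sketched std-only, with a local enum that mirrors ssh2's BlockDirections (the real fix would call Session::block_directions() after each WouldBlock and then await read and/or write readiness on the underlying Async<S>; `readiness_to_await` is a hypothetical helper):

```rust
/// Mirrors ssh2::BlockDirections, for illustration only.
#[derive(Debug, Clone, Copy, PartialEq)]
enum BlockDirections {
    None,
    Inbound,
    Outbound,
    Both,
}

/// Which readiness to await before retrying: (wait_for_read, wait_for_write).
fn readiness_to_await(dir: BlockDirections) -> (bool, bool) {
    match dir {
        BlockDirections::Inbound => (true, false),
        BlockDirections::Outbound => (false, true),
        BlockDirections::Both => (true, true),
        // Not actually blocked on I/O: retry the libssh2 call immediately.
        BlockDirections::None => (false, false),
    }
}

fn main() {
    // Blocked on read: waiting only for write readiness would spin or deadlock.
    assert_eq!(readiness_to_await(BlockDirections::Inbound), (true, false));
    assert_eq!(readiness_to_await(BlockDirections::Both), (true, true));
}
```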

Can you provide a split function for channels?

I am currently using futures_util::io::AsyncReadExt::split, but it has an issue: the resulting halves cannot be used alongside the channel's original methods, and it does not provide into_inner. Can you provide a similar method that still allows calling original methods such as request_pty_size?

Windows compilation

Hi,
version 0.1.3 does not compile on Windows (rustc v1.45.2, stable-x86_64-pc-windows-msvc toolchain). The compiler complains that the trait FromRawSocket is not in scope and suggests importing std::os::windows::prelude::FromRawSocket.
Greetings.

fix_windows_compilation.zip

sftp readdir method has a bug

On macOS and in k8s, this readdir call waits indefinitely; on Fedora everything works fine.

        let mut session =
            AsyncSession::<TokioTcpStream>::connect(SocketAddr::from((host, port)), None).await?;
        session.handshake().await?;
        session
            .userauth_password(&config.user, &config.passwd)
            .await?;

        let stream = session.sftp().await?;

        stream.readdir(&Path::new(path)).await?

Safety: Underlying FD for TCP connection is closed twice

The Problem

https://github.com/bk-rs/async-ssh2-lite/blob/1b88c9cb64553ab395596a4368fc30e8e2e341c4/src/session.rs#L22-L44

From the FromRawFd docs:

This function consumes ownership of the specified file descriptor. The returned object will take responsibility for closing it when the object goes out of scope.

The docs for ssh2::Session::set_tcp_stream read:

The session takes ownership of the stream provided.

If you look at the Drop for SessionInner it's clear that ssh2 isn't doing anything to prevent the stream passed to set_tcp_stream from being dropped.

As a result, I strongly suspect the underlying FD is being closed twice: once by the Session and once by the Arc<Async<S>> member. This is BAD, since the FD could be re-used elsewhere in the program between the first and second underlying call to close(), causing the FD to be closed out from under a type that believes it has exclusive ownership over it.

A Solution

I think the solution is pretty simple. Instead of calling session.set_tcp_stream(stream.into_inner()?), call session.set_tcp_stream(stream.as_raw_fd()) and then wrap the original stream in an Arc. This way, Session doesn't own the FD and won't close it when it is dropped. This is fine since the Session doesn't actually need ownership of the FD as long as you ensure the FD outlives Session.

Note that this relies on the current ordering of the AsyncSession<S> members since RFC 1857 ensures members are dropped from top to bottom (and we need to drop async_io last).

pub struct AsyncSession<S> {
    inner: Session,
    async_io: Arc<Async<S>>,
}

Here's what the code might look like:

#[cfg(unix)]
impl<S> AsyncSession<S>
where
    S: AsRawFd + FromRawFd + 'static,
{
    pub fn new(stream: Async<S>, configuration: Option<SessionConfiguration>) -> io::Result<Self> {
        let mut session = get_session(configuration)?;
        session.set_tcp_stream(stream.as_raw_fd());

        let async_io = Arc::new(stream);

        Ok(Self {
            inner: session,
            async_io,
        })
    }
}
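The drop-order guarantee this relies on (RFC 1857: struct fields drop in declaration order, top to bottom) can be checked with a small std-only example; `Tracked` and `Layout` are illustrative stand-ins for the real types:

```rust
use std::cell::RefCell;

// Records the order in which values are dropped.
thread_local! {
    static ORDER: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Tracked(&'static str);

impl Drop for Tracked {
    fn drop(&mut self) {
        ORDER.with(|o| o.borrow_mut().push(self.0));
    }
}

// Same field layout as AsyncSession: `inner` first, `async_io` second.
struct Layout {
    inner: Tracked,
    async_io: Tracked,
}

fn main() {
    drop(Layout {
        inner: Tracked("inner"),
        async_io: Tracked("async_io"),
    });
    // RFC 1857: fields drop top to bottom, so the session drops
    // before the I/O object whose FD it borrows.
    ORDER.with(|o| assert_eq!(*o.borrow(), vec!["inner", "async_io"]));
}
```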

Interested to hear your thoughts.

error[E0599]: the method `handshake` exists for struct `AsyncSession<Async<TcpStream>>`, but its trait bounds were not satisfied

code:


pub async fn initConnection<R: Runtime>(initConnectDto: InitConnectDto, window: Window<R>) -> Result<()> {
	let sessionId = initConnectDto.sessionId;
	let ssh_session = ssh_session_service::find(sessionId).await?.expect("current session does not exist");

	println!("{}", format!("{:?}:{:?}", ssh_session.ip, ssh_session.port));
	let addr = format!("{}:{}", ssh_session.ip, ssh_session.port).to_socket_addrs()?.next().unwrap();
	let stream = Async::<TcpStream>::connect(addr).await?;

	let mut session = AsyncSession::new(stream, None)?;
	session.handshake().await?;

	session.userauth_password(&*ssh_session.username, &*ssh_session.pwd.expect("password is empty")).await?;
	session.authenticated();
}



err_msg:

error[E0599]: the method `handshake` exists for struct `AsyncSession<Async<TcpStream>>`, but its trait bounds were not satisfied
   --> src\service\terminal_service.rs:39:10
    |
39  |     session.handshake().await?;
    |             ^^^^^^^^^ method cannot be called on `AsyncSession<Async<TcpStream>>` due to unsatisfied trait bounds
    |
   ::: C:\Users\10431\.cargo\registry\src\index.crates.io-6f17d22bba15001f\async-io-1.13.0\src\lib.rs:594:1
    |
594 | pub struct Async<T> {
    | ------------------- doesn't satisfy `Async<std::net::TcpStream>: AsyncSessionStream`
    |
    = note: the following trait bounds were not satisfied:
            `Async<std::net::TcpStream>: AsyncSessionStream`


Integration tests flaky

I occasionally see the following flake when running integration tests locally:

exec curl output:000 i:1
thread 'integration_tests::remote_port_forwarding::simple_with_tokio' panicked at 'assertion failed: `(left == right)`
  left: `"000"`,
 right: `"200"`', async-ssh2-lite/tests/integration_tests/remote_port_forwarding.rs:121:17

It seems like something is occasionally corrupting data. Appears unrelated to the issue documented in #16, because I see this flake both with and without the change in #16.

can I use a BufReader?

Forgive me if this is a stupid question, but is it possible to wrap an AsyncChannel in a futures::io::BufReader? At first glance it looks like it should work, right? My goal is to use https://crates.io/crates/nom-bufreader, but I haven't gotten it to work yet (I get an error). So I was just wondering if it's possible and, if so, could I get an example?

Thank you for your work on this crate!

Also, this is the error I get: panicked at 'internal error: entered unreachable code: reader indicated readiness but then returned pending', /home/jbenz/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.24/src/io/fill_buf.rs:41:21

This is the point that I get to

                let mut chan = sess.channel_session().await?;
                chan.request_pty("xterm", None, Some((80, 24, 0, 0))).await?;
                chan.shell().await?;
                chan.write(b"environment no more\nfile dir\n").await?;
                let mut reader = BufReader::new(chan.compat());
                let m = reader.parse(Self::method).await;

Add an option to limit the number of concurrent handshakes in bb8-async-ssh2-lite

By default, the sshd server limits the maximum number of open, unauthenticated sessions (MaxStartups).
If many new SSH connections are opened simultaneously via bb8, that limit can be reached, resulting in connection failures.
As far as I know, there is no such limit for authenticated sessions.
Limiting the number of concurrent connect calls (e.g. via a Semaphore) should solve this.
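As a std-only illustration of the idea (in the actual crate this would more likely be a tokio::sync::Semaphore acquired around each connect; the `Semaphore` type below is a hypothetical stand-in built on Mutex and Condvar):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

/// Minimal counting semaphore (std-only stand-in for tokio::sync::Semaphore).
struct Semaphore {
    state: Mutex<usize>,
    cvar: Condvar,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Self { state: Mutex::new(permits), cvar: Condvar::new() }
    }
    fn acquire(&self) {
        let mut permits = self.state.lock().unwrap();
        while *permits == 0 {
            permits = self.cvar.wait(permits).unwrap();
        }
        *permits -= 1;
    }
    fn release(&self) {
        *self.state.lock().unwrap() += 1;
        self.cvar.notify_one();
    }
}

fn main() {
    let sem = Arc::new(Semaphore::new(2)); // e.g. stay under sshd's MaxStartups
    let peak = Arc::new(Mutex::new((0usize, 0usize))); // (current, max observed)

    let handles: Vec<_> = (0..8)
        .map(|_| {
            let sem = Arc::clone(&sem);
            let peak = Arc::clone(&peak);
            thread::spawn(move || {
                sem.acquire();
                {
                    let mut p = peak.lock().unwrap();
                    p.0 += 1;
                    p.1 = p.1.max(p.0);
                }
                thread::sleep(Duration::from_millis(10)); // simulated handshake
                peak.lock().unwrap().0 -= 1;
                sem.release();
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    // Never more than 2 unauthenticated "sessions" in flight at once.
    assert!(peak.lock().unwrap().1 <= 2);
}
```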

Add Clone to `SessionConfiguration`

Hi, I want to share a SessionConfiguration across multiple threads and found that SessionConfiguration does not derive the Clone trait. Could you please add Clone to it? Thanks.
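Assuming SessionConfiguration holds only plain data (strings, numbers, flags), a #[derive(Clone)] should suffice; a sketch with a hypothetical stand-in struct (the field names here are invented for illustration):

```rust
use std::thread;

// Hypothetical stand-in for SessionConfiguration; plain-data fields clone cheaply.
#[derive(Clone, Debug, PartialEq)]
struct SessionConfiguration {
    banner: Option<String>,
    timeout_ms: u32,
    compress: bool,
}

fn main() {
    let config = SessionConfiguration {
        banner: Some("SSH-2.0-demo".to_owned()),
        timeout_ms: 30_000,
        compress: false,
    };

    // With Clone derived, each thread can take its own copy.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let cfg = config.clone();
            thread::spawn(move || cfg.timeout_ms)
        })
        .collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), 30_000);
    }
}
```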

proxy demo fails when sending a file

It works perfectly when executing remote commands.

But once I add code to upload files, it works for small files but fails when uploading a large file:

        //new code
        let local_file_path = Path::new("/home/dxu/async-ssh2-lite/target/debug/proxy.d");
        let remote_file_path = Path::new("/home/dxu/proxy.d");
        let metadata = std::fs::metadata(local_file_path)?;
        let size = metadata.len();
        println!("file size:{}", size);
        let mut buf:Vec<u8> = vec![0; size as usize];
        let mut f = std::fs::File::open(local_file_path)?;
        f.read_to_end(&mut buf)?;

        // std::fs::read(local_file_path, &mut buf)?;
        let mut remote_file = session.scp_send(remote_file_path,0o644, size, None).await?;
        remote_file.write_all(&buf).await?;
        remote_file.flush().await?;
        remote_file.send_eof().await?;
        remote_file.wait_eof().await?;
        remote_file.close().await?;
        remote_file.wait_close().await?;

        //new code
        let mut channel = session.channel_session().await?;

For a small file, the upload succeeds (size: 334).
For a large file, it fails (local size: 27704336, remote size: 2572288):

bastion_channel read 0
task_with_main run failed, err Custom { kind: Other, error: "Failure while draining incoming flow" }

v0.4.0 cannot compile on windows

Checking async-ssh2-lite v0.4.0

...

error[E0277]: the trait bound `u64: AsRawSocket` is not satisfied
  --> async-ssh2-lite\src\agent.rs:47:32
   |
47 |         session.set_tcp_stream(stream.as_raw_socket());
   |                 -------------- ^^^^^^^^^^^^^^^^^^^^^^ the trait `AsRawSocket` is not implemented for `u64`

channel_session blocked

When the SSH server disconnects for some reason, channel_session().await blocks until the entire TCP connection is reset, and it cannot be cancelled even with tokio::time::timeout.

Blocking while executing session.channel_session().await?

Examples

let session = Self::create_session(&parameter).await?;
let mut channel = session.channel_session().await?;

let size = &parameter.tty_size;
channel
    .request_pty(
        "xterm-256color",
        None,
        Option::from((
            size.cols as u32,
            size.rows as u32,
            size.cols as u32,
            size.rows as u32,
        )),
    )
    .await?;

channel.shell().await.unwrap();
let read = channel.stream(0);
let write = channel.stream(0);

// .............................................

// Use this session elsewhere
let guard = self.session.lock().await;

let mut channel = guard.channel_session().await?; // Blocks here

channel.exec("").await?;
let mut mem_output = String::new();
channel.read_to_string(&mut mem_output).await?;
channel.wait_close().await?;
What would cause it to block?

Incompatible libssh2-sys dependencies

When compiling the latest version of bb8-async-ssh2-lite, I get this error:

error: multiple packages link to native library `ssh2`, but a native library can be linked only once

package `libssh2-sys v0.2.7`
    ... which satisfies dependency `libssh2-sys = "^0.2"` (locked to 0.2.7) of package `async-ssh2-lite v0.4.2`
    ... which satisfies dependency `async-ssh2-lite = "^0.4"` (locked to 0.4.2) of package `bb8-async-ssh2-lite v0.3.0`
    ... which satisfies dependency `bb8-async-ssh2-lite = "^0.3"` (locked to 0.3.0) of package `ssh-worker v0.1.0 (/home/nett/Stuff/sisyphos/ssh-worker)`
links to native library `ssh2`

package `libssh2-sys v0.3.0`
    ... which satisfies dependency `libssh2-sys = "^0.3.0"` (locked to 0.3.0) of package `ssh2 v0.9.4`
    ... which satisfies dependency `ssh2 = "^0.9"` (locked to 0.9.4) of package `async-ssh2-lite v0.4.2`
    ... which satisfies dependency `async-ssh2-lite = "^0.4"` (locked to 0.4.2) of package `bb8-async-ssh2-lite v0.3.0`
    ... which satisfies dependency `bb8-async-ssh2-lite = "^0.3"` (locked to 0.3.0) of package `ssh-worker v0.1.0 (/home/nett/Stuff/sisyphos/ssh-worker)`
also links to native library `ssh2`

It seems like the libssh2-sys dependency in this repo needs to be bumped to 0.3
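Assuming the bump lands in a release, the manifests just need to agree on one libssh2-sys, since a crate with a links = "ssh2" key may appear only once in the dependency graph; the relevant Cargo.toml line would look roughly like:

```toml
[dependencies]
# Hypothetical fragment: both ssh2 0.9.x and async-ssh2-lite must resolve
# to the same libssh2-sys, because its `links = "ssh2"` key allows only
# one copy of the native library in the dependency graph.
libssh2-sys = "0.3"
```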
