
Estuary

An experimental ipfs node

Questions? Reach out on Slack!

Building

To build, run make clean all inside the estuary directory.

Running your own node

To run locally in a 'dev' environment, first run:

./estuary setup --username=<uname> --password=<pword>

Save the credentials you use here; you will need them to log in to the estuary-www frontend.

NOTE: if you want to use a database other than a sqlite instance stored in your local directory, you will need to configure that with the --database flag, like so: ./estuary setup --username=<uname> --password=<pword> --database=XXXXX
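As a concrete sketch, the two setup variants might look like the following. The Postgres connection-string format shown is an assumption for illustration; check your estuary version's documentation for the exact --database syntax it accepts.

```shell
# Default: a sqlite database file in the local directory
./estuary setup --username=admin --password=changeme

# Hypothetical external-database example; the DSN format is an assumption
./estuary setup --username=admin --password=changeme \
  --database="postgres=host=localhost user=estuary password=secret dbname=estuary"
```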

Once setup is complete, choose an appropriate directory for estuary to keep its data, and pass it via the --datadir flag when running estuary. You will also need to tell estuary where it can access a lotus gateway API; we recommend using:

export FULLNODE_API_INFO=wss://api.chain.love

Then run:

./estuary --datadir=/path/to/storage --database=IF-YOU-NEED-THIS --logging

NOTE: Estuary makes only verified deals by default, which requires the wallet address to have datacap (see https://verify.glif.io/). To make deals without datacap, the wallet will need FIL, and the run command will need the --verified-deal option set to false:

./estuary --datadir=/path/to/storage --database=IF-YOU-NEED-THIS --logging --verified-deal=false

Running as daemon with Systemd

The Makefile has a target that will install a generic but workable systemd service for estuary.

Run make install-estuary-service on the machine you wish to run estuary on.

Make sure to follow the instructions output by the make command as configuration is required before the service can run successfully.
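For orientation, a systemd unit for estuary would look roughly like this. This is a hand-written sketch, not the actual unit file installed by the Makefile; the paths, user, and environment values are assumptions you would adapt during configuration.

```ini
# /etc/systemd/system/estuary.service (illustrative sketch)
[Unit]
Description=Estuary node
After=network-online.target

[Service]
User=estuary
Environment=FULLNODE_API_INFO=wss://api.chain.love
ExecStart=/usr/local/bin/estuary --datadir=/var/lib/estuary --logging
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing the unit, reload and start it with systemctl daemon-reload followed by systemctl enable --now estuary.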

Running estuary using docker

  • View the guidelines on how to run estuary using docker here.

Using Estuary

The first thing you'll likely want to do with Estuary is upload content. To upload your first file, use the /content/add endpoint:

curl -X POST http://localhost:3004/content/add -H "Authorization: Bearer REPLACE_ME_WITH_API_KEY" -H "Accept: application/json" -H "Content-Type: multipart/form-data" -F "data=@PATH_TO_FILE_BUT_REMEMBER_THE_@_SYMBOL_IS_REQUIRED"
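When scripting uploads, the response is JSON. Assuming it contains a top-level cid field (the response shape here is an assumption; inspect your actual response), the CID can be extracted like so:

```shell
# A successful /content/add response is JSON, roughly of this shape
# (field names are an assumption -- inspect your actual response):
RESPONSE='{"cid":"bafybeibexamplecid","estuaryId":42,"providers":[]}'

# Extract the CID without extra tooling (with jq installed: jq -r '.cid')
CID=$(echo "$RESPONSE" | sed -E 's/.*"cid":"([^"]+)".*/\1/')
echo "$CID"
```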

You can verify this worked with the /content/list endpoint:

curl -X GET -H "Authorization: Bearer REPLACE_ME_WITH_API_KEY" http://localhost:3004/content/list

You may find the API documentation at docs.estuary.tech useful as you explore Estuary's capabilities.

Sealing a Deal

Estuary will automatically make a deal with Filecoin miners 8 hours after content is uploaded. If you upload more than 3.57 GiB of data, it will make the deal sooner.

To keep tabs on the status of your uploaded content and Filecoin deals, you can use estuary-www. Clone the estuary-www repository and run:

npm install
npm run dev

And then head to localhost:4444/staging to see the status of your deal.

Contributing

See CONTRIBUTING.md for contributing and development instructions.

Troubleshooting

Make sure to install all dependencies as indicated above. Here are a few issues you may encounter while building estuary.

Guide for: route ip+net: netlinkrib: too many open files

Error

If you get the following error:

ERROR basichost basic/basic_host.go:328 failed to resolve local interface addresses {"error": "route ip+net: netlinkrib: too many open files"}

This happens because you do not have enough open file handles available.

Solution

Raise the limit with the following command:

ulimit -n 10000
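Note that this only affects the current shell session. A sketch of checking the limit and, on most Linux distros, making the change persistent follows; the limits.conf line is illustrative and the exact mechanism varies by system.

```shell
# Show the current soft limit on open file descriptors
ulimit -n

# Raise it for this shell session and its children
ulimit -n 10000

# To persist across logins on most Linux systems, add a line like this
# to /etc/security/limits.conf (scope and value are illustrative):
#   *  soft  nofile  10000
```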

Guide for: Missing hwloc on M1 Macs

The Portable Hardware Locality (hwloc) software package provides a portable abstraction of the hierarchical structure of current architectures, including NUMA memory nodes, sockets, shared caches, cores, and simultaneous multi-threading (across OS, versions, architectures, etc.).

hwloc is used by libp2p-core. Estuary uses libp2p for the majority of its features, including network communication, pinning, replication, and resource management.

Error

`ld: library not found for -lhwloc`

Solution

For M1 Macs, the following steps are needed:

  • Step 1: brew install go bzr jq pkg-config rustup hwloc (if you have previously installed rust, uninstall it first, as it clashes with rustup)
  • Step 2: export LIBRARY_PATH=/opt/homebrew/lib
  • Step 3: Follow the steps as per the docs.

On Ubuntu, install libhwloc-dev.

Guide for: cannot find -lfilcrypto collect2

Related issue here

Error

When trying to build estuary on an ARM machine, it returns an error:

# github.com/filecoin-project/filecoin-ffi/generated
/usr/bin/ld: skipping incompatible extern/filecoin-ffi/generated/../libfilcrypto.a when searching for -lfilcrypto (repeated several times)
/usr/bin/ld: cannot find -lfilcrypto
collect2: error: ld returned 1 exit status
make: *** [Makefile:67: estuary] Error 2

Solution

Related solution here

RUSTFLAGS="-C target-cpu=native -g" FFI_BUILD_FROM_SOURCE=1 make clean deps bench

Contributors

10d9e, aeryz, alvin-reyes, anjor, arajasek, aronchick, dependabot[bot], dirkmc, elijaharita, en0ma, franklovefrank, frrist, geoff-vball, github-actions[bot], gmelodie, hannahhoward, iand, jcace, jimmylee, lanzafame, lucroy, marshall, mvjq, neelvirdy, ribasushi, simonwo, snissn, softwareplumber, toastts, whyrusleeping


estuary's Issues

Could I get an Estuary invite code, please?

Hello, I am a developer and I want to use the Estuary API to test file storage, but I can't get an invite code to sign in. Could you please give me a code to test the Estuary API? Thanks!

Master build is failing due to indirect stale commit reference in dependencies.

Tested on:

Latest master f5688f5

The bug

The go.mod has this dependency chain:

However commit c83bea50c402 is not present in the github.com/filecoin-project/specs-actors/v7 repo.

So running make build fails:

$ make build
go build 
go: github.com/filecoin-project/[email protected] requires
	github.com/filecoin-project/[email protected] requires
	github.com/filecoin-project/specs-actors/[email protected]: invalid version: unknown revision c83bea50c402

That often happens because the go.mod was pointing to a commit that was not merged, then got merged (with a rebase-type merge changing the commit hash), then the branch was removed, and after some time, when GitHub runs GC, the commit is deleted.

lotus

The bug was also present in lotus; see filecoin-project/lotus#8393. I've tried applying the same fix (using go mod edit -replace) but couldn't get it to work; I believe that's because estuary uses an older version of lotus.

Reproduction Steps

You or the CI might not hit this because you are downloading a cached version from https://proxy.golang.org/.
Either use your own proxy or no proxy (GOPROXY=direct).

export GOPROXY=direct

go clean # Hopefully removes any local cache; otherwise use a fresh OS install in a container, or rent a server for 10 minutes on whatever on-demand VPS service you like

git clone https://github.com/application-research/estuary
cd estuary

make build

Remote Peer Is Missing Block

data transfer channel 12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooW9txku6jNfPtCr4Rtvjn2BzgetANMMESS9h675Cdpwndy-1628966926747884906 failed to transfer data: graphsync request failed to complete: Remote Peer Is Missing Block: bafkreih6renvv5rnqzlwr32xpkbsarz232nr4yldblbsu5qf5jmyrwkzti

Providers have reported that this is an Estuary bug.

Allow users to define a replication factor

For some users, the default factor of 6 replicas per file is either too much or too little. It might not make a lot of sense right now to make this part of the request on the hosted Estuary node, and we would also want to put upper and lower bounds on the figure. But for self-hosted estuary nodes, where customers might want to use their own wallets, it would make sense to give that level of control as part of the add* and pin requests instead of having to use a global setting.

Create Github wiki

It would be nice to have all the installation, contribution, and troubleshooting information on the GitHub wiki. Content will be better organized once it's there.

project name overlaps with estuary.dev

Hi 👋

I came across this project recently -- neat stuff!

You're likely unaware, but the choice of project name and website overlaps with Estuary Technologies, Inc., a data infrastructure and technology startup doing business under the Estuary name since 2019.

Given how closely the domains of our work lie, I'm concerned about legitimate user confusion. For example, consider someone Googling "estuary data". Similarly, the website https://estuary.tech/ could easily be confused as being related to our company, "Estuary Technologies, Inc" (a Delaware C-Corp).

Yours is the newer project, started while ours was well underway. As such, would you please evaluate changing or refining the name? Thanks!

add support for ARM

When I try to compile estuary on an ARM machine I get:

# github.com/filecoin-project/filecoin-ffi/generated
/usr/bin/ld: skipping incompatible extern/filecoin-ffi/generated/../libfilcrypto.a when searching for -lfilcrypto (repeated several times)
/usr/bin/ld: cannot find -lfilcrypto
collect2: error: ld returned 1 exit status
make: *** [Makefile:67: estuary] Error 2

Architecture:

# lscpu
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit

OS:

5.4.0-1045-aws

Tag names in JSON config should be standardized

After PR 141 we will have configuration files. Currently, the tag names used are in most cases just the internal variable names within the estuary config objects.

As more configuration options are added, we will need to think harder about this. Work on issue #131 will add more configuration options...

Seeking Instructions on How to Run My Own Node

This project looks awesome! Really excited to see a Filecoin-interactive project with a fleshed-out interface. Great work! The only issue is that all the documentation assumes the user is going to apply for an API key, which is totally fine, but what if I want to run my own node (i.e., make it truly decentralized)? Are there instructions on how to do this? I can't find any.

Again just to be clear, I'm looking for a way to run this project so I don't need to be granted an API key (other than the key that I'm granting myself in the backend of my server running an Estuary node). I feel like that would make it a truly permission-less way to interact with Filecoin.

Update 08/23/21
So I tried to install this repo locally to see if I could get it to run without any external node running. I ran git clone https://github.com/application-research/estuary.git, cd estuary/, make, then make install. I then ran estuary, as it was now in my path, but it gave me this message: could not get API info: could not get api endpoint: API not running (no endpoint). Where do I go from here?

Advertise child CIDs for retrieval

Currently, Estuary only advertises root CIDs. There are customers (such as the zarr-wg) whose retrieval needs require them to be able to retrieve child CID content fast.

Estuary opens an unlimited number of connections

Since d00e817 there is no way to limit the number of connections opened by Estuary. (The connection manager high water mark does not seem to apply to newly opened connections).

It's relatively unusual these days for internet traffic not to pass through some kind of stateful packet filter. Opening a connection requires any stateful packet filter between estuary and the connection target to track state. State tables are limited in size. ISPs do throttle the number of states even if they say they don't (my use case!). The consequence of exceeding the state limit imposed by infrastructure is the sudden termination of otherwise perfectly good connections.

Estuary still runs into "remote peer is missing block" issues

{"level":"info","ts":"2022-01-10T18:47:08.614Z","logger":"markets","caller":"loggers/loggers.go:20","msg":"storage provider event","name":"ProviderEventDataTransferFailed","proposal CID":"bafyreifkq62sq6koo2f7z546yecc7wmbhn5jrqhia46wxxgw7nc3caqcuu","state":"StorageDealFailing","message":"error transferring data: deal data transfer failed: data transfer channel 12D3KooWCVXs8P7iq6ao4XhfAmKWrEeuKFWCJgqe9jGDMTqHYBjw-12D3KooWBoBFLi2es6QXy485YoAnKwSoKXJYzdaMNxEVkDU6KpJh-1641552698279941539 failed to transfer data: channel 12D3KooWCVXs8P7iq6ao4XhfAmKWrEeuKFWCJgqe9jGDMTqHYBjw-12D3KooWBoBFLi2es6QXy485YoAnKwSoKXJYzdaMNxEVkDU6KpJh-1641552698279941539: graphsync request failed to complete: remote peer is missing block: bafkreiawdgnnbjewsyy2ghqiyuopklylxoz5f7rdd6obazoejfvszcwdva"}
{"level":"warn","ts":"2022-01-10T18:47:08.614Z","logger":"providerstates","caller":"providerstates/provider_states.go:561","msg":"deal bafyreifkq62sq6koo2f7z546yecc7wmbhn5jrqhia46wxxgw7nc3caqcuu failed: error transferring data: deal data transfer failed: data transfer channel 12D3KooWCVXs8P7iq6ao4XhfAmKWrEeuKFWCJgqe9jGDMTqHYBjw-12D3KooWBoBFLi2es6QXy485YoAnKwSoKXJYzdaMNxEVkDU6KpJh-1641552698279941539 failed to transfer data: channel 12D3KooWCVXs8P7iq6ao4XhfAmKWrEeuKFWCJgqe9jGDMTqHYBjw-12D3KooWBoBFLi2es6QXy485YoAnKwSoKXJYzdaMNxEVkDU6KpJh-1641552698279941539: graphsync request failed to complete: remote peer is missing block: bafkreiawdgnnbjewsyy2ghqiyuopklylxoz5f7rdd6obazoejfvszcwdva"}
{"level":"info","ts":"2022-01-10T18:47:10.514Z","logger":"markets","caller":"loggers/loggers.go:20","msg":"storage provider event","name":"ProviderEventFailed","proposal CID":"bafyreifkq62sq6koo2f7z546yecc7wmbhn5jrqhia46wxxgw7nc3caqcuu","state":"StorageDealError","message":"error transferring data: deal data transfer failed: data transfer channel 12D3KooWCVXs8P7iq6ao4XhfAmKWrEeuKFWCJgqe9jGDMTqHYBjw-12D3KooWBoBFLi2es6QXy485YoAnKwSoKXJYzdaMNxEVkDU6KpJh-1641552698279941539 failed to transfer data: channel 12D3KooWCVXs8P7iq6ao4XhfAmKWrEeuKFWCJgqe9jGDMTqHYBjw-12D3KooWBoBFLi2es6QXy485YoAnKwSoKXJYzdaMNxEVkDU6KpJh-1641552698279941539: graphsync request failed to complete: remote peer is missing block: bafkreiawdgnnbjewsyy2ghqiyuopklylxoz5f7rdd6obazoejfvszcwdva"}

ping me on slack if more info is needed. If there is a better place to report this let me know

strange error while computing objects for database

This occurred while adding a bunch of content via the add-ipfs endpoint. Relatedly, several requests timed out; it is hard to say why.

{"time":"2021-05-27T23:05:53.72743161Z","id":"","remote_ip":"136.27.5.43","host":"api.estuary.tech","method":"POST","uri":"/content/add-ipfs?ignore-dupes=true","user_agent":"curl/7.76.0","status":200,"error":"","latency":6962139183,"latency_human":"6.962139183s","bytes_in":228,"bytes_out":282}
2021-05-27T23:05:53.729Z        INFO    estuary estuary/replication.go:835      adding content to staging zone: 7405
providing complete
providing complete
{"time":"2021-05-27T23:06:00.968605236Z","id":"","remote_ip":"136.27.5.43","host":"api.estuary.tech","method":"POST","uri":"/content/add-ipfs?ignore-dupes=true","user_agent":"curl/7.76.0","status":200,"error":"","latency":7165929359,"latency_human":"7.165929359s","bytes_in":228,"bytes_out":282}
2021-05-27T23:06:00.970Z        INFO    estuary estuary/replication.go:835      adding content to staging zone: 7406
providing complete
2021-05-27T23:06:04.383Z        WARN    estuary estuary/reprovider.go:32        providing failed        {"cid": "bafkreiabj75eata5csmrlab5y2jgwenmoci7wj5bykvqlwc3fulruaqnjy", "error": "context deadline exceeded"}
providing complete
providing complete
providing complete
{"time":"2021-05-27T23:06:09.12125581Z","id":"","remote_ip":"136.27.5.43","host":"api.estuary.tech","method":"POST","uri":"/content/add-ipfs?ignore-dupes=true","user_agent":"curl/7.76.0","status":200,"error":"","latency":8077638153,"latency_human":"8.077638153s","bytes_in":228,"bytes_out":282}
2021-05-27T23:06:09.123Z        INFO    estuary estuary/replication.go:835      adding content to staging zone: 7407

2021/05/27 23:06:09 /home/why/code/estuary/handlers.go:510 slice data #257 is invalid: unsupported data
[0.714ms] [rows:0] INSERT INTO "objects" ("cid","size","reads","last_access") VALUES ('<binary>',910,0,'0000-00-00 00:00:00'),('<binary>',969,0,'0000-00-00 00:00:00'),('<binary>',8362,0,'0000-00-00 00:00:00'),('<binary>',262158,0,'0000-00-00 00:00:00'), [several hundred similar rows elided] ,(),(),(),() [ending in a run of empty tuples] RETURNING "id"
2021-05-27T23:06:09.515Z        ERROR   estuary estuary/handlers.go:82  handler error: failed to create objects in db: slice data #257 is invalid: unsupported data
{"time":"2021-05-27T23:06:09.515499736Z","id":"","remote_ip":"136.27.5.43","host":"api.estuary.tech","method":"POST","uri":"/content/add-ipfs?ignore-dupes=true","user_agent":"curl/7.76.0","status":500,"error":"failed to create objects in db: slice data #257 is invalid: unsupported data","latency":317905656,"latency_human":"317.905656ms","bytes_in"
:179,"bytes_out":0}
{"time":"2021-05-27T23:06:18.221928256Z","id":"","remote_ip":"136.27.5.43","host":"api.estuary.tech","method":"POST","uri":"/content/add-ipfs?ignore-dupes=true","user_agent":"curl/7.76.0","status":200,"error":"","latency":8627117039,"latency_human":"8.627117039s","bytes_in":228,"bytes_out":282}
2021-05-27T23:06:18.223Z        INFO    estuary estuary/replication.go:835      adding content to staging zone: 7408
{"time":"2021-05-27T23:06:18.57732807Z","id":"","remote_ip":"136.27.5.43","host":"api.estuary.tech","method":"POST","uri":"/content/add-ipfs?ignore-dupes=true","user_agent":"curl/7.76.0","status":200,"error":"","latency":278116398,"latency_human":"278.116398ms","bytes_in":228,"bytes_out":282}
2021-05-27T23:06:18.579Z        INFO    estuary estuary/replication.go:835      adding content to staging zone: 7409
2021-05-27T23:06:24.384Z        WARN    estuary estuary/reprovider.go:32        providing failed        {"cid": "bafkreibxjb3vschdmxqot2dmkixmrzeqoljr5dorl7rfwrmmrz33w7vc3y", "error": "context deadline exceeded"}
{"time":"2021-05-27T23:06:27.512001712Z","id":"","remote_ip":"136.27.5.43","host":"api.estuary.tech","method":"POST","uri":"/content/add-ipfs?ignore-dupes=true","user_agent":"curl/7.76.0","status":200,"error":"","latency":8860524190,"latency_human":"8.86052419s","bytes_in":228,"bytes_out":282}

How can we delete/unpin a file?

I read the docs and did not find any DELETE request. Will deletion be supported? I saw that in Filecoin we can unpin files, which is a deletion mechanism of sorts. I would like to be able to remove files when needed.
Also, if my deal is for 180 days, what will happen to my file after this period? Will it get purged? How can I tell the provider I want to keep it longer? How can the provider tell me that the deal is about to expire?

Unify Estuary CLI, Shuttle CLI and Barge CLI

There's quite a bit of replicated CLI code between estuary-shuttle, barge, shuttle-proxy, and estuary itself. It would be incredible if we could have one CLI tool that did it all, like:

$ estuary shuttle init
$ estuary shuttle proxy do-something
$ estuary barge do-some-other-thing
$ estuary node init

Note: here estuary node would be the command for the main estuary node (what I referred to above simply as estuary)
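The unified tool above boils down to one top-level dispatcher. A stdlib sketch of the idea (the real applets would expose their existing CLI apps behind these branches; names are illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// dispatch routes "estuary <subsystem> <args...>" to the right applet
// entrypoint, collapsing estuary, estuary-shuttle, shuttle-proxy and
// barge into one binary.
func dispatch(args []string) string {
	if len(args) == 0 {
		return "usage: estuary <node|shuttle|barge> ..."
	}
	switch args[0] {
	case "node":
		return "running main estuary node: " + fmt.Sprint(args[1:])
	case "shuttle":
		return "running shuttle: " + fmt.Sprint(args[1:])
	case "barge":
		return "running barge: " + fmt.Sprint(args[1:])
	default:
		return "unknown subsystem: " + args[0]
	}
}

func main() {
	fmt.Println(dispatch(os.Args[1:]))
}
```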

bug: bsget Compile error!

When I run 'make clean all', it shows the following error:

go build -o bsget ./cmd/bsget

github.com/application-research/estuary/cmd/bsget

cmd/bsget/main.go:54:23: cannot use ctx (type context.Context) as type config.Option in argument to libp2p.New
Makefile:87: recipe for target 'bsget' failed
make: *** [bsget] Error 2

How can I fix it? Thank you!

Fresh install seems to fail with open files limit

Followed all the most recent instructions and got this error:

2022-01-10T03:12:33.058Z	ERROR	basichost	basic/basic_host.go:328	failed to resolve local interface addresses	{"error": "route ip+net: netlinkrib: too many open files"}
2022-01-10T03:12:33.071Z	INFO	estuary	estuary/main.go:304	running key provider func
2022-01-10T03:12:33.071Z	INFO	estuary	estuary/main.go:314	key provider func returning 0 values
2022-01-10T03:12:53.059Z	ERROR	basichost	basic/basic_host.go:328	failed to resolve local interface addresses	{"error": "route ip+net: netlinkrib: too many open files"}
2022-01-10T03:13:03.058Z	ERROR	basichost	basic/basic_host.go:328	failed to resolve local interface addresses	{"error": "route ip+net: netlinkrib: too many open files"}

thoughts?

Consolidate metrics by method

e.g. estuary_blks_base_sync_total -> estuary_blks_base_total{method="sync|delete|deletemany|get|has|etc......."}
Why? So that queries over all calls are also possible, e.g. sum(rate(estuary_blks_base_total{}[1m])) by (method)
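The consolidation amounts to one counter family keyed by a method label. With the real client_golang library this would be a `prometheus.NewCounterVec(..., []string{"method"})`; a stdlib-only sketch of the same shape:

```go
package main

import (
	"fmt"
	"sync"
)

// methodCounter mimics a single counter family with a "method" label,
// replacing per-method series like estuary_blks_base_sync_total.
type methodCounter struct {
	mu     sync.Mutex
	counts map[string]uint64
}

func newMethodCounter() *methodCounter {
	return &methodCounter{counts: map[string]uint64{}}
}

func (c *methodCounter) Inc(method string) {
	c.mu.Lock()
	c.counts[method]++
	c.mu.Unlock()
}

// Total sums across methods — the query that per-method metric names
// make awkward and a single labeled family makes trivial.
func (c *methodCounter) Total() uint64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	var t uint64
	for _, n := range c.counts {
		t += n
	}
	return t
}

func main() {
	blks := newMethodCounter()
	blks.Inc("sync")
	blks.Inc("get")
	blks.Inc("get")
	fmt.Println(blks.Total()) // 3
}
```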

Releasing under a free software license

If I understand correctly, this is used to power estuary.tech in production. It would be lovely if it is released under a free software license. Personally I would suggest AGPLv3 without any CLA to best benefit the community.

main.go in each applet carrying too much functionality

In each applet (estuary, estuary-shuttle, etc...), main.go:

  • performs command line and environment processing
  • post PR-141 will be processing configuration files
  • sets up state, database connections, resource management, and kicks off the main application processing

Dividing these responsibilities between files will help keep the estuary codebase manageable and improve testability by separating concerns. I suggest that main.go handle processing the command line and config, setting up a config object which is then handed over to an applet.go (or whatever name is deemed appropriate). This would also be a useful first step toward consolidating the command line processing for the different applets into a single file.
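The proposed split can be sketched as follows (field and function names are illustrative, not the actual estuary types):

```go
package main

import "fmt"

// Config is the object main.go would assemble from flags, environment
// and (post PR-141) configuration files.
type Config struct {
	DataDir  string
	Database string
	Logging  bool
}

// runApplet is the applet.go entrypoint: it receives a finished Config
// and never touches os.Args or the environment itself, which keeps it
// unit-testable.
func runApplet(cfg Config) string {
	return fmt.Sprintf("starting with datadir=%s db=%s", cfg.DataDir, cfg.Database)
}

func main() {
	// main.go's only job: parse inputs into cfg, then hand off.
	cfg := Config{DataDir: "/var/lib/estuary", Database: "sqlite=estuary.db"}
	fmt.Println(runApplet(cfg))
}
```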

Estuary command-line options passed unfiltered to lotus client

I was asking myself the question: is the 'repo' flag in the Estuary command line actually used?

Searching the actual estuary code, it isn't, but then I happened upon this line in main.go

	api, closer, err := lcli.GetGatewayAPI(cctx)

Where cctx is the Estuary cli context. The 'repo' flag is (I think) picked up deep inside lotus/cli/util.

But: what if estuary command line options ever conflicted with lotus command line options? How would we ever know?
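One defensive option is to whitelist which options lotus is allowed to see before building the context it receives. A toy sketch of the filtering idea (the real fix would construct a fresh cli context rather than a plain map):

```go
package main

import "fmt"

// filterFlags keeps only the options we deliberately intend to forward
// to the lotus client libraries, so an estuary flag can never collide
// with a lotus flag by accident.
func filterFlags(all map[string]string, allowed ...string) map[string]string {
	ok := map[string]bool{}
	for _, a := range allowed {
		ok[a] = true
	}
	out := map[string]string{}
	for k, v := range all {
		if ok[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	flags := map[string]string{"repo": "~/.lotus", "datadir": "/srv/estuary"}
	// Only the lotus-relevant flag survives the filter.
	fmt.Println(filterFlags(flags, "repo"))
}
```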

verify data stored on local estuary setup

I need help verifying data stored on my local estuary setup.

Steps:

  • setup estuary
  • setup shuttle
  • setup estuary-web
  • Add file via estuary endpoint

When verifying the CID, I get the message "This CID is not found. It might be pinned by a IPFS Node, you can use the dweb.link URL to check"

The content is accessible at the IPFS gateway:
https://bafkqactumvzxictumvzxicq.ipfs.dweb.link/

I would like to verify that the estuary node is pinning the data and that Filecoin deals are being made.

Thanks

malformed module path "embed": missing dot in first path element

I'm trying to build and install estuary and getting the error "missing dot in first path element".

I installed the dependencies on Ubuntu 20.04 and ran make clean all. The build ran for a while then failed with this error.

go: finding github.com/whyrusleeping/pubsub v0.0.0-20190708150250-92bcb0691325
go: finding github.com/hako/durafmt v0.0.0-20200710122514-c0fb7b4da026
go: finding github.com/ipld/go-car/v2 v2.1.1
go: finding github.com/petar/GoLLRB v0.0.0-20210522233825-ae3b015fd3e9
go: finding github.com/whyrusleeping/cbor v0.0.0-20171005072247-63513f603b11
go: finding golang.org/x/exp v0.0.0-20210715201039-d37aa40e8013
build github.com/application-research/estuary: cannot load embed: malformed module path "embed": missing dot in first path element
make: *** [Makefile:67: estuary] Error 1

Implement an intelligent load-balancer to redirect client requests to shuttles

Today clients connect to https://api.estuary.tech to add data, get data and interact with the various API endpoints Estuary exposes.
There is no automated mechanism to steer clients away when a shuttle node goes down or its local storage fills up. Implementing a load balancer with a new URL (https://upload.estuary.tech) would solve many problems related to outages and maintenance windows.

Quoting cake: "What we could really use is some notion of 'load' across the different estuary nodes, so we can use that to do smarter load balancing. Load balancing right now is done manually by editing priority in the database because we don't have any automated metrics to do switching back and forth."
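A minimal version of that "notion of load" could be as simple as picking the healthy shuttle with the most free storage. A sketch (the load signals and struct are hypothetical; real nodes would report these metrics):

```go
package main

import "fmt"

// shuttle carries the load signals the quote asks for.
type shuttle struct {
	Host        string
	StorageFree uint64 // bytes
	Healthy     bool
}

// pickShuttle returns the healthy shuttle with the most free storage —
// the simplest automated stand-in for editing priority by hand.
func pickShuttle(shuttles []shuttle) (string, bool) {
	best := ""
	var bestFree uint64
	for _, s := range shuttles {
		if s.Healthy && s.StorageFree > bestFree {
			best, bestFree = s.Host, s.StorageFree
		}
	}
	return best, best != ""
}

func main() {
	host, ok := pickShuttle([]shuttle{
		{"shuttle-1.example", 10 << 30, true},
		{"shuttle-2.example", 80 << 30, true},
		{"shuttle-3.example", 90 << 30, false}, // down, skipped
	})
	fmt.Println(host, ok)
}
```

A production balancer would fold in more signals (open connections, transfer backlog), but the selection loop stays the same shape.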

Enable easy deployment for self hosted Estuary shuttles

As a first step to simplify deployment for self hosted Estuary solutions,
we will enable simple deployment for customers who want to run their own Estuary shuttles for data processing/transfer.

This will include any pre-setup guidance for the customer (options for provider selection, web interface, local storage, etc.) and automation flows needed (such as API token creation).

Retrieval commands don't work in the verify webUI

Trying to retrieve a CID using the retrieval command returned by the CID verify webUI fails with:

$ lotus client retrieve --miner f01392893 bafybeierkfdggz26tvbflaxrzgrlzb6k3g2ptv7hbkpskzxhoqtali2gtu data-2755366
ERROR: The received offer errored: retrieval query offer was unavailable:

That lotus client retrieve command passes the CID of the file itself, while the correct way is to first determine the root CID of the deal containing the CID I am trying to retrieve.

The steps that work are as follows:

  1. Determine the label of the deal:

$ lotus state get-deal 2755366
{
  "Proposal": {
    "PieceCID": { "/": "baga6ea4seaqawzo77rc75wrt7elirecptorakkv3txjhrmrkkxabvj6za2urogi" },
    "PieceSize": 2147483648,
    "VerifiedDeal": true,
    "Client": "f0397376",
    "Provider": "f01392893",
    "Label": "QmR6dciuJGeR5LzSuyn9nUu8pn6qPP9FTdaeeF2mWDzSi3",
    "StartEpoch": 1306869,
    "EndEpoch": 2801589,
    "StoragePricePerEpoch": "0",
    "ProviderCollateral": "330814207684018",
    "ClientCollateral": "0"
  },
  "State": {
    "SectorStartEpoch": 1287571,
    "LastUpdatedEpoch": -1,
    "SlashEpoch": -1
  }
}

  2. Use the CID in the Label field (QmR6dciuJGeR5LzSuyn9nUu8pn6qPP9FTdaeeF2mWDzSi3):

$ lotus client retrieve --miner f01392893 QmR6dciuJGeR5LzSuyn9nUu8pn6qPP9FTdaeeF2mWDzSi3 QmR6dciuJGeR5LzSuyn9nUu8pn6qPP9FTdaeeF2mWDzSi3
The problem with that approach is that I could end up downloading a whole DAG when I only need a single CID.
To overcome this I have to add a path selector, which means I have to know where my CID sits within the DAG beforehand:

$ lotus client retrieve --miner f01392893 --datamodel-path-selector 'Links/0/Hash' QmR6dciuJGeR5LzSuyn9nUu8pn6qPP9FTdaeeF2mWDzSi3 file.png
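The label-extraction step above could be automated by parsing the get-deal JSON. A sketch (the struct models only the fields used here; it is illustrative, not Estuary code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dealInfo models just the fields we need from `lotus state get-deal`.
type dealInfo struct {
	Proposal struct {
		Provider string
		Label    string
	}
}

// rootFromDeal extracts the miner and the retrievable root CID (the
// deal Label) from get-deal JSON output — step 1 of the workaround.
func rootFromDeal(raw []byte) (provider, label string, err error) {
	var d dealInfo
	if err := json.Unmarshal(raw, &d); err != nil {
		return "", "", err
	}
	return d.Proposal.Provider, d.Proposal.Label, nil
}

func main() {
	raw := []byte(`{"Proposal":{"Provider":"f01392893","Label":"QmR6dciuJGeR5LzSuyn9nUu8pn6qPP9FTdaeeF2mWDzSi3"}}`)
	prov, label, _ := rootFromDeal(raw)
	// Feed these into: lotus client retrieve --miner <prov> <label> <out>
	fmt.Println(prov, label)
}
```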

Get CID for all the contents in a Collection

Want to add a commit endpoint that returns a CID for all of its contents at that point in time, much like a version control system.

A collection should now have: uuid, humanReadableLabel, currentCID, [prevCIDs]
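The proposed shape could look like this (a sketch; how the new root CID is computed, e.g. as a UnixFS directory of the contents, is left open):

```go
package main

import "fmt"

// Collection sketches the proposed fields: current root CID plus the
// history of previous commits.
type Collection struct {
	UUID               string
	HumanReadableLabel string
	CurrentCID         string
	PrevCIDs           []string
}

// Commit records a new root CID for the collection's contents,
// archiving the previous one, version-control style.
func (c *Collection) Commit(newCID string) {
	if c.CurrentCID != "" {
		c.PrevCIDs = append(c.PrevCIDs, c.CurrentCID)
	}
	c.CurrentCID = newCID
}

func main() {
	c := Collection{UUID: "1234", HumanReadableLabel: "photos"}
	c.Commit("bafy-v1") // placeholder CIDs
	c.Commit("bafy-v2")
	fmt.Println(c.CurrentCID, c.PrevCIDs)
}
```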

feat: Round robin deals creation

My current understanding of the way deals are created is:
Upload 3 big files (non-staged): A, B and C. First A is scheduled, and estuary attempts deals until reaching 6 successes.
Then C is processed (the order seems random), doing 6 deals.
And then B, 6 deals.

I would like it very much if there were one deal for each file first, then again a deal for each, and so on.
Doesn't need to be perfect, roughly is fine.

It's just that right now, on my account, I have pieces that have 6 deals and others 0, I would like much more if all of them were at 1 or 2 instead.

Maybe a "lowest number of successful deals comes first" system would be smarter than a round robin?
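That "fewest successful deals first" policy can be sketched in a few lines (names and the replication target of 6 are taken from the description above; this is illustrative, not the actual scheduler):

```go
package main

import (
	"fmt"
	"sort"
)

// content pairs a piece with its count of successful deals.
type content struct {
	Name  string
	Deals int
}

// nextToDeal returns the queue order under the proposed policy: pieces
// with 0 deals are tried before any piece is pushed to the target, and
// pieces already at the target are skipped.
func nextToDeal(pieces []content, target int) []string {
	sort.SliceStable(pieces, func(i, j int) bool {
		return pieces[i].Deals < pieces[j].Deals
	})
	var order []string
	for _, p := range pieces {
		if p.Deals < target {
			order = append(order, p.Name)
		}
	}
	return order
}

func main() {
	fmt.Println(nextToDeal([]content{{"A", 6}, {"B", 0}, {"C", 2}}, 6))
	// → [B C]: B (0 deals) goes first, A is already at the target
}
```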

Estuary escrow process for native FIL

The flow for Fission's webnative-filecoin integrated with estuary is to aim for end-to-end FIL usage.

We're working on completing the PoC, while at the same time moving into planning the "proper" way to do this with using native DID logins for Estuary.

In order to complete the PoC, we need:

  1. A user comes to Estuary, and signs up for a new account
  • Fission auth, pick username, come back to Estuary, they are now logged in
  • A FIL address is created for them

They fund this new address out of band, or are even auto-gifted a small amount of mainnet FIL so they can do their first deal.

  2. They upload some files

  3. They want to do a deal!

  • estimate of cost shown in some way
  • display how much FIL is required
  • FIL sent to Estuary node "escrow" wallet

Optional: instead of doing "accounting", you could create a wallet per account, so that every user just has an escrow wallet. This means you look to chain state for accounting, rather than keeping anything in a database. Don't need to scan transaction logs, just look up balance of the current user <> escrow pair. Also means you don't need 4 and 5 I don't think?

  4. Estuary looks for transactions from the (known) user address
  • it's doing this all the time
  • it needs to know all the transactions sent to its address
  • it keeps track of user address / txns, probably cached in a database
  5. Estuary does a deal
  • it looks in a database for the user address escrow balance
  • if the user has sent enough funds to escrow, it executes the deal
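Steps 4–5 amount to a per-user escrow ledger with one check before deal execution. A sketch (names are hypothetical; with the wallet-per-account variant, the lookup would be a chain balance query instead of a database read):

```go
package main

import "fmt"

// escrowLedger tracks funds each user has sent to the Estuary escrow
// address, as step 4 describes (cached in a database in practice).
type escrowLedger struct {
	balances map[string]uint64 // user address -> amount received (attoFIL)
}

func (l *escrowLedger) Deposit(addr string, amt uint64) {
	if l.balances == nil {
		l.balances = map[string]uint64{}
	}
	l.balances[addr] += amt
}

// CanDeal implements step 5: only execute the deal if the user has
// escrowed at least its estimated cost.
func (l *escrowLedger) CanDeal(addr string, cost uint64) bool {
	return l.balances[addr] >= cost
}

func main() {
	var led escrowLedger
	led.Deposit("f1user", 500)
	fmt.Println(led.CanDeal("f1user", 400)) // true
	fmt.Println(led.CanDeal("f1user", 600)) // false
}
```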

"too many open files" error when running estuary

I'm trying to run estuary and repeatedly get the error "failed to resolve local interface addresses ... too many open files".

I've tried increasing the ulimit from 1024 to 10000 (ulimit -Hn 10000 && ulimit -Sn 10000) but I get the same error on Ubuntu 20.04.

argosopentech@estuary:~/estuary$ ./estuary --datadir=/home/argosopentech/estuary-data --logging
Wallet address is:  f12ozido5i7idkqv7pogsgxpt7rswxkgl5ur5guki
2022/03/19 14:46:13 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
2022-03-19T14:46:13.942Z        INFO    dt-impl impl/impl.go:145        start data-transfer module
/ip4/164.92.159.150/tcp/6744/p2p/12D3KooWJchfMcLrpVmLjX8wCy4SunjjncfVf1ipMotLiBnCN2o1
/ip4/127.0.0.1/tcp/6744/p2p/12D3KooWJchfMcLrpVmLjX8wCy4SunjjncfVf1ipMotLiBnCN2o1
2022-03-19T14:46:13.947Z        INFO    estuary estuary/replication.go:719      queueing all content for checking: 0

   ____    __
  / __/___/ /  ___
 / _// __/ _ \/ _ \
/___/\__/_//_/\___/ v4.6.1
High performance, minimalist Go web framework
https://echo.labstack.com
____________________________________O/_______
                                    O\
⇨ http server started on [::]:3004
2022-03-19T14:46:58.507Z        ERROR   basichost       basic/basic_host.go:327 failed to resolve local interface addresses     {"error": "route ip+net: netlinkrib: too many open files"}
2022-03-19T14:47:03.507Z        ERROR   basichost       basic/basic_host.go:327 failed to resolve local interface addresses     {"error": "route ip+net: netlinkrib: too many open files"}
2022-03-19T14:47:08.508Z        ERROR   basichost       basic/basic_host.go:327 failed to resolve local interface addresses     {"error": "route ip+net: netlinkrib: too many open files"}
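One thing worth checking: an `ulimit` raised in one shell does not carry over to a process started elsewhere (under systemd, set LimitNOFILE= in the unit instead). This sketch prints the limits the process itself actually sees; the 100000 threshold is a rough guess for a busy libp2p node, not an official number:

```go
package main

import (
	"fmt"
	"syscall"
)

// nofileLimits returns the soft and hard RLIMIT_NOFILE values the
// current process sees (on Linux the Rlimit fields are uint64).
func nofileLimits() (soft, hard uint64, err error) {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		return 0, 0, err
	}
	return rl.Cur, rl.Max, nil
}

func main() {
	soft, hard, err := nofileLimits()
	if err != nil {
		panic(err)
	}
	fmt.Printf("soft=%d hard=%d\n", soft, hard)
	if soft < 100000 { // rough guess, tune to your workload
		fmt.Println("soft limit may be too low for estuary")
	}
}
```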

Filc needs some major usability improvements

  • subcommand descriptions and usage
  • getting started / documentation readme
  • improve command outputs
  • more helpful error messages
  • easy setup (automatic cidlistsdir creation, etc)
  • retrieve files to a specified output location instead of having them disappear into the blockstore abyss
  • blockstore management (specifically clearing blockstore, but maybe others)

Listing pins is still too slow

Apparently it times out even with a low limit query param set (1), so it probably still has to do with the database. Integrating pin statuses into the database sounds really good, but we still need to look into the technical challenges of that. It does seem doable to me, though.

bug: Verify doesn't match against identical CIDs with different versions

Take this CID for example: bafybeih4i7cwnx6kbgsqme22y5prc2v27jsetvvbuksrq6d2lf3r3qg37i has the "This CID is verified on Estuary" message.
However, its CIDv0 conversion QmfKSvYvVXLSo5vpb1ZgBzJMNkWYKFRefSPPSoCdQSuEMF doesn't have it.

Note that verify does work across different multibase encodings as long as they are all CIDv1 (uAXASIPxHxWbfygmlBhNax18Rarr6ZEnWoaKlGHh6WXcdwNv6).

Many Estuary deals stuck on "StorageDealWaitingForData"

SP feedback:

Stephane: It seems to be the larger deals (32GiB), as the smaller ones (4GiB, 8GiB, 16GiB) are coming through without too many problems.
I'm also having some transfer trouble. At this stage I have about a 70% failure rate from a combination of both problems above (of the last 50 deals, 16 were successful).

Stuberman:
Logs from the last 100 Estuary deals which did not error out (many stuck in StorageDealWaitingForData):
Estuary 2022.txt

Slack thread with more details: https://filecoinproject.slack.com/archives/C02GQUMFQVA/p1646316105412969

Estuary needs a configuration file

In attempting to set up an estuary node in my lab environment, I've encountered a few issues that could be easily navigated around given the ability to configure libp2p in a manner similar to ipfs, including:

  • throttling the number of new connections
  • changing the hwm/lwm for connections
  • disabling tcp
  • blacklisting connections to local-only services outside my LAN (e.g. 169.254.*)

These settings are currently hard-wired. It seems to me that Estuary would benefit from a JSON configuration file (perhaps supporting some reasonable subset of the go-ipfs Swarm option set).

Happy to submit a PR, but since I'm new to Estuary thought I'd ask for advice first.
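As a starting point for discussion, the file could look something like this (field names are illustrative, loosely following go-ipfs's Swarm/ConnMgr options, not an agreed schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeConfig sketches the proposed JSON configuration covering the
// four pain points listed above.
type NodeConfig struct {
	ConnMgr struct {
		LowWater  int `json:"low_water"`
		HighWater int `json:"high_water"`
	} `json:"conn_mgr"`
	DisableTCP      bool     `json:"disable_tcp"`
	BlockedPrefixes []string `json:"blocked_prefixes"`
}

func loadConfig(raw []byte) (NodeConfig, error) {
	var c NodeConfig
	err := json.Unmarshal(raw, &c)
	return c, err
}

func main() {
	raw := []byte(`{
	  "conn_mgr": {"low_water": 600, "high_water": 900},
	  "disable_tcp": true,
	  "blocked_prefixes": ["169.254."]
	}`)
	cfg, _ := loadConfig(raw)
	fmt.Println(cfg.ConnMgr.HighWater, cfg.DisableTCP)
}
```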
