
live's Introduction

Ordi Labs Live

Ordinals mempool viewer. View inscriptions before they're inscribed!

Setup

We use Docker to containerize all micro-services, and you just need to run a few recipes (see below) to get your devbox up and running. If your system's prerequisites are already met, this usually takes under 15 minutes.

prerequisites

macOS: Docker for Mac, Homebrew, Xcode
Linux: Docker
Windows: TODO

first installation

git clone https://github.com/ordilabs/live live--ordilabs
cd live--ordilabs
just install

inscription watching

Run a local instance of Ordi Live after starting Bitcoin Core:

just watch

Open a browser at http://127.0.0.1:3000 to see new inscriptions hitting your mempool in real time.

developing

All micro-services are managed with two simple commands:

just run-services
# just clean-services

Once they are running, you can start developing.

cp .env.sample .env # and uncomment the relevant lines
just watch # for changes, recompile rust/css, refresh frontend

additional commands

Once up and running, you can perform dev tasks:

just open # all .local domains in a browser

# inscribe-1-punk (into the mempool) then mine-1-block
just i1p m1b


# create temporary tunnel to expose your .local on the internet
just run-tunnel

See more commands with just -l.

run live through Tor

socat can be used as a relay between the standard web and a hidden service on the Tor network.

  • Add to .env
CORE_ADDRESS=127.0.0.1
CORE_PORT=8332

# Replace {onion-address} + {onion-port} with your data
TOR_ADDRESS={onion-address}.onion
TOR_PORT={onion-port}
  • Then you have two options:

    • Option A: Using Docker and socator
    just socator
    • Option B: Using the socat command (currently tested on Linux only). Note: socat needs to be installed on your machine.
    just socat

known issues

Linux

Issue when running the just run-services command:

Error response from daemon: invalid IP address in add-host: ""
error: Recipe `run-services` failed on line 34 with exit code 1

Quick fix (manually):

  • In the just file, override run-services as follows:
[linux]
run-services:
  # if someone has a better solution, be my guest
  cd docker && docker compose -f docker-compose.yml -f docker-compose.monkey-patch-linux.yml up 
  • Get your local IP address (on Ubuntu: Settings -> Network -> Details -> IPv4 Address) and replace GATEWAY_IPV4 with that IP in docker/docker-compose.monkey-patch-linux.yml
version: "3.7"

services:

  nginx-proxy:
    extra_hosts:
      # Replace {IP-ADDRESS} with your local IP address
      # e.g. "host.docker.internal.:192.168.1.212" 
      - "host.docker.internal.:{IP-ADDRESS}"

live's People

Contributors

dependabot[bot], felixweis, sectore, larrysalibra, fjahr

Stargazers

ordinally, ordinalOS, Nico Burniske, 开来超, Stone Gao, 22388o⚡️


live's Issues

error: failed to build archive: 'wasm.o': section too large

system: macOS 13.4 (22F66)

clang --version
Homebrew clang version 16.0.4
Target: arm64-apple-darwin22.5.0
Thread model: posix
InstalledDir: /opt/homebrew/opt/llvm/bin

rustc --version
rustc 1.71.0-nightly (5ea3f0ae0 2023-05-23)

error: failed to build archive: 'wasm.o': section too large

The following warnings were emitted during compilation:

warning: In file included from depend/secp256k1/src/precomputed_ecmult_gen.c:7:
[...]
error: could not compile `secp256k1-sys` (lib) due to previous error
warning: build failed, waiting for other jobs to finish...

just install/watch fails on fresh Linux

  • add a hint on how to install Node.js, e.g. on Ubuntu 22.04 LTS
npm install
sh: 1: npm: not found
error: Recipe `install` failed on line 17 with exit code 127
  • fix: the cookie file is by default in ~/.bitcoin/.cookie, not .local/share/... (see the sketch after this list)
backend_bitcoin_core=BitcoinCore { auth: CookieFile("/root/.local/share/.bitcoin/.cookie"), root: "127.0.0.1:8332" }
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Io(Os { code: 2, kind: NotFound, message: "No such file or directory" })', src/backend/bitcoin_core.rs:53:78
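
A minimal sketch of how the backend could resolve the cookie with a fallback, preferring ~/.bitcoin/.cookie and only then the XDG-style path it currently assumes; the helper name and candidate paths are illustrative assumptions, not the project's actual code.

use std::path::PathBuf;

// Hypothetical helper: try the default datadir cookie first, then the
// .local/share location the backend currently hardcodes.
fn resolve_cookie_file(home: &str) -> Option<PathBuf> {
    let candidates = [
        format!("{home}/.bitcoin/.cookie"),
        format!("{home}/.local/share/.bitcoin/.cookie"),
    ];
    candidates.into_iter().map(PathBuf::from).find(|p| p.exists())
}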

Thoughts on architecture for real-time Scanner integration

The scanner currently processes the blockchain from the tip backward until it hits the block that includes inscription 0. Since the scanner works with the raw blk and rev files, Bitcoin Core needs to be turned off while it's running to prevent a race condition. Of course, this is not sufficient for operating a real-time explorer.

TODOs:

  1. The processing of a single block needs to be refactored out into its own function. This should be pretty easy to do; take a look at ordilabs/bitcoin-scanner#12, where I extracted that code out into the worker, as a starting point.
  2. The raw-file-from-disk processing should still be maintained and simply use that function, so that it can be used for spinning up new servers/services etc.
  3. To continuously follow the latest blocks I would recommend using the ZMQ interface (see the sketch after this list). When the scanner is integrated with that interface, it will be notified of new blocks instantly and can process them right away. The only reasonable alternative would be long-polling the RPC interface, but that would still mean blocks are processed more slowly than on other explorers, which doesn't sound great.
  4. I am a bit unclear on whether an RPC integration is still needed as well. For example: the raw-file-from-disk sync finishes and the scanner stops; the Bitcoin Core node starts again; once the node is started, the scanner connects to the ZMQ interface. I am unsure whether this setup alone can be made robust enough to guarantee that no block is missed, i.e. the node may already have processed some blocks before the ZMQ connection to the scanner was established. There doesn't seem to be a way to stop Core from downloading blocks before the ZMQ connection is established, so the scanner probably also needs the ability to request missed blocks via the RPC interface.
  5. To accomplish the former, but also the next item in the list, the scanner should be aware of which blocks it has already scanned. The inscriptions table has a genesis_height, but that could be deceiving, since there can also be blocks without an inscription.
  6. The scanner should also be made reorg-robust. The seemingly simplest way to accomplish this: when a reorg happens, just delete all the inscriptions that were in the reorged-out block (should be easy via genesis_height) and process the newly included block. Anything more complex seems unnecessary, since reorgs are rare and it would save 1-2 seconds at best.
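
For item 3, a minimal sketch of following new blocks via Bitcoin Core's ZMQ interface, assuming the node is started with -zmqpubrawblock=tcp://127.0.0.1:28332 and the zmq crate is available; this is an illustration, not the project's actual scanner code.

fn follow_blocks() -> Result<(), zmq::Error> {
    let ctx = zmq::Context::new();
    let socket = ctx.socket(zmq::SUB)?;
    socket.connect("tcp://127.0.0.1:28332")?;
    socket.set_subscribe(b"rawblock")?;

    loop {
        // Each notification is a multipart message: topic, payload, sequence number.
        let parts = socket.recv_multipart(0)?;
        if let [topic, payload, _seq] = parts.as_slice() {
            if topic.as_slice() == b"rawblock" {
                // Hand the raw block bytes to the refactored single-block scanner (item 1).
                process_block(payload);
            }
        }
    }
}

// Placeholder for the per-block processing function described in item 1.
fn process_block(_raw_block: &[u8]) {}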

wasm version mismatch

Solution from leptos-rs/leptos#1051 (comment)

Thanks for the explanation @gbj.

The latest wasm-bindgen and bleeding-edge cargo-leptos worked for me without any Cargo.toml changes.

Commands:

  • cargo install -f wasm-bindgen-cli
  • cargo install --git https://github.com/leptos-rs/cargo-leptos --locked cargo-leptos

Inefficient "gc" in tick_bitcoin_core

During the server tick, when we discover that client.get_raw_mempool has fewer entries than our internal ordipool, we just assume that a new block was found and prune the entire ordipool. That is a slow process; it currently takes ~15 seconds with ~35k transactions in the mempool.

The assumption that a block was found when the number of transactions in the mempool goes down is flawed, as replace-by-fee (RBF) can invalidate a tx and its children.

There are probably also more optimal ways to recreate the ordipool hashmap; one possibility is sketched after the log below.

tick: bitcoin_core, 35024, 0, 35027, 504_304, 0
tick: bitcoin_core, 35028, 0, 35031, 496_436, 0
tick: bitcoin_core, 35028, 0, 35028, 16_401_797, 15_895_158
processed 31174/31250
tick: bitcoin_core, 31250, 0, 31174, 16_394_627, 15_960_592
tick: bitcoin_core, 31321, 0, 31322, 526_409, 0
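
A rough sketch of one such approach, pruning only the entries that actually left the mempool instead of rebuilding the whole map; the type parameters and function name are stand-ins, not the real ordipool API.

use std::collections::{HashMap, HashSet};

// Keep only transactions that are still in get_raw_mempool's result; everything
// else was either mined or evicted (e.g. via RBF), so it can be dropped in one pass.
fn prune_ordipool<Txid, Entry>(ordipool: &mut HashMap<Txid, Entry>, current_mempool: &HashSet<Txid>)
where
    Txid: std::hash::Hash + Eq,
{
    ordipool.retain(|txid, _| current_mempool.contains(txid));
}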

Same inscription shown over and over

Behavior:

  • It seems like on every tick, the inscriptions that are in the mempool are added to the website again, at least if there are no other inscriptions to show (a possible dedup guard is sketched after the reproduction steps below)

[Screenshot: 2023-04-15 at 9:28:54 PM]

How to reproduce:

  • Run all services in the development environment
  • Generate one punk
  • Observe http://live-ol.local/ and wait
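
A rough sketch of one possible fix, guarding the per-tick broadcast with a set of already-shown inscriptions; the id type and broadcast callback are assumptions for illustration, not the app's actual code.

use std::collections::HashSet;

// Only push inscriptions to the frontend that have not been shown before.
fn broadcast_new<I, F>(seen: &mut HashSet<String>, mempool_inscriptions: I, mut broadcast: F)
where
    I: IntoIterator<Item = String>,
    F: FnMut(&str),
{
    for id in mempool_inscriptions {
        // insert() returns false if the id was already present, i.e. already shown.
        if seen.insert(id.clone()) {
            broadcast(&id);
        }
    }
}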

i18n

TODO

  • i18n provider in src/app/providers (see the sketch after this list):
    • Func lang (returns the current lang)
    • Func change_lang - handler to change the current language
    • const LANG (supported languages)
  • Add a combobox in the header or footer to switch languages
  • Add/update texts in the app introduced in #123
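
A framework-agnostic sketch of the provider described above; the names (Lang, SUPPORTED_LANGS, I18n, change_lang) are assumptions for illustration, not the app's actual API.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Lang {
    En,
    De,
}

// Supported languages (the `const LANG` from the TODO list).
pub const SUPPORTED_LANGS: &[Lang] = &[Lang::En, Lang::De];

pub struct I18n {
    current: Lang,
}

impl I18n {
    pub fn new(default: Lang) -> Self {
        Self { current: default }
    }

    // lang: return the currently selected language.
    pub fn lang(&self) -> Lang {
        self.current
    }

    // change_lang: handler invoked by the language combobox.
    pub fn change_lang(&mut self, lang: Lang) {
        if SUPPORTED_LANGS.contains(&lang) {
            self.current = lang;
        }
    }
}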

