
hyperdiscovery's Introduction

Deprecated. See hyperswarm replicator for similar functionality.

More info on active projects and modules at dat-ecosystem.org


hyperdiscovery

Old documentation below

This library is compatible with hypercore<=v7, which is now out of date.


Join the p2p swarm for hypercore and hyperdrive. Uses discovery-swarm under the hood. Also works in web browsers using discovery-swarm-web.

This module only works with hypercore v7 and earlier (see the note above).

npm install hyperdiscovery

Usage

Run the following code in two different places and they will replicate the contents of the given ARCHIVE_KEY.

var hyperdrive = require('hyperdrive')
var hypercore = require('hypercore')
var Discovery = require('hyperdiscovery')

var archive = hyperdrive('./database', 'ARCHIVE_KEY')
var discovery = Discovery(archive)
discovery.on('connection', function (peer, type) {
  console.log('got', peer, type)
  console.log('connected to', discovery.connections, 'peers')
  peer.on('close', function () {
    console.log('peer disconnected')
  })
})

// add another archive/feed later
var feed = hypercore('./feed')
discovery.add(feed) // adds this hypercore feed to the same discovery swarm

hyperdiscovery uses discovery-swarm to attempt to connect peers. Peer-introduction defaults on the server side come from dat-swarm-defaults and can be overridden (see Options below).

The module can also create and join a swarm for a hypercore feed:

var hypercore = require('hypercore')
var Discovery = require('hyperdiscovery')

var feed = hypercore('/feed')
var discovery = Discovery(feed)

API

var discovery = Discovery(archive, opts)

Join the p2p swarm for the given feed. The returned object, discovery, is an event emitter that emits a peer event with the peer information when a peer is found.

discovery.add(archive, [opts])

Add an archive/feed to the discovery swarm. Options are passed to discovery-swarm. If you pass opts.announce as a falsy value, your port is not announced (discover-only mode).
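
For example, a minimal sketch of discover-only mode, reusing the feed from the usage example above:

// Join discovery for this feed without announcing our own port,
// so we only look up peers (discover-only mode).
discovery.add(feed, { announce: false })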

discovery.totalConnections

The total number of active connections across all archives and feeds.

discovery.leave(discoveryKey)

Leave discovery for a specific discovery key.

discovery.rejoin(discoveryKey)

Rejoin discovery for a discovery key (the key must first have been added with discovery.add).

discovery.close()

Exit the swarm and close all replication streams.
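
A minimal sketch tying these calls together, reusing the feed and discovery instance from the usage example above (the issue example further down passes the discovery key as a hex string, which may be required depending on the version):

// Stop announcing and looking up peers for this feed.
discovery.leave(feed.discoveryKey)

// Start again later; the feed must already have been added with discovery.add.
discovery.rejoin(feed.discoveryKey)

// When completely done, exit the swarm and close all replication streams.
discovery.close()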

Options
  • stream: function, replication stream for connection. Default is archive.replicate({live, upload, download}).
  • upload: bool, upload data to the other peer?
  • download: bool, download data from the other peer?
  • port: port for discovery swarm
  • utp: use utp in discovery swarm
  • tcp: use tcp in discovery swarm
  • bootstrap: [string], WebRTC bootstrap signal servers for web
  • discovery: string, discovery-swarm-stream server for web

Defaults from dat-swarm-defaults (formerly datland-swarm-defaults) can also be overridden (see the sketch after this list):

  • dns.server: DNS server
  • dns.domain: DNS domain
  • dht.bootstrap: distributed hash table bootstrapping nodes
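
A sketch of passing several of these options together; the values are placeholders, not recommendations:

var discovery = Discovery(archive, {
  upload: true,    // send data to peers
  download: true,  // fetch data from peers
  port: 3282,      // port for discovery-swarm to listen on
  utp: true,       // allow uTP connections
  tcp: true,       // allow TCP connections
  dns: {
    server: ['discovery.example.com'], // override the DNS discovery server
    domain: 'example.com'              // override the DNS domain
  }
})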

Debugging

Set DEBUG='*' in the environment to enable debugging output inside discovery-swarm.
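
For scripts, a hedged alternative (assuming the standard debug module is used under the hood) is to set the variable programmatically before hyperdiscovery is required:

// Must run before require('hyperdiscovery') so the debug module
// sees the setting when it is loaded.
process.env.DEBUG = '*'
var Discovery = require('hyperdiscovery')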


License

ISC

hyperdiscovery's People

Contributors

aschrijver, chrisekelley, deltaf1, joehand, juliangruber, karissa, lukeburns, mafintosh, martinheidegger, max-mapper, ninabreznik, okdistribute, pfrazee, rangermauve, sammacbeth, tinchoz49, xloem, yoshuawuyts, zhouhanseng


hyperdiscovery's Issues

Add examples to show how discovery.leave works

I'm trying to figure out how discovery.leave works, but haven't succeeded.

Computer A is the writer:

var hypercore = require('hypercore')
var discovery = require('hyperdiscovery')

var feed = hypercore('./data', {valueEncoding: 'utf-8'})

feed.on('ready', () => {
  var sw = discovery()
  feed.append('\ncool')
  sw.add(feed)
  sw.leave(feed.discoveryKey.toString('hex'))
})

Computer B replicates the feed:

var hypercore = require('hypercore')
var discovery = require('hyperdiscovery')

var feed = hypercore(
  './data',
  '2e7be17277e5b0799408fd022038d5ca1a3586df88db8dc3e736b5977f1a52e9',
  {
    valueEncoding: 'utf-8'
  }
)

feed.on('ready', () => {
  var sw = discovery()
  sw.add(feed)
})

I expected that after sw.leave(feed.discoveryKey.toString('hex')) ran on computer A, the data 'cool' would not be replicated to computer B, but replication still happened.

Some more examples would help.

Use native Object.assign instead of xtend

xtend can easily be replaced with Object.assign

This reduces the dependency count and possibly also avoids duplicated versions where packages don't use the same version range for xtend:

// immutable
Object.assign({}, a, b)

// mutable
Object.assign(a, b)

update readme to include opts.live

I'm going through all of the hyper*universe and noticed that hypervision creates its swarms like hyperdiscovery(feed, {live: true}), where feed is a hypercore.

The readme currently doesn't document the live option, so I'm not entirely sure what it does; it feels like it might let peers know that a feed is producing new data in real time?
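
Based on the default stream option documented above (archive.replicate({live, upload, download})), a reasonable guess is that live is forwarded to the replication stream so it stays open and replicates new blocks as they are appended. A sketch of the call seen in hypervision, with that assumption:

// `feed` is a hypercore; live: true is assumed to keep the replication
// stream open so new data is replicated in real time.
var discovery = Discovery(feed, { live: true })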

Two way replication

I have a pretty simple example running with hypercore that allows peers to connect. All peers can read just fine (including live). But it seems as though the only peer who can write to the feed is the original (the one that generated the key). Is this correct? How do I enable two-way replication using hypercore (so that all clients end up with all the data)?

Web support

I think it'd be cool to integrate discovery-swarm-web in the browser field so that people can compile hyperdiscovery to work on the web.

Move to datproject org

I think it'd be a good idea to move this to the datproject org to make discoverability easier and to keep more Dat-related stuff in one place.

leave vs close connections

Closing the connection isn't the same behavior as discovery-swarm. I think it would be nice to keep the API consistent; there are use cases where you want to stop announcing but still keep connections open. close or destroy can be the thing that kills the connections as well as stops announcing.

We could use close/destroy, but we would need an API for destroying a single feed vs. the whole swarm.

update readme to show how to replicate feed

How do you replicate a hypercore, like you would with discovery-swarm?

var pump = require('pump')
var hypercore = require('hypercore')
var discoverySwarm = require('discovery-swarm')

var feed = hypercore('./data')
var swarm = discoverySwarm()

feed.ready(function () {
  // we use the discovery key as the topic
  swarm.join(feed.discoveryKey)
  swarm.on('connection', function (connection) {
    console.log('(New peer connected!)')

    // We use the pump module instead of stream.pipe(otherStream)
    // as it does stream error handling, so we do not have to do that
    // manually.
    pump(connection, feed.replicate({ live: true }), connection)
  })
})
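
For comparison, a sketch of the same thing using hyperdiscovery itself, which wires up the replication stream internally (per the Usage section above):

var Discovery = require('hyperdiscovery')

// hyperdiscovery creates and pumps the replication stream for you,
// so joining the swarm is a single call.
var discovery = Discovery(feed)
discovery.on('connection', function (peer, type) {
  console.log('new peer connected via', type)
})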

Update to hyperswarm

The dat CLI is currently using discovery-swarm, which attempts to connect over a dht (which fails) but also opportunistically tries mdns and dns. Hyperswarm introduces a new DHT which we want to integrate here to replace the DHT that isn't working. Hyperswarm also has holepunching!! This will be great for the CLI.

What do you think about retiring discovery-swarm/channel completely and using as much of hyperswarm as possible, but also adding support for mdns and dns-discovery in this repository. Then we can use this repository in the dat v2 CLI.

Option 2 would be to update the discovery code in hyperswarm/discovery so that it also tries mdns and dns. Then this hyperdiscovery repository would not be needed and could be retired.

Thoughts, @RangerMauve @mafintosh @andrewosh ?

opts.autoListen (?) - some way to make listen async

I want to store the port option in a database for dat-node so we can reuse whatever port worked last. But I think we need some way to make listen & join async. Maybe opts.autoListen or something (default: true), would be a way to do this? Or any other more straightforward suggestions?

Pseudocode implementation in dat-node:

// Check for existing port saved
// PROBLEM: Happens async
if (opts.port && db) {
  var subDb = sub(db, `${encoding.toStr(archive.discoveryKey)}-network`)
  subDb.get('port', function (err, val) {
    if (err && !err.notFound) throw new Error(err)
    if (val) opts.port = val
    doListen()
  })
}

// Happens sync
opts.autoListen = false
var swarm = hyperdiscovery(archive, opts)


// ... later with `autoListen = false`

function doListen () {
  swarm.once('error', function () {
    swarm.listen(0)
  })
  swarm.listen(opts.port)
  swarm.join(this.archive.discoveryKey)
  swarm.once('listening', function () {
    subDb.put('port', swarm.port) // save whatever port succeeded
  })
}

We could make the whole network function async in dat-node too. Maybe that is the easier solution but the rest of the networking APIs are sync.

This feed is not writable. Did you create it?

Hello! I'm new and just getting started here...

When attaching to an existing swarm and trying to write to a hyperdrive, I get the error This feed is not writable. Did you create it?. Are there some settings to allow a newly connected peer to write to the shared "disk"?

Thanks!

tests failing (utp)

Running the tests on 9.0.2, I got double free or corruption (!prev).

Not all the time, just sometimes.

How to get the ARCHIVE_KEY?

The readme says, "Run the following code in two different places and they will replicate the contents of the given ARCHIVE_KEY." How do I get the ARCHIVE_KEY?
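
A hedged sketch of one way to obtain it, assuming the hyperdrive v7-era API where archive.key is available once the archive is ready:

var hyperdrive = require('hyperdrive')

// Create an archive without passing a key: a new key pair is generated.
var archive = hyperdrive('./database')

archive.on('ready', function () {
  // This hex string is the ARCHIVE_KEY that other peers pass in.
  console.log('archive key:', archive.key.toString('hex'))
})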
