freenet / freenet-core
Declare your digital independence
Home Page: https://freenet.org/
License: Apache License 2.0
In the future, build with both Sqlite and RocksDB as possible backends (selected via a cargo feature) in order to compare their behavior and performance characteristics. We will probably want to settle on one eventually. RocksDB can be used as a building-block KV store for more complex databases (it is already used as such for distributed databases a la TiKV, for example), but it does not offer SQL semantics on top, unlike Sqlite, which could prove useful.
In any case, demonstrating that we can quickly swap between store backends is beneficial, so we are not stuck with, and coupled to, one of them. Since right now we only need to maintain simple key-value semantics, it is rather easy to keep the backend implementation decoupled from the rest of the application, something we would like to continue in the future.
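To keep that decoupling concrete, here is a minimal sketch of a backend-agnostic store trait. The `SqliteStore`/`RocksDbStore` names and feature flags are assumptions for illustration; only the in-memory stand-in is shown.

```rust
use std::collections::HashMap;

/// The only semantics the rest of the application may rely on.
trait StateStore {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>);
    fn get(&self, key: &[u8]) -> Option<&Vec<u8>>;
}

/// Stand-in backend; a real build would select the backend with
/// `#[cfg(feature = "sqlite")]` / `#[cfg(feature = "rocks-db")]`
/// (hypothetical feature names).
#[derive(Default)]
struct MemStore(HashMap<Vec<u8>, Vec<u8>>);

impl StateStore for MemStore {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.0.insert(key, value);
    }
    fn get(&self, key: &[u8]) -> Option<&Vec<u8>> {
        self.0.get(key)
    }
}

fn main() {
    // Application code only ever sees `dyn StateStore`, so swapping
    // Sqlite for RocksDB cannot leak into the rest of the codebase.
    let mut store: Box<dyn StateStore> = Box::new(MemStore::default());
    store.put(b"contract-key".to_vec(), b"state-bytes".to_vec());
    assert_eq!(
        store.get(b"contract-key").map(|v| v.as_slice()),
        Some(&b"state-bytes"[..])
    );
}
```

As long as nothing outside the storage module names a concrete backend type, the swap stays a one-line feature change.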
The scenario is such that when you subscribe initially, you are currently stuck with that peer. Logic is still missing to recover subscriptions when the node you are subscribed to drops off the network. Additionally, whenever you get new peer connections that are closer to the location of the contract, they should replace your current subscriptions, to reduce the number of messages sent around when relaying updates, etc.
It would massively improve the privacy of Locutus if users could opt in to running Locutus over Tor/I2P. Once the Tor project finishes Arti, Tor could be integrated directly into Locutus, since Arti is written in Rust. I2P features a protocol called SAM to facilitate communication between I2P, which is written in Java, and applications written in other languages. There is a SAM client library written in Rust in this GitHub repo: https://github.com/i2p/i2p-rs. Arti is currently on version 0.6.0, and they are aiming for version 1.0 in the future. (https://blog.torproject.org/arti_060_released/)
Allow contracts to follow or subscribe to other contracts' states; a contract's state can then be modified in response to the state and state changes of other contracts.
This should open up a wide variety of use cases and will bring Locutus closer to being a general-purpose decentralized computation platform.
ansi_term is Unmaintained
Details | |
---|---|
Status | unmaintained |
Package | ansi_term |
Version | 0.12.1 |
URL | ogham/rust-ansi-term#72 |
Date | 2021-08-18 |
The maintainer has advised that this crate is deprecated and will not receive any maintenance.
The crate does not seem to have many dependencies and may or may not be OK to use as-is.
The last release seems to have been three years ago.
The list below has not been vetted in any way and may or may not contain alternatives;
See advisory page for additional details.
Allow contracts to inspect the state of other contracts when deciding whether to validate their state or update it in response to a delta.
This approach treats update_state() like a consumer of UpdateData events, which can contain the state or delta for the current contract or related contracts as specified. The function may return when the state has been updated, or request additional related contracts.
fn validate_state(
_parameters: Parameters<'static>,
state: State<'static>,
related: Map<ContractInstanceId, Option<State<'static>>>,
) -> ValidateResult
pub enum ValidateResult {
Valid,
Invalid,
/// The peer will attempt to retrieve the requested contract states
/// and will call validate_state() again when it retrieves them.
RequestRelated(Vec<ContractInstanceId>),
}
// Delta validation is a simple spam prevention mechanism, supporting
// related contracts for this would be overkill
fn validate_delta(
_parameters: Parameters<'static>,
delta: Delta<'static>,
) -> bool
/// Called every time one or more UpdateDatas are received,
/// can update state and/or volatile memory in response
fn update_state(
data: Vec<UpdateData>,
parameters: Parameters<'static>,
state: State<'static>,
) -> UpdateResult
pub enum UpdateData {
State(Vec<u8>),
Delta(Vec<u8>),
StateAndDelta { state: Vec<u8>, delta: Vec<u8> },
RelatedState { related_to: ContractInstanceId, state: Vec<u8> },
RelatedDelta { related_to: ContractInstanceId, delta: Vec<u8> },
RelatedStateAndDelta {
related_to: ContractInstanceId,
state: Vec<u8>,
delta: Vec<u8>,
},
}
pub struct UpdateResult {
new_state : Option<State<'static>>,
related : Vec<Related>,
}
pub struct Related {
contract_instance_id : ContractInstanceId,
mode : RelatedMode,
}
pub enum RelatedMode {
/// Retrieve the state once, not concerned with
/// subsequent changes
StateOnce,
/// Retrieve the state once, and then supply
/// deltas from then on. More efficient.
StateOnceThenDeltas,
/// Retrieve the state and then provide new states
/// every time it updates.
StateEvery,
StateThenStateAndDeltas,
}
I have noticed that Locutus has no incentive to help the network. I think an incentive system is possible without launching a new cryptocurrency, since as far as I understand Locutus is not a cryptocurrency, but feel free to correct me if I am wrong about that. This incentive system would be based on a simple subtraction:
messages you have processed for others - messages others have processed for you
If the result is less than zero, then every message you request after that is owed to the people who processed your messages anyway, and your number will not be positive again until you have paid back those who loaned their computing power to you.
Nodes that do not help the network will receive slower service: in this system, requests from peers with higher trust balances would be processed first, rewarding those who contribute to the network with faster speeds than those who don't.
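The balance and the priority ordering described here can be sketched in a few lines (function and field names are illustrative, not an existing API):

```rust
use std::cmp::Reverse;

/// The proposal's balance: messages processed for others minus
/// messages others processed for us. Negative means the peer is in debt.
fn trust_balance(processed_for_others: i64, processed_for_us: i64) -> i64 {
    processed_for_others - processed_for_us
}

/// Order pending requests so higher-balance peers are served first,
/// which is the "faster service for contributors" incentive.
fn prioritize(mut peers: Vec<(&'static str, i64, i64)>) -> Vec<&'static str> {
    peers.sort_by_key(|&(_, given, taken)| Reverse(trust_balance(given, taken)));
    peers.into_iter().map(|(name, _, _)| name).collect()
}

fn main() {
    // contributor: balance 45; freeloader: balance -8
    let order = prioritize(vec![("freeloader", 2, 10), ("contributor", 50, 5)]);
    assert_eq!(order, vec!["contributor", "freeloader"]);
}
```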
There are multiple methods to achieve this. One is a trustchain; another is a distributed hash table, which would sacrifice reliability for lower resource use. If the trustchain is chosen as the store of trust, it will be necessary to make it opaque, similar to Monero; luckily, ring signatures, stealth addresses, and RingCT are well researched and can be replicated, even if they are not being used to create a cryptocurrency.
If the trustchain is used to store the balance I mentioned, then measures should be implemented to protect its privacy, shielding node operators from repercussions of participating in the protocol, since the trustchain will probably last much longer than the messages themselves. It would be smart to flush older blocks once the trustchain grows beyond a certain size; unspent trust tokens in these "wallets" could then be added to the new genesis block. If this design choice is made, I would recommend flushing the trustchain every 90 days.
Links:
https://www.getmonero.org/resources/moneropedia/ringsignatures.html
https://www.getmonero.org/resources/moneropedia/stealthaddress.html
https://www.getmonero.org/resources/moneropedia/ringCT.html
Best Wishes,
Destroyer
A contract key can specify a time-consuming task to perform, the output of which will be the value of the key, based on the contract parameters. This value will be cached and distributed in the usual way. A contract becomes like a pure function call.
Services like an HTTP-requesting service could be made available using a contract as a conduit between service provider and consumer. The service provider may ask for compensation in Karma.
Unsoundness in dashmap references
Details | |
---|---|
Package | dashmap |
Version | 5.0.0 |
URL | xacrimon/dashmap#167 |
Date | 2022-01-10 |
Unaffected versions | <5.0.0 |
References returned by some methods of Ref (and similar types) may outlive the Ref and escape the lock.
This causes undefined behavior and may result in a segfault.
More information in the dashmap#167 issue.
See advisory page for additional details.
The current implementation is a bit naive: when propagating a join request, it backtracks and waits for an answer from the request chain of all nodes it has been forwarded to (according to the hops-to-live configuration). This should be changed to be more efficient.
The suggested algorithm is as follows: in parallel with the gateway accepting/rejecting the connection from the new peer, forward a message in "fire and forget" fashion (sending the peer connection information along with it); each node it is forwarded to will then attempt a direct connection to the new peer, if it is willing to accept it (i.e. has enough capacity), while continuing to propagate the chain.
The original requester will keep track of how many nodes have responded so far and will keep the operation alive until the op timeout is reached or it has gotten all the expected new connections (a configurable parameter). If the maximum has not been reached, a different mechanism will later attempt to establish more connections as the node receives new peer information.
When implementing this, take into consideration the connectivity between peers at the p2p level and the passing of connection info; it cannot be isolated to just the abstract ring layer.
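The requester-side bookkeeping for this algorithm is small enough to sketch; all names here are hypothetical, not the real codebase API:

```rust
use std::time::{Duration, Instant};

/// Requester-side state for the parallel join: count accepted
/// connections until either the configured target is met or the
/// op times out.
struct JoinOp {
    started: Instant,
    timeout: Duration,
    expected: usize,
    accepted: usize,
}

impl JoinOp {
    fn new(expected: usize, timeout: Duration) -> Self {
        Self { started: Instant::now(), timeout, expected, accepted: 0 }
    }

    /// Called for each "fire and forget" responder that connected back.
    fn record_acceptance(&mut self) {
        self.accepted += 1;
    }

    /// The op stays alive until the timeout is reached or all
    /// expected connections have arrived.
    fn is_complete(&self) -> bool {
        self.accepted >= self.expected || self.started.elapsed() >= self.timeout
    }
}

fn main() {
    let mut op = JoinOp::new(2, Duration::from_secs(30));
    assert!(!op.is_complete());
    op.record_acceptance();
    op.record_acceptance();
    assert!(op.is_complete()); // target reached before the timeout
}
```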
Implement relevant suggestions here: https://github.com/johnthagen/min-sized-rust
A decentralized cryptocurrency that departs from the typical "wallet-based" architecture for a "coin-based" architecture, solving many of the shortcomings of conventional cryptocurrencies.
Most of the complexity in conventional cryptocurrencies is due to the need to prevent double-spending. This is where A transfers funds to B but then executes another transaction to transfer the same funds to C, creating two contradictory versions of the transaction history.
The conventional solution is to broadcast all transactions to everyone and then have network participants commit to one version of the transaction history. They commit by performing a complex computational task ("proof of work" aka "mining") or by staking currency ("proof of stake").
Social credit takes a different approach which relies on real-time observability of data in Locutus. Interested parties - particularly the coin recipient - can subscribe to a coin's transactions and be notified within milliseconds when ownership is transferred.
<<<work in progress>>>
Create a JavaScript library that wraps the websocket communication described in #79 .
stdweb is unmaintained
Details | |
---|---|
Status | unmaintained |
Package | stdweb |
Version | 0.4.20 |
URL | koute/stdweb#403 |
Date | 2020-05-04 |
The author of the stdweb crate is unresponsive.
Maintained alternatives:
See advisory page for additional details.
The goal of this issue is to have a test contract that allows interaction with the state in the tests that require it. Changes required:
Currently these tests are ignored and marked with the comment:
// FIXME: Generate required test contract
While running a contract through a browser, in order to execute the contract I have to specify its key, like: 127.0.0.1/Eu4ByDpJ7mFdyeMHZ8z9WcDQ4wP2i5hTXSac1ZKQ2R9Q/state.html. For the end user it's not clear what this means (arbitrary symbols). I think it would be more comfortable for the end user to specify something more meaningful.
Curious: can we introduce aliases? So that 127.0.0.1/Eu4ByDpJ7mFdyeMHZ8z9WcDQ4wP2i5hTXSac1ZKQ2R9Q/state.html could be replaced with, for example, 127.0.0.1/my-app-name/state.html. As I see it, eth and near have such functionality, so it could probably work the same way for us.
LLVM must be installed prior to cargo build:
$ sudo apt install llvm
Can this dependency be made explicit in Cargo.toml so that the user gets a useful warning?
It would be great to have a node manager (as IPFS has, for example: https://github.com/ipfs/ipfs-desktop), so that a user can control bandwidth, manage contracts/files, and see network coverage (when a location isn't hidden).
CC @iduartgomez, @sanity
Multiple soundness issues in owning_ref
Details | |
---|---|
Package | owning_ref |
Version | 0.4.1 |
URL | https://github.com/noamtashma/owning-ref-unsoundness |
Date | 2022-01-26 |
OwningRef::map_with_owner is unsound and may result in a use-after-free.
OwningRef::map is unsound and may result in a use-after-free.
OwningRefMut::as_owner and OwningRefMut::as_owner_mut are unsound and may result in a use-after-free.
The crate violates Rust's aliasing rules, which may cause miscompilations on recent compilers that emit the LLVM noalias attribute.
No patched versions are available at this time. While a pull request with some fixes is outstanding, the maintainer appears to be unresponsive.
See advisory page for additional details.
A number of utilities are made available to the web assembly contracts via extern functions (operating on byte arrays, i.e. [u8]s).
The contract API will evolve over time, and we should support a contract versioning mechanism to allow these changes in a backward-compatible way.
Here is a suggested approach for representing the versions.
enum ContractContainer {
Wasm(WasmAPIVersion),
// Non-Wasm contracts supported here
}
enum WasmAPIVersion {
V0_0_1 { wasm_code : Wasm, parameters : [u8] },
V0_0_2 { wasm_code : Wasm, parameters : [u8] },
}
For serializing these enums, I suggest using a variable-length integer library such as varuint, for maximum flexibility.
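For illustration, a variable-length unsigned integer in the usual LEB128 style can be encoded in a few lines. This is a generic sketch of the encoding, not the actual varuint crate API:

```rust
/// Unsigned LEB128: 7 bits per byte, high bit set on every byte
/// except the last. Using a varint for the enum tag means new
/// WasmAPIVersion variants never need a wider fixed discriminant.
fn encode_varuint(mut n: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            out.push(byte);
            return out;
        }
        out.push(byte | 0x80); // continuation bit
    }
}

fn main() {
    assert_eq!(encode_varuint(1), vec![0x01]);         // small tags stay one byte
    assert_eq!(encode_varuint(300), vec![0xac, 0x02]); // multi-byte case
}
```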
System: m1 Monterey
Commit hash: a882515
Command: cargo run --example contract_browsing --features local
Output:
[2022-07-17T21:23:18Z INFO contract_browsing] loading web contract DPJ3sM1MDCuPFxEayRsvEtJcD4ySGpCeFfRgCNLWtpfT in local node
[2022-07-17T21:23:18Z INFO contract_browsing] loading data contract 5rwkvFpKoj9r7i6kL9BdQurAjnGhYfbi8uZY4Q6PAyrv in local node
[2022-07-17T21:23:18Z INFO locutus_node::contract::handler::sqlite] loading contract store from "/var/folders/4g/x2ybv6lj4s794_j2w41d6j_r0000gn/T/locutus/db/locutus.db"
[2022-07-17T21:23:19Z ERROR locutus_dev::local_node] other error: Compilation error: unknown relocation Relocation { kind: Elf(264), encoding: Generic, size: 0, target: Symbol(SymbolIndex(6)), addend: 0, implicit_addend: false }
^^^ the problem is here
[2022-07-17T21:23:19Z ERROR locutus_dev::local_node] other error: Compilation error: unknown relocation Relocation { kind: Elf(264), encoding: Generic, size: 0, target: Symbol(SymbolIndex(5)), addend: 0, implicit_addend: false }
^^^ the problem is here
[2022-07-17T21:23:19Z INFO warp::server] Server::run; addr=127.0.0.1:50509
[2022-07-17T21:23:19Z INFO warp::server] listening on http://127.0.0.1:50509
Unfortunately I wasn't able to check whether it compiles on x86, since a bug in Docker prevents emulating x86 correctly (due to problems with qemu).
As far as I can see, it crashes here:
https://github.com/freenet/locutus/blob/a8825159906ff81f3fd8398f9d7be9c507631ae9/crates/locutus-runtime/src/runtime.rs#L173
Replacing the store like this works fine:
let module = Module::new(&Default::default(), contract.code().data())?;
So the problem is something in the store. I tried switching the backend from LLVM to Cranelift, and it works now:
https://github.com/freenet/locutus/blob/a8825159906ff81f3fd8398f9d7be9c507631ae9/crates/locutus-runtime/src/runtime.rs#L238
-> Store::new(&Universal::new(Cranelift::new()).engine())
The root cause isn't obvious to me right now. As a workaround I can propose adding a cfg(target_arch = "aarch64") branch inside instance_store.
/// Verify that the state is valid, given the parameters. This will be used before a peer
/// caches a new State.
fn validate_state(parameters : &[u8], state : &[u8]) -> bool;
/// Verify that a delta is valid - at least as much as possible. The goal is to prevent DDoS of
/// a contract by sending a large number of invalid delta updates. This allows peers
/// to verify a delta before forwarding it.
fn validate_delta(parameters : &[u8], delta : &[u8]) -> bool;
enum UpdateResult {
ValidUpdated,
ValidNoChange,
Invalid,
}
/// Update the state to account for the state_delta, assuming it is valid
fn update_state(parameters : &[u8], state : &mut Vec<u8>, state_delta : &[u8]) -> UpdateResult;
/// Generate a concise summary of a state that can be used to create deltas
/// relative to this state. This allows flexible and efficient state synchronization between peers.
fn summarize_state(parameters : &[u8], state : &[u8]) -> [u8]; // Returns state summary
/// Generate a state_delta using a state_summary from the current state. This, along with
/// summarize_state(), allows flexible and efficient state synchronization between peers.
fn get_state_delta(parameters : &[u8], state : &[u8], state_summary : &[u8]) -> [u8]; // Returns state delta
fn update_state_summary(parameters : &[u8], state_summary : &mut Vec<u8>) -> UpdateResult;
struct Related {
contract_hash : [u8],
parameters : [u8],
}
/// Get other contracts that should also be updated with this state_delta
fn get_related_contracts(parameters : &[u8], state : &[u8], state_delta : &[u8])
-> Vec<Related>;
Note that the Related struct implies the following approach to generating a contract hash:
contract_hash = hash(hash(contract_wasm) + parameters)
The reason is so that a contract_hash can be created with only a hash of the contract_wasm, not the entire contract_wasm. This should reduce contract sizes.
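A sketch of that hashing scheme; std's DefaultHasher stands in for whatever cryptographic hash is actually used, since the point is only the two-level structure:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in hash; a real implementation would use a cryptographic hash.
fn hash_bytes(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

/// contract_hash = hash(hash(contract_wasm) + parameters).
/// The inner hash is computed first, so a peer holding only
/// hash(contract_wasm) plus the parameters can derive the contract hash
/// without ever fetching the full wasm.
fn contract_hash(contract_wasm: &[u8], parameters: &[u8]) -> u64 {
    let mut outer = hash_bytes(contract_wasm).to_le_bytes().to_vec();
    outer.extend_from_slice(parameters);
    hash_bytes(&outer)
}

fn main() {
    let wasm = b"...contract wasm bytes...";
    let h = contract_hash(wasm, b"params-a");
    // Same wasm with different parameters is a different contract instance.
    assert_ne!(h, contract_hash(wasm, b"params-b"));
    // Deterministic for identical inputs.
    assert_eq!(h, contract_hash(wasm, b"params-a"));
}
```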
Commit: f363ec1
Platform: m1 Monterey
Problem reproduced with the following steps:
Copying the index and state html from the root into contracts/freenet_microblogging_web/web
Then running the following script:
#!/usr/bin/bash
# clean locutus temp_dir
rm -rf /var/folders/4g/x2ybv6lj4s794_j2w41d6j_r0000gn/T/locutus/
cd crates/http-gw/examples
rm freenet_microblogging_data
rm freenet_microblogging_data.wasm
rm freenet_microblogging_web
rm freenet_microblogging_web.wasm
cd ../../..
cargo build
cd contracts/freenet-microblogging-web
bash compile_contract.sh
mv ./freenet_microblogging_web.wasm ../../crates/http-gw/examples/
cd ../..
cd contracts/freenet-microblogging-data
bash compile_contract.sh
mv freenet_microblogging_data.wasm ../../crates/http-gw/examples/
cd ../..
cd crates/locutus-dev
cargo run --bin build_state -- --input-path ../../contracts/freenet-microblogging-web/web --output-file ../http-gw/examples/freenet_microblogging_web --contract-type web
cargo run --bin build_state -- --input-path ../../contracts/freenet-microblogging-data --output-file ../http-gw/examples/freenet_microblogging_data --contract-type data
cd ../..
cargo run --example contract_browsing --features local
Then replacing DATA_CONTRACT with the new key (in the state.html file).
Then opening 127.0.0.1/<key>/state.html.
The state page is displayed and the websocket is established.
Clicking on the Send Update button, the browser dev tools show:
Sending: [{"author":"IDG","date":"2022-06-15T00:00:00Z","title":"New msg","content":"..."}]
state.html:74 Update response: {"err":"client error: unhandled error: RuntimeError: unreachable","result":"error"}
I tried to figure out where it crashes, and it appears to be here:
https://github.com/freenet/locutus/blob/f363ec1d26486dac959e05d99e393cd63ee8b104/crates/locutus-runtime/src/runtime.rs#L386
I verified that summary_func is valid (summary_func.is_ok()), so it crashes on the call.
I tried printing the error instead of unwrapping and got this:
RuntimeError {
source: Trap(
UnreachableCodeReached,
),
wasm_trace: [],
native_trace: 0: 0x100de36d4 - <unknown>
1: 0x100de3b50 - <unknown>
2: 0x100a3ebbc - <unknown>
3: 0x100a3ea08 - <unknown>
4: 0x100dd3d9c - <unknown>
5: 0x18e5e44e4 - <unknown>
,
},
)
Add functionality at the ring level to prune dead connections when we start to implement the connection manager layer.
Originally posted by sanity May 29, 2022
Every node in the Locutus network has a location, a floating-point value between 0.0 and 1.0 representing its position in the small-world network. These are arranged in a ring so positions 0.0 and 1.0 are the same. Each contract also has a location that is deterministically derived from the contract's code and parameters through a hash function.
The network's goal is to ensure that nodes close together are much more likely to be connected than distant nodes; specifically, the probability of two nodes being connected should be proportional to 1/distance.
A Sybil attack is where an attacker creates a large number of identities in a system and uses them to gain a disproportionately large influence, which they then use for nefarious purposes.
In Locutus, such an attack might involve trying to control all or most peers close to a specific location. This could then be used to drop or ignore get requests or updates for contract states close to that location.
I think there are at least three categories of solution to this:
When a node joins through a gateway it must negotiate its location with the gateway first. This could be done by both node and gateway generating a random nonce, hashing it, and sending the hash to the other. After exchanging hashes they exchange their actual nonces which are combined to create a new nonce, and a location is derived from that. This prevents either gateway or the joiner from choosing the location.
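A sketch of that commit-reveal negotiation. DefaultHasher stands in for a real cryptographic hash and commitment scheme, and the XOR combination of the nonces is just one possible way to mix them; the network round trips are omitted:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a cryptographic hash.
fn hash64(data: u64) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

/// Each side first sends hash(nonce) as its commitment, and reveals
/// the nonce only after receiving the other side's commitment, so
/// neither side can pick its nonce after seeing the other's.
fn commit(nonce: u64) -> u64 {
    hash64(nonce)
}

/// After both reveals check out against the commitments, combine the
/// nonces and map the result onto the ring as a location in [0.0, 1.0).
fn derive_location(joiner_nonce: u64, gateway_nonce: u64) -> f64 {
    let combined = hash64(joiner_nonce ^ gateway_nonce);
    combined as f64 / (u64::MAX as f64 + 1.0)
}

fn main() {
    let (joiner_nonce, gateway_nonce) = (0xdead_beef_u64, 0x1234_5678_u64);
    // Each side verifies the other's reveal against the earlier commitment.
    assert_eq!(commit(joiner_nonce), hash64(joiner_nonce));
    let loc = derive_location(joiner_nonce, gateway_nonce);
    assert!((0.0..1.0).contains(&loc));
}
```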
A contract has multiple copies, each indicated by a contract parameter - the location of each copy will be pseudorandom. A user could query a random subset of the copies to ensure that they receive any updates. If any copy has an old version of the state then the user can update them by reinserting the latest version obtained from a different copy.
A contract contains a parameter for the current date, which will mean that the contract has a different location every day. If today's contract is found to be missing it can be reinserted using an older copy.
While developing locally, temp_dir() is used. I ran into an issue where I changed my html files but they weren't applied on the next run, because the files in the temp dir were preserved (in my case: /var/...locutus/webs/...; m1 Monterey platform).
I needed to explicitly rm -rf /var/.../locutus.
So perhaps we need to do some cleanup on exit?
I also ran into the issue that after changing a file I need to run a bunch of commands (recompile the web/data contract and move it into examples).
Since this is active development, as a quick solution, does it make sense to introduce a Makefile/bash script for that? Something like this:
#!/usr/bin/bash
# clean locutus temp_dir; could be evaluated dynamically
rm -rf /var/folders/4g/x2ybv6lj4s794_j2w41d6j_r0000gn/T/locutus/
cd crates/http-gw/examples
rm freenet_microblogging_data
rm freenet_microblogging_data.wasm
rm freenet_microblogging_web
rm freenet_microblogging_web.wasm
cd ../../..
cargo build
cd contracts/freenet-microblogging-web
bash compile_contract.sh
mv ./freenet_microblogging_web.wasm ../../crates/http-gw/examples/
cd ../..
cd contracts/freenet-microblogging-data
bash compile_contract.sh
mv freenet_microblogging_data.wasm ../../crates/http-gw/examples/
cd ../..
cd crates/locutus-dev
cargo run --bin build_state -- --input-path ../../contracts/freenet-microblogging-web/web --output-file ../http-gw/examples/freenet_microblogging_web --contract-type web
cargo run --bin build_state -- --input-path ../../contracts/freenet-microblogging-data --output-file ../http-gw/examples/freenet_microblogging_data --contract-type data
cd ../..
We also can place a temp dir clean command there
Due to the broadcasting and forwarding mechanisms, it is very possible that nodes may be hit several times, from different peers, with requests to replicate the exact same operation, in the case of update notifications.
The solution to avoid duplicate costly work (like executing the same contract with the same value over and over), in a way that applies to any ops, is to keep a commit log for transactions. Since the history of an initial operation is covered by the span of the same id, we can double-check that the operation has not already been committed and resolved.
The commit log can be cleaned up when garbage from old ops that have timed out is cleaned up, based on the same timeout duration.
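A minimal sketch of such a commit log; the types and names are illustrative, not the real codebase API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Remember which transaction ids have already been resolved, so a
/// duplicate broadcast of the same op (e.g. an update notification
/// arriving via several peers) is skipped instead of re-executed.
struct CommitLog {
    committed: HashMap<u64, Instant>, // tx id -> commit time
    op_timeout: Duration,             // same timeout used for op GC
}

impl CommitLog {
    fn new(op_timeout: Duration) -> Self {
        Self { committed: HashMap::new(), op_timeout }
    }

    /// Returns true if this tx should be executed (first time seen).
    fn try_commit(&mut self, tx_id: u64) -> bool {
        if self.committed.contains_key(&tx_id) {
            return false; // already resolved: avoid duplicate costly work
        }
        self.committed.insert(tx_id, Instant::now());
        true
    }

    /// Clean up entries along with other timed-out op garbage.
    fn gc(&mut self) {
        let timeout = self.op_timeout;
        self.committed.retain(|_, t| t.elapsed() < timeout);
    }
}

fn main() {
    let mut log = CommitLog::new(Duration::from_secs(60));
    assert!(log.try_commit(42));  // first delivery: execute the op
    assert!(!log.try_commit(42)); // replay from another peer: skip
    log.gc();
}
```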
This is related to improvements to the local-node command for the dev CLI tool. Open tracking issue.
Note: Previously, the term "component" referred to imported JavaScript libraries distributed like applications - this proposal changes that definition.
The role of the Component system is to manage the use of secret information such as private keys. A Component is WebAssembly code that implements the Component interface described below. These Components execute in the node itself rather than in the web browser.
Components can:
A Component implements the following interface:
trait Component {
/// Process inbound messages, producing zero or more outbound messages in response
/// Note that all state for the component must be stored using the secret mechanism
fn process(messages : Vec<InboundComponentMsg>) -> Vec<OutboundComponentMsg>;
}
enum InboundComponentMsg {
GetSecretResponse {
key : Vec<u8>,
value : Option<Vec<u8>>,
},
ApplicationMessage {
from_app : Hash,
payload : Vec<u8>,
},
GetContractResponse {
contract_id : Vec<u8>,
update_data : UpdateData, // See #167
},
UserResponse {
request_id : u32,
response : String,
},
}
enum OutboundComponentMsg {
GetSecretRequest {
key : Vec<u8>,
},
SetSecretRequest {
key : Vec<u8>,
// Option::None will delete the value associated with the key
value : Option<Vec<u8>>,
},
ApplicationMessage {
to_app : Hash,
payload : Vec<u8>,
},
GetContractRequest {
mode : RelatedMode, // See #167
contract_id : Vec<u8>,
},
UserRequest {
request_id : u32,
/// An HTML fragment supporting a limited set of HTML tags, including hyperlinks
message : String,
/// If a response is required from the user, it can be chosen from this list
responses : Vec<String>,
},
}
Components are distributed via their contract: their WebAssembly code and parameters are distributed as the state of a contract, similar to applications. Components are identified by the keys of the contracts through which they're distributed.
An application can request the installation of a Component; this may require some interaction from the user, for example making a donation to validate an antiflood token generator.
Antiflood tokens increase the cost of abusive behavior like spam and denial of service attacks within Locutus.
To create antiflood tokens, the user must give up something of value to obtain a token generator: a public/private keypair that meets some cryptographically provable criterion, such as a signed certificate of donation to Freenet. This generator releases new tokens at regular time intervals that depend on the token tier. The lowest tier, 30-second tokens, are released every 30 seconds. Higher tiers release less frequently: 1 minute, 10 minutes, 30 minutes, 1 hour, 2 hours, 4 hours, 12 hours, or 1 day.
Other systems can require that a token of some minimum tier be issued as a condition of some action like adding a message to an inbox. In the event of bad behavior by the generator owner, the recipient can create a complaint, which will be visible to anyone interacting with the generator.
Initially, this will be a centralized solution. The user must make a cryptographically blinded donation to Freenet to obtain a token generator. In the future, we will provide a decentralized on-network way to create new token generators. We're still at the ideation stage with that, but it will not be based on proof-of-work.
1. The user generates generator_key_pair and blinds generator_public_key to get blind(generator_public_key) before sending it to https://freenet.org/donations
2. Freenet creates donation_key_pair based on the donation amount
3. Freenet signs blind(generator_public_key) with donation_private_key to produce signed(donation_public_key, blind(generator_public_key))
4. signed(donation_public_key, blind(generator_public_key)) is sent back to the user
5. The user unblinds it to obtain signed(donation_public_key, generator_public_key); this is the initialization certificate and can be used to prove that the generator has been initialized with a donation
The Token Generator Contract keeps track of tokens that have been assigned.
The token contract state is a set of token assignments organized by tier (frequency of token release), and then the time the token is released.
The contract verifies that the release times for a tier match the tier. For example, a 15:30 UTC release time isn't permitted for the hour_1 tier, but 15:00 UTC is permitted.
Note: Conflicting assignments for the same time slot are not permitted and indicate that the generator is broken or malicious.
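The tier/release-time check can be sketched as follows; strings stand in for the Tier enum purely for illustration, and the rule assumed is that the minutes since midnight must fall on the tier's release grid:

```rust
/// Tier release interval in minutes (mirrors the Tier enum below).
fn tier_interval_minutes(tier: &str) -> u32 {
    match tier {
        "minute_1" => 1,
        "minute_10" => 10,
        "minute_30" => 30,
        "hour_1" => 60,
        "hour_2" => 120,
        "hour_4" => 240,
        "hour_12" => 720,
        "day_1" => 1440,
        _ => panic!("unknown tier"),
    }
}

/// A release time (UTC hour/minute) is valid for a tier iff it falls on
/// the tier's release grid, i.e. minutes since midnight divide evenly.
fn valid_release_time(tier: &str, hour: u32, minute: u32) -> bool {
    (hour * 60 + minute) % tier_interval_minutes(tier) == 0
}

fn main() {
    assert!(valid_release_time("hour_1", 15, 0));    // 15:00 UTC is on the grid
    assert!(!valid_release_time("hour_1", 15, 30));  // 15:30 UTC is not
    assert!(valid_release_time("minute_30", 15, 30)); // but it is for minute_30
}
```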
struct TokenContractParameters {
generator_public_key : PublicKey,
current_date_utc : Date,
}
struct TokenContractState {
/// A list of issued tokens
tokens_by_tier : HashMap<Tier, HashMap<DateTime, TokenAssignment>>,
}
struct TokenAssignment {
tier : Tier,
issue_time : DateTime,
/// The assignment, the recipient decides whether this assignment
/// is valid based on this field. This will often be a PublicKey.
assigned_to: [u8],
/// `(tier, issue_time, assigned_to)` must be signed
/// by `generator_public_key`
signature: Signature,
}
enum Tier {
minute_1, minute_10, minute_30, hour_1,
hour_2, hour_4, hour_12, day_1
}
For a TokenAssignment to be valid, only one TokenAssignment must exist for a given issue_time and tier.
For example, consider a contract representing a message inbox, where senders must spend a token to add a message to the inbox. The inbox decides what tier of token is required; this should be made known to senders.
struct InboxContractState {
messages : Vec<InboxMessage>,
}
struct InboxMessage {
message: String,
assignment: TokenAssignment,
/// `(message, assignment)` signed by generator's public key
signature: Signature,
}
1. The sender submits an InboxMessage including a valid TokenAssignment
2. The InboxContract verifies that the tier is high enough and the assignment signature matches
3. The InboxContract verifies that the assignment is valid by checking the relevant TokenContractState, ignoring it if it isn't
A contract that maintains a list of valid complaints for a generator; complaints can be created by a token recipient. Tokens from a generator with an excessive number of complaints may be rejected. The recipient is responsible for adding complaints to this contract.
struct GeneratorComplaintsParameters {
generator_public_key: PublicKey,
}
struct GeneratorComplaintsState {
complaints : HashMap<DateTime, HashMap<Tier, Complaint>>,
}
struct Complaint {
/// A valid assignment is required for a complaint
assignment: TokenAssignment,
/// Reason for the complaint
reason: String,
/// `(assignment, reason)` must be signed by the token
/// recipient to be a valid complaint
recipient_signature : Signature,
}
We should randomize the daily changeover time for current_date_utc based on the generator_public_key to avoid a synchronized network change at 0:00 UTC.
The assigned_to field in TokenAssignment is redundant in an InboxMessage because we already know the assigned_to here. We should avoid wasting these bytes.
Token recipients can specify a maximum token age, preventing passive accumulation of a large number of tokens over time
A duplicate token assignment is evidence of malicious behavior by the generator owner and will disable the token generator
We've now described a mechanism for generating scarce tokens that can be spent to gain access to potentially floodable resources like a message inbox.
Tooling that supports writing contracts with AssemblyScript (which is based on TypeScript, which is a superset of JavaScript).
A web browser can be used as a general-purpose user interface for decentralized applications on Locutus. These applications can be distributed via Locutus and interface with Locutus through an efficient WebSocket connection to a user's Locutus node.
This is analogous to FProxy in Fred, but much more powerful because it supports interactive web applications.
http://127.0.0.1:8608/3M3fbA7RDYdvYeaoR69cDCtVJqEodo9vth
The payload is prefixed by a u32 length and is a tar.xz file containing a website, including index.html, which is then served from:
http://127.0.0.1:8608/3M3fbA7RDYdvYeaoR69cDCtVJqEodo9vth/index.html
todo: Also needs to support a redirect mechanism, so that instead of a payload the address of another contract is given; the gateway will transparently redirect to that. This redirect will be updatable, which will allow contract versioning.
A connection to the gateway may be upgraded to a websocket that can be used to get, put, and modify contract state
WebSocket messages are encoded in a binary serialization format such as MessagePack (TBD).
The following commands and responses are supported:
mod get {
    /// Client -> Node
    struct Request {
        key: Vec<u8>,
        request_contract: bool,
        subscribe: bool,
    }
    /// Node -> Client
    struct Response {
        contract: Option<Vec<u8>>,
        subscription_id: Option<u64>,
        state: Vec<u8>,
    }
    /// Node -> Client
    struct Notify {
        subscription_id: u64,
        state: Vec<u8>,
    }
}
mod put {
    /// Client -> Node
    struct Request {
        contract: Vec<u8>,
        state: Vec<u8>,
    }
    /// Node -> Client
    struct Response {}
}
mod update {
    /// Client -> Node
    struct Request {
        contract: Vec<u8>,
        delta: Vec<u8>,
    }
    /// Node -> Client
    struct Response {}
}
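A hypothetical client-side sketch of the subscription flow these messages imply: a get Response may carry a subscription_id, after which Notify messages update the locally cached state for that subscription. SubscriptionCache is an invented name, and serialization plus the websocket transport are elided.

```rust
use std::collections::HashMap;

/// Client-side bookkeeping for contract subscriptions (illustrative only).
pub struct SubscriptionCache {
    states: HashMap<u64, Vec<u8>>,
}

impl SubscriptionCache {
    pub fn new() -> Self {
        Self { states: HashMap::new() }
    }

    /// Handle a get Response: if the node granted a subscription,
    /// start tracking the returned state under that id.
    pub fn on_get_response(&mut self, subscription_id: Option<u64>, state: Vec<u8>) {
        if let Some(id) = subscription_id {
            self.states.insert(id, state);
        }
    }

    /// Handle a Notify: replace the cached state for the subscription.
    pub fn on_notify(&mut self, subscription_id: u64, state: Vec<u8>) {
        self.states.insert(subscription_id, state);
    }

    pub fn state(&self, subscription_id: u64) -> Option<&Vec<u8>> {
        self.states.get(&subscription_id)
    }
}
```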
This HTTP interface can also serve as an HTTP proxy that only permits connections through this interface. This can be used to prevent applications from connecting to non-Locutus websites.
We need to document the process of writing and compiling contracts, and writing web apps that communicate with the node via websocket.
We can keep this low-level and general so that others can use it to build convenience tools like an assemblyscript wrapper for contracts, or a friendly API for websocket communication.
Right now, the repo lacks tags which describe what Locutus is or does. It'd be nice to have some tags such as locutus, p2p, decentralized, etc., so the repo is more discoverable (currently, a JS project shows up when searching for projects tagged with "locutus").
The proposal describes a system for allowing distributed processing on the Freenet network using WebAssembly functions to process input contracts and update the state of other contracts. The system includes mechanisms for validating the accuracy and timeliness of the computed state updates, as well as a reputation system to penalize workers who do not execute jobs correctly.
A “job” takes the state from one or more input contracts, processes it using a provided WebAssembly function, and then sets the state of another contract to the result.
Workers execute jobs. There is also a validation mechanism to ensure workers execute jobs in a timely and accurate manner, as well as a reputation system to punish and exclude workers that don't do their job correctly.
A job is created in WebAssembly by implementing this job function:
trait Job {
    /// Execute a job or determine additional input contracts
    fn job(
        /// The parameters that form the job, along with the WebAssembly that
        /// implements this function.
        parameters: [u8],
        /// Any requested dependent contracts; this function requests them by
        /// returning JobOutput::Dependencies() containing the desired
        /// dependencies.
        dependencies: HashMap<ContractInstanceId, Dependency>,
    ) -> JobOutput {
        // ...
    }
}
pub enum JobOutput {
    /// This job requires additional dependencies as specified
    Dependencies(HashMap<ContractInstanceId, DependencyCfg>),
    /// This job produced its output
    Output(Vec<u8>),
}
/// Specify how the contract wishes to receive dependency updates
pub struct DependencyCfg {
    /// Should the update include the initial dependency state we retrieved?
    pub initial_state: bool,
    /// How many recent deltas should be included?
    pub recent_deltas: u32,
    /// How many recent states should be included?
    pub recent_states: u32,
}
pub enum Dependency {
    NotFound,
    Found {
        initial_state: Option<State>,
        recent_states: Vec<State>,
        recent_deltas: Vec<Delta>,
    },
}
The recent_states and recent_deltas mechanisms are designed to allow the job to update state incrementally rather than needing to recompute everything every time a dependency updates.
The job WebAssembly is wrapped into a standard job contract as a parameter; the job contract's purpose is to advertise the job and to collect the job output from workers.
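A toy illustration of the incremental-update benefit, where the dependency state is a running sum and each delta is an increment. The function names are assumptions for the sketch.

```rust
/// Recompute the job result from the full history of states (expensive path).
pub fn recompute_from_scratch(states: &[i64]) -> i64 {
    states.iter().sum()
}

/// Fold only the recent deltas into a cached result (cheap path); this is
/// what recent_deltas enables instead of reprocessing everything.
pub fn apply_deltas_incrementally(cached: i64, recent_deltas: &[i64]) -> i64 {
    recent_deltas.iter().fold(cached, |acc, d| acc + d)
}
```

Both paths must agree on the final result; the incremental one only touches the new data.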
Jobs and workers are assigned positions on the job ring based on a hash of the job’s WebAssembly code or the worker’s public key, respectively.
To allow peers to find jobs close to them efficiently, we use a system of job discovery contracts arranged in a hierarchy: the job ring is broken up into a tree of these contracts, each responsible for half of its parent's segment of the ring.
Each discovery contract in the tree contains a list of the top-ranked jobs, where:
rank = tokens / (1 + majority - minority)
tokens - the number of antiflood tokens spent on this job in the past 24 hours
majority - the estimated number of workers whose output matches the majority's
minority - the estimated number of workers whose output differs from the majority's
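The rank formula can be computed directly; a minimal sketch (job_rank is a hypothetical helper name; the majority count is at least the minority count by definition, so the denominator is at least 1):

```rust
/// rank = tokens / (1 + majority - minority), as defined above.
/// Returns f64 so fractional ranks are preserved.
pub fn job_rank(tokens: u32, majority: u32, minority: u32) -> f64 {
    tokens as f64 / (1.0 + majority as f64 - minority as f64)
}
```

For example, a job with 10 tokens, 5 majority workers, and 2 minority workers ranks at 10 / (1 + 3) = 2.5; a contested job (majority == minority) falls back to its raw token count.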
mod DiscoveryContract {
    pub struct Parameters {
        /// The depth of this discovery contract in the tree; the root
        /// contract has a depth of 0.
        pub depth: u8,
        /// The index of the segment, running counter-clockwise.
        /// If depth=0 then there is only one segment, so segment_ix=0. The
        /// number of segments = 2^depth.
        pub segment_ix: u32,
    }
    pub struct State {
        pub jobs: Vec<Job>,
    }
    pub struct Job {
        /// The location of the job request
        pub location: Location,
        /// The request itself
        pub request: JobRequest,
        /// The JobResult ranks together with the job results themselves
        pub results: Vec<(Location, JobResult)>,
    }
    pub struct JobRequest {
        pub code: Wasm,
        pub parameters: [u8],
    }
    pub struct JobResult {
        /// The results; keys are either the output itself or the
        /// hash of the output. The JobCertificates are those produced
        /// by the workers closest to the Job::location.
        pub results: HashMap<ResultOutput, Vec<JobCertificate>>,
    }
    pub enum ResultOutput {
        /// If the output is smaller than a specified threshold, then
        /// embed it directly for efficiency
        Embedded([u8]),
        /// If the output is larger than the specified threshold, then
        /// reference the hash of the output, which allows the output
        /// itself to be retrieved from a simple content-hash contract
        Referenced(Hash),
    }
    pub struct JobCertificate {
        pub worker_location: Location,
        pub worker_pubkey: PublicKey,
        pub result_signature: Signature,
        pub tokens: Vec<TokenAllocation>,
    }
    pub struct TokenAssignment {
        // See https://github.com/freenet/locutus/issues/246
        // TokenAssignment::assigned_to must be the hash of the
        // job result hash.
    }
}
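As a sketch of how a peer might locate the discovery contract responsible for a position, the Parameters above suggest a direct mapping from a ring location in [0, 1) to a segment index at a given depth. segment_for_location is a hypothetical helper, and the counter-clockwise orientation is ignored here for simplicity.

```rust
/// Map a ring location in [0.0, 1.0) to the index of the discovery-contract
/// segment containing it at a given tree depth. At depth d the ring is split
/// into 2^d equal segments, matching Parameters { depth, segment_ix }.
pub fn segment_for_location(depth: u8, location: f64) -> u32 {
    let segments = 1u64 << depth; // 2^depth segments at this depth
    let ix = (location * segments as f64) as u64;
    // Guard against a location of exactly 1.0 overflowing the last segment.
    ix.min(segments - 1) as u32
}
```

A peer would descend the tree, at each depth picking the child segment containing its target location, until it reaches a contract covering a suitably narrow slice of the ring.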
🚧🚧🚧 To be completed 🚧🚧🚧
This is a general tracking issue to monitor the status of the different techniques that will help with clients behind NAT when it comes to establishing direct connection between peers.
One of the problems of NAT traversal is the variance in network characteristics and configurations when trying to establish a connection between two peers, at every layer of the network and from the point of view of both hardware (router configuration, LAN, ISP, etc.) and software (for example, is a browser handling the connection for you, or do you have direct access to the interface?). A good overview and taxonomy of the different scenarios can be read here.
Some important/interesting resources re. NAT traversal status within libp2p:
Overall tracking issue in their repo: libp2p/rust-libp2p#2052
For more detail check their tracking issue.
Working on top of UDP to add our own solutions disqualifies a big chunk of libp2p, as it is not possible to use UDP directly as a transport protocol in libp2p (support is under development, but only in the Go implementation, and it is just a prototype). There may be an option, in theory, to build our own protocol and plug it into libp2p (it is all based around traits/interfaces, so I believe it is doable, but we would have to look into it) and to use other components that we find useful (e.g. the identity protocol).
However, this would mean building a significant part of the plumbing and packet handling libp2p does for you, and it would be a large effort, so I would say this should be a last resort (unless we decide we don't want to use libp2p at all). Then there is the obvious caveat: if a larger group of developers with more resources has taken a long time to tackle this and hasn't succeeded (yet), why would we move faster? This would be a major distraction from building our core logic, so if we had to do it, it would at least be nice to have the support of other developers. I would therefore favor another approach:
We can continue with the current efforts and focus on other areas which are core to our project, make a reasonable assumption that this will improve within a reasonable time, and if necessary help upstream to improve the NAT situation when we run into practical problems as we test.
A WebAssembly contract-key will implement the following functions:
trait ContractKey {
    /// Determine whether this value is valid for this contract
    fn validate_value(value: &Vec<u8>) -> bool;

    /// Determine whether this value is a valid update for this contract. If it
    /// is, modify the value and return true, else return false.
    fn update_value(value: &mut Vec<u8>, value_update: &Vec<u8>) -> bool;

    /// Obtain any other related contracts for this value update. Typically
    /// used to ensure an update has been fully propagated.
    fn related_contracts(value_update: &Vec<u8>) -> Vec<ContractKey>;

    /// Extract some data from the value and return it.
    ///
    /// e.g. `extractor` might contain a byte range, which will be extracted
    /// from the value and returned.
    fn extract(extractor: &Vec<u8>, value: &Vec<u8>) -> Vec<u8>;
}
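As an illustration, a toy append-only contract might implement these functions as follows. AppendOnly is a hypothetical name, related_contracts is omitted, and the two-byte [start, end) extractor encoding is an assumption for the sketch.

```rust
/// Toy contract: the value may only grow by appending bytes.
pub struct AppendOnly;

impl AppendOnly {
    /// Any byte sequence is structurally valid for this toy contract.
    pub fn validate_value(_value: &Vec<u8>) -> bool {
        true
    }

    /// The update is the full new value; accept it only if it extends the
    /// current value (i.e. the current value is a prefix of the update),
    /// then replace the value in place.
    pub fn update_value(value: &mut Vec<u8>, value_update: &Vec<u8>) -> bool {
        if value_update.starts_with(value) {
            *value = value_update.clone();
            true
        } else {
            false
        }
    }

    /// `extractor` is assumed to encode a [start, end) byte range as two
    /// bytes; out-of-bounds ranges are clamped to the value length.
    pub fn extract(extractor: &Vec<u8>, value: &Vec<u8>) -> Vec<u8> {
        let start = (extractor[0] as usize).min(value.len());
        let end = (extractor[1] as usize).min(value.len());
        value[start..end].to_vec()
    }
}
```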
Peers attempt to relay contracts that they’re likely to receive requests for. Relaying means that the peer subscribes to contract updates and can then respond to requests for that contract immediately rather than forwarding the request.
Due to the small world network topology, peers are generally more likely to receive requests for contracts close to them.
A simple cost function that looks at each resource as a proportion of the total available to the peer and takes the maximum of these proportions:
The peer's goal is to relay the contracts that maximize
New contracts will have an unknown
If the peer is exposed to a contract with an estimated
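The max-of-proportions cost function described above can be sketched as follows; contract_cost and the (used, available) pairs are illustrative, standing in for resources like bandwidth, storage, and CPU.

```rust
/// Cost of relaying a contract: each resource usage is taken as a proportion
/// of the total available to the peer, and the cost is the maximum of these
/// proportions. `usage` holds (amount used by the contract, total available).
pub fn contract_cost(usage: &[(f64, f64)]) -> f64 {
    usage
        .iter()
        .map(|(used, available)| used / available)
        .fold(0.0, f64::max)
}
```

Taking the maximum (rather than, say, the sum) means the scarcest resource dominates: a contract using 50% of available bandwidth costs 0.5 even if it barely touches storage.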
The goal of this issue is to have a functional CI pipeline working again, one whose results we can be confident in, and to re-enable the tag at the repo level. Changes required:
- Disable the failing tests in the locutus-node subcrate and incrementally re-enable them as we work on it again; also remove lint warnings from there, since that crate is now out of sync with the state of the repo.
- The tests in the locutus-node subcrate should be sensible and working; also clean up any linter warnings (including clippy).
- Once that is done, add back the tag/label at the README.
Since Locutus will be processing messages, it will be necessary to prevent spam and DoS attacks. I think that hashcash could serve this purpose. It is proven, being used in Bitcoin as well as in email spam filters. If spam and DoS attacks are not accounted for, then these attacks can push aside legitimate users who wish to use the network. Adding a small cost to sending a message would make it much harder to send large amounts of spam, since a small amount of computing power would be needed per message; this would not be much of a burden on legitimate users, but it would make spam operations and DoS attacks vastly more computationally expensive.
All the best,
Destroyer
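The hashcash idea above can be sketched as follows. This is an illustrative toy using Rust's std DefaultHasher, which is NOT cryptographically secure; a real stamp would use SHA-256 or similar and bind in the recipient and a date. The function names are invented for the sketch.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash a message together with a candidate nonce (toy hash, not secure).
fn stamp_hash(message: &[u8], nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    message.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

/// Sender: search for a nonce whose hash has `difficulty` leading zero bits.
/// Expected work is ~2^difficulty hashes, so cost grows with difficulty.
pub fn mint_stamp(message: &[u8], difficulty: u32) -> u64 {
    (0u64..)
        .find(|&nonce| stamp_hash(message, nonce).leading_zeros() >= difficulty)
        .expect("search space exhausted")
}

/// Receiver: verification is a single cheap hash.
pub fn verify_stamp(message: &[u8], nonce: u64, difficulty: u32) -> bool {
    stamp_hash(message, nonce).leading_zeros() >= difficulty
}
```

The asymmetry is the point: minting costs the sender exponential-in-difficulty work on average, while verifying costs the receiver one hash.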
Add and finalize the ring protocol related ops, including the following functionality:
Include tests and pass them all:
It's a key-value storage network that has been in development for quite some time.
Trust is the ability to predict whether an identity will do what they say they will do in the future. Karma's purpose is to allow identities to establish trust from their first interaction.
F2's Karma mechanism is inspired by Freenet's Web of Trust plugin.
#[derive(Serialize, Deserialize)]
pub struct TrustLog {
    pub public_key: EcKey<Public>,
    /// Trust entries signed by public_key
    pub log: Vec<(TrustEntry, EcSignature)>,
}
#[derive(Serialize, Deserialize)]
pub struct TrustEntry {
    /// Entry creation time in milliseconds since epoch
    pub timestamp: u64,
    /// What this entity promised to do, in a standardized but extensible format
    pub promise: String,
    /// Did the entity follow through on its promise?
    pub kept: bool,
}
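A karma score could then be derived from such a log. A minimal sketch (kept_ratio is a hypothetical helper; signatures and the promise format are omitted) computing the fraction of kept promises:

```rust
/// Simplified trust entry, mirroring the TrustEntry struct above
/// (signature omitted for the sketch).
pub struct TrustEntry {
    pub timestamp: u64,
    pub promise: String,
    pub kept: bool,
}

/// Fraction of promises the identity kept; None for an empty log,
/// since there is no basis for trust yet.
pub fn kept_ratio(log: &[TrustEntry]) -> Option<f64> {
    if log.is_empty() {
        return None;
    }
    let kept = log.iter().filter(|e| e.kept).count();
    Some(kept as f64 / log.len() as f64)
}
```

A real scoring function would likely also weight entries by recency (via timestamp) and by the trust of the signers, as in Web of Trust.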
I was thinking about what would be possible if get and put requests could contain WebAssembly code.
One thing it would open up is the possibility of multi-stage requests. This would be useful in situations where multiple requests must be made, each dependent on the state retrieved by the last. An example might be navigating an inverted-index data structure as part of a search engine.
Normally the client would need to make several get requests in sequence to navigate the data structure, but if the request itself knows how to navigate the data structure then it could avoid the round trip back to the client at each step, which could be 10-20 times faster.
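The multi-stage idea can be sketched as a loop on the node side: the request carries logic (here a plain closure standing in for WebAssembly) that inspects each retrieved state and either names the next key or finishes, avoiding a client round trip per hop. All names here are invented for the sketch.

```rust
/// What the request-embedded logic decides after seeing a retrieved state.
pub enum Step {
    /// Fetch this key next and run the logic again on its state.
    Next(Vec<u8>),
    /// Navigation is finished; return this result to the client.
    Done(Vec<u8>),
}

/// Run a multi-stage get entirely node-side: `fetch` stands in for the
/// node's state lookup, `logic` for the WebAssembly shipped in the request.
pub fn run_multistage<F, G>(mut key: Vec<u8>, fetch: F, logic: G) -> Vec<u8>
where
    F: Fn(&[u8]) -> Vec<u8>,
    G: Fn(&[u8]) -> Step,
{
    loop {
        let state = fetch(&key);
        match logic(&state) {
            Step::Next(next_key) => key = next_key,
            Step::Done(result) => return result,
        }
    }
}
```

A single client request thus expands into as many hops as the data structure needs, with only the final result crossing back to the client.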
In order to have better observability and enable performance analysis (especially in an async context), swap "log" for "tracing" (including a compatible sink for log-based dependencies so we still get debug output from them) and use the OpenTelemetry standard so we can plug data into external systems (like Prometheus or Jaeger):