dfinity / certified-assets
A certified assets canister written in Rust.
License: Apache License 2.0
There is an authorize(other: Principal) function to add other principals, but no deauthorize(principal: Principal) function to remove authorized principals. We would love to have this functionality in the certified assets canister because we are building a DAO product where somebody could put their asset canister under the control of the DAO.
One potential problem with this is that if somebody malicious gains control of an authorized principal, the deauthorize function could be used to lock everybody else out and upload a malicious frontend. Without a deauthorize function, however, somebody malicious could always spam the canister with a malicious frontend without anyone being able to stop them.
My proposed solution would be to add a deauthorize function while making the controllers of the canister always authorized (they could wipe the canister and re-upload anyway). This way, somebody could hand control over to a DAO simply by changing the controller, just like it normally works for other canisters.
I wouldn't mind making a pull request for this, but first I wanted to check if you would be open to the idea.
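The proposal above can be sketched in plain Rust. This is a minimal model, not the canister's actual code: `Principal` is a `String` stand-in, the names are mine, and in a real canister the controller list would come from the IC rather than being stored in the struct.

```rust
use std::collections::HashSet;

// Stand-in for ic-cdk's Principal type, so the sketch compiles on its own.
type Principal = String;

struct State {
    authorized: HashSet<Principal>,
    // In a real canister, controllers come from the IC, not from state.
    controllers: HashSet<Principal>,
}

impl State {
    // A caller is trusted if explicitly authorized OR a controller:
    // controllers can reinstall the canister anyway, so denying them gains nothing.
    fn is_authorized(&self, caller: &Principal) -> bool {
        self.authorized.contains(caller) || self.controllers.contains(caller)
    }

    fn authorize(&mut self, caller: &Principal, other: Principal) -> Result<(), String> {
        if !self.is_authorized(caller) {
            return Err("not authorized".to_string());
        }
        self.authorized.insert(other);
        Ok(())
    }

    // The proposed deauthorize: removes a principal from the explicit list.
    // Controllers remain implicitly trusted and so can never be locked out.
    fn deauthorize(&mut self, caller: &Principal, other: &Principal) -> Result<(), String> {
        if !self.is_authorized(caller) {
            return Err("not authorized".to_string());
        }
        self.authorized.remove(other);
        Ok(())
    }
}
```

The key design point is that `deauthorize` can never fully lock out the owner: even if every entry in `authorized` is removed, whoever controls the canister stays trusted, which is exactly what makes the hand-over-to-a-DAO flow safe.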
I'm attempting to do the work to claim this bounty: https://twitter.com/dominic_w/status/1467144071449915395
It seems like implementing HTTP Range request functionality would achieve video streaming, and beyond that audio streaming and really any kind of file streaming. I'm not exactly sure what is in scope for this bounty, but I hope to receive guidance on what is acceptable along the way.
Tentatively I'll be following this guide: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
I'm not sure how much of it needs to be implemented, as some functionality might not be necessary for excellent video streaming in most clients/browsers.
Supporting ETag & If-Match is crucial for being able to cache the assets on the client side.
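To make the scope concrete, here is a minimal sketch of serving a single-range request (`bytes=start-end`, per RFC 7233). It deliberately leaves out multipart and suffix ranges, and the function names are illustrative, not the canister's API.

```rust
// Parse a single-range "bytes=start-end" header against a resource of `len`
// bytes. Suffix ranges ("bytes=-500") and multipart ranges are not handled.
fn parse_range(header: &str, len: usize) -> Option<(usize, usize)> {
    if len == 0 {
        return None;
    }
    let spec = header.strip_prefix("bytes=")?;
    let (start, end) = spec.split_once('-')?;
    let start: usize = start.parse().ok()?;
    // An empty end ("bytes=6-") means "to the end of the resource".
    let end: usize = if end.is_empty() { len - 1 } else { end.parse().ok()? };
    if start > end || end >= len {
        return None; // would be a 416 Range Not Satisfiable
    }
    Some((start, end))
}

// Build the pieces of a 206 Partial Content response: the mandatory
// Content-Range header value plus the requested slice of the body.
fn partial_content<'a>(body: &'a [u8], range: &str) -> Option<(String, &'a [u8])> {
    let (start, end) = parse_range(range, body.len())?;
    let content_range = format!("bytes {}-{}/{}", start, end, body.len());
    Some((content_range, &body[start..=end]))
}
```

For example, `partial_content(b"hello world", "bytes=0-4")` yields the header value `bytes 0-4/11` and the slice `hello`. Browsers issue exactly this shape of request when seeking in a video.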
If a user tries to access /some/path/to/file, we should try to read the following paths from the store, in order:
/some/path/to/file
/some/path/to/file/index.html
/some/path/to/index.html
/index.html
This resolution order is standard URL resolution for web servers; the last entry is the fallback for single-page applications.
This might require some yak-shaving to design asset canister configuration in ways that make sense (such as additional headers, cache control, etc.).
This feature request also doesn't take certification into account. I think we would need some way to tell the certification code the canonical path of the resource.
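The lookup order above is easy to pin down in code. A minimal sketch, with `Store` as a stand-in for the canister's real asset store and illustrative function names:

```rust
use std::collections::HashMap;

// Stand-in for the canister's asset store: path -> content bytes.
type Store = HashMap<String, Vec<u8>>;

// The four candidate paths, in the order listed above.
fn candidates(path: &str) -> Vec<String> {
    let parent = path.rsplit_once('/').map(|(dir, _)| dir).unwrap_or("");
    vec![
        path.to_string(),                 // exact match
        format!("{}/index.html", path),   // the path treated as a directory
        format!("{}/index.html", parent), // index.html in the parent directory
        "/index.html".to_string(),        // single-page-application fallback
    ]
}

// Return the first candidate that exists in the store, if any.
fn resolve<'a>(store: &'a Store, path: &str) -> Option<&'a Vec<u8>> {
    candidates(path).iter().find_map(|p| store.get(p))
}
```

Note the certification concern in the text: whichever candidate wins, the certified path must be the one actually served, so `resolve` would need to report the matched key, not just the content.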
Also on the forum: https://forum.dfinity.org/t/custom-streamingcallbacktoken/9379
This issue came up while implementing #10.
I would like some clarification on StreamingCallbackToken and http_request_streaming_callback. I might need to modify the StreamingCallbackToken or implement a custom http_request_streaming_callback, but I am not sure whether I am able to do that.
Is http_request_streaming_callback a special function? I assume I could name the function anything I want when creating StreamingStrategy::Callback; is that correct? If so, can I also change the StreamingCallbackToken parameter?
The problem I am trying to solve is that of 206 Partial Content responses. I need to return very custom slices of content, because the client could request any byte range of an asset. The default http_request_streaming_callback does not quite offer the flexibility I need. I think I can hack around it with the current StreamingCallbackToken, but is there a possibility of implementing my own?
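A custom token along the lines the question suggests might look like the sketch below. This is a plain-Rust model, not ic-cdk code: the field names are my own invention, and the real token would be a Candid record chosen by whoever registers the callback.

```rust
// A hypothetical streaming token that carries an explicit byte range,
// so the callback can serve arbitrary slices for 206 Partial Content.
struct RangeToken {
    key: String,   // which asset is being streamed
    offset: usize, // next byte to send
    end: usize,    // one past the last byte the client asked for
}

const CHUNK: usize = 4; // tiny for illustration; real canisters use ~2 MB

// Models http_request_streaming_callback: return the next slice plus the
// follow-up token, or None when the requested range is exhausted.
fn next_chunk(asset: &[u8], token: RangeToken) -> (Vec<u8>, Option<RangeToken>) {
    let stop = token.end.min(token.offset + CHUNK).min(asset.len());
    let body = asset[token.offset..stop].to_vec();
    let next = if stop < token.end.min(asset.len()) {
        Some(RangeToken { offset: stop, ..token })
    } else {
        None
    };
    (body, next)
}
```

The point of the design is that each callback invocation is self-describing: the token alone determines the next slice, so the gateway can keep calling back until `None` without the canister holding any per-request state.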
In OpenChat we currently have no way for the frontend to check if it is on the latest version or not.
We want to be able to detect when our assets have been updated so that we can refresh the page and get the latest versions.
We could add a new endpoint which takes an asset's key as its argument and returns the hash of that asset.
ETag caching would work in a similar way by using the hash as the ETag value.
I'm happy to pick up this work if people approve of this solution.
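Since the canister already keeps a SHA-256 per asset for certification, the proposed endpoint reduces to a lookup. A sketch with illustrative names (`get_asset_hash` and the `Asset` shape are assumptions, not the canister's actual API):

```rust
use std::collections::HashMap;

// Minimal model of an asset record; the real one also holds content,
// headers, encodings, etc.
struct Asset {
    sha256: [u8; 32],
}

// The proposed query endpoint: key in, hash out (None if the key is unknown).
fn get_asset_hash(assets: &HashMap<String, Asset>, key: &str) -> Option<[u8; 32]> {
    assets.get(key).map(|a| a.sha256)
}

// The same hash, hex-encoded and quoted, doubles as an ETag value,
// which covers the caching half of the idea.
fn etag(hash: &[u8; 32]) -> String {
    let hex: String = hash.iter().map(|b| format!("{:02x}", b)).collect();
    format!("\"{}\"", hex)
}
```

The frontend would poll `get_asset_hash` for, say, its entry-point asset and refresh the page when the returned hash differs from the one it was served with.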
We recompute the full SHA-256 hash once all the chunks are uploaded, both as extra validation and because the caller is not obliged to specify it. If the file is large, this can lead to running out of cycles during the computation.
I see a few ways to solve this problem:
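One approach (my own suggestion; the original list of options is not included here) is incremental hashing: feed each chunk into a running hasher at upload time, so commit does no pass over the full content. The sketch below uses std's `DefaultHasher` purely as a stand-in for `sha2::Sha256`, which exposes the same update-then-finalize shape.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Per-upload state: the hash is accumulated as chunks arrive, spreading the
// O(file size) work across many messages instead of one commit call.
struct Upload {
    hasher: DefaultHasher, // stand-in for sha2::Sha256
}

impl Upload {
    fn new() -> Self {
        Upload { hasher: DefaultHasher::new() }
    }

    // Called once per uploaded chunk; O(chunk) work per message.
    fn add_chunk(&mut self, chunk: &[u8]) {
        self.hasher.write(chunk);
    }

    // Called at commit; returns the digest without re-reading any content.
    fn finalize(self) -> u64 {
        self.hasher.finish()
    }
}
```

Because the hash is a pure function of the byte stream, hashing chunk by chunk gives the same digest as hashing the whole file at once, so the extra-validation property is preserved.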