
Comments (17)

marcoscaceres commented on August 15, 2024

I just don't think this API is more special than any other powerful feature on the Web. If we don't do this for other APIs, we shouldn't do it for this one either. We should be designing these APIs to be safe, secure, and privacy preserving, instead of relying on control lists. Control lists should be seen as a form of failure and last resort, not as starting point.


marcoscaceres commented on August 15, 2024

I'm not sure this would scale, tbh, and would set up a single point of failure and point of control (which goes against the Web, in principle). We already have things like safe browsing lists that aim to keep users safe from the worst actors.


kdenhartog commented on August 15, 2024

I personally see not scaling as a good thing, but I still think it could scale if we needed it to. First, we've already shown with TLS certificates that it's possible to manage these kinds of credentials at scale. Also, as you point out, we have many other lists. However, the example you raised as a potential solution to this problem, Safe Browsing, is not used to address permission abuse on the web, which is what this would be for.

Secondly, we need to ask ourselves to what degree we want this to scale. I personally believe it would be a fundamental failure of the open web if every site could require a digital driver's license just to register. That would lead to the opposite of an open web, one where the only way users can access the full web experience is by holding a third-party attestation from a certified party. For example, that would mean the roughly 850 million people globally who lack official identity documents would not have access to a growing number of sites. That sounds pretty closed to me.

Put simply, this API is restrictive to the web but has legitimate uses. Therefore I believe we should limit its capability to legitimate, certified purposes only, which would require some form of registry.


marcoscaceres commented on August 15, 2024

Adding for discussion at the upcoming call. Although I share all of your concerns, I just don't personally believe that a domain list is going to fly or is in any way scalable (though I'm more than happy to be proven wrong).

The CG should certainly be open to discussing alternative approaches. It'd be nice to come up with at least 5 or so, and maybe the PING folks have additional ideas (hi @npdoty).


timcappalli commented on August 15, 2024

2024-05-21 call: consensus that global origin / verifier allowlists are not the right approach.


kdenhartog commented on August 15, 2024

Until very recently, with FedCM in very limited circumstances, no other API directly supplied third-party-attested PII. Every other API relied on statistical correlation via fingerprinting, which often does not satisfy a court of law as being identifiable. I argue that this change does make the API "more powerful" in terms of direct correlation, and it therefore requires greater protection from harm. By supplying and relying upon this information, it is highly likely that there will be sufficient evidence that the person operating the computer is also the person identified by the digital credential. No other API has the dual-use potential to be used as a tool to limit access to adult content, including on social media sites such as Reddit (as Texas' law directly addresses in Sec. 129B.002, clause A), and then later be legally required to be provided by those same social media sites when law enforcement subpoenas all information about the user. Do we believe that other legislative bodies won't take this even further and directly establish that all users must supply digital credentials in an effort to reduce misinformation, leading to a chilling effect on political speech?

The limitations here aren't just about limiting abuse by smaller sites either. They also provide a check and balance against legislative abuse, in the same way that Google and Mozilla did with Kazakhstan's MITM CA certificate. These limitations act as a method to limit abuse both big and small.

It's also worth reiterating that no other API stands as a way to lock at least 850 million people out of the web, and potentially more depending on regulations that require usage of these digital credentials. There are legitimate harms that have been described here and here, and this is a way to limit those harms so that the API is not abused.

With that said, is your position here really that the issues being raised by civic organizations and PING aren't a concern? Or do you believe these are valid concerns and we just need to address them in some other way that will achieve consensus among the broader W3C membership? If so, please suggest an alternative that can be critiqued on its merits, as I've done here.

I fully understand that there are heavy commercial and regulatory interests in seeing this API implemented, but the "scaling" of legitimate usage should not outweigh all the potential harms that can come about from ignoring them, or from assuming that because we've never needed such protections before we won't need them now.

Finally, it's worth pointing out that this isn't the first time a list has been used to modify well-established boundaries. First-Party Sets does the same thing (albeit collapsing control boundaries rather than adding additional ones), creating a set of allowlisted origins in order to collapse storage and other boundaries set around the origin. While the purpose and maintainability of these two lists would differ, it suggests that there is precedent here, and therefore the concept should be seriously considered.


kdenhartog commented on August 15, 2024

> Adding for discussion at the upcoming call. Although I share all of your concerns, I just don't personally believe that a domain list is going to fly or is in any way scalable (though I'm more than happy to be proven wrong).
>
> The CG should certainly be open to discussing alternative approaches. It'd be nice to come up with at least 5 or so, and maybe the PING folks have additional ideas (hi @npdoty).

Thank you, discussion was all I was after as a starting point. I'm certainly open to alternative suggestions, as my main concern is addressing the harm pre-emptively rather than pushing a specific solution. Personally, I agree that the burden of maintaining these lists is not fun (although such lists often act as a good form of checks and balances against censorship), and if we can find a better way I'm all for it. Let's continue to discuss this and see if the group can come up with a better solution.


marcoscaceres commented on August 15, 2024

I think that even if we don't come up with better solutions, it would be fascinating to see what alternative solutions we can come up with (no matter how far-fetched they are!). I'm quite looking forward to seeing what might get cooked up. There are a lot of really smart folks in this community already thinking about decentralization, so hopefully they will pitch in interesting ideas.


RByers commented on August 15, 2024

Part of the challenge here is that there are so many different use cases, multiple layers of abstraction which may each weigh in on trust decisions, and so many different possible stakeholders in managing trust stores.

Surely some wallets and issuers are going to rely on a list of acceptable verifiers without us having to build anything for that into the browser API itself (other than our core requirement of reliably conveying the verifier origin to the wallet), right?

For doing anonymous age verification (maybe with a ZKP protocol), I don't think it would be realistic or (hopefully) necessary to rely on an allow list of origins. But for more sensitive use of PII, maybe some sort of per-origin trust signal really is essential. Then who should be defining these lists, and wouldn't we expect the curator to vary depending on the credential issuer? E.g. for eIDAS credentials, might the QWAC system provide that trust signal? But for, say, US driver's licenses, it's going to be a completely different allow list maintained by some other organization, right? So it's not clear to me at what level(s) of the system such allow lists could be defined.

Anyway, I'm supportive in general of using defensible origin trust lists in the Chrome implementation. In particular, for our origin trials coming up, I'd love to find some way to limit in-the-wild testing to known origins who make some sort of attestation, maybe following something like our Privacy Sandbox enrollment. But Chrome is also pretty constrained on what we can do here - in general we can't just provide some functionality to some sites and not others based on the whim of some group of individuals. I think it's going to be essential for us as an industry to be able to iterate together on this, and for that I think it'll be essential to have good transparency on how these systems are being used in practice. To that end I'm currently working on trying to get API usage-by-origin data published like we do for push permission acceptance rate. Hopefully that'll help enable a much more data-informed debate on this topic. I also think it would be very reasonable for other browsers to take a more careful stance on this, and I'm not sure it's helpful or even necessary to try to standardize all the details across all browsers. Perhaps it's a selling point for the privacy of a given browser how it manages such a list?


npdoty commented on August 15, 2024

> in general we can't just provide some functionality to some sites and not others based on the whim of some group of individuals.

I don't understand the proposal to be that the determination would be made on anyone's whim. Data protection authorities, for example, seem entirely unwhimsical to me.

Enrollment, or well-known public documentation plus third-party evaluation and attestation, would be a way for the allow-list to be customized (but again, not at a whim, rather through a trust decision made by a user). I'm also happy to brainstorm alternatives along these lines, although we should recognize that it could actually make it more difficult or less reliable for the verifier/origin to make use of the capability.
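
To make the "well-known public documentation" idea concrete, here is a strawman of what a verifier might publish and a browser or wallet might check; the path, field names, and document shape below are purely illustrative assumptions, not anything specified anywhere:

```typescript
// Entirely hypothetical: neither this path nor these fields exist in any spec.
const REGISTRATION_PATH = "/.well-known/credential-verifier.json";

interface VerifierRegistration {
  purposes: string[];              // declared purposes, e.g. ["age-verification"]
  dataProtectionContact: string;   // published point of contact
  attestations: string[];          // URLs of third-party evaluations, if any
}

// Returns the verifier's published registration document, or null if none exists.
async function fetchRegistration(origin: string): Promise<VerifierRegistration | null> {
  try {
    const resp = await fetch(new URL(REGISTRATION_PATH, origin));
    if (!resp.ok) return null;     // no registration published
    return (await resp.json()) as VerifierRegistration;
  } catch {
    return null;
  }
}
```

The point of the sketch is only that the trust decision is grounded in something the verifier has publicly committed to, rather than in anyone's whim.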

> I think it's going to be essential for us as an industry to be able to iterate together on this, and for that I think it'll be essential to have good transparency on how these systems are being used in practice. To that end I'm currently working on trying to get API usage-by-origin data published like we do for push permission acceptance rate. Hopefully that'll help enable a much more data-informed debate on this topic.

I'm not sure permission acceptance rate is a particularly compelling piece of data, though I can see how it's relevant. A large social media site that users feel they have no choice but to use can gate access on accepting a permission and obtain a high acceptance rate; that wouldn't indicate that the behavior is appropriate, generally acceptable to users, or legal.


npdoty commented on August 15, 2024

High-level context for why registration is a useful protection, and one relevant legal citation:
https://github.com/w3cping/credential-considerations/blob/main/credentials-considerations.md#registration


RByers commented on August 15, 2024

> I don't understand the proposal to be that the determination would be made on anyone's whim. Data protection authorities, for example, seem entirely unwhimsical to me.

Oh yeah, if there is some outside credible system for assigning trust, we'd almost certainly want to leverage that. That's what I meant by the QWAC case for example. I'm just using the response I normally use to engineers who ask "why can't we just come up with a list of sites we trust?" 😊

> I'm not sure permission acceptance rate is a particularly compelling piece of data, though I can see how it's relevant.

No sorry, I didn't mean exactly like that. So for example, perhaps we could publish lists of origins in which users have provided PII from credentials (of course we don't know which users or what PII), and rank them by how often that's done on the site. That would give someone like a data protection authority a data set and test cases to draw on to try to define a reasonable line between acceptable and unacceptable use. In addition some indication of user acceptance rate may provide a useful signal into the extent to which users are saying they trust one site over another (rightly or wrongly in the context of some data protection authority's definition). WDYT?
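
As a strawman of what that published data set could look like (the field names here are invented for illustration, not an agreed format):

```typescript
// Illustrative shape for a published "usage by origin" data set: no user
// identifiers and no attribute values, only which origins requested
// credentials and how often users chose to present one there.
interface OriginUsageEntry {
  origin: string;             // e.g. "https://social.example"
  requestCount: number;       // times the API was invoked on this origin
  presentationCount: number;  // times users actually shared credential data
}

// Rank origins by how often credential data was actually presented, giving
// a data protection authority a place to start drawing its line.
function rankByPresentations(entries: OriginUsageEntry[]): OriginUsageEntry[] {
  return [...entries].sort((a, b) => b.presentationCount - a.presentationCount);
}
```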


kdenhartog commented on August 15, 2024

> Anyway, I'm supportive in general of using defensible origin trust lists in the Chrome implementation. In particular, for our origin trials coming up, I'd love to find some way to limit in-the-wild testing to known origins who make some sort of attestation, maybe following something like our Privacy Sandbox enrollment.

It would likely also be useful to allow overrides, in the same way as is done for chrome://flags/#unsafely-treat-insecure-origin-as-secure.

Long term, something I've also been thinking about is taking a scaling approach similar to how adblock lists are distributed. By this I mean that lists can be published online and maintained under their own separate rules, and users can then choose to override them as necessary. This allows these lists to build up pseudo-reputations and strikes a tradeoff of protecting the user by default while still allowing maximal overridability.

Here's an example of how we handle adblock lists in Brave from a UI perspective:
[Screenshot: Brave's adblock filter list settings UI]

There are essentially four tiers within these lists, allowing for a spectrum of trust that's ultimately controlled by the user:

  1. Default-enabled lists - These are selected by our team (and typically actively maintained by our team) and provide the backbone of the defaults.
  2. Installed lists disabled by default - These are generally reputable lists that are actively maintained and considered valuable additions the user MAY want. There may be tradeoffs, such as compatibility or performance, which is why we leave them off by default.
  3. Custom lists - These are lists the user has found and chosen to add themselves. This is likely rarely used, but the user still has the capability, for semi-maximal overridability.
  4. Custom rules - This is where the user writes the rules themselves; it's aimed at highly technical users who want to be very specific about the overrides they're making, as well as at debugging.

In principle, this allows us to achieve two key goals here:

  1. Oversight produced through checks and balances via selected trusted intermediaries (user/software provider selected)
  2. Maximal control by the user when necessary

In general, I suspect these lists would be maintained along the following lines (a rough code sketch follows the list):

  1. Default lists - Populated by DPAs and authoritative bodies authorized by DPAs. For example, in New Zealand we have the Privacy Commissioner's office, which would fill this role but could delegate the authority to the trust framework authority established by the Digital Identity Trust Framework bill.

  2. Installed lists disabled by default - Lists that would be published by potentially highly trusted NGOs or expert bodies. For example, I could see the EFF or Privacy International taking an interest in publishing such lists under their own governance models, which users could easily opt into by ticking a box.

  3. Custom lists - Lists the user finds published around the web. I'd expect these to be created by IdPs and/or issuer services to make it easier for users to start using those services.

  4. Custom origins - This would work similarly to the flag above. It requires the greatest understanding on the user's part, but also allows any user to ultimately have full control over who they choose to share their data with.
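
To make the tiered model concrete, here's a rough sketch of how the tiers might combine into a single decision; the types and precedence rules are my own assumptions, not a concrete proposal:

```typescript
// Hypothetical model of the four tiers described above; none of this is an
// agreed design, just a way to picture how the pieces could fit together.
type ListTier = "default" | "optional" | "custom-list" | "custom-rule";

interface TrustList {
  tier: ListTier;
  name: string;                // e.g. "NZ Privacy Commissioner defaults"
  enabled: boolean;            // tiers 2 and 3 stay off until the user opts in
  allowedOrigins: Set<string>;
}

interface UserOverrides {
  allow: Set<string>;          // tier 4: explicit per-origin allows
  block: Set<string>;          // tier 4: explicit per-origin blocks
}

// User-written rules win; otherwise any enabled list containing the origin
// allows the request; otherwise the request is blocked by default.
function mayRequestCredentials(
  origin: string,
  lists: TrustList[],
  overrides: UserOverrides
): boolean {
  if (overrides.block.has(origin)) return false;
  if (overrides.allow.has(origin)) return true;
  return lists.some(l => l.enabled && l.allowedOrigins.has(origin));
}
```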

> So for example, perhaps we could publish lists of origins in which users have provided PII from credentials (of course we don't know which users or what PII), and rank them by how often that's done on the site. That would give someone like a data protection authority a data set and test cases to draw on to try to define a reasonable line between acceptable and unacceptable use

I generally like this idea. We could also likely leverage the schemas of the issued credentials, the presentation requests, and the related origins in order to better understand usage without substantial privacy impact. There are additional ways we could leverage a threshold-aggregation reporting system to gain greater certainty that the schemas aren't revealing PII, but I don't think that's necessary from the start.
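
For the threshold-aggregation piece, the idea could be as simple as suppressing (origin, schema) rows below a minimum count before publication. A rough sketch, with the threshold value purely a placeholder:

```typescript
// Rough sketch of threshold aggregation before publishing: only (origin,
// schema) rows seen often enough appear in the public data set, so a rarely
// requested schema can't act as an identifier for a small group of users.
interface SchemaUsageRow {
  origin: string;
  credentialSchema: string;   // schema/doctype identifier only, never claim values
  presentations: number;
}

function thresholdAggregate(rows: SchemaUsageRow[], minCount = 50): SchemaUsageRow[] {
  return rows.filter(row => row.presentations >= minCount);
}
```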

> But Chrome is also pretty constrained on what we can do here - in general we can't just provide some functionality to some sites and not others based on the whim of some group of individuals. I think it's going to be essential for us as an industry to be able to iterate together on this, and for that I think it'll be essential to have good transparency on how these systems are being used in practice
> ...
> I also think it would be very reasonable for other browsers to take a more careful stance on this, and I'm not sure it's helpful or even necessary to try to standardize all the details across all browsers. Perhaps it's a selling point for the privacy of a given browser how it manages such a list?

Generally, I agree that this is a good differentiating feature that Brave may be interested in taking up. However, what concerns me is that not standardizing on some method here would lead to disjointed web compatibility between browsers. Additionally, without standardization it's likely to end up being a mechanism abused by sites instead of protecting users, because the likely outcome would be sites recommending users switch browsers to circumvent the issue rather than adjusting the data they request and seeking to get added to a default list. I agree that we can't just provide some functionality to some sites and not others based on the whim of some group of individuals, and I believe that if we're specific and transparent about how sites can utilize this functionality then we'll find a good balance between oversight and accessibility of this feature.


RByers commented on August 15, 2024

We had a great long discussion on the call today which @timcappalli is going to summarize and share notes for.

My main takeaway personally was that the protocol registry we're working on should perhaps contain some information about what trust signals are required in which cases, which browsers and OSes could then rely on in their threat models. I understand that EUDI wallets will already be heavily regulated to work only with trusted verifiers, and so I believe Chrome shouldn't need to add additional protection in front of such requests beyond the user consent we already have planned (though a browser like Brave may still want to). But the situation may be completely different for, say, a US state driver's license wallet app.
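
To illustrate the registry idea (not a proposal for its actual schema), an entry might carry something like the following; all field names here are invented for illustration:

```typescript
// Hypothetical shape for a protocol registry entry that records where
// verifier trust is enforced, so browsers and OSes can factor it into their
// threat models before deciding whether to add their own gating.
interface ProtocolRegistryEntry {
  protocol: string;                            // e.g. "openid4vp"
  verifierTrustEnforcedBy: "wallet" | "regulator" | "browser" | "none";
  additionalBrowserGatingRecommended: boolean; // should the browser add its own check?
  notes?: string;                              // e.g. "EUDI wallets release only to registered verifiers"
}
```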

Also, I want to point out that there are lots of ways for web pages to interface with wallet apps today, several already in deployment with more on the way (custom schemes, QR code scanning, etc.). If we create barriers in the Digital Credentials API specifically, we should just expect those use cases to rely exclusively on those other mechanisms instead. I'm personally interested in having Chrome apply identity protections broadly, potentially even including typing one's passport number into a text field. I think that was the thinking behind starting broader work within the context of W3C PING.


timcappalli commented on August 15, 2024

We'll block some time on the B call next week for anyone who was unable to join the call this past Monday.

@kdenhartog will you be able to join on 2024-02-21?

Raw notes from 2024-02-12 call

https://github.com/WICG/digital-identities/wiki/2024-02-12-Meeting-Notes#discussion-limit-access-to-the-api-based-on-known-allow-listed-origins-59

AI generated summary of the 2024-02-12 call notes on this topic:

This discussion revolves around the question of whether there should be a domain allowlist for who can request Personally Identifiable Information (PII) and if this concept applies to browser APIs or other mechanisms as well. The participants agree that there are other primitives that use allowlists, but there is hesitation about relying solely on the browser due to its limitations and the need for a more universal solution.

Some suggestions include:

  • A registry or trust infrastructure for evaluating wallets and their ability to handle requests.
  • Determining legitimate use based on the request itself and implementing a risk scoring system.
  • Considering different credential types and their protections.
  • Implementing well-known properties per relying party to indicate the required properties for handling requests.
  • Potentially having the user agent more engaged in issuance.

However, there is reluctance to build a global allowlist due to its complexity and potential limitations. Instead, it seems that a more nuanced approach, involving trustmarks or other intermediaries, may be necessary to ensure that wallets act in good faith while preserving user privacy. The conversation will continue in the next call.


kdenhartog commented on August 15, 2024

Yup, I can make that call.


timcappalli commented on August 15, 2024

2024-07-29 call: agreed with conclusion, closing issue.

