
Comments (36)

mozfreddyb commented on August 23, 2024

We at Mozilla have done a thorough spec review and intend to change our standards position to positive: we are convinced by the track record Trusted Types has in preventing DOM-based XSS on popular websites (thanks to folks in the thread for providing these insights!).

That being said, there are some important concerns that need to be addressed before this can ship in a release build for all of our users. First and foremost, there is some functionality (e.g., getPropertyType, getAttributeType) that seems a bit odd, and its usage in the wild isn't clear to us. Conversations with the Google web security team confirm that there is a lack of clarity in terms of usefulness and usage on the web. Chrome has started to add UseCounters (thanks!).

We also spent some time on the Chrome implementation and found some features that are not even in the standard, which is a bit problematic (e.g., beforepolicycreation event). We expect those features to go through standardization or to be deprecated and removed similarly to the methods mentioned above.

jonathanKingston commented on August 23, 2024

Eval / location scope increase

When implementing a policy, the developer gains visibility into, and the ability to mutate, the values assigned to guarded properties and functions. APIs like eval and window.location aren't currently polyfillable, so today the first party can't observe the content going through them. With Trusted Types, a first party that embeds a third-party script via a script tag gains visibility into the code that would get evaluated.
It seems like this is a slight increase in the scope of how the web platform works today, but not a big concern.
@annevk probably has knowledge of the problem space here and if it's a concern.

Backwards compatibility

The specification addresses the ability for developers to progressively enhance their site in a few ways. Browsers that don't implement the policy type should just work as-is.

Once a browser ships Trusted Types, however, the website will be locked into using these policies for future APIs, and the following cases also cause some concern:

More sinks

If a DOM sink on an existing API was missed at implementation time, the current implementation of the policies would apply to the newly covered DOM sink. This is likely to be untested for these websites and to cause violations/breakage across these sites.
However, this seems limited in scope given that most DOM sinks are known, so breakage would likely be low if such an event occurred across all implementing browsers.

More types

If it turns out that more types are needed to fix XSS on the web, the current shape of the API would also apply to websites that aren't implementing a policy for the new type.

For example, suppose browser vendors decide there should be a CSS type that prevents XSS from injecting CSS custom properties into the page. In the current shape of Trusted Types, existing implementing sites would have enforcement applied here without handling the new type, in which case any site using CSS variables would throw CSP violations and their use would be prevented.

This seems like a big concern and likely something the specification could solve, perhaps with some kind of policy prefixing. Separating by policy also gives the developer more flexibility and clarity on what the browser would do for each type.

Partial updates

In the current model of Trusted Types, updating parameters on a typed value doesn't go back through the policy. It seems to me that changing the protocol on a URL object should be checked against the same policy that was used when it was created. The alternative would be to block all updates to values that have gone through Trusted Types, but that doesn't seem tenable.
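To make the worry concrete, here is a purely hypothetical sketch (not actual spec behaviour) using the createPolicy/createURL shape the proposal had at the time; the policy name and URL are made up:

    // Hypothetical illustration of the partial-update concern:
    const urlPolicy = window.TrustedTypes.createPolicy('url-policy', {
        createURL(url) {
            // the check only runs here, at creation time
            if (!url.startsWith('https:')) { throw new TypeError('blocked URL'); }
            return url;
        }
    });

    const checked = urlPolicy.createURL('https://example.com/app.js');
    // If a value derived from `checked` could later have its protocol (or other
    // parameters) changed, nothing would re-run the policy check above.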

I suggest there needs to be a solution here before this can ship.

Advantages to Mozilla implementing for internal about pages

Having the ability to further restrict internal about pages across other vectors would be useful.

Currently we restrict the use of unsafe HTML by running it through our internal sanitizer. It seems we could change this to a throwing model when unsafe assignments are made to the DOM sinks. My understanding is this would bring performance improvements where we can guarantee that we don't need sanitization, as sanitization can't be zero cost; however, I don't think there are many places where we could currently assert this safely. Perhaps developer tools would be set up for this.

We also gain the advantage of running policies on URL and ScriptURL sinks, which are currently seemingly impossible to audit.

Strategies for developers to roll-out

There appear to be a few clear roll-out strategies that help developers become XSS-free.

In codebases that are produced mostly by compiled output from TypeScript, Rust, Java, etc., the compiler may be able to identify values that are clearly never modified by user data. In that case the compiler can wrap sink assignments with a blank policy that doesn't check anything. The browser will then throw exceptions for third-party scripts that might not be within the developer's control. They could also choose to fall back to something like DOMPurify.
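A rough sketch of what such a blank, pass-through policy could look like (illustrative only; `element` and `STATIC_TEMPLATE` stand in for compiler-known values, and the policy name is made up):

    // A no-op policy the compiler could emit for values it has proven are
    // never derived from user data.
    const passthrough = window.TrustedTypes.createPolicy('compiler-static', {
        createHTML(html) { return html; } // no checking: values are compiler-verified
    });

    // Compiler-generated sink assignment:
    element.innerHTML = passthrough.createHTML(STATIC_TEMPLATE);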

In codebases that use a framework such as React or Angular, the frameworks can implement their own policy within their UI layer. The site then gains the advantage of the policies with code that doesn't exist within those frameworks.

Sites that mostly use custom JavaScript can choose to roll-out a stringent policy that strips XSS and gradually change their code to wrap Trusted Types in a way that becomes more performant over time.

Developers also gain a simpler interface and are not required to know where all of the 70 DOM sinks for XSS are. Simply enumerating all the places in a codebase that can call these APIs can currently be very difficult for existing sites, and providing developers with the report-only functionality is another useful tool.
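For that report-only roll-out, a sketch of what the response header might look like, using the CSP directives that appear later in this thread (the policy name and report endpoint are illustrative):

    Content-Security-Policy-Report-Only: require-trusted-types-for 'script'; trusted-types my-policy; report-uri /trusted-types-reports

With something like this, violations are reported to the endpoint without breaking the page, which lets a site map its offending sinks before switching to enforcement.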

There have been concerns about whether developers are able to make sane, informed choices with such a type system. I'm not sure this is a problem in itself, especially as developers who attempt to implement anything are likely to improve the status quo. If developers choose to use blank policies that do no data manipulation, simply to annotate for their own auditing, this still gives them flexibility they currently don't have in auditing these sinks.

Overall I would like for us to collaborate on these issues as I see it as a worthwhile improvement to the web.

bartoszniemczura commented on August 23, 2024

At Meta, we see Trusted Types as a useful security mechanism as well. I believe that broader support across browsers and broader deployment across websites would be beneficial to the web platform overall. I wrote down some data points from earlier this year here.

bzbarsky commented on August 23, 2024

My take on brief skim is that this is a good space to explore, but I'm unconvinced of the complexity of the policy setup.

@martinthomson any thoughts?

stitchng commented on August 23, 2024

I think there's enough here that I'd like to encourage you to create your own repo with its own issue tracker. Issue threads are linear, and when discussing a full-fledged counter-proposal, it's important to keep separable issues separate.

@mikesamuel Our intention was not to diverge discussions on the current spec direction but to bring to the notice of stakeholders such as yourself that a counter-proposal is out there which we feel is worth discussing. However, you are correct. We need to keep separable things separate, and we will create a new repo to track discussions on this counter-proposal on our end and perhaps reference it here. The major reason for this counter-proposal, and why we brought it to this particular issue on this repo, stems from the comments of @mozfreddyb here on this issue. We agree with his (@mozfreddyb) position, quoted below:

"Automatically sanitize within the APIs that parse strings into HTML (e.g., innerHTML).
One could also debate exposing a sanitizer API to the DOM"

We feel that exposing a sanitizer API (e.g., DOMPurify) to the DOM via the policies created/registered for the HTML trusted type (for example) will make Trusted Types usage much more comprehensive, effective, and robust.

How, under the wicg/trusted-types proposal, does a developer have to do that?

A policy author potentially has to think about all of (HTML, Script, ScriptURL, URL) but that's far less than 70 and, as you've pointed out, most policies other than the default policy tend to only deal with one of those.

The developer who is using a trusted value with a sink, probably has 1 kind of sink in mind: the sink they're using.

The developer has to write code against each DOM sink like so:

    let TrustedPolicy = window.TrustedTypes.createPolicy('my-policy', {
        createHTML(html) {
            return window.DOMPurify.sanitize(html);
        }
    });

    document.getElementById('main-page').innerHTML = TrustedPolicy.createHTML('<span x=alert(0xed)>Content!</span>');

This means that the developer will have to remember, for every DOM sink in use in the JS codebase under development/review, not to do the following (when a Trusted Types policy is in effect, i.e. registered via the CSP header):

    let TrustedPolicy = window.TrustedTypes.createPolicy('my-policy', {
        createHTML(html) {
            return window.DOMPurify.sanitize(html);
        }
    });

    document.getElementById('main-page').insertAdjacentHTML('afterbegin', '<span x=alert(0xed)>Hello there...</span>'); // this throws because a plain string is used directly with this DOM sink

The developer needs constant presence of mind to be careful with every DOM sink, since the value objects (Trusted Types) have to be used at every call site.

It doesn't seem to me that the ergonomic benefit of:
window.TrustedTypes.HTML.registerPolicySanitizer(name, methodDefinition)
over
window.TrustedTypes.createPolicy(name, { HTML: methodDefinition })
justifies the loss of generality.

We believe the ergonomic benefit of the newly proposed API for TrustedTypes (above) does not lead to a loss of the comprehensiveness or extensiveness stemming from its form. One could argue, on the contrary, that this version of the API promotes visibility into each type's policy details.

The problem with having just one kind of URL is that some sinks load URLs into the current origin as code, and some load URLs into separate origins or as constrained media.

We do agree with your assertion, as URLs for scripts serve a different contextual use case than URLs for documents. We could modify the current API signature as follows to accommodate this:

window.TrustedTypes.URL.registerPolicySanitizer('a-policy', function(TrustedType){
    return function(url, category){ // category can be either one of: 'document' , 'script', 'style'
      category = category || 'script'
      // this sanitizer will vet the "url" according to the "category"
      return window.URISanity.vet(url, category);
    };
})

With the a-policy registration above, the DOM sinks are responsible for calling the function with signature (url: String, category: String), passing the correct "url" and "category" for the DOM sink API in question, e.g.

 
/**!
 * When the DOM sink API below is called, it calls the registered
 * URL sanitizer and passes "javascript:void(prompt('Hello'));" as
 * the `url` and "script" as the `category` parameter respectively.
 */
myScriptElement.src = "javascript:void(prompt('Hello'));";

Is the proposal to do policy configuration in JavaScript instead of as document-scoped security metadata that is loaded separate from page content?

Yes, it is a proposal to do policy configuration in JavaScript (as an additional method of policy configuration alongside document-scoped metadata loaded via HTTP response headers). However, we do see a fault in allowing policy configuration in JavaScript: client-side code could be compromised by a third-party attacker, which is why we also made a safety hatch such that policy configuration is allowed only once. Trying to configure again via JavaScript or via a <meta http-equiv="Trusted-Types" content="..."> tag throws an error. So, one can only do policy configuration once, either using HTTP response headers, e.g.:

Raw HTTP headers formatting below

Trusted-Types: type-html 'block-inclusion'; type-document-url 'block-navigation' 'report-violation'; type-script-url 'block-execution' \r\n

The above draws from work and discussions here on Per-Type Enforcement

Content-Security-Policy: trusted-types a-policy; default-src 'self' https: blob: \r\n

Report-To: { "max_age": 10476400, "endpoints": [{ "url": "https://analytics.provider.com/trusted-types-errors" }] } \r\n

or using JavaScript, e.g.:

window.TrustedTypes.HTML.config = {
    throwErrors:true,
    blockIncludes:true,
    reportViolation:false
};

window.TrustedTypes.URL.config = {
    throwErrors:true,
    blockIncludes:true,
    reportViolation:true
};

The Report-To header applies to both cases however as it stipulates the endpoint to report to.

Finally, someone can close this issue out so fresh discussions on this counter-proposal can begin here.

koto commented on August 23, 2024

Thanks for your thoughts, @othermaciej. I'd like to briefly respond to the issues you raise here:

[..] This seems overly complex, perhaps unnecessarily so for the use case.

I've written a FAQ entry giving a little more background about how we ended up with this API shape. I think it offers some advantages, and based on the (limited) feedback we've gotten from developers, it doesn't seem overly difficult to deploy.

This also seems a somewhat incomplete solution to the problem of injective XSS. It provides a framework that helps the author remember to call the validator (or even ensures it, depending on use). But many common XSS attacks don't seem like they would really be defended by this. A lot of XSS attacks depend on tricking a poorly written sanitizer.

Michal Zalewski posted https://lists.w3.org/Archives/Public/public-webappsec/2016Feb/0035.html in 2016, and the experience he relates there matches the experience we have at Google. After evaluating hundreds of XSS over time, the vast majority rely on a lack of sanitization, not a flaw in the sanitizer. This design aims to target that flaw, and we believe it's worth targeting.

Others depend on string pasting that combines a markup template with some input. If the Trusted Type is applied only after that manual template pasting, then it's too late, and the bad data may have gotten in already, even if you trust that the template is safe HTML.

It's true that using Trusted Types does not ensure that you're using it well; a policy that just stamps strings as "trusted" certainly offers less protection than one which sanitizes strings before stamping. Again, though, our experience is that vulnerabilities generally result from incomplete sanitization, not flawed sanitization. This design aims to target that flaw, and we believe it's worth targeting. Also note that Trusted Types makes the flaw you're describing harder to write in the first place, as Trusted* objects are not strings, and concatenating them with a string results in a string, not a trusted object. Today, a developer may sanitize a string and then concatenate, which is unsafe. That's a hard mistake to make with these types. TT are a tool and they can be used incorrectly, just like a sanitizer may be misconfigured (see also this FAQ entry).
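A small sketch of that property (assuming enforcement is on; `userInput`, `element`, and the DOMPurify-based policy are illustrative, using the trustedTypes global as it is exposed today):

    const sanitizer = trustedTypes.createPolicy('sanitize', {
        createHTML(html) { return DOMPurify.sanitize(html); }
    });

    const safe = sanitizer.createHTML(userInput);           // a TrustedHTML object
    const pasted = safe + '<img src=x onerror=alert(1)>';   // concatenation yields a plain string
    element.innerHTML = pasted;                              // rejected: no longer a TrustedHTML value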

I don't want to hijack Mozilla's thread here. We've had many productive and helpful conversations with the community in our GitHub issues, so I'm happy to continue this discussion over there. DOM XSS is a difficult topic, and we really want to get this one right!

othermaciej commented on August 23, 2024

@koto I will try to file an issue against the spec soon. I only posted here first because of direct invitation. That said:

I've written a FAQ entry giving a little more background about how we ended up with this API shape.

I only did a quick read so far, but the FAQ tries to justify the TrustedTypePolicy, but not the PolicyFactory (which is, essentially, a FactoryFactory).

otherdaniel commented on August 23, 2024

Hi all, with Trusted Types being proposed for inclusion in Interop 2024, I'd like to give a Trusted Types update and specifically provide evidence for Trusted Types' effectiveness and real-world developer support, since several concerns around these were voiced in this thread. Today, with hindsight, we can provide solid evidence. (To be fair, I don't know whether that 2020 post still represents its author's current thinking.)

  1. Trusted Type Effectiveness / "Completeness"

    XSS used to be a significant problem at Google, making up 30% of overall VRP rewards in 2018. In 2023, they account for only 4.1%, all for bugs reported against properties that have not migrated to Trusted Types yet. In the past three years, we have not received a single XSS (in VRP; in the wild; or through own research) for a Trusted Types-enabled Google property. We expect to effectively eliminate the XSS problem as we deploy TT across all of Google.

    (The VRP numbers are relevant because they quantify results from external contributors and should thus be free of an organization's own blind spots. In other words, it's not only that we have been unable to find work-arounds against our own Trusted Types deployments; it's that no one else has been able to find them either.)

    I am unaware of any existing or proposed web platform feature with a similarly impressive anti-XSS track record.

  2. Trusted Types Complexity

    There are misconceptions about how Trusted Types is used: the goal is not to use "trusted" strings (and the factories that create them) everywhere in your application; the goal is to refactor the application to use non-XSSy DOM sinks as much as possible. The creation of trusted type factories and trusted string instances is largely a transition measure to help get an application from its legacy state to a safe-by-default state, where one can then simply use require-trusted-types-for "script" to lock it down.

    For cases where this is not possible (e.g. where the application needs to display dynamically generated HTML, or load scripts from dynamically generated URLs), the trusted type factories are used through higher-level libraries that make even those actions secure (e.g. by sanitizing the HTML); there are also proposals to add primitives to the platform that make some previously XSSy tasks secure without using Trusted Types policies (w3c/trusted-types#347, https://wicg.github.io/sanitizer-api/).

    For browser-implementation complexity, we have been able to reduce Trusted Types complexity substantially by piggy-backing on WebIDL and generating code for most of the TT checks. (Some APIs still require custom code.)

  3. Community Support and "Ordinary Web Developers"

    Today, Trusted Types is enforced on approximately 10% of page loads measured by Chrome's telemetry. It has several major deployments and it is actively being deployed and advocated for by third parties. Proposed web platform features with similarly widespread support seem to be rare.

    "Ordinary web developers" - which I assume means developers outside of large, tech-focused corporations - have also taken up Trusted Types. Here are several examples taken from HTTP Archive:

    I'll note that specifically the smaller examples use Trusted Types to simply turn off injection sinks. Those appear to be simple sites where the TT deployment - and thus XSS-safety - is likewise simple. This is by design.

    (As of today all of these use a CSP with require-trusted-types-for 'script' on their main page, enforcing or report only. Since I picked these up through HTTP Archive, I don't know anything about their deployment or motivation.)

BrucePerens commented on August 23, 2024

Trusted Types have been a useful security mechanism for my implementation. The types I have implemented are:

  1. One that must be initialized before the window load event. This assures that it doesn't come from user input.
  2. One that runs the data through a sanitizer.

If a value doesn't come from one of these and I'm using it with an injection sink, Chrome will throw an error. These also work on Firefox, but the browser doesn't enforce their use.
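A minimal sketch of my reading of those two policies (not the author's actual code; DOMPurify is the sanitizer mentioned, and `element`/`untrustedMarkup` are placeholders):

    // 1. Created during initial script execution, before the window 'load' event,
    //    so the values it stamps cannot originate from later user input.
    const bootPolicy = trustedTypes.createPolicy('boot-only', {
        createHTML(html) { return html; }
    });

    // 2. Runs all markup through a sanitizer before stamping it.
    const sanitizing = trustedTypes.createPolicy('sanitized', {
        createHTML(html) { return DOMPurify.sanitize(html); }
    });

    // Assignments to injection sinks must then use one of these policies:
    element.innerHTML = sanitizing.createHTML(untrustedMarkup);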

I'm a 66-year-old systems programmer, and can't call myself any sort of web API expert. It took a few hours to implement and I used a well-accepted sanitizer, DOMPurify, without auditing that myself.

I didn't really experience any grief with the API. Other APIs are notable for the amount of craft knowledge required beyond what is visible on MDN, not this one.

gregwhitworth commented on August 23, 2024

I posted this on Interop 2024 too, but I'm posting it across the position issues as well just in case it gets lost:

Salesforce strongly supports the Trusted Types proposal, considering the imminent regulatory changes in the Netherlands and the broader EU, as outlined in the eIDAS Regulation.

The U/PW.03 Standard of DigiD assessment demands the removal of 'unsafe-eval' from CSP, a challenge that will be mirrored across Europe. This presents critical compliance and potential reputation risks for our customers, especially in the public sector and healthcare.

Trusted Types have shown efficacy in XSS risk reduction, demonstrated by Google's successful adoption. This underlines the standard's relevance and potential impact.

annevk commented on August 23, 2024

cc @ckerschb

dbaron commented on August 23, 2024

also cc @dveditz @jonathanKingston @mozfreddyb

jonathanKingston commented on August 23, 2024

I have read this a few times already; aside from the possibility of some missing sinks (I haven't checked), it seems like a worthwhile addition to auditing.

I specifically like the CSP header to turn off all sinks; it might need to be more granular and could perhaps live in a Feature Policy?

We should also update https://github.com/mozilla/eslint-plugin-no-unsanitized to include these types.

My only questions/concerns are:

  • Naming, does TrustedHTML imply that it will never permit XSS to developers?
  • Does this have any impact on the XSS filter in the browser? Perhaps it could be disabled when this type is used?

mozfreddyb commented on August 23, 2024

I find this an interesting approach, but I'm not sure a type system is the solution the web has been looking for.

FWIW, I'd be more inclined to widely discuss the approach we have recently taken in Firefox's chrome-privileged code: automatically sanitize within the APIs that parse strings into HTML (e.g., innerHTML).
One could also debate exposing a sanitizer API to the DOM.

ekr commented on August 23, 2024

@ckerschb

ckerschb commented on August 23, 2024

After so many years XSS is still the most prevalent type of attack on web applications, and I agree that some kind of type system for manipulating the DOM could improve the situation. As mentioned in the explainer, security reviews of large applications have always been a pain point, and I can see the benefit that security reviewers could focus their efforts around the creation of trusted types. Additionally, we could enforce a variety of policies on the various types (maybe even within the DOM bindings), which sounds tempting to me.

What I am mostly worried about is the policy creation. In the current form of the proposal a careless developer would most likely just register a no-op policy and it would be up to the security team to call that out, so this part would only be a minor improvement over the current string-based APIs. Additionally, I don't think it should even be the developer who registers the policy; wouldn't it be better to separate the trusted type creation from the policy creation? Maybe we should try to think of some other policy creation and delivery mechanism. In turn, this shift of responsibility would remove complexity from the type creation system and the burden for developers to come up with a sophisticated policy. But maybe that would also require building some string sanitizers into the platform, which I personally would like to see anyway.

martinthomson commented on August 23, 2024

We've been talking in feature policy discussions about what sites might do to disable the features they most dislike. I'm starting to think that there might be an HTML6 effort in our future. There are things like document.write() and synchronous XHR that don't really need to survive long term. Element.innerHTML is a more difficult proposition though.

I too am concerned at the complexity of this. Part of that is the perl -T thing that haunts me. A bigger part of that derives from the desire to have custom sanitization routines. Maybe that is unavoidable, but there are probably ways in which this could be simplified. How far do we get with an in-browser sanitizer that essentially only prevents script execution? If we were able to neuter the current entry points and provide unsafe variants with sanitizer hooks, would that be a better model?

dbaron commented on August 23, 2024

So it sounds like people both (a) see value in what's being explored here, and (b) also have some serious concerns about it. Perhaps that implies a position of defer? Or is there some other position that you think could represent a Mozilla consensus (or a process that might achieve one)?

martinthomson commented on August 23, 2024

defer seems right in that it's premature to be deciding on this.

I suspect that part of the reason we struggle to get any conclusions here is that discussion about the details is only just beginning and sometimes those details are what our position turns on.

koto commented on August 23, 2024

Hey, given that we Intend to Experiment on TT, I'd like to revive this thread and comment on existing, valid concerns:

I'm unconvinced of the complexity of the policy setup.

Point taken. What we notice so far, especially when migrating existing code, is that policies become practically no-ops, but are then only allowed to be used in certain application parts (e.g. the sanitizer, the templating system, our own Safe* type system). Policies do introduce complexity, and were not part of the initial design, but we think that they offer interesting properties for securing existing applications (more on that below).

In the current form of the proposal a careless developer would most likely just register a no-op policy and it would be up to the security team to call that out - so this part would only be a minor improvement to the current string based APIs.

In the current form, the security team still has ways of controlling policy creation (via the headers with the unique policy name whitelist). In a similar fashion, CSP whitelists for script-src are sometimes maintained by security teams to detect developers loading scripts from other sources.

The improvement I see is an orders-of-magnitude reduction of the (security) review surface. I agree it's still possible for careless developers to remove all the benefits of the typed approach (by e.g. specifying a no-op policy and using it all over the application), but at least it becomes possible to limit that, and that's the design we'd ideally encourage in userland libraries. For example, even the no-op policy can be controlled at its creation time (via the name whitelist), and at its usage (code review determining how the policy reference is used).
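For reference, that policy-name whitelist is expressed in the CSP header, roughly like (policy names illustrative):

    Content-Security-Policy: trusted-types sanitizer templating

With this header in place, attempts to create a policy under any other name are rejected, so code review only needs to track how the handful of allowed policy objects flow through the application.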

I don’t think it should even be the developer to register the policy, wouldn’t it be better to separate the trusted type creation from the policy creation? Maybe we should try to think of some other policy creation and delivery mechanism. In turn, this shift of responsibility would remove complexity from the type creation system and the burden for developers to come up with a sophisticated policy.

In the browser implementation, the enforcement happens in the sinks, which now simply accept a typed value (with optional promotion of a string via a default policy). Policies are now the way of creating typed values. Previously, there were just TrustedHTML.unsafelyCreate functions, which are now removed, because we believed that exposing them only encouraged a design that removes the benefits of the types, i.e. there's not enough control possible and, as a consequence, there would be no review-surface reduction.
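The default-policy promotion mentioned here looks roughly like this (a sketch; the sanitizer choice, `element`, and `someLegacyString` are assumptions, using the trustedTypes global as it is exposed today):

    // If a policy named 'default' is registered, plain strings assigned to guarded
    // sinks are routed through it instead of being rejected outright.
    trustedTypes.createPolicy('default', {
        createHTML(html) { return DOMPurify.sanitize(html); }
    });

    // Legacy code that still assigns strings keeps working, but is now sanitized:
    element.innerHTML = someLegacyString;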

In the current iteration, the policy implementations are userland functions, defined per document. We couldn't find a way to make this simpler (e.g. to define them statically, in some header or metadata file), and in general having configurable type factories as JS objects seemed useful in practice. I'm all for trying to find a better way, and for brainstorming on this.

To find common ground: do we all think that having typed sinks (with the same names as the legacy ones) is useful for DOM XSS prevention? If so, then we just need to figure out the best way of creating types, given the constraints (developer laziness, large security review surface, existing code). Policies are one idea we're trying to battle-test now, but we're open to discussing other ways.

stitchng commented on August 23, 2024

There is an alternate proposal on TT from @isocroft, which can be found here as a proof of concept. His intent is to help the ongoing discussion on how TT should be rolled out in browsers and on how simple the web developer experience should be. The idea is to "invert control" in a sense, to reduce the cognitive inertia that web developers today might have with the current spec direction of TT.

So DOM sinks like innerHTML, for example, know to use a registered policy sanitizer to act on any (potentially unsafe) HTML string passed to them. Their behavior can also be modified accordingly.

Here is code that explains the alternate proposal

window.TrustedTypes.HTML.registerPolicySanitizer('alt-policy', function(TrustedType){
    window.DOMPurify.addHook('afterSanitizeElements', function(currentNode, data, config){
       // more code here
    });

    return function(dirty){
       return window.DOMPurify.sanitize(dirty, {
           USE_PROFILES: {svg: true, svgFilters: true, html: true},
           ADD_TAGS: ['trix-editor'], // Basecamp's Trix Editor
           ADD_ATTR: ['nonce', 'sha257', 'target'], // for link-able elements / Content-Security-Policy internal <script> / <style> tags
           KEEP_CONTENT: false,
           IN_PLACE: true,
           ALLOW_DATA_ATTR: true,
           FORBID_ATTR: ['ping', 'inert'], // forbid the `ping` attribute as in <a ping="http://example.com/impressions"></a>
           SAFE_FOR_JQUERY: true,
           WHOLE_DOCUMENT: false,
           ADD_URI_SAFE_ATTR: ['href']
       });
    };
});

window.TrustedTypes.URL.registerPolicySanitizer('alt-policy', function(TrustedType){
    return function(url){
      // URISanity is a fictitious URL sanitizer (doesn't exist - yet)
      return window.URISanity.vet(url);
    };
})

/****=== configurations ===****/

/* blockIncludes: blocks including potentially unsafe HTML strings into the DOM hence modifying the behavior of `innerHTML` */
/*throwErrors: throws errors when the sanitizer detects unsafe HTML string content */
window.TrustedTypes.HTML.config = {
    throwErrors:true,
    blockIncludes:true,
    reportViolation:false
};

/* blockNavigation: blocks navigating to potentially unsafe URLs hence modifying the behavior of `location.href` or `location.assign()` */
/*throwErrors: throws errors when the sanitizer detects unsafe URL string content */
window.TrustedTypes.URL.config = {
    throwErrors:true,
    blockNavigation:true,
    reportViolation:false
};

/* In the example below, `innerHTML` throws a `TrustedTypesError` because of the "ping" attribute and also does not include the HTML string in the DOM */
document.body.getElementsByName('wrapper')[0].innerHTML += '<a ping="https://www.evilattacker.com" href="#">Hello World!</a>';

/* This will also work for other DOM sinks and chokepoints too */
document.body.lastElementChild.insertAdjacentHTML('afterbegin', '<a ping="https://www.evilattacker.com" href="#">Hello World!</a>');

/* Also here, the behaviour of assign() is modified by the registered URL policy and the blockNavigation config */
document.location.assign("http://my.nopoints.edu.ng/_/profiling/28167/#section1")
<meta http-equiv="Content-Security-Policy" content="trusted-types alt-policy">

The above proposes programmatic configurability over declarative, as it is cheaper and doesn't require the web developer to keep all 70 DOM sinks in mind as they write code based on TT. It also proposes that types should not be proliferated to deal with each kind of data passed around on the front end. The use of a trusted types group (or "types form", as he (@isocroft) calls it) might be more efficient going forward.

For example: URIs for scripts / dynamic resources / stylesheets can come under a single types group or types form : URL

So, for stylesheets, there would also be:

window.TrustedTypes.CSS.registerPolicySanitizer('alt-policy', function(TrustedType){
   // code goes here
});

We would love your take on this alternate proposal on TT. The POC implementation of the above is here. You can try it out yourselves to see how it works.

mikesamuel commented on August 23, 2024

@stitchng

Thanks for this. I think there's enough here that I'd like to encourage you to create your own repo with its own issue tracker. Issue threads are linear, and when discussing a full-fledged counter-proposal, it's important to keep separable issues separate.

I'm probably misunderstanding this, so apologies if parts are off the mark.

window.TrustedTypes.HTML.registerPolicySanitizer

Yes, most policies probably only have a create method for one type.
This does look very nice and clean.

Policy names are first-come first-serve, so this would only allow each whitelist entry to correspond to one trusted type.

The default policy is likely to cover more than one trusted type.

It doesn't seem to me that the ergonomic benefit of:

window.TrustedTypes.HTML.registerPolicySanitizer(name, methodDefinition)

over

window.TrustedTypes.createPolicy(name, { HTML: methodDefinition })

justifies the loss of generality.

also doesn't require the web developer to keep all 70 DOM sinks in mind

How, under the wicg/trusted-types proposal, does a developer have to do that?

It seems to me that the developer who is crafting a trusted value only has to keep in mind the type of content that they're crafting: one of (HTML, Script, ScriptURL, URL).

A policy author potentially has to think about all of (HTML, Script, ScriptURL, URL) but that's far less than 70 and, as you've pointed out, most policies other than the default policy tend to only deal with one of those.

The developer who is using a trusted value with a sink, probably has 1 kind of sink in mind: the sink they're using.

URIs for scripts / dynamic resources / stylesheets can come under a single types group or types form : URL

This seems separable from the API for crafting policies.

The problem with having just one kind of URL is that some sinks load URLs into the current origin as code, and some load URLs into separate origins or as constrained media.

// Loads into a separate document with an origin determined by the URL (modulo javascript:)
myAElement.href = url;
// Loads constrained media
myImgElement.src = url;
// Loads content into this document.  Origin of URL not used to separate.
myScriptElement.src = url;

It seems that the first two need far less scrutiny than the third, and within Google, we've simplified migrating legacy applications a lot by just allowing any (http:, https:) URL for the first two.
Applying that same policy to the third would be disastrous.

window.TrustedTypes.HTML.config = ...

Is the proposal to do policy configuration in JavaScript instead of as document-scoped security metadata that is loaded separate from page content?

mikesamuel commented on August 23, 2024

Finally, someone can close this issue out so fresh discussions on this counter-proposal can begin here.

Thanks for responding and I will follow up there.
I suspect Mozillans may keep this issue open since it's a vehicle for them to collect their thoughts.

jonathanKingston commented on August 23, 2024

I would like to highlight a few things here, the ongoing TAG review: w3ctag/design-reviews#198 and also the feedback from @annevk w3c/trusted-types#176

Largely, the issue mentioned is around the brittle nature of the current implementation, in that policies aren't mandated against a feature and instead apply to the callsite. This comes at the cost of potentially missing many APIs as new features get added to the web.

Overall I think there is interest in implementing Trusted Types but depending on webidl/callsites may be an oversight that we end up regretting.

koto commented on August 23, 2024

It's a fair point. After a few iterations (especially after w3c/trusted-types#204, which deprecated TrustedURLs, the most loosely defined of the types), we focused the API on DOM XSS prevention, explicitly removing containment, i.e. preventing the requests, from the goals. What's left is a quite limited number of vectors/sinks, roughly:

  • callsites of HTML parsers (innerHTML, DOMParser.parseFromString, document.write etc)
  • JS execution (eval, setTimeout, javascript: navigation, script node manipulation, inline event handlers)
  • loading scripts dynamically (e.g. JSONP)

The application at callsites is important; focusing on the sources of the trusted values and erroring out as soon as an untrusted value is used in a DOM sink allows authors to verify the enforcement statically and avoid runtime surprises. We intend to shorten the feedback loop that CSP currently has. In short, I'd rather the API keep throwing at setAttribute, the innerHTML setter, and such.

It's hard to predict how specs will evolve, but it seems we might be able to assert the restrictions we want to be respected at the beginning of those two algorithms:

If possible, I'd like those assertions to be based on the TrustedTypes IDL extended attribute, so something like:

Assert: If this algorithm is called from an IDL construct, that construct has TrustedTypes extended attribute.

Additional sinks, e.g. for eval, setTimeout, and javascript: navigations are already explained in CSP terms.

Does that sound reasonable?

annevk commented on August 23, 2024

I'm not entirely sure that always works and it's definitely not as simple as that, but it's probably getting too much in the weeds for the scope of this repository.

mikesamuel commented on August 23, 2024

@jonathanKingston
Please excuse my ignorance. What does it mean "that policies aren't mandated against a feature?"
I understand the part about fundamental algorithms needing to be guarded.
Do you want type metadata to contain enough information to distinguish setters that should be guarded from those that shouldn't?
If assertions around callers (presumably statically checked) are insufficient, are you envisioning runtime enforcement, i.e. trusted types discipline for browser internals?

for the scope of this repository

Would w3c/trusted-types#176 be a better place to get into the weeds?

annevk commented on August 23, 2024

We've had a couple more rounds of brief discussions since August, both internally and with Google. The major takeaways:

  • Some trusted type enforcement will move to the underlying primitives. This reduces the number of APIs that are impacted, but might make debugging slightly more involved. The exact details are still being worked out.
  • The scope of the API will be re-reviewed to ensure it's forward compatible with adding more primitive enforcement points.
  • There's a worry that the API is too complex for adoption by the long tail of sites and the work done on frameworks to date still puts the onus on individual site developers to take advantage. We know that frameworks, e.g., Ember.js, have raised the adoption issue with CSP in the past and it would be really good to know that that isn't a problem this time around. (It does seem more doable as enforcement is configurable through APIs.)

So I guess we're somewhere between worth prototyping and non-harmful.

I think defer might be problematic as it could have somewhat significant impact to many core APIs.

dbaron commented on August 23, 2024

So I guess we're somewhere between worth prototyping and non-harmful.

At some point we have to pick.

Does @lukewagner have an opinion here?

lukewagner commented on August 23, 2024

I haven't been part of the more detailed discussions mentioned above, so I don't know the list of practical concerns, but at a high level the problem seems motivating and the rough approach makes sense, so it seems worth prototyping to me.

othermaciej commented on August 23, 2024

@annevk asked for WebKit input, so here's our brief thoughts:

The goal seems nice: "allows applications to lock down DOM XSS injection sinks to only accept non-spoofable, typed values in place of strings". But this seems incredibly hairy and complicated as a way of achieving the goal. A TrustedType is created by a TrustedTypePolicy, which is made by a TrustedTypePolicyFactory that uses a TrustedTypePolicyOptions. This seems overly complex, perhaps unnecessarily so for the use case.

This also seems a somewhat incomplete solution to the problem of injective XSS. It provides a framework that helps the author remember to call the validator (or even ensures it, depending on use). But many common XSS attacks don't seem like they would really be defended by this. A lot of XSS attacks depend on tricking a poorly written sanitizer. A framework to help you call the sanitizer doesn't address that at all. Others depend on string pasting that combines a markup template with some input. If the Trusted Type is applied only after that manual template pasting, then it's too late, and the bad data may have gotten in already, even if you trust that the template is safe HTML.

Before supporting this standard, it would be good to have some evidence that ordinary web developers (and not just top experts) can understand this complexity and derive a security benefit. Apparently much of what has been proposed here can be polyfilled, so it seems best to get more experience that way.

mikewest commented on August 23, 2024

Hey folks! As an FYI: there is an intent to ship thread ongoing on blink-dev@. IMO, the right thing for Blink to do at this point is to put this API in front of developers, and iterate on it as we learn from their feedback. If y'all believe there are things we really ought to change before shipping it, please do weigh in. As I hope has been evidenced by our conversations over the past few months, we value your input!

zcorpan commented on August 23, 2024

Thanks @otherdaniel! Reopening for review.

mozfreddyb commented on August 23, 2024

I'm not sure if this forum is the best place to discuss this further, but I'm super curious, as someone who's very unaware of the web security regulation within eIDAS. Can you help point us in the right direction?

In case this gets too much into a back-and-forth discussion, I suggest this conversation be moved to the Mozilla Matrix #security channel. https://matrix.to/#/#security:mozilla.org

mozfreddyb commented on August 23, 2024

@otherdaniel You authored the patch that adds use counters. Can you make sure this is exhaustive? From looking just at the aforementioned changeset it seems the event handler is missing. There's likely more.

mbrodesser-Igalia commented on August 23, 2024

largely a transition measure to help get an application from its legacy state to a safe-by-default state, where one can then simply use require-trusted-types-for "script" to lock it down.

@otherdaniel: wondering about "largely a transition measure". When considering trusted types policies (not the static methods like fromLiteral), are they only intended as a transition measure?
