
Comments (120)

jxnblk avatar jxnblk commented on May 20, 2024 22

Hi! 👋 Co-author of the System UI Theme Specification here. Thanks for bringing everyone together in one place!

Based on the Twitter conversations, I originally thought that this sounded very similar to the efforts we're working on in the Theme Specification, but after seeing the examples in the initial comment, I suspect that there might be slightly different goals.

If I'm wrong, I'm happy to combine efforts into a single place, but either way I'd love to make sure the two efforts can work together and build on top of one another – or perhaps the two specs can live under the same roof.

For background on where we're coming from, some of our high-level goals, which I've written about here, are:

  • Create a common naming convention that other OSS libraries can adopt to ensure greater interoperability
  • Be as unopinionated as possible, striving for the lowest common denominator (most naming conventions are derived directly from CSS)
  • Be as flexible and extensible as possible
  • Use an object that is JSON-serializable and works as production-ready shippable code

Part of the reason the word theme is used here is that this should allow components to be written in a themeable way so that, for example, the same datepicker component can be installed and used at GitHub or Artsy, but match their own brand's "theme".

As far as adoption goes, the spec is built into a few OSS libraries, and there is some interest from the following projects:

  • Styled System, which is used internally by Artsy, Priceline, GitHub, and others.
  • Modulz also seems likely to be adopting part of the spec – cc @peduarte
  • Theme UI is planned to be used in official Gatsby themes and Docz
  • Tailwind CSS expressed interest – cc @adamwathan
  • Several people have reached out to Material UI, but they are limited by their release schedule
  • Smooth UI & xstyled cc @neoziro
  • DesignQL cc @johno

In my opinion, adoption among the open source community is really key to adoption in proprietary tools. That is, you have to provide something so good that it would be silly not to use, but businesses will rarely have interoperability as a primary goal.

As far as the goals set out above, here are my thoughts.

For core properties (name, value, description…), a design token file must be both human-editable and human-readable

Agreed. This is sort of a definition of code, i.e. a human-and-machine-readable language.
If humans can't edit design tokens, then I would say that you're describing something more akin to machine learning.

The format is simple, extensible, and as unopinionated as possible

This is also one of our principles, and I think it's key for any standard to achieve adoption, which is a very difficult thing to pull off.

Vendors (design system tools, design tools…) can store information for their own usage, both globally and for each token

This sounds equivalent to what I mean when I say it should be flexible and extensible. By creating a solid foundation, others should be able to build anything they need on top of that foundation.

The format translates well to existing design tools, in order to facilitate adoption

Any schema can be converted to another shape. I might be reading this the wrong way, but I would argue that accepting translation leads to fragmentation. We already have transformers for parsing data from the Figma API, or converting a theme-spec compliant object to other libraries like Tailwind CSS, but this creates friction and doesn't lead to the level of interoperability that I would like to see tools adopt.

So far, this sounds like we're fairly closely aligned with goals, even if they are slightly different. However, the code examples make me suspect that the aim here is more for a documentation format and less about a schema. If that's the case, the Theme Specification and this one could both be adopted and not be mutually-exclusive.

Given this example:

interface Token {
  name: string;
  value: any;
  description?: string;
  data?: Data;
}

The Theme Specification intentionally omits this level of schema definition and does not include documentation or metadata since that would not be desirable in production code and isn't required for interoperability. Things like code comments and type definitions are removed during compilation and more human readable documentation is generally stored in formats like markdown or MDX.
That said, you could absolutely use the schema above to store a Theme-Spec-compliant theme object that is used in an application.
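
As a concrete illustration (a sketch only, not taken from either spec), the Token shape above could carry a Theme Specification-style scale as its value; the Data type is omitted so the snippet stands alone:

interface Token {
  name: string;
  value: any;
  description?: string;
}

// A theme-spec-style scale stored as a single token's value;
// consumers index into it, e.g. fontSizes.value[2]
const fontSizes: Token = {
  name: 'fontSizes',
  value: [12, 14, 16, 20, 24, 32],
  description: 'Type scale shared across products',
};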

I'll try to chime in again with more thoughts later, but we have an initial discussion around the Theme Specification in this issue if anyone in this thread would like to join the discussion.

I hope this is helpful, and hopefully there's a way that both efforts can work together and we can stop reinventing the wheel as often. ✌️

from community-group.

mirisuzanne avatar mirisuzanne commented on May 20, 2024 18

@kevinmpowell I think it may be a mistake to look for a "best" color-format or unit. If that were possible, everyone would already be using a single approach. In order to build something truly cross-platform, I think we have to embrace the complexity/flexibility of that problem. That's the entire underlying vision of the web, but a problem most native tools are able to simplify or restrict.

I often hear "cross-platform" used as a reason to avoid CSS – but I think it might be the reason to look at CSS as the most relevant prior art. CSS is explicitly designed to work across media, adapting to any context. All the JSON/YAML solutions I've seen hand-wave away that complexity, in favor of simple units like px or hex colors.

With both colors and sizing, the latest CSS specs are pushing hard in the other direction: support for multiple color spaces (like CIE LAB/LCH), and different display gamuts, in addition to the existing formats for sRGB color (hex/hsl/etc). For sizes, we get physical values (in/cm/etc), digital-but-static (px), relative (%, vh, vw, em, rem, ch, etc), and even totally dynamic, contextual sizes (fr, flex). And it is often the least definable/convertible units that provide the most powerful systems – because they communicate dynamic relationships between tokens.

Conversion can happen between some color formats, and some unit types, but not all – and those conversions can be lossy if you cross between color-spaces or unit-types. The different formats/unit-types exist because they carry different meanings & metadata, which can't simply be discarded for the sake of simplicity. It's a hard problem to solve, and the results are complicated because they rely on consistent adaptability rather than total uniformity.
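
To make that concrete, here is a sketch (purely illustrative, not a proposal from any existing spec) of a color token that records the color space it was authored in, instead of flattening everything to an sRGB hex string:

interface ColorToken {
  name: string;
  // The space the value was authored in; converting out of it can be lossy
  colorSpace: 'srgb' | 'display-p3' | 'lab' | 'lch';
  components: number[];
}

const accent: ColorToken = {
  name: 'accent',
  colorSpace: 'lch',
  // Lightness, chroma, hue; if this falls outside the sRGB gamut,
  // a hex fallback cannot represent it without loss
  components: [52, 58, 290],
};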

It's my main concern with this project: Will non-web tools be able to interface with a token spec that is truly flexible/contextual/systematic enough to work on the open web? If so, does CSS provide a useful baseline? Or are people hoping we can limit ourselves to a single type of static unit that is easy to define, like most YAML/JSON solutions do?

from community-group.

bomberstudios avatar bomberstudios commented on May 20, 2024 16

Ale from Sketch here, quick comment to say ‘hi’ and let everyone know that:

  • we’re definitely keeping an eye on this project, and I’ll be happy to be your point of contact for all things Sketch, and…
  • interoperability is high on our list of priorities. I don’t think it makes sense for any tool to try to own the world, and playing well with others is key. Every team has different needs, and trying to be everything for everyone is a recipe for disaster 😅

from community-group.

kaelig avatar kaelig commented on May 20, 2024 13

Hi everyone, this is the most active thread in this project so I thought this would be an appropriate place to post this.

@jina and I sincerely hope you and your close ones are doing okay, considering the current situation.

Activity regarding editor recruitment is suspended for a few weeks, so people who applied can focus on their health, families, and process changes to their daily lives.

Discussion in this thread is still relevant, as it will help the editors of the specification publish a first draft when we have a team in place. I know for a fact that companies building design tools are also watching this conversation, so please keep going!

Thank you, and take care ❤️

from community-group.

danoc avatar danoc commented on May 20, 2024 11

One interesting scenario that's come up in Thumbprint is that some of our tokens have different values depending on the platform.

For example, our spacing system on web versus native is:

Token     Web    iOS/Android
space1    4      4
space2    8      8
space3    16     16
space4    24     24
space5    32     32
space6    64     48
space7    128
space8    256

(Notice that space6 has a different value and space7 and space8 are web only.)

Here's what our JSON file for this looks like:
https://github.com/thumbtack/thumbprint/blob/73bedcf83fb01f3c8617aee30b6a14e4e9143c0c/packages/thumbprint-tokens/src/tokens/space.json

Not sure if this should be supported, but it's something to consider. 🤷‍♂

from community-group.

dominikwilkowski avatar dominikwilkowski commented on May 20, 2024 11

Hi everyone,

I've been following this repo for a while now. Very keen to help and see where it goes.
Some thoughts from my side:

  • Even though we may not want to boil the ocean, it's good to talk about what the horizon looks like to make sure we don't paint ourselves into a corner. Then go back to the small scope and deliver the most basic first step.
  • I'd be very keen on a schema-driven token system. This may or may not fall into the scope of this group, but a schema-driven data structure would give us type safety and documentation out of the box, and make it possible to import only those types we need rather than building a monolith. I take inspiration from GraphQL here.
  • Grouping tokens is something we found very valuable, as opposed to atomic tokens. Things like type packs that group font styling (font weight, line height, family, etc.) into semantic groups help communicate the design intent and build a common language.

Something that would help this would be a self-referencing system where I can declare an atomic token (lineheight: 1.2 or rounded-corners: 6px) and can reuse this in the same file or system:

// Atomic tokens declared once...
const space = {
  'line-height': [1.2, 1.4, 2],
};

// ...and reused elsewhere in the same file:
const tokens = {
  color: {
    brand: '#000',
    action: '#999'
  },
  space,
  type: {
    'page-headline': {
      family: 'SF',
      weight: 900,
      size: '2.5rem',
      'line-height': space['line-height'][2]
    },
    'section-headline1': {
      family: 'SF',
      weight: 700,
      size: '2rem',
      'line-height': space['line-height'][1]
    },
    'section-headline2': {
      family: 'SF',
      weight: 700,
      size: '1.5rem',
      'line-height': space['line-height'][1]
    },
    'body': {
      family: 'SF',
      weight: 300,
      size: '1rem',
      'line-height': space['line-height'][0]
    }
  }
};

No matter where these things fall, I'm keen to help.

Cheers from Australia

from community-group.

colmtuite avatar colmtuite commented on May 20, 2024 9

👋🏻Modulz co-founder here.

Thanks for kicking this off, I'm very interested in seeing this discussion evolve.

We're a few months away from first-class theme/tokens support in Modulz. We've adopted Theme Specification for Radix (our own design system) and we're working towards adopting it for Modulz theming in general.

I'm very keen to see a standard for many aspects of design systems. I'm up for investing time and other resources into it, if people think that might be helpful.

from community-group.

Kilian avatar Kilian commented on May 20, 2024 9

There's a tension here between us naturally wanting an all-encompassing spec but also not wanting to 'boil the ocean'. I would suggest we start very small and pave the cowpaths before coming up with new things. The cowpaths here being: existing implementations in design tools, and examples used in more than 1 (or any other n) publicly available design system.

Paving cowpaths

If we zoom in on colors and the way they're used now:

  • Sketch: Colors are unnamed and ordered
  • Adobe XD: Colors are named and ordered
  • Figma: Colors are unnamed and unordered(?)
  • Many design systems: Colors are named, ordered and grouped/shown in a range

There's a discrepancy between how people want (others) to use colors, namely that certain colors should be used only for backgrounds and that people should choose between named colors like blue-400 and blue-500, and what design tools offer, which is mostly 'a list of colors'.

Keeping things small

Variants, platform-dependent tokens and metadata are all important, but what is the least this format should do to be useful? For example, these can be solved by having different token files for different variants. Vendors could choose how to interpret tokens without that having to be in the spec (if we put that in a spec and they ignore it in favor of their own translation of the true value, then what good is the spec?)

What do we want a token to be?

If we keep most of these things out, then focussing on what an actual token should encompass has us ask questions like:

  • Is a token a single value, or can it also be a range, or multiple values? (like in system-ui),
  • Following from that, are all tokens named?
  • Should we enumerate the data type? (string, int, color, json)
  • What about usage type (like for colors: "text" vs "background")
  • What happens with related token values? (Like a color range: each color is an individual token, but they live in a range. Additionally, a "heading" token might be comprised of a color, a font-family, a font-size, a font-weight, a font-variant, a line-height and letter-spacing. I like UDT's id references here, but how would that look for combinatory tokens?)
  • Most systems out there have a relatively similar grouping: colors, typography, spacing, border radius, shadow/elevation. Does it make sense to bring this along, or should tokens be ungrouped and solely depend on data type or usage type? This would be unopinionated but could also hamper adoption.

Still, some of these questions already make things bigger rather than smaller (the nature of exploration).

from community-group.

c1rrus avatar c1rrus commented on May 20, 2024 8

Hi all. I'm the person behind Universal Design Tokens, which I started with essentially the same goal in mind: Defining a single file format for (source) design token data. So glad to see that others are tackling this too! Thanks @kaelig for kicking off this thread!

Reading through the thread so far, I have a few comments / thoughts:

  • There were a couple of comments about groups of tokens. I too feel this is something the format ought to support. I'm quite fond of Style Dictionary in that respect, since it lets you nest groups as much or as little as you like (although it does recommend their "Category / Type / Item" structure). I realise that having multiple token files (possibly arranged in folders) is another way of achieving this. But, in the spirit of being "unopinionated", I'd suggest we should strive to allow some structure within the files themselves.
  • The comments about different values for a certain OS are interesting. I wonder, if you go beyond just UIs, whether a similar mechanism could be useful when using design tokens in other media. Consider print - perhaps a color token could provide an explicit Pantone color in addition to an RGB value which would override the result of an automated conversion that tools would otherwise perform.
    • That being said, I also agree that this is perhaps not a priority for version 1. As others have said, let's not boil the ocean. This is something people could use the data field for initially. If the working group spots patterns - i.e. lots of teams and/or products trying to solve the same thing - then it can be incorporated into a future spec revision as a dedicated property of the token objects. (Akin to how the WHATWG try to "pave the cowpaths" in the HTML5 spec by analysing real-world usage patterns)

I also have some additional suggestions:

  • Spec versioning. All going well, whatever spec we come up with won't be the final chapter in design tokens, so it feels inevitable that there will be future revisions. Versioning the spec therefore becomes important to help keep track of changes and communicate them to the wider community. I propose we adopt the SemVer convention for versioning the spec as it's already broadly used, various tools and libraries have support for it and I think it lends itself well to a spec like this. Similar to an API, we will have fixes, new additions and, perhaps some day, breaking changes.
  • Version identifier in file. Related to the above, I think it's a good idea for token files to identify which version of the spec they conform to. The Lona colors spec appears to have something like this in the form of $version and my own UDT format intends to do the same via a $schema property.
    • Consider a tool that is able to understand version 1.2.3 of this spec. Then, later, we introduce a v2.0.0 spec that has breaking changes. So, a 2.0.0 file is no longer guaranteed to work in that tool. Having a version identifier would let the tool determine this incompatibility automatically and warn the user.
    • Similarly, another tool might have support for both 1.x and 2.x file formats, but may need to parse them differently, so having the version in the file lets them do that. It's just like DOCTYPEs in HTML and how browsers switch between "quirks mode" and "standards mode", depending on what DOCTYPE a page has.
  • Token types. Looking at tools like Theo and Style Dictionary, they need to have some notion of what type a token's value is in order to correctly convert or transform that value. Theo does this by having type properties on the values themselves. StyleDictionary instead leans on the "Category / Type / Item" structure to determine the type of a token (and if you choose to not follow that structure, then the onus is on your own config to correctly assign types to your tokens).
    • I prefer something akin to Theo's approach for this. Having a type attribute directly on a token is more explicit and, perhaps, more human-readable too.
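
Combining the last two points, a minimal sketch of what that could look like (property names such as $version and the type values are illustrative, not settled):

const tokenFile = {
  $version: '1.0.0',  // spec version this file conforms to
  tokens: [
    { name: 'color-brand', type: 'color', value: '#0055ff' },
    { name: 'space-small', type: 'dimension', value: '8px' },
  ],
};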

from community-group.

c1rrus avatar c1rrus commented on May 20, 2024 8

Now that we have a sizable group of representatives from various companies and projects, it's perhaps time for us to become more active.

I'll start by summarising my reading of this thread so far (and please do correct me if you disagree or if I've missed anything!) and then suggest a few areas I think we need to focus on.

Summary of thread so far

  • Everyone here is in agreement that a standardised file format for design tokens is a good idea
  • I think there's a consensus (or at least, no disagreement) that:
    • We should start small and "not boil the ocean"
    • The format should be human readable
    • Support for categories / groups of tokens is desirable
      • But, the file format should not prescribe a particular organisation of design tokens
    • Support for aliases (aka references) is desirable (i.e. one token can reference the value of another)
    • Tokens should have (optional) descriptions
    • Tokens should be allowed to contain other, optional meta-data as well
    • Tokens should have explicit data types
  • Theming and notions of inheritance or overrides have spawned a separate discussion thread. I suggest that this is a higher-level concern that would build on a low-level file format, rather than be an integral feature of that file format. What do you think?
  • Other features that have been proposed and/or discussed - but arguably don't need to be part of the initial version of the spec - are:
    • Allowing multiple values for the same token (e.g. for variations between platforms).
    • Allowing proprietary vendor data per token (so tools can save any auxiliary info they need along with the design token)
    • Saving the spec version that a file conforms to (so that, if there are breaking changes between future versions of the format spec, tools can easily determine whether or not they can reliably parse a given design token file)
    • The format should be able to express relationships between tokens (perhaps via adjustment functions and math operations). E.g. tokenA = 2 * tokenB

Suggested next steps
I think we need to agree some ways of working.

For instance, all discussions so far are in GitHub issues (and to a lesser extent PR comments). A Gitter channel was set up but has barely been used. Also, a W3C Community Group is being set up, which will provide us with things like a mailing list.

I think long threads like the comments on this issue are not great for technical discussions about specific features because, as they grow ever longer, it becomes harder to track different threads. We could start creating lots of issues for specific topics (e.g. one for file format, one for data types, one for categories, etc.), but I fear that will then make it harder to keep track of the bigger picture of how everything hangs together.

What I therefore propose is that we do a PR that adds an initial skeleton file format spec as a markdown file. It could use the TypeScript pseudo code from this issue as a starting point, or it could be more of a narrative akin to what I started doing for UDT (which is very much work-in-progress and will hopefully be superseded by whatever we cook up here). I don't think it matters too much since whatever we start with will evolve quite substantially over time. Then we successively add to or edit it via PRs.

Those PRs should aim to focus on a single issue (e.g. adding a chapter about X, amending the definition of Y, fixing a typo, etc.). Each goes through a peer review and, if accepted and merged, becomes part of the emerging spec.

At some point, once it's mature enough, we can version and release a snapshot of that spec. The same process continues though in order to produce future iterations of the spec.

This of course raises the question of who are the maintainers that can merge a PR? So far I think it's only @kaelig, but I suggest we have a few more to avoid bottlenecks when folks are busy with other things or on vacation. (A related question is whether or not the same set of people should become "chairs" of the W3C community group)

Besides very specific discussions via PRs, I think there is value in having a place for broader discussions. So far it's kind of been this comment thread, but I don't think it should remain that way. We may eventually have many GitHub issues, so how would a newcomer know that the comments in one particular issue are for general chit-chat? That doesn't feel very intuitive to me.

I propose we choose one place where these general discussions happen (so that people don't have to keep tabs on multiple sources). It could be Gitter or the mailing list we'll get from the W3C community group. My preference would be the latter since it feels more official and also has a publicly accessible archive.

I'm sure there's more stuff to figure out - but I'm putting these suggestions forward to get the ball rolling. :-)

from community-group.

dabbott avatar dabbott commented on May 20, 2024 7

👋🏻 Lona here! (@mathieudutour is also Lona). I'm excited to see this get off the ground!

A couple more meta thoughts:

  • I'd like to see a separate discussion around the principles. I want to make sure we have a clear agreement on the problem(s) before getting too deep in possible solutions. For example, principle #3 (vendors storing their own info) doesn't seem important to me, but maybe with more discussion I would understand it better.
  • If this group is self-organized and we all tag our friends it's possible we'll end up with a very un-diverse group... which IMO will lead to a solution that doesn't work for a lot of people/tools/systems. It might be worth thinking about who should/shouldn't be involved and to what degree.

Sorry I haven't contributed much so far, I have some other priorities this week and next, but hope to hop in more soon

from community-group.

nikolasklein avatar nikolasklein commented on May 20, 2024 7

Hey there,
Niko from Figma here. :) Wanted to drop in, too, and say that this is really exciting and that the discussions here have been super interesting to watch evolve! Very much looking forward to seeing where this is going!

Thanks for looping me in, @kaelig!

from community-group.

oscarotero avatar oscarotero commented on May 20, 2024 7

Hi everyone.

I'm a designer and developer interested in this specification. I'm not representative of any of the companies building design tools but have some experience working with design systems and design tokens, so I'd like to leave here my opinion in case it helps.

I agree that CSS should be the basis of this new specification. It's a battle-tested language with more than 24 years of evolution and has solved many of the concepts discussed here, like colors, sizes, fonts, etc. At the same time, it has a friendly syntax, easy for humans to read and for machines to parse. So I propose creating a new format based on CSS, let's say: CTS (Cascade Tokens Sheet). This new format would look similar to CSS:

colors {
    /* This is a comment for the primary color token */
    primary: rgba(233 2 0 / 0.3);
    secondary: #333;
}

sizes {
    small: 14px;
    normal: 16px;
    medium: 20px;
    big: 40px;
}

typography {
    main {
        font-family: Inter;
        font-size: $sizes.medium;
    }
}

The main difference between CTS and CSS is that in CTS there are no selectors, but keywords with the name of the group (colors, sizes, typography), and other groups can be nested (like typography main). Note also that in typography.main.font-size we are linking the value of sizes.medium. We could use a $ to indicate this (just an idea).

Like in CSS, you can import other CTS files that extend or override some values. This allows creating different versions of tokens for different platforms:

@import './basic.cts';

colors {
    /* We are overriding the primary color */
    primary: #345;
}

Another feature we can use from CSS is media queries, allowing values to be overridden on different platforms:

@import './basic.cts';

/* Override the font in iOS */
@media iOS {
    typography {
        main {
            font-family: 'SF Pro';
        }
    }
}

The advantages of this new format:

  • Easy to update by humans (and not only by developers but also designers). Design tools like Sketch or Figma could even allow editing the tokens of a file using a simple text editor.
  • Flexible: you can insert comments, import other CTS files, override values, etc.
  • Does not require learning anything new: it uses the CSS syntax that everybody knows.
  • Evolves with CSS. New features from CSS can be implemented here: color manipulation, new color spaces (lab and lch), animations, typographic features, different units (px, cm, mm, rem, deg, etc.)
  • It's scalable, from simple & small design systems to big companies with many different platforms and services.
  • Any fixed structure will soon be obsolete, because design systems change continually. This new format does not force you to follow a specific structure, so it can be adapted to your needs.

from community-group.

mkeftz avatar mkeftz commented on May 20, 2024 6

Hello, 👋 Co-founder of Interplay here.
Thank you for starting this @kaelig 🙏

After reviewing the spec and comments above, here are some thoughts...

1. Overlap with System UI Theme Specification
Firstly, I think we should agree on if/how this lives alongside the System UI Theme Specification. It is a great spec and, as @jxnblk mentions above, has some very similar goals to this one. At Interplay we already support the System UI structure for tokens. However, from a design tool perspective, it is intentionally missing some of the extra metadata around tokens that we need. This looks like it will fill that gap. Possibly aligning the specs so design-tokens can compile to System UI?

2. Theming/Platform specific values
This is a tricky one. I feel having multiple variant values in a token would get overly complex, especially when you consider a matrix of values, e.g. what's the token value for mobile and dark mode?
As suggested previously, this will bloat the file size and make the file less human-readable/writable.

Another option would be to leave this above the spec for now, i.e. the spec only deals with the values for a specific theme/platform. You can have complete token sets (with the exact same structure) for each theme/platform/whatever. Obviously, in implementation, they could inherit from a base set.
@kaelig - is this like the theming "overrides" in Theo?
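
A rough sketch of that "complete token sets per platform" idea (names and structure are hypothetical; the values borrow from the Thumbprint spacing example earlier in the thread):

// Base token set
const base = {
  space: { space1: 4, space2: 8, space6: 64 },
};

// iOS token set: identical structure, inherits from the base set
// and only overrides what differs
const ios = {
  ...base,
  space: { ...base.space, space6: 48 },
};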

3. Token Structure
One of the biggest challenges from the design tool perspective is the structure/categorization of the tokens. We need to be able to import tokens from code and understand what they can be used for and how they should be grouped. I like Style Dictionary's "Category/Type/Item" structure but think it could be too opinionated for the spec. Possibly specific metadata fields on the tokens could work, e.g. "type" and "category"?
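
For example (a sketch with hypothetical property names, loosely following Style Dictionary's Category/Type vocabulary):

const buttonBackground = {
  name: 'button-background',
  value: '#0055ff',
  category: 'color',    // what kind of value this token holds
  type: 'background',   // what it is meant to be used for
};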

4. Cross-referencing
Not mentioned yet, but I think the ability to cross-reference tokens is important. This allows for "option" and "decision" tokens. Style Dictionary does an awesome job of this.

Other than that, I love the idea of having a spec, and happy to help however we can.

from community-group.

camacho avatar camacho commented on May 20, 2024 6

👋 Hi, Patrick from Framer here!

We are excited to see this effort get off the ground. We believe in the vision of sharing design properties seamlessly across tools and platforms and look forward to seeing this effort advance.

Please feel free to reach out to me for all things Framer related 😄

from community-group.

mirisuzanne avatar mirisuzanne commented on May 20, 2024 6

Hey all 👋 – I work on Sass, Accoutrement, and Herman. Glad to see this happening! Thanks for the invite @kaelig.

I tend towards the keep-it-simple suggestions above - with a concern for writing-by-hand. In my experience design systems have to be co-maintained by designers and developers (including CSS developers) across a range of contexts. While it's great (and necessary) to build tooling around the format – I would find it much less useful if it requires tooling to write/maintain easily.

And I'll follow that up by adding some complexity. 😬 I think alias references are necessary, but only a half solution. Design systems should be able to store meaningful relationships that include adjustment of values: e.g. space2 = space1 * 2. If I can't capture real relationships, it doesn't feel like a system - it's just a list of keys and values. I realize that becomes much more complex when you start thinking about color-spaces, and the meaning of a term like "lighten" – but something to consider…

(Or it's possible that both of my concerns should be addressed on a syntax-sugar layer above the spec?)

from community-group.

tobestobs avatar tobestobs commented on May 20, 2024 6

Hi all 👋this is Tobias, I'm part of the Material Design team at Google.

It's super exciting to see so much interest in an open Design Token format. We're thinking about how Design Systems can be described using Tokens and made portable across tools and platforms.

I know that there is the intent to keep things manageable to begin with. However, we are thinking about Tokens not just for basic stylistic attributes like color and type (subsystems in the world view of Material Design) but also for component-level values (like the background color of a button that would reference a token carrying a color value).

It would be very exciting to be able to load a Token file into a compatible design tool that would then be able to render the components described in the Tokens.
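
As a rough sketch of that idea (the alias syntax is hypothetical), a component-level token could reference a subsystem-level color token like this:

const tokens = {
  color: {
    primary: '#6200ee',
  },
  button: {
    // Component-level token: an alias pointing at the color subsystem
    background: '{color.primary}',
  },
};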

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024 5

Instead of platforms, I think a more generic term could be variant or theme. A variant could be a platform, but it could also be a dark theme for colors, etc.
The value of a token could then be an object keyed with the variant names (if the value should change between variants, of course)

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024 5
{
  variants: [
    { name: "web" },
    { name: "ios"  }
  ],
  tokens: [
    { name: "token1", value: { web: 1, ios: 2 } },
    { name: "token2", value: 3 }
  ]
}

Let's say you want to add a new variant android. Then whatever tool you are using to manage your tokens could warn you that token1 needs to be looked at because it's missing the android variant. That way you are sure that you won't have any surprises about tokens implicitly using values that they shouldn't.
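
A sketch of the kind of check such a tool could run, assuming the file shape from the example above (TypeScript, purely illustrative):

type TokenValue = number | { [variant: string]: number };

interface TokenFile {
  variants: { name: string }[];
  tokens: { name: string; value: TokenValue }[];
}

// List tokens whose per-variant values don't cover every declared variant
function missingVariants(file: TokenFile): { token: string; missing: string[] }[] {
  const names = file.variants.map((v) => v.name);
  return file.tokens
    .filter((t) => typeof t.value === 'object')
    .map((t) => ({
      token: t.name,
      missing: names.filter((n) => !(n in (t.value as object))),
    }))
    .filter((entry) => entry.missing.length > 0);
}

// Adding { name: "android" } to variants would flag token1 here.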

from community-group.

ehudhal avatar ehudhal commented on May 20, 2024 5

👋Hi everyone, Ehud from InVision here. Very excited for this initiative. Looking forward to working with you all to support sharing design properties across platforms. Thanks for bringing us together @kaelig !

from community-group.

peduarte avatar peduarte commented on May 20, 2024 5

Hello 👋 Pedro from Modulz here. As Colm mentioned above, we've adopted some parts of System UI's Theme Spec for our Design System, Radix

Since Modulz is a tool built and optimised for Design Systems we'll also be using Theme Specs within the product itself, so I'm extremely happy to see this initiative of forming a theme standard.

from community-group.

zol avatar zol commented on May 20, 2024 5

Hey everyone 👋Zoltan from Chroma here.

I'm excited to help out on the spec however I can. I'm especially interested in better ways to connect Storybook to design tools.

from community-group.

gregberge avatar gregberge commented on May 20, 2024 5

Hello! Greg from xstyled.

Very good initiative! We are already following the specification started by @jxnblk: https://system-ui.com/theme/. I want to stick as close as possible to the standard, so I am in.

from community-group.

ChucKN0risK avatar ChucKN0risK commented on May 20, 2024 5

Hi, Louis co-founder of Specify here 👋

I'm the author of Design Tokens For Dummies.

We are excited to see this conversation coming up. We believe in the vision of sharing design properties seamlessly across tools and platforms and look forward to seeing this project advance.

We're thrilled to see all tools represented here and we're excited to be a part of this effort.

from community-group.

NateBaldwinDesign avatar NateBaldwinDesign commented on May 20, 2024 5

You know what, after writing that I realize perhaps functions are not necessary: if the standard is a data model, there's no need, considering teams can create authoring tools that build or output the tokens, at which point they can just run whatever functions they desire on the tokens. So I retract my comments about function support, since it can be handled outside of the spec. Thanks for listening while I think out loud 😜

from community-group.

ilikescience avatar ilikescience commented on May 20, 2024 5

👋 I'm Matt, a designer and developer - I've been exploring how design tokens can be stored, transformed, and accessed in ways that maximize their utility. You can read that work here.

It might be useful to separate some of the individual aspects being discussed (typing, uniqueness, aliasing, order-dependency) from specific language specs and interpreters (CSS, JSON, SASS) ... it'll help clarify some of the discussion and avoid wrestling with the complexity of the existing implementations.

I'll try and kick off some of those topics — though, as we get into some of the more "pure" concepts, my understanding gets a little fuzzier, so please correct me if I'm using these terms incorrectly.

High-level structure

I personally think of a design token as a key-value pair meaning it consists of two parts: one part is used for reference (looking up the token, talking about the token), and the other is used for application (indicating the color to be used, the font family, the border radius).

Is that the mental model that y'all use, too? Might be nice to just put a checkmark in this box :)

Typing

Reading through all the great conversations happening here, it's clear that typing is very important. Not only is it a key to a human-readable and human-writable format, but it's also going to have a big impact on the machines/code that read and write the tokens.

The main question around typing: Should tokens be strongly typed, weakly typed, or not typed at all?

Strongly typed: this might involve defining types as part of the spec. A token is only properly-formed if its types are declared and validated at compile time. This makes tokens a little harder to write, but has benefits for performance in the programs that utilize them.

Weakly typed: this puts the burden of type-checking on the interpreter. Tokens are easier to write, but applications have to do some extra work to check types before utilizing the tokens.

Not typed: this is some deep theory stuff that I don't understand very well.

Uniqueness

  • Should it be safe to assume that a given token is defined once and only once?
  • If no (i.e., a token might be defined more than once), should it be safe to assume that the two values are the same?

Some analogies here:

In JS, I can't define a const more than once. ECMAScript defined this rule to help interpreters be a bit more efficient.

In CSS, I can define a rule (like .token {}) over and over again.

What are use cases for defining a token more than once? What kinds of complications would that introduce to the humans that write and maintain the code, and the machines that have to correctly interpret these definitions?

Aliasing

I've found quite a few use cases for aliasing in writing tokens or using tokens — sometimes it's a lot more convenient to think about the button-background token than the purple-50 token.

However, there are some tradeoffs that come with writing aliases into the spec. For instance, what do we do about circular references?

Order dependency

Some current implementations of design tokens produce order-independent token files — Theo and Style Dictionary both work with JSON and YAML, which are essentially associative arrays.

Earlier in the conversation, folks have mentioned some operations and use cases that might be order-dependent, like overrides and functions.


I think that there's a ton of experience we can draw on from the history of other specs and how they were adopted over time — but ultimately it's the answers to these very core questions that will inform the shape and scope of the format specification.

from community-group.

zackbrown avatar zackbrown commented on May 20, 2024 4

I know this conversation focuses on the "shape of the data" rather than the implementation, but FWIW our take on platform-specific overrides in Diez is to use TypeScript decorators on top of property declarations. variant is a solid way to think of this too, @mathieudutour!

@override({
  ios: 32,
  android: 16,
  web: 32,
}) spacingTop = 16

Is it worth elevating "cross-platform" as one of the guiding principles? Seems like a pretty big fork in the road. IMHO any spec with a claim to a Design Token Standard should treat platform-specifications(/overrides/variants) as first-class, but can do so without "hard-coding" specific platforms.

from community-group.

berkcebi avatar berkcebi commented on May 20, 2024 4

Hey, hey! Berk here from Zeplin.

It's wonderful to see all tools represented here. Super excited to be a part of this effort and looking forward to contributing and seeing it evolve over time! 🎡

from community-group.

lucastobrazil avatar lucastobrazil commented on May 20, 2024 4

Hey all, wondering what the roadmap / plan is for this initiative? LOVE the convo and hearing the ideas - is there any concrete plan for how to make decisions about this? Anyone tried / prototyped out these ideas and got any feedback?

I ask because I'm about to start a new project with our DS that will involve extracting our values as tokens and I'm super keen to stay aligned with this spec.... but it's hard to understand exactly what the spec is proposing right now.

Happy to be a guinea pig here !

from community-group.

kevinmpowell avatar kevinmpowell commented on May 20, 2024 4

Good stuff all around @mirisuzanne

I think it might be the reason to look at CSS as the most relevant prior art.

☝️ I'd like to hear more about this approach. As this work is being done as a W3C project I think looking to existing web formats as a source of truth feels "right." 👍

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024 3

An easy way to adopt the format without caring about the variants at first would be to specify which one is the default; existing design tools would then only have to care about that one
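
One possible shape for that (the default flag is purely illustrative, building on the earlier variants example):

const file = {
  variants: [
    { name: 'web', default: true },
    { name: 'ios' },
  ],
  tokens: [
    { name: 'token1', value: { web: 1, ios: 2 } },
    { name: 'token2', value: 3 },
  ],
};

// A tool that doesn't understand variants yet can read token1 as its
// default ("web") value, i.e. 1, and ignore the rest.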

from community-group.

zackbrown avatar zackbrown commented on May 20, 2024 3

@mathieudutour

I don't think we need to go out of our way to make it human-writable.

appears to be at odds with Principle 1 outlined by @kaelig :

For core properties (name, value, description…), a design token file must be both human-editable and human-readable

Granted, the joy of determining a standard is that of wrangling a variety of viewpoints & needs. And compromising! Maybe @kaelig and I are in the minority here in desiring hand-readability/writability?

Design tokens are a promising key to achieving that fabled "single source of truth" for a design language — maybe even, ultimately, to entire applications. To me, hand-editability is important because any "single source of truth" must be accessible to a maximum number of stakeholders [incl. developers and low-code designers.]

You can always build tooling on top of hand-editable code [see any IDE ever], but supporting hand-editability of machine-generated or 'machine-first' code is a very different beast. To me this all circles back to prioritizing ergonomics & minimalism in the data format.

from community-group.

blackfalcon avatar blackfalcon commented on May 20, 2024 3

Please correct me if I am wrong, but the point of this project is to define a Token Specification? The proposal itself clearly states,

At the moment, this proposal doesn't advocate for a particular file format (JSON, TypeScript…), it merely discusses what the shape of it should look like.

Is the purpose of this project to define a new standard media type? The question I have is, what is the media type? Or is that still part of the discussion.

I see this going one of a few ways.

  1. There is the pattern of the application/ld+json spec, e.g. the opportunity for application/token+json.
  2. Or possibly a new text specification, e.g. text/token

In either case, there could be the argument for a new file type, e.g. .tokn

The differences between media types are subtle. If you are not familiar with the differences in media types, read more.

I bring this up as I see forks of this conversation expressing concerns with not only the storage of data but also the transformation and production of data from other data. The differences in that are huge.

I also see opinions about strong typing and implementation. I feel these opinions are misplaced, as there are two parts to a media type: the content and the engine.

I feel that this conversation should be about the content, and the implementation of the engine is up to interpretation over time. E.g. the media type of application/json describes what JSON is, but not specifically how any platform is to execute this media type.

Lastly, as this thread is getting increasingly long, what are the next steps to aggregate opinions and maintain focus on the objective?

from community-group.

ventrebleu avatar ventrebleu commented on May 20, 2024 2

Definitely need some kind of category/grouping system, alias could be useful too.
Something like:

{
    name: 'I am a token',
    value: 'This is a value',
    description: 'A nice description.',
    category: {
        'Color': {
            value: 'Primary'
        }
    },
    alias: {
        // also known as…
        value: 'Peter Parker'
    },
    data: {
        myOwnFlag: true,
        oneMoreThing: 'yay',
        vendor: {
            '@sketch': {
                // ...
            },
            '@figma': {
                // ...
            }
        }
    }
}

from community-group.

danoc avatar danoc commented on May 20, 2024 2

Thank you @danoc – do you know if any design tool supports this kind of theming/ platform-specific variants so far?

I do not! We rolled our own a while back:
https://github.com/thumbtack/thumbprint/tree/master/packages/thumbprint-tokens

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024 2

I kind of do disagree actually haha. While I do think it should be kept human-readable, I don't think we need to go out of our way to make it human-writable. I'm sure it will be pretty easy to build tools to edit tokens once we agree on a common format. And so I'm not too worried about introducing duplication as long as it's still readable.

I'd even argue that specifying all the variants is more readable than a default value + overrides.

So my point was to make sure that nothing weird can happen with the data format and that there is only one way to write one thing. So a token should either be a constant or depend on the variant. Otherwise you have multiple ways to write a constant: default value + all the overrides with that value, default value + 1 override with that value, etc.

from community-group.

c1rrus avatar c1rrus commented on May 20, 2024 2

I also wonder whether there should be an additional v1 priority:

  • Create a conformance test suite to enable tool vendors to verify their tools output design token data correctly

That might be a validator, some kind of coded test suite or a combination of those. But I feel it's worth having something like that as soon as possible.

I believe the main motivation for an effort like this is better interoperability. I want to one day be able to save out colors from Photoshop, put them into ColorBox to generate tints and shades, run them through a contrast checker to find accessible pairings, and then save the results into my design system's source code without ever needing to convert between file formats.

So, the more tools we can provide to let developers ensure their software parses or writes our universal format correctly, the lower the risk of bugs and incompatibilities creeping in and, ultimately, fragmenting the ecosystem and reducing the benefits.

from community-group.

kaelig avatar kaelig commented on May 20, 2024 2

A lot of the (super interesting ❤️) comments mention theming. After talking with @dbanksdesign earlier today, we think it'd make sense for theming to get its own task force. I'd like to start the conversation by laying out the basics of what a simple, low level theming spec could look like and ask y'all to comment on this topic over here: #2

from community-group.

zol avatar zol commented on May 20, 2024 2

Thanks @kaelig, glad you saw it. I didn't want to mention it here as it's sort of tangential to design tokens but we're committed to increasing interop between tools in our space!

from community-group.

jthoms1 avatar jthoms1 commented on May 20, 2024 2

Hello! Josh here from Stencil DS / Ionic.

Just found this thread about the specification as we are working on some tooling around tokens. This spec is pretty exciting to us and we would love to help out if possible.

from community-group.

c1rrus avatar c1rrus commented on May 20, 2024 2

While I agree that it would make sense for adjustments to become part of the spec - as @mirisuzanne rightly says, that can help make the relationships and intents that underpin the system more explicit - I suggest that we tackle them a bit further down the line. As it stands we don't even have a basic spec for file format, permitted types of values, etc. I suggest that those are prerequisites for more advanced functionality like adjustment functions. (On a related note - maybe we need to start a backlog somewhere so that all these awesome suggestions don't get forgotten!)

It's also conceivable that tools (assuming they all begin supporting the format we define) could provide a work-around until the format "natively" supports adjustments. I'll outline a couple of examples to show what I mean:

Modular scale of text sizes
A design tool like, say, Sketch might one day be able to import all font size tokens from a file using our new format. In the absence of math operators, the file would therefore need to contain the pre-calculated values. However, from the perspective of a design system team that authors and maintains the "single source of truth" design token files, they may prefer to only store the inputs needed to generate the font sizes - i.e. a base font size and a multiplier (and perhaps min and max points on the scale).

In this scenario the design system team could maintain a "source" token file that only contains those values. But, they could use a build process to generate a "distribution" token file which is then made available to consumers of that design system. Imagine if, for instance, https://www.modularscale.com/ (or better yet, some kind of CLI utility with equivalent functionality) could load a token file, generate a scale and save out the values to a new token file. That could work.

Color palettes
In a similar vein, if tools that manipulate colors (e.g. https://www.colorbox.io/) gain support for our standardised format, they could be fed with some "source" token values and then save out (possibly to a new file, if desired) calculated "distribution" token values for other tools to then consume - or further manipulate.

So, with a setup like that you could potentially maintain a minimal set of source colors and adjustment parameters and then lean on tools to generate all the tints and shades needed for the full palette.


It's exactly this kind of interoperability that I hope we can unlock by defining a standardised file format for design tokens. I love the idea that one day I'll be able to mix and match all kinds of tools to create, maintain and also use design tokens. :-)

from community-group.

NateBaldwinDesign avatar NateBaldwinDesign commented on May 20, 2024 2

👋 Hey there folks. I'm a designer for Adobe's Spectrum and have been maintaining our token system for the past few years. I'm also the creator of Leonardo, which is a tool for generating colors based on desired contrast ratios.

So much of this thread is spot on. I have a few thoughts to add.

  1. Cross-referencing tokens is necessary, but may not need to be part of this scope
  2. Contextual changes to token values are also necessary, but may be out of this scope depending on how it's handled. I see 'contextual changes' as akin to "themes", which can encompass color changes, size value changes, and combinations of font, color, and size changes for any given context. (ie "color theme" changes color, "scale" changes sizes, "locale" changes font/size/colors)

Something that seems suggested but not fully articulated is the notion that tokens may act as configurations for outputting values. @c1rrus's mention of type scales is a good example. One may see the output of each font size as the tokens; however, the core set of tokens that defines those values is computed from the base font size & multiplier.

base-font-size: 14px;
type-scale-multiplier: 1.125;

// outputs:
font-size-100: 14px;
font-size-200: 16px;
font-size-300: 18px;
...

It's worth noting that constraints are still required in this use case, in order to ensure consistent naming of the output tokens (font-size-100) as well as how large/small they go. For example, you would not likely wish to output a 5px font size, even though it conforms to the base font + multiplier.

The topic in question appears to be whether the output tokens are stored in the token library, how naming is handled (generated?), etc. I believe cross-references and calculations will help resolve this. As @mirisuzanne said

"If I can't capture real relationships, it doesn't feel like a system "

So this could be:

base-font-size: 14px;
type-scale-multiplier: 1.125;

// outputs:
font-size-100: base-font-size;
font-size-200: base-font-size * type-scale-multiplier;
font-size-300: base-font-size * type-scale-multiplier * type-scale-multiplier;
...

Note: the above examples are only illustrating concept of input/output tokens, not an opinion on the structure.

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024 2

the core set of tokens that define those values are computed from the base font size & multiplier

That's interesting. In the more generic sense: shall we allow a token's value to be the result of a function taking other tokens as arguments, e.g. tokenC = fn(tokenA, tokenB) (here the multiply function, but we could also think about a saturate function for colors, etc.)? If so, do we need to define a set of functions in the spec? Do we need to be able to define functions with the format?
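
One conceivable way to encode such a function as data rather than executable code (entirely hypothetical; the names follow the earlier type-scale example):

const fontSize200 = {
  name: 'font-size-200',
  value: {
    fn: 'multiply',
    args: ['{base-font-size}', '{type-scale-multiplier}'],
  },
};

// Tools that understand "multiply" would resolve it; the spec would need to
// say whether such functions are predefined, user-defined, or out of scope.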

a brand may likely wish to follow a system like Pantone (or of their own invention), where optimal color values are manually selected for each colorspace

Isn't the colorspace just a variant (as discussed more on #2)? Do we need to be arbitrary about the kind of variants we allow for each "type"?

from community-group.

c1rrus avatar c1rrus commented on May 20, 2024 2

Interesting stuff.

I think we need to ask ourselves what kind of thing we want to define in this community group. Are we aiming to define a file format that simply stores inert design token data in an interoperable way? Or are we trying to create something akin to a scripting language with functions, operators, etc. that tools can execute to dynamically generate (additional) design tokens?

My assumption had been that we're trying to make the former (and thus things like functions would be out of scope - at least for now).

For what it's worth, I think we could lean on tools to fill the gap. I see no reason why a tool can't read in a source design token file, do some kind of manipulations, and then write out a generated design token file (which itself could be fed into further programmes).

Let's imagine we had a source tokens file like master-tokens.tkns:

[
  {
    "name": "base-font-size",
    "value": "1rem"
  },
  {
    "name": "font-size-multiplier",
    "value": 1.5
  }
]

It wouldn't be hard to make a command-line utility that can read such a file, be told which tokens to use as base font size and multiplier, and what range of sizes to generate, and then spit out a new token file:

cat master-tokens.tkns | modular-scale --base="base-font-size"\
    --ratio="font-size-multiplier"\
    --start-point=-1\
    --end-point=2\
    --base-name="font-size-"\
    --suffix-start-value=100\
    --suffix-increment=100 > augmented-tokens.tkns

Which might create an augmented-tokens.tkns as follows:

[
  {
    "name": "base-font-size",
    "value": "1rem"
  },
  {
    "name": "font-size-multiplier",
    "value": 1.5
  },
  {
    "name": "font-size-100",
    "value": "0.6666rem"
  },
  {
    "name": "font-size-200",
    "value": "1rem"
  },
  {
    "name": "font-size-300",
    "value": "1.5rem"
  },
  {
    "name": "font-size-400",
    "value": "2.25rem"
  }
]

You can then imagine stringing together a number of such utilities to produce a final output. So while the data being piped from one tool to the next is in our standardised token format, you might never need to actually save it out to a new token file (unless you want to). For example:

cat my-tokens.tkns | modular-scale --bla --foo | add-tints --baz -x | add-shades --bar="whatever" | tkns2sass > output.scss

Of course command-line utils are only one kind of tool. These could just as well be interactive GUI tools. Or a mix of both.

Imagine a future version of @NateBaldwinDesign's Leonardo that can read a token file, pick out all the color tokens* and present them to the user, let them select which of those to use as key colors and then save the generated colors out to a new token file (or alternatively, just add them to your original file).

Note that my imagined examples make no assumptions about how you've organised or named your tokens.

I think just a "basic" format without functions or operators could already be tremendously useful. Tools like my made up examples could easily provide the kind of results previous comments were suggesting.


*) A use-case like my hypothetical Leonardo scenario implies some notion of type in our format. Otherwise a tool has no reliable way of knowing which tokens are, say, colors and which are something else. Furthermore, I'd expect tools that only know how to do something with certain types of tokens, to just ignore any tokens that are of another type (and hopefully to leave them untouched too).

from community-group.

joestrouth1 avatar joestrouth1 commented on May 20, 2024 2

There are some things the format could specify to help make implementing functions and other tooling easier, particularly around types, although much of the discussion in this thread centers on distinguishing token types (categories) like color vs. font size.

What about typed values? I mostly see value represented as a string, with both the value and the unit/format combined like '1rem'. This is true for colors as well, like '#000000' or 'rgb(a,b,c)'.

Wouldn't it be helpful to instead store unit/format separately? Particularly for more complex token types like color or elevation.

const red: RGBToken = {
    name: 'red-100',
    value: {
        red: 255,
        blue: 0,
        green: 0
    },
    format: ColorFormat.RGB
}

const redHex: HexToken = {
    name: 'red-200',
    value: '#FF0000',
    format: ColorFormat.Hex
}
interface FontSizeToken {
    name: string
    value: number
    unit: FontSizeUnit
}

const baseFontSize: FontSizeToken = {
    name: 'font-size-base',
    value: 16,
    unit: FontSizeUnit.Pixels
}
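(The ColorFormat and FontSizeUnit enums and the RGBToken/HexToken interfaces referenced above aren't defined in the snippet; a minimal sketch of what they might look like, purely as an assumption:)

// Hypothetical supporting types for the snippets above.
enum ColorFormat { RGB = 'rgb', Hex = 'hex' }
enum FontSizeUnit { Pixels = 'px', Rem = 'rem' }

interface RGBToken {
  name: string;
  value: { red: number; green: number; blue: number };
  format: ColorFormat.RGB;
}

interface HexToken {
  name: string;
  value: string;
  format: ColorFormat.Hex;
}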

Color Playground | Font Playground

This way for any token in a particular category, a tool could check one field and know how to ingest or transform its value. format could be used to signify that value holds a reference to another token, a description of a function call, etc.

from community-group.

Martin-Pitt avatar Martin-Pitt commented on May 20, 2024 2

Pulling some ideas from JSON Schema that would help around the human-editable & human-readable principles:

  • comment: for readers or maintainers of the file/token. A note or technical documentation for when you are designing the tokens themselves and need to mention something important, such as how a token is produced from other tokens.
  • description: for the end user. A simple explanation of what this token expresses, readable from a design tool. Optionally also a title for a humanised name? (Example: news-font-size could have the title News Font Size.)
  • deprecated: as per @calebdwilliams' recommendation, a boolean to indicate deprecation and future removal.

Another thought might be declaring dependencies and/or dependents of tokens.
But I feel that ought to be implicit in whatever we decide on for expressing token relationships with calculations/references, to keep things declarative. Machines are good at working out such relationships; humans might err. However, humans are great at more declarative stuff, e.g. font-size-200: font-size-100 * type-scale-multiplier.

Mind that statuses can also be implied through separate files entirely; filesystems are a powerful system of organisation and categorisation on their own, e.g. a file of experimental tokens.
However, some things like deprecation are very hard human problems that are best tackled directly, which is why I prefer a single boolean. It composes well with comments, for example a link to a GitHub issue that explains the deprecation.
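As a rough sketch of how those fields could compose on a single token (the property names are just illustrative):

// Hypothetical token combining comment / description / deprecated metadata.
const newsFontSize = {
  name: 'news-font-size',
  title: 'News Font Size',          // optional humanised name for design tools
  description: 'Font size used for article body text.',
  value: '1.125rem',
  deprecated: true,                 // single boolean, as suggested above
  comment: 'Deprecated in favour of a newer token; see the tracking issue for details.',
};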

from community-group.

kaelig avatar kaelig commented on May 20, 2024 2

@lucastobrazil @anotheruiguy @kevinmpowell these are super interesting questions, keep them coming! They'll help the editors of the specification draft a more comprehensive Editor's draft for this module.

Update on the timeline: @jina and I are getting in touch with the applicants following the call for editors (see #38), and should have a team ready to look into this more closely over the next few weeks.

from community-group.

akalenyuk avatar akalenyuk commented on May 20, 2024 2

Hey everyone!

I'm one of the authors of the Puzzle Tokens plugin for Sketch, which allows tokenization of prototypes via CSS preprocessors, specifically Less/Sass. I'm following the conversation and wanted to share some of the experience we gained while migrating from hardcoded values to design tokens in our company.

We started with JSON as a way to store the tokens, but after several iterations we arrived at a Less/Sass-based design token format, which benefited us in many ways:

  • Well-documented, easy-to-grasp syntax that extends standard CSS
  • Native support for parameters/variables (which we used as token storage) and the ability to group them via arrays (which helped keep things clear and save some space)
  • Built-in functions to manipulate design token values while keeping them expressed as formulas. Color manipulation functions are one example: there are far fewer 'magic numbers' in the token definitions, and you can always reverse-engineer the logic a designer applied when introducing a new color.
  • The ability to describe and store rules for applying tokens via CSS selectors, which suits modern prototyping tools like Sketch and Figma very well, since you tend to organize layers via nesting and grouping that maps onto a DOM tree. Strictly speaking, this isn't supposed to be part of a design tokens spec per se, but it is really handy to be able to apply a token value to a Symbol in the same file if you want to.
  • Storing design tokens in a format that is natively readable by the development team, so there's no need for extra interpretation logic: standard Less/Sass preprocessors can consume it directly.
  • The ability to structure the design tokens by keeping separate pieces in different files

Lots of pros, but there are cons for sure. For example, we haven't had a chance to solve this for mobile platforms, as we mainly work on the web.

Hope that makes sense and helps!

from community-group.

mirisuzanne avatar mirisuzanne commented on May 20, 2024 2

I like this direction generally, though we need to be careful about how we build on top of CSS if we want to be able to evolve along with it. If we build on CSS syntax naively, we're likely to cause conflicts down the road (as has happened with every CSS pre-processor).

In some ways what we're talking about is exactly custom properties – but stored in a "selector" structure that is type-based rather than DOM-based. I'd expect that we need both established types that every tool understands (colors, fonts, etc) and the ability to create custom groupings per-project. That could potentially be handled with different selector types (eg IDs, classes, elements) – or some type of new @ blocks (@colors { ... }).

I like thinking that direction, but:

  • We need to think carefully about how tokens should cascade within these structures, so we achieve the ability to establish patterns and then override them for particular themes/media, but do it in ways that aren't surprisingly un-CSS
  • As soon as we add new syntax e.g. @colors, we're inviting a conflict if CSS ever decides to do something similar. Existing id/class/type selectors are less risky for that sort of conflict, but more risky in terms of confusing what they do.
  • In my mind, the main feature missing from CSS properties (including variables) is the ability to adjust from the current/inherited value. CSSWG has various proposals on the table, but nothing fleshed out. Even the color-adjustment syntax doesn't address this.

@Martin-Pitt I'm curious what you mean by variables being "a bit late"? I don't follow that.

from community-group.

Blind3y3Design avatar Blind3y3Design commented on May 20, 2024 2

👋 Hello everyone. Just getting some thoughts out here for the group to mull over.

High-Level Structure

I feel as though adopting or starting from the CSS syntax, and looking into using rules and properties associated with CSS, is going to cause a sort of "lock-in" and reduce the "platform/tool/language agnostic" ideal of design tokens.

CSS is by its own nature a web technology. It does not need to serve native applications, and as such has no considerations for how different native platforms handle their data or declarations.

A language like JSON/YAML/XML, on the other hand, is almost purely data storage and key/value pair associations. Using one of these languages as a point of origin would result in a less restrictive structure and allow for a simpler association between a token's name and its value.

Using something like YAML you can still get nested groupings/associations:

font:
    size:
        small: 8px
        medium: 16px
        large: 32px
    family:
        display: fancy-font
        body: normal-font

This could result in tokens like font.size.small and font.family.body.
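A small sketch of how a tool might flatten that nesting into dotted token names (TypeScript purely for illustration; the input could just as well be parsed YAML):

// Hypothetical flattening of nested groups into dotted token names,
// e.g. { font: { size: { small: '8px' } } } -> 'font.size.small': '8px'
type Group = { [key: string]: string | Group };

function flatten(group: Group, prefix = ''): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(group)) {
    const name = prefix ? `${prefix}.${key}` : key;
    if (typeof value === 'string') {
      out[name] = value;
    } else {
      Object.assign(out, flatten(value, name));
    }
  }
  return out;
}

// flatten({ font: { size: { small: '8px', medium: '16px' }, family: { body: 'normal-font' } } })
// -> { 'font.size.small': '8px', 'font.size.medium': '16px', 'font.family.body': 'normal-font' }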

One other consideration I think may be worth mentioning is that unless the CSS Working Group updates the CSS spec, and then browsers adopt it in a timely manner whatever format we choose will have to be compiled into native CSS (same logic for your pre-processor of choice) anyway.

If we're going to have to compile from "token syntax" to "platform syntax" (this could be swift, css, js, etc) I think it makes the most sense to try and make the "token syntax" simple/easy enough to allow for quick and efficient compilation to any/all of the other "platform syntax" languages.

Uniqueness

In my experience we typically define our tokens only once; however, I could see a world where, at a larger org or for something like a "bootstrap of design tokens", a team may want to take a "master" tokens file and supplement or modify it with new definitions for existing tokens.

Example:
"Master" token file defines a color palette using basic terms like primary and secondary

color:
    primary: '#7F0000'
    secondary: '#320000'

The tokens fit most of our needs, but my team wants to change the colors. Rather than create new tokens, it would be nice if we could simply override the existing token definitions.

color:
    primary: '#6666FF'
    secondary: '#33337F'

In this example the format would need to function similarly to var, let, or the CSS cascade. If I redefine something further down in the file, or in an import after the original, it would need to update the value of the key.

@import tokens-master
@import custom-token-definitions

color: $color.primary;

This would be expected to output to

color: #6666FF;
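A tiny sketch of that cascade-like behaviour, assuming both files resolve to flat maps and later definitions win (TypeScript purely for illustration):

// Hypothetical override: the custom file is merged after the master file,
// so its definitions win, like a later variable reassignment or the CSS cascade.
const masterTokens = { 'color.primary': '#7F0000', 'color.secondary': '#320000' };
const customTokens = { 'color.primary': '#6666FF', 'color.secondary': '#33337F' };

const resolved = { ...masterTokens, ...customTokens };

console.log(resolved['color.primary']); // '#6666FF'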

Aliasing

I would agree with this being a must-have, if only for the ability to define specific component/state/relational values based on an existing token's value.

The biggest use case our team has for this is branding and state values. All of our system's colors have generic names; they are then aliased to product-specific values based on that product's branding. The same logic applies to states. It makes more sense to define color.light, color.default, and color.dark once, and then alias states to those values, rather than have a new definition for each component's hover/active/disabled/focus state.

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024 1

IMO a group is just a token which has an array of tokens as its value. To reference another token, you can then use an array where each item is the name of the token you need to go through:

{
  tokens: [
    { name: "token1", value: 1 },
    { name: "group", value: [
        { name: "nestedToken", value: 2 },
        { name: "nestedToken2", value: 3 }
    ]},
    { name: "refToken", value: ["group", "nestedToken2"] }
  ]
}
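A quick sketch of how a tool might resolve such a reference path, under the group-as-token idea above (names are assumptions):

// Hypothetical resolver: follows a ["group", "nestedToken2"] style reference
// through nested group tokens whose value is an array of tokens.
interface Token {
  name: string;
  value: unknown;
}

function resolveReference(tokens: Token[], path: string[]): Token | undefined {
  let scope: Token[] = tokens;
  let found: Token | undefined;
  for (const segment of path) {
    found = scope.find(t => t.name === segment);
    if (!found) return undefined;
    scope = Array.isArray(found.value) ? (found.value as Token[]) : [];
  }
  return found;
}

// resolveReference(tokens, ["group", "nestedToken2"]) -> { name: "nestedToken2", value: 3 }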

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024 1

Of course it needs to be human-writable, and if it is readable, it will be writable. But introducing confusion (and multiple ways to write the same thing is definitely confusing) for the sake of convenience when writing the file by hand isn't something we should lean towards, IMO.

from community-group.

NateBaldwinDesign avatar NateBaldwinDesign commented on May 20, 2024 1

I also believe that types are an important aspect of the spec, and wonder whether formats differ per type. The example I have in mind is regarding color. The requirements for alternative values for color are different from the notion of alternative values based on platform or context.

For instance, a brand may wish to follow a system like Pantone (or one of their own invention), where optimal color values are manually selected for each colorspace, knowing that a direct conversion will not result in an optically correct output. The best example is CMYK to RGB; however, given that the web uses sRGB and many device screens support P3, this becomes even more relevant.

colors: {
  'Red': [
    // each tint/shade is an object in this array
    {
      name: 'red-100',
      contrast: 3,
      values: {
        srgb: 'rgb(a,b,c)',
        p3rgb: 'rgb(a,b,c)',
        cmyk: ...,
        lab: ...
      }
    },
    ...
  ],
  'Blue': [ ... ],
  'Gray': [ ... ]
}

I wonder how to handle grouping, which can be seen at a few levels. The first level is a set of tints and shades, each with the necessary data; the second is a set of the color sets (all tints and shades for every color in the palette).

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024 1

You don't really need a scripting language to define some sort of relationships:

[
  {
    "name": "base-font-size",
    "value": "1rem"
  },
  {
    "name": "font-size-multiplier",
    "value": 1.5
  },
  {
    "name": "font-size-100",
    "value": {
      "func": "multiply",
      "args": ["base-font-size", "font-size-multiplier"]
    }
  },
  {
    "name": "font-size-200",
    "value": {
      "func": "multiply",
      "args": ["font-size-100", "font-size-multiplier"]
    }
  },
  {
    "name": "font-size-300",
    "value": {
      "func": "multiply",
      "args": ["font-size-200", "font-size-multiplier"]
    }
  },
  {
    "name": "font-size-400",
    "value": {
      "func": "multiply",
      "args": ["font-size-300", "font-size-multiplier"]
    }
  }
]

As long as you know what multiply means, it's not too hard to create really powerful sets of tokens that can be manipulated in a GUI tool (as opposed to the pipe example you proposed, where the output is inert and doesn't capture the relationship between two tokens). You could actually think of an alias as the identity function.
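For illustration, a rough sketch of how a tool that does know what multiply means could evaluate those values (the shapes follow the JSON above; the unit handling is deliberately naive):

// Hypothetical evaluator for { func, args } values, resolving args by token name.
type Value = string | number | { func: string; args: (string | number)[] };

interface Token { name: string; value: Value; }

function evaluate(tokens: Token[], value: Value): number {
  if (typeof value === 'number') return value;
  if (typeof value === 'string') {
    const referenced = tokens.find(t => t.name === value);
    // Treat strings as token references; "1rem" falls back to naive unit stripping.
    return referenced ? evaluate(tokens, referenced.value) : parseFloat(value);
  }
  if (value.func === 'multiply') {
    return value.args.map(arg => evaluate(tokens, arg)).reduce((a, b) => a * b, 1);
  }
  throw new Error(`Unknown function: ${value.func}`);
}

// evaluate(tokens, { func: 'multiply', args: ['base-font-size', 'font-size-multiplier'] })
// -> 1.5 (with base-font-size resolving to "1rem" -> 1)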

I see the CLI tools you mention as the kind of one-to-many functions @NateBaldwinDesign talks about: useful for bootstrapping tokens, but you still want to store the results in a way that:

  • can be modified by a tool without knowing how you bootstrapped them
  • can be evaluated by a tool to render them

from community-group.

Martin-Pitt avatar Martin-Pitt commented on May 20, 2024 1

If we are already leaning on CSS for values & units, given the mentions of rem and such, that module also has mathematical expressions: https://www.w3.org/TR/css-values-4/#calc-notation

To make it easier for tools/scripts to read tokens, it might be possible to compute most or all values into a computedValue or the like: https://www.w3.org/TR/css-values-4/#calc-computed-value
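For example, a token could carry both forms (a hypothetical shape, borrowing CSS calc() notation for the authored value):

// Hypothetical token keeping both the authored expression and its computed result,
// so simpler tools can read computedValue while richer tools keep the relationship.
const fontSize200 = {
  name: 'font-size-200',
  value: 'calc(1rem * 1.5)',   // authored, relationship-preserving
  computedValue: '1.5rem',     // pre-resolved for tools that only consume plain values
};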

from community-group.

calebdwilliams avatar calebdwilliams commented on May 20, 2024 1

Love the conversation that's going on here. One thing I think is missing is a standardized means of deprecating a token (or indicating the token's usability status). While this could be defined in some loosely-typed metadata property, the token's status is something that I believe can and should be standardized.

from community-group.

kaelig avatar kaelig commented on May 20, 2024 1

Is the end goal to advocate for a particular file format? In a lot of ways this feels like one of those scenarios where the medium is the message and that the file format that these tokens are written to will largely influence their structure.

Picking (or inventing) a file format is something that's on the table and could definitely influence the structure. I'd love to see a few people work on this specific topic – I expect this is something the format module editors will look into very soon.

from community-group.

joestrouth1 avatar joestrouth1 commented on May 20, 2024 1

I agree that your use case is something to be stored in one place and consumed by anything; I just don't know that it's a "token". One is a value; the other is a set of values of different types plus a selector that says when to apply them. It's like a CSS declaration vs a CSS ruleset. We can't write a ruleset until we know what a declaration looks like. Their responsibilities are different enough to warrant separate consideration; I don't know that the format for the smallest piece must necessarily work for larger compositions.

from community-group.

NateBaldwinDesign avatar NateBaldwinDesign commented on May 20, 2024 1

@lucastobrazil I agree with @joestrouth1 on this. In terms of your problem, the font sizes should each be a token, and line heights should be separate tokens as well. You can use various strategies such as abstracting or grouping tokens to accomplish specificity such as the combination of line height + font size in the context of your particular heading component. But this is a separate issue from the spec; this is token strategy and implementation.

from community-group.

mirisuzanne avatar mirisuzanne commented on May 20, 2024 1

File format & typing both raise questions for me about how aware the format needs to be. I find many of the common tools frustrating because they use formats like JSON or YAML that have no concept of actual design data-types (lengths, colors, etc), so everything is considered a string.

That may be a tooling issue, but also may be worth thinking about… If it's worth having a typed format, is it worth ensuring that format includes design-specific types?

from community-group.

blackfalcon avatar blackfalcon commented on May 20, 2024 1

@mirisuzanne 100% agree. That is one of the reasons I raised the question as to what thought has been put into defining the media type. Without that, talking about 'features' seems foolhardy IMHO as there is no real way to have that discussion.

I like what you are saying: this is not simply data, but data with type structure. Data will have implicit meaning. This also means that tooling required today will no longer be needed as this spec is adopted. Data sent in the .tokn file can be directly consumed by the browser and applied to the CSSOM. I like that.

I don't think we can do something like text/token+css unless the tokens are written using the CSS spec. At least that is how I understand it.

We could be looking at proposing application/css ... whereas we would not be restricted by the CSS spec, but that will be a really huge uphill road.

Sorry ... ideating out loud. But in the end, I see application/token+json being the best option.

from community-group.

kevinmpowell avatar kevinmpowell commented on May 20, 2024 1

On Value Types

@mirisuzanne and @blackfalcon I'm also curious about the various types we'd have. I know there are other aspects of the spec being worked on in different modules: Color, Space, Typography, etc. I would imagine they'll be opinionated about the formats of those different values, but I think it's worth discussing here as well.

In thinking about values is there a "best base" value type that exists in each category? A simple example to get the conversation going:

For color, there's HEX, but there's also RGBa. A color may be more accurately represented in RGBa than with HEX. If the token value was stored as an RGBa, the HEX could be derived from RGBa (with the loss of the alpha value), but the inverse is not true.

Some consumers may not support the alpha value, but token authors may want to supply it.

I'm also ideating out loud, but in a way it's a bit like progressive enhancement. My example color token has the alpha value for consumers that can use it, but for others a simpler HEX can be derived.
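As a concrete sketch of that derivation, dropping the alpha channel for consumers that can't use it:

// Hypothetical conversion from an RGBa token value to a 6-digit hex value.
// The alpha channel is simply discarded for consumers that can't use it.
function rgbaToHex(r: number, g: number, b: number, _alpha?: number): string {
  const toHex = (channel: number) => channel.toString(16).padStart(2, '0');
  return `#${toHex(r)}${toHex(g)}${toHex(b)}`;
}

console.log(rgbaToHex(255, 0, 0, 0.5)); // '#ff0000', with the alpha value lost as noted above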

Now zoom out from the specifics of color and think about space (px, rem, em, in, cm?), typography (px, rem, em, pt), and any others. Are there specific units of measure or representations that serve as the "best base" for the token value?

from community-group.

gmilano avatar gmilano commented on May 20, 2024 1

Hi guys, I'm the CTO of GeneXus, a low-code development platform. GeneXus will probably be a user of any outcome of this group.
I have been following the conversation; we are really interested in the final result of this group.
We have been working on several of the topics around Design Tokens because we allow modeling Design Tokens inside our platform; we couldn't wait for a final specification, so, like many others, we started our own.
Obviously we are open to trying to find a single way to specify design tokens, not only for web but for native mobile apps too.

The definition of our Design Token language is still a work in progress; you can take a look at it and leave us your comments on the language we are defining.

We already made decisions on several of the things being discussed here:

  • We created a new format for Design Tokens

ie:

tokens MyDesignSystemTokens
{
  #colors
  {
    Primary: rgb(108, 154, 235);
    Secondary: #ab2;
    Background: rgba(200,200,150,0.9);
  }
  #spacing
  {
    Small: 12px;
    Medium: 22px;
    Large: 45px;
  }
  #fonts
  {
    Primary: arial;
    Secondary: georgia;
    Third: roboto;
  }
}
  • The concepts of the language are: TokenSets, Token Type, Token Name, Token Value
  • Tokens can have associated metadata
  • There are parameters for TokenSets and conditional tokens based on those parameters.

The tokens are the options of my design system, but those options can be grouped into Token Sets; there are cases where just a small set of token options needs to be changed, for example a token set for Dark and a token set for Light.

Imagine your design system is called Unanimo. Unanimo has two defined token sets, Dark and Light, so instead of defining everything twice, we define our token set in the following way:

tokens Unanimo(option : enum(dark, light))
{
   @ $option is dark
   {
     #colors
     {
        text = white;
        back = black;
     }
   }
   @ $option is light
   {
     #colors
     {
        text = black;
        back = white;
     }
   }
   /// The rest of the tokens here

}

Then the designer can use the Token Set options as Unanimo(dark) or Unanimo(light) anywhere styles are defined for the Design System.

The working specification is here GeneXus Design Tokens Specification

You can try the language in real time in the GeneXus Design Tokens Editor.
Basically it is a side-by-side (textual - visual) editor for Design Tokens. It's an alpha version but it works ;)
Just paste some of the samples given on the specification page to see the idea.

We haven't decided yet whether we are going to allow functions as values for design tokens. At first we are going to keep it simple and not allow functions or function composition as token values, but I still have doubts on the topic of token dependencies and token values. Because of that, you will find a 404 page if you follow the Token Type link in our specification ;)

For us, metadata will be very important for tooling. Metadata we will use for sure includes:

  • Tokens Converters (How to convert the value of a token to other types, units, etc)
  • Tokens Type Editors (How to edit this token type for a given tool)
  • Tokens Descriptions (How this token will be presented on UI widgets like can be a property inspector)
  • Tokens Valid Resolvers (In some contexts the token value should have some kind of extra validation more than just the token type validation)

from community-group.

jonathantneal avatar jonathantneal commented on May 20, 2024 1

I understand and agree with the appeal of using the CSS syntax, and I would be happy to help guide such an implementation, if that would actually help.

As @mirisuzanne has expertly pointed out, building upon CSS naively is likely to cause conflicts down the road. I would describe that likelihood as “nearly certain” or “inevitable”; therefore, I strongly second their advice.

Implementation-wise, there are at least two directions this could take which would use CSS safely (tho I could be so, so, so very wrong).

  1. We build upon the CSS Syntax as a core foundation. With this option, our format has the freedom to copy any of the existing rules, properties, and keywords that we like from CSS while creating or redefining ones where we see fit. If done with great care and great documentation, this could produce a very elegant and readable format. By leveraging the syntax, this would not be too difficult to build additional tooling around. However, this format would represent a whole new superset of CSS, incompatible with the full suite of CSS features in browsers. It would probably require a new file type and file extension as well (e.g. FILE.dcss or FILE.design.css).

  2. We build upon the whole suite of CSS specifications. With this option, our format is already readable by most tools and browsers, even if our rules and declarations don’t necessarily apply to any elements on the page. We still have the freedom to imagine new concepts in CSS, but those concepts would need to leverage namespaces, and/or vendor prefixes, and/or custom prefixes to protect themselves from ever conflicting with true CSS. The format might not be nearly as elegant or readable, but it would be the most compatible.

/* Hypothetical example of the first direction. */

@define colors {
    primary: lab(50 75 100 30);
    secondary: lab(22 0 0);
}

@define sizes {
    small: 14px;
    medium: 20px;
}

typography {
    & main {
        font-family: Inter;
        font-size: sizes.medium;
    }
}

/* Hypothetical example of the second direction. */

@namespace url("https://github.com/design-tokens/[email protected]");

@--define colors {
    primary: lab(50 75 100 / 30%);
    secondary: lab(22 0 0);
}

@--define sizes {
    small: 14px;
    medium: 20px;
}

typography {
    & main {
        font-family: Inter;
        font-size: --def(sizes.medium);
    }
}

from community-group.

kaelig avatar kaelig commented on May 20, 2024

Our mileage varies across projects / companies / tools, so I would like to hear thoughts and feedback on where this draft format works / doesn't work, with use cases showing where it breaks 🧐

⚠️ Please provide use cases and examples in your posts (where appropriate)

from community-group.

kaelig avatar kaelig commented on May 20, 2024

Thank you @danoc – do you know if any design tool supports this kind of theming/ platform-specific variants so far?

from community-group.

dbanksdesign avatar dbanksdesign commented on May 20, 2024

I think this is a great start! We might want to focus on the token interface to start, so we don't "boil the ocean".

@danoc brings up an interesting issue: where does the platform-specific information live, and do we even include it in the core token interface? Theo and Style Dictionary keep that out of the tokens themselves by using transforms. The same goes for names; they could differ per platform as well. Maybe, if this is a universal/interchange format, each token could provide whatever platform data it wants. For example, in @danoc's example, the space8 token would only have 'web' platform data...

interface Token {
  name: string;
  value: any;
  description?: string;
  platforms: Platform[];
  data?: Data;
}

interface Platform {
  platform: string;  
  name: string;
  value: any;
}
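For instance, the space8 token mentioned above might then carry only web platform data; a sketch against the shape of the interfaces above (the value itself is purely illustrative, and the optional data field is omitted):

// Hypothetical instance conforming to the Token shape above: a token that
// only carries web platform data.
const space8 = {
  name: 'space8',
  value: '64px',
  platforms: [
    { platform: 'web', name: 'space-8', value: '64px' },
  ],
};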

from community-group.

kaelig avatar kaelig commented on May 20, 2024

Those are definitely super valid use cases!

Theo achieves this with the concept of platform-specific and theming "overrides", or one could also do this by importing a different set of token aliases, specific to the platform or theme.


We might want to focus on the token interface to start, so we don't "boil the ocean".

I've added another principle to the draft above:

"The format translates well to existing design tools, in order to facilitate adoption".

This means we need to ask ourselves:

  • How much would design tools need to adapt to be able to work with this data?
  • Is there a way the schema could be extensible enough to accommodate theming and platform-specific concepts (via the data field, or otherwise), without having them baked into the spec?

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024

@zackbrown I was also thinking about having a default value and some overrides for variants, but with @dabbott we found that it might be safer to specify a value for all the variants, so that when you add a new variant your tool can warn you about the places you need to look at (e.g. the tokens not specifying a value for the new variant).

Is it worth elevating "cross-platform" as one of the guiding principles? Seems like a pretty big fork in the road. IMHO any spec with a claim to a Design Token Standard should treat platform-specifications(/overrides/variants) as first-class, but can do so without "hard-coding" specific platforms.

I think it's necessary too. But I'd say "platform independent" instead of "cross-platform".

from community-group.

zackbrown avatar zackbrown commented on May 20, 2024

@mathieudutour to help me understand:

we found that it might be safer to specify a value for all the variants

could you sketch out (or point to) an example/pseudocode?

from community-group.

kaelig avatar kaelig commented on May 20, 2024

cc @nikolasklein you worked on the styles panel in Figma – do you have thoughts on data structure for grouping/categories?

from community-group.

zackbrown avatar zackbrown commented on May 20, 2024

@mathieudutour gotcha, I totally agree with this principle (re: variants; few messages above).

Implementation-wise (speaking from the perspective of having already implemented this!) Diez leans on the TypeScript compiler [and a subsequent static-analysis pass for our transpiler] as the tool providing a warning, e.g. that you tried to use a platform dart that wasn't defined.

It would be quite easy to warn also that e.g. "Your overrides are sparse! You might be forgetting Android definitions" without having to contort the data format. In other words, the data is either there or it isn't; IMO it should not be the purview of the data format to make it "extra easy" to perform static analysis, e.g. by duplication; the data format should instead strive to be minimal + ergonomic.

Automated tooling can read that data and determine "sparse overrides" or "missing variants" almost regardless of the shape of the data. So keep it human-centered! [I doubt we disagree on this point!]

from community-group.

danoc avatar danoc commented on May 20, 2024

I'd even argue that specifying all the variants is more readable than a default value + overrides.

Agreed. In my earlier example, which would be the default for space6? The web or native value?

from community-group.

maraisr avatar maraisr commented on May 20, 2024

Hello 👋🏻 Marais here! Human behind Overdrive. Just here dropping my two cents:

  • This file is going to get massive, especially if you have a 1-9 colour scale per colour. So to battle that, would it be good to build fragments like colours.json and sizes.json, and combine them together with a CLI or something?
  • For things like elevations that visually look different when applied to a background: on a white-ish background, drop shadows look more prevalent, whereas applied to a deep red you'd darken everything more so it "looks more there". So would that mean a new token "red-elevation-1", or could we give tokens context, so we can have things like $COLOUR_RED_900__ELEVATION_1 - or is this more or less what @ventrebleu was talking about with aliases?
{
	tokens: [
		{
			name: 'elevation-1',
			value: '0 1px 10px 0 rgba(0, 0, 0, 0.03)',
			'@context': [
				{
					'colour-red-900': '0 1px 10px 0 rgba(0, 0, 0, 0.09)',
				},
			],
		},
		{
			name: 'colour-red-900',
			value: '#780502',
		},
	]
}
  • It would also be nice to allow micro-tokens, like component-specific tokens. Within a checkbox component you might want to tokenize the size of the checkbox's tick-box, but not necessarily have that token hoisted to the design system. I'm thinking of scenarios where you want inputs' heights to match buttons' heights. So the decision becomes "do we make a token for button heights" that our inputs use, or a "token for our inputs' height" that our buttons use, "or 2 tokens with a comment for future me to keep in sync".
  • A bit of a trivial one: there's a name and a value in each token item, then a vendor override. Is this to say "our primary product is web", so the values are for web targets, with overrides for Figma and, say, Sketch? Or should we merely have a name, and the value would live in the @scss vendor or the @web .css vendor?

Thanks for all the efforts though guys!

from community-group.

kaelig avatar kaelig commented on May 20, 2024

Thank you @zol, I was just about to reach out to the Storybook team following the introduction of the new Component Story Format in v5.2! Welcome :)

from community-group.

NateBaldwinDesign avatar NateBaldwinDesign commented on May 20, 2024

What I therefore propose is that we do a PR that adds an initial skeleton file format spec as a markdown file.

@c1rrus's recommendation sounds appropriate here -- has this begun already? I'd be curious to see where this is at.

from community-group.

joestrouth1 avatar joestrouth1 commented on May 20, 2024

From the principles:

The format is simple, extensible, and as unopinionated as possible

Language agnostic data formats like JSON and YAML are very unopinionated, but don't support things like functions. Should the spec take a stance on how users combine tokens? This would necessitate running all token files through some tool before use.

Is tokenC = multiply(tokenA, tokenB) "human readable"? To use tokenC in a design, I would need to look up both tokens, convert them to RGB if not stored in that format, and trust that my napkin math is correct.

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024

You could represent a function call with JSON:

{
  value: { function: 'multiply', arguments: [tokenA, tokenB]}
}

so that’s not really an issue.
But you need to know what multiply is, hence my comment about whether we should specify that or not.

from community-group.

NateBaldwinDesign avatar NateBaldwinDesign commented on May 20, 2024

Regarding functions, I think it's not too uncommon for some type of function to generate tokens, whether it be a function of one token against another (tokenC = fn(tokenA, tokenB)), some basic generic function against a token (tokenB = saturate(tokenA, 10%)), or even some more complex function that is an invention of the design system authors themselves (such as Polaris' "color factory", or Adobe's contrast-based color generation tools). In some cases this is 1:1 (one token resulting from a single function call), whereas in other cases it is one-to-many (a small set of values generates many more tokens).

I do not believe we should define sets of functions within the spec, however it's worth noting that tokens may be generated as a result of these functions. I'm not entirely sure what would need to be defined in order to support this appropriately, but the simplest use case of type scales would be a good straw man.

Isn't the colorspace just a variant (as discussed more in #2)? Do we need to be arbitrary about the kinds of variants we allow for each type?

I don't see this as a variant per se, but that might work. Although variant implies that it's a choice, whereas in this case it's not a choice; it's a correct or incorrect value for a token based on the output that a product/consumer needs. In this type of structure #2 (comment), it could become messy. At that point you're defining a light/dark theme variant, and then for each theme variant of a color there would also need to be colorspace variants. The model would become difficult to reason about.

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024

I do agree that functions would be nice, but I don't see how tools would be able to work with them if we don't specify what they are. Like, what is saturate(tokenA, 10%) if you don't know what saturate is? See the CSS 4 spec (outdated) for some examples of function specifications.

other cases this is a one-to-many

Do you have an example of a one-to-many function that couldn't be represented by a few 1:1 functions? I'm struggling to imagine it (not to mention that the naming of the generated tokens becomes a lot harder). Sure, you can have a few functions doing the same thing with a different factor (in the case of your font size scale example), but wouldn't it be easier to just write the function the needed number of times? That would make it easy to control the names of the tokens, as well as avoid overloading the specification.

So the font size scale example could be represented as such:

let type-scale-multiplier = 1.125

let font-size-100 = 14
let font-size-200 = multiply(font-size-100, type-scale-multiplier)
let font-size-300 = multiply(font-size-200, type-scale-multiplier)

I find that a lot more readable (and easier to explain) than

let base-font-size = 14
let type-scale-multiplier = 1.125

let [font-size-100, font-size-200, font-size-300] = magicScaleFunction(
  base-font-size,
  type-scale-multiplier
)

variant implies that it's a choice

I don't think it does. A variant is just a dimension of your design tokens. Color space is a perfect example of that, IMO: depending on the color space, your tokens change (but they might also change depending on the theme at the same time).
How a variant is selected (be it the system localisation, the time of day, or the type of screen) is out of scope for the tokens specification; it's application logic.

At that point you're defining a light/dark theme variant, and then for each theme variant of a color there would also need to be colorspace variants. The model would become difficult to reason about.

Isn't that true in any case?

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024

the token's status is something that I believe can and should be standardized

Should it? A status seems arbitrary and dependent on the workflow of a team (e.g. release cycle, naming convention, etc.).
What does "deprecating a token" mean? "Will be removed in the next version" (that's the common definition of "deprecating" something in the software world)? But then that implies a version?
I'd be wary of adding stuff to the spec too fast, as it's always easy to add stuff later but much harder to remove it (unless we want some deprecation warnings in the spec 😄 )

from community-group.

souporserious avatar souporserious commented on May 20, 2024

Something similar to GraphQL's @deprecated directive could be nice. Although, like @mathieudutour said, it's probably too early to adopt something like this when the core pieces are still being worked on.

from community-group.

joestrouth1 avatar joestrouth1 commented on May 20, 2024

I do agree that functions would be nice but I don't see how tools would be able to work with them if we don't specify what they are. Like what is saturate(tokenA, 10%) if you don't know what is saturate?

The format could specify a means of referencing functions without defining them. It would be up to the tool to define saturate or notify the user that they're referencing an unknown function.

  {
    "name": "color-blue-400",
    "value": {
      "func": "lighten",
      "args": ["color-blue-300", "10%"]
    }
  }

If the spec defines what lighten means, it is opinionated about what color model to use. "Lightening" by 10% in HSL, HSV, or CIELAB would result in different tokens. One can imagine a similar issue for saturate in HSL/HSV. Once the format has been in the wild for some time there may arise a set of common functions, implemented across tools, that could be standardized at a later date.
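One way a tool might handle that, sketched roughly: the tool keeps its own registry of functions and warns on anything it doesn't recognise. The lighten stub below is deliberately a placeholder, since the colour model is exactly the tool-specific choice being discussed.

// Hypothetical registry: the tool defines the functions it understands and
// warns on unknown ones, rather than the format defining them.
type TokenFunction = (...args: string[]) => string;

const registry: Record<string, TokenFunction> = {
  // Placeholder implementation; a real tool would pick a colour model (HSL, CIELAB, ...).
  lighten: (color, amount) => `lightened(${color} by ${amount})`,
};

function applyFunction(name: string, args: string[]): string | undefined {
  const fn = registry[name];
  if (!fn) {
    console.warn(`Unknown token function "${name}"; leaving the value unresolved.`);
    return undefined;
  }
  return fn(...args);
}

// applyFunction('lighten', ['color-blue-300', '10%'])  -> resolved by this tool
// applyFunction('saturate', ['color-blue-300', '10%']) -> warns and returns undefined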

from community-group.

mathieudutour avatar mathieudutour commented on May 20, 2024

If the spec defines what lighten means, it is opinionated about what color model to use

Yeah, but you could define different functions for different color models. If you don't specify what lighten means, you will end up with different tools giving different results.

The format won't define what the functions are, but the spec should. So in the format you can reference any function, even one that isn't defined by the spec, and your tool can decide to handle it if it wants to.

Just like in CSS:

div {
  width: grow(10px, 10);
}

That's syntactically valid CSS, but grow doesn't mean anything. You can have tools that handle it (like PostCSS), but that doesn't make it valid CSS. It's your own responsibility to make sure you use tools that know what grow is.

But I do agree that there shouldn't be too many functions defined a priori, and we should wait to have a strong consensus around the definition of a function before putting it in the spec

from community-group.

NateBaldwinDesign avatar NateBaldwinDesign commented on May 20, 2024

I would imagine simply allowing anyone to use any function within the tokens, (likely) as an external library/source. I have doubts about standardizing specific functions within the token spec. For color in particular, you're not only defining functions based on specific properties within a variety of color spaces and modes, but this almost implies a full color library would have to be implemented as part of the spec just to have a consistent set of standard functions. That sounds like boiling the ocean. What is more reasonable in my mind is to say tokens can be written using external functions, such that a token value is returned from the function prior to implementation in components. For example, this could be done using build scripts.

I second @calebdwilliams' recommendation. Deprecation is a big hurdle, and at least incorporating basic statuses for tokens such as “experimental”, “stable”, and “deprecated” would be very useful. In terms of “what does deprecated mean?”, well, it means don't use it any more. Leave it to individual teams to decide if that means they will version or follow certain processes. We don't need to standardize release/versioning/process here, but I do believe status is very important. To that effect, it is helpful to have a reference when deprecating tokens. We always output tokenA is deprecated; use tokenB to ensure we offer actionable information. Perhaps that type of message could just be an updated description, however.

from community-group.

calebdwilliams avatar calebdwilliams commented on May 20, 2024

Quick question for @kaelig, when you say

At the moment, this proposal doesn't advocate for a particular file format (JSON, TypeScript…), it merely discusses what the shape of it should look like.

Is the end goal to advocate for a particular file format? In a lot of ways this feels like one of those scenarios where the medium is the message and that the file format that these tokens are written to will largely influence their structure.

from community-group.

lucastobrazil avatar lucastobrazil commented on May 20, 2024

(Also just continuing thinking here)...

Perhaps if we're going to strongly type it with TypeScript, we could prototype some of these types to try out? People could then fork it and try building their own to see how it feels.

I am only a beginner at TS (although quite experienced in JS), but am considering giving it a go as a way to learn. If there's interest here I'll keep this thread updated.

from community-group.

lucastobrazil avatar lucastobrazil commented on May 20, 2024

Got a new question... I am making some tokens for our type scale at the moment, but hit an interesting point:

const FontSizes = [
  {
    name: 'small',
    value: 12,
    unit: 'px',
    lineHeight: 1.2,
  },
   ...
];

I have lineHeight in here because it's tightly coupled to the h1 type scale definition. No other type sizes need this value (although some may share the same value by coincidence). I also have a font size caption which is the same font size, but it differs in its letter spacing:

const FontSizes = [
  {
    name: 'small',
    value: 12,
    unit: 'px',
    lineHeight: 1.2,
  },
  {
    name: 'caption',
    value: 12,
    unit: 'px',
    lineHeight: 1.2,
    letterSpacing: "14px" // <-- 
  },
];

So obviously this opens up a can of worms in terms of where to draw the line between separating values in different tokens and grouping them.

For us, our Sketch library currently has all of these text styles set up as Sketch Text Styles -> so the intended ergonomics for the developer are to basically have the same discrete set of text variants:

<Text variant="caption">My Groovy Caption</Text>

I think that everyone's going to have a different "where to draw the line" decision point when making their tokens, but I also think there could be a clear spec for how to add those extra values, perhaps like the metadata suggestions raised earlier in this issue.

Also thinking that perhaps FontSizes is then misleading; instead it should be FontVariants:

const FontVariants = [
  {
    name: 'caption',
    data: {
        fontSize: {
          value: 12,
          unit: 'px'
        },
        lineHeight: {
          value: 1.2
        },
        letterSpacing: {
          value: 14,
          unit: 'px'
        }
    }
  },
];

I guess the type of the data fields should be standard enough (key { value, unit }), but FontVariants as a token category would probably not apply to everyone.

from community-group.

joestrouth1 avatar joestrouth1 commented on May 20, 2024

Those seem one level removed from design tokens, at least to me. I see tokens most commonly defined as atomic values that can't be broken down further into useful constituent parts. What you've described is a composition of tokens. Maybe the group can tackle those after the token specs are more developed.

from community-group.

lucastobrazil avatar lucastobrazil commented on May 20, 2024

@joestrouth1 Fair point! But in these examples, for instance, in what kind of token would the lineHeight values live? They only relate directly to that 'token', even if it is bigger than the usual idea of a token. Agreed it's not the same as the atomic ones, but I also wonder how useful a fooLineHeight token is to anything other than foo.

Practically speaking, if I wanted to say for instance "across all platforms, h1 should always be X font size, line height and font", then that would be something to be stored in one place and consumed by anything (react native, react, sketch, figma etc), no?

from community-group.

lucastobrazil avatar lucastobrazil commented on May 20, 2024

Good points all! Agree here that it’s one step further than tokens. In this example I guess then we’d have a separate lineHeight token that is grouped with the font size somewhere else.

Might be worth bringing up / defining the depth for tokens (atomic, molecular etc). Unless they should only be atomic?

Anyway I know there is a team of editors working on this, looking forward to hearing how things are going :-)

from community-group.

frebro avatar frebro commented on May 20, 2024

We did font-sizes and line-heights as separate tokens, mainly because each font-size can have multiple line-heights: "regular" and "loose". We couple font-sizes and line-heights by size index, basically like this:

font-size-1: 14px
font-size-2: 20px
line-height-1: 1.25
line-height-1-loose: 1.45
line-height-2: 1.2
line-height-2-loose: 1.4

Then made into type styles, like this:

type-style-1
  font-size-1
  line-height-1

type-style-2
  font-size-1
  line-height-1-loose

type-style-3
  font-size-2
  line-height-2

type-style-4
  font-size-2
  line-height-2-loose

That's one concrete reason for separating font-size and line-height as tokens.

from community-group.

kevinmpowell avatar kevinmpowell commented on May 20, 2024

Zooming out a bit to the file format itself and an earlier recommendation to "pave the cowpaths."

I'd be keen to poll potential consumers of this new spec (Sketch, Figma, Adobe XD, Scss, CSS Variables, iOS, Android, etc.) and see an aggregate list of what their current token structures look like. Are they all mostly JSON? XML? YAML? is there a "most common" file type already?

I would imagine Theo and/or Style Dictionary source code would be good locations to find that information.

Could an MVP be a static file of key/value pairs (no functions, transformations, compilations) that the majority of those consumers could read?

It seems like some of the more advanced features could be added later, but could an initial goal be centered around accelerating adoption among consumers?

from community-group.

mirisuzanne avatar mirisuzanne commented on May 20, 2024

I think there would be several things to look at. CSS already has:

  • Design types like sizes & colors (even layouts) in various cross-platform formats (see eg Colors Module, level 4)
  • Syntax and parameters for mixed-unit math, and color-adjustments across gamuts (see eg Colors Module, level 5)
  • A syntax for contextual variations, based on media-type and feature-support (with media/support queries)
  • Cascading logic for allowing more targeted rules to override more general patterns, allowing values to inherit, and adapting values to nested contexts (Cascading & Inheritance)
  • Even object-like dictionary structures (which I think of as missing from CSS) are available if you think of an entire selector block as a complex token…

I think most or all of the features I've seen discussed could be built on top of existing CSS syntax, types, and functions without reinventing a number of complicated wheels. Some features might need additional work – either defining how the syntax is used, or extending it - but (like Sass) some of that work could also merge back into CSS over time.

from community-group.

Martin-Pitt avatar Martin-Pitt commented on May 20, 2024

That sounds good, and actually there are print-related design tokens that would benefit from the media scoping for example.

(As an aside CSS also will have nesting anyway btw: https://drafts.csswg.org/css-nesting/)

Use of keywords in place of element selectors would be good to think more about.

Being able to link values from other tokens more natively at the syntax level would probably be a big win, as that is the weakness of CSS that made it really hard to argue for.

JSON on one hand has awesome nesting and variable access in languages, but it's harder to write and doesn't do much.
CSS is flexible and easy to update, but doesn't lend itself to reusability (CSS variables sort of help, but they're a bit late).
So a combination of JSON and CSS is probably the best here.

I think @oscarotero hit a nice balance there of human readability and editing.

Where in place of selectors they are keywords that can be used to access the tokens in that group.

Would being strict about standardised keywords instead of user-defined names mean that the file is flatter and thus better as a universal format? (Namespacing can still be done at a filesystem-level mind you)

from community-group.

ManeeshChiba avatar ManeeshChiba commented on May 20, 2024

Hi everyone,

I really like the idea that @oscarotero proposed. Something that I think is worth defining, and perhaps naming, is that some of this specification will detail a fixed set of options, and the rest will detail how those options are composed. I think this is where the discussion around variables and CSS custom properties is heading as well.

I find this very similar to the distinction Brad Frost makes in Atomic Web Design between atoms and molecules.

Atoms are basic building blocks. In our use case, the defined options or definitions:

colors {
    primary: rgba(233 2 0 0.3);
    secondary: #333;
}

sizes {
    small: 14px;
    medium: 20px;
}

font-families {
	heading: Hoefler Text;
	display: Inter;
}

While molecules are groups of atoms combined together. In our case, the compositions which could use the previously defined options:

typography {
    main {
        font-family: $font-families.display;
        font-size: $sizes.medium;
		color: $colors.primary;
    }
}

I think this distinction is worth noting and naming because generating new compositions does not necessarily require new definitions, or we may decide that definitions are not required at all:

typography {
    main {
        font-family: Inter;
        font-size: 20px;
		color: rgba(233 2 0 0.3);
    }
}

Though I think there are advantages beyond DRY that definitions/options/variables solve;

  • They allow an aspect of a Design System to be fixed and versioned. You could generate new compositions without adding to the definitions.
  • They allow for Human and Machine readable audits of what is in the system
  • They open a way for tools to report which definitions are used and which are not
  • Definitions could be shareable (if the syntax is fixed) so that an existing composition could inherit from new definitions easily
  • Overrides from the definitions would need to be explicit

from community-group.

oscarotero avatar oscarotero commented on May 20, 2024

Thanks for the valuable comments on this proposal. I agree with @mirisuzanne that using CSS as the basis for the new format must be done carefully, and we should avoid reusing CSS syntax for things it was not created for, in order to prevent future conflicts and confusion for people coming from the CSS world.

But I also think that we don't need to be fully compatible with CSS, because the purpose is different. We can just use those aspects that are common, like definitions of colors, gradients, transformations, and different units of space, time, radius, etc., with a similar syntax. CSS is a language to style HTML documents, whereas this new format is to define design values that can be used later not only with real CSS but also with other design tools and technologies. I'm closer to the first option proposed by @jonathantneal (maybe the & sign for nesting is not necessary here, and I don't understand the difference between the groups with @define rules and the typography group).

Let me address some topics that have come up:

Typings

Typed properties may be useful in some cases to prevent errors. CSS has no syntax for types (although there's a new JavaScript API for that), but we can add typing features to the new tokens syntax. I'd avoid using at-rules (like @colors) and, instead, prepend the type to the property name. For example:

my-colors {
  Color primary: #456;
  Color secondary: #666;
}

This declares the type of the value for each property, and if you override a value using a different type, it generates an error:

@import './base.cts';

my-colors {
  primary: red; /* This is a valid color */
  secondary: 23px; /* This is not valid, throws an error */
}

We could even allow assigning types to groups, so every child of the group must comply with the group's type definition:

Color my-colors {
  primary: red;
  secondary: 23px; /* Not valid */
}

CSS custom properties and variables

I think custom properties (the -- properties) are a different thing, not equivalent to the links used in the tokens (using $ in these examples). The main difference is that custom properties are assigned to DOM elements; they are not really variables in the same way that Sass or Less variables are. They are spread following the DOM structure, not the CSS structure.

In addition to that, the syntax of CSS variables is a bit verbose, for technical reasons. It's a new feature of CSS that must be compatible with some legacy behaviour (like vendor prefixes, IE hacks, etc.) and must prevent conflicts with future new properties (this is why they start with --). The new tokens format doesn't have these constraints, and since linking and reusing is a core feature of this new format (to implement systems like the Atomic Design mentioned by @ManeeshChiba), it should be as concise as possible. Anyway, if you prefer something more aligned with the CSS style, another idea is using a token() function (or a similar name) instead of the dollar sign:

typography {
    main {
        font-family: Inter;
        font-size: token(sizes.medium);
    }
}

from community-group.

jonathantneal avatar jonathantneal commented on May 20, 2024

The new tokens format doesn't have these constraints, and since linking and reusing is a core feature of this new format (to implement systems like the Atomic Design mentioned by @ManeeshChiba), it should be as concise as possible.

I like this statement, @oscarotero. I may appear at odds with it sometimes. To me, there is writing something that works with legacy CSS, and then there is writing something that works with CSS itself.

To use a kitchen analogy, working with CSS itself can be like working with flour, sugar, water, and yeast. Meanwhile, working with legacy CSS can be like working with canned dough from the refrigerator. I love biscuits from a can! But if our canned dough was for biscuits and we wanted to make a cake, I would totally understand and endorse the decision to just use the core ingredients.

We are still limited by core ingredients. In the kitchen analogy, we can’t turn the sugar into stevia, and we can’t turn the water into wine. In CSS, we can’t change declarations into rules.

To be specific, in a list of declarations — like within a typography {} rule — we can’t read main:last-child as a declaration until we move ahead to reveal the more complete CSS as main:last-child {}, then retroactively decide that it was actually a rule. That is until we read ahead even further to reveal that the CSS was main:last-child {}; and decide that actually it was a declaration the whole time.
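
To make the ambiguity concrete, here's a toy TypeScript sketch of the lookahead a parser needs; it's a simplification that stops at the first disambiguating character:

// Returns what the prefix turns out to be once we hit the character that
// disambiguates it. Until then, "main:last-child" could be either.
function classify(source: string): "rule" | "declaration" {
  for (const ch of source) {
    if (ch === "{") return "rule";         // e.g. main:last-child { ... }
    if (ch === ";") return "declaration";  // e.g. main: last-child;
  }
  return "declaration";                    // end of block also ends a declaration
}

console.log(classify("main:last-child { color: black }")); // "rule"
console.log(classify("main: last-child;"));                // "declaration"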

In a spirit of helpfulness, I hope this helps explain why I will push for any CSS-like language to follow CSS syntactically, even if it isn’t attempting to be compatible with legacy CSS features. 🙏


jonathantneal avatar jonathantneal commented on May 20, 2024

That’s a fantastic write-up, @ilikescience. I also enjoyed your post about your design API. I found the GraphQL interface quite straightforward. In that post I spotted the category relationship field, where I presume you’ve gone down this road quite a bit farther. That kind of experience will be super helpful.

Typing

I can see how typing might be helpful. I would be interested in seeing this explored more. In a CSS-like markup, I could imagine at-rules and functions accomplishing this. Tho, I like that you are focused on the should we before the how do we. It is harder for me to separate them. 🤷

Uniqueness

Regarding const and the prompt “What are use cases for defining a token more than once?”:

I would not be in favor of const for style rules, i.e. selectors paired with a block of styles. Correlating const with rules seems problematic to me, as CSS rules are better compared with JS class, Class.prototype prototypes, and {} objects.

To the point of the prompt; if I interpret a style rule like token { color: black } as const token = { color: 'black' }, then I’m unsure what an override to token looks like in CSS when I want to assign a background color to the token definition. In JS this might look like token['background-color'] = white, while in CSS it would look like another rule — token { background-color: white }.

If const meant the selector could only be used once, then the interpretation would more closely compare with const body = Object.freeze({ color: 'black' }), which seems more restrictive than what is possible in either CSS or JS. 🤔
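
For illustration, here's a small TypeScript sketch contrasting the two interpretations; none of it is proposed token syntax, just the JS analogy spelled out:

// Interpretation 1: const only fixes the binding; the object stays open,
// so a later "rule" for the same selector can still add declarations.
const token: Record<string, string> = { color: "black" };
token["background-color"] = "white";  // like a second token { ... } rule

// Interpretation 2: the selector may only be declared once, which is closer
// to freezing the object; later "rules" for the same selector are rejected.
const frozen: Record<string, string> = Object.freeze({ color: "black" });
// In strict mode the next line would throw a TypeError; in sloppy mode it is
// silently ignored. Either way, the override never takes effect.
// frozen["background-color"] = "white";

console.log(token);                    // { color: 'black', 'background-color': 'white' }
console.log(Object.isFrozen(frozen));  // true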

Aliasing

I think aliasing is a must-have, tho it's most crucial for design systems over time. Aliasing can happen at the start, as in the primitive tokens you described, and also in higher-order components. Like, sometimes things are referred to by how they look; sometimes things are referred to by how they function; and sometimes things are referred to by how they relate to other things. And on and on. Things get referred to in multiple ways within a design system, usually based on history or context. Aliasing can seem antithetical to order and consistency, but consistency gets super hard the more components get created and the more time passes.

Most design systems that I have seen up close start out as a state of the union, often combining some new desires of the designers or developers with a larger set of existing patterns in production. The new and old are best paired with aliasing. Then, in the course of putting the design system together, the team makes choices over how to group color tokens along some axis of intensity, or how to group spacing by some multiple. And then later, in the course of using the design system, exceptions are made, whole concepts change or get reimagined, and new patterns emerge. The new and old are best transitioned with aliasing.

Aliasing for all the things!

Order dependency

I probably have more to learn here. I would expect it to resolve a dependency tree like JS imports; e.g. Two JS files can both import from the same third JS file.
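
As a rough sketch of what I mean, a tool could resolve token files in dependency order regardless of how they are listed, with shared dependencies resolved only once. The file names and the deps map below are made up:

// Hypothetical file names; both files depend on the same base module.
const deps: Record<string, string[]> = {
  "typography.tokens": ["base.tokens"],
  "layout.tokens": ["base.tokens"],
  "base.tokens": [],
};

// Depth-first resolution; no cycle detection, for brevity.
function resolveOrder(entries: string[]): string[] {
  const ordered: string[] = [];
  const visit = (name: string) => {
    if (ordered.includes(name)) return;  // shared dependency, already resolved
    for (const dep of deps[name] ?? []) visit(dep);
    ordered.push(name);
  };
  entries.forEach((entry) => visit(entry));
  return ordered;
}

console.log(resolveOrder(["typography.tokens", "layout.tokens"]));
// -> [ 'base.tokens', 'typography.tokens', 'layout.tokens' ]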


c1rrus avatar c1rrus commented on May 20, 2024

Wow. Lots of great activity going on here! It's sparked a ton of questions in my head...

@oscarotero I really like your CSS-like syntax proposals! I'd definitely support exploring that direction more.


@mirisuzanne I'm not sure I've fully understood what you meant by "custom groupings". Are you thinking of allowing people to define their own, custom group types for token values? I.e. something similar to interfaces found in some programming languages (e.g. TypeScript)? Or did you have something else in mind?


@ManeeshChiba How would you expect tools to interpret your proposed "molecule" tokens? Using your example:

typography {
    main {
        font-family: $font-families.display;
        font-size: $sizes.medium;
        color: $colors.primary;
    }
}

Is the nesting...

  • a shorthand for declaring some tokens whose names share common prefixes? I.e. it's equivalent to something like typography.main.font-family: $font-families.display; typography.main.font-size: $sizes.medium; ... (a rough sketch of this flattening follows the list)
  • or, is this like a bunch of CSS declarations - i.e. you're assigning the value of the $font-families.display token to the font-family property. If so, what is typography main selecting?
  • something else entirely ;-)
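
For the first interpretation, here's a rough TypeScript sketch of what that flattening could look like; the input shape is assumed:

type TokenTree = { [key: string]: string | TokenTree };

// Walk the nested groups and join the keys with dots.
function flatten(tree: TokenTree, prefix: string[] = []): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(tree)) {
    const path = [...prefix, key];
    if (typeof value === "string") {
      out[path.join(".")] = value;
    } else {
      Object.assign(out, flatten(value, path));
    }
  }
  return out;
}

console.log(flatten({
  typography: {
    main: { "font-family": "$font-families.display", "font-size": "$sizes.medium" },
  },
}));
// -> { 'typography.main.font-family': '$font-families.display',
//      'typography.main.font-size': '$sizes.medium' }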

@jonathantneal Great points about the different ways we might re-use or build upon CSS syntax. I think you've touched on something interesting there: What are the concepts we might want to inherit or copy from CSS?

Most design token tools to date treat them as key-value pairs. Taking Style Dictionary as an example, you might define a single design token in a JSON input file like so:

{
    "my-token": "#f00"
}

...and it can translate that into a range of different output formats. E.g. the SASS output might be: $my-token: rgba(255,0,0,1);, the JavaScript output might be: export const myToken = '#ff0000';, and so on.
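
As a rough sketch of that kind of translation (not Style Dictionary's actual API, just the idea of one token map in, multiple platform outputs out):

const tokens: Record<string, string> = { "my-token": "#f00" };

const toCamelCase = (name: string) =>
  name.replace(/-([a-z])/g, (_, c) => c.toUpperCase());

// SASS output: $my-token: #f00;
function toSass(tokens: Record<string, string>): string {
  return Object.entries(tokens)
    .map(([name, value]) => `$${name}: ${value};`)
    .join("\n");
}

// JavaScript output: export const myToken = '#f00';
function toJs(tokens: Record<string, string>): string {
  return Object.entries(tokens)
    .map(([name, value]) => `export const ${toCamelCase(name)} = '${value}';`)
    .join("\n");
}

console.log(toSass(tokens));
console.log(toJs(tokens));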

My (possibly incorrect) interpretation of @oscarotero's original proposal was an alternative, CSS-inspired syntax for expressing the same kind of design token input data. So, for instance, a future Style Dictionary-like tool might read in such files instead of JSON.

However, what's missing is the equivalent of CSS's selectors and properties. There's nothing that specifies what parts of a UI certain token values should be applied to. (That's assumed to be the job of the designer and/or developer who consumes the (potentially translated) tokens in their project.)

It's a totally valid question to debate whether a design token format should let you assign tokens to properties and thus essentially define the visual appearance of a UI, but so far they mostly don't (though Diez is beginning to blur those boundaries!).

If I understood your option 2 correctly, where "we build upon the whole suite of CSS specifications", we'd be making a superset of CSS. I suppose we'd get all of CSS's properties, selectors and other goodness "for free" in that case. But, assuming we'd want people to be able to write tools that know how to translate that into something meaningful for completely different platforms (e.g. outputting native SwiftUI code for iOS), I'd expect there to be some substantial challenges. Does native UI code for, say, iOS have an equivalent to the DOM in a web browser and, if not, how would we interpret a CSS selector? Also, would we not be reinventing the likes of SASS and LESS?

I suspect I misunderstood the options in your comment though and you meant something else?


@ilikescience Thanks for that excellent write-up! The more I think about it, the more I feel we need to capture and name the concepts we're all talking about. Then we can begin to decide which ones are in-scope for a design token format (at least for v1) and which are not.

As you already know, there's already been some discussion amongst the editors about how it might make sense to begin by defining the "model" or "mechanisms" of design token data first and then worry about what syntax(es) it can be serialised to / de-serialised from.

I feel like your comment is the first step towards being able to define such a model. :-)

