Comments (84)
To be extra clear -- I'm proposing that we make this experience part of core kubernetes. The fact that core k8s is a set of things that have to be set up together is a powerful thing but it makes things look very very complex. We should be willing to have a sane set of defaults and embedded solutions built in to the main distribution.
Right now our "manual install" page is incredibly daunting. We should aim to reduce that (at least for a given set of integrated services) to a single screen without the crutch of automation tools that paper over the complexity.
from enhancements.
our docs and all our items for 1.13 are in place.
this tracking issue can finally be closed.
kubeadm is now GA. 🎉
/close
from enhancements.
Docs should ideally include clarification on the relationship between the various cluster deployment solutions, because it's getting very confusing for users as to why there are so many and which one is "official" and/or "recommended," especially since there are many under the kubernetes organization. Specifically:
- kubeadm (in-tree)
- https://github.com/kubernetes/kops
- https://github.com/kubernetes/kube-deploy
- https://github.com/kubernetes/kubernetes-anywhere
- https://github.com/kubernetes-incubator/bootkube
from enhancements.
On Wed, Jun 22, 2016 at 2:02 PM Tim Hockin [email protected] wrote:
Something that crossed my mind with docker 1.12 - built-in kvstore means that libnetwork's overlay driver might be viable for us. Having a built-in network mode for Docker installs that works anywhere and doesn't require extra components might be nice.
Relying on libnetwork for bootstrapping sounds like a mess long term. I would much rather figure out bootstrap, reconfiguration, etc. of CNI, which we need to figure out anyway, than make some compromise that puts a new dependency on the Docker engine.
from enhancements.
cc @derekparker @aaronlevy @pbx0 from the CoreOS team working on https://github.com/coreos/bootkube and the self-hosted stuff with @mikedanese to realize a k8s driven creation and update story.
from enhancements.
@klizhentas Yes! The idea is to make small clusters super easy. Folks looking for large clusters will want to manage etcd independently. Users can choose to take on the complexity of breaking everything out but it'll be an advanced move.
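For a sense of what that advanced move eventually looked like in practice: kubeadm's config format grew support for pointing at an externally managed etcd rather than running one itself. A rough sketch, assuming an external three-member etcd is already up (endpoints and cert paths are illustrative):
# kubeadm-config.yaml pointing the control plane at external etcd
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://10.0.0.10:2379
    - https://10.0.0.11:2379
    - https://10.0.0.12:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
kubeadm init --config=kubeadm-config.yaml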
from enhancements.
@smarterclayton I think we just need to pick something to get going. The easiest zero-config option would be the way to go.
from enhancements.
This has become a discussion issue rather than a tracking issue. It's great that lots of people are interested in this topic. We could use help. I created a github team (sig-cluster-lifecycle) and googlegroup (kubernetes-sig-cluster-lifecycle), which you can request to join. I'm going to rename the sig-install slack channel to sig-cluster-lifecycle. We should brainstorm in the googlegroup rather than generate more github notifications.
Also, a number of people have been working in this area for a while. We're going to summarize the current state and make a prioritized list of work items that have already been identified.
from enhancements.
Reminder - these issues should (ideally) be only for discussion about the flow of the work items related to the feature. If you would like to discuss (please do!), please do it in the Google group.
from enhancements.
Updated the checkboxes above. This is largely implemented for 1.6. The last bugs and UX issues are getting ironed out now.
The biggest missing thing now for beta for this is documentation. Much of the UX here is hidden from users but should still be documented.
from enhancements.
the issue is on track, yes.
kubeadm labels are auto-applied to our PRs, too many to log and track on our side:
https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+is%3Aopen+label%3Aarea%2Fkubeadm
as explained in a release team meeting, these are our 2 GA items:
- the critical list of remaining work: kubernetes/kubeadm#1163
- the kubeadm config: kubernetes/kubeadm#911
docs are mostly command reshuffle and are TBD.
from enhancements.
I think we can demonstrate the composable nature of the platform without having to build monolithic binaries that are contrary to the spirit of microservice architecture.
There are two separate but related topics: the ability to create a node (and bring up a control plane), and the ability to have a new node join an existing cluster easily. I worry that moving to monolithic binaries doesn't necessarily help either cause.
I also think that if we want to advocate being agnostic about a particular container runtime, the setup process should follow suit. This is why I like @mikedanese's ideas in this space, since they start with the Kubelet (which could work with any container runtime it's pointed at) rather than starting with a particular container runtime launching the Kubelet.
from enhancements.
Biased, because I'm already working on it, but I'd advocate for putting effort into self-hosted (k8s-on-k8s) installations as a means of simplifying both cluster creation, and lifecycle management.
If the installation contract becomes "I can run a kubelet" and everything else is built atop that in containers -- then installation criteria could become as simple as "does your node pass node-e2e tests?"
More or less this is already possible in a simple case with all core components being run as static pods. This is how many installations work, and is well understood. The problem with this approach is that it becomes difficult to transition from "this was easy to install" to "I want to customize/modify/update this installation".
As soon as we want to make modifications to this cluster, we're back to some kind of "modify files on disk" configuration management (salt/ansible/chef/etc). This doesn't preclude us from having a "simple" installation alongside other "production" deployment tools, or even deciding on a standardization/contract where a more complex tool can take over from the initial static installation (e.g. a kube-apiserver.yaml that exists in /etc/kubernetes/manifests; sketched below).
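To make that contract concrete, here is a minimal sketch of a static pod manifest dropped into the kubelet's manifest directory; the image tag and flags are illustrative, and a real apiserver needs many more flags:
# the kubelet watches /etc/kubernetes/manifests and runs any pod spec it finds there
cat > /etc/kubernetes/manifests/kube-apiserver.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.13.0
    command:
    - kube-apiserver
    - --etcd-servers=http://127.0.0.1:2379
    - --service-cluster-ip-range=10.96.0.0/12
EOF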
Alternatively, in the self-hosted scenario, the installation can remain simple on first install (in concept, replace your static pod definitions with deployments/daemonsets), but can then be modified/extended without relying on external configuration management (or external deployment tooling that needs to evolve in lock-step with your cluster) -- everything is by definition just a kubernetes object.
Updates to the cluster can become api-driven / or even an update-controller application. Assets (tls, configuration, flags) travel with their components as they are also just kubernetes objects (secrets / configMaps). We get all the niceties of kubernetes lifecycle management.
Now all that being said, networking really is a hard part. Maybe this comes down to figuring out the coordination at the kubelet level + cni (e.g. how do I self-host flannel-client + allow it to configure networking for subsequent pods).
from enhancements.
For this particular feature I think it would help to go backwards - not from implementation to UX, but vice versa. If we figure out the user experience with this right, the implementation will follow.
Here's the ideal scenario that users could see on the k8s quickstart page:
Starting single node k8s
wget https://kubernetes.io/releases/latest/kube
# starts both node, API, etcd, all components really
kube start
This will let users explore kubernetes and start containers.
Adding node
Then there's the use case where users want to run smaller clusters to play with failover, HA and so on.
On first node, execute:
# adds provisioning token to securely add new nodes to the cluster
kube token add
<token1>
On any node to be added in the cluster:
kube start --token=<token1> --cluster=https://<ip of the first node>
That's the minimum number of steps I can imagine to bootstrap the cluster in dev mode. If we figure out this UX first, everything else will follow.
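For what it's worth, the flow that eventually shipped in kubeadm tracks this sketch closely. A rough equivalent, with the token and CA-hash values left as placeholders:
# on the first node:
kubeadm init                  # brings up etcd and the control plane as static pods
kubeadm token create          # prints a bootstrap token for joining nodes
# on each node to be added (token and hash come from the first node):
kubeadm join <ip-of-first-node>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>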
from enhancements.
kubeadm will still be beta in 1.9.
from enhancements.
As discussed in the SIG, no extra docs are needed for this feature.
This is an umbrella issue for the general development of kubeadm, and kubeadm is gonna be beta for quite some time yet. The docs have existed since v1.4 and will of course still be available in all future releases.
from enhancements.
Can we have init steps executed selectively yet? Or has it fallen off the radar?
yes, in 1.12 there is kubeadm alpha phase, which handles init phases. in 1.13 these will be more widely available.
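a rough sketch of what that looks like on those releases (exact phase names vary by version, check kubeadm --help on yours):
# v1.12: phases live under "kubeadm alpha phase"
kubeadm alpha phase certs all --config=kubeadm-config.yaml
kubeadm alpha phase kubeconfig all --config=kubeadm-config.yaml
# v1.13: promoted to "kubeadm init phase", plus the ability to skip phases
kubeadm init phase preflight --config=kubeadm-config.yaml
kubeadm init --skip-phases=addon/kube-proxy --config=kubeadm-config.yaml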
from enhancements.
update:
all feature PRs were merged. during code freeze we have time to dig for bugs.
remaining docs PR is WIP:
kubernetes/website#10960
from enhancements.
Other related efforts/prior art:
- List of core k8s features that will help all deploys: kubernetes-retired/kube-deploy#123
- [closed] Original proposal around this [somewhat outdated]: kubernetes/kubernetes#2303
- [closed] Proposal to rework Kubernetes deployment CLI: kubernetes/kubernetes#5472
- [closed] RFC: kube-bootstrap: kubernetes/kubernetes#16077
- Implement the cluster bootstrap API: kubernetes/kubernetes#5754 (pulled out of kubernetes/kubernetes#2303)
- How to get kargo project into kubernetes github tree kubernetes/kubernetes#27948
- http://kubernetes.io/docs/getting-started-guides/scratch/
- https://github.com/kubernetes/community/wiki/Roadmap:-Cluster-Deployment
- https://github.com/kubernetes/kubernetes-anywhere
- https://github.com/kubernetes/kube-deploy
- https://github.com/coreos/bootkube
[I'll update this comment with new links as they come in]
from enhancements.
@mikedanese -- I know this is a lot of what you've been working on. I'd love to get that reflected here and scoped for 1.4. Do you mind shooting some pointers over?
from enhancements.
There is a class of infrastructure (that doesn't currently exist) that would benefit all deployment automations. We should try to enumerate what these items are, give them relative priorities and advocate for them in v1.4 planning. I started to create a list a couple days ago: kubernetes-retired/kube-deploy#123 cc @justinsb @errordeveloper @bgrant0607
from enhancements.
@jbeda kubernetes/kubernetes#5472, kubernetes/kubernetes#16077 and kubernetes/kubernetes#5754 are related to this.
from enhancements.
Something that crossed my mind with docker 1.12 - built-in kvstore means that libnetwork's overlay driver might be viable for us. Having a built-in network mode for Docker installs that works anywhere and doesn't require extra components might be nice.
Might require some work to not assume prefixes per node.
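For context, the built-in kvstore here is docker 1.12's swarm mode: once a swarm is initialized, the overlay driver needs no external etcd/consul. A quick sketch (the advertise address is illustrative):
docker swarm init --advertise-addr 10.0.0.1
docker network create --driver overlay my-overlay
docker service create --network my-overlay --name web nginx   # replicas can reach each other across nodes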
from enhancements.
Just so I understand it better: the embedded etcd will be optional, right? In production we would still want to deploy etcd separately from the API server/scheduler/controller.
from enhancements.
Network is the hardest part - you can ignore security and edge cases as long as pods can talk to each other. Leveraging libnetwork seems like a practical choice where possible (or just have a daemonset that drops in your favorite network auto provisioner via CNI). Once the node is started we can run any code.
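A hypothetical sketch of that DaemonSet approach; the image and manifest here are invented for illustration, since each real network plugin publishes its own:
cat > cni-installer.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cni-installer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cni-installer
  template:
    metadata:
      labels:
        app: cni-installer
    spec:
      hostNetwork: true
      containers:
      - name: install
        image: example.com/cni-installer:latest   # hypothetical image that writes a CNI config
        volumeMounts:
        - name: cni-cfg
          mountPath: /etc/cni/net.d
      volumes:
      - name: cni-cfg
        hostPath:
          path: /etc/cni/net.d
EOF
kubectl apply -f cni-installer.yaml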
from enhancements.
I think the model we present should not look that different from the production model. I am a fan of making it easy to launch a Kubelet that then has a static manifest with sensible defaults to launch the control plane, not hidden in a mess of salt. In that model, etcd can still be a pod, as can other parts of our control plane.
from enhancements.
@jbeda so in the simple case kube-controller-manager, kube-apiserver, kubelet + etcd will be one go binary?
from enhancements.
@klizhentas @derekwaynecarr I don't know what the binaries will be that we ship. I do know that we have to make it dead easy to download a thing and get it up and running. If we can get stuff self hosted on the cluster in a container, that would be a good solution. The number of steps needs to be reduced to ~1 per node.
Let's start with the ideal set of things we want the end user to type. From there we can figure how to get there in a sustainable way (and with the opportunity to do everything in a more explicit way for advanced users).
from enhancements.
@jbeda - agree on focusing on desired ux command first
from enhancements.
I took the liberty of creating a post that summarizes the generic expectations stated here in the googlegroup (kubernetes-sig-cluster-lifecycle) to continue the brainstorming: https://groups.google.com/forum/#!topic/kubernetes-sig-cluster-lifecycle/LRMygt2YNrE
from enhancements.
Please take a look at this proposal - kubernetes/kubernetes#27948
from enhancements.
@aronchick @bgrant0607 @metral Agreed -- let's take this to the mailing list.
https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle
from enhancements.
@jbeda For your list of related work: https://github.com/InQuicker/kaws
from enhancements.
Update on the plan for this feature: https://docs.google.com/presentation/d/17xrFxrTwqrK-MJk0f2XCjfUPagljG7togXHcC39p0sM/edit?ts=57a33e24
from enhancements.
Proposal: kubernetes/kubernetes#30360
from enhancements.
Update on this: all the dependencies we needed for this landed before the 1.4 feature freeze. We've agreed in the SIG that kubeadm experimental packages can be included in the next few weeks in time for the release (it doesn't need to be in the release branch). So this is on track to be in alpha for 1.4.
See also our WIP demo: https://www.youtube.com/watch?v=Bv3JmHKlA0I&feature=youtu.be
from enhancements.
@jbeda Are the docs ready? Please add the docs to https://github.com/kubernetes/kubernetes.github.io, then add the PR numbers and check the docs box in the issue description.
from enhancements.
Ping. Any update on docs?
from enhancements.
Another ping on docs. Any PRs you can point me to?
from enhancements.
@janetkuo @jaredbhatti so sorry for the delay replying to this, I was travelling. I'll be working on docs imminently. When do you need them by, at the latest?
from enhancements.
Launch date is Tuesday, 20 September. Ideally we'd like to get them merged in ahead of time, say by Friday 16 September if possible.
from enhancements.
@lukasredynk @jbeda yet another ping on docs.
from enhancements.
The docs PR is out for review kubernetes/website#1265
from enhancements.
Thanks. I've got it in tracking now.
from enhancements.
The subfeature request #138
from enhancements.
@jbeda @mikedanese can you confirm that this item targets beta in 1.6?
from enhancements.
Yes it does.
from enhancements.
@mikedanese could you please link this proposal to the amended kubernetes/kubeadm#22 feature flags proposal? I've lost track of its status.
from enhancements.
@jbeda please provide us with the release notes at https://docs.google.com/spreadsheets/d/1nspIeRVNjAQHRslHQD1-6gPv99OcYZLMezrBe3Pfhhg/edit#gid=0
Also, please select the valid checkpoints in the Progress Tracker.
from enhancements.
@lukemarsden is pulling together the release notes.
@luxas -- do you know if anyone is on the line for doc updates for kubeadm for 1.6? If not, I can probably take a pass.
from enhancements.
Doc update for Beta: kubernetes/website#2829
from enhancements.
This stays in beta for the v1.8 release. There is some dependent work we are pushing first (Bootstrap Tokens and easy upgrades, for instance). Once we have all dependencies at beta level or higher, we can push this last overall piece (kubeadm) to GA.
from enhancements.
@roberthbailey @luxas @jbeda @mikedanese is it still beta for 1.9?
Also, please update the feature description using https://github.com/kubernetes/features/blob/master/ISSUE_TEMPLATE.md
from enhancements.
@jbeda Please indicate in the 1.9 feature tracking board whether this feature needs documentation. If yes, please open a PR and add a link to the tracking spreadsheet. Thanks in advance!
from enhancements.
@jbeda Bump for docs
/cc @idvoretskyi
from enhancements.
@luxas Thanks!
from enhancements.
@roberthbailey @luxas @jbeda @mikedanese
Any plans for this in 1.11?
If so, can you please ensure the feature is up-to-date with the appropriate:
- Description
- Milestone
- Assignee(s)
- Labels:
  - stage/{alpha,beta,stable}
  - sig/*
  - kind/feature
cc @idvoretskyi
from enhancements.
@roberthbailey @luxas @jbeda @mikedanese @kubernetes/sig-cluster-lifecycle-feature-requests
There've been plans to promote kubeadm to GA in 1.11, the upcoming release. Can you let us know if these plans are still valid?
from enhancements.
@idvoretskyi We do not target GA in v1.11. We have executed very well on our path to GA though, eliminating dependencies and roadblocks. Hopefully in v1.12, but you never know. We are still blocked on some other parts (like kube-proxy ComponentConfig graduating to beta in sig-network). We're getting really close though!
from enhancements.
We're hoping this to go GA in v1.12. Let's track it as that for now and see if we meet the criteria later in this cycle.
from enhancements.
Tracking as GA for 1.12
/kind feature
from enhancements.
Hey there! @jbeda I'm the wrangler for the Docs this release. Is there any chance I could have you open up a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks! If this feature does not require docs, could you please update the features tracking spreadsheet to reflect it?
from enhancements.
@roberthbailey @luxas Bump for docs
/cc @idvoretskyi
from enhancements.
/assign @timothysc
we do have good tutorials on how to set up kubeadm clusters already:
https://kubernetes.io/docs/setup/independent/
(under "Bootstrapping Clusters with kubeadm")
but i'm missing the historical context for this tracking issue.
from enhancements.
I think we're pretty okay docs-wise from earlier releases already, but @timothysc will make the judgement if many new docs are needed or existing docs have to be updated
from enhancements.
@timothysc @luxas @Bradamant3 --
I think we're pretty okay docs-wise from earlier releases already, but @timothysc will make the judgement if many new docs are needed or existing docs have to be updated
Where did we land on this? Are the docs in a good state for 1.12? Are we still planning to land this as GA for 1.12?
cc: @zparnold @jimangel @tfogo
from enhancements.
this gets my 👍 for GA in 1.12, in terms of docs.
(we do have to cover some other aspects.)
we have decent documentation in place, and we haven't had major negative feedback on the instructions for cluster creation with kubeadm, which were improved a lot in 1.11. in the meantime we continue to improve the docs where possible.
pages to note:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://kubernetes.io/docs/setup/independent/high-availability/
from enhancements.
Hi. This is currently being tracked for 1.13. I want to see what changes are being made for 1.13 for the feature to be considered GA/Stable.
This release is targeted to be more βstableβ and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence it will meet the following deadlines:
- Docs (open placeholder PRs): 11/8
- Code Slush: 11/9
- Code Freeze Begins: 11/15
- Docs Complete and Reviewed: 11/27
Thanks!
from enhancements.
Our objective is to take kubeadm to GA this cycle.
from enhancements.
Reading into https://kubernetes.io/docs/setup/independent/high-availability/ I noticed there is now a neat config file (kubeadm-config.yaml in the guide). But I was not sure what the status of "feature flags" support for that config is. Can we have init steps executed selectively yet? Or has it fallen off the radar?
from enhancements.
@bogdando see the "full" example in the doc for featureGates:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/apis/kubeadm/v1alpha3/doc.go#L160-L248
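a minimal sketch of that stanza in the v1alpha3-era config; the gate shown is just an example, since available gates differ per release (see the linked doc):
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
featureGates:
  CoreDNS: true   # example gate
EOF
kubeadm init --config=kubeadm-config.yaml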
from enhancements.
@neolit123 @roberthbailey is this still on track for GA in 1.13? do we have a link to the list of pending PRs or issues so we can track this better?
from enhancements.
i don't have the powers here to update the OP.
in terms of docs we are in a good state; test coverage might need some attention, as far as i understand the goals here.
i'm going to have to defer to @timothysc on this one.
from enhancements.
@timothysc could you please provide the latest update on the status of kubeadm for GA in 1.13? Specifically:
(i) how many and which PRs (code and test) are pending
(ii) the latest status of docs and links to docs PRs
(iii) the status of failing kubeadm tests in master-blocking
With code slush nearing on 11/9, could you provide us with an ETA for when you expect all pending items to land in master? Given we need some time to stabilize things before code freeze on 11/16, it might be a good idea to timebox the remaining work and make a Go/No-Go call for GA in 1.13 before code freeze. Thanks!
from enhancements.
(iii) the status of failing kubeadm tests in master-blocking
this should have green runs today.
from enhancements.
@AishSundar I wish it were that simple. There are a number of PRs in flight and most of the docs will be minor changes. Progress is good, but I probably won't have a good answer for you until ~ next week. Almost all of the work is not "net-new" features but cleanup and bug fixes in shuffling for GA, which will likely span into slush.
from enhancements.
Ack on that, @timothysc - thanks for the update and for consolidating all the kubeadm GA work under this issue. We will check back once in slush.
from enhancements.
@timothysc Hi I'm an enhancements shadow for 1.13 - checking in on progress for this issue. Code slush is 11/9 and Code freeze is 11/15, is this issue still on track for those milestones? Thanks!
from enhancements.
@timothysc can you drop in a list of PRs we should be tracking for Kubeadm going GA? Thanks!
from enhancements.
@timothysc @neolit123 can one of you attend the Release burndown meeting next week (Mon, Wed or Fri) to give the latest update on Kubeadm GA.
from enhancements.
i can try joining today.
update on docs:
for GA we only need 2 docs PRs, one already merged and one is a WIP placeholder, TBD before 19th:
kubernetes/website#10937 (comment)
from enhancements.
@neolit123: Closing this issue.
In response to this:
our docs and all our items for 1.13 are in place.
this tracking issue can finally be closed.
kubeadm is now GA. 🎉
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
from enhancements.