
home-ops's Introduction

My Home Operations Repository :octocat:

... managed with Flux, Renovate, and GitHub Actions 🤖

Discord   Talos   Kubernetes   Renovate

Home-Internet   Status-Page   Alertmanager

Age-Days   Uptime-Days   Node-Count   Pod-Count   CPU-Usage   Memory-Usage   Power-Usage


📖 Overview

This is a mono repository for my home infrastructure and Kubernetes cluster. I try to adhere to Infrastructure as Code (IaC) and GitOps practices using tools like Ansible, Terraform, Kubernetes, Flux, Renovate, and GitHub Actions.


⛵ Kubernetes

My Kubernetes cluster is deployed with Talos. This is a semi-hyper-converged cluster: workloads and block storage share the same available resources on my nodes, while a separate server with ZFS handles NFS/SMB shares, bulk file storage and backups.

There is a template over at onedr0p/cluster-template if you want to try and follow along with some of the practices I use here.

Core Components

  • actions-runner-controller: Self-hosted GitHub runners.
  • cert-manager: Creates SSL certificates for services in my cluster.
  • cilium: Internal Kubernetes container networking interface.
  • cloudflared: Enables Cloudflare secure access to certain ingresses.
  • external-dns: Automatically syncs ingress DNS records to a DNS provider.
  • external-secrets: Manages Kubernetes secrets using 1Password Connect.
  • ingress-nginx: Kubernetes ingress controller using NGINX as a reverse proxy and load balancer.
  • rook: Distributed block storage for persistent storage.
  • sops: Manages secrets for Kubernetes and Terraform which are committed to Git (a minimal sketch follows this list).
  • spegel: Stateless cluster local OCI registry mirror.
  • tf-controller: Additional Flux component used to run Terraform from within a Kubernetes cluster.
  • volsync: Backup and recovery of persistent volume claims.
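
As an illustration of the sops entry above, here is a minimal .sops.yaml creation-rule sketch; the path pattern and age public key are placeholders, not this repository's actual rule.

# .sops.yaml (sketch) -- the age recipient below is a placeholder key
creation_rules:
  - path_regex: kubernetes/.+\.sops\.ya?ml
    # only encrypt Secret payloads so the rest of the manifest stays diff-able
    encrypted_regex: "^(data|stringData)$"
    age: age1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq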

GitOps

Flux watches the clusters in my kubernetes folder (see Directories below) and makes the changes to my clusters based on the state of my Git repository.

Flux works here by recursively searching the kubernetes/${cluster}/apps folder until it finds the top-most kustomization.yaml per directory, and then applying all the resources listed in it. That kustomization.yaml will generally only have a namespace resource and one or many Flux kustomizations (ks.yaml). Under the control of those Flux kustomizations there will be a HelmRelease or other resources related to the application which will be applied.
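
For illustration, a minimal sketch of what one of those top-level kustomization.yaml files might contain; the app name echo-server and the paths are hypothetical, not taken from this repository.

# kubernetes/main/apps/default/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ./namespace.yaml        # the namespace resource mentioned above
  - ./echo-server/ks.yaml   # a Flux Kustomization (ks.yaml) for one app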

Renovate watches my entire repository looking for dependency updates; when they are found, a PR is automatically created. When those PRs are merged, Flux applies the changes to my cluster.

Directories

This Git repository contains the following directories under the kubernetes folder.

📁 kubernetes
├── 📁 main            # main cluster
│   ├── 📁 apps           # applications
│   ├── 📁 bootstrap      # bootstrap procedures
│   ├── 📁 flux           # core flux configuration
│   └── 📁 templates      # re-useable components
└── 📁 storage         # storage cluster
    ├── 📁 apps           # applications
    ├── 📁 bootstrap      # bootstrap procedures
    └── 📁 flux           # core flux configuration

Flux Workflow

This is a high-level look at how Flux deploys my applications with dependencies. Below there are three apps: postgres, glauth and authelia. postgres needs to be running and healthy before glauth and authelia; once postgres and glauth are healthy, authelia will be deployed.

graph TD;
  id1>Kustomization: cluster] -->|Creates| id2>Kustomization: cluster-apps];
  id2>Kustomization: cluster-apps] -->|Creates| id3>Kustomization: postgres];
  id2>Kustomization: cluster-apps] -->|Creates| id6>Kustomization: glauth]
  id2>Kustomization: cluster-apps] -->|Creates| id8>Kustomization: authelia]
  id2>Kustomization: cluster-apps] -->|Creates| id5>Kustomization: postgres-cluster]
  id3>Kustomization: postgres] -->|Creates| id4[HelmRelease: postgres];
  id5>Kustomization: postgres-cluster] -->|Depends on| id3>Kustomization: postgres];
  id5>Kustomization: postgres-cluster] -->|Creates| id10[Postgres Cluster];
  id6>Kustomization: glauth] -->|Creates| id7(HelmRelease: glauth);
  id8>Kustomization: authelia] -->|Creates| id9(HelmRelease: authelia);
  id8>Kustomization: authelia] -->|Depends on| id5>Kustomization: postgres-cluster];
  id9(HelmRelease: authelia) -->|Depends on| id7(HelmRelease: glauth);
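
The "Depends on" edges in the graph map to spec.dependsOn on the Flux Kustomizations. Here is a minimal sketch for authelia, assuming the resource names from the diagram and a hypothetical GitRepository named home-kubernetes:

# ks.yaml (sketch) -- authelia waits for postgres-cluster to be ready first
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: authelia
  namespace: flux-system
spec:
  dependsOn:
    - name: postgres-cluster
  interval: 30m
  path: ./kubernetes/main/apps/security/authelia/app   # illustrative path
  prune: true
  sourceRef:
    kind: GitRepository
    name: home-kubernetes   # assumed GitRepository name
  targetNamespace: security # illustrative namespace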

Networking

Click to see a high-level network diagram.

☁️ Cloud Dependencies

While most of my infrastructure and workloads are self-hosted, I do rely upon the cloud for certain key parts of my setup. This saves me from having to worry about two things: (1) dealing with chicken/egg scenarios, and (2) services I critically need whether my cluster is online or not.

The alternative solution to these two problems would be to host a Kubernetes cluster in the cloud and deploy applications like HCVault, Vaultwarden, ntfy, and Gatus. However, maintaining another cluster and monitoring another group of workloads is a lot more time and effort than I am willing to put in.

| Service | Use | Cost |
|---|---|---|
| 1Password | Secrets with External Secrets | ~$65/yr |
| Cloudflare | Domain and S3 | ~$30/yr |
| Frugal | Usenet access | ~$35/yr |
| GCP | Voice interactions with Home Assistant over Google Assistant | Free |
| GitHub | Hosting this repository and continuous integration/deployments | Free |
| Migadu | Email hosting | ~$20/yr |
| Pushover | Kubernetes alerts and application notifications | $5 OTP |
| Terraform Cloud | Storing Terraform state | Free |
| UptimeRobot | Monitoring internet connectivity and external-facing applications | ~$60/yr |
| Total |  | ~$20/mo |

🌐 DNS

Home DNS

On my Vyos router I have Bind9, blocky and dnsdist deployed as containers. In my cluster external-dns is deployed with the RFC2136 provider which syncs DNS records to bind9.
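
For reference, a hedged sketch of what the RFC2136-flavoured external-dns values could look like; the bind9 address, zone, and TSIG key name are placeholders, and the exact chart values layout may differ from what is actually deployed here.

# external-dns Helm values (sketch) -- placeholders, not this cluster's config
provider: rfc2136
extraArgs:
  - --rfc2136-host=192.168.1.1          # assumed address of the bind9 container
  - --rfc2136-port=53
  - --rfc2136-zone=example.com          # placeholder zone
  - --rfc2136-tsig-keyname=externaldns  # placeholder TSIG key name
  - --rfc2136-tsig-secret-alg=hmac-sha256
  - --rfc2136-tsig-axfr
policy: sync
sources:
  - ingress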

dnsdist is a DNS load balancer with "downstream" DNS servers configured, such as bind9 and blocky. All my clients use dnsdist as their upstream DNS server, which allows for more granularity when configuring DNS across my networks, such as having all requests for my domain forwarded to bind9 on certain networks, or using only 1.1.1.1 instead of blocky on networks where ad-blocking isn't required.

Public DNS

Alongside the external-dns instance mentioned above, another instance is deployed in my cluster and configured to sync DNS records to Cloudflare. The only ingresses this external-dns instance gathers DNS records from for Cloudflare are those with an ingress class name of external that also carry the ingress annotation external-dns.alpha.kubernetes.io/target.
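
A hedged example of an ingress that this public instance would pick up; only the ingress class name and the annotation key come from the description above, the hostnames are placeholders.

# Ingress (sketch) picked up by the Cloudflare external-dns instance
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-server                      # hypothetical app
  annotations:
    external-dns.alpha.kubernetes.io/target: external.example.com  # placeholder CNAME target
spec:
  ingressClassName: external
  rules:
    - host: echo.example.com             # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-server
                port:
                  number: 8080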


🔧 Hardware

Click to see the rack!

| Device | Count | OS Disk Size | Data Disk Size | RAM | Operating System | Purpose |
|---|---|---|---|---|---|---|
| Intel NUC8i5BEH | 3 | 1TB SSD | 1TB NVMe (rook-ceph) | 64GB | Talos | Kubernetes Controllers |
| Intel NUC8i7BEH | 3 | 1TB SSD | 1TB NVMe (rook-ceph) | 64GB | Talos | Kubernetes Workers |
| PowerEdge T340 | 1 | 2TB SSD | - | 64GB | Ubuntu LTS | NFS + Backup Server |
| Lenovo SA120 | 1 | - | 10x22TB ZFS (mirrored vdevs) | - | - | DAS |
| Raspberry Pi 4 | 1 | 32GB (SD) | - | 4GB | PiKVM (Arch) | Network KVM |
| TESmart 8 Port KVM Switch | 1 | - | - | - | - | Network KVM (PiKVM) |
| HP EliteDesk 800 G3 SFF | 1 | 256GB NVMe | - | 8GB | Vyos (Debian) | Router |
| Unifi US-16-XG | 1 | - | - | - | - | 10Gb Core Switch |
| Unifi USW-Enterprise-24-PoE | 1 | - | - | - | - | 2.5Gb PoE Switch |
| Unifi UNVR | 1 | - | 4x8TB HDD | - | - | NVR |
| Unifi USP PDU Pro | 1 | - | - | - | - | PDU |
| APC SMT1500RM2U w/ NIC | 1 | - | - | - | - | UPS |

⭐ Stargazers

Star History Chart


🤝 Gratitude and Thanks

Thanks to all the people who donate their time to the Home Operations Discord community. Be sure to check out kubesearch.dev for ideas on how to deploy applications or get ideas on what you may deploy.


home-ops's Issues

Blocky container error

Containers:
  blocky:
    Container ID:   docker://743473a790e9af0ce7f5fb68a6bb99420460b4ea94c6387eb1b11d0ebfec4227
    Image:          spx01/blocky:v0.5
    Image ID:       docker-pullable://spx01/blocky@sha256:51bb1df868cb5ace0a275abda6e1856681749df92910ce5af71d6c187d2ec755
    Ports:          4000/TCP, 53/TCP, 53/UDP
    Host Ports:     0/TCP, 0/TCP, 0/UDP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449
: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/c7fd0be2-d120-4b76-ad20-9110ff4b5d64/volume-s
ubpaths/config/blocky/0\\\" to rootfs \\\"/var/lib/docker/overlay2/22066adcc24fd62f2e160fefe9f249694ee41ba55dad4ae9246277f8d7a
439cb/merged\\\" at \\\"/var/lib/docker/overlay2/22066adcc24fd62f2e160fefe9f249694ee41ba55dad4ae9246277f8d7a439cb/merged/app/c
onfig.yml\\\" caused \\\"no such file or directory\\\"\"": unknown

Scaled the service down to 0, then scaled back up to 2, and it's back online.

Unable to delete provisioned dashboards in Grafana

Any dashboard I've included in the prometheus-operator chart cannot be deleted from Grafana.


This makes it look like removing the dashboard from the prometheus-operator chart will make it go away, but that is not the case.

Issue with intel/intel-gpu-plugin:0.17.0 image and containerd

The image is failing to pull/extract with containerd; reverting to the old image for now.

devin@k3s-worker-e:~$ sudo ctr i pull docker.io/intel/intel-gpu-plugin:0.17.0
docker.io/intel/intel-gpu-plugin:0.17.0:                                          resolved       |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:de5e8660de9899defc65217f8fdc19e52ba06d93c5c8cea587fea316ea850ccb: exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:2e3eda482290c568a14aebcb2580d2cab5f71420071cb65eea2e24b439b2d4ed:    exists         |++++++++++++++++++++++++++++++++++++++|
config-sha256:514b0f5c52cc396bf8cfc3e89ea1715f1729cb39cf41278faad34e40dc93d996:   exists         |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.5 s                                                                    total:   0.0 B (0.0 B/s)
unpacking linux/amd64 sha256:de5e8660de9899defc65217f8fdc19e52ba06d93c5c8cea587fea316ea850ccb...
INFO[2020-03-06T08:46:46.944886331-05:00] apply failure, attempting cleanup             error="failed to extract layer sha256:6aeab165b6772fd13ac3ce7d6fbaf9876aad46c3398f205596b2cf8c3520d147: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount941142498: chmod /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount941142498/sbin: no such file or directory: unknown" key="extract-893784340-KAoh sha256:6aeab165b6772fd13ac3ce7d6fbaf9876aad46c3398f205596b2cf8c3520d147"
ctr: failed to extract layer sha256:6aeab165b6772fd13ac3ce7d6fbaf9876aad46c3398f205596b2cf8c3520d147: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount941142498: chmod /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount941142498/sbin: no such file or directory: unknown

Issue with influxdb chart v3.2.0

Chart v3.0.2 works fine, but v3.2.0 spits out this error...

Helm Operator logs:

ts=2020-02-03T01:12:32.255625131Z caller=release.go:214 component=release release=influxdb targetNamespace=monitoring resource=monitoring:helmrelease/influxdb helmVersion=v3 error="Helm release failed" revision=3.2.0 err="failed to upgrade chart for release [influxdb]: YAML parse error on influxdb/templates/statefulset.yaml: error converting YAML to JSON: yaml: line 27: did not find expected key"

Random failed DNS lookup issues with internal and external DNS

I am seeing some strange connectivity issues within the cluster: randomly failing DNS lookups, possibly related to CoreDNS. It happens quite often.

For example, sonarr and radarr sometimes have trouble connecting to nzbget, qbittorrent, jackett and nzbhydra2. I am using the service addresses, so I am unsure why there are so many issues. Sometimes there are also issues resolving external DNS, like torrent trackers.

Create Cyberpower Grafana dashboard

Now that I have the metrics I can create a simple dashboard for these metrics:

      - ePDUIdentName
      - ePDUIdentHardwareRev
      - ePDUStatusInputVoltage      ## input voltage (0.1 volts)
      - ePDUStatusInputFrequency    ## input frequency (0.1 Hertz)
      - ePDULoadStatusLoad          ## load (tenths of Amps)
      - ePDULoadStatusVoltage       ## voltage (0.1 volts)
      - ePDULoadStatusActivePower   ## active power (watts)
      - ePDULoadStatusApparentPower ## apparent power (VA)
      - ePDULoadStatusPowerFactor   ## power factor of the output (hundredths)
      - ePDULoadStatusEnergy        ## energy measured (0.1 kWh).
      - ePDUOutletControlOutletName ## The name of the outlet.
      - ePDUOutletStatusLoad        ## Outlet load  (tenths of Amps)
      - ePDUOutletStatusActivePower ## Outlet load  (watts)
      - envirTemperature            ## temp expressed  (1/10 ºF)
      - envirTemperatureCelsius     ## temp expressed  (1/10 ºC)
      - envirHumidity               ## relative humidity (%)

Do not use HelmReleases on deployments that contain persistent data I want to be backed up

I believe it is worth changing the following charts into Kube manifests and letting Flux apply/manage them instead of Helm Operator. This way I can easily back up and restore with Velero. All my other deployments' data I am not concerned with losing. It might also be worth creating a new namespace called media or something, too.

  • Plex*
  • Radarr
  • Sonarr
  • Lidarr
  • Bazarr
  • Nzbget*
  • qBittorrent
  • NzbHydra2
  • Jackett*
  • Tautulli
  • Ombi

* auto-releases
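
If those apps do end up in their own media namespace, a Velero backup scoped to it might look like the sketch below; the namespace, name, and TTL are assumptions.

# Velero Backup (sketch) -- assumes a "media" namespace exists
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: media-backup
  namespace: velero
spec:
  includedNamespaces:
    - media
  ttl: 720h0m0s   # keep the backup for 30 days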

Ceph cluster reporting HEALTH_WARN

This is making Prometheus and Alertmanager blow up my phone; everything appears to be working fine even though Ceph's status is HEALTH_WARN.

Currently this expression is returning true: ceph_health_status{job="rook-ceph-mgr"} == 1
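
A minimal PrometheusRule sketch built around that expression; the rule name, namespace, and grace period are assumptions, not the rule actually firing here.

# PrometheusRule (sketch) using the expression above
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ceph-health
  namespace: monitoring
spec:
  groups:
    - name: ceph
      rules:
        - alert: CephHealthWarn
          expr: ceph_health_status{job="rook-ceph-mgr"} == 1
          for: 15m          # assumed grace period to cut down on pages
          labels:
            severity: warning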

Plex with nginx-ingress and Cloudflare is reporting clients using IPv6 as LAN IPs

I'm seeing some remote users' IPs coming in as 10.x.x.x addresses, which doesn't make any sense. I'm not sure if this is an issue with nginx ingress, Cloudflare, or something else. Very strange issue.

Possible workaround: I might just enable Plex Relay instead of using nginx. I'll also gain streaming with SSL, which didn't work with nginx ingress due to the lack of a custom certificate set on Plex.

Nginx Ingress log spam

Looks like using an IP in ExternalName for Services is creating a bunch of log spam in my nginx-ingress-controller logs

---
apiVersion: v1
kind: Service
metadata:
  name: external-unifi-protect
spec:
  ports:
  - name: http
    port: 7443
  type: ExternalName
  externalName: 192.168.1.2
2020/04/01 12:50:34 [error] 1693#1693: *27130539 [lua] dns.lua:150: dns_lookup(): failed to query the DNS server for 192.168.1.2: server returned error code: 3: name error,  server returned error code: 3: name error
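
ExternalName expects a DNS name, so nginx keeps trying to resolve the bare IP and logs the failure. One common alternative (a sketch, not necessarily the fix used here) is a selector-less Service with a matching Endpoints object:

# Selector-less Service + Endpoints (sketch) -- avoids the DNS lookup on an IP
apiVersion: v1
kind: Service
metadata:
  name: external-unifi-protect
spec:
  ports:
  - name: http
    port: 7443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-unifi-protect   # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.1.2
    ports:
      - name: http
        port: 7443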

metrics-server failing to give pod stats

Works:

kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k3s-master     1417m        35%    1547Mi          9%
k3s-worker-a   664m         8%     4563Mi          14%
k3s-worker-b   645m         8%     4512Mi          14%
k3s-worker-c   285m         3%     4260Mi          13%
k3s-worker-d   466m         5%     18240Mi         28%
k3s-worker-e   5189m        64%    9272Mi          14%

Doesn't work:

kubectl top pods
W0213 12:17:04.017607   30395 top_pod.go:266] Metrics not available for pod default/sonarr-episode-prune-7776645f74-mt4s4, age: 170h45m57.017599s
error: Metrics not available for pod default/sonarr-episode-prune-7776645f74-mt4s4, age: 170h45m57.017599s

I saw it mentioned that this could be related to using Docker as the CRI, #13

Helm pending-upgrade issue

$ helm list  --pending
NAME  	NAMESPACE	REVISION	UPDATED                                	STATUS         	CHART       	APP VERSION
radarr	default  	17      	2020-01-28 15:55:02.463264563 +0000 UTC	pending-upgrade	radarr-3.0.0	amd64-v0.2.0.1344-ls17

Logs from helm-operator:

ts=2020-02-09T18:13:43.197038925Z caller=release.go:329 component=release release=radarr targetNamespace=default resource=default:helmrelease/radarr helmVersion=v3 warning="unable to sync release with status pending-upgrade" action=skip
ts=2020-02-09T18:13:43.211508049Z caller=release.go:329 component=release release=helm-operator targetNamespace=flux resource=flux:helmrelease/helm-operator helmVersion=v3 warning="unable to sync release with status failed" action=skip

CSI Volume expansion won't work due to missing secret

I am seeing the following issue trying to expand a volume:

Type     Reason              Age                    From                                         Message
  ----     ------              ----                   ----                                         -------
  Normal   Resizing            3m33s (x38 over 136m)  external-resizer rook-ceph.rbd.csi.ceph.com  External resizer is resizing volume pvc-0bcec6ed-97de-4084-98ab-3391c900a54e
  Warning  VolumeResizeFailed  3m33s (x38 over 136m)  external-resizer rook-ceph.rbd.csi.ceph.com  resize volume pvc-0bcec6ed-97de-4084-98ab-3391c900a54e failed: error getting secret rook-ceph-csi in namespace rook-ceph: secrets "rook-ceph-csi" not found

This makes sense because I do not have the secret in my cluster:

❯ k get secrets -n rook-ceph | grep rook-ceph-csi
[empty]

According to their manifest here and here

We should be using rook-csi-rbd-provisioner as the secret for rbd and rook-csi-cephfs-provisioner for cephfs in the csi.storage.k8s.io/controller-expand-secret-name field.
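
For reference, a sketch of the relevant StorageClass fields based on the upstream rook-ceph examples; the pool and clusterID values are placeholders.

# rook-ceph RBD StorageClass (sketch) -- only the fields relevant to expansion
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
allowVolumeExpansion: true
parameters:
  clusterID: rook-ceph       # placeholder cluster namespace/ID
  pool: replicapool          # placeholder pool name
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph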

Todo: Update the following PVs to allow expansion

  • monitoring alertmanager-prometheus-operator-alertmanager-db-alertmanager-prometheus-operator-alertmanager-0
  • monitoring prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0
  • logging storage-loki-0
  • monitoring influxdb-data-influxdb-0
  • monitoring prometheus-operator-grafana
  • test radarr-test-config
  • default nzbhydra2-config
  • default jackett-config
  • default nzbget-config
  • default ombi-config
  • default qbittorrent-config
  • default radarr-config
  • default sonarr-config
  • default tautulli-config
  • default plex-kube-plex-config

Upgrade rook-ceph to v1.2.6

Waiting as I am getting this error:

ts=2020-03-13T12:46:35.332951703Z caller=release.go:140 component=release release=rook-ceph targetNamespace=rook-ceph resource=rook-ceph:helmrelease/rook-ceph helmVersion=v3 error="chart unavailable: chart \"rook-ceph\" version \"v1.2.6\" not found in https://charts.rook.io/release repository"

The release shows up here, so I am unsure. I will wait a week to try again

https://charts.rook.io/release

Upgrade Helm Operator to v1.0.0

Done:

Updated the image to 1.0.0 in my helm-operator-values.yaml

kubectl apply -f deployments/flux/helm-operator-crds.yaml

helm upgrade --install helm-operator --values deployments/flux/helm-operator-values.yaml --namespace flux fluxcd/helm-operator

Use retainable PVCs for stateful apps.

Currently, if a deployment is deleted it will also delete its PVCs. I think it's best to pre-create PVs for some deployments to use. It would also be helpful to move some PVCs to NFS because they only require file-based storage.

  • Jackett config data using rook-ceph
  • nzbhydra2 config data using rook-ceph
  • ombi config data using rook-ceph
  • plex config data using rook-ceph
  • qbittorrent config data using nfs
  • Radarr config data using rook-ceph
  • Sonarr config data using rook-ceph
  • nzbget config data using nfs
  • tautulli config data using rook-ceph

A possible workaround would be to just mark these PVCs as retainable.
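
Marking a volume as retainable comes down to the reclaim policy on the backing PV. A minimal sketch using the nzbget config volume as a hypothetical example; the NFS server address and export path are placeholders.

# Pre-created PersistentVolume (sketch) with a Retain reclaim policy
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nzbget-config
spec:
  capacity:
    storage: 5Gi                          # placeholder size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # survive deletion of the claiming PVC
  nfs:
    server: 192.168.1.10                  # placeholder NFS server
    path: /volume1/nzbget-config          # placeholder export path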

Helm failed issue with helm-operator

$ helm list -n flux
NAME         	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART              	APP VERSION
flux         	flux     	8       	2020-02-08 15:26:37.755455195 +0000 UTC	deployed	flux-1.2.0         	1.18.0
helm-operator	flux     	7       	2020-02-02 15:06:56.253074064 +0000 UTC	failed  	helm-operator-0.6.0	1.0.0-rc8

Use nfs-media-pvc instead of 2 separate PVCs

Currently with the PVCs being split apart into nfs-media-downloads-pvc and nfs-media-library-pvc, sonarr, radarr, qbittorrent, and nzbget think these are separate drives and will copy downloaded files instead of hard-linking them.

While it's nice to have a separation of concerns, there is really no added benefit to keeping these 2 PVCs around. The benefit of not having duplicate data around (a seeding torrent plus the copy in my movie library) is pretty important.

Todo:

  • Create new PVC called nfs-media-pvc that just mounts /Media
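
A hedged sketch of what that PV/PVC pair could look like; the NFS server address and capacity are placeholders.

# nfs-media-pvc (sketch) -- a single claim backed by the /Media export
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-media
spec:
  capacity:
    storage: 1Ti                   # placeholder size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10           # placeholder NFS server address
    path: /Media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-media-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""             # bind directly to the pre-created PV
  volumeName: nfs-media
  resources:
    requests:
      storage: 1Ti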

Radarr

  • Disable downloads and movies volumes and use extraExistingClaimMounts
  • Add new root folder in Radarr, update all movies to use the new root folder, delete old root folder

Before:

      downloads:
        enabled: true
        existingClaim: nfs-media-downloads-pvc
        accessMode: ReadWriteMany
      movies:
        enabled: true
        existingClaim: nfs-media-library-pvc
        subPath: Movies
        accessMode: ReadWriteMany

After:

      downloads:
        enabled: false
      movies:
        enabled: false
      extraExistingClaimMounts:
        - name: media
          existingClaim: nfs-media-pvc
          mountPath: /media
          readOnly: false

Sonarr

  • Disable downloads and tv volumes and use extraExistingClaimMounts
  • Add new root folder in Sonarr, update all series to use the new root folder, delete old root folder

Before:

      downloads:
        enabled: true
        existingClaim: nfs-media-downloads-pvc
        accessMode: ReadWriteMany
      tv:
        enabled: true
        existingClaim: nfs-media-library-pvc
        subPath: Television
        accessMode: ReadWriteMany

After:

      downloads:
        enabled: false
      tv:
        enabled: false
      extraExistingClaimMounts:
        - name: media
          existingClaim: nfs-media-pvc
          mountPath: /media
          readOnly: false

nzbget

  • Update paths in config

Before:

      downloads:
        enabled: true
        existingClaim: nfs-media-downloads-pvc
        accessMode: ReadWriteMany

After:

      downloads:
        enabled: false
      extraMounts:
        - name: media
          claimName: nfs-media-pvc

qbittorrent

  • Update paths in client

Before:

      downloads:
        enabled: true
        existingClaim: nfs-media-downloads-pvc
        accessMode: ReadWriteMany

After:

      downloads:
        enabled: false
      extraMounts:
        - name: media
          claimName: nfs-media-pvc

K3s vs rke/K8s

What made you choose k3s over rke/k8s? I think you’re all x86, right?
