
pcidevices's Introduction

Harvester

Build Status Go Report Card Releases Slack

Harvester is a modern, open, interoperable, hyperconverged infrastructure (HCI) solution built on Kubernetes. It is an open-source alternative designed for operators seeking a cloud-native HCI solution. Harvester runs on bare metal servers and provides integrated virtualization and distributed storage capabilities. In addition to traditional virtual machines (VMs), Harvester supports containerized environments automatically through integration with Rancher. It offers a solution that unifies legacy virtualized infrastructure while enabling the adoption of containers from core to edge locations.

harvester-ui

Overview

Harvester is an enterprise-ready, easy-to-use infrastructure platform that leverages local, direct attached storage instead of complex external SANs. It utilizes Kubernetes API as a unified automation language across container and VM workloads. Some key features of Harvester include:

  1. Easy to install: Since Harvester ships as a bootable appliance image, you can install it directly on a bare metal server with the ISO image or automatically install it using iPXE scripts.
  2. VM lifecycle management: Easily create, edit, clone, and delete VMs, including SSH-Key injection, cloud-init, and graphic and serial port console.
  3. VM live migration support: Move a VM to a different host or node with zero downtime.
  4. VM backup, snapshot, and restore: Back up your VMs from NFS, S3 servers, or NAS devices. Use your backup to restore a failed VM or create a new VM on a different cluster.
  5. Storage management: Harvester supports distributed block storage and tiering. Volumes represent storage; you can easily create, edit, clone, or export a volume.
  6. Network management: Supports using a virtual IP (VIP) and multiple Network Interface Cards (NICs). If your VMs need to connect to the external network, create a VLAN or untagged network.
  7. Integration with Rancher: Access Harvester directly within Rancher through Rancher’s Virtualization Management page and manage your VM workloads alongside your Kubernetes clusters.

The following diagram outlines a high-level architecture of Harvester:

architecture.svg

  • Longhorn is a lightweight, reliable, and easy-to-use distributed block storage system for Kubernetes.
  • KubeVirt is a virtual machine management add-on for Kubernetes.
  • Elemental for SLE-Micro 5.3 is an immutable Linux distribution designed to remove as much OS maintenance as possible in a Kubernetes cluster.

Hardware Requirements

To get the Harvester server up and running, the following minimum hardware is required:

Type Requirements
CPU x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum for testing; 16-core or above required for production
Memory 32 GB minimum; 64 GB or above required for production
Disk Capacity 250 GB minimum for testing (180 GB minimum when using multiple disks); 500 GB or above required for production
Disk Performance 5,000+ random IOPS per disk (SSD/NVMe). Management nodes (first three nodes) must be fast enough for etcd
Network Card 1 Gbps Ethernet minimum for testing; 10 Gbps Ethernet required for production
Network Switch Trunking of ports required for VLAN support

We recommend server-class hardware for best results. Laptops and nested virtualization are not officially supported.

Quick start

You can use the ISO to install Harvester directly on a bare-metal server to form a Harvester cluster. You can add one or more compute nodes to join an existing cluster.

To get the Harvester ISO, download it from the GitHub releases.

During the installation, you can either choose to create a new Harvester cluster or join the node to an existing Harvester cluster.

  1. Mount the Harvester ISO file and boot the server by selecting the Harvester Installer option. iso-install.png
  2. Use the arrow keys to choose an installation mode. By default, the first node will be the management node of the cluster. iso-install-mode.png
    • Create a new Harvester cluster: Select this option to create an entirely new Harvester cluster.
    • Join an existing Harvester cluster: Select this option to join an existing Harvester cluster. You need the VIP and cluster token of the cluster you want to join.
    • Install Harvester binaries only: If you choose this option, additional setup is required after the first bootup.
  3. Choose the installation disk you want to install the Harvester cluster on and the data disk you want to store VM data on. By default, Harvester uses the GUID Partition Table (GPT) partitioning scheme for both UEFI and BIOS. If you use BIOS boot, you will have the option to select Master Boot Record (MBR). iso-choose-disks.png
    • Installation disk: The disk to install the Harvester cluster on.
    • Data disk: The disk to store VM data on. Choosing a separate disk to store VM data is recommended.
    • Persistent size: If you only have one disk or use the same disk for both OS and VM data, you need to configure persistent partition size to store system packages and container images. The default and minimum persistent partition size is 150 GiB. You can specify a size like 200Gi or 153600Mi.
  4. Configure the HostName of the node.
  5. Configure network interface(s) for the management network. By default, Harvester will create a bonded NIC named mgmt-bo, and the IP address can either be configured via DHCP or statically assigned. iso-config-network.png
  6. (Optional) Configure the DNS Servers. Use commas as a delimiter to add more DNS servers. Leave blank to use the default DNS server.
  7. Configure the virtual IP (VIP) by selecting a VIP Mode. This VIP is used to access the cluster or for other nodes to join the cluster. iso-config-vip.png
  8. Configure the cluster token. This token will be used for adding other nodes to the cluster.
  9. Configure and confirm a Password to access the node. The default SSH user is rancher.
  10. Configure NTP servers to make sure all nodes' times are synchronized. This defaults to 0.suse.pool.ntp.org. Use commas as a delimiter to add more NTP servers.
  11. (Optional) If you need to use an HTTP proxy to access the outside world, enter the proxy URL address here. Otherwise, leave this blank.
  12. (Optional) You can choose to import SSH keys by providing an HTTP URL. For example, your GitHub public keys at https://github.com/<username>.keys can be used.
  13. (Optional) If you need to customize the host with a Harvester configuration file, enter the HTTP URL here.
  14. Review and confirm your installation options. After confirming the installation options, Harvester will be installed on your host. The installation may take a few minutes to complete.
  15. Once the installation is complete, your node restarts. After the restart, the Harvester console displays the management URL and status. The default URL of the web interface is https://your-virtual-ip. You can use F12 to switch from the Harvester console to the Shell and type exit to go back to the Harvester console. iso-installed.png
  16. You will be prompted to set the password for the default admin user when logging in for the first time. first-login.png

Releases

NOTE:

  • <version>* means the release branch is under active support and will have periodic follow-up patch releases.
  • Latest release means the version is the latest release of the newest release branch.
  • Stable release means the version is stable and has been widely adopted by users.
  • EOL means that the software has reached the end of its useful life and no further code-level maintenance will be provided. You may continue to use the software within the terms of the licensing agreement.

https://github.com/harvester/harvester/releases

Release Version Type Release Note (Changelog) Upgrade Note
1.3* 1.3.1 Latest 🔗 🔗
1.2* 1.2.2 Stable 🔗 🔗
1.1* 1.1.3 EOL 🔗 🔗

Documentation

Find more documentation here.

Demo

Check out this demo to get a quick overview of the Harvester UI.

Source code

Harvester is 100% open-source software. The project source code is spread across a number of repos:

Name Repo Address
Harvester https://github.com/harvester/harvester
Harvester Dashboard https://github.com/harvester/dashboard
Harvester Installer https://github.com/harvester/harvester-installer
Harvester Network Controller https://github.com/harvester/harvester-network-controller
Harvester Cloud Provider https://github.com/harvester/cloud-provider-harvester
Harvester Load Balancer https://github.com/harvester/load-balancer-harvester
Harvester CSI Driver https://github.com/harvester/harvester-csi-driver
Harvester Terraform Provider https://github.com/harvester/terraform-provider-harvester

Community

If you need any help with Harvester, please join us in our Slack #harvester channel or the forums, where most of our team hangs out.

If you have any feedback or questions, feel free to file an issue.

License

Copyright (c) 2024 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

pcidevices's People

Contributors

chrisho, eliaskoromilas, frankyang0529, ibrokethecloud, tlehman, votdev, yu-jack


pcidevices's Issues

I found a bug in the indexer

When adding the indexer, the index func should use VirtualMachine.Name; otherwise, when we query the indexer by VM name and namespace, the expected VM list is not found in the response.
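The report's point is that the key used when populating the index must match the key used at lookup time. A minimal self-contained sketch of a namespace/name index key (the type and function names here are illustrative, not from the repo):

```go
package main

import "fmt"

// VirtualMachine is a stand-in for the KubeVirt VirtualMachine type.
type VirtualMachine struct {
	Name      string
	Namespace string
}

// vmIndexKey builds the index key from the VM's namespace and name,
// matching the "<namespace>/<name>" convention used by client-go indexers.
// The same function must be used on both the add and the get side.
func vmIndexKey(vm *VirtualMachine) string {
	return fmt.Sprintf("%s/%s", vm.Namespace, vm.Name)
}

func main() {
	vm := &VirtualMachine{Name: "vm1", Namespace: "default"}
	fmt.Println(vmIndexKey(vm)) // prints default/vm1
}
```

If the index func keys on one field but lookups use another, the indexer returns an empty list, which matches the behavior described above.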

Question about reconcilePCIDeviceClaims


  1. The following Update operations do not use a copied object, but the locally queried (cached) one:
				_, err = h.pdcClient.Update(&pdc)
				if err != nil {
					return err
				}
				_, err = h.pdcClient.UpdateStatus(&pdc)
				if err != nil {
					return err
				}
  2. The pdc spec does not appear to change in reconcilePCIDeviceClaims, yet it calls both Update and UpdateStatus; would a single Update* call be enough?

  3. Should if pdc.DeletionTimestamp != nil be checked before if !pdc.Status.PassthroughEnabled?

  4. Changes to pdc.Status.PassthroughEnabled followed by return err may leave the (k8s) controller's locally cached data inconsistent with the apiserver:

						if err != nil {
							pdc.Status.PassthroughEnabled = false
							return err
						}
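The last point describes the shared-cache mutation pitfall: informers hand out pointers into a shared cache, so mutating the listed object directly can leave the cache out of sync with the apiserver when the update fails. The usual remedy is to mutate a deep copy only. A minimal self-contained illustration (the flat struct and hand-written DeepCopy are stand-ins for the generated deepcopy code):

```go
package main

import "fmt"

type Status struct{ PassthroughEnabled bool }

type PCIDeviceClaim struct{ Status Status }

// DeepCopy mimics the generated deepcopy for a flat struct.
func (p *PCIDeviceClaim) DeepCopy() *PCIDeviceClaim {
	c := *p
	return &c
}

func main() {
	cached := &PCIDeviceClaim{} // pretend this pointer came from the informer cache
	pdc := cached.DeepCopy()    // mutate the copy, never the cached object
	pdc.Status.PassthroughEnabled = true
	// even if the apiserver update fails now, the cache is untouched
	fmt.Println(cached.Status.PassthroughEnabled, pdc.Status.PassthroughEnabled) // prints false true
}
```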

file is not closed in some cases

The file is not closed on all return paths.

Some similar functions have the same issue, e.g. unbindPCIDeviceFromDriver and unbindPCIDeviceFromVfioPCIDriver; please check and update them as well.

func addNewIdToVfioPCIDriver(vendorId string, deviceId string) error {
	var id string = fmt.Sprintf("%s %s", vendorId, deviceId)

	file, err := os.OpenFile("/sys/bus/pci/drivers/vfio-pci/new_id", os.O_WRONLY, 0400)
	if err != nil {
		return err
	}
	_, err = file.WriteString(id)
	if err != nil {
		return err   // the file is not closed in this return
	}
	file.Close()
	return nil
}

Found when reviewing: #14

Race condition preventing PCIDeviceClaims from being deleted in some cases

time="2022-10-07T00:01:44Z" level=info msg="Removing janus-000024002 from KubeVirt list of permitted devices"
time="2022-10-07T00:01:44Z" level=info msg="Attempting to enable passthrough for janus-000024002"
time="2022-10-07T00:01:45Z" level=info msg="Adding janus-000024002 to KubeVirt list of permitted devices"
time="2022-10-07T00:01:46Z" level=info msg="Attempting to disable passthrough for janus-000024000"
time="2022-10-07T00:01:46Z" level=info msg="Removing janus-000024000 from KubeVirt list of permitted devices"
time="2022-10-07T00:01:47Z" level=info msg="Attempting to enable passthrough for janus-000024000"
time="2022-10-07T00:01:47Z" level=info msg="Adding janus-000024000 to KubeVirt list of permitted devices"
time="2022-10-07T00:01:47Z" level=info msg="Attempting to disable passthrough for janus-000004000"
time="2022-10-07T00:01:47Z" level=info msg="Removing janus-000004000 from KubeVirt list of permitted devices"
time="2022-10-07T00:01:48Z" level=info msg="Attempting to enable passthrough for janus-000004000"
time="2022-10-07T00:01:48Z" level=info msg="Adding janus-000004000 to KubeVirt list of permitted devices"
time="2022-10-07T00:01:50Z" level=info msg="Attempting to disable passthrough for janus-000024001"
time="2022-10-07T00:01:50Z" level=info msg="Removing janus-000024001 from KubeVirt list of permitted devices"
time="2022-10-07T00:01:50Z" level=info msg="Attempting to disable passthrough for janus-000024003"
time="2022-10-07T00:01:50Z" level=info msg="Removing janus-000024003 from KubeVirt list of permitted devices"
time="2022-10-07T00:01:51Z" level=info msg="Attempting to enable passthrough for janus-000024001"
time="2022-10-07T00:01:51Z" level=info msg="Attempting to enable passthrough for janus-000024003"
time="2022-10-07T00:01:51Z" level=info msg="Adding janus-000024001 to KubeVirt list of permitted devices"
time="2022-10-07T00:01:52Z" level=info msg="Adding janus-000024003 to KubeVirt list of permitted devices"
time="2022-10-07T00:02:04Z" level=info msg="Reconciling PCI Device Claims list"
time="2022-10-07T00:02:14Z" level=info msg="Attempting to disable passthrough for janus-000004001"
time="2022-10-07T00:02:14Z" level=info msg="Removing janus-000004001 from KubeVirt list of permitted devices"
time="2022-10-07T00:02:14Z" level=info msg="Attempting to enable passthrough for janus-000004001"
time="2022-10-07T00:02:15Z" level=info msg="Adding janus-000004001 to KubeVirt list of permitted devices"
time="2022-10-07T00:02:15Z" level=info msg="Attempting to disable passthrough for janus-000024002"
time="2022-10-07T00:02:16Z" level=info msg="Removing janus-000024002 from KubeVirt list of permitted devices"
time="2022-10-07T00:02:16Z" level=info msg="Attempting to enable passthrough for janus-000024002"
time="2022-10-07T00:02:16Z" level=info msg="Adding janus-000024002 to KubeVirt list of permitted devices"
time="2022-10-07T00:02:18Z" level=info msg="Attempting to disable passthrough for janus-000024000"
time="2022-10-07T00:02:18Z" level=info msg="Removing janus-000024000 from KubeVirt list of permitted devices"
time="2022-10-07T00:02:18Z" level=info msg="Attempting to enable passthrough for janus-000024000"
time="2022-10-07T00:02:19Z" level=info msg="Adding janus-000024000 to KubeVirt list of permitted devices"
time="2022-10-07T00:02:19Z" level=info msg="Attempting to disable passthrough for janus-000004000"

Notice that the enable and disable can be running in parallel.

Controller doesn't update permittedHostDevices

Environment:

I have Harvester cluster v1.3.1

  • one master DELL T7820 with nVidia P620
  • one worker DELL T7920 with nVidia P5000

Steps:

  • I enabled passthrough for both GPUs.
  • I created an Ubuntu 22 VM, added the P5000 device to it, and started it.

Error:

  • HostDevice nvidia.com/GP104GL_QUADRO_P5000 is not permitted in permittedHostDevices configuration

While running
kubectl get kubevirt -n harvester-system -o yaml
I see that it is not in the list.

Looking in the logs, I see the controller attempts to add it:

kubectl logs -n harvester-system harvester-pcidevices-controller-fgml9
time="2024-06-22T22:14:52Z" level=info msg="Adding t7920-0000d5000 to KubeVirt list of permitted devices"
time="2024-06-22T22:14:52Z" level=info msg="Enabling passthrough for PDC: t7920-0000d5000"
time="2024-06-22T22:14:52Z" level=info msg="Binding device t7920-0000d5000 [10de 1bb0] to vfio-pci"
time="2024-06-22T22:14:52Z" level=info msg="Binding device 0000:d5:00.0 vfio-pci"
time="2024-06-22T22:14:52Z" level=error msg="error syncing 't7920-0000d5000': handler PCIDeviceClaimReconcile: error writing to bind file: write /sys/bus/pci/drivers/vfio-pci/bind: device or resource busy, requeuing"
time="2024-06-22T22:14:52Z" level=info msg="Adding t7920-0000d5001 to KubeVirt list of permitted devices"
time="2024-06-22T22:14:52Z" level=info msg="Enabling passthrough for PDC: t7920-0000d5001"
time="2024-06-22T22:14:53Z" level=info msg="Binding device t7920-0000d5001 [10de 10f0] to vfio-pci"
time="2024-06-22T22:14:53Z" level=info msg="Binding device 0000:d5:00.1 vfio-pci"
time="2024-06-22T22:14:53Z" level=error msg="error syncing 't7920-0000d5001': handler PCIDeviceClaimReconcile: error writing to bind file: write /sys/bus/pci/drivers/vfio-pci/bind: device or resource busy, requeuing"
time="2024-06-22T22:14:53Z" level=info msg="Adding t7920-0000d5000 to KubeVirt list of permitted devices"
time="2024-06-22T22:14:53Z" level=info msg="Creating DevicePlugin: nvidia.com/GP104GL_QUADRO_P5000"
time="2024-06-22T22:14:53Z" level=info msg="Started DevicePlugin: nvidia.com/GP104GL_QUADRO_P5000"
{"component":"","level":"info","msg":"Initialized DevicePlugin: nvidia.com/GP104GL_QUADRO_P5000","pos":"device_manager.go:206","timestamp":"2024-06-22T22:14:53.581669Z"}
time="2024-06-22T22:14:53Z" level=info msg="Adding t7920-0000d5001 to KubeVirt list of permitted devices"
time="2024-06-22T22:14:53Z" level=info msg="Creating DevicePlugin: nvidia.com/GP104_HIGH_DEFINITION_AUDIO_CONTROLLER"
time="2024-06-22T22:14:53Z" level=info msg="Started DevicePlugin: nvidia.com/GP104_HIGH_DEFINITION_AUDIO_CONTROLLER"
{"component":"","level":"info","msg":"Initialized DevicePlugin: nvidia.com/GP104_HIGH_DEFINITION_AUDIO_CONTROLLER","pos":"device_manager.go:206","timestamp":"2024-06-22T22:14:53.899496Z"}
time="2024-06-22T22:14:54Z" level=info msg="Adding t7920-0000d5000 to KubeVirt list of permitted devices"
time="2024-06-22T22:14:54Z" level=info msg="Adding t7920-0000d5001 to KubeVirt list of permitted devices"
time="2024-06-22T22:14:55Z" level=info msg="Adding t7920-0000d5000 to KubeVirt list of permitted devices"
time="2024-06-22T22:14:55Z" level=info msg="Adding t7920-0000d5001 to KubeVirt list of permitted devices"

But something is preventing that entry from reaching "permittedHostDevices -> pciHostDevices".

I was able to get it resolved by running

kubectl edit kubevirt -n harvester-system -o yaml

and adding the entry manually:

    - externalResourceProvider: true
      pciVendorSelector: 10de:1bb0
      resourceName: nvidia.com/GP104GL_QUADRO_P5000

Then VM started.

But we still need to figure out why it was not added as expected.

Also, I see a lot of duplicates in that section:
kubevirt.yaml.txt

attemptToEnablePassthrough func returns an object conflict

The suggested modification is as follows: re-fetch the object inside retry.RetryOnConflict so a stale resourceVersion is retried instead of failing:

retryErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
	newPdc, err := h.pdcClient.Get(pdc.Name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	newPdc.Status.KernelDriverToUnbind = pd.Status.KernelDriverInUse
	newPdc.Status.PassthroughEnabled = true
	_, err = h.pdcClient.UpdateStatus(newPdc)
	return err
})

[BUG] Making a PCIDeviceClaim for a device in an IOMMU group can violate IOMMU restrictions, preventing successful allocation to a VM

For example, suppose you have an NVIDIA GPU with

  • a VGA controller on board and
  • an Audio device on board
    Both with their own PCI addresses. They belong to the same IOMMU group.

If you claim just the VGA controller but not the Audio device, then it cannot be attached to a KubeVirt VM. Currently, users have to claim both and then assign them both to the same VM, but this is a sub-par user experience. It allows the user to violate IOMMU restrictions.

One way to solve this problem would be to have the PCIDeviceClaim controller automatically create claims for all members of the same IOMMU group. But assignment to a VM will have to change as well, since it's possible that two VMs get assigned members of the same IOMMU group.

See this archived page for a more detailed discussion of this failure mode.

Build failure

When I try to build this, I see a bunch of timeout errors connecting to proxy.golang.org

pkg/deviceplugins/common.go:37:2: kubevirt.io/kubevirt@v0.54.0: Get "https://proxy.golang.org/kubevirt.io/kubevirt/@v/v0.54.0.zip": dial tcp: lookup proxy.golang.org on 192.168.1.1:53: read udp 172.17.0.3:48019->192.168.1.1:53: i/o timeout

I can curl proxy.golang.org on the host, but when I enter the container with dapper sh, I can't curl anything. This is a container networking issue:

% dapper sh
sh-4.4#   curl https://proxy.golang.org
curl: (6) Could not resolve host: proxy.golang.org

How to install pcidevices into k8s using helm

I'm not using Harvester.
According to issues #31 and #33:

  1. kubectl apply -f crd.yaml
  2. helm install harvester-pcidevices-controller -n virtnest-system --debug ./harvester-pcidevices-controller-0.2.6.tgz

harvester-pcidevices-controller log

time="2024-01-15T10:20:42Z" level=info msg="No access to list CRDs, assuming CRDs are pre-created."
time="2024-01-15T10:20:42Z" level=info msg="Registering PCI Device Claims controller"
time="2024-01-15T10:20:42Z" level=fatal msg="error registering pcidevicclaim controllers :pcideviceclaims.devices.harvesterhci.io is forbidden: User \"system:serviceaccount:virtnest-system:harvester-pcidevices-controller\" cannot list resource \"pcideviceclaims\" in API group \"devices.harvesterhci.io\" at the cluster scope"
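The fatal error says the controller's ServiceAccount cannot list pcideviceclaims at cluster scope, which points to missing RBAC when the chart is installed outside Harvester. A sketch of the kind of ClusterRole/ClusterRoleBinding that would be needed (resource and verb lists are assumptions; check the controller's actual requirements):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: harvester-pcidevices-controller
rules:
  - apiGroups: ["devices.harvesterhci.io"]
    resources: ["pcidevices", "pcidevices/status", "pcideviceclaims", "pcideviceclaims/status"]
    verbs: ["get", "list", "watch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: harvester-pcidevices-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: harvester-pcidevices-controller
subjects:
  - kind: ServiceAccount
    name: harvester-pcidevices-controller
    namespace: virtnest-system
```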

GPU doesn't get unbound

Environment:

I have Harvester cluster v1.3.1

  • one master DELL T7820 with nVidia P620
  • one worker DELL T7920 with nVidia P5000

Steps

  • I enabled passthrough for both GPUs.
  • I created Ubuntu 22 VM, added P5000 device to it. Started.

Errors

Can't pass through the nVidia P5000, as it is still bound to the host.

Looking at the kernel logs:

dmesg -T
[Wed Jun 12 02:42:37 2024] vfio-pci 0000:b3:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
[Wed Jun 12 02:42:37 2024] vfio-pci 0000:b3:00.0: BAR 1: can’t reserve [mem 0xe0000000-0xefffffff 64bit pref]
[Wed Jun 12 02:43:45 2024] vfio-pci 0000:b3:00.0: BAR 1: can’t reserve [mem 0xe0000000-0xefffffff 64bit pref]

Workaround:

Only after I manually unbind it can I pass it through:

echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

But those unbindings get reset on node reboot, and I had to repeat them.

PDC controller is slow to act on user claims

The PDC controller is slow to act on user claims because it depends on a 20s timer.

A better solution is to work as a normal controller: watch and react to PDC CRD objects, and use the timer only to enqueue itself for routine reconciliation.

[Bug] KubeVirt Allow-list doesn't always get refreshed

Changing the KubeVirt list of permitted devices triggers a change to the .status.allocatable value. If you add multiple identical devices, only the first one is counted. Also, if you remove a device, the allocatable list is not updated.

Proposed solution: remove the device and then re-add it to trigger an update.
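The proposed remove-then-re-add can be sketched as a pure list operation on the permitted-device names (illustrative only; the real list lives in the KubeVirt CR and is patched through the apiserver). It also de-duplicates repeated entries of the refreshed device:

```go
package main

import "fmt"

// refreshDevice removes every occurrence of name and appends one fresh
// entry, so the resulting list always differs and triggers an update.
func refreshDevice(permitted []string, name string) []string {
	out := permitted[:0:0] // zero-cap slice: appends allocate, original stays intact
	for _, d := range permitted {
		if d != name {
			out = append(out, d)
		}
	}
	return append(out, name)
}

func main() {
	devs := []string{"nvidia.com/GP104GL_QUADRO_P5000", "nvidia.com/GP104GL_QUADRO_P5000", "intel.com/X540"}
	fmt.Println(refreshDevice(devs, "nvidia.com/GP104GL_QUADRO_P5000"))
	// prints [intel.com/X540 nvidia.com/GP104GL_QUADRO_P5000]
}
```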

Introduce a separate `resourceName` property for KubeVirt device plugin names

Description

I suggest using the .status.description property for a human-readable lspci-like description of the PCI device and introducing a .spec.resourceName property for the device plugin name. It would be nice to allow users to edit this.

Describe the results you received:

$ kubectl get pcidevices
NAME                           ADDRESS        VENDOR ID   DEVICE ID   NODE NAME   DESCRIPTION                     KERNEL DRIVER IN USE
...
minikube-10de-174d-000001000   0000:01:00.0   10de        174d        minikube    nvidia.com/GM108MGeForceMX130   nouveau
...

Describe the results you expected:

$ kubectl get pcidevices
NAME                           ADDRESS        VENDOR ID   DEVICE ID   CLASS ID   NODE NAME   RESOURCE NAME                  DESCRIPTION                                                 KERNEL DRIVER IN USE
...
minikube-10de-174d-000001000   0000:01:00.0   10de        174d        0302       minikube    nvidia.com/GM108MGeForceMX130  3D controller: NVIDIA Corporation GM108M [GeForce MX130]    nouveau
...

How to use it?

How do I use this with Harvester? I created the CRDs (crds.yaml), PCIDevice and PCIDeviceClaim; what is the next step?

Can anyone help?
