
cockpituous's People

Contributors

allisonkarlitskaya, ashcrow, croissanne, dperpeet, edwardbetts, henrywang, jelly, jkozol, kkoukiou, larskarlitski, m4rtink, martinpitt, marusak, mvollmer, nmav, petervo, stefwalter, tomasmatus, tomastomecek, vpodzime


cockpituous's Issues

Move cockpit specific Debian source package mangling from cockpituous into cockpit.git

In cockpit PR #5726 there is a side discussion about cleaning up some cockpit specific bits, like:

    # HACK: don't install or package test assets
    sed -i -e 's/install-test-assets//' $WORKDIR/build/debian/rules
    sed -i -e '/Package: cockpit-test-assets/,/^$/d' $WORKDIR/build/debian/control

We'll need another quick hack to drop the -pcp package for debian testing/unstable soon.

But medium-term I think it would be nicer if the actual work were done in release-dsc, and release-{debian,ubuntu} would not unpack, mangle, and repack the .dsc again. Instead, release-{debian,ubuntu} could pass the target release to release-dsc, which could then check whether the cockpit source tree has a "mangling" script for the given release, run it, and put the right target distribution into debian/changelog, so that we don't have to unpack/repack the .dsc again.
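
A minimal sketch of what such a hook in release-dsc could look like (the script path and the $TARGET_RELEASE variable are assumptions, not existing code):

  # hypothetical hook in release-dsc: run a per-release mangling script
  # from the cockpit source tree, if the project ships one
  if [ -x "$WORKDIR/build/tools/debian/mangle-$TARGET_RELEASE" ]; then
      "$WORKDIR/build/tools/debian/mangle-$TARGET_RELEASE" "$WORKDIR/build"
  fi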

tasks: Bring back git caching

With the move to job-runner we lost the caching of ~/.cache/cockpit-dev, in particular the node-cache (the biggest of all).

While at it, it would also be really helpful to start caching pixel refs, as they also take a long time to check out. That was proposed in cockpit-project/cockpit#15995, but it was eventually found preferable to treat them as a git submodule.

These should go into a persistent volume, either a single one across all runners, or a per-monitor-container instance one, depending on whether git/our scripts are safe for concurrent writes.
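
A minimal sketch of the shared-volume variant with podman (the volume name and the cache path are assumptions; the real job-runner configuration may look different):

  # create a persistent named volume once per host
  podman volume create cockpit-dev-cache
  # mount it into every runner container at the expected cache location
  podman run -d -v cockpit-dev-cache:/home/user/.cache/cockpit-dev quay.io/cockpit/tasks

If git or our scripts are not safe for concurrent writes, the same kind of volume could instead be created once per monitor container instance.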

build fails for welder-web

Latest welder-web uses expat, which needs to be compiled in release-source. This fails:

  cc '-DNODE_GYP_MODULE_NAME=expat' '-DUSING_UV_SHARED=1' '-DUSING_V8_SHARED=1' '-DV8_DEPRECATION_WARNINGS=1' '-D_LARGEFILE_SOURCE' '-D_FILE_OFFSET_BITS=64' '-DPIC' '-DHAVE_EXPAT_CONFIG_H' '-DNDEBUG' -I/tmp/home/.node-gyp/8.11.3/include/node -I/tmp/home/.node-gyp/8.11.3/src -I/tmp/home/.node-gyp/8.11.3/deps/uv/include -I/tmp/home/.node-gyp/8.11.3/deps/v8/include -I../deps/libexpat -I../deps/libexpat/lib  -fPIC -pthread -Wall -Wextra -Wno-unused-parameter -m64 -Wno-missing-field-initializers -O3 -fno-omit-frame-pointer  -MMD -MF ./Release/.deps/Release/obj.target/expat/deps/libexpat/lib/xmlrole.o.d.raw   -c -o Release/obj.target/expat/deps/libexpat/lib/xmlrole.o ../deps/libexpat/lib/xmlrole.c
  rm -f Release/obj.target/deps/libexpat/libexpat.a && ar crs Release/obj.target/deps/libexpat/libexpat.a Release/obj.target/expat/deps/libexpat/lib/xmlparse.o Release/obj.target/expat/deps/libexpat/lib/xmltok.o Release/obj.target/expat/deps/libexpat/lib/xmlrole.o
  rm -rf "Release/libexpat.a" && cp -af "Release/obj.target/deps/libexpat/libexpat.a" "Release/libexpat.a"
  g++ '-DNODE_GYP_MODULE_NAME=node_expat' '-DUSING_UV_SHARED=1' '-DUSING_V8_SHARED=1' '-DV8_DEPRECATION_WARNINGS=1' '-D_LARGEFILE_SOURCE' '-D_FILE_OFFSET_BITS=64' '-DBUILDING_NODE_EXTENSION' -I/tmp/home/.node-gyp/8.11.3/include/node -I/tmp/home/.node-gyp/8.11.3/src -I/tmp/home/.node-gyp/8.11.3/deps/uv/include -I/tmp/home/.node-gyp/8.11.3/deps/v8/include -I../../nan -I../deps/libexpat -I../deps/libexpat/lib  -fPIC -pthread -Wall -Wextra -Wno-unused-parameter -m64 -O3 -fno-omit-frame-pointer -fno-rtti -fno-exceptions -std=gnu++0x -MMD -MF ./Release/.deps/Release/obj.target/node_expat/node-expat.o.d.raw   -c -o Release/obj.target/node_expat/node-expat.o ../node-expat.cc
make[1]: g++: Command not found
make[1]: *** [node_expat.target.mk:102: Release/obj.target/node_expat/node-expat.o] Error 127
make[1]: Leaving directory '/tmp/home/build/source.jd1bi0/repo/node_modules/node-expat/build'
gyp ERR! build error 
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/usr/lib/node_modules/npm/node_modules.bundled/node-gyp/lib/build.js:258:23)
gyp ERR! stack     at emitTwo (events.js:126:13)
gyp ERR! stack     at ChildProcess.emit (events.js:214:7)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:198:12)
gyp ERR! System Linux 3.10.0-693.21.1.el7.x86_64
gyp ERR! command "/usr/bin/node.real" "/usr/lib/node_modules/npm/node_modules.bundled/node-gyp/bin/node-gyp.js" "rebuild"

It could be that this needs more than just installing gcc-c++; that needs to be tested.
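
For reference, giving the container a native toolchain would look roughly like this (the exact package set is a guess until it is actually tested):

  # in the release container image build (Fedora/CentOS based)
  dnf install -y gcc-c++ make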

CC @bcl

tests: /cache/github does not exist

Noticed in cockpit-project/cockpit#9651: Apparently when starting the pod, /cache/github does not exist:

sh-4.4$ ls -l /build/github /cache/github
ls: cannot access '/cache/github': No such file or directory
lrwxrwxrwx. 1 root root 13 Jul  2 06:11 /build/github -> /cache/github

sh-4.4$ mkdir -p /build/github
mkdir: cannot create directory ‘/build/github’: File exists

I.e., one cannot mkdir -p "through" a dangling symlink. The tests container attempts to pre-create /cache/github/, but that is in vain as /cache gets overmounted by the volume. The cockpit-tests runner does it again, but apparently that failed to work somehow?

This breaks tests-policy for Cockpit tests.
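
A possible workaround in the runner, assuming /cache is writable at that point, is to create the real directory behind the symlink instead of trying to mkdir through it:

  # create the symlink target; /build/github then resolves again
  mkdir -p /cache/github
  ls -ld /build/github/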

Factorize home dir setup into infra-base container

The release and tests containers both have some setup for creating and chowning /home/user, to make it work with both the static user (uid 1111) and OpenShift's random dynamic users. Factor this into the cockpit/infrabase container.
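
A sketch of what the shared setup steps in the infra-base image might look like (based on the usual OpenShift arbitrary-uid conventions, not the exact existing code):

  # make /home/user usable for the static uid 1111 as well as for
  # OpenShift's random uids, which always run with gid 0
  useradd --uid 1111 --gid 0 --home-dir /home/user --no-create-home user
  mkdir -p /home/user
  chown 1111:0 /home/user
  chmod g+rwx /home/user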

github PR statuses are not visible in github app

GitHub finally has an Android app, and it's really nice, but the thing I'd most like to use it for is checking the status of tests (when I'm at lunch or whatever). This seems to be a supported feature of the app, but when I go to check, I see a message that says "Continuous integration not added".

I find that surprising, because we do have continuous integration. Apparently it's not reported in a way that the GitHub app recognises as such, however. The feature works for other projects.

See the screenshots:

Screenshot: Cockpit broken CI results

Screenshot: random other project where it's working

The sink should take care of our master testing badges

These are the little images we display in README.md.

A log stream should be able to say in its final JSON message that it wants to update such a test badge. It should give the id of the badge, a name text, a status text, and a symbolic status from 'passed', 'failed', or 'error'. The sink will then create that badge in a well-known place.
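
The final JSON message of a log stream could then look something like this (the field names are only a suggestion):

  {
    "badge": {
      "id": "cockpit-master-verify-fedora",
      "name": "Fedora",
      "status": "2 tests failed",
      "result": "failed"
    }
  }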

The badges need to be served with Cache-Control: no-cache, see github/markup#224

The sink should maintain a general overview page

So that one can see what things are currently running, and which have recently completed.

It could do this by atomically writing little files into a dedicated directory when a run starts and when it finishes. After writing the file, it could spawn an external 'status generator' that reads all the files and generates the overview page.
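
Atomic writing can be as simple as writing to a temporary file and renaming it; a sketch (the generate-overview helper and the directory variables are hypothetical):

  # write the per-job status file atomically
  printf '%s\n' "$status_json" > "$statusdir/$job_id.tmp"
  mv "$statusdir/$job_id.tmp" "$statusdir/$job_id.json"
  # regenerate the overview page from all status files, also atomically
  ./generate-overview "$statusdir" > "$wwwdir/index.html.tmp"
  mv "$wwwdir/index.html.tmp" "$wwwdir/index.html"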

Retry uploading releases to GitHub

In a recent release the curl upload failed, which means the whole release job had to be retried, and that can take a while. Ideally the upload would be retried ~3 times on failure.

######################################################################## 100.0%
> Uploading file: _release/source/cockpit-machines-node-261.tar.xz

#####                                                                      8.1%curl: (56) OpenSSL SSL_read: Connection reset by peer, errno 104

release-github: github api call failed
release-runner: failed code 1: release-github

From @martinpitt: see run_curl() in release-github.
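
A sketch of how the upload call could be retried; curl's built-in retry options may already be enough for transient network errors (the variables and the exact curl invocation are assumptions, not the actual run_curl() code):

  # retry the GitHub asset upload up to 3 times with a short delay
  curl --fail --retry 3 --retry-delay 30 --retry-all-errors \
      -H "Authorization: token $GITHUB_TOKEN" \
      -H "Content-Type: application/x-xz" \
      --data-binary @"$TARBALL" \
      "$UPLOAD_URL?name=$(basename "$TARBALL")"

Note that --retry-all-errors needs a reasonably recent curl; otherwise a simple for loop around the call does the same job.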

release: clean up old job objects/pods

Follow-up from PR #216:

The job object stays around after success:

$ oc get jobs
NAME          DESIRED   SUCCESSFUL   AGE
release-job   1         1            17m

$ oc get pods | grep job
release-job-5plk0      0/1       Completed   0          16m

The ttlSecondsAfterFinished: 0 option in the job (docs) is supposed to clean this up, but it's apparently not yet implemented in the relatively old OpenShift 3.6 setup on CentOS CI. So for now these completed job containers stay around, and we have to clean them up from time to time.

Investigate cleaning these up regularly, in the webhook script.
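
A sketch of what that regular cleanup could look like in the webhook script (the job listing is generic; there may be a nicer way with label selectors):

  # delete all jobs that have at least one succeeded pod;
  # deleting a job also removes its completed pods
  oc get jobs -o name | while read -r job; do
      succeeded=$(oc get "$job" -o jsonpath='{.status.succeeded}')
      [ "${succeeded:-0}" -ge 1 ] && oc delete "$job"
  done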

e2e browser crashes

We have a lot of random test crashes like this or this, where chromium goes bananas and gets stuck, and the test eventually times out. I've never seen this with Firefox, nor on RHOS. Sometimes it's so bad that ¾ of the tests fail that way.

I suspect RAM over-commitment, so possibly we need to dial down the parallelism.

example Feb 20

example Feb 26

example Feb 27


[mvo] I can have a look, to learn more about our testing infrastructure.

Reducing TEST_JOBS seems like a good idea; it's at 8 currently.


example Feb 26

This one happened on rhos, but doesn't show the browser OOM message. Other tests include output like this:

[0227/081645.186300:ERROR:v8_initializer.cc(798)] V8 javascript OOM (CALL_AND_RETRY_LAST).

Correct theory: SELinux denials

On the machines where this happens, there are thousands of SELinux denials:

AVC avc:  denied  { execheap } for  pid=3725873 comm="ThreadPoolForeg" scontext=system_u:system_r:container_t:s0:c439,c758 tcontext=system_u:system_r:container_t:s0:c439,c758 tclass=process permissive=0

and some

ANOM_ABEND auid=4294967295 uid=1111 gid=1111 ses=4294967295 subj=system_u:system_r:container_t:s0:c439,c758 pid=3725873 comm="headless_shell" exe="/usr/lib64/chromium-browser/headless_shell" sig=5 res=1
systemd-coredump[3725903]: [🡕] Process 3725873 (headless_shell) of user 1111 terminated abnormally without generating a coredump.

Checking PSI cluster (rhos01):

ansible -i inventory -m shell -a 'journalctl | grep ThreadPoolForeg' openstack_tasks

→ not a single SELinux denial. Since January, there are 11 instances of

ThreadPoolForeg invoked oom-killer: gfp_mask=0x8c40(GFP_NOFS|__GFP_NOFAIL), order=0, oom_score_adj=0
Jan 14 23:12:49 rhos-01-19 kernel: CPU: 6 PID: 728617 Comm: ThreadPoolForeg Not tainted 6.5.6-300.fc39.x86_64 #1

and that's a sign that we stress the rhos01 instances quite a bit. But it happens rarely enough that it's not an immediate concern IMHO.

On Feb 27 15:45 CET, @martinpitt ran ansible -f20 -i inventory -m shell -a 'setenforce 0' e2e. Let's watch out for failures in the next days, and see if it still happens. If not, and that's it, then we can see if we can fix this in a finer-grained way.

A big difference between rhos and e2e is that the former runs "classic" Fedora 39 Server, while the latter runs Fedora IoT (also based on 39). But that shouldn't matter much for the on-disk runtime bits. However, all our machines are quite a bit behind in terms of upgrades (that doesn't matter much, as these aren't internet-facing machines):

On rhos (classic rpm):

selinux-policy-targeted-38.28-1.fc39.noarch
kernel-core-6.5.6-300.fc39.x86_64
podman-4.8.2-1.fc39.x86_64

on e2e (IoT):

selinux-policy-targeted-39.3-1.fc39.noarch
kernel-core-6.6.9-200.fc39.x86_64
podman-4.8.2-1.fc39.x86_64

On Feb 28 08:15 CET, all our machines were dnf/ostree upgraded to latest package versions and rebooted, so that they are now very similar. They both have selinux-policy-targeted-39.4-1.fc39.noarch and podman-4.9.3-1.fc39.x86_64. e2e has kernel-core-6.7.5-200.fc39.x86_64, while rhos has kernel-core-6.7.6-200.fc39.x86_64.

Also, I ran setenforce 0 everywhere right away: we can still see the denials with permissive=1 without having to actually fail our tests. If we still get them, let's investigate/report them and allow execheap in a more targeted fashion.

So tomorrow, let's run

ansible -f20 -i inventory -m shell -a 'journalctl -b | grep AVC.*denied' e2e

and on openstack_tasks too.

Theory: /dev/shm

Current theory: Our --shm-size=1024M is not enough for TEST_JOBS=8, and recent chromium versions just need moar of it. Let's try to provoke that error in a local podman container with smaller SHM and/or --memory, and see if it's shm or heap.

Scaled down to ¼, the parameters as in production are:

podman run -it --rm --device=/dev/kvm --memory=6g --pids-limit=4096 --shm-size=256m -v ~/.cache/cockpit-images:/cache/images -e TEST_JOBS=2 quay.io/cockpit/tasks sh -exc 'git clone https://github.com/cockpit-project/cockpit-podman; cd cockpit-podman/; test/run'

which works fine locally. However, with --shm-size=80m the test fails very quickly with

DP: {"source":"network","level":"error","text":"Failed to load resource: net::ERR_INSUFFICIENT_RESOURCES","timestamp":1709040209423.008,"url":"http://127.0.0.2:9091/cockpit/@localhost/podman/index.js","networkRequestId":"928.355"}
RuntimeError: ReferenceError: cockpit is not defined

[0227/132330.730287:ERROR:validation_errors.cc(117)] Invalid message: VALIDATION_ERROR_DESERIALIZATION_FAILED
[0227/132330.730345:ERROR:interface_endpoint_client.cc(702)] Message 1033874247 rejected by interface viz.mojom.CopyOutputResultSender
[0227/132330.730396:ERROR:browser_child_process_host_impl.cc(754)] Terminating child process for bad message: Received bad user message: Validation failed for viz.mojom.CopyOutputResultSender.0  [VALIDATION_ERROR_DESERIALIZATION_FAILED]

and some more noise. podman exec -itl watch df -h /dev/shm also shows, at the start of the test (before it fails), that it quickly eats away space:

Filesystem      Size  Used Avail Use% Mounted on
shm              80M   53M   28M  66% /dev/shm

This already happens during image-customize, so I take it that QEMU itself has also started to use /dev/shm. With --shm-size=180m it works well enough, with Use% going up to ~72%. But it changes quickly, so I may have missed a few peaks.

A better command is

podman exec -itl bash -c 'while true; do df -h /dev/shm | grep ^shm; sleep 0.5; done'

Similar with --shm-size=140m -- tests start to fail with "cockpit is not defined" (like above), but this time /dev/shm actually hit the roof (97%).

With --shm-size=200m the error message looks different:

RuntimeError: TypeError: Cannot read properties of undefined (reading 'includes')

...which is interesting, as I think I've seen this somewhere before. It could of course be a bug in our page, and totally unrelated. And it's not the OOM hang.

I just thought about a counter-argument: rhos-01 seems much less affected (if at all), but it uses the same SHM config. It only runs one container per host, as opposed to 4 on e2e, so RAM/heap over-commitment matters, but not /dev/shm.

Theory: general RAM/heap

I ran the test with --memory=3g, which should be too little for two parallel tests (it's half the production size) -- our nond machines alone already take > 2 GiB. Interestingly, nothing inside the container actually seems to care -- free, top, etc. all show 16 GiB (my laptop's RAM size). Looking at the container cgroup:

❱❱❱ cat /sys/fs/cgroup/user.slice/user-1000.slice/[email protected]/user.slice/libpod-dd7deee8be573bedbb870c342ec3f25fd649d257118a775087690607153431a6.scope/memory.{max,peak}
3221225472
3221237760

i.e. it does hit the roof, but nothing consequential happens -- the tests run happily. Then again, with --memory=1g the rpmbuild inside the VM fails, so the option does do something. With 2g it successfully builds, but both qemu and the browser (headless_shell) get hit over the head by the OOM killer. So, conclusions: (1) the memory limit option does work, (2) even half the allocated size is enough, and (3) it doesn't fail in the observed mode.

image-prune fails on minio s3 store

Seen in e.g. cockpit-project/bots#5539 :

Pruning debian-testing-0da420269dbb3233e29e5d46457ba578c62661d1998da2ea6b76724c3d892fc3.qcow2
Traceback (most recent call last):
  File "/work/bots/make-checkout-workdir/image-prune", line 231, in <module>
    main()
  File "/work/bots/make-checkout-workdir/image-prune", line 222, in main
    collection.prune(keepers,
  File "/work/bots/make-checkout-workdir/image-prune", line 139, in prune
    self.delete_file(image)
  File "/work/bots/make-checkout-workdir/image-prune", line 180, in delete_file
    with s3.urlopen(self.url._replace(path='/' + filename), method='DELETE'):
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/work/bots/make-checkout-workdir/lib/s3.py", line 119, in urlopen
    return urllib.request.urlopen(request, context=host_ssl_context(url.netloc))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/urllib/request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/urllib/request.py", line 525, in open
    response = meth(req, response)
               ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/urllib/request.py", line 634, in http_response
    response = self.parent.error(
               ^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/urllib/request.py", line 563, in error
    return self._call_chain(*args)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/urllib/request.py", line 496, in _call_chain
    result = func(*args)
             ^^^^^^^^^^^
  File "/usr/lib64/python3.11/urllib/request.py", line 643, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
Traceback (most recent call last):
  File "/work/bots/make-checkout-workdir/image-upload", line 113, in <module>
    main()
  File "/work/bots/make-checkout-workdir/image-upload", line 106, in main
    success |= upload(store, source, public, prune=args.prune_s3)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/work/bots/make-checkout-workdir/image-upload", line 56, in upload
    subprocess.check_call([f'{BOTS_DIR}/image-prune', '--s3', store])
  File "/usr/lib64/python3.11/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/work/bots/make-checkout-workdir/image-prune', '--s3', 'http://10.0.190.233/images/']' returned non-zero exit status 1.

This is perfectly reproducible on rhos-01-1 with

./image-prune --s3 http://10.0.190.233/images/

This has been happening for a while. So far I thought there was a new minio container and that the storage format had changed, but that's not the case right now. My current theory is that this just always happens -- there simply isn't anything to prune for the first two weeks or so after I removed the s3 volume and restarted the container.

Let's try to reproduce this in a local setup.
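
A local reproducer could look roughly like this (image, bucket name, and credentials are made up; the bucket still needs to be created and populated, e.g. with the mc client, and image-prune needs whatever credential setup lib/s3.py expects):

  # throw-away minio server
  podman run -d --rm --name minio -p 9000:9000 \
      -e MINIO_ROOT_USER=minioadmin -e MINIO_ROOT_PASSWORD=minioadmin \
      quay.io/minio/minio server /data
  # then prune against the local bucket
  ./image-prune --s3 http://localhost:9000/images/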

cockpit-tasks: image-prune fails

Seen in cockpit-tasks container logs:

May 13 06:28:24 ci-srv-01 docker[30279]: + for region in eu-central-1 us-east-1
May 13 06:28:24 ci-srv-01 docker[30279]: + ./image-prune --s3 https://cockpit-images.us-east-1.linodeobjects.com/
May 13 06:28:25 ci-srv-01 docker[30279]: Traceback (most recent call last):
May 13 06:28:25 ci-srv-01 docker[30279]: File "/work/bots/./image-prune", line 232, in <module>
May 13 06:28:25 ci-srv-01 docker[30279]: main()
May 13 06:28:25 ci-srv-01 docker[30279]: File "/work/bots/./image-prune", line 223, in main
May 13 06:28:25 ci-srv-01 docker[30279]: collection.prune(keepers,
May 13 06:28:25 ci-srv-01 docker[30279]: File "/work/bots/./image-prune", line 123, in prune
May 13 06:28:25 ci-srv-01 docker[30279]: for mtime, image in sorted(self.list_files()):
May 13 06:28:25 ci-srv-01 docker[30279]: File "/work/bots/./image-prune", line 169, in list_files
May 13 06:28:25 ci-srv-01 docker[30279]: result = s3.list_bucket(self.url)
May 13 06:28:25 ci-srv-01 docker[30279]: File "/work/bots/lib/s3.py", line 112, in list_bucket
May 13 06:28:25 ci-srv-01 docker[30279]: with urlopen(url) as response:
May 13 06:28:25 ci-srv-01 docker[30279]: File "/work/bots/lib/s3.py", line 107, in urlopen
May 13 06:28:25 ci-srv-01 docker[30279]: return urllib.request.urlopen(request)
May 13 06:28:25 ci-srv-01 docker[30279]: File "/usr/lib64/python3.10/urllib/request.py", line 216, in urlopen
May 13 06:28:25 ci-srv-01 docker[30279]: return opener.open(url, data, timeout)
May 13 06:28:25 ci-srv-01 docker[30279]: File "/usr/lib64/python3.10/urllib/request.py", line 525, in open
May 13 06:28:25 ci-srv-01 docker[30279]: response = meth(req, response)
May 13 06:28:25 ci-srv-01 docker[30279]: File "/usr/lib64/python3.10/urllib/request.py", line 634, in http_response
May 13 06:28:25 ci-srv-01 docker[30279]: response = self.parent.error(
May 13 06:28:25 ci-srv-01 docker[30279]: File "/usr/lib64/python3.10/urllib/request.py", line 563, in error
May 13 06:28:25 ci-srv-01 docker[30279]: return self._call_chain(*args)
May 13 06:28:25 ci-srv-01 docker[30279]: File "/usr/lib64/python3.10/urllib/request.py", line 496, in _call_chain
May 13 06:28:25 ci-srv-01 docker[30279]: result = func(*args)
May 13 06:28:25 ci-srv-01 docker[30279]: File "/usr/lib64/python3.10/urllib/request.py", line 643, in http_error_default
May 13 06:28:25 ci-srv-01 docker[30279]: raise HTTPError(req.full_url, code, msg, hdrs, fp)
May 13 06:28:25 ci-srv-01 docker[30279]: urllib.error.HTTPError: HTTP Error 403: Forbidden

Presumably this has happened since cockpit-project/bots#3365. The tokens may not be privileged enough to list buckets?

We can probably drop the image-prune call from cockpit-tasks, but it most likely affects image refreshes as well.

rhos-01 machines often fail to boot VMs

Examples: one, two, three

Traceback (most recent call last):
  File "/work/bots/make-checkout-workdir/test/common/testlib.py", line 1428, in setUp
    machine.wait_boot()
  File "/work/bots/make-checkout-workdir/bots/machine/machine_core/ssh_connection.py", line 130, in wait_boot
    raise exceptions.Failure("Unable to reach machine {0} via ssh: {1}:{2}".format(
machine_core.exceptions.Failure: Unable to reach machine fedora-coreos-127.0.0.2-2701 via ssh: 127.0.0.2:2701

In the first example, there are tests after that wave of failures which boot a new VM and succeed. Also, this tends to happen to tests which spawn a lot of VMs (check-multi-machine), although not exclusively.

So at first sight this smells like running out of memory in the bots.

check-verify should publish new VM images

It should take a new argument, maybe --allow-image-creation; only when that is given will vm-create create missing images and check-verify upload them.
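
Usage would then look roughly like this (the flag name is just the proposal above):

  # normal run: missing images are an error
  ./check-verify
  # image refresh run: create and upload missing images
  ./check-verify --allow-image-creation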

release: Make it easier to deal with job failures

What if, when a job fails in the release runner, it slept indefinitely, with the option to retry the current job or continue with the next one?

It could listen for signals: SIGUSR1, for instance, could mean "retry the current job", and SIGUSR2 could mean "continue with the next job, skipping the failed one".

OpenShift also allows you to attach to the container like you would with docker attach; we could just listen for stdin input on a job failure to either retry or skip to the next one.
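
A sketch of the signal-based variant in the release runner shell script (function and variable names are illustrative):

  # operator decision, set asynchronously by the signal handlers
  DECISION=""
  trap 'DECISION=retry' USR1
  trap 'DECISION=skip' USR2

  # pause after a failed job until one of the signals arrives
  wait_for_decision() {
      DECISION=""
      echo "job $1 failed; send SIGUSR1 to retry, SIGUSR2 to skip" >&2
      while [ -z "$DECISION" ]; do sleep 5; done
  }

The job loop would then call wait_for_decision "$job" and branch on $DECISION.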
