
Comments (16)

GeoTower commented on August 16, 2024

Hi, I had this same problem, and by trial and error I found that it is caused by the newer versions of phonemizer. Since 3.0 (released on October 25, 2021) the preprocess.py step cannot complete: it starts out apparently fine, but each file takes longer and longer to process until it errors out, alongside runaway RAM usage.

With version 2.2.2 there is no problem, in case you want to pin it in requirements.txt. Since then I have been using the Oct 13, 2021 commit.

from forwardtacotron.

riderjensen commented on August 16, 2024

I did some more work on this over the weekend. I spun up a VM on DigitalOcean, something really cheap: 2 CPUs and 4 GB of RAM. I SCPed all my data to the server, cloned the repo, and then ran python3 preprocess.py --path /root/data as root. It ran until about 7 files were left and then froze. I attached a picture of htop for reference. The memory isn't even fully used and the CPU isn't active at all, but there are a ton of processes just sitting in the background and not dying. I wonder whether opening and editing the files is somehow locking up processes without killing them, but the fact that the resources are not all used up is confusing. Once I kill one or two of the processes, they all die. I will keep looking and update the thread as I explore.

[screenshot: htop]


cschaefer26 commented on August 16, 2024

Hi, have you tried preprocessing with the flag --num_workers=1? It seems to me that the system does not infer the default number of cores correctly.
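For context, the usual pattern for inferring a default worker count is to derive it from the detected core count. This is a hypothetical sketch of that logic (resolve_workers is an illustrative name, not the repo's actual function), clamped so the flag always wins:

```python
# Hypothetical sketch of worker-count inference; preprocess.py's actual logic may differ.
import multiprocessing as mp

def resolve_workers(requested=None):
    """Default to one fewer than the detected cores; never return less than 1
    and never more than what the machine reports."""
    detected = mp.cpu_count()
    if requested is None:
        return max(1, detected - 1)
    return max(1, min(requested, detected))
```

With --num_workers=1 this would always resolve to a single worker, regardless of what cpu_count() reports.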


riderjensen commented on August 16, 2024

I ran it with one worker; it was obviously a lot slower. I immediately noticed a lot of PIDs in htop and a lot of RAM usage, as before. My expectation was that there would not be many PIDs, especially with only one worker running. I also ran ps aux, which told me I had two processes running.

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
riderj 28299 0.5 1.5 1670740 199168 pts/0 Sl+ 23:00 0:09 python3 preprocess.py --path /home/riderj/data --num_workers=1
riderj 28336 99.7 38.0 50427760 4968236 pts/0 Sl+ 23:00 31:23 python3 preprocess.py --path /home/riderj/data --num_workers=1

I got to 1996 files processed before I got some strange errors I had never seen before:

[screenshot: error output]

I looked at htop and saw that now all of my CPUs were somehow involved in this process?

[screenshot: htop]

This is very confusing to me; I don't know what I am doing wrong here or what is going wrong. On my actual computer I only have VS Code, Chrome with about 5 tabs, and Discord running, and I wasn't doing anything on the computer at the time; I was just on my phone in the chair.


cschaefer26 commented on August 16, 2024

Hi, it definitely seems that it's spawning too many processes despite the flag. What Python version do you use? Maybe it would make sense to try another version (with Anaconda, for example). If everything fails, the solution would be to remove the multiprocessing in the file.


cschaefer26 commented on August 16, 2024

If you want to remove multiprocessing, simply change line 118 in preprocess.py:

for i, (item_id, length, cleaned_text) in enumerate(pool.imap_unordered(preprocessor, wav_files), 1):

to

for i, (item_id, length, cleaned_text) in enumerate(map(preprocessor, wav_files), 1):
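To see why this swap works, here is a minimal self-contained sketch (preprocessor here is a trivial stand-in, not the real feature extractor): pool.imap_unordered and the builtin map yield the same items, but only the former spawns worker processes.

```python
from multiprocessing import Pool

def preprocessor(path):
    # Stand-in for the real per-file preprocessing:
    # returns a (item_id, length, cleaned_text)-shaped tuple.
    return (path, len(path), path.upper())

wav_files = ['a.wav', 'b.wav', 'c.wav']

if __name__ == '__main__':
    # Parallel version, as in preprocess.py (results arrive in arbitrary order).
    with Pool(processes=2) as pool:
        parallel = sorted(pool.imap_unordered(preprocessor, wav_files))

    # Sequential version: the builtin map, no worker processes at all.
    sequential = sorted(map(preprocessor, wav_files))

    assert parallel == sequential
```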


riderjensen commented on August 16, 2024

I switched off the multiprocessing but no luck; I ran into the same issue as when I was using --num_workers=1. Tonight I will try a few different versions of Python. The one I am running right now is Python 3.8.10. I will try a few versions tonight and update the thread with what I find.


riderjensen commented on August 16, 2024

Numba isn't supported by Python 3.9, and the default Python I am using from Ubuntu is 3.8, so I went back a version to 3.6 using Miniconda. No change unfortunately; still the same behavior at the end, where I have about 9 files just waiting there. I will try without multiprocessing on Python 3.6; that might do something.


riderjensen commented on August 16, 2024

Not sure if it is relevant, but whenever I kill the process while it's stuck with a few files left, I get some output. Thought I would paste it here at least for others to see:

Process ForkPoolWorker-24:
Process ForkPoolWorker-22:
Process ForkPoolWorker-23:
Process ForkPoolWorker-21:
Process ForkPoolWorker-20:
Process ForkPoolWorker-19:
Process ForkPoolWorker-18:
Process ForkPoolWorker-14:
Process ForkPoolWorker-8:
Process ForkPoolWorker-7:
Process ForkPoolWorker-12:
Process ForkPoolWorker-13:
Process ForkPoolWorker-2:
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 720, in next
Process ForkPoolWorker-17:
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
KeyboardInterrupt
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
KeyboardInterrupt
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
KeyboardInterrupt
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
Traceback (most recent call last):
KeyboardInterrupt
Traceback (most recent call last):
KeyboardInterrupt
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
KeyboardInterrupt
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
KeyboardInterrupt
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
KeyboardInterrupt
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
KeyboardInterrupt
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 335, in get
    res = self._reader.recv_bytes()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
KeyboardInterrupt
    item = self._items.popleft()
IndexError: pop from an empty deque

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "preprocess.py", line 118, in <module>
    for i, (item_id, length, cleaned_text) in enumerate(pool.imap_unordered(preprocessor, wav_files), 1):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 724, in next
    self._cond.wait(timeout)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/threading.py", line 295, in wait
    waiter.acquire()
KeyboardInterrupt
Process ForkPoolWorker-16:
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 334, in get
    with self._rlock:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt


cschaefer26 commented on August 16, 2024

Seems to me like a WSL problem; is this maybe related? https://stackoverflow.com/questions/44475289/inconsistent-multiprocessing-pool-performance-across-wsl-and-winpython


riderjensen commented on August 16, 2024

I checked that I have WSL2 installed, which the commenter said performs better. In my second response above on the thread I tried this on a VM, though it was pretty small, so I wanted to be thorough. This time I got a CPU-optimized VM with 16 GB of RAM and 8 CPUs and reproduced the process. I used Miniconda to install Python 3.6. Full output of the setup steps is below, but here is what I saw again, the same issues:

[screenshots: htop, output stuck again]

Code I run on server start:

apt update -y
apt upgrade -y
apt install espeak libjpeg8-dev zlib1g-dev ffmpeg gcc g++ -y
cd ~

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh
rm Miniconda3-latest-Linux-x86_64.sh
conda update conda -y
conda update --all -y
conda create --name py3.6 python=3.6 -y
conda activate py3.6

git clone https://github.com/as-ideas/ForwardTacotron
cd ForwardTacotron

pip3 install -r requirements.txt
python3 preprocess.py --path ~/data

Every single time it just seems like the memory gets stuck at full for some reason. I am starting to think it might be my dataset; is 9k files too large? Is there any way to just chunk it into smaller pieces to preprocess? My files are not strange, just wav files pulled from YouTube MP3s.

Metadata.csv is just a bunch of lines like this:

a-43-audio|but the variety today might be new hero or something
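For reference, each of those lines is just an ID and a transcript separated by a pipe; an illustrative parser (parse_metadata_line is a hypothetical name, not the repo's actual loader) would be:

```python
def parse_metadata_line(line):
    # '<item_id>|<transcript>' -> (item_id, transcript); split only on the first
    # pipe so transcripts containing '|' are not truncated.
    item_id, text = line.rstrip('\n').split('|', 1)
    return item_id, text

item_id, text = parse_metadata_line(
    'a-43-audio|but the variety today might be new hero or something')
```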


riderjensen commented on August 16, 2024

I did some more research on this today and found a Stack Overflow thread about multiprocessing hanging. The suggestion was that there is a problem with multiprocessing pools, but they fixed it with a separate module, concurrent.futures.

I changed out some code and tested it; the preprocessing didn't finish, but at least I got the error below. I will continue troubleshooting and update this thread.

██████████░░░░░░ 5735/9596 Traceback (most recent call last):
  File "preprocess.py", line 121, in <module>
    for i, (item_id, length, cleaned_text) in enumerate(pool.map(preprocessor, wav_files), 1):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/concurrent/futures/process.py", line 366, in _chain_from_iterable_of_lists
    for element in iterable:
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/concurrent/futures/_base.py", line 586, in result_iterator
    yield fs.pop().result()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/concurrent/futures/process.py", line 295, in _queue_management_worker
    shutdown_worker()
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/concurrent/futures/process.py", line 253, in shutdown_worker
    call_queue.put_nowait(None)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 129, in put_nowait
    return self.put(obj, False)
  File "/home/rider/miniconda3/envs/py3.6/lib/python3.6/multiprocessing/queues.py", line 83, in put
    raise Full
queue.Full


cschaefer26 commented on August 16, 2024

Could you run the program with multiprocessing removed, as suggested here: #65 (comment)? Just to make sure it runs through and does not fail on a corner case.


riderjensen commented on August 16, 2024

I wrote some pretty rudimentary batching code to see if I could get whatever is happening in the pool to stop hogging the RAM. It looks like this:

    # 1000 files per batch
    batch_size = 1000
    batch_amount = math.ceil(len(wav_files) / batch_size)

    for x in range(batch_amount):
        print(f'Starting batch {x}')
        # pool = Pool(processes=n_workers)
        pool = ProcessPoolExecutor(max_workers=n_workers)
        print('Pool created')

        wav_files_batch = wav_files[x * batch_size:(x + 1) * batch_size]

        for i, (item_id, length, cleaned_text) in enumerate(pool.map(preprocessor, wav_files_batch), 1):
            if item_id in text_dict:
                dataset += [(item_id, length)]
                cleaned_texts += [(item_id, cleaned_text)]
            bar = progbar(i, len(wav_files_batch))
            message = f'{bar} {i}/{len(wav_files_batch)} '
            stream(message)

        print('Shutting down pool')
        try:
            pool.shutdown()
        except Exception:
            logging.error(traceback.format_exc())
        print('Pool closed')

This code actually works with the command python preprocess.py --path ~/data --num_workers=1, which makes it basically single-threaded, right? So I think the issue was that the queue process was just getting too much data and killing threads before they returned. Killing the old pool and creating a new one every 1000 files seemed to keep the memory usage in check. I am going to try it with more workers and a smaller batch size, since with only 1 worker it's very slow. I will update the thread with the info.


riderjensen commented on August 16, 2024

Just an update: I ran the process with a batch size of 50 instead of 1000 and used all the workers except one. It finished about 4 hours faster than before (thanks to the higher number of workers), and still with no memory issues.

Based on the lack of other reported issues, it seems like this is maybe just a problem with how my data was interacting with the audio processor or something. If you want, I can make an MR with this batched-pool change; otherwise I will close the issue.


cschaefer26 commented on August 16, 2024

Glad the workaround worked. Still, there seems to be a problem with garbage collection / closing processes in your specific setup. Batching shouldn't be necessary with functioning multiprocessing, so no need for an MR.

