Comments (11)
You can try this method:
cd [your bitsandbytes install path]
e.g.: cd /home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/
(you can copy the path from the warning in your terminal)
then:
cp libbitsandbytes_cuda117.so libbitsandbytes_cpu.so
PS: "CUDA version 117" in the log is your CUDA version, which is why the source file here is "libbitsandbytes_cuda117.so"; substitute the file matching your own version.
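The same workaround can be scripted so the site-packages path doesn't have to be hardcoded. A minimal sketch, assuming bitsandbytes is importable and that a libbitsandbytes_cudaXXX.so matching your CUDA version exists; `patch_bnb_cpu_lib` and its `dry_run` flag are my own additions, not part of bitsandbytes:

```python
import importlib.util
import shutil
from pathlib import Path

def patch_bnb_cpu_lib(cuda_version="117", dry_run=True):
    """Copy libbitsandbytes_cuda<ver>.so over libbitsandbytes_cpu.so,
    keeping a .bak of the original. Returns the planned (src, dst) pair."""
    spec = importlib.util.find_spec("bitsandbytes")
    if spec is None or spec.origin is None:
        raise RuntimeError("bitsandbytes is not installed")
    pkg_dir = Path(spec.origin).parent
    src = pkg_dir / f"libbitsandbytes_cuda{cuda_version}.so"
    dst = pkg_dir / "libbitsandbytes_cpu.so"
    if not dry_run:
        shutil.copy2(dst, dst.with_name(dst.name + ".bak"))  # back up the CPU stub
        shutil.copy2(src, dst)                               # replace it with the CUDA build
    return src, dst
```

Run once with `dry_run=True` to confirm the paths look right before copying anything.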
from qlora.
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
bin /home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /home/server/anaconda3/envs/qlora did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/cuda/lib64')}
warn(msg)
/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
warn(msg)
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
loading base model EleutherAI/pythia-12b...
Downloading (…)l-00001-of-00003.bin: 100% 9.81G/9.81G [03:57<00:00, 41.3MB/s]
Downloading (…)l-00002-of-00003.bin: 100% 9.93G/9.93G [03:44<00:00, 44.3MB/s]
Downloading (…)l-00003-of-00003.bin: 100% 4.11G/4.11G [01:32<00:00, 44.5MB/s]
Downloading shards: 100% 3/3 [09:20<00:00, 186.80s/it]
Loading checkpoint shards: 0% 0/3 [00:09<?, ?it/s]
Traceback (most recent call last):
  File "/home/server/vocal/huggingface/qlora/qlora.py", line 758, in <module>
    train()
  File "/home/server/vocal/huggingface/qlora/qlora.py", line 590, in train
    model = get_accelerate_model(args, checkpoint_dir)
  File "/home/server/vocal/huggingface/qlora/qlora.py", line 263, in get_accelerate_model
    model = AutoModelForCausalLM.from_pretrained(
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 472, in from_pretrained
    return model_class.from_pretrained(
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2829, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3172, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/transformers/modeling_utils.py", line 718, in _load_state_dict_into_meta_model
    set_module_quantized_tensor_to_device(
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/transformers/utils/bitsandbytes.py", line 88, in set_module_quantized_tensor_to_device
    new_value = bnb.nn.Params4bit(new_value, requires_grad=False, **kwargs).to(device)
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/nn/modules.py", line 176, in to
    return self.cuda(device)
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/nn/modules.py", line 154, in cuda
    w_4bit, quant_state = bnb.functional.quantize_4bit(w, blocksize=self.blocksize, compress_statistics=self.compress_statistics, quant_type=self.quant_type)
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/functional.py", line 776, in quantize_4bit
    lib.cquantize_blockwise_fp16_nf4(get_ptr(None), get_ptr(A), get_ptr(absmax), get_ptr(out), ct.c_int32(blocksize), ct.c_int(n))
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/ctypes/__init__.py", line 389, in __getattr__
    func = self.__getitem__(name)
  File "/home/server/anaconda3/envs/qlora/lib/python3.11/ctypes/__init__.py", line 394, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
AttributeError: /home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cquantize_blockwise_fp16_nf4
I solved this by "conda install cudatoolkit=11.7 -y".
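Installing cudatoolkit into the environment works because the root cause in the log above is that the env did not contain libcudart.so. You can confirm the runtime library is now present with a quick check; `find_libcudart` is a hypothetical helper that mirrors the search the bitsandbytes warning describes:

```python
import os
from pathlib import Path

def find_libcudart(prefix):
    """Recursively search a prefix (e.g. $CONDA_PREFIX) for the CUDA runtime library."""
    root = Path(prefix)
    if not root.is_dir():
        return []
    return sorted(str(p) for p in root.rglob("libcudart.so*"))

# After `conda install cudatoolkit=11.7 -y` in the active env, this should
# list libcudart.so.11.0 under the env's lib/ directory.
print(find_libcudart(os.environ.get("CONDA_PREFIX", "/usr/local/cuda")))
```

An empty list means bitsandbytes will still fall back to the CPU binary.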
from qlora.
@DamonGuzman Looks like your bitsandbytes was not built with GPU support; it is loading the CPU binary instead of the CUDA one:
UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
CUDA SETUP: Loading binary /home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
from qlora.
Just in case this is helpful for someone:
If you get this with Docker, make sure to use an image with the CUDA toolkit installed, e.g.:
pytorch/pytorch:2.0.1-cuda11.7-cudnn8-devel
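A rough way to tell whether you are in a `-devel` image rather than a `-runtime` one is to check for the CUDA compiler on the PATH. This is a heuristic sketch, not an official check; `-devel` images typically ship nvcc while `-runtime` images usually do not:

```python
import shutil

def cuda_toolkit_visible():
    """Heuristic: True if the CUDA compiler (nvcc) is on the PATH."""
    return shutil.which("nvcc") is not None

print("nvcc on PATH:", cuda_toolkit_visible())
```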
from qlora.
For me, I just replaced libbitsandbytes_cpu.so with libbitsandbytes_cuda117.so, where 117 is the CUDA version I am currently using.
You may refer to this link: bitsandbytes-foundation/bitsandbytes#156 (comment)
They have an undefined symbol error with a different symbol, and it can be solved by the same method.
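You can check directly whether a given .so exports the missing symbol using ctypes; if the library bitsandbytes loads exports cquantize_blockwise_fp16_nf4, the AttributeError above goes away. A sketch using libm as a stand-in library so the demo runs anywhere; point `lib_path` at your site-packages/bitsandbytes/libbitsandbytes_cpu.so to diagnose the real case:

```python
import ctypes
import ctypes.util

def exports_symbol(lib_path, symbol):
    """Return True if the shared library at lib_path exports `symbol`."""
    try:
        lib = ctypes.CDLL(lib_path)
    except OSError:
        return False
    # Looking up a missing symbol raises AttributeError, which hasattr absorbs.
    return hasattr(lib, symbol)

libm = ctypes.util.find_library("m")
print(exports_symbol(libm, "cos"))                           # True: libm exports cos
print(exports_symbol(libm, "cquantize_blockwise_fp16_nf4"))  # False: not a libm symbol
```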
from qlora.
cp libbitsandbytes_cuda117.so libbitsandbytes_cpu.so
This method worked for me in WSL Ubuntu on Windows 11. I installed Anaconda3 and the special CUDA 11.8 build for WSL2.
Fine-tuning Vicuna-13B with QLoRA works well on an RTX 4090.
from qlora.
I solved it by reinstalling bitsandbytes with pip.
from qlora.
That was actually really close to the problem I was having! I had created a new conda environment and needed to install the CUDA toolkit in that new environment. I had always assumed the CUDA toolkit was a system-wide package, but that doesn't seem to be the case.
from qlora.
I faced the same issue today; it turns out I had a version conflict between CUDA and PyTorch. A fresh install of CUDA and PyTorch did the trick for me.
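A quick way to spot such a conflict is to compare the CUDA version PyTorch was built against with the toolkit in your environment. A sketch, guarded so it degrades gracefully when torch is absent; `torch_cuda_info` is my own helper name:

```python
def torch_cuda_info():
    """Report the CUDA version PyTorch was compiled against and whether a GPU is usable."""
    try:
        import torch
    except ImportError:
        return None
    return {
        "torch": torch.__version__,
        "built_for_cuda": torch.version.cuda,   # e.g. '11.7'; None for CPU-only builds
        "cuda_available": torch.cuda.is_available(),
    }

print(torch_cuda_info())
```

If `built_for_cuda` disagrees with the CUDA version bitsandbytes detects (117 in the log above), reinstalling one of the two to match usually resolves it.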
from qlora.
I solved this by "conda install cudatoolkit=11.7 -y".
Thank you very much, I solved the same problem with this as well.
from qlora.
Related Issues (20)
- Merge issue
- Saving/Loading qlora adapters
- Issue with Yi 34B Training EOS token not working
- RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
- Using QLORA for Multi Modal Vison Foundation Models Optimization - google/owlv2-base-patch16-ensemble
- How do you use oasst1 dataset in qlora.py - why only the 'text' field is used?
- How to support FLAN v2 dataset.
- [Bug] large CUDA memory usage in the evaluation phase
- Table 4 and Table 5 have different results
- [Questions]: How to implement NF4/NF2 matmul kernel function?
- RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16
- Fuyu-8B qLora
- Question about deployment of fine tuned model
- a critical loss drop happen after each epoch ending
- Llama 1 7b MMLU results largely diverges from reported
- Error when loading model
- Paged optimizer vs gradient checkpointing?
- Qlora with flan-t5 issue - ValueError: Trying to set a tensor of shape torch.Size([4096, 4096])
- llama 3 -support?
- How are gradients computed during backpropagation?