lhrlab / chatkbqa

[ACL 2024] Official resources of "ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models".

Home Page: https://arxiv.org/abs/2310.08975

License: MIT License

Languages: Python 100.00%
Topics: knowledge-graph, semantic-parsing, sparql-query, graph-database, finetuning, large-language-models

chatkbqa's Introduction

ChatKBQA

Official resources of "ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models". Haoran Luo, Haihong E, Zichen Tang, Shiyao Peng, Yikai Guo, Wentai Zhang, Chenghao Ma, Guanting Dong, Meina Song, Wei Lin, Yifan Zhu, Luu Anh Tuan. Findings of ACL 2024 [paper].


Overview

General Setup

Environment Setup

conda create -n chatkbqa python=3.8
conda activate chatkbqa
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
pip install -r requirement.txt
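
After installation, an optional sanity check confirms that the CUDA 11.7 build of PyTorch is active (this snippet is just a convenience, not part of the repo):

# Optional environment check.
import torch

print(torch.__version__)           # expected: 1.13.1+cu117
print(torch.cuda.is_available())   # should be True on a CUDA-capable machine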

Freebase KG Setup

The steps below follow the Freebase Virtuoso Setup.

How to install the Virtuoso backend for the Freebase KG:

  1. Clone dki-lab/Freebase-Setup:
git clone https://github.com/dki-lab/Freebase-Setup.git
cd Freebase-Setup
  2. Download the processed Freebase Virtuoso DB file from Dropbox or Baidu Netdisk (WARNING: 53G+ disk space is needed) and unpack it:
tar -zxvf virtuoso_db.zip
  3. Manage the Virtuoso service:

To start the service at localhost:3001/sparql:

python3 virtuoso.py start 3001 -d virtuoso_db

and to stop a currently running service at the same port:

python3 virtuoso.py stop 3001

A server with at least 100 GB RAM is recommended.
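
To verify that the endpoint is serving Freebase, you can issue a test query from Python. This is only an illustrative sketch (it assumes the SPARQLWrapper package, which is separate from the repo's own ODBC-based executor) and uses the Jamaica entity m.03_r3 that also appears in the examples later on this page:

from SPARQLWrapper import SPARQLWrapper, JSON

# Query the locally started Virtuoso endpoint.
endpoint = SPARQLWrapper("http://localhost:3001/sparql")
endpoint.setQuery("""
    PREFIX ns: <http://rdf.freebase.com/ns/>
    SELECT ?name WHERE { ns:m.03_r3 ns:type.object.name ?name } LIMIT 5
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["name"]["value"])   # e.g. "Jamaica"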

Download FACC1 mentions for Entity Retrieval.

  • Download the mention information (including processed FACC1 mentions and all entity aliases in Freebase) from OneDrive or Baidu Netdisk to data/common_data/facc1/.
ChatKBQA/
└── data/
    ├── common_data/                  
        ├── facc1/   
            ├── entity_list_file_freebase_complete_all_mention
            └── surface_map_file_freebase_complete_all_mention                                           

Dataset

Experiments are conducted on two KBQA benchmarks: WebQSP and CWQ.

WebQSP

The WebQSP dataset has already been downloaded under data/WebQSP/origin.

ChatKBQA/
└── data/
    ├── WebQSP                  
        ├── origin                    
            ├── WebQSP.train.json                    
            └── WebQSP.test.json                                       

CWQ

The CWQ dataset has already been downloaded under data/CWQ/origin.

ChatKBQA/
└── data/
    ├── CWQ                 
        ├── origin                    
            ├── ComplexWebQuestions_train.json                   
            ├── ComplexWebQuestions_dev.json      
            └── ComplexWebQuestions_test.json                              

Data Processing

(1) Parse SPARQL queries to S-expressions

  • WebQSP:

Run python parse_sparql_webqsp.py; the augmented dataset files will be saved as data/WebQSP/sexpr/WebQSP.test[train].json.

  • CWQ:

Run python parse_sparql_cwq.py; the augmented dataset files will be saved as data/CWQ/sexpr/CWQ.test[train].json.
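
For reference, an S-expression is a LISP-style normal form of the SPARQL query. As in the retrieval log excerpt later on this page, the WebQSP question about the language spoken in Jamaica first yields the generated form ( join ( r [ location , country , languages spoken ] ) [ jamaica ] ), which entity and relation retrieval then grounds to (join (r location.country.languages_spoken) m.03_r3).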

(2) Prepare data for training and evaluation

  • WebQSP:

Run python data_process.py --action merge_all --dataset WebQSP --split test and python data_process.py --action merge_all --dataset WebQSP --split train. The merged data files will be saved as data/WebQSP/generation/merged/WebQSP_test[train].json.

Run python data_process.py --action get_type_label_map --dataset WebQSP --split train. The type-label map will be saved as data/WebQSP/generation/label_maps/WebQSP_train_type_label_map.json.

  • CWQ:

Run python data_process.py --action merge_all --dataset CWQ --split test and python data_process.py --action merge_all --dataset CWQ --split train. The merged data files will be saved as data/CWQ/generation/merged/CWQ_test[train].json.

Run python data_process.py --action get_type_label_map --dataset CWQ --split train. The type-label map will be saved as data/CWQ/generation/label_maps/CWQ_train_type_label_map.json.

Note: You can also get the ChatKBQA processed data from TeraBox or Baidu Netdisk, which should be placed under data/.

ChatKBQA/
└── data/
    ├── CWQ/                 
        ├── generation/    
        ├── origin/
        └── sexpr/  
    └── WebQSP/                 
        ├── generation/    
        ├── origin/
        └── sexpr/                                               
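
A minimal sketch (my own addition, not part of the repo) to confirm the processed data landed in the locations shown in the tree above before moving on:

import os

# Check the directories from the tree above.
for d in ["data/WebQSP/origin", "data/WebQSP/sexpr", "data/WebQSP/generation",
          "data/CWQ/origin", "data/CWQ/sexpr", "data/CWQ/generation"]:
    print(f"{d}: {'OK' if os.path.isdir(d) else 'MISSING'}")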

(3) Prepare data for LLM model

  • WebQSP:

Run python process_NQ.py --dataset_type WebQSP. The merged data file will be saved as LLMs/data/WebQSP_Freebase_NQ_test[train]/examples.json.

  • CWQ:

Run python process_NQ.py --dataset_type CWQ. The merged data file will be saved as LLMs/data/CWQ_Freebase_NQ_test[train]/examples.json.

Note: You can also get the processed ChatKBQA SFT data from TeraBox or Baidu Netdisk, which should be placed under LLMs/data.

ChatKBQA/
└── LLMs/
    ├── data/                 
        ├── CWQ_Freebase_NQ_test/                    
        ├── CWQ_Freebase_NQ_train/    
        ├── WebQSP_Freebase_NQ_test/                 
        ├── WebQSP_Freebase_NQ_train/      
        └── dataset_info.json                              
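
The LLMs/data/dataset_info.json file registers each dataset for the bundled LLaMA-Efficient-Tuning trainer. If you add your own processed dataset, an entry along the following lines is expected; the field names here follow the general LLaMA-Efficient-Tuning convention and are an assumption on my part, not taken from this repo, so check the shipped dataset_info.json before copying:

import json

# Hypothetical entry for a custom dataset (assumed schema; verify against the
# shipped dataset_info.json).
entry = {
    "MyKBQA_Freebase_NQ_train": {
        "file_name": "MyKBQA_Freebase_NQ_train/examples.json",
        "columns": {"prompt": "instruction", "query": "input", "response": "output"},
    }
}
print(json.dumps(entry, indent=2))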

Fine-tuning, Retrieval and Evaluation

The following is an example of LLaMA2-7b fine-tuning and retrieval (num_beams = 15) on WebQSP and LLaMA2-13b fine-tuning and retrieval (num_beams = 8) on CWQ, respectively.

(1) Train and test LLM model for Logical Form Generation

  • WebQSP:

Train LLMs for Logical Form Generation:

CUDA_VISIBLE_DEVICES=3 nohup python -u LLMs/LLaMA/src/train_bash.py --stage sft --model_name_or_path meta-llama/Llama-2-7b-hf --do_train  --dataset_dir LLMs/data --dataset WebQSP_Freebase_NQ_train --template llama2  --finetuning_type lora --lora_target q_proj,v_proj --output_dir Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/checkpoint --overwrite_cache --per_device_train_batch_size 4 --gradient_accumulation_steps 4  --lr_scheduler_type cosine --logging_steps 10 --save_steps 1000 --learning_rate 5e-5  --num_train_epochs 100.0 --plot_loss  --fp16 >> train_LLaMA2-7b_WebQSP_Freebase_NQ_lora_epoch100.txt 2>&1 &

Beam-search inference with the fine-tuned LLM for Logical Form Generation:

CUDA_VISIBLE_DEVICES=3 nohup python -u LLMs/LLaMA/src/beam_output_eva.py --model_name_or_path meta-llama/Llama-2-7b-hf --dataset_dir LLMs/data --dataset WebQSP_Freebase_NQ_test --template llama2 --finetuning_type lora --checkpoint_dir Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/checkpoint --num_beams 15 >> predbeam_LLaMA2-7b_WebQSP_Freebase_NQ_lora_epoch100.txt 2>&1 &
python run_generator_final.py --data_file_name Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/evaluation_beam/generated_predictions.jsonl
  • CWQ:

Train LLMs for Logical Form Generation:

CUDA_VISIBLE_DEVICES=2 nohup python -u LLMs/LLaMA/src/train_bash.py --stage sft --model_name_or_path meta-llama/Llama-2-13b-hf --do_train  --dataset_dir LLMs/data --dataset CWQ_Freebase_NQ_train --template default  --finetuning_type lora --lora_target q_proj,v_proj --output_dir Reading/LLaMA2-13b/CWQ_Freebase_NQ_lora_epoch10/checkpoint --overwrite_cache --per_device_train_batch_size 4 --gradient_accumulation_steps 4  --lr_scheduler_type cosine --logging_steps 10 --save_steps 1000 --learning_rate 5e-5  --num_train_epochs 10.0 --plot_loss  --fp16 >> train_LLaMA2-13b_CWQ_Freebase_NQ_lora_epoch10.txt 2>&1 &

Beam-search inference with the fine-tuned LLM for Logical Form Generation:

CUDA_VISIBLE_DEVICES=3 nohup python -u LLMs/LLaMA/src/beam_output_eva.py --model_name_or_path meta-llama/Llama-2-13b-hf --dataset_dir LLMs/data --dataset CWQ_Freebase_NQ_test --template default --finetuning_type lora --checkpoint_dir Reading/LLaMA2-13b/CWQ_Freebase_NQ_lora_epoch10/checkpoint --num_beams 8 >> predbeam_LLaMA2-13b_CWQ_Freebase_NQ_lora_epoch10.txt 2>&1 &
python run_generator_final.py --data_file_name Reading/LLaMA2-13b/CWQ_Freebase_NQ_lora_epoch10/evaluation_beam/generated_predictions.jsonl
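
In both the WebQSP and CWQ examples above, run_generator_final.py post-processes the beam outputs in evaluation_beam/generated_predictions.jsonl; presumably it produces the beam_test_top_k_predictions.json file in the same directory, which the evaluation step below consumes.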

(2) Evaluate KBQA results with Retrieval

  • WebQSP:

Evaluate KBQA results with entity-retrieval and relation-retrieval:

CUDA_VISIBLE_DEVICES=1 nohup python -u eval_final.py --dataset WebQSP --pred_file Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/evaluation_beam/beam_test_top_k_predictions.json >> predfinal_LLaMA2-7b_WebQSP_Freebase_NQ_lora_epoch100.txt 2>&1 &

Evaluate KBQA results with golden-entities and relation-retrieval:

CUDA_VISIBLE_DEVICES=4 nohup python -u eval_final.py --dataset WebQSP --pred_file Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/evaluation_beam/beam_test_top_k_predictions.json --golden_ent >> predfinalgoldent_LLaMA2-7b_WebQSP_Freebase_NQ_lora_epoch100.txt 2>&1 &
  • CWQ:

Evaluate KBQA results with entity-retrieval and relation-retrieval:

CUDA_VISIBLE_DEVICES=4 nohup python -u eval_final_cwq.py --dataset CWQ --pred_file Reading/LLaMA2-13b/CWQ_Freebase_NQ_lora_epoch10/evaluation_beam/beam_test_top_k_predictions.json >> predfinal_LLaMA2-13b_CWQ_Freebase_NQ_lora_epoch10.txt 2>&1 &

Evaluate KBQA results with golden-entities and relation-retrieval:

CUDA_VISIBLE_DEVICES=5 nohup python -u eval_final_cwq.py --dataset CWQ --pred_file Reading/LLaMA2-13b/CWQ_Freebase_NQ_lora_epoch10/evaluation_beam/beam_test_top_k_predictions.json --golden_ent >> predfinalgoldent_LLaMA2-13b_CWQ_Freebase_NQ_lora_epoch10.txt 2>&1 &
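
For interpreting the evaluation output, the paper reports F1, Hits@1 and Acc over predicted versus gold answer sets. Below is a minimal sketch of the standard per-question F1 and Hits@1 definitions; it is my own illustration, not the code in eval_final.py:

def f1_score(pred, gold):
    # pred / gold: collections of answer entity IDs for one question.
    pred, gold = set(pred), set(gold)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def hits_at_1(ranked_pred, gold):
    # Hits@1: is the top-ranked predicted answer among the gold answers?
    return 1.0 if ranked_pred and ranked_pred[0] in set(gold) else 0.0

print(f1_score(["m.03_r3"], ["m.03_r3", "m.0f8l9c"]))  # 0.666...
print(hits_at_1(["m.03_r3"], ["m.03_r3"]))             # 1.0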

Note: You can also get the ChatKBQA checkpoints and evaluations from TeraBox or Baidu Netdisk, which should be placed under Reading/.

ChatKBQA/
└── Reading/
    ├── LLaMA2-7b/                 
        └── WebQSP_Freebase_NQ_lora_epoch100/  
            ├── checkpoint/    
            └── evaluation_beam/  
    └── LLaMA2-13b/                 
        └── CWQ_Freebase_NQ_lora_epoch10/  
            ├── checkpoint/    
            └── evaluation_beam/                                              

BibTeX

If you find this work helpful for your research, please cite:

@misc{luo2023chatkbqa,
      title={ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models}, 
      author={Haoran Luo and Haihong E and Zichen Tang and Shiyao Peng and Yikai Guo and Wentai Zhang and Chenghao Ma and Guanting Dong and Meina Song and Wei Lin},
      year={2023},
      eprint={2310.08975},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

For further questions, please contact: [email protected].

Acknowledgement

This repo benefits from PEFT, LLaMA-Efficient-Tuning, SimCSE, GMT-KBQA and DECAF. Thanks for their wonderful work.

chatkbqa's People

Contributors

lhrlab
chatkbqa's Issues

Checkpoint does not seem to contain LoRA weights

Hi,
Thanks for sharing the code. I was trying to run it and got stuck at
python -u LLMs/LLaMA/src/beam_output_eva.py --model_name_or_path meta-llama/Llama-2-7b-hf --dataset_dir LLMs/data --dataset WebQSP_Freebase_NQ_test --template llama2 --finetuning_type lora --checkpoint_dir Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/checkpoint --num_beams 15

I have finished the 100-epoch training on WebQSP and got the following error:
AssertionError: Provided path (Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/checkpoint) does not contain a LoRA weight.

Here is a screenshot of the checkpoint directory I got after training: [screenshot omitted]

Looking forward to your response.

Best regards,
Xiaqiang

Loss stays at zero after running the code

While running the code, the loss starts at around 300, then suddenly drops to zero and stays there. What could be the problem? The dataset was downloaded from Baidu Netdisk and placed as instructed.
[screenshot omitted]

The problem of generating SExpr expression

Hi,
After running the python parse_sparql_webqsp.py program, S-expressions are generated, but some of them are null in my reproduced results. Is this reasonable?
Thank you for your assistance.
Best regards.

torch.cuda.OutOfMemoryError: CUDA out of memory.

Hello, my friend
During the training of LLaMA2-13b on an A30 GPU equipped with 24GB of memory, I am facing an error concerning GPU memory allocation. Are there any feasible solutions or code modifications that can resolve this issue?

error:torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 23.50 GiB total capacity; 23.16 GiB already allocated; 2.81 MiB free; 23.16 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Thanks!

oracle entity linking annotations

Hi,

Thanks for the outstanding work your team has accomplished.

I have a question: Could you kindly explain what "Oracle Entity Linking Annotations" refer to in your work?

Thank you in advance for your time and assistance.

Clarification on Metrics in ChatKBQA Results Reproduction

Hello,

I am attempting to reproduce the results of ChatKBQA on the WebQSP dataset, and I have some confusion regarding the metrics used. Specifically, I am trying to determine which of the provided metrics in the repository corresponds to the "F1 Hits@1 Acc 79.8 83.2 73.8" reported in the paper's results.

In the repository, the following metrics are provided for the WebQSP dataset:
total:1639, ex_cnt:1026, ex_rate:0.6259914582062233, real_ex_rate:0.6424546023794615, contains_ex_cnt:1227, contains_ex_rate:0.74862721171446 real_contains_ex_rate:0.7683155917345021

I would appreciate it if you could help me understand which of these metrics corresponds to the "F1 Hits@1 Acc" reported in the paper. This clarification will greatly assist me in accurately reproducing the results.

Thank you for your assistance.

Best regards,

There are some abnormalities in the test results

Your work is really good and has given me a lot of inspiration, but when I run the following command, the situation shown below occurs.

CUDA_VISIBLE_DEVICES=1 nohup python -u eval_final.py --dataset WebQSP --pred_file Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/evaluation_beam/beam_test_top_k_predictions.json >> predfinal_LLaMA2-7b_WebQSP_Freebase_NQ_lora_epoch100.txt 2>&1 &

[result screenshots omitted]

I don't know what's wrong, can you give me some suggestions? Thanks

Bug when running the retrieval code

I encountered a bug while running the retrieval code. Can you provide any suggestions on how to resolve this issue?

CUDA_VISIBLE_DEVICES=7 python -u eval_final.py --dataset WebQSP --pred_file Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/evaluation_beam/beam_test_top_k_predictions.json
INFO:simcse.tool:Use `cls_before_pooler` for unsupervised models. If you want to use other pooling policy, specify `pooler` argument.
split:test, topk_file:Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/evaluation_beam/beam_test_top_k_predictions.json
Reading/LLaMA2-7b/WebQSP_Freebase_NQ_lora_epoch100/evaluation_beam
INFO:entity_retrieval.surface_index_memory:Loading entity vocabulary from disk.
INFO:entity_retrieval.surface_index_memory:Loading surfaces from disk.
INFO:entity_retrieval.surface_index_memory:Done initializing surface index.
Evaluating test:   0%|                                                                                                                                                                             | 0/1639 [00:00<?, ?it/s]( join ( r [ location , country , languages spoken ] ) [ jamaica ] )
(join (r location.country.languages_spoken) m.03_r3)
(join (r location.country.languages_spoken) m.03_r3)
  0%|                                                                                                                                                                                                 | 0/6 [00:00<?, ?it/s]
Evaluating test:   0%|                                                                                                                                                                             | 0/1639 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "eval_final.py", line 586, in <module>
    aggressive_top_k_eval_new(args.split, args.pred_file, args.dataset)
  File "eval_final.py", line 477, in aggressive_top_k_eval_new
    lf, answers = execute_normed_s_expr_from_label_maps_rel(
  File "eval_final.py", line 288, in execute_normed_s_expr_from_label_maps_rel
    query_expr, denotation = try_relation(d)
  File "eval_final.py", line 310, in try_relation
    in_rels, out_rels, _ = get_2hop_relations_with_odbc_wo_filter(ent)
  File "/data/zihengzhang/derongxu/ideas/ChatKBQA/executor/sparql_executor.py", line 527, in get_2hop_relations_with_odbc_wo_filter
    initialize_odbc_connection()
  File "/data/zihengzhang/derongxu/ideas/ChatKBQA/executor/sparql_executor.py", line 27, in initialize_odbc_connection
    odbc_conn = pyodbc.connect(
pyodbc.InterfaceError: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnect)')

Generalization ability

Can the fine-tuned model be used directly on any new knowledge graph, or does this method require fine-tuning on the new graph before it can be used?

Hello! While reproducing the code I ran into an out-of-CPU-memory problem. Could you give me a hint about why this happens and how to solve it? Thank you very much!

Traceback (most recent call last):
File "LLMs/LLaMA/src/train_bash.py", line 16, in
main()
File "LLMs/LLaMA/src/train_bash.py", line 7, in main
run_exp()
File "C:\Code\ChatKBQA-main\LLMs\LLaMA\src\llmtuner\tuner\tune.py", line 26, in run_exp
run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
File "C:\Code\ChatKBQA-main\LLMs\LLaMA\src\llmtuner\tuner\sft\workflow.py", line 28, in run_sft
model, tokenizer = load_model_and_tokenizer(model_args, finetuning_args, training_args.do_train, stage="sft")
File "C:\Code\ChatKBQA-main\LLMs\LLaMA\src\llmtuner\tuner\core\loader.py", line 171, in load_model_and_tokenizer
model = AutoModelForCausalLM.from_pretrained(
File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\transformers\models\auto\auto_factory.py", line 556, in from_pretrained
return model_class.from_pretrained(
File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\transformers\modeling_utils.py", line 3375, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "C:\Users\DW.cache\huggingface\modules\transformers_modules\chatglm2-6b\modeling_chatglm.py", line 856, in init
self.transformer = ChatGLMModel(config, empty_init=empty_init, device=device)
File "C:\Users\DW.cache\huggingface\modules\transformers_modules\chatglm2-6b\modeling_chatglm.py", line 756, in init
self.encoder = init_method(GLMTransformer, config, **init_kwargs)
File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\utils\init.py", line 52, in skip_init
return module_cls(*args, **kwargs).to_empty(device=final_device)
File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 868, in to_empty
return self._apply(lambda t: torch.empty_like(t, device=device))
File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 664, in _apply
param_applied = fn(param)
File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 868, in
return self._apply(lambda t: torch.empty_like(t, device=device))
RuntimeError: [enforce fail at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 112197632 bytes.

TypeError: sdp_kernel() got an unexpected keyword argument 'enable_mem_efficient'

I encountered this error during the training of Baichuan2-7b, and after searching for relevant solutions, I found that upgrading torch to version 2.0 was suggested. However, I am curious if there are any alternative solutions without upgrading torch.

TypeError: sdp_kernel() got an unexpected keyword argument 'enable_mem_efficient'

Suspected missing files

Hello, I am a beginner (an incoming graduate student) trying to learn from your code. When reading train_bash.py, the line from llmtuner import run_exp raises an error: the llmtuner package in the code provided on GitHub does not contain run_exp, and beam_output_eva.py has a similar problem. I hope you can help me clear up this confusion, and please excuse the question if it is naive. Wishing you happiness and many papers.

Uploading Processed Files

Thanks for sharing the code! Could you please upload the processed files for training and evaluation (e.g., WebQSP/generation/* and CWQ/generation/*)? I found that the label map is empty when using the Freebase instance I deployed earlier. Thanks a lot!

How to transform s-expr / sparql_query to path?

Hi, thanks for your great work.

Do you know how to transform an s-expr / sparql_query into a path from the source entity to the target entity, rather than only getting the final answer with execute_query_with_odbc(sparql_query)?

I'm not familiar with the SPARQL system or the usage of pyodbc.

Any assistance you could provide would be greatly appreciated.
