- Contains 154 code snippets to test and benchmark.
- Offers 845 type annotations across a diverse set of Python functionalities.
- Organized into 18 distinct categories targeting various Python features.
- Seamlessly manages the execution of containerized tools.
- Efficiently transforms inferred types into a standardized format.
- Automatically produces meaningful metrics for in-depth assessment and comparison.
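To make the "standardized format" concrete, here is a minimal sketch of what a normalized type fact might look like once a tool's raw output has been transformed. The field names below are illustrative assumptions for this example, not TypeEvalPy's actual schema:

```python
import json

# Hypothetical standardized type fact: one record per annotated location.
# Field names here are assumptions for illustration; the real schema may differ.
inferred = [
    {
        "file": "snippets/assignments/simple.py",
        "line_number": 3,
        "function": "add",
        "type": ["int"],  # candidate types, most confident first
    }
]

# A uniform JSON shape like this is what makes cross-tool comparison possible.
print(json.dumps(inferred, indent=2))
```

Normalizing every tool's output into one shape is what allows the metrics step to compare inferences from static analyzers and ML-based tools alike.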
Supported ✅ | In-progress 🚧 | Planned 💡 |
---|---|---|
HeaderGen | IntelliJ PSI | Llama 2 |
Jedi | Pyre | ChatGPT |
Pyright | PySonar2 | |
HiTyper | Pytype | |
Scalpel | TypeT5 | |
Type4Py | | |
Below is a comparison of exact matches across the different tools, including top-n predictions for the ML-based tools.
Rank | Tool | Top-n | Function Return Type | Function Parameter Type | Local Variable Type | Total |
---|---|---|---|---|---|---|
1 | HeaderGen | 1 | 186 | 56 | 322 | 564 |
2 | Jedi | 1 | 122 | 0 | 293 | 415 |
3 | Pyright | 1 | 100 | 8 | 297 | 405 |
4 | HiTyper | 1<br>3<br>5 | 163<br>173<br>175 | 27<br>37<br>37 | 179<br>225<br>229 | 369<br>435<br>441 |
5 | HiTyper (static) | 1 | 141 | 7 | 102 | 250 |
6 | Scalpel | 1 | 155 | 32 | 6 | 193 |
7 | Type4Py | 1<br>3<br>5 | 39<br>103<br>109 | 19<br>31<br>31 | 99<br>167<br>174 | 157<br>301<br>314 |
(Auto-generated based on the analysis run on 20-10-23 14:51)
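An "exact match" in the table above means the inferred type equals the ground-truth annotation at the same location; for ML-based tools, a top-n match means the ground truth appears among the tool's first n candidates. As a rough illustration only (not TypeEvalPy's actual matching code, and the record keys are assumptions), counting top-n exact matches could look like:

```python
def count_exact_matches(ground_truth, predictions, top_n=1):
    """Count locations where the ground-truth type appears among the
    first `top_n` candidate types predicted for that location.

    Keying locations by (file, line, name) is an illustrative
    assumption, not TypeEvalPy's internal logic.
    """
    preds = {
        (p["file"], p["line"], p["name"]): p["types"] for p in predictions
    }
    matches = 0
    for fact in ground_truth:
        key = (fact["file"], fact["line"], fact["name"])
        candidates = preds.get(key, [])[:top_n]
        if fact["type"] in candidates:
            matches += 1
    return matches


gt = [{"file": "a.py", "line": 3, "name": "x", "type": "int"}]
ml = [{"file": "a.py", "line": 3, "name": "x", "types": ["str", "int"]}]
print(count_exact_matches(gt, ml, top_n=1))  # 0: "int" is not the top-1 guess
print(count_exact_matches(gt, ml, top_n=3))  # 1: "int" is within the top 3
```

This also shows why the top-3 and top-5 rows for HiTyper and Type4Py can only grow relative to top-1: widening the candidate list never removes a match.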
```bash
git clone https://github.com/secure-software-engineering/TypeEvalPy.git
docker build -t typeevalpy .
```
Takes about 30 minutes on the first run to build the Docker containers.

Results will be generated in the `results` folder within the root directory of the repository. Each results folder has a timestamp, allowing you to easily track and compare different runs.
```bash
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ./results:/app/results \
  typeevalpy
```
Optionally, run analysis on specific tools:

```bash
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ./results:/app/results \
  typeevalpy --runners headergen scalpel
```
Available options: `headergen`, `pyright`, `scalpel`, `jedi`, `hityper`, `type4py`, `hityperdl`
Running From Source
- Clone the repo

  ```bash
  git clone https://github.com/ashwinprasadme/TypeEvalPy.git
  ```

- Install dependencies and set up a virtual environment

  Run the following commands to create and activate a virtual environment, then install the dependencies:

  ```bash
  python3 -m venv .env
  source .env/bin/activate
  pip install -r requirements.txt
  ```
- Navigate to the `src` directory

  ```bash
  cd src
  ```
- Execute the analyzer

  Run the following command to start the benchmarking process on all tools:

  ```bash
  python main_runner.py
  ```

  or run the analysis on specific tools:

  ```bash
  python main_runner.py --runners headergen scalpel
  ```
Thank you for your interest in contributing! To add support for a new tool, please use the Docker templates provided in our repository. After implementing and testing your tool, submit a pull request (PR) with a descriptive message. Our maintainers will review and merge your submission.
Give a ⭐️ if this project helped you!