
dirty's Introduction





DIRTY: Augmenting Decompiler Output with Learned Variable Names and Types

Code | Paper PDF | Demo

Original implementation for the paper Augmenting Decompiler Output with Learned Variable Names and Types.

DIRTY is a Transformer-based model that improves the quality of decompiler output by automatically generating meaningful variable names and types. It assigns variable names that agree with those written by developers 66.4% of the time, and types that agree 75.8% of the time. We also release DIRT, a large real-world dataset for this task, consisting of 75K+ programs and 1M+ human-written C functions mined from GitHub, paired with their decompiler outputs.

Installation

Requirements

Quick Start

Training

Download DIRT

The first step to train DIRTY is to download the preprocessed DIRT dataset. If you would like the full, unpreprocessed dataset, it is available at https://doi.org/10.1184/R1/20732656.v1

cd dirty/
wget cmu-itl.s3.amazonaws.com/dirty/dirt.tar.gz -O dirt.tar.gz
tar -xzf dirt.tar.gz

These commands download and decompress the dataset from Amazon S3. If your machine does not have access to AWS, manually download the archive from the link above and untar it into data1/.
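To sanity-check the download, you can peek at a single sample with webdataset, the library the data loader is built on. This is a minimal sketch; the shard filename below is an assumption, so substitute one of the .tar shards actually unpacked into data1/.

import webdataset as wds

# Hypothetical shard name -- replace with a .tar shard from data1/
ds = wds.WebDataset("data1/dev-0.tar")
for sample in ds:
    print(sorted(sample.keys()))  # field names of one sample
    break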

Our dataset was generated using GHCC, which was fed the list of projects found in the projects.txt file. This list was generated by querying the GHTorrent database in late 2019, and contains projects from January 28, 2008 through May 31, 2019. Note that not all of these projects compiled, and it is likely that some of them no longer exist.

Train DIRTY

We have set up configuration files for the different models reported in our paper:

file                       model            time (estimated hours)
multitask.xfmr.jsonnet     DIRTY-Multitask  120
rename.xfmr.jsonnet        DIRTY-Rename     80
retype.xfmr.jsonnet        DIRTY-Retype     80
retype_nomem.xfmr.jsonnet  DIRTY_NDL        80
retype_base.xfmr.jsonnet   DIRTY_S          40

Training a model is as simple as specifying the name of the experimental run and the config file. Suppose we want to reproduce the Multi-task model in Table 7 of the paper:

cd dirty/
python exp.py train --cuda --expname=dirty_mt multitask.xfmr.jsonnet

Then, watch for the line wandb: Run data is saved locally in ... in the output. This is where the logs and models are saved. You can also monitor the automatically uploaded training and validation status (e.g., losses, accuracy) in your browser in real time via the link printed after wandb: πŸš€ View run at ....

Feel free to adjust the hyperparameters in *.jsonnet config files to train your own model.
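For instance, to inspect the fully rendered hyperparameters before editing them, you can evaluate a config with the jsonnet Python bindings. A minimal sketch, assuming the config evaluates to plain JSON:

import json
import _jsonnet  # Python bindings from the jsonnet package

config = json.loads(_jsonnet.evaluate_file("multitask.xfmr.jsonnet"))
print(json.dumps(config, indent=2))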

Inference

Download Trained Model

As an alternative to training the model yourself, you can download our trained DIRTY model.

cd dirty/
mkdir exp_runs/
wget cmu-itl.s3.amazonaws.com/dirty/dirty_mt.ckpt -O exp_runs/dirty_mt.ckpt

Test DIRTY

First, run your trained/downloaded model to produce predictions on the DIRT test set.

python exp.py train --cuda --expname=eval_dirty_mt multitask.xfmr.jsonnet --eval-ckpt <ckpt_path>

<ckpt_path> is either exp_runs/dirty_mt.ckpt if you downloaded our trained model, or, if you trained your own, the path saved during training at wandb/run-YYYYMMDD_HHMMSS-XXXXXXXX/files/dire/XXXXXXXX/checkpoints/epoch=N.ckpt.
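To confirm a checkpoint is intact before running inference, note that PyTorch Lightning checkpoints are ordinary torch pickles. A minimal sketch:

import torch

ckpt = torch.load("exp_runs/dirty_mt.ckpt", map_location="cpu")
print(ckpt.keys())  # typically includes 'state_dict', 'epoch', etc.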

We suggest changing beam_size in the config files to 0 to switch to greedy decoding, which is significantly faster. The default configuration of beam_size = 5 can take hours.

The predictions will be saved to pred_XXX.json. The exact filename depends on the model and can be modified in the config files. You can inspect the prediction results, which are in the following format:

{
  binary: {
    func_name: {
      var1: [var1_retype, var1_rename], ...
    }, ...
  }, ...
}
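For example, here is a minimal sketch that walks this structure and prints the predictions, assuming multitask output where each variable maps to a [retype, rename] pair as shown above:

import json

with open("pred_mt.json") as f:
    preds = json.load(f)

for binary, funcs in preds.items():
    for func_name, variables in funcs.items():
        for var, (new_type, new_name) in variables.items():
            print(f"{binary}/{func_name}: {var} -> {new_name}: {new_type}")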

Finally, use our standalone benchmark script:

python -m utils.evaluate --pred-file pred_mt.json --config-file multitask.xfmr.jsonnet

Structure

Here is a walk-through of the code files in this repo.

dirty/

The dirty/ folder contains the main code for the DIRTY model and DIRT dataset.

dirty/exp.py

The entry point for running DIRTY experiments. It loads a configuration file, constructs a dataset instance and a model instance, and launches a Trainer that runs training or inference according to the configuration, saving logs and results to wandb.

dirty/*.xfmr.jsonnet

Configuration files for running DIRTY experiments.

dirty/model

This folder contains the neural modules that make up the DIRTY model.

β”œβ”€β”€ dirty
β”‚   β”œβ”€β”€ model
β”‚   β”‚   β”œβ”€β”€ beam.py                     # Beam search
β”‚   β”‚   β”œβ”€β”€ decoder.py                  # factory class for building Decoders from configs
β”‚   β”‚   β”œβ”€β”€ encoder.py                  # factory class for building Encoders from configs
β”‚   β”‚   β”œβ”€β”€ model.py                    # training and evaluation step and metric logging
β”‚   β”‚   β”œβ”€β”€ simple_decoder.py           # A "decoder" consisting of a single linear layer,
β”‚   β”‚   β”‚                               #   used to produce a soft mask from the Data Layout Encoder
β”‚   β”‚   β”œβ”€β”€ xfmr_decoder.py             # Type/Multitask Decoder
β”‚   β”‚   β”œβ”€β”€ xfmr_mem_encoder.py         # Data Layout Encoder
β”‚   β”‚   β”œβ”€β”€ xfmr_sequential_encoder.py  # Code Encoder
β”‚   β”‚   └── xfmr_subtype_decoder.py     # Not used in the current version

dirty/utils

This folder contains code for the DIRT dataset, data preprocessing, evaluation, helper functions, and demos in the paper.

β”œβ”€β”€ dirty
β”‚   └── utils
β”‚       β”œβ”€β”€ case_study.py           # Generate results for Table 3 and Table 6 in the paper
β”‚       β”œβ”€β”€ code_processing.py      # Code canonicalization such as converting literals
β”‚       β”œβ”€β”€ compute_mi.py           # Compute the mutual information between variables and types as a proof-of-concept for MT
β”‚       β”œβ”€β”€ dataset.py              # A parallelized data loading class for preparing batched samples from DIRT for DIRTY
β”‚       β”œβ”€β”€ dataset_statistics.py   # Compute dataset statistics
β”‚       β”œβ”€β”€ dire_types.py -> ../../binary/dire_types.py
β”‚       β”œβ”€β”€ evaluate.py             # Evaluate final scores from json files saved from different methods for fair comparison
β”‚       β”œβ”€β”€ function.py -> ../../binary/function.py
β”‚       β”œβ”€β”€ ida_ast.py -> ../../binary/ida_ast.py
β”‚       β”œβ”€β”€ lexer.py
β”‚       β”œβ”€β”€ preprocess.py           # Preprocess data produced from `dataset-gen/` into the DIRT dataset
β”‚       β”œβ”€β”€ util.py
β”‚       β”œβ”€β”€ variable.py -> ../../binary/variable.py
β”‚       └── vocab.py

dirty/baselines

Empirical baselines included in the paper. Use python -m baselines.<xxxxxx> to run. Results are saved to corresponding json files and can be evaluated with python -m utils.evaluate.

β”œβ”€β”€ dirty
β”‚   β”œβ”€β”€ baselines
β”‚   β”‚   β”œβ”€β”€ copy_decompiler.py
β”‚   β”‚   β”œβ”€β”€ most_common.py
β”‚   β”‚   └── most_common_decomp.py

binary/

The binary/ folder contains class definitions for types, variables, and functions constructed from the decompiler's output on binaries.

β”œβ”€β”€ binary
β”‚   β”œβ”€β”€ __init__.py     
β”‚   β”œβ”€β”€ dire_types.py   # constructing types and a type library
β”‚   β”œβ”€β”€ function.py     # definition and serialization for function instances
β”‚   β”œβ”€β”€ ida_ast.py      # constructing ASTs from IDA-Pro outputs
β”‚   └── variable.py     # definition and serialization for variable instances

idastubs/

The idastubs/ folder contains helper functions used by the ida_ast.py file.

dataset-gen/

The dataset-gen/ folder contains scripts for producing unpreprocessed data from binaries using IDA-Pro (required).

dire/

Legacy code for the DIRE paper.

Citing DIRTY

If you use DIRTY/DIRT in your research or wish to refer to the baseline results, please use the following BibTeX.

@inproceedings {chen2021augmenting,
  title = {Augmenting Decompiler Output with Learned Variable Names and Types},
  author = {Chen, Qibin and Lacomis, Jeremy and Schwartz, Edward J. and {Le~Goues}, Claire and Neubig, Graham and Vasilescu, Bogdan},
  booktitle = {31st USENIX Security Symposium},
  year = {2022},
  address = {Boston, MA},
  url = {https://www.usenix.org/conference/usenixsecurity22/presentation/chen-qibin},
  month = aug,
}

@inproceedings {lacomis2019dire,
  title = {DIRE: A Neural Approach to Decompiled Identifier Naming},
  author = {Lacomis, Jeremy and Yin, Pengcheng and Schwartz, Edward J. and Allamanis, Miltiadis and {Le~Goues}, Claire and Neubig, Graham and Vasilescu, Bogdan},
  booktitle = {34th IEEE/ACM International Conference on Automated Software Engineering},
  year = {2019},
  address = {San Diego, CA},
  pages = {628--639},
}


dirty's Issues

Dataset is deprecated

I installed via the directions and get:

dirty-exp train --cuda --expname=eval_dirty_mt multitask.xfmr.jsonnet --eval-ckpt exp_runs/dirty_mt.ckpt 
WARNING:root:unable to load [idaapi], stub loaded instead
WARNING:root:unable to load [ida_auto], stub loaded instead
WARNING:root:unable to load [ida_funcs], stub loaded instead
WARNING:root:unable to load [ida_hexrays], stub loaded instead
WARNING:root:unable to load [ida_kernwin], stub loaded instead
WARNING:root:unable to load [ida_lines], stub loaded instead
WARNING:root:unable to load [ida_pro], stub loaded instead
WARNING:root:unable to load [ida_typeinf], stub loaded instead
WARNING:root:unable to load [idautils], stub loaded instead
Main process id 29909
use random seed 0
Traceback (most recent call last):
  File "/home/ed/Documents/DIRTY/env/bin/dirty-exp", line 33, in <module>
    sys.exit(load_entry_point('cmu-dirty', 'console_scripts', 'dirty-exp')())
  File "/home/ed/Documents/DIRTY/dirty/src/dirty/exp.py", line 131, in main
    train(cmd_args)
  File "/home/ed/Documents/DIRTY/dirty/src/dirty/exp.py", line 51, in train
    percent=float(args["--percent"]),
  File "/home/ed/Documents/DIRTY/dirty/src/dirty/utils/dataset.py", line 171, in __init__
    super().__init__(urls)
  File "/home/ed/Documents/DIRTY/env/lib/python3.6/site-packages/webdataset/fluid.py", line 41, in __init__
    raise Exception("Dataset is deprecated; use webdataset.WebDataset instead")
Exception: Dataset is deprecated; use webdataset.WebDataset instead

Here are the versions of packages installed in my venv:

Package                 Version   Editable project location
----------------------- --------- -----------------------------------------
absl-py                 1.0.0
aiohttp                 3.8.1
aiosignal               1.2.0
async-timeout           4.0.2
asynctest               0.13.0
attrs                   21.4.0
braceexpand             0.1.7
cachetools              4.2.4
certifi                 2021.10.8
charset-normalizer      2.0.12
click                   8.0.4
cmu-dirty               0.0.0     /home/ed/Documents/DIRTY/dirty/src
configparser            5.2.0
csvnpm-utils            0.0.0     /home/ed/Documents/DIRTY/csvnpm-utils/src
dataclasses             0.8
docker-pycreds          0.4.0
docopt                  0.6.2
frozenlist              1.2.0
fsspec                  2022.1.0
future                  0.18.2
gitdb                   4.0.9
GitPython               3.1.18
google-auth             2.6.5
google-auth-oauthlib    0.4.6
grpcio                  1.44.0
idna                    3.3
idna-ssl                1.1.0
importlib-metadata      4.8.3
joblib                  1.1.0
jsonlines               2.0.0
jsonnet                 0.17.0
Markdown                3.3.6
multidict               5.2.0
numpy                   1.19.5
oauthlib                3.2.0
packaging               21.3
pathtools               0.1.2
pip                     21.3.1
pkg_resources           0.0.0
promise                 2.3
protobuf                3.19.4
psutil                  5.9.0
pyasn1                  0.4.8
pyasn1-modules          0.2.8
Pygments                2.9.0
pyparsing               3.0.8
python-dateutil         2.8.2
pytorch-lightning       1.2.10
PyYAML                  6.0
requests                2.27.1
requests-oauthlib       1.3.1
rsa                     4.8
scikit-learn            0.24.2
scipy                   1.5.4
sentencepiece           0.1.96
sentry-sdk              1.5.10
setuptools              59.6.0
shortuuid               1.0.8
six                     1.16.0
smmap                   5.0.0
subprocess32            3.5.4
tensorboard             2.8.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit  1.8.1
threadpoolctl           3.1.0
torch                   1.8.1
torchmetrics            0.2.0
tqdm                    4.60.0
typing_extensions       4.1.1
ujson                   4.0.2
urllib3                 1.26.9
wandb                   0.10.33
webdataset              0.1.103
Werkzeug                2.0.3
wheel                   0.37.1
yarl                    1.7.2
zipp                    3.6.0

Hang when evaluating test set

When I attempt to process the test set, the script just hangs. Here is the output:

GPU available: True, used: False
TPU available: False, using: 0 TPU cores
/home/ed/Documents/DIRTY/env/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:68: UserWarning: GPU available but not used. Set the --gpus flag when calling the script.
  warnings.warn(*args, **kwargs)
/home/ed/Documents/DIRTY/env/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:68: UserWarning: Your `IterableDataset` has `__len__` defined. In combination with multi-processing data loading (e.g. batch size > 1), this can lead to unintended side effects since the samples will be duplicated.
  warnings.warn(*args, **kwargs)
/home/ed/Documents/DIRTY/env/lib/python3.6/site-packages/webdataset/dataset.py:403: UserWarning: num_workers 8 > num_shards 1
  warnings.warn(f"num_workers {num_workers} > num_shards {len(urls)}")
Testing: 0it [00:00, ?it/s]

Question on applying Dirty to custom binary files

I am trying to figure out how to apply Dirty to my own binary files, rather than the provided DIRE dataset. I have attempted to build my own binary file into a new test.jar and modify the "test_file" path in multitask.xfmr.jsonnet, but I am struggling to understand the meaning of the content in the .jsonl files within the provided test.jar (e.g., some key names are t, n, u, etc.). I've checked the relevant information in the paper but did not find the answer. I'm not sure if I am on the right track or if I am missing something crucial. Any help on this matter would be greatly appreciated.

Need for local saving and loading of model.

The exp.py train executor interfaces with wandb for saving and loading models, and takes checkpoints as optional parameters. It doesn't look like the model is saved anywhere on disk, and there is no checkpoint naming convention exemplified in the code. How do we separate the saving and loading functionality from wandb so that it can be executed completely locally?

DIRTY_light access request

Hello, I have read your paper and noticed that you compared the DIRTY_light model with OSPREY, achieving very good results in structural recovery. However, I did not see DIRTY_light in the source code repository. Could you please let me know how to obtain it?

What are the "disappear" types?

Thanks so much for the great work!

I would like to know: what are the "disappear" types?

This sounds very interesting, as the only reference to these types I found is "Assign type 'Disappear' to variables not existing in the ground truth". I have also found a lot of "disappeared" types in both the true and predicted labels, so much so that they dominate the types.

Does this mean that these types are lost because IDA doesn't use the DWARF information properly?

Thanks in advance.
Ruturaj

Serialization of Unions uses wrong type tag

I received an email from someone confused about the Void type appearing in data that looked like a Union. It turns out there is a subtle bug in the serialization code here:

DIRTY/binary/dire_types.py

Lines 918 to 924 in f1f24f4

def _to_json(self) -> t.Dict[str, t.Any]:
    return {
        "T": 8,
        "n": self.name,
        "m": [m._to_json() for m in self.members],
        "p": self.padding,
    }

This caused Unions to be serialized with the Void meta-tag. During deserialization, these are treated as Void types:

DIRTY/binary/dire_types.py

Lines 1041 to 1065 in f1f24f4

@staticmethod
def read_metadata(d: t.Dict[str, t.Any]) -> "TypeLibCodec.CodecTypes":
    classes: t.Dict[
        t.Union[int, str],
        t.Union[
            t.Type["TypeLib"],
            t.Type["TypeLib.EntryList"],
            t.Type["TypeInfo"],
            t.Type["UDT.Member"],
        ],
    ] = {
        "E": TypeLib.EntryList,
        0: TypeLib,
        1: TypeInfo,
        2: Array,
        3: Pointer,
        4: UDT.Field,
        5: UDT.Padding,
        6: Struct,
        7: Union,
        8: Void,
        9: FunctionPointer,
        10: Disappear,
    }
    return classes[d["T"]]._from_json(d)

The serialization bug is simple enough to fix, but it means that the current dataset contains this specific bug. I will fix the current dataset, but if you've already downloaded it and/or don't want to wait, you'll have to modify the read_metadata method to condition on d having other fields, for example by replacing line 1065 in dire_types.py with this (untested) code:

if d["T"] == 8:
    if "m" in d:
        return Union._from_json(d)
    return Void._from_json(d)
return classes[d["T"]]._from_json(d)

Help Needed/Documentation Inquiries

I have personally attempted to use DIRTY in many configurations with varying levels of success.

I have run into the following issues:

  • Package incompatibilities and/or failures to install
  • DIRTY cannot locate IDA's python libraries, "Could not import ida_typeinf. Cannot parse IDA types."
  • Relative imports in the util folder cause syntax errors (invalid syntax on "../../") and require modification to bypass
  • Using the --CUDA switch causes the script to hang and never complete model testing.

I would love to see the documentation of prerequisite setup expanded upon, as these have been my biggest headaches.

I would also appreciate some clarification on the following:

  • Should a specific python patch version be used, for example, 3.7.7?
  • What specific combination of package versions allow this tool to work properly?
  • Should I be adding the DIRTY repository to my python path?
  • Was this developed specifically for use in either a Linux or Windows environment? (I have tried both, but I don't have IDA for Linux)
  • Are there any specific steps for integrating IDA/IDAPython which are not listed on the homepage?
  • Python 3.6 and 3.7 have reached end of life and Python 3.8 will soon follow. Will this be updated or maintained in the future?

That said, I'm really excited to finally try this out and would appreciate any help.

[???] encode_memory() dirtiness

Hello. Thank you for your kindness in sharing such a good project.

Could you please explain why we need
encode_memory

    @staticmethod
    def encode_memory(mems):
        """Encode memory to ids

        <pad>: 0
        <SEP>: 1
        <unk>: 2
        mem_id: mem_offset + 3
        """
        ret = []
        # `mems` mixes integer byte offsets with the literal string "<SEP>"
        # used as a separator, hence the string comparison below.
        for mem in mems[: VocabEntry.MAX_MEM_LENGTH]:
            if mem == "<SEP>":
                ret.append(1)  # separator token id
            elif mem > VocabEntry.MAX_STACK_SIZE:
                ret.append(2)  # out-of-range offsets map to <unk>
            else:
                ret.append(3 + mem)  # shift offsets past the reserved ids 0-2
        return ret

this function, and why it attempts to compare integers with "<SEP>"?
I can't make sense of appending an int token to an array of int tokens instead of an int token.
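(For reference, under the scheme in the docstring, a hypothetical input like [0, 4, "<SEP>", 8] should encode to [3, 7, 1, 11], assuming MAX_MEM_LENGTH and MAX_STACK_SIZE are large enough.)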

And my second question is about this one:

            def var_loc_in_func(loc):
                print(" TODO: fix the magic number for computing vocabulary idx")
                if isinstance(loc, Register):
                    return 1030 + self.vocab.regs[loc.name]
                else:
                    from utils.vocab import VocabEntry

                    return (
                        3 + stack_start_pos - loc.offset
                        if stack_start_pos - loc.offset < VocabEntry.MAX_STACK_SIZE
                        else 2
                    )

What does the 1030 constant do, and why?

And in general, why do we define the tokens like this:

            self.word2id["<pad>"] = PAD_ID
            self.word2id["<s>"] = 1
            self.word2id["</s>"] = 2
            self.word2id["<unk>"] = 3
            self.word2id[SAME_VARIABLE_TOKEN] = 4

but then use them like this:

        <pad>: 0
        <SEP>: 1
        <unk>: 2
        mem_id: mem_offset + 3

Sorry if my questions are too much; I specialize in systems programming, and math with ML is a hobby.
Thanks in advance =)

@pcyin
@qibinc
@jlacomis

Running tox command

I am having an issue running the tox command. I have downloaded the requirements and their specific versions, and I downloaded the model as well as the dataset from the links provided. Please let me know how to resolve the following error:

summary:
mypy: commands succeeded
pipcheck: commands succeeded
ERROR: safety: could not install deps [setuptools ~= 50.3.2, pip ~= 21.1.0, -r/home/abhishek/Desktop/dirty/DIRTY-master/dirty/local_requirements.txt]; v = InvocationError("/home/abhishek/Desktop/dirty/DIRTY-master/dirty/.tox/safety/bin/python -m pip install 'setuptools ~= 50.3.2' 'pip ~= 21.1.0' -r/home/abhishek/Desktop/dirty/DIRTY-master/dirty/local_requirements.txt", -9)
ERROR: pytest: could not install deps [setuptools ~= 50.3.2, pip ~= 21.1.0, -r/home/abhishek/Desktop/dirty/DIRTY-master/dirty/local_requirements.txt]; v = InvocationError("/home/abhishek/Desktop/dirty/DIRTY-master/dirty/.tox/pytest/bin/python -m pip install 'setuptools ~= 50.3.2' 'pip ~= 21.1.0' -r/home/abhishek/Desktop/dirty/DIRTY-master/dirty/local_requirements.txt", -9)


How can I apply `DIRTY` to an .idb file?

Your research seems great to me!

How can I use this research on my IDA .idb file to enhance the decompiled output?
I tried to follow the README but had no idea how to apply it.

I would appreciate it if you could help me a little. Thanks!

Disagreement between preprocessor and decompiler/pretokenizer

Hello.

Can somebody please explain the expected behaviour of the following steps?

The decompiler pass, given the debug argument, emits only (loc: var) pairs for the symbolized version of a function, not the pseudocode.

Then, in the preprocessor, we filter by the pseudocode output for both the stripped and debug versions, resulting in a critically large loss of values and an unpredictable rename model.

So, should both pseudocode versions be involved, or not?
If not, would it be correct to perform the filtering by comparing keys()?

Thanks.

@huzecong @pcyin @clegoues @bvasiles @sophieball @qibinc

A typo in model.py?

Hi,

Recently I have been following up on DIRTY. The work is great, and the code repo is impressive. Thanks to the developers for their efforts!

Just wanted to mention that there might be a typo in the following file (dirty/model/model.py:496): the variable in task_targets should be targets instead of preds.

f"{task}_targets": preds,
