
edinburghnlp / code-docstring-corpus


Preprocessed Python functions and docstrings for automated code documentation (code2doc) and automated code generation (doc2code) tasks.

Home Page: https://arxiv.org/abs/1707.02275

License: Other

Python 64.62% Shell 35.38%
code-generation corpus docstrings documentation-generator neural-machine-translation

code-docstring-corpus's People

Contributors

avmb, suryabhupa


code-docstring-corpus's Issues

Question about creating a dataset format for NeuralCodeSum

Hello @Avmb,
I have a question about the dataset format of NeuralCodeSum.
When I checked https://github.com/wasiahmad/NeuralCodeSum/tree/master/data, the dataset came from this repository, since you supplied the dataset to NeuralCodeSum.

Could you explain how you created the dataset format for NeuralCodeSum?
It looks like a list of word tokens, with underscores and similar characters removed.
If there is a script or procedure for parsing code into that dataset format, I would like to know about it.

Thank you:)
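Regarding the underscore-free token lists: NeuralCodeSum-style preprocessing typically splits identifiers into subword tokens. The sketch below is my own guess at such a splitter, not the script actually used to build that dataset:

```python
import re

def split_identifier(name):
    """Split a code identifier into lowercase word subtokens,
    handling both snake_case and camelCase."""
    tokens = []
    for part in name.split("_"):
        # Break camelCase / PascalCase runs into separate words.
        tokens.extend(re.findall(
            r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", part))
    return [t.lower() for t in tokens if t]
```

For example, `split_identifier('load_doc')` yields `['load', 'doc']` and `split_identifier('getFileName')` yields `['get', 'file', 'name']`.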

Examples?

Can you provide some examples of this working?

BLEU score

Hi, I followed all your steps, used your dataset, and did all the preprocessing and tokenization, but I did not get the same results as you did: my BLEU score was 0. Can you please tell me why? @Avmb @rsennrich
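A BLEU of exactly 0 usually indicates empty hypotheses, misaligned files, or a tokenization mismatch rather than a modeling problem. A small sanity-check sketch (the statistics reported are my own choice, not from the authors):

```python
def sanity_check(ref_lines, hyp_lines):
    """Report symptoms that typically drive BLEU to zero."""
    assert len(ref_lines) == len(hyp_lines), "line counts differ: files misaligned?"
    empty = sum(1 for h in hyp_lines if not h.strip())
    overlap = total = 0
    for ref, hyp in zip(ref_lines, hyp_lines):
        ref_tokens = set(ref.split())
        hyp_tokens = hyp.split()
        # Count hypothesis tokens that appear anywhere in the reference line.
        overlap += sum(1 for t in hyp_tokens if t in ref_tokens)
        total += len(hyp_tokens)
    return {"empty_hypotheses": empty,
            "unigram_overlap": overlap / total if total else 0.0}
```

If `empty_hypotheses` is high or `unigram_overlap` is near zero, check the decoding step and make sure reference and hypothesis were tokenized identically before retraining anything.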

tokenization of the data

Hi,

When I run this command on this particular file, the error below arises. Can you please tell me why this happens only for this file?

file name: data_ps.decldesc.train

All the other files tokenized without problems, but this file did not. Why?

error:
gauravs-MBP:~ g$ /Downloads/mosesdecoder/scripts/tokenizer/tokenizer.perl -l en </code-docstring-corpus/parallel-corpus/data_ps.decldesc.train >~/code-docstring-corpus/parallel-corpus/data_ps.decldesc.train1
Tokenizer Version 1.1
Language: en
Number of threads: 1
utf8 "\xFF" does not map to Unicode at /Users/gaurav/Downloads/mosesdecoder/scripts/tokenizer/tokenizer.perl line 180, line 1133.
Malformed UTF-8 character: \xff\xff\xff\xff\x5c\x27\x29\x20\x44\x43\x4e\x4c\x20 (overflows) in substitution (s///) at /Users/gaurav/Downloads/mosesdecoder/scripts/tokenizer/tokenizer.perl line 240, line 1133.
Malformed UTF-8 character: \xff\xff\xff\xff\x5c\x27\x29\x20\x44\x43\x4e\x4c\x20 (unexpected non-continuation byte 0xff, immediately after start byte 0xff; need 13 bytes, got 1) in substitution (s///) at /Users/gaurav/Downloads/mosesdecoder/scripts/tokenizer/tokenizer.perl line 240, line 1133.
Malformed UTF-8 character (fatal) at /Users/gaurav/Downloads/mosesdecoder/scripts/tokenizer/tokenizer.perl line 240, line 1133.

Thank you so much.
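One way to track down the offending bytes is to scan the file line by line for invalid UTF-8 before handing it to the tokenizer. A minimal sketch:

```python
def find_bad_lines(raw_lines):
    """Return (line_number, reason) for each line that is not valid UTF-8.

    `raw_lines` is an iterable of byte strings, e.g. a file opened in 'rb' mode.
    """
    bad = []
    for lineno, raw in enumerate(raw_lines, start=1):
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError as exc:
            bad.append((lineno, exc.reason))
    return bad
```

Used as `with open('data_ps.decldesc.train', 'rb') as f: print(find_bad_lines(f))`, this should point at the line (1133, per the error above) containing the `\xFF` bytes, which can then be repaired or filtered.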

Parallel corpus V2 possibly incorrect

Hello,

I have tried to use your dataset V2 and found an interesting thing:

  • The number of lines in parallel_methods_desc is 397241
  • The number of lines in parallel_methods_bodies is 397225
  • The number of lines in parallel_desc is 148620
  • The number of lines in parallel_body is 148603

It seems that the dataset is possibly corrupted, or that some descriptions span several lines in the files.

Could you explain the correct approach to match bodies and descriptions?

Thank you.

Edited (June 9th, 2018):

The same problem is encountered with the first dataset version:

  • The number of lines in data_ps.bodies.train is 109109
  • The number of lines in data_ps.descriptions.train is 109130
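A quick way to confirm such mismatches is to count lines in both halves of the corpus before training. A sketch, with placeholder paths:

```python
def count_lines(path):
    # Binary mode: some corpus lines are not valid UTF-8.
    with open(path, "rb") as f:
        return sum(1 for _ in f)

def check_alignment(src_path, tgt_path):
    """Return True iff the two corpus halves have equal line counts."""
    n_src, n_tgt = count_lines(src_path), count_lines(tgt_path)
    if n_src != n_tgt:
        print(f"mismatch: {src_path} has {n_src} lines, {tgt_path} has {n_tgt}")
    return n_src == n_tgt
```

Running `check_alignment` on each train/valid/test pair before preprocessing makes silent off-by-N misalignments fail fast.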

help required

Can you please tell me how to run this code? I have downloaded it but am unable to run it.
Could you please guide me with directions if possible?

Looking forward to a positive reply.

Thank you so much.

Recover original code snippets from `data_ps.all.*`

I found that the source code and descriptions in data_ps.all.* are encoded; for example, d is replaced with qz. May I know if there is a script I can use to convert the encoded code/descriptions back into their original form? Thanks a bunch!

Performance of SOTA model on this dataset

Thank you for your interesting paper, "A parallel corpus of Python functions and documentation strings for automated code documentation and code generation." You said you didn't try AST-based generation in the paper because that was not its purpose, but have you tried any stronger model on the dataset since then, to see where the current SOTA would land? Have you been trying to create a larger dataset?

parallel-corpus

In the parallel-corpus directory, is the data already tokenized,
or do we need to tokenize it with the Moses tokenizer and a BPE tokenizer?
Also, since this data is code, how should it be read? When I write the following:

import codecs

def load_doc(filename):
    file = codecs.open(filename, mode='rb', encoding='utf-8', errors='ignore')
    text = file.read()
    file.close()
    return text

the output does not show the text in the file; instead it prints something like
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00.

Why?

dataset

Hi,

I want to know: in this setup, will we train two NMT models?

  1. code2doc
  2. doc2code

Can I get the source code as body directly without syntax parsing?

Thank you for sharing your code. In your corpus, the body part is represented by a serialized form of the source code that contains many special tokens such as 'DCNL' and 'DCSP'. Is it possible to get the source code directly, without this serialization? Thanks.
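For reference, a hedged sketch of undoing the serialization, assuming 'DCNL' stands in for a newline and each leading 'DCSP' for one level of indentation; verify these conventions against the repository's preprocessing scripts before relying on them:

```python
def deserialize_body(serialized, indent="    "):
    """Rebuild a multi-line body from the flattened corpus form.

    Assumes ' DCNL ' separates lines and each leading 'DCSP ' marks
    one indentation level (an assumption, not a confirmed spec).
    """
    out = []
    for line in serialized.split(" DCNL "):
        depth = 0
        while line.startswith("DCSP "):
            depth += 1
            line = line[len("DCSP "):]
        out.append(indent * depth + line)
    return "\n".join(out)
```

For example, `deserialize_body("def f(): DCNL DCSP return 1")` would yield the two-line function body with one indented line.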

How much memory does this require while training?

We are running this experiment on Google Colab Pro (16 GB machines) and are getting out-of-memory errors with the same vocabulary size (89500). How much memory would you recommend for running this experiment?

Idea for generating test cases

Train seq2seq RNNs to generate syntactically valid inputs to gold code, then use the gold code as an oracle to get the output for each generated input (if the input is syntactically correct).

  • Inside a try/except, the seq2seq receives the gold code as input and has to generate an input for the gold code that doesn't set off an exception.
  • Reward the seq2seq +1 if no exception, -1 if an exception, and add entropy (or minibatch discrimination?) to the loss so it will produce varied generations.
  • To learn which syntax error it made, have the seq2seq predict which exception it set off, and lessen the negative reward if the prediction is correct.
  • Maybe pretrain with supervised learning on already-provided test cases (if we have any) before the RL stage (entropy in the RL loss will keep the RL stage varied).
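The try/except reward described above can be sketched as follows; `gold_fn` and `candidate_input` are hypothetical placeholders for the gold code and a generated input:

```python
def reward(gold_fn, candidate_input):
    """Score a generated input against the gold code used as an oracle.

    Returns (+1, gold_output) when the input runs cleanly, or
    (-1, exception_name) so the generator can also learn to predict
    which exception it triggered.
    """
    try:
        output = gold_fn(*candidate_input)
        return 1, output
    except Exception as exc:
        return -1, type(exc).__name__
```

For example, `reward(lambda x: 1 // x, (0,))` returns `(-1, 'ZeroDivisionError')`, giving the generator both the penalty and the exception label.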

Are all the descriptions used in the baseline methods?

Hi, it is really a great corpus!
I noticed that some descriptions may contain parameter definitions/explanations, which may be hard to generate, so I am just wondering what your target sequence is in the baseline methods. The first sentence of the descriptions (may describe the functionality) or the entire description paragraph? Thanks!
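If only the first sentence of each description were used as the target, a simple heuristic extractor might look like this (the sentence-splitting rule is an assumption of mine, not the authors' method):

```python
import re

def first_sentence(description):
    """Return the description up to the first sentence boundary,
    defined here as '.', '!', or '?' followed by whitespace."""
    description = description.strip()
    match = re.search(r"(?<=[.!?])\s", description)
    return description[:match.start()] if match else description
```

Note that this naive rule mis-splits on abbreviations like "e.g. ", so a real pipeline would want a more careful sentence segmenter.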
