
amharic-fairseq

The amharic-fairseq framework is derived directly from the fairseq toolkit at commit a8f28ecb63ee01c33ea9f6986102136743d47ec2. It is customized for bidirectional Amharic-English sequence translation and relies on attention-based Transformer models rather than the previously prevalent recurrent neural network (RNN) architectures.

Requirements and Installation:

PyTorch version >= 1.13.0
Python version >= 3.6
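
Before installing anything else, it may help to confirm that the interpreter and PyTorch versions meet these requirements (a minimal sketch; it only assumes PyTorch is already installed).

# minimal environment check
import sys
import torch

assert sys.version_info >= (3, 6), "Python >= 3.6 is required"
assert tuple(int(x) for x in torch.__version__.split(".")[:2]) >= (1, 13), \
    "PyTorch >= 1.13.0 is required"
print("Python", sys.version.split()[0], "and PyTorch", torch.__version__, "look OK")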

Cloning and Building the Repository:

Repo cloning:

git clone https://github.com/wubet/amharic-fairseq.git

Change directory into the cloned project:

cd amharic-fairseq

Install the required dependencies:

pip3 install sacremoses
cd examples/translation
git clone https://github.com/moses-smt/mosesdecoder.git
git clone https://github.com/rsennrich/subword-nmt.git
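
To confirm that the sacremoses dependency is usable from Python, a short tokenization check can be run (a minimal sketch, independent of the repository's own scripts).

# quick sanity check for the sacremoses tokenizer
from sacremoses import MosesTokenizer

mt = MosesTokenizer(lang="en")
print(mt.tokenize("Hello, world!", return_str=True))  # -> "Hello , world !"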

Return to the main project directory:

cd ../../

Install the build module, a modern Python package builder:

pip3 install build

Build the package, generating distribution archives (wheel and source) in the dist directory:

python3 -m build

Install the package in editable mode (also known as development mode), so that changes to the code are reflected immediately:

pip3 install --editable .

Compile and build any extension modules (e.g., C extensions) declared in setup.py directly in the source directory:

python3 setup.py build_ext --inplace
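
After these steps, the installation can be sanity-checked by importing the locally built package (a minimal sketch; it only assumes the editable install above completed without errors).

# verify that the locally built fairseq fork is importable
import fairseq
print("fairseq version:", fairseq.__version__)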

Data Preparation

Clone the English-Amharic corpus.

git clone https://github.com/wubet/unified-amharic-english-corpus.git

Process the corpus data for training and evaluation. Processing typically involves several stages that help the model cope with the intricacies of the two languages: breaking words down into subwords or tokens, mapping those tokens to a fixed vocabulary, and adding special tokens. These steps improve the model's handling of infrequent words and morphological variation; a hedged sketch of the subword step follows.
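
For illustration only, the subword step can be approximated with the subword-nmt package (assuming it is also installed as a Python package, e.g. via pip3 install subword-nmt); the file names below are placeholders, and the repository's own bilingual_data_processor.py remains the authoritative pipeline.

# illustrative sketch: learn a BPE vocabulary and apply it to text
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

# learn 10k merge operations from the English training text (placeholder paths)
with open("train.en", encoding="utf-8") as infile, \
     open("bpe.codes", "w", encoding="utf-8") as outfile:
    learn_bpe(infile, outfile, num_symbols=10000)

# apply the learned codes, splitting rare words into subword units
with open("bpe.codes", encoding="utf-8") as codes:
    bpe = BPE(codes)
print(bpe.process_line("unforgettable experience"))  # rare words become subword pieces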

Training data:

python3 data/bilingual_data_processor.py \
--en_file="unified-amharic-english-corpus/datasets/train.am-en.base.en" \
--am_file="unified-amharic-english-corpus/datasets/train.am-en.transliteration.am" \
--implementation="mmap" \
--data_bin_path="data-bin/wmt23_en_am" \
--task_file="train.en-am"

Validation Data:

python3 data/bilingual_data_processor.py \
--en_file="unified-amharic-english-corpus/datasets/dev.am-en.base.en" \
--am_file="unified-amharic-english-corpus/datasets/dev.am-en.transliteration.am" \
--implementation="mmap" \
--data_bin_path="data-bin/wmt23_en_am" \
--task_file="valid.en-am"

Test Data:

python3 data/bilingual_data_processor.py \
--en_file="unified-amharic-english-corpus/datasets/test.am-en.base.en" \
--am_file="unified-amharic-english-corpus/datasets/test.am-en.transliteration.am" \
--implementation="mmap" \
--data_bin_path="data-bin/wmt23_en_am" \
--task_file="test.en-am"

Prepare the Amharic transliteration file for training:

python3 ../translitration/create_transliteration.py \
--source_filenames=unified-amharic-english-corpus/datasets/train.am-en.base.en \
--target_filenames=unified-amharic-english-corpus/datasets/train.am-en.transliteration.am

Prepare the Amharic transliteration file for testing:

python3 ../translitration/create_transliteration.py \
--source_filenames=unified-amharic-english-corpus/datasets/test.am-en.base.en \
--target_filenames=unified-amharic-english-corpus/datasets/test.am-en.transliteration.am
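
The create_transliteration.py script belongs to the companion transliteration code; conceptually, it maps Ge'ez-script characters to Latin sequences so that the model sees a romanized form of Amharic. The toy sketch below only illustrates that idea with a hypothetical three-character table and is not the mapping the actual script uses.

# toy illustration of character-level Ge'ez-to-Latin transliteration
# (hypothetical mapping table; the real script uses a full syllabary table)
TOY_TABLE = {"ሰ": "se", "ላ": "la", "ም": "m"}

def transliterate(text: str) -> str:
    return "".join(TOY_TABLE.get(ch, ch) for ch in text)

print(transliterate("ሰላም"))  # -> "selam"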

Train the Model

To train the model, run the following command:

python3 train.py \
--arch=transformer_wmt_en_am \
--source-lang=en \
--target-lang=am \
--share-decoder-input-output-embed \
--optimizer=adam \
--adam-betas="(0.9,0.98)" \
--clip-norm=0.0 \
--lr=5e-4 \
--lr-scheduler=inverse_sqrt \
--warmup-updates=4000 \
--dropout=0.3 \
--weight-decay=0.0001 \
--criterion=label_smoothed_cross_entropy \
--label-smoothing=0.1 \
--max-tokens=4096 \
--max-update=500000 \
--eval-bleu \
--eval-bleu-args="{\"beam\":5,\"max_len_a\":1.2,\"max_len_b\":10}" \
--eval-bleu-detok=moses \
--eval-bleu-remove-bpe \
--eval-bleu-print-samples \
--best-checkpoint-metric=bleu \
--maximize-best-checkpoint-metric \
--save-interval-updates=1000 \
--keep-interval-updates=5 \
data-bin/wmt23_en_am
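
Once training finishes, the best checkpoint (saved by default under checkpoints/) can be loaded for translation through fairseq's Python hub interface. This is a hedged sketch: the BPE settings and the bpe.codes path are assumptions that depend on how the data was actually preprocessed.

# load the trained checkpoint and translate a sentence (illustrative sketch)
from fairseq.models.transformer import TransformerModel

model = TransformerModel.from_pretrained(
    "checkpoints",                               # default fairseq save directory
    checkpoint_file="checkpoint_best.pt",
    data_name_or_path="data-bin/wmt23_en_am",
    bpe="subword_nmt",                           # assumption: subword-nmt BPE was used
    bpe_codes="data-bin/wmt23_en_am/bpe.codes",  # hypothetical codes file
)
model.eval()
print(model.translate("Hello, how are you?"))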

