Image/Manga Translator


Translate texts in manga/images.
中文说明 | Change Log
Join us on discord https://discord.gg/Ak8APNy4vb

Some manga/images will never be translated, and that is why this project was born.

Samples

Please note that the samples may not always be up to date; they may not represent the current main branch version.

Original / Translated (sample images omitted):

  • 佐藤さんは知っていた - 猫麦 (Source: @09ra_19ra), output plus mask
  • Gris finds out she's of royal blood - VERTI (Source: @VERTIGRIS_ART), output with --detector ctd, plus mask
  • 陰キャお嬢様の新学期🏫📔🌸 (#3) - ひづき夜宵🎀💜 (Source: @hiduki_yayoi), output with --translator none, plus mask
  • 幼なじみの高校デビューの癖がすごい (#1) - 神吉李花☪️🐧 (Source: @rikak), output plus mask

Online Demo

Official Demo (by zyddnys): https://touhou.ai/imgtrans/
Browser Userscript (by QiroNT): https://greasyfork.org/scripts/437569

  • Note this may not work sometimes because Google GCP keeps restarting my instance. In that case you can wait for me to restart the service, which may take up to 24 hours.
  • Note this online demo is using the current main branch version.

Disclaimer

Successor to MMDOCR-HighPerformance.
This is a hobby project, you are welcome to contribute!
Currently this is only a simple demo; many imperfections exist, and we need your support to make this project better!
Primarily designed for translating Japanese text, but also supports Chinese, English and Korean.
Supports inpainting, text rendering and colorization.

Installation

Pip/venv

# First, you need to have Python (>= 3.8) installed on your system.
# The latest Python version often does not yet work with some pytorch libraries.
$ python --version
Python 3.10.6

# Clone this repo
$ git clone https://github.com/zyddnys/manga-image-translator.git

# Create venv
$ python -m venv venv

# Activate venv
$ source venv/bin/activate

# For --use-gpu option go to https://pytorch.org/ and follow
# pytorch installation instructions. Add `--upgrade --force-reinstall`
# to the pip command to overwrite the currently installed pytorch version.

# Install the dependencies
$ pip install -r requirements.txt

$ pip install git+https://github.com/kodalli/pydensecrf.git
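
If you plan to use the --use-gpu option, here is a quick sanity check that your pytorch install can actually see the GPU (a minimal sketch, not part of the project):

# Verify that pytorch was installed with CUDA support.
import torch
print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())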

Poetry

git clone https://github.com/zyddnys/manga-image-translator.git
cd manga-image-translator
poetry shell
poetry install

The models will be downloaded into ./models at runtime.

Additional instructions for Windows

Before you start the pip install, first install Microsoft C++ Build Tools (Download, Instructions) as some pip dependencies will not compile without it. (See #114).

To use cuda on windows install the correct pytorch version as instructed on https://pytorch.org/.

Also, if you have trouble installing pydensecrf with the command above, you can install the pre-compiled wheels from https://www.lfd.uci.edu/~gohlke/pythonlibs/#_pydensecrf.

Docker

Requirements:

  • Docker (version 19.03+ required for CUDA / GPU acceleration)
  • Docker Compose (optional; needed if you want to use the files in the demo/doc folder)
  • Nvidia Container Runtime (optional; needed if you want to use CUDA)

This project has docker support via the zyddnys/manga-image-translator:main image, which contains all required dependencies and models for the project. Note that this image is fairly large (~15GB).

Hosting the web server

The web server can be hosted (for CPU) using

docker run -p 5003:5003 -v result:/app/result --ipc=host --rm zyddnys/manga-image-translator:main -l ENG --manga2eng -v --mode web --host=0.0.0.0 --port=5003

or

docker-compose -f demo/doc/docker-compose-web-with-cpu.yml up

depending on which you prefer. The web server should start on port 5003 and images should appear in the /result folder.

Using as CLI

To use docker with the CLI (i.e. in batch mode):

docker run -v <targetFolder>:/app/<targetFolder> -v <targetFolder>-translated:/app/<targetFolder>-translated  --ipc=host --rm zyddnys/manga-image-translator:main --mode=batch -i=/app/<targetFolder> <cli flags>

Note: In the event you need to reference files on your host machine, you will need to mount them as volumes into the /app folder inside the container. Paths for the CLI must use the internal docker path /app/... instead of the paths on your host machine.

Setting Translation Secrets

Some translation services require API keys to function. To set these, pass them as env vars into the docker container. For example:

docker run --env="DEEPL_AUTH_KEY=xxx" --ipc=host --rm zyddnys/manga-image-translator:main <cli flags>

Using with Nvidia GPU

To use with a supported GPU, please first read the initial Docker section; there are some special dependencies you will need.

To run the container, set the following flags:

docker run ... --gpus=all ... zyddnys/manga-image-translator:main ... --use-gpu

Or (For the web server + GPU)

docker-compose -f demo/doc/docker-compose-web-with-gpu.yml up

Building locally

To build the docker image locally you can run the following (make is required on your machine):

make build-image

Then to test the built image run

make run-web-server

Usage

Batch mode (default)

# use `--use-gpu` for speedup if you have a compatible NVIDIA GPU.
# use `--target-lang <language_code>` to specify a target language.
# use `--inpainter=none` to disable inpainting.
# use `--translator=none` if you only want to use inpainting (blank bubbles)
# replace <path> with the path to the image folder or file.
$ python -m manga_translator -v --translator=google -l ENG -i <path>
# results can be found under `<path_to_image_folder>-translated`.

Demo mode

# saves a single image into the /result folder for demonstration purposes
# use `--mode demo` to enable demo translation.
# replace <path> with the path to the image file.
$ python -m manga_translator --mode demo -v --translator=google -l ENG -i <path>
# result can be found in `result/`.

Web Mode

# use `--mode web` to start a web server.
$ python -m manga_translator -v --mode web --use-gpu
# the demo will be serving on http://127.0.0.1:5003

Api Mode

# use `--mode api` to start an API server.
$ python -m manga_translator -v --mode api --use-gpu
# the API will be serving on http://127.0.0.1:5003

Related Projects

GUI implementation: BallonsTranslator

Docs

Recommended Modules

Detector:

  • ENG: ??
  • JPN: ??
  • CHS: ??
  • KOR: ??
  • Using --detector ctd can increase the number of text lines detected

OCR:

  • ENG: ??
  • JPN: ??
  • CHS: ??
  • KOR: 48px

Translator:

  • JPN -> ENG: Sugoi
  • CHS -> ENG: ??
  • CHS -> JPN: ??
  • JPN -> CHS: ??
  • ENG -> JPN: ??
  • ENG -> CHS: ??

Inpainter: ??

Colorizer: mc2

Tips to improve translation quality

  • Small resolutions can sometimes trip up the detector, which is not so good at picking up irregular text sizes. To circumvent this you can use an upscaler by specifying --upscale-ratio 2 or any other value.
  • If the rendered text is too small to read, specify --font-size-minimum 30, for instance, or use the --manga2eng renderer, which will try to adapt to the detected text bubbles.
  • Specify a font with --font-path fonts/anime_ace_3.ttf, for example.

Options

-h, --help                                   show this help message and exit
-m, --mode {demo,batch,web,web_client,ws,api}
                                             Run demo in single image demo mode (demo), batch
                                             translation mode (batch), web service mode (web)
-i, --input INPUT [INPUT ...]                Path to an image file if using demo mode, or path to an
                                             image folder if using batch mode
-o, --dest DEST                              Path to the destination folder for translated images in
                                             batch mode
-l, --target-lang {CHS,CHT,CSY,NLD,ENG,FRA,DEU,HUN,ITA,JPN,KOR,PLK,PTB,ROM,RUS,ESP,TRK,UKR,VIN,ARA,CNR,SRP,HRV,THA,IND}
                                             Destination language
-v, --verbose                                Print debug info and save intermediate images in result
                                             folder
-f, --format {png,webp,jpg,xcf,psd,pdf}      Output format of the translation.
--attempts ATTEMPTS                          Retry attempts on encountered error. -1 means infinite
                                             times.
--ignore-errors                              Skip image on encountered error.
--overwrite                                  Overwrite already translated images in batch mode.
--skip-no-text                               Skip image without text (Will not be saved).
--model-dir MODEL_DIR                        Model directory (by default ./models in project root)
--use-gpu                                    Turn on/off gpu
--use-gpu-limited                            Turn on/off gpu (excluding offline translator)
--detector {default,ctd,craft,none}          Text detector used for creating a text mask from an
                                             image, DO NOT use craft for manga, it's not designed
                                             for it
--ocr {32px,48px,48px_ctc,mocr}              Optical character recognition (OCR) model to use
--use-mocr-merge                             Use bbox merge during Manga OCR inference.
--inpainter {default,lama_large,lama_mpe,sd,none,original}
                                             Inpainting model to use
--upscaler {waifu2x,esrgan,4xultrasharp}     Upscaler to use. --upscale-ratio has to be set for it
                                             to take effect
--upscale-ratio UPSCALE_RATIO                Image upscale ratio applied before detection. Can
                                             improve text detection.
--colorizer {mc2}                            Colorization model to use.
--translator {google,youdao,baidu,deepl,papago,caiyun,gpt3,gpt3.5,gpt4,none,original,offline,nllb,nllb_big,sugoi,jparacrawl,jparacrawl_big,m2m100,m2m100_big,sakura}
                                             Language translator to use
--translator-chain TRANSLATOR_CHAIN          Output of one translator goes into another. Example:
                                             --translator-chain "google:JPN;sugoi:ENG" (see the
                                             parsing sketch after this list).
--selective-translation SELECTIVE_TRANSLATION
                                             Select a translator based on detected language in
                                             image. Note the first translation service acts as
                                             default if the language isn't defined. Example:
                                             --selective-translation "google:JPN;sugoi:ENG".
--revert-upscaling                           Downscales the previously upscaled image after
                                             translation back to original size (Use with --upscale-
                                             ratio).
--detection-size DETECTION_SIZE              Size of image used for detection
--det-rotate                                 Rotate the image for detection. Might improve
                                             detection.
--det-auto-rotate                            Rotate the image for detection to prefer vertical
                                             textlines. Might improve detection.
--det-invert                                 Invert the image colors for detection. Might improve
                                             detection.
--det-gamma-correct                          Applies gamma correction for detection. Might improve
                                             detection.
--unclip-ratio UNCLIP_RATIO                  How much to extend text skeleton to form bounding box
--box-threshold BOX_THRESHOLD                Threshold for bbox generation
--text-threshold TEXT_THRESHOLD              Threshold for text detection
--min-text-length MIN_TEXT_LENGTH            Minimum text length of a text region
--no-text-lang-skip                          Don't skip text that is seemingly already in the target
                                             language.
--inpainting-size INPAINTING_SIZE            Size of image used for inpainting (too large will
                                             result in OOM)
--inpainting-precision {fp32,fp16,bf16}      Inpainting precision for lama, use bf16 while you can.
--colorization-size COLORIZATION_SIZE        Size of image used for colorization. Set to -1 to use
                                             full image size
--denoise-sigma DENOISE_SIGMA                Used by colorizer and affects color strength, range
                                             from 0 to 255 (default 30). -1 turns it off.
--mask-dilation-offset MASK_DILATION_OFFSET  By how much to extend the text mask to remove left-over
                                             text pixels of the original image.
--font-size FONT_SIZE                        Use fixed font size for rendering
--font-size-offset FONT_SIZE_OFFSET          Offset font size by a given amount, positive number
                                             increase font size and vice versa
--font-size-minimum FONT_SIZE_MINIMUM        Minimum output font size. Default is
                                             image_sides_sum/200
--font-color FONT_COLOR                      Overwrite the text fg/bg color detected by the OCR
                                             model. Use hex string without the "#" such as FFFFFF
                                             for a white foreground or FFFFFF:000000 to also have a
                                             black background around the text.
--line-spacing LINE_SPACING                  Line spacing is font_size * this value. Default is 0.01
                                             for horizontal text and 0.2 for vertical.
--force-horizontal                           Force text to be rendered horizontally
--force-vertical                             Force text to be rendered vertically
--align-left                                 Align rendered text left
--align-center                               Align rendered text centered
--align-right                                Align rendered text right
--uppercase                                  Change text to uppercase
--lowercase                                  Change text to lowercase
--no-hyphenation                             Stop the renderer from splitting up words with a
                                             hyphen character (-)
--manga2eng                                  Render english text translated from manga with some
                                             additional typesetting. Ignores some other argument
                                             options
--gpt-config GPT_CONFIG                      Path to GPT config file, more info in README
--use-mtpe                                   Turn on/off machine translation post editing (MTPE) on
                                             the command line (works only on linux right now)
--save-text                                  Save extracted text and translations into a text file.
--save-text-file SAVE_TEXT_FILE              Like --save-text but with a specified file path.
--filter-text FILTER_TEXT                    Filter regions by their text with a regex. Example
                                             usage: --filter-text ".*badtext.*"
--prep-manual                                Prepare for manual typesetting by outputting blank,
                                             inpainted images, plus copies of the original for
                                             reference
--font-path FONT_PATH                        Path to font file
--gimp-font GIMP_FONT                        Font family to use for gimp rendering.
--host HOST                                  Used by web module to decide which host to attach to
--port PORT                                  Used by web module to decide which port to attach to
--nonce NONCE                                Used by web module as secret for securing internal web
                                             server communication
--ws-url WS_URL                              Server URL for WebSocket mode
--save-quality SAVE_QUALITY                  Quality of saved JPEG image, range from 0 to 100 with
                                             100 being best
--ignore-bubble IGNORE_BUBBLE                The threshold for ignoring text in non-bubble areas,
                                             with valid values ranging from 1 to 50; values outside
                                             this range ignore nothing. Recommended: 5 to 10. If it
                                             is too low, normal bubble areas may be ignored; if it
                                             is too large, non-bubble areas may be treated as
                                             normal bubbles
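
The chain syntax used by --translator-chain and --selective-translation pairs a translator with a target language per step. A hypothetical parser, for illustration only (not the project's actual implementation):

# Illustrative parser for the "translator:LANG;translator:LANG" syntax.
def parse_chain(chain):
    steps = []
    for part in chain.split(";"):
        translator, lang = part.split(":")
        steps.append((translator.strip(), lang.strip()))
    return steps

print(parse_chain("google:JPN;sugoi:ENG"))
# -> [('google', 'JPN'), ('sugoi', 'ENG')]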

Language Code Reference

Used by the --target-lang or -l argument.

CHS: Chinese (Simplified)
CHT: Chinese (Traditional)
CSY: Czech
NLD: Dutch
ENG: English
FRA: French
DEU: German
HUN: Hungarian
ITA: Italian
JPN: Japanese
KOR: Korean
PLK: Polish
PTB: Portuguese (Brazil)
ROM: Romanian
RUS: Russian
ESP: Spanish
TRK: Turkish
UKR: Ukrainian
VIN: Vietnamese
ARA: Arabic
CNR: Montenegrin
SRP: Serbian
HRV: Croatian
THA: Thai
IND: Indonesian

Translators Reference

Name         API Key   Offline   Note
google
youdao       ✔️                  Requires YOUDAO_APP_KEY and YOUDAO_SECRET_KEY
baidu        ✔️                  Requires BAIDU_APP_ID and BAIDU_SECRET_KEY
deepl        ✔️                  Requires DEEPL_AUTH_KEY
caiyun       ✔️                  Requires CAIYUN_TOKEN
gpt3         ✔️                  Implements text-davinci-003. Requires OPENAI_API_KEY
gpt3.5       ✔️                  Implements gpt-3.5-turbo. Requires OPENAI_API_KEY
gpt4         ✔️                  Implements gpt-4. Requires OPENAI_API_KEY
papago
sakura                           Requires SAKURA_API_BASE
offline                ✔️        Chooses the most suitable offline translator for the language
sugoi                  ✔️        Sugoi V4.0 Models
m2m100                 ✔️        Supports every language
m2m100_big             ✔️
none                   ✔️        Translates to empty texts
original               ✔️        Keeps original texts

  • API Key: Whether the translator requires an API key to be set as an environment variable. For this you can create a .env file in the project root directory containing your API keys, like so:
OPENAI_API_KEY=sk-xxxxxxx...
DEEPL_AUTH_KEY=xxxxxxxx...
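
These are ordinary environment variables; a minimal sketch of checking for one before running (illustrative, not project code):

# Fail early if a required key is missing from the environment.
import os

if "DEEPL_AUTH_KEY" not in os.environ:
    raise RuntimeError("DEEPL_AUTH_KEY is not set; see the .env example above")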

GPT Config Reference

Used by the --gpt-config argument.

# The prompt being fed into GPT before the text to translate.
# Use {to_lang} to indicate where the target language name should be inserted.
# Note: ChatGPT models don't use this prompt.
prompt_template: >
  Please help me to translate the following text from a manga to {to_lang}
  (if it's already in {to_lang} or looks like gibberish you have to output it as it is instead):\n

# What sampling temperature to use, between 0 and 2.
# Higher values like 0.8 will make the output more random,
# while lower values like 0.2 will make it more focused and deterministic.
temperature: 0.5

# An alternative to sampling with temperature, called nucleus sampling,
# where the model considers the results of the tokens with top_p probability mass.
# So 0.1 means only the tokens comprising the top 10% probability mass are considered.
top_p: 1

# The prompt being fed into ChatGPT before the text to translate.
# Use {to_lang} to indicate where the target language name should be inserted.
# Tokens used in this example: 57+
chat_system_template: >
  You are a professional translation engine, 
  please translate the story into a colloquial, 
  elegant and fluent content, 
  without referencing machine translations. 
  You must only translate the story, never interpret it.
  If there is any issue in the text, output it as is.

  Translate to {to_lang}.

# Samples being fed into ChatGPT to show an example conversation.
# In a [prompt, response] format, keyed by the target language name.
#
# Generally, samples should include some examples of translation preferences, and ideally
# some names of characters it's likely to encounter.
#
# If you'd like to disable this feature, just set this to an empty list.
chat_sample:
  Simplified Chinese: # Tokens used in this example: 88 + 84
    - <|1|>恥ずかしい… 目立ちたくない… 私が消えたい…
      <|2|>きみ… 大丈夫⁉
      <|3|>なんだこいつ 空気読めて ないのか…?
    - <|1|>好尴尬…我不想引人注目…我想消失…
      <|2|>你…没事吧⁉
      <|3|>这家伙怎么看不懂气氛的…?

# Overwrite configs for a specific model.
# For now the list is: gpt3, gpt35, gpt4
gpt35:
  temperature: 0.3
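
The config is plain YAML; here is a sketch of how a file like the one above could be loaded and its template filled in (assumes PyYAML; key names are taken from the example above):

# Load a --gpt-config style file and render the prompt for a target language.
import yaml

with open("gpt_config.yaml", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

prompt = cfg["prompt_template"].format(to_lang="English")
# Model-specific overrides (e.g. the gpt35 block) take precedence.
temperature = cfg.get("gpt35", {}).get("temperature", cfg.get("temperature"))
print(prompt)
print("temperature:", temperature)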

Using Gimp for rendering

When setting the output format to {xcf, psd, pdf}, Gimp will be used to generate the file.

On Windows this assumes Gimp 2.x is installed to C:\Users\<Username>\AppData\Local\Programs\Gimp 2.

The resulting .xcf file contains the original image as the lowest layer, with the inpainting as a separate layer. The translated textboxes each have their own layer, with the original text as the layer name for easy access.

Limitations:

  • Gimp will turn text layers into regular images when saving .psd files.
  • Rotated text isn't handled well in Gimp. When editing a rotated textbox it'll also show a popup that it was modified by an outside program.
  • Font family is controlled separately, with the --gimp-font argument.

Api Documentation

API V2
# use `--mode api` to start an API server.
$ python -m manga_translator -v --mode api --use-gpu
# the api will be serving on http://127.0.0.1:5003

The API accepts JSON (POST) and multipart requests.
The API endpoints are /colorize_translate, /inpaint_translate, /translate and /get_text.
Valid arguments for the API are (a minimal client sketch follows this list):

// These are taken from args.py. For more info see README.md
detector: String
ocr: String
inpainter: String
upscaler: String
translator: String 
target_language: String
upscale_ratio: Integer
translator_chain: String
selective_translation: String
attempts: Integer
detection_size: Integer // 1024 => 'S', 1536 => 'M', 2048 => 'L', 2560 => 'X'
text_threshold: Float
box_threshold: Float
unclip_ratio: Float
inpainting_size: Integer
det_rotate: Bool
det_auto_rotate: Bool
det_invert: Bool
det_gamma_correct: Bool
min_text_length: Integer
colorization_size: Integer
denoise_sigma: Integer
mask_dilation_offset: Integer
ignore_bubble: Integer
gpt_config: String
filter_text: String
overlay_type: String

// These are api specific args
direction: String // {'auto', 'h', 'v'}
base64Images: String //Image in base64 format
image: Multipart // image upload from multipart
url: String // an url string
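
A minimal client sketch for the JSON API, assuming the server runs in api mode on port 5003 (uses the requests package; the response format is not documented here, so only the status code is printed):

# POST a base64-encoded image to the /translate endpoint.
import base64
import requests

with open("sample.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "translator": "google",
    "target_language": "ENG",
    "base64Images": b64,
}
resp = requests.post("http://127.0.0.1:5003/translate", json=payload)
print(resp.status_code)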

Manual translation replaces machine translation with human translators. A basic manual translation demo can be found at http://127.0.0.1:5003/manual when using web mode.

API

Two modes of translation service are provided by the demo: synchronous mode and asynchronous mode.
In synchronous mode your HTTP POST request will finish once the translation task is finished.
In asynchronous mode your HTTP POST request will respond with a task_id immediately; you can use this task_id to poll for the translation task state.

Synchronous mode

  1. POST a form request with form data file:<content-of-image> to http://127.0.0.1:5003/run
  2. Wait for response
  3. Use the resultant task_id to find the translation result in the result/ directory, e.g. by using Nginx to expose result/ (see the sketch below)
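
A sketch of the synchronous flow above, assuming web mode on port 5003 (the shape of the /run response is not documented here, so the raw body is printed):

# Synchronous translation: a single POST that returns when the task is done.
import requests

with open("sample.png", "rb") as f:
    resp = requests.post("http://127.0.0.1:5003/run", files={"file": f})
resp.raise_for_status()
print(resp.text)  # contains the task_id identifying the output under result/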

Asynchronous mode

  1. POST a form request with form data file:<content-of-image> to http://127.0.0.1:5003/submit
  2. Acquire translation task_id
  3. Poll for translation task state by posting JSON {"taskid": <task-id>} to http://127.0.0.1:5003/task-state
  4. The translation is finished when the resultant state is either finished, error or error-lang
  5. Find the translation result in the result/ directory, e.g. by using Nginx to expose result/ (see the sketch below)
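
A polling sketch of the asynchronous flow above (the task_id and state response field names are assumptions based on the steps described):

# Asynchronous translation: submit the image, then poll /task-state.
import time
import requests

BASE = "http://127.0.0.1:5003"
with open("sample.png", "rb") as f:
    task_id = requests.post(f"{BASE}/submit", files={"file": f}).json()["task_id"]

state = None
while state not in ("finished", "error", "error-lang"):
    time.sleep(2)
    state = requests.post(f"{BASE}/task-state", json={"taskid": task_id}).json()["state"]
print(task_id, state)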

Manual translation

POST a form request with form data file:<content-of-image> to http://127.0.0.1:5003/manual-translate and wait for response.

You will obtain a JSON response like this:

{
  "task_id": "12c779c9431f954971cae720eb104499",
  "status": "pending",
  "trans_result": [
    {
      "s": "☆上司来ちゃった……",
      "t": ""
    }
  ]
}

Fill in translated texts:

{
  "task_id": "12c779c9431f954971cae720eb104499",
  "status": "pending",
  "trans_result": [
    {
      "s": "☆上司来ちゃった……",
      "t": "☆Boss is here..."
    }
  ]
}

Post translated JSON to http://127.0.0.1:5003/post-manual-result and wait for response.
Then you can find the translation result in result/ directory, e.g. using Nginx to expose result/.
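
Putting the manual flow together (a sketch; my_translate is a hypothetical stand-in for your human translation step, and the field names follow the JSON examples above):

# Manual translation round trip: fetch source texts, fill in "t", post back.
import requests

BASE = "http://127.0.0.1:5003"

def my_translate(source_text):
    # Hypothetical placeholder for the human translation step.
    return "☆Boss is here..."

with open("sample.png", "rb") as f:
    task = requests.post(f"{BASE}/manual-translate", files={"file": f}).json()

for region in task["trans_result"]:
    region["t"] = my_translate(region["s"])

requests.post(f"{BASE}/post-manual-result", json=task)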

Next steps

A list of what needs to be done next; you're welcome to contribute.

  1. Use diffusion-model-based inpainting to achieve near perfect results, though this could be much slower.
  2. IMPORTANT!!! HELP NEEDED!!! The current text rendering engine is barely usable; we need your help to improve text rendering!
  3. Text rendering area is determined by detected text lines, not speech bubbles.
    This works for images without speech bubbles, but makes it impossible to decide where to put translated English text. I have no idea how to solve this.
  4. Ryota et al. proposed using multimodal machine translation; maybe we can add ViT features for building custom NMT models.
  5. Make this project work for video (rewrite code in C++ and use GPU/other hardware NN accelerators).
    It could be used for detecting hard subtitles in videos, generating an ass file and removing them completely.
  6. Mask refinement using non-deep-learning algorithms; I am currently testing out a CRF-based algorithm.
  7. Angled text region merge is not currently supported.
  8. Create a pip repository.

Support Us

GPU servers are not cheap; please consider donating to us.

size mismatch for backbone.ConvNet.layer4.0.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.0.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.0.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.1.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.1.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.1.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.2.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.2.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.2.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.3.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.3.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.3.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.conv1.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.4.bn1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.conv2.weight: copying a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for backbone.ConvNet.layer4.4.bn2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.layer4.4.bn2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.conv4_1.weight: copying a param with shape torch.Size([320, 320, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 512, 2, 2]).
size mismatch for backbone.ConvNet.bn4_1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_1.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_1.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.conv4_2.weight: copying a param with shape torch.Size([320, 320, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 512, 2, 2]).
size mismatch for backbone.ConvNet.bn4_2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_2.running_mean: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for backbone.ConvNet.bn4_2.running_var: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for encoders.layers.0.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for encoders.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for encoders.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for encoders.layers.0.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for encoders.layers.0.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.0.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for encoders.layers.1.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for encoders.layers.1.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for encoders.layers.1.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for encoders.layers.1.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for encoders.layers.1.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoders.layers.1.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.0.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.multihead_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.0.multihead_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.0.multihead_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.0.multihead_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for decoders.layers.0.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for decoders.layers.0.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm3.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.0.norm3.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.self_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.1.self_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.1.self_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.1.self_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.multihead_attn.in_proj_weight: copying a param with shape torch.Size([960, 320]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for decoders.layers.1.multihead_attn.in_proj_bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1536]).
size mismatch for decoders.layers.1.multihead_attn.out_proj.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoders.layers.1.multihead_attn.out_proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.linear1.weight: copying a param with shape torch.Size([2048, 320]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for decoders.layers.1.linear2.weight: copying a param with shape torch.Size([320, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for decoders.layers.1.linear2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm3.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoders.layers.1.norm3.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for pe.pe: copying a param with shape torch.Size([768, 1, 320]) from checkpoint, the shape in current model is torch.Size([768, 1, 512]).
size mismatch for embd.weight: copying a param with shape torch.Size([19264, 320]) from checkpoint, the shape in current model is torch.Size([19264, 512]).
size mismatch for color_pred1.0.weight: copying a param with shape torch.Size([64, 320]) from checkpoint, the shape in current model is torch.Size([64, 512]).
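
Every mismatched tensor in this log is 320 wide in the checkpoint but 512 wide in the instantiated model, which suggests the downloaded weights and the checked-out code belong to different variants of the OCR model. A quick way to confirm is to inspect the checkpoint directly; a minimal sketch, assuming a standard PyTorch checkpoint (the path and key names are placeholders):

import torch

ckpt = torch.load("models/ocr.ckpt", map_location="cpu")  # placeholder path
state = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

# Print a few parameter shapes to see which width the weights were
# trained with (320 here) versus what the current code builds (512).
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))

Re-downloading the model files that match the current code, or checking out the code revision the checkpoint was made for, should make the shapes agree.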

Can you share the compressed font files you have gathered?

I'm sorry, some fonts cannot be found by name; I think the font name list is not comprehensive enough. Sorry to bother you, but could you share the compressed font files?
I opened a new issue because I thought this was different from the existing dataset issue.

Problem with the translator switching automatically

Hi, I was running the command line on Windows to translate an entire folder and noticed a problem:
Inpainting resolution: 1360x1920
-- Translating
oh no.
fail to initialize deepl :
auth_key must not be empty
switch to google translator
-- Rendering translated text
It seems the translator I selected was recognized as deepl, when in fact the command I ran was
python translate_demo.py --verbose --mode batch --use-inpainting --use-cuda --translator=baidu --target-lang=CHS --image D:/translate/1234
Clearly I selected Baidu translation, yet it switched to the Google translator. How can I solve this problem? Thanks!

freetype.ft_errors.FT_Exception: FT_Exception: (cannot open resource)

When I try to have something translated, I always get the error stated in the issue title. I simply took the commands from the usage section and adjusted the path to the translate_demo.py file and, of course, the images. I get the error in both batch mode and single image mode, so it's not limited to just one mode. Here is the full error message I get:
Traceback (most recent call last):
File "D:\manga-image-translator-main\translate_demo.py", line 334, in <module>
loop.run_until_complete(main(args.mode))
File "C:\Users\Strah\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
return future.result()
File "D:\manga-image-translator-main\translate_demo.py", line 237, in main
text_render.prepare_renderer()
File "D:\manga-image-translator-main\text_rendering\text_render.py", line 481, in prepare_renderer
CACHED_FONT_FACE.append(freetype.Face(font_filename))
File "C:\Users\Strah\AppData\Local\Programs\Python\Python310\lib\site-packages\freetype\__init__.py", line 1101, in __init__
raise FT_Exception(error)
freetype.ft_errors.FT_Exception: FT_Exception: (cannot open resource)
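
FT_Exception: (cannot open resource) is freetype's way of saying the font file could not be opened at all, which usually points to a wrong working directory or a missing font path rather than a rendering bug. A quick standalone check (the path below is a placeholder for whatever font the renderer is configured to load):

import os
import freetype  # the freetype-py package shown in the traceback

font_filename = "fonts/Arial-Unicode-Regular.ttf"  # placeholder path

# If this prints False, the path simply does not resolve from the
# current working directory, which reproduces the exception above.
print(os.path.exists(font_filename))
face = freetype.Face(font_filename)
print(face.family_name)

Running the script from the repository root (so relative font paths resolve) or pointing the renderer at an existing font file is the usual fix.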

training script and data for model compression

Hello there, I'd like to do some model compression, such as quantization, to make the model smaller and faster for CPU applications. Would you be able to release the training script and dataset so I can contribute on that end? Thanks!
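
Even before a training script is released, post-training quantization can be prototyped against any released checkpoint. A generic PyTorch sketch of dynamic quantization (the toy model is a stand-in, not this project's architecture):

import torch

# Dynamically quantize linear layers to int8 for faster CPU inference.
model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU())
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized)

Convolution-heavy parts of the pipeline would need static quantization with calibration data, which is where a released dataset would help.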

Add sugoi translator offline support please!

Hi,

I can't create an API key with DeepL because they won't accept my credit card.
Sugoi Translator's translation quality is currently on par with DeepL.
I hope you can add offline Sugoi Translator integration.

Thanks for your tool, great work!

Font is too small everywhere

Is there an option for me to just add 5 or 10 to the font size in code so that it's readable?
Example:
It decides that the font should be 32, so it becomes 42 instead.
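
The request amounts to a one-line tweak wherever the renderer settles on a size; a hedged sketch with assumed names (the real code computes the size elsewhere):

FONT_SIZE_OFFSET = 10  # assumed knob, not an existing option

def adjusted_font_size(detected_font_size: int) -> int:
    # Bump every region's detected font size by a fixed amount so the
    # rendered text stays readable.
    return detected_font_size + FONT_SIZE_OFFSET

print(adjusted_font_size(32))  # -> 42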

Batch Mode Error after latest update

Single mode works fine, but I get this error in batch mode after the update:

F:\Offline\Visual Novel\[MTL] Machine Translation tool\manga-image-translator-main>python translate_demo.py --mode batch --image F:\lime_message\ --use-inpainting --verbose --translator=google --target-lang=ENG

Namespace(mode='batch', image='F:\\lime_message\\', image_dst='', size=1536, use_inpainting=True, use_cuda=False, force_horizontal=False, inpainting_size=2048, unclip_ratio=2.3, box_threshold=0.7, text_threshold=0.5, text_mag_ratio=1, translator='google', target_lang='ENG', use_ctd=False, verbose=True)

 -- Loading models
Processing image in source directory
Processing F:\lime_message\desktop.ini -> F:\lime_message-translated\desktop.ini

Traceback (most recent call last):
  File "F:\Offline\Visual Novel\[MTL] Machine Translation tool\manga-image-translator-main\translate_demo.py", line 273, in main
    await infer(img, 'demo', '', dst_image_name = dst_filename, alpha_ch = alpha_ch)
UnboundLocalError: local variable 'img' referenced before assignment
[... the identical "Processing ... -> ..." line and UnboundLocalError traceback repeat for every remaining file in the folder, from im28a.png through lime02_6_01.png, where the log is cut off ...]

Error when running

Thanks for your great support.

Traceback (most recent call last):
File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/translate_demo.py", line 647, in <module>
main()
File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/translate_demo.py", line 525, in main
boxes, scores = det({'shape':[(img_resized.shape[0], img_resized.shape[1])]}, db)
File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/dbnet_utils.py", line 38, in __call__
boxes, scores = self.boxes_from_bitmap(pred[batch_index], segmentation[batch_index], width, height)
File "/Users/tony/IPANDALAB Dropbox/Oh Tony/project/Python/manga-image-translator/dbnet_utils.py", line 105, in boxes_from_bitmap
contours, _ = cv2.findContours((bitmap * 255).astype(np.uint8), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ValueError: too many values to unpack (expected 2)

I've run into this error.

Can you explain what I can do about it?

Thanks
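
This unpacking error is almost certainly an OpenCV major-version mismatch: cv2.findContours returns three values on OpenCV 3.x but only two (contours, hierarchy) on 4.x, and the code in dbnet_utils.py expects two. Installing an OpenCV 4.x build should fix it; alternatively, a version-agnostic call looks like this sketch:

import cv2
import numpy as np

bitmap = np.zeros((64, 64), dtype=np.uint8)  # stand-in for the text bitmap

# OpenCV 3.x returns (image, contours, hierarchy); 4.x returns
# (contours, hierarchy). Slicing the last two works on both.
result = cv2.findContours(bitmap, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours, hierarchy = result[-2:]
print(len(contours))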

Result not very satisfying

hi, I want to translate an image (a screenshot of a game) from Korean to Chinese. Here is the original image URL: https://imgur.com/vBAjVVF and the corresponding result image URL: https://imgur.com/kBipYEC. It seems that the Korean words are not well segmented at all; some words are not identified and thus not translated. The CL arguments used: python translate_demo.py --verbose --translator=baidu --target-lang=CHS --image ./demo/test2.jpg.

If bad arguments are the cause, I'd be very happy to know the right ones, thanks!

Model for English OCR

Can you provide a model to OCR English pages?
I had good results with Japanese pages, but when I tried an English page...

[attached images: sample page "005", translated output and original]

Or at least add an example of how we can build our own model... though I suppose I won't know how to handle it even with documentation (I'm not good with Python).

Is there any reason why the font size should be restricted to power of 2?

Hi, I forgot about this project and have been making some PRs of my own.
There is still blurring of translated text, which hurts the quality... For some images that I tested, JPEG artifacts were visible even in a PNG file.
I believe it's because the translated text is first drawn at a limited font size (e.g. 32px when it should start from 50px), then relocated by cv2.warpAffine. The warpAffine function reduces quality by itself, but the size difference between the source and destination is another big factor of quality loss.
Why does the font size need to be restricted to a power of 2? Does the freetype package only work with fixed font sizes? Does the cache get too large without the restriction?

Meanwhile, my PR will fix the horizontal mode render, which is not going to use freetype, so this might not matter that much. But I was just curious.
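
For readers following along: the restriction being questioned rounds the detected size up to a power of two, rasterizes the glyphs at that size multiplied by text_mag_ratio, and then scales the result into place with cv2.warpAffine, so the quality concern is the final downscale from, say, 64px back into a 40px region. A sketch of the rounding in question (the repository's utils.findNextPowerOf2 may differ in detail):

def find_next_power_of_2(n: int) -> int:
    # Round up to the next power of two, e.g. 40 -> 64.
    p = 1
    while p < n:
        p <<= 1
    return p

print(find_next_power_of_2(32))  # 32
print(find_next_power_of_2(40))  # 64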

Upload Failed & Access is denied error

I cloned the repo and installed all the required modules, but I get an upload failed error in web mode.

F:\Offline\Manga\[MTL] Machine Translation tool\manga-image-translator-main>python translate_demo.py --mode web --use-inpainting --verbose --translator=google --target-lang=ENG
Namespace(mode='web', image='', image_dst='', size=1536, use_inpainting=True, use_cuda=False, force_horizontal=False, inpainting_size=2048, unclip_ratio=2.3, box_threshold=0.7, text_threshold=0.5, text_mag_ratio=1, translator='google', target_lang='ENG', use_ctd=False, verbose=True)
-- Loading models
-- Running in web service mode
-- Waiting for translation tasks
fail to initialize deepl :
auth_key must not be empty
switch to google translator
Serving up app on 127.0.0.1:5003


Also, when I try to run a batch translation, I get an access denied error like this:
F:\Offline\Manga\[MTL] Machine Translation tool\manga-image-translator-main>python translate_demo.py --image <F:\Offline\Manga\Sousaku Kanojo\lime message> --use-inpainting --verbose --translator=google --target-lang=ENG
Access is denied.

Does anyone know the solution, or is something perhaps wrong with my command?
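
The "Access is denied." message is most likely the shell rather than the tool: cmd treats < and > as redirection operators, so the angle brackets (which are only placeholder notation in usage examples) must be dropped and a path containing spaces quoted, e.g.:

python translate_demo.py --image "F:\Offline\Manga\Sousaku Kanojo\lime message" --use-inpainting --verbose --translator=google --target-lang=ENG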

how to use the inpainter?

I want to use the inpainter only, so that it blanks out all the bubbles inside my manga and I can translate them manually.
But the code is so big and complex; can you guide me or tell me how to use the inpainting only?

Thanks in advance.

What is going on here?

Traceback (most recent call last):
File "G:\GitHub\image-translator\manga-image-translator\translate_demo.py", line 15, in <module>
from text_mask import dispatch as dispatch_mask_refinement
File "G:\GitHub\image-translator\manga-image-translator\text_mask\__init__.py", line 8, in <module>
from .text_mask_utils import complete_mask_fill, filter_masks, complete_mask
File "G:\GitHub\image-translator\manga-image-translator\text_mask\text_mask_utils.py", line 94, in <module>
from pydensecrf.utils import compute_unary, unary_from_softmax
ModuleNotFoundError: No module named 'pydensecrf'

Asking for advice on improving results for size-chart images

Hello.

I tried translating a few size-chart images (Simplified to Traditional Chinese), but the results are still not good enough (in particular, the text inside tables easily ends up misplaced). Do you have any suggestions?

If the model's recognition ability needs strengthening, could you provide documentation to follow? I can collect a training set, train the model, and contribute the results back to this project.

Thanks

Original image:

After processing:

Can I use wider area while text rendering process?

Because reading text vertically is awkward, I modified text_render.py to always render text horizontally.
But since the original text area is too narrow, the result is still hard to read.
So I would like to know whether it is possible to modify the code to use a wider area during the text rendering process (and how).

Sorry for my poor English, and thanks in advance.

English

It would be helpful if you guys added your documentation in English too!

ONNX models

I've seen that comictextdetector.pt has been released in ONNX format.

Would it be possible to release the other models (OCR, detection, inpainting) in ONNX format as well?

Thanks
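
Until official ONNX releases exist, PyTorch checkpoints can often be exported with torch.onnx.export, though models with dynamic control flow may need extra work. A generic sketch with a placeholder model and shapes:

import torch

# Toy convolution standing in for a real model; shapes are placeholders.
model = torch.nn.Conv2d(3, 8, kernel_size=3)
dummy = torch.randn(1, 3, 256, 256)
torch.onnx.export(model, dummy, "model.onnx", opset_version=11,
                  input_names=["image"], output_names=["features"])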

src is not a numerical tuple

It's giving an error; here is the cmd log:

\manga-image-translator>python translate_demo.py --mode web --use-inpainting --verbose --translator=google --target-lang=ENG
Namespace(mode='web', image='', image_dst='', size=1536, use_inpainting=True, use_cuda=False, force_horizontal=False, inpainting_size=2048, unclip_ratio=2.3, box_threshold=0.7, text_threshold=0.5, text_mag_ratio=1, translator='google', target_lang='ENG', use_ctd=False, verbose=True)
 -- Loading models
 -- Running in web service mode
 -- Waiting for translation tasks
fail to initialize deepl :
auth_key must not be empty
switch to google translator
Serving up app on 127.0.0.1:5003
 -- Processing task eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal
 -- Detection resolution 1536
 -- Detector using default
 -- Render text direction is h
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to detection
 -- Running text detection
Detection resolution: 1280x1536
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to ocr
 -- Running OCR
0.8592585325241089 石伸,来自 fg: (38, 38, 41) bg: (38, 38, 41)
0.9998165369033813 莫非这留痕 fg: (46, 49, 55) bg: (45, 50, 55)
0.9551085233688354 远古的星空? fg: (66, 61, 68) bg: (67, 65, 69)
0.9972514510154724 邀游,可这黑色 fg: (68, 64, 73) bg: (68, 68, 76)
0.9320383667945862 么?即便是武帝 fg: (79, 73, 80) bg: (79, 73, 80)
0.9866024255752563 强者,也不敢说 fg: (67, 60, 69) bg: (66, 60, 69)
0.9399910569190979 石碑,竟然是来 fg: (63, 59, 66) bg: (63, 59, 66)
0.9985544681549072 宇宙星空中有什 fg: (78, 74, 81) bg: (77, 73, 78)
0.9888063073158264 自那么一个地方。 fg: (79, 72, 83) bg: (79, 72, 81)
 -- spliting {0, 1, 2}
to split [0, 1, 2]
edge_weights [25.019992006393608, 23.0]
std: 1.0099960031968038, mean: 24.009996003196804
 -- spliting {3, 4, 5, 6, 7, 8}
to split [3, 4, 5, 6, 7, 8]
edge_weights [26.1725046566048, 25.0, 24.0, 23.0, 22.0]
std: 1.463818630272171, mean: 24.03450093132096
region_indices [{0, 1, 2}, {3, 4, 5, 6, 7, 8}]
 -- Generating text mask
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to mask_generation
100%|███████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 287.99it/s]
 -- Translating
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to translating
translator google
target_language ENG
 -- Running inpainting
Task state eaab3f4452a83debc847ca8cb754d493e94717ba9052c4b194b4e974b4b9c136-M-google-ENG-default-horizontal to inpainting
Inpainting resolution: 800x1136
_GatheringFuture exception was never retrieved
future: <_GatheringFuture finished exception=error("OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'cvtColor'\n> Overload resolution failed:\n>  - src is not a numerical tuple\n>  - Expected Ptr<cv::UMat> for argument 'src'\n")>
Traceback (most recent call last):
  File "F:\xampp\htdocs\manga-rock\manga-image-translator\translate_demo.py", line 148, in infer
    cv2.imwrite(f'result/{task_id}/inpainted.png', cv2.cvtColor(img_inpainted, cv2.COLOR_RGB2BGR))
cv2.error: OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'cvtColor'
> Overload resolution failed:
>  - src is not a numerical tuple
>  - Expected Ptr<cv::UMat> for argument 'src'
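
The cvtColor overload error means img_inpainted was not an image array at all when the result was being saved, most plausibly None because the inpainting step produced nothing. A minimal guard sketch under that assumption (not a confirmed diagnosis of the actual cause):

import cv2
import numpy as np

img_inpainted = None  # e.g. when the inpainting step failed or was skipped

# cv2.cvtColor only accepts a numpy array; passing None triggers the
# "src is not a numerical tuple" overload error shown above.
if isinstance(img_inpainted, np.ndarray):
    cv2.imwrite("inpainted.png", cv2.cvtColor(img_inpainted, cv2.COLOR_RGB2BGR))
else:
    print("no inpainted image to save")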


ufunc 'right_shift' not supported for the input types

Trying this tool on Ubuntu 20.04.
I got the latest version from git.
I'm translating from Japanese to English.

Error:


switch to google translator
 -- Rendering translated text
すごいな…
It's amazing ...
137 609 37 180
Traceback (most recent call last):
  File "translate_demo.py", line 354, in <module>
    loop.run_until_complete(main(args.mode))
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "translate_demo.py", line 270, in main
    await infer(img, mode, '', alpha_ch = alpha_ch)
  File "translate_demo.py", line 204, in infer
    output = await dispatch_rendering(np.copy(img_inpainted), args.text_mag_ratio, translated_sentences, textlines, text_regions, render_text_direction_overwrite)
  File "/home/maks/manga-image-translator/text_rendering/__init__.py", line 53, in dispatch
    img_canvas = render(img_canvas, font_size, text_mag_ratio, trans_text, region, majority_dir, fg, bg, False)
  File "/home/maks/manga-image-translator/text_rendering/__init__.py", line 83, in render
    font_size_enlarged = findNextPowerOf2(font_size) * text_mag_ratio
  File "/home/maks/manga-image-translator/utils.py", line 454, in findNextPowerOf2
    n = n >> 1
TypeError: ufunc 'right_shift' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
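
The failing n = n >> 1 inside findNextPowerOf2 implies the font size reached it as a numpy float, for which the right-shift operator is undefined. A minimal reproduction and the obvious cast, assuming that diagnosis is correct:

import numpy as np

font_size = np.float64(33.0)  # sizes derived from numpy arrays arrive
                              # as numpy floats, not Python ints

# '>>' is undefined for floats, which is exactly the ufunc 'right_shift'
# error above; casting to a plain int first avoids it.
n = int(font_size)
print(n >> 1)  # 16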

Error when running locally: want to confirm whether it's the Great Firewall

I'm using the Google translator.
The command-line output shows the text to be translated has already been captured.
I'm already bypassing the firewall with a proxy and can ping both www.google.com and translate.google.com.

The error is below. I want to confirm whether this is still a firewall problem, since the trace ends with a connection error:

File "translate_demo.py", line 317, in main
await infer(img, 'demo', '', dst_image_name = dst_filename, alpha_ch = alpha_ch)
File "translate_demo.py", line 160, in infer
translated_sentences = await run_translation(args.translator, 'auto', args.target_lang, [r.text for r in text_regions])
File "E:\Microsoft6477\manga-image-translator\translators_init_.py", line 176, in dispatch
result = await GOOGLE_CLIENT.translate(concat_texts, tgt_lang, src_lang, *args, **kwargs)
File "E:\Microsoft6477\manga-image-translator\translators\google.py", line 194, in translate
data, response = await self._translate(text, dest, src)
File "E:\Microsoft6477\manga-image-translator\translators\google.py", line 120, in _translate
r = await self.client.post(url, params=params, data=data)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1374, in post
return await self.request(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1147, in request
response = await self.send(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1168, in send
response = await self.send_handling_redirects(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1195, in send_handling_redirects
response = await self.send_handling_auth(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1232, in send_handling_auth
response = await self.send_single_request(request, timeout)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpx_client.py", line 1264, in send_single_request
) = await transport.request(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_async\http_proxy.py", line 110, in request
return await self._tunnel_request(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_async\http_proxy.py", line 191, in _tunnel_request
proxy_response = await proxy_connection.request(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_async\connection.py", line 65, in request
self.socket = await self._open_socket(timeout)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_async\connection.py", line 85, in _open_socket
return await self.backend.open_tcp_stream(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_backends\auto.py", line 38, in open_tcp_stream
return await self.backend.open_tcp_stream(hostname, port, ssl_context, timeout)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_backends\asyncio.py", line 233, in open_tcp_stream
return SocketStream(
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\contextlib.py", line 131, in exit
self.gen.throw(type, value, traceback)
File "C:\Users\Micro\AppData\Local\Programs\Python\Python38\lib\site-packages\httpcore_exceptions.py", line 12, in map_exceptions
raise to_exc(exc) from None
httpcore._exceptions.ConnectError
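
The trace shows the request is already being tunneled through a proxy (httpcore\_async\http_proxy.py), and it is the TCP connection to that proxy that fails, so the question is less the firewall itself than which proxy settings the process sees. A quick environment check (prints whatever your shell exported):

import os

# httpx picks its proxy up from these variables by default.
for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    print(var, "=", os.environ.get(var))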

DeepL translation error

Hello,

Translating using DeepL only results in "error" as the text output (see attachment).

I'm running the free version of DeepL, but the Python library works when I try running it by hand.

Wrong clean region

First, thanks for the English optimizations; I hope this tool keeps getting better.
Anyway, I found that the tool tried to redraw the wrong region when cleaning the text.
This happened to me when I used --size 3000.

TypeError?


This problem was solved with '--size 1024'.
The recognition rate is low, and the same error occurs in some other images.
Is there a way to avoid this error without the "--size 1024" option?
I have attached the image I used.

[attached image: proseka]

Issue running, 'pydensecrf'

I am trying to use this and keep getting this error: ModuleNotFoundError: No module named 'pydensecrf'. I have tried installing Cython and the package itself, but nothing seems to work.

Error when running locally

After running, it shows:

usage: translate_demo.py [-h] [--mode MODE] [--image IMAGE]
[--image-dst IMAGE_DST] [--size SIZE]
[--use-inpainting] [--use-cuda] [--force-horizontal]
[--inpainting-size INPAINTING_SIZE]
[--unclip-ratio UNCLIP_RATIO]
[--box-threshold BOX_THRESHOLD]
[--text-threshold TEXT_THRESHOLD]
[--text-mag-ratio TEXT_MAG_RATIO]
[--translator TRANSLATOR] [--target-lang TARGET_LANG]
[--verbose]
translate_demo.py: error: unrecognized arguments: [--verbose] [--translator=google] [--target-lang=CHS]
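
The brackets echoed back in the error ([--verbose] and friends) show they were typed literally; in the usage string they only mark arguments as optional. The flags need to be passed without brackets, e.g. (the image path is a placeholder):

python translate_demo.py --verbose --translator=google --target-lang=CHS --image demo/test.jpg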
