
stefann's Introduction

Getting Started  •   Training Networks  •   External Links  •   Citation  •   License



The official GitHub repository for the paper on STEFANN: Scene Text Editor using Font Adaptive Neural Network.


Getting Started

1. Installing Dependencies

Package      Source   Version   Tested (updated on April 14, 2020)
Python       Conda    3.7.7     ✔️
Pip          Conda    20.0.2    ✔️
Numpy        Conda    1.18.1    ✔️
Requests     Conda    2.23.0    ✔️
TensorFlow   Conda    2.1.0     ✔️
Keras        Conda    2.3.1     ✔️
Pillow       Conda    7.0.0     ✔️
Colorama     Conda    0.4.3     ✔️
OpenCV       PyPI     4.2.0     ✔️
PyQt5        PyPI     5.14.2    ✔️

💥 Quick installation

Step 1: Install Git and Conda package manager (Miniconda / Anaconda)

Step 2: Update and configure Conda

conda update conda
conda config --set env_prompt "({name}) "

Step 3: Clone this repository and change directory to repository root

git clone https://github.com/prasunroy/stefann.git
cd stefann

Step 4: Create an environment and install dependencies

On Linux and Windows

  • To create CPU environment: conda env create -f release/env_cpu.yml
  • To create GPU environment: conda env create -f release/env_gpu.yml

On macOS

  • To create CPU environment: conda env create -f release/env_osx.yml

💥 Quick test

Step 1: Download models and pretrained checkpoints into release/models directory

Step 2: Download sample images and extract into release/sample_images directory

stefann/
├── ...
├── release/
│   ├── models/
│   │   ├── colornet.json
│   │   ├── colornet_weights.h5
│   │   ├── fannet.json
│   │   └── fannet_weights.h5
│   ├── sample_images/
│   │   ├── 01.jpg
│   │   ├── 02.jpg
│   │   └── ...
│   └── ...
└── ...
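
With the checkpoints in place, the two networks can be restored from the JSON architecture files and HDF5 weights. Below is a minimal sketch using the Keras API, assuming the JSON files are standard Keras architecture exports; the paths follow the tree above.

from keras.models import model_from_json

# restore FANnet from its architecture definition and pretrained weights
with open('release/models/fannet.json') as f:
    fannet = model_from_json(f.read())
fannet.load_weights('release/models/fannet_weights.h5')

# restore Colornet the same way
with open('release/models/colornet.json') as f:
    colornet = model_from_json(f.read())
colornet.load_weights('release/models/colornet_weights.h5')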

Step 3: Activate environment

To activate CPU environment: conda activate stefann-cpu
To activate GPU environment: conda activate stefann-gpu

Step 4: Change directory to release and run STEFANN

cd release
python stefann.py

2. Editing Results 😆


Each image pair consists of the original image (Left) and the edited image (Right).


Training Networks

1. Downloading Datasets

Download datasets and extract the archives into datasets directory under repository root.

stefann/
├── ...
├── datasets/
│   ├── fannet/
│   │   ├── pairs/
│   │   ├── train/
│   │   └── valid/
│   └── colornet/
│       ├── test/
│       ├── train/
│       └── valid/
└── ...

📌 Description of datasets/fannet

This dataset is used to train FANnet and it consists of 3 directories: fannet/pairs, fannet/train and fannet/valid. The directories fannet/train and fannet/valid consist of 1015 and 300 sub-directories respectively, each corresponding to one specific font. Each font directory contains 64x64 grayscale images of 62 English alphanumeric characters (10 numerals + 26 upper-case letters + 26 lower-case letters). The filename format is xx.jpg where xx is the ASCII value of the corresponding character (e.g. "48.jpg" implies an image of character "0"). The directory fannet/pairs contains 50 image pairs, each corresponding to a random font from fannet/valid. Each image pair is horizontally concatenated to a dimension of 128x64. The filename format is id_xx_yy.jpg where id is the image identifier, xx and yy are the ASCII values of source and target characters respectively (e.g. "00_65_66.jpg" implies a transformation from source character "A" to target character "B" for the image with identifier "00").
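
As an illustration of this naming scheme, here is a small hypothetical helper that maps a character to its image file and loads the 64x64 glyph; glyph_path and the chosen font directory name are illustrative, not part of the repository.

import os
from PIL import Image

def glyph_path(font_dir, char):
    # the filename is the ASCII value of the character, e.g. '0' -> 48.jpg
    return os.path.join(font_dir, '{}.jpg'.format(ord(char)))

# load the glyph of character '0' from one training font (directory name is illustrative)
img = Image.open(glyph_path('datasets/fannet/train/some_font', '0')).convert('L')
assert img.size == (64, 64)  # 64x64 grayscale glyph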

📌 Description of datasets/colornet

This dataset is used to train Colornet and it consists of 3 directories: colornet/test, colornet/train and colornet/valid. Each directory consists of 5 sub-directories: _color_filters, _mask_pairs, input_color, input_mask and output_color. The directory _color_filters contains synthetically generated color filters of dimension 64x64 including both solid and gradient colors. The directory _mask_pairs contains a set of 64x64 grayscale image pairs selected at random from 1315 available fonts in datasets/fannet. Each image pair is horizontally concatenated to a dimension of 128x64. For colornet/train and colornet/valid each color filter is applied on each mask pair. This results in 64x64 image triplets of color source image, binary target image and color target image in input_color, input_mask and output_color directories respectively. For colornet/test one color filter is applied only on one mask pair to generate similar image triplets. With a fixed set of 100 mask pairs, 80000 colornet/train and 20000 colornet/valid samples are generated from 800 and 200 color filters respectively. With another set of 50 mask pairs, 50 colornet/test samples are generated from 50 color filters.
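
As an illustration of how one such triplet can be formed, the sketch below applies a 64x64 color filter to a 128x64 mask pair by masking the filter with each half. This is a plausible reconstruction of the generation step described above, with hypothetical file paths; it is not the authors' generation script.

import numpy as np
from PIL import Image

filt = np.array(Image.open('path/to/color_filter.jpg'))            # 64x64 solid or gradient color filter
pair = np.array(Image.open('path/to/mask_pair.jpg').convert('L'))  # 128x64 concatenated mask pair -> shape (64, 128)

# split the pair into source and target 64x64 masks
src_mask, tgt_mask = pair[:, :64], pair[:, 64:]

# color source image: the filter visible only where the source mask is set
input_color = (filt * (src_mask[..., None] / 255.0)).astype(np.uint8)
# binary target image: the target character mask itself
input_mask = tgt_mask
# color target image: the same filter applied to the target mask
output_color = (filt * (tgt_mask[..., None] / 255.0)).astype(np.uint8)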

2. Training FANnet and Colornet

Step 1: Activate environment

To activate CPU environment: conda activate stefann-cpu
To activate GPU environment: conda activate stefann-gpu

Step 2: Change directory to project root

cd stefann

Step 3: Configure and train FANnet

To configure training options, edit the configurations section (lines 40-72) of fannet.py
To start training: python fannet.py

☁️ Check this notebook hosted at Kaggle for an interactive demonstration of FANnet.

Step 4: Configure and train Colornet

To configure training options, edit the configurations section (lines 38-65) of colornet.py
To start training: python colornet.py

☁️ Check this notebook hosted at Kaggle for an interactive demonstration of Colornet.

External Links

Project  •   Paper  •   Supplementary Materials  •   Datasets  •   Models  •   Sample Images


Citation

@InProceedings{Roy_2020_CVPR,
  title     = {STEFANN: Scene Text Editor using Font Adaptive Neural Network},
  author    = {Roy, Prasun and Bhattacharya, Saumik and Ghosh, Subhankar and Pal, Umapada},
  booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}

License

Copyright 2020 by the authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Made with ❤️ and 🍕 on Earth.


stefann's Issues

Making word longer?

Now I'm playing around with STEFANN a bit. Your readme was really helpful here! But I guess there are more knobs and dials to it than I expected. I was still wondering: is there, e.g., a way to make the word you're replacing longer? Also, your examples seem way more crisp than my quick shot at it. Got any tips?

(Attached screenshots: the original "backspace" image and the edited result.)

integrity failure

When I run the code, it reports the error "integrity failure".

ValueError: No gradients provided for any variable

Hello, prasunroy. Thank you for the amazing work. I tried to train FANnet by running python stefann/fannet.py following your code, but got ValueError: No gradients provided for any variable and training stopped.
Could you give me a solution to fix this bug?


Performance of colornet

Hello, I noticed that in the stefann.py file you have provided three methods for style transfer: transfer_color_pal, transfer_color_max, and the colornet implementation. I uncommented lines 480-487 in stefann.py to use the colornet model. When I tested the application on images from the 'sample_images' folder using the colornet method with the provided pretrained weights, it did not work very well and produced blurry and inconsistent results. The results of colornet did not match the results given in the 'editing_examples' folder.

(Attached: the given result from 'editing_examples' (left: original image, right: STEFANN-generated image) and the noticeably blurrier output produced by colornet for the same sample.)

There are many more cases where the colornet model is not performing as expected. Could you please help me with this?

SSIM calculation in fannet

I appreciate your excellent work on text editing. I tried to run FANnet with the pretrained model on your datasets, so I downloaded the pretrained weights from here and the datasets from here following the README.

To generate results using the valid set as the input, I modified fannet.py and ran the following code

import cv2
from skimage.metrics import structural_similarity as ssim

# iterate over validation batches: x = source glyphs, onehot = target-character encodings, y = ground-truth glyphs
for data in valid_datagen.flow():
    [x, onehot], y = data
    out = fannet.predict([x, onehot])
    n = x.shape[0]
    for i in range(n):
        _x = x[i].reshape(64, 64)
        _gt = y[i].reshape(64, 64)
        _out = out[i].reshape(64, 64)
        # binarize the prediction with Otsu thresholding before comparison
        _, _out_bin = cv2.threshold(_out, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        sv = ssim(_gt, _out_bin, data_range=255, gaussian_weights=True, sigma=1.5, use_sample_covariance=False)
        print(sv)

But the SSIM value sv is far from what was claimed in the paper. I also tried to calculate the average SSIM w.r.t. different source characters, and there is also a large gap. So I am wondering whether there is some mistake in how I ran the model or in the SSIM calculation itself.

About character replacement

Thank you very much for your contribution in this field.
I still have some questions about the character replacement part. As you said in the paper, when training FANnet and Colornet the input character size is 64x64, but when performing character replacement in a real scene some characters are much larger than 64x64 (for example, 128x128). How did you handle this situation? Do you directly resize the generated characters to 128x128, or is there some other way?
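
For reference, the "directly resize" option mentioned above could look like the following sketch: downscale the source crop to the 64x64 network input, generate the target character, and upscale it back to the original bounding-box size. The file path, interpolation choices, and the generated_64 placeholder are illustrative assumptions, not the authors' confirmed approach.

import cv2

# crop of the source character at its original size in the scene, e.g. 128x128 (path is hypothetical)
src_crop = cv2.imread('path/to/source_char.png', cv2.IMREAD_GRAYSCALE)
h, w = src_crop.shape[:2]

# downscale to the 64x64 input size used during training
src_64 = cv2.resize(src_crop, (64, 64), interpolation=cv2.INTER_AREA)

# ... run FANnet / Colornet on src_64 to obtain the generated 64x64 character ...
generated_64 = src_64  # placeholder for the generated output

# upscale the generated character back to the original bounding-box size
generated_full = cv2.resize(generated_64, (w, h), interpolation=cv2.INTER_CUBIC)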

How to use the tool

After I run the command python stefann.py and open an image, what are the steps to replace the characters in that image?

How to edit numbers?

Hi,

Thanks for sharing your awesome work! I am wondering if you have a pretrained model to edit numbers? Thanks.

h5py version (integrity failure)

I followed the installation instructions, but kept getting the following message.

[DEBUG] Loading application... integrity failure

There was no problem with my CUDA settings, and TensorFlow was picking up my GPU just fine, so I tried taking stuff out of the try/except blocks and found out that Keras was failing while loading the models.

original_keras_version = f.attrs['keras_version'].decode('utf8')

AttributeError: 'str' object has no attribute 'decode'

Googled it, and found that it works with h5py version 2.10.0.
I think it would be nice if the h5py version were also pinned in the conda environment files.

Thank you anyways for your awesome work.
I'll try it out now since my bug is fixed.
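
For context, the usual explanation for this traceback is that h5py 2.x returns HDF5 string attributes as bytes while h5py 3.x returns them as str, so the .decode('utf8') call inside Keras fails on newer h5py. Below is a minimal check of the checkpoint attributes, written defensively for either behaviour; it is an illustrative sketch, not code from this repository.

import h5py

# inspect the version attribute stored in the pretrained weights file
with h5py.File('release/models/fannet_weights.h5', 'r') as f:
    raw = f.attrs.get('keras_version')
    # bytes on h5py 2.x, str on h5py >= 3, None if the attribute is absent
    keras_version = raw.decode('utf8') if isinstance(raw, bytes) else raw
    print(keras_version)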

Colab

Can anyone make a Colab or easy-to-run version please? 🙏

Can't get it to run due to 'integrity failure'.

Hi,

I was trying out your project but I'm stuck installing dependencies. I tried the conda installation method but it keeps failing (tried cpu, gpu and osx yamls).
[DEBUG] Loading application... integrity failure

I also think the yaml file is faulty here:

- tensorflow=2.1.0

and here:

- tensorflow-gpu=2.1.0

I guess it should look more like the OSX version, with a double == and tensorflow listed under pip.

- python
- requests
- pip:
  - tensorflow==2.1.0
  - opencv-python==4.1.0.*
  - opencv-contrib-python-headless

However even when I edit the env_cpu.yaml that way it seems to fail.

It could be related to this issue: tensorflow/tensorflow#37316
but when I tried tensorflow>=2.1.0 it also failed.

Any idea how to solve this? Or is there an alternative way to install it? A docker container maybe?

Questions about the input of FANnet.

Hello, after reading your paper, I have some questions about the input of FANnet.
In the paper, one of the inputs to FANnet is "a one-hot encoding v of length 26 of the target character", where 26 represents the 26 capital letters. Have you ever tried to change 26 to 62 (upper-case letters + lower-case letters + numerals)? Since characters of the same font theoretically share some commonalities, the model should also be able to complete the font style transfer for them. I am interested in this question. Do you have any comments or suggestions?
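
For reference, the length-26 one-hot encoding described in the paper can be built as in the sketch below; extending it to 62 classes would only change the alphabet string. This is an illustrative snippet, not code taken from the repository.

import numpy as np

ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'  # the 26 upper-case target classes used in the paper

def one_hot(char, alphabet=ALPHABET):
    """One-hot encoding v of the target character over the chosen alphabet."""
    v = np.zeros(len(alphabet), dtype=np.float32)
    v[alphabet.index(char.upper())] = 1.0
    return v

# a 62-class variant would instead use '0123456789' + upper-case + lower-case letters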

Support for Chinese Language

Hi, thanks for your hard work on this project. It's really cool!
I've seen issue #7 but I still have some doubts. I would like to try to replace English text with its corresponding Chinese translation, but how can I do so if characters are stored in JPG files named by their ASCII values? Chinese is not included in ASCII.
Another question regarding Chinese: do I also need to generate new images, one for each character, in the colornet directory?
Your help would be much appreciated!
