
kumo's Introduction

Kumo

Kumo's goal is to create a powerful and user-friendly Word Cloud API in Java. Kumo directly generates an image file, without the need to create an applet as many other libraries do.

Please feel free to jump in and help improve Kumo! There are many places for performance optimization in Kumo!


Current Features

  • Draw Rectangle, Circle or Image Overlay word clouds. Image Overlay will draw words over all non-transparent pixels.
  • Linear, Square-Root Font Scalars. Fully extendable.
  • Variable Font Sizes.
  • Word Rotation. Just provide a Start Angle, End Angle, and number of slices.
  • Custom background color. Fully customizable backgrounds coming soon.
  • Word Padding.
  • Load Custom Color Palettes. Also supports color gradients.
  • Two collision and padding modes: PIXEL_PERFECT and RECTANGLE.
  • Polar Word Clouds. Draw two opposing word clouds in one image to easily compare/contrast data sets.
  • Layered Word Clouds. Overlay multiple word clouds.
  • Whitespace and Chinese word tokenizers. Fully extendable.
  • Frequency Analyzer to tokenize, filter and compute word counts.
  • Command Line Interface

CLI Install via Brew (NEW!)

brew install kumo

Available from Maven Central

<dependency>
    <groupId>com.kennycason</groupId>
    <artifactId>kumo-core</artifactId>
    <version>1.28</version>
</dependency>

Include kumo-tokenizers if you want Chinese tokenization. More languages to come.

<dependency>
    <groupId>com.kennycason</groupId>
    <artifactId>kumo-tokenizers</artifactId>
    <version>1.28</version>
</dependency>

Screenshots

Examples

Example to generate a Word Cloud on top of an image.

final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
frequencyAnalyzer.setWordFrequenciesToReturn(300);
frequencyAnalyzer.setMinWordLength(4);
frequencyAnalyzer.setStopWords(loadStopWords());

final List<WordFrequency> wordFrequencies = frequencyAnalyzer.load("text/datarank.txt");
final Dimension dimension = new Dimension(500, 312);
final WordCloud wordCloud = new WordCloud(dimension, CollisionMode.PIXEL_PERFECT);
wordCloud.setPadding(2);
wordCloud.setBackground(new PixelBoundryBackground("backgrounds/whale_small.png"));
wordCloud.setColorPalette(new ColorPalette(new Color(0x4055F1), new Color(0x408DF1), new Color(0x40AAF1), new Color(0x40C5F1), new Color(0x40D3F1), new Color(0xFFFFFF)));
wordCloud.setFontScalar(new LinearFontScalar(10, 40));
wordCloud.build(wordFrequencies);
wordCloud.writeToFile("kumo-core/output/whale_wordcloud_small.png");

Example to generate a circular Word Cloud.

final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
final List<WordFrequency> wordFrequencies = frequencyAnalyzer.load("text/my_text_file.txt");
final Dimension dimension = new Dimension(600, 600);
final WordCloud wordCloud = new WordCloud(dimension, CollisionMode.PIXEL_PERFECT);
wordCloud.setPadding(2);
wordCloud.setBackground(new CircleBackground(300));
wordCloud.setColorPalette(new ColorPalette(new Color(0x4055F1), new Color(0x408DF1), new Color(0x40AAF1), new Color(0x40C5F1), new Color(0x40D3F1), new Color(0xFFFFFF)));
wordCloud.setFontScalar(new SqrtFontScalar(10, 40));
wordCloud.build(wordFrequencies);
wordCloud.writeToFile("kumo-core/output/datarank_wordcloud_circle_sqrt_font.png");

Example to generate a rectangle Word Cloud

final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
final List<WordFrequency> wordFrequencies = frequencyAnalyzer.load("text/my_text_file.txt");
final Dimension dimension = new Dimension(600, 600);
final WordCloud wordCloud = new WordCloud(dimension, CollisionMode.RECTANGLE);
wordCloud.setPadding(0);
wordCloud.setBackground(new RectangleBackground(dimension));
wordCloud.setColorPalette(new ColorPalette(Color.RED, Color.GREEN, Color.YELLOW, Color.BLUE));
wordCloud.setFontScalar(new LinearFontScalar(10, 40));
wordCloud.build(wordFrequencies);
wordCloud.writeToFile("kumo-core/output/wordcloud_rectangle.png");

Example using Linear Color Gradients

final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
frequencyAnalyzer.setWordFrequenciesToReturn(500);
frequencyAnalyzer.setMinWordLength(4); 
final List<WordFrequency> wordFrequencies = frequencyAnalyzer.load("text/my_text_file.txt");
final Dimension dimension = new Dimension(600, 600);
final WordCloud wordCloud = new WordCloud(dimension, CollisionMode.PIXEL_PERFECT);
wordCloud.setPadding(2);
wordCloud.setBackground(new CircleBackground(300));
// colors, followed by the number of gradient steps between each pair
wordCloud.setColorPalette(new LinearGradientColorPalette(Color.RED, Color.BLUE, Color.GREEN, 30, 30));
wordCloud.setFontScalar( new SqrtFontScalar(10, 40));
wordCloud.build(wordFrequencies);
wordCloud.writeToFile("kumo-core/output/wordcloud_gradient_redbluegreen.png");

Example of tokenizing chinese text into a circle

final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
frequencyAnalyzer.setWordFrequenciesToReturn(600);
frequencyAnalyzer.setMinWordLength(2);
frequencyAnalyzer.setWordTokenizer(new ChineseWordTokenizer());

final List<WordFrequency> wordFrequencies = frequencyAnalyzer.load("text/chinese_language.txt");
final Dimension dimension = new Dimension(600, 600);
final WordCloud wordCloud = new WordCloud(dimension, CollisionMode.PIXEL_PERFECT);
wordCloud.setPadding(2);
wordCloud.setBackground(new CircleBackground(300));
wordCloud.setColorPalette(new ColorPalette(new Color(0xD5CFFA), new Color(0xBBB1FA), new Color(0x9A8CF5), new Color(0x806EF5)));
wordCloud.setFontScalar(new SqrtFontScalar(12, 45));
wordCloud.build(wordFrequencies);
wordCloud.writeToFile("kumo-core/output/chinese_language_circle.png");

Create a polarity word cloud to contrast two datasets

final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
frequencyAnalyzer.setWordFrequenciesToReturn(750);
frequencyAnalyzer.setMinWordLength(4);
frequencyAnalyzer.setStopWords(loadStopWords());

final List<WordFrequency> wordFrequencies = frequencyAnalyzer.load("text/new_york_positive.txt");
final List<WordFrequency> wordFrequencies2 = frequencyAnalyzer.load("text/new_york_negative.txt");
final Dimension dimension = new Dimension(600, 600);
final PolarWordCloud wordCloud = new PolarWordCloud(dimension, CollisionMode.PIXEL_PERFECT, PolarBlendMode.BLUR);
wordCloud.setPadding(2);
wordCloud.setBackground(new CircleBackground(300));
wordCloud.setFontScalar(new SqrtFontScalar(10, 40));
wordCloud.build(wordFrequencies, wordFrequencies2);
wordCloud.writeToFile("kumo-core/output/polar_newyork_circle_blur_sqrt_font.png");

Create a Layered Word Cloud from two images/two word sets

final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
frequencyAnalyzer.setWordFrequenciesToReturn(300);
frequencyAnalyzer.setMinWordLength(5);
frequencyAnalyzer.setStopWords(loadStopWords());

final List<WordFrequency> wordFrequencies = frequencyAnalyzer.load("text/new_york_positive.txt");
final List<WordFrequency> wordFrequencies2 = frequencyAnalyzer.load("text/new_york_negative.txt");
final Dimension dimension = new Dimension(600, 386);
final LayeredWordCloud layeredWordCloud = new LayeredWordCloud(2, dimension, CollisionMode.PIXEL_PERFECT);

layeredWordCloud.setPadding(0, 1);
layeredWordCloud.setPadding(1, 1);

layeredWordCloud.setFontOptions(0, new KumoFont("LICENSE PLATE", FontWeight.BOLD));
layeredWordCloud.setFontOptions(1, new KumoFont("Comic Sans MS", FontWeight.BOLD));

layeredWordCloud.setBackground(0, new PixelBoundryBackground("backgrounds/cloud_bg.bmp"));
layeredWordCloud.setBackground(1, new PixelBoundryBackground("backgrounds/cloud_fg.bmp"));

layeredWordCloud.setColorPalette(0, new ColorPalette(new Color(0xABEDFF), new Color(0x82E4FF), new Color(0x55D6FA)));
layeredWordCloud.setColorPalette(1, new ColorPalette(new Color(0xFFFFFF), new Color(0xDCDDDE), new Color(0xCCCCCC)));

layeredWordCloud.setFontScalar(0, new SqrtFontScalar(10, 40));
layeredWordCloud.setFontScalar(1, new SqrtFontScalar(10, 40));

layeredWordCloud.build(0, wordFrequencies);
layeredWordCloud.build(1, wordFrequencies2);
layeredWordCloud.writeToFile("kumo-core/output/layered_word_cloud.png");

Create a ParallelLayeredWordCloud using 4 distinct rectangles.
Every rectangle is processed in a separate thread, which significantly reduces build time. NOTE: this will eventually be handled natively, along with better internal data structures.

final Dimension dimension = new Dimension(2000, 2000);
ParallelLayeredWordCloud parallelLayeredWordCloud = new ParallelLayeredWordCloud(4, dimension, CollisionMode.PIXEL_PERFECT);

// Setup parts for word clouds
final Normalizer[] NORMALIZERS = new Normalizer[] { 
    new UpperCaseNormalizer(), 
    new LowerCaseNormalizer(),
    new BubbleTextNormalizer(),
    new StringToHexNormalizer() 
};
final Font[] FONTS = new Font[] {
    new Font("Lucida Sans", Font.PLAIN, 10),
    new Font("Comic Sans", Font.PLAIN, 10),
    new Font("Yu Gothic Light", Font.PLAIN, 10),
    new Font("Meiryo", Font.PLAIN, 10)
};
final List<List<WordFrequency>> listOfWordFrequencies = new ArrayList<>();
final Point[] positions = new Point[] { new Point(0, 0), new Point(0, 1000), new Point(1000, 0), new Point(1000, 1000) };
final Color[] colors = new Color[] { Color.RED, Color.WHITE, new Color(0x008080)/* TEAL */, Color.GREEN };

// set up word clouds
for (int i = 0; i < parallelLayeredWordCloud.getLayers(); i++) {
    final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
    frequencyAnalyzer.setMinWordLength(3);
    frequencyAnalyzer.setNormalizer(NORMALIZERS[i]);
    frequencyAnalyzer.setWordFrequenciesToReturn(1000);
    listOfWordFrequencies.add(frequencyAnalyzer.load("text/english_tide.txt"));

    final WordCloud wordCloud = parallelLayeredWordCloud.getAt(i);
    wordCloud.setAngleGenerator(new AngleGenerator(0));
    wordCloud.setPadding(3);
    wordCloud.setWordStartStrategy(new CenterWordStart());
    wordCloud.setKumoFont(new KumoFont(FONTS[i]));
    wordCloud.setColorPalette(new ColorPalette(colors[i]));

    wordCloud.setBackground(new RectangleBackground(positions[i], dimension));
    wordCloud.setFontScalar(new LinearFontScalar(10, 40));
}

// start building
for (int i = 0; i < parallelLayeredWordCloud.getLayers(); i++) {
    parallelLayeredWordCloud.build(i, listOfWordFrequencies.get(i));
}

parallelLayeredWordCloud.writeToFile("parallelBubbleText.png");

Refer to JPanelDemo.java for an example integrating into a JPanel.

Word Frequency File / Analyzer

The most common way to generate word frequencies is to pass a String of text directly to FrequencyAnalyzer. The FrequencyAnalyzer contains many options to process and normalize input text.

Sometimes the word counts and frequencies are already known and a consumer would like to load them directly into Kumo. To do so, you can construct the List<WordFrequency> yourself, or you can load a text file containing frequency/word pairs. The FrequencyFileLoader can be used to load such files. The required format is:

100: frog
94: dog
43: cog
20: bog
3: fog
1: log
1: pog

Order does not matter as the FrequencyFileLoader will automatically sort the pairs.
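If the counts are already in memory, a minimal sketch of building the list programmatically follows. It assumes WordFrequency exposes a (word, frequency) constructor; verify against your Kumo version.

final List<WordFrequency> wordFrequencies = new ArrayList<>();
wordFrequencies.add(new WordFrequency("frog", 100));
wordFrequencies.add(new WordFrequency("dog", 94));
wordFrequencies.add(new WordFrequency("cog", 43));
// Alternatively, FrequencyFileLoader can parse a file in the format shown above.

final WordCloud wordCloud = new WordCloud(new Dimension(600, 600), CollisionMode.PIXEL_PERFECT);
wordCloud.setBackground(new CircleBackground(300));
wordCloud.setFontScalar(new SqrtFontScalar(10, 40));
wordCloud.build(wordFrequencies);
wordCloud.writeToFile("output/manual_frequencies.png");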

Tokenizers

Tokenizers are the code that splits a sentence/text into a list of words. Currently only two tokenizers are built into Kumo. To add your own, just create a class that implements the Tokenizer interface and call FrequencyAnalyzer.setTokenizer() or FrequencyAnalyzer.addTokenizer(); a sketch of a custom tokenizer follows the list below.

Tokenizer
WhiteSpaceWordTokenizer
ChineseWordTokenizer
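As a rough sketch of what a custom tokenizer might look like. The interface name and the List<String> tokenize(String) signature below are assumptions; check the tokenizer type shipped with your Kumo version.

// A hedged sketch: split on commas as well as whitespace.
// Assumes the tokenizer interface exposes: List<String> tokenize(String text)
public class CommaAwareTokenizer implements WordTokenizer {
    @Override
    public List<String> tokenize(final String text) {
        return Arrays.asList(text.split("[,\\s]+"));
    }
}

// Registration, using the setter shown in the Chinese example above:
final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
frequencyAnalyzer.setWordTokenizer(new CommaAwareTokenizer());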

Filters

After tokenization, filters are applied to each word to determine whether or not it should be omitted from the word list.

To set or add a filter, call FrequencyAnalyzer.setFilter() or FrequencyAnalyzer.addFilter(); a sketch of a custom filter follows the table below.

Filter Description
UrlFilter: removes words that are URLs.
CompositeFilter: a wrapper around a collection of filters.
StopWordFilter: used internally; the FrequencyAnalyzer makes this filter easy to use via FrequencyAnalyzer.setStopWords().
WordSizeFilter: used internally; the FrequencyAnalyzer makes this filter easy to use via FrequencyAnalyzer.setMinWordLength() and FrequencyAnalyzer.setMaxWordLength().
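As a sketch of a custom filter, under the assumption that Kumo's Filter type is essentially a String predicate with a boolean test(String word) method; depending on the version it may be an interface or an abstract class, so adjust implements/extends accordingly.

// A hedged sketch: drop any word that contains a digit.
public class NoDigitsFilter implements Filter {
    @Override
    public boolean test(final String word) {
        return !word.matches(".*\\d.*"); // true keeps the word, false drops it
    }
}

final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
frequencyAnalyzer.addFilter(new NoDigitsFilter());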

Normalizers

After word tokenization and filtering have occurred, you can further transform each word via a normalizer. The default behavior is effectively lowerCase(characterStripping(trimToEmpty(word))), and the normalizer is fittingly named DefaultNormalizer.

To set or add a normalizer, call FrequencyAnalyzer.setNormalizer() or FrequencyAnalyzer.addNormalizer(); a sketch of a custom normalizer follows the table below.

Normalizer Description
CharacterStrippingNormalizer: constructed with a Pattern; replaces all matched occurrences with a configurable 'replaceWith' string.
LowerCaseNormalizer: converts all text to lowercase.
UpperCaseNormalizer: converts all text to uppercase.
TrimToEmptyNormalizer: trims whitespace from the text, returning an empty string if the input is null.
UpsideDownNormalizer: converts A-Z, a-z, 0-9 characters to upside-down variants.
StringToHexNormalizer: converts each character to its hex value and concatenates them.
DefaultNormalizer: combines the TrimToEmptyNormalizer, CharacterStrippingNormalizer, and LowerCaseNormalizer.
BubbleTextNormalizer: replaces A-Z, a-z with characters enclosed in bubbles ⓐ-ⓩ, Ⓐ-Ⓩ (requires a supporting font).
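As a sketch of a custom normalizer, assuming the Normalizer type exposes a String normalize(String text) method; verify against your Kumo version.

// A hedged sketch: strip '#' so hashtags merge with their plain-word counterparts.
public class StripHashtagNormalizer implements Normalizer {
    @Override
    public String normalize(final String text) {
        return text.replace("#", "");
    }
}

final FrequencyAnalyzer frequencyAnalyzer = new FrequencyAnalyzer();
frequencyAnalyzer.addNormalizer(new StripHashtagNormalizer());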

Command Line Interface (CLI)

Kumo can now be accessed via CLI. It is not quite as flexible as the programmatic interface yet but should support most of the common needs.

The CLI Documentation can be found here.

The below examples assume you have the jar installed under the name of "kumo". To install via Brew run the following command.

brew install https://raw.githubusercontent.com/kennycason/kumo/master/script/kumo.rb

Examples:

Create a standard word cloud.

kumo --input "https://en.wikipedia.org/wiki/Nintendo" --output "/tmp/wordcloud.png"

Create a standard word cloud excluding stop words.

kumo --input "https://en.wikipedia.org/wiki/Nintendo" --output "/tmp/wordcloud.png" --stop-words "nintendo,the"

Create a standard word cloud with a limited word count.

kumo --input "https://en.wikipedia.org/wiki/Nintendo" --output "/tmp/wordcloud.png" --word-count 10

Create a standard word cloud with a custom width and height.

kumo --input "https://en.wikipedia.org/wiki/Nintendo" --output "/tmp/wordcloud.png" --width 256 --height 256

Create a standard word cloud with custom font configuration.

kumo --input "https://en.wikipedia.org/wiki/Nintendo" --output "/tmp/wordcloud.png" --font-scalar sqrt --font-type Impact --font-weight plain --font-size-min 4 --font-size-max 60

Create a standard word cloud with a custom shape.

kumo --input "https://en.wikipedia.org/wiki/Nintendo" --output "/tmp/wordcloud.png" --width 990 --height 618 --background "https://raw.githubusercontent.com/kennycason/kumo/master/src/test/resources/backgrounds/whale.png"

Create a standard word cloud with a custom color palette.

kumo --input "https://en.wikipedia.org/wiki/Nintendo" --output "/tmp/wordcloud.png" --color "(255,0,0),(0,255,0),(0,0,255)"
kumo --input "https://en.wikipedia.org/wiki/Nintendo" --output "/tmp/wordcloud.png" --color "(0xffffff),(0xcccccc),(0x999999),(0x666666),(0x333333)"

Create a standard word cloud using a Chinese tokenizer

kumo --input "https://zh.wikipedia.org/wiki/%E4%BB%BB%E5%A4%A9%E5%A0%82" --output "/tmp/wordcloud.png" --tokenizer chinese

Create a polar word cloud

kumo --input "https://en.wikipedia.org/wiki/Nintendo,https://en.wikipedia.org/wiki/PlayStation" --output "/tmp/nintendo_vs_playstation.png" --type polar --color "(0x00ff00),(0x00dd00),(0x007700)|(0xff0000),(0xdd0000),(0x770000)"

Create a layered word cloud

kumo --input "https://www.haskell.org/, https://en.wikipedia.org/wiki/Haskell_(programming_language)" --output "/tmp/nintendo_vs_playstation.png" --type layered --background "https://raw.githubusercontent.com/kennycason/kumo/master/src/test/resources/backgrounds/haskell_1.bmp,https://raw.githubusercontent.com/kennycason/kumo/master/src/test/resources/backgrounds/haskell_2.bmp" --color "(0xFA6C07),(0xFF7614),(0xFF8936)|(0x080706),(0x3B3029),(0x47362A)"

Contributing

My primary IDE of choice is IntelliJ, due to its robust tooling and code analysis/inspections. If using IntelliJ IDEA, I recommend importing KumoIntelliJInspections.xml. I am also considering adding Checkstyle support.

New tests that write images should write images out to kumo-core/output_test/ instead of kumo-core/output/ which is now used for images to showcase Kumo.

kumo's People

Contributors

clause, danlangford, dararara, daxunyu, jnkhunter, jonathanarns, kennycason, mdirkse, rj93, rzo1, seeyn, sudo-jaa, thibstars, wolfposd, zapodot


kumo's Issues

too long words are not rendered

Hi,
I use Kumo to create a cloud of all the flickr tags a user has used. flickr "normalizes" tags by removing the spaces between the words of a tag, so the resulting word can be very long.
Now, if I define too large a max font size, for example:
wordCloud.setFontScalar(new LinearFontScalar(10, 150));
a long tag with a high frequency produces a rectangle too big to fit in the given cloud size.
In that case the tag is simply not rendered in the resulting cloud, without any message.

Is it possible to auto-downsize the max font size in such a case? Or at least throw a WordTooLongForCloudSizeException or something similar?

here is an example cloud of my used flickr tags.

  • nik
    ps: Btw, Kumo is great!

Use Guava HashMultiset

Guava's HashMultiset class would make it much faster to preprocess text. I'd suggest converting the raw tokens from languagetool to a HashMultiset before any further processing, and using the entrySet() method to process each distinct token only once during normalization, filtering etc.

Include import statements in README

Thanks for your project. I was trying to build a wrapper around this Java library in Clojure, since there were no native word cloud solutions in Clojure. The README examples were missing the import statements, which made me search the source for the relevant classes. Adding import statements to the README examples would be great, so that developers can copy, paste, and run the examples easily.

Please point me to links if any that I might be missing.

Thanks

Emoji Support

I've been trying to implement this myself to help you out, but have been struggling. Emoji support would be great, have been attempting to create a word cloud of all my conversations with my other half, to be printed on a canvas. We use emojis a lot, and to see their frequency turn up too would be great.

where is your getInputStream method

Hi, I see many of your examples using the getInputStream method, but the method is not in your code. Do you have a demo.java that can run some examples? Thanks
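As the issue notes, getInputStream is not shipped with Kumo; it was just a convenience helper in older examples. A hypothetical equivalent might be:

// Hypothetical helper, not a Kumo API.
private static InputStream getInputStream(final String path) throws IOException {
    return new FileInputStream(path);
    // or, for classpath resources:
    // return SomeClass.class.getClassLoader().getResourceAsStream(path);
}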

buildWordFrequencies counts frequencies wrongly.

for (final String word : words) {
    final String normalized = normalize(word);
    if (!wordFrequencies.containsKey(normalized)) {
        wordFrequencies.put(normalized, 1);
    }
    wordFrequencies.put(normalized, wordFrequencies.get(normalized) + 1);
}

Since there is no else, the first occurrence of each word is counted twice (put to 1, then immediately incremented to 2).
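A minimal sketch of the corrected counting logic, reusing the names from the snippet above:

// Count each occurrence exactly once, whether or not the word has been seen before.
for (final String word : words) {
    final String normalized = normalize(word);
    wordFrequencies.put(normalized, wordFrequencies.getOrDefault(normalized, 0) + 1);
}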

Use R or Quad tree for faster word placement

This will speed up placement code by removing unnecessary collision detections. It will not help the fact that the WordCloud.place function has a more and more difficult time placing words as the cloud fills up.

Non-English characters are not represented properly

Characters commonly used in languages like Spanish or French are not stored or shown properly because of the use of a deprecated method in IOUtils.
The fix is rather simple; just change the following lines in nlp/FrequencyAnalizer.java:

public List<WordFrequency> load(InputStream fileInputStream) throws IOException {
    return load(IOUtils.readLines(fileInputStream));
}

to

public List<WordFrequency> load(InputStream fileInputStream) throws IOException {
    return load(IOUtils.readLines(fileInputStream, "utf-8"));
}

or add an overloaded method which lets you choose your encoding.

Image to Word Cloud Issue

Hello,

I am trying to convert an image to word-cloud (like your whale.png).
But I am getting the following result. What can be the problem?
(attached: source image and resulting word cloud)
Best
Onder

Empty string appear in word frequency list causing exception

Hi,
I am hitting this exception:

java.lang.IllegalArgumentException: Width (0) and height (25) cannot be <= 0
at java.awt.image.DirectColorModel.createCompatibleWritableRaster(DirectColorModel.java:1016)
at java.awt.image.BufferedImage.<init>(BufferedImage.java:340)
at wordcloud.Word.<init>(Word.java:39)
at wordcloud.WordCloud.buildWord(WordCloud.java:260)
at wordcloud.WordCloud.buildwords(WordCloud.java:247)
at wordcloud.WordCloud.build(WordCloud.java:107)

I have the empty string in the stop word set and also a minimum word length of 3, but apparently the empty string is still in the word frequency list. I need to remove that entry from the list by hand.

License Status

Hey Kenny,

Love this library and look forward to using it. One question - what license are you operating this under?

Thanks!

Maven build failure due to encoding

I am trying to build this on Windows 7, but I am getting several errors due to the encoding of the system (Cp1252).

The first of which is:

INFO  wordcloud.nlp.tokenizer.TestChineseWordTokenizer - Õ, ╝, ╣, Ú, ?, ô, Õ, », ╝, Õ, ╝, ╣
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.829 sec <<< FAILURE!
test(wordcloud.nlp.tokenizer.TestChineseWordTokenizer)  Time elapsed: 2.727 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<12>

and the second (when building ignoring tests):

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.10.2:jar (attach-javadocs) on project kumo: MavenReportException: Error while creating archive:
[ERROR] Exit code: 1 - C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\BubbleTextNormalizer.java:4: error: unmappable character for encoding Cp1252
[ERROR] * Replaces the characters a-zA-Z with their bubble pendants â??-â?©â?¶-â??
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\BubbleTextNormalizer.java:4: error: unmappable character for encoding Cp1252
[ERROR] * Replaces the characters a-zA-Z with their bubble pendants â??-â?©â?¶-â??
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\BubbleTextNormalizer.java:12: error: unmappable character for encoding Cp1252
[ERROR] private static String bubbles = "â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â? â?¡â?¢â?£â?¤â?¥â?¦â?§â?¨â?©â?¶â?·â?¸â?¹â?ºâ?»â?¼â?½â?¾â?¿â??â??â??â?ƒâ??â??â??â??â??â??â??â??â??â??â??â??";
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\BubbleTextNormalizer.java:12: error: unmappable character for encoding Cp1252
[ERROR] private static String bubbles = "â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â? â?¡â?¢â?£â?¤â?¥â?¦â?§â?¨â?©â?¶â?·â?¸â?¹â?ºâ?»â?¼â?½â?¾â?¿â??â??â??â?ƒâ??â??â??â??â??â??â??â??â??â??â??â??";
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\BubbleTextNormalizer.java:12: error: unmappable character for encoding Cp1252
[ERROR] private static String bubbles = "â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â? â?¡â?¢â?£â?¤â?¥â?¦â?§â?¨â?©â?¶â?·â?¸â?¹â?ºâ?»â?¼â?½â?¾â?¿â??â??â??â?ƒâ??â??â??â??â??â??â??â??â??â??â??â??";
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\BubbleTextNormalizer.java:12: error: unmappable character for encoding Cp1252
[ERROR] private static String bubbles = "â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â? â?¡â?¢â?£â?¤â?¥â?¦â?§â?¨â?©â?¶â?·â?¸â?¹â?ºâ?»â?¼â?½â?¾â?¿â??â??â??â?ƒâ??â??â??â??â??â??â??â??â??â??â??â??";
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\BubbleTextNormalizer.java:12: error: unmappable character for encoding Cp1252
[ERROR] private static String bubbles = "â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â??â? â?¡â?¢â?£â?¤â?¥â?¦â?§â?¨â?©â?¶â?·â?¸â?¹â?ºâ?»â?¼â?½â?¾â?¿â??â??â??â?ƒâ??â??â??â??â??â??â??â??â??â??â??â??";
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\UpsideDownNormalizer.java:9: error: unmappable character for encoding Cp1252
[ERROR] private static final String split  = "É?qÉ?pÇ?É?bɥıظÊ?×?ɯuodbɹsÊ?nÊ?Ê?xÊ?zâ?¾'Ø?Ë?¿¡/\\," + "â??qϽá?¡Æ?â?²ÆƒHIÅ¿Ê?Ë¥WNOÔ?á½?á´?Sâ?¥â?©Î?MXÊ?Z" + "0Æ?á??Æ?ã?£Ï?9ã?¥86";
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\UpsideDownNormalizer.java:9: error: unmappable character for encoding Cp1252
[ERROR] private static final String split  = "É?qÉ?pÇ?É?bɥıظÊ?×?ɯuodbɹsÊ?nÊ?Ê?xÊ?zâ?¾'Ø?Ë?¿¡/\\," + "â??qϽá?¡Æ?â?²ÆƒHIÅ¿Ê?Ë¥WNOÔ?á½?á´?Sâ?¥â?©Î?MXÊ?Z" + "0Æ?á??Æ?ã?£Ï?9ã?¥86";
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\UpsideDownNormalizer.java:9: error: unmappable character for encoding Cp1252
[ERROR] private static final String split  = "É?qÉ?pÇ?É?bɥıظÊ?×?ɯuodbɹsÊ?nÊ?Ê?xÊ?zâ?¾'Ø?Ë?¿¡/\\," + "â??qϽá?¡Æ?â?²ÆƒHIÅ¿Ê?Ë¥WNOÔ?á½?á´?Sâ?¥â?©Î?MXÊ?Z" + "0Æ?á??Æ?ã?£Ï?9ã?¥86";
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\normalize\UpsideDownNormalizer.java:9: error: unmappable character for encoding Cp1252
[ERROR] private static final String split  = "É?qÉ?pÇ?É?bɥıظÊ?×?ɯuodbɹsÊ?nÊ?Ê?xÊ?zâ?¾'Ø?Ë?¿¡/\\," + "â??qϽá?¡Æ?â?²ÆƒHIÅ¿Ê?Ë¥WNOÔ?á½?á´?Sâ?¥â?©Î?MXÊ?Z" + "0Æ?á??Æ?ã?£Ï?9ã?¥86";
[ERROR] ^
[ERROR] C:\Users\Richard Jones\Downloads\kumo-master\src\main\java\wordcloud\nlp\tokenizer\ChineseWordTokenizer.java:20: error: unmappable character for encoding Cp1252
[ERROR] for(String rawToken : rawTokens) {   // parse parts-of-speech tags away (��/n, ��/p, ��/n, �/ng, 使/v, ��/vn)
[ERROR] ^

[ERROR]
[ERROR] Command line was: "C:\Program Files\Java\jdk1.8.0_45\jre\..\bin\javadoc.exe" "-J-Dhttp.nonProxyHosts=\"localhost\"" @options @packages
[ERROR]
[ERROR] Refer to the generated Javadoc files in 'C:\Users\Richard Jones\Downloads\kumo-master\target\apidocs' dir.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

Unrecognizable Chinese characters on output image

Hi,

When I was trying to produce a wordcloud based on a chinese txt file, the output image only contains blocks instead of correct Chinese words.

I've tried encoding the input file as "UTF-8" and as "GBK". Neither encoding fixed the problem.

I'm using the sample code from the Github page with Kumo-1.4.jar to produce Chinese circle wordcloud through Eclipse.

Here is my code piece:

 public static void chineseCloud() {
    try {
        final FrequencyAnalizer frequencyAnalizer = new FrequencyAnalizer();
        frequencyAnalizer.setWordFrequencesToReturn(600);
        frequencyAnalizer.setMinWordLength(2);
        frequencyAnalizer.setWordTokenizer(new ChineseWordTokenizer());

        File initialFile = new File("D:/Clavis/TopicMining/Dataset/chinese.txt");
        final List<WordFrequency> wordFrequencies = frequencyAnalizer.load(new FileInputStream(initialFile));
        final WordCloud wordCloud = new WordCloud(600, 600, CollisionMode.PIXEL_PERFECT);

        wordCloud.setPadding(2);
        wordCloud.setBackground(new CircleBackground(300));
        wordCloud.setColorPalette(new ColorPalette(new Color(0xD5CFFA), new Color(0xBBB1FA), new Color(0x9A8CF5),
                new Color(0x806EF5)));
        wordCloud.setFontScalar(new SqrtFontScalar(12, 45));
        wordCloud.build(wordFrequencies);
        wordCloud.writeToFile("D:/Clavis/TopicMining/Dataset/chinese_language_circle.jpg");
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Thanks,

George

Examples don't work: BuildWordFrequences is not known

Hello Kenny,

I stumbled upon your project and decided to build the jar file, which went successfully. But I wasn't able to get any of the examples or the test files to work. BuildWordFrequences() seems to cause issues, as the compiler can't find a reference to it.

If you have any pointers on how to resolve this issue, I'd be very grateful.

Kind Regards,

Mirabis

set log4j binding to scope=test

Hello,
the usage of slf4j is nice for making the software logger-agnostic. But the binding to log4j destroys that agnosticism.

Please set the scope of log4j bindings to "test". I have done this for you at pull request #61.

(Many thanks for your nice software.)

Add kumo to central maven repo

It would be great if you could publish Kumo to a central Maven repo. I'm just waiting for that to start using this wonderful library at work :)

export svg

Hi,

Is there a way to write to svg (or any other vector graphic)?

Andreas

Can I set the circle background as variable ?

I have 500 text files of word frequencies, and I have to build a word cloud for each of them.
However, the text files vary in word count, and I want to build a word cloud that best fits each one's size. How can I do this smartly? Hope you can answer my foolish question, haha.

Apple Color Emoji Support

Hey again, was still having a couple of issues with some of the emojis when using the Open Sans Emoji font as you were. I wasn't sure if you saw this on the closed issue so I've opened another.

(screenshot attached)

Also, do you think it'd be possible to support Apple's default emoji collection with the Apple Color Emoji font? In a way so that they don't use the given colour scheme and just use the normal apple colours? Would be so fantastic to have the cute monkey and lion faces dotted around. Would genuinely like to lend a helping hand, I've a week of not much to do. If you were to point me in the right direction, I'm sure I could help out :)

Java Exception

Hi.
I am new to Java and Eclipse.
I tried to run your project and it's all fine - it does not have any bugs or missing libraries, but it does not have the getInputStream method. What does this function return if I have a text file of, say, tweets separated by newlines?

Right edge of text is cut off for some words

When generating word clouds, we are noticing that for some words the right hand edge is cut off.

(screenshot)

This doesn't happen to all the words - roughly 50% appear perfectly:

(screenshot)

And it can happen even when there is plenty of whitespace around the word:

(screenshot)

Have you seen this issue before and do you have any views on the cause?

Below are some more details:

  • the word cloud renders perfectly in our dev environment (MacOS 10.13)
  • we are only seeing this clipping issue in our test environment (Elastic Beanstalk Java).

We are using the built-in Sans Serif font:

private static final KumoFont WORD_CLOUD_KUMO_FONT = new KumoFont(new Font(Font.SANS_SERIF, Font.PLAIN, 12));

We are setting up the word cloud as follows:

final Dimension dimension = new Dimension(imageSize, imageSize);
final WordCloud wordCloud = new WordCloud(dimension, CollisionMode.RECTANGLE);
wordCloud.setPadding(0);
wordCloud.setBackground(new CircleBackground(imageSize / 2));
wordCloud.setBackgroundColor(Color.WHITE);
wordCloud.setColorPalette(new ColorPalette(style.getWordCloudColours()));
wordCloud.setFontScalar(new SqrtFontScalar(1, 40));
wordCloud.setAngleGenerator(new AngleGenerator(0));
wordCloud.setWordPlacer(new RTreeWordPlacer());
wordCloud.setKumoFont(WORD_CLOUD_KUMO_FONT);

Any thoughts on the cause would be received with gratitude and happy to provide further details as needed.

about default font "Comic Sans"

I deployed my app on CentOS 6, but the Chinese characters cannot be displayed in the picture generated by Kumo.

I used the following code to make sure that my Linux box has the font "Comic Sans":
GraphicsEnvironment.getLocalGraphicsEnvironment().getAvailableFontFamilyNames();

So I tried some other fonts (SongTi, AR PL UMing CN, ...); they worked well.
I guess there is something wrong with "Comic Sans".


Btw: after some searching, I found a website: http://bancomicsans.com/main/

setMinWordLength gives an exception

Hey, nice library.

I loaded some text into it and set frequencyAnalyzer.setMinWordLength(2); for words such as "ok" and "gg", but this gives me an exception:

java.lang.IllegalArgumentException: Width (0) and height (14) cannot be <= 0
    at java.awt.image.DirectColorModel.createCompatibleWritableRaster(DirectColorModel.java:1016)
    at java.awt.image.BufferedImage.<init>(BufferedImage.java:340)
    at wordcloud.Word.<init>(Word.java:39)
    at wordcloud.WordCloud.buildWord(WordCloud.java:260)
    at wordcloud.WordCloud.buildwords(WordCloud.java:247)
    at wordcloud.WordCloud.build(WordCloud.java:107)
    at CloudCreator.main(CloudCreator.java:56)

where line 56 is wordCloud.build(wordFrequencies);

Default Normalizers not working

I am using the latest 1.13 release.

The FrequencyAnalyzer default constructor adds the following normalizers:

public FrequencyAnalyzer() {
        this.normalizers.add(new TrimToEmptyNormalizer());
        this.normalizers.add(new CharacterStrippingNormalizer());
        this.normalizers.add(new LowerCaseNormalizer());
    }

And this seems correct, but it does not work properly. It leaves whitespace, so the trim is not working correctly for some reason. Here is the log file.
Notice the first line: the most frequent entry is just whitespace.
Also notice how many times the word "crack" appears below, with and without trailing spaces.

2018-07-19 09:42:44,639 [main] INFO  com.kennycason.kumo.WordCloud - placed:    (1/300)
2018-07-19 09:42:44,642 [main] INFO  com.kennycason.kumo.WordCloud - placed: the (2/300)
2018-07-19 09:42:44,643 [main] INFO  com.kennycason.kumo.WordCloud - placed: music (3/300)
2018-07-19 09:42:44,644 [main] INFO  com.kennycason.kumo.WordCloud - placed: and (4/300)
2018-07-19 09:42:44,644 [main] INFO  com.kennycason.kumo.WordCloud - placed: user (5/300)
2018-07-19 09:42:44,645 [main] INFO  com.kennycason.kumo.WordCloud - placed:  crack (6/300)
2018-07-19 09:42:44,646 [main] INFO  com.kennycason.kumo.WordCloud - placed: this (7/300)
2018-07-19 09:42:44,646 [main] INFO  com.kennycason.kumo.WordCloud - placed: you (8/300)
2018-07-19 09:42:44,647 [main] INFO  com.kennycason.kumo.WordCloud - placed: csdb (9/300)
2018-07-19 09:42:44,689 [main] INFO  com.kennycason.kumo.WordCloud - placed: comment (10/300)
2018-07-19 09:42:44,689 [main] INFO  com.kennycason.kumo.WordCloud - placed: submitted (11/300)
2018-07-19 09:42:44,689 [main] INFO  com.kennycason.kumo.WordCloud - placed: for (12/300)
2018-07-19 09:42:44,690 [main] INFO  com.kennycason.kumo.WordCloud - placed: graphics (13/300)
2018-07-19 09:42:44,690 [main] INFO  com.kennycason.kumo.WordCloud - placed: scene (14/300)
2018-07-19 09:42:44,691 [main] INFO  com.kennycason.kumo.WordCloud - placed: demo (15/300)
2018-07-19 09:42:44,702 [main] INFO  com.kennycason.kumo.WordCloud - placed: crack   (16/300)
2018-07-19 09:42:44,702 [main] INFO  com.kennycason.kumo.WordCloud - placed: c64 (17/300)
2018-07-19 09:42:44,702 [main] INFO  com.kennycason.kumo.WordCloud - placed: crack (18/300)
2018-07-19 09:42:44,710 [main] INFO  com.kennycason.kumo.WordCloud - placed: demo   (19/300)
2018-07-19 09:42:44,711 [main] INFO  com.kennycason.kumo.WordCloud - placed: can (20/300)
2018-07-19 09:42:44,713 [main] INFO  com.kennycason.kumo.WordCloud - placed: made (21/300)
2018-07-19 09:42:44,714 [main] INFO  com.kennycason.kumo.WordCloud - placed: commodore (22/300)
2018-07-19 09:42:44,714 [main] INFO  com.kennycason.kumo.WordCloud - placed: find (23/300)
2018-07-19 09:42:44,715 [main] INFO  com.kennycason.kumo.WordCloud - placed: all (24/300)
2018-07-19 09:42:44,719 [main] INFO  com.kennycason.kumo.WordCloud - placed: one-file (25/300)
2018-07-19 09:42:44,721 [main] INFO  com.kennycason.kumo.WordCloud - placed: intro (26/300)
2018-07-19 09:42:44,721 [main] INFO  com.kennycason.kumo.WordCloud - placed: 1990 (27/300)
2018-07-19 09:42:44,723 [main] INFO  com.kennycason.kumo.WordCloud - placed: about (28/300)
2018-07-19 09:42:44,723 [main] INFO  com.kennycason.kumo.WordCloud - placed: out (29/300)

Feature Request: Scale/Resize background images to WordCloud dimensions

I would like to use an image as a background on a wordcloud of different sizes / dimensions without creating an individual image for each size.

For example, I would like to use the whale.png image (990x618 pixels) on a word cloud of 4000x4000 pixels, and would need to scale the whale image roughly 4 times (in both dimensions, to keep the proportions).

I couldn't find a way to do this using the setBackground method or the PixelBoundryBackground Class. It would be great if you could implement a method to add a scale/resize factor or absolute dimensions.
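Until such a feature exists, one hedged workaround is to scale the image with plain java.awt/javax.imageio before handing the path to PixelBoundryBackground. File names and the scale factor below are illustrative, and wordCloud is assumed to be set up as in the earlier examples.

// Needs: java.awt.Graphics2D, java.awt.image.BufferedImage, java.io.File, javax.imageio.ImageIO
final BufferedImage source = ImageIO.read(new File("backgrounds/whale.png")); // e.g. 990x618
final int scale = 4;
final BufferedImage scaled = new BufferedImage(
        source.getWidth() * scale, source.getHeight() * scale, BufferedImage.TYPE_INT_ARGB);
final Graphics2D graphics = scaled.createGraphics();
graphics.drawImage(source, 0, 0, scaled.getWidth(), scaled.getHeight(), null);
graphics.dispose();

final File scaledFile = File.createTempFile("whale_scaled", ".png");
ImageIO.write(scaled, "png", scaledFile);

// Use the scaled copy as the word cloud background.
wordCloud.setBackground(new PixelBoundryBackground(scaledFile.getAbsolutePath()));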

Example is crashing with java.lang.VerifyError overrides final method visit

I'm trying to run the "rectangle" example (number 3) from:
https://github.com/kennycason/kumo
but I get:

Exception in thread "main" java.lang.VerifyError: class net.sf.cglib.core.DebuggingClassWriter overrides final method visit.(IILjava/lang/String;Ljava/lang/String;Ljava/lang/String;[Ljava/lang/String;)V
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at net.sf.cglib.core.AbstractClassGenerator.<init>(AbstractClassGenerator.java:38)
        at net.sf.cglib.core.KeyFactory$Generator.<init>(KeyFactory.java:127)
        at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:112)
        at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
        at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
        at net.sf.cglib.proxy.Enhancer.<clinit>(Enhancer.java:69)
        at ch.lambdaj.proxy.ProxyUtil.createEnhancer(ProxyUtil.java:89)
        at ch.lambdaj.proxy.ProxyUtil.createProxy(ProxyUtil.java:49)
        at ch.lambdaj.function.argument.ArgumentsFactory.createPlaceholder(ArgumentsFactory.java:52)
        at ch.lambdaj.function.argument.ArgumentsFactory.registerNewArgument(ArgumentsFactory.java:45)
        at ch.lambdaj.function.argument.ArgumentsFactory.createArgument(ArgumentsFactory.java:39)
        at ch.lambdaj.function.argument.ArgumentsFactory.createArgument(ArgumentsFactory.java:31)
        at ch.lambdaj.Lambda.on(Lambda.java:44)
        at com.kennycason.kumo.WordCloud.maxFrequency(WordCloud.java:259)
        at com.kennycason.kumo.WordCloud.buildWords(WordCloud.java:228)
        at com.kennycason.kumo.WordCloud.build(WordCloud.java:90)
...

Examples don't work

So I built the .jar, added it to my project, and then did a:

import wordcloud.*;

However, none of your examples work with a copy and paste; it appears as though classes are missing or misspelled.

What I have is a list of words and their integer weights (the generation of the weights was a separate function). How can I pass those to Kumo and get back a circular or rectangular word cloud?

Thanks for your help!

Help on your code

Hi Kenny,

I’m working in NLP for French https://github.com/oeuvres/Alix
I’m providing WordClouds with morpho-syntactic infos (nouns, names, verbs…).
http://obvil.lip6.fr/alix/wordcloud.jsp?bibcode=hugo_miserables&frantext=on
This javascript lib provides a nice result, but it is client-side and does not allow caching.
I have started a java implementation with an HTML output.
https://github.com/oeuvres/Alix/blob/master/java/alix/viz/Cloud.java
Your result is far nicer, but your code does more than I need.
HTML rendering will not be pixel perfect compared to PNG but offers other advantages.
An HTML renderer needs only the collision result as a tuple (x, y, angle).

I will try to understand your code and find the central core; any advice is welcome.

Maven Build Failure

I'm failing on the build...

Tests run: 17, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 329.036 sec <<< FAILURE!
matchOnlineExample(wordcloud.TestWordCloud) Time elapsed: 0.004 sec <<< ERROR!
java.io.FileNotFoundException: \tmp\code.txt (The system cannot find the path specified)

Catch event when word is clicked on

Does kumo provide a mechanism to handle clicking on words in the cloud widget? Ideally, I'd like the user to be able to click on, say, "Apple" and be able to register a listener that gets the string "Apple".

Filters not working properly

I have set up the FrequencyAnalyzer with my set of stop words using frequencyAnalyzer.setStopWords(stopWords);. stopWords is a simple Set<String>.

After the word cloud is rendered, some words that are present in stopWords are included in the cloud image. I debugged the code and I can see that the StopWordFilter was properly initialized with my list.

The second problem is that WordSizeFilter is also not working properly. I'm using the default minWordLength (3), but my rendered cloud image contains words with 1 and 2 characters...

I didn't manage to get inside Lambda, but as far as I understood, Kumo delegates applying the filters to it (Lambda.filter(compositeFilter, words);).

Probably it has a problem, or I'm missing something. Please, could you help me? I attached the text file from which I'm trying to generate a cloud (it's in Portuguese).
profiler.txt

Is there
