
annotator's Introduction

Piximi

Piximi is a free, open source web app for performing image understanding tasks. It’s written by dozens of engineers and scientists from institutions like the Biological Research Centre Szeged, Broad Institute of MIT and Harvard, Chan Zuckerberg Initiative, ETH Zurich, and FIMM Helsinki.

Piximi's target users are computational or non-computational scientists interested in image analysis from fields like astronomy, biology, and medicine.

Try Piximi now at Piximi.app!

Development

Available Scripts

In the project directory, you can run:

yarn install && yarn prepare

Install all project dependencies.

yarn start

Runs the app in development mode.
Open http://localhost:3000 to view it in the browser.

yarn build

Builds the app for production to the build folder.

yarn test

Runs the tests. Note that tests using TensorFlow require the custom test environment:

yarn test --env=./src/utils/common/models/utils/custom-test-env.js

Docker

To run as a Docker container, clone the repo, build the image, and run it:

git clone https://github.com/piximi/piximi
docker build -t <image_name> piximi/
docker run -p 3000:3000 --name <container_name> <image_name>

If you encounter the message "The build failed because the process exited too early. This probably means the system ran out of memory or someone called `kill -9` on the process." and you are running Docker Desktop, you will need to increase the memory allocated to Docker. 8 GB of memory should be sufficient.

Alternatively, download the pre-built image and run it directly from Docker Hub:

docker run -p 3000:3000 --name piximi gnodar01/piximi:0.1.0

annotator's People

Contributors

0x00b1, alicelucas, bethac07, davidstirling, gnodar01, nasimj, pearlryder


Forkers

nasimj emberwhirl

annotator's Issues

Implement color adjustment for images

This includes:

  • Slider for varying brightness

  • Slider for varying contrast

  • Automatic intensity adjustment

  • Select different color for each channel in the image

  • Toggle individual channels on and off in color space

Edit a category

User should be allowed to:

  • Change the name of a category
  • Change the assigned color of a category

Horizontally center the image container

Currently the stage (and the image in it) is located in the top-left corner of the space between the two side bars. We want it to be centered horizontally.

It shouldn't be centered vertically; instead, we are thinking of a bit of padding at the top.

The image inside the image container should be centered.

Animate side bars

It would be nice to have the side bars animated. For example, when the user clicks on a new tool, the right side bar with the different options will slide in.

Create dataURL mask from user's selection

For each selected pixel, we would like to get the corresponding mask which will be used for storage and export purposes. Currently we have chosen our mask to be a dataURL, but we might change to a more compressed format later.
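As a sketch of the "more compressed format" alternative mentioned above (not the app's current implementation), a binary mask can be stored as a simple run-length encoding instead of a dataURL:

```typescript
// Hypothetical sketch: run-length encode a binary mask as alternating run
// lengths (false-run, true-run, false-run, ...), starting with a false run.
function encodeRLE(mask: boolean[]): number[] {
  const runs: number[] = [];
  let current = false;
  let length = 0;
  for (const bit of mask) {
    if (bit === current) {
      length++;
    } else {
      runs.push(length);
      current = bit;
      length = 1;
    }
  }
  runs.push(length);
  return runs;
}

// Inverse: expand the run lengths back into the boolean mask.
function decodeRLE(runs: number[]): boolean[] {
  const mask: boolean[] = [];
  let current = false;
  for (const run of runs) {
    for (let i = 0; i < run; i++) mask.push(current);
    current = !current;
  }
  return mask;
}
```

For large, mostly-empty masks this tends to be far smaller than a PNG dataURL, and it round-trips losslessly.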
Here is a non-exhaustive list of the tools currently integrated in our ImageViewer app; it will change as we incorporate more tools:

  • Rectangular Selection
  • Elliptical Selection
  • Lasso Selection
  • Polygonal Selection
  • Magnetic Selection
  • Object Selection

Map a clicked pixel location to its accurate position in a resized window

When resizing our annotation tool window, the resulting position is not mapped to the current location on the canvas. As a result, annotations are drawn with an offset.

In principle this should have been solved with our call to transform.point(), but it seems like a bug may have been introduced somewhere.
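For reference, the mapping itself is just the inverse of the stage's scale and translation. A minimal sketch, with hypothetical names standing in for whatever transform the stage actually applies:

```typescript
// Hypothetical stage transform: uniform scale plus a translation.
interface StageTransform {
  scale: number;
  offsetX: number;
  offsetY: number;
}

// Map a pointer position (relative to the stage element) back to image
// coordinates by undoing the translation, then the scale.
function toImageCoords(
  clientX: number,
  clientY: number,
  t: StageTransform
): { x: number; y: number } {
  return {
    x: (clientX - t.offsetX) / t.scale,
    y: (clientY - t.offsetY) / t.scale,
  };
}
```

If annotations land with a constant offset after a resize, the likely culprit is a stale offsetX/offsetY (or scale) being used here.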

Cursors

  • Rectangular, elliptical, lasso, and polygonal selection
  • Magnetic selection
  • Quick selection
  • Color selection
  • Object selection

Convert the contour of ObjectSelection to a sequence of ordered points

ObjectSelection currently has two problems:

  • The contour map has unwanted small holes. We'd like to do some processing on that to get a clean contour.
  • The coordinates that we have for the contour map are unordered. This results in lines that intersect and overlap each other to create a mess and cover the whole object.
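One simple way to address the second problem, sketched here (not the app's actual fix): order the points by repeatedly walking to the nearest unvisited point. This works for reasonably dense contours; a more robust fix would trace the boundary directly (e.g. Moore neighbor tracing).

```typescript
type Point = [number, number];

// Greedy nearest-neighbor ordering of an unordered contour point set.
function orderContour(points: Point[]): Point[] {
  if (points.length === 0) return [];
  const remaining = points.slice(1);
  const ordered: Point[] = [points[0]];
  while (remaining.length > 0) {
    const [cx, cy] = ordered[ordered.length - 1];
    let best = 0;
    let bestDist = Infinity;
    remaining.forEach(([x, y], i) => {
      const d = (x - cx) ** 2 + (y - cy) ** 2;
      if (d < bestDist) {
        bestDist = d;
        best = i;
      }
    });
    // Move the closest remaining point onto the ordered path.
    ordered.push(remaining.splice(best, 1)[0]);
  }
  return ordered;
}
```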

Create color selection component

  • On primary click should flood the image using the mouse position as the starting flood position
  • Flood tolerance should be configurable (i.e. the component should have a tolerance: number prop)

Note that the algorithm was already written in flood.ts.
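For illustration, a minimal tolerance-based flood on a grayscale image might look like the following. This is a sketch only; the actual algorithm lives in flood.ts, and the image layout here (row-major intensities) is an assumption.

```typescript
// Flood from (seedX, seedY): a pixel joins the flood if it is 4-connected to
// the flooded region and its intensity differs from the seed's by at most
// `tolerance`. Returns a boolean mask the same size as the image.
function flood(
  image: number[],
  width: number,
  height: number,
  seedX: number,
  seedY: number,
  tolerance: number
): boolean[] {
  const mask = new Array<boolean>(width * height).fill(false);
  const seed = image[seedY * width + seedX];
  const stack: Array<[number, number]> = [[seedX, seedY]];
  while (stack.length > 0) {
    const [x, y] = stack.pop()!;
    if (x < 0 || y < 0 || x >= width || y >= height) continue;
    const i = y * width + x;
    if (mask[i] || Math.abs(image[i] - seed) > tolerance) continue;
    mask[i] = true;
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
  return mask;
}
```

The tolerance prop on the component would simply be threaded through to this `tolerance` parameter.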

Fix category in JSON annotation export

Currently, when the user clicks on a selection to edit its category, all of the other selections are changed to that category as well.

As a result, the JSON export does not have the right category in it.

Zoom operation

  • Wheel in-out

  • Rectangular Selection

  • Click in-out

  • Reset to original image scale

  • Better typography for "Zoom in-Zoom out"

  • Dash stroke should not become more dense as zoomed in

  • "Automatically center" functionality

Permit overlapping objects while keeping the functionality on clicking on existing annotation

Suppose we have an instance on our image. Clicking on it currently causes the onMouseDown() event of the selected operator to be called (as opposed to showing the transformer around it).

This is why, when clicking on an instance in Lasso or Polygonal mode, we involuntarily enter drawing mode. (It also happens with the Rectangular/Ellipse tools, but we don't see it because we click and release immediately instead of dragging out the shape.)

@0x00b1 When the user clicks on the Stage, should we be checking whether the clicked (x, y) coordinates belong to one of the bounding boxes of our existing annotations? If yes, we should do an early return before entering the operator's onMouseDown?
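The early-return check proposed above could be sketched as follows (types hypothetical; the real annotation store may differ):

```typescript
// Axis-aligned bounding box of an existing annotation.
interface BoundingBox {
  x: number;
  y: number;
  width: number;
  height: number;
}

// True if the clicked point falls inside any existing annotation's bounding
// box, in which case the operator's onMouseDown should be skipped.
function hitsExistingAnnotation(
  x: number,
  y: number,
  boxes: BoundingBox[]
): boolean {
  return boxes.some(
    (b) => x >= b.x && x <= b.x + b.width && y >= b.y && y <= b.y + b.height
  );
}
```

The Stage's mouse-down handler would call this first and return early on a hit, showing the transformer instead of starting a new selection.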

Remove significant lag observed when drawing

Currently, drawing with any of the tools in ImageViewer's default view is very laggy. We suspect it might be due to a new instance of the tool class being created on each render.
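If that suspicion holds, one possible fix is to reuse tool instances across renders. Sketched here outside React with a plain cache (in the components this would typically be done with useMemo or useRef):

```typescript
// Cache tool instances by name so repeated lookups return the same object
// instead of constructing a new one on every render.
class ToolCache<T> {
  private cache = new Map<string, T>();

  get(name: string, create: () => T): T {
    let tool = this.cache.get(name);
    if (tool === undefined) {
      tool = create();
      this.cache.set(name, tool);
    }
    return tool;
  }
}
```

With the instance stable across renders, any per-instance state (partial contours, listeners) also stops being thrown away mid-draw.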

Annotation export should use the last updated category

When exporting, the category used in the export file is the one that was selected at the time the selection was made. The category for that annotation should instead be the one that was last assigned.

Convert the output of mask from Object Selection into Konva shape

  • Add a prop that allows user to select either rectangular or lasso selection.
  • Implement the rectangular and lasso selection functionality from the rectangular component and the lasso component
  • Crop the selection made with either the rectangular or lasso tools
  • Use TensorFlow.js to create a saliency map for the cropped region
    • the saliency map should be a binary mask with exactly one object
    • see example on GitHub
  • Create a boundary of the positive values of the saliency map and display this as the selection

Add sensory cues for the various selection modes

Add in the following visual cues for Add, Subtract, or Intersection modes:

  • On mouse down:

    • Cursor changes
    • Anchor box points of first selection are removed
  • On mouse move:

    • Lighter line opacity for any contour that will disappear
  • On mouse up:

    • Bring anchor points back
    • Marching ants

Fix PenSelection decoding/encoding of mask

There is a bug that happens somewhere in the encoding and decoding of a mask created by PenSelection.
Two screenshots ("encoded" and "decoded") show the mask that goes into the encoding function and the mask that comes out of it.

Fix Color Selection such that flooding is drawn at the right pixel location

This behavior is particularly clear when selecting the "Microscopy" image in our sample images. I have been able to replicate it sometimes with the "dog" picture.

Clicking and dragging to draw the tolerance map shows the resulting map at the far left side of the image instead of at the pixel location of the cursor. When releasing and pressing Enter, though, the mask does appear at the right location.

Implement mouse events in quick selection component

Implement the following:

  • Create a superpixel representation of the image (i.e. using a method like quickshift, k-means, or even compacted watershed)
  • onStart event, create a point and display the corresponding superpixel (e.g. as a yellow polygon with 50% opacity)
  • onMove event, create a point for each onMove position (i.e. like lasso selection) and display the corresponding superpixels (i.e. the superpixels that contain each point)
  • add a size prop that determines the relative size of the superpixels
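The lookup step in the onStart/onMove handlers above can be sketched as follows, assuming the superpixel algorithm (quickshift, k-means, compact watershed, ...) has already produced a label image with one label per pixel:

```typescript
// Given a row-major label image, return the pixel indices belonging to the
// superpixel under the cursor at (x, y). Displaying the superpixel then
// amounts to rendering these pixels (e.g. as a 50%-opacity yellow polygon).
function superpixelAt(
  labels: number[],
  width: number,
  x: number,
  y: number
): number[] {
  const target = labels[y * width + x];
  const members: number[] = [];
  labels.forEach((label, i) => {
    if (label === target) members.push(i);
  });
  return members;
}
```

The proposed size prop would feed into the superpixel algorithm itself (e.g. the number of segments), not into this lookup.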
