helblazer811 / ManimML
ManimML is a project focused on providing animations and visualizations of common machine learning concepts with the Manim Community Library.
License: MIT License
I want to add the following features:
Hi, when I was trying to run the maxpooling example in 0.0.24 it gave me an AttributeError:
convolutional_2d_to_max_pooling_2d.py:170 in make_forward_pass_animation │
│ ❱ 170 │ │ │ │ axis=manim_ml.three_d_config.rotation_axis, │
I think it should be axis=manim_ml.config.three_d_config.rotation_axis.
By the way, is it possible to specify the run_time of each layer? Sometimes the ImageLayer plays very fast but the Convolutional2DLayer plays too slowly.
Allow for passing layers as a dictionary, as follows:
nn = NeuralNetwork({
        "image_layer": ImageLayer(numpy_image, height=1.5),
        "conv_1_layer": Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),
        "feed_forward_1_layer": FeedForwardLayer(3),
    },
    layer_spacing=0.25,
)
The main purpose of this is so that we can add operations later on that reference the layers by name, like adding a residual connection with nn.add_connection("image_layer", "feed_forward_1_layer").
Manim has the ability to represent animations as classes. These classes can take parameters and they can wrap around a passed Mobject (in our case this could be a neural network) and then a mobject specific animation is run. You can override such animations like Create (see neural_network.py). It would be interesting to make a generic TrainModel animation that is implemented by various models and emulates the training procedure.
For now, I could implement these animations as functions in the manim_ml/neural_network/animations directory.
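A minimal sketch of what such a function-based version could look like, assuming it simply replays the existing forward pass animation once per "epoch" (the name make_training_animation and the epoch looping are illustrative only, not part of the current API):

from manim import Succession

def make_training_animation(neural_network, num_epochs=3, run_time_per_epoch=2):
    # Emulate a training loop by replaying the forward pass once per "epoch"
    epochs = [
        neural_network.make_forward_pass_animation(run_time=run_time_per_epoch)
        for _ in range(num_epochs)
    ]
    # Play the epochs back to back
    return Succession(*epochs)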
At some point, code was committed that changed the behaviour of the net when scaling it.
If I use the code in the pip package, everything works fine (0.0.11 seems to contain only code prior to the 7th of May).
https://user-images.githubusercontent.com/54776552/198372984-f704cceb-8582-4bf9-bc23-c15ebb836b34.mp4
However, I'm forking the repo (with the latest commit from August) because I need to change some internal code, and I noticed this problem.
Maybe someone can pinpoint the exact commit which causes this behaviour?
Code used:
from manim import *
from manim_ml.neural_network.layers import Convolutional3DLayer
from manim_ml.neural_network.neural_network import NeuralNetwork

class Test(Scene):
    def construct(self):
        # Make the Layer objects
        l1 = Convolutional3DLayer(4, 2, 2)
        l2 = Convolutional3DLayer(5, 1, 1)
        l3 = Convolutional3DLayer(2, 3, 3)
        layers = [l1, l2, l3]
        nn = NeuralNetwork(layers)
        nn.scale(2)
        nn.move_to(ORIGIN)
        # Make the animation
        self.add(nn)
        # self.play(Create(nn))
        forward_propagation_animation = nn.make_forward_pass_animation(
            run_time=5, passing_flash=True
        )
        self.play(forward_propagation_animation)
During certain tasks, Dropout should not be applied to the terminating layer of the neural network; however, I wasn't able to find a way to prevent it here, since all FeedForwardLayers are treated the same.
Could there be an argument like:
make_neural_network_dropout_animation(
    nn, dropout_rate=0.75, do_forward_pass=True, last_layer_stable=True
)
I would also like to explore the same for the first feed-forward layer, as well as seeding the probability distribution.
It would be cool to have an animation where the camera flies along with the forward pass animation. This would be especially valuable for visualizing very long architectures.
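A rough sketch of the idea, assuming a MovingCameraScene whose camera frame is zoomed and panned toward the output side of the network while the forward pass plays (the class name and the sweep target are illustrative only):

from manim import MovingCameraScene
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer

class FlyThroughForwardPass(MovingCameraScene):
    def construct(self):
        nn = NeuralNetwork([FeedForwardLayer(3), FeedForwardLayer(5), FeedForwardLayer(3)])
        self.add(nn)
        forward_pass = nn.make_forward_pass_animation()
        # Sweep the camera frame toward the output side while the pass plays
        camera_sweep = self.camera.frame.animate.scale(0.5).move_to(nn.get_right())
        self.play(forward_pass, camera_sweep, run_time=5)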
The code from the MaxPooling2DLayer example returns an AttributeError. The same happens for /examples/cnn/cnn_max_pool.py.
Using manim==0.17.2 and manim-ml==0.0.16.
from manim import *
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, MaxPooling2DLayer

class BasicScene(ThreeDScene):
    def construct(self):
        nn = NeuralNetwork(
            [
                Convolutional2DLayer(1, 8),
                Convolutional2DLayer(3, 6, 3),
                MaxPooling2DLayer(
                    kernel_size=2,
                ),
                Convolutional2DLayer(5, 2, 2),
            ],
            layer_spacing=0.25,
        )
        nn.move_to(ORIGIN)
        self.add(nn)
        forward_pass = nn.make_forward_pass_animation()
        self.wait(1)
        self.play(forward_pass)
Stack trace:
convolutional_2d_to_convolutional_2d.py:309 in __init__ │
│ │
│ 306 │ │ self.num_output_feature_maps = self.output_layer.num_feature_maps │
│ 307 │ │ self.cell_width = self.output_layer.cell_width │
│ 308 │ │ self.stride = self.output_layer.stride │
│ ❱ 309 │ │ self.padding = self.input_layer.padding │
│ 310 │ │ self.filter_opacity = filter_opacity │
│ 311 │ │ self.cell_width = cell_width │
│ 312 │ │ self.line_color = line_color
mobject.py:660 │
│ in __getattr__ │
│ │
│ 657 │ │ │ return types.MethodType(setter, self) │
│ 658 │ │ │
│ 659 │ │ # Unhandled attribute, therefore error │
│ ❱ 660 │ │ raise AttributeError(f"{type(self).__name__} object has no attribute '{attr} │
│ 661 │ │
│ 662 │ @property │
│ 663 │ def width(self):
AttributeError: MaxPooling2DLayer object has no attribute 'padding'
I want to set up NumPy-style documentation for the library to work with Jekyll documentation generation. I should also add a doc style check to the CI pipeline.
Is it possible to change the color of the nodes and the background of the neural network?
It is useful to have tests that compare a reference video and a generated video frame by frame. They do this in the manim.community library. I want to set this type of testing up for our systems.
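A minimal sketch of the comparison logic, assuming the reference and newly rendered videos already exist on disk and are decoded with imageio (this is only an illustration, not the manim.community test harness itself):

import itertools
import imageio.v3 as iio
import numpy as np

def videos_match(reference_path, candidate_path, tolerance=1e-3):
    # Return True if the two videos agree frame by frame within a tolerance
    ref_frames = iio.imiter(reference_path)
    new_frames = iio.imiter(candidate_path)
    for ref, new in itertools.zip_longest(ref_frames, new_frames):
        if ref is None or new is None or ref.shape != new.shape:
            return False  # differing frame count or resolution
        # Mean absolute per-pixel difference, normalized to [0, 1]
        diff = np.abs(ref.astype(float) - new.astype(float)).mean() / 255.0
        if diff > tolerance:
            return False
    return True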
I have been unable to get manim_ml to properly install under the Anaconda distribution. Any advice?
I want to make visualization systems for visualizing diffusion models.
I tried to run the example code below:
from manim import *
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer

class NeuralNetworkScene(Scene):
    def construct(self):
        layers = [FeedForwardLayer(3), FeedForwardLayer(5), FeedForwardLayer(3)]
        nn = NeuralNetwork(layers)
        self.add(nn)
        nn.move_to(ORIGIN)
        # nn.scale(2)
        forward_propagation_animation = nn.make_forward_pass_animation(
            run_time=5, passing_flash=True
        )
        self.play(forward_propagation_animation)
And everything is fine.
But when I uncomment nn.scale(2), this happened:
It appears that the connections between neurons are misplaced when scaling.
Missing manim_ml.diffusion.random_walk, a dependency of the diffusion process example.
I would like to add support for residual connections (ResNet) to the neural network rendering system.
It would be nice to visualize the final layer of a CNN being flattened when entering into a FeedForwardLayer.
I want to make a visualization of how MCMC works.
After installing manim_ml in my Anaconda env, I noticed that the import in the example
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
didn't work. After digging into the site packages and locating why the rest of the imports do work, I found that it was because the file is called convolutional2d.py and not convolutional_2d.py, making this import in the example not work. It seems the pip build has a different name for this file.
I checked by reinstalling the latest build from pip on 26.01.2023.
I would like to add support for visualizing decision tree classifiers.
Hello,
is it possible to install ManimML in the Manim Community Docker image and use it in the Jupyter notebook?
It would be nice if I could import a bunch of layers from one namespace file instead of from each of the files individually; this would shorten the length of the imports at the beginning of files. An example is sketched below.
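For example, something along these lines (illustrative only; the exact set of names exported from the package is up for discussion):

from manim_ml.neural_network import (
    NeuralNetwork,
    FeedForwardLayer,
    Convolutional2DLayer,
    MaxPooling2DLayer,
    ImageLayer,
)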
When I pip install manim_ml, it doesn't include any of the examples in the README. It also doesn't have many of the modules you'd expect; for example, manim_ml.neural_networks doesn't exist. As a workaround, I've manually installed the dependencies and added a clone of the latest commit to my Python path. However, it would be nice to be able to install it via pip.
I want to make visualization systems for visualizing transformers, specifically self-attention. It would be nice if it worked for Vision Transformers as well as Language Models.
Thank you for sharing your work! @helblazer811
The second line of the "First Neural Network" example,
from manim_ml.neural_network import Convolutional2DLayer, FeedForwardLayer, NeuralNetwork
should be changed to
from manim_ml.neural_network.layers import Convolutional2DLayer, FeedForwardLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
I hope this project gets better and better.
Hello, I've got an issue with Image to Convolutional 2D. I took the CNN example and changed the NeuralNetwork to:
nn = NeuralNetwork(
    [
        ImageLayer(numpy_image, height=1.5),
        Convolutional2DLayer(1, 16, 3, activation_function="ReLU"),
        # Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
        # Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
        # FeedForwardLayer(3),
        # FeedForwardLayer(3),
    ],
    layer_spacing=0.25,
)
When I run the code, the image rotation/scaling is somehow wrong. Maybe you guys know what the problem could be.
It would be nice to allow a user to pass a neural network into another neural network and have it be treated as a layer when constructing animations or renderings. It could look something like:
nested_network = NeuralNetwork([
    ImageLayer(),
    NeuralNetwork([
        Convolutional3DLayer(),
        Convolutional3DLayer(),
    ]),
    FeedForwardLayer(),
])
I want to animate an image being split into its RGB channels as it is given as input to the CNN architecture.
I want to visualize activation functions. I can envision two different ways of doing this.
I am in favor of showing the activation function above existing layers because it will not interfere with existing visualizations like FeedForwardToFeedForward and Convolutional3DToConvolutional3D.
A possible way to allow for this would be to add an optional string parameter to layers like FeedForwardLayer and Convolutional3DLayer.
nn = NeuralNetwork([
        ImageLayer(numpy_image, height=1.5),
        Convolutional3DLayer(1, 7, 7, 3, 3, filter_spacing=0.32, activation="relu"),
        Convolutional3DLayer(3, 5, 5, 3, 3, filter_spacing=0.32, activation="relu"),
        FeedForwardLayer(3, activation="sigmoid"),
    ],
    layer_spacing=0.25,
)
Another way of doing it could be to pass a function or callable object (the callable object could also have a name) instead of a string, which would allow for custom activation functions.
def step_function(input):
    return 0 if input < 0 else 1

nn = NeuralNetwork([
        ImageLayer(numpy_image, height=1.5),
        Convolutional3DLayer(1, 7, 7, 3, 3, filter_spacing=0.32, activation=step_function),
        Convolutional3DLayer(3, 5, 5, 3, 3, filter_spacing=0.32, activation="relu"),
        FeedForwardLayer(3, activation="sigmoid"),
    ],
    layer_spacing=0.25,
)
A small coordinate frame with a function visualization can be shown above the layer that is being "activated" and that small function can be highlighted whenever there is a forward pass.
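A rough sketch of that idea, assuming the plot is just a small Axes with the activation curve placed above a layer mobject (make_activation_plot is an illustrative helper name, not part of the current API):

from manim import Axes, VGroup, BLUE, UP

def make_activation_plot(layer_mobject, function=lambda x: max(0.0, x)):
    # Small coordinate frame with the activation curve, ReLU by default
    axes = Axes(
        x_range=[-1, 1, 1],
        y_range=[0, 1, 1],
        x_length=0.8,
        y_length=0.5,
        tips=False,
    )
    curve = axes.plot(function, color=BLUE)
    plot = VGroup(axes, curve)
    # Position the plot just above the layer it "activates"
    plot.next_to(layer_mobject, UP, buff=0.1)
    return plot

During a forward pass, the returned plot could then be emphasized with something like Indicate(plot).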
The max pooling visualization can work as follows:
If this is too busy then maybe an alternative is:
I am just trying to run the first example in the README.md. I am running on an M1 Pro Mac. I did a conda install of Manim Community v0.17.3 and a pip install of the Python library (manim 0.17.3, manim-ml 0.0.24). I am getting the following TypeError:
manim -pql manim_test.py
Manim Community v0.17.3
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Blah/lib/python3.10/site-packages/manim/cli/render/command │
│ s.py:115 in render │
│ │
│ 112 │ │ │ try: │
│ 113 │ │ │ │ with tempconfig({}): │
│ 114 │ │ │ │ │ scene = SceneClass() │
│ ❱ 115 │ │ │ │ │ scene.render() │
│ 116 │ │ │ except Exception: │
│ 117 │ │ │ │ error_console.print_exception() │
│ 118 │ │ │ │ sys.exit(1) │
│ │
│ /Blah/lib/python3.10/site-packages/manim/scene/scene.py:223 │
│ in render │
│ │
│ 220 │ │ """ │
│ 221 │ │ self.setup() │
│ 222 │ │ try: │
│ ❱ 223 │ │ │ self.construct() │
│ 224 │ │ except EndSceneEarlyException: │
│ 225 │ │ │ pass │
│ 226 │ │ except RerunSceneException as e: │
│ │
│ /Blah/manim_test.py:18 in │
│ construct │
│ │
│ 15 │ def construct(self): │
│ 16 │ │ # Make the neural network │
│ 17 │ │ nn = NeuralNetwork([ │
│ ❱ 18 │ │ │ │ Convolutional2DLayer(int(1), int(7), int(3), filter_spacing=0.32), │
│ 19 │ │ │ │ Convolutional2DLayer(int(3), int(5), int(3), filter_spacing=0.32), │
│ 20 │ │ │ │ Convolutional2DLayer(int(5), int(3), int(3), filter_spacing=0.18), │
│ 21 │ │ │ │ FeedForwardLayer(3), │
│ │
│ /Blah/lib/python3.10/site-packages/manim_ml/neural_network/ │
│ layers/convolutional_2d.py:128 in __init__ │
│ │
│ 125 │ │ padding_dashed=True, │
│ 126 │ │ **kwargs, │
│ 127 │ ): │
│ ❱ 128 │ │ super().__init__(**kwargs) │
│ 129 │ │ self.num_feature_maps = num_feature_maps │
│ 130 │ │ self.filter_color = filter_color │
│ 131 │ │ if isinstance(padding, tuple): │
│ │
│ /Blah/lib/python3.10/site-packages/manim_ml/neural_network/ │
│ layers/parent_layers.py:46 in __init__ │
│ │
│ 43 │
│ 44 class VGroupNeuralNetworkLayer(NeuralNetworkLayer): │
│ 45 │ def __init__(self, *args, **kwargs): │
│ ❱ 46 │ │ super().__init__(*args, **kwargs) │
│ 47 │ │ # self.camera = camera │
│ 48 │ │
│ 49 │ @abstractmethod │
│ │
│ /Blah/lib/python3.10/site-packages/manim_ml/neural_network/ │
│ layers/parent_layers.py:10 in __init__ │
│ │
│ 7 │ def __init__(self, text=None, *args, **kwargs): │
│ 8 │ │ super(Group, self).__init__() │
│ 9 │ │ self.title_text = kwargs["title"] if "title" in kwargs else " " │
│ ❱ 10 │ │ self.title = Text(self.title_text, font_size=DEFAULT_FONT_SIZE / 3).scale(0.6) │
│ 11 │ │ self.title.next_to(self, UP, 1.2) │
│ 12 │ │ # self.add(self.title) │
│ 13 │
│ │
│ /Blah/lib/python3.10/site-packages/manim/mobject/text/text_ │
│ mobject.py:486 in __init__ │
│ │
│ 483 │ │ │ self.line_spacing = self._font_size + self._font_size * self.line_spacing │
│ 484 │ │ │
│ 485 │ │ color = Color(color) if color else VMobject().color │
│ ❱ 486 │ │ file_name = self._text2svg(color) │
│ 487 │ │ PangoUtils.remove_last_M(file_name) │
│ 488 │ │ super().__init__( │
│ 489 │ │ │ file_name, │
│ │
│ /Blah/lib/python3.10/site-packages/manim/mobject/text/text_ │
│ mobject.py:814 in _text2svg │
│ │
│ 811 │ │ │ width = config["pixel_width"] │
│ 812 │ │ │ height = config["pixel_height"] │
│ 813 │ │ │ │
│ ❱ 814 │ │ │ svg_file = manimpango.text2svg( │
│ 815 │ │ │ │ settings, │
│ 816 │ │ │ │ size, │
│ 817 │ │ │ │ line_spacing, │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: Argument 'size' has incorrect type (expected int, got float)
The filters in the CNN visualizations render behind the feature maps, which is not desirable.
Add a system for displaying labels of various kinds in the architecture diagram. One such kind is the name of the layer, shown above it. Another is showing a dimension of a vector or convolutional filter. It should be possible to override the displayed name in the event that you want to show different dimension numbers (say, for the sake of explaining a very high-dimensional layer from a real architecture).
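One possible shape for the API (purely illustrative; the label and label_text parameter names are assumptions and not part of the current library):

nn = NeuralNetwork([
        ImageLayer(numpy_image, height=1.5, label="Input"),
        Convolutional2DLayer(1, 7, 3, filter_spacing=0.32, label="Conv 7x7"),
        # Override the displayed dimension for a layer from a real architecture
        FeedForwardLayer(3, label="Dense", label_text="d = 4096"),
    ],
    layer_spacing=0.25,
)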
How do I increase the size of the rendered output? .scale() doesn't seem to work.
Thanks
Adding a title to a NeuralNetwork and then moving or scaling it does not work: the title stays in place and neither moves nor scales with the network.
I want to visualize quad trees used in pairwise force calculations for Barnes-Hut simulations of things like force diagrams and n-body simulations.