Archive of everything surrounding the course.
- Exhibition in Summer (around June)
- Build a handheld navigate-toward-light device: attach two light sensors to a Raspberry Pi, compare both values, and have the Pi navigate you by saying either "left" or "right". This might also be possible with a camera.
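The two-sensor comparison could look like the sketch below. The sensor read functions are placeholders (real hardware reads, e.g. via an ADC, would replace them), and the deadband value is an assumption to be tuned:

```python
def read_left_sensor() -> float:
    return 0.42  # placeholder brightness reading; replace with ADC read


def read_right_sensor() -> float:
    return 0.58  # placeholder brightness reading; replace with ADC read


def navigation_hint(deadband: float = 0.05) -> str:
    """Tell the user where to turn based on which side is brighter."""
    left, right = read_left_sensor(), read_right_sensor()
    if abs(left - right) < deadband:
        return "straight"  # readings roughly equal: keep going
    return "left" if left > right else "right"


print(navigation_hint())  # → right (the right sensor reads brighter)
```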
Visualize plant data (electrical current, ...) and train a model on the visualized images, in order to give some feedback.
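One minimal way to turn such a signal into training images is to render each time series as a trace in a 2-D array. This is only a sketch under assumed shapes (the image height and the fake sine-wave "current" are illustrative, not real plant data):

```python
import numpy as np


def signal_to_image(signal: np.ndarray, height: int = 64) -> np.ndarray:
    """Render a 1-D signal as a binary image, one column per sample."""
    # Normalize the signal into row indices 0..height-1 (row 0 = top).
    span = signal.max() - signal.min()
    norm = (signal - signal.min()) / (span + 1e-9)
    rows = ((height - 1) * (1.0 - norm)).astype(int)
    img = np.zeros((height, len(signal)), dtype=np.uint8)
    img[rows, np.arange(len(signal))] = 255  # draw the trace
    return img


current = np.sin(np.linspace(0, 4 * np.pi, 128))  # stand-in for plant current
image = signal_to_image(current)
print(image.shape)  # → (64, 128)
```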
Within this project we want to add human attributes to a confined plant. This shall be the next step in plant evolution. It struck us how much people care about their houseplants; the details of these interactions are sometimes very complex. With our project we want to give feedback from the plant. This feedback might range from the plant moving to where the environment suits it best, to the plant reacting to people's emotions, or even the plant communicating in text.
A series of different plants in confined environments, ranging from beautiful to ugly. People display their connection to these plants through emotions. Based on a happy-to-grim map, the terrariums will either try to destroy the plants or keep them alive. Set up a station where people place their faces ("Röntgenraum Zahnarzt", i.e. a dentist's X-ray room), scan their emotions, and display live data from the terrarium.
- Max Ernst Museum, exhibition "Surreal Futures"
https://maxernstmuseum.lvr.de/de/ausstellungen/surreal_futures_16nv2a6w00cbh/surreal_futures.html
- Virtual Herbaria, Archive.org
https://archive.org/details/texts?query=Herbaria
- ImageNet Dataset Categories
https://www.image-net.org/
- Ernst Haeckel, zoologist
https://de.wikipedia.org/wiki/Ernst_Haeckel
- Voynich Manuscript, plant chapter
http://www.edithsherwood.com/voynich_botanical_plants/
https://www.holybooks.com/wp-content/uploads/Voynich-Manuscript.pdf
- WordNet, relations of words (API also available)
https://wordnet.princeton.edu/
- OpenAI Microscope
https://microscope.openai.com/models
- Objects and texture
https://www.cytter-datalab.com/
- What do Vision Transformers Learn? A Visual Exploration
https://arxiv.org/abs/2212.06727
- AI art without text-to-image: Myriad Tulips
https://annaridler.com/myriad-tulips
- Telegarden
https://goldberg.berkeley.edu/garden/Ars/
- The Nooscope Manifested
https://link.springer.com/article/10.1007/s00146-020-01097-6
Logic architecture, approach 1:
- Scan for the brightest point in the image.
- Rotate until the brightest point is centered.
- Move forward for a fixed time.
- Scan again; if the brightest point is still centered, continue.
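One iteration of this scan-center-move loop could be sketched as follows. The brightest point is found with a simple argmax over a grayscale frame; the motor commands are print placeholders (real code would drive the Pi's motor controller), and the tolerance value is an assumption:

```python
import numpy as np


def brightest_column(frame: np.ndarray) -> int:
    """Index of the image column with the highest summed brightness."""
    return int(frame.sum(axis=0).argmax())


def is_centered(frame: np.ndarray, tolerance: int = 10) -> bool:
    """True if the brightest column lies within `tolerance` of the center."""
    center = frame.shape[1] // 2
    return abs(brightest_column(frame) - center) <= tolerance


# One iteration on a fake 48x64 grayscale frame with a bright
# stripe near the image center:
frame = np.zeros((48, 64))
frame[:, 30] = 255
if is_centered(frame):
    print("move forward")  # brightest point centered: drive ahead
else:
    print("rotate")        # keep turning until it is centered
```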
Logic architecture, approach 2:
- Continuously scan for the brightest point and move toward it.
- There is no centering of the brightest point.
- Drive the left or right motor slower/faster depending on where you want to go.
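The differential-drive idea above can be sketched as proportional steering: the brightest point's horizontal offset directly sets the two motor speeds. The base speed and steering gain are assumptions to be tuned on the robot:

```python
def motor_speeds(brightest_x: int, frame_width: int, base: float = 0.6):
    """Map the brightest point's horizontal offset to motor speeds.

    If the light is to the right, the left motor runs faster (and
    vice versa), so the robot curves toward the light without ever
    explicitly centering it.
    """
    # offset in [-1, 1]: -1 = far left, 0 = centered, 1 = far right
    offset = (brightest_x - frame_width / 2) / (frame_width / 2)
    gain = 0.4  # assumed steering gain
    left = base + gain * offset
    right = base - gain * offset
    return round(left, 2), round(right, 2)


print(motor_speeds(48, 64))  # light on the right → (0.8, 0.4)
print(motor_speeds(32, 64))  # light centered → (0.6, 0.6)
```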
Problems arising:
- Due to the camera's narrow field of view, only the visible area is scanned for the brightest light. Implement an algorithm that scans 360° for the brightest point and compares the readings.
- When do we settle? Should the robot be in continuous motion? Attach a light sensor to the bottom of the robot; if a certain brightness threshold is reached, settle for a while.
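The 360° scan could be sketched as rotating in fixed steps, recording the brightness at each heading, and then turning to the best one. `read_brightness` is a hypothetical stand-in for the robot's sensor/camera code (here it pretends the light source sits at heading 120°), and the step size is an assumption:

```python
STEP_DEG = 30  # assumed rotation step per measurement


def read_brightness(heading: int) -> float:
    # Placeholder sensor model: brightness peaks at heading 120°.
    # Real code would rotate the robot and read the actual sensor.
    return 1.0 - abs(heading - 120) / 360


def best_heading() -> int:
    """Sample brightness every STEP_DEG degrees; return the brightest heading."""
    readings = {h: read_brightness(h) for h in range(0, 360, STEP_DEG)}
    return max(readings, key=readings.get)


print(best_heading())  # → 120
```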