nanogenmo / 2020
National Novel Generation Month, 2020 edition.
Since last year's NaNoGenMo, I made a "novel" that consists of Google Maps driving directions, where the itinerary follows the mentions of U.S. place names from Mark Twain novels.
Let's drive around America with Mark Twain!
I hope to share the text and code soon.
This is both an intent to participate and the beginning of a more comprehensive program log.
This November, I plan to write a project using Intel 8031/8051 assembler to produce a work told via punched IBM 5081 cards (or other standard, yet to be really decided). The goal of this is to one day produce 1 physical copy of the work as a computational artist book. This project may involve Python to do the convenient parts, but I am going to try to stay as much as I can within my technical constraint.
This work takes some inspiration from the titular computational machine developed at the Los Alamos National Laboratory, and is intended to be a kind of notional autononfiction. But I digress: that sounds far too literary a pretension to talk about at this point. As we all know from NaNoGenMo projects, things can go quite differently.
More later. But for now, this is enough.
Markov Chain text generation using a gigantic science fiction corpus.
I've been writing on cosmic.voyage for a while and noticed some common tropes that I thought would make it amenable to generative text. Other writers on the platform may be doing NaNoWriMo, so this could be fun.
Last year I attempted to generate a dadaist zine but got hung up on the Markdown+LaTeX formatting. I'd like to fix the problems I had and make some improvements that I could re-implement in my other triptograph project.
Started a project earlier this year to create rhymes from Jordan Peterson's dissertation. Decided to use this project as a basis for NaNoGenMo 2020 and participate a second time! As opposed to last year I aim to update my work throughout the month a bit better, so let's see!
hey, all. first time participating here. i'm going to do some research and populate a gigantic list of important events that my friends and families have experienced throughout different points in their life. specifically, i'm planning on splitting up the data sets into:
based on the information i get, i will have my program create a memoir of its own. will it manifest a desire to live? let's see!
It's definitely going to have cyberwitches. There may be Markov chains. There will be javascript. I may even write some of the code myself.
In 2017 I wrote "White to Play and Win", a novel in which a Chess engine narrates its thought process as it determines the best move to play. This year I intend to do something chess-themed again, but this time, in the reverse direction: rather than hyperfixation on a single move, it will tell the story of four players at a tournament competing for victory.
The tone will be similar to a "sports novel", with the players coming from different backgrounds and temperaments, and having different reasons for wanting to win the championship. Interspersed with their personal (simulated? Tracery?) stories of growing up, training, and accepting defeat or glory... will be actual simulated games. Chess engines are quite advanced now and many online services offer post-game analysis, identifying the significant moves, blunders, and alternate tactics for both players. The challenge will be to put a narrative spin on the analysis performed between the players.
For the past several months my time has been consumed by Blaseball, an online baseball simulation where 20 teams of randomly-generated players compete every week to win the Internet League Series championship. Over the sparse simulation mechanics, an entire community of fans has grown, cheering on their favorite teams and building elaborate lore and fan works to tell stories from the rolled dice. Given a slight nudge, people are quite happy to identify their own patterns in the material and project much more into it than is actually there. I hope I can translate some of that experience into this work.
I will be coding a horror story with a monster in it. I plan to do a mix of templating and something else (probably simulation). This is because I'd like to produce something with a coherent sense of plot, while still wanting to be "surprised" by what the characters do. I'd like to do something with synonyms in this work too.
I'm not sure how all of this will go, but I am sure I will have a ton of fun coding it.
I will program it in Ruby.
NaNoGenMo is a solution to a problem that I have, of a dataset with no purpose. Within the month, I intend to use the R language to convert my data into a generated novel of (hopefully) 50k words or more. The title hints at the dataset: "Crash Blossoms, or History Above the Fold".
This is my first time doing a NaNo*Mo, wish me luck!
For reasons unknown, a mathematician sits down to write out their favorite (or at least second-favorite) number.
There will be at least one surprise.
To the title -- well, kinda.
I'm still working on my other ridiculously stupid idea (#7), but as I get frustrated there, I turn to another smaller project: Mean Kings.
The goal here is to crawl Wikipedia for articles on ancient rulers, perform sentiment analysis on the articles and tell readers who was (or maybe wasn't) a mean king and some reasons why.
This idea found genesis as an idea for a bot that my partner and I once had, but never found its expression. Maybe I'll turn it into a bot after this. Probably.
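A minimal sketch of the Mean Kings idea, assuming a tiny hand-built sentiment lexicon; a real version would fetch articles via the Wikipedia API and use a proper sentiment-analysis library:

```python
# Hypothetical sketch: score a ruler's article against a tiny hand-built
# sentiment lexicon. The word lists here are invented stand-ins for a
# real sentiment resource.
MEAN = {"cruel", "tyrant", "massacre", "ruthless", "executed"}
KIND = {"benevolent", "just", "generous", "peaceful", "reformer"}

def meanness(article: str) -> int:
    """Count mean-coded words minus kind-coded words."""
    words = article.lower().split()
    return sum(w in MEAN for w in words) - sum(w in KIND for w in words)

def verdict(name: str, article: str) -> str:
    score = meanness(article)
    label = "a mean king" if score > 0 else "not such a mean king"
    return f"{name} was {label} (score {score})."
```

The "reasons why" could then be drawn from the sentences that contributed most to the score.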
text to image
in a similar vein to NaNoGenMo/2019#81, but hopefully I'll be able to come up with a "deeper" & more interesting simulation this year.
my basic process will be: make a procedurally-generated text-adventure with emphasis on simulating basic needs, crafting, "survival sandbox" stuff. then make an AI to play the game. and then generate narrative from the experiences of that AI.
🤞
Hi everyone! (Possibly the youngest contributor here.) I plan on reimplementing Teens Wander Around a House in Python, with some GPT-2-generated text layered on top, and the setting changed to the night of Y2K38. That's all.
I heard about NaNoGenMo a couple of years ago and I always wanted to participate. This year, after much vacillation, I'm finally joining.
Roughly: a symmetry group governs combinations of elements (characters, moods, locations, etc) for each chapter: these are then used as elements to drive a text generator.
This is just a vague idea which may change once I start actually coding.
Completed results: https://etc.mikelynch.org/nanogenmo2020/
Simple dialogues converted into ridiculously detailed phonetic descriptions.
They say that the most important part of writing is to show, not tell. Comics tend to do a pretty good job of that, and are much more readable than a wall of textual simulation output.
I'm planning to make a comic for this year. I will aim for ~150 pages instead of 50,000 words, because comics are naturally sparser in words than prose, but I'll also make sure to generate a 50,000 word version to please the rule pedants.
As far as I know, this has only been done once, by atduskgreg in 2014. Unlike atduskgreg's entry, I will take the strictly simulationist approach with my generator. My plan is to generate a rough page-by-page plot through a simulator, then generate individual pages a little more loosely (while still maintaining as much cohesiveness as possible, especially in the small details), and finally generate the appropriate frames to go in the pages. I am hoping to generate a story that echoes the Anabasis, though I haven't decided to what extent.
I opened a repository which is empty for now, but I hope it won't stay so for long.
A Kafkaesque, esoteric, Nano NaNoGenMo, Meow inspired epizeuxical "lengthy work of fiction" attempt in Lazy-K.
It may not be possible to bring the final code length under 256 characters of SKI combinators to make this 'Nano', but I'll try.
This began as an attempt to understand SKI combinator calculus by programming "something practical" using it.
An initial version of the code that produces the exactly-50K-word text comes in at 1276 bytes (not nano :( ):
K(IS(SI(K(S(K(S(S(KS)K)(S(S(KS)K)I)))(S(S(KS)K)I(S(S(KS)K)(SII(S(S(KS)K)I)))))))(K(S(SI(K(S(S(KS)K)(S(K(S(S(KS)K)(SII(S(S(KS)K)I))))(S(S(KS)K)I(S(S(KS)K)(S(S(KS)K)I)))))))(K(S(SI(K(S(K(S(S(KS)K)I))(S(SII)I(S(S(KS)K)I)))))(K(S(SI(K(S(S(KS)K)(S(K(SII(S(S(KS)K)I)))(S(S(KS)K)(S(S(KS)K)I(S(S(KS)K)(SII(S(S(KS)K)I)))))))))(K(S(SI(K(S(S(KS)K)(S(K(S(S(KS)K)I))(S(S(KS)K)(S(K(S(S(KS)K)I))(S(S(KS)K)(SII(S(S(KS)K)(S(S(KS)K)I))))))))))(K(S(S(KS)K)(S(S(KS)K)I)(S(S(KS)K)(S(S(KS)K)I(S(S(KS)K)(SII(S(S(KS)K)I))))(S(S(KS)K)(S(K(S(S(S(KS)K))(SII)(S(S(KS)K)I)))(S(K(S(S(KS)K)I))(S(S(KS)K)(SII(S(S(KS)K)I)))))(S(K(S(K(S(SI(K(S(K(S(S(KS)K)I))(S(SII)I(S(S(KS)K)I)))))))K))(S(K(S(K(S(SI(K(S(K(SII(S(S(KS)K)I)))(SII(S(S(KS)K)(S(S(KS)K)I))))))))K))(S(K(S(K(S(SI(K(S(SII)I(S(S(KS)K)I)(S(S(KS)K))(S(SII)(S(S(KS)K))(S(S(KS)K)I)))))))K))(S(K(S(K(S(SI(K(S(S(KS)K)(S(S(KS)K)I(S(S(KS)K)(S(K(S(S(KS)K)I))(S(S(KS)K)(SII(S(S(KS)K)I)))))))))))K))(S(K(S(K(S(SI(K(S(S(KS)K)I(S(S(KS)K)(S(K(S(S(KS)K)I))(S(S(KS)K)(SII(S(S(KS)K)I))))))))))K))(S(K(S(SI(K(S(S(KS)K)(S(SII)I(S(S(KS)K)I))(S(S(KS)K))(SII(S(S(KS)K)(S(S(KS)K)I))))))))K))))))))(S(SI(K(S(S(KS)K)(S(S(KS)K)I)(S(S(KS)K)I))))(K(S(SI(K(S(S(KS)K)(S(K(S(S(KS)K)(SII(S(S(KS)K)I))))(S(S(KS)K)I(S(S(KS)K)(S(S(KS)K)I)))))))(K(K(SII(SII(S(S(KS)K)I)))))))))))))))))))
Plan:
Inspirational quote:
"A short while later, K. was lying in his bed. He very soon went to sleep, but before
he did he thought a little while about his behaviour, he was satisfied
with it but felt some surprise that he was not more satisfied."
Important note on pronunciation: For the purposes of this project, 'K.' refers to Franz Kafka's fictional character Josef K., and is to be pronounced as the German letter: 'kah'.
A collection of simple fables, Fabulas de Ecologia explores alternate ecologies and configurations between the living, the not, and the in-between.
My project will pick up where my NaNoGenMo 2019 project left off, utilizing GPT-2 and GPT-3, the language models developed by OpenAI.
Last year, thanks to Max Woolf, I learned how to fine-tune two different GPT-2 language models with Google Colab. I created "Writing Prompt Prompter" trained on thousands of writing prompts posted on /r/WritingPrompts and "Writing Prompt Responder" trained on thousands of responses to writing prompts on /r/WritingPrompts.
After running those language models for weeks straight, I generated a 50,000-word story collection, with a series of stories generated by "Writing Prompt Responder" in response to story prompts generated by "Writing Prompt Prompter." The stories were spooky and lovely.
Late in 2019, Nick Walton created AI Dungeon, a text-based video game that used the GPT-2 language model as its storytelling engine. Walton shared his AI Dungeon code for free through Google Colab and I spent a few amazing weeks running the early version on my laptop. Walton and his team have since developed a standalone AI Dungeon app.
In June, OpenAI announced the creation of GPT-3, the third generation of its AI language model. The company said it trained GPT-3 on a massive selection of datasets, including the Common Crawl corpus (around one trillion words gathered through eight years of web crawling), two “internet-based books corpora," and the English-language version of Wikipedia.
I can't access GPT-3 on my own, but AI Dungeon now runs on the powerful new language model. I could never code something as elegant as AI Dungeon and I could never replicate the millions of training hours its users have spent on the game.
So for my NaNoGenMo project this year, I will use AI Dungeon as my interface to generate 50,000 words. I will feed AI Dungeon my favorite computer-generated writing prompts and writing prompt responses generated during NaNoGenMo 2019. AI Dungeon will continue where those stories left off last year.
It will be like a game of Exquisite corpse played between GPT-2 and GPT-3.
I will keep an eye on Tra38's project that also uses AI Dungeon.
Nothing sexy, and things that were done a decade ago, BUT NOT BY ME and not tools that work with the rest of my toolchain, so why not?!!
Maybe I won't get anything done - I haven't the last few years. But maybe I will!
Possibly using:
I'm not sure if I want to write a fake diary for a supermarket cashier or generate logs for a food delivery company, but I'll try to participate this year. I hope to use Rust together with an ECS to generate the activities, otherwise I'll just use Python/Ruby.
A common criticism of GPT language models is that they plagiarise text from the internet. As an experiment in smoothing over this issue, I will make a Markov chain language model that tags each n-gram observation with the location of the original in the source text.
This means that in the text generation stage, each output token can cite the n-gram it was drawn from in the source text. In the generated novel, I'll put this info in footnotes. This should make the resulting text much better sourced, and give the reader clarity about the true origin of any deep insights found in the novel.
Haven't decided what source text to use. Maybe Shakespeare (all lines have a standard identifier), GPT research papers, Moby Dick...
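The tagging idea above can be sketched quite compactly: each n-gram observation stores not just the next word but also where in the source it was seen. This is a minimal illustration, not the project's actual code:

```python
import random
from collections import defaultdict

def train(text, n=2):
    """Map each n-gram to (next_word, source_word_offset) pairs."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n):
        key = tuple(words[i:i + n])
        model[key].append((words[i + n], i + n))  # remember provenance
    return model

def generate(model, n=2, length=30, seed=0):
    """Emit (word, source_offset) tokens; offsets become footnotes."""
    rng = random.Random(seed)
    state = list(rng.choice(sorted(model)))
    out = [(w, None) for w in state]
    for _ in range(length):
        choices = model.get(tuple(state))
        if not choices:
            break
        word, pos = rng.choice(choices)
        out.append((word, pos))
        state = state[1:] + [word]
    return out
```

With a real corpus, the stored offset would be mapped back to a line or chapter identifier before being rendered as a footnote.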
Caveats:
Here I declare my intent to participate 🙌
Haven't decided about style/topic yet but we'll see
I have been looking at this for a couple of years now. I think this year is the one, I want to try it out. Not sure what I am making but let's figure it out along the way.
Not quite sure what to make yet, but this does sound like a neat little project!
I've done NaNoWriMo successfully a few times and feel like it is now time to move onto more confusing ventures! This is my first time participating in NaNoGenMo. I'm looking forward to figuring out what on earth I'll make with my novice Python skills. Will it be AI generated fanfiction? Will it be a horror novel based on cookery websites? Who knows? Not me! :D
Pretty straightforward: let's use some heuristics to fish a text for verbs, nouns, adjectives and adverbs and then generate random sentences using patterns.
And do it in <=256 characters because #NNNGM and I have not played Perl golf for many years.
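Un-golfed, the idea might look something like this Python sketch (the actual entry is intended to be Perl in ≤256 characters; the suffix heuristics and the sentence pattern here are invented stand-ins):

```python
import random

# Crude suffix heuristics standing in for real POS tagging.
def guess_pos(word):
    if word.endswith("ly"):
        return "ADV"
    if word.endswith(("ous", "ful", "ive", "able")):
        return "ADJ"
    if word.endswith(("ing", "ed", "ate")):
        return "VERB"
    return "NOUN"

def harvest(text):
    """Sort the words of a source text into part-of-speech bins."""
    bins = {"NOUN": [], "VERB": [], "ADJ": [], "ADV": []}
    for w in text.lower().split():
        bins[guess_pos(w)].append(w)
    return bins

# One example pattern; literal words pass through, tags get filled.
PATTERN = ["the", "ADJ", "NOUN", "VERB", "ADV"]

def sentence(bins, rng):
    return " ".join(w if w not in bins else rng.choice(bins[w])
                    for w in PATTERN)
```

The golfed version would presumably collapse the heuristics into a handful of regexes.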
It was only a month ago that I trawled through all the sad mecha tabletop RPG entries for a sad mech game jam on itch, so I'm gonna see what I can do with parsing and text injection to make a story that at least starts out fairly straightforward, but... eventually has some digital ghosts or echoes or memories start making noise in the text, and see what it comes up with.
Here's hoping I can even find some time to spend on this ;0;
--Update---
I think I missed the actual official time, but I made it just in time, with like an hour to spare for my time zone, so nyeh. Of course, as I type this it's still compiling the final novel, but I know where it will be once finished.
I used Markovify to build new sentences from the sentences that matched a ghost's emotions. And each ghost had a word bank that would be depleted as it overwrote sentences it didn't like. Then I had multiple ghosts all reading and writing at once at different rates. So they were learning and rewriting from each other too.
My code is available here
And the finished, rewritten text is here (The source story was a NaNoWriMo story I wrote in 2010 that was terrible)
Stumbled upon a link to this on Twitter, and it sounds like a fun time. In November it'll be time to code a novel! Will my story be grammatically correct? Maybe. Will it be coherent? Not likely! But will it be 50,000 words? Yep!
I'm going to write a selenium script to feed lorem ipsum from the lorem ipsum generator i made partly for nanogenmo 2020 (https://grindr.karlmarxindustries.com/) to my not well-trained, marx-quoting chat bot (https://bot.karlmarxindustries.com/) to presumably mis-train it and I will record the first 50000 words of dialog in a screenplay. (I should try to sell the finished product to netflix.)
I want to write a field guide to help you survive a magical, whimsical forest.
Sections on:
Entries will consist of
The name will most likely be generated based on a format, with specific words pulled from corpora. The text will likely be Markov chain generated (unsure of source). The warnings will be added periodically and will most likely follow a format filled with specifics based on the name and/or key words in the entry.
Stretch goals: If I complete what I have fast enough and don't want to start another project, I will possibly attempt these extra tasks:
"AI Dungeon is a free-to-play single-player and multiplayer text adventure game which uses artificial intelligence to generate unlimited content." —Wikipedia
The important thing about AI Dungeon is that it currently uses GPT-3 on the backend, which is a very powerful machine learning algorithm, able to generate human-readable text based on the "prompt" you provide it. The free version uses a fairly limited version of GPT-3 (Griffin), but the paid version (Dragon) is much better. I currently have the paid version.
I have used AI Dungeon to generate characters for tabletop RPGs, so having it generate fluff for a roleplaying game seems doable.
Here's a problem with this project though. AI Dungeon is a closed-source project. The only "source code" I can provide is the initial prompt I used to generate the text, and I can open-source that prompt. But AI Dungeon doesn't allow for reproducible output. In other words, it is based almost entirely on the honor system.
So, I may write my own script, along the lines of The Computer Crashes or The Track Method, and open-source that script, merely using AI Dungeon as a generator of corpus that will be remixed.
I don't know if I'll have time to work on this project, but I think it might be something for me to think about, at least. So I'm making this issue, just in case. If I get, say, only 10,000 words out of this, I'll just add a bunch of meows at the end and declare victory.
I'm considering a puzzle book in the vein of the Sherlock Holmes Consulting Detective boardgame and Obra Dinn, where the reader is given a wealth of in-universe newspaper clippings, maps, directories and other documents, and a mystery can be solved by successfully cross-referencing information and eliminating possibilities.
The autogeneration idea would be to start with a single fact ("Mr Black was killed by Colonel Mustard") and then to recursively obfuscate it in a variety of ways ("Mr Black was killed with a skull-topped malacca cane" + "Colonel Mustard has a skull-topped malacca cane"), hiding all of these facts in various documents, until the case was complex enough to require some effort to solve. The script would generate as many non-overlapping cases as was needed to fill 50,000 words, along with a lot of randomly generated background filler to bulk out newspaper pages and address listings.
I'm imagining a standalone file of documents with a simple introduction and no further mechanics, rather than a Consulting Detective door-knocking system, but will see how it goes.
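The recursive obfuscation step described above could be sketched like this (the clue and place lists are invented placeholders; a real generator would also scatter the statements across different documents):

```python
import random

# Hypothetical sketch: a direct fact is replaced by an indirect clue
# plus an ownership fact, and the ownership fact can itself be
# obfuscated again, one level per unit of depth.
CLUES = ["a skull-topped malacca cane", "a limp", "a torn glove",
         "ink-stained fingers"]
PLACES = ["the conservatory", "the docks", "the Mitre Hotel"]

def obfuscate(killer, victim, rng, depth=2):
    clue = rng.choice(CLUES)
    case = [f"{victim} was killed by someone with {clue}."]
    case += hide_owner(killer, clue, rng, depth - 1)
    return case

def hide_owner(person, thing, rng, depth):
    """Recursively obscure the fact '<person> has <thing>'."""
    if depth == 0:
        return [f"{person} has {thing}."]
    place = rng.choice(PLACES)
    return ([f"The owner of {thing} was last seen at {place}."]
            + hide_owner(person, f"a room key for {place}", rng, depth - 1))
```

The depth parameter controls how much cross-referencing the reader has to do before the chain closes.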
I will be hosting and updating my project here:
https://github.com/kkritselis/NaNoGenMo2020
I will be mostly working on the weekends and anticipate working about 40 hours over the course of the month.
It's been said that the definition of insanity is doing the same thing over and over and expecting different results.
WELL HERE I AM AT NANOGENMO AGAIN
I am co-founder and maintainer of an open-sourced org (Auto-DL).
We need to work on RNNs anyway; writing a novel generator (given a list of topics, generate a 50k+ word novel) would be a challenging yet rewarding exercise.
I am participating :)
I have almost no clue what I am doing, but rather than ask for help I hit upon the idea of asking you to tell me what you do so I can just copy it. (I won't be able to, lol)
I'll go first:
I will follow a bunch of tutorial videos using p5.js to generate a lot of very random text. I will then attempt to impose some structure and sort of fill the structure in. Then I'll give up and just generate 50k words of nonsense.
Here's the code.
Here's the 50,000-word output.
Hi there,
Using gpt-2, I'm interested in exploring tricks to bring narrative coherence across longer texts, possibly recycling some ideas from https://www.scritturacollettiva.org/metodo.html. Looking forward to emulation from other participants.
KR Francois
I set out with the naive idea that GPT could understand the structure of a text if trained accordingly. So the training material includes text snippets (pompously called chapters) with a prompt containing:
• a summary of the snippet created with Bert Summarizer and tagged with <|summary|>,
• the third paragraph of the snippet tagged <|line-03|>,
• the final paragraph of the snippet tagged <|line-last|>,
• the first paragraph of the snippet tagged <|chapter-begin|>.
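Assembling one training example in that tagged format is straightforward string work; here is a sketch (the summarizer call is replaced by a precomputed argument, and `build_example` is a hypothetical helper name):

```python
# Sketch: build one tagged training example from a chapter's paragraphs
# and its precomputed summary (from Bert Summarizer in the real setup).
def build_example(chapter_paragraphs, summary):
    return "\n".join([
        "<|summary|> " + summary,
        "<|line-03|> " + chapter_paragraphs[2],
        "<|line-last|> " + chapter_paragraphs[-1],
        "<|chapter-begin|> " + chapter_paragraphs[0],
    ])
```

The generator is then prompted with everything up to `<|chapter-begin|>` and asked to continue.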
And, indeed, even though GPT-2 writes sequence by sequence, it did recognize a pattern and often picked up the first sentence and the third one. The final one mostly got lost. But the picking of the third sentence seemed to be mostly statistics-based: it does not stick to the rest of the text.
Intent: lay out existing text as a generated multicursal labyrinth, where branches occur only when the same word occurs multiple times in the source text.
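Finding the branch points is the easy half of that idea; a minimal sketch of the rule (a word occurring more than once becomes a junction between its occurrences):

```python
from collections import defaultdict

# Sketch of the branching rule: map each repeated word in the source
# to the list of positions where it occurs, so the reader can jump
# between occurrences.
def junctions(text):
    positions = defaultdict(list)
    for i, word in enumerate(text.lower().split()):
        positions[word].append(i)
    return {w: ps for w, ps in positions.items() if len(ps) > 1}
```

Laying the resulting graph out as a printable labyrinth is the hard half.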
This is going to be a continuation of my ideas from last year in NaNoGenMo/2019#65
Treating vocabularies as numbering systems, and works composed from them as large numbers, to be manipulated.
Following some very good advice last year, I switched focus towards the end of the month to ensuring I actually had 50k words in some kind of readable format, rather than bug-free code that was pure and true to a half-baked concept that only I was judging. It was a good exercise in project management: focus on the results that matter.
I was happy enough with the results last year. Some of the bugs / issues with the tokenisation of the source material seemed to make the output more interesting, and my attempts last year to fix it resulted in (if I remember correctly) less interesting output, so I embraced the glitches and accomplished the goal of producing a generated novel using a simple arithmetic operation on a text.
This round I want to:
Intent to participate
Collate all the weary moments of the bible into one long rest
Back in the days of mid 2000s internet I was part of a fan forum for His Dark Materials, where a bunch of nerdy mostly teenagers compiled and wrote animal analysis mapped to personality traits. I have a 10mb csv file with about 10 years of community sourced animal traits I scraped and I'd like to do something weird with it.
Current idea:
Taking the titles of animal names, parse them into a grammar to generate new animal forms
Train a model on a whole lot of animal descriptions from post bodies or use a markov chain, not sure yet!
Use the new animals as input and see what comes out
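Step 1 of the list above might start out as simply as this (the word lists are invented stand-ins for parts parsed out of the scraped 10-year corpus):

```python
import random

# Hypothetical sketch: recombine modifier and base parts of scraped
# animal names into new animal forms. Real lists would come from
# parsing the community-sourced CSV.
MODIFIERS = ["lesser spotted", "arctic", "dusk", "velvet"]
BASES = ["fox", "wren", "salamander", "moth"]

def new_animal(rng):
    return f"{rng.choice(MODIFIERS)} {rng.choice(BASES)}"
```

The generated names would then seed the description model in step 2.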
hi! i've been a big fan of nanogenmo for a while. this year i have an idea for it.
i found out earlier this year a friend of mine has to keep exhaustive logs of all the work activity he does every day. it includes things like "11am-11:15am: Nothing". this really blew me away before i learned that it's actually common practice in some positions. i think it could be cool to generate a prose version of it. it might also be really boring. i guess i'll find out.