
citizen-dj's Introduction

Citizen DJ

This project is an experiment, and there is no active development or maintenance of the codebase. Fork at your own risk! In the unlikely event that there are further updates, the LC Labs team will announce them through our communication channels. Sign up for our listserv on labs.loc.gov, follow us on Twitter @LC_Labs, and watch #CitizenDJ.

The Citizen DJ project invites the public to make hip hop music using the Library’s public audio and moving image collections. By embedding these materials in hip hop music, listeners can discover items in the Library's vast collections that they likely would never have known existed.

Preview Citizen DJ here

Use cases

This document is for people with software development experience who are interested in:

  • Extending the functionality of this app for their own use
  • Creating their own instance of this app using their own content

Creating a new project

This process is currently under major (re)construction; please email [email protected] if you intend to use this for your own audio collections, and I will let you know when this is ready.

  1. Fork this repository to your own account via the button on the top-right corner of this repository

  2. Clone the new repository to your local environment, e.g.

    git clone https://github.com/YOUR-USERNAME/citizen-dj.git
    
  3. To keep up to date with the Citizen DJ core codebase, add an upstream remote:

    git remote add upstream https://github.com/LibraryOfCongress/citizen-dj.git
    

    To check for new changes without automatically merging them, run:

    git fetch upstream
    

    Now you can selectively merge changes from the upstream repository. For more information, see this guide to syncing forks

  4. Now run the following Python script (coming soon), which resets the application to a boilerplate (it will remove the default Library of Congress collections and content):

    python3 reset_to_boilerplate.py
    

Configuring your new project

(coming soon)

Creating a new collection

For this walkthrough, I will use the following use case: using this app for your own audio collections. In this case, there are some small tweaks you'll have to make to the app first:

  1. Copy the folder /_sample_app/_collections/ to /_collections/.

  2. To start, you must create a config markdown file for each of your collections. In this example we'll use the Variety Stage collection. A sample config markdown file can be found in the newly created /_collections/ folder. You can leave this as-is for now, but you can rename/edit it to fit your own collections; one markdown file per collection.

  3. Then run the following command to generate the other necessary pages for the collections in the /_collections/ directory:

    python3 sync_collections.py -dir "_collections"
    

    This will create the necessary collections files in _explore/, _remix/, and _use/ directories. Alternatively, you can create these manually. If so, you will need to update the layout and permalink fields.
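
If you create these pages manually, each one is a Jekyll page whose front matter points at the right layout and permalink. A hypothetical sketch for an "explore" page (the field values shown are examples, not the app's exact schema):

```yaml
---
# Hypothetical front matter for _explore/variety-stage.md
layout: explore      # must match a layout file in _layouts/
permalink: /variety-stage/explore/
---
```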

Processing a new collection

The next steps will go through a (long) series of scripts that process audio from loc.gov.

These scripts are maintained in a separate open-source code repository. I intend to move the relevant scripts into this repository in the near future.

  • Install requirements. To complete the full workflow, the core libraries needed are:

    • Python (I have developed this using 3.6)
    • SciPy for math functions (probably already installed)
    • FFmpeg and FFprobe for working with media files; ffmpeg and ffprobe commands must work
    • LibROSA for audio analysis
    • Pydub for audio manipulation
    • scikit-learn for statistics and machine learning features (e.g. TSNE, clustering, classification)
    • Multicore-TSNE for faster TSNE
    • RasterFairy for transforming point cloud to grid
    • Requests for making remote web requests for scraping metadata
    • Curl for binary downloads
  • Clone the repository:

    git clone https://github.com/beefoo/media-tools.git
    cd media-tools
    

The following steps will walk through retrieving and processing a specific A/V collection from loc.gov. This process is a sequence of many, many Python scripts. I may automate this in the future, but for now, each step below is to be run manually from the command line. For convenience, I have a commands template that can be populated with your own path information using this script like so:

python3 template_to_string.py \
  -in "projects/citizen_dj/citizen_dj_commands_template.txt" \
  -query "collection_uid=variety-stage&collection_id=variety-stage-sound-recordings-and-motion-pictures&data_base_dir=output/variety-stage/&media_dir=output/variety-stage/media/&app_dir=/full/path/to/citizen-dj/" \
  -out "output/my_custom_commands.txt"

Now you have a text file with a series of commands. You can run them individually, paste multiple lines into your terminal (on Mac), or, on Windows, replace newlines with " && " to run multiple commands sequentially. However, I recommend running each script individually when you first start, to get a sense of what they do. Some script parameters require tweaks for best results.

I. Retrieving data and assets from loc.gov

This example will retrieve data and media from loc.gov. The result will be a spreadsheet of item records (item.csv) and a folder of media files. You can skip this section if you have your own data/media, but you'll need to generate a .csv file for your data/files with the following columns:

id: a unique identifier
url: a url to the source's item record (for linking to in the app)
filename: the name of the file that has been downloaded (not the full path, just the basename, e.g. myfile.mp3)
title: title for displaying in app
contributors: name of contributors for displaying in app (pipe | separated | list)
date: date for displaying in app
subjects: subjects/tags for displaying in app (pipe | separated | list)

Otherwise, first download the search results of a loc.gov query:

python3 ingesters/loc/download_query.py \
  -query "https://www.loc.gov/collections/variety-stage-sound-recordings-and-motion-pictures/?fa=original-format:sound+recording&fo=json" \
  -out "output/variety-stage/pages/page_%s.json"

In this example, we are performing a query to the LOC API to get the audio files for the Variety Stage Sound Recordings and Motion Pictures collection from loc.gov. Note that these scripts will only work for loc.gov items that have downloadable content (you can check this by looking for a download option on a collection item page). Next we will download metadata for each item in the query results:

python3 ingesters/loc/download_metadata.py \
  -in "output/variety-stage/pages/page_*.json" \
  -out "output/variety-stage/items/%s.json"

And compile the items into a .csv file:

python3 ingesters/loc/collect_metadata.py \
  -in "output/variety-stage/items/*.json" \
  -out "output/variety-stage/items.csv"

Then download the media assets for each item (this can take a while for large collections or collections with large media files):

python3 ingesters/loc/download_media.py \
  -in "output/variety-stage/items.csv" \
  -out "output/variety-stage/media/"

II. Audio/video feature extraction

Now get file features (duration, has video?, has audio?) from each file:

python3 get_file_features.py \
  -in "output/variety-stage/items.csv" \
  -dir "output/variety-stage/media/"

Optionally, you can view a "report" about this collection:

python3 ingesters/loc/report_metadata.py \
  -in "output/variety-stage/items.csv" \
  -pages "output/variety-stage/pages/page_*.json"

Next, break up each file into audio samples. This will likely take a while, especially for large collections:

python3 audio_to_samples.py \
  -in "output/variety-stage/items.csv" \
  -dir "output/variety-stage/media/" \
  -out "output/variety-stage/sampledata/" \
  -delta 0.05 \
  -max " -1" \
  -features \
  -overwrite

-delta is the delta for onset detection; decrease this number for more samples. -min and -max are the minimum and maximum durations in milliseconds (-1 for no limit). -features adds another step that analyzes each sample's pitch, volume, and musicality, which will be necessary for the next steps.
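
Conceptually, a delta threshold for onset detection works like this simplified, NumPy-only sketch (the real script uses LibROSA's onset detector; `detect_onsets` here is a hypothetical stand-in):

```python
import numpy as np

def detect_onsets(y, sr, frame=1024, delta=0.05):
    """Very rough onset detector: flag frames where normalized
    short-time energy rises by more than `delta`.

    A smaller `delta` yields more (weaker) onsets, mirroring the
    effect of the -delta parameter above."""
    n = len(y) // frame
    energy = np.array([np.mean(y[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    energy = energy / (energy.max() + 1e-9)  # normalize to 0..1
    rises = np.where(np.diff(energy) > delta)[0] + 1
    return rises * frame / sr  # onset times in seconds
```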

Compile sample information about each item and add it to the items .csv file:

python3 get_sample_features.py \
  -in "output/variety-stage/items.csv" \
  -dir "output/variety-stage/sampledata/"

Next, filter out items that have fewer than 50 samples (this usually removes silent audio files that contain only intro audio):

python3 filter_csv.py \
  -in "output/variety-stage/items.csv" \
  -filter "samples>50"

Next, get item "phrases". This looks for sequences of samples that are likely related, e.g. within the same scene in a film.

python3 items_to_phrases.py \
  -in "output/variety-stage/items.csv" \
  -dir "output/variety-stage/sampledata/" \
  -out "output/variety-stage/phrasedata/"

If you're not getting enough phrases, you can lower the threshold for minimum clarity of phrases by adding -params "minc=24.0"; the lower the number, the more phrases you will get. The default is 30. Also run python3 samples_to_phrases.py -h for more parameters to tweak.
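
The phrase-detection idea can be illustrated with a toy sketch: treat consecutive samples as one phrase when the gap between them is small. `group_into_phrases` and its thresholds are hypothetical, not the script's actual logic:

```python
def group_into_phrases(sample_times, max_gap=1.0, min_samples=2):
    """Group sorted sample start times (seconds) into phrases.

    A new phrase starts whenever the gap to the previous sample
    exceeds max_gap; phrases with fewer than min_samples samples
    are discarded."""
    phrases, current = [], [sample_times[0]]
    for t in sample_times[1:]:
        if t - current[-1] <= max_gap:
            current.append(t)
        else:
            if len(current) >= min_samples:
                phrases.append(current)
            current = [t]
    if len(current) >= min_samples:
        phrases.append(current)
    return phrases
```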

Add phrase stats to item .csv data:

python3 collect_phrases.py \
  -in "output/variety-stage/items.csv" \
  -dir "output/variety-stage/phrasedata/"

Optionally, view statistics of the results:

python3 stats_histogram.py \
  -in "output/variety-stage/items.csv" \
  -plot "duration,samples,medianPower,medianHz,medianClarity,phrases"

python3 stats_totals.py \
  -in "output/variety-stage/items.csv" \
  -props "duration,samples,phrases"

Now we find a subset of 4,096 samples (for a 64 x 64 grid) by selecting the phrases that sound the most musical (using the clarity feature):

python3 phrases_subset.py \
  -in "output/variety-stage/items.csv" \
  -pdir "output/variety-stage/phrasedata/" \
  -sdir "output/variety-stage/sampledata/" \
  -out "output/variety-stage/samples.csv" \
  -sort "clarity=desc" \
  -lim 4096 \
  -limp 8 \
  -lims 57

You will need to tweak the last two parameters (-limp and -lims, which limit the number of phrases per file and the number of samples per phrase) based on the collection you are working with. You can do this quickly by running the command with -probe, which will just report information. In the report, look for the line Found X valid samples; X should be greater than your target sample count (4096 in this case), but as close to that number as possible, e.g. around 4100. For very large collections, you usually want the "phrases per file" and "samples per phrase" limits to be very small.
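
In spirit, the subsetting does something like this hypothetical sketch: cap samples per phrase, cap phrases per file, then take the overall top lim by clarity (`subset_samples` is illustrative, not the script's implementation):

```python
from collections import defaultdict

def subset_samples(samples, lim=4096, limp=8, lims=57):
    """samples: dicts with 'filename', 'phrase', and 'clarity' keys."""
    ranked = sorted(samples, key=lambda s: -s["clarity"])
    # group by (file, phrase), keeping at most `lims` samples per phrase
    by_phrase = defaultdict(list)
    for s in ranked:
        by_phrase[(s["filename"], s["phrase"])].append(s)
    phrases = {k: v[:lims] for k, v in by_phrase.items()}
    # keep at most `limp` phrases per file, best phrases first
    by_file = defaultdict(list)
    for (filename, _), group in phrases.items():
        by_file[filename].append(group)
    pool = []
    for groups in by_file.values():
        groups.sort(key=lambda g: -g[0]["clarity"])
        for g in groups[:limp]:
            pool.extend(g)
    # finally, the overall top `lim` by clarity
    return sorted(pool, key=lambda s: -s["clarity"])[:lim]
```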

III. Prepping assets for the app

The following steps now branch based on the specific interface you are building towards.

A. For the "Explore" collection interface

The previous step might have resulted in more than 4096 samples, so make sure there are exactly 4096 samples; prioritize samples with a higher musical quality (clarity):

python3 filter_csv.py \
  -in "output/variety-stage/samples.csv" \
  -sort "clarity=desc" \
  -limit 4096 \
  -out "output/variety-stage/samples_grid.csv"

To determine the position (x, y) of each clip, we first extract a set of audible features from each clip using strategies common in speech recognition software. This set of features is then reduced to just two features using a machine learning algorithm called t-SNE. This will add two columns (tsne and tsne2), which will later be used to calculate x and y respectively. Tweak -angle and -rate (learning rate) to achieve results that best fit your audio. You can read more about these parameters here.

python3 samples_to_tsne.py \
  -in "output/variety-stage/samples_grid.csv" \
  -dir "output/variety-stage/media/" \
  -components 2 \
  -angle 0.1 \
  -cache "tmp/variety-stage_features.p" \
  -threads 4 \
  -rate 50
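
Stripped of the audio-feature extraction, the reduction step above resembles this scikit-learn sketch. The random features stand in for real per-clip audio features, and the parameter values mirror the -angle and -rate flags:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(64, 20))  # 64 clips x 20 audio features

# angle trades accuracy for speed; learning_rate matches -rate
model = TSNE(n_components=2, angle=0.1, learning_rate=50,
             perplexity=5, init="random", random_state=0)
positions = model.fit_transform(features)  # one (tsne, tsne2) pair per clip
```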

We're going to repeat the samples_to_tsne.py step to determine the color for each clip, but using 3 dimensions (for RGB) instead of 2. This should be much faster since we cached the features in the previous step. This will add three columns: color, color2, and color3.

python3 samples_to_tsne.py \
  -in "output/variety-stage/samples_grid.csv" \
  -dir "output/variety-stage/media/" \
  -components 3 \
  -prefix "color" \
  -angle 0.1 \
  -cache "tmp/variety-stage_features.p" \
  -threads 4 \
  -rate 50

Now we're going to convert the tsne and tsne2 columns into fixed grid positions. This uses the Raster Fairy library.

python3 samples_to_grid.py \
  -in "output/variety-stage/samples_grid.csv" \
  -grid "64x64"

Next we will generate visual "fingerprints" for each clip as a visual representation of the audio.

python3 samples_to_fingerprints.py \
  -in "output/variety-stage/samples_grid.csv" \
  -dir "output/variety-stage/media/" \
  -out "tmp/variety-stage_fingerprints.p" \
  -log

Finally we will move the appropriate assets (images, audio files, data files) to the Citizen DJ App.

python3 samples_to_sprite.py \
  -in "output/variety-stage/samples_grid.csv" \
  -dir "output/variety-stage/media/" \
  -id "variety-stage" \
  -outaud "/full/path/to/citizen-dj/audio/sprites/{uid}/{uid}.mp3" \
  -outdat "/full/path/to/citizen-dj/data/spritedata/{uid}.json" \
  -outimg "/full/path/to/citizen-dj/img/sprites/{uid}.png" \
  -fingerprints "tmp/variety-stage_fingerprints.p" \
  -colorful

B. For the "Remix" collection interface

First, generate individual audio clip files for each sample with a max duration of 1 second.

python3 samples_to_files.py \
  -in "output/variety-stage/samples.csv" \
  -dir "output/variety-stage/media/" \
  -out "output/variety-stage/clips/%s.wav" \
  -dout "output/variety-stage/samples_clips.csv" \
  -maxd 1000 \
  -threads 3

Normalize audio so that no clip is significantly louder than another.

python3 normalize_audio.py \
  -in "output/variety-stage/samples_clips.csv" \
  -dir "output/variety-stage/clips/" \
  -out "output/variety-stage/clips_normalized/" \
  -group "sourceFilename"
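
The normalization idea, sketched with NumPy (the actual script works on audio files via Pydub; `normalize_peak` is a hypothetical helper):

```python
import numpy as np

def normalize_peak(samples, target=0.95):
    """Scale a float audio buffer so its peak amplitude sits at `target`,
    so that no clip is dramatically louder than another."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silence stays silent
    return samples * (target / peak)
```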

Convert audio to .mp3 and move to the Citizen DJ App.

python3 convert_audio.py \
  -in "output/variety-stage/clips_normalized/*.wav" \
  -out "/full/path/to/citizen-dj/audio/collections/variety-stage/%s.mp3" \
  -overwrite

Generate data for each clip for use in the app:

python3 csv_to_json.py \
  -in "output/variety-stage/samples_clips.csv" \
  -props "id,sourceFilename,sourceStart,phrase" \
  -groups "sourceFilename" \
  -out "/full/path/to/citizen-dj/data/sampledata/variety-stage.json" \
  -light

Next we will add some item-level metadata for display in the app. Generate a "year" column based on an item's date:

python3 meta_to_meta.py \
  -in "output/variety-stage/items.csv" \
  -key "date" \
  -pattern "([12][0-9]{3}).*" \
  -features "year"
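
The pattern above pulls a leading four-digit year (1000–2999) from the date string; in plain Python it behaves like this (`extract_year` is a hypothetical helper):

```python
import re

def extract_year(date_str, pattern=r"([12][0-9]{3}).*"):
    """Return the leading four-digit year from a date string, or ''."""
    m = re.match(pattern, date_str)
    return m.group(1) if m else ""
```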

Create an embed url based on the item url:

python3 update_meta.py \
  -in "output/variety-stage/items.csv" \
  -key "url" \
  -rkey "embed_url" \
  -find "(.*)" \
  -repl "\1/?embed=resources"
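
This is a simple regex append; the equivalent in plain Python looks like the following (`make_embed_url` is hypothetical; count=1 keeps re.sub from also replacing the empty match at the end of the string):

```python
import re

def make_embed_url(url):
    """Append the embed suffix, mirroring the -find/-repl pair above."""
    return re.sub(r"(.*)", r"\1/?embed=resources", url, count=1)
```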

Add item-level metadata to the app:

python3 csv_to_json.py \
  -in "output/variety-stage/items.csv" \
  -props "title,filename,year,contributors,subjects,url,embed_url" \
  -out "/full/path/to/citizen-dj/data/metadata/variety-stage.json" \
  -filter "phrases>0" \
  -lists "contributors,subjects" \
  -light

C. For the "Use" collection interface (sample packs)

Generate a sample pack.

python3 make_sample_pack.py \
  -basedir "output/variety-stage/" \
  -dir "output/variety-stage/media/" \
  -cdata "/full/path/to/citizen-dj/_use/variety_stage.md" \
  -idata "items.csv" \
  -pdata "phrasedata/%s.csv" \
  -sdata "samples.csv" \
  -id "id" \
  -provider "loc.gov" \
  -cid "variety-stage-sound-recordings-and-motion-pictures" \
  -out "output/samplepack_variety-stage/"

Move the sample pack and metadata to the app.

python3 sample_pack_to_json.py \
  -idata "output/variety-stage/items.csv" \
  -bout "/full/path/to/citizen-dj/" \
  -sdir "output/samplepack_variety-stage/" \
  -id "id" \
  -cid "variety-stage"

Now you have all the necessary assets in the citizen-dj app. Return to the citizen-dj app directory and run the app:

bundle exec jekyll serve

This will generate a static website with your new collection in the _site folder which you can view at localhost:4000

Customizing the content and style

This is more or less a basic Jekyll static website, so you can refer to the Jekyll documentation for how the content is organized. Some key areas of note:

  1. You need to update _config.yml with your own settings
  2. The different types of page layouts can be found in _layouts/ which have their associated stylesheet in css/ and javascript files in js/
  3. Each collection has three discrete interfaces: "explore", "remix", and "use". So each collection has one page per interface, found in _explore/, _remix/, and _use/ respectively (see "Creating a new collection" above for how to add a new collection)


citizen-dj's Issues

Add rights explanations

In addition to a rights statement, provide simple explanations about how/why the collections are free to use

Share button link does not reproduce pattern accurately

Clicking the share button produces a link you can copy & paste into a browser to replicate the pattern. Using that link does not reproduce the pattern correctly, however.

Original pattern:
(screenshot)

Pattern "reproduced" by using the Share link:
(screenshot)

This is the link generated by Share:
http://citizendj.labs.loc.gov.s3-website-us-east-1.amazonaws.com/loc-variety-stage/remix/?patternEdits=0%3D7%2C4%2C12%261%3D3%2C11%2C9%263%3D10%264%3D2%2C10%2C14%2C12%2C6%2C4%265%3D8%2C14%2C4%266%3D2%2C6%267%3D15%2C14%2C6&bpm=93&itemId=00694051&itemStart=204730&drumName=Akai%20MPC&patternName=80s%20hip%20hop%20pattern%20A%20%5B1%5D

Make mobile-friendly

Ensure app works on most mobile devices. Need to test:

  • Small screens
  • Tablets
  • Browsers: Safari on iDevices, Chrome, FF
  • Touch devices

Clip Duration slider value changes the size of the window depending on decimal places

When using the Clip Duration slider in the Remix mode, the value changes the size of the window as well as the placement of the word "duration" in relation to the word "clip" which causes a visual jittery effect.

If the value has zero, one, or two decimal places (3, 1.3, 0.78), the word "duration" is next to the word "clip":
(screenshot)

But if the value has 3 decimal places, the word "duration" moves below the word "clip", which causes all the other elements below that word to move as well (particularly the waveform):
(screenshot)

All those elements are constantly moving up and down as you change the slider value as a result of this. Perhaps just have the slider value always read with 3 decimals regardless (1.000, 1.010, 1.100 etc)

I noticed a similar issue with the clip start slider, but that only affects the slider track moving back and forth as the decimal values change.

I would also like to suggest reducing the amount of time for the clip duration slider to allow for easier fine tuning of samples. It seems unlikely someone would want to have a 10-20 second sample when they only have 4 beats to work with. Maybe 5 seconds at most to allow for easier fine tuning with the slider. It seems more likely someone would prefer to be able to easily shave a couple decimal points off of the duration with the slider. I realize this can be done with the keyboard arrows but that may not be obvious to everyone.

Love the project!!!

Allow for longer remix than 16 beats

This is awesome! I couldn't figure out if it was possible to make something longer than 16 beats. A slider or ability to add additional "groups" would be great.

Search for a particular spoken word or phrase

Hi Brian,

I'm loving the Citizen DJ project! Thanks for the great work. I'd like to suggest an enhancement to consider for a future iteration.

User story: As a hip hop producer, I want the ability to search for a particular word or phrase across the collection of audio files, in order to find samples containing the word/phrase to use in a beat

Example: something like this: https://getyarn.io/

Not sure whether this is feasible, but if there were a way to perhaps use an open source speech recognition engine with support for keyword spotting, it would be nice to have this ability as a producer.

Zoom in/out?

Is there an obvious mousing guide for how to zoom in and out of the sound explore window? In Safari, it defaults to zoomed in, and it's annoying not to be able to first drag through all the sounds quickly. In the video demo, zooming in and out happens constantly, but nothing obvious told me that Command+scroll-up/down is how to navigate. I would strongly consider making that front and center, and defaulting to zoomed out.

Downloaded audio is distorted

Hello,

I love this project! However, when I downloaded the audio, and import it into Ableton Live, the audio is heavily distorted. Usually, Ableton is really good at handling all types of audio, so I am stumped. It plays great in iTunes, but sounds broken in Ableton. I even changed the bit and sample rates of the Ableton file to match those of the audio content, and it still did not help. Any ideas?

Keyboard Accessibility Settings

Would love it if you could hit "space" or something to play a sample instead of having to navigate to the "play in context" button in the bottom right hand corner. It wouldn't be a lot, but would make searching for sounds so much easier.

Just a small suggestion, but I think it would make all the difference.

Thanks for your work on this project, one of my favorite things I've seen on the Internet lately!!

No item meta in output/variety-stage/items/%s.json

I'm working through the README.md file, and attempting to run the following 3 steps:

Next we will download metadata for each item in the query results:

python3 ingesters/loc/download_metadata.py \
  -in "output/variety-stage/pages/page_*.json" \
  -out "output/variety-stage/items/%s.json"

And compile the items into a .csv file:

python3 ingesters/loc/collect_metadata.py \
  -in "output/variety-stage/items/%s.json" \
  -out "output/variety-stage/items.csv"

Then download the media assets for each item (this can take a while for large collections or collections with large media files):

python3 ingesters/loc/download_media.py \
  -in "output/variety-stage/items.csv" \
  -out "output/variety-stage/media/"

The first step worked:

$ python3 ingesters/loc/download_metadata.py \
>   -in "output/variety-stage/pages/page_*.json" \
>   -out "output/variety-stage/items/%s.json"
Found 1 files
Reading query data...
Read 11 items

(it downloaded a bunch of files and wrote them to my folder):

(screenshot)

But the 2nd step can't find any metadata:

python3 ingesters/loc/collect_metadata.py \
>   -in "output/variety-stage/items/%s.json" \
>   -out "output/variety-stage/items.csv"
Found 1 files
Reading metadata...
No item meta in output/variety-stage/items/%s.json
Found 0 valid items with assets
Wrote 0 rows to output/variety-stage/items.csv

Subsequent runs try to read from the csv as well:

python3 ingesters/loc/collect_metadata.py \
>   -in "output/variety-stage/items/%s.json" \
>   -out "output/variety-stage/items.csv"
Found 1 files
Existing file found.
Read 0 rows from output/variety-stage/items.csv
Reading metadata...
No item meta in output/variety-stage/items/%s.json
Found 0 valid items with assets
Wrote 0 rows to output/variety-stage/items.csv

Perhaps I should ask @beefoo for help in https://github.com/beefoo/media-tools , but I'm following the README from this repo, so figured I'd post here.

Great work by the way!!! The site is amazing. I'm making a little editor for it and will post my work when I'm done.

Let me know if you need anything!

Question about National Jukebox Collection samples and their inclusion in Citizen DJ

Hi. I'm a fan of Citizen DJ, and I had a question about it that prompted me to make a GitHub account just to ask this one question. There's a lot of really cool samples to explore and use in music projects of various genres. The acoustical era recordings from the Variety Stage and Inventing Entertainment collections for example are full of vintage lo-fi character and are practically tailor-made for lo-fi music.

Which is why I was excited for 2022, as at the bottom of the Citizen DJ homepage, there is a teaser for new samples arriving in 2022. Clicking on "Learn more" shows an explanation of what is coming in 2022, namely over 10000 sound recordings from a collection called the "National Jukebox Collection" that will become public domain due to copyright expiration. The thing is though, as of writing this (January 4th, 2022), those samples are in the public domain, but the page is still written like this hasn't happened yet, as the page asks you to "Check back here on January 1, 2022!"

I figured that maybe the 1922 and earlier recordings would be available for download on the Library of Congress website, as they weren't available before presumably due to them still being copyrighted, but the Jukebox samples that are from 1922 and earlier aren't available for download either, even though they are now in the public domain. To be clear, I used the link provided on the Citizen DJ webpage that automatically filters out recordings from after 1922 to make sure I wasn't trying to download recordings from 1923 onwards.

I decided to check the Citizen DJ GitHub to see what was going on, and on the thread titled "Search for a particular spoken word or phrase" (I'm not using a link because I'm not sure if GitHub will permit me posting links), Brian stated that Citizen DJ is currently frozen due to him departing from the project in October 2020, I assume this is why Citizen DJ wasn't updated in 2022 with the National Jukebox samples. If it's okay for me to ask, I'd love some clarification as to the status of Citizen DJ and whether the National Jukebox samples will be added to Citizen DJ.

Is MIDI learn possible? #suggestion

Hello,
Just discovering your app (thanks to Joo Won Park): lots of fun and a cool sample collection!
I know Chrome supports MIDI, but I'm not sure about MIDI learn.
Do you have any plans for this?

Thanks

Volume control for easier system sound gain staging, “Explore” additional filters, option to download filtered results in explore

I find the Explore tab of each collection so inspiring and, especially with the musical note filter engaged, I’ve started to see it as an XY pad-like instrument that can be played by scrubbing with my mouse. I’ve started to route the system audio from the Explore tab to virtual channels that then get routed/re-sampled into Logic Pro X. This allows me the flexibility of capturing all the live tweaks, rhythms, and changes I do in Explore (or Remix) sessions, for as long a duration I want, which can be further manipulated in my DAW. With this in mind, here are some features that would facilitate this kind of resampling workflow:

  1. a volume/output knob on the Explore and Remix interfaces would help with properly gain staging the system sound. Right now, it seems like the audio is clipping when quickly scrubbing through the snippets. (I lowered the volume of the virtual channels in my audio interface, but still hear the clipping with quick scrubbing. Single, slow clicks seem fine.)

  2. Adding an option for multiple filters in the Explore tab, if even just one more, would take its musical potential to another level. This way, you can have multiple filtered notes/subjects in your pool of sounds.

  3. A filter for drum sounds—kick, snare, hi-hat, ride, tom, percussion—would be great.

  4. I figure this is complicated, but while I’m dreaming: an option to download just the snippets of sounds from a filtered Explore tab would be great (e.g. a tab filtered to display only snippets of an A note would provide an option to download just these A note sounds.) This would give folks the flexibility to use a focused set of sounds (in a specific key or texture) in their DAW.

Pardon the length and thank you for gifting the world with this project. It has inspired me to not only explore sampling, but also the LOC’s collections!

Save current work

A person can create a groove but is unable to save their work.

Or maybe I just didn't see the option.

I'm using a smart phone...

[Feature Request] Export Loop Remix Option

Hi there,

I would really benefit from being able to download a remix as-is. Just getting a .wav file would be extremely helpful!

Today I exported a loop by recording my headphone output via Audacity, and then chopped it up the best I could but it took a while and I couldn't get a perfect loop.

Fantastic project you've got here. Great work!

Audio Files are Distorted when imported to Ableton Live

(screenshot)

When attempting to import any audio sample into Ableton Live 10, the waveforms are completely distorted, like above. This might be caused by some frequency compression, either on the site or in the original export of the files. I attempted this with multiple samples from different audio collections. You can see the exact sample for the image above in the bottom left corner.

Acceptance Criteria:

  • Downloaded audio files can be properly played back in Ableton Live.

Make length of sample obvious

As a DJ or music producer browsing the sound samples, I want to know the length of a sample so that I can anticipate how long of a sample I might hear when I press play.

Gosh that was hard to write as a user story. If all the samples are the same length, you could add a note at the top of the page. Also I'm very curious if you'll share somewhere on the pages (it's highly likely I missed it, or you'll share when the project is "complete") how the samples were generated.

Acceptance criteria:

  • When browsing on the page of samples, I can tell at a glance how long a sample is before I download it.
