
mvsep-mdx23-music-separation-model's People

Contributors

ma5onic, zfturbo


mvsep-mdx23-music-separation-model's Issues

Vocal Stem Quality Issue - Trained on Lossy?

Hello - this new model beats htdemucs_ft hands down, except that the vocal stem appears to be trained on lossy source material.

The Drum, Bass, and "Other" stems are a gargantuan improvement over htdemucs_ft; I congratulate and thank you.

The vocal stem, however, has a band of mis-identified/mis-assigned frequencies from 15 to 18 kHz, and then the response goes completely dead above 18 kHz.

You can examine this issue by using Spek (audio spectral analyzer) https://www.spek.cc
Please compare vocal stem outputs from regular htdemucs_ft, vs. vocal stem outputs from this new model - you can see side by side that this new model has spectral band issues, most likely from being trained on lossy/compressed source material.
To be clear: there is a band of "garbage frequencies" that don't belong in the vocal stem, from 15 to 18 kHz, and then the vocal stem goes totally dead above 18 kHz, whereas regular htdemucs_ft retains astonishing quality and accuracy all the way up to 22 kHz.
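Spek shows this visually; the same check can also be scripted. A minimal sketch with NumPy, using synthetic tones as stand-ins for real stem audio (the test signals and cutoff are invented for illustration):

```python
import numpy as np

def energy_above(signal, sr, cutoff_hz):
    """Fraction of total spectral energy at or above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return spectrum[freqs >= cutoff_hz].sum() / spectrum.sum()

sr = 44100
t = np.arange(sr) / sr
# full-band stand-in: 1 kHz tone plus a quieter 19 kHz tone
full_band = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 19000 * t)
# "lossy-trained" stand-in: same content with everything above 18 kHz gone
band_limited = np.sin(2 * np.pi * 1000 * t)

print(round(energy_above(full_band, sr, 18000), 2))     # 0.2
print(round(energy_above(band_limited, sr, 18000), 3))  # 0.0
```

Running this over the vocal stems from both models would make the 18 kHz cutoff measurable rather than just visible in a spectrogram.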

Not complaining - this model is a massive improvement over htdemucs_ft on the other stems - but if you would please consider re-training the vocal stem so that it at least matches regular htdemucs_ft, or slightly exceeds it, that would be fantastic.
I want this model to be the new gold standard! Just a bit of tweaking and it can be.

Is there an alternate version of this model that IS trained on full-bandwidth vocal material?

This model is so close to beating htdemucs_ft hands down - but the vocal stem needs to be fixed.
Thank you for your efforts!

Small amount of bleed

Truly an excellent implementation of stem separation!

But I believe there is a problem in the mixing. Seems there is a small amount of bleed in some stems from the others. One good way to hear this is trying separation on the album version of Bon Jovi's "You Give Love A Bad Name". You can hear the opening acapella in all the stems, and the result is amplified significantly in the instrumental mix when all those are summed together.

Likewise, the vocal stem has a low level of the sum of the other stems mixed in it as well.

I don't think this is an issue of identifying frequencies for a stem, rather a mixing issue perhaps when using complementary phase cancellation.
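A toy numeric sketch of that mechanism (all signals synthetic; the 5% bleed figure is invented for illustration): if the instrumental is produced as mix minus estimated vocals, any bleed in the vocal estimate shows up, phase-inverted, in the instrumental.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical ground-truth stems
vocals = rng.standard_normal(44100)
others = rng.standard_normal(44100)
mix = vocals + others

# a vocal estimate with 5% bleed from the other stems
est_vocals = vocals + 0.05 * others
# instrumental via complementary phase cancellation
est_inst = mix - est_vocals

# the error in the instrumental is exactly the vocal bleed, inverted
error = est_inst - others
corr = np.corrcoef(error, others)[0, 1]
print(round(corr, 2))  # -1.0
```

That would be consistent with hearing the a cappella opening leak into every stem and sum back up in the instrumental mix.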

Cheers!

Question about multiple inferences in the Demucs model

First of all, thank you for creating this impressive project! The work you've done is truly remarkable.

I had a question while looking through the codebase. I noticed that the Demucs model is called twice during the inference process.

vocals_demucs = 0.5 * apply_model(model, audio, shifts=shifts, overlap=overlap)[0][3].cpu().numpy()

vocals_demucs += 0.5 * -apply_model(model, -audio, shifts=shifts, overlap=overlap)[0][3].cpu().numpy()

Is there a specific reason that two separate calls are required? I'm curious if this is some sort of optimization trick or if there is another technical motivation behind it.
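For what it's worth, that pattern looks like a polarity-inversion test-time augmentation: the input is also processed with its sign flipped, and the two results are averaged. A minimal sketch with a toy stand-in model (the 0.9 gain and 0.01 bias are invented) shows why averaging f(x) with -f(-x) cancels any error component that does not flip sign with the input:

```python
import numpy as np

def toy_separator(x):
    # stand-in for apply_model: a linear part plus a sign-independent artifact
    return 0.9 * x + 0.01

x = np.linspace(-1.0, 1.0, 8)
target = 0.9 * x  # what an artifact-free separator would return

single = toy_separator(x)
# the two-call pattern from the snippet above
averaged = 0.5 * toy_separator(x) + 0.5 * -toy_separator(-x)

print(round(float(np.max(np.abs(single - target))), 3))    # 0.01 (bias survives)
print(round(float(np.max(np.abs(averaged - target))), 3))  # 0.0  (bias cancels)
```

Whether that is the authors' actual motivation, only they can confirm; the cost is one extra forward pass per track.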

Thanks again for building such an amazing tool!

The GUI uses the CPU instead of the GPU while "Use CPU" is unchecked

I am using the GUI and the process was extremely slow: it was stuck at 0% for a while and didn't advance past 5%. I waited 40+ minutes.

So I found that CPU usage is maxing out, while GPU usage is none.

The console output while the GUI is running confirms that it is not using the GPU.

GPU use: 0
Use low GPU memory version of code
Use device: cpu

I made sure that "Use CPU instead of GPU" is unchecked. And I am using an RTX 3060 Ti, which should be enough to run this.

Attempting and failing to run using Parallels on Mac (is it possible?)

I'm a total noob here on GitHub and inexperienced with PCs and any kind of coding, so forgive my ignorance! I'm just big into stem separation, so I'm looking forward to trying out this new tool.

I'm hoping to run this on an M1 MacBook Pro running Windows 11 via Parallels, as I don't own a PC.

I managed to get past the first stage, as MVSep downloaded everything required when first launching run.bat,

but now when I open the file I get this message (I had to screen-record to read it, as the terminal closed immediately after displaying it):

RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

Obviously, running on an M1 Mac, I have no NVIDIA GPU.

Does anyone know a way around this, or whether it's even possible to run this on a Mac with Parallels (or any other method)?

I also tried downloading Python and opening the web UI with it, but it closes straight away.

thanks for your help!

Qt5Core.dll Error

Any idea what's causing this error and the application crash during separation? It worked yesterday; today I can't get a run to complete before hitting this error.

Faulting application name: python.exe, version: 3.10.10150.1013, time stamp: 0x6419fa4b
Faulting module name: Qt5Core.dll, version: 5.15.2.0, time stamp: 0x5fa4dd3b
Exception code: 0xc0000409
Fault offset: 0x00000000000204e8
Faulting process ID: 0x19C4
Faulting application start time: 0x1D9A5FFEC4B7850
Faulting application path: C:\Users\skhoc\Documents\Standalones\MVSep-MDX23_v1.0.1\Miniconda\python.exe
Faulting module path: C:\Users\skhoc\Documents\Standalones\MVSep-MDX23_v1.0.1\Miniconda\lib\site-packages\PyQt5\Qt5\bin\Qt5Core.dll
Report ID: a3be6840-c44f-4d04-ab42-7bd39e135409
Faulting package full name: 
Faulting package-relative application ID: 

Won't process past 20%

WIN 10 22H2
11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
16GB RAM

GPU: RTX 3060

I tried with GPU and CPU.

With the CPU, it freezes at 20% (it takes about 5 minutes to get to 20%) and nothing happens. When you click anything at all, the whole thing crashes.

With the GPU, the whole thing crashes, including the cmd prompt window, when reaching 20%.

See CMD Prompt logs:

CPU Rendering:

C:\Users\****\MVSep-MDX23>"./Miniconda/python.exe" gui.py
GPU use: 0
Use device: cpu
Model path: C:\Users\****\MVSep-MDX23/models/Kim_Vocal_1.onnx
Device: cpu Chunk size: 200000000
Model path: C:\Users\****\MVSep-MDX23/models/Kim_Inst.onnx
Device: cpu Chunk size: 200000000
Go for: C:/Users/****/OneDrive/Musique/Album/Sean Price, M-Phazes/[E] Land of the Crooks [148615077] [2013]/01 - Sean Price, M-Phazes - Bag of Shit (feat. Loudmouf Choir)(Explicit).flac
Input audio: (2, 7537326) Sample rate: 44100
C:\Users\****\Downloads\MVSep-MDX23\inference.py:128: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ..\torch\csrc\utils\tensor_new.cpp:248.)
  mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(device)

GPU Rendering:

C:\Users\****\Downloads\MVSep-MDX23>"./Miniconda/python.exe" gui.py
GPU use: 0
Use device: cuda:0
Model path: C:\Users\****\Downloads\MVSep-MDX23/models/Kim_Vocal_1.onnx
Device: cuda:0 Chunk size: 1000000
Model path: C:\Users\****\Downloads\MVSep-MDX23/models/Kim_Inst.onnx
Device: cuda:0 Chunk size: 1000000
Go for: C:/Users/****/OneDrive/Musique/Album/Sean Price, M-Phazes/[E] Land of the Crooks [148615077] [2013]/01 - Sean Price, M-Phazes - Bag of Shit (feat. Loudmouf Choir)(Explicit).flac
Input audio: (2, 7537326) Sample rate: 44100
C:\Users\****\Downloads\MVSep-MDX23\inference.py:128: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ..\torch\csrc\utils\tensor_new.cpp:248.)
  mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(device)
2023-05-12 13:20:55.9949877 [E:onnxruntime:, sequential_executor.cc:494 onnxruntime::ExecuteKernel] Non-zero status code returned while running FusedConv node. Name:'Conv_3' Status Message: D:\a_work\1\s\onnxruntime\core\framework\bfc_arena.cc:368 onnxruntime::BFCArena::AllocateRawInternal Failed to allocate memory for requested buffer of size 603979776

Traceback (most recent call last):
  File "C:\Users\****\Downloads\MVSep-MDX23\gui.py", line 36, in run
    predict_with_model(self.options)
  File "C:\Users\****\Downloads\MVSep-MDX23\inference.py", line 479, in predict_with_model
    result, sample_rates = model.separate_music_file(audio.T, sr, update_percent_func, i, len(options['input_audio']))
  File "C:\Users\****\Downloads\MVSep-MDX23\inference.py", line 344, in separate_music_file
    sources1 = demix_full(
  File "C:\Users\****\Downloads\MVSep-MDX23\inference.py", line 160, in demix_full
    sources = demix_base(mix_part, device, models, infer_session)
  File "C:\Users\****\Downloads\MVSep-MDX23\inference.py", line 133, in demix_base
    res = _ort.run(None, {'input': stft_res.cpu().numpy()})[0]
  File "C:\Users\****\Downloads\MVSep-MDX23\Miniconda\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running FusedConv node. Name:'Conv_3' Status Message: D:\a_work\1\s\onnxruntime\core\framework\bfc_arena.cc:368 onnxruntime::BFCArena::AllocateRawInternal Failed to allocate memory for requested buffer of size 603979776

Got this message when trying to run the command

I did everything I was told on the GitHub page, but it keeps saying "cannot run inference.py".

I changed "inference.py" to its full file path, but it still didn't work...

Can anybody please give me a solution?

FAIL : bad allocation

Running in CPU mode, I get this error:

D:\Downloads\MVSep-MDX23_v1.0.1\inference.py:131: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ..\torch\csrc\utils\tensor_new.cpp:248.)
  mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(device)
Traceback (most recent call last):
  File "D:\Downloads\MVSep-MDX23_v1.0.1\inference.py", line 906, in <module>
    predict_with_model(options)
  File "D:\Downloads\MVSep-MDX23_v1.0.1\inference.py", line 844, in predict_with_model
    result, sample_rates = model.separate_music_file(
  File "D:\Downloads\MVSep-MDX23_v1.0.1\inference.py", line 621, in separate_music_file
    sources1 = demix_full(
  File "D:\Downloads\MVSep-MDX23_v1.0.1\inference.py", line 163, in demix_full
    sources = demix_base(mix_part, device, models, infer_session)
  File "D:\Downloads\MVSep-MDX23_v1.0.1\inference.py", line 136, in demix_base
    res = _ort.run(None, {'input': stft_res.cpu().numpy()})[0]
  File "D:\Downloads\MVSep-MDX23_v1.0.1\Miniconda\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : bad allocation

Any idea how to solve this?

Need help figuring out where the run.bat file installs things

I made a GitHub account just to ask this: where does run.bat install its files on the C: drive? I'm trying to delete them at the moment.

This is because when I ran it, the files took up way too much space on the C: drive. The drive currently has 3.89 GB free after running it (it used to have around 5-10 GB free).

I've tried editing the .bat, adding "pause" to the end of the command line as another issue suggested, but I'm ultimately unable to see what the terminal does after it installs, because it disappears afterwards.

Any help would be massively appreciated, as I am fairly desperate to regain the space on my C: drive (and I am kind of panicking at this point).

PyTorch M1 support

Actually, this is not an issue; it is a suggestion.
A month ago, PyTorch released their new version, PyTorch 2.1. There are many changes; the one I'm most interested in is the mps backend.
The mps device enables high-performance training on GPU for macOS devices with the Metal programming framework. It introduces a new device to map machine-learning computational graphs and primitives onto the highly efficient Metal Performance Shaders Graph framework and the tuned kernels provided by the Metal Performance Shaders framework, respectively.
https://pytorch.org/docs/stable/notes/mps.html
Can you add mps device inference?
Sorry, I know it's hard, but I really need it.
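A sketch of what the device selection could look like in the inference code (the fallback order is an assumption; `torch.backends.mps` exists in recent PyTorch builds, and the guard keeps older builds working):

```python
import torch

# prefer CUDA, then Apple's Metal backend (mps), then plain CPU
if torch.cuda.is_available():
    device = torch.device("cuda:0")
elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(device.type)
x = torch.ones(4, device=device)
print(x.sum().item())  # 4.0
```

Note that not every op used by the separation models is guaranteed to be implemented on mps, so a CPU fallback path would still be needed.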

path error

Hi all, and thank you in advance.
I have this warning about PATH, but I don't know how to solve it.

Where must I add this directory to PATH?

Many thanks, and sorry again if this is a stupid question.

macOS processing won't start

Stuck here:

(base) naozumi@1-64-20-115 MVSep-MDX23_v1 % python3.8  inference.py --input_audio /Users/naozumi/Downloads/Tower\ of\ Flower.mp3 --output_folder ./results/ --cpu 
GPU use: 0
Version: 1.0.1
Options: 
input_audio: ['/Users/naozumi/Downloads/Tower of Flower.mp3']
output_folder: ./results/
cpu: True
overlap_large: 0.6
overlap_small: 0.5
single_onnx: False
chunk_size: 1000000
large_gpu: False
use_kim_model_1: False
only_vocals: False
Use low GPU memory version of code
Use device: cpu
Use Kim model 2
Go for: /Users/naozumi/Downloads/Tower of Flower.mp3
Input audio: (2, 12154611) Sample rate: 44100

Verbosity?

I have nothing against the MVSep notice etc., but could we have a verbosity setting argument?

Obviously, I can bypass it myself using stdout tweaks or grep, but it looks ugly 😛

compensation pranks

@ZFTurbo Hi, here is the first video: https://www.youtube.com/watch?v=UxPOXAlvUC4
Is it possible to get rid of extraneous detail in the MVSEP-MDX23-music-separation-model code? For example, you can hear guitar in the drum stem, and there is dirt in the bass as well.
Here is how studio.gaudiolab.io separates it; I really like the result:
Second video: https://youtu.be/p4bvD8RuU50

If you could point me at where to dig, I would be glad; I want to minimize the extraneous content in the tracks and increase the cleanliness of the separation, which would also raise the SDR. Gaudio probably uses some gating tricks to make the track perfectly clean where the signal amplitude is lowest, but I am sure some mathematical operation could greatly reduce this effect.
I have old code based on MDX-NET Main Inst, partially edited, which applies compensation; as far as I remember, that solves this problem. What do you think?

Symbol not found: __svml_cosf8_ha

C:\Users\user\Downloads\MVSep-MDX23_v1.0.1>"./Miniconda/python.exe" gui.py
GPU use: 0
Version: 1.0.1
Use low GPU memory version of code
Use device: cuda:0
Use Kim model 2
Go for: C:/Users/Jasper/Downloads/UTOPIA/TILaFURTHERaNOTICEacExplicitd.mp3
LLVM ERROR: Symbol not found: __svml_cosf8_ha

Does anyone know what the problem is here?

ModuleNotFoundError: No module named 'demucs.htdemucs'

I cloned the repository directly and installed the dependencies using pip install -r requirements.txt, but when I tried to run it, I encountered the error ModuleNotFoundError: No module named 'demucs.htdemucs'. Is there an issue with the demucs or torchaudio versions I should use? If so, which versions should I use? Thank you!

I'm not exactly sure what to do after using the run.bat

I ran it for the first time, then went to bed while it was downloading the additional requirements. When I woke up and attempted to run it again using the run.bat, nothing happened. It indicated that the requirements were met, but the program closed without any further response.

UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow.

Hello, I'm using this with a dual-GPU setup and I receive this warning in both GPU and CPU mode. Is fixing this a possible source of a speed increase?

/home/user/MVSEP-MDX23-music-separation-model/inference.py:128: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:245.)
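The warning points at inference.py building a tensor straight from a Python list of arrays. It is a real (if usually modest) cost, so whether fixing it speeds up a GPU-bound run is uncertain. The fix PyTorch suggests, sketched on dummy data:

```python
import numpy as np

# dummy stand-ins for the mix_waves chunks that inference.py accumulates
chunks = [np.zeros(8, dtype=np.float32) for _ in range(4)]

# slow path the warning complains about (element-by-element conversion):
#   torch.tensor(chunks, dtype=torch.float32)
# fast path: stack into one contiguous ndarray first, then convert:
#   torch.tensor(np.array(chunks), dtype=torch.float32)
stacked = np.array(chunks)
print(stacked.shape, stacked.dtype)  # (4, 8) float32
```

The only change needed at the warned line would be wrapping `mix_waves` in `np.array(...)` before the `torch.tensor` call.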

Output Files Are Up-converted to 32-bit Float

Hello, is there any way the default output could be 16-bit rather than up-converted to 32-bit float, please?

It would be cool if 24-bit input material retained its bit depth, but there is no point in up-converting 16-bit to 32-bit.
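Until such an option exists, the 32-bit float output can be re-quantized after the fact. A sketch using only the standard library (the filename and sample data are made up):

```python
import wave
import numpy as np

# stand-in for a separated stem: float32 stereo audio in [-1, 1]
mono = np.sin(np.linspace(0, 440 * 2 * np.pi, 44100, dtype=np.float32))
stereo = np.stack([mono, mono], axis=1)

# convert to 16-bit PCM: clip, scale to the int16 range, truncate
pcm16 = (np.clip(stereo, -1.0, 1.0) * 32767.0).astype(np.int16)

with wave.open("stem_16bit.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)       # 2 bytes per sample = 16-bit
    f.setframerate(44100)
    f.writeframes(pcm16.tobytes())  # row-major = interleaved L/R frames

with wave.open("stem_16bit.wav", "rb") as f:
    print(f.getsampwidth() * 8, "bit,", f.getnchannels(), "channels")
```

This halves the file size versus 32-bit float; note that plain truncation skips dithering, which purists may want for critical material.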
Thank you.

How do I enable CPU over GPU?

RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

I can't open the GUI because the bat file does not find a GPU (mine is AMD). I'm not sure how to enable CPU usage before the GUI opens.

I tried running the script "Miniconda\python.exe" gui.py --cpu
I checked the gui and set root['cpu'] = True
and tried setting gpu_use = "-1"

All gave me the same "no NVIDIA driver" error.
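Whatever the flag plumbing looks like, the device choice itself can be made defensively so CUDA is never touched when no driver exists. A sketch (the `requested_cpu` variable stands in for whatever the --cpu flag or checkbox sets):

```python
import torch

requested_cpu = True  # stand-in for the --cpu flag / GUI checkbox

# torch.cuda.is_available() returns False (without raising) when no
# NVIDIA driver is present, so this path never hits the driver error
if requested_cpu or not torch.cuda.is_available():
    device = torch.device("cpu")
else:
    device = torch.device("cuda:0")

print(device)  # cpu
```

The error described above suggests something in the startup path calls `.cuda()` or `.to('cuda')` before the CPU flag is consulted.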

Model Crashes When Starting the Separation

I use a Shadow Cloud PC with an RTX A4500 (20 GB VRAM), and I want to use this model to create my dataset for RVC, but for unknown reasons it crashes when I start separation. It seems like #4 is the same situation, so I suspect it doesn't use my GPU; here are the error codes displayed in CMD.

I also want to ask: how do we apply arguments like --large_gpu?

Edit: I actually decided to read the log, and it seems it fails to connect because it's unable to get the local issuer certificate. I tried fixes from YouTube and none worked.

CMD LOG:

C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip>"./Miniconda/python.exe" gui.py
GPU use: 0
Use low GPU memory version of code
Use device: cuda:0
Go for: C:/Users/Shadow/Downloads/Music/Golden Hour - JVKE | Song Cover by SuRge.wav
Input audio: (2, 9257472) Sample rate: 44100
Traceback (most recent call last):
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\urllib\request.py", line 1348, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\http\client.py", line 1282, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\http\client.py", line 1328, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\http\client.py", line 1277, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\http\client.py", line 1037, in _send_output
    self.send(msg)
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\http\client.py", line 975, in send
    self.connect()
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\http\client.py", line 1454, in connect
    self.sock = self._context.wrap_socket(self.sock,
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\ssl.py", line 513, in wrap_socket
    return self.sslsocket_class._create(
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\ssl.py", line 1071, in _create
    self.do_handshake()
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\ssl.py", line 1342, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\gui.py", line 36, in run
    predict_with_model(self.options)
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\inference.py", line 804, in predict_with_model
    result, sample_rates = model.separate_music_file(audio.T, sr, update_percent_func, i, len(options['input_audio']))
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\inference.py", line 554, in separate_music_file
    torch.hub.download_url_to_file(remote_url, model_folder + '04573f0d-f3cf25b2.th')
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\site-packages\torch\hub.py", line 611, in download_url_to_file
    u = urlopen(req)
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\urllib\request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\urllib\request.py", line 519, in open
    response = self._open(req, data)
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\urllib\request.py", line 536, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\urllib\request.py", line 496, in _call_chain
    result = func(*args)
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\urllib\request.py", line 1391, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "C:\Users\Shadow\Downloads\MDX23\MVSep-MDX23.zip\Miniconda\lib\urllib\request.py", line 1351, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)>
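The final error is a certificate-verification failure while the code downloads a model checkpoint via torch.hub, not a GPU problem. Common workarounds are pointing Python at a valid CA bundle, or, as a last resort for a one-off download, disabling verification. A sketch of both options (insecure fallback only for this single fetch; the commented `url` call is illustrative):

```python
import ssl
import urllib.request

# preferred: hand urlopen a context backed by a real CA bundle, e.g.
#   import certifi
#   ctx = ssl.create_default_context(cafile=certifi.where())
# last resort: skip verification entirely for this one download
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# urllib.request.urlopen(url, context=ctx)  # would no longer raise
print(ctx.verify_mode == ssl.CERT_NONE)  # True
```

On a managed/cloud Windows box like Shadow, the missing local issuer certificate often comes from a proxy or a stale CA store, so updating certificates may fix it without any code change.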
