jarredou / MVSEP-MDX23-Colab_v2
This project was forked from zfturbo/mvsep-mdx23-music-separation-model.
Colab adaptation of the MVSep model for the MDX23 music separation contest
I wanted to find the best settings for separating music and vocals. My goal is an instrumental with the least vocal residue, and I want it taken from the instrum2
output, since that one gives better results.
Does anyone know the right settings for this?
Also, I found that the models are downloaded to /root/.cache/hgxxxx. Is there any way to change the directory the models are loaded from? I mean, specify a model directory: I know the repo has a models folder, but some models don't seem to end up in it.
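One possible workaround, assuming the downloads go through the Hugging Face Hub (as the timm weights in this repo do): the Hub cache location can be redirected with the `HUGGINGFACE_HUB_CACHE` environment variable. This is a sketch, not part of the repo; the target path is hypothetical.

```python
import os

# Redirect the Hugging Face Hub cache before any model download runs.
# This must be set before huggingface_hub is imported by the inference
# script, otherwise the default ~/.cache location is still used.
os.environ["HUGGINGFACE_HUB_CACHE"] = "/content/models_cache"  # hypothetical path

print(os.environ["HUGGINGFACE_HUB_CACHE"])
```

Many Hub loaders also accept an explicit `cache_dir=` argument per call, which can be used instead of the environment variable.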
Traceback (most recent call last):
File "/$path/mvsep_v2/inference.py", line 978, in <module>
predict_with_model(options)
File "/$path/mvsep_v2/inference.py", line 862, in predict_with_model
result, sample_rates = model.separate_music_file(audio.T, sr, i, len(options['input_audio']))
File "/$path/mvsep_v2/inference.py", line 699, in separate_music_file
vocals_high = lr_filter(vocals3.T, 12000, 'highpass')
UnboundLocalError: local variable 'vocals3' referenced before assignment
branch: v2.4
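For what it's worth, an `UnboundLocalError` like this usually means `vocals3` is only assigned inside a conditional branch that was skipped for this configuration, so the later `lr_filter(vocals3.T, ...)` call references a name that was never bound. A minimal sketch of the failure pattern and the usual guard; `run_model3` and `lr_filter` here are stand-ins, not the repo's real functions:

```python
import numpy as np

def run_model3(audio):
    # stand-in for the real third-model inference call
    return audio * 0.5

def lr_filter(x, cutoff, mode):
    # stand-in for the real crossover filter in inference.py
    return x

def separate_sketch(audio, use_model3):
    vocals3 = None  # bind the name up front so it always exists
    if use_model3:
        vocals3 = run_model3(audio)
    if vocals3 is None:
        # Without the default above, Python raises UnboundLocalError here
        # whenever the branch that computes vocals3 was skipped.
        raise RuntimeError("vocals3 was never computed; check the options")
    return lr_filter(vocals3.T, 12000, "highpass")
```

So the first thing to check is which option combination (in this branch, v2.4) causes the code path that computes `vocals3` to be skipped.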
Thanks for sharing this SOTA ensemble method. I encountered an error while processing the first file from the input audio list and attempting to process the second one:
It seems that mdx_models1 is deleted at the end of each loop iteration:
#L484
while the models are only initialized once, before the for loop, in EnsembleDemucsMDXMusicSeparationModel():
#L358
Please let me know if I have overlooked anything. Thanks again.
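If that diagnosis is right, one fix is to rebuild the models inside the loop so each file starts from a freshly initialized state. This is only a sketch of the control-flow change; `load_models`, `DummyModel`, and `process_files` are illustrative names, not the repo's API:

```python
class DummyModel:
    """Stand-in for the real ensemble; real model loading is expensive."""
    def run(self, path):
        return "separated:" + path

def load_models(options):
    # hypothetical helper standing in for EnsembleDemucsMDXMusicSeparationModel
    return DummyModel()

def process_files(files, options):
    results = []
    for path in files:
        # Rebuild the models for every file: if they are deleted at the end
        # of each iteration but only constructed once before the loop, the
        # second file finds them missing.
        model = load_models(options)
        results.append(model.run(path))
        del model  # mirrors the per-iteration cleanup
    return results
```

The alternative, which avoids reloading weights per file, is to simply not delete the models until after the loop finishes.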
Maybe set the line below to cuda to use it with multiple GPUs?
MVSEP-MDX23-Colab_v2/inference.py
Line 336 in 359d6c9
Getting this error when I try to run it normally:
/content/MVSEP-MDX23-Colab_v2
GPU use: 0
Traceback (most recent call last):
File "/content/MVSEP-MDX23-Colab_v2/inference.py", line 20, in <module>
from demucs.states import load_model
ModuleNotFoundError: No module named 'demucs'
Thanks in advance for any help.
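The `ModuleNotFoundError` above means the demucs package is simply not installed in the current runtime; a fresh Colab session loses everything installed in a previous one, so the install cell has to be rerun first. A small guard like the following (a generic helper, not part of the repo) makes the dependency check explicit:

```python
import importlib.util
import subprocess
import sys

def ensure_package(name):
    """Install `name` with pip if it cannot be imported.
    Returns True if an install was triggered, False if already present."""
    if importlib.util.find_spec(name) is not None:
        return False
    subprocess.check_call([sys.executable, "-m", "pip", "install", name])
    return True

# e.g. call ensure_package("demucs") in the notebook before running inference.py
```

In practice on Colab, rerunning the notebook's setup/installation cell before the separation cell resolves this.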
Thank you, teacher: this combined model works very well. The only drawback is that there is a bit of vocal residue in the separated accompaniment, plus some hissing and suction-like noise. Is there any solution?
I've used this colab for months now and it has always worked without any problems until now
I just got this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 537, in _make_request
response = conn.getresponse()
File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 461, in getresponse
httplib_response = super().getresponse()
File "/usr/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/usr/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.10/http/client.py", line 279, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.10/socket.py", line 705, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.10/ssl.py", line 1303, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.10/ssl.py", line 1159, in read
return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 470, in increment
raise reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.10/dist-packages/urllib3/util/util.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen
response = self._make_request(
File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 539, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 371, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py", line 1238, in hf_hub_download
metadata = get_hf_file_metadata(
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py", line 1631, in get_hf_file_metadata
r = _request_wrapper(
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py", line 385, in _request_wrapper
response = _request_wrapper(
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py", line 408, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_http.py", line 67, in send
return super().send(request, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 532, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: (ReadTimeoutError("HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)"), '(Request ID: 0063d7df-9071-450a-b915-46992e4e873d)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/content/MVSEP-MDX23-Colab_v2/inference.py", line 816, in <module>
predict_with_model(options)
File "/content/MVSEP-MDX23-Colab_v2/inference.py", line 707, in predict_with_model
model = EnsembleDemucsMDXMusicSeparationModel(options)
File "/content/MVSEP-MDX23-Colab_v2/inference.py", line 443, in __init__
self.model_vl = Segm_Models_Net(config_vl)
File "/content/MVSEP-MDX23-Colab_v2/modules/segm_models.py", line 89, in __init__
self.unet_model = smp.Unet(
File "/usr/local/lib/python3.10/dist-packages/segmentation_models_pytorch/decoders/unet/model.py", line 71, in __init__
self.encoder = get_encoder(
File "/usr/local/lib/python3.10/dist-packages/segmentation_models_pytorch/encoders/__init__.py", line 55, in get_encoder
encoder = TimmUniversalEncoder(
File "/usr/local/lib/python3.10/dist-packages/segmentation_models_pytorch/encoders/timm_universal.py", line 20, in __init__
self.model = timm.create_model(name, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/timm/models/_factory.py", line 114, in create_model
model = create_fn(
File "/usr/local/lib/python3.10/dist-packages/timm/models/maxxvit.py", line 2307, in maxvit_large_tf_512
return _create_maxxvit('maxvit_large_tf_512', 'maxvit_large_tf', pretrained=pretrained, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/timm/models/maxxvit.py", line 1807, in _create_maxxvit
return build_model_with_cfg(
File "/usr/local/lib/python3.10/dist-packages/timm/models/_builder.py", line 393, in build_model_with_cfg
load_pretrained(
File "/usr/local/lib/python3.10/dist-packages/timm/models/_builder.py", line 186, in load_pretrained
state_dict = load_state_dict_from_hf(pretrained_loc)
File "/usr/local/lib/python3.10/dist-packages/timm/models/_hub.py", line 188, in load_state_dict_from_hf
cached_file = hf_hub_download(hf_model_id, filename=filename, revision=hf_revision)
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py", line 1371, in hf_hub_download
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
Any help would be appreciated.
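The root cause in the traceback above is a transient read timeout talking to huggingface.co while timm fetches the pretrained MaxViT weights, so simply rerunning the cell often succeeds. For brief outages, a generic retry wrapper (not part of the repo) can also help:

```python
import time

def retry(fn, attempts=3, delay=5.0, exceptions=(OSError,)):
    """Call fn(), retrying up to `attempts` times on the given exceptions.
    Re-raises the last error if every attempt fails."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == attempts:
                raise
            time.sleep(delay)

# e.g. wrap the download call that timed out (names from the traceback):
# retry(lambda: hf_hub_download(hf_model_id, filename=filename),
#       exceptions=(Exception,))
```

If the timeout persists, it points to a network problem between the Colab runtime and huggingface.co rather than a bug in this repo.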
Hi everyone. Using the link to the v2.3 Colab notebook and running the separation with vocals_instru_only unchecked, the code terminates as if interrupted by Ctrl-C, right after the htdemucs_mmi processing. Even copy-pasting the simple instruction mentioned in the README produces the same behavior. I'm running the notebook with a T4 GPU. I'm also noticing very high usage of both RAM and GPU RAM.
onnxruntime-gpu does not seem to be available for ARM. Is there any chance this might work on an ARM processor without a GPU?
I don't mind if it is slow.
Hi! I really enjoyed your Colab Notebook and I have a couple of questions:
Thank you in advance for your answer!