
soedinglab / mmseqs2-app

MMseqs2 app to run on your workstation or servers

Home Page: https://search.foldseek.com

License: GNU General Public License v3.0

Languages: Vue 49.51% · Go 27.97% · JavaScript 19.84% · Makefile 0.94% · Shell 0.72% · CSS 0.35% · SCSS 0.26% · Awk 0.15% · EJS 0.14% · HTML 0.11%
Topics: sequence-search, electron, vue, golang, docker, docker-compose, profile-search, bioinformatics, mmseqs, foldseek

mmseqs2-app's Introduction

App and Server for MMseqs2, Foldseek and ColabFold

MMseqs2 and Foldseek are software suites to search and annotate huge sequence and structure sets. We built a graphical interface for interactive data exploration. Check out a live instance for MMseqs2 and for Foldseek. Additionally, this codebase holds the API server for ColabFold.

The application runs either:

  • on your server through docker-compose, where it can make your sequence, profile or structure databases easily accessible over the web
  • on your workstation as a cross-platform desktop application with the help of Electron

Desktop App

Head over to the release page and download the latest version. We support Linux, macOS and Windows.

Important

The desktop app is currently unmaintained. Only the backend-only mode for ColabFold and the MMseqs2 and Foldseek web servers are actively maintained. We will revisit the desktop app at some point in the future.

Adding a search database

Once the app is installed, open the Settings panel. There you can add either sequence databases in FASTA format (such as our Uniclust databases) or profile databases in Stockholm format (such as PFAM).

Web app quick start with docker-compose

Make sure you have docker, docker-compose and git installed on your server. To start the MMseqs2/Foldseek web server, execute the following commands. Afterwards, you can navigate to http://localhost:8877 to access the interface.

# clone the repository
git clone https://github.com/soedinglab/MMseqs2-App.git

# navigate to our docker recipes
cd MMseqs2-App/docker-compose

# (optional) edit the APP entry in the .env file to choose between mmseqs and foldseek

# list available databases
docker-compose run db-setup

# download PDB
docker-compose run db-setup PDB

# start the server with docker-compose
docker-compose up

By default, the server will start on port 8877. You can edit the .env file in the docker-compose directory to change this port.
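For reference, a minimal .env sketch. The variable names APP and PORT are assumptions taken from the comments and warning messages elsewhere in this document; check the .env file shipped in the docker-compose directory for the authoritative list:

```ini
# docker-compose/.env (sketch only; variable names assumed, not verified)
APP=mmseqs     # or: foldseek
PORT=8877      # host port the web server listens on
```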

Head over to the docker-compose readme for more details on running your own server, including how to add your own sequence, profile or structure databases. Take a look at the API documentation to learn how to talk to the server backend.
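As a starting point for talking to the backend, here is a minimal Python sketch. The endpoint paths (/ticket, /ticket/&lt;id&gt;) and form fields (q, database[], mode) are taken from the curl and Python examples later in this document; consult the API documentation for the full specification:

```python
# Minimal sketch of submitting a search to an MMseqs2 web-server backend.
# Endpoints and form fields mirror the examples in this document and may
# not cover the full API surface.
import json
import urllib.parse
import urllib.request


def build_payload(fasta, databases, mode="all"):
    """Build the form fields for a /ticket submission.

    The database[] key is repeated once per selected database.
    """
    return [("q", fasta), ("mode", mode)] + [("database[]", db) for db in databases]


def submit_search(base_url, fasta, databases, mode="all"):
    """POST a job; the server replies with e.g. {"id": "...", "status": "PENDING"}."""
    data = urllib.parse.urlencode(build_payload(fasta, databases, mode)).encode()
    with urllib.request.urlopen(base_url + "/ticket", data=data) as resp:
        return json.load(resp)


def job_status(base_url, ticket_id):
    """Poll the status of a previously submitted job."""
    with urllib.request.urlopen(f"{base_url}/ticket/{ticket_id}") as resp:
        return json.load(resp)
```

A job is polled until its status leaves PENDING/RUNNING, after which the results can be fetched.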

Building the desktop app

You need to have git, go (>=1.18), node, npm and make installed on your system.

Afterwards, run the following commands; the apps will appear in the build folder.

# clone the repository
git clone https://github.com/soedinglab/MMseqs2-App.git
cd MMseqs2-App

# install all dependencies
npm install

# build the app for all platforms, choose either mmseqs or foldseek
FRONTEND_APP=mmseqs npm run electron:build

mmseqs2-app's People

Contributors

gamcil, josemduarte, martin-steinegger, matthiasblum, milot-mirdita


mmseqs2-app's Issues

Switching query does not update the page

When querying multiple sequences, the queries list on the left of the page redirects to /i, where i is the index of the query sequence.
On Chrome, the page URL is updated with the selected index, but the interface freezes until I refresh the page manually.
Maybe the refresh could be triggered automatically when switching queries.

App does not work with latest mmseqs version

I am trying to replicate the commands that run in worker.go - specifically the MsaJob script that starts at:

MMSEQS="$1"

When trying to run the expandaln line:

"${MMSEQS}" expandaln "${BASE}/qdb" "${DB1}.idx" "${BASE}/res" "${DB1}.idx" "${BASE}/res_exp" ${EXPAND_PARAM}

I get the error:

Unrecognized parameter "--expand-filter-clusters". Did you mean "--expansion-mode" (Expansion mode)?

This is using mmseqs 13-45111 (the latest version) - is the app supposed to be used with an older version of mmseqs which supported this parameter? Or with a modified/dev version of it?

I encountered a similar issue with some of the other scripts.

To clarify, I have not spun up the entire app; I am just trying to run that generated bash script in isolation against the latest mmseqs command-line tool. Is there something I'm missing?

Docker images versioning

I use this fantastic app via the available Docker containers. However, only a latest tag seems to be available on Docker Hub, and it appears to track every commit on master.

For reproducibility and stability, it would help if tags were published for stable releases, so that production services could pin a specific stable tag instead of latest. Otherwise, production services can break unexpectedly when a bug is introduced.
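The pinning the issue asks for would look roughly like the following compose fragment. The image name is taken from the pull log later in this document; the tag x.y.z is a hypothetical placeholder, since no versioned tags exist yet:

```yaml
# docker-compose.yml (fragment, sketch only)
services:
  mmseqs-web-webserver:
    # Pin a release tag instead of the moving "latest"/"master" tags.
    # "x.y.z" is a hypothetical placeholder for an actual release tag.
    image: ghcr.io/soedinglab/mmseqs-app-frontend:x.y.z
```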

Invalid port ":80"

Fails to start the MMseqs2 search server:

docker-compose up

WARNING: The TAG variable is not set. Defaulting to a blank string.
WARNING: The PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './../docker-compose.yml' is invalid because:
services.mmseqs-web-webserver.ports is invalid: Invalid port ":80", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]

Full target/query alignment being truncated in web UI

Hi there, thanks for releasing such a useful and well-built piece of software! I've found what appears to be a bug in the web UI: the alignments between target and query pairs seem to be consistently missing the last line. See the attached picture. This alignment should show 376 aligned AA positions, but it is truncated at 320, at the end of the last full line. I've tested this with query/target pairs of varying lengths and see the same thing: the last partial line of the alignment is missing. Thanks in advance for taking a look!

(attached screenshot: mmseqs_web_bug)

Temporary redirect issue

Hello,

I mistakenly launched several colabfold searches in parallel from our computing cluster, and I think the MMseqs2 API has (legitimately) put us on the restricted list. Could you lift the restriction, please? I'll run the next commands sequentially rather than in parallel.

This is what I get:

curl -s "https://a3m.mmseqs.com/queue"

<html>
<head><title>307 Temporary Redirect</title></head>
<body bgcolor="white">
<center><h1>307 Temporary Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>

Sorry for the inconvenience,
Matthieu

Foldseek all database view rework

In the current "All databases" view of foldseek, results from various databases are separated. It would be beneficial to have an integrated view that ranks results from all databases by their scores. This will enable users to quickly identify the top hits regardless of their source database.
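The integrated view the issue asks for amounts to flattening the per-database hit lists and sorting by score. A minimal sketch; the dict layout used here is hypothetical, not the app's actual result schema:

```python
def rank_across_databases(results_by_db):
    """Merge per-database hit lists into one list sorted by descending score.

    results_by_db: {db_name: [{"target": ..., "score": ...}, ...]}
    (hypothetical layout for illustration)
    """
    merged = [
        dict(hit, database=db)  # keep the source database on each hit
        for db, hits in results_by_db.items()
        for hit in hits
    ]
    return sorted(merged, key=lambda h: h["score"], reverse=True)
```

Each returned hit carries a database field, so the UI could still show the source database next to every row of the integrated ranking.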

mmseqs-server: errors when called from colabfold_batch

Hi,
I tried to set up an MMseqs2 server following the steps in https://github.com/sokrypton/ColabFold/tree/main/MsaServer
The server starts up as expected.

My colabfold command line: "colabfold_batch --host-url=http://server:port fastadir outputdir"
(fastadir contains one FASTA file)
When running the command, there is some communication, but it exits with an error after a few seconds. The same command works on the "public" server, i.e. without the --host-url parameter.

I am not sure what I'm missing or what's wrong. The databases seem to work, as colabfold_search can process the same FASTA files when pointed to the databases on the file system, i.e. "colabfold_search fastadir /path/to/databases outputdir2" works.

Commands used to index the databases (according to ColabFold's instructions). Please see the content of my databases directory in the attachment.

  mmseqs tsv2exprofiledb "uniref30_2103" "uniref30_2103_db"
  mmseqs createindex "uniref30_2103_db" tmp1 --remove-tmp-files 1
  mmseqs tsv2exprofiledb "colabfold_envdb_202108" "colabfold_envdb_202108_db"
  mmseqs createindex "colabfold_envdb_202108_db" tmp2 --remove-tmp-files 1

Could you please give me a hint what's wrong in my setup? It seems that some required files do not exist. Is this a known issue/general error? Many thanks.

mmseqs-server's output:

$ ./msa-server -local -config config.ono
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 Webserver
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
2022/09/01 18:48:21 MMseqs2 worker
<internal_IP> - - [01/Sep/2022:18:49:18 +0200] "POST /ticket/msa HTTP/1.1" 200 67
createdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/job.fasta <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb

Converting sequences
[
Time for merging to qdb_h: 0h 0m 0s 388ms
Time for merging to qdb: 0h 0m 0s 215ms
Database type: Aminoacid
Time for processing: 0h 0m 0s 838ms
search <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb <internal_path>/ColabFold/MsaServer/databases/uniref30_2103 <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/tmp --num-iterations 3 --db-load-mode 2 -a --k-score 'seq:96,prof:80' -e 0.1 --max-seqs 10000

Input <internal_path>/ColabFold/MsaServer/databases/uniref30_2103 does not exist
expandaln <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb <internal_path>/ColabFold/MsaServer/databases/uniref30_2103.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res <internal_path>/ColabFold/MsaServer/databases/uniref30_2103.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_exp --db-load-mode 2 --expansion-mode 0 -e inf --expand-filter-clusters 1 --max-seq-id 0.95

Input <internal_path>/ColabFold/MsaServer/databases/uniref30_2103.idx does not exist
mvdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/tmp/latest/profile_1 <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res

Time for processing: 0h 0m 0s 0ms
lndb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb_h <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res_h

Time for processing: 0h 0m 0s 6ms
align <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res <internal_path>/ColabFold/MsaServer/databases/uniref30_2103.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_exp <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_exp_realign --db-load-mode 2 -e 10 --max-accept 100000 --alt-ali 10 -a

Input <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res does not exist
filterresult <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb <internal_path>/ColabFold/MsaServer/databases/uniref30_2103.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_exp_realign <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_exp_realign_filter --db-load-mode 2 --qid 0 --qsc 0.8 --diff 0 --max-seq-id 1.0 --filter-min-enable 100

Input <internal_path>/ColabFold/MsaServer/databases/uniref30_2103.idx does not exist
result2msa <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb <internal_path>/ColabFold/MsaServer/databases/uniref30_2103.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_exp_realign_filter <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/uniref.a3m --msa-format-mode 6 --db-load-mode 2 --filter-msa 1 --filter-min-enable 1000 --diff 3000 --qid 0.0,0.2,0.4,0.6,0.8,1.0 --qsc 0 --max-seq-id 0.95

Input <internal_path>/ColabFold/MsaServer/databases/uniref30_2103.idx does not exist
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_exp_realign

Time for processing: 0h 0m 0s 2ms
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_exp

Time for processing: 0h 0m 0s 1ms
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res

Time for processing: 0h 0m 0s 1ms
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_exp_realign_filter

Time for processing: 0h 0m 0s 1ms
search <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res <internal_path>/ColabFold/MsaServer/databases/pdb70 <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_pdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/tmp --db-load-mode 2 -s 7.5 -a -e 0.1

Input <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res does not exist
convertalis <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res <internal_path>/ColabFold/MsaServer/databases/pdb70.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_pdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/pdb70.m8 --format-output query,target,fident,alnlen,mismatch,gapopen,qstart,qend,tstart,tend,evalue,bits,cigar --db-load-mode 2

Input <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res does not exist
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_pdb

Time for processing: 0h 0m 0s 1ms
search <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res <internal_path>/ColabFold/MsaServer/databases/colabfold_envdb_202108 <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/tmp --num-iterations 3 --db-load-mode 2 -a --k-score 'seq:96,prof:80' -e 0.1 --max-seqs 10000

Input <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res does not exist
expandaln <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res <internal_path>/ColabFold/MsaServer/databases/colabfold_envdb_202108.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env <internal_path>/ColabFold/MsaServer/databases/colabfold_envdb_202108.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env_exp -e inf --expansion-mode 0 --db-load-mode 2

Input <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/prof_res does not exist
align <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/tmp/latest/profile_1 <internal_path>/ColabFold/MsaServer/databases/colabfold_envdb_202108.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env_exp <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env_exp_realign --db-load-mode 2 -e 10 --max-accept 100000 --alt-ali 10 -a

Input <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/tmp/latest/profile_1 does not exist
filterresult <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb <internal_path>/ColabFold/MsaServer/databases/colabfold_envdb_202108.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env_exp_realign <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env_exp_realign_filter --db-load-mode 2 --qid 0 --qsc 0.8 --diff 0 --max-seq-id 1.0 --filter-min-enable 100

Input <internal_path>/ColabFold/MsaServer/databases/colabfold_envdb_202108.idx does not exist
result2msa <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb <internal_path>/ColabFold/MsaServer/databases/colabfold_envdb_202108.idx <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env_exp_realign_filter <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/bfd.mgnify30.metaeuk30.smag30.a3m --msa-format-mode 6 --db-load-mode 2 --filter-msa 1 --filter-min-enable 1000 --diff 3000 --qid 0.0,0.2,0.4,0.6,0.8,1.0 --qsc 0 --max-seq-id 0.95

Input <internal_path>/ColabFold/MsaServer/databases/colabfold_envdb_202108.idx does not exist
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env_exp_realign_filter

Time for processing: 0h 0m 0s 3ms
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env_exp_realign

Time for processing: 0h 0m 0s 1ms
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env_exp

Time for processing: 0h 0m 0s 1ms
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res_env

Time for processing: 0h 0m 0s 1ms
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb

Time for processing: 0h 0m 0s 5ms
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/qdb_h

Time for processing: 0h 0m 0s 2ms
rmdb <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/res

Time for processing: 0h 0m 0s 0ms
2022/09/01 18:49:20 Execution Error: open <internal_path>/ColabFold/MsaServer/jobs/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA/uniref.a3m: no such file or directory
<internal_IP> - - [01/Sep/2022:18:49:23 +0200] "GET /ticket/X5WEFLF5vsH55VHTkuHOsdMZAmy79DZ-68NARA HTTP/1.1" 200 65

colabfold_batch's output:

2022-09-01 18:49:07,506 Running colabfold 1.3.0 (81e3fd0308de79566553cd6407fe8c67bdf68fbd)
<internal_path>/python/anaconda3/envs/colabfold/lib/python3.8/site-packages/haiku/_src/data_structures.py:37: FutureWarning: jax.tree_structure is deprecated, and will be removed in a future release. Use jax.tree_util.tree_structure instead.
  PyTreeDef = type(jax.tree_structure(None))
2022-09-01 18:49:14,929 Found 5 citations for tools or databases
2022-09-01 18:49:19,427 Query 1/1: 3gbn_B (length 173)
2022-09-01 18:49:19,452 Sleeping for 5s. Reason: PENDING                        
ERROR:   0%|                               | 0/150 [elapsed: 00:05 remaining: ?]
2022-09-01 18:49:24,462 Could not get MSA/templates for 3gbn_B: MMseqs2 API is giving errors. Please confirm your input is a valid protein sequence. If error persists, please try again an hour later.
Traceback (most recent call last):
  File "<internal_path>/python/anaconda3/envs/colabfold/lib/python3.8/site-packages/colabfold/batch.py", line 1332, in run
    ) = get_msa_and_templates(
  File "<internal_path>/python/anaconda3/envs/colabfold/lib/python3.8/site-packages/colabfold/batch.py", line 820, in get_msa_and_templates
    a3m_lines = run_mmseqs2(
  File "<internal_path>/python/anaconda3/envs/colabfold/lib/python3.8/site-packages/colabfold/colabfold.py", line 178, in run_mmseqs2
    raise Exception(f'MMseqs2 API is giving errors. Please confirm your input is a valid protein sequence. If error persists, please try again an hour later.')

colabfold-databases.txt

jobs stuck in PENDING status with local mmseqs-web API

Hello,

I'm trying to make the mmseqs-web API work but I'm encountering several issues.

This is the Dockerfile I used to build the API:

FROM --platform=linux/amd64 golang:latest as builder
ARG TARGETARCH

WORKDIR /opt/build
ADD backend .
RUN GOOS=linux GOARCH=$TARGETARCH go build -o mmseqs-web

ADD https://mmseqs.com/latest/mmseqs-linux-avx2.tar.gz  .

ADD https://mmseqs.com/foldseek/foldseek-linux-avx2.tar.gz  .

ADD https://raw.githubusercontent.com/soedinglab/MMseqs2/678c82ac44f1178bf9a3d49bfab9d7eed3f17fbc/util/mmseqs_wrapper.sh binaries/mmseqs
ADD https://raw.githubusercontent.com/steineggerlab/foldseek/0a68e16214a6db745cee783128ccba8546ea5dc9/util/foldseek_wrapper.sh binaries/foldseek

RUN mkdir binaries; \
    if [ "$TARGETARCH" = "arm64" ]; then \
      for i in mmseqs foldseek; do \
        if [ -e "${i}-linux-arm64.tar.gz" ]; then \
          cat ${i}-linux-arm64.tar.gz | tar -xzvf- ${i}/bin/${i}; \
          mv ${i}/bin/${i} binaries/${i}; \
        fi; \
      done; \
    else \
      for i in mmseqs foldseek; do \
        for j in sse2 sse41 avx2; do \
          if [ -e "${i}-linux-${j}.tar.gz" ]; then \
            cat ${i}-linux-${j}.tar.gz | tar -xzvf- ${i}/bin/${i}; \
            mv ${i}/bin/${i} binaries/${i}_${j}; \
          fi; \
        done; \
      done; \
    fi;

RUN chmod -R +x binaries

FROM debian:stable-slim
LABEL maintainer="Milot Mirdita <[email protected]>"

RUN apt-get update && apt-get install -y ca-certificates wget aria2 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /opt/build/mmseqs-web /opt/build/binaries/* /usr/local/bin/

ENTRYPOINT ["/usr/local/bin/mmseqs-web"]

I then installed the databases and created the indexes the usual way:

mmseqs databases UniRef50 UniRef50 tmp --remove-tmp-files
mmseqs createindex UniRef50 tmp --split 1

and added the params files alongside the databases in the same directory (/local/banks):

{
  "name": "UniRef50",
  "path": "UniRef50",
  "version": "",
  "default": true,
  "order": 0,
  "index": "",
  "search": "",
  "status": "COMPLETE"
}

This is how I launch the API:

singularity exec --env MMSEQS_NUM_THREADS=2 --bind /local/banks:/local/banks /shared/software/singularity/images/mmseqs2-app-v7-8e1704f-rpbs.sif /usr/local/bin/mmseqs-web -local -config config.json -app mmseqs

This is the content of the config.json file:

{
    "app": "mmseqs",
    "verbose": true,
    "server" : {
        "address"    : "0.0.0.0:3000",
        "dbmanagment": false,
        "cors"       : true
    },
    "worker": {
        "gracefulexit" : true
    },
    "paths" : {
        "databases"    : "/local/banks/",
        "results"      : "/shared/home/rey/colabfold",
        "temporary"    : "/tmp",
        "colabfold"    : {
            "uniref"        : "/local/banks/UniRef50"
        },
        "mmseqs"       : "/usr/local/bin/mmseqs",
        "foldseek"     : "/usr/local/bin/foldseek"
    },
    "redis" : {
        "network"  : "tcp",
        "address"  : "mmseqs-web-redis:6379",
        "password" : "",
        "index"    : 0
    },
    "mail" : {
        "type"      : "null",
        "sender"    : "[email protected]",
        "templates" : {
            "success" : {
                "subject" : "Done -- %s",
                "body"    : "Dear User,\nThe results of your submitted job are available now at https://search.mmseqs.com/queue/%s .\n"
            },
            "timeout" : {
                "subject" : "Timeout -- %s",
                "body"    : "Dear User,\nYour submitted job timed out. More details are available at https://search.mmseqs.com/queue/%s .\nPlease adjust the job and submit it again.\n"
            },
            "error"   : {
                "subject" : "Error -- %s",
                "body"    : "Dear User,\nYour submitted job failed. More details are available at https://search.mmseqs.com/queue/%s .\nPlease submit your job later again.\n"
            }
        }
    }
}

I get a response with curl, which seems to indicate that the API is running and listening on the correct port (3000):

curl -X GET http://10.0.1.246:3000/databases
{"databases":[{"name":"UniRef50","version":"","path":"UniRef50","default":true,"order":0,"taxonomy":false,"full_header":false,"index":"","search":"","status":"COMPLETE"},{"name":"UniRef30","version":"2103","path":"UniRef30","default":false,"order":1,"taxonomy":false,"full_header":false,"index":"","search":"","status":"COMPLETE"}]}

On a side note, I can't list databases if the status in the params file is different from COMPLETE.

If I try to submit a sequence with python:

>>> from requests import get, post
>>> ticket = post('http://10.0.1.246:3000/ticket', {
...             'q' : '>FASTA\nMPKIIEAIYENGVFKPLQKVDLKEGE\n',
...             'database[]' : ["UniRef50"],
...             'mode' : 'all',
...         }).json()
>>> ticket
{'id': 'A5n_NyrysSRtH7tNN6uuYdS6LFkv2bhK3Z94IA', 'status': 'PENDING'}

The directory containing the job is correctly created, but then nothing happens; the job stays in the PENDING state forever.

Trying to get job status after a few hours, nothing seems to happen either:

>>> status = get('http://10.0.1.246:3000/ticket/' + ticket['id']).json()
>>> status
{'id': 'A5n_NyrysSRtH7tNN6uuYdS6LFkv2bhK3Z94IA', 'status': 'PENDING'}

Any ideas or advice are welcome.

MSA

Hi, I am trying to call the /ticket/msa API, but it shows me "Execution Error: Invalid number of databases specified".

Also, how can I get a3m files using the API? I can't find them in the download folder either.

[Enhancement] add consensus line to alignment output

Hi, I am using the web server set up locally and it's great. I have a feature request from an end user for the alignment view. They would like to see a line between the Q and T sequences showing the match/mismatch/gap results, similar to the current NCBI BLAST output. Is this something you would consider implementing? Thanks!
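The requested middle line can be derived directly from the two aligned strings. A minimal sketch that marks only identities and gaps; real NCBI BLAST output additionally prints '+' for positive-scoring substitutions, which would require consulting the scoring matrix:

```python
def match_line(q_aln, t_aln):
    """Return a BLAST-style middle line for two equal-length aligned sequences.

    '|' marks an identical residue pair, ' ' marks a mismatch or a gap.
    (BLAST would also print '+' for positive-scoring substitutions.)
    """
    assert len(q_aln) == len(t_aln), "aligned sequences must have equal length"
    out = []
    for q, t in zip(q_aln, t_aln):
        if q == "-" or t == "-":
            out.append(" ")  # gap in either sequence
        elif q == t:
            out.append("|")  # identity
        else:
            out.append(" ")  # mismatch
    return "".join(out)
```

For example, match_line("MPK-I", "MPQEI") yields "||  |", which would be rendered between the Q and T rows of the alignment view.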

local colabfold_batch cannot communicate with local mmseq2 web api

Hi,
My local installation of colabfold_batch runs fine against the default MMseqs2 server, and my local MMseqs2 server works fine when accessed in the browser and via curl. However, the local colabfold_batch is unable to talk to my local MMseqs2 server.
Everything was set up as described in the readme.
I suspect the provided nginx.conf, as the server sends back "HTTP 405 Method Not Allowed" and the actual MMseqs2 code is never reached (colabfold_batch does send POST requests). I tried to work around it, but without success.
Is there anything missing, or is there a version mismatch or something else?

Thanks for providing ColabFold and MMseqs2. Excellent work!
Kind regards, Christian

Error:
2022-05-23 18:17:23,734 Found 5 citations for tools or databases
2022-05-23 18:17:27,288 Query 1/2: 7F7X_2 (length 45)
2022-05-23 18:17:27,297 Server didn't reply with json:

<html>
<head><title>405 Not Allowed</title></head>
<body>
<center><h1>405 Not Allowed</h1></center>
<hr><center>nginx/1.19.10</center>
</body>
</html>

nginx log in container:
[23/May/2022:16:17:27 +0000] "POST //ticket/msa HTTP/1.1" 405 158 "-" "python-requests/2.27.1"

Local changes to the frontend code are not picked up by the app

In the search result output, I would like to display the full header of the target. I was able to get the header data (theader) by updating the format-output argument, but my changes to the frontend result files are not reflected in the app. Can you provide more detailed instructions on how to deploy custom changes to the frontend?

Large Index Files for Nucleotide Sequences

Many thanks for this nice piece of software as well as MMseqs2!

I'm experiencing some strange behavior where small databases of nucleotide sequences require surprising amounts of disk space to store indices.

Expected Behavior

  • the index for a small collection of 14k nucleotide sequences (pdb_rna_sequence) should take much less space than the index of a large collection of 321k protein sequences (pdb_protein_sequence)

Current Behavior

  • pdb_rna_sequence.idx has a size of 8.1 GB, pdb_protein_sequence.idx has a size of only 1.5 GB
  • when I execute ffindex_unpack on pdb_rna_sequence.idx the resulting files amount to ~50 MB

pdb_rna_sequence.idx.index looks like this:

0	0	3
1	4096	49
2	69632	412
3	61440	6145
4	8192	49153
5	77824	273617
6	352256	5931696
7	6287360	273505
8	6561792	5931696
9	13246464	33871597
10	47120384	9
12	8643059712	9
13	8643063808	9
14	8637145088	5912163
15	8637059072	9
16	8637063168	78177
18	12496896	273617
19	12771328	112717
20	12886016	273505
21	13160448	81733
22	13242368	41
23	73728	1

The offsets for entries 12-16 look dubious.

Steps to Reproduce (for bugs)

Protein Params:

{"status":"COMPLETE","display":{"name":"PDB protein sequence (seqres)","version":"2022-01-17","path":"pdb_protein_sequence","default":true,"order":0,"index":"-s 7.5","search":"-s 7.5 --max-seqs 2000"}}

RNA Params:

{"status":"COMPLETE","display":{"name":"PDB RNA sequence (seqres)","version":"2022-01-17","path":"pdb_rna_sequence","default":false,"order":1,"index":"--search-type 3","search":"--max-seqs 2000 --search-type 3"}}

Sorry if this is the wrong project to submit this issue. Thank you in advance.

host not found in upstream "mmseqs-web-api"

When I try to run the server using "docker-compose up", I get the following error:

Pulling mmseqs-web-webserver (ghcr.io/soedinglab/mmseqs-app-frontend:master)...
master: Pulling from soedinglab/mmseqs-app-frontend
Digest: sha256:d31e43087d7ce14cbae752bcdc56d7b37db439376dc47300d5cebe31e46b6b5d
Status: Downloaded newer image for ghcr.io/soedinglab/mmseqs-app-frontend:master
Starting docker-compose_mmseqs-web-redis_1 ... done
Recreating docker-compose_mmseqs-web-api_1 ... done
Recreating docker-compose_mmseqs-web-worker_1    ... done
Recreating docker-compose_mmseqs-web-webserver_1 ... done
Attaching to docker-compose_mmseqs-web-redis_1, docker-compose_mmseqs-web-api_1, docker-compose_mmseqs-web-webserver_1, docker-compose_mmseqs-web-worker_1
mmseqs-web-api_1        | panic: Key: 'ParamsDisplayV1.Status' Error:Field validation for 'Status' failed on the 'required' tag
mmseqs-web-api_1        | Key: 'ParamsDisplayV1.Display.Name' Error:Field validation for 'Name' failed on the 'required' tag
mmseqs-web-api_1        | Key: 'ParamsDisplayV1.Display.Path' Error:Field validation for 'Path' failed on the 'required' tag
mmseqs-web-api_1        | 
mmseqs-web-api_1        | goroutine 22 [running]:
mmseqs-web-api_1        | main.server.func1()
mmseqs-web-api_1        |       /opt/build/server.go:44 +0x1dc
mmseqs-web-api_1        | created by main.server in goroutine 1
mmseqs-web-api_1        |       /opt/build/server.go:41 +0x119
mmseqs-web-redis_1      | 1:C 04 Dec 2023 17:05:34.051 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
mmseqs-web-redis_1      | 1:C 04 Dec 2023 17:05:34.052 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
mmseqs-web-redis_1      | 1:C 04 Dec 2023 17:05:34.052 * Redis version=7.2.3, bits=64, commit=00000000, modified=0, pid=1, just started
mmseqs-web-redis_1      | 1:C 04 Dec 2023 17:05:34.052 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
docker-compose_mmseqs-web-api_1 exited with code 2
mmseqs-web-redis_1      | 1:M 04 Dec 2023 17:05:34.052 * monotonic clock: POSIX clock_gettime
mmseqs-web-worker_1     | 2023/12/04 17:05:35 MMseqs2 worker
mmseqs-web-redis_1      | 1:M 04 Dec 2023 17:05:34.053 * Running mode=standalone, port=6379.
mmseqs-web-webserver_1  | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
mmseqs-web-webserver_1  | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
mmseqs-web-webserver_1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
mmseqs-web-redis_1      | 1:M 04 Dec 2023 17:05:34.054 * Server initialized
mmseqs-web-webserver_1  | 10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
mmseqs-web-redis_1      | 1:M 04 Dec 2023 17:05:34.054 * Loading RDB produced by version 7.2.3
mmseqs-web-redis_1      | 1:M 04 Dec 2023 17:05:34.054 * RDB age 264 seconds
mmseqs-web-redis_1      | 1:M 04 Dec 2023 17:05:34.054 * RDB memory usage when created 0.92 Mb
mmseqs-web-webserver_1  | /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
mmseqs-web-redis_1      | 1:M 04 Dec 2023 17:05:34.054 * Done loading RDB, keys loaded: 0, keys expired: 0.
mmseqs-web-webserver_1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
mmseqs-web-redis_1      | 1:M 04 Dec 2023 17:05:34.054 * DB loaded from disk: 0.000 seconds
mmseqs-web-redis_1      | 1:M 04 Dec 2023 17:05:34.054 * Ready to accept connections tcp
mmseqs-web-webserver_1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
mmseqs-web-webserver_1  | /docker-entrypoint.sh: Configuration complete; ready for start up
mmseqs-web-webserver_1  | 2023/12/04 17:05:35 [emerg] 1#1: host not found in upstream "mmseqs-web-api" in /etc/nginx/conf.d/default.conf:13
mmseqs-web-webserver_1  | nginx: [emerg] host not found in upstream "mmseqs-web-api" in /etc/nginx/conf.d/default.conf:13

--expand-filter-clusters

The MSA job pipeline built into this app contains the following line:

EXPAND_PARAM="--expansion-mode 0 -e ${EXPAND_EVAL} --expand-filter-clusters ${FILTER} --max-seq-id 0.95"
...
"${MMSEQS}" expandaln "${BASE}/qdb" "${DBBASE}/${DB1}.idx" "${BASE}/res" "${DBBASE}/${DB1}.idx" "${BASE}/res_exp" --db-load-mode 2 ${EXPAND_PARAM}

However, mmseqs expandaln does not appear to recognize the --expand-filter-clusters parameter; I can't find it anywhere in the source. Does this rely on an unpublished version of MMseqs2?

Awesome work on ColabFold!

"405 Not Allowed" returned for queries

After installing and running the MMseqs2-App, I started a query from ColabFold. I tried the same query with the public MMseqs2 server and got MSAs without any error. However, for my local server I get a 405 Not Allowed error. To rule out a permissions problem, I also started the server as:

sudo docker compose up

However, the result hasn't changed. Here is the full output from ColabFold; the nginx log files are also attached.

access.log
error.log
mmseqs-web.access.log

2023-12-07 15:43:48,689 Running colabfold 1.5.3
2023-12-07 15:43:48,858 Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are: Interpreter CUDA
2023-12-07 15:43:48,858 Unable to initialize backend 'tpu': module 'jaxlib.xla_extension' has no attribute 'get_tpu_client'
2023-12-07 15:43:51,163 Running on GPU
2023-12-07 15:43:52,081 Found 4 citations for tools or databases
2023-12-07 15:43:52,082 Query 1/1: binder001 (length 1528)
2023-12-07 15:43:52,093 Server didn't reply with json: <html>                                                                                                                                 
<head><title>405 Not Allowed</title></head>
<body>
<center><h1>405 Not Allowed</h1></center>
<hr><center>nginx/1.25.3</center>
</body>
</html>

SUBMIT:   0%|                                                                                                                                            | 0/300 [elapsed: 00:00 remaining: ?]
2023-12-07 15:43:52,093 Could not get MSA/templates for binder001: MMseqs2 API is giving errors. Please confirm your input is a valid protein sequence. If error persists, please try again an hour later.
Traceback (most recent call last):
  File "/usr/local/envs/colabfold/lib/python3.9/site-packages/colabfold/batch.py", line 1483, in run
    = get_msa_and_templates(jobname, query_sequence, a3m_lines, result_dir, msa_mode, use_templates,
  File "/usr/local/envs/colabfold/lib/python3.9/site-packages/colabfold/batch.py", line 844, in get_msa_and_templates
    a3m_lines = run_mmseqs2(
  File "/usr/local/envs/colabfold/lib/python3.9/site-packages/colabfold/colabfold.py", line 209, in run_mmseqs2
    raise Exception(f'MMseqs2 API is giving errors. Please confirm your input is a valid protein sequence. If error persists, please try again an hour later.')
Exception: MMseqs2 API is giving errors. Please confirm your input is a valid protein sequence. If error persists, please try again an hour later.
2023-12-07 15:43:52,095 Done
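For reference, ColabFold's "Server didn't reply with json" message is the generic symptom of nginx answering with an HTML error page where JSON is expected. A minimal sketch of that client-side check — the function name and messages are illustrative, not ColabFold's actual code:

```python
import json

def parse_api_reply(body: str):
    """Return the decoded JSON payload, or a diagnostic string when the
    server (or a proxy such as nginx) replied with something else."""
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        # nginx error pages are HTML; surface the first line as a hint
        stripped = body.strip()
        first_line = stripped.splitlines()[0] if stripped else "<empty body>"
        return f"non-JSON reply, starts with: {first_line}"

# Usage: parse_api_reply(response_text_from_the_ticket_endpoint)
```

Seeing the HTML error page itself (rather than only the generic exception) usually narrows the problem down to the proxy configuration quickly.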

Can't run via docker

When attempting to bring docker-compose up, the API image crashes with the following output:

mmseqs-web-api_1 | panic: invalid character 'e' in numeric literal
mmseqs-web-api_1 |
mmseqs-web-api_1 | goroutine 20 [running]:
mmseqs-web-api_1 | main.server.func1(0xc0002a4780, 0x8d1538, 0xc0000b4350)
mmseqs-web-api_1 | /opt/mmseqs-web/server.go:30 +0x28c
mmseqs-web-api_1 | created by main.server
mmseqs-web-api_1 | /opt/mmseqs-web/server.go:27 +0xb9
dockercompose_mmseqs-web-api_1 exited with code 2

This is my first time interpreting Go stack traces, but it appears to reference this line in server.go:
databases, err := Databases(config.Paths.Databases, false)

The config.json file in ./docker-compose isn't edited in any way.
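The panic message is Go's encoding/json complaining about a malformed numeric value in whatever JSON the server reads at startup (config.json or a database's parameter file). Go reports only the offending character; Python's decoder also reports line and column, which makes it a handy pre-flight check. A sketch, with a made-up helper name:

```python
import json

def locate_json_error(text: str) -> str:
    """Parse JSON text; return 'ok' on success, otherwise the line and
    column of the first syntax error plus the decoder's message."""
    try:
        json.loads(text)
        return "ok"
    except json.JSONDecodeError as err:
        return f"line {err.lineno}, column {err.colno}: {err.msg}"

# Usage: locate_json_error(open("docker-compose/config.json").read())
```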

docker install error: ghcr.io pull error

Dear developer and community,

First of all, thank you for your great scientific work.

When I tried to install the Docker version of the app (the latest version as of 2022/Jul/7), the following error message confused me after running the "docker-compose up" command.

(base) user@user:~/software/mmseqs2/2/MMseqs2-App/docker-compose$ docker-compose up
Pulling mmseqs-web-api (ghcr.io/soedinglab/mmseqs-app-backend:master)...
ERROR: Head "https://ghcr.io/v2/soedinglab/mmseqs-app-backend/manifests/master": unauthorized

If you have a solution for this issue, any help would be greatly appreciated.

Thanks,

Foldseek jobs are not persistent from GUI

Hello,

I'm trying to deploy the Foldseek docker-compose.yml as a Docker Swarm stack, but have noticed that jobs which have been run are no longer accessible via the GUI after taking the stack down and re-deploying it.

The previously run jobs are maintained in the jobs directory as expected and the data is accessible via the command line; however, in job.json the status is "PENDING". The GUI does seem to be aware of the contents of the jobs directory, since tabs for both old and new jobs are shown, but clicking on an old job returns "Job Status: ERROR Job failed. Please try again later.", even though the job was successful at the time of running. See screenshot below:

(screenshot attached)

Any advice on this would be massively appreciated!

Edit:
If I manually edit the job.json "status" field from "PENDING" to "COMPLETE" before restarting the containers, the job persists in the GUI; however, this is not ideal. If I manually edit the "status" field after restarting, the job does not persist. Any suggestions on how to make the status update automatically?
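As a stopgap until the status handling is fixed upstream, the manual edit described above can be scripted. A sketch that walks the jobs directory and flips PENDING back to COMPLETE — the job.json layout ("status" field) is assumed from this report, so verify it against your own files before running anything like this:

```python
import json
import os

def mark_complete(jobs_dir: str) -> int:
    """Rewrite every job.json under jobs_dir whose status is PENDING to
    COMPLETE, returning how many files were patched. Intended to run
    before restarting the containers."""
    patched = 0
    for root, _dirs, files in os.walk(jobs_dir):
        if "job.json" not in files:
            continue
        path = os.path.join(root, "job.json")
        with open(path) as fh:
            job = json.load(fh)
        if job.get("status") == "PENDING":
            job["status"] = "COMPLETE"
            with open(path, "w") as fh:
                json.dump(job, fh)
            patched += 1
    return patched
```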

MSAJob - Reg

Hi,

I am exploring MMseqs2-App and MMseqs2 for my use case but have been facing some difficulties. I followed the instructions from the "Web app quick start with docker-compose" section and downloaded and indexed both the uniclust30 and pdb70 databases. The MMseqs2 web server is up locally on port 8877 and I am able to use the search. Q1: I am only getting the search option; the MSA job option is not visible. Could you let me know how to invoke/execute the MSA job option (stockholmServer false) from the backend/worker.go code using the web server?

Docker app not working due to mmseqs version mismatch

After commit 4cd0e0e, the app no longer seems to work because of an error in the backend when executing search queries:

Unrecognized parameter --dont-shuffle

It seems that the Docker image does not contain a version of the MMseqs2 executable that supports the --dont-shuffle parameter, which was introduced in release 4.

Local Foldseek searches are no longer working

I recently cloned the repo again and re-built the PDB database while setting up a local Foldseek server.

On docker compose up the server seems functional; however, after submitting any job, mmseqs-web-worker-1 errors out with the following:

mmseqs-web-worker-1     | Index version: fs1
mmseqs-web-worker-1     | Generated by:  7192be546fc4086c7176b86b630932b3994970b1
mmseqs-web-worker-1     | ScoreMatrix:  3di.out
mmseqs-web-worker-1     | Index version: fs1
mmseqs-web-worker-1     | Generated by:  7192be546fc4086c7176b86b630932b3994970b1
mmseqs-web-worker-1     | ScoreMatrix:  3di.out
mmseqs-web-worker-1     | Index version: fs1
mmseqs-web-worker-1     | Generated by:  7192be546fc4086c7176b86b630932b3994970b1
mmseqs-web-worker-1     | ScoreMatrix:  3di.out
mmseqs-web-worker-1     | Index version: fs1
mmseqs-web-worker-1     | Generated by:  7192be546fc4086c7176b86b630932b3994970b1
mmseqs-web-worker-1     | ScoreMatrix:  3di.out
mmseqs-web-worker-1     | [=================================================================Error: Convert Alignments died
mmseqs-web-worker-1     | Segmentation fault (core dumped)
mmseqs-web-worker-1     | 2024/04/29 16:02:53 Execution Error: Execution Error: exit status 1 

Any clues on what changes could have caused this?

Thanks!

Historical jobs not loaded: "Job failed. Please try again later."

I'm new to Docker and trying to follow the README instructions. The application runs fine, but when trying to recall historical jobs after restarting the container we get the error "Job failed. Please try again later." and the icons in the history tab are question marks. However, when processing the same sequence again, the historical job is updated and the icon changes back, implying that the job file can be written to.

Is this expected behaviour or have we missed a step?

Pass --max-seqs through GUI or server API?

Thanks for building this nice search server! I've been using the Foldseek search server to search through AlphaFold/Uniprot50 and am noticing that it always returns a maximum of 1000 hits.

Would there be a way to allow changing the Foldseek parameters for the webserver, either through the GUI or the API backend?

I noticed that you can set a database's MMseqs2 search parameters via the database/ POST endpoint, but I couldn't find a way to pass search parameters through the ticket/ POST API. It would be super nice to be able to set these kinds of parameters along with the ticket submission.
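For context, a ticket submission is an ordinary form POST; extending it with search parameters would mostly mean extending the form body. A sketch of building such a body — the field names q, mode, and database[] are inferred from what the web frontend submits and may differ in your version of the app:

```python
from urllib.parse import urlencode

def ticket_payload(sequence: str, databases: list, mode: str = "all") -> str:
    """Build the url-encoded form body for a ticket/ POST.
    database[] repeats once per selected database."""
    fields = [("q", sequence), ("mode", mode)]
    fields += [("database[]", db) for db in databases]
    return urlencode(fields)

# Usage: POST ticket_payload("MKTAYIAK...", ["pdb70"]) with
# Content-Type: application/x-www-form-urlencoded to <server>/api/ticket
```

A hypothetical --max-seqs-style option would then be one more (name, value) pair in `fields`, provided the backend learned to accept it.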

Error when searching nucleotide DB

I am using the docker version of the webserver, and I have successfully built and searched the example uniclust DB and another custom protein DB.
I have also built a DNA DB, but searching it produces an error. Both the building and the searching logs are attached, and the issue seems to be a wrong path:

mmseqs-web-worker_1     | splitsequence /opt/mmseqs-web/databases/mydb_nt.idx /opt/mmseqs-web/jobs/UE3fNh-OopaZXL7T5LF-ZwnmWa4PbEOPQiOu8w/tmp/13815044513268057608/search_tmp/9906939544243059259/target_seqs_split --max-seq-len 10000 --sequence-overlap 0 --threads 24 --compressed 0 -v 3
mmseqs-web-worker_1     |
mmseqs-web-worker_1     | MMseqs Version:               43a10ce5f7e83f421af19b7a5cc1792e9c3d3bbd
mmseqs-web-worker_1     | Max. sequence length          10000
mmseqs-web-worker_1     | Overlap between sequences     0
mmseqs-web-worker_1     | Threads                       24
mmseqs-web-worker_1     | Compressed                    0
mmseqs-web-worker_1     | Verbosity                     3
mmseqs-web-worker_1     |
mmseqs-web-worker_1     | Could not open data file /opt/mmseqs-web/databases/mydb_nt.idx_h!
mmseqs-web-worker_1     | Error: Split sequence died
mmseqs-web-worker_1     | Error: Search died
mmseqs-web-worker_1     | 2019/03/05 13:54:03 Execution Error: exit status 1

I believe the splitsequence command is supposed to take mydb_nt and not mydb_nt.idx as an argument, so it should be an easy fix.
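The suggested fix amounts to stripping the .idx suffix before deriving sibling file names such as mydb_nt_h. A one-line sketch of that normalization (the helper name is made up):

```python
def strip_idx(path: str) -> str:
    """Return the base database name (mydb_nt) from which sibling files
    like mydb_nt_h are derived, stripping a trailing .idx if present."""
    return path[: -len(".idx")] if path.endswith(".idx") else path
```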

I hope this is useful, let me know if you can't reproduce the issue.
Simone
createdb.log
search.log

Running it in kubernetes

Hi,
We would like to run it in k8s. Are you aware of anyone running it in k8s, so we don't need to reinvent the wheel?

Cheers.

which databases?

Could you tell me the exact links to the SMAG and MetaEuk databases? Thank you!

pairaln mode does not support the binary version of the taxonomy files

Hi guys! While trying to set up an MMseqs2 web server to run ColabFold for a multimer, I came across the following issue, which hampers the creation of the final pair.a3m alignment file:

mmseqs-web-worker_1     | pairaln /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/qdb /opt/mmseqs-web/databases/uniref30_2103_db.idx /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/res_exp_realign /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/res_exp_realign_pair --db-load-mode 2
mmseqs-web-worker_1     |
mmseqs-web-worker_1     | Input taxonomy database "/opt/mmseqs-web/databases/uniref30_2103_db.idx" is missing files:
mmseqs-web-worker_1     | - /opt/mmseqs-web/databases/uniref30_2103_db.idx_mapping
mmseqs-web-worker_1     | - /opt/mmseqs-web/databases/uniref30_2103_db.idx_nodes.dmp
mmseqs-web-worker_1     | - /opt/mmseqs-web/databases/uniref30_2103_db.idx_names.dmp
mmseqs-web-worker_1     | - /opt/mmseqs-web/databases/uniref30_2103_db.idx_merged.dmp
mmseqs-web-worker_1     | align /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/qdb /opt/mmseqs-web/databases/uniref30_2103_db.idx /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/res_exp_realign_pair /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/res_exp_realign_pair_bt --db-load-mode 2 -e inf -a
mmseqs-web-worker_1     |
mmseqs-web-worker_1     | Input /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/res_exp_realign_pair does not exist
mmseqs-web-worker_1     | pairaln /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/qdb /opt/mmseqs-web/databases/uniref30_2103_db.idx /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/res_exp_realign_pair_bt /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/res_final --db-load-mode 2
mmseqs-web-worker_1     |
mmseqs-web-worker_1     | Input taxonomy database "/opt/mmseqs-web/databases/uniref30_2103_db.idx" is missing files:
mmseqs-web-worker_1     | - /opt/mmseqs-web/databases/uniref30_2103_db.idx_mapping
mmseqs-web-worker_1     | - /opt/mmseqs-web/databases/uniref30_2103_db.idx_nodes.dmp
mmseqs-web-worker_1     | - /opt/mmseqs-web/databases/uniref30_2103_db.idx_names.dmp
mmseqs-web-worker_1     | - /opt/mmseqs-web/databases/uniref30_2103_db.idx_merged.dmp
mmseqs-web-worker_1     | result2msa /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/qdb /opt/mmseqs-web/databases/uniref30_2103_db.idx /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/res_final /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/pair.a3m --db-load-mode 2 --msa-format-mode 5
mmseqs-web-worker_1     |
mmseqs-web-worker_1     | Input /opt/mmseqs-web/jobs/qOfRdtYp0THSGaDxs9TXTh4GNdmpLvMSERS4oQ/res_final does not exist

It seems that the pairaln mode of MMseqs2 does not support the binary version of the taxonomy files (uniref30_2103_db_taxonomy) and requires the taxonomy flat-file databases. I am using this MMseqs2 commit: 96b2009982ce686e0b78e226c75c59fd286ba450. Could an update to the latest version solve this issue? Is there any workaround to create the flat-file databases for uniref30_2103?
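A quick pre-flight check for the flat-file companions pairaln asks for can be sketched as follows; the suffix list is taken verbatim from the error messages above, and the helper itself is illustrative:

```python
import os

def missing_taxonomy_files(db_path: str) -> list:
    """Return the taxonomy flat files that pairaln reports as missing
    next to db_path; an empty list means all companions are present."""
    suffixes = ["_mapping", "_nodes.dmp", "_names.dmp", "_merged.dmp"]
    return [db_path + s for s in suffixes if not os.path.exists(db_path + s)]

# Usage: missing_taxonomy_files("/opt/mmseqs-web/databases/uniref30_2103_db.idx")
```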

Thanks in advance!

Problems with nucleotide searches: API responds with 400

This is a follow-up to my comment on #3.

Issue: when executing a query against a DNA/RNA database, the API call to the endpoint api/result/LwDKtIlhXr4oSa7w-zTpgwxHMXikC-FInHXmvg/ returns a 400 and a message like:

record on line 3: wrong number of fields
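The "record on line N: wrong number of fields" wording looks like the error text of a strict record reader (Go's encoding/csv produces exactly this) that requires every record to have the same field count as the first one; the wrapped sequence lines in the result files below break that invariant. A minimal sketch of the check:

```python
def first_bad_record(text: str, sep: str = "\t"):
    """Scan separated records and return (line_no, n_fields) of the first
    line whose field count differs from the first line's, or None if all
    records are consistent."""
    expected = None
    for line_no, line in enumerate(text.splitlines(), start=1):
        n = len(line.split(sep))
        if expected is None:
            expected = n
        elif n != expected:
            return (line_no, n)
    return None
```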

This is the debugging output as requested by @milot-mirdita:

# head jobs/LwDKtIlhXr4oSa7w-zTpgwxHMXikC-FInHXmvg/alis_*
==> jobs/LwDKtIlhXr4oSa7w-zTpgwxHMXikC-FInHXmvg/alis_pdb_rna_sequence <==
unnamed	3J6B_1	1.000	1894	0	0	1	1894	3533	5426	0.000E+00	3395	1894	3296	CAUCUAAGUAACUUAAGGAUAAGAAAUCAACAGAGAUAUUAUGAGUAUUGGUGAGAGAAAAUAAUAAAGGUCUAAUAAGUAUUAUGUGAAAAAAAUGUAAGAAAAUAGGAUAACAAAUUCUAAGACUAAAUACUAUUAAUAAGUAUAGUAAGUACCGUAAGGGAAAGUAUGAAAAUGAUUAUUUUAUAAGCAAUCAUGAAUAUAUUAUAUUAUAUUAAUGAUGUACCUUUUGUAUAAUGGGUCAGCAAGUAAUUAAUAUUAGUAAAACAAUAAGUUAUAAAUAAAUAGAAUAAUAUAUAUAUAUAAAAAAAUAUAUUAAAAUAUUUAAUUAAUAUUAAUUGACCCGAAAGCAAACGAUCUAACUAUGAUAAGAUGGAUAAACGAUCGAACAGGUUGAUGUUGCAAUAUCAUCUGAUUAAUUGUGGUUAGUAGUGAAAGACAAAUCUGGUUUGCAGAUAGCUGGUUUUCUAUGAAAUAUAUGUAAGUAUAGCCUUUAUAAAUAAUAAUUAUUAUAUAAUAUUAUAUUAAUAUUAUAUAAAGAAUGGUACAGCAAUUAAUAUAUAUUAGGGAACUAUUAAAGUUUUAUUAAUAAUAUUAAAUCUCGAAAUAUUUAAUUAUAUAUAAUAAAGAGUCAGAUUAUGUGCGAUAAGGUAAAUAAUCUAAAGGGAAACAGCCCAGAUUAAGAUAUAAAGUUCCUAAUAAAUAAUAAGUGAAAUAAAUAUUAAAAUAUUAUAAUAUAAUCAGUUAAUGGGUUUGACAAUAACCAUUUUUUAAUGAACAUGUAACAAUGCACUGAUUUAUAAUAAAUAAAAAAAAAUAAUAUUUAAAAUCAAAUAUAUAUAUAUUUGUUAAUAGAUAAUAUACGGAUCUUAAUAAUAAGAAUUAUUUAAUUCCUAAUAUGGAAUAUUAUAUUUUUAUAAUAAAAAUAUAAAUACUGAAUAUCUAAAUAUUAUUAUUACUUUUUUUUUAAUAAUAAUAAUAUGGUAAUAGAACAUUUAAUGAUAAUAUAUAUUAGUUAUUAAUUAAUAUAUGUAUUAAUUAAAUAGAGAAUGCUGACAUGAGUAACGAAAAAAAGGUAUAAACCUUUUCACCUAAAACAUAAGGUUUAACUAUAAAAGUACGGCCCCUAAUUAAAUUAAUAAGAAUAUAAAUAUAUUUAAGAUGGGAUAAUCUAUAUUAAUAAAAAUUUAUCUUAAAAUAUAUAUAUUAUUAAUAAUUAUAUUAAUUAAUUAAUAAUAUAUAUAAUUAUAUUAUAUAUUAUAUAUUUUUUAUAUAAUAUAAACUAAUAAAGAUCAGGAAAUAAUUAAUGUAUACCGUAAUGUAGACCGACUCAGGUAUGUAAGUAGAGAAUAUGAAGGUGAAUUAGAUAAUUAAAGGGAAGGAACUCGGCAAAGAUAGCUCAUAAGUUAGUCAAUAAAGAGUAAUAAGAACAAAGUUGUACAACUGUUUACUAAAAACACCGCACUUUGCAGAAACGAUAAGUUUAAGUAUAAGGUGUGAACUCUGCUCCAUGCUUAAUAUAUAAAUAAAAUUAUUUAACGAUAAUUUAAUUAAAUUUAGGUAAAUAGCAGCCUUAUUAUGAGGGUUAUAAUGUAGCGAAAUUCCUUGGCCUAUAAUUGAGGUCCCGCAUGAAUGACGUAAUGAUACAACAACUGUCUCCCCUUUAAGCUAAGUGAAAUUGAAAUCGUAGUGAAGAUGCUAUGUACCUUCAGCAAGACGGAAAGACCCUAUGCAGCUUUACUGUAAUUAGAUAGAUCGAAUUAUUGUUUAUUAUAUUCAGCAUAUUAAGUAAUCCUAUUAUUAGGUAAUCGUUUAGAUAUUAAUGAGAUACUUAUUAUAAUAUAAUGAUAAUUCUAAUCUUAUAAAUAAUUAUUAUUAUUAUUAUUAAUAAUAA	
ACCUCAGAUCAGACGUGGCGACCCGCUGAAUUUAAGCAUAUUAGUCAGCGGAGGAGAAGAAACUAACCAGGAUUCCCUCAGUAACGGCGAGUGAACAGGGAAGAGCCCAGCGCCGAAUCCCCGCCCCGCGGCGGGGCGCGGGACAUGUGGCGUACGGAAGACCCGCUCCCCGGCGCCGCUCGUGGGGGGCCCAAGUCCUUCUGAUCGAGGCCCAGCCCGUGGACGGUGUGAGGCCGGUAGCGGCCCCCGGCGCGCCGGGCCCGGGUCUUCCCGGAGUCGGGUUGCUUGGGAAUGCAGCCCAAAGCGGGUGGUAAACUCCAUCUAAGGCUAAAUACCGGCACGAGACCGAUAGUCAACAAGUACCGUAAGGGAAAGUUGAAAAGAACUUUGAAGAGAGAGUUCAAGAGGGCGUGAAACCGUUAAGAGGUAAACGGGUGGGGUCCGCGCAGUCCGCCCGGAGGAUUCAACCCGGCGGCGGGUCCGGCCGUGUCGGCGGCCCGGCGGAUCUUUCCCGCGCGGGGGACCGUCCCCCGACCGGCGACCGGCCGCCGCCGGGCGCAUUUCCACCGCGGCGGUGCGCCGCGACCGGCUCCGGGACGGCUGGGAAGGCCCGGCGGGGAAGGUGGCUCGGCCGAGUGUUACAGCCCCCCCGGCAGCAGCACUCGCCGAAUCCCGGGGCCGAGGGAGCGAGACCCGUCGCCGCGCUCUCCCCCCUCCCGGCGCCGCCGGGGGGGGCCGGGCCACCCCUCCCACGGCGCGACCGCUCGGGGCGGACUGUCCCCAGUGCGCCCCGGGCGGGUCGCGCCGUCGGGCCCGGGGGAGGGGCCACGCGCGCGUCCCCCGAAGAGGGGGACGGCGGAGCGAGCGCACGGGGUCGGCGGCGACGUCGGCUACCCACCCGACCCGUCUUGAAACACGGACCAAGGAGUCUAACACGUGCGCGAGUCGGGGGCUCGCACGAAAGCCGCCGUGGCGCAAUGAAGGUGAAGGCCGGCGCGCUCGCCGGCCGAGGUGGGAUCCCGAGGCCUCUCCAGUCCGCCGAGGGCGCACCACCGGCCCGUCUCGCCCGCCGCGCCGGGGAGGUGGAGCACGAGCGCACGUGUUAGGACCCGAAAGAUGGUGAACUAUGCCUGGGCAGGGCGAAGCCAGAGGAAACUCUGGUGGAGGUCCGUAGCGGUCCUGACGUGCAAAUCGGUCGUCCGACCUGGGUAUAGGGGCGAAAGACUAAUCGAACCAUCUAGUAGCUGGUUCCCUCCGAAGUUUCCCUCAGGAUAGCUGGCGCUCUCGCACACGCAGUUUUAUCCGGUAAAGCGAAUGAUUAGAGGUCUUGGGGCCGAAACGAUCUCAACCUAUUCUCAAACUUUAAAUGGGUAAGAAGCCCGGCUCGCUGGCGUGGAGCCGGGCGUGGAAUGCGAGUGCCUAGUGGGCCACUUUUGGUAAGCAGAACUGGCGCUGCGGGAUGAACCGAACGCCGGGUUAAGGCGCCCGAUGCCGACGCUCAUCAGACCCCAGAAAAGGUGUUGGUUGAUAUAGACAGCAGGACGGUGGCCAUGGAAGUCGGAAUCCGCUAAGGAGUGUGUAACAACUCACCUGCCGAAUCAACUAGCCCUGAAAAUGGAUGGCGCUGGAGCGUCGGGCCCAUACCCGGCCGUCGCCGGCAGUCGAGAGUGGACGGGAGCGGCGGGGGCGGCGCGCGCGCGCGCCCGCCCCCGGAGCCCCGCGGACGCUACGCCGCGACGAGUAGGAGGGCCGCUGCGGUGAGCCUUGAAGCCUAGGGCGCGGGCCCGGGUGGAGCCGCCGCAGGUGCAGAUCUUGGUGGUAGUAGCAAAUAUUCAAACGAGAACUUUGAAGGCCGAAGUGGAGAAGGGUUCCAUGUGAACAGCAGUUGAACAUG
unnamed	5MRC_1	0.999	1894	2	0	1	1894	3533	5426	0.000E+00	3390	1894	3296	CAUCUAAGUAACUUAAGGAUAAGAAAUCAACAGAGAUAUUAUGAGUAUUGGUGAGAGAAAAUAAUAAAGGUCUAAUAAGUAUUAUGUGAAAAAAAUGUAAGAAAAUAGGAUAACAAAUUCUAAGACUAAAUACUAUUAAUAAGUAUAGUAAGUACCGUAAGGGAAAGUAUGAAAAUGAUUAUUUUAUAAGCAAUCAUGAAUAUAUUAUAUUAUAUUAAUGAUGUACCUUUUGUAUAAUGGGUCAGCAAGUAAUUAAUAUUAGUAAAACAAUAAGUUAUAAAUAAAUAGAAUAAUAUAUAUAUAUAAAAAAAUAUAUUAAAAUAUUUAAUUAAUAUUAAUUGACCCGAAAGCAAACGAUCUAACUAUGAUAAGAUGGAUAAACGAUCGAACAGGUUGAUGUUGCAAUAUCAUCUGAUUAAUUGUGGUUAGUAGUGAAAGACAAAUCUGGUUUGCAGAUAGCUGGUUUUCUAUGAAAUAUAUGUAAGUAUAGCCUUUAUAAAUAAUAAUUAUUAUAUAAUAUUAUAUUAAUAUUAUAUAAAGAAUGGUACAGCAAUUAAUAUAUAUUAGGGAACUAUUAAAGUUUUAUUAAUAAUAUUAAAUCUCGAAAUAUUUAAUUAUAUAUAAUAAAGAGUCAGAUUAUGUGCGAUAAGGUAAAUAAUCUAAAGGGAAACAGCCCAGAUUAAGAUAUAAAGUUCCUAAUAAAUAAUAAGUGAAAUAAAUAUUAAAAUAUUAUAAUAUAAUCAGUUAAUGGGUUUGACAAUAACCAUUUUUUAAUGAACAUGUAACAAUGCACUGAUUUAUAAUAAAUAAAAAAAAAUAAUAUUUAAAAUCAAAUAUAUAUAUAUUUGUUAAUAGAUAAUAUACGGAUCUUAAUAAUAAGAAUUAUUUAAUUCCUAAUAUGGAAUAUUAUAUUUUUAUAAUAAAAAUAUAAAUACUGAAUAUCUAAAUAUUAUUAUUACUUUUUUUUUAAUAAUAAUAAUAUGGUAAUAGAACAUUUAAUGAUAAUAUAUAUUAGUUAUUAAUUAAUAUAUGUAUUAAUUAAAUAGAGAAUGCUGACAUGAGUAACGAAAAAAAGGUAUAAACCUUUUCACCUAAAACAUAAGGUUUAACUAUAAAAGUACGGCCCCUAAUUAAAUUAAUAAGAAUAUAAAUAUAUUUAAGAUGGGAUAAUCUAUAUUAAUAAAAAUUUAUCUUAAAAUAUAUAUAUUAUUAAUAAUUAUAUUAAUUAAUUAAUAAUAUAUAUAAUUAUAUUAUAUAUUAUAUAUUUUUUAUAUAAUAUAAACUAAUAAAGAUCAGGAAAUAAUUAAUGUAUACCGUAAUGUAGACCGACUCAGGUAUGUAAGUAGAGAAUAUGAAGGUGAAUUAGAUAAUUAAAGGGAAGGAACUCGGCAAAGAUAGCUCAUAAGUUAGUCAAUAAAGAGUAAUAAGAACAAAGUUGUACAACUGUUUACUAAAAACACCGCACUUUGCAGAAACGAUAAGUUUAAGUAUAAGGUGUGAACUCUGCUCCAUGCUUAAUAUAUAAAUAAAAUUAUUUAACGAUAAUUUAAUUAAAUUUAGGUAAAUAGCAGCCUUAUUAUGAGGGUUAUAAUGUAGCGAAAUUCCUUGGCCUAUAAUUGAGGUCCCGCAUGAAUGACGUAAUGAUACAACAACUGUCUCCCCUUUAAGCUAAGUGAAAUUGAAAUCGUAGUGAAGAUGCUAUGUACCUUCAGCAAGACGGAAAGACCCUAUGCAGCUUUACUGUAAUUAGAUAGAUCGAAUUAUUGUUUAUUAUAUUCAGCAUAUUAAGUAAUCCUAUUAUUAGGUAAUCGUUUAGAUAUUAAUGAGAUACUUAUUAUAAUAUAAUGAUAAUUCUAAUCUUAUAAAUAAUUAUUAUUAUUAUUAUUAAUAAUAA	
ACGCGCCAGCGCCGAUGGUACUGGGCGGGCGACCGCCUGGGAGAGUAGGUCGGUGCGGGGGA
AAGCAGCUUUACAGAUCAAUGGCGGAGGGAGGUCAACAUCAAGAACUGUGGGCCUUUUAUUGCCUAUAGAACUUAUAACGAACAUGGUUCUUGCCUUUUACCAGAACCAUCCGGGUGUUGUCUCCAUAGAAACAGGUAAAGCUGUCCGUUACUGUGGGCUUGCCAUAUUUUUUGGAACUUUUCUGCCCUUUUUCUCAAUGAGUAAGGAGGGCGU
GGGCUGAAGGAUGGAGACGUCUAGGCCC
CGCGACCUCAGAUCAGACGUGGCGACCCGCUGAAUUUAAGCAUAUUAGUCAGCGGAGGAAAAGAAACUAACCAGGAUUCCCUCAGUAACGGCGAGUGAACAGGGAAGAGCCCAGCGCCGAAUCCCCGCCCCGCGGGGCGCGGGACAUGUGGCGUACGGAAGACCCGCUCCCCGGCGCCGCUCGUGGGGGGCCCAAGUCCUUCUGAUCGAGGCCCAGCCCGUGGACGGUGUGAGGCCGGUAGCGGCCCCCGGCGCGCGCCCGGGUCUUCCCGGAGUCGGGUUGCUUGGGAAUGCAGCCCAAAGCGGGUGGUAAACUCCAUCUAAGGCUAAAUACCGGCACGAGACCGAUAGUCAACAAGUACCGUAAGGGAAAGUUGAAAAGAACUUUGAAGAGAGAGUUCAAGAGGGCGUGAAACCGUUAAGAGGUAAACGGGUGGGGUCCGCGCAGUCCGCCCGGAGGAUUCAACCCGGCGGCGGGUCCGGCCGUGUCGGCGGCCCGGCGGAUCUUUCCCGCCCCCGGGGUCGGCGGGGGACCGUCCCCCGGACCGGCGACCGGCCGCCGCCGGGCGCAUUUCCAGGCGGUGCGCCGCGACCGGCUCCGGGACGGCUGGGAAGGCCCGGCGGGGAAGGUGGCUCGGGGCCCCCGAGUGUUACAGCCCCCCCGGCAGCAGCACUCGCCGAAUCCCGGGGCCGAGGGAGCGAGACCCGUCGCCGCGCUCUCCCCCCUGGGGGGGCCGGGCCACCCCUCCCACGGCGCGACCGCUCUCCCAGGGGGCGGGGCGGACUGUCCCCAGUGCGCCCCGGGCGGGUCGCGCCGUCGGGCCCGGGGGGCCACGCGCGCGUCCCGGGACGGCGGAGCGAGCGCACGGGGUCGGCGGCGACGUCGGCUACCCACCCGACCCGUCUUGAAACACGGACCAAGGAGUCUAACACGUGCGCGAGUCGGGGGCUCGCACGAAAGCCGCCGUGGCGCAAUGAAGGUGAAGGCCGGCGCGCUCGCCGGCCGAGGUGGGAUCCCGAGGCCUCUCCAGUCCGCCGAGGGGCACCACCGGCCCGUCUCGCCCGCCGCGCCGGGGAGGUGGAGCACGAGCGCACGUGUUAGGACCCGAAAGAUGGUGAACUAUGCCUGGGCAGGGCGAAGCCAGAGGAAACUCUGGUGGAGGUCCGUAGCGGUCCUGACGUGCAAAUCGGUCGUCCGACCUGGGUAUAGGGGCGAAAGACUAAUCGAACCAUCUAGUAGCUGGUUCCCUCCGAAGUUUCCCUCAGGAUAGCUGGCGCUCUCGCCCACGCAGUUUUAUCCGGUAAAGCGAAUGAUUAGAGGUCUUGGGGCCGAAACGAUCUCAACCUAUUCUCAAACUUUAAAUGGGUAAGAAGCCCGGCUCGCUGGCGUGGAGCCGGGGUGGAAUGCGAGUGCCUAGUGGGCCACUUUUGGUAAGCAGAACUGGCGCUGCGGGAUGAACCGAACGCCGGGUUAAGGCGCCCGAUGCCGACGCUCAUCAGACCCCAGAAAAGGUGUUGGUUGAUAUAGACAGCAGGACGGUGGCCAUGGAAGUCGGAAUCCGCUAAGGAGUGUGUAACAACUCAC
unnamed	5MRF_1	0.999	1894	2	0	1	1894	3533	5426	0.000E+00	3390	1894	3296	CAUCUAAGUAACUUAAGGAUAAGAAAUCAACAGAGAUAUUAUGAGUAUUGGUGAGAGAAAAUAAUAAAGGUCUAAUAAGUAUUAUGUGAAAAAAAUGUAAGAAAAUAGGAUAACAAAUUCUAAGACUAAAUACUAUUAAUAAGUAUAGUAAGUACCGUAAGGGAAAGUAUGAAAAUGAUUAUUUUAUAAGCAAUCAUGAAUAUAUUAUAUUAUAUUAAUGAUGUACCUUUUGUAUAAUGGGUCAGCAAGUAAUUAAUAUUAGUAAAACAAUAAGUUAUAAAUAAAUAGAAUAAUAUAUAUAUAUAAAAAAAUAUAUUAAAAUAUUUAAUUAAUAUUAAUUGACCCGAAAGCAAACGAUCUAACUAUGAUAAGAUGGAUAAACGAUCGAACAGGUUGAUGUUGCAAUAUCAUCUGAUUAAUUGUGGUUAGUAGUGAAAGACAAAUCUGGUUUGCAGAUAGCUGGUUUUCUAUGAAAUAUAUGUAAGUAUAGCCUUUAUAAAUAAUAAUUAUUAUAUAAUAUUAUAUUAAUAUUAUAUAAAGAAUGGUACAGCAAUUAAUAUAUAUUAGGGAACUAUUAAAGUUUUAUUAAUAAUAUUAAAUCUCGAAAUAUUUAAUUAUAUAUAAUAAAGAGUCAGAUUAUGUGCGAUAAGGUAAAUAAUCUAAAGGGAAACAGCCCAGAUUAAGAUAUAAAGUUCCUAAUAAAUAAUAAGUGAAAUAAAUAUUAAAAUAUUAUAAUAUAAUCAGUUAAUGGGUUUGACAAUAACCAUUUUUUAAUGAACAUGUAACAAUGCACUGAUUUAUAAUAAAUAAAAAAAAAUAAUAUUUAAAAUCAAAUAUAUAUAUAUUUGUUAAUAGAUAAUAUACGGAUCUUAAUAAUAAGAAUUAUUUAAUUCCUAAUAUGGAAUAUUAUAUUUUUAUAAUAAAAAUAUAAAUACUGAAUAUCUAAAUAUUAUUAUUACUUUUUUUUUAAUAAUAAUAAUAUGGUAAUAGAACAUUUAAUGAUAAUAUAUAUUAGUUAUUAAUUAAUAUAUGUAUUAAUUAAAUAGAGAAUGCUGACAUGAGUAACGAAAAAAAGGUAUAAACCUUUUCACCUAAAACAUAAGGUUUAACUAUAAAAGUACGGCCCCUAAUUAAAUUAAUAAGAAUAUAAAUAUAUUUAAGAUGGGAUAAUCUAUAUUAAUAAAAAUUUAUCUUAAAAUAUAUAUAUUAUUAAUAAUUAUAUUAAUUAAUUAAUAAUAUAUAUAAUUAUAUUAUAUAUUAUAUAUUUUUUAUAUAAUAUAAACUAAUAAAGAUCAGGAAAUAAUUAAUGUAUACCGUAAUGUAGACCGACUCAGGUAUGUAAGUAGAGAAUAUGAAGGUGAAUUAGAUAAUUAAAGGGAAGGAACUCGGCAAAGAUAGCUCAUAAGUUAGUCAAUAAAGAGUAAUAAGAACAAAGUUGUACAACUGUUUACUAAAAACACCGCACUUUGCAGAAACGAUAAGUUUAAGUAUAAGGUGUGAACUCUGCUCCAUGCUUAAUAUAUAAAUAAAAUUAUUUAACGAUAAUUUAAUUAAAUUUAGGUAAAUAGCAGCCUUAUUAUGAGGGUUAUAAUGUAGCGAAAUUCCUUGGCCUAUAAUUGAGGUCCCGCAUGAAUGACGUAAUGAUACAACAACUGUCUCCCCUUUAAGCUAAGUGAAAUUGAAAUCGUAGUGAAGAUGCUAUGUACCUUCAGCAAGACGGAAAGACCCUAUGCAGCUUUACUGUAAUUAGAUAGAUCGAAUUAUUGUUUAUUAUAUUCAGCAUAUUAAGUAAUCCUAUUAUUAGGUAAUCGUUUAGAUAUUAAUGAGAUACUUAUUAUAAUAUAAUGAUAAUUCUAAUCUUAUAAAUAAUUAUUAUUAUUAUUAUUAAUAAUAA	
UGGGAUUAGCUAGUAGGUGGGGUAACGGCUCACCUAGGCGACGAUCCCUAGCUGGUCUGAGAGGAUGACCAGCCACACUGGAACUGAGACACGGUCCAGACUCCUACGGGAGGCAGCAGUGGGGAAUAUUGCACAAUGGGCGCAAGCCUGAUGCAGCCAUGCCGCGUGUAUGAAGAAGGCCUUCGGGUUGUAAAGUACUUUCAGCGGGGAGGAAGGGAGUAAAGUUAAUACCUUUGCUCAUUGACGUUACCCGCAGAAGAAGCACCGGCUAACUCCGUGCCAGCAGCCGCGGUAAUACGGAGGGUGCAAGCGUUAAUCGGAAUUACUGGGCGUAAAGCGCACGCAGGCGGUUUGUUAAGUCAGAUGUGAAAUCCCCGGGCUCAACCUGGGAACUGCAUCUGAUACUGGCAAGCUUGAGUCUCGUAGAGGGGGGUAGAAUUCCAGGUGUAGCGGUGAAAUGCGUAGAGAUCUGGAGGAAUACCGGUGGCGAAGGCGGCCCCCUGGACGAAGACUGACGCUCAGGUGCGAAAGCGUGGGGAGCAAACAGGAUUAGAUACCCUGGUAGUCCACGCCGUAAACGAUGUCGACUUGGAGGUUGUGCCCUUGAGGCGUGGCUUCCGGAGCUAACGCGUUAAGUCGACCGCCUGGGGAGUACGGCCGCAAGGUUAAAACUCAAAUGAAUUGACGGGGGCCCGCACAAGCGGUGGAGCAUGUGGUUUAAUUCGAUGCAACGCGAAGAACCUUACCUGGUCUUGACAUCCACGGAAGUUUUCAGAGAUGAGAAUGUGCCUUCGGGAACCGUGAGACAGGUGCUGCAUGGCUGUCGUCAGCUCGUGUUGUGAAAUGUUGGGUUAAGUCCCGCAACGAGCGCAACCCUUAUCCUUUGUUGCCAGCGGUCCGGCCGGGAACUCAAAGGAGACUGCCAGUGAUAAACUGGAGGAAGGUGGGGAUGACGUCAAGUCAUCAUGGCCCUUACGACCAGGGCUACACACGUGCUACAAUGGCGCAUACAAAGAGAAGCGACCUCGCGAGAGCAAGCGGACCUCAUAAAGUGCGUCGUAGUCCGGAUUGGAGUCUGCAACUCGACUCCAUGAAGUCGGAAUCGCUAGUAAUCGUGGAUCAGAAUGCCACGGUGAAUACGUUCCCGGGCCUUGUACACACCGCCCGUCACACCAUGGGAGUGGGUUGCAAAAGAAGUAGGUAGCUUAACCUUCGGGAGGGCGCUUACCACUUUGUGAUUCAUGACUGGGGUGAAGUCGUAACAAGGUAACCGUAGGGGAACCUGCGGUUGGAUCA
GCGGCCGUAACUAUAACGGUCCUAAGG
UUUUUUUUUUUU
GGCUUAUCAAGAGAGGUGAAGGGACUGGCCCGACGAAACCCGGCAACCAGAAAUGGUGCCAAUUCCUGCAGCGGAAACGUUGAAAGAUGAGCCGA
CGGAGGAACUACUGUCUUCACGCC

==> jobs/LwDKtIlhXr4oSa7w-zTpgwxHMXikC-FInHXmvg/alis_pdb_rna_sequence.dbtype <==


==> jobs/LwDKtIlhXr4oSa7w-zTpgwxHMXikC-FInHXmvg/alis_pdb_rna_sequence.index <==
0	0	21015

==> jobs/LwDKtIlhXr4oSa7w-zTpgwxHMXikC-FInHXmvg/alis_pdb_rna_sequence.m8 <==
unnamed	3J6B_1	1.000	1894	0	0	1	1894	3533	5426	0.000E+00	3395	1894	3296	CAUCUAAGUAACUUAAGGAUAAGAAAUCAACAGAGAUAUUAUGAGUAUUGGUGAGAGAAAAUAAUAAAGGUCUAAUAAGUAUUAUGUGAAAAAAAUGUAAGAAAAUAGGAUAACAAAUUCUAAGACUAAAUACUAUUAAUAAGUAUAGUAAGUACCGUAAGGGAAAGUAUGAAAAUGAUUAUUUUAUAAGCAAUCAUGAAUAUAUUAUAUUAUAUUAAUGAUGUACCUUUUGUAUAAUGGGUCAGCAAGUAAUUAAUAUUAGUAAAACAAUAAGUUAUAAAUAAAUAGAAUAAUAUAUAUAUAUAAAAAAAUAUAUUAAAAUAUUUAAUUAAUAUUAAUUGACCCGAAAGCAAACGAUCUAACUAUGAUAAGAUGGAUAAACGAUCGAACAGGUUGAUGUUGCAAUAUCAUCUGAUUAAUUGUGGUUAGUAGUGAAAGACAAAUCUGGUUUGCAGAUAGCUGGUUUUCUAUGAAAUAUAUGUAAGUAUAGCCUUUAUAAAUAAUAAUUAUUAUAUAAUAUUAUAUUAAUAUUAUAUAAAGAAUGGUACAGCAAUUAAUAUAUAUUAGGGAACUAUUAAAGUUUUAUUAAUAAUAUUAAAUCUCGAAAUAUUUAAUUAUAUAUAAUAAAGAGUCAGAUUAUGUGCGAUAAGGUAAAUAAUCUAAAGGGAAACAGCCCAGAUUAAGAUAUAAAGUUCCUAAUAAAUAAUAAGUGAAAUAAAUAUUAAAAUAUUAUAAUAUAAUCAGUUAAUGGGUUUGACAAUAACCAUUUUUUAAUGAACAUGUAACAAUGCACUGAUUUAUAAUAAAUAAAAAAAAAUAAUAUUUAAAAUCAAAUAUAUAUAUAUUUGUUAAUAGAUAAUAUACGGAUCUUAAUAAUAAGAAUUAUUUAAUUCCUAAUAUGGAAUAUUAUAUUUUUAUAAUAAAAAUAUAAAUACUGAAUAUCUAAAUAUUAUUAUUACUUUUUUUUUAAUAAUAAUAAUAUGGUAAUAGAACAUUUAAUGAUAAUAUAUAUUAGUUAUUAAUUAAUAUAUGUAUUAAUUAAAUAGAGAAUGCUGACAUGAGUAACGAAAAAAAGGUAUAAACCUUUUCACCUAAAACAUAAGGUUUAACUAUAAAAGUACGGCCCCUAAUUAAAUUAAUAAGAAUAUAAAUAUAUUUAAGAUGGGAUAAUCUAUAUUAAUAAAAAUUUAUCUUAAAAUAUAUAUAUUAUUAAUAAUUAUAUUAAUUAAUUAAUAAUAUAUAUAAUUAUAUUAUAUAUUAUAUAUUUUUUAUAUAAUAUAAACUAAUAAAGAUCAGGAAAUAAUUAAUGUAUACCGUAAUGUAGACCGACUCAGGUAUGUAAGUAGAGAAUAUGAAGGUGAAUUAGAUAAUUAAAGGGAAGGAACUCGGCAAAGAUAGCUCAUAAGUUAGUCAAUAAAGAGUAAUAAGAACAAAGUUGUACAACUGUUUACUAAAAACACCGCACUUUGCAGAAACGAUAAGUUUAAGUAUAAGGUGUGAACUCUGCUCCAUGCUUAAUAUAUAAAUAAAAUUAUUUAACGAUAAUUUAAUUAAAUUUAGGUAAAUAGCAGCCUUAUUAUGAGGGUUAUAAUGUAGCGAAAUUCCUUGGCCUAUAAUUGAGGUCCCGCAUGAAUGACGUAAUGAUACAACAACUGUCUCCCCUUUAAGCUAAGUGAAAUUGAAAUCGUAGUGAAGAUGCUAUGUACCUUCAGCAAGACGGAAAGACCCUAUGCAGCUUUACUGUAAUUAGAUAGAUCGAAUUAUUGUUUAUUAUAUUCAGCAUAUUAAGUAAUCCUAUUAUUAGGUAAUCGUUUAGAUAUUAAUGAGAUACUUAUUAUAAUAUAAUGAUAAUUCUAAUCUUAUAAAUAAUUAUUAUUAUUAUUAUUAAUAAUAA	
ACCUCAGAUCAGACGUGGCGACCCGCUGAAUUUAAGCAUAUUAGUCAGCGGAGGAGAAGAAACUAACCAGGAUUCCCUCAGUAACGGCGAGUGAACAGGGAAGAGCCCAGCGCCGAAUCCCCGCCCCGCGGCGGGGCGCGGGACAUGUGGCGUACGGAAGACCCGCUCCCCGGCGCCGCUCGUGGGGGGCCCAAGUCCUUCUGAUCGAGGCCCAGCCCGUGGACGGUGUGAGGCCGGUAGCGGCCCCCGGCGCGCCGGGCCCGGGUCUUCCCGGAGUCGGGUUGCUUGGGAAUGCAGCCCAAAGCGGGUGGUAAACUCCAUCUAAGGCUAAAUACCGGCACGAGACCGAUAGUCAACAAGUACCGUAAGGGAAAGUUGAAAAGAACUUUGAAGAGAGAGUUCAAGAGGGCGUGAAACCGUUAAGAGGUAAACGGGUGGGGUCCGCGCAGUCCGCCCGGAGGAUUCAACCCGGCGGCGGGUCCGGCCGUGUCGGCGGCCCGGCGGAUCUUUCCCGCGCGGGGGACCGUCCCCCGACCGGCGACCGGCCGCCGCCGGGCGCAUUUCCACCGCGGCGGUGCGCCGCGACCGGCUCCGGGACGGCUGGGAAGGCCCGGCGGGGAAGGUGGCUCGGCCGAGUGUUACAGCCCCCCCGGCAGCAGCACUCGCCGAAUCCCGGGGCCGAGGGAGCGAGACCCGUCGCCGCGCUCUCCCCCCUCCCGGCGCCGCCGGGGGGGGCCGGGCCACCCCUCCCACGGCGCGACCGCUCGGGGCGGACUGUCCCCAGUGCGCCCCGGGCGGGUCGCGCCGUCGGGCCCGGGGGAGGGGCCACGCGCGCGUCCCCCGAAGAGGGGGACGGCGGAGCGAGCGCACGGGGUCGGCGGCGACGUCGGCUACCCACCCGACCCGUCUUGAAACACGGACCAAGGAGUCUAACACGUGCGCGAGUCGGGGGCUCGCACGAAAGCCGCCGUGGCGCAAUGAAGGUGAAGGCCGGCGCGCUCGCCGGCCGAGGUGGGAUCCCGAGGCCUCUCCAGUCCGCCGAGGGCGCACCACCGGCCCGUCUCGCCCGCCGCGCCGGGGAGGUGGAGCACGAGCGCACGUGUUAGGACCCGAAAGAUGGUGAACUAUGCCUGGGCAGGGCGAAGCCAGAGGAAACUCUGGUGGAGGUCCGUAGCGGUCCUGACGUGCAAAUCGGUCGUCCGACCUGGGUAUAGGGGCGAAAGACUAAUCGAACCAUCUAGUAGCUGGUUCCCUCCGAAGUUUCCCUCAGGAUAGCUGGCGCUCUCGCACACGCAGUUUUAUCCGGUAAAGCGAAUGAUUAGAGGUCUUGGGGCCGAAACGAUCUCAACCUAUUCUCAAACUUUAAAUGGGUAAGAAGCCCGGCUCGCUGGCGUGGAGCCGGGCGUGGAAUGCGAGUGCCUAGUGGGCCACUUUUGGUAAGCAGAACUGGCGCUGCGGGAUGAACCGAACGCCGGGUUAAGGCGCCCGAUGCCGACGCUCAUCAGACCCCAGAAAAGGUGUUGGUUGAUAUAGACAGCAGGACGGUGGCCAUGGAAGUCGGAAUCCGCUAAGGAGUGUGUAACAACUCACCUGCCGAAUCAACUAGCCCUGAAAAUGGAUGGCGCUGGAGCGUCGGGCCCAUACCCGGCCGUCGCCGGCAGUCGAGAGUGGACGGGAGCGGCGGGGGCGGCGCGCGCGCGCGCCCGCCCCCGGAGCCCCGCGGACGCUACGCCGCGACGAGUAGGAGGGCCGCUGCGGUGAGCCUUGAAGCCUAGGGCGCGGGCCCGGGUGGAGCCGCCGCAGGUGCAGAUCUUGGUGGUAGUAGCAAAUAUUCAAACGAGAACUUUGAAGGCCGAAGUGGAGAAGGGUUCCAUGUGAACAGCAGUUGAACAUG
unnamed	5MRC_1	0.999	1894	2	0	1	1894	3533	5426	0.000E+00	3390	1894	3296	CAUCUAAGUAACUUAAGGAUAAGAAAUCAACAGAGAUAUUAUGAGUAUUGGUGAGAGAAAAUAAUAAAGGUCUAAUAAGUAUUAUGUGAAAAAAAUGUAAGAAAAUAGGAUAACAAAUUCUAAGACUAAAUACUAUUAAUAAGUAUAGUAAGUACCGUAAGGGAAAGUAUGAAAAUGAUUAUUUUAUAAGCAAUCAUGAAUAUAUUAUAUUAUAUUAAUGAUGUACCUUUUGUAUAAUGGGUCAGCAAGUAAUUAAUAUUAGUAAAACAAUAAGUUAUAAAUAAAUAGAAUAAUAUAUAUAUAUAAAAAAAUAUAUUAAAAUAUUUAAUUAAUAUUAAUUGACCCGAAAGCAAACGAUCUAACUAUGAUAAGAUGGAUAAACGAUCGAACAGGUUGAUGUUGCAAUAUCAUCUGAUUAAUUGUGGUUAGUAGUGAAAGACAAAUCUGGUUUGCAGAUAGCUGGUUUUCUAUGAAAUAUAUGUAAGUAUAGCCUUUAUAAAUAAUAAUUAUUAUAUAAUAUUAUAUUAAUAUUAUAUAAAGAAUGGUACAGCAAUUAAUAUAUAUUAGGGAACUAUUAAAGUUUUAUUAAUAAUAUUAAAUCUCGAAAUAUUUAAUUAUAUAUAAUAAAGAGUCAGAUUAUGUGCGAUAAGGUAAAUAAUCUAAAGGGAAACAGCCCAGAUUAAGAUAUAAAGUUCCUAAUAAAUAAUAAGUGAAAUAAAUAUUAAAAUAUUAUAAUAUAAUCAGUUAAUGGGUUUGACAAUAACCAUUUUUUAAUGAACAUGUAACAAUGCACUGAUUUAUAAUAAAUAAAAAAAAAUAAUAUUUAAAAUCAAAUAUAUAUAUAUUUGUUAAUAGAUAAUAUACGGAUCUUAAUAAUAAGAAUUAUUUAAUUCCUAAUAUGGAAUAUUAUAUUUUUAUAAUAAAAAUAUAAAUACUGAAUAUCUAAAUAUUAUUAUUACUUUUUUUUUAAUAAUAAUAAUAUGGUAAUAGAACAUUUAAUGAUAAUAUAUAUUAGUUAUUAAUUAAUAUAUGUAUUAAUUAAAUAGAGAAUGCUGACAUGAGUAACGAAAAAAAGGUAUAAACCUUUUCACCUAAAACAUAAGGUUUAACUAUAAAAGUACGGCCCCUAAUUAAAUUAAUAAGAAUAUAAAUAUAUUUAAGAUGGGAUAAUCUAUAUUAAUAAAAAUUUAUCUUAAAAUAUAUAUAUUAUUAAUAAUUAUAUUAAUUAAUUAAUAAUAUAUAUAAUUAUAUUAUAUAUUAUAUAUUUUUUAUAUAAUAUAAACUAAUAAAGAUCAGGAAAUAAUUAAUGUAUACCGUAAUGUAGACCGACUCAGGUAUGUAAGUAGAGAAUAUGAAGGUGAAUUAGAUAAUUAAAGGGAAGGAACUCGGCAAAGAUAGCUCAUAAGUUAGUCAAUAAAGAGUAAUAAGAACAAAGUUGUACAACUGUUUACUAAAAACACCGCACUUUGCAGAAACGAUAAGUUUAAGUAUAAGGUGUGAACUCUGCUCCAUGCUUAAUAUAUAAAUAAAAUUAUUUAACGAUAAUUUAAUUAAAUUUAGGUAAAUAGCAGCCUUAUUAUGAGGGUUAUAAUGUAGCGAAAUUCCUUGGCCUAUAAUUGAGGUCCCGCAUGAAUGACGUAAUGAUACAACAACUGUCUCCCCUUUAAGCUAAGUGAAAUUGAAAUCGUAGUGAAGAUGCUAUGUACCUUCAGCAAGACGGAAAGACCCUAUGCAGCUUUACUGUAAUUAGAUAGAUCGAAUUAUUGUUUAUUAUAUUCAGCAUAUUAAGUAAUCCUAUUAUUAGGUAAUCGUUUAGAUAUUAAUGAGAUACUUAUUAUAAUAUAAUGAUAAUUCUAAUCUUAUAAAUAAUUAUUAUUAUUAUUAUUAAUAAUAA	
ACGCGCCAGCGCCGAUGGUACUGGGCGGGCGACCGCCUGGGAGAGUAGGUCGGUGCGGGGGA
AAGCAGCUUUACAGAUCAAUGGCGGAGGGAGGUCAACAUCAAGAACUGUGGGCCUUUUAUUGCCUAUAGAACUUAUAACGAACAUGGUUCUUGCCUUUUACCAGAACCAUCCGGGUGUUGUCUCCAUAGAAACAGGUAAAGCUGUCCGUUACUGUGGGCUUGCCAUAUUUUUUGGAACUUUUCUGCCCUUUUUCUCAAUGAGUAAGGAGGGCGU
GGGCUGAAGGAUGGAGACGUCUAGGCCC
CGCGACCUCAGAUCAGACGUGGCGACCCGCUGAAUUUAAGCAUAUUAGUCAGCGGAGGAAAAGAAACUAACCAGGAUUCCCUCAGUAACGGCGAGUGAACAGGGAAGAGCCCAGCGCCGAAUCCCCGCCCCGCGGGGCGCGGGACAUGUGGCGUACGGAAGACCCGCUCCCCGGCGCCGCUCGUGGGGGGCCCAAGUCCUUCUGAUCGAGGCCCAGCCCGUGGACGGUGUGAGGCCGGUAGCGGCCCCCGGCGCGCGCCCGGGUCUUCCCGGAGUCGGGUUGCUUGGGAAUGCAGCCCAAAGCGGGUGGUAAACUCCAUCUAAGGCUAAAUACCGGCACGAGACCGAUAGUCAACAAGUACCGUAAGGGAAAGUUGAAAAGAACUUUGAAGAGAGAGUUCAAGAGGGCGUGAAACCGUUAAGAGGUAAACGGGUGGGGUCCGCGCAGUCCGCCCGGAGGAUUCAACCCGGCGGCGGGUCCGGCCGUGUCGGCGGCCCGGCGGAUCUUUCCCGCCCCCGGGGUCGGCGGGGGACCGUCCCCCGGACCGGCGACCGGCCGCCGCCGGGCGCAUUUCCAGGCGGUGCGCCGCGACCGGCUCCGGGACGGCUGGGAAGGCCCGGCGGGGAAGGUGGCUCGGGGCCCCCGAGUGUUACAGCCCCCCCGGCAGCAGCACUCGCCGAAUCCCGGGGCCGAGGGAGCGAGACCCGUCGCCGCGCUCUCCCCCCUGGGGGGGCCGGGCCACCCCUCCCACGGCGCGACCGCUCUCCCAGGGGGCGGGGCGGACUGUCCCCAGUGCGCCCCGGGCGGGUCGCGCCGUCGGGCCCGGGGGGCCACGCGCGCGUCCCGGGACGGCGGAGCGAGCGCACGGGGUCGGCGGCGACGUCGGCUACCCACCCGACCCGUCUUGAAACACGGACCAAGGAGUCUAACACGUGCGCGAGUCGGGGGCUCGCACGAAAGCCGCCGUGGCGCAAUGAAGGUGAAGGCCGGCGCGCUCGCCGGCCGAGGUGGGAUCCCGAGGCCUCUCCAGUCCGCCGAGGGGCACCACCGGCCCGUCUCGCCCGCCGCGCCGGGGAGGUGGAGCACGAGCGCACGUGUUAGGACCCGAAAGAUGGUGAACUAUGCCUGGGCAGGGCGAAGCCAGAGGAAACUCUGGUGGAGGUCCGUAGCGGUCCUGACGUGCAAAUCGGUCGUCCGACCUGGGUAUAGGGGCGAAAGACUAAUCGAACCAUCUAGUAGCUGGUUCCCUCCGAAGUUUCCCUCAGGAUAGCUGGCGCUCUCGCCCACGCAGUUUUAUCCGGUAAAGCGAAUGAUUAGAGGUCUUGGGGCCGAAACGAUCUCAACCUAUUCUCAAACUUUAAAUGGGUAAGAAGCCCGGCUCGCUGGCGUGGAGCCGGGGUGGAAUGCGAGUGCCUAGUGGGCCACUUUUGGUAAGCAGAACUGGCGCUGCGGGAUGAACCGAACGCCGGGUUAAGGCGCCCGAUGCCGACGCUCAUCAGACCCCAGAAAAGGUGUUGGUUGAUAUAGACAGCAGGACGGUGGCCAUGGAAGUCGGAAUCCGCUAAGGAGUGUGUAACAACUCAC
unnamed	5MRF_1	0.999	1894	2	0	1	1894	3533	5426	0.000E+00	3390	1894	3296	CAUCUAAGUAACUUAAGGAUAAGAAAUCAACAGAGAUAUUAUGAGUAUUGGUGAGAGAAAAUAAUAAAGGUCUAAUAAGUAUUAUGUGAAAAAAAUGUAAGAAAAUAGGAUAACAAAUUCUAAGACUAAAUACUAUUAAUAAGUAUAGUAAGUACCGUAAGGGAAAGUAUGAAAAUGAUUAUUUUAUAAGCAAUCAUGAAUAUAUUAUAUUAUAUUAAUGAUGUACCUUUUGUAUAAUGGGUCAGCAAGUAAUUAAUAUUAGUAAAACAAUAAGUUAUAAAUAAAUAGAAUAAUAUAUAUAUAUAAAAAAAUAUAUUAAAAUAUUUAAUUAAUAUUAAUUGACCCGAAAGCAAACGAUCUAACUAUGAUAAGAUGGAUAAACGAUCGAACAGGUUGAUGUUGCAAUAUCAUCUGAUUAAUUGUGGUUAGUAGUGAAAGACAAAUCUGGUUUGCAGAUAGCUGGUUUUCUAUGAAAUAUAUGUAAGUAUAGCCUUUAUAAAUAAUAAUUAUUAUAUAAUAUUAUAUUAAUAUUAUAUAAAGAAUGGUACAGCAAUUAAUAUAUAUUAGGGAACUAUUAAAGUUUUAUUAAUAAUAUUAAAUCUCGAAAUAUUUAAUUAUAUAUAAUAAAGAGUCAGAUUAUGUGCGAUAAGGUAAAUAAUCUAAAGGGAAACAGCCCAGAUUAAGAUAUAAAGUUCCUAAUAAAUAAUAAGUGAAAUAAAUAUUAAAAUAUUAUAAUAUAAUCAGUUAAUGGGUUUGACAAUAACCAUUUUUUAAUGAACAUGUAACAAUGCACUGAUUUAUAAUAAAUAAAAAAAAAUAAUAUUUAAAAUCAAAUAUAUAUAUAUUUGUUAAUAGAUAAUAUACGGAUCUUAAUAAUAAGAAUUAUUUAAUUCCUAAUAUGGAAUAUUAUAUUUUUAUAAUAAAAAUAUAAAUACUGAAUAUCUAAAUAUUAUUAUUACUUUUUUUUUAAUAAUAAUAAUAUGGUAAUAGAACAUUUAAUGAUAAUAUAUAUUAGUUAUUAAUUAAUAUAUGUAUUAAUUAAAUAGAGAAUGCUGACAUGAGUAACGAAAAAAAGGUAUAAACCUUUUCACCUAAAACAUAAGGUUUAACUAUAAAAGUACGGCCCCUAAUUAAAUUAAUAAGAAUAUAAAUAUAUUUAAGAUGGGAUAAUCUAUAUUAAUAAAAAUUUAUCUUAAAAUAUAUAUAUUAUUAAUAAUUAUAUUAAUUAAUUAAUAAUAUAUAUAAUUAUAUUAUAUAUUAUAUAUUUUUUAUAUAAUAUAAACUAAUAAAGAUCAGGAAAUAAUUAAUGUAUACCGUAAUGUAGACCGACUCAGGUAUGUAAGUAGAGAAUAUGAAGGUGAAUUAGAUAAUUAAAGGGAAGGAACUCGGCAAAGAUAGCUCAUAAGUUAGUCAAUAAAGAGUAAUAAGAACAAAGUUGUACAACUGUUUACUAAAAACACCGCACUUUGCAGAAACGAUAAGUUUAAGUAUAAGGUGUGAACUCUGCUCCAUGCUUAAUAUAUAAAUAAAAUUAUUUAACGAUAAUUUAAUUAAAUUUAGGUAAAUAGCAGCCUUAUUAUGAGGGUUAUAAUGUAGCGAAAUUCCUUGGCCUAUAAUUGAGGUCCCGCAUGAAUGACGUAAUGAUACAACAACUGUCUCCCCUUUAAGCUAAGUGAAAUUGAAAUCGUAGUGAAGAUGCUAUGUACCUUCAGCAAGACGGAAAGACCCUAUGCAGCUUUACUGUAAUUAGAUAGAUCGAAUUAUUGUUUAUUAUAUUCAGCAUAUUAAGUAAUCCUAUUAUUAGGUAAUCGUUUAGAUAUUAAUGAGAUACUUAUUAUAAUAUAAUGAUAAUUCUAAUCUUAUAAAUAAUUAUUAUUAUUAUUAUUAAUAAUAA	
UGGGAUUAGCUAGUAGGUGGGGUAACGGCUCACCUAGGCGACGAUCCCUAGCUGGUCUGAGAGGAUGACCAGCCACACUGGAACUGAGACACGGUCCAGACUCCUACGGGAGGCAGCAGUGGGGAAUAUUGCACAAUGGGCGCAAGCCUGAUGCAGCCAUGCCGCGUGUAUGAAGAAGGCCUUCGGGUUGUAAAGUACUUUCAGCGGGGAGGAAGGGAGUAAAGUUAAUACCUUUGCUCAUUGACGUUACCCGCAGAAGAAGCACCGGCUAACUCCGUGCCAGCAGCCGCGGUAAUACGGAGGGUGCAAGCGUUAAUCGGAAUUACUGGGCGUAAAGCGCACGCAGGCGGUUUGUUAAGUCAGAUGUGAAAUCCCCGGGCUCAACCUGGGAACUGCAUCUGAUACUGGCAAGCUUGAGUCUCGUAGAGGGGGGUAGAAUUCCAGGUGUAGCGGUGAAAUGCGUAGAGAUCUGGAGGAAUACCGGUGGCGAAGGCGGCCCCCUGGACGAAGACUGACGCUCAGGUGCGAAAGCGUGGGGAGCAAACAGGAUUAGAUACCCUGGUAGUCCACGCCGUAAACGAUGUCGACUUGGAGGUUGUGCCCUUGAGGCGUGGCUUCCGGAGCUAACGCGUUAAGUCGACCGCCUGGGGAGUACGGCCGCAAGGUUAAAACUCAAAUGAAUUGACGGGGGCCCGCACAAGCGGUGGAGCAUGUGGUUUAAUUCGAUGCAACGCGAAGAACCUUACCUGGUCUUGACAUCCACGGAAGUUUUCAGAGAUGAGAAUGUGCCUUCGGGAACCGUGAGACAGGUGCUGCAUGGCUGUCGUCAGCUCGUGUUGUGAAAUGUUGGGUUAAGUCCCGCAACGAGCGCAACCCUUAUCCUUUGUUGCCAGCGGUCCGGCCGGGAACUCAAAGGAGACUGCCAGUGAUAAACUGGAGGAAGGUGGGGAUGACGUCAAGUCAUCAUGGCCCUUACGACCAGGGCUACACACGUGCUACAAUGGCGCAUACAAAGAGAAGCGACCUCGCGAGAGCAAGCGGACCUCAUAAAGUGCGUCGUAGUCCGGAUUGGAGUCUGCAACUCGACUCCAUGAAGUCGGAAUCGCUAGUAAUCGUGGAUCAGAAUGCCACGGUGAAUACGUUCCCGGGCCUUGUACACACCGCCCGUCACACCAUGGGAGUGGGUUGCAAAAGAAGUAGGUAGCUUAACCUUCGGGAGGGCGCUUACCACUUUGUGAUUCAUGACUGGGGUGAAGUCGUAACAAGGUAACCGUAGGGGAACCUGCGGUUGGAUCA
GCGGCCGUAACUAUAACGGUCCUAAGG
UUUUUUUUUUUU
GGCUUAUCAAGAGAGGUGAAGGGACUGGCCCGACGAAACCCGGCAACCAGAAAUGGUGCCAAUUCCUGCAGCGGAAACGUUGAAAGAUGAGCCGA
CGGAGGAACUACUGUCUUCACGCC

Can query caching be turned off?

It seems that the server reads results from the cache in the jobs directory (/docker-compose/jobs) whenever the query sequence has already been processed once by the server. I guess the job ID is some kind of hash of the sequence string.
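If the job ID is indeed derived from a hash of the query (this is the poster's guess; the exact hashing scheme, including whether search parameters are mixed in, is an assumption here), the caching behaviour can be illustrated with a minimal sketch: identical queries map to identical IDs, and therefore to the same cached job directory.

```shell
# Hypothetical sketch: derive a job ID by hashing the query sequence.
# sha256 is an assumption; the server may use a different digest and
# may include search parameters in the hash.
query="ACGUACGU"
job_id=$(printf '%s' "$query" | sha256sum | cut -d ' ' -f 1)
echo "$job_id"
```

Under this model, resubmitting the same sequence never recomputes the alignment, because the lookup by job ID succeeds first.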

Question: can this behaviour be overridden, so that the alignment is always performed instead of being read from the cache? The problem is that I would like to periodically remove old queries to keep the cache from filling the disk, but with the current system, queries start failing once their results are no longer found in the cache.
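The periodic cleanup described above can be sketched as a cron-style one-liner; a minimal example, assuming cached jobs live as one subdirectory each under the jobs directory (the path and the 30-day retention window are placeholders to adjust for your deployment):

```shell
# Hypothetical cleanup: delete cached job directories older than 30 days.
# JOBS_DIR must point at the jobs volume of your docker-compose setup.
JOBS_DIR="./jobs"
find "$JOBS_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
```

Note that, as the question points out, whether a re-submitted query recomputes cleanly or fails after its directory is removed depends on how the server handles a cache miss.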

Missing params format specification

The "Params file" section in the docker-compose README is empty.
The basic parameters are easy to infer from the examples, but it would be useful to have the format documented in the README for quick reference.

Thanks for maintaining this amazing tool!
Simone
