
Comments (22)

yhmo commented on May 18, 2024

Thanks for your feature request.
So far Milvus doesn't let users specify the data partition logic, but we are actually planning this.
A possible solution is:
1. Extend the insert API: insert(table_name, vector_list, vector_id, partition_hint). If the user provides a partition_hint, the vectors will be stored in a partition folder.
2. Add a new API: delete_vectors_by_partition(table_name, partition_hint). The user calls this API to delete the vectors of a certain partition (see the sketch below).
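
For illustration only, the proposed calls might look roughly like this from the Python client. Everything here is hypothetical: the method names, signatures, and the partition_hint parameter are not a released Milvus interface and may change.

```python
# Hypothetical sketch of the proposed partition-aware APIs; not a released
# Milvus interface, names and signatures may change.
import random
from milvus import Milvus

client = Milvus()
client.connect(host='127.0.0.1', port='19530')  # placeholder host/port

# 256-dimensional demo vectors
vector_list = [[random.random() for _ in range(256)] for _ in range(1000)]

# Proposed: insert with a partition hint, e.g. the date the vectors were generated
status, ids = client.insert('demo_table', vector_list, None,      # vector_id left as None
                            partition_hint='2019-11-01')

# Proposed: delete every vector stored under that partition hint
client.delete_vectors_by_partition('demo_table', partition_hint='2019-11-01')
```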

The final solution is not yet decided; please feel free to tell us if you have any suggestions.
Thanks!

njuslj commented on May 18, 2024

Thanks for your solution.
If a user specifies data partition logic, such as partitioning by date, will the recall rate or search speed decrease? Is there an upper limit on the number of partitions?

Looking forward to your reply, thanks!

yhmo commented on May 18, 2024

Theoretically, partition logic won't affect the recall rate, but it could affect search performance. For instance, assume we have 10,000 vectors: if we put them into 1,000 partitions, each partition contains only 10 vectors (too few to build an index on), so the search becomes a brute-force scan; but if we put them all into one partition, we can build an index for that partition and get the best search performance.
In my opinion Milvus shouldn't limit the partition number; the user has to take responsibility for choosing a reasonable one.

njuslj commented on May 18, 2024

OK, if we put all vectors into 100 partitions by date, so that each partition has one million vectors, roughly how much would search performance decrease compared with a single partition?

yhmo commented on May 18, 2024

The performance is the same, since one million vectors will be split into smaller data files (each file is about 1 GB by default).
The partition number only affects search performance when the number of vectors per partition is too small.

njuslj commented on May 18, 2024

Thanks!
"one million vectors will be split into small data files", Is this quantity of the small data files determined by parameter "nlist" ?

yhmo commented on May 18, 2024

For Milvus 0.3.x: it is defined by index_building_threshold in server_config.yaml.

For Milvus 0.4.x and 0.5.x: it is defined by the create_table API. A Python example:
create_table({'table_name': TABLE_NAME, 'dimension': TABLE_DIMENSION, 'index_file_size': 1024, 'metric_type': MetricType.L2})
The unit of 'index_file_size' is MB, and the default value is 1024 MB.
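
A minimal runnable sketch, assuming the pymilvus client of that era and placeholder connection settings, table name, and dimension:

```python
from milvus import Milvus, MetricType

client = Milvus()
client.connect(host='127.0.0.1', port='19530')  # placeholder host/port

# Inserted vectors are flushed into a new raw data file roughly every
# index_file_size MB, and each of those files is indexed separately.
client.create_table({'table_name': 'demo_table',
                     'dimension': 512,
                     'index_file_size': 1024,       # MB
                     'metric_type': MetricType.L2})
```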

njuslj commented on May 18, 2024

OK, got it.
For Milvus 0.3.1, what is the effect of the parameter "nlist" in the config file "server_config"?

yhmo commented on May 18, 2024

'nlist' controls how the vectors within a file are split into clusters when the index is built. Assume one file contains 10,000 vectors and 'nlist' is set to 200: when the user calls 'build_index', the 10,000 vectors are split into 200 clusters (not equally sized), and each cluster gets an index.
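
A sketch of building such an index from the Python client. Treat the exact names as assumptions: the IVF index enum and create_index signature vary between client versions.

```python
from milvus import Milvus, IndexType

client = Milvus()
client.connect(host='127.0.0.1', port='19530')  # placeholder host/port

# With nlist=200, the vectors inside each data file are grouped into 200
# clusters when the index is built; searches then scan whole clusters.
client.create_index('demo_table', {'index_type': IndexType.IVFLAT, 'nlist': 200})
```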

njuslj commented on May 18, 2024

Is the recall rate sensitive to this parameter 'nlist'?

yhmo commented on May 18, 2024

There is another parameter, 'nprobe', related to 'nlist'. 'nprobe' is a search parameter: it is the number of clusters that will be picked to find the topk result, and it must always be less than or equal to 'nlist'. Both parameters affect search performance and recall rate.
Assume a file contains 10,000 vectors.
If you set 'nlist'=1, 'nprobe'=1, all vectors are in a single cluster and the search engine scans every vector in that cluster, so the recall rate is 100%, but search performance is poor since all 10,000 vectors are computed.
If you set 'nlist'=100, 'nprobe'=1, the 10,000 vectors are split into 100 clusters; the search engine first finds the closest cluster, then finds the topk within it. The recall rate may drop below 90%, but search performance is good.
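
To make the tradeoff concrete, here is a toy, self-contained illustration of the IVF idea behind 'nlist' and 'nprobe'. This is not Milvus code, just a sketch of the clustering-and-probing logic:

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.random((10_000, 64)).astype(np.float32)
nlist, nprobe, topk = 100, 1, 10

# "Index building": pick nlist centroids and assign each vector to its closest one.
centroids = vectors[rng.choice(len(vectors), nlist, replace=False)]
d2 = ((vectors ** 2).sum(1, keepdims=True)
      - 2 * vectors @ centroids.T
      + (centroids ** 2).sum(1))
assign = d2.argmin(axis=1)

def ivf_search(query):
    # Probe only the nprobe closest clusters, then brute-force their members.
    nearest = np.argsort(((centroids - query) ** 2).sum(1))[:nprobe]
    cand = np.where(np.isin(assign, nearest))[0]
    dists = ((vectors[cand] - query) ** 2).sum(1)
    return cand[np.argsort(dists)[:topk]]

# With nprobe=1 only ~1/nlist of the data is scanned (fast, recall may drop);
# raising nprobe toward nlist scans more clusters and recovers recall.
print(ivf_search(rng.random(64, dtype=np.float32)))
```

Milvus does the real clustering with trained centroids and optimized kernels; the sketch only mirrors the scan-fewer-clusters-versus-recall tradeoff.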

njuslj commented on May 18, 2024

OK. If the parameters are set to 'nlist'=100, 'nprobe'=1 versus 'nlist'=100, 'nprobe'=10, roughly how much will query efficiency differ?

yhmo commented on May 18, 2024

It is hard to say. Query performance is affected by many factors, including data swapping, index parameters, search parameters, hardware capability, and so on.

njuslj commented on May 18, 2024

If the above factors are the same, will the query time increase linearly as nprobe increases?

yhmo commented on May 18, 2024

A query has several phases: collecting/preparing index files, loading data from disk to CPU, comparing against the index, finding the topk within the nprobe clusters, reducing to the final result, serializing the result and sending it to the client, etc.
The nprobe parameter only affects one of these phases. Although that phase's time cost grows linearly with nprobe, the whole query time does not grow linearly.
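
A rough way to see this in practice. This assumes a pymilvus client where search_vectors accepts top_k and nprobe; the exact method name and signature differ between client versions, so treat it as a sketch:

```python
import time
from milvus import Milvus

client = Milvus()
client.connect(host='127.0.0.1', port='19530')  # placeholder host/port

query = [[0.1] * 512]  # one placeholder 512-dim query vector

for nprobe in (1, 10, 100):
    start = time.time()
    status, results = client.search_vectors(table_name='demo_table',
                                            query_records=query,
                                            top_k=1000,
                                            nprobe=nprobe)
    # Only the cluster-scan phase grows with nprobe, so total latency
    # increases with nprobe but not proportionally.
    print(nprobe, round(time.time() - start, 4))
```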

njuslj commented on May 18, 2024

Thanks for your detailed analysis.

njuslj commented on May 18, 2024

Will Milvus recall rates and performance differ significantly between skewed and evenly distributed data sets?

yhmo commented on May 18, 2024

I don't think it significantly affects recall rate or performance, but I would say that an evenly distributed data set is the better practice.

njuslj commented on May 18, 2024

OK. In our previous usage, on a data set containing ten million vectors with nlist set to the default of 16384: when nprobe is 1 and we return the top 1000, a cluster actually contains about 50 vectors, of which 20 are recalled, a recall rate of 40%; when nprobe is increased to 100, the recall rate is 90%. In this case, should we increase nlist and decrease nprobe?

yhmo commented on May 18, 2024

The default value of 'index_file_size' is 1024 MB. Assume the 10M vectors are 512-dimensional; then each file contains about 500,000 vectors. With 'nlist' set to 16384, each cluster contains about 30 vectors. If 'nprobe' is set to 1 and topk to 1000, and the single probed cluster contains only 35 vectors, the result will look like this:
id = 12340 distance = 0.0
id = 34743 distance = 71.00025939941406
..... 35 valid items
id = 63112 distance = 92.93685913085938
id = 98257 distance = 93.01753997802734
id = -1 distance = 3.4028234663852886e+38
id = -1 distance = 3.4028234663852886e+38
......
id = -1 distance = 3.4028234663852886e+38
..... 965 invalid items

Only 35 valid items are returned to the client, so the recall rate is very poor.
To increase the recall rate, you need to increase 'nprobe': the larger the 'nprobe', the higher the recall rate. If 'nprobe' equals 'nlist', the recall rate is 100%.
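
The arithmetic behind those numbers, as a quick check assuming 512-dimensional float32 vectors:

```python
dim, bytes_per_float = 512, 4
index_file_size_mb = 1024
nlist = 16384

# How many vectors fit in one ~1 GB data file, and how many land in each cluster.
vectors_per_file = index_file_size_mb * 1024 * 1024 // (dim * bytes_per_float)
print(vectors_per_file)          # ~524288 vectors per data file
print(vectors_per_file / nlist)  # ~32 vectors per cluster on average
```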

njuslj commented on May 18, 2024

Thanks for your detailed analysis.
Looking forward to the new API for deleting vectors by their generation date.

Best wishes!

yhmo commented on May 18, 2024

#77 'Support Table partition' has already been implemented in 0.6.0. Please wait for the 0.6.0 release.
