Comments (4)
Thank you for taking the time to install and look at txtai!
I've heard of Hugging Face's datasets library and it does look very nice. But I'm not exactly clear on what integration you envisioned. Would you mind expanding on that? Are you thinking of a way to take a dataset and easily build a txtai index from it?
from txtai.
If you look at https://huggingface.co/docs/datasets/faiss_and_ea.html, you'll see their library allows you to index memory-mapped columns of the underlying PyArrow table with FAISS and Elasticsearch. The datasets library also supports loading common data formats like CSV, JSONL, etc.
So, say I have a large corpus that I want to index: I can map my vectorizer over the datasets table efficiently. Here is the dense-vector part of the pipeline.
    from typing import Dict, List, Optional, Tuple

    import numpy as np
    import tqdm
    import nlp  # Hugging Face's datasets library (published as "nlp" at the time)


    class DenseInformationRetriever:
        # "Vectorizer" below is the author's own callable type: batches of text in,
        # embedding vectors out
        _EMBEDDING_COL_NAME = "embeddings"

        def __init__(
            self,
            documents: nlp.Dataset,
            doc_vectorizer: Vectorizer,
            query_vectorizer: Vectorizer,
            batch_size: int = 512,
            cache_file: Optional[str] = None,
            string_factory: Optional[str] = "Flat",
            train_size: Optional[int] = None,
            min_train_pct: Optional[float] = 0.4,
        ):
            self.docs = documents
            self.batch_size = batch_size
            self.query_vectorizer = query_vectorizer

            # Vectorize documents in batches, optionally caching results to disk
            self.docs = self.docs.map(
                lambda examples: {self._EMBEDDING_COL_NAME: doc_vectorizer(examples)},
                batched=True,
                batch_size=batch_size,
                cache_file_name=cache_file,
            )

            # Cap the FAISS training set at a fraction of the corpus
            if train_size:
                train_size = min(train_size, int(min_train_pct * len(self.docs)))

            self.docs.add_faiss_index(
                column=self._EMBEDDING_COL_NAME,
                string_factory=string_factory,
                train_size=train_size,
            )

        def search(
            self, queries: nlp.Dataset, k: int = 10
        ) -> List[Tuple[List[float], Dict[str, List]]]:
            query_vectorizer = self.query_vectorizer
            embedding_col_name = self._EMBEDDING_COL_NAME
            queries = queries.map(
                lambda examples: {embedding_col_name: query_vectorizer(examples)},
                batched=True,
                batch_size=self.batch_size,
            )
            embeddings = queries[self._EMBEDDING_COL_NAME]

            # Query the FAISS index in batches of 32
            results = []
            for i in tqdm.tqdm(range(0, len(embeddings), 32)):
                batch_scores, batch_retrieved = self.docs.get_nearest_examples_batch(
                    self._EMBEDDING_COL_NAME,
                    np.array(embeddings[i : i + 32], dtype=np.float32),
                    k=k,
                )
                results.extend(list(zip(batch_scores, batch_retrieved)))
            return results

        def vector_search(
            self, vector: List[float], k: int = 10
        ) -> Tuple[List[float], Dict[str, List]]:
            scores, retrieved_examples = self.docs.get_nearest_examples(
                self._EMBEDDING_COL_NAME, np.array(vector, dtype=np.float32), k=k
            )
            return scores, retrieved_examples
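To make the retrieval step above concrete: with the default "Flat" string factory, the FAISS index is exhaustive, so `get_nearest_examples_batch` amounts to a brute-force inner-product search returning the top-k documents per query. Here is a minimal NumPy sketch of that computation (names like `doc_vecs` are illustrative, not part of either library's API):

    import numpy as np

    def nearest_examples_batch(doc_vecs: np.ndarray, query_vecs: np.ndarray, k: int):
        """Return (scores, indices) of the k highest inner-product docs per query."""
        scores = query_vecs @ doc_vecs.T                       # (n_queries, n_docs)
        topk = np.argsort(-scores, axis=1)[:, :k]              # best k per row
        topk_scores = np.take_along_axis(scores, topk, axis=1)
        return topk_scores, topk

    docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]], dtype=np.float32)
    queries = np.array([[1.0, 0.1]], dtype=np.float32)
    scores, idx = nearest_examples_batch(docs, queries, k=2)
    # idx[0] -> [0, 2]: the query is closest to doc 0, then doc 2

Non-"Flat" factories (e.g. IVF variants) trade this exact search for an approximate one, which is where the `train_size` parameter comes in.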
I thought it could be quite useful to use this to support processing the large datasets that users of txtai might have. They could even easily publish their processed datasets (before vectorizing) to the Hugging Face Hub. I will try to put a working POC in a repo this week.
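The reason this scales to large datasets is the batched `map` call above: the vectorizer only ever sees a fixed-size slice of the corpus, so memory stays bounded no matter how many documents there are. A dependency-free sketch of that pattern (the `fake_vectorizer` here is a hypothetical stand-in for a real encoder, not part of any library):

    from typing import Callable, List

    def batched_map(texts: List[str],
                    fn: Callable[[List[str]], List[List[float]]],
                    batch_size: int = 2) -> List[List[float]]:
        """Apply fn to fixed-size slices, mirroring Dataset.map(batched=True)."""
        out: List[List[float]] = []
        for i in range(0, len(texts), batch_size):
            out.extend(fn(texts[i : i + batch_size]))
        return out

    def fake_vectorizer(batch: List[str]) -> List[List[float]]:
        # Stand-in encoder: one trivial "embedding" per input text
        return [[float(len(t))] for t in batch]

    embeddings = batched_map(["a", "bb", "ccc"], fake_vectorizer, batch_size=2)
    # embeddings -> [[1.0], [2.0], [3.0]]

The datasets version additionally memory-maps the results and caches them to disk (`cache_file_name`), so a re-run skips the vectorization entirely.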
Got it, thank you, I will keep this in mind and consider options to integrate datasets into txtai.
An example notebook has been added to show how to integrate txtai and Hugging Face's Datasets.