jay-dox's Projects
Prototype code demonstrating how to run CodeLlama locally and connect it to MySQL using LangChain
FastAPI Best Practices and Conventions we used at our startup
HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels
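The core HyDE move is to embed an LLM-generated hypothetical answer document rather than the raw query, then retrieve by dense similarity. A minimal sketch of that flow, where `embed` is a toy character-frequency stub standing in for a real dense encoder and `generate_hypothetical` stands in for an LLM call:

```python
import math

def embed(text: str) -> list[float]:
    # Stub embedding: normalized character-frequency vector.
    # A real HyDE setup would use a dense encoder instead.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(c) for c in alphabet]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def cosine(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

def hyde_search(query: str, generate_hypothetical, corpus: list[str]) -> str:
    # HyDE: embed a hypothetical answer to the query, not the query
    # itself, then do nearest-neighbor retrieval over the corpus.
    hypothetical = generate_hypothetical(query)
    q_vec = embed(hypothetical)
    return max(corpus, key=lambda doc: cosine(q_vec, embed(doc)))
```

Because retrieval compares document to document rather than query to document, the hypothetical answer can be wrong in its specifics and still land near the right passage.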
Config files for my GitHub profile.
Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
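The unifying idea behind this style of library can be sketched as dispatch on a `provider/model` prefix while keeping the OpenAI response shape. The registry and `echo` backend below are illustrative stubs, not the library's actual API; real backends would issue HTTP requests to each provider:

```python
from typing import Callable

# Registry mapping a provider prefix to a backend callable.
PROVIDERS: dict[str, Callable[[str, list[dict]], dict]] = {}

def register(provider: str):
    def wrap(fn):
        PROVIDERS[provider] = fn
        return fn
    return wrap

@register("echo")
def echo_backend(model: str, messages: list[dict]) -> dict:
    # Stub backend: echoes the last user message in an
    # OpenAI-chat-completion-shaped response.
    return {
        "model": model,
        "choices": [{"message": {"role": "assistant",
                                 "content": messages[-1]["content"]}}],
    }

def completion(model: str, messages: list[dict]) -> dict:
    # "provider/model" selects the backend; callers always see
    # the same OpenAI-format response regardless of provider.
    provider, _, name = model.partition("/")
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return PROVIDERS[provider](name, messages)
```

A caller then writes `completion("echo/some-model", [{"role": "user", "content": "hi"}])` and swaps providers by changing only the model string.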
Inference code for LLaMA models
Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
Examples and recipes for Llama 2 model
Invoice data processing with Llama2 13B LLM RAG on Local CPU
Querying local documents, powered by LLM
Provides an enterprise-grade LLM-based development framework, tools, and fine-tuned models.
Scrapes memory and GPU utilization metrics using NVML and exposes them to Prometheus through a simple HTTP server and/or a push gateway.
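The exposition half of such an exporter can be sketched with the standard library alone: metrics are rendered in the Prometheus text format (HELP/TYPE header pair, then one sample line per metric). The metric name `gpu_utilization` here is hypothetical, and the value would come from NVML (e.g. via pynvml) in a real exporter:

```python
def render_metrics(samples: dict[str, tuple[str, float]]) -> str:
    # Render gauges in the Prometheus text exposition format.
    # samples maps metric name -> (help text, current value).
    lines = []
    for name, (help_text, value) in samples.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

Serving this string from an `http.server` handler at `/metrics` (or pushing it to a push gateway) is all Prometheus needs to scrape it.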
Python-based Q&A bot using OpenAI's LLM and LangChain's Vector Index Database for text extraction and processing. Features include PDF data extraction, tokenization, efficient data management, OpenAI embeddings for context analysis, and customizable query handling for diverse informational needs.
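The chunking step common to these document Q&A pipelines can be sketched as fixed-size overlapping windows, so embeddings preserve context across chunk boundaries before indexing. The window sizes below are illustrative defaults, not the repo's actual settings:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    # Split text into overlapping fixed-size windows; each chunk
    # repeats the last `overlap` characters of the previous one.
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```

Each chunk would then be embedded and stored in the vector index, with the overlap keeping sentences that straddle a boundary retrievable from at least one chunk.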