
HPT - Open Multimodal Large Language Models

Hyper-Pretrained Transformers (HPT) is a novel multimodal LLM framework from HyperGAI. HPT models are vision-language models capable of understanding both textual and visual inputs, and they achieve highly competitive results against state-of-the-art models on a variety of multimodal LLM benchmarks. This repository contains the open-source inference code to reproduce the evaluation results of HPT on different benchmarks.

Release

  • [6/06] 🔥 Releasing HPT 1.5 Edge, our latest open-source model tailored to edge devices. Despite its size (<5B), Edge demonstrates impressive capabilities while being extremely efficient. HPT 1.5 Edge is publicly available on [HuggingFace Repository]. Please read our [technical blog post] for more details.
  • [5/03] HPT 1.5 Air, our best open-source 8B multimodal LLM, built with Meta Llama 3. Our highly capable HPT 1.5 Air packs a punch on real-world understanding and complex reasoning, achieving the best results among <10B models across a wide range of challenging benchmarks (MMMU, POPE, SEED-I, and more). HPT 1.5 Air is publicly available on [HuggingFace Repository]. Please read our [technical blog post] for more details.
  • [3/16] HPT 1.0 Air is out: our most efficient model and a cost-effective solution capable of solving a wide range of vision-and-language tasks. HPT 1.0 Air is publicly available and achieves state-of-the-art results among open-source multimodal LLMs of similar or smaller size on the challenging MMMU benchmark. Please read our [technical blog post] and [HuggingFace Repository] for more details.

We release HPT 1.5 Edge as our latest open-source model tailored to edge devices. Despite its size (<5B), Edge demonstrates impressive capabilities while remaining extremely efficient. HPT 1.5 Edge is publicly available on HuggingFace and GitHub under the Apache 2.0 license.


Overview of Model Architecture


Quick Start

Installation

pip install -r requirements.txt
pip install -e .

Prepare the Model

You can download the model weights from Hugging Face into your [Local Path] and set global_model_path to your [Local Path] in the model config file:

git lfs install
git clone https://huggingface.co/HyperGAI/HPT1_5-Edge [Local Path]

You can also set other strategies in the config file that are different from our default settings.
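As a minimal sketch, a config edit might look like the following; only the global_model_path key is named in this README, so the file format and any surrounding keys are assumptions — check the config shipped with the repository for the real schema.

```
# Hypothetical config excerpt -- only global_model_path is documented above.
global_model_path: [Local Path]   # where you cloned HyperGAI/HPT1_5-Edge
```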

Demo

After setting up the config file, launch the model demo for a quick trial:

python demo/demo.py --image_path [Image]  --text [Text]  --model [Config]

Example:

python demo/demo.py --image_path demo/einstein.jpg  --text 'What is unusual about this image?'  --model hpt-edge-1-5
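For batch trials you can script the demo invocation. The helper below is a hypothetical sketch that does not ship with the repository; it only assembles the command lines shown above, so you can feed them to subprocess.run or a job scheduler.

```python
from typing import List, Tuple


def build_demo_commands(
    pairs: List[Tuple[str, str]], model: str = "hpt-edge-1-5"
) -> List[List[str]]:
    """Build one demo/demo.py argv per (image_path, text) pair."""
    return [
        ["python", "demo/demo.py",
         "--image_path", image, "--text", text, "--model", model]
        for image, text in pairs
    ]


# Reproduces the example invocation above.
cmds = build_demo_commands(
    [("demo/einstein.jpg", "What is unusual about this image?")]
)
print(" ".join(cmds[0]))
```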

Evaluations

Launch the model for evaluation:

torchrun --nproc-per-node=8 run.py --data [Dataset] --model [Config]

Example for HPT 1.5 Edge:

torchrun --nproc-per-node=8 run.py --data MMMU_DEV_VAL --model hpt-edge-1-5
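To sweep several benchmarks, the runs can be scripted. In the dry-run sketch below, MMMU_DEV_VAL comes from the example above; the other dataset identifiers follow VLMEvalKit naming conventions and are assumptions to verify against your installed version.

```shell
# Print one evaluation command per benchmark (remove `echo` to launch).
for DATASET in MMMU_DEV_VAL POPE SEEDBench_IMG; do
  echo torchrun --nproc-per-node=8 run.py --data "$DATASET" --model hpt-edge-1-5
done
```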

Benchmarks

For HPT 1.5 Edge


  • The majority of the results presented are taken from the models' original reports, while the others are from Phi-3-vision evaluations, which we mark with an asterisk (*).
  • The benchmark results for HPT 1.5 Air and HPT 1.0 are in the assets directory.

Pretrained Models Used

HPT 1.5 Edge

HPT 1.5 Air

HPT 1.0 Air

Disclaimer and Responsible Use

Note that HPT Air is a quick open release of our models to facilitate open, responsible AI research and community development. It does not include any moderation mechanism, and we provide no guarantees on its outputs. We hope to engage with the community to make the models respect guardrails so they can be adopted in real-world applications that require moderated outputs.

Contact Us

License

This project is released under the Apache 2.0 license. Parts of this project contain code and models from other sources, which are subject to their respective licenses; you must comply with those licenses if you want to use them for commercial purposes.

Acknowledgements

The evaluation code for this demo was extended from the VLMEvalKit project. We also thank OpenAI for open-sourcing their visual encoder models, and 01.AI, Meta, and Microsoft for open-sourcing their large language models.


Contributors

xwwu2015, yucheng-zzhao, quanghypergai
