Name: Jiangp
Type: User
Company: Tsinghua University
Bio: Computational Neuroscience
PhD at Tsinghua University
Interested in multitask learning, neural representation, task compositionality, brain & RNN & LLM
Twitter: jiangp21
Location: Beijing, China
Blog: https://jp-17.github.io/
Jiangp's Projects
This repo is constructed to explore the Allen dataset
A repository for learning computational neuroscience~
Resources for learning Computational Neuroscience
Projects and exercises for the latest Deep Learning ND program https://www.udacity.com/course/deep-learning-nanodegree--nd101
Personal Page
Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Practice for Neuromatch Academy Computational Neuroscience
A repository for learning in NMA_CN 2022
Neural Networks: Zero to Hero
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch