An Open-Source Engineering Guide for Prompt-in-context-learning from EgoAlpha Lab.
📝 Papers | ⚡️ Playground | 🛠 Prompt Engineering | 🌍 ChatGPT Prompt | ⛳ LLMs Usage Guide
⭐️ Shining ⭐️: This is a fresh, daily-updated resource for in-context learning and prompt engineering. As Artificial General Intelligence (AGI) approaches, let's take action and become super learners, positioning ourselves at the forefront of this exciting era and striving for personal and professional greatness.
The resources include:
🎉Papers🎉: The latest papers about in-context learning or prompt engineering.
🎉Playground🎉: Large language models that enable prompt experimentation.
🎉Prompt Engineering🎉: Prompt techniques for leveraging large language models.
🎉ChatGPT Prompt🎉: Prompt examples that can be applied in our work and daily lives.
🎉LLMs Usage Guide🎉: A guide to quickly getting started with large language models using LangChain.
In the future, there will likely be two types of people on Earth (perhaps even on Mars, but that's a question for Musk):
- Those who enhance their abilities through the use of AI;
- Those whose jobs are replaced by AI automation.
💎EgoAlpha: Hello, human👤, are you ready?
Table of Contents
📢 News
☄️ EgoAlpha releases TrustGPT, which focuses on reasoning. Trust the GPT with the strongest reasoning abilities for authentic and reliable answers. You can click here or visit the Playground directly to experience it.
- [2023.7.27]
- [2023.7.26]
- [2023.7.25]
- [2023.7.24]
- [2023.7.23]
- [2023.7.22]
- [2023.7.21]
- [2023.7.20]
  - New Architecture: RetNet (Retentive Network), beyond Transformer 👉Paper👈
- [2023.7.19]
- [2023.7.18]
- [2023.7.17]
- [2023.7.16]
- [2023.7.15]
- [2023.7.14]
- [2023.7.13]
- [2023.7.12]
  - Claude2 👉 [Paper] Model Card and Evaluations for Claude Models / [Website](https://claude.ai/)
📜 Papers
Click directly on a title to jump to the corresponding PDF.
Survey
A Survey on Multimodal Large Language Models (2023.06.23)
A Survey of Vision-Language Pre-training from the Lens of Multimodal Machine Translation (2023.06.12)
Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective (2023.05.24)
Do Models Really Learn to Follow Instructions? An Empirical Study of Instruction Tuning (2023.05.19)
Prompt Engineering for Healthcare: Methodologies and Applications (2023.04.28)
Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond (2023.04.26)
A Survey of Large Language Models (2023.03.31)
Augmented Language Models: a Survey (2023.02.15)
A Survey for In-context Learning (2022.12.31)
Towards Reasoning in Large Language Models: A Survey (2022.12.20)
👉Complete paper list 🔗 for "Survey"👈
Prompt Engineering
Prompt Design
Self-consistency for open-ended generations (2023.07.11)
Focused Transformer: Contrastive Training for Context Scaling (2023.07.06)
Conformer LLMs - Convolution Augmented Large Language Models (2023.07.02)
OphGLM: Training an Ophthalmology Large Language-and-Vision Assistant based on Instructions and Dialogue (2023.06.21)
PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance (2023.06.08)
Learning Multi-Step Reasoning by Solving Arithmetic Tasks (2023.06.02)
OverPrompt: Enhancing ChatGPT Capabilities through an Efficient In-Context Learning Approach (2023.05.24)
In-Context Impersonation Reveals Large Language Models' Strengths and Biases (2023.05.24)
Frugal Prompting for Dialog Models (2023.05.24)
Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering (2023.05.24)
👉Complete paper list 🔗 for "Prompt Design"👈
Automatic Prompt
Universal Self-adaptive Prompting (2023.05.24)
Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker (2023.05.23)
Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement (2023.05.23)
Learning Easily Updated General Purpose Text Representations with Adaptable Task-Specific Prefixes (2023.05.22)
Automated Few-shot Classification with Instruction-Finetuned Language Models (2023.05.21)
AutoTrial: Prompting Language Models for Clinical Trial Design (2023.05.19)
Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency (2023.05.18)
Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt (2023.05.17)
Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data (2023.02.24)
Guiding Large Language Models via Directional Stimulus Prompting (2023.02.22)
👉Complete paper list 🔗 for "Automatic Prompt"👈
Chain of Thought
Chain-Of-Thought Prompting Under Streaming Batch: A Case Study (2023.06.01)
Majority Rule: better patching via Self-Consistency (2023.05.31)
Strategic Reasoning with Language Models (2023.05.30)
Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models (2023.05.29)
Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning (2023.05.29)
Tab-CoT: Zero-shot Tabular Chain of Thought (2023.05.28)
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models (2023.05.26)
MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting (2023.05.26)
Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance (2023.05.26)
Demo2Code: From Summarizing Demonstrations to Synthesizing Code via Extended Chain-of-Thought (2023.05.26)
👉Complete paper list 🔗 for "Chain of Thought"👈
Knowledge Augmented Prompt
Are Pre-trained Language Models Useful for Model Ensemble in Chinese Grammatical Error Correction? (2023.05.24)
Referral Augmentation for Zero-Shot Information Retrieval (2023.05.24)
Decomposing Complex Queries for Tip-of-the-tongue Retrieval (2023.05.24)
LLMDet: A Large Language Models Detection Tool (2023.05.24)
OverPrompt: Enhancing ChatGPT Capabilities through an Efficient In-Context Learning Approach (2023.05.24)
Frugal Prompting for Dialog Models (2023.05.24)
Bi-Drop: Generalizable Fine-tuning for Pre-trained Language Models via Adaptive Subnetwork Optimization (2023.05.24)
In-Context Demonstration Selection with Cross Entropy Difference (2023.05.24)
A Causal View of Entity Bias in (Large) Language Models (2023.05.24)
SelfzCoT: a Self-Prompt Zero-shot CoT from Semantic-level to Code-level for a Better Utilization of LLMs (2023.05.19)
👉Complete paper list 🔗 for "Knowledge Augmented Prompt"👈
Evaluation & Reliability
Jailbroken: How Does LLM Safety Training Fail? (2023.07.05)
Towards Measuring the Representation of Subjective Global Opinions in Language Models (2023.06.28)
SETI: Systematicity Evaluation of Textual Inference (2023.05.24)
Is GPT-4 a Good Data Analyst? (2023.05.24)
From Words to Wires: Generating Functioning Electronic Devices from Natural Language Descriptions (2023.05.24)
Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples (2023.05.24)
EvEval: A Comprehensive Evaluation of Event Semantics for Large Language Models (2023.05.24)
Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions (2023.05.24)
HuatuoGPT, towards Taming Language Model to Be a Doctor (2023.05.24)
Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark For Large Language Models (2023.05.24)
👉Complete paper list 🔗 for "Evaluation & Reliability"👈
In-context Learning
Learning to Retrieve In-Context Examples for Large Language Models (2023.07.14)
Schema-learning and rebinding as mechanisms of in-context learning and emergence (2023.06.16)
MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models (2023.06.02)
SummIt: Iterative Text Summarization via ChatGPT (2023.05.24)
Measuring and Mitigating Constraint Violations of In-Context Learning for Utterance-to-API Semantic Parsing (2023.05.24)
OverPrompt: Enhancing ChatGPT Capabilities through an Efficient In-Context Learning Approach (2023.05.24)
Adversarial Demonstration Attacks on Large Language Models (2023.05.24)
Frugal Prompting for Dialog Models (2023.05.24)
Coverage-based Example Selection for In-Context Learning (2023.05.24)
Exploring Diverse In-Context Configurations for Image Captioning (2023.05.24)
👉Complete paper list 🔗 for "In-context Learning"👈
Multimodal Prompt
HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models (2023.07.13)
SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen LLMs (2023.06.30)
Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic (2023.06.27)
PromptIR: Prompting for All-in-One Blind Image Restoration (2023.06.22)
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration (2023.06.15)
Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding (2023.06.05)
Meta-Learning For Vision-and-Language Cross-lingual Transfer (2023.05.24)
LayoutGPT: Compositional Visual Planning and Generation with Large Language Models (2023.05.24)
EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought (2023.05.24)
👉Complete paper list 🔗 for "Multimodal Prompt"👈
Prompt Application
LongNet: Scaling Transformers to 1,000,000,000 Tokens (2023.07.05)
Conformer LLMs - Convolution Augmented Large Language Models (2023.07.02)
Inferring the Goals of Communicating Agents from Actions and Instructions (2023.06.28)
Kosmos-2: Grounding Multimodal Large Language Models to the World (2023.06.26)
AudioPaLM: A Large Language Model That Can Speak and Listen (2023.06.22)
XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models (2023.06.13)
PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance (2023.06.08)
ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory (2023.06.06)
Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective (2023.05.24)
Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering (2023.05.24)
👉Complete paper list 🔗 for "Prompt Application"👈
Foundation Models
Kosmos-2: Grounding Multimodal Large Language Models to the World (2023.06.26)
AudioPaLM: A Large Language Model That Can Speak and Listen (2023.06.22)
PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance (2023.06.08)
M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models (2023.06.08)
Simple and Controllable Music Generation (2023.06.08)
LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion (2023.06.05)
Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective (2023.05.24)
Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering (2023.05.24)
SAMScore: A Semantic Structural Similarity Metric for Image Translation Evaluation (2023.05.24)
LLMDet: A Large Language Models Detection Tool (2023.05.24)
👉Complete paper list 🔗 for "Foundation Models"👈
👨‍💻 LLM Usage
Large language models (LLMs) are becoming a revolutionary technology that is shaping the development of our era. By building on LLMs, developers can create applications that were previously possible only in our imaginations. However, using these LLMs often comes with certain technical barriers, and even at the introductory stage people may be intimidated by cutting-edge technology. Do you have questions like the following?
- ❓ How can LLM be built using programming?
- ❓ How can it be used and deployed in your own programs?
💡 Imagine a tutorial accessible to all audiences, not just computer science professionals: one that provides detailed, comprehensive guidance for getting started and becoming productive in a short amount of time, so you can flexibly and creatively use LLMs to build the programs you envision. And now, just for you: the most detailed and comprehensive LangChain beginner's guide, sourced from the official LangChain website but with further adjustments to the content, accompanied by fully annotated code examples that teach the code line by line to all audiences.
Click 👉here👈 to take a quick tour of getting started with LLM.
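To give a flavor of the prompt-template-plus-chain pattern that the LangChain guide teaches, here is a minimal, library-free Python sketch. The class names (`PromptTemplate`, `SimpleChain`) and the `fake_llm` stand-in are hypothetical illustrations of the pattern, not LangChain's actual API; the real guide walks through the library itself.

```python
# A minimal sketch of the "prompt template + chain" pattern.
# All names here are illustrative stand-ins, not LangChain's real API.

class PromptTemplate:
    """Fills named slots in a prompt string."""
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)


def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an API request to an LLM).
    return f"[model answer to: {prompt}]"


class SimpleChain:
    """Pipes a formatted prompt into a model callable."""
    def __init__(self, template: PromptTemplate, llm):
        self.template = template
        self.llm = llm

    def run(self, **kwargs) -> str:
        return self.llm(self.template.format(**kwargs))


# Wire a template and a model together, then run the chain.
chain = SimpleChain(
    PromptTemplate("Translate the following to French: {text}"),
    fake_llm,
)
print(chain.run(text="Hello, world"))
```

In a real LangChain program, the template and the chain come from the library and the model callable is an actual LLM endpoint, but the data flow (fill the template, then pass the prompt to the model) is the same.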
✉️ Contact
This repo is maintained by EgoAlpha Lab. Questions and discussions are welcome via [email protected].
We are willing to engage in discussions with friends from the academic and industrial communities, and explore the latest developments in prompt engineering and in-context learning together.
🙏 Acknowledgements
Thanks to the PhD students from EgoAlpha Lab and the other contributors to this repo. We will continue to improve the project and maintain this community well. We also express our sincere gratitude to the authors of the relevant resources; your efforts have broadened our horizons and enabled us to perceive a more wonderful world.