Instruction Following

286 papers with code • 1 benchmarks • 14 datasets


Libraries

Use these libraries to find Instruction Following models and implementations

Most implemented papers

Language as an Abstraction for Hierarchical Deep Reinforcement Learning

google-research/clevr_robot_env NeurIPS 2019

We find that, using our approach, agents can learn to solve diverse, temporally-extended tasks such as object sorting and multi-object rearrangement, including from raw pixel observations.

Guiding Multi-Step Rearrangement Tasks with Natural Language Instructions

jhu-lcsr/good_robot Conference on Robot Learning (CoRL) 2021

Our model completes block manipulation tasks with synthetic commands 530% more often than a UNet-based baseline, and learns to localize actions correctly while creating a mapping of symbols to perceptual input that supports compositional reasoning.

DialFRED: Dialogue-Enabled Agents for Embodied Instruction Following

xfgao/dialfred 27 Feb 2022

Language-guided Embodied AI benchmarks requiring an agent to navigate an environment and manipulate objects typically allow one-way communication: the human user gives a natural language command to the agent, and the agent can only follow the command passively.

Precise Zero-Shot Dense Retrieval without Relevance Labels

texttron/hyde 20 Dec 2022

Given a query, HyDE first zero-shot instructs an instruction-following language model (e.g., InstructGPT) to generate a hypothetical document.
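The generate-then-retrieve idea above can be sketched in a few lines. This is a minimal illustration, not the HyDE implementation: `toy_generate` and `toy_encode` are hypothetical stand-ins for the instruction-following LM and the unsupervised dense encoder (e.g., Contriever) that the paper actually uses.

```python
def hyde_retrieve(query, generate, encode, corpus):
    """Sketch of HyDE-style retrieval: search with an LM-generated
    hypothetical answer document instead of the raw query."""
    # 1. Zero-shot instruct an LM to write a document that would answer
    #    the query (the document may be factually wrong; only its
    #    relevance pattern matters).
    hypothetical = generate(f"Write a passage answering: {query}")
    # 2. Encode the hypothetical document, then retrieve the corpus
    #    document with the highest similarity to it.
    q_vec = encode(hypothetical)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    return max(corpus, key=lambda doc: dot(q_vec, encode(doc)))


# Toy stand-ins for demonstration only:
def toy_generate(prompt):
    return "paris is the capital of france"


def toy_encode(text):
    vocab = ["paris", "capital", "france", "rome", "italy"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]


corpus = ["rome is in italy", "paris is in france"]
print(hyde_retrieve("What is the capital of France?",
                    toy_generate, toy_encode, corpus))
# → paris is in france
```

The key design point is that similarity is computed document-to-document rather than query-to-document, which sidesteps the need for labeled query-document relevance pairs.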

Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection

greshake/llm-security 23 Feb 2023

Large Language Models (LLMs) are increasingly being integrated into various applications.

Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following

seonghyeonye/tapp 28 Feb 2023

In this paper, we present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference.
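Mechanically, TAPP amounts to prepending one fixed, task-agnostic string to every input before inference. A minimal sketch, assuming a hypothetical prefix text (the paper selects its own fixed prompt, which this example does not reproduce):

```python
# Hypothetical prefix text for illustration; the actual TAPP prompt
# comes from the paper's selection procedure.
TAPP_PREFIX = (
    "Below is an instruction paired with an input. "
    "Write a response that appropriately completes the request.\n\n"
)


def with_tapp(instruction, model_input):
    """Prepend the fixed task-agnostic prefix to a formatted query.

    The same prefix is reused for every task, requiring no per-task
    engineering and no parameter updates to the underlying LLM.
    """
    return (f"{TAPP_PREFIX}"
            f"Instruction: {instruction}\n"
            f"Input: {model_input}\n"
            f"Response:")


prompt = with_tapp("Translate to French", "Good morning")
print(prompt)
```

The resulting string would then be passed to the LLM as-is; because the prefix is fixed, it works purely at inference time.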

CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society

camel-ai/camel NeurIPS 2023

Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond: https://github.com/camel-ai/camel.

Instruction Tuning with GPT-4

Instruction-Tuning-with-GPT-4/GPT-4-LLM 6 Apr 2023

Prior work has shown that finetuning large language models (LLMs) using machine-generated instruction-following data enables such models to achieve remarkable zero-shot capabilities on new tasks, and no human-written instructions are needed.

Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation

lianjiatech/belle 16 Apr 2023

Recently, significant public efforts have been directed towards developing low-cost models with capabilities akin to ChatGPT, thereby fostering the growth of open-source conversational models.

X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages

0nutation/speechgpt 7 May 2023

(3) Integrating multiple modalities: all single-modal encoders are aligned with the LLM through X2L interfaces to integrate multimodal capabilities into the LLM.