Math

340 papers with code • 0 benchmarks • 1 dataset

Most implemented papers

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

microsoft/guidance 28 Jan 2022

We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning.
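
A minimal sketch of the idea (my own illustration; `generate` is a hypothetical placeholder for any text-completion call, not an API from the paper): the few-shot exemplar contains worked reasoning steps before the final answer, which nudges the model to emit its own intermediate steps.

```python
# Minimal chain-of-thought prompting sketch.
def generate(prompt: str) -> str:
    # Hypothetical placeholder: plug in any LLM completion call here.
    raise NotImplementedError

# Few-shot exemplar demonstrating intermediate reasoning before the answer
# (the tennis-ball example used in the paper).
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def chain_of_thought(question: str) -> str:
    # Prepending a worked exemplar encourages the model to produce its own
    # chain of intermediate reasoning steps before the final answer.
    prompt = COT_EXEMPLAR + f"Q: {question}\nA:"
    return generate(prompt)
```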

GPT-4 Technical Report

openai/evals Preprint 2023

We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs.

AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

vllm-project/vllm 1 Jun 2023

We then propose to search for the optimal per-channel scaling that protects the salient weights by observing the activation, not weights.
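
A toy illustration of that search (a rough NumPy sketch under assumptions, not the vllm-project/vllm or official AWQ code): activation magnitudes from calibration data define candidate per-channel scales, and a small grid search keeps the scale that minimizes output error after low-bit weight quantization.

```python
import numpy as np

def quantize(w, n_bits=4):
    # Simple symmetric per-tensor quantization, used only for illustration.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale) * scale

def awq_scale_search(W, X, n_bits=4, grid=20):
    # W: (out_features, in_features) weights; X: (tokens, in_features) calibration activations.
    act_mag = np.abs(X).mean(axis=0) + 1e-8           # per-input-channel activation magnitude
    best_err, best_s = None, np.ones(W.shape[1])
    for alpha in np.linspace(0, 1, grid + 1):
        s = act_mag ** alpha                          # candidate per-channel scale
        Wq = quantize(W * s, n_bits) / s              # scale up, quantize, scale back
        err = ((X @ Wq.T - X @ W.T) ** 2).mean()      # output reconstruction error
        if best_err is None or err < best_err:
            best_err, best_s = err, s
    return best_s
```

Channels with large activations get larger scales, so they are represented more precisely after quantization; that is the "protect the salient weights by observing the activation" idea in miniature.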

PaLM: Scaling Language Modeling with Pathways

lucidrains/CoCa-pytorch Google Research 2022

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM).

The Matrix Calculus You Need For Deep Learning

parrt/bookish 5 Feb 2018

This paper is an attempt to explain all the matrix calculus you need in order to understand the training of deep neural networks.
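
As a small worked example of the kind of identity the paper builds up to (my own illustration, not an excerpt), here is the Jacobian of an element-wise activation applied to an affine layer:

```latex
% Jacobian of y = f(Wx + b) with respect to x, where f acts element-wise.
% Let z = Wx + b, so that y = f(z).  By the chain rule,
\[
\frac{\partial y}{\partial x}
  = \frac{\partial y}{\partial z}\,\frac{\partial z}{\partial x}
  = \operatorname{diag}\!\bigl(f'(z)\bigr)\,W ,
\]
% since \partial z / \partial x = W and the element-wise f contributes a
% diagonal Jacobian with entries f'(z_i).
```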

Full Page Handwriting Recognition via Image to Sequence Extraction

kingyiusuen/image-to-latex 11 Mar 2021

We present a Neural Network based Handwritten Text Recognition (HTR) model architecture that can be trained to recognize full pages of handwritten or printed text without image segmentation.
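
A minimal sketch of the general image-to-sequence recipe such models follow (an illustration in PyTorch under my own assumptions, including the class name, not the paper's architecture): a CNN encodes the page image into a sequence of feature vectors and a Transformer decoder emits text tokens autoregressively, with no per-line or per-word segmentation step.

```python
import torch
import torch.nn as nn

class ImageToSeq(nn.Module):
    def __init__(self, vocab_size=100, d_model=256):
        super().__init__()
        # Tiny CNN encoder; positional encodings omitted for brevity.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, images, tokens):
        # images: (B, 1, H, W); tokens: (B, T) previously generated token ids.
        feats = self.backbone(images)                  # (B, d, H/4, W/4)
        memory = feats.flatten(2).transpose(1, 2)      # (B, H*W/16, d) image "sequence"
        tgt = self.tok_emb(tokens)                     # (B, T, d)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)  # causal decoding over text
        return self.head(out)                           # (B, T, vocab_size)

model = ImageToSeq()
logits = model(torch.randn(2, 1, 64, 128), torch.zeros(2, 10, dtype=torch.long))
```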

Measuring Mathematical Problem Solving With the MATH Dataset

hendrycks/math 5 Mar 2021

To facilitate future research and increase accuracy on MATH, we also contribute a large auxiliary pretraining dataset which helps teach models the fundamentals of mathematics.
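
A hedged loading sketch, assuming the released layout of per-problem JSON files with fields such as "problem", "level", "type", and "solution" (the local path below is hypothetical; check the actual download before relying on these names):

```python
import json
from pathlib import Path

def load_math_split(root: str):
    # Walk the extracted split directory and collect one record per JSON file.
    problems = []
    for path in Path(root).rglob("*.json"):
        with open(path) as f:
            record = json.load(f)
        problems.append({
            "problem": record.get("problem"),
            "solution": record.get("solution"),
            "level": record.get("level"),
            "type": record.get("type"),
        })
    return problems

train = load_math_split("MATH/train")   # hypothetical local path to the extracted dataset
print(len(train), train[0]["problem"][:80] if train else "no files found")
```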

Memorizing Transformers

lucidrains/memorizing-transformers-pytorch ICLR 2022

Language models typically need to be trained or finetuned in order to acquire new knowledge, which involves updating their weights.
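
Memorizing Transformers instead let attention retrieve from an external cache of past (key, value) pairs, so new text can be "memorized" at inference time without weight updates. A toy sketch of that kNN lookup (my own illustration, including the `KNNMemory` name, not the paper's exact model):

```python
import torch

class KNNMemory:
    def __init__(self):
        self.keys, self.values = [], []

    def add(self, k, v):
        # Cache keys/values produced while reading earlier segments.
        self.keys.append(k); self.values.append(v)

    def lookup(self, q, topk=4):
        K = torch.cat(self.keys)                 # (N, d) cached keys
        V = torch.cat(self.values)               # (N, d) cached values
        sims = q @ K.T                           # (T, N) query-to-memory similarity
        idx = sims.topk(min(topk, K.size(0)), dim=-1).indices
        attn = torch.softmax(sims.gather(-1, idx), dim=-1)       # (T, k) weights over neighbours
        return (attn.unsqueeze(-1) * V[idx]).sum(dim=-2)         # (T, d) retrieved values

memory = KNNMemory()
d = 16
memory.add(torch.randn(32, d), torch.randn(32, d))   # keys/values from earlier context
q = torch.randn(8, d)                                 # current queries
retrieved = memory.lookup(q)                          # (8, d) memory-attended output
```

In the paper this retrieved memory is combined with ordinary local attention inside the model; the sketch shows only the retrieval step.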

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

google/BIG-bench 9 Jun 2022

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Language Models are Multilingual Chain-of-Thought Reasoners

google-research/url-nlp 6 Oct 2022

Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment.