Variable misuse
9 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Neural Program Repair by Jointly Learning to Localize and Repair
We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.
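To make the task concrete, here is a minimal, hypothetical example of the kind of variable-misuse bug such models localize and repair: a type-correct, in-scope variable is used where a different one was intended. The function names and the bug itself are illustrative, not taken from the paper.

```python
def buggy_copy(src, dst):
    # Variable-misuse bug: `src` is written where `dst` was intended.
    # The program still parses and type-checks, which is what makes
    # these bugs hard to catch without learned models.
    for i in range(len(src)):
        src[i] = src[i]  # misuse: should be dst[i] = src[i]

def fixed_copy(src, dst):
    # Repair: replace the misused variable with the intended one.
    for i in range(len(src)):
        dst[i] = src[i]
```

A joint localize-and-repair model would point at the misused occurrence of `src` and propose `dst` as the replacement.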
Learning and Evaluating Contextual Embedding of Source Code
We fine-tune CuBERT on our benchmark tasks and compare the resulting models to variants of Word2Vec token embeddings, to BiLSTM and Transformer models, and to published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training and fewer labeled examples.
Understanding Neural Code Intelligence Through Program Simplification
Our approach, SIVAND, uses simplification techniques that reduce the size of input programs of a CI model while preserving the predictions of the model.
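The general idea of prediction-preserving input minimization can be sketched as a greedy reduction loop: repeatedly drop parts of the input as long as the model's prediction does not change. This is a simplified, hypothetical sketch (SIVAND itself uses delta debugging over program inputs); the `predict` function and token-list representation are assumptions, not the paper's interface.

```python
def simplify(tokens, predict):
    """Greedily remove tokens while the model's prediction is preserved.

    tokens:  the input program as a list of tokens (assumed representation)
    predict: a black-box model, mapping a token list to a prediction
    """
    target = predict(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(tokens)):
            candidate = tokens[:i] + tokens[i + 1:]
            # Keep the reduction only if the prediction is unchanged.
            if candidate and predict(candidate) == target:
                tokens = candidate
                changed = True
                break
    return tokens
```

The returned token list is a (locally) minimal input that still triggers the same prediction, which is then inspected to understand what the model actually relies on.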
Memorization and Generalization in Neural Code Intelligence Models
The goal of this paper is to evaluate and compare the extent of memorization and generalization in neural code intelligence models.
Global Relational Models of Source Code
By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.
Learning Graph Structure With A Finite-State Automaton Layer
In practice, edges are used both to represent intrinsic structure (e.g., abstract syntax trees of programs) and more abstract relations that aid reasoning for a downstream task (e.g., results of relevant program analyses).
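The "intrinsic structure" edges mentioned above can be extracted directly from a parse tree. As a small illustration (using Python's standard-library `ast` module, not any code from the paper), the parent-to-child edges of a program's AST form exactly this kind of graph:

```python
import ast

def ast_edges(source):
    """Return the parent->child edges of a Python program's AST,
    labeled by node type. These are the 'intrinsic structure' edges
    a graph model of code would consume."""
    tree = ast.parse(source)
    edges = []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.append((type(parent).__name__, type(child).__name__))
    return edges
```

Task-specific relations (data flow, analysis results) would then be added as extra edge types on top of this base graph.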
CodeTrek: Flexible Modeling of Code using an Extensible Relational Representation
Designing a suitable representation for code-reasoning tasks is challenging in aspects such as the kinds of program information to model, how to combine them, and how much context to consider.
Graph Conditioned Sparse-Attention for Improved Source Code Understanding
The fusion between a graph representation like Abstract Syntax Tree (AST) and a source code sequence makes the use of current approaches computationally intractable for large input sequence lengths.
Probing Pretrained Models of Source Code
Deep learning models are widely used for solving challenging code processing tasks, such as code generation or code summarization.