Alejandro Velasco

Commonly Cited References
Each entry lists the title (year), the authors, and the number of times referenced.

SEQUENCER: Sequence-to-Sequence Learning for End-to-End Program Repair (2019). Zimin Chen, Steve Kommrusch, Michele Tufano, Louis-Noël Pouchet, Denys Poshyvanyk, Martin Monperrus. Referenced 3 times.
Benchmarking Causal Study to Interpret Large Language Models for Source Code (2023). Daniel Rodríguez-Cárdenas, David N. Palacio, Dipin Khati, Henry Burke, Denys Poshyvanyk. Referenced 3 times.
On learning meaningful assert statements for unit test cases (2020). Cody Watson, Michele Tufano, Kevin Moran, Gabriele Bavota, Denys Poshyvanyk. Referenced 3 times.
A Systematic Literature Review on the Use of Deep Learning in Software Engineering Research (2022). Cody Watson, Nathan Cooper, David Nader Palacio, Kevin Moran, Denys Poshyvanyk. Referenced 3 times.
Studying the Usage of Text-To-Text Transfer Transformer to Support Code-Related Tasks (2021). Antonio Mastropaolo, Simone Scalabrino, Nathan Cooper, David N. Palacio, Denys Poshyvanyk, Rocco Oliveto, Gabriele Bavota. Referenced 3 times.
An Empirical Study on the Usage of Transformer Models for Code Completion (2021). Matteo Ciniselli, Nathan Cooper, Luca Pascarella, Antonio Mastropaolo, Emad Aghajani, Denys Poshyvanyk, Massimiliano Di Penta, Gabriele Bavota. Referenced 3 times.
An Empirical Study on the Usage of BERT Models for Code Completion (2021). Matteo Ciniselli, Nathan Cooper, Luca Pascarella, Denys Poshyvanyk, Massimiliano Di Penta, Gabriele Bavota. Referenced 3 times.
CodeBERT: A Pre-Trained Model for Programming and Natural Languages (2020). Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang. Referenced 2 times.
AST-Probe: Recovering abstract syntax trees from hidden representations of pre-trained language models (2022). José Antonio Hernández López, Martin Weyssow, Jesús Sánchez Cuadrado, Houari Sahraoui. Referenced 2 times.
Toward a Theory of Causation for Interpreting Neural Code Models (2023). David N. Palacio, Nathan Cooper, Álvaro Rodríguez, Kevin Moran, Denys Poshyvanyk. Referenced 2 times.
Pythia: AI-assisted Code Completion System (2019). Alexey Svyatkovskiy, Ying Zhao, Shengyu Fu, Neel Sundaresan. Referenced 2 times.
Towards Understanding What Code Language Models Learned (2023). Toufique Ahmed, Dian Yu, Chengxuan Huang, Cathy Wang, Premkumar Devanbu, Kenji Sagae. Referenced 2 times.
RoBERTa: A Robustly Optimized BERT Pretraining Approach (2019). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. Referenced 2 times.
INSPECT: Intrinsic and Systematic Probing Evaluation for Code Transformers (2023). Anjan Karmakar, Romain Robbes. Referenced 2 times.
Towards Automating Code Review Activities (2021). Rosalia Tufano, Luca Pascarella, Michele Tufano, Denys Poshyvanyk, Gabriele Bavota. Referenced 2 times.
PyMT5: multi-mode translation of natural language and Python code with transformers (2020). Colin Clement, Dawn Drain, Jonathan Timcheck, Alexey Svyatkovskiy, Neel Sundaresan. Referenced 2 times.
CodeSearchNet Challenge: Evaluating the State of Semantic Code Search (2019). Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, Marc Brockschmidt. Referenced 2 times.
DoWhy: Addressing Challenges in Expressing and Validating Causal Assumptions (2021). Amit Sharma, Vasilis Syrgkanis, Cheng Zhang, Emre Kıcıman. Referenced 2 times.
IntelliCode Compose: code generation using transformer (2020). Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, Neel Sundaresan. Referenced 2 times.
What Do They Capture? -- A Structural Analysis of Pre-Trained Language Models for Source Code (2022). Yao Wan, Wei Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin. Referenced 2 times.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (2018). Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. Referenced 2 times.
Explainable AI for Pre-Trained Code Models: What Do They Learn? When They Do Not Work? (2022). Ahmad Haji Mohammadkhani, Chakkrit Tantithamthavorn, Hadi Hemmati. Referenced 2 times.
Are Code Pre-trained Models Powerful to Learn Code Syntax and Semantics? (2022). Wei Ma, Mengjie Zhao, Xiaofei Xie, Qiang Hu, Shangqing Liu, Jie Zhang, Wenhan Wang, Yang Liu. Referenced 2 times.
Knowledge Neurons in Pretrained Transformers (2022). Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, Furu Wei. Referenced 1 time.
Principles and Practice of Explainable Machine Learning (2021). Vaishak Belle, Ioannis Papantonis. Referenced 1 time.
Evaluating Large Language Models Trained on Code (2021). Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman. Referenced 1 time.
To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation (2021). Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, Arul Menezes. Referenced 1 time.
Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations (2022). Tessa Han, Suraj Srinivas, Himabindu Lakkaraju. Referenced 1 time.
On the Relationship Between Explanation and Prediction: A Causal View (2022). Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim. Referenced 1 time.
Counterfactual explanations for models of code (2022). Jürgen Cito, Işıl Dillig, Vijayaraghavan Murali, Satish Chandra. Referenced 1 time.
Holistic Evaluation of Language Models (2023). Rishi Bommasani, Percy Liang, Tony Lee. Referenced 1 time.
Visualizing and Understanding Recurrent Networks (2015). Andrej Karpathy, Justin Johnson, Li Fei-Fei. Referenced 1 time.
Large Language Models for Software Engineering: A Systematic Literature Review (2023). Xinyi Hou, Yanjie Zhao, Yue Liu, Yang Zhou, Kailong Wang, Li Li, Xiapu Luo, David Lo, John Grundy, Haoyu Wang. Referenced 1 time.
From statistical to causal learning (2023). Bernhard Schölkopf, Julius von Kügelgen. Referenced 1 time.
On the Reliability and Explainability of Language Models for Program Generation (2024). Yue Liu, Chakkrit Tantithamthavorn, Yonghui Liu, Li Li. Referenced 1 time.
Trustworthy and Synergistic Artificial Intelligence for Software Engineering: Vision and Roadmaps (2023). David Lo. Referenced 1 time.
Attention Is All You Need (2017). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin. Referenced 1 time.
Transportability of Causal and Statistical Relations: A Formal Approach (2011). Judea Pearl, Elias Bareinboim. Referenced 1 time.
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems (2016). Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin. Referenced 1 time.
Towards A Rigorous Science of Interpretable Machine Learning (2017). Finale Doshi-Velez, Been Kim. Referenced 1 time.
Maybe Deep Neural Networks are the Best Choice for Modeling Source Code (2019). Rafael-Michael Karampatsis, Charles Sutton. Referenced 1 time.
BERT Rediscovers the Classical NLP Pipeline (2019). Ian Tenney, Dipanjan Das, Ellie Pavlick. Referenced 1 time.
On Learning Meaningful Code Changes Via Neural Machine Translation (2019). Michele Tufano, Jevgenija Pantiuchina, Cody Watson, Gabriele Bavota, Denys Poshyvanyk. Referenced 1 time.
Neural Machine Translation of Rare Words with Subword Units (2016). Rico Sennrich, Barry Haddow, Alexandra Birch. Referenced 1 time.
Know What You Don’t Know: Unanswerable Questions for SQuAD (2018). Pranav Rajpurkar, Robin Jia, Percy Liang. Referenced 1 time.
Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context (2018). Urvashi Khandelwal, He He, Peng Qi, Dan Jurafsky. Referenced 1 time.
On the Properties of Neural Machine Translation: Encoder–Decoder Approaches (2014). Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, Yoshua Bengio. Referenced 1 time.
An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation (2019). Michele Tufano, Cody Watson, Gabriele Bavota, Massimiliano Di Penta, Martin White, Denys Poshyvanyk. Referenced 1 time.
The adverse effects of code duplication in machine learning models of code (2019). Miltiadis Allamanis. Referenced 1 time.
Learning How to Mutate Source Code from Bug-Fixes (2019). Michele Tufano, Cody Watson, Gabriele Bavota, Massimiliano Di Penta, Martin White, Denys Poshyvanyk. Referenced 1 time.