Oussama Ben Sghaier

Commonly Cited References
Learning a Static Analyzer from Data (2017). Pavol Bielik, Veselin Raychev, Martin Vechev. Referenced 1 time.
Continuous Integration, Delivery and Deployment: A Systematic Review on Approaches, Tools, Challenges and Practices (2017). Mojtaba Shahin, Muhammad Ali Babar, Liming Zhu. Referenced 1 time.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (2018). Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. Referenced 1 time.
RoBERTa: A Robustly Optimized BERT Pretraining Approach (2019). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. Referenced 1 time.
CodeBERT: A Pre-Trained Model for Programming and Natural Languages (2020). Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang. Referenced 1 time.
CORE: Automating Review Recommendation for Code Changes (2020). Jing Kai Siow, Cuiyun Gao, Lingling Fan, Sen Chen, Yang Liu. Referenced 1 time.
CodeBLEU: a Method for Automatic Evaluation of Code Synthesis (2020). Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, Ming Zhou, Ambrosio Blanco, Shuai Ma. Referenced 1 time.
Generative adversarial networks (2020). Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. Referenced 1 time.
Towards Automating Code Review Activities (2021). Rosalia Tufano, Luca Pascarella, Michele Tufano, Denys Poshyvanyk, Gabriele Bavota. Referenced 1 time.
Breaking Down Multilingual Machine Translation (2022). Ting-Rui Chiang, Yi-Pei Chen, Yi-Ting Yeh, Graham Neubig. Referenced 1 time.
CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation (2021). Yue Wang, Weishi Wang, Shafiq Joty, Steven C. H. Hoi. Referenced 1 time.
Using pre-trained models to boost code review automation (2022). Rosalia Tufano, Simone Masiero, Antonio Mastropaolo, Luca Pascarella, Denys Poshyvanyk, Gabriele Bavota. Referenced 1 time.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (2019). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. Referenced 1 time.
Automating code review activities by large-scale pre-training (2022). Zhiyu Li, Shuai Lu, Daya Guo, Nan Duan, Shailesh Jannu, Grant Jenks, Deep Majumder, Jared Green, Alexey Svyatkovskiy, Shengyu Fu. Referenced 1 time.
AST-Probe: Recovering abstract syntax trees from hidden representations of pre-trained language models (2022). José Antonio Hernåndez López, Martin Weyssow, Jesús Sånchez Cuadrado, Houari Sahraoui. Referenced 1 time.