Benchmarking Causal Study to Interpret Large Language Models for Source Code

Type: Preprint

Publication Date: 2023

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2308.12415

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • Benchmarking Causal Study to Interpret Large Language Models for Source Code (2023). Daniel Rodríguez-Cárdenas, David N. Palacio, Dipin Khati, Henry Burke, Denys Poshyvanyk
  • Benchmarking and Explaining Large Language Model-based Code Generation: A Causality-Centric Approach (2023). Zhenlan Ji, Pingchuan Ma, Zongjie Li, Shuai Wang
  • Quality Assessment of Prompts Used in Code Generation (2024). Mohammed Latif Siddiq, Simantika Dristi, Joy Saha, Joanna C. S. Santos
  • The Impact of Prompt Programming on Function-Level Code Generation (2024). Ranim Khojah, Francisco Gomes de Oliveira Neto, Mazen Mohamad, Philipp Leitner
  • Testing LLMs on Code Generation with Varying Levels of Prompt Specificity (2023). Lincoln Murr, Morgan Grainger, David Yang Gao
  • Can ChatGPT Support Developers? An Empirical Evaluation of Large Language Models for Code Generation (2024). Kailun Jin, Chung-Yu Wang, Hung Viet Pham, Hadi Hemmati
  • Selective Prompt Anchoring for Code Generation (2024). Yuan Tian, Tianyi Zhang
  • Prompt Engineering or Fine Tuning: An Empirical Assessment of Large Language Models in Automated Software Engineering Tasks (2023). Jiho Shin, C.J. Tang, Tahmineh Mohati, Maleknaz Nayebi, Song Wang, Hadi Hemmati
  • Investigating the Efficacy of Large Language Models for Code Clone Detection (2024). Mohamad Khajezade, Jie JW Wu, Fatemeh H. Fard, Gema Rodríguez-Pérez, Mohamed Shehata
  • Boldly Going Where No Benchmark Has Gone Before: Exposing Bias and Shortcomings in Code Generation Evaluation (2024). Ankit Yadav, Mayank Singh
  • CodeEditorBench: Evaluating Code Editing Capability of Large Language Models (2024). Jiawei Guo, Ziming Li, Xueling Liu, Kaijing Ma, Tianyu Zheng, Zhouliang Yu, Ding Pan, Yi‐Zhi Li, Ruibo Liu, Yue Wang
  • Can OpenSource beat ChatGPT? -- A Comparative Study of Large Language Models for Text-to-Code Generation (2024). Lucio Mayer, Christian Heumann, Matthias Aßenmacher
  • Can Language Models Replace Programmers? REPOCOD Says 'Not Yet' (2024). Shuran Liang, Yiran Hu, Nan Jiang, Lin Tan
  • DocuMint: Docstring Generation for Python using Small Language Models (2024). Bibek Poudel, A.J.R. Cook, Sékou F. Traorè, Shelah Ameli
  • A Survey on Evaluating Large Language Models in Code Generation Tasks (2024). Liguo Chen, Qi Guo, Hongrui Jia, Zhengran Zeng, Xin Wang, Yijiang Xu, Jian Wu, Yidong Wang, Qing Gao, Jindong Wang
  • What's Wrong with Your Code Generated by Large Language Models? An Extensive Study (2024). Shihan Dou, Haoxiang Jia, Shenxi Wu, Huiyuan Zheng, Weikang Zhou, Muling Wu, Mingxu Chai, Jessica Fan, Caishuang Huang, Yunbo Tao
  • Where Are Large Language Models for Code Generation on GitHub? (2024). Xiao Yu, Lei Liu, Xing Hu, Jacky Keung, Jin Liu, Xin Xia
  • Assessing Code Generation with Intermediate Languages (2024). Xun Deng, Sicheng Zhong, Honghua Dong, Jingyu Hu, Sidi Mohamed Beillahi, Xujie Si, Fan Long
  • Prompting and Fine-tuning Large Language Models for Automated Code Review Comment Generation (2024). Md. Asif Haider, Ayesha Binte Mostofa, Sk. Sabit Bin Mosaddek, Anindya Iqbal, Toufique Ahmed

Works That Cite This (0)

Works Cited by This (0)
