Evaluating the Impact of Generative AI on Intelligent Programming Assistance and Code Quality


Magnus Chukwuebuka Ahuchogu, Pravin Ganpatrao Gawande, Charu Mohla, Deepak A. Vidhate, Nidal Al Said, Eric Howard

Abstract

Generative AI is reshaping software development by providing intelligent programming assistance and by automating code generation, debugging, and refactoring. This paper evaluates the impact of AI-powered coding assistants, such as GitHub Copilot and OpenAI Codex, on developer productivity and code quality. We analyze the effectiveness of AI in improving coding efficiency, reducing errors, and maintaining best practices, while addressing challenges such as bias, over-reliance, and security risks. Our study compares AI-assisted and human-written code using empirical data and controlled experiments across multiple programming languages. Key metrics, including defect rates, maintainability, and adherence to industry standards, are used to assess AI's influence on code quality. Additionally, we explore the implications of AI-assisted development for software engineering workflows, education, and future advancements in AI-driven tooling. Findings reveal that AI-assisted coding enhances productivity but requires human oversight to mitigate risks such as code vulnerabilities and inconsistencies. This research provides insights into AI's evolving role in software engineering, offering guidance for developers, researchers, and industry practitioners.
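
The abstract names defect rates and maintainability as comparison metrics but does not spell out how they are computed. The following Python sketch is only an illustration of how two code corpora (AI-assisted vs. human-written) could be compared on such metrics; the sample data, the defect counts, and the branch-keyword complexity proxy are assumptions for this sketch, not the paper's actual instrumentation.

    # Hypothetical sketch: compare two groups of code samples on defect rate
    # per KLOC and a rough complexity proxy. All names and data are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        source: str   # raw source code of one submission
        defects: int  # defects attributed to this submission (e.g., from review or tests)

    BRANCH_KEYWORDS = ("if ", "for ", "while ", "case ", "&&", "||", "except", "catch")

    def loc(sample: Sample) -> int:
        """Count non-blank lines that are not comment-only (very rough LOC)."""
        return sum(1 for line in sample.source.splitlines()
                   if line.strip() and not line.strip().startswith(("#", "//")))

    def complexity_proxy(sample: Sample) -> int:
        """Approximate decision points by counting branch keywords."""
        return 1 + sum(sample.source.count(k) for k in BRANCH_KEYWORDS)

    def summarize(samples: list[Sample]) -> dict:
        """Aggregate defects per KLOC and mean complexity for one group."""
        total_loc = sum(loc(s) for s in samples) or 1
        total_defects = sum(s.defects for s in samples)
        avg_complexity = sum(complexity_proxy(s) for s in samples) / len(samples)
        return {"defects_per_kloc": 1000 * total_defects / total_loc,
                "avg_complexity": avg_complexity}

    # Toy data standing in for the two groups compared in the study.
    ai_assisted = [Sample("def add(a, b):\n    return a + b\n", defects=0)]
    human_written = [Sample("def div(a, b):\n    if b == 0:\n        return None\n    return a / b\n", defects=1)]

    print("AI-assisted:  ", summarize(ai_assisted))
    print("Human-written:", summarize(human_written))

In practice, a study of this kind would replace the keyword-counting proxy with established tooling (static analyzers, test suites, maintainability indices), but the comparison structure, i.e. aggregating per-group quality metrics over matched tasks, would be the same.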


DOI: https://doi.org/10.52783/pst.1668
