Document Type
Article
Abstract
The increasing use of AI for code generation presents significant security challenges, as these tools often lack inherent security awareness and can produce vulnerable code. This paper investigates these security risks, outlining common types of vulnerabilities (such as injection flaws and improper resource handling) found in AI-generated code. It further explores and evaluates mitigation techniques aimed at improving code security, including model fine-tuning and adversarial strategies like Security Verifier Enhanced Neural Steering (SVEN). Findings indicate that while current methods offer promising ways to reduce vulnerabilities, ongoing research and development are crucial for the secure and responsible deployment of AI in software development.
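For illustration, the two vulnerability classes named above commonly surface in generated code as a SQL query built by string interpolation and as a file handle that is never released. The following Python sketch contrasts each pattern with its mitigation; it is a hypothetical example of these flaw classes, not code from the paper itself.

    import sqlite3

    def find_user_vulnerable(conn: sqlite3.Connection, username: str):
        # Injection flaw: user input is interpolated directly into the SQL string,
        # so a crafted username can alter the query's structure.
        query = f"SELECT id, name FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Mitigation: a parameterized query keeps the input out of the SQL syntax.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchall()

    def read_config_leaky(path: str) -> str:
        # Improper resource handling: the file handle is never closed if an
        # exception occurs, and closure otherwise depends on garbage collection.
        f = open(path)
        return f.read()

    def read_config_safe(path: str) -> str:
        # Mitigation: a context manager guarantees the handle is released.
        with open(path) as f:
            return f.read()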
Recommended Citation
Nelson, Andreas E. (2025) "Securing AI-Generated Code," Scholarly Horizons: University of Minnesota, Morris Undergraduate Journal: Vol. 12: Iss. 2, Article 6.
DOI: https://doi.org/10.61366/2576-2176.1167
Available at: https://digitalcommons.morris.umn.edu/horizons/vol12/iss2/6