Document Type

Article

Abstract

The increasing use of AI for code generation presents significant security challenges, as these tools often lack inherent security awareness and can produce vulnerable code. This paper investigates these security risks, outlining common types of vulnerabilities (such as injection flaws and improper resource handling) found in AI-generated code. It further explores and evaluates mitigation techniques aimed at improving code security, including model fine-tuning and adversarial strategies like Security Verifier Enhanced Neural Steering (SVEN). Findings indicate that while current methods offer promising ways to reduce vulnerabilities, ongoing research and development are crucial for the secure and responsible deployment of AI in software development.
