Generative AI in Software Development: Speed & Security

Post last modified: October 14, 2024

At ThamesTech AI, we are committed to delivering cutting-edge solutions that transform businesses. One of the most disruptive technologies in recent years is Generative AI, which automates code generation and promises to speed up software development. Tools like GitHub Copilot and OpenAI’s ChatGPT are heralded as game-changers in development, but at ThamesTech AI, we recognize that speed isn’t everything. Security, scalability, and long-term maintainability must not be compromised.

In this blog, we’ll explore the benefits and challenges of generative AI in software development, drawing from our own experiences and expert industry insights. We’ll demonstrate how businesses can leverage AI without falling into common traps, ensuring that faster development doesn’t compromise software quality.



The Productivity Trap: More Code, More Problems

Generative AI tools promise to boost productivity by generating code quickly. However, many businesses find that this speed often hides deeper issues. According to the 2024 Atlassian Developer Experience Report, while business leaders see AI as a key driver of productivity, two-thirds of developers reported no significant improvements. This highlights a disconnect between how AI is used and how it actually impacts the quality of the development process.

Nathen Harvey, a developer advocate at Google Cloud, discussed this challenge at the ByWater Solutions AI Innovations event. He explained that generative AI often behaves like an over-eager junior developer, delivering quick but sometimes poorly thought-out solutions. The result? Code that works in the short term but creates long-term risks related to security, scalability, and maintainability.



AI-Generated Code: Functionality vs. Security

At ThamesTech AI, we’ve experienced firsthand how AI-generated code can offer quick solutions but often lacks critical considerations like security and performance. For example, in one of our projects, we used GitHub Copilot to generate a Flask API for retrieving user data. The initial code worked, but upon review, we identified several issues that needed immediate attention.

AI-Generated Code:

```python
from flask import Flask, request, jsonify
import mysql.connector

app = Flask(__name__)

@app.route('/user/<int:user_id>', methods=['GET'])
def get_user(user_id):
    conn = mysql.connector.connect(user='root', password='password',
                                   host='localhost', database='users_db')
    cursor = conn.cursor(dictionary=True)
    cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")
    user = cursor.fetchone()
    conn.close()
    return jsonify(user)
```

While this code works functionally, it contains three major flaws:

  1. SQL Injection Vulnerability: Embedding user_id directly in the SQL query makes it vulnerable to SQL injection.
  2. Hardcoded Credentials: Storing sensitive credentials directly in the code is a significant security risk.
  3. Improper Resource Management: Database connections and cursors are not properly managed, leading to potential resource leaks.
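To make the first flaw concrete, here is a minimal sketch with a hypothetical attacker-controlled input. (In the route above, Flask's `<int:user_id>` converter would reject a string like this, but many AI-generated endpoints accept raw strings, where the f-string pattern is exploitable.)

```python
# Hypothetical malicious input reaching an f-string-built query.
user_id = "1 OR 1=1"
query = f"SELECT * FROM users WHERE id = {user_id}"
print(query)  # SELECT * FROM users WHERE id = 1 OR 1=1  -- matches every row
```

Because the input is interpolated into the SQL text itself, the attacker rewrites the query's logic rather than supplying a value.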


Refining AI Code for Production-Ready Solutions

At ThamesTech AI, we ensure that AI-generated code is thoroughly reviewed and refined before deployment. Here’s how we fixed the issues in the above code example:

Refactored Code:

```python
from flask import Flask, request, jsonify, abort
from mysql.connector import connect, Error
import os

app = Flask(__name__)

@app.route('/user/<int:user_id>', methods=['GET'])
def get_user(user_id):
    try:
        with connect(
            user=os.getenv('DB_USER'),
            password=os.getenv('DB_PASSWORD'),
            host=os.getenv('DB_HOST'),
            database=os.getenv('DB_NAME')
        ) as conn:
            with conn.cursor(dictionary=True) as cursor:
                cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
                user = cursor.fetchone()
        if not user:
            abort(404, description="User not found")
        return jsonify(user)
    except Error as e:
        abort(500, description=str(e))
```

Key Improvements:

  1. SQL Injection Prevention: Parameterized queries eliminate the risk of SQL injection.
  2. Secure Credentials: Credentials are stored in environment variables instead of being hardcoded, following best security practices.
  3. Resource Management: The with statement ensures proper resource management, even in the event of an error.
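The effect of the parameterized-query fix can be demonstrated without a MySQL server, using Python's built-in sqlite3 module as a stand-in (note that sqlite3 uses `?` placeholders where mysql.connector uses `%s`):

```python
import sqlite3

# In-memory database standing in for the users_db from the example.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Alice')")

# Parameterized query: the driver binds the value separately, so input
# can never change the structure of the SQL statement.
row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()
print(row[0])  # Alice

# The classic injection payload is treated as a literal value and matches nothing.
rows = conn.execute("SELECT name FROM users WHERE id = ?", ("1 OR 1=1",)).fetchall()
print(rows)  # []
```

The same binding mechanism underlies the `%s` placeholders in the refactored endpoint above.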

These improvements demonstrate that while generative AI can speed up development, human oversight is essential to produce secure, reliable, and scalable code.



Security: Generative AI’s Blind Spot

One of the biggest risks of generative AI is its lack of a security-first approach. AI is excellent at generating functional code, but it often overlooks critical security measures. A report by Sonya Moisset at Snyk found that popular AI code generators, including GitHub Copilot and Amazon CodeWhisperer, frequently produce insecure code vulnerable to cross-site scripting (XSS) and other common attacks.
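One common XSS mitigation, escaping untrusted input before it is rendered into HTML, can be sketched with Python's standard-library html module (an illustration of the general technique, not a fix drawn from the Snyk report itself):

```python
from html import escape

# Untrusted input that would execute as a script if echoed back verbatim.
payload = "<script>alert('xss')</script>"

# Escaping turns markup characters into harmless HTML entities.
print(escape(payload))  # &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```

Template engines such as Jinja2 (used by Flask) apply this kind of escaping automatically, but AI-generated code that builds HTML by string concatenation bypasses it.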

Additionally, according to IBM’s 2023 Cost of a Data Breach Report, the average cost of a data breach has risen to $4.45 million. These statistics underscore the importance of conducting thorough security reviews before deploying AI-generated code.

At ThamesTech AI, we ensure that every piece of AI-generated code undergoes rigorous security audits. Our team identifies and addresses potential vulnerabilities such as SQL injection and XSS before any code goes live.



Best Practices for Using Generative AI at ThamesTech AI

At ThamesTech AI, we follow a set of best practices to ensure that AI-generated code meets our high standards for security, scalability, and maintainability:

  1. AI as a Starting Point, Not the Final Product: AI-generated code can be useful for generating functional snippets, but we treat it as a draft that requires human review and refinement.

  2. Refactor for Security: Every AI-generated code snippet is checked for security vulnerabilities such as SQL injection, XSS, and hardcoded credentials.

  3. Human Expertise Matters: While AI can automate coding tasks, it cannot replace human expertise. Our developers ensure that AI-generated code integrates seamlessly into production systems while maintaining best practices for security and performance.

  4. Clear, Contextual Prompts: The quality of AI output depends on the quality of the prompt. At ThamesTech AI, we craft detailed, contextual prompts to guide AI tools in generating more accurate and secure code.
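As an illustration of the last point, compare a vague prompt with a contextual one for the Flask endpoint shown earlier (both hypothetical; the wording is ours, not from any tool's documentation):

```python
# Hypothetical prompts for the user-lookup endpoint.
vague = "Write a Flask endpoint that returns a user from MySQL."

contextual = (
    "Write a Flask GET endpoint /user/<int:user_id> that reads MySQL "
    "credentials from environment variables, uses a parameterized query "
    "to prevent SQL injection, returns 404 when the user is missing, and "
    "releases the connection and cursor with context managers."
)

# The contextual prompt spells out the security and resource-management
# requirements that the vague one leaves to chance.
print("parameterized query" in contextual)  # True
```

In our experience, prompts that state the security constraints up front yield code that needs far less refactoring in review.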



Conclusion: Generative AI at ThamesTech AI

Generative AI is a powerful tool that can revolutionize software development. However, while AI can accelerate coding tasks, it requires human oversight to ensure the code is secure, scalable, and maintainable. At ThamesTech AI, we combine the efficiency of AI with the expertise of our development team to deliver high-quality, secure solutions that our clients can trust.

If you’re interested in learning how ThamesTech AI can help you leverage AI safely and effectively, contact us for a consultation today.



References:

  1. Riggins, J. (2024). “What’s Wrong With Generative AI-Driven Development Right Now.” The New Stack. Retrieved from https://thenewstack.io/whats-wrong-with-generative-ai-driven-development-right-now/
  2. Atlassian. (2024). “Developer Experience Report 2024.” Atlassian Blog. Retrieved from https://www.atlassian.com/blog/developer/developer-experience-report-2024
  3. Moisset, S. (2023). “Snyk Report on AI-Generated Code Vulnerabilities.” Snyk. Retrieved from https://snyk.io/blog/ai-code-security
  4. IBM Security. (2023). “Cost of a Data Breach Report.” IBM. Retrieved from https://www.ibm.com/security/data-breach
