AI Software Engineering - Security and QA
Security
Using an inference API or model provider that does not use your inference requests to train its models is table stakes. Unfortunately, this is not a given for most free, and many paid, LLM chat products.
Try typing an API key name in your IDE with an AI autocomplete tool enabled, and it may suggest someone else’s functioning key, memorized from the model’s training data. This underscores the importance of taking control of your team’s approach to AI-powered software development and ensuring that minimum security standards are followed. If you don’t provide the necessary tools, your team will seek them elsewhere.
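As one illustration of such a minimum standard, here is a minimal sketch of a pre-commit secret scan that blocks keys from ever reaching a repository (and, by extension, anyone's training data). The patterns and hook wiring are assumptions for illustration; dedicated tools such as gitleaks or truffleHog cover far more key formats.

```python
# Minimal pre-commit secret scan, assuming it is wired up as a git
# pre-commit hook (e.g. via .git/hooks/pre-commit or the pre-commit
# framework). Patterns are illustrative, not exhaustive.
import re
import subprocess
import sys

# Illustrative patterns for common key formats (assumptions, not a complete list).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID format
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # generic "sk-" style API key
]

def staged_diff() -> str:
    """Return the diff of staged changes about to be committed."""
    result = subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def main() -> int:
    diff = staged_diff()
    for pattern in SECRET_PATTERNS:
        if pattern.search(diff):
            print(f"Possible secret matching {pattern.pattern!r}; aborting commit.")
            return 1  # non-zero exit blocks the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```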
Some companies will wish to take this further. It is possible to keep inference requests out of external, multi-tenant services entirely while still taking advantage of frontier foundation models. For example, Amazon Bedrock lets you access Anthropic’s models through private endpoints inside your virtual private cloud, so requests never traverse the public internet. Integration with private model deployments like this represents the gold standard in IP and data protection.
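As a sketch of what this looks like in practice, assuming Bedrock access to an Anthropic model is enabled in your AWS account, the call below uses boto3’s Converse API. The region and model ID are placeholders; with a VPC interface endpoint for the bedrock-runtime service, this traffic stays inside your private network.

```python
# A minimal sketch of calling an Anthropic model through Amazon Bedrock.
# Region and model ID are assumptions -- substitute the ones enabled in
# your account. With a VPC interface endpoint for bedrock-runtime,
# requests resolve to private IPs and never leave your network.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Review this function for security issues: ..."}],
        }
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```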
Testing and QA
Our view is that software engineers own, and are responsible for, the integrity and quality of their codebase, whether or not AI was involved in their workflows. The engineer, not the model, owns the codebase.
This means that the git workflow, particularly pull requests and code reviews, remains a critical part of the software lifecycle. AI can assist with testing and QA by generating tests and deploying QA agents, but ultimately there must be human accountability.
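For illustration, here is the kind of test file an AI assistant might draft during review. The normalize_email function and its cases are invented for this example; the point is that the reviewing engineer, not the tool, must confirm each assertion reflects the intended contract before merging.

```python
# A hypothetical AI-drafted test suite. The function under test and its
# cases are invented for illustration -- the human reviewer owns the
# decision about whether each asserted behavior is actually desired.
import pytest

def normalize_email(raw: str) -> str:
    """Example function under test (assumed for illustration)."""
    return raw.strip().lower()

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("  Alice@Example.COM ", "alice@example.com"),
        ("bob@example.com", "bob@example.com"),
    ],
)
def test_normalize_email(raw: str, expected: str) -> None:
    assert normalize_email(raw) == expected

def test_normalize_email_rejects_non_string() -> None:
    # An AI-drafted edge case: the reviewer must decide whether raising
    # here is really the intended contract, or whether None should be
    # handled gracefully instead.
    with pytest.raises(AttributeError):
        normalize_email(None)  # type: ignore[arg-type]
```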