
Fixes

The fixes feature allows users to remediate vulnerabilities by leveraging GenAI. When requesting an Autofix, users get either a detailed step-by-step guide or the code modifications that should close the vulnerability.

Public Oath

Fluid Attacks offers a GenAI-assisted vulnerability remediation service. No sensitive or customer-specific information is used or stored by a third party, customer code is not used to train an LLM, and any results are viewable only by the customer.

Architecture

  1. Requesting fix: Autofixes can be requested from either Retrieves or Integrates. A GraphQL subscription request is sent to the API (see the subscription sketch after this list).
  2. Validation and prompt-building: After validating the provided inputs, the backend gathers the context of the vulnerability and uses it to fill a generic prompt; the prompt also includes a snippet of the vulnerable code itself (see the prompt-building sketch after this list).
  3. Prompting the LLM: The Integrates backend sends the prompt through the boto client to Amazon Bedrock; using inference profiles, the prompt is then fed to the AWS-hosted LLM (see the streaming sketch after this list).
  4. LLM Response: The LLM instance processes the input and produces an answer. Since the full answer can take around 10 to 20 seconds to generate, it is returned as a continuously updated string stream.
  5. Platform response: The stream is conveyed to the Integrates backend and then to the Retrieves or Front client through the aforementioned GraphQL subscription.
  6. Displaying result: The streamed output is collected and shown to the user either as a Markdown guide or as the code to be pasted.
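
As a minimal sketch of step 1, a client could open the GraphQL subscription with the gql Python library. The endpoint URL, subscription name, and fields below are illustrative placeholders, not the actual Integrates schema.

```python
# Hypothetical sketch of requesting an Autofix over a GraphQL subscription.
# The endpoint, subscription name, and fields are placeholders.
from gql import Client, gql
from gql.transport.websockets import WebsocketsTransport

AUTOFIX_SUBSCRIPTION = gql(
    """
    subscription RequestAutofix($vulnerabilityId: ID!) {
      autofix(vulnerabilityId: $vulnerabilityId) {
        chunk  # partial text, streamed while the LLM answers
      }
    }
    """
)

async def request_autofix(vulnerability_id: str):
    transport = WebsocketsTransport(url="wss://app.example.com/api")
    async with Client(transport=transport) as session:
        async for result in session.subscribe(
            AUTOFIX_SUBSCRIPTION,
            variable_values={"vulnerabilityId": vulnerability_id},
        ):
            yield result["autofix"]["chunk"]
```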
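
Step 2 can be pictured as template filling: the vulnerability context and the vulnerable snippet are rendered into a generic prompt. The field names and template text below are hypothetical; the actual generic prompt lives in the Integrates backend.

```python
# Hypothetical prompt-building step: validate the inputs and fill a generic
# template with the vulnerability context. Field names are illustrative.
from dataclasses import asdict, dataclass

PROMPT_TEMPLATE = """\
You are a secure-coding assistant.
Vulnerability: {title}
Why it is a problem: {description}
Recommended remediation: {recommendation}

Vulnerable snippet ({language}):
{snippet}

Return either a step-by-step remediation guide or the fixed code.
"""

@dataclass
class VulnerabilityContext:
    title: str
    description: str
    recommendation: str
    language: str
    snippet: str

def build_prompt(context: VulnerabilityContext) -> str:
    """Reject empty snippets, then render the generic prompt."""
    if not context.snippet.strip():
        raise ValueError("A vulnerable code snippet is required")
    return PROMPT_TEMPLATE.format(**asdict(context))
```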
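
Steps 3 and 4 roughly correspond to a streaming call against the Bedrock runtime through boto3, consuming the answer chunk by chunk as it is generated. The region and inference profile ID below are placeholders, not the ones used in production.

```python
# Sketch of prompting Bedrock and consuming the streamed answer with boto3.
# The region and model/inference-profile ID are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def stream_answer(prompt: str, inference_profile_id: str):
    """Yield the LLM answer incrementally as Bedrock produces it."""
    response = bedrock.converse_stream(
        modelId=inference_profile_id,  # an inference profile ID works as a model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    for event in response["stream"]:
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        if "text" in delta:
            yield delta["text"]
```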

Data security and privacy

As this service requires sending user code to a third-party GenAI model, measures must be taken to ensure the safety of the whole process:

Amazon Bedrock

AWS infrastructure hosts the LLMs used by this service.

Amazon Bedrock doesn’t store or log prompts and completions, nor does it use them to train AWS models or distribute them to third parties. See the Bedrock data protection guide.

Data both at rest and in transit is also encrypted. See the data encryption guide.

As an additional precaution, this service has been disabled for vulnerabilities related to leaked secrets in code.
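
Purely as an illustration, such a guard could look like the sketch below; the finding titles are hypothetical and do not reflect the actual Integrates classification.

```python
# Hypothetical guard: refuse to request an Autofix for findings whose snippet
# may expose secrets. The titles below are illustrative, not the real taxonomy.
LEAKED_SECRET_FINDINGS = frozenset({
    "Sensitive information in source code",
    "Exposed credentials",
})

def autofix_allowed(finding_title: str) -> bool:
    """Return False when the vulnerability relates to leaked secrets."""
    return finding_title not in LEAKED_SECRET_FINDINGS
```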

To Do

  • Use Amazon Bedrock Guardrails to sanitize code snippets and remove sensitive information before feeding the prompt to the LLM.
  • Instead of getting the context from criteria and adding it to the prompt, use RAG to give the model a knowledge base to consult, improving the quality of the results and simplifying the prompt.
  • Consider using a provisioned, open source LLM on transparency grounds.