
Shipping Secure GenAI Systems on Amazon Bedrock with Data-Aware Governance

Co-authored by Bedrock Data co-founder and CTO Pranava Adduri and AWS Solutions Architects, this post distills key insights from an AWS Builder Center article on how data-aware governance, through concepts like the Data Bill of Materials and Guardrail Gap Analysis, enables teams to ship GenAI systems securely on Amazon Bedrock without slowing innovation.
December 30, 2025 | 4 min read

Pranava Adduri
CTO & Co-founder, Bedrock Data

This post is based on a recent article published on the AWS Builder Center, co-authored by Pranava Adduri (Co-founder & CTO, Bedrock Data) alongside AWS Solutions Architects.

Read the original AWS article:
Securely shipping GenAI systems in Amazon Bedrock with data-aware governance

As generative AI systems move from experimentation into production, organizations are encountering a new class of security and governance challenges. These challenges aren’t driven by the models themselves, but by the data those models depend on.

In a recent AWS Builder Center article, Bedrock Data CTO and co-founder Pranava Adduri joined AWS Solutions Architects to examine what it takes to ship GenAI systems securely on Amazon Bedrock. The central insight is simple but critical:

You cannot govern AI systems effectively unless you understand—and continuously validate—the data they rely on.

This post summarizes the key ideas from that article and explains why data-aware governance is becoming foundational for production AI.

GenAI Has a Different Dependency Model

Traditional software depends on static code libraries. GenAI systems do not.

Instead, models and agents depend on:

  • Training and fine-tuning datasets
  • Vector stores used for Retrieval-Augmented Generation (RAG)
  • Upstream enterprise systems that evolve over time

These data dependencies directly influence what a model knows, what it can access, and what it can generate. As a result, static inventories or one-time governance reviews are no longer sufficient.

This is the problem space Pranava has focused on for years: making data risk operational, measurable, and enforceable in AI systems.

From Data Inventory to a Data Bill of Materials (DBOM)

One of the core concepts explored in the AWS article is the Data Bill of Materials (DBOM).

A DBOM answers a question every security, compliance, and AI leader eventually asks:

What data actually feeds this model or agent, end to end?

In practice, a DBOM maps:

  • Models to their training and fine-tuning datasets
  • Knowledge Bases to their source systems
  • Agents to the data they retrieve during inference

This mapping is not just about visibility. It creates the foundation for validating whether governance and safety controls are appropriate for the actual data in use.
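To make the idea concrete, here is a minimal sketch of what one DBOM entry might look like in code. The structure, field names, and data-type labels are our own illustrative assumptions, not a schema defined in the AWS article:

```python
from dataclasses import dataclass, field

@dataclass
class DbomEntry:
    """One edge in the data-dependency graph of a GenAI system."""
    consumer: str          # the model, Knowledge Base, or agent that uses the data
    source: str            # the upstream dataset, vector store, or enterprise system
    data_types: set[str] = field(default_factory=set)  # classifications found in the source

# Hypothetical DBOM for the PTO Approval Agent discussed in the next section
dbom = [
    DbomEntry(
        consumer="pto-approval-agent",
        source="s3://hr-data/employee-records/",
        data_types={"NAME", "EMAIL", "COMPENSATION"},
    ),
]
```

A real DBOM would carry more lineage metadata (for example, when each source was last scanned), but even this minimal shape is enough to drive the gap analysis described below.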

Why Guardrails Must Be Data-Aware

Amazon Bedrock provides powerful Guardrails that allow teams to block or redact categories such as PII, PHI, and PCI. However, the AWS article highlights an important nuance: a guardrail is only effective if it covers the specific sensitive data an agent can access.

In the example discussed—a PTO Approval Agent—the agent retrieves HR data that includes not only PII, but also compensation information. While common guardrails may block PII by default, they may not block salary data unless explicitly configured.

This creates a subtle but serious risk. The system appears protected, but sensitive information can still flow to the model and back to users.
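As a sketch of how a team might close this specific gap, the hypothetical boto3 call below anonymizes standard PII entities and adds a custom regex filter for salary figures, since compensation data is not a built-in PII category. The guardrail name, blocked-message text, and regex pattern are illustrative assumptions:

```python
import boto3

bedrock = boto3.client("bedrock")

response = bedrock.create_guardrail(
    name="pto-agent-guardrail",
    description="Blocks PII and compensation data for the PTO Approval Agent",
    blockedInputMessaging="Sorry, this request contains restricted data.",
    blockedOutputsMessaging="Sorry, the response contained restricted data.",
    sensitiveInformationPolicyConfig={
        # Built-in PII entity filters
        "piiEntitiesConfig": [
            {"type": "NAME", "action": "ANONYMIZE"},
            {"type": "EMAIL", "action": "ANONYMIZE"},
        ],
        # Salary data is not a built-in PII type, so it needs an explicit regex filter
        "regexesConfig": [
            {
                "name": "compensation",
                "description": "Dollar amounts that look like salary figures",
                "pattern": r"\$\s?\d{2,3},\d{3}",
                "action": "BLOCK",
            }
        ],
    },
)
print(response["guardrailId"], response["version"])
```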

Guardrail Gap Analysis: Validating Controls Against Reality

To address this problem, the article introduces the idea of Guardrail Gap Analysis.

By correlating:

  • The data types identified in the DBOM
  • The active Amazon Bedrock Guardrail configuration

teams can determine whether sensitive data is:

  • Properly blocked
  • Explicitly allowed
  • Unintentionally exposed

In the PTO agent scenario, this analysis reveals a critical gap: compensation data is accessible even though it should not be returned to users. Updating the guardrail configuration closes the gap, but the key insight is that the risk only becomes visible when data lineage, classification, and policy coverage are evaluated together.
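Here is a minimal sketch of that correlation, assuming (as a convention we are inventing for illustration) that DBOM classification labels line up with Bedrock PII entity types and custom regex filter names:

```python
import boto3

def guardrail_gap_analysis(dbom_types: set[str], guardrail_id: str,
                           version: str = "DRAFT") -> set[str]:
    """Return DBOM data types with no matching guardrail filter."""
    bedrock = boto3.client("bedrock")
    config = bedrock.get_guardrail(
        guardrailIdentifier=guardrail_id, guardrailVersion=version
    )

    policy = config.get("sensitiveInformationPolicy", {})
    covered = {e["type"] for e in policy.get("piiEntities", [])}
    covered |= {r["name"].upper() for r in policy.get("regexes", [])}

    # Anything reachable per the DBOM but not filtered is unintentionally exposed
    return dbom_types - covered

gaps = guardrail_gap_analysis({"NAME", "EMAIL", "COMPENSATION"}, guardrail_id="abc123")
if gaps:
    print(f"Guardrail gap: {sorted(gaps)} reachable by the agent but not filtered")
```

In the PTO agent scenario, this check would surface COMPENSATION as uncovered until the regex filter from the previous section is added.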

Governance Must Be Continuous

A final takeaway from the AWS article is that AI governance cannot be a one-time exercise.

Data sources change. New files appear. New data types are introduced. A guardrail that was sufficient last month may be incomplete tomorrow.

This is why data-aware governance must be operational:

  • DBOMs need to update as data changes
  • Guardrail coverage must be re-evaluated automatically
  • Teams need early warnings before risks reach production

This approach allows organizations to move quickly without sacrificing security or compliance.
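As a sketch of what "operational" could mean in practice, the hypothetical loop below re-runs the gap analysis from the previous section on a schedule. In a real deployment this would more likely be an event-driven job (for example, triggered when source data changes) than a local loop:

```python
import logging
import time
from typing import Callable

def governance_loop(
    scan_dbom: Callable[[], set[str]],  # re-scans sources and returns current data types
    guardrail_id: str,
    interval_s: int = 3600,
) -> None:
    """Periodically re-evaluate guardrail coverage against a fresh DBOM.

    Reuses guardrail_gap_analysis() from the previous sketch; scan_dbom is a
    hypothetical callable that re-classifies the data sources on each pass.
    """
    while True:
        gaps = guardrail_gap_analysis(scan_dbom(), guardrail_id)
        if gaps:
            # Early warning before the risk reaches production
            logging.warning("Guardrail gap detected: %s", sorted(gaps))
        time.sleep(interval_s)
```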

Shipping AI with Confidence

The broader message from the AWS article—and from Bedrock Data’s work more generally—is not about slowing AI adoption. It’s about making it safe to move fast.

By combining:

  • Amazon Bedrock for scalable GenAI capabilities
  • Data-aware governance grounded in DBOMs and guardrail validation

organizations can deploy AI systems that are both powerful and trustworthy.

That intersection—where AI capability meets operational governance—is exactly where Pranava Adduri and the Bedrock Data team are focused.

Read the Full AWS Article

This post is a summary of ideas originally presented in the AWS Builder Center article:

Securely shipping GenAI systems in Amazon Bedrock with data-aware governance
By Pranava Adduri, Ishara Premadasa (AWS), and Jean Malha (AWS)

For the full technical walkthrough and Amazon Bedrock implementation details, we recommend reading the original article on the AWS Builder Center.
