Microsoft to Offer Rewards Up to $30,000 for AI Vulnerabilities
Microsoft has launched an expanded bug bounty program offering rewards of up to $30,000 for researchers who identify critical vulnerabilities in AI systems within its Dynamics 365 and Power Platform products. The initiative, announced by Microsoft Security Response, aims to strengthen security in enterprise AI by incentivizing ethical hackers to uncover potential weaknesses before malicious […] The post Microsoft to Offer Rewards Up to $30,000 for AI Vulnerabilities appeared first on Cyber Security News.

Microsoft has launched an expanded bug bounty program offering rewards of up to $30,000 for researchers who identify critical vulnerabilities in AI systems within its Dynamics 365 and Power Platform products.
The initiative, announced by Microsoft Security Response, aims to strengthen security in enterprise AI by incentivizing ethical hackers to uncover potential weaknesses before malicious actors can exploit them.
AI Security Classification Framework
The program leverages Microsoft’s newly developed Vulnerability Severity Classification for AI Systems, which categorizes AI-specific security risks into three primary vulnerability types:
This category addresses vulnerabilities that could be exploited to manipulate a model’s response to individual inference requests without modifying the model itself. Key vulnerability types include:
Prompt Injection: Attacks where injected instructions cause the model to generate unintended output, potentially allowing attackers to exfiltrate user data or perform privileged actions.
Critical severity prompt injections requiring no user interaction can earn the highest bounties.
Input Perturbation: Vulnerabilities where attackers perturb valid inputs to produce incorrect outputs, also known as model evasion or adversarial examples.
Model Manipulation
These vulnerabilities target the training phase of AI systems, including:
Model Poisoning: Attacks where the model architecture, training code, hyperparameters, or training data are tampered with.
Data Poisoning: When attackers add poisoned data records to datasets used to train or fine-tune models, potentially introducing backdoors that can be triggered by specific inputs.
Inferential Information Disclosure
This category encompasses vulnerabilities that could expose sensitive information about the model’s training data, architecture, or weights:
- Membership Inference: The ability to determine whether specific data records were part of the model’s training data.
- Attribute Inference: Techniques to infer sensitive attributes of records used in training.
- Training Data Reconstruction: Methods to reconstruct individual data records from the training dataset.
- Model Stealing: Attacks that allow the creation of functionally equivalent copies of target models using only inference responses.
Reward Structure and Eligibility
Bounty awards range from $500 to $30,000, with the highest rewards reserved for critical severity vulnerabilities accompanied by high-quality reports.
The program specifically targets AI integrations in PowerApps, model-driven applications, Dataverse, AI Builder, and Microsoft Copilot Studio.
The severity classification system considers both the vulnerability type and the security impact, with the highest rewards for vulnerabilities that could allow attackers to exfiltrate another user’s data or perform privileged actions without user interaction.
Security researchers interested in participating can begin by signing up for free trials of Dynamics 365 or Power Platform services.
Microsoft provides detailed documentation for each product to assist researchers in understanding the systems they’re testing.
Microsoft’s Security Response team announced, “Your research could help us strengthen the security of enterprise AI. “
The program forms part of Microsoft’s broader security initiative, which includes bounty programs for various Microsoft products and services.
All submissions are reviewed for bounty eligibility, and researchers are recognized even when they don’t qualify for monetary rewards but lead to security improvements.
Through this initiative, Microsoft continues to emphasize collaborative security efforts as AI integration deepens across its enterprise solutions.
Malware Trends Report Based on 15000 SOC Teams Incidents, Q1 2025 out!-> Get Your Free Copy
The post Microsoft to Offer Rewards Up to $30,000 for AI Vulnerabilities appeared first on Cyber Security News.