How AI Coding Assistants Could Be Compromised Via Rules File

Slashdot reader spatwei shared this report from the cybersecurity site SC World: AI coding assistants such as GitHub Copilot and Cursor could be manipulated into generating code containing backdoors, vulnerabilities, and other security issues through the distribution of malicious rule configuration files, Pillar Security researchers reported Tuesday. Rules files are used by AI coding agents to guide their behavior when generating or editing code. For example, a rules file may instruct the assistant to follow certain coding best practices, use specific formatting, or output responses in a particular language.

The attack technique developed by the Pillar researchers, which they call the 'Rules File Backdoor,' weaponizes rules files by injecting them with instructions that are invisible to a human user but readable by the AI agent. Hidden Unicode characters such as bidirectional text markers and zero-width joiners can be used to obfuscate malicious instructions in the user interface and in GitHub pull requests, the researchers noted. Rules configurations are often shared among developer communities and distributed through open-source repositories or included in project templates; an attacker could therefore distribute a malicious rules file by sharing it on a forum, publishing it on an open-source platform like GitHub, or injecting it via a pull request to a popular repository. Once the poisoned rules file is imported into GitHub Copilot or Cursor, the AI agent will read and follow the attacker's instructions while assisting with the victim's future coding projects.
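On the defensive side, the core trick is straightforward to check for. Below is a minimal sketch (not from the Pillar report, and not an official tool) of a Python scanner that flags the kinds of invisible characters the researchers describe, such as zero-width joiners and bidirectional text controls, in a rules file (for example, a Cursor `.cursorrules` file) before it is imported:

```python
# Minimal sketch: flag invisible Unicode characters (zero-width joiners/spaces,
# bidirectional controls) that could hide instructions in a rules file from a
# human reviewer while remaining readable to an AI coding agent.
import sys
import unicodedata

# Characters commonly abused to hide or reorder text in source and config files.
SUSPICIOUS = {
    "\u200b": "ZERO WIDTH SPACE",
    "\u200c": "ZERO WIDTH NON-JOINER",
    "\u200d": "ZERO WIDTH JOINER",
    "\u2060": "WORD JOINER",
    "\ufeff": "ZERO WIDTH NO-BREAK SPACE (BOM)",
    "\u202a": "LEFT-TO-RIGHT EMBEDDING",
    "\u202b": "RIGHT-TO-LEFT EMBEDDING",
    "\u202c": "POP DIRECTIONAL FORMATTING",
    "\u202d": "LEFT-TO-RIGHT OVERRIDE",
    "\u202e": "RIGHT-TO-LEFT OVERRIDE",
    "\u2066": "LEFT-TO-RIGHT ISOLATE",
    "\u2067": "RIGHT-TO-LEFT ISOLATE",
    "\u2068": "FIRST STRONG ISOLATE",
    "\u2069": "POP DIRECTIONAL ISOLATE",
}

def scan(path: str) -> int:
    """Print each suspicious character with its line and column; return a count."""
    findings = 0
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            for col, ch in enumerate(line, start=1):
                # Also catch any other "format" (Cf) characters not in the table.
                if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
                    name = SUSPICIOUS.get(ch, unicodedata.name(ch, "UNKNOWN FORMAT CHARACTER"))
                    print(f"{path}:{lineno}:{col}: U+{ord(ch):04X} {name}")
                    findings += 1
    return findings

if __name__ == "__main__":
    # Usage: python scan_rules.py .cursorrules other-rules.md
    total = sum(scan(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)
```

A check like this could run in CI or as a pre-commit hook so that a pull request adding hidden control characters to a shared rules file fails loudly instead of slipping past human review.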

Read more of this story at Slashdot.