GitHub's official MCP server grants LLMs a whole host of new abilities, including being able to read issues in repositories the user has access to and submit new pull requests.
This is the lethal trifecta for prompt injection: access to private data, exposure to malicious instructions and the ability to exfiltrate information.
Fine-grained personal access tokens allow you to limit a token's repository access to an explicit set, so the server can only see the repositories you deliberately expose to it.
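As a concrete sketch of that mitigation: the official `github/github-mcp-server` README documents running the server in Docker with the token passed as `GITHUB_PERSONAL_ACCESS_TOKEN`. The token value below is a hypothetical placeholder, and the exact invocation may differ in your setup.

```shell
# Create a fine-grained personal access token in GitHub's settings,
# restricting "Repository access" to an explicit list of repositories
# and granting only the permissions the server actually needs
# (e.g. read-only access to Issues).
export GITHUB_PERSONAL_ACCESS_TOKEN="github_pat_..."  # placeholder, not a real token

# Run the official server in a container so it only ever sees
# that narrowly-scoped token:
docker run -i --rm \
  -e GITHUB_PERSONAL_ACCESS_TOKEN \
  ghcr.io/github/github-mcp-server
```

The point of the explicit repository list is that even a successful prompt injection can only read from, and write to, the repositories you opted in, rather than everything your account can touch.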
And remember supply chain attacks: can you really trust anyone else's software with that level of access?