
Archived Article — The Daily Perspective is no longer active. This article was published on 26 March 2026 and is preserved as part of the archive.

Technology

GitHub's Data Shift: Why Developers Should Know What's Coming on April 24

Microsoft-owned platform reverses course on AI training, moving individual developers to opt-out by default for the first time

Image: The Register
Key Points
  • From April 24, GitHub will use Free, Pro, and Pro+ user interactions to train AI models unless they actively opt out
  • Code snippets from private repos can be collected during active Copilot sessions; only data 'at rest' is excluded
  • Business and Enterprise customers remain exempt; policy relies on US opt-out norms rather than EU's stricter opt-in standard
  • Developers have until April 24 to disable training in privacy settings; prior opt-out preferences are preserved

In April, GitHub will begin using your code snippets, prompts, and development patterns to train its AI models unless you actively prevent it. The policy shift affects millions of individual developers, marking a significant reversal from the company's earlier assurances that consumer data would not be used this way.

Microsoft's GitHub announced the change this week through official channels. From April 24 onward, interaction data (inputs, outputs, code snippets, and associated context) from Copilot Free, Pro, and Pro+ users will be used to train and improve AI models unless those users opt out. The policy creates a straightforward division: business users remain protected, while individual developers must now take action to shield their work.

The data collection is broad. Beyond code suggestions that users accept or modify, GitHub will harvest file names, repository structure, comments, documentation, navigation patterns, and even feedback ratings. Copilot also processes code from private repositories during active sessions; that interaction data is required to run the service and may be used for model training unless you opt out. The distinction matters: repository contents stored at rest are excluded, but code from private repositories can still be collected while you are actively developing with Copilot.

The opt-out path exists, but only on terms familiar to US users: permission is assumed "in accordance with established industry practices," meaning US norms rather than the opt-in consent more commonly required in Europe. Developers can visit their privacy settings and toggle off "Allow GitHub to use my data for AI model training." Those who previously opted out retain their preference; no action is required unless they want to change course.

A Familiar Trade-Off in AI

GitHub frames this as necessary for improvement. The company already uses interaction data from Microsoft employees for model training and will begin using interaction data from GitHub employees as well. It argues that real-world development patterns strengthen its models' ability to suggest accurate code and catch bugs, and says that adding Microsoft employee interaction data has already produced meaningful gains, such as a higher acceptance rate for AI model suggestions.

Yet the shift reflects a broader tension in how AI companies approach data consent. Under an opt-out model, content owners are assumed to have agreed by default, so by the time rightsholders get the chance to object, their material may already have been used. That framework, which presumes permission and puts the burden on creators to say no, has drawn sustained criticism from privacy advocates and creators. The EU addressed the practice in law in 2024: its AI Act codified an opt-out mechanism for AI training by extending a rights-reservation provision originally intended for text-and-data mining.

GitHub is not alone in this approach. In its FAQs, GitHub notes that Anthropic, JetBrains, and corporate parent Microsoft operate similar opt-out data-use policies. For GitHub specifically, one complication looms: the company's foundational training already muddied questions of consent. OpenAI's Codex, used in GitHub Copilot, is a GPT language model fine-tuned on publicly available code from GitHub. The data-gorged AI horse is already out of the barn, and the AI industry is built on data gathered without asking rather than on anything resembling enthusiastic consent.

For developers working on sensitive projects, the stakes feel higher. Any code from private repos processed during active Copilot sessions becomes fair game unless users opt out, a meaningful exposure for those working on proprietary codebases. Developers concerned about intellectual property or security have less than a month to assess the risks and decide whether to disable training before April 24 arrives.

The policy underscores a genuine complexity in modern development tools. AI models do improve with access to real-world data, and GitHub's engineering argument carries weight. But that improvement comes at the cost of shifting default assumptions about who owns and controls developer data. For those uncomfortable with the terms, opting out remains possible. Those who do nothing will be contributing their work to Microsoft's training pipeline by default.

Mitchell Tan

Mitchell Tan is an AI editorial persona created by The Daily Perspective, covering the economic powerhouses of the Indo-Pacific with a focus on what Asian business developments mean for Australian companies and exporters. As an AI persona, his articles are generated using artificial intelligence with editorial quality controls.