AI-Generated Communication Is Breaking a System Built for Humans
Nikita Bier, Head of Product at X, warned that AI-driven spam could make core communication channels unusable.
On February 11, 2026, he wrote:
“Prediction: In less than 90 days, all channels that we thought were safe from spam & automation will be so flooded that they will no longer be usable in any functional sense: iMessage, phone calls, Gmail… And we will have no way to stop it.”
The 90-day mark is approaching. The specific timeline may be easy to discount, but the direction of the claim is harder to dismiss.
Email inboxes are already saturated with automated outreach. Voicemails are increasingly generated, not recorded. Text messaging, once relatively insulated, is beginning to follow the same pattern.
What is being described is not a failure of any one channel. It is a shift in what those channels are being used for, and by whom.
The immediate response to this shift is predictable. It is framed as a problem of spam, moderation, or user awareness. Better filters. Stronger detection. More cautious users. In parallel, there is a growing view that this outcome is simply unavoidable, that these systems will scale faster than any meaningful defense, and that the best we can do is adapt around them.
In that framing, the problem is treated as a fait accompli. Guardrails, filtering layers, and emerging coordination protocols are proposed as mitigations, but they operate downstream of the problem. They assume that the system itself remains unchanged.
What Is Actually Changing
Communication is no longer primarily human-originated. Messages that appear to come from individuals are increasingly generated, adapted, and deployed by systems. The effort required to produce communication has effectively been removed as a constraint.
At the same time, scale is no longer tied to intent or capacity. Messages can be generated continuously, adjusted in real time, and directed at individuals rather than broad audiences. What was once broadcast is now constructed per recipient.
This changes the nature of communication itself. It is no longer a reflection of human attention or effort, but the output of systems operating independently of both.
What is breaking is not the infrastructure, but the assumption underneath it: that the participants in these systems are human.
Why Current Solutions Don’t Work
The responses being proposed follow a familiar pattern. They focus on filtering, detection, and moderation. Improve the models. Improve the classifiers. Reduce false positives. Reduce false negatives.
These approaches assume that the problem can be managed at the level of content. They treat each message as an isolated unit to be evaluated after it has already been generated and delivered.
Guardrails extend this logic, but do not resolve it. They are applied within individual communication systems and vary from one to the next. They do not persist across environments, and they provide no consistent mechanism for governing how outputs are used once they leave the model that generated them.
Coordination protocols move in a different direction. They attempt to structure how systems interact, how context is passed, and how actions are organized. But they do not constrain what those systems are allowed to do. They provide coordination without enforcement, and operate within the same underlying conditions.
Taken together, these approaches remain reactive. They operate at the level of communication infrastructure, attempting to manage the effects generated by models after the fact. As a result, they are always working against the direction of the models themselves. They manage the effects of the models; they do not address the structure of the models.
What Is Missing
What is missing is not another layer of filtering, nor a more refined approach to detection. It is not additional coordination between systems, nor greater caution on the part of users. These responses assume that the problem can be managed once communication has already been generated and put into circulation.
What is missing is a layer that governs how machine-generated communication is allowed to operate within an environment made for human communication.
Such a layer would not evaluate messages in isolation. It would operate before execution, evaluating model-generated content at the point where it is prepared for deployment and transmission. The focus would not be on restricting content, but on classifying it. Is it advertising, a transactional message, a valid emergency alert, or something designed to create urgency, narrow attention, or otherwise shape behavior? This allows the communication infrastructure to distinguish between types of communication and respond accordingly.
With such a layer, generation remains unchanged. Models continue to produce content, but that content is now classified and can be routed, prioritized, or limited in accordance with its function within the communication environment.
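To make the idea concrete, here is a minimal sketch of what such a pre-transmission layer could look like. Everything in it is hypothetical: the category names, the routing policies, and the interface are illustrative assumptions, not a proposed standard or an existing API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class MessageClass(Enum):
    """Hypothetical classes of machine-generated communication."""
    EMERGENCY = auto()      # a valid emergency alert
    TRANSACTIONAL = auto()  # receipts, confirmations, account notices
    ADVERTISING = auto()    # promotional outreach
    BEHAVIORAL = auto()     # content designed to create urgency or shape behavior

@dataclass
class OutboundMessage:
    body: str
    message_class: MessageClass  # assigned by a classifier before transmission

def route(msg: OutboundMessage) -> str:
    """Apply a routing policy based on the message's class, not its content.

    The policy names here are placeholders; a real system would map
    classes to whatever delivery, rate-limiting, or review mechanisms
    the channel provides.
    """
    policy = {
        MessageClass.EMERGENCY: "deliver_immediately",
        MessageClass.TRANSACTIONAL: "deliver_normally",
        MessageClass.ADVERTISING: "deliver_rate_limited",
        MessageClass.BEHAVIORAL: "hold_for_review",
    }
    return policy[msg.message_class]
```

The point of the sketch is the ordering: classification and routing happen before the message enters the channel, so the infrastructure acts on the message's declared function rather than inspecting each message after delivery.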
The problem is not that these systems cannot be filtered. It is that they are allowed to operate without constraint.
The concern is not that communication channels will disappear. It is that they will continue to function without the structure that made them reliable.
The question is not whether AI will fill these systems. It already has. The question is whether those systems will be able to support it.
On the Missing Layer
I’ve written about this in more detail in what I describe as a “missing layer.” In short, there are ways to address this, and they do not begin with better filtering, but with how models generate and output content.
You can read more in the series here: The Missing Layer Series

