For decades, large parts of Internet performance could be observed directly from the network. Operators could infer latency, loss, and path behaviour by passively watching live traffic. This was not a special capability reserved for deep inspection systems; it was a natural consequence of how transport protocols were designed.
That model no longer holds. Encryption is now the default across protocol layers, and many of the signals that once enabled passive measurement have disappeared. This article explains what changed, why it changed, and which measurement approaches still make sense in an encrypted Internet.
How passive measurement used to work
Traditional transport protocols exposed a significant amount of metadata on the wire. In TCP, sequence numbers, acknowledgements, and timestamps were visible to any on-path observer. By correlating packets flowing in each direction, it was possible to estimate round-trip time, detect retransmissions, and infer congestion.
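To make this concrete, here is a minimal Python sketch of the classic SEQ/ACK matching technique: record when each client data segment leaves, then take an RTT sample when the acknowledgement covering it returns. The `Segment` records are illustrative; a real tool would decode them from a capture and would also have to handle cumulative ACKs, sequence wraparound, and SACK.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    ts: float       # capture timestamp, in seconds
    direction: str  # "c2s" (client to server) or "s2c"
    seq: int        # TCP sequence number
    length: int     # payload length in bytes
    ack: int        # acknowledgement number

def rtt_samples(packets):
    """Estimate RTT for one TCP flow from passively observed packets
    by matching each server ACK to the client data segment it covers.
    Retransmitted segments are skipped (Karn's algorithm), since an
    ACK for them is ambiguous."""
    pending = {}        # expected ack number -> send timestamp
    seen_seqs = set()   # sequence numbers already sent once
    samples = []
    for pkt in packets:
        if pkt.direction == "c2s" and pkt.length > 0:
            end = pkt.seq + pkt.length
            if pkt.seq in seen_seqs:
                pending.pop(end, None)  # retransmission: drop the sample
                continue
            seen_seqs.add(pkt.seq)
            pending[end] = pkt.ts
        elif pkt.direction == "s2c":
            sent = pending.pop(pkt.ack, None)  # exact match only; real
            if sent is not None:               # ACKs are cumulative
                samples.append(pkt.ts - sent)
    return samples
```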
These techniques required no cooperation from endpoints. Measurement systems did not need to inject traffic or modify flows. They simply observed what was already there. This made passive measurement attractive for operators managing large networks, where active probing could be expensive or unrepresentative.
Importantly, these measurements were taken from real user traffic. They reflected the performance applications actually experienced, not a synthetic approximation.
Why encryption changed the picture
The widespread deployment of transport encryption fundamentally altered what the network can see. Protocols such as QUIC encrypt nearly all transport-layer information, including packet numbers and acknowledgements.
From the perspective of an on-path observer, packets are still exchanged, but the structure that once linked them together is hidden. Without visible acknowledgements or sequence numbers, it becomes difficult to determine which packets belong together, let alone how long a round trip takes.
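As a rough illustration, the sketch below lists everything an on-path observer can still read from a QUIC short-header packet under RFC 9000: a few header bits and the destination connection ID, whose length (`dcid_len` here) the observer must have learned by watching the handshake, since the short header does not encode it.

```python
def visible_fields(packet: bytes, dcid_len: int) -> dict:
    """Fields of a QUIC short-header packet readable on path.
    The packet number and key phase are covered by header
    protection, and all frames (including ACKs) are encrypted."""
    first = packet[0]
    return {
        "long_header": bool(first & 0x80),  # header form bit
        "fixed_bit": bool(first & 0x40),    # must be 1 in QUIC v1
        "spin_bit": bool(first & 0x20),     # the one deliberate signal
        "dest_conn_id": packet[1:1 + dcid_len].hex(),
    }
```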
This loss of visibility was not accidental. It was a deliberate design choice driven by security, privacy, and protocol robustness. Encryption reduces the risk of traffic manipulation, limits pervasive monitoring, and helps prevent protocol ossification.
However, it also removes a class of measurement techniques that operators had relied on for years.
Why active probing is not a full replacement
When passive measurement became harder, active probing was often proposed as the alternative. Sending test traffic can reveal reachability, latency, and loss, and it works regardless of encryption.
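A minimal active probe can be as simple as timing a TCP handshake, as in the sketch below (`example.net` is a placeholder target). It relies on no payload visibility at all, which is why it survives encryption; what it measures is the probe's own treatment by the network, which is exactly the limitation discussed next.

```python
import socket
import time

def tcp_connect_rtt(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """One active latency sample: the time to complete a TCP
    handshake with the target."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return time.monotonic() - start

# e.g. samples = [tcp_connect_rtt("example.net") for _ in range(5)]
```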
Active measurement has clear advantages, but it also has limits. Probes consume additional bandwidth. They may follow different paths from application traffic. They may be deprioritised, cached, or rate-limited in ways that real traffic is not.
In large networks, running probes at sufficient scale and frequency can be costly. More importantly, probes do not always capture the performance seen by specific applications or users.
This is why the loss of passive measurement remains significant, even in an encrypted world.
Early responses to encrypted transport
As encryption became widespread, researchers and engineers began exploring ways to restore some degree of observability without weakening security. One line of work focused on explicit cooperation between endpoints and the network.
Instead of inferring performance indirectly, endpoints could expose minimal signals designed specifically for measurement. These signals would be limited in scope, optional, and carefully evaluated for privacy impact.
The MAMI Project, which ran from 2016 to 2018, explored several aspects of this problem. One of its best-known contributions was the analysis of the QUIC spin bit: a single header bit whose value flips once per round trip, allowing an on-path observer to estimate round-trip time without seeing packet contents or detailed protocol state.
The spin bit demonstrated that encryption and measurement are not mutually exclusive, but it also highlighted the trade-offs involved.
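For intuition, passive spin-bit measurement reduces to edge detection: watch one direction of a connection, timestamp each change in the bit's value, and treat the interval between successive changes as an RTT sample. A simplified Python sketch, leaving out the filtering a real observer needs for idle periods and packet reordering (both of which stretch or corrupt the spin period):

```python
def spin_rtt_samples(observations):
    """Passive RTT estimation from the QUIC spin bit. `observations`
    is an iterable of (timestamp, spin_bit) pairs for packets flowing
    in one direction of a single connection; the time between
    successive value changes approximates one round trip."""
    samples = []
    last_value = None
    last_edge_ts = None
    for ts, spin in observations:
        if last_value is not None and spin != last_value:
            if last_edge_ts is not None:
                samples.append(ts - last_edge_ts)  # one spin period
            last_edge_ts = ts
        last_value = spin
    return samples
```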
What we gained with encryption
It is important to be clear about what encryption improved. Encrypting transport metadata makes it harder for on-path devices to interfere with connections. It reduces the ability of intermediaries to make assumptions about protocol behaviour. It also limits large-scale traffic analysis.
These changes have made the Internet more robust and more private. They have also enabled faster protocol evolution, as endpoints can deploy new features without fear of being blocked or altered by legacy devices.
From this perspective, the loss of passive measurement is not a failure, but a consequence of prioritising other goals.
What we lost, in practical terms
What disappeared with encryption was not just visibility, but a shared understanding of performance. Network operators could once diagnose latency spikes or congestion patterns using traffic they were already carrying.
Today, many of those signals are available only to endpoints. Operators may see symptoms, such as increased traffic volume or retransmissions at lower layers, but lack the context to explain them.
This can make troubleshooting slower and more uncertain. It can also complicate capacity planning and long-term performance analysis.
What still works today
Despite these challenges, several measurement approaches remain viable.
Explicit signals, such as the QUIC spin bit, show that carefully designed cooperation can restore specific metrics. While deployment is uneven and often conservative, the underlying idea remains sound.
Active measurement continues to play an important role, particularly when combined with application-level instrumentation. In controlled environments, such as data centres or managed overlays, operators can still obtain detailed insights.
Finally, endpoint-based measurement has become more common. Applications increasingly measure their own performance and report aggregated metrics. This shifts visibility away from the network, but it does not eliminate it.
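A minimal sketch of that pattern, with illustrative names: the application records its own request latencies and exports only windowed aggregates, so whoever receives the report sees performance without any per-packet visibility.

```python
import statistics

class LatencyReporter:
    """Endpoint-side measurement: the application records its own
    request latencies and exports only coarse aggregates."""
    def __init__(self):
        self._samples = []

    def record(self, seconds: float):
        self._samples.append(seconds)

    def flush(self):
        """Return an aggregate report and reset the window."""
        if not self._samples:
            return None
        s = sorted(self._samples)
        report = {
            "count": len(s),
            "p50_ms": 1000 * s[len(s) // 2],
            "p95_ms": 1000 * s[int(0.95 * (len(s) - 1))],
            "mean_ms": 1000 * statistics.fmean(s),
        }
        self._samples = []
        return report
```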
Why the problem is not fully solved
None of these approaches is a complete replacement for traditional passive measurement. Explicit signals raise privacy questions. Active probes can be expensive or misleading. Endpoint metrics may not be accessible to all stakeholders.
The underlying tension remains: how to balance privacy, security, and operational needs in a shared infrastructure.
There is no single technical fix. The solution space involves protocol design, deployment practices, and trust relationships between endpoints and networks.
Why historical context matters
Many current debates about observability in encrypted protocols echo questions that were already being asked several years ago. Understanding earlier proposals helps avoid repeating the same arguments or dismissing ideas without evidence.
The value of projects like MAMI lies not only in their specific mechanisms, but in how they framed the problem. They treated encryption as a given and asked what could still be measured responsibly.
That framing remains relevant as new protocols and architectures emerge.
Where this leaves practitioners
For engineers and operators, measuring the encrypted Internet requires a more deliberate approach. Assumptions that once held no longer apply. Visibility must be designed, negotiated, or instrumented explicitly.
This does not mean the network is blind. It means that measurement has become a design problem rather than an accident of protocol structure.
Understanding what was lost, what was gained, and what still works is a necessary first step toward building systems that are both observable and respectful of user privacy.

