Smart Queues, bufferbloat, and the UniFi WAN
The symptom is familiar to anyone with a home office and a gigabit-class WAN: a speed test reports the full advertised number, and ten seconds later a Zoom call goes choppy, the game ping balloons, and a webpage stalls for no obvious reason. The WAN is not slow; the WAN is laggy under load. That gap between “line rate” and “responsive under load” has a name — bufferbloat — and a fix that has been standardised since 2018 and shipping in UniFi as the Smart Queues feature for years. It is off by default on every UniFi gateway we audit. This article is the citation chain on why it exists, when to turn it on, and the throughput ceiling that catches almost everyone who does.
Fast speed test, lousy Zoom call.
The shortest accurate definition comes from Bufferbloat.net itself, in the project page that has been the canonical reference since 2011: “Bufferbloat is the undesirable latency that comes from a router or other network equipment buffering too much data.”¹ That sounds counter-intuitive. Buffers exist precisely so packets don't get dropped during a brief surge. The problem is what happens once the buffer fills.
Packets queueing in an oversized buffer add latency proportional to the depth of the queue. On a saturated link, that queue stays full continuously, and every new packet — including the latency-sensitive ones, like a VoIP frame or a Zoom audio packet — has to wait behind everything ahead of it. Jim Gettys and Kathleen Nichols named the phenomenon in their 2011 ACM Queue paper Bufferbloat: Dark Buffers in the Internet, which framed the consequence directly: large buffers damage or defeat the fundamental congestion-avoidance algorithms that TCP relies on to share a link fairly.² Without those signals, every flow keeps shoveling more into the queue, the queue stays full, and the delay stays high.
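The arithmetic behind that waiting is one line: a packet arriving at a full buffer must wait for everything ahead of it to drain at the bottleneck rate. A minimal illustration — the 1 MiB buffer and 20 Mbps upload are hypothetical figures chosen for the example, not measurements from any particular modem:

```python
def queue_delay_seconds(buffer_bytes: int, link_rate_bps: float) -> float:
    """Worst-case latency added by a full buffer draining at the bottleneck rate."""
    return buffer_bytes * 8 / link_rate_bps

# A hypothetical 1 MiB modem buffer on a 20 Mbps upload:
delay = queue_delay_seconds(1 * 1024 * 1024, 20e6)
print(f"{delay * 1000:.0f} ms")  # about 419 ms added to every packet's trip
```

Every packet on the link — including the 20 ms-RTT VoIP frame — pays that toll for as long as the buffer stays full.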
Apple's 2021 WWDC session Reduce network delays for your app framed the user-facing version of the same idea more starkly: “When buffers in the network are excessively large, when they fill up, they don't improve throughput, but they do add delay… this phenomenon of excessively large buffers is called bufferbloat.”³ That session quantified what it looks like in practice. An idle round-trip of 20 ms can balloon to 600 ms or more once the link saturates — 30 times worse — and that 600 ms is what an end user actually experiences during a videocall or a game.
The reason this gap is invisible to a speed test is structural. A speed test measures throughput while the link is saturated; it does not measure latency during that saturation. The headline number grades 1 Gbps of throughput; the question it never asks is whether voice and video survive while that throughput is being delivered. The two measurements are almost entirely independent.
Priority queues need a controlled queue to be useful.
The instinctive answer is QoS — mark the VoIP packets as high priority, mark the bulk transfer as low priority, let the router put the important traffic first. Classic QoS on a home gateway has done a version of that for two decades. It rarely fixes bufferbloat because it is acting on the wrong queue.
The bottleneck queue that creates bufferbloat does not live in your router. It lives in the next hop upstream — usually inside the ISP-provided modem, the DOCSIS cable headend, the GPON OLT, or the cellular tower. That is the link that runs at the slowest speed in the chain, and that is the buffer that fills. The router sees a fast 1 Gbps Ethernet link to the modem and never builds a queue at all on its own side. By the time the packet gets to the buffer that's actually overflowing, the router's QoS markings are no longer being read.
Active queue management (AQM) inverts the problem. Instead of trying to influence somebody else's queue, AQM deliberately shapes your outbound traffic to run slightly below the upstream link rate, so that the bottleneck queue is in your router, where you control it. The router then signals congestion to TCP early — by dropping or marking packets before the buffer fills — so flows slow down before the queue ever gets long. The user-visible effect is that the queue stays short, latency stays low under load, and throughput stays high.
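The first half of that inversion — shaping outbound traffic below the upstream rate so the queue forms locally — can be sketched as a toy token-bucket limiter. This is an illustration of the shaping concept only, not UniFi's implementation; the class name and parameters are invented for the example:

```python
import time

class TokenBucketShaper:
    """Toy rate shaper: release bytes no faster than rate_bps, so the queue
    that forms under load is here (where AQM can manage it), not in the
    ISP modem upstream."""

    def __init__(self, rate_bps: float, burst_bytes: int = 1514):
        self.rate = rate_bps / 8          # token refill rate, bytes/second
        self.burst = burst_bytes          # maximum instantaneous burst
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def try_send(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # caller holds the packet in a local queue (where CoDel runs)

# Two full-size packets fit the configured burst; the third must wait locally.
shaper = TokenBucketShaper(rate_bps=800_000, burst_bytes=3000)
print(shaper.try_send(1500), shaper.try_send(1500), shaper.try_send(1500))
```

The `False` branch is the whole point: the packet that would have sat in the modem's oversized buffer now sits in a queue the router controls.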
The canonical AQM algorithms for home use are CoDel (Controlled Delay), FQ-CoDel (Flow Queue CoDel), and CAKE (Common Applications Kept Enhanced). CoDel was introduced as a “no-knobs, just-works” AQM in 2012;⁴ FQ-CoDel was standardised by the IETF as RFC 8290 in 2018 under the title The Flow Queue CoDel Packet Scheduler and Active Queue Management Algorithm, and its abstract names the goal directly: “FQ-CoDel mixes packets from multiple flows and reduces the impact of head-of-line blocking from bursty traffic. It provides isolation for low-rate traffic such as DNS, web, and videoconferencing traffic.”⁵ CAKE is the more aggressive, more configurable descendant. UniFi Smart Queues uses an FQ-CoDel / CAKE-family implementation in its shaper.
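CoDel's core control law is compact enough to show. The sketch below reproduces the inverse-square-root drop-spacing rule from the CoDel specification (RFC 8289), with its standard 5 ms target and 100 ms interval; it is a simplification — the real algorithm tracks per-packet sojourn times, and FQ-CoDel adds per-flow queues on top:

```python
import math

TARGET_MS = 5.0      # acceptable standing-queue delay (CoDel default)
INTERVAL_MS = 100.0  # worst-case RTT the algorithm assumes (CoDel default)

def next_drop_interval_ms(drop_count: int) -> float:
    """Once queue delay has exceeded TARGET_MS for a full INTERVAL_MS,
    CoDel drops a packet; each consecutive drop is scheduled sooner,
    spaced by interval / sqrt(count)."""
    return INTERVAL_MS / math.sqrt(drop_count)

for n in (1, 2, 4, 16):
    print(n, round(next_drop_interval_ms(n), 1))  # 100.0, 70.7, 50.0, 25.0
```

The effect is gentle pressure that ramps up only while congestion persists — which is why CoDel needs no per-link tuning knobs.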
One toggle, two numbers, one shaper.
The UniFi setting itself is small. In the Network application, under Settings → Internet → WAN → Advanced, there is a toggle labelled Smart Queues with two number fields: Download and Upload, each in Mbps. Ubiquiti's help-center article UniFi Gateway — Smart Queues describes the feature's purpose plainly: it is “designed to prevent buffer bloat for those with low-bandwidth internet connections.”⁶ Under the hood, those two numbers are passed to an FQ-CoDel / CAKE-family shaper that runs in software on the gateway CPU.
What that shaper does is the inversion described in § 02. It deliberately limits outbound and inbound traffic to the configured rates, so that the bottleneck queue is the gateway's own software queue rather than the ISP's upstream queue. It then applies CoDel marking to keep that queue short. Cross-flow isolation comes free: small flows like a DNS query or a Zoom audio packet are scheduled separately from large flows like a backup upload, so the small flow never sits behind the large one.
A practical consequence: the numbers you enter should be slightly below your measured line rate, not at it. The community consensus, echoed in HostiFi's tutorial guidance, is to set the Smart Queues bandwidth to roughly 85–95 percent of measured throughput in each direction — for example, “if you have 50mbps download from your ISP, you could enter in 45mbps and then same for upload too.”⁷ Setting Smart Queues to 100 percent of line rate defeats the point — the bottleneck queue is still somewhere upstream. Setting it well below line rate works but leaves throughput on the table. The sweet spot is a 10–15 percent haircut.
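That haircut is simple enough to encode. A small helper, assuming the 90 percent midpoint of the 85–95 percent guidance (the function name is ours, not Ubiquiti's):

```python
import math

def smart_queue_rates(measured_down_mbps: float, measured_up_mbps: float,
                      haircut: float = 0.90) -> tuple[int, int]:
    """Suggested Smart Queues Download/Upload fields: ~90% of the
    *measured* line rate in each direction, rounded down."""
    return (math.floor(measured_down_mbps * haircut),
            math.floor(measured_up_mbps * haircut))

print(smart_queue_rates(115, 22))  # a cable plan that tests at 115/22 -> (103, 19)
print(smart_queue_rates(50, 10))   # HostiFi's 50 Mbps worked example -> (45, 9)
```

Feed it measured speed-test numbers, never the advertised plan speed.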
The shaper is the bottleneck above 300 Mbps.
This is the single most-common audit finding on UniFi networks where Smart Queues has been turned on by a well-meaning installer or a homeowner following a YouTube tutorial: the WAN is now slower than it was before. Speed tests that previously hit 900 Mbps now sit at 250. The reason is not configuration error. It is hardware.
Smart Queues runs in software on the gateway's main CPU. Every packet that goes through the WAN has to be classified, queued, and re-scheduled by the kernel — operations that are normally accelerated by the gateway's hardware offload engine, and that Smart Queues disables. Ubiquiti's own help-center guidance on the feature is explicit: “we do not recommend enabling Smart Queues for ISP connections that provide 300 Mbps or higher speeds.”⁶ Older Ubiquiti documentation listed the per-model ceilings directly: roughly 85 Mbps on the original USG, 250 Mbps on the USG-PRO. Newer gateways handle more, but every UniFi gateway has a ceiling above which the Smart Queues shaper itself is the bottleneck.
The community-forum threads are full of the same shape. A user enables Smart Queues, sees bufferbloat disappear (good), and also sees throughput collapse from gigabit to a few hundred Mbps (bad). The Ubiquiti-community thread “Smart Queues in UCG-Max — dramatically decreased bandwidth” is one of dozens with this story. The right framing is that on a multi-gigabit WAN with a gateway that doesn't have hardware AQM offload, Smart Queues is a tradeoff: lower throughput in exchange for low latency under load. Whether the tradeoff is worth it depends on what the household actually does with the WAN.
The same trap exists for hardware-offload features generally. Enabling QoS rules on a UniFi gateway — which Smart Queues is one form of — disables the hardware offload path for WAN traffic on most models. That is why the throughput-cap behaviour shows up whether you got there through Smart Queues, through bandwidth-limit rules, or through the QoS-priority rules in the Traffic Management section.
Three tests, all free, all five minutes.
Do not turn Smart Queues on speculatively. Measure first. Three free tools surface bufferbloat directly:
1. The Waveform Bufferbloat Test
The Waveform test at waveform.com/tools/bufferbloat is the cleanest browser-based test. It measures baseline latency while the link is idle, then saturates the connection in both directions and tracks how much the latency rises. The output is a letter grade. A or B means the WAN is well-behaved under load and Smart Queues will offer little benefit. C or D means bufferbloat is present and a Zoom call during an upload will suffer. F means bufferbloat is severe and latency under load is multiple times the idle figure.⁸
2. Apple's networkQuality tool (macOS)
Modern macOS ships a command-line tool called networkQuality that implements Apple's RPM (round-trips-per-minute) responsiveness test from the WWDC21 session. Run networkQuality in a terminal; the output includes a responsiveness score measured during load — exactly the problem the Apple session was built around.³ A responsiveness score in the thousands of RPM indicates a healthy link; a score below a few hundred under load indicates significant bufferbloat.
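A rough reading of the RPM score can be encoded as a lookup. The thresholds below come from the rule of thumb above — thousands of RPM is healthy, low hundreds is bufferbloat — and are illustrative cutoffs, not Apple's official grading:

```python
def rpm_verdict(rpm_under_load: int) -> str:
    """Illustrative interpretation of a networkQuality responsiveness score.
    Thresholds are rough rules of thumb, not Apple-published boundaries."""
    if rpm_under_load >= 1000:
        return "healthy"
    if rpm_under_load < 300:
        return "significant bufferbloat"
    return "borderline - re-test and compare against Waveform"

print(rpm_verdict(2200))  # healthy
print(rpm_verdict(150))   # significant bufferbloat
```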
3. The UniFi controller's WAN latency view
In the Internet view of the Network application, the gateway logs WAN latency at one-minute granularity. Look for the pattern where idle-period latency is 10–20 ms and load-period latency is 200 ms or more. This view is less precise than Waveform or networkQuality, but it is the only one of the three that is continuous — it lets you see whether the latency spikes correlate with cloud-backup windows, IPTV streams, or work-from-home video calls.
If all three views agree that the WAN is fine under load, Smart Queues is unnecessary. If one or more shows clear bufferbloat — Waveform grade D or F, networkQuality dropping into the low hundreds, controller latency multiplying by ten under load — then the feature is the correct fix, provided the link rate is below the gateway's shaping ceiling.
Five fields, two re-tests, one rule of thumb.
The setting itself takes under a minute. The measurement around it is what makes it work.
- Measure the actual line rate. Run a full speed test from a wired client directly behind the UniFi gateway. Use a test that reports both directions accurately — speedtest.net, fast.com, or Waveform's underlying speed measurement. Record the wired download and upload numbers, not the advertised plan speed. Most ISPs deliver a few percent above plan; a few deliver below.
- Compute the Smart Queues bandwidths. Multiply the measured download by 0.90 (90 percent) and the measured upload by 0.90. Round to a clean nearby number. For a 100 Mbps / 20 Mbps cable plan that tests at 115 / 22, enter 103 / 19. For a 300 / 300 fiber plan that tests at 310 / 305, enter 280 / 275 — and only if the gateway can shape 300 Mbps without itself becoming the bottleneck.
- Enable Smart Queues. Settings → Internet → WAN → Advanced → Smart Queues. Turn the toggle on, enter the two numbers, apply. Some controller versions require a gateway provision cycle that takes 30–60 seconds.
- Re-test bufferbloat. Run Waveform again. The grade should jump to A. If it stays C or D, the configured Smart Queues bandwidth is still too close to line rate — drop it another five percent and re-test.
- Re-test throughput. Run a speed test again. The new throughput should be roughly your configured Smart Queues bandwidth — that is the expected behaviour, because the shaper is now the ceiling. If throughput dropped well below the configured number, the gateway CPU is the bottleneck and the shaper itself cannot keep up. See § 04: either accept the lower number or upgrade the gateway.
The rule of thumb after all five steps: the latency grade is what bufferbloat is fixing, and the throughput number is the cost of fixing it. If both look right — grade A latency and roughly 90 percent of measured line rate on throughput — the WAN is well-configured.
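The two re-tests reduce to a small decision procedure. The 80 percent cutoff below is our own judgment call for “dropped well below the configured number”, not a Ubiquiti figure:

```python
def retest_verdict(configured_mbps: float, measured_mbps: float,
                   waveform_grade: str) -> str:
    """Interpret the post-Smart-Queues re-tests (illustrative thresholds)."""
    if waveform_grade in ("C", "D", "F"):
        # Latency still bloated: the shaper is set too close to line rate.
        return "still bloated: lower the Smart Queues bandwidth ~5% and re-test"
    if measured_mbps < configured_mbps * 0.8:
        # Throughput collapsed well below the configured ceiling.
        return "gateway CPU is the bottleneck: accept the rate or upgrade"
    return "well-configured: low latency at roughly the configured rate"

print(retest_verdict(280, 276, "A"))  # the healthy outcome
print(retest_verdict(900, 250, "A"))  # the shaping-ceiling trap from section 04
```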
Three cases where the feature does more harm than good.
- Multi-gig WAN above the gateway's shaping ceiling. A 1 Gbps or 2.5 Gbps fiber plan terminated on a gateway that caps Smart Queues at 250–500 Mbps means turning the feature on cuts measured throughput in half or worse, in exchange for fixing latency that may not be a problem in the first place. Ubiquiti's own 300 Mbps threshold is the conservative recommendation here.⁶ Above that, the answer is to upgrade to a gateway with more shaping headroom — not to force the feature on.
- Already-low-bufferbloat ISPs. A few consumer ISPs — most notably some fiber operators that have deployed AQM at the BNG or OLT level — deliver a grade-A bufferbloat score on Waveform with no router-side intervention at all. If the WAN is already grade A, Smart Queues is a throughput cap with no upside.
- Behind a double-NAT or ISP router doing its own shaping. If the UniFi gateway is in router mode behind an ISP-supplied router that is itself doing some form of QoS or shaping, enabling Smart Queues on the UniFi side stacks two shapers on top of each other and produces unpredictable results. Fix the double-NAT first (see our companion article on ISP bridge mode); then evaluate Smart Queues against the now-cleanly-routed line rate.
None of these are absolute. There are residential multi-gig deployments where a 500 Mbps Smart-Queues cap is genuinely preferable to a 2 Gbps WAN with grade-F bufferbloat, because the household actually uses the link for videocalls and gaming rather than for raw download speed. The question is always “what is the WAN for?” — and answering it honestly means measuring the bufferbloat grade before the throughput number.
Where the picture is firmer, and where it is softer.
- The UniFi controller UI changes. The exact menu path to Smart Queues has moved several times across controller versions — formerly under Internet, sometimes under Routing & Firewall, more recently under Settings → Internet → WAN → Advanced. The feature name and its two numeric fields have been consistent; the path has not. Use the search box in the controller if the menu has moved again.
- Per-gateway throughput numbers are model-specific. Ubiquiti has not published a single canonical table of Smart-Queues ceilings per current-generation gateway. Older help-center articles cited 85 Mbps for the original USG and 250 Mbps for the USG-PRO; newer gateways such as the UDM, UDM-Pro, UCG-Ultra, UCG-Max, and UDM-SE have higher ceilings, but still finite ones. The 300 Mbps figure is the floor at which the feature stops being broadly recommended — not the per-model ceiling.⁶
- Not every UniFi gateway exposes the same SQM controls. Some firmware combinations expose only the Smart Queues toggle; others add per-class QoS-priority rules, traffic-management bandwidth limits, or — on newer hardware — Flow Control as a separate option. The advice in this article is about the Smart Queues toggle specifically; the surrounding controls are useful in some cases but are not substitutes for AQM on the WAN.
- The default-state finding is broader than one feature. The CHI 2025 study Understanding Home Router Configuration Habits & Attitudes reported that 60.6 percent of surveyed participants accept the router's default settings on first installation and never go back to change them.⁹ Smart Queues being off by default is a specific instance of that broader pattern. The mitigation is the same: a periodic configuration review, by an engineer or by the homeowner reading documentation like this one.
- Bufferbloat is not the only WAN-latency cause. A grade-A Waveform score does not mean every WAN problem is solved. Wi-Fi-side airtime saturation, ISP-side packet loss, and IDS/IPS on the gateway can all produce latency under load that has nothing to do with queueing in the WAN buffer. Treat Smart Queues as the fix for one specific failure mode, not as a general WAN-quality knob.
None of these caveats changes the headline. On a residential UniFi WAN that tests grade C or worse on Waveform, where the line rate is below the gateway's shaping ceiling, Smart Queues is a one-toggle fix for a two-decade-old problem — and it is the most common latent improvement we recommend in the WAN section of a Health Check.
// REFERENCES
- [1] Bufferbloat.net — project homepage. Source for the canonical one-sentence definition: “Bufferbloat is the undesirable latency that comes from a router or other network equipment buffering too much data.” bufferbloat.net — projects
- [2] Jim Gettys and Kathleen Nichols — Bufferbloat: Dark Buffers in the Internet, ACM Queue, Vol. 9 No. 11, November 2011. Source for the foundational framing of large buffers as the cause of congestion-control failure. dl.acm.org — Bufferbloat: Dark Buffers in the Internet
- [3] Apple — Reduce network delays for your app, WWDC21 Session 10239. Source for the canonical user-facing description of bufferbloat: “When buffers in the network are excessively large, when they fill up, they don't improve throughput, but they do add delay…” and for the introduction of the responsiveness (RPM) test that ships as networkQuality on macOS. developer.apple.com — WWDC21 Session 10239
- [4] Bufferbloat.net — CoDel Overview. Source for the original “no-knobs” framing of CoDel as an AQM algorithm that does not require per-link tuning. bufferbloat.net — CoDel
- [5] Toke Høiland-Jørgensen, Paul McKenney, Dave Täht, Jim Gettys, Eric Dumazet — IETF RFC 8290: The Flow Queue CoDel Packet Scheduler and Active Queue Management Algorithm, January 2018. Source for the formal FQ-CoDel specification used by the SQM family of shapers. datatracker.ietf.org — RFC 8290
- [6] Ubiquiti Help Center — UniFi Gateway — Smart Queues. Source for the feature description, the recommendation against enabling it on connections above 300 Mbps, and the framing that Smart Queues is “designed to prevent buffer bloat for those with low-bandwidth internet connections.” help.ui.com — UniFi Gateway: Smart Queues
- [7] HostiFi Help Center — How to set up UniFi Smart Queues. Source for the operator-side configuration advice — set Smart Queues bandwidth slightly below measured line rate; the worked example of 45 Mbps Smart Queues on a 50 Mbps ISP plan is from this article. support.hostifi.com — UniFi Smart Queues setup
- [8] Waveform — Bufferbloat and Internet Speed Test. Source for the consumer-facing letter-grade bufferbloat test referenced in § 05; methodology is to measure latency at idle, then again under saturation in both directions. waveform.com — Bufferbloat Test
- [9] Junjian Ye et al. — Understanding Home Router Configuration Habits & Attitudes, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. Source for the finding that 60.6 percent of surveyed participants accept default home-router settings rather than configure them. dl.acm.org — Home Router Configuration Habits (CHI 2025)