Alexis Bertholf said it best, "Let's Make Network Engineering Cool Again"
There was a time when network engineering actually was cool. Not marketing cool, but builder cool. The people who understood routing, RF, and transport were the people who made the internet exist. They lit fiber, stitched together ISPs, built carrier hotels, and made offices talk to each other across cities and oceans. Network diagrams lived on whiteboards, outages were front-page news, and "the network" was a visible system that shaped what organizations could even attempt, much less achieve. Somewhere along the way, connectivity became assumed. Wi-Fi became a checkbox. The internet became a utility. Network engineering didn't stop mattering, but it disappeared into drywall and data centers, noticed mainly when it broke. The question isn't how to make it flashy; it's how to make it cool again.
Not long ago, the phrase "highly reliable internet" meant signing a multi-year contract and cutting a four-figure check every month for a dedicated fiber circuit. Small companies routinely paid close to $1,000 a month for tens of megabits and still spent time arguing with carriers about what "guaranteed" really meant. Homes lived in a different universe. Most were stuck on DSL or oversubscribed cable connections held together with hope, prayers, and whatever modem firmware happened to ship that year. Any time a neighbor fired up Netflix, the loading spinner made an appearance.
That gap has closed (mostly).
Modern 5G fixed-wireless access (FWA) services such as T-Mobile Home Internet, Verizon 5G Home, and AT&T Internet Air, coupled with beefy home routers, can deliver speeds, latency, and uptime approaching what enterprises used to buy wholesale. Let's take a little journey, one that winds through the dusty DSL era and ends at a home network fed by fiber, 5G, and maybe even some LEO space internet.
The Old Model: Reliability Was A Line Item
Reliability was not something you designed at home. It was something businesses purchased.
Offices were built around leased circuits, DMARCs, and contracts that spelled out uptime in decimals. Dedicated Internet Access didn't just mean "fast." It meant uncontended bandwidth, predictable latency, and a provider who would answer the phone when a backhoe found your fiber. You paid for that predictability, and you structured your internal networks around the assumption that the WAN was a known constant.
Homes lived on the opposite side of that divide. Most home offices ran on whatever access technology happened to reach the neighborhood: ADSL, VDSL, early cable (think Time Warner Cable RoadRunner). If you were a few miles from the ISP's central office (CO), 10–20 Mbps down and sub-1 Mbps up was normal. Not worst case. Normal. The last mile was long, copper, and out of your control.
The office stack reflected that separation. Redundant routers, private backbones, undersea cables, MPLS cores, and engineered peering lived in the enterprise and carrier world. The home had a modem, a hope, and usually a single point of failure.
In that model, reliability and predictability didn’t come from design. They came from an SLA.
Fixed Wireless To The Rescue
Fixed Wireless Access works by hooking your home (or office) wirelessly to nearby 5G towers. A compact CPE unit, basically a 5G modem with an antenna, mounts on or in your house and talks to the carrier’s cell site via radios. Unlike tethering a phone, this is designed for stationary use and usually sits plugged into the wall. When done right, 5G FWA can deliver fiber-like speeds and latency, even in rural or fringe areas, because mid-band and mmWave 5G finally live in a performance envelope that intersects REAL broadband instead of imitating it.
How does it work in practice? The typical model is simple. The 5G base station beams a signal to the CPE at the house. The house then has its own Wi-Fi or wired LAN served by that 5G backhaul. Because it’s wireless, deployment is quick. No trenching fiber required, just place the router anywhere and plug it into power. Carriers package this as managed gateways, whether that’s the T-Mobile unit, Verizon’s 5G router, or AT&T’s Internet Air gateway. They’re designed to be plug-and-play, but the underlying behavior is still carrier RF, which means placement and local conditions matter more than people expect.
Performance is the reason this stopped being a toy. In many markets, these services land in the low hundreds of Mbps down, with latency often in the teens to a few dozen milliseconds depending on spectrum and tower load. Even without external antennas, plenty of users see downstream that dwarfs DSL and upstream that actually supports modern collaboration and cloud workloads. Upstream is the part that changes the lived experience. The modern internet is not downloading webpages; it's video calls, cloud sync, and backups, and all of that leans on upstream. IYKYK.
Building A Hybrid Home Network
Having a fast 5G link is only half the story. You also need equipment that can treat it like a real WAN and do things intelligently with it. This is where the gear comes in. In my setup, I use a Ubiquiti Dream Machine (UDM) as the core router/firewall, with the 5G gateway as a secondary WAN. The UDM is an enterprise(ish)-grade appliance in a rack-mount form factor. Multi-WAN support (up to 8 links) is the real reason I bought it.
In practice, I have two internet links: my primary wired ISP on WAN1, and a 5G gateway on WAN2 as a backup. In my case that backup is T-Mobile, but the pattern is identical if your local RF reality favors Verizon or AT&T Internet Air. This gives true redundancy. If fiber or cable fails, the cellular link takes over based on pre-defined or custom SLAs.
Configuring this is fairly straightforward. Ubiquiti’s software lets you choose Failover mode: designate one link as primary and keep the other on standby. In failover mode, the UDM continuously checks upstream health. If WAN1 dies, traffic shifts to WAN2. When the primary returns, it switches back. For most outbound traffic you don’t notice the swap, and that’s the point. The network is supposed to absorb the event and keep on working.
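To make the behavior concrete, here's a minimal sketch of the failover state machine a multi-WAN router runs. The WAN names and thresholds are illustrative assumptions, not Ubiquiti's actual implementation; the real appliance probes upstream targets over each link, but the decision logic looks roughly like this:

```python
# Sketch of multi-WAN failover logic. Thresholds are hypothetical:
# a few consecutive failed probes trips failover, and a longer run
# of good probes is required before failing back (hysteresis, so a
# flapping primary doesn't bounce traffic back and forth).

FAIL_THRESHOLD = 3      # consecutive failed probes before failing over
RECOVER_THRESHOLD = 5   # consecutive good probes before failing back

class FailoverWan:
    def __init__(self, primary="WAN1", backup="WAN2"):
        self.primary, self.backup = primary, backup
        self.active = primary
        self.fails = 0
        self.recoveries = 0

    def probe(self, primary_up: bool) -> str:
        """Feed in one health-check result for the primary link;
        return the link outbound traffic should currently use."""
        if self.active == self.primary:
            self.fails = self.fails + 1 if not primary_up else 0
            if self.fails >= FAIL_THRESHOLD:
                self.active = self.backup      # shift traffic to 5G
                self.recoveries = 0
        else:
            self.recoveries = self.recoveries + 1 if primary_up else 0
            if self.recoveries >= RECOVER_THRESHOLD:
                self.active = self.primary     # fail back to wired
                self.fails = 0
        return self.active
```

The hysteresis is the part worth noticing: failing over fast but failing back slow is what makes the swap invisible to users instead of a source of its own flapping.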
You can also do load balancing, splitting flows between links. This introduces complexity around sticky sessions, asymmetric routing, and the realities of how modern services identify clients across paths. For a home stack, keeping the design simple tends to win. WAN1 is preferred and WAN2 is a true backup. The success metric is boring continuity.
The network engineering coolness meter is intensifying.
A Third Path: Low Earth Orbit As A Real WAN
While 5G FWA is the current champion for easy failover in a lot of places, there’s now a third access layer that belongs in the same conversation. That’s LEO (Low Earth Orbit).
Starlink has already proven the model in the field, and Amazon’s Leo is the next major constellation to watch as it comes online. What matters here is not satellite internet as a category, because most people associate that phrase with geostationary orbit, massive latency, and a product that could move data but could not support real-time collaboration. LEO changes the physics of the path by collapsing the orbital distance, which pulls the latency profile down into the same general class as many terrestrial links.
From a home-network design standpoint, LEO is interesting because it is not just another ISP choice. It’s a physically diverse upstream that does not share the same last-mile failure modes as your local cable ISP, your fiber route, or your 5G serving cell sector. It still comes with quirks that look familiar if you’ve lived in carrier land, including CGNAT behaviors and path variability, and it still has weather and visibility considerations. But as a failover path, it is a clean separation of risk. Fiber can be your pipe, 5G can be your local RF backstop, and LEO becomes the “this region is having a day and I want an entirely different physical story” layer. When paired with a multi-WAN router or SD-WAN appliance, the dish becomes yet another mode of transport.
The Quirks That Still Matter
There are trade-offs and edge cases that show up the moment you treat these links like "real" WANs.
One big one is NAT. This is not a post on what NAT is. Most 5G home gateways do Carrier-Grade NAT by default. That means the router the carrier gives you does NAT on the WAN side, and you never get a real public IP. In my case, the T-Mobile gateway handed a 192.168.x.x address to the UDM. The UDM’s internet IP showed up as private, because the provider is sharing a single public IP among many homes. The upshot is that outbound works fine, VPN works fine, but inbound reachability and clean port forwarding are not really a thing unless you build a tunnel out. It’s not a dealbreaker for a failover link, but it’s part of the reality of how these products are operated at scale.
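You can spot this condition from the router side by checking whether the address on the WAN port is even eligible to be public. A quick sketch using Python's standard `ipaddress` module (the sample addresses are illustrative):

```python
import ipaddress

# RFC 6598 shared address space, reserved specifically for
# carrier-grade NAT deployments.
CGNAT_SPACE = ipaddress.ip_network("100.64.0.0/10")

def behind_nat(wan_ip: str) -> bool:
    """True if the router's WAN-side address cannot be a real public
    IP, meaning inbound reachability will need a tunnel out."""
    ip = ipaddress.ip_address(wan_ip)
    return ip.is_private or ip in CGNAT_SPACE

# A 192.168.x.x handed down by a carrier gateway flags as NAT'd;
# so does an address out of the RFC 6598 CGNAT range.
behind_nat("192.168.12.100")  # -> True
behind_nat("100.64.0.1")      # -> True
behind_nat("8.8.8.8")         # -> False
```

Seeing a private or RFC 6598 address on your WAN interface is the tell: outbound traffic will work, but port forwarding is off the table without building a tunnel to something with a public address.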
DNS handling is another detail. During failover, you want name resolution to keep working without drama. In practice, setting the router to use reliable public resolvers and using sane health checks keeps the transition clean. The point is not which resolver is best, it’s that your internal clients should not have to relearn the world when the path changes.
Latency and jitter deserve a note. Even with 5G, you'll usually see ping times in the 20–40 ms range compared to ~15–25 ms on a good wired link. That's generally fine for video calls, though gamers can be more sensitive. More important is that jitter behavior differs. DSL was slow but stable. 5G can be fast with occasional variability depending on tower load, RF conditions, and time of day. LEO can look excellent and then get odd for a few minutes because the system is literally orchestrating moving endpoints. Modern collaboration apps buffer, adapt bitrate, and recover from loss far better than older systems, so most of this gets absorbed. What users notice most is upstream capacity, not whether their ping is 18 ms or 34 ms. Assuming you're not playing Call of Duty at work.
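The latency-versus-jitter distinction is easy to quantify. Here's a sketch that summarizes a link from raw ping samples, using the mean absolute delta between consecutive RTTs as a simplified stand-in for RFC 3550 interarrival jitter (the sample values below are made up for illustration):

```python
import statistics

def link_stats(rtts_ms):
    """Summarize a link from a list of ping RTTs in milliseconds.
    'Jitter' here is the mean absolute difference between consecutive
    samples, which tracks call quality better than average latency."""
    deltas = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return {
        "avg_ms": statistics.fmean(rtts_ms),
        "jitter_ms": statistics.fmean(deltas) if deltas else 0.0,
    }

# Two links can have similar averages and feel completely different:
wired = link_stats([18, 19, 18, 20, 19, 18])  # steady ~19 ms
fwa   = link_stats([12, 35, 14, 33, 11, 30])  # bouncy ~22 ms
```

Run through those samples, the wired link works out to roughly 1 ms of jitter while the 5G-style trace lands around 20 ms, despite averages only a few milliseconds apart. That spread, not the average, is what a real-time call feels.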
DSL vs 5G vs LEO: Head-To-Head In Real Terms
Let’s compare the legacy DSL home link to a modern 5G FWA link in concrete terms. A typical VDSL or ADSL2+ line might underachieve at 5–25 Mbps down, ~0.5–1 Mbps up, especially if you’re far from the telco’s CO. As you approach the CO or use vectoring, VDSL can hit 50–100 Mbps down, but only if you’re very close. In practice, most rural or suburban DSL users are in the low tens of Mbps range.
By contrast, 5G FWA in urban and suburban areas routinely gives 50–300+ Mbps down with upstream that can actually support modern collaboration and cloud sync. The performance envelope overlaps wired broadband enough that it stops being a compromise link and starts being a legitimate primary in some footprints. Whether that’s T-Mobile, Verizon, or AT&T Internet Air tends to be a local RF and backhaul story more than a logo story.
LEO slots differently. It's not about beating fiber. It's about beating the absence of options and beating the assumption that geography defines capability. Latency is now often in a range where voice and video behave like normal tools, not experiments. Throughput is trending upward as constellations densify and terminals evolve. LEO is no longer a niche player; it remains valuable even when you have decent terrestrial service.
From a systems view, the story is not wireless replaced wired. The story is that access is now composable, and the router is back in the role of a decision engine instead of a simple NAT box.
Planning Impact
With all these options, the question becomes practical: how do you choose what to deploy and which carrier is likely to behave well where you live?
CoverageMap.com, founded by Stetson Doggett and Trevor Mann, didn't come out of nowhere. It grew out of years Stetson spent living in the space between carrier marketing and real-world behavior. Before CoverageMap, he built BestPhonePlans.net with a very specific mission: help people stop overpaying for cell service. Not as a weekend side project, but out of a genuine passion for helping people. What plan fits how you actually use your phone. What networks perform where you actually live. What you're really buying when you're handed a coverage map and a price sheet.
Over time, that work exposed a deeper problem. Saving people money was useful, but it wasn’t sufficient. The bigger question wasn’t just “what plan is cheapest,” it was “what service actually works here.” Carrier coverage maps weren’t answering that. Bars weren’t answering that. Even official availability tools weren’t answering that. The missing layer was observed performance.
CoverageMap is that layer.
Instead of starting from what carriers claim, it starts from what devices experience. Crowdsourced speed tests, uploaded by real users, aggregated and normalized across Verizon, T-Mobile, AT&T, and others. The result isn’t a marketing graphic. It’s a living performance database. Downstream, upstream, latency, density. What actually happens at the street level, by real people.
That’s why CoverageMap becomes a planning tool instead of a curiosity site. Before you order a 5G gateway, before you decide which carrier to anchor a home network to, before you mount an antenna and start tuning RF, you can see how each network behaves where you are. Not whether there is “coverage,” but whether there is usable upstream, consistent latency, and enough headroom to support real work.
What makes this interesting is that it aligns directly with Stetson's original motivation. It isn't just about saving people money. It's about matching people to the networks that actually fit their environment and their workload. The economics matter, but the system behavior matters more. CoverageMap closes that loop. It turns community data into a decision-making layer for the access network.
Community Impact
On the community side, the impact of this shift shows up most clearly in the people who are actually building networks where there weren’t any to begin with. Alan Fitzpatrick and the team at Open Broadband in North Carolina are a good example of what that looks like in practice. Alan didn’t approach rural internet with a single technology or a single vendor. He approached it like a network engineer. Fiber where fiber makes sense. Fixed wireless where terrain, cost, or time make trenching unrealistic. Backhaul first, last-mile second. Design the core, then choose the access medium that fits the geography.
Open Broadband has quietly been stitching connectivity across parts of the Carolinas using whatever transport actually works for a given community. Sometimes that’s fiber. Sometimes it’s licensed or unlicensed fixed wireless. Sometimes it’s a hybrid of both. The common thread is that it isn’t one-size-fits-all and it isn’t built around what’s easiest to sell. It’s built around what’s required to deliver fast, reliable, and usable internet into places that lived for years at the end of long copper lines.
That approach matters because the outcome isn’t a nicer speed test. It’s that towns, schools, clinics, remote workers, and small businesses stop being shaped by what they can’t do online. They stop consuming the internet from a spoon, and start consuming through a fire hydrant.
The Internet Tier List of 2026
After building and observing these stacks, an informal hierarchy starts to emerge. Not as a marketing chart, but as an engineering reality based on latency consistency, capacity ceilings, and operational behavior.
At the top still sits fiber to the home. Fiber remains the gold standard because physics and ownership align. You control the last mile. The medium is immune to RF noise. Latency is low and stable. Throughput scales and symmetry is the standard. When fiber is available, it remains the most logical option to anchor the network.
Next is coaxial cable. DOCSIS has carried an enormous amount of modern internet on its back. Well run cable networks deliver high downstream capacity, acceptable upstream, and generally stable latency. It is still a shared medium and it still has congestion windows, but it occupies a clear second tier in most markets.
Then comes 5G mid-band. Mid-band 5G is the first wireless last-mile technology that reliably overlaps the performance window of wired broadband. Latency in a workable range, upstream that supports modern collaboration and live work, and deployment that does not require permission slips and trenching.
After that, LEO. LEO no longer sits in its own category. It sits behind strong terrestrial broadband, but above the absence of options, because it offers comparable latency behavior for collaboration workloads and a kind of path diversity wired providers can't offer.
So the working tier list that emerges in 2026 looks like this:
- Fiber to the home
- Coaxial
- 5G mid-band FWA
- LEO constellations
What’s notable is not the order. It’s how close the bottom two have moved to the top in terms of practical usability for real work. Ten years ago, wireless didn’t belong anywhere near a serious network design conversation. Today, two independent wireless fabrics, terrestrial and orbital, sit directly under wired broadband in performance and directly above it in reach.
Once the access layer becomes composable, connectivity turns back into a design problem instead of a line item.
That's the layer where network engineering gets fun again, because the work is no longer limited to "what's available in my area?" It expands to "what can I build for ultimate resiliency?"
Design is how we make network engineering cool again.