LutzTalk

Under the Hood: Building IsItSnowingThere.com

8 min read

How This Weird Project Started

Living in the southern U.S., I treat snow as basically a novelty. If it happens, people talk about it for a week and the grocery store turns into a war zone. Most winters, if I want to see snow, I have to go looking for it. For years that meant the same routine anytime a decent winter storm hit somewhere: I’d start pulling up webcams from ski resorts, traffic departments, college campuses, and random towns I’d never heard of, all trying to answer one question. Where is it actually snowing right now?

After doing that manually for way too long, it finally hit me that this should be automated. Not another weather site or a forecast page. I didn’t want a “snow expected in 12 hours” map. I wanted a live view of snow happening in the world that I could click on and look at. That idea eventually turned into IsItSnowingThere.com.

What This Thing Actually Does

On the surface, the site is simple. It’s a world map with live webcams, and when it’s actively snowing somewhere, it lights up. You click a point and you’re immediately looking at a real camera feed, which means you’re not relying on icons, forecasts, or summaries. If it says it’s snowing, you can actually watch it.

What makes the project interesting isn’t the map. The hard part is deciding whether something deserves to light up in the first place.


Why “Is It Snowing?” Is a Hard Problem

On paper, this sounds solved. Call a weather API, check if it says “snow,” and move on. In reality, it’s messy. Weather stations aren’t always near the camera. Some are miles away. Some sit in valleys while the camera is halfway up a mountain. Some just have no idea what's going on. Anyone who’s watched rain fall outside while their phone confidently showed snow knows how unreliable a single source can be.

The harder part isn’t collecting data. It’s deciding what to believe.

Snow isn’t binary. There’s heavy snow, flurries, mixed precipitation, blowing snow, and snow on the ground with nothing falling. Cameras make it worse. Night exposure washes everything out. Fog looks like snowfall. Sun glare turns streets white. A white building can fool a system instantly. So the backend isn’t really detecting snow. It’s constantly deciding whether it believes it’s snowing enough to tell someone.

Every location becomes a judgment call. The system has to weigh conflicting signals and decide what actually matters in that moment. That decision layer is the product. If people stop trusting what the map is showing them, nothing else about the site really matters.

How Snow Detection Works

Every location is evaluated from a few angles. The system pulls live conditions from more than one weather provider and compares them instead of trusting just one. Temperature, reported precipitation, and condition text are checked against each other. If the sources don’t agree, confidence drops.
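
A minimal sketch of that cross-checking idea in TypeScript. The provider fields, snow keywords, and temperature cutoff here are all assumptions for illustration, not the site’s real logic:

```typescript
// Cross-check multiple weather providers before trusting a "snow" call.
// Fields and thresholds are illustrative, not the site's real API shapes.

interface ProviderReading {
  tempC: number;      // reported air temperature
  precipMmHr: number; // reported precipitation rate
  condition: string;  // free-text condition, e.g. "light snow"
}

const SNOW_WORDS = ["snow", "flurries", "sleet"];

function saysSnow(r: ProviderReading): boolean {
  const text = r.condition.toLowerCase();
  return SNOW_WORDS.some((w) => text.includes(w)) && r.tempC <= 3;
}

// Agreement between providers raises confidence; disagreement lowers it.
function weatherAgreement(readings: ProviderReading[]): number {
  const votes = readings.filter(saysSnow).length;
  if (votes === 0) return 0;
  return votes / readings.length; // 1.0 = all agree, 0.5 = split, etc.
}
```

The point isn’t the exact math; it’s that no single provider gets to decide on its own.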

At the same time, the system looks at the camera feed. It pulls frames from the live stream and analyzes what the image actually looks like. How bright it is. How much of it is light pixels. What the color distribution looks like. It’s not trying to solve computer vision. It’s asking practical questions. If the weather says snow but the camera shows blue sky and dry pavement, that’s not snow. If the weather says clear but the frame looks like a white wall, the camera wins.
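
Those frame checks might look something like this toy version, which computes simple statistics over grayscale pixel values. The cutoffs are invented for the sketch:

```typescript
// Practical frame statistics: how bright is the image, and how much of it
// is light pixels? Thresholds here are made up for illustration.

interface FrameStats {
  meanBrightness: number; // average pixel value, 0-255
  lightFraction: number;  // share of pixels brighter than a cutoff
}

function frameStats(pixels: number[], lightCutoff = 200): FrameStats {
  const sum = pixels.reduce((s, p) => s + p, 0);
  const light = pixels.filter((p) => p > lightCutoff).length;
  return {
    meanBrightness: sum / pixels.length,
    lightFraction: light / pixels.length,
  };
}

// "Does the frame look snowy?" - bright overall and dominated by light pixels.
function frameLooksSnowy(stats: FrameStats): boolean {
  return stats.meanBrightness > 150 && stats.lightFraction > 0.5;
}
```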

On top of that are hard stops that prevent obvious nonsense. If it’s well above freezing and sunny, the system won’t call it snow. If the current condition is rain, it treats it as rain. Snow that fell hours ago doesn’t count. The system is intentionally conservative, because being quiet is always better than being confidently wrong.
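
As a sketch, the hard stops reduce to a veto function that runs before anything else. The field names and cutoffs are hypothetical:

```typescript
// "Hard stop" sanity checks: conditions under which the system refuses to
// call snow, no matter what else the data says. Values are illustrative.

interface Observation {
  tempC: number;
  condition: string;                // e.g. "rain", "snow", "clear"
  sunny: boolean;
  hoursSinceSnowfall: number | null; // null if snow is falling right now
}

// Returns a reason string when the observation cannot be snow, else null.
function hardStop(obs: Observation): string | null {
  if (obs.tempC > 5 && obs.sunny) return "well above freezing and sunny";
  if (obs.condition.toLowerCase().includes("rain")) return "current condition is rain";
  if (obs.hoursSinceSnowfall !== null && obs.hoursSinceSnowfall >= 1)
    return "snowfall ended hours ago";
  return null; // no veto; let the confidence scoring decide
}
```

Returning a reason instead of a bare boolean is what keeps this debuggable: when a location refuses to light up, there’s a string explaining why.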

All of this feeds into a confidence score. High confidence means the data lines up and the camera supports it. Lower confidence means something doesn’t sit right. Only high-confidence results are allowed to light up the map or trigger alerts, and that gate matters more than any individual detection rule.
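
A toy version of that combination step, with invented weights and an invented alert threshold, just to show the shape of it:

```typescript
// Combine the signals into one score. Weights and the 0.8 threshold are
// made up for this sketch, not the site's real tuning.

function snowConfidence(
  weatherAgreement: number,  // 0-1: how strongly providers agree it's snowing
  cameraLooksSnowy: boolean, // result of the frame analysis
  hardStopHit: boolean,      // any veto rule fired
): number {
  if (hardStopHit) return 0; // vetoes override everything
  const camera = cameraLooksSnowy ? 1 : 0;
  return 0.5 * weatherAgreement + 0.5 * camera;
}

const ALERT_THRESHOLD = 0.8; // only scores above this light up the map
```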


The Part Nobody Sees: Automation

Behind all of this is a large automation layer. New cameras show up constantly. Old ones break constantly. Streams disappear. URLs expire. Providers change how they publish video. YouTube kills broadcasts. The internet is not a stable data source.

So there are background jobs running all day that search for new snow cams, validate existing ones, mark broken feeds offline, try to repair them, and re-check conditions across the map. Some jobs focus on finding new sources. Some exist only to clean up decay. Some track change over time so the system can tell when snow starts or stops instead of just taking snapshots.

There are jobs whose only responsibility is answering questions like whether a stream is still live, whether a camera recovered, how long something has been broken, and whether a replacement can be found.

When a feed breaks, it’s pulled from the map immediately and the system shifts into recovery mode. It starts refreshing URLs, revalidating sources, and searching for replacements. If nothing works, it keeps trying in the background. When something does work again, the feed comes back automatically. The site you see is really just the front edge of a system that spends most of its time maintaining itself.
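
That lifecycle can be sketched as a tiny state machine. The states and the retry cutoff here are simplifications I’m assuming, not the real implementation:

```typescript
// Feed lifecycle sketch: live feeds that fail a health check are pulled
// from the map and retried until they recover or give up.

type FeedState = "live" | "recovering" | "offline";

interface Feed {
  state: FeedState;
  failedChecks: number;
}

const MAX_RECOVERY_ATTEMPTS = 10; // made-up cutoff

function applyHealthCheck(feed: Feed, streamOk: boolean): Feed {
  if (streamOk) return { state: "live", failedChecks: 0 }; // recovery is automatic
  const failedChecks = feed.failedChecks + 1;
  return {
    state: failedChecks >= MAX_RECOVERY_ATTEMPTS ? "offline" : "recovering",
    failedChecks,
  };
}
```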

How It’s Built

From a build perspective, this ended up being a modern web app with a TON of functions that constantly validate data.

The front end is a Next.js app using the App Router with React and TypeScript. The map is built on Leaflet, which gives full control over markers, clustering, and real-time state changes as feeds update. Most of the UI runs off a small set of API endpoints that return feed data, snow state, and stats, which keeps the front end predictable and easy to cache.
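
For illustration, here’s one hypothetical way a Leaflet layer could translate detection state into marker styling; the option values and the 0.8 cutoff are made up, and the caller would pass the result to something like `L.circleMarker(latlng, markerOptions(status))`:

```typescript
// Map detection state to circle-marker style options. Colors, radii, and
// the confidence cutoff are invented for this sketch.

interface FeedStatus {
  online: boolean;
  snowing: boolean;
  confidence: number; // 0-1
}

function markerOptions(status: FeedStatus) {
  if (!status.online) return { radius: 4, color: "#999", fillOpacity: 0.3 };
  if (status.snowing && status.confidence >= 0.8)
    return { radius: 8, color: "#4da6ff", fillOpacity: 0.9 }; // "lit up"
  return { radius: 5, color: "#666", fillOpacity: 0.6 };
}
```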

The backend lives inside Next.js API routes and runs serverless on Vercel. That layer handles feed queries, weather aggregation, snow evaluation, user actions, and submission workflows. Postgres is the source of truth for everything: feeds, health state, detection results, favorites, notification preferences, and historical status. There’s a JSON fallback for emergencies, but I try not to break things. (I have, and likely will again.)

A big part of making this scale is aggressive caching and batching. Weather calls are cached for hours per location, and nearby feeds are grouped so the same weather data can be reused across multiple cameras. That alone cuts external API calls down dramatically. API responses to the browser are edge cached, which keeps the UI feeling live without hammering serverless functions.
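
One common way to implement that grouping is to snap coordinates to a coarse grid and use the cell as the cache key. This is a sketch with an assumed cell size, not the site’s actual batching logic:

```typescript
// Snap camera coordinates to a coarse grid cell so nearby feeds share one
// cached weather lookup. The 0.25-degree cell size is an assumption.

function weatherCacheKey(lat: number, lon: number, cellDeg = 0.25): string {
  const snap = (v: number) => Math.round(v / cellDeg) * cellDeg;
  return `${snap(lat).toFixed(2)},${snap(lon).toFixed(2)}`;
}

// Two cameras a few hundred meters apart land in the same cell, so the
// second lookup is a cache hit instead of another external API call.
```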

All heavy work runs outside the request path. GitHub Actions handles scheduled jobs like searching for new streams, validating feeds, checking health, repairing broken cameras, and monitoring favorites for alert conditions. Those jobs write results back into the database, and the site just reads the latest known state. That separation keeps the product responsive even as the dataset grows.
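
A scheduled GitHub Actions workflow for one of those jobs might look roughly like this; the workflow name, script name, schedule, and secret are all hypothetical, but the shape (cron trigger, run the job, write back to the database) matches the setup described above:

```yaml
# Hypothetical workflow sketch - names and schedule are illustrative.
name: validate-feeds
on:
  schedule:
    - cron: "*/30 * * * *"   # every 30 minutes
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run jobs:validate-feeds   # writes results back to Postgres
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
```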

On top of that is a full user layer. Authentication is handled with NextAuth and Google OAuth. Users can save favorites, manage alert preferences per location, register devices for push notifications, and report broken feeds or bad detections.

Why Would I Need An Account?

You don’t need an account to use the site, but signing in unlocks personalization. You can save locations, compare multiple cameras, and get notifications when snow starts or stops somewhere you care about. Those alerts only fire when confidence is high, because false notifications are the fastest way to lose trust.

Conclusion and What’s Next

Right now, IsItSnowingThere is built around rules and validation layers. That approach works well because it’s explainable and debuggable. When something looks wrong, I can usually figure out why.

But it’s also clear where this starts to strain. Night scenes. Backlit snow. Fog. Mixed precipitation. Blowing snow. Cameras with strange color profiles. Situations where rules start stacking up and still don’t feel right.

That’s what I’m actively working on for 2.0.

The next version of the system moves away from a purely algorithmic approach and toward a model-driven one. Instead of only checking fixed rules, frames extracted from live feeds are being used to train and continuously refine a model that learns what “snowing” actually looks like across different environments.

The long-term goal isn’t to replace weather data. It’s to let visual understanding carry more of the decision weight, with weather becoming context instead of authority. Over time, as more labeled frames are collected, the system should get better at handling the cases rules struggle with.

The current platform already gives me something I originally set out to build: a live, global, constantly updating dataset of weather and video. 2.0 is about turning that stream into something that can learn instead of something that only follows instructions.

If you’re a snow lover or just a weather nerd, I hope you enjoy using the site as much as I enjoyed building it.

Thank you Cursor.

