Gabriel Spence

December 21, 2025

FTA: AWS Broadcast Control - Part 2

This post is part of a From The Archives (FTA) series I am doing to move my posts over from LinkedIn. This was originally posted on December 2, 2023.

In the first post of this series, I briefly discussed where the AWS Broadcast Control idea came from. It started with a goal to reduce costs, increase production values, scale our program, and build fixed infrastructure that we could use for every event. 

We are primarily motivated by how we can delight customers and help our company communicate via live video.

This post is a bit longer, but I'll aim to hit a few high points. If I write a Part 3, it'll be about how successful our first test of the facility was. Part 4 will cover our roadmap and how I'm thinking about what's next to scale our operations and delight customers. 

Planning 

In March 2022, I sketched out the first architectural test fit of what rooms our team would need in our remote broadcast facility. We needed at least one or two Master Control Rooms (MCR). I was thinking big and wanted to make sure we built the right amount of space and technical capacity to handle our biggest event, plus another event of the same size, at the same time. 

I also sketched out a data center, a warehouse/workroom to support our global field operation, and a small studio for American Sign Language (ASL). 

Accessibility is important to us, and we provide ASL as an opt-in program feed in many of our event broadcasts. 

The final footprint of AWS Broadcast Control is close to the original sketch, but thanks to skilled architects, the facility is far better than the collection of rooms I first drew up. They added connective spaces that make it feel like a professional facility, where staff can spend long hours delivering events. The final footprint includes:

  • Master Control Room (MCR)
  • Production Control Room (PCR)
  • Server room
  • Green-screen studio
  • Warehouse/workroom
  • Mechanical plant
  • Office space for our team

Technology 

Our team understands the advantages of Live Cloud Production (LCP) and running broadcast workloads in the cloud. There are a lot of positives: pay-as-you-go pricing, scalability, elasticity, and less on-premises hardware. We don't treat going to the cloud for live production as a binary choice. Instead, we took a flexible, hybrid approach, running some workloads in the cloud and others on-premises, wherever each made the most sense. Workload placement can change as business or technical requirements change. 

We've gotten the question a dozen times: "You are AWS; why didn't you build in the cloud?" And the answer is "we did". 

To choose the right approach to technology, we first looked at what we wanted the end customer experience to be. Not just for viewers watching re:Invent, but also for the internal stakeholders who relied on our expertise to deliver livestreams. Our most pressing business requirement was to deliver results from day one of operations. 

The technology we adopted had to be rock-solid and highly available to deliver event broadcasts. Also, our team, which had grown since 2021, had to have confidence in operating, maintaining, and troubleshooting the systems, sometimes under the pressure of a live event. 

Thankfully, our systems integrator, ZTransform, helped us work through workload placement questions, understood our workflow in-depth, and helped determine the right mix of products. Based on their expert guidance, we selected Ross Ultrix FR12, MC1, Ultrix Carbonite, and OpenGear for our core routing/switching technology. 

Our workflow is modern and hyper-converged with baseband SDI at its core. 

Encoding

We push contribution feeds from event locations around the globe. Some venues have high bandwidth costs or limited connectivity, while at others our AWS team deploys a bespoke network. We need our contribution protocol to work over a variety of networks, including the AWS backbone, the open internet, and cellular bonded connections. 

Adopting Secure Reliable Transport (SRT) as our standard transport protocol made sense. We have a fleet of Haivision Makito X4s and Videon Edgecasters pushing SRT from event venues to AWS MediaConnect, then on to a VPC output over AWS Direct Connect to an SRT gateway at Broadcast Control, which routes feeds to Haivision decoders. We can also send SRT directly through AWS MediaConnect, bypassing broadcast control for handoff to a distribution partner, or route it back to our field teams for video return. 
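
To make that chain a little more concrete, here is a minimal sketch of standing up a MediaConnect flow with an SRT listener source and pointing an output toward an SRT gateway, using boto3. The flow name, ports, addresses, and CIDR range are placeholders, not our production configuration.

```python
# Minimal sketch: a MediaConnect flow that ingests SRT from a venue encoder
# and forwards it toward broadcast control. Names, ports, and CIDRs below
# are placeholders, not production values.
import boto3

mediaconnect = boto3.client("mediaconnect", region_name="us-west-2")

# Create a flow with an SRT listener source the venue encoder can push to.
flow = mediaconnect.create_flow(
    Name="venue-contribution-primary",
    Source={
        "Name": "venue-srt-source",
        "Protocol": "srt-listener",
        "IngestPort": 5000,
        "WhitelistCidr": "203.0.113.0/24",  # venue egress range (example)
    },
)
flow_arn = flow["Flow"]["FlowArn"]

# Add an SRT output pointed at the gateway sitting in front of the decoders.
mediaconnect.add_flow_outputs(
    FlowArn=flow_arn,
    Outputs=[
        {
            "Name": "to-broadcast-control",
            "Protocol": "srt-caller",
            "Destination": "10.0.0.50",  # SRT gateway address (example)
            "Port": 6000,
        }
    ],
)

mediaconnect.start_flow(FlowArn=flow_arn)
```

In a setup like this, you would typically run one flow per feed and per redundancy path, and tear the flows down once the event wraps.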

We also rely on cellular bonded solutions for contribution to provide a backup if our redundant (A/B) terrestrial networks fail. 

We use our fleet of Elemental Links in a managed encode service that provides encoding support to mobile production teams and AWS Studios who need to stream directly to social media platforms via MediaLive. Since Elemental Links are plug-and-play, the managed encode service is self-service for the teams using it, and the devices' small size makes them ideal for flypacks and carts. 
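
For anyone curious what the MediaLive side of that looks like, here is a hedged sketch of registering a claimed Link device as a MediaLive input with boto3. The device ID and input name are placeholders, and the channel itself (encode settings, RTMP output group for the social platform) is omitted for brevity.

```python
# Sketch: registering an Elemental Link device as a MediaLive input.
# The device ID and input name are placeholders; the full channel
# configuration (encode settings, RTMP output group, etc.) is omitted.
import boto3

medialive = boto3.client("medialive", region_name="us-west-2")

# List the Link devices already claimed into the account.
for device in medialive.list_input_devices()["InputDevices"]:
    print(device["Id"], device["Name"], device["ConnectionState"])

# Create an input backed by a specific Link device; a MediaLive channel
# can then use this input and push RTMP to a social platform.
link_input = medialive.create_input(
    Name="flypack-link-01",
    Type="INPUT_DEVICE",
    InputDevices=[{"Id": "hd-0123456789abcdef"}],  # placeholder device ID
)
print("Created input:", link_input["Input"]["Id"])
```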

Cloud

Our physical facility is built around a baseband SDI workflow, but many of our workflows and workloads live in the cloud. I will mention a few. 

The first is connectivity via AWS Direct Connect. Our facility has three ISPs (with path diversity) and four circuits, each with 10 Gbps of bandwidth. Two of the circuits are AWS Direct Connect, which connects our on-premises network to everything running in our VPC. We host several essential services in the VPC. 

We host our primary video storage solution in AWS. We have 4,000 hours of local DVR capacity at AWS Broadcast Control, and with most events generating 1-6 TB of video files, we need a highly available, expandable, managed storage solution that doesn't require the care and feeding of an on-premises system. I'm also working on a transcoding workflow running on EC2 that automatically preps files on the share for handoff to post-production. 
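
As a flavor of that transcode pass, here is a simplified sketch of the kind of script that could run on the EC2 instance: it walks a watch folder on the share and uses ffmpeg to generate mezzanine files for post-production. The paths, codec choices, and naming are illustrative assumptions, not the actual pipeline.

```python
# Simplified sketch of a transcode pass that could run on an EC2 instance:
# walk a watch folder on shared storage and create mezzanine files for
# post-production. Paths, codec choices, and naming are illustrative only.
import subprocess
from pathlib import Path

WATCH_DIR = Path("/mnt/share/incoming")    # placeholder mount point
OUTPUT_DIR = Path("/mnt/share/for-post")   # placeholder handoff folder

def transcode(src: Path, dst: Path) -> None:
    """Transcode a source recording into an editing-friendly mezzanine file."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", str(src),
            "-c:v", "prores_ks", "-profile:v", "3",  # ProRes 422 HQ
            "-c:a", "pcm_s16le",
            str(dst),
        ],
        check=True,
    )

def main() -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for src in sorted(WATCH_DIR.glob("*.mp4")):
        dst = OUTPUT_DIR / (src.stem + ".mov")
        if not dst.exists():  # skip files already handed off
            transcode(src, dst)

if __name__ == "__main__":
    main()
```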

A distribution partner manages our direct-to-customer live-streaming workflow. Their cloud-native platform is built on AWS services. The workflow includes MediaConnect, MediaLive, and MediaPackage with CloudFront as our CDN. 
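
That platform belongs to our partner, not us, but for a sense of the moving parts, here is a rough sketch of the MediaPackage piece of such a pipeline in boto3: a channel to receive the MediaLive output and an HLS origin endpoint for CloudFront to pull from. The IDs and settings are placeholders, not their configuration.

```python
# Rough sketch of the MediaPackage portion of a MediaLive -> MediaPackage ->
# CloudFront pipeline. IDs and settings are placeholders; the MediaLive
# channel and CloudFront distribution are omitted.
import boto3

mediapackage = boto3.client("mediapackage", region_name="us-west-2")

# A MediaPackage channel receives the MediaLive output...
channel = mediapackage.create_channel(Id="event-live-channel")

# ...and an HLS origin endpoint gives CloudFront an origin to pull from.
endpoint = mediapackage.create_origin_endpoint(
    ChannelId=channel["Id"],
    Id="event-live-hls",
    HlsPackage={"SegmentDurationSeconds": 6},
)
print("Use this as the CloudFront origin:", endpoint["Url"])
```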

So when someone asks us for the 13th time, "Why didn't you build in the cloud?" we wave and say, "Hey, we're over here, can't you see us?". 

Acknowledgments 

I must thank my manager, Ted Frank, who gave me the space and support to lead this project over two years. A manager isn't always synonymous with a leader, but in Ted's case, he's both. 

A big thank you to the rest of the team: Ryan Oliver, Zachary Brisbane, and Taylor Wilkins, who were crucial advisors and contributors to this project. They also kept our global field and technical operations running exceptionally well throughout 2022 and 2023, for the entire duration of the build. 

Until next time. 

About Gabriel Spence

For a while, I managed the global streaming program for AWS Marketing Events, where I built a team and $6M in network and streaming infrastructure. I was briefly a Principal Engineer elsewhere in the Jeff Bezos universe, and now I work for a startup called Leo with a mission to provide high-speed, low-latency satellite internet worldwide. Views my own.