HLS

Shipping video should be as easy as shipping a model.

Almost a year ago I launched Phlex on Rails with the videos served as MP4 from a private S3 bucket. Customers paid for the course, clicked play, watched the spinner, and left. People don’t sit through buffering on something they paid for. They just leave.

So I made shipping video as easy as shipping a model.

# app/videos/course_video.rb
class CourseVideo < ApplicationVideo
  rendition :high,   scale: 1.0
  rendition :medium, scale: 0.5
  rendition :small,  scale: 0.25

  poster :hero,      scale: 1.0
  poster :thumbnail, width: 320, height: 180
end

That’s the whole declaration. From there, hls encodes the renditions, generates the posters, uploads them to your bucket, and signs every segment URL in the playlists. A controller renders the result in three lines.

gem "hls"

0.2.0 just shipped. Source at github.com/beautifulruby/hls.

Why this used to be a slog

The fix for buffering is HLS. Players download small chunks instead of one big file, the bitrate steps down when the connection slows and back up when it speeds up, and the format runs anywhere a <video> element does (natively in Safari and iOS, via hls.js everywhere else).

Doing HLS on a private bucket is harder than the public version, and the differences aren’t well documented. The m3u8 master playlist references variant playlists. Each variant references segment files. The player fetches as it goes. On a public bucket, every URI is a public URL and you’re done.
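The layered shape is easy to see with a concrete example. These are minimal hand-written playlists, not output from the gem; the file names and bandwidth numbers are illustrative.

```ruby
# A minimal master playlist and one variant playlist, as the player
# sees them. Every non-tag line is a URI the player fetches next.
master = <<~M3U8
  #EXTM3U
  #EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1920x1080
  high.m3u8
  #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=960x540
  medium.m3u8
M3U8

variant = <<~M3U8
  #EXTM3U
  #EXT-X-VERSION:3
  #EXT-X-TARGETDURATION:4
  #EXTINF:4.0,
  high_000.ts
  #EXTINF:4.0,
  high_001.ts
  #EXT-X-ENDLIST
M3U8

# Pull out the URIs: tag lines start with "#", everything else is a fetch.
uris = ->(playlist) {
  playlist.lines.map(&:strip).reject { |l| l.empty? || l.start_with?("#") }
}
```

On a public bucket those URIs can be plain URLs and the player chains through them unaided. The private-bucket case is where the work starts.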

On a private bucket, every segment URI has to be a pre-signed URL. The signing TTL has to outlast the longest viewing session that might still be fetching segments. The presign has to work with whatever S3-compatible store you picked (Tigris, Cloudflare R2, MinIO, AWS). The playlist can be cached, but the signatures inside it can’t be cached longer than they’re valid for. Your encoder ladder needs to produce playlists that match a controller-routable shape, not raw S3 paths. Re-encoding has to be idempotent, because re-uploading 800MB of segments every time you tweak a bitrate costs real money and real minutes.
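The core move is a rewrite pass over each variant playlist: tag lines pass through, segment lines become signed URLs. A minimal sketch, with the signer passed in as a block standing in for something like Aws::S3::Object#presigned_url (the method name and prefix handling here are illustrative, not the gem's internals):

```ruby
# Replace each segment URI in a variant playlist with a signed URL.
# `prefix` is the bucket key prefix; the block does the actual signing.
def sign_playlist(playlist, prefix:)
  playlist.lines.map do |line|
    stripped = line.strip
    # Tags (#EXT...) and blank lines pass through untouched.
    next line if stripped.empty? || stripped.start_with?("#")
    yield("#{prefix}/#{stripped}") + "\n"
  end.join
end

playlist = "#EXTM3U\n#EXTINF:4.0,\nhigh_000.ts\n"
signed = sign_playlist(playlist, prefix: "courses/intro") do |key|
  "https://bucket.example.test/#{key}?X-Amz-Signature=abc" # stand-in signer
end
```

Everything else in the list above (TTLs, cache headers, idempotent re-encodes) is policy wrapped around this one transformation.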

Forty-three lines of bespoke Ruby per stream. That was the answer.

I wanted profile classes.

Why not just use Mux?

I looked at Mux first. It’s good. For a high-volume video product, paying Mux is the right answer.

Phlex on Rails isn’t that. The video catalog is small. The footage is talking head plus screencast, which encodes cleanly with default settings. There’s no live stream, no analytics I’d actually use, no DRM requirement.

And the videos already live on Tigris, next to the rest of the course assets. Adding Mux means a second bill, a second SDK, a second dashboard, a second set of credentials to rotate, a second place to look when something breaks.

Fewer moving parts.

End-to-end

bin/rails g hls:install writes the initializer and the base profile:

# config/initializers/hls.rb
HLS.s3_resource = Aws::S3::Resource.new(
  access_key_id:     ENV.fetch("VIDEO_AWS_ACCESS_KEY_ID"),
  secret_access_key: ENV.fetch("VIDEO_AWS_SECRET_ACCESS_KEY"),
  endpoint:          ENV.fetch("VIDEO_S3_ENDPOINT_URL"),
  region:            "auto"
)
# app/videos/application_video.rb
class ApplicationVideo < HLS::ApplicationVideo
  def self.storage = HLS::Storage::S3.new(
    bucket_name: ENV.fetch("VIDEO_S3_BUCKET_NAME"),
    signing_ttl: 1.hour
  )

  segment_duration 4
end

bin/rails g hls:video Course writes the per-content-type profile (the CourseVideo you saw at the top).

Encode and upload from a job:

HLS::EncodeJob.perform_later(
  profile: "CourseVideo",
  input:  "lecture.mp4",
  output: "tmp/encoded",
  key_prefix: "courses/phlex/intro"
)

Serve from a controller:

class VideosController < ApplicationController
  before_action { @manifest = CourseVideo.manifest(params[:id]) }

  def index
    respond_to do |format|
      format.m3u8 { render plain: @manifest.master_playlist }
      format.jpg  { redirect_to @manifest.poster_url(:hero), allow_other_host: true }
    end
  end

  def show
    render plain: @manifest.variant(params[:variant]).playlist
  end
end

The master playlist points the player at controller routes. The variant playlist points the player at pre-signed S3 URLs. The player never sees S3 directly. You can swap buckets, rotate keys, or change TTLs without the player noticing.
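One Rails detail worth knowing if you wire this up by hand: format.m3u8 only resolves once the MIME type is registered. I'd expect the install generator to handle this, but the registration itself is standard Rails:

```ruby
# config/initializers/mime_types.rb
Mime::Type.register "application/vnd.apple.mpegurl", :m3u8
```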

How this got built

If I’m being honest, I built it to understand HLS better. Wrapping ffmpeg commands felt like something an LLM could handle. So in June 2025 I spent a couple of weeks pair-programming the pipeline with early ChatGPT and Claude agents, gluing ffmpeg commands into Ruby and Rake scripts.

The gem started inside another Rails video project I’ve been working on, which I’ll let the people running it announce in their own time. The Beautiful Ruby site uses it too: Rails serves the pages, Sitepress manages each video’s page and metadata, and hls handles the encoding and signed URLs.

It’s the first gem I’ve shipped that was written mostly by an LLM. The 0.1 release reflected that. It worked end-to-end. The rough edges were everywhere.

What’s new in 0.2.0

0.2 is where I put the design work in. The agents wrote 0.1 to make a thing work. I wrote 0.2 to make a thing I’d put in production for somebody else.

Storage adapter pattern. The S3 wrapper is now an adapter. HLS::Storage::S3, HLS::Storage::Memory for tests, or any object that responds to signing_ttl plus object(key). AWS, Tigris, Cloudflare R2, MinIO. Same code path. The README documents the MinIO knob (force_path_style: true).
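The duck type makes custom backends cheap. Here's the smallest stub that satisfies the two-method contract described above; what object(key) must actually return isn't shown in this post, so the Entry shape below is my assumption, useful only as a test double:

```ruby
# A stub storage adapter for tests: responds to #signing_ttl and
# #object(key). The return shape of #object is illustrative.
class StubStorage
  Entry = Struct.new(:key) do
    def url = "https://stub.example.test/#{key}"
  end

  def signing_ttl = 3600
  def object(key) = Entry.new(key)
end
```

Anything that quacks like this can stand in for S3 in a unit test, which is exactly what HLS::Storage::Memory exists for.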

Config-aware idempotency. The state sidecar records a SHA256 of the input bytes plus a SHA256 of the encode-affecting config (renditions, codecs, bitrates, segment_duration, bits_per_pixel). process re-encodes when either changes. Bump audio_bitrate from 96 to 128 and the next run picks it up. In 0.1, the same bump was a silent no-op. The output didn’t change and you couldn’t tell why. The digest serialization sorts hash keys so it’s stable across Ruby versions and insertion order.
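The config digest is easy to sketch in plain Ruby: canonicalize the config by sorting hash keys recursively, then hash the JSON. Method names here are illustrative, not the gem's internals:

```ruby
require "digest"
require "json"

# Recursively sort hash keys so the serialization is order-independent.
def deep_sort(obj)
  case obj
  when Hash  then obj.sort.to_h.transform_values { |v| deep_sort(v) }
  when Array then obj.map { |v| deep_sort(v) }
  else obj
  end
end

# Digest the canonical JSON of the encode-affecting config.
def config_digest(config)
  Digest::SHA256.hexdigest(JSON.generate(deep_sort(config)))
end

a = config_digest(audio_bitrate: 96, segment_duration: 4)
b = config_digest(segment_duration: 4, audio_bitrate: 96)
# a == b despite insertion order; changing any value changes the digest
```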

Bounded-concurrency uploader with retries. PUTs run through a worker pool (default 4) and retry transient failures (network errors, 503, RequestTimeout, SlowDown) with exponential backoff. Permanent errors (NoSuchBucket, 403) fail fast. Threading is plain Queue plus Thread.new. Dropped the parallel and bigdecimal gem dependencies along the way.
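The Queue-plus-Thread shape is small enough to show. This is a sketch in the same style, not the gem's code; the transient/permanent split is collapsed into a single retryable error class for brevity:

```ruby
class TransientError < StandardError; end

# Drain `keys` through a fixed pool of workers; retry transient failures
# with exponential backoff, collect anything that exhausts its attempts.
def upload_all(keys, workers: 4, attempts: 3)
  queue = Queue.new
  keys.each { |k| queue << k }
  workers.times { queue << :stop }     # one stop sentinel per worker
  failures = Queue.new

  threads = Array.new(workers) do
    Thread.new do
      while (key = queue.pop) != :stop
        tries = 0
        begin
          tries += 1
          yield key                    # the actual PUT goes here
        rescue TransientError => e
          if tries < attempts
            sleep(0.01 * (2**tries))   # exponential backoff
            retry
          end
          failures << [key, e]
        end
      end
    end
  end
  threads.each(&:join)
  Array.new(failures.size) { failures.pop }
end
```

A permanent error (NoSuchBucket, 403) would simply not be rescued here, so it propagates and fails fast, which is the behavior you want.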

Lock, verify, validate. Three places where 0.2 fails fast or loud instead of corrupting state:

  • process takes a flock on the output directory before encoding, so two workers can’t both write to the same state file.
  • verify_encode! walks the just-encoded output (master + variants + segments + posters) and asserts every file exists and is non-empty before recording state and uploading. No half-broken playlists in production.
  • HLS::Input#validate! raises early when the input has no video stream (audio-only files, malformed media). The encode pipeline runs it before invoking ffmpeg.
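The locking step is the easiest of the three to sketch in plain Ruby. The lock file name and error class below are illustrative, but the flock shape is the real mechanism:

```ruby
require "tmpdir"

Busy = Class.new(StandardError)

# Take an exclusive, non-blocking lock on a file inside the output
# directory; raise instead of waiting if another process holds it.
def with_output_lock(dir)
  File.open(File.join(dir, ".lock"), File::RDWR | File::CREAT) do |f|
    raise Busy, dir unless f.flock(File::LOCK_EX | File::LOCK_NB)
    begin
      yield
    ensure
      f.flock(File::LOCK_UN)
    end
  end
end
```

Non-blocking matters: a second worker should fail immediately (and let the job layer decide what to do) rather than queue up behind a long encode.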

An ffmpeg timeout that fires. ffmpeg_timeout is a class setting in seconds. Stuck ffmpeg gets SIGTERM, then SIGKILL. The wider point is the one from “Put the timeout on the connection”: subprocesses are another place where Ruby’s high-level abstractions don’t bound execution. Set the deadline at the layer that actually owns the blocking primitive.
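The TERM-then-KILL pattern is worth seeing once. A sketch of the deadline loop (not the gem's implementation): poll for exit, send TERM at the deadline, escalate to KILL if the process ignores it.

```ruby
require "timeout"

# Run a command with a hard deadline. Returns the Process::Status.
def run_with_deadline(*cmd, timeout:, grace: 5)
  pid = Process.spawn(*cmd)
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + timeout
  loop do
    _, status = Process.wait2(pid, Process::WNOHANG)
    return status if status                  # exited on its own
    if Process.clock_gettime(Process::CLOCK_MONOTONIC) > deadline
      Process.kill("TERM", pid)              # ask nicely first
      begin
        return Timeout.timeout(grace) { Process.wait2(pid)[1] }
      rescue Timeout::Error
        Process.kill("KILL", pid)            # it ignored TERM
        return Process.wait2(pid)[1]
      end
    end
    sleep 0.05
  end
end
```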

EncodeJob retry policy that knows the difference. ActiveJob retries everything five times by default, which is wrong for two of the gem’s error classes. HLS::Lock::Busy means another worker is already encoding the same output. HLS::State::CorruptError means somebody has to look at state.json. Both are poison messages. 0.2 uses discard_on for both, so they don’t get retried.

Rails generators. bin/rails g hls:install and bin/rails g hls:video NAME. The example up top is what they scaffold, more or less.

ActiveSupport::Notifications. Events for encode, poster, verify, upload, upload-retry, and process. Subscribe to log timings or wire up metrics. No-op when ActiveSupport isn’t loaded.
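Subscribing is plain ActiveSupport::Notifications. The event name pattern below is my guess at a namespace; check the gem's README for the exact names:

```ruby
# config/initializers/hls_instrumentation.rb
ActiveSupport::Notifications.subscribe(/hls/) do |name, start, finish, _id, payload|
  Rails.logger.info("[hls] #{name}: #{((finish - start) * 1000).round}ms #{payload.inspect}")
end
```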

Edge-cacheable playlists. Master and variant .m3u8 files now upload with public, max-age=300, so a CDN can serve them for five minutes between re-encodes. Segment files keep their long-lived caching.

Full notes (including breaking changes for storage and cache) are in CHANGELOG.md. Migration is small: rename bucket to storage, swap the constructor, replace manifest_cache + manifest_cache_ttl with an HLS::Cache.new(backend:, ttl:) on the profile.
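Under my reading of those notes, the 0.2 side of the migration looks roughly like this; the exact 0.1 method names and how the cache attaches to the profile are assumptions, so treat this as a shape, not a diff:

```ruby
# app/videos/application_video.rb (0.2)
class ApplicationVideo < HLS::ApplicationVideo
  # was `bucket` in 0.1; the constructor changed too
  def self.storage = HLS::Storage::S3.new(
    bucket_name: ENV.fetch("VIDEO_S3_BUCKET_NAME"),
    signing_ttl: 1.hour
  )

  # replaces the 0.1 manifest_cache + manifest_cache_ttl pair
  def self.cache = HLS::Cache.new(backend: Rails.cache, ttl: 5.minutes)
end
```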

What this lets you ship

Video behind paid access doesn’t get to live on a public CDN. Courses. Gated previews. Member-only Q&As. Every URL has to be signed. The signatures have to live inside the playlists. The encoder has to produce a shape the controller can route.

A profile class encodes, uploads, signs, serves. One declaration, one process call, one controller action. The shape Rails gave you for rows, applied to streams.

Do you want to learn Phlex 💪 and enjoy these code examples?

Support Beautiful Ruby by pre-ordering the Phlex on Rails video course.

Order the Phlex on Rails video course for $379