Evented Subscriptions in Relay Modern

Matt Krick
7 min read · Jul 11, 2017


I finally added Relay Modern to my production build and I gotta say… I dig it. When Relay Classic was first released, I poked a lot of fun at it and even made my own client cache (with redux & subscriptions). The new version beautifully shuts me up. It’s 5x smaller, has a vastly improved mutations API, reduces runtime complexity (thanks to babel + no diffing), is decoupled from React, and best of all: it has subscriptions.

Unfortunately, being a new library, there are a lot of questions left for the community to solve:

  • How do I pass the environment down my render tree?
  • How do I switch between environments?
  • When do I unsubscribe?
  • Where do I trigger a subscription?
  • What caching strategy should I use?

…And that’s just the front end! What about the server?

  • How do I filter out the message for the person who made the mutation?
  • Where do I authenticate the subscription requests?
  • What if I need to kick someone off a subscription?

Well, here goes.

The Front End


When you read the Relay example, you see an environment variable that gets passed into a QueryRenderer, and it’s all good. But how does it go through nested routes? Passed via props? Good Lord no; it’s a prop, not a peace pipe. We could just create a singleton, but that’s only 1 step better than attaching it to window. The answer, elegantly solved by Redux’s Provider, is context. Just make yourself a Provider & any child that wants it can grab it. But context isn’t great for stateless components, so you’ll probably want to make a withAtmosphere HOC that puts it in the props for you. Why do I call it Atmosphere?
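As a sketch, the HOC can be this small (the context wiring is simplified to a plain function so the idea stands alone; a real version would declare `contextTypes` so React injects the context for you):

```javascript
// Hypothetical sketch of withAtmosphere. In real React you'd declare
// WithAtmosphere.contextTypes so React injects context; here the wrapper is
// a plain function component receiving (props, context) that merges the
// atmosphere into props.
const withAtmosphere = (ComposedComponent) => {
  const WithAtmosphere = (props, context) =>
    ComposedComponent({...props, atmosphere: context.atmosphere});
  return WithAtmosphere;
};
```

Now any stateless component can destructure `atmosphere` straight off its props.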

Creating the Atmosphere

My app uses http to fetch results until the user hits a component that needs a websocket, and then it switches. When I no longer need the websocket, it switches back to http. Some routes can even use http or websockets, depending on where they come from. In other words, the environment used by a QueryRenderer is non-deterministic and I need something that handles it all. Naturally, the atmosphere encompasses all environments, so that’s what I called it.

To set the Atmosphere, I first made a class that stored all the environments & could return the current one with something like atmosphere.get(). This was easy enough, but each environment had its own store so some things got refetched and I ended up calling withAtmosphere all over the place.

The cleaner alternative was to extend Environment. This gave me 2 benefits: I could get at it from any fragment container via props.relay.environment, and my networks shared a store (albeit at the cost of unsafely using the internal _network, but I like to live dangerously). Now, upgrading to a socket fetch is as easy as environment.setSocket(). I can even hardwire my fetchQuery functions as methods inside the atmosphere. That way if my http authorization header changes, I don’t need to create a whole new environment, I just call environment.setAuth().
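A rough sketch of the shape (Environment is stubbed here; in the real app it’s Relay’s Environment, and reaching into _network is exactly the unsafe part admitted to above):

```javascript
// Stub standing in for relay-runtime's Environment; only the piece the
// sketch needs.
class Environment {
  constructor({network}) {
    this._network = network;
  }
  execute(operation) {
    return this._network.fetch(operation, this._auth);
  }
}

// Hypothetical Atmosphere: one shared store, swappable transports.
class Atmosphere extends Environment {
  constructor({httpFetch, socketFetch}) {
    super({network: {fetch: httpFetch}});
    this._httpFetch = httpFetch;
    this._socketFetch = socketFetch;
  }
  setSocket() {
    this._network.fetch = this._socketFetch; // unsafely swap the internal network
  }
  setHttp() {
    this._network.fetch = this._httpFetch;
  }
  setAuth(authHeader) {
    this._auth = authHeader; // no need to rebuild the whole environment
  }
}
```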


Way back when, in the days of Relay Classic, there was something called a clientMutationId. It was a simple ID that accompanied a mutation on its journey through request & response. While it’s no longer needed for mutations, the concept works beautifully for subscriptions. Each subscription request sends along an opId (name borrowed from Apollo). Then, when it’s time to unsubscribe, the client just sends the opId that it keeps in its Atmosphere. Finally, if we put the requestSubscription in Atmosphere, then we can completely abstract away the opId in favor of returning a simple unsubscribe function that gets passed to our components. In Facebook land, this could be used to unsub from a newsfeed story after a user scrolls past it. But how do we start the subscription without reverting our sexy functional components back to boring Component classes?
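The opId bookkeeping might look like this (sendMessage stands in for whatever writes frames to the websocket; all shapes here are hypothetical):

```javascript
// Hypothetical subscription registry living inside the Atmosphere.
// sendMessage is a stub for writing frames to the websocket.
class SubscriptionClient {
  constructor(sendMessage) {
    this.sendMessage = sendMessage;
    this.subscriptions = {};
    this.nextOpId = 0;
  }
  requestSubscription(query, variables) {
    const opId = String(this.nextOpId++);
    this.subscriptions[opId] = {query, variables};
    this.sendMessage({type: 'start', opId, query, variables});
    // the component never sees the opId, just this teardown function
    return () => this.unsubscribe(opId);
  }
  unsubscribe(opId) {
    delete this.subscriptions[opId];
    this.sendMessage({type: 'stop', opId});
  }
}
```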

Where to Subscribe

Just like Redux’s famous connect(mapStateToProps), subscribing when a component mounts can be as easy as withSubscriptions(subscription). But what if you want multiple subscriptions? Unfortunately, the GraphQL spec (yes, I’m fun at parties) clearly states that only 1 subscription is allowed per operation. This is a bummer because most queries need at least 3 accompanying subscriptions (added, removed, updated) to keep them fresh. Again, borrowing from Redux’s compose function, we can gather up all of our unsubscribe functions and execute them all when the time comes.
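The gathering step itself is tiny (a sketch; withSubscriptions and the individual start functions are hypothetical — the point is the composed teardown):

```javascript
// Start each subscription, collect its unsubscribe function, and hand back
// one function that tears them all down at once.
const subscribeAll = (...startFns) => {
  const unsubs = startFns.map((start) => start());
  return () => unsubs.forEach((unsub) => unsub());
};
```

So a todo list might call `subscribeAll(startAdded, startRemoved, startUpdated)` and keep the single returned function around until it’s done.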

Personally, I don’t like to unsubscribe on unmounts. The reason is simple: imagine an app with 100 todo items. That’s an expensive query. Now imagine the client navigates away and then back again. You’ll have to refetch the query because the data went stale as soon as the subscription ended. If you wanted to turn it into an inequality, it’d be something like this:

NumMessagesAfterUnmount * AvgMessageSize < P(ReturnVisit) * QueryCost

If the client leaves the page & receives 10 1KB messages, it’ll cost you 10KB. If there’s a 20% probability that they return to the page and issue a new 100KB query, then the expected value is 20KB. That means keeping the subscription alive cuts your expected data cost in half! You could even use page analytics to determine the exact probability of a return visit and tweak accordingly. At least, in theory…
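Plugged into code with the numbers above (all inputs are the hypothetical values from the example):

```javascript
// True when keeping the subscription alive is expected to be cheaper than
// unsubscribing and refetching on a return visit.
const shouldStaySubscribed = ({numMessages, avgMessageSizeKB, pReturn, queryCostKB}) =>
  numMessages * avgMessageSizeKB < pReturn * queryCostKB;
```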

Caching Data

A big difference between Relay Classic & Modern is that Modern always fetches the query when a component mounts. That means the above strategy won’t work out of the box. To circumvent this, there is the suggested strategy… and then there’s my strategy.

The Facebook folks recommend that you apply a cache at the network layer. In other words, in your fetchQuery function, before you call fetch, you check your cache for the outgoing request. If it’s there, you return the cached result. If it’s not, fetch and cache the result so you’ll be ready next time. They even make it super easy by giving you a cache with a global time-to-live (TTL). If you want fine-grained TTL, you could trivially implement your own.
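A minimal version of that network-layer cache with a per-instance TTL might look like this (synchronous for brevity; in practice fetcher returns a promise and you cache the promise the same way):

```javascript
// Wrap a fetcher so repeated identical requests inside the TTL window are
// served from memory instead of the network. Shapes are hypothetical.
const makeCachedFetch = (fetcher, ttl = 30000) => {
  const cache = new Map();
  return (operation, variables) => {
    const key = operation.text + JSON.stringify(variables);
    const hit = cache.get(key);
    if (hit && Date.now() - hit.time < ttl) return hit.result; // cache hit
    const result = fetcher(operation, variables);
    cache.set(key, {result, time: Date.now()});
    return result;
  };
};
```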

Unfortunately, both options are hogwash for one reason: If you query for those 100 todo items, and then your subscription sends in that 101st, your cached query is now invalid because it still has 100 items.

My solution is simple: don’t remove the data from the Relay store until you’re ready. This could be when you unsubscribe from the subscription that kept the query fresh, or it could be your own TTL.

By default, when QueryRenderer mounts, it asks the server for data. When that data arrives, it tells the store it cares about that data. When it unmounts, it tells the store it no longer cares. If nothing else cares about that data node, it gets garbage collected. To fix this, I wrote a custom QueryRenderer that tries to resolve the response from the store before going to the network. Then, if you specify a subscription or TTL, it keeps caring about that data until the sub ends or the TTL expires.

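The core decision inside that custom QueryRenderer might be sketched like this (check, lookup, and retain mirror the Relay store API in spirit, but the shapes here are stubbed and hypothetical):

```javascript
// Hypothetical store-first resolution. The environment shape is a stand-in
// for Relay's: check tells us if the store can fulfill the query, lookup
// reads it, retain keeps it out of garbage collection.
const resolveOrFetch = (environment, operation, {ttl, fetchFn}) => {
  if (environment.check(operation)) {
    // the store can already fulfill the query: no network round trip
    return environment.lookup(operation);
  }
  const disposable = environment.retain(operation); // keep caring about the data
  if (ttl) setTimeout(() => disposable.dispose(), ttl); // stop caring after TTL
  return fetchFn(operation);
};
```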

Pro tip: This strategy doesn’t work if your mutation delivers a partial record. For example, if you query a connection where each edge has a cursor, and then your mutation doesn’t provide a cursor, it’ll always think the query is incomplete. Ask me how I know. To debug this, stick a break point on RelayAsyncLoader#_handleMissing to see what field is missing.

The Back End


Socket.io has a useful function called broadcast, which sends a message to everyone but the sender. How can we mimic that functionality so calling a mutation doesn’t result in a mutation response + an identical subscription payload? The solution is to place the socketId in the GraphQL context.

For example, at the end of your mutation, include the mutator’s socketId in the payload that goes to the pubsub. Then, in your GraphQL subscribe function, compare that mutatorId to the subscriber’s socketId. This works because the pubsub payload doesn’t have to follow your schema and when it gets returned from subscribe, GraphQL will filter out the extra field.

// addTodoMutation.js
// include the mutator's socketId alongside the new todo in the pubsub payload
getPubSub().publish(`todoAdded.${teamId}`, {newTodo, mutatorId});

// todoAddedSubscription.js
// drop payloads that originated from this subscriber's own mutation
const filterFn = (value) => value.mutatorId !== socketId;
return makeSubscribeAsyncIter(channelName, filterFn);

Sidenote: if you’re wondering why my pubsub is in a thunk, see GraphQL: Tips after a year in production.


Without locking down the subscription above, any attacker with knowledge of a team’s ID could listen for new todo items. That’s why the subscribe function, just like the resolve function for single payloads, is the best place for authentication. Before initializing the async iterator, while the request is still cheap, I shut down any funky requests. Then, for validation that depends on the incoming payload, I have the filterFn. Note that I don’t always have to return the pubsub payload. That payload could be the data necessary to trigger a user-specific query. That’s the power of evented subs!
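Putting the cheap check and the payload-level filter together, a guarded subscribe might look like this (the authToken shape and makeSubscribeAsyncIter are stand-ins for this app’s helpers, stubbed here so the sketch runs on its own):

```javascript
// Stub: in the real app this wraps the pubsub channel in an async iterator.
const makeSubscribeAsyncIter = (channelName, filterFn) => ({channelName, filterFn});

// Hypothetical subscribe resolver: cheap auth up front, payload-dependent
// validation in the filter.
const todoAddedSubscribe = (source, {teamId}, {authToken, socketId}) => {
  // reject before any iterator or pubsub resources are allocated
  if (!authToken || !authToken.teams.includes(teamId)) {
    throw new Error('Unauthorized');
  }
  // payload-level check: skip messages caused by this subscriber's own mutation
  const filterFn = (value) => value.mutatorId !== socketId;
  return makeSubscribeAsyncIter(`todoAdded.${teamId}`, filterFn);
};
```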

Kicking folks off a subscription

Sometimes, you need to remotely kick someone off a subscription.

Thankfully, that’s as easy as calling asyncIterator.return(). For that to make sense, I recommend reading an article on Async Iterators & playing around with them until they stop feeling like magic. It’ll take a few hours. When you call return(), your awaited iterating loop will resolve and you can tell the client that the subscription has ended. This is where that onCompleted callback for Relay comes in. Now you can pop a grumpy modal when people use potty words.
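A tiny runnable demo of that behavior with a bare async generator (the finally block is where the real subscription would unsubscribe from the pubsub):

```javascript
// An async generator standing in for the subscription's async iterator.
async function* makeSubscriptionIterator() {
  let i = 0;
  try {
    while (true) {
      yield i++; // in reality: yield the next pubsub payload
    }
  } finally {
    // runs when .return() is called: unsubscribe from the pubsub here
  }
}
```

Calling `iterator.return()` resolves the pending iteration with `{done: true}`, which is your cue to tell the client the subscription ended.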


And with that, we’ve covered all the pitfalls of getting Relay set up for efficient subscriptions. There are still plenty of interesting problems to tackle, like extracting GraphQL to a stateless microservice, avoiding waterfall query requests using React Router v4, and talking to multiple endpoints (like GitHub’s new GraphQL API). If that sounds like fun to you, you’re weird

…and I want to work with you. We’re hiring a Senior Full-stack dev & Summer Intern. You’ll be employee #5 at a company that’s in Alchemist, one of the top accelerators in the world. We’re backed by some of the best investors from across the country, including SV Angel and even Slack, so we gotta be good, right? If you’re happy just playing with this stuff in your free time, get a little side hustle going by checking out our open issues, submitting a PR, and grabbing a piece of the company with our Equity 4 Effort program.