why are we here?
We have ended up here today to discuss the implementation of watchQuery from Apollo GraphQL. The direct reason for this topic is that this application was built prior to RSC, so we are stuck doing client-side data fetching; in a perfect world I would solve this by serving pre-fetched data in server components to the user.
why watchQuery?
I am exploring watchQuery() as a means of subverting skeletons across route navigation in a device management portal. Currently implemented is useQuery() with zero caching, and we get skeleton loading on every route navigation as data is fetched via the network on client component mount. ‘Just fetch data in getServerSideProps() and pass it down via props’? These components live right down at the bottom of the tree and I don’t want to get stuck in prop-drilling hell.
let’s take a look
Below we have a comparison of the two scenarios mentioned above.
what does watchQuery do?
This query creates an observable subscription to my cached data, allowing me to always supply the cached data immediately; when that subscription updates a field with a fresh network call, it supplies the new data in hand, giving the user a faster time to interaction on subsequent navigations after the initial load.
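To make that cache-then-network delivery order concrete, here is a minimal plain-TypeScript model of it. This is not Apollo’s actual implementation; the CacheFirstQuery class and its methods are made up purely to illustrate the behaviour described above.

```typescript
// A minimal model of cache-then-network delivery. On subscribe, the cached
// value (if any) is handed over synchronously, so the UI can render real data
// instead of a skeleton; a later network result updates every subscriber.
type Listener<T> = (value: T, source: "cache" | "network") => void;

class CacheFirstQuery<T> {
  private listeners: Listener<T>[] = [];
  constructor(private cached: T | undefined) {}

  // Deliver the cached value immediately; return an unsubscribe function.
  subscribe(listener: Listener<T>): () => void {
    this.listeners.push(listener);
    if (this.cached !== undefined) listener(this.cached, "cache");
    return () => {
      this.listeners = this.listeners.filter((l) => l !== listener);
    };
  }

  // When the network responds, update the cache and notify subscribers.
  onNetworkResult(value: T): void {
    this.cached = value;
    this.listeners.forEach((l) => l(value, "network"));
  }
}

// Usage: the subscriber sees the stale cached list immediately,
// then the fresh list once the "network" responds.
const query = new CacheFirstQuery<string[]>(["device-a", "device-b"]);
const seen: string[] = [];
const unsubscribe = query.subscribe((devices, source) => {
  seen.push(`${source}:${devices.length}`);
});
query.onNetworkResult(["device-a", "device-b", "device-c"]);
unsubscribe();
console.log(seen); // ["cache:2", "network:3"]
```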
a perfect solution?
Can’t say I am a fan of GraphQL in general (in small teams), but this is where we are and what we have to deal with, so get gud.
I want to voice my opinion on the DX of the watchQuery() implementation so far. In comparison to useQuery(), which manages its own runtime based on its inherited variables, with watchQuery() you have to execute it inside of useEffect() and manage everything yourself: your dependencies, what invokes refetching, as well as subscription cleanup on unmount.
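Here is a plain-TypeScript sketch of that effect-with-cleanup bookkeeping. The makeEffectRunner helper is made up and simply stands in for React re-running a useEffect() when its dependency array changes; the point is what goes wrong if you forget the cleanup.

```typescript
// A plain-TS model of the bookkeeping watchQuery() forces on you.
type Cleanup = () => void;

// Stand-in for a store of live subscriptions (e.g. observable queries).
class Subscriptions {
  active = 0;
  subscribe(): Cleanup {
    this.active += 1;
    return () => { this.active -= 1; };
  }
}

// Re-runs an effect the way React would when deps change: call the
// previous cleanup first, then run the effect again with the new deps.
function makeEffectRunner(effect: (deps: string) => Cleanup) {
  let cleanup: Cleanup | null = null;
  return (deps: string) => {
    cleanup?.(); // React calls the old cleanup before re-running
    cleanup = effect(deps);
  };
}

const store = new Subscriptions();
const run = makeEffectRunner((variables) => {
  // subscribe with the current query variables…
  const unsubscribe = store.subscribe();
  // …and hand the unsubscribe back as the effect's cleanup
  return unsubscribe;
});

run("status=online");
run("status=offline"); // deps changed: old subscription is torn down first
console.log(store.active); // 1 — without the cleanup this would be 2
```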
this is nice, I quite like this. It is easy to use and makes sense.
As you can see in the above, when we start getting towards more complex queries we now have good old useEffect dependencies to think about, which is never fun…
does it actually work here?
Considering the fact that I am dealing with rebuilding lists with non-identical sorting, maybe the better UX here really is just to show a skeleton. It seems as if the page shift is more confusing to the user than the faster TTI is worth.
it does actually work
After a couple iterations of looking at different parts of these queries on both the front and back, I managed to get a very reliable list sorting by adding
pk
as part of my django order_by
. pk
is a static value so it manages to stabilise the the ordering as much as possible. last_comm_ts
is frequently updated so we will never get super static lists but we are pretty close to a good point. Â
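The same tiebreaker can be sketched on the client side. The field names match the ones above, but the Device shape and the stableOrder helper are mine, made up for illustration; the comparator mirrors what an order_by on last_comm_ts plus pk would produce on the Django side.

```typescript
// Sort primarily by last_comm_ts (newest first) and break ties with pk.
// pk never changes, so rows with equal timestamps always resolve to the
// same order instead of jumping around between refetches.
interface Device {
  pk: number;
  last_comm_ts: number; // epoch millis
}

function stableOrder(devices: Device[]): Device[] {
  return [...devices].sort((a, b) => {
    if (b.last_comm_ts !== a.last_comm_ts) {
      return b.last_comm_ts - a.last_comm_ts;
    }
    return a.pk - b.pk; // static tiebreaker
  });
}

// Two devices that last communicated at the same instant would otherwise
// land in an unpredictable order; the pk tiebreaker pins them down.
const devices: Device[] = [
  { pk: 7, last_comm_ts: 1000 },
  { pk: 3, last_comm_ts: 1000 },
  { pk: 5, last_comm_ts: 2000 },
];
console.log(stableOrder(devices).map((d) => d.pk)); // [5, 3, 7]
```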
final implementation on staging environment
What we do see here with the final implementation on the staging environment, looking at production data, is that last_comm_ts updates rather frequently, and when our cached data updates after render we get a jump in the data that appears like a layout shift.
what else could we explore?
I am unsure how I feel about this. It will always happen even if I use skeleton loading, but essentially we will never be able to go back to the exact data from where we came unless I cache it for the expected length of time we believe someone might spend here.
I could look at our analytics data above and attempt a sort of session caching: cache those lists for around 5 minutes, so users would have a more static view into these lists, at the cost of non-realtime data.
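A rough sketch of what that session cache could look like. All names here are hypothetical; the clock is injected so the expiry behaviour is easy to exercise without waiting five minutes.

```typescript
// A tiny TTL cache: entries are served as-is until the TTL lapses, trading
// realtime data for stable lists. After expiry the caller falls through to
// a fresh network fetch.
class SessionCache<T> {
  private entries = new Map<string, { value: T; storedAt: number }>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() - entry.storedAt > this.ttlMs) {
      this.entries.delete(key); // expired: force a fresh fetch
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, storedAt: this.now() });
  }
}

// Usage: within the 5-minute window the list is served unchanged;
// afterwards the cache misses and the caller refetches.
let fakeNow = 0;
const cache = new SessionCache<string[]>(5 * 60_000, () => fakeNow);
cache.set("devices", ["a", "b"]);
console.log(cache.get("devices")); // ["a", "b"]
fakeNow = 6 * 60_000;
console.log(cache.get("devices")); // undefined — expired
```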