Supporting Offline Mode in TanStack Query
One of the main challenges I’ve faced when using TanStack Query is achieving offline support while maintaining full control over the data layer. What starts as a simple requirement to cache data locally often evolves into managing complex cache invalidation logic, version migrations, and essentially implementing a mini-backend in your frontend.
I’ve looked at various solutions online, but they either don’t play nice with TanStack Query’s data layer or are so limited that you might as well build your own solution. So, I thought, “there has to be a better way,” and that’s when I remembered Effect Schema.
Enter Effect Schema
If you’re not familiar with Effect Schema, think of it as an evolution of validators like Zod or Yup. While most validators only transform `unknown` into your type `T`, Effect Schema provides bi-directional parsing - you can go from `From` to `To` and back again. This capability enables us to:
- Eliminate the need for multiple schema versions and migrations
- Automatically detect when cached data is invalid due to schema changes
- Trigger new queries when stored data no longer matches the current schema
Let’s look at a practical example. Say we have this response type from our API:
enum TodoStatus {
  Pending = "pending",
  Completed = "completed",
  Cancelled = "cancelled"
}

type Todo = {
  id: string;
  title: string;
  status: TodoStatus;
  created_at: string; // ISO date string
};
Now, suppose you want to transform this into a cleaner model where `created_at` becomes `createdAt` and that ISO string becomes a proper `Date` object. With Zod, you’d need to write something like this:
// Parsing from API response to your model
const todoDecoder = z
  .object({
    id: z.string(),
    title: z.string(),
    status: z.nativeEnum(TodoStatus),
    created_at: z.string().datetime()
  })
  .transform(({ created_at, ...rest }) => ({
    ...rest,
    createdAt: new Date(created_at)
  }));
But now you need another schema to transform back to the API format:
// Transform from your model back to API format
const todoEncoder = z
  .object({
    id: z.string(),
    title: z.string(),
    status: z.nativeEnum(TodoStatus),
    createdAt: z.date()
  })
  .transform(({ createdAt, ...rest }) => ({
    ...rest,
    created_at: createdAt.toISOString()
  }));
The limitation here is that Zod only handles one-way parsing. You need separate schemas for each direction, which you must maintain manually and keep in sync. This becomes unwieldy as your schemas grow more complex, especially when dealing with nested objects or arrays.
Effect Schema to the Rescue
This is where Effect Schema really shines. Instead of maintaining two separate schemas, Effect Schema lets us define a single schema that handles both parsing and transformation bi-directionally:
import { Schema } from "effect";

const Todo = Schema.Struct({
  id: Schema.String,
  title: Schema.String,
  status: Schema.Enums(TodoStatus),
  // Transform from string to Date and back
  created_at: Schema.DateFromString
}).pipe(
  // Transform from created_at to createdAt and back
  Schema.rename({
    created_at: "createdAt"
  })
);
type SerializedTodo = Schema.Schema.Encoded<typeof Todo>;
// { id: string; title: string; status: TodoStatus; created_at: string; }
type ParsedTodo = typeof Todo.Type;
// { id: string; title: string; status: TodoStatus; createdAt: Date; }
The power of this approach is that it:
- Unifies parsing and transformation in a single schema definition
- Provides automatic data validation when schemas change (see the sketch below)
- Ensures type safety across all system boundaries
- Simplifies data transformation logic
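To see that validation benefit concretely, here’s a minimal sketch (the stale payload is hypothetical): when persisted data no longer matches the current schema, decoding fails, and we can treat the entry as a cache miss and refetch.

import { Either } from "effect";

// Hypothetical stale payload: "done" is not a member of the current TodoStatus enum
const stale = { id: "1", title: "Hello", status: "done", created_at: "2024-01-01" };

const result = Schema.decodeUnknownEither(Todo)(stale);
if (Either.isLeft(result)) {
  // The persisted shape no longer matches the schema: drop the entry and refetch
}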
If you’re interested in diving deeper into Effect Schema, I recommend checking out my crash course.
When working with your API, the process is simple: use `decode` when receiving data and `encode` when persisting it.
const getTodos = async () => {
  const response = await fetch("/todos");
  const json = await response.json();
  // The endpoint returns a list of todos, so we decode against an array schema
  return Schema.decodeUnknownSync(Schema.Array(Todo))(json);
};

const saveTodos = (todos: ParsedTodo[]) => {
  const encoded = Schema.encodeSync(Schema.parseJson(Schema.Array(Todo)))(todos);
  localStorage.setItem("todos", encoded);
};
This means that if we do this:
const todo = {
  created_at: "2024-01-01",
  id: "1",
  status: TodoStatus.Pending,
  title: "Hello"
};

const decodedTodo = Schema.decodeSync(Todo)(todo);
console.log(decodedTodo);

const encodedTodo = Schema.encodeSync(Todo)(decodedTodo);
console.log(encodedTodo);
We get:
// Decoded
{
  id: '1',
  title: 'Hello',
  status: 'pending',
  createdAt: Date('2024-01-01T00:00:00.000Z')
}

// Encoded
{
  id: '1',
  title: 'Hello',
  status: 'pending',
  created_at: '2024-01-01T00:00:00.000Z'
}
Now we’re ready to store this in IndexedDB!
Custom Decode/Encode
Side note: you can also define your own custom transformations when you need more control:
const IncomingTodo = Schema.Struct({
  id: Schema.String,
  title: Schema.String,
  status: Schema.Enums(TodoStatus),
  created_at: Schema.DateFromString
});

const Todo = Schema.transform(
  IncomingTodo, // From (API format)
  Schema.Struct({
    ...IncomingTodo.omit("created_at").fields,
    createdAt: Schema.DateFromSelf
  }), // To (Model format)
  {
    decode: (from) => ({
      ...from,
      createdAt: from.created_at
    }),
    encode: (to) => ({
      ...to,
      created_at: to.createdAt
    }),
    strict: true
  }
);
However, since Effect Schema comes with many built-in transformations like `pluck`, `omit`, `rename`, etc., there’s no need to manually define them for this example - though it ultimately depends on what you need.
IndexedDB Abstraction
Now that we understand Effect Schema’s core concepts, let’s create our IndexedDB abstraction. For this implementation, I’ll create an Effect wrapper around localforage:
import localforage from "localforage";
import { DateTime, Effect, Option, Schema } from "effect";

export class IndexedDB extends Effect.Service<IndexedDB>()("IndexedDB", {
  sync: () =>
    ({
      get: (key: string) =>
        Effect.tryPromise(() => localforage.getItem(key)).pipe(
          // localforage resolves with null for missing keys, so map null to None
          Effect.map(Option.fromNullable),
          Effect.orElseSucceed(() => Option.none<unknown>()),
          Effect.withSpan("IndexedDB.get")
        ),
      set: (key: string, value: unknown) =>
        Effect.tryPromise(() => localforage.setItem(key, value)).pipe(
          Effect.ignore,
          Effect.withSpan("IndexedDB.set")
        ),
      remove: (key: string) =>
        Effect.tryPromise(() => localforage.removeItem(key)).pipe(
          Effect.ignore,
          Effect.withSpan("IndexedDB.remove")
        ),
      clear: () =>
        Effect.tryPromise(() => localforage.clear()).pipe(
          Effect.ignore,
          Effect.withSpan("IndexedDB.clear")
        )
    }) as const,
  accessors: true
}) {}
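Before wiring this into anything bigger, here’s a quick sanity check of the service in isolation (a sketch; the key and value are arbitrary):

// Minimal usage sketch: store a value, then read it back as an Option
const program = Effect.gen(function* () {
  const db = yield* IndexedDB;
  yield* db.set("greeting", "hello");
  const value = yield* db.get("greeting");
  console.log(value); // Some("hello"), or None if nothing was stored
}).pipe(Effect.provide(IndexedDB.Default));

Effect.runPromise(program);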
Next, I’ll create a `QueryPersister` to manage the invalidation logic:
export class QueryPersister extends Effect.Service<QueryPersister>()("QueryPersister", {
  effect: Effect.gen(function* () {
    const indexedDB = yield* IndexedDB;

    class StoredData extends Schema.Class<StoredData>("StoredData")({
      modifiedAt: Schema.DateTimeUtc.pipe(
        // Reject entries older than 7 days. The threshold is computed inside
        // the predicate so it's evaluated at decode time, not at module load
        Schema.filter((modifiedAt) =>
          DateTime.greaterThanOrEqualTo(
            modifiedAt,
            DateTime.subtract(DateTime.unsafeNow(), { days: 7 })
          )
        )
      ),
      data: Schema.Unknown
    }) {
      static decodeUnknownOptionFromSelf = Schema.decodeUnknown(
        Schema.OptionFromSelf(this.pipe(Schema.pluck("data")))
      );
      static encode = Schema.encode(this);
    }

    const remove = (key: string) =>
      indexedDB.remove(key).pipe(Effect.ignore, Effect.withSpan("QueryPersister.remove"));

    return {
      get: <Result, Input>(key: string, schema: Schema.Schema<Result, Input>) =>
        indexedDB.get(key).pipe(
          Effect.flatMap(StoredData.decodeUnknownOptionFromSelf),
          Effect.flatMap(Schema.decodeUnknown(Schema.OptionFromSelf(schema))),
          // If anything fails to decode (stale or mismatched data), drop the
          // entry and report a cache miss
          Effect.orElse(() => Effect.zipRight(remove(key), Effect.succeed(Option.none<Result>()))),
          Effect.withSpan("QueryPersister.get")
        ),
      store: (key: string, value: unknown) =>
        Effect.gen(function* () {
          const encoded = yield* StoredData.encode({
            data: value,
            modifiedAt: DateTime.unsafeNow()
          });
          yield* indexedDB.set(key, encoded);
        }).pipe(Effect.ignore, Effect.withSpan("QueryPersister.store")),
      remove
    } as const;
  }),
  accessors: true,
  dependencies: [IndexedDB.Default]
}) {}
Notice how we declare a `modifiedAt` property with a `Schema.filter`. This enables automatic invalidation of stale data that exceeds a specified age threshold - whether that’s days, weeks, or months. In this case, we’re invalidating any data older than 7 days.
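If you’d rather not hard-code the threshold, one option (my sketch, not part of the original design) is to derive the schema from a max-age parameter:

// Hypothetical factory: build the stored-data schema from a max age,
// so different caches can expire on different schedules
const makeStoredData = (maxAge: { readonly days: number }) =>
  Schema.Struct({
    modifiedAt: Schema.DateTimeUtc.pipe(
      Schema.filter((modifiedAt) =>
        DateTime.greaterThanOrEqualTo(
          modifiedAt,
          DateTime.subtract(DateTime.unsafeNow(), maxAge)
        )
      )
    ),
    data: Schema.Unknown
  });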
Cached HTTP Client
The final piece is an HTTP client that uses the query’s `queryKey` as the storage key and the encoded data as the value within our `IndexedDB` service. Here’s the core functionality:
- For existing stored data: we decode and return cached data optimistically to the `queryFn`, initiate a background request, and upon success, encode and update the in-memory data.
- For missing data: we execute a blocking request (awaited) and, upon success, encode and store the result.
import { HttpClient, HttpClientRequest, HttpClientResponse } from "@effect/platform";
import { Effect, Option, Schema, pipe } from "effect";
import type { QueryKey } from "@tanstack/react-query";

class MyHttpClient extends Effect.Service<MyHttpClient>()("MyHttpClient", {
  effect: Effect.gen(function* () {
    const queryPersister = yield* QueryPersister;
    const client = (yield* HttpClient.HttpClient).pipe(
      HttpClient.filterStatusOk,
      HttpClient.mapRequest((request) =>
        request.pipe(
          HttpClientRequest.prependUrl("https://...")
          // ...
        )
      )
    );

    const makeCachedRequest = <Result, Input>({
      queryKey,
      schema,
      request
    }: {
      queryKey: QueryKey;
      schema: Schema.Schema<Result, Input>;
      request: HttpClientRequest.HttpClientRequest;
    }) =>
      Effect.gen(function* () {
        const stringifiedQueryKey = JSON.stringify(queryKey);
        const cachedData = yield* queryPersister.get(stringifiedQueryKey, schema);

        const persistedRequest = client.execute(request).pipe(
          Effect.flatMap(HttpClientResponse.schemaBodyJson(schema)),
          Effect.tap((data) =>
            Effect.zipRight(
              pipe(
                data,
                Schema.encode(schema),
                Effect.flatMap((encoded) => queryPersister.store(stringifiedQueryKey, encoded))
              ),
              // queryClient is your app-level TanStack Query client instance
              Effect.sync(() => queryClient.setQueryData(queryKey, data))
            )
          )
        );

        return yield* Option.match(cachedData, {
          // Serve the cached value immediately and refresh in the background
          onSome: (data) =>
            Effect.zipRight(Effect.forkDaemon(persistedRequest), Effect.succeed(data)),
          // Nothing cached: block on the network request
          onNone: () => persistedRequest
        });
      }).pipe(Effect.withSpan("MyHttpClient.makeCachedRequest"));

    return { client, makeCachedRequest } as const;
  }),
  dependencies: [QueryPersister.Default]
}) {}
With this approach, we can decide which queries to persist. All the consumer needs to do is call `makeCachedRequest`:
import { useQuery as useTanStackQuery } from "@tanstack/react-query";

export namespace TodosQuery {
  export const Result = Schema.Struct({
    id: Schema.String,
    title: Schema.String,
    status: Schema.Enums(TodoStatus),
    created_at: Schema.DateFromString
  }).pipe(
    Schema.rename({
      created_at: "createdAt"
    })
  );
  export type Result = typeof Result.Type;

  export const makeQueryKey = () => ["todos"] as const;

  export const useQuery = () => {
    return useTanStackQuery({
      queryKey: makeQueryKey(),
      // Note: TanStack Query's queryFn must return a Promise, so in practice
      // you'd run this Effect through your app's runtime (e.g. Runtime.runPromise)
      // at this boundary
      queryFn: () =>
        Effect.gen(function* () {
          const { makeCachedRequest } = yield* MyHttpClient;
          return yield* makeCachedRequest({
            queryKey: makeQueryKey(),
            schema: Result,
            request: HttpClientRequest.get("...")
          });
        })
    });
  };
}
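For illustration, consuming the hook in a component is then plain TanStack Query (assuming the Effect-to-Promise bridging noted above is in place):

// Hypothetical component using the namespaced query
const Todos = () => {
  const query = TodosQuery.useQuery();
  if (query.isPending) return <div>Loading...</div>;
  if (query.isError) return <div>Something went wrong</div>;
  return <div>{query.data.title}</div>;
};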
Removing Cached Data
When it comes to cache removal, I recommend grouping your queries and mutations into namespaces. This lets you keep your types and runtime code within the same entity, while neatly organizing all your query-specific logic.
For instance, if you want to remove all cached data for the `TodosQuery`, you can do something like this:
export namespace TodosQuery {
  export const Result = Schema.Struct({
    id: Schema.String,
    title: Schema.String,
    status: Schema.Enums(TodoStatus),
    created_at: Schema.DateFromString
  }).pipe(
    Schema.rename({
      created_at: "createdAt"
    })
  );
  export type Result = typeof Result.Type;

  export const makeQueryKey = () => ["todos"] as const;

  export const useQuery = () => {
    return useTanStackQuery({
      queryKey: makeQueryKey(),
      queryFn: () =>
        Effect.gen(function* () {
          const { makeCachedRequest } = yield* MyHttpClient;
          return yield* makeCachedRequest({
            queryKey: makeQueryKey(),
            schema: Result,
            request: HttpClientRequest.get("...")
          });
        })
    });
  };

  export const remove = () =>
    QueryPersister.remove(JSON.stringify(makeQueryKey())).pipe(
      Effect.andThen(() => queryClient.removeQueries({ queryKey: makeQueryKey() }))
    );
}
Setting Cached Data
When it comes to setting cached data, you’ve got two options:
- Implement a `set` function in the namespace that writes through `QueryPersister.store`
- Create a `useSyncLocal` hook for each query to periodically sync local and in-memory data, treating the latter as the source of truth
For simplicity’s sake, let’s explore the first approach.
export namespace TodosQuery {
  export const Result = Schema.Struct({
    id: Schema.String,
    title: Schema.String,
    status: Schema.Enums(TodoStatus),
    created_at: Schema.DateFromString
  }).pipe(
    Schema.rename({
      created_at: "createdAt"
    })
  );
  export type Result = typeof Result.Type;

  export const makeQueryKey = () => ["todos"] as const;

  export const useQuery = () => {
    return useTanStackQuery({
      queryKey: makeQueryKey(),
      queryFn: () =>
        Effect.gen(function* () {
          const { makeCachedRequest } = yield* MyHttpClient;
          return yield* makeCachedRequest({
            queryKey: makeQueryKey(),
            schema: Result,
            request: HttpClientRequest.get("...")
          });
        })
    });
  };

  export const remove = () =>
    QueryPersister.remove(JSON.stringify(makeQueryKey())).pipe(
      Effect.andThen(() => queryClient.removeQueries({ queryKey: makeQueryKey() }))
    );

  export const set = (updater: (current: Result | undefined) => Result | undefined) =>
    Effect.gen(function* () {
      const data = queryClient.setQueryData<Result>(makeQueryKey(), updater);
      const stringifiedQueryKey = JSON.stringify(makeQueryKey());
      if (data !== undefined) {
        yield* QueryPersister.store(stringifiedQueryKey, yield* Schema.encode(Result)(data));
      } else {
        yield* QueryPersister.remove(stringifiedQueryKey);
      }
    });

  // ...
}
As you can see, this follows the same pattern we used in `MyHttpClient.makeCachedRequest` - we just encode the data and store it in IndexedDB. Pretty straightforward!
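For example, marking the cached todo as completed while keeping the persisted copy in sync could look like this (hypothetical usage, assuming `QueryPersister.Default` is provided by your runtime):

// Hypothetical usage: update the in-memory cache and the persisted copy together
Effect.runFork(
  TodosQuery.set((current) =>
    current === undefined ? current : { ...current, status: TodoStatus.Completed }
  ).pipe(Effect.provide(QueryPersister.Default))
);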
If your queries update frequently, consider either debouncing the computation that communicates with the `QueryPersister` or exploring the second approach mentioned above.
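To make that second approach concrete, here’s a rough sketch of what a `useSyncLocal` hook could look like - the hook name, interval, and wiring are my assumptions, not part of the original design:

import { useEffect } from "react";

// Hypothetical hook: periodically mirror the in-memory query data to
// IndexedDB, treating TanStack Query's cache as the source of truth
const useSyncLocal = (intervalMs = 5_000) => {
  useEffect(() => {
    const id = setInterval(() => {
      const data = queryClient.getQueryData<TodosQuery.Result>(TodosQuery.makeQueryKey());
      if (data === undefined) return;
      Effect.runFork(
        Schema.encode(TodosQuery.Result)(data).pipe(
          Effect.flatMap((encoded) =>
            QueryPersister.store(JSON.stringify(TodosQuery.makeQueryKey()), encoded)
          ),
          Effect.provide(QueryPersister.Default)
        )
      );
    }, intervalMs);
    return () => clearInterval(id);
  }, [intervalMs]);
};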
Mutations
As for mutations, it’s not much different from queries. You can create a schema for the mutation’s input:
class Input extends Schema.Class<Input>("Input")({
  title: Schema.String,
  description: Schema.String,
  dueDate: Schema.DateTimeUtc
}) {}
And then use `IndexedDB` with the mutation’s key to encode and store the input data:
import { HttpBody } from "@effect/platform";
import { Array, Effect, Schema, identity } from "effect";
import { useMutation as useTanStackMutation } from "@tanstack/react-query";

export namespace CreateTask {
  export class Input extends Schema.Class<Input>("Input")({
    title: Schema.String,
    description: Schema.String,
    dueDate: Schema.DateTimeUtc
  }) {}

  export const makeMutationKey = () => ["createTask"] as const;

  const persisterSemaphore = Effect.unsafeMakeSemaphore(1);

  export const useMutation = () => {
    return useTanStackMutation({
      mutationKey: makeMutationKey(),
      mutationFn: (input: Input) =>
        Effect.gen(function* () {
          const { client } = yield* MyHttpClient;
          return yield* client.post("...", {
            body: yield* HttpBody.json(input)
          });
        }).pipe(
          // If the request failed because we're offline, queue the input locally
          Effect.catchIf(
            (error) => error._tag === "RequestError" && error.reason === "Transport",
            () =>
              Effect.gen(function* () {
                const key = JSON.stringify(makeMutationKey());
                const inputToStore = yield* IndexedDB.get(key).pipe(
                  // IndexedDB.get yields Option<unknown>; flatMap(identity) fails on None
                  Effect.flatMap(identity),
                  Effect.flatMap(Schema.decodeUnknown(Schema.Array(Input))),
                  Effect.map(Array.append(input)),
                  Effect.orElseSucceed((): readonly Input[] => [input]),
                  Effect.flatMap(Schema.encode(Schema.Array(Input)))
                );
                yield* IndexedDB.set(key, inputToStore);
              }).pipe(persisterSemaphore.withPermits(1))
          )
        )
    });
  };
}
We store an array of inputs since a mutation can be triggered multiple times.
Note the use of `withPermits(1)` to prevent concurrent access to stored data.
With this setup, you can create a `useSyncLocal` hook for synchronizing stored input with the server, and a `useLocal` hook for accessing stored input. Ideally, the hook would return optimistic items so users can see changes even offline and manage them as needed. A sketch of the sync side follows below.
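As a starting point, replaying the queued inputs once connectivity returns could look something like this (a hypothetical helper; error handling and retry policy are left out):

// Hypothetical replay routine: send each queued input, then clear the queue
const replayCreateTask = Effect.gen(function* () {
  const { client } = yield* MyHttpClient;
  const key = JSON.stringify(CreateTask.makeMutationKey());
  const inputs = yield* IndexedDB.get(key).pipe(
    Effect.flatMap(identity), // None (nothing queued) becomes a failure...
    Effect.flatMap(Schema.decodeUnknown(Schema.Array(CreateTask.Input))),
    Effect.orElseSucceed((): readonly CreateTask.Input[] => []) // ...which falls back to an empty queue
  );
  yield* Effect.forEach(inputs, (input) =>
    Effect.gen(function* () {
      return yield* client.post("...", { body: yield* HttpBody.json(input) });
    })
  );
  yield* IndexedDB.remove(key);
});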
Conclusion
I just want to point out that this approach is pragmatic and not perfect. Since it all revolves around validating against a single schema (without versioning or migrations), users will experience interruptions when the schema changes. It’s not ideal, but the benefits far outweigh the drawbacks:
- Simplicity: it’s a very straightforward approach that requires minimal effort
- Developer experience: you have full control over the data layer
- User experience: your application becomes noticeably more responsive - initial loads and time-to-first-action are much faster
If you want to explore this further, here’s what I recommend doing next:
- Implement a `Connection` service to manage forked persisted requests based on connection status (a rough sketch follows below)
- Create helper functions to abstract the persistence and invalidation logic for queries/mutations
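For the first item, here’s a minimal sketch of what a browser-based `Connection` service could look like, built on `navigator.onLine` and the `online` event (the shape is my assumption, not a prescribed implementation):

// Hypothetical Connection service built on browser connectivity events
export class Connection extends Effect.Service<Connection>()("Connection", {
  sync: () =>
    ({
      // Snapshot of the current connectivity status
      isOnline: Effect.sync(() => navigator.onLine),
      // Suspends until the browser reports connectivity again
      awaitOnline: Effect.async<void>((resume) => {
        if (navigator.onLine) {
          resume(Effect.void);
          return;
        }
        const onOnline = () => resume(Effect.void);
        window.addEventListener("online", onOnline, { once: true });
        // Cleanup in case the waiting fiber is interrupted
        return Effect.sync(() => window.removeEventListener("online", onOnline));
      })
    }) as const,
  accessors: true
}) {}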
Psst: you can apply this same strategy to React Native - just implement a different `IndexedDB` (or better yet, rename it to a more generic `Cache` service) and you’re good to go!