batchMiddleware
This can be used to batch together multiple calls to an endpoint (or endpoints) to make them more efficient.
For example, say you had an endpoint that accepted either an 'id' or a list of 'ids' and returned the corresponding records. For ease of use you could allow calling the endpoint with a body of `{ id: 2 }` to get the record with id=2, but to minimise the number of requests you want to batch any of these calls into a combined body of `{ ids: [2, 4, 8] }` while still transparently returning each individual record to the original caller who requested it. This might look like:
```ts
batchMiddleware({
    execute(calls) {
        // Our implementation of getBatchKey() guarantees that all the endpoints are the same URL (see below)
        const { resolvedUrl } = calls[0];
        // Extract the requested 'id' from each call
        const ids = calls.map(call => {
            return JSON.parse(call.requestInit.body as string).id;
        });
        // Call fetch, merge all other fetch options (headers etc) and create a
        // new body with all the extracted ids.
        return fetch(resolvedUrl, {
            // In this example all calls share the same headers etc
            ...calls[0].requestInit,
            body: JSON.stringify({ ids }),
        });
    },
    // You can also access `response` in addition to `result` if you need the raw `Response` object.
    resolve(call, { result }) {
        // For each call to the endpoint extract only the record it specifically requested
        return result[JSON.parse(call.requestInit.body as string).id];
    },
})
```
But how do we distinguish between two completely different endpoints? `getBatchKey` can be used to determine how calls are batched together:
```ts
batchMiddleware({
    getBatchKey(call) {
        // Batch together all calls that have identical URL (including query parameters)
        return call.resolvedUrl;
    },
    ...
});
```
If each call has options to `fetch` that may differ, you can combine them using `mergeRequestInit`. This combines multiple `init` arguments to `fetch` into a single `init` argument. Headers are combined into a single headers object, with the last argument taking precedence in the case of a conflict. Any other `init` options use the value from the last argument passed.
```ts
fetch(resolvedUrl, {
    ...mergeRequestInit(...calls.map(call => call.requestInit)),
    body: JSON.stringify({ ids }),
});
```
The process for batching looks like:

- Endpoint is called like normal and hits `batchMiddleware`.
- `options.getBatchKey` is called.
  - If this is `false` then the call proceeds to the next middleware in the chain.
  - If this returns anything else, that value is used as the batch key. If a batch with that key already exists this call is added to it, otherwise a new batch is created and its execution is scheduled in `batchDelay` milliseconds. The `batchMiddleware` will then skip any further middleware in the chain and wait for the `fetch` call. (A conditional sketch follows this list.)
- Any further calls to an endpoint before the `batchDelay` time elapses are added to the batch.
- Once `batchDelay` time elapses, `execute` is called, which combines all the batched calls into a single call to `fetch`.
- Once `fetch` finishes, `decodeBody` is called. `resolve` is then called on success (2xx status) or `reject` on error (non-2xx status).
- The middleware stack continues to unwind and all middleware before `batchMiddleware` can handle the response/error returned by `resolve`/`reject`.
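As a sketch of the `getBatchKey` branching described above: the URL check and the `/api/records/` path are illustrative assumptions, and the remaining options (`execute`/`resolve`, as in the example at the top of this page) are elided.

```ts
batchMiddleware({
    getBatchKey(call) {
        // Hypothetical condition: only batch single-record lookups against this URL.
        // Returning false sends any other call straight to the next middleware unbatched.
        if (!call.resolvedUrl.startsWith('/api/records/')) {
            return false;
        }
        // Calls that share a URL (including query parameters) share a batch.
        return call.resolvedUrl;
    },
    ...
});
```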
NOTE: You can have multiple `batchMiddleware` in the middleware chain for an `Endpoint` (including global middleware), but only the first one that chooses to batch an item will ever apply (the others will be skipped). This allows you to have multiple conditional batching middleware, for example. All `batchMiddleware` must appear last in the chain (i.e. a `batchMiddleware` can only be followed by another `batchMiddleware`).
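As a minimal sketch of that ordering: `loggingMiddleware` and the `/api/records/` URL check are hypothetical placeholders, and the remaining batch options are elided as in the examples above.

```ts
const middleware = [
    // Any non-batching middleware (hypothetical example) must come earlier in the chain...
    loggingMiddleware(),
    // ...and all batchMiddleware appear last. For a given call, only the first
    // batchMiddleware whose getBatchKey does not return false will batch it.
    batchMiddleware({
        getBatchKey: call => (call.resolvedUrl.startsWith('/api/records/') ? call.resolvedUrl : false),
        ...
    }),
    batchMiddleware({
        getBatchKey: call => call.resolvedUrl,
        ...
    }),
];
```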
| | Parameter | Type | Description |
|---|---|---|---|
| | `options.batchDelay` | number | The time in ms to delay execution of a batch. During this period any calls made to the endpoint will be added to the same batch. The delay begins as soon as the first item is added to the batch. The default is … |
| | `options.decodeBody` | Function | Function used to decode the body of the response. If not provided defaults to … This does not use the … |
| * | `options.execute` | Function | Execute the batch. This involves combining the individual calls into a single call and then calling `fetch`. This should return a Promise that resolves to a `Response`. |
| | `options.getBatchKey` | Function | Get the key for this batch. Return `false` to exclude this call from being batched. If not provided a single batch will be generated. You can use this function to create different batches; all calls with the same batch key are batched together. |
| | `options.reject` | Function | Called after `decodeBody` when the response is an error (non-2xx status). If not provided defaults to re-throwing the error. This is called for each endpoint call in the batch. |
| * | `options.resolve` | Function | Called after `decodeBody` on success (2xx status), once for each endpoint call in the batch, to extract the value to return to that caller. |
Returns: a middleware function to pass to `Endpoint` or set on `Endpoint.defaultConfig.middleware`.
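For instance, global registration might look like the following sketch; it assumes `Endpoint.defaultConfig.middleware` is an array (as the description above implies) and elides the batch options, which would be an `execute`/`resolve` pair like the example at the top of this page.

```ts
Endpoint.defaultConfig.middleware = [
    // Keep any existing default middleware first; batch middleware must come last.
    ...Endpoint.defaultConfig.middleware,
    batchMiddleware({
        // execute / resolve as in the example at the top of this page
        ...
    }),
];
```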