batchMiddleware

import { batchMiddleware } from "@prestojs/rest";
batchMiddleware(options)

This can be used to batch together multiple calls to an endpoint (or endpoints) to make it more efficient.

For example, say you had an endpoint that accepted either an 'id' or a list of 'ids' and returned the corresponding records. For ease of use you could allow calling the endpoint with a body of { id: 2 } to get the record with id=2, but to minimise the number of requests you want to batch these calls together into a combined body of { ids: [2, 4, 8] }, while still transparently returning each individual record to the original caller that requested it. This might look like:

batchMiddleware({
    execute(calls) {
        // Our implementation of getBatchKey() guarantees that all the calls share the same URL (see below)
        const { resolvedUrl } = calls[0];
        // Extract the requested 'id' from each call
        const ids = calls.map(call => {
            return JSON.parse(call.requestInit.body as string).id;
        });
        // Call fetch, merge all other fetch options (headers etc) and create a
        // new body with all the extracted ids.
        return fetch(resolvedUrl, {
            // In this example all calls share the same headers etc
            ...calls[0].requestInit,
            body: JSON.stringify({ ids }),
        });
    },
    // You can also access `response` in addition to `result` if you need the raw `Response` object.
    resolve(call, { result }) {
        // For each call to the endpoint extract only the record it specifically requested
        return result[JSON.parse(call.requestInit.body as string).id];
    },
})

But how do we distinguish between two completely different endpoints? getBatchKey can be used to determine how calls are batched together:

batchMiddleware({
    getBatchKey(call) {
        // Batch together all calls that have identical URL (including query parameters)
        return call.resolvedUrl;
    },
    ...
});

If the fetch options for each call may differ then you can combine them using mergeRequestInit. This combines multiple init arguments to fetch into a single init argument. Headers are combined into a single headers object, with the last argument taking precedence in the case of a conflict. Any other init options use the value from the last argument passed.

fetch(resolvedUrl, {
    ...mergeRequestInit(...calls.map(call => call.requestInit)),
    body: JSON.stringify({ ids }),
});

The process for batching looks like:

  • Endpoint is called as normal and hits batchMiddleware
  • options.getBatchKey is called.
    • If this returns false the call proceeds to the next middleware in the chain as normal.
    • If this returns anything else, that value is used as the batch key. If a batch with that key already exists this call is added to it; otherwise a new batch is created and its execution is scheduled in batchDelay milliseconds. batchMiddleware then skips any further middleware in the chain and waits for the fetch call.
  • Any further calls to the endpoint before batchDelay elapses are added to the batch
  • Once batchDelay elapses, execute is called, which combines all the batched calls into a single call to fetch
    • Once fetch finishes, decodeBody is called.
    • resolve is then called on success (2xx status) or reject on error (non-2xx status).
  • The middleware stack continues to unwind and all middleware before batchMiddleware can handle the response/error returned by resolve/reject.
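The scheduling steps above can be sketched as a minimal standalone batcher. All names here are invented for illustration; this is not the library's implementation, just the same idea: calls made within batchDelay milliseconds are combined into one execute, and each caller receives only its own result:

```typescript
// Minimal sketch of delay-based batching (hypothetical types and names).
type Call = { id: number };
type Pending = { call: Call; resolve: (value: string) => void };

function createBatcher(
    execute: (calls: Call[]) => Promise<Record<number, string>>,
    batchDelay = 10
) {
    let batch: Pending[] | null = null;
    return (call: Call): Promise<string> =>
        new Promise(resolve => {
            if (batch === null) {
                // First call: open a new batch and schedule its execution
                const current: Pending[] = [{ call, resolve }];
                batch = current;
                setTimeout(() => {
                    batch = null; // later calls will start a fresh batch
                    execute(current.map(entry => entry.call)).then(result => {
                        // Split the combined result back out to each caller
                        for (const entry of current) {
                            entry.resolve(result[entry.call.id]);
                        }
                    });
                }, batchDelay);
            } else {
                // A batch is already open within the delay window: join it
                batch.push({ call, resolve });
            }
        });
}
```

Two calls made back-to-back resolve from one execute, each with only the record it asked for.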

NOTE: You can have multiple batchMiddleware in the middleware chain for an Endpoint (including global middleware), but only the first one that chooses to batch a given call will apply (the others are skipped). This allows you to have multiple conditional batching middleware, for example. All batchMiddleware must appear last in the chain (ie. a batchMiddleware can only be followed by another batchMiddleware).

Parameters

options.batchDelay
number

The time in ms to delay execution of a batch. During this period any calls made to the endpoint will be added to the same batch. The delay begins as soon as the first item is added to the batch.

The default is 10

options.decodeBody
Function

Function used to decode the body of the response.

If not provided defaults to

  • If content-type includes 'json' (eg. application/json) returns decoded json
  • If content-type includes 'text' (eg. text/plain, text/html) returns text
  • If status is 204 or 205 will return null
  • Otherwise Response object itself is returned

This does not use the Endpoint decodeBody as the fetch call itself is specific to the batching and may not behave the same way. Batching could also happen across multiple different Endpoints in which case there would be multiple decodeBody functions to choose from.
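The default decoding rules above can be sketched as follows. This is an assumed implementation for illustration: the ResponseLike type and defaultDecodeBodySketch name are invented, the empty-body statuses are checked first here, and unlike the real Response the mock's json()/text() are synchronous:

```typescript
// Sketch of the default decode behaviour listed above (assumed ordering).
type ResponseLike = {
    status: number;
    headers: { get(name: string): string | null };
    json(): unknown;
    text(): unknown;
};

function defaultDecodeBodySketch(response: ResponseLike): unknown {
    // 204 No Content / 205 Reset Content have no body to decode
    if (response.status === 204 || response.status === 205) return null;
    const contentType = response.headers.get('content-type') ?? '';
    if (contentType.includes('json')) return response.json();
    if (contentType.includes('text')) return response.text();
    // Anything else (eg. binary data): hand back the Response itself
    return response;
}
```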

*options.execute
Function

Execute the batch. This involves combining the individual calls into a single call and then calling fetch.

This should return a Promise that resolves to a Response (eg. the return value of fetch).

options.getBatchKey
Function

Get the key for this batch. Return false to exclude this call from being batched.

If not provided a single batch will be generated.

You can use this function to create different batches. All calls with the same batch key are batched together.
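How keys partition calls can be illustrated with a small hypothetical helper (groupByBatchKey is invented for this example and is not part of the library): calls sharing a key end up in the same batch, and returning false opts a call out entirely.

```typescript
// Hypothetical helper illustrating how getBatchKey partitions calls.
function groupByBatchKey<T>(
    calls: T[],
    getBatchKey: (call: T) => string | false
): Map<string, T[]> {
    const batches = new Map<string, T[]>();
    for (const call of calls) {
        const key = getBatchKey(call);
        if (key === false) continue; // this call proceeds un-batched
        const batch = batches.get(key) ?? [];
        batch.push(call);
        batches.set(key, batch);
    }
    return batches;
}
```

For example, keying GET calls by URL and returning false for everything else batches the two GETs to '/users' together while the POST is excluded.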

options.reject
Function

Called after execute has resolved if an error is thrown. This can either handle the error and return the expected result, or throw another error.

If not provided defaults to re-throwing the error.

This is called for each endpoint call in the batch.

*options.resolve
Function

After execute has finished, resolve is called for each endpoint call in the batch with the return value of execute and should return the specific value for that call. Whereas execute combines all the calls into a single request, resolve splits the response back into the specific parts each call requires (if applicable).

Middleware function

Middleware function to pass to Endpoint or set on Endpoint.defaultConfig.middleware