We at lunatic have been working on a Rust web framework targeting backend WebAssembly, called submillisecond. As part of our journey we looked at other web frameworks and their patterns, and one pattern stood out to me, because it’s used by almost all popular Rust web frameworks. It’s not only web frameworks that use it; other libraries do too, including the popular game engine bevy. I don’t know the “official” name for it, so I just stole one from a recent reddit post: magical handler functions.
## Traditional handler functions
Let’s explore how handler functions usually look in programming languages. Here are a few examples:
```rust
let app = Router::new()
    .route("/users", get(users))
    .route("/products", get(product));

async fn users(req: Request) -> Response {
    let params = Query::<Params>::from_request(&req);
    /* ... */
}

async fn product(req: Request) -> Response {
    let db = State::<Db>::from_request(&req);
    let data = Json::<Payload>::from_request(&req);
    /* ... */
}
```
You get a `Request` type as input and return a `Response`. The `Request` argument can also be used to extract more data out of the request, like payloads. The main advantage of this approach is that it’s fairly simple to understand and maps well to the actual client/server pattern. There is not much magic here, everything is explicit, and it’s easy to form a mental model around it.
Rust developers generally value these properties, predictability and explicitness, over magic. That’s why I’m quite surprised that most Rust web frameworks went another route. Let’s look at it!
## Magical handler functions
What are “magical handler functions” in Rust? They are functions whose signature decides what to extract from the `Request` structure, and how. It’s easier to explain them with an example. Let’s look at the following Axum code:
```rust
let app = Router::new()
    .route("/users", get(users))
    .route("/products", get(product));

async fn users(Query(params): Query<Params>) -> impl IntoResponse {
    /* ... */
}

async fn product(State(db): State<Db>, data: Json<Payload>) -> String {
    /* ... */
}
```
Notice how the handler functions passed to the router have a different number of arguments and different argument types, but magically they are turned into the right values. For example, `data` will contain the payload parsed as a `Json` type.
These functions hide a lot of information from developers, from performance characteristics to error handling behavior. So let’s take a deeper look to better understand them.
## How do magical handler functions actually work?
The router takes a generic value `F` (usually a function) that implements `IntoHandler`. `IntoHandler` is in turn implemented for functions with different numbers of arguments, where each argument implements `FromRequest`. This is a “pseudocode” view of the magic:
```rust
/// Implemented for handlers taking one argument.
impl<F, T> IntoHandler<T> for F
where
    F: Fn(T),
    T: FromRequest,
{
    fn call(self, request: Request) {
        // Turn the request into the correct type.
        let arg1 = T::from_request(&request);
        // Call the handler with the correct type.
        (self)(arg1);
    }
}

/// Implemented for handlers taking two arguments.
impl<F, T1, T2> IntoHandler<(T1, T2)> for F
where
    F: Fn(T1, T2),
    T1: FromRequest,
    T2: FromRequest,
{
    fn call(self, request: Request) {
        let arg1 = T1::from_request(&request);
        let arg2 = T2::from_request(&request);
        (self)(arg1, arg2);
    }
}
```
This version looks fairly similar to the first example of traditional handlers; we just moved the control flow into the type system. Now the types `T1` and `T2` (also called extractors) tell the handler how the request should be transformed into the right arguments.
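To make this concrete, here is a runnable sketch of the whole mechanism under simplifying assumptions: a made-up `Request` type, synchronous handlers that return `String`, and two toy extractors (`Path` and `Body`) invented for this example. The trick that makes the overlapping impls legal is the `Args` tuple parameter on the trait:

```rust
// Simplified stand-ins, not any framework's real types.
struct Request {
    path: String,
    body: String,
}

trait FromRequest: Sized {
    fn from_request(req: &Request) -> Self;
}

struct Path(String);
impl FromRequest for Path {
    fn from_request(req: &Request) -> Self {
        Path(req.path.clone())
    }
}

struct Body(String);
impl FromRequest for Body {
    fn from_request(req: &Request) -> Self {
        Body(req.body.clone())
    }
}

// The tuple of extractor types is a generic parameter on the trait, so the
// one- and two-argument impls below don't overlap.
trait IntoHandler<Args> {
    fn call(self, request: Request) -> String;
}

impl<F, T> IntoHandler<(T,)> for F
where
    F: Fn(T) -> String,
    T: FromRequest,
{
    fn call(self, request: Request) -> String {
        let arg1 = T::from_request(&request);
        (self)(arg1)
    }
}

impl<F, T1, T2> IntoHandler<(T1, T2)> for F
where
    F: Fn(T1, T2) -> String,
    T1: FromRequest,
    T2: FromRequest,
{
    fn call(self, request: Request) -> String {
        let arg1 = T1::from_request(&request);
        let arg2 = T2::from_request(&request);
        (self)(arg1, arg2)
    }
}

fn dispatch<Args, H: IntoHandler<Args>>(handler: H, request: Request) -> String {
    handler.call(request)
}

// Two handlers with different signatures; the signature alone decides
// which extractors run.
fn users(Path(path): Path) -> String {
    format!("users at {path}")
}

fn product(Path(path): Path, Body(body): Body) -> String {
    format!("product at {path} with {body}")
}

fn main() {
    let req = Request { path: "/users".into(), body: String::new() };
    assert_eq!(dispatch(users, req), "users at /users");

    let req = Request { path: "/products".into(), body: "{\"id\":1}".into() };
    assert_eq!(dispatch(product, req), "product at /products with {\"id\":1}");
    println!("ok");
}
```

Calling `dispatch` with handlers of different arities works because the compiler picks the impl whose `Fn` bound matches the handler’s signature.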
## Performance implications
The careful reader will notice that the `Request` is passed into `from_request` as a reference (`&`). That means the extractors will, most of the time, internally create a copy of some data.
With the explicit version, we have much more control over borrowing and cloning:

```rust
async fn product(req: Request) -> Response {
    let db = State::<Db>::from_request(&req);
    let data = Json::<Payload>::from_request(req);
    /* ... */
}
```
In this example we borrow the request to get a database connection, but consume it to avoid copying the payload.
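The difference can be sketched outside of any framework. The types below (a cheap `Db` handle, a `String` body, and two free functions standing in for extractors) are invented for this illustration:

```rust
#[derive(Clone)]
struct Db {
    url: String,
}

struct Request {
    db: Db,
    body: String,
}

// Borrowing extractor: only clones the (cheap) database handle.
fn state_from_request(req: &Request) -> Db {
    req.db.clone()
}

// Consuming extractor: moves the body out of the request, so no copy of the
// (potentially large) payload is made.
fn json_from_request(req: Request) -> String {
    req.body
}

fn main() {
    let req = Request {
        db: Db { url: "postgres://localhost".into() },
        body: "{\"name\":\"widget\"}".into(),
    };
    let db = state_from_request(&req); // borrow: `req` is still usable
    let data = json_from_request(req); // consume: `req` is gone now
    println!("db = {}, data = {}", db.url, data);
}
```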
Interestingly enough, the Axum framework makes some of the same optimizations, but hides them from the developer behind the order of the argument definitions (I think the last one avoids the clone?):
```rust
/// This has different performance characteristics...
async fn product1(State(db): State<Db>, data: Json<Payload>) { /* ... */ }

/// ...from this.
async fn product2(data: Json<Payload>, State(db): State<Db>) { /* ... */ }
```
I’m not a huge fan of this hidden behavior, as you need to be familiar with Axum’s implementation details to reason about performance here, and this might change in the future. Switching from framework to framework could also be confusing. For example, in submillisecond it’s the first argument that avoids the clone.
Abstraction and structural sharing also become much harder. For example, if you have another extractor that needs to get an authorization token out of the JSON body, it will need to re-parse it. Building performant abstractions out of extractors alone is impossible, because they are completely independent of each other and can’t share “work already done”.
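To illustrate the duplicated work, here is a toy request that counts how often its body is parsed; the types and the naive token lookup are invented for this sketch:

```rust
use std::cell::Cell;

// A toy request that counts how many times its body gets parsed.
struct Request {
    body: String,
    parses: Cell<usize>,
}

impl Request {
    // Stand-in for real JSON parsing; each extractor has to call it again.
    fn parse_body(&self) -> String {
        self.parses.set(self.parses.get() + 1);
        self.body.clone()
    }
}

// Extractor 1: wants the full payload.
fn json_from_request(req: &Request) -> String {
    req.parse_body()
}

// Extractor 2: only wants the token field, but must re-parse to get it.
fn token_from_request(req: &Request) -> String {
    let parsed = req.parse_body();
    // Naive field lookup, just for the sketch.
    parsed
        .split("\"token\":\"")
        .nth(1)
        .and_then(|rest| rest.split('"').next())
        .unwrap_or_default()
        .to_string()
}

fn main() {
    let req = Request {
        body: "{\"token\":\"secret\",\"name\":\"widget\"}".into(),
        parses: Cell::new(0),
    };
    let _payload = json_from_request(&req);
    let token = token_from_request(&req);
    assert_eq!(token, "secret");
    // Two independent extractors, two full parses of the same body.
    assert_eq!(req.parses.get(), 2);
    println!("parsed {} times", req.parses.get());
}
```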
## Error handling
Another interesting question is: what happens if an extractor fails? Rust is known for forcing the developer to handle every failure, but magical handler functions push the error handling into the extractor:
```rust
pub trait FromRequest {
    /// A failure needs to become a response.
    type Rejection: IntoResponse;

    /// Perform the extraction.
    fn from_request(req: &mut RequestContext)
        -> Result<Self, Self::Rejection>;
}
```
This is also not great. Developers of extractors need to provide failure responses for their extractors. This means that if you import two extractors from different libraries, they could return error pages that look completely different in style from each other, and also different from the rest of your application. As far as I know, there is no great way of unifying them, which makes these built-in responses useless in most cases. Luckily, you can catch the error and block the automatic response by wrapping an extractor in a `Result` type:
```rust
async fn product1(data: Result<Json<Payload>>) { /* ... */ }
```
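As a rough, synchronous sketch of how this escape hatch can work, assume a simplified trait with an invented `Json` extractor and rejection type; the blanket impl at the bottom is what makes the `Result` wrapper infallible:

```rust
use std::convert::Infallible;

struct Request {
    body: String,
}

struct Response {
    status: u16,
    body: String,
}

// Simplified, synchronous stand-in for the trait above.
trait FromRequest: Sized {
    type Rejection;
    fn from_request(req: &Request) -> Result<Self, Self::Rejection>;
}

struct Json(String);

// The library's rejection, turned into the library's own error page.
struct JsonRejection;

impl JsonRejection {
    fn into_response(self) -> Response {
        Response { status: 400, body: "invalid JSON".into() }
    }
}

impl FromRequest for Json {
    type Rejection = JsonRejection;
    fn from_request(req: &Request) -> Result<Self, Self::Rejection> {
        if req.body.starts_with('{') {
            Ok(Json(req.body.clone()))
        } else {
            Err(JsonRejection)
        }
    }
}

// Wrapping an extractor in `Result` makes extraction itself infallible and
// hands the rejection to the handler instead of the framework.
impl<T: FromRequest> FromRequest for Result<T, T::Rejection> {
    type Rejection = Infallible;
    fn from_request(req: &Request) -> Result<Self, Self::Rejection> {
        Ok(T::from_request(req))
    }
}

fn main() {
    let bad = Request { body: "not json".into() };

    // Unwrapped extractor: the failure becomes the library's response.
    let resp = Json::from_request(&bad).err().unwrap().into_response();
    assert_eq!(resp.status, 400);

    // Wrapped extractor: extraction succeeds and the handler sees the error.
    let extracted: Result<Json, JsonRejection> =
        <Result<Json, JsonRejection> as FromRequest>::from_request(&bad).unwrap();
    assert!(extracted.is_err());
    println!("ok");
}
```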
## Extractors as an authorization mechanism
This is something that Rocket encourages, but it gets a big NO-NO from me. The idea is simple: if extractors can fail, they can also block you from accessing some handlers. So why not do something like this?
```rust
async fn admin(admin: Admin) { /* ... */ }
```
The idea is straightforward: if a handler takes the `Admin` struct as an argument, you should not be able to reach that handler without being able to acquire the struct. However, having arguments as the only way of restricting access to handlers seems super dangerous to me. Sometimes we want to guard non-public info without actually using any data belonging to the admin user, and an unused argument can easily be removed by a developer not completely familiar with this concept.
That’s why we decided to take a different approach in submillisecond. We express our routing as a macro, so it’s easier to safeguard whole subtrees in a more robust way:
```rust
router! {
    "/short_requests" if ContentLengthGuard(128) => {
        POST "/super" if ContentLengthGuard(64) => super_short
        POST "/" => short
    }
}
```
I feel strongly about this. Once your app grows to a certain size, you will need a way of robustly guarding groups of routes. Having a more declarative mechanism for it makes changes less error-prone.
The `router!` macro in submillisecond generates just a handler function that uses a prefix tree to dispatch the request to the correct handler, depending on the URL in the request.
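The prefix-tree idea itself can be sketched in a few lines. This is a hand-rolled illustration over path segments, not the code the macro actually generates (guards, HTTP methods, and extractors are left out):

```rust
use std::collections::HashMap;

// A minimal prefix tree (trie) keyed by path segments.
struct Node {
    children: HashMap<String, Node>,
    handler: Option<fn() -> String>,
}

impl Node {
    fn new() -> Self {
        Node { children: HashMap::new(), handler: None }
    }

    fn insert(&mut self, path: &str, handler: fn() -> String) {
        let mut node = self;
        for segment in path.split('/').filter(|s| !s.is_empty()) {
            node = node.children.entry(segment.to_string()).or_insert_with(Node::new);
        }
        node.handler = Some(handler);
    }

    // Walk the tree segment by segment; each step narrows the match, so
    // dispatch cost grows with path depth, not with the number of routes.
    fn dispatch(&self, path: &str) -> Option<String> {
        let mut node = self;
        for segment in path.split('/').filter(|s| !s.is_empty()) {
            node = node.children.get(segment)?;
        }
        node.handler.map(|h| h())
    }
}

fn users() -> String { "users".to_string() }
fn product() -> String { "product".to_string() }

fn main() {
    let mut root = Node::new();
    root.insert("/users", users);
    root.insert("/products", product);
    assert_eq!(root.dispatch("/users").as_deref(), Some("users"));
    assert_eq!(root.dispatch("/missing"), None);
    println!("ok");
}
```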
## Conclusion
Exploring this pattern was an interesting journey. I still can’t tell why it’s more popular than the alternative, which contains less magic and gives more flexibility to developers. I can only assume that even Rust developers long for some magic and coolness in their tooling.
The more interesting question to me is: when does this magic become too much? Where is the red line that a typical Rust developer would not cross? As Rust’s type system keeps gaining power (higher-kinded types and other features), we will probably see much more computation hidden behind type definitions. At some point you can hardly call something zero-cost if the order in which function arguments are defined starts to have side effects.
I also have the feeling that the word “magic” is reserved for macros in the Rust community. If you manage to build the same features but hide them behind the type system, Rust developers will mostly not consider it magic. In many cases I personally prefer well-documented macros over crazy type system workarounds. At a certain level of type complexity the compiler also becomes less helpful, which degrades the experience of building simpler code on top of those abstractions.
I’m definitely not against some magic. That’s why we decided to use magical handler functions in submillisecond too, but we offer an escape hatch: the `RequestContext` is itself an extractor, so you can have a handler function with only one argument taking the whole request:

```rust
fn product(req: RequestContext) -> Response { /* ... */ }
```
Submillisecond is still very early in development, but we would love for people to try it out and give us some feedback. It has some interesting properties:
- Fast compile times
- Async-free: preemption and scheduling are done by lunatic
- Strong security: each request is handled in a separate WebAssembly instance
We are also working on a LiveView implementation for Rust built on top of submillisecond. Join our discord to stay updated!