Rust's web framework ecosystem is in constant flux, but a new framework called warp recently came out, implementing an original way to solve the old problem of transforming a request into a response, and I wanted to give it a try.
And, as I use GraphQL heavily at work, I also wanted to check how well Juniper implements it. To add some spice, I used MongoDB as the storage engine instead of the ubiquitous and well-supported SQL databases.
The good old serde will handle serialization.
And I also wanted to try failure, for fun.
The project
I created a toy project to test the whole setup. This is the project's directory structure:
.
├── Cargo.lock
├── Cargo.toml
├── src
│   ├── db.rs
│   ├── error.rs
│   ├── gql.rs
│   ├── main.rs
│   └── model.rs
└── target
I split the main functionalities into root-level modules. As the project gets more complex, they can be moved into their own directories, but for the sake of simplicity, let's keep a flat structure.
This is the Cargo.toml file:
[package]
name = "gql-server"
version = "0.1.0"
authors = ["Alessandro Pellizzari <alex@example.org>"]

[dependencies]
juniper = "0.9"
mongodb = "0.3"
bson = "0.12"
serde = "1"
serde_derive = "1"
warp = "0.1"
serde_json = "1"
failure = "0.1"
failure_derive = "0.1"
The main entry point
Let's take a look at the main.rs file, as it's the entry point to the whole app and the one defining the web server routes and handlers.
extern crate failure;
#[macro_use]
extern crate failure_derive;
#[macro_use]
extern crate serde_derive;
extern crate serde;
extern crate serde_json;
#[macro_use]
extern crate juniper;
#[macro_use(bson, doc)]
extern crate bson;
extern crate mongodb;
extern crate warp;

use std::sync::Arc;
use warp::{filters::BoxedFilter, Filter};

mod db;
mod error;
mod gql;
mod model;

use db::Db;
use gql::{Mutations, Query};

#[derive(Clone)]
pub struct Context {
    pub db: Arc<Db>,
}

impl juniper::Context for Context {}

pub type Schema = juniper::RootNode<'static, Query, Mutations>;

fn main() {
    let ctx = Context {
        db: Arc::new(Db::new("mydb")),
    };

    let schema = Schema::new(Query, Mutations);

    let gql_index = warp::get2().and(warp::index()).and_then(web_index);
    let gql_query = make_graphql_filter("query", schema, ctx);

    let routes = gql_index.or(gql_query);

    warp::serve(routes).unstable_pipeline().run(([127, 0, 0, 1], 3030))
}

pub fn web_index() -> Result<impl warp::Reply, warp::Rejection> {
    Ok(warp::http::Response::builder()
        .header("content-type", "text/html; charset=utf-8")
        .body(juniper::graphiql::graphiql_source("/query"))
        .expect("response is valid"))
}

pub fn make_graphql_filter<Query, Mutation, Context>(
    path: &'static str,
    schema: juniper::RootNode<'static, Query, Mutation>,
    ctx: Context,
) -> BoxedFilter<(impl warp::Reply,)>
where
    Context: juniper::Context + Send + Sync + Clone + 'static,
    Query: juniper::GraphQLType<Context = Context, TypeInfo = ()> + Send + Sync + 'static,
    Mutation: juniper::GraphQLType<Context = Context, TypeInfo = ()> + Send + Sync + 'static,
{
    let schema = Arc::new(schema);
    let context_extractor = warp::any().map(move || -> Context { ctx.clone() });

    let handle_request =
        move |context: Context, request: juniper::http::GraphQLRequest| -> Result<Vec<u8>, serde_json::Error> {
            serde_json::to_vec(&request.execute(&schema, &context))
        };

    warp::post2()
        .and(warp::path(path.into()))
        .and(context_extractor)
        .and(warp::body::json())
        .map(handle_request)
        .map(build_response)
        .boxed()
}

fn build_response(response: Result<Vec<u8>, serde_json::Error>) -> warp::http::Response<Vec<u8>> {
    match response {
        Ok(body) => warp::http::Response::builder()
            .header("content-type", "application/json; charset=utf-8")
            .body(body)
            .expect("response is valid"),
        Err(_) => warp::http::Response::builder()
            .status(warp::http::StatusCode::INTERNAL_SERVER_ERROR)
            .body(Vec::new())
            .expect("status code is valid"),
    }
}
Let's split it up. The first part is just extern crates, followed by a couple of uses and the definition of our modules, so we can skip it.
Type definitions
The first interesting part is the Context definition:
#[derive(Clone)]
pub struct Context {
    pub db: Arc<Db>,
}
As we'll see later, Db is just a struct containing our MongoDB Client. This Context should contain all the "global" data that must be available in the GraphQL resolvers. It needs to be wrapped in an Arc (and possibly an RwLock, if mutable) as warp uses futures that can be spawned on different threads.
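For example, if the context also had to carry some mutable shared state, a minimal sketch (with a hypothetical Config type, not part of this project) could look like this:

use std::sync::{Arc, RwLock};

#[derive(Clone)]
pub struct Context {
    pub db: Arc<Db>,
    // Hypothetical mutable state, shared between threads:
    pub config: Arc<RwLock<Config>>,
}

// A resolver would then lock it for reading:
// let cfg = context.config.read().expect("lock poisoned");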
The Juniper definitions are quite simple:
impl juniper::Context for Context {}
pub type Schema = juniper::RootNode<'static, Query, Mutations>;
The first line declares that our Context is a valid Juniper Context by implementing an empty marker trait on it.
The second one creates a type alias for the GraphQL root node, which defines the queries and the mutations.
Main
The main function looks like this:
fn main() {
    let ctx = Context {
        db: Arc::new(Db::new("mydb")),
    };

    let schema = Schema::new(Query, Mutations);

    let gql_index = warp::get2().and(warp::index()).and_then(web_index);
    let gql_query = make_graphql_filter("query", schema, ctx);

    let routes = gql_index.or(gql_query);

    warp::serve(routes).unstable_pipeline().run(([127, 0, 0, 1], 3030))
}
It starts by initializing the Context with the connection to the DB. In this case we just panic if the connection is not available, which is fine if you deploy this in a self-healing environment like Kubernetes. Otherwise you'll want to handle the error.
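If you do want to handle it, a minimal sketch could look like this, assuming a hypothetical Db::try_new constructor that returns a Result instead of panicking:

// Hypothetical: Db::try_new would return Result<Db, error::Error>.
let db = match Db::try_new("mydb") {
    Ok(db) => db,
    Err(e) => {
        eprintln!("Cannot connect to MongoDB: {}", e);
        std::process::exit(1);
    }
};
let ctx = Context { db: Arc::new(db) };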
The next step is to build the root schema by creating it with the queries and mutations (which will be defined in the gql module later).
Everything else is just warp route definitions.
Routes
The first one is simple:
let gql_index = warp::get2().and(warp::index()).and_then(web_index);
It defines a GET on / (matched by warp::index()) and, when it matches, calls web_index (a function defined later that will return the GraphiQL page).
The second one (gql_query) is a bit more complicated, so it's created in a dedicated function to keep the code clean.
Finally, the individual routes are joined into a router and the web server is started on 127.0.0.1:3030:
let routes = gql_index.or(gql_query);
warp::serve(routes).unstable_pipeline().run(([127, 0, 0, 1], 3030))
The router just says "serve gql_index or, if it doesn't match, serve gql_query". Everything else will be a 405, unless a 404 route is defined.
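Such a 404 route could be sketched like this (untested, reusing only the pieces we've already seen):

// Catch-all filter answering an empty 404 to anything the other routes didn't match:
let not_found = warp::any().map(|| {
    warp::http::Response::builder()
        .status(warp::http::StatusCode::NOT_FOUND)
        .body(Vec::new())
        .expect("status code is valid")
});

let routes = gql_index.or(gql_query).or(not_found);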
The GraphiQL page
I took and adapted the next three functions from the great PR by tomhoule on the Juniper project. Maybe by the time you read this it will have been merged and available as an official example.
The handler that will serve the GraphiQL page is quite simple.
pub fn web_index() -> Result<impl warp::Reply, warp::Rejection> {
    Ok(warp::http::Response::builder()
        .header("content-type", "text/html; charset=utf-8")
        .body(juniper::graphiql::graphiql_source("/query"))
        .expect("response is valid"))
}
It just builds a response by setting the correct Content-Type and using Juniper's helper function, which returns an HTML page with the GraphiQL interface and loads all the static resources (CSS and JS) from a CDN.
I'm still not quite sure how to handle errors better in this case. I will update this post when/if I find a way.
The query endpoint
This is the core of the application: the handler for GraphQL queries. It will decode the request, pass it to Juniper and encode the response:
pub fn make_graphql_filter<Query, Mutation, Context>(
    path: &'static str,
    schema: juniper::RootNode<'static, Query, Mutation>,
    ctx: Context,
) -> BoxedFilter<(impl warp::Reply,)>
where
    Context: juniper::Context + Send + Sync + Clone + 'static,
    Query: juniper::GraphQLType<Context = Context, TypeInfo = ()> + Send + Sync + 'static,
    Mutation: juniper::GraphQLType<Context = Context, TypeInfo = ()> + Send + Sync + 'static,
{
    let schema = Arc::new(schema);
    let context_extractor = warp::any().map(move || -> Context { ctx.clone() });

    let handle_request =
        move |context: Context, request: juniper::http::GraphQLRequest| -> Result<Vec<u8>, serde_json::Error> {
            serde_json::to_vec(&request.execute(&schema, &context))
        };

    warp::post2()
        .and(warp::path(path.into()))
        .and(context_extractor)
        .and(warp::body::json())
        .map(handle_request)
        .map(build_response)
        .boxed()
}
The first thing it does is wrap the schema in an Arc so it can be safely shared across threads.
It's easier to understand starting from the last part. It defines a new warp filter accepting POST requests on the path we provided (/query, as you can see in the main function), and adds to the request the Context (by calling context_extractor on every request) and the body of the request itself, decoding it from JSON and producing a GraphQLRequest. It will then handle it (via the handle_request function) and transform the result from Juniper into a proper JSON response (via the build_response function).
context_extractor is a warp filter that just passes the global context into the handler by cloning it (and, as it is just an Arc, this doesn't copy any memory, it just increases a reference counter).
handle_request is a handler that takes the Context and the GraphQLRequest we just added to the request, passes them to Juniper via execute, and transforms the result into a JSON byte vector (a bunch of bytes, basically).
build_response is quite simple: if the JSON encoding went well, return the body with the correct Content-Type. If it failed, return an error.
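For reference, the JSON body that warp::body::json() decodes into a GraphQLRequest follows the standard GraphQL-over-HTTP shape, and the serialized result follows the same convention. A request body like

{ "query": "{ apiVersion }", "variables": null }

would produce

{ "data": { "apiVersion": "1.0" } }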
GraphQL
The gql module in gql.rs is, thanks to Juniper, quite easy to read:
use bson::oid::ObjectId;
use juniper::FieldResult;

use super::Context;
use model::Product;

pub struct Query;
pub struct Mutations;

graphql_object!(Product: Context |&self| {
    field id() -> String { if let Some(ref id) = self.id { id.to_hex() } else { "".into() } }
    field name() -> &str { self.name.as_str() }
    field slug() -> &str { self.slug.as_str() }
    field tp() -> i32 { self.tp }
    field qty() -> i32 { self.qty }
    field price() -> i32 { self.price }
    field width() -> i32 { self.width }
    field height() -> i32 { self.height }
    field depth() -> i32 { self.depth }
    field weight() -> i32 { self.weight }
    field description() -> &str { self.description.as_str() }
});

graphql_object!(Query: Context |&self| {
    field apiVersion() -> &str {
        "1.0"
    }

    field products(&executor) -> FieldResult<Vec<Product>> {
        let context = executor.context();
        Ok(context.db.list_products()?)
    }

    field product(&executor, id: String) -> FieldResult<Option<Product>> {
        let context = executor.context();
        Ok(context.db.get_product(&id)?)
    }
});

graphql_object!(Mutations: Context |&self| {
    field saveProduct(&executor,
        id: Option<String>,
        name: String,
        slug: Option<String>,
        tp: i32,
        qty: i32,
        price: i32,
        width: Option<i32>,
        height: Option<i32>,
        depth: Option<i32>,
        weight: Option<i32>,
        description: Option<String>,
    ) -> FieldResult<Option<Product>> {
        let context = executor.context();

        let id = id.map(|id| ObjectId::with_string(&id)).map_or(Ok(None), |v| v.map(Some))?;

        let product = Product {
            id: id,
            name: name,
            slug: slug.unwrap_or_else(|| "".into()),
            tp: tp,
            qty: qty,
            price: price,
            width: width.unwrap_or(0),
            height: height.unwrap_or(0),
            depth: depth.unwrap_or(0),
            weight: weight.unwrap_or(0),
            description: description.unwrap_or_else(|| "".into()),
        };

        Ok(context.db.save_product(product)?)
    }
});
It starts by importing some pieces from libraries and other modules, then defines Query and Mutations as empty structs. All the implementation is done via Juniper's macros.
Queries
Let's ignore the Product definition for now and focus on the Query:
graphql_object!(Query: Context |&self| {
    field apiVersion() -> &str {
        "1.0"
    }

    field products(&executor) -> FieldResult<Vec<Product>> {
        let context = executor.context();
        Ok(context.db.list_products()?)
    }

    field product(&executor, id: String) -> FieldResult<Option<Product>> {
        let context = executor.context();
        Ok(context.db.get_product(&id)?)
    }
});
This defines three queries: apiVersion just returns a string with the version of the API. It's not strictly needed, but it's nice to have (if you remember to bump the version when needed).
products will return an array of all the products in the DB. All the resolvers (like this one) receive a reference to the executor (the Juniper object that processes the query), from which we can retrieve the Context we defined when we created the warp handler.
In this case it's just a matter of calling a function on the db we put in the Context, converting the errors into a form Juniper likes (via the ? operator and the error converters we'll see later), and there is no step 3.
There is no parameter on the query (we just return the whole content of the DB; we might think about adding pagination later), and we always return an array of products (the result is Vec<Product> and not Option<Vec<Product>> or Vec<Option<Product>> or, worse, Option<Vec<Option<Product>>>). The array could be empty, and that's fine.
The same goes for product, which returns the product with the given id. As you can see here, the id in the query is required, as it's defined as a Rust String, while the result could be empty (no product matches the id in the DB), so the result is defined as Option<Product> and not just Product.
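To make this concrete, here is the kind of query you could run from the GraphiQL page once the server is up (the id here is a made-up hex string):

{
  apiVersion
  products {
    id
    name
    price
  }
  product(id: "5b9ede3000000000001c8f3a") {
    name
    qty
  }
}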
The Product type
Let's go back to the Product object now. This is how it's defined in the model module:
use bson::oid::ObjectId;

#[derive(Serialize, Deserialize, Debug)]
pub struct Product {
    #[serde(rename = "_id")]
    pub id: Option<ObjectId>,
    #[serde(default)]
    pub name: String,
    #[serde(default)]
    pub slug: String,
    #[serde(default)]
    pub tp: i32,
    #[serde(default)]
    pub qty: i32,
    #[serde(default)]
    pub price: i32,
    #[serde(default)]
    pub width: i32,
    #[serde(default)]
    pub height: i32,
    #[serde(default)]
    pub depth: i32,
    #[serde(default)]
    pub weight: i32,
    #[serde(default)]
    pub description: String,
}
And like this in the gql module:
graphql_object!(Product: Context |&self| {
    field id() -> String { if let Some(ref id) = self.id { id.to_hex() } else { "".into() } }
    field name() -> &str { self.name.as_str() }
    field slug() -> &str { self.slug.as_str() }
    field tp() -> i32 { self.tp }
    field qty() -> i32 { self.qty }
    field price() -> i32 { self.price }
    field width() -> i32 { self.width }
    field height() -> i32 { self.height }
    field depth() -> i32 { self.depth }
    field weight() -> i32 { self.weight }
    field description() -> &str { self.description.as_str() }
});
This is due to the fact that Juniper doesn't know about MongoDB's ObjectId type, so we need to define resolvers for each of the fields. All of them just return the data unaltered, except the id() one, which converts the ObjectId into a String that Juniper knows.
Maybe there is a way to define a Scalar in Juniper to handle it, but I didn't investigate, for now.
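For the record, a rough, untested sketch of that approach, using Juniper's graphql_scalar! macro, could look like the following. Note the newtype wrapper (which I'm calling Oid here): it would be needed anyway, as ObjectId is a foreign type.

// Assumes: use juniper::{InputValue, Value};
pub struct Oid(pub ObjectId);

graphql_scalar!(Oid {
    description: "A MongoDB ObjectId, represented as a hex string"

    resolve(&self) -> Value {
        Value::string(&self.0.to_hex())
    }

    from_input_value(v: &InputValue) -> Option<Oid> {
        v.as_string_value()
            .and_then(|s| ObjectId::with_string(s).ok())
            .map(Oid)
    }
});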
Mutation
This is chatty, but the structure is quite simple, as all the logic is in the db module:
graphql_object!(Mutations: Context |&self| {
    field saveProduct(&executor,
        id: Option<String>,
        name: String,
        slug: Option<String>,
        tp: i32,
        qty: i32,
        price: i32,
        width: Option<i32>,
        height: Option<i32>,
        depth: Option<i32>,
        weight: Option<i32>,
        description: Option<String>,
    ) -> FieldResult<Option<Product>> {
        let context = executor.context();

        let id = id.map(|id| ObjectId::with_string(&id)).map_or(Ok(None), |v| v.map(Some))?;

        let product = Product {
            id: id,
            name: name,
            slug: slug.unwrap_or_else(|| "".into()),
            tp: tp,
            qty: qty,
            price: price,
            width: width.unwrap_or(0),
            height: height.unwrap_or(0),
            depth: depth.unwrap_or(0),
            weight: weight.unwrap_or(0),
            description: description.unwrap_or_else(|| "".into()),
        };

        Ok(context.db.save_product(product)?)
    }
});
Don't think too much about the fields. There could be more, there could be fewer.
As this endpoint will be used both for inserting new products (the id is null) and for updating them (the id is not null), id in Product is defined as an Option<ObjectId> instead of just ObjectId, and there is a bit of logic to take care of the null case.
And, as ObjectId::with_string() could return an error (invalid id), there is a .map_or() expression that transforms an Option<Result<ObjectId>> into a Result<Option<ObjectId>>, while the ? at the end of the line gets rid of the Result part: it returns immediately if there is an error, or continues with just the Option<ObjectId> if not.
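The same Option/Result flip can be seen in isolation in this minimal, self-contained example:

// Parse an optional string, propagating the error only if a string
// is present but invalid:
fn parse_opt(id: Option<&str>) -> Result<Option<i32>, std::num::ParseIntError> {
    let maybe = id.map(|s| s.parse::<i32>()); // Option<Result<i32, _>>
    maybe.map_or(Ok(None), |r| r.map(Some))   // Result<Option<i32>, _>
}

fn main() {
    assert_eq!(parse_opt(None), Ok(None));
    assert_eq!(parse_opt(Some("42")), Ok(Some(42)));
    assert!(parse_opt(Some("nope")).is_err());
}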
For all the optional fields in the GraphQL parameters (the Option<...> ones) we just take default values, for now.
And finally, this weird construct (Ok(something?)) passes everything to the DB and possibly converts the error.
Db
Now comes the messy part: the database access.
use bson::{from_bson, oid::ObjectId, to_bson, Bson, Document};
use mongodb::{
    coll::options::{FindOneAndUpdateOptions, ReturnDocument, UpdateOptions},
    coll::Collection,
    cursor::Cursor,
    db::ThreadedDatabase,
    Client, ThreadedClient,
};

use error::Error;
use model::Product;

pub struct Db {
    client: Client,
    db_name: String,
}

impl Db {
    pub fn new<S>(db_name: S) -> Db
    where
        S: ToString,
    {
        let db_name = db_name.to_string();
        let client = Client::connect("localhost", 27017).expect("Failed to initialize client.");

        Db { client, db_name }
    }

    pub fn list_products(&self) -> Result<Vec<Product>, Error> {
        let coll: Collection = self.client.db(&self.db_name).collection("products");
        let cursor = coll.find(None, None)?;

        let res: Result<Vec<_>, _> = cursor
            .map(|row| row.and_then(|item| Ok(from_bson::<Product>(Bson::Document(item))?)))
            .collect();

        Ok(res?)
    }

    pub fn get_product(&self, id: &str) -> Result<Option<Product>, Error> {
        let coll: Collection = self.client.db(&self.db_name).collection("products");
        let cursor: Option<Document> = coll.find_one(Some(doc! { "_id": ObjectId::with_string(id)? }), None)?;

        cursor
            .map(|doc| Ok(from_bson::<Product>(Bson::Document(doc))?))
            .map_or(Ok(None), |v| v.map(Some))
    }

    pub fn save_product(&self, prod: Product) -> Result<Option<Product>, Error> {
        let coll: Collection = self.client.db(&self.db_name).collection("products");

        if let Bson::Document(mut doc) = to_bson(&prod)? {
            doc.remove("_id");

            if let Some(ref id) = prod.id {
                let filter = doc! { "_id": Bson::ObjectId(id.clone()) };
                let write_options = FindOneAndUpdateOptions {
                    return_document: Some(ReturnDocument::After),
                    ..Default::default()
                };

                let res = coll.find_one_and_replace(filter, doc, Some(write_options))?;
                if let Some(res) = res {
                    Ok(Some(from_bson::<Product>(Bson::Document(res))?))
                } else {
                    Err(Error::Custom("No data returned after update".into()))
                }
            } else {
                let res = coll.insert_one(doc, None)?;

                if let Some(exception) = res.write_exception {
                    return Err(Error::from(exception));
                }

                if let Some(inserted_id) = res.inserted_id {
                    if let Bson::ObjectId(id) = inserted_id {
                        self.get_product(&id.to_hex())
                    } else {
                        Err(Error::Custom("No valid id returned after insert".into()))
                    }
                } else {
                    Err(Error::Custom("No data returned after insert".into()))
                }
            }
        } else {
            Err(Error::Custom("Invalid document".into()))
        }
    }
}
Try to ignore the save function for now.
Apart from the million things we need to import from the mongodb crate, there is the definition of the Db struct, which just contains the Mongo Client and the name of the database in MongoDB. And, in the implementation part, the constructor, taking just the database name. The URL is hardcoded, but it could easily be passed via the constructor.
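A variant taking the connection details (hypothetical, not in the actual code) could look like this:

impl Db {
    // Alternative constructor with an explicit host and port:
    pub fn with_host<S: ToString>(host: &str, port: u16, db_name: S) -> Db {
        let client = Client::connect(host, port).expect("Failed to initialize client.");
        Db { client, db_name: db_name.to_string() }
    }
}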
Get the list of products
The first function gets the list of all the products from the DB:
pub fn list_products(&self) -> Result<Vec<Product>, Error> {
    let coll: Collection = self.client.db(&self.db_name).collection("products");
    let cursor = coll.find(None, None)?;

    let res: Result<Vec<_>, _> = cursor
        .map(|row| row.and_then(|item| Ok(from_bson::<Product>(Bson::Document(item))?)))
        .collect();

    Ok(res?)
}
I will admit it, it took me a while to figure this out (it took even longer for the single-product query, and an insane amount of time for the save, as the documentation for the MongoDB crate is... sparse. But I digress).
So, we query the client, find the database, from there find the collection, and then we can run the query via .find. By passing no filter (the first None) and no options (the second None) we retrieve everything, unsorted and unfiltered.
We get back a Cursor over which we can map to transform bson objects into Product objects with the power of serde.
from_bson returns a Result that we abuse via the usual ? operator to stop the map function as soon as we get an error.
If everything goes well, we just collect, and Rust knows what to do because we told it in the function signature.
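That collect-into-Result behaviour is easy to verify in isolation:

// collect() stops at the first Err and returns it; otherwise it builds the whole Vec:
fn main() {
    let ok: Result<Vec<i32>, _> = ["1", "2", "3"].iter().map(|s| s.parse::<i32>()).collect();
    assert_eq!(ok, Ok(vec![1, 2, 3]));

    let bad: Result<Vec<i32>, _> = ["1", "x", "3"].iter().map(|s| s.parse::<i32>()).collect();
    assert!(bad.is_err());
}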
Then, again, this weird Ok(res?) syntax to transform any Mongo error into our custom "catch-all" error (see later for the Error object).
Get a single product
This is slightly more complicated, but not much:
pub fn get_product(&self, id: &str) -> Result<Option<Product>, Error> {
    let coll: Collection = self.client.db(&self.db_name).collection("products");
    let cursor: Option<Document> = coll.find_one(Some(doc! { "_id": ObjectId::with_string(id)? }), None)?;

    cursor
        .map(|doc| Ok(from_bson::<Product>(Bson::Document(doc))?))
        .map_or(Ok(None), |v| v.map(Some))
}
Get the collection, run the query with... oops. What is that?
It turns out mongodb (and MongoDB itself) requires a full Document as a filter for queries, so the mongodb crate provides a useful doc!{} macro to build one dynamically. In this case we just build one with the _id field by converting it from the string we were passed, returning early (our frenemy ?) if the conversion fails or if the query fails.
Then, again, take the result (it is not a cursor, just a single possible document, but bear with me...), transform it into a Product object and, again, as the conversion can fail, transform the Option<Result<Product>> into a Result<Option<Product>> with the same magic map_or trick, and return the result.
Saving a product
Here comes the mess... :)
pub fn save_product(&self, prod: Product) -> Result<Option<Product>, Error> {
    let coll: Collection = self.client.db(&self.db_name).collection("products");

    if let Bson::Document(mut doc) = to_bson(&prod)? {
        doc.remove("_id");

        if let Some(ref id) = prod.id {
            let filter = doc! { "_id": Bson::ObjectId(id.clone()) };
            let write_options = FindOneAndUpdateOptions {
                return_document: Some(ReturnDocument::After),
                ..Default::default()
            };

            let res = coll.find_one_and_replace(filter, doc, Some(write_options))?;
            if let Some(res) = res {
                Ok(Some(from_bson::<Product>(Bson::Document(res))?))
            } else {
                Err(Error::Custom("No data returned after update".into()))
            }
        } else {
            let res = coll.insert_one(doc, None)?;

            if let Some(exception) = res.write_exception {
                return Err(Error::from(exception));
            }

            if let Some(inserted_id) = res.inserted_id {
                if let Bson::ObjectId(id) = inserted_id {
                    self.get_product(&id.to_hex())
                } else {
                    Err(Error::Custom("No valid id returned after insert".into()))
                }
            } else {
                Err(Error::Custom("No data returned after insert".into()))
            }
        }
    } else {
        Err(Error::Custom("Invalid document".into()))
    }
}
There are so many ways this can go wrong that the error handling code takes about 99% of the whole function.
After retrieving the collection on the first line, we convert the Product object into a bson structure, remove the _id field (as it's null during an insert and would raise an "Immutable field mutated" error during an update), then split the flow.
If it's an update (id is Some), we prepare a filter to find the document to update, then prepare an options object asking MongoDB to return the updated document instead of the old version (this depends on what you need, obviously), and finally run the find_one_and_replace query. If it fails, return early (always ?). If it doesn't, try to convert the returned document into a Product and return it (or return an error, again via ?). If nothing came back, return a generic error.
If it's an insert... first of all, try to insert (and fail fast with ?). If there is no immediate error, there could still have been a more structured one, so we need to check the exception field in the returned struct. Otherwise, check whether the insert went well and we got a new _id (inserted_id). If we did, just fall back to our get_product() function to retrieve the new object. If not, maybe the id is not valid, or maybe we didn't get anything back. Either way, return an error.
If the initial conversion into bson failed, return an error.
I REALLY think this part should be improved, but honestly I don't see many ways to do it in my code. Maybe something can be done better in the mongodb crate (unifying errors in the Results returned from its functions instead of mimicking the original MongoDB way), and something should definitely be done on Rust's side to simplify error management.
Errors
Speaking of errors, this is the last piece of the puzzle: the error module in error.rs:
use bson::{oid::Error as BsonOidError, DecoderError as BsonDecoderError, EncoderError as BsonEncoderError};
use juniper::{FieldError, Value};
use mongodb::{coll::error::WriteException as MongoWriteException, Error as MongoError};

#[derive(Fail, Debug)]
pub enum Error {
    #[fail(display = "Error: {}", _0)]
    Custom(String),
    #[fail(display = "Mongo Error: {}", _0)]
    Mongo(#[cause] MongoError),
    #[fail(display = "Mongo Write Error: {}", _0)]
    MongoWriteException(#[cause] MongoWriteException),
    #[fail(display = "Error encoding BSON: {}", _0)]
    BsonEncode(#[cause] BsonEncoderError),
    #[fail(display = "Error decoding BSON: {}", _0)]
    BsonDecode(#[cause] BsonDecoderError),
    #[fail(display = "Invalid document id: {}", _0)]
    BsonOid(#[cause] BsonOidError),
}

impl From<MongoError> for Error {
    fn from(e: MongoError) -> Self {
        Error::Mongo(e)
    }
}

impl From<MongoWriteException> for Error {
    fn from(e: MongoWriteException) -> Self {
        Error::MongoWriteException(e)
    }
}

impl From<BsonDecoderError> for Error {
    fn from(e: BsonDecoderError) -> Self {
        Error::BsonDecode(e)
    }
}

impl From<BsonEncoderError> for Error {
    fn from(e: BsonEncoderError) -> Self {
        Error::BsonEncode(e)
    }
}

impl From<BsonOidError> for Error {
    fn from(e: BsonOidError) -> Self {
        Error::BsonOid(e)
    }
}
And now a little rant. Sorry.
I am using failure, which saves me from having to write huge Display and Error implementations for my unified custom error, but still, I honestly think this should not even be necessary in a modern language.
All those impl From blocks are exactly the same. Most of the enum variants are exactly the same. All that changes is the type of each error.
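In the meantime, one way to at least cut the repetition is a small local macro. A sketch (not in the actual code):

// Generate the identical From impls, one invocation per error type:
macro_rules! impl_from_error {
    ($from:ty => $variant:ident) => {
        impl From<$from> for Error {
            fn from(e: $from) -> Self {
                Error::$variant(e)
            }
        }
    };
}

impl_from_error!(MongoError => Mongo);
impl_from_error!(MongoWriteException => MongoWriteException);
impl_from_error!(BsonEncoderError => BsonEncode);
impl_from_error!(BsonDecoderError => BsonDecode);
impl_from_error!(BsonOidError => BsonOid);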
Why not have a single Error object in the language?
This is the part of Rust I like least. It forces you to convert errors back and forth. It forces you to use the ? operator in awkward places to convert errors, or to fill your code with .map_err() calls.
It forces you to use and_then instead of map in some cases.
It forces you to think about what kind of errors can come out of your code while you are still prototyping, slowing you down in the most important part of the work: when you experiment with things.
I really hope it will be fixed somehow, but judging from the discussions in the official forums, we'll get even more workarounds but no real solution.
The End
I hope you survived till here and that this post was useful to understand a few of the solutions Rust provides to implement a modern web application.
I will probably update it if something changes (while I was writing this post, warp deprecated .post() and .get() in favour of .post2() and .get2(), for example) or if I find better solutions, especially for error handling and type conversions.
In the meantime, enjoy Rust. :)