
How To Build and Deploy a Model using FastAPI

Building a machine learning model is one thing, but deploying it so that it's actually useful to someone is another thing entirely. The accessibility of the model is of the essence; otherwise, no matter how good it is, it's useless. Among all the available methods, one of the easiest and most widely used is to wrap the ML model inside a REST API, which is what we'll be doing today – using FastAPI.

What’s FastAPI? Well, it’s a modern web framework for building APIs very conveniently with Python 3.6+. Not only is it effortless to use, it also offers excellent performance, comparable with frameworks built on NodeJS and Go.

So, in this article, we will discuss how to deploy a model using FastAPI and use its one-of-a-kind automatic interactive documentation. We’ll first train a model in Python, then wrap it in an API and use it to make predictions.

Here’s the agenda for today:

· FastAPI Installation

· FastAPI Documentation

· Training the Machine Learning model

· Building the REST API

· Testing the Model

· Wrap Up

Note: The GitHub repo for this project can be found here

FastAPI Installation

First things first, let’s install the library along with an ASGI server. Just open up a terminal window and install fastapi with uvicorn – the ASGI server.

pip install fastapi uvicorn

Let’s create a folder for our project now. Inside the folder, we will create a file named app.py that will hold the API’s code. To code this file, we need to do the following:

1. Import the required libraries – FastAPI and Uvicorn

2. Instantiate the FastAPI class

3. Declare a route to return a simple JSON object (http://127.0.0.1:8000)

4. Declare another route to return a personalized JSON object, with the parameter coming from the URL (e.g., http://127.0.0.1:8000/Muneeb)

5. Run the API using Uvicorn

Here’s how we can code all of this:
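The original article shows this code as a screenshot; here’s a minimal sketch of what app.py could look like, so the steps above are concrete. The route names and greeting text are my assumptions rather than the exact original code:

# app.py
import uvicorn
from fastapi import FastAPI

# 2. Instantiate the FastAPI class
app = FastAPI()

# 3. Root route (http://127.0.0.1:8000) returning a simple JSON object
@app.get('/')
def index():
    return {'message': 'Hello, stranger'}

# 4. Personalized route: the name is taken from the URL path
@app.get('/{name}')
def greet(name: str):
    return {'message': f'Hello, {name}'}

# 5. Run the API with Uvicorn when the file is executed directly
if __name__ == '__main__':
    uvicorn.run(app, host='127.0.0.1', port=8000)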

 

That’s it. Now, all we have to do to start up the API is open a terminal in the folder where app.py is located and type the following command:

uvicorn app:app --reload

But how does this command work? Let’s see. The first app refers to the name of your Python file, without the extension. The second app points to the FastAPI instance we created in our app.py file. This way, Uvicorn knows which file to look in and which object inside it to serve. Lastly, the --reload flag tells Uvicorn to reload the API every time we change the source files, so we don’t have to restart it manually.

After running the command, hop on to the localhost URL we specified (http://127.0.0.1:8000). Here’s what you will see after opening the webpage.

 

This is the simple default JSON object we returned. Now, let’s try the personalized route by passing a name as part of the URL. Go to http://127.0.0.1:8000/Muneeb; here’s the output:
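The screenshot isn’t reproduced here, but with the sketch above the response would be a small JSON object along these lines (the exact text depends on what your route returns):

{"message": "Hello, Muneeb"}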

 

Great. You can enter any name in the URL, and it’ll work the same.

That’s pretty much how an API works. Now, before we dive into all the ML stuff, let’s explore the interactive documentation of FastAPI and see what the hype is about.

FastAPI Documentation

The documentation is what separates FastAPI from the rest. With built-in interactive documentation (powered by Swagger UI), it’s certainly one of a kind. Just hop on to http://127.0.0.1:8000/docs to start exploring.

 

Feel free to click on any of the drop-down menus to explore them further. You can use this page to document your API, too: if you place docstrings underneath the function declarations in your source file, they will be reflected in the docs.
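For example, adding a docstring to a route like the one below (a hypothetical route, not necessarily the original code) makes it appear as that endpoint’s description in the docs:

@app.get('/{name}')
def greet(name: str):
    """Return a personalized greeting for the given name."""
    return {'message': f'Hello, {name}'}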

Moving on, if you don’t like this UI, FastAPI also ships an alternative documentation view. You can go to http://127.0.0.1:8000/redoc.

That’s it for the Swagger UI. Now, let’s move on to building our machine learning model.

Training the Machine Learning model

Since we’re building the ML model just to demonstrate FastAPI, we won’t indulge in anything very complex. I’ve picked a simple dataset to train a random forest, which you can access from my GitHub. However, no matter how complex a model you have developed, you can deploy it with FastAPI just like the one we’ll deploy today.

We will use a bank notes dataset to train an ML model that will predict if a particular bank note is fake or not. So, let’s perform the following steps:

1. Train our model using the random forest classifier in a Jupyter notebook named ModelTraining.ipynb. You can check out the notebook here.

2. Serialize the model using pickle. However, I’ve already added the pkl file to the project repo, so you can skip this step if you want.

3. Create a BankNotes class (in BankNotes.py) that inherits from Pydantic’s BaseModel. It will hold the fields used to describe a bank note for prediction. Here’s how it looks:
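The class itself was shown as a screenshot in the original post. Based on the standard bank note authentication dataset, whose features are variance, skewness, curtosis, and entropy, it would look roughly like this; treat the exact field names as assumptions:

# BankNotes.py
from pydantic import BaseModel

class BankNotes(BaseModel):
    # One field per feature column in the training data
    variance: float
    skewness: float
    curtosis: float
    entropy: float

For completeness, steps 1 and 2 (training the random forest and pickling it) might look roughly like this in the notebook; the CSV file name, column names, and label encoding are assumptions:

# ModelTraining.ipynb (sketch)
import pickle
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv('BankNote_Authentication.csv')  # assumed file name
X = df.drop('class', axis=1)  # feature columns
y = df['class']               # assumed: 1 = fake, 0 = genuine

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier()
clf.fit(X_train, y_train)
print('Test accuracy:', clf.score(X_test, y_test))

# Serialize the trained model so the API can load it later
with open('classifier.pkl', 'wb') as f:
    pickle.dump(clf, f)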

 

Building the REST API

Let’s code app.py again, as we did at the start of the article. However, unlike before, we will declare only one route, ‘/evaluate’, which will be used to make predictions with the model we trained in the previous step. Another important change is that we will use the POST method this time, since it lets us send the model’s input features in the request body rather than in the URL.

Let’s see the steps to follow to code the app.py file:

1. Import the required libraries – FastAPI, uvicorn, pickle, BankNotes, etc.

2. Instantiate the FastAPI class and use BankNotes as the request body model

3. Define a function to handle the POST request and return the predictions

4. Run the app.py using uvicorn

That’s it. Let’s see how to code this in Python:
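The code was shown as a screenshot in the original post; here’s a minimal sketch of what this version of app.py could look like. The pickle file name, the response wording, and the assumption that class 1 means a fake note are mine:

# app.py
import pickle

import uvicorn
from fastapi import FastAPI
from BankNotes import BankNotes

app = FastAPI()

# Load the serialized random forest once, when the API starts
with open('classifier.pkl', 'rb') as f:
    classifier = pickle.load(f)

# POST route: the features arrive in the request body as a BankNotes object
@app.post('/evaluate')
def evaluate(note: BankNotes):
    features = [[note.variance, note.skewness, note.curtosis, note.entropy]]
    prediction = classifier.predict(features)[0]
    label = 'Fake note' if prediction == 1 else 'Genuine note'  # assumed label encoding
    return {'prediction': label}

if __name__ == '__main__':
    uvicorn.run(app, host='127.0.0.1', port=8000)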

 

Just like before, we can run this using the command uvicorn app:app --reload. Once you’ve run the command, head over to the documentation page. Here’s what it would look like:

 

Expand the POST endpoint and click on the ‘Try it out’ button to start making predictions, as shown below:

 

Once you have entered the parameters, simply click Execute to get the response from the model. Here’s what you will get in return with your own parameters:

 

As you can see, our model has predicted the note to be fake. You can test different values and check how the response changes. Moreover, you can scroll down to see more details about the response object returned.
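If you’d rather call the endpoint from code instead of the docs page, a rough example using the requests library (assuming the sketch above and a server running locally) looks like this:

import requests

payload = {
    'variance': 2.3,
    'skewness': 6.6,
    'curtosis': -0.5,
    'entropy': 0.2
}
response = requests.post('http://127.0.0.1:8000/evaluate', json=payload)
print(response.json())  # e.g. {'prediction': 'Genuine note'} or {'prediction': 'Fake note'}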

So you see how convenient it was to make predictions using the interactive documentation? The best part is that even non-technical people can use it to explore the model’s predictions.

Wrap Up

Today, we learned what FastAPI is and how it can be used to deploy machine learning models in under ten minutes. We walked through the library’s main features and why it can be significantly more convenient than many standard deployment tools out there.

Along with excellent interactive documentation, it also has a gentle learning curve, which lets even non-technical users fetch and explore the responses of deployed ML models.

That’s all for today, hope you enjoyed reading!

May 1, 2022