While working on our research project CartoX², we had to deploy a machine learning model that predicts the received signal strength of V2X communication via ITS-G5 (known in Germany as WLANp). We will use a very simple, easy-to-understand linear prediction model and serve it as an API from a Docker container. Up and running in less than 1 hour!
WLANp Prediction Model with SciKit-Learn
We are working with Python and scikit-learn, but the approach should also work with other libraries and models. Here is how we trained our linear regression model. First, we need some measurement data.
The data consists of the distance in meters and the received signal strength (RSSI) in dBm at 5.9 GHz. We want to fit a linear regression model that predicts the RSSI as a function of distance. The distance could be between two cars or between a car and infrastructure.
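If you do not have access to such a measurement campaign, you can synthesize a stand-in RSSI.csv from the free-space path loss formula plus some noise. This is a sketch with made-up parameters; only the file name and column layout match what the loading code below expects:

```python
import numpy as np

# Hypothetical stand-in for real measurements: distances and RSSI values
# generated from free-space path loss at 5.9 GHz plus Gaussian noise.
rng = np.random.default_rng(42)
r = rng.uniform(10.0, 500.0, size=200)            # distance in meters
f = 5.9e9                                         # ITS-G5 carrier frequency in Hz
fspl = 20*np.log10(r) + 20*np.log10(f) - 147.55   # free-space path loss in dB
rssi = -fspl + rng.normal(0.0, 2.0, size=r.size)  # received power in dBm

# Two columns: distance, RSSI - the format the training code loads.
np.savetxt('RSSI.csv', np.column_stack([r, rssi]), delimiter=',')
```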
With some background in communications engineering, you will know that the physical model of free-space path loss (FSPL) can be formulated as
\[FSPL = 20\log_{10}(r) + 20\log_{10}(f) - 147.55\]
At first glance this is not a linear model, but if we feed in the distance as \(\log_{10}(r)\) and keep the frequency constant (ITS-G5 operates around \(5.9\,\mathrm{GHz}\)), it becomes a linear model of the form
\[RSSI(r,f) = x_2 \log_{10}(r) + x_1 \log_{10}(f) + x_0\]
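A quick sanity check of that linearity (plain Python, not part of the original measurement code): in log scale, every tenfold increase in distance adds 20 dB of path loss.

```python
from math import log10

def fspl_db(r_m, f_hz):
    """Free-space path loss in dB for distance r (m) and frequency f (Hz)."""
    return 20*log10(r_m) + 20*log10(f_hz) - 147.55

print(fspl_db(100.0, 5.9e9))                           # ~ 87.9 dB at 100 m
print(fspl_db(1000.0, 5.9e9) - fspl_db(100.0, 5.9e9))  # ~ 20 dB per decade
```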
Load the Data
import numpy as np

with open('./RSSI.csv', 'rb') as csvfile:
    data = np.loadtxt(csvfile, delimiter=',')

r = data[:, 0]  # Distance
y = data[:, 1]  # RSSI
Now we have the distance in r and the measured RSSI in vector y.
Train the Model
from sklearn import linear_model

reg_model = linear_model.LinearRegression()

x2 = np.log10(r)
x1 = np.ones_like(r)*np.log10(5.9e9)
x0 = np.ones_like(r)

X = np.array([x2, x1, x0]).T  # Features
reg_model.fit(X, y)
Now we have a trained prediction model.
Evaluate the Model
Of course this model is neither very sophisticated nor useful in real-world scenarios, because you will hardly ever have ideal free-space conditions, but for now it can provide some feedback.
The model estimated the linear factors \(x_2=-19.66, x_1=1.89, x_0=0.0\). Because we had no variance in the frequency, the model consolidated the last two terms into 1.89 plus the fitted intercept, which absorbs the remaining constant offset.
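You can reproduce this behaviour on ideal, noise-free free-space data (a synthetic sketch, not our measurements): the fitted slope on \(\log_{10}(r)\) should come out at exactly -20, while the constant frequency and offset columns get consolidated just as described above.

```python
import numpy as np
from sklearn import linear_model

# Ideal free-space data: RSSI is exactly the negated FSPL, no noise.
r = np.linspace(10.0, 500.0, 100)
f = 5.9e9
y = -(20*np.log10(r) + 20*np.log10(f) - 147.55)

# Same feature layout as in the training code above.
X = np.array([np.log10(r), np.full_like(r, np.log10(f)), np.ones_like(r)]).T
reg = linear_model.LinearRegression().fit(X, y)

print(reg.coef_[0])  # ~ -20.0 (our real measurements gave -19.66)
```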

(Figure: LinearRegression() prediction of RSSI based on measurements for ITS-G5 at 5.9 GHz (WLANp))
You will get an RSSI prediction with
reg_model.predict(np.array([np.log10(100.0), np.log10(5.9e9), 0]).reshape(1, -1))
array([-87.47])
Dump the Machine Learning Model to File
Now we save the learned model to a file. Let’s use Pickle.
import pickle

pickle.dump(reg_model, open('RSSI_linear_prediction_model.pkl', 'wb'))
We can now use this file inside our Docker container.
Deploy a Machine Learning Model with Docker and Flask
We need a running Docker daemon. There are a lot of tutorials about Docker out there on the internet, feel free to check them out.
With the daemon running, we need a directory containing the following files:
- Dockerfile
- __init__.py
- app.py
- requirements.txt
- RSSI_linear_prediction_model.pkl
Setup the Dockerfile
To build a Docker container, we need a Dockerfile. Here is an example:
# save this file as 'Dockerfile'
FROM python:3.5.3
MAINTAINER Paul Balzer "spam@me.de"

WORKDIR /app/

COPY requirements.txt /app/
RUN pip install -r ./requirements.txt

COPY app.py __init__.py /app/
COPY RSSI_linear_prediction_model.pkl /app/

EXPOSE 5000
ENTRYPOINT python ./app.py
As one can see, we build from Python 3.5.3, which makes Docker fetch all dependencies for a running Python 3.5.3 machine. Then the file installs everything from the requirements.txt, which is the following:
numpy==1.13
scipy==0.19.1
Flask==0.12.2
scikit_learn==0.18.1
It also copies app.py and __init__.py to the working directory /app inside the Docker container. The app.py is basically a Flask app and looks like this:
from flask import Flask
from flask import request
from flask import jsonify
from math import log10
from sklearn import linear_model
import pickle

app = Flask(__name__)

@app.route('/prediction/api/v1.0/RSSI_prediction', methods=['GET'])
def get_prediction():
    distance = float(request.args.get('d'))
    frequency = float(5.9*1000000000.0)

    modelname = 'RSSI_linear_prediction_model.pkl'
    #print('Loading %s' % modelname)
    loaded_model = pickle.load(open(modelname, 'rb'), encoding='latin1')

    RSSI = loaded_model.predict([[log10(distance), log10(frequency), 0.0]])

    return jsonify(distance=distance, frequency=frequency, RSSI=RSSI[0])

if __name__ == '__main__':
    app.run(port=5000, host='0.0.0.0')
    #app.run(debug=True)
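Before building the container, you can smoke-test the route logic locally with Flask's built-in test client. This is a sketch: the free-space formula stands in for the pickled model here, so the numbers differ slightly from the trained regression.

```python
from math import log10

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/prediction/api/v1.0/RSSI_prediction', methods=['GET'])
def get_prediction():
    distance = float(request.args.get('d'))
    frequency = 5.9e9
    # Free-space formula standing in for the pickled regression model.
    rssi = -(20*log10(distance) + 20*log10(frequency) - 147.55)
    return jsonify(distance=distance, frequency=frequency, RSSI=rssi)

# Exercise the route without starting a server or a container.
with app.test_client() as client:
    response = client.get('/prediction/api/v1.0/RSSI_prediction?d=100.0')
    print(response.get_json())  # RSSI ~ -87.87 dBm at 100 m
```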
If you have everything copied together, you can now build the container with:
docker build . -t <name>
Sending build context to Docker daemon  7.68kB
Step 1/9 : FROM python:3.5.3
 ---> 56b15234ac1d
Step 2/9 : MAINTAINER Paul Balzer "spam@me.de"
 ---> Using cache
 ---> 6286c3d69753
Step 3/9 : WORKDIR /app/
 ---> Using cache
 ---> b42edf6324ba
Step 4/9 : COPY requirements.txt /app/
 ---> Using cache
 ---> a847f550d93b
Step 5/9 : RUN pip install -r ./requirements.txt
 ---> Using cache
 ---> 564124ff8f07
Step 6/9 : COPY app.py __init__.py /app/
 ---> Using cache
 ---> f02db3e6bd5f
Step 7/9 : COPY RSSI_linear_prediction_model.pkl /app/
 ---> 01baae61fc2e
Step 8/9 : EXPOSE 5000
 ---> Running in 2c1aaa1d4be3
 ---> 8304f1396e71
Removing intermediate container 2c1aaa1d4be3
Step 9/9 : ENTRYPOINT python ./app.py
 ---> Running in f74f7a67e5af
 ---> 2b3ad2953ee0
Removing intermediate container f74f7a67e5af
Successfully built 2b3ad2953ee0
Successfully tagged testserver:latest
And after a few seconds of downloading, copying and building, you can run it with
docker run -p 3000:5000 -it <name>
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
REST-API up and running on your machine
Now the Flask app is serving a RESTful API on port 3000 of your local machine. You can, for example, get an RSSI prediction out of the trained model with the following API GET request:
http://localhost:3000/prediction/api/v1.0/RSSI_prediction?d=100.0
This should return a JSON object with the predicted RSSI:
{
  "RSSI": -87.47366869523624,
  "distance": 100.0,
  "frequency": 5900000000.0
}
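From Python you can query the endpoint with the standard library. A sketch; it assumes the container is running and mapped to localhost:3000 as in the docker run call above:

```python
import json
import urllib.request

API_PATH = '/prediction/api/v1.0/RSSI_prediction'

def rssi_url(distance_m, base='http://localhost:3000'):
    """Build the GET URL for the prediction endpoint."""
    return f'{base}{API_PATH}?d={distance_m}'

def predict_rssi(distance_m, base='http://localhost:3000'):
    """Query the running container and return the decoded JSON payload."""
    with urllib.request.urlopen(rssi_url(distance_m, base)) as resp:
        return json.loads(resp.read().decode())

# With the container up:
# predict_rssi(100.0)['RSSI']  -> about -87.47
```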
Congrats! Your own machine learning model, trained, embedded in a Flask app and shipped in a Docker container in less than 1 hour.
Ready-to-Run RSSI Prediction Container from Docker Hub
You can take a shortcut and just pull our ready-to-run container from Docker Hub:
docker run -p 3000:5000 -d mechlabengineering/rssiprediction
Done! Open your browser and paste the API call: http://localhost:3000/prediction/api/v1.0/RSSI_prediction?d=100.0
Where to go from here?
As you can imagine, we are training more sophisticated models with a lot more data, using more features than just the distance between sending and receiving devices. You may then no longer be able to use linear regression models. That doesn't matter: you can swap the model and use a different pickle file as the interface between your development environment and the deployed API.