{"id":59157,"date":"2018-11-02T08:00:12","date_gmt":"2018-11-02T08:00:12","guid":{"rendered":"https:\/\/fedoramagazine.org\/?p=22888"},"modified":"2018-11-02T08:00:12","modified_gmt":"2018-11-02T08:00:12","slug":"create-a-containerized-machine-learning-model","status":"publish","type":"post","link":"https:\/\/sickgaming.net\/blog\/2018\/11\/02\/create-a-containerized-machine-learning-model\/","title":{"rendered":"Create a containerized machine learning model"},"content":{"rendered":"<p>After data scientists have created a machine learning model, it has to be deployed into production. To run it on different infrastructures, using containers and exposing the model via a REST API is a common way to deploy a machine learning model. This article demonstrates how to roll out a <a href=\"https:\/\/www.tensorflow.org\">TensorFlow<\/a> machine learning model with a REST API delivered by <a href=\"https:\/\/connexion.readthedocs.io\/en\/latest\/\">Connexion<\/a>, in a container with <a href=\"https:\/\/fedoramagazine.org\/running-containers-with-podman\/\">Podman<\/a>.<\/p>\n<p><span id=\"more-22888\"><\/span><\/p>\n<h2>Preparation<\/h2>\n<p>First, install Podman with the following command:<\/p>\n<pre>sudo dnf -y install podman<\/pre>\n<p>Next, create a new folder for the container and switch to that directory.<\/p>\n<pre>mkdir deployment_container &amp;&amp; cd deployment_container<\/pre>\n<h2>REST API for the TensorFlow model<\/h2>\n<p>The next step is to create the REST API for the machine learning model. 
This <a href=\"https:\/\/github.com\/svenboesiger\/titanic_tf_ml_model\">GitHub repository<\/a> contains a pretrained model, as well as the setup already configured to get the REST API working.<\/p>\n<p>Clone it into the <strong>deployment_container<\/strong> directory with the command:<\/p>\n<pre>git clone https:\/\/github.com\/svenboesiger\/titanic_tf_ml_model.git<\/pre>\n<h4>prediction.py &amp; ml_model\/<\/h4>\n<p>The <a href=\"https:\/\/github.com\/svenboesiger\/titanic_tf_ml_model\/blob\/master\/prediction.py\">prediction.py<\/a> file allows for a TensorFlow prediction, while the weights for the 20x20x20 neural network are located in the folder <a href=\"https:\/\/github.com\/svenboesiger\/titanic_tf_ml_model\/tree\/master\/ml_model\/titanic\"><em>ml_model<\/em>\/<\/a>.<\/p>\n<h4>swagger.yaml<\/h4>\n<p>The file swagger.yaml defines the API for the Connexion library using the <a href=\"https:\/\/github.com\/OAI\/OpenAPI-Specification\/blob\/master\/versions\/2.0.md\">Swagger specification<\/a>. This file contains all of the information necessary to configure your server to provide input parameter validation, output response data validation, and URL endpoint definition.<\/p>\n<p>As a bonus, Connexion also provides you with a simple but useful single-page web application that demonstrates using the API with JavaScript and updating the DOM with it.<\/p>\n<pre>swagger: \"2.0\"\ninfo:\n  description: This is the swagger file that goes with our server code\n  version: \"1.0.0\"\n  title: Tensorflow Podman Article\nconsumes:\n  - \"application\/json\"\nproduces:\n  - \"application\/json\"\nbasePath: \"\/\"\npaths:\n  \/survival_probability:\n    post:\n      operationId: \"prediction.post\"\n      tags:\n        - \"Prediction\"\n      summary: \"The prediction data structure provided by the server application\"\n      description: \"Retrieve the chance of surviving the titanic disaster\"\n      parameters:\n        - in: body\n          name: passenger\n          required: true\n          schema:\n            $ref: '#\/definitions\/PredictionPost'\n      responses:\n        '201':\n          description: 'Survival probability of an individual Titanic passenger'\ndefinitions:\n  PredictionPost:\n    type: object<\/pre>\n<h4>server.py &amp; requirements.txt<\/h4>\n<p><a href=\"https:\/\/github.com\/svenboesiger\/titanic_tf_ml_model\/blob\/master\/server.py\"><em>server.py<\/em><\/a> defines an entry point to start the Connexion server.<\/p>\n<pre>import connexion\n\napp = connexion.App(__name__, specification_dir='.\/')\napp.add_api('swagger.yaml')\n\nif __name__ == '__main__':\n    app.run(debug=True)<\/pre>\n<p><a href=\"https:\/\/github.com\/svenboesiger\/titanic_tf_ml_model\/blob\/master\/requirements.txt\"><em>requirements.txt<\/em><\/a> defines the Python requirements we need to run the program.<\/p>\n<pre>connexion\ntensorflow\npandas<\/pre>\n<h2>Containerize!<\/h2>\n<p>For Podman to be able to build an image, create a new file called &#8220;Dockerfile&#8221; in the <strong>deployment_container<\/strong> directory created in the preparation step above:<\/p>\n<pre>FROM fedora:28\n\n# File Author \/ Maintainer\nMAINTAINER Sven Boesiger &lt;donotspam@ujelang.com&gt;\n\n# Update the sources\nRUN dnf -y update --refresh\n\n# Install additional dependencies\nRUN dnf -y install libstdc++\nRUN dnf -y autoremove\n\n# Copy the application folder inside the container\nADD \/titanic_tf_ml_model \/titanic_tf_ml_model\n\n# Get pip to download and install requirements:\nRUN pip3 install -r \/titanic_tf_ml_model\/requirements.txt\n\n# Expose ports\nEXPOSE 5000\n\n# Set the default directory where CMD will execute\nWORKDIR \/titanic_tf_ml_model\n\n# Set the default command to execute\n# when creating a new container\nCMD python3 server.py<\/pre>\n<p>Next, build the container image with the command:<\/p>\n<pre>podman build -t ml_deployment .<\/pre>\n<h2>Run the container<\/h2>\n<p>With the container image built and ready to go, you can run it locally with the command:<\/p>\n<pre>podman run -p 5000:5000 ml_deployment<\/pre>\n<p>Navigate to <a href=\"http:\/\/0.0.0.0:5000\/ui\">http:\/\/0.0.0.0:5000\/ui<\/a> in your web browser to access the Swagger\/Connexion UI and to test-drive the model:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-23037\" src=\"http:\/\/www.sickgaming.net\/blog\/wp-content\/uploads\/2018\/11\/create-a-containerized-machine-learning-model.png\" alt=\"\" width=\"616\" height=\"925\" \/><\/p>\n<p>Of course, you can now also access the model from your application via the REST API.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>After data scientists have created a machine learning model, it has to be deployed into production. To run it on different infrastructures, using containers and exposing the model via a REST API is a common way to deploy a machine learning model. This article demonstrates how to roll out a TensorFlow machine learning model, with [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":59158,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[48],"tags":[45,46,47,44],"class_list":["post-59157","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-fedora-os","tag-fedora","tag-magazine","tag-news","tag-using-software"],"_links":{"self":[{"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/posts\/59157","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/comments?post=59157"}],"version-history":[{"count":0,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/posts\/59157\/revisions"}],"
wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/media\/59158"}],"wp:attachment":[{"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/media?parent=59157"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/categories?post=59157"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/tags?post=59157"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}