How to Deploy TensorFlow Models Using Google Cloud AI Platform?
- By Deepika
Professionals who have worked with machine learning models know the challenges that come with them. Arguably the most tedious and time-consuming part of model development is collecting and curating the data needed to train those models. However, in several situations you can skip this cumbersome step and use a model that has already been trained and is ready to use, for tasks such as spam detection, speech-to-text conversion, or object labelling. The benefit is even greater when the model is created and updated by professionals who have access to large datasets, machine learning expertise and powerful training hardware. Professionals with a GCP certification are often responsible for creating these models.
One of the most reliable places to find highly advanced pre-trained machine learning models today is TensorFlow Hub. It hosts a multitude of machine learning models built by Google Research, available free to use and download. You can find ML models for tasks such as super resolution, image segmentation, text embedding and question answering, among others. The good news is that you don’t need a training data set of your own, since these models are trained on huge ones. However, if you wish to use one of these models in your application, you must decide where to host it to ensure efficiency, reliability and scalability.
Deploying ML Models and Versions:
In Google’s AI Platform Prediction, a model is a container for multiple versions of a machine learning model. The platform organises your trained ML models using model and version resources.
Before you deploy any model, you need to:
- Create a model resource within AI Platform Prediction
- Create a different version of the model
- Link the version of the model to the model file in Cloud Storage
Creating a Model Resource using AI Platform Prediction:
AI Platform Prediction organises the versions of your ML model using model resources. At this point, you need to decide whether model versions should be created on a regional or a global endpoint. Regional endpoints are generally recommended; opt for a global endpoint only if your model versions need to run on legacy MLS1 machine types. You also need to decide whether these model versions should export logs while serving predictions.
The following steps are used to create a model resource in AI Platform Prediction:
- Go to the ML Models page.
- Click the ‘New Model’ button at the top. This will take you to the ‘Create Model’ page.
- Give your ML model a unique name and enter it under ‘Model Name’.
- Check the ‘Use Regional Endpoint’ box to have AI Platform Prediction use a regional endpoint; uncheck it to use the global endpoint. We shall proceed with the former.
- A ‘Region’ drop-down list will now appear. Select the region where your prediction nodes will run. The regions available vary depending on whether you chose a regional or global endpoint.
- Once this is done, click on ‘Create’.
- Confirm you are back on the Models page and locate your new ML model in the list.
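The console steps above correspond to a single call to the AI Platform Prediction REST API (`projects.models.create`). As a minimal sketch, the request body can be assembled as below; the model name, region and logging choice are illustrative placeholders, not values from a live project:

```python
import json

# Illustrative request body for projects.models.create in the
# AI Platform Prediction REST API. All values are example placeholders.
model_body = {
    "name": "my_tf_model",            # unique model name
    "regions": ["us-central1"],       # region for prediction nodes
    "onlinePredictionLogging": True,  # export logs while serving predictions
}

# The body would be POSTed to the chosen regional endpoint by a client
# library; here we only show that it serialises cleanly to JSON.
print(json.dumps(model_body, indent=2))
```

Choosing the region here mirrors the ‘Region’ drop-down in the console; a model created on a regional endpoint keeps its prediction nodes in that one region.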
Creating a Model Version:
Once you have completed the first step, you are ready to create a model version from the trained model you uploaded to Cloud Storage. While creating a model version, you can specify several parameters. Some of the most common ones are:
- name: The version name; it must be unique within the AI Platform Prediction model.
- deploymentUri: The Cloud Storage path to the directory containing your model file.
- framework: The machine learning framework of the model, such as TensorFlow, scikit-learn or XGBoost. Omit this parameter if you deploy a custom prediction routine.
- runtimeVersion: This depends on your model’s dependency needs. If you deploy a scikit-learn model, an XGBoost model or a custom prediction routine, you need at least runtime version 1.4. If you plan to use this model version for batch prediction, you need version 2.1 or earlier.
- packageUris: An optional parameter listing the Cloud Storage paths to your custom code distribution packages. Provide it only if you deploy a custom prediction routine or a scikit-learn pipeline with custom code.
- pythonVersion: Set this to 3.5 or 3.7, depending on the runtime version you use, so that it is compatible with model files exported using Python 3.
- machineType: Another optional parameter; the type of virtual machine AI Platform Prediction uses for the nodes that serve predictions.
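Taken together, these parameters form the body of the version-creation request (`projects.models.versions.create`). A minimal sketch in Python, with a small sanity check mirroring the rules above; every value in the dictionary is an example placeholder:

```python
# Illustrative request body for creating a model version via the
# AI Platform Prediction REST API (projects.models.versions.create).
# All values are example placeholders, not taken from a live project.
version_spec = {
    "name": "v1",                                    # unique within the model
    "deploymentUri": "gs://your-bucket/model-dir/",  # Cloud Storage model directory
    "framework": "TENSORFLOW",                       # omit for custom prediction routines
    "runtimeVersion": "2.1",                         # driven by dependency needs
    "pythonVersion": "3.7",                          # compatible with Python 3 exports
    "machineType": "n1-standard-2",                  # optional serving VM type
}

def validate_version_spec(spec):
    """Sanity checks mirroring the parameter rules described above."""
    assert spec["deploymentUri"].startswith("gs://"), "deploymentUri must be a Cloud Storage path"
    major, minor = (int(p) for p in spec["runtimeVersion"].split("."))
    # scikit-learn, XGBoost and custom routines need runtime version >= 1.4
    assert (major, minor) >= (1, 4), "runtime version too old"
    assert spec["pythonVersion"] in ("3.5", "3.7"), "use Python 3.5 or 3.7"
    return True

print(validate_version_spec(version_spec))  # True
```

Catching an out-of-range runtimeVersion or pythonVersion locally like this is cheaper than waiting for the API to reject the request.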
Linking the Model Version to the Model File in Cloud Storage: As the final step, link your model version to the trained model file you uploaded to Cloud Storage. You can now use this TensorFlow model in your app without worrying about most operational and performance challenges.
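Once the version is linked, your application sends prediction requests whose JSON body carries an "instances" list, one entry per input. A minimal sketch follows; the feature vectors and their shape are hypothetical placeholders and must match your model’s actual serving signature:

```python
import json

# Hypothetical prediction request body for the projects.predict method.
# The "instances" list holds one input per prediction; the numeric
# vectors below are made-up placeholders.
request_body = {
    "instances": [
        [0.1, 0.2, 0.7],    # first input example
        [0.9, 0.05, 0.05],  # second input example
    ]
}

payload = json.dumps(request_body)
print(payload)
```

The serialised payload is what gets sent to the deployed model version; the response carries a matching "predictions" list in the same order as the inputs.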
To learn more and to gain expertise in the subject, you should consider enrolling in a TensorFlow certification course and sharpening your skills.