How to deploy a multi-class classification Keras model for predicting voice sentiment.
Build the NN model and the Flask application, and include the HTML, CSS, and JS for the voice recorder.
How to choose the right configuration for your Docker image.
Deploy on AWS Elastic Beanstalk and adjust the voice recorder.
All of the above includes common troubleshooting tips as well; some of them you’ll find on my StackOverflow.
Why voice sentiment? Due to the exponential growth of technology in the past decade, lifestyles have changed and people demand faster, easier-to-use services. Many companies renew contracts and obtain legally valid customer agreement only through a phone call, not to mention all the marketing, customer success, and sales uses. So it has become quite necessary to predict your customer’s sentiment from a voice input or phone call.
Now let’s go through the voice sentiment model.
The datasets for training the model contain different speakers, mostly actors, recorded in a controlled environment, conveying the following sentiments for both female and male voices: angry, disgust, fear, happy, neutral, sad, surprised. That makes 14 classes, since it is always wise to train female voices separately from male voices.
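Because each of the seven emotions is duplicated per gender, the label space has 7 × 2 = 14 classes. A minimal sketch of how these labels might be enumerated and one-hot encoded (the exact label strings are my assumption, not the ones from the datasets):

```python
from itertools import product

import numpy as np

# Seven emotions, duplicated per gender -> 14 classes.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprised"]
GENDERS = ["female", "male"]

LABELS = [f"{g}_{e}" for g, e in product(GENDERS, EMOTIONS)]


def one_hot(label: str) -> np.ndarray:
    """Encode a class label as a one-hot vector of length 14."""
    vec = np.zeros(len(LABELS), dtype=np.float32)
    vec[LABELS.index(label)] = 1.0
    return vec


print(len(LABELS))  # 14
```

Training targets then become rows of such one-hot vectors, matching the softmax output of the network.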
To extract and transform the audio features (Mel-frequency cepstral coefficients: MFCC) I used LibROSA. The model is built with Keras: a Sequential network of Conv1D layers with BatchNormalization, Dropout, and MaxPooling1D. Feel free to have a try on my Kaggle kernel.
Flask application: requests the data from the user and saves it as a WAV file, which is processed and predicted on later.
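A minimal sketch of such an endpoint (the route name, form field name, and the predict_sentiment() helper are my assumptions, not the repo’s exact code): the browser posts the recorded audio blob, the server writes it to disk as a WAV file, and the model prediction runs on that file:

```python
import os
import uuid

from flask import Flask, jsonify, request

application = Flask(__name__)
UPLOAD_DIR = "uploads"
os.makedirs(UPLOAD_DIR, exist_ok=True)


def predict_sentiment(wav_path: str) -> str:
    """Placeholder for the real Keras prediction step."""
    return "neutral"


@application.route("/predict", methods=["POST"])
def predict():
    # The recorder may post the blob as a multipart file field
    # or as the raw request body; handle both.
    wav_path = os.path.join(UPLOAD_DIR, f"{uuid.uuid4().hex}.wav")
    audio = request.files.get("audio")
    if audio is not None:
        audio.save(wav_path)
    else:
        with open(wav_path, "wb") as f:
            f.write(request.get_data())
    return jsonify({"sentiment": predict_sentiment(wav_path)})

# To serve locally: application.run(host="0.0.0.0", port=5000)
```

Naming the app object `application` matters later: Elastic Beanstalk looks for that name by default.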
A couple of things to keep in mind here:
model._make_predict_function() should be called right after loading the model; otherwise predictions made from inside Flask request handlers can fail. Why this happens and how to solve it you’ll find in my StackOverflow solution (which also covers the pros and cons of using the call above).
Find the HTML, CSS, and JS for the voice recorder in my GitHub repo. You can modify the recording time in recapp.js; just remember that 1 second = 1000 milliseconds.
I first deployed the Flask app with AWS Elastic Beanstalk and found that the server error log is not very detailed or clear about errors, so the Docker error log proved quite useful.
Of course, Docker also comes with many all-in-one tools and functions that make building an environment easier and more compact.
The following will help you make the right choices when it comes to building Docker images and dealing with troubleshooting:
FROM python:3.6-slim (note: I avoid Alpine Linux for now due to some misfits: far fewer libraries, a different C library, etc.).
RUN apt-get update && \
    apt-get -y --no-install-recommends install sudo && \
    sudo apt-get -y --no-install-recommends install libsndfile1-dev && \
    pip install --no-cache-dir -r requirements.txt && \
    sudo rm -rf /var/lib/apt/lists/*
pip install --no-deps # no dependencies, though this can be cumbersome when dealing with TensorFlow and Keras (as well as scikit-learn, SciPy, etc.)
pip install --no-cache-dir can be helpful in avoiding that extra cache,
sudo rm -rf /var/lib/apt/lists/* removes the apt package lists and keeps the image small.
WORKDIR /deploy/ # pick your working directory
COPY . . # useful for copying all folders and subfolders
EXPOSE 5000 # set port 5000 for the Flask app
CMD ["python", "application.py"] # application to be run
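Assembled, the full Dockerfile looks roughly like the following sketch. Note that COPY has to run before pip install can see requirements.txt, and since Docker builds run as root, the sudo calls can be dropped (keep the original commands above if you prefer):

```dockerfile
FROM python:3.6-slim

WORKDIR /deploy/
COPY . .

RUN apt-get update && \
    apt-get -y --no-install-recommends install libsndfile1-dev && \
    pip install --no-cache-dir -r requirements.txt && \
    rm -rf /var/lib/apt/lists/*

EXPOSE 5000
CMD ["python", "application.py"]
```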
Amazon Web Services and Elastic Beanstalk seemed like a good choice.
I followed all the instructions for creating a single-container Docker environment here.
If you use the AWS console then things are quite easy. If you use the AWS/EB CLI, then make sure you set the path after installing the EB CLI:
echo 'export PATH="/Users/Daniel/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
You create a new application and environment in Beanstalk, uploading the zip archive of your Docker app folder contents. From the console you can modify the capacity, the instances you want to use, the environment type (load balancing, auto scaling), and so on.
Now let’s have a look at common troubleshooting. On macOS, zipping the folder adds a hidden __MACOSX directory that can break the Beanstalk deployment; remove it from the archive with:
zip -d Archive.zip __MACOSX/\*
All of the above code and instructions can be found in my StackOverflow response.
I hope this helps (and saves you days of hard work), and that your app is now running smoothly and you are enjoying the predictions. The model provided on my GitHub gives a little under 50% accuracy, which at first glance can be quite good for a 14-class classification problem.
Take care and wishing you all a superb 2020,
A new exciting decade for exponential growth and innovation!!!