Build a Python Chatbot using Keras & NLTK

Deep Learning Project

Juan Arturo Cruz Cardona
7 min read · May 21, 2021

Through this tutorial, you will build a chatbot capable of responding to some of your messages after learning certain patterns the user may introduce. You will need good Python programming skills and basic knowledge of neural networks and deep learning.

BEFORE STARTING

SYSTEM REQUIREMENTS

The project was made using the following package versions:

  • Python 3.9.5, 64-bit (TensorFlow doesn’t run on 32-bit distributions)
  • Tensorflow 2.5.0
  • Keras 2.4.3
  • nltk 3.6.2
  • numpy 1.19.5

ERRORS THAT MAY APPEAR

While running the project I encountered the following issues that you may also have to deal with:

  • Error Type #1

To solve this error you may need to download an extra resource required by the libraries we are using; the error message itself names the missing resource and shows the import to use for downloading it.

So, in the Python shell, you can run something like this, replacing the resource name with the one from the error message:

>>> import nltk
>>> nltk.download('punkt')
>>> nltk.download('wordnet')
  • Error Type #2

ImportError: No module named ‘tkinter’

You need to make sure that tkinter was selected when you installed Python (on Windows it is an optional component of the installer; on Debian-based Linux it ships as the separate package python3-tk).

USEFUL INFORMATION

Before starting we need to define certain terminology and the purpose of the files that will be used throughout the execution of the chatbot.

Terminology

  • Tokenizing is the process of breaking the whole text into small parts, such as words.
  • Lemmatizing is the process of converting a word into its lemma form; we use this while predicting.
  • A lemma is the canonical form, dictionary form, or citation form of a set of words (headword).
  • A class is a category of intents; each class is identified by a tag.
  • JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays.
  • A document is a pairing of a pattern and the intent (tag) it belongs to.

Files

  • intents.json — The data file with the predefined patterns and responses; you can change its content to build your own dataset.
  • train.py — The script that builds the model and trains our chatbot.
  • words.pkl — A pickle file in which we store the words Python object, a list of our vocabulary.
  • classes.pkl — A pickle file containing the list of categories (classes).
  • model.h5 — The trained model, containing its architecture and the weights of the neurons.
  • chat.py — The implemented GUI where users can easily interact with our chatbot.

TRAINING THE CHATBOT

Import and load the data file

We import the necessary packages for our chatbot and initialize the variables we will use in our Python project.
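A minimal sketch of this step, assuming the package versions listed earlier; intents.json is the data file from the Files section, while the list names and ignore_words are my own choices:

import json
import pickle
import random

import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD

lemmatizer = WordNetLemmatizer()

# Lists we will fill while preprocessing
words = []                  # vocabulary
classes = []                # tags (categories)
documents = []              # (pattern tokens, tag) pairs
ignore_words = ['?', '!']   # punctuation to skip (my choice)

# Load the predefined patterns and responses
intents = json.loads(open('intents.json').read())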

It is worth noticing that inside our intents.json file we can add more possible answers and even try to add more patterns. It is also possible to build or select a completely different set of intents for another type of chatbot.

Through this link you can find more intent datasets for the chatbot.
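For reference, each entry in intents.json has a tag, a list of patterns, and a list of responses; the greeting intent below is only illustrative:

{
  "intents": [
    {
      "tag": "greeting",
      "patterns": ["Hi", "Hello", "How are you?"],
      "responses": ["Hello!", "Hi there, how can I help you?"]
    }
  ]
}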

Preprocess data

We need to perform certain operations to preprocess the data before we make a machine learning or a deep learning model.

We will start with tokenizing, because it is the most basic and first thing you can do on text data. We iterate through the patterns, tokenize each sentence using nltk.word_tokenize(), and also build the list of categories from our tags.

Once we have the list, we lemmatize each word, remove duplicates, and generate the files needed for running the chatbot.

Lemmatizing is the process of converting a word into its lemma form.

The generated files will be used while predicting.
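A sketch of the preprocessing just described, continuing from the variables above; words.pkl and classes.pkl are the files listed earlier:

# Tokenize every pattern and collect the vocabulary, tags, and documents
for intent in intents['intents']:
    for pattern in intent['patterns']:
        tokens = nltk.word_tokenize(pattern)
        words.extend(tokens)
        documents.append((tokens, intent['tag']))
        if intent['tag'] not in classes:
            classes.append(intent['tag'])

# Lemmatize, lowercase, and remove duplicates from the vocabulary
words = sorted(set(lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore_words))
classes = sorted(set(classes))

# Save the files that will be used while predicting
pickle.dump(words, open('words.pkl', 'wb'))
pickle.dump(classes, open('classes.pkl', 'wb'))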

Create training and testing data

Our input will be the pattern and our output will be the class that pattern belongs to. But the computer doesn’t understand text, so we must convert the text into numbers.

As you can see, we lemmatize each word to group related words together, and then, with the tokenized list, we check whether each vocabulary word appears in the current pattern; if it does we put a 1 in the bag of words we created, and a 0 otherwise.
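One way to build that numeric training set, continuing from the code above:

training = []
output_empty = [0] * len(classes)

for tokens, tag in documents:
    # Lemmatize the words of this pattern so they match the vocabulary
    pattern_words = [lemmatizer.lemmatize(w.lower()) for w in tokens]
    # Bag of words: 1 if the vocabulary word appears in the pattern, 0 otherwise
    bag = [1 if w in pattern_words else 0 for w in words]
    # One-hot output: 1 only for this pattern's class
    output_row = list(output_empty)
    output_row[classes.index(tag)] = 1
    training.append((bag, output_row))

random.shuffle(training)
train_x = [bag for bag, _ in training]
train_y = [row for _, row in training]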

Build the model

Now that we have the training data ready, we use the Keras Sequential API to build a deep neural network with 3 layers.

Notice that I trained the model for 200 epochs; this number achieved 100% accuracy on the model, but you can try different values to see how they change the accuracy and loss.

Finally we save the model as 'model.h5'.
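The 3 layers, the 200 epochs, and the model.h5 file come from the text; the layer sizes, dropout, and SGD settings below are my own choices, so treat this as a sketch:

model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation='softmax'))

# Stochastic gradient descent with Nesterov momentum
sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
model.save('model.h5')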

RUNNING THE CHATBOT

Predict the response

To predict the sentences and get a response, we load the necessary packages, the trained model 'model.h5', and the pickle files 'words.pkl' and 'classes.pkl' that we created when we trained our model.
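That loading step might look like this, reusing the file names from the training script:

import json
import pickle
import random

import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer
from keras.models import load_model

lemmatizer = WordNetLemmatizer()
model = load_model('model.h5')
intents = json.loads(open('intents.json').read())
words = pickle.load(open('words.pkl', 'rb'))
classes = pickle.load(open('classes.pkl', 'rb'))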

The model will only tell us the class the message belongs to, so we will implement some functions that perform the text preprocessing, identify the class, and then return a random response from the list of responses.

Predict the class functions

To predict the class, we will need to provide input in the same way as we did while training. So we will create some functions that will perform text preprocessing and prediction of the class.

  • clean_up_sentence : Tokenizes the pattern and lemmatizes each word in the sentence.
  • bow : Calls the clean_up_sentence function and, once the pattern is tokenized, checks which vocabulary words appear in it, returning a bag of words of 1s and 0s.
  • predict_class : Calls the bow function, runs the model, filters the predictions using a threshold, and then sorts the list by strength of probability (a sketch of all three follows this list).
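A possible implementation of the three functions, continuing from the loading code above; the 0.25 error threshold is an assumed value:

def clean_up_sentence(sentence):
    # Tokenize the user input and lemmatize each word
    sentence_words = nltk.word_tokenize(sentence)
    return [lemmatizer.lemmatize(w.lower()) for w in sentence_words]

def bow(sentence, words):
    # Bag of words: 1 for every vocabulary word present in the sentence
    sentence_words = clean_up_sentence(sentence)
    return np.array([1 if w in sentence_words else 0 for w in words])

def predict_class(sentence):
    # Run the model, keep predictions above the threshold,
    # and sort them by strength of probability
    res = model.predict(np.array([bow(sentence, words)]))[0]
    ERROR_THRESHOLD = 0.25  # assumed value
    results = [(i, p) for i, p in enumerate(res) if p > ERROR_THRESHOLD]
    results.sort(key=lambda x: x[1], reverse=True)
    return [{'intent': classes[i], 'probability': str(p)} for i, p in results]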

Get a random response functions

  • get_response : Returns a random response from the matched intent’s list of responses.
  • chatbot_response : The main function for getting a response; it combines the user input with the model we created (both functions are sketched below).
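A sketch of both functions, consistent with the code above; the fallback message for when no class passes the threshold is my addition:

def get_response(ints, intents_json):
    # No class passed the threshold
    if not ints:
        return "Sorry, I don't understand you."
    # Pick a random response from the predicted intent
    tag = ints[0]['intent']
    for intent in intents_json['intents']:
        if intent['tag'] == tag:
            return random.choice(intent['responses'])

def chatbot_response(msg):
    # Full pipeline: user input -> predicted class -> random response
    return get_response(predict_class(msg), intents)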

Finally, let’s use the Tkinter library, which ships with Python and provides plenty of useful widgets for building GUIs, to create an interface where users can type their input messages and see the response from the bot.

  • send : It sends the user input to the chatbot_response function and displays both messages on the interface (a minimal sketch follows).
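A minimal Tkinter sketch of the send function and the window; the widget names and layout are my own, and only the idea of a chat log, an entry box, and a Send button comes from the text:

from tkinter import Tk, Text, Entry, Button, END

def send():
    # Read the user's message, show it, and append the bot's answer
    msg = entry_box.get().strip()
    entry_box.delete(0, END)
    if msg:
        chat_log.config(state='normal')
        chat_log.insert(END, 'You: ' + msg + '\n')
        chat_log.insert(END, 'Bot: ' + chatbot_response(msg) + '\n\n')
        chat_log.config(state='disabled')
        chat_log.see(END)

root = Tk()
root.title('Chatbot')

chat_log = Text(root, state='disabled', width=60, height=20)
chat_log.pack(padx=8, pady=8)

entry_box = Entry(root, width=50)
entry_box.pack(side='left', padx=8, pady=8)

Button(root, text='Send', command=send).pack(side='left', padx=4)

root.mainloop()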

Run the chatbot

First, we train the model using the command in the terminal:

python train.py

Then to run the app, we run the second file.

python chat.py

A GUI window will open, and now you can easily chat with the bot.

CONCLUSIONS

Trying different epochs

Playing a bit with the values to see how the model behaved, I trained it with 10, 100, and 200 epochs and compared the results.

It is also worth saying that when I tried to run the chat application with the 10-epoch model, the bot couldn’t answer any of the messages I sent to it, while the 100- and 200-epoch models showed good performance.

Using other intent dataset

I used a COVID-19 dataset to generate another intent file. It had up to 10,000 entries, so it was quite heavy to learn, and I had to make some changes to the training file, such as training the chatbot for 10,000 epochs and adding more neurons to each layer.

Unfortunately, I was not able to reduce the loss that much; changing the structure of the neural network by adding more layers might have helped, but the accuracy was good enough.

Personal reflection

Does adding more layers or neurons always result in more accuracy? It completely depends on how large your dataset is. Adding layers unnecessarily to any DNN (deep neural network) will increase the number of parameters, and unnecessary parameters will only overfit your network.

The truth is you can customize the data according to business requirements and train the chatbot with great accuracy.

Here you can find the link to the repository, where you can download the files.
