Inferencing on Raspberry Pi – Low-cost platform

This article describes the complete process, from defining the network parameters and preparing the MNIST data to training and inferencing.
The trained network is then used to run inference on the MNIST test dataset on the Raspberry Pi.

  • Inferencing is done three times to make the results comparable and to demonstrate correct operation:
    • 1. TensorFlow
    • 2. LabVIEW
    • 3. Raspberry Pi
  • Before training, the network parameters are defined:
    • Input layer size
    • Hidden layer sizes and the corresponding activation function for each layer
    • Output layer size and activation function
    • Loss function, optimizer, learning rate and batch size

The Python code is then generated and executed. During the training process, loss and accuracy values for the training and test data are transferred to LabVIEW. When training is done, the network weights and biases are double-checked in three steps: TensorFlow inferencing is executed to obtain reference accuracy values; LabVIEW inferencing on x86 is performed to validate that accuracy; and the network parameters are transferred to the Raspberry Pi (ARM), where inferencing on the test data is executed to validate the accuracy once more.
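In code, the training step could look like the following minimal Keras sketch. The callback stands in for the per-epoch transfer of loss and accuracy values to LabVIEW, and the .npz export stands in for the transfer of the weights and biases; the file name, array keys and function names are illustrative assumptions, not the article's actual interface.

import numpy as np
import tensorflow as tf

class MetricsReporter(tf.keras.callbacks.Callback):
    # Reports all available metrics after each epoch
    # (placeholder for the transfer to LabVIEW).
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print(f"epoch {epoch}: " + ", ".join(f"{k}={v:.4f}" for k, v in logs.items()))

def export_parameters(model, path="mnist_params.npz"):
    # Saves the weights and biases of each Dense layer,
    # so a plain NumPy forward pass can run them on the Pi.
    arrays = {}
    for i, layer in enumerate(model.layers):
        w, b = layer.get_weights()
        arrays[f"W{i}"], arrays[f"b{i}"] = w, b
    np.savez(path, **arrays)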


MNIST Database

[Image: the first 9x9 grid of MNIST digits]

The MNIST database is a large database of handwritten digits that is widely used for training and testing in the field of machine learning. It contains 60,000 training images and 10,000 test images; all are grayscale images with a resolution of 28x28 pixels. The first 81 images (a 9x9 grid) are shown to the left.
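The dataset ships with tf.keras, so loading and inspecting it takes only a few lines; a minimal sketch, assuming TensorFlow is installed on the training platform:

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,) - uint8 values 0..255
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)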

Network architecture - layer size and activation functions

[Image: network parameter settings]

Settings for the network: since we are working with MNIST at 28x28 pixels, the network needs 784 inputs. The three hidden layers are 120 neurons each, with ReLU as the activation function. It is very common to use one-hot classification for MNIST, so the output size is 10, with a Softmax activation to specify the winner. The training-data datatype is used for the internal data exchange between LabVIEW and Python. The original MNIST data are grayscale values between 0 and 255; these are normalized to values between 0 and 1, which calls for floating point. The batch size is set to 128 with shuffle batch deactivated. The neural network optimizes against a one-hot encoded 10-class vector, so categorical crossentropy is the chosen loss function. The Adam optimizer with a learning rate of 0.001 is used.
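Put together, a minimal Keras sketch of these settings could look as follows. Everything mirrors the parameters above; only the epoch count is an assumption, since the article does not state it.

import tensorflow as tf

# Load MNIST, flatten to 784 inputs, normalize 0..255 to 0..1, one-hot encode labels.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.reshape(-1, 784) / 255.0).astype("float32")
y_train = tf.keras.utils.to_categorical(y_train, 10)

# 784 inputs -> 3 hidden layers of 120 ReLU neurons -> 10 Softmax outputs.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Batch size 128 with shuffle batch deactivated; epochs=10 is an assumption.
model.fit(x_train, y_train, batch_size=128, shuffle=False, epochs=10)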

The network reaches an accuracy of 97.5% on the test set, evaluated on the training platform.


[Image: fully connected feed-forward network with one input layer, three hidden layers and one output layer]

A fully connected feed-forward network with one input layer, three hidden layers and one output layer (see image) is used for this demonstration.

Inferencing and validation on Raspberry Pi

[Image: inferencing setup on the Raspberry Pi]

The MNIST test set and the trained network are transferred to the Raspberry Pi.
During inferencing, the network is executed in the forward direction.
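On the Pi itself, TensorFlow is not required: the forward pass is plain matrix arithmetic and can run with NumPy alone. A minimal sketch, assuming the parameters and the test set were transferred as .npz files with the illustrative names and array keys used above:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

params = np.load("mnist_params.npz")   # assumed export from the training platform
n_layers = len(params.files) // 2      # one weight and one bias array per layer

def forward(x):
    # Execute the network in the forward direction, layer by layer:
    # ReLU on the hidden layers, Softmax on the output layer.
    a = x
    for i in range(n_layers):
        z = a @ params[f"W{i}"] + params[f"b{i}"]
        a = relu(z) if i < n_layers - 1 else softmax(z)
    return a

# Validate against the transferred test set
# (x: (10000, 784) floats in 0..1, y: integer labels; file name is an assumption).
data = np.load("mnist_test.npz")
accuracy = (forward(data["x"]).argmax(axis=1) == data["y"]).mean()
print(f"test accuracy: {accuracy:.3f}")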

Testing on the Raspberry Pi is done to validate correct execution. One way to show this is the accuracy on the test set: it reaches the same value of 97.5%.