Recently I completed a project on automated elephant detection using TensorFlow. For the object detection I used TensorFlow and OpenCV, running on a Raspberry Pi. I chose a Raspberry Pi rather than a Windows machine because the system sounds an alarm at the detection spot once an elephant is detected through an IP camera. With the Raspberry Pi it is easy to produce the alarm right on the spot, especially in hilly areas where the radio-frequency signal might be too weak to reach a remote Windows platform.
In this article I will show you how to detect an elephant using a Raspberry Pi and then sound an alarm. So let’s start.
Automated Elephant detection system guideline steps:
1. First, set up the environment by connecting your camera to the Raspberry Pi. You can also use your laptop’s display as the Raspberry Pi’s screen; I have made a tutorial on how to do that.
2. After setting up the environment, update your Raspberry Pi. Open a terminal and run:
sudo apt-get update
sudo apt-get dist-upgrade
3. Now install TensorFlow on your Raspberry Pi. It can go in your home directory or any other directory; here, create a folder “eds” in /home/pi where all the installation will be done. Let’s install TensorFlow and its other dependencies:
mkdir eds
cd eds
sudo apt install libatlas-base-dev
pip3 install tensorflow
sudo pip3 install pillow lxml jupyter matplotlib cython
sudo apt-get install python-tk
4. Install OpenCV, along with a few dependencies (e.g. libjpeg-dev, libxvidcore-dev and the qt4-dev tools) that need to be installed through apt-get. If any of the following commands fail, update your Raspberry Pi and try again.
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install qt4-dev-tools
pip3 install opencv-python
5. In this part you have to install Protobuf, which implements Google’s Protocol Buffer data format and is used by the TensorFlow Object Detection API.
Execute the first command to get the packages needed to compile Protobuf. Then execute the second command to download Protobuf from its GitHub repository and the third to unpack it, and enter the folder with cd.
sudo apt-get install autoconf automake libtool curl
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.5.1/protobuf-all-3.5.1.tar.gz
tar -zxvf protobuf-all-3.5.1.tar.gz
cd protobuf-3.5.1
6. Execute the following command to configure the build (it takes about 2 minutes):
./configure
7. Then execute the following command to build the packages. This is quite a lengthy process; it usually takes more than an hour to complete, especially on a Raspberry Pi.
make
8. After that completes, execute the following command. This process takes even longer than the previous one; on my Raspberry Pi it took almost two hours. Note that this command may exit with errors, but Protobuf will still work, so you can ignore them for now.
make check
9. Now install the build with the following command, then enter the python directory with cd and export the library path.
sudo make install
cd python
export LD_LIBRARY_PATH=../src/.libs
10. Next execute the following commands one by one.
python3 setup.py build --cpp_implementation
python3 setup.py test --cpp_implementation
sudo python3 setup.py install --cpp_implementation
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION=3
11. This is the end of installing Protobuf on your Raspberry Pi. You can verify the installation by running the following command, then reboot your Raspberry Pi.
protoc
sudo reboot now
Set up TensorFlow Directory Structure
Now that we’ve installed all the packages, we need to set up the TensorFlow directory. Move back to your home directory, then make a directory called “tensorflow1”, and cd into it.
mkdir tensorflow1
cd tensorflow1
Download the TensorFlow models repository from GitHub by issuing:
git clone --recurse-submodules https://github.com/tensorflow/models.git
Next, we need to modify the PYTHONPATH environment variable to point at some directories inside the TensorFlow repository we just downloaded. We want PYTHONPATH to be set every time we open a terminal, so we have to modify the .bashrc file. Open it by issuing:
sudo nano ~/.bashrc
Move to the end of the file, and on the last line, add:
export PYTHONPATH=$PYTHONPATH:/home/pi/tensorflow1/models/research:/home/pi/tensorflow1/models/research/slim
Then, save and exit the file. This makes it so the “export PYTHONPATH” command is called every time you open a new terminal, so the PYTHONPATH variable will always be set appropriately. Close and then re-open the terminal.
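If you want to confirm that the new terminal really picked up the PYTHONPATH change, a quick sanity check can be done in Python. This helper function is my own illustration (not part of the original tutorial): it compares a module search path against the directories you expect to find there.

```python
import sys

def missing_paths(required, search_path=None):
    """Return the entries of `required` that are absent from the
    Python module search path (sys.path by default)."""
    if search_path is None:
        search_path = sys.path
    return [p for p in required if p not in search_path]

# Example with a simulated search path; on the Pi you would call
# missing_paths([research, slim]) with no second argument.
research = "/home/pi/tensorflow1/models/research"
slim = research + "/slim"
simulated = ["", "/usr/lib/python3.5", research]
print(missing_paths([research, slim], simulated))  # only the slim dir is missing
```

An empty list means everything is on the path and the Object Detection API modules should import cleanly.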
Now, we need to use Protoc to compile the Protocol Buffer (.proto) files used by the Object Detection API. The .proto files are located in /research/object_detection/protos, but we need to execute the command from the /research directory. Issue:
cd /home/pi/tensorflow1/models/research
protoc object_detection/protos/*.proto --python_out=.
This command converts all the "name".proto files to "name_pb2.py" files. Next, move into the object_detection directory:
cd object_detection
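The renaming that protoc performs with --python_out follows a fixed convention, which can be sketched in plain Python (an illustration only, nothing you need to run on the Pi):

```python
def pb2_name(proto_filename):
    """Map a .proto source name to the module file that protoc
    generates with --python_out, e.g. 'ssd.proto' -> 'ssd_pb2.py'."""
    assert proto_filename.endswith(".proto")
    return proto_filename[: -len(".proto")] + "_pb2.py"

print(pb2_name("anchor_generator.proto"))  # anchor_generator_pb2.py
```

So after compilation you should see a *_pb2.py file next to each .proto file in object_detection/protos.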
Now, we’ll download the SSD_Lite model from the TensorFlow detection model zoo. The model zoo is Google’s collection of pre-trained object detection models that have various levels of speed and accuracy. The Raspberry Pi has a weak processor, so we need to use a model that takes less processing power.
Download the SSDLite-MobileNet model and unpack it by issuing:
wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
tar -xzvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
Now the model is in the object_detection directory and ready to be used.
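The file the detection script actually loads is the frozen inference graph inside the unpacked folder. As a small sketch (assuming the standard model-zoo layout, where the folder contains a frozen_inference_graph.pb), the path can be built like this:

```python
import os

MODEL_NAME = "ssdlite_mobilenet_v2_coco_2018_05_09"

def frozen_graph_path(base_dir, model_name=MODEL_NAME):
    """Build the path to the frozen graph inside an unpacked
    model-zoo folder (assumes the standard TF1 zoo layout)."""
    return os.path.join(base_dir, model_name, "frozen_inference_graph.pb")

print(frozen_graph_path("/home/pi/tensorflow1/models/research/object_detection"))
```

If that file exists, the model is ready to be loaded into a TensorFlow graph.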
Okay, now everything is set up for performing object detection on the Pi! If you’re using a Picamera, make sure it is enabled in the Raspberry Pi configuration menu.
Download the elephant detection Python code from below and run it from a terminal.
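The full script is the download above, but the core alarm decision inside it can be sketched in a few lines. In the COCO label map that this SSDLite model was trained on, "elephant" is class 22; the function name and threshold below are my own illustration, not taken from the original script:

```python
ELEPHANT_CLASS_ID = 22  # "elephant" in the COCO label map (assumption: standard COCO ids)

def should_sound_alarm(classes, scores, min_score=0.60):
    """Given parallel lists of detected class ids and confidence
    scores for one frame, decide whether an elephant was seen with
    enough confidence to trigger the alarm."""
    return any(c == ELEPHANT_CLASS_ID and s >= min_score
               for c, s in zip(classes, scores))

# One frame's detections: a person (class 1) at 0.90 and an elephant (22) at 0.75.
print(should_sound_alarm([1, 22], [0.90, 0.75]))  # True
```

On the Pi, the True case can then drive a buzzer or siren through a GPIO pin, which is what makes the Raspberry Pi a good fit for sounding the alarm on the spot.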
I hope you have enjoyed the article and the two videos above. To do a real project like this, you can email me at email@example.com