Now that the program works on the desktop, we can make an embedded system from it. The details given here are specific to Raspberry Pi, but similar steps apply when developing for other embedded Linux systems such as BeagleBone, ODROID, Olimex, Jetson, and so on.
There are several different options for running our code on an embedded system, each with some advantages and disadvantages in different scenarios.
There are two common methods for compiling the code for an embedded device:
- Copy the source code from the desktop onto the device and compile it directly on the device. This is often referred to as native compilation, since we are compiling the code natively on the same system that it will eventually run on.
- Compile all the code on the desktop, using special methods to generate code for the device, and then copy the final executable program onto the device. This is often referred to as cross-compilation, since it requires a special compiler that knows how to generate code for other types of CPUs.
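As a rough sketch of the two approaches, assuming a Debian-style desktop with the `arm-linux-gnueabihf` cross-toolchain installed, and a hypothetical `$ARM_SYSROOT` directory containing ARM builds of OpenCV (the file names and paths here are illustrative, not prescriptive):

```shell
# Native compilation: run this directly on the device (e.g. over SSH),
# after copying the source code onto it. Assumes g++ and an OpenCV
# development package are installed on the device.
g++ main.cpp -o my_app $(pkg-config --cflags --libs opencv4)

# Cross-compilation: run this on the desktop instead. The toolchain name
# below is the common Debian/Ubuntu package for 32-bit ARM hard-float
# targets; adjust it for your device. OpenCV must also have been
# cross-compiled for ARM and placed in the sysroot.
arm-linux-gnueabihf-g++ main.cpp -o my_app \
    -I"$ARM_SYSROOT/usr/include/opencv4" \
    -L"$ARM_SYSROOT/usr/lib" -lopencv_core -lopencv_imgproc

# Then copy the cross-compiled executable onto the device:
scp my_app pi@raspberrypi.local:~/
```

The extra `-I`/`-L` flags in the cross-compilation case hint at why it is harder to configure: every library the program links against must exist in an ARM build that the cross-compiler can find.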
Cross-compilation is often significantly harder to configure than native compilation, especially if you are using many shared libraries, but since your desktop is usually much faster than your embedded device, it is often far quicker at compiling large projects. If you expect to compile your project hundreds of times over several months of work, and your device is quite slow compared to your desktop (such as a Raspberry Pi 1 or Raspberry Pi Zero), then cross-compilation is a good idea. But in most cases, especially for small, simple projects, you should just stick with native compilation since it is easier.
Note that all the libraries used by your project will also need to be compiled for the device, so you will need to compile OpenCV for your device. Natively compiling OpenCV on a Raspberry Pi 1 can take hours, whereas cross-compiling OpenCV on a desktop might take just 15 minutes. But you usually only need to compile OpenCV once and then you'll have it for all your projects, so it is still worth sticking with native compilation of your project (including the native compilation of OpenCV) in most cases.
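A native OpenCV build on the device itself might look something like the following sketch. This assumes a Raspberry Pi running a Debian-based OS with network access; the package names and CMake flags are typical but illustrative, and the exact set you need depends on your OpenCV version and which modules you use:

```shell
# Install the basic build tools (illustrative package list).
sudo apt-get install build-essential cmake git pkg-config

# Fetch the OpenCV sources and configure a release build.
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release \
      -D CMAKE_INSTALL_PREFIX=/usr/local ..

# On low-memory boards such as the Raspberry Pi 1, a single-threaded
# build (-j1) avoids running out of RAM, at the cost of taking hours.
make -j1
sudo make install
```

Once this finishes, `pkg-config --cflags --libs opencv4` (or the equivalent for your OpenCV version) should work for all your projects on the device, which is why the one-time cost of a slow native build is usually acceptable.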
There are also several options for how to run the code on an embedded system:
- Use the same input and output methods you used on the desktop, such as the same video files, USB webcam, or keyboard as input, and display text or graphics on an HDMI monitor in the same way you were doing on the desktop.
- Use special devices for input and output. For example, instead of sitting at a desk with a USB webcam and keyboard as input and a desktop monitor as output, you could use the Raspberry Pi Camera Module for video input, custom GPIO push buttons or sensors for input, and a 7-inch MIPI DSI screen or GPIO LED lights as output. Powering it all from a common portable USB charger, you could then carry the whole computer platform in your backpack or attach it to your bicycle!
- Another option is to stream data into or out of the embedded device to other computers, or even to have one device stream out the camera data while another device processes it. For example, you can use the GStreamer framework to configure the Raspberry Pi to stream H.264-compressed video from its camera module over Ethernet or Wi-Fi, so that a powerful PC or server rack on the local network, or the Amazon AWS cloud computing services, can process the video stream somewhere else. This approach allows a small, cheap camera device to be used in a complex project whose heavy processing happens elsewhere.
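A streaming setup along the lines of the last option could be sketched as follows. This assumes the legacy `raspivid` camera tool and GStreamer 1.0 are installed on the Pi, and uses a placeholder receiver address (`192.168.1.10`) and port; newer Raspberry Pi OS releases replace `raspivid` with `libcamera-vid`, which can be substituted in the same pipeline:

```shell
# On the Raspberry Pi: capture H.264 from the camera module and send it
# as an RTP stream over UDP to a more powerful machine on the network.
raspivid -t 0 -w 1280 -h 720 -fps 30 -o - | \
  gst-launch-1.0 fdsrc ! h264parse ! \
    rtph264pay config-interval=1 pt=96 ! \
    udpsink host=192.168.1.10 port=5000

# On the receiving PC: depacketize, decode, and display the stream
# (the processing machine could equally feed the decoded frames into
# an OpenCV pipeline instead of a display sink).
gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp, media=video, encoding-name=H264, payload=96" ! \
  rtph264depay ! avdec_h264 ! autovideosink
```

Because the Pi's camera hardware produces H.264 directly, the device itself does almost no work beyond packetizing the stream, which is exactly what makes this pattern attractive for low-powered boards.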
If you do wish to perform computer vision on board the device, be aware that some low-cost embedded devices, such as the Raspberry Pi 1, Raspberry Pi Zero, and BeagleBone Black, have significantly less computing power than desktops, or even than cheap netbooks or smartphones: perhaps 10 to 50 times slower than your desktop. Depending on your application, you might therefore need a powerful embedded device, or you might need to stream video to a separate computer, as mentioned previously. If you don't need much computing power (for example, you only need to process one frame every 2 seconds, or you only need 160 x 120 image resolution), then a Raspberry Pi Zero running some computer vision on board might be fast enough for your requirements. But many computer vision systems need far more computing power, so if you want to perform computer vision on board the device, you will often want a much faster device with a CPU in the range of 2 GHz, such as a Raspberry Pi 3, ODROID-XU4, or Jetson TK1.