Linux: Embedded Development

By: Alexandru Vaduva, Alex Gonzalez, Chris Simmonds

Overview of this book

Embedded Linux is a complete Linux distribution employed to operate embedded devices such as smartphones, tablets, PDAs, set-top boxes, and many more. An example of an embedded Linux distribution is Android, developed by Google. This learning path starts with the module Learning Embedded Linux Using the Yocto Project. It introduces embedded Linux software and hardware architecture and presents information about the bootloader. You will go through Linux kernel features and source code and get an overview of the Yocto Project components available. The next module, Embedded Linux Projects Using Yocto Project Cookbook, takes you through the installation of a professional embedded Yocto setup, then advises you on best practices. Finally, it explains how to quickly get hands-on with the Freescale ARM ecosystem and community layer using the affordable and open source Wandboard embedded board. Moving ahead, the final module, Mastering Embedded Linux Programming, takes you through the product cycle and gives you an in-depth description of the components and options that are available at each stage. You will see how functions are split between processes and the usage of POSIX threads. By the end of this learning path, your capabilities will be enhanced to create robust and versatile embedded projects. This Learning Path combines some of the best that Packt has to offer in one complete, curated package. It includes content from the following Packt products:

- Learning Embedded Linux Using the Yocto Project by Alexandru Vaduva
- Embedded Linux Projects Using Yocto Project Cookbook by Alex González
- Mastering Embedded Linux Programming by Chris Simmonds

In this chapter, you will be given a brief introduction to a number of tools that address various problems and solve them in ingenious ways. Think of this chapter as an appetizer: if any of the tools presented here interests you, I encourage you to feed your curiosity and find out more about that particular tool. Of course, this advice applies to any information presented in this book, but it holds particularly true here, because I've chosen a more general description for the tools presented. I've done this assuming that some of you may not be interested in lengthy descriptions and would rather focus on the development process. For the rest of you who want to learn more about other key areas, please feel free to follow the pointers to further information available throughout the chapter.

In this chapter, a more detailed explanation of components such as Swabber, Wic, and LAVA will be offered. These are not tools that an embedded developer will encounter every day, though interacting with them can make life a little easier. The first thing I should mention about these tools is that they have nothing in common with each other and address very different requests. Swabber, the first tool presented here, is used to detect accesses to the host development machine, while the second tool, Wic, represents a solution to the limitations that BitBake has with complex packaging options. The last element presented in this chapter is LAVA, an automated testing framework from Linaro, a project that, in my opinion, is very interesting to watch. Combined with a continuous integration tool such as Jenkins, it could make a killer combination for every taste.

Swabber is a project that, although presented on the Yocto Project's official page, is declared a work in progress; no activity has taken place on it since September 18, 2011. It does not have a maintainers file where you could find more information about its creators, but the committers list should be enough for anyone interested in taking a deeper look at the project.

This tool was selected for a short introduction in this chapter because it offers another point of view on the Yocto Project's ecosystem. A mechanism for detecting accesses to the host system is certainly not a bad idea, because such accesses could be problematic for your builds, but it is not the first tool that comes to mind when developing software. When you have the possibility of redoing your build and inspecting your host ecosystem manually, you tend to lose sight of the fact that tools are available for this task too, and that they could make your life easier.

For interaction with Swabber, the repository needs to be cloned first. The following command can be used for this purpose:
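The clone command itself did not survive extraction; assuming the repository is still hosted on the Yocto Project git server, it would look something like this:

git clone http://git.yoctoproject.org/git/swabber
cd swabber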

After the source code is available on the host, the content of the repository should look as follows:

As you can see, this is not a major project, but a number of tools made available by a passionate few, including two people from Wind River: Alex deVries and David Borman. They worked on their own on the previously presented tools and made them available for the open source community to use. Swabber is written in C, which is a big shift from the usual Python/Bash tools and other projects offered by the Yocto Project community. Every tool has its own purpose, the common point being that all of them are built using the same Makefile. Of course, the project isn't restricted to binaries only; there are also two Bash scripts available, one for distribution detection and one for updating.

Note

More information about the tool can be obtained from its creators. Their e-mail addresses are available in the commits for the project. However, please note that these are workplace e-mail IDs, and the people who worked on Swabber may not have the same addresses anymore.

The interaction with the Swabber tools is well described in the README file, where information regarding the setup and running of Swabber is available. For your sake, this is also summarized in the next few lines, so that you can get up to speed more quickly and easily.

The first required step is the compilation of the sources, done by invoking the make command. After the source code is built and the executables are available, the host distribution can be profiled using the update_distro command, followed by the location of the distribution directory. The name we've chosen for it is Ubuntu-distro-test, and it is specific to the host distribution on which the tool is executed. The generation process can take some time at first, but after that, any changes to the host system will be detected and the process will take less time. At the end of the profiling process, this is how the content of the Ubuntu-distro-test directory looks:

Ubuntu-distro-test/
├── distro
├── distro.blob
├── md5
└── packages
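For reference, the build-and-profile sequence described above could look as follows (a sketch; Ubuntu-distro-test is simply the profile directory name chosen for this example):

make
./update_distro Ubuntu-distro-test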

After the host distribution is profiled, a Swabber report can be generated based on the profile created. Before creating the report, a profile log can also be created to be used during the reporting process. To generate the report, we first create a log location with the specific trace information; once the logs are available, the reports can be generated.
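The original commands were not preserved here. As a hedged illustration, the trace log that feeds the reporting step might be collected with strace (the options shown are standard strace flags; the logs directory and the traced ls command are examples only):

mkdir logs
strace -f -e trace=open,execve -o logs/trace.log ls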

This information was required by the tool, as shown in its help information:

From the help information attached in the preceding code, the role of the arguments selected for the test command can be deduced. An inspection of the tool's source code is also recommended, given how small the codebase is; no C file exceeds 1,550 lines, the biggest one being swabber.c.

The required.txt file contains information about the packages used, as well as the package-specific files. More information regarding the configuration is available in the extra.txt file, including files and packages that can be accessed, various warnings, files that are not available in the host database, and errors as well as files that are considered dangerous.

For the command on which the tracing was done, there is not much output information. It has only been offered as an example; I encourage you to try various scenarios and familiarize yourself with the tool. It could prove helpful to you later.

Wic is a command-line tool that can also be seen as an extension of the BitBake build system. It was developed out of the need for a partitioning mechanism and a description language. As can easily be concluded, BitBake lacks in these areas, and although initiatives were taken to make such functionality available inside the BitBake build system, this was only possible to an extent; for more complex tasks, Wic can be an alternative solution.

In the following lines, I will try to describe the problem associated with BitBake's lack of functionality and how Wic can solve it in an easy manner. I will also show you how this tool was born and what its source of inspiration was.

When an image is built using BitBake, the work is done inside an image recipe that inherits image.bbclass for a description of its functionality. Inside this class, the do_rootfs() task is the one responsible for creating the root filesystem directory that will later be included in the final package, containing all the sources necessary to boot a Linux image on various boards. With the do_rootfs() task finished, a number of commands are invoked to generate an output for each of the defined image types. The image types are defined through the IMAGE_FSTYPES variable, and for each image output type there is a corresponding IMAGE_CMD_type variable, defined either in an external layer for extra types or in the image_types.bbclass file for the base types.

The command behind each of these types is, in fact, a shell command specific to the defined root filesystem format. The best example of this is the ext3 format, for which the IMAGE_CMD_ext3 variable is defined and the following commands are invoked:
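The exact listing is missing here; as a hedged sketch, in older image_types.bbclass revisions the ext3 command boiled down to a genext2fs invocation followed by tune2fs -j to add the journal (variable names follow the usual Poky conventions; consult your Poky release for the exact definition):

IMAGE_CMD_ext3 () {
    # Populate an ext2 filesystem from the root filesystem directory
    genext2fs -b $ROOTFS_SIZE -d ${IMAGE_ROOTFS} \
        ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.ext3
    # Add a journal, turning the ext2 filesystem into ext3
    tune2fs -j ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.ext3
}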

After the commands are called, the output is an image-*.ext3 file: a newly created ext3 filesystem, as requested through the IMAGE_FSTYPES variable, incorporating the root filesystem content. This example presents a very common and basic set of filesystem creation commands. Of course, more complex options could be required in an industrial environment, options that incorporate more than the root filesystem and add an extra kernel or even the bootloader alongside it, for instance. For these complex options, extensive mechanisms or tools are necessary.

The mechanism implemented in the Yocto Project is visible inside the image_types.bbclass file through the IMAGE_CMD_type variable and has this form:
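The listing did not survive extraction; the general pattern is a class file that defines a command for the new type. Here, image_types_foo and the foo type are placeholder names used only for this discussion:

# image_types_foo.bbclass (hypothetical example layer class)
IMAGE_CMD_foo () {
    # shell commands that assemble the foo image format
    # from the contents of ${IMAGE_ROOTFS}
}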

To use the newly defined image format, the machine configuration needs to be updated accordingly, using lines similar to the following:
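The listing is missing here; a sketch of the usual update, consistent with the inherit ${IMAGE_CLASSES} mechanism described next, would be:

IMAGE_CLASSES += "image_types_foo"
IMAGE_FSTYPES = "foo"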

Through the inherit ${IMAGE_CLASSES} line inside the image.bbclass file, the newly defined image_types_foo.bbclass file's functionality becomes visible and ready to be used and added to the IMAGE_FSTYPES variable.

The preceding implementation implies that for each implemented filesystem, a series of commands are invoked. This is a good and simple method for a very simple filesystem format. However, for more complex ones, a language would be required to define the format, its state, and in general, the properties of the image format. Various other complex image format options, such as vmdk, live, and directdisk file types, are available inside Poky. They all define a multistage image formatting process.

To use the vmdk image format, a vmdk value needs to be defined in the IMAGE_FSTYPES variable. However, for this image format to be generated and recognized, the image-vmdk.bbclass file's functionality should be available and inherited. With this functionality available, three things happen:

This functionality offers the possibility of generating images that can be copied directly onto a hard disk. At its base, the syslinux configuration file is generated, and two partitions are required for the boot process. The end result consists of an MBR and partition table section, followed by a FAT16 partition containing the boot files, SYSLINUX, and the Linux kernel, and an EXT3 partition for the root filesystem. This image format is also responsible for moving the Linux kernel and the syslinux.cfg and ldlinux.sys configurations onto the first partition, and for copying the EXT3 image onto the second partition using the dd command. At the end of this process, space is reserved for the root user with the tune2fs command.

Historically, the usage of directdisk was hardcoded in its first versions. For every image recipe, there was a similar implementation that mirrored the basic one and hardcoded the inheritance of the image.bbclass functionality inside the recipe. In the case of the vmdk image format, the inherit boot-directdisk line is added.

With regard to custom-defined image filesystem types, one such example can be found inside the meta-fsl-arm layer, in the imx23evk.conf machine definition. This machine adds the following two image filesystem types: uboot.mxsboot-sdcard and sdcard.
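The configuration snippet itself did not survive extraction; in the meta-fsl-arm layer of that era, the relevant lines looked roughly like this (paraphrased from memory, so check the layer for the exact content):

include conf/machine/include/mxs-base.inc
IMAGE_FSTYPES ?= "uboot.mxsboot-sdcard sdcard"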

The mxs-base.inc file included in the preceding lines in turn includes the conf/machine/include/fsl-default-settings.inc file, which in turn adds the IMAGE_CLASSES += "image_types_fsl" line, as in the general case presented earlier. This makes the IMAGE_CMD commands for the uboot.mxsboot-sdcard format execute first, followed by the IMAGE_CMD commands specific to the sdcard image format.

The image_types_fsl.bbclass file defines the IMAGE_CMD commands, as follows:
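The listing is missing here; what follows is a hedged reconstruction of the uboot.mxsboot-sdcard command from the meta-fsl-arm layer of that period (variable names may differ slightly between layer revisions):

IMAGE_CMD_uboot.mxsboot-sdcard = "mxsboot sd ${DEPLOY_DIR_IMAGE}/u-boot-${MACHINE}.${UBOOT_SUFFIX} \
    ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.uboot.mxsboot-sdcard"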

At the end of the execution process, the uboot.mxsboot-sdcard image is created using the mxsboot command. Following this, the IMAGE_CMD_sdcard specific commands are called to calculate the SD card size and alignment, initialize the deploy space, set the appropriate partition type to the 0x53 value, and copy the root filesystem onto it. At the end of the process, several partitions are available, and they are assembled to package a bootable image.

There are multiple methods for creating various filesystems, spread over a large number of existing Yocto layers, with some documentation available for the general public. There are even a number of scripts used to create a filesystem suited to a developer's needs. One such example is the scripts/contrib/mkefidisk.sh script, used to create an EFI-bootable direct disk image from another image format, that is, a live.hddimg one. However, the main idea remains: this kind of activity should be done without intermediate image filesystems generated in intermediary phases, and with a partitioning language capable of handling complicated scenarios.

Keeping this information in mind, it seems that in the preceding example, we should have used another script. Considering the fact that it is possible to build an image both from within the build system and outside of it, a search for tools that fit these needs was started. This search ended at the Fedora kickstart project. Although its syntax also suits areas involving deployment efforts, it was found to be of most help to developers.

From this project, the most used and interesting components were clearpart, part, and bootloader, and these are useful for our purposes as well. They can also be found inside the Yocto Project's Wic tool's configuration files. Whereas the configuration files of the Fedora kickstart project use the .ks extension, the configuration files read by Wic use the .wks extension. One such configuration file is structured as follows:

def pre():
    free-form python or named 'plugin' commands

clearpart commands
part commands
bootloader commands
named 'plugin' commands

def post():
    free-form python or named 'plugin' commands

The idea behind the preceding layout is very simple: the clearpart component is used to clear the disk of any partitions, while the part component is used for the reverse, that is, creating and installing the filesystem. The third component defined is the bootloader, which installs the bootloader and also handles the corresponding information received from the part component; it makes sure that the boot process works as described inside the configuration file. The functions defined as pre() and post() are used for pre- and post-image-creation processing, such as staging image artifacts or other complex tasks.
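To make this concrete, here is a hedged example in the spirit of the canned directdisk.wks file shipped with Poky (paraphrased, so the exact options may differ in your release):

part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024
part / --source rootfs --ondisk sda --fstype=ext3 --label platform --align 1024

bootloader --timeout=0 --append="rootwait rootfstype=ext3"

The two part lines create and populate the boot and root partitions, while the bootloader line configures how the installed bootloader will boot the resulting disk image.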

As shown in the preceding description, the interaction with the Fedora kickstart project was very productive and interesting, but the Wic project's source code is written in Python. This is because a Python implementation of a similar tool was searched for, and it was found in the form of the pykickstart library. The same library was also used by the MeeGo project in its MeeGo Image Creator (MIC) tool, which served MeeGo's specific image creation process. Later, this tool was inherited by the Tizen project.

Wic, the tool that I promised to present in this section, is derived from the MIC project, and both of them use the kickstart project, so all three are based on plugins that define the behavior of the process of creating various image formats. In its first implementation, Wic was mostly a copy of MIC's functionality; here, I am referring to the Python classes MIC defines, which were almost entirely copied into Poky. Over time, however, the project started to have its own implementation, and its own personality. From version 1.7 of the Poky repository, no direct reference to MIC's Python-defined classes remained, making Wic a standalone project with its own defined plugins and implementations. Here is how you can inspect the various configurations of formats available inside Wic:

tree scripts/lib/image/canned-wks/
scripts/lib/image/canned-wks/
├── directdisk.wks
├── mkefidisk.wks
├── mkgummidisk.wks
└── sdimage-bootpart.wks

Only a handful of configurations are defined inside Wic. However, considering the fact that interest in this tool has grown over the last few years, we can only hope that the number of supported configurations will increase.

I mentioned previously that the MIC and Fedora kickstart project dependencies were removed, but a quick search inside the Poky scripts/lib/wic directory will reveal otherwise. This is because Wic and MIC both have the same foundation, the pykickstart library. Though Wic was once heavily based on MIC, and both have the same parent, the kickstart project, their implementations, functionalities, and various configurations make them different entities, which, although related, have taken different paths of development.

LAVA (Linaro Automated Validation Architecture) is a continuous integration system that concentrates on physical target or virtual hardware deployments where series of tests are executed. The executed tests are of a large variety, from the simplest ones, which only require booting a target, to very complex scenarios that require external hardware interaction.

LAVA represents a collection of components used for automated validation. The main idea behind the LAVA stack is to create a quality-controlled testing and automation environment suitable for projects of all sizes. For a closer look at a LAVA instance, you can inspect an already created one: the official production instance hosted by Linaro in Cambridge. You can access it at https://validation.linaro.org/. I hope you enjoy working with it.

The LAVA framework offers support for the following functionalities:

LAVA is primarily written in Python, which is no different from what the Yocto Project offers us. As with the Toaster project, LAVA also uses the Django framework for its web interface, and the project is hosted using the Git versioning system. This is no surprise, since we are talking about Linaro, a not-for-profit organization that works on free and open source projects. Therefore, the rule of thumb is that all changes made to the project should be returned to the upstream project, making it easier to maintain, as well as more robust and better performing.

For testing with the LAVA framework, the first step would be to understand its architecture. Knowing this helps not only with test definitions, but also with extending them, as well as the development of the overall project. The major components of this project are as follows:

               +-------------+
               |web interface|
               +-------------+
                      |
                      v
                  +--------+
            +---->|database|
            |     +--------+
            |
+-----------+------[worker]-------------+
|           |                           |
|  +----------------+     +----------+  |
|  |scheduler daemon|---->|dispatcher|  |
|  +----------------+     +----------+  |
|                              |        |
+------------------------------+--------+
                               |
                               V
                     +-------------------+
                     | device under test |
                     +-------------------+

The first component, the web interface, is responsible for user interaction. It stores data and submitted jobs using an RDBMS, and is also responsible for displaying results, for device navigation, and for receiving job submissions through the XML-RPC API. Another important component is the scheduler daemon, which is responsible for the allocation of jobs. Its activity is quite simple: it polls data from the database and reserves devices for jobs, which are then handed to the dispatcher, another important component. The dispatcher is the component responsible for running the actual jobs on the devices; it also manages communication with the device, downloads images, and collects results.

There are scenarios when only the dispatcher can be used; these involve running local tests or developing testing features. There are also scenarios where all the components run on the same machine, such as a single deployment server. Of course, the desired scenario is to have the components decoupled: the server on one machine, the database on another, and the scheduler daemon and dispatcher on a separate machine.

For the development process with LAVA, the recommended host machines are Debian and Ubuntu. The Linaro development team working on LAVA prefers the Debian distribution, but it works well on an Ubuntu machine too. One thing needs to be mentioned for Ubuntu machines: make sure that the universe repositories are available and visible to your package manager.

The first package that is necessary is lava-dev; it ships scripts that indicate the package dependencies necessary to assure a working LAVA environment. Here are the commands required to do this:
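On Debian, the installation itself is a single package transaction (the package name comes from the text above):

sudo apt-get update
sudo apt-get install lava-dev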

Depending on the location of a change, various actions are required. For example, for a change to the HTML content in the templates directory, refreshing the browser will suffice, but any change made to the Python implementation in the *_app directories will require a restart of the Apache HTTP server, and any change made to the Python sources in the *_daemon directories will require a full restart of lava-server.
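For reference, those restarts could be performed as follows (a sketch; the lava-server service name comes from the Debian packaging described in this section):

sudo apache2ctl restart
sudo service lava-server restart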

To install LAVA or any LAVA-related packages on a 64-bit Ubuntu 14.04 machine, new package dependencies are required, in addition to enabling the universe repositories and adding the deb http://people.linaro.org/~neil.williams/lava jessie main repository line, besides the installation process described previously for the Debian distribution. I must mention that when the lava-dev package is installed, the user is prompted with a menu asking for the nullmailer mailname. I've chosen to keep the default, which is actually the hostname of the computer running the nullmailer service. I've also kept the default smarthost configuration, and the installation process continued. The following are the commands necessary to install LAVA on an Ubuntu 14.04 machine:
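The command listing did not survive extraction; here is a hedged reconstruction using the repository line quoted above (archive key handling is omitted and may be required):

sudo add-apt-repository universe
echo "deb http://people.linaro.org/~neil.williams/lava jessie main" | \
    sudo tee /etc/apt/sources.list.d/lava.list
sudo apt-get update
sudo apt-get install lava-dev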