Swabber is a project that, although presented on the Yocto Project's official page, is still described as a work in progress; there has been no activity on it since September 18, 2011. It does not have a maintainers file where you can find more information about its creators. However, the committers list should be enough for anyone interested in taking a deeper look at this project.
To interact with Swabber, the repository first needs to be cloned. The following command can be used for this purpose:
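As a sketch, assuming the repository location listed on the Yocto Project git server (verify the URL before use):

```shell
# Clone the Swabber repository (URL assumed from the Yocto git server listing)
git clone git://git.yoctoproject.org/swabber
cd swabber
```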
As you can see, this project is not a major one, but consists of a number of tools made available by a passionate few. This includes two people from Wind River: Alex deVries and David Borman. They worked on their own on the previously presented tools and made them available for the open source community to use. Swabber is written in the C language, which is a big shift from the usual Python/Bash tools and other projects offered by the Yocto Project community. Every tool has its own purpose, the common point being that all the tools are built using the same Makefile. Of course, the project is not restricted to binaries only; there are also two Bash scripts available, for distribution detection and update.
Note
More information about the tool can be found from its creators. Their e-mail addresses, which are available in the commits for the project, are <[email protected]> and <[email protected]>. However, please note that these are workplace e-mail addresses, and the people who worked on Swabber may no longer use them.
Interaction with the Swabber tools is well described in the README file, which contains information on setting up and running Swabber. For your convenience, the steps are also summarized in the next few lines, so that you can understand them more quickly and easily.
The first required step is compiling the sources, which is done by invoking the make command. After the source code is built and the executables are available, the host distribution can be profiled using the update_distro command, followed by the location of the distribution directory. The name we've chosen for it is Ubuntu-distro-test, and it is specific to the host distribution on which the tool is executed. The generation process can take some time at first, but after that, any changes to the host system will be detected and the process will take less time. At the end of the profiling process, the content of the Ubuntu-distro-test directory looks like this:
Ubuntu-distro-test/
├── distro
├── distro.blob
├── md5
└── packages
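Collected as a command transcript, the steps described above can be sketched as follows; the update_distro invocation reflects the README description, so the exact arguments should be verified there:

```shell
# Build the Swabber tools; all of them share one Makefile
make

# Profile the host distribution into a directory of our choosing
./update_distro Ubuntu-distro-test
```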
After the host distribution is profiled, a Swabber report can be generated based on the created profile. Before creating the report, a profile log can also be created for later use in the reporting process. To generate the report, we first create a log file location with some specific log information. Once the logs are available, the reports can be generated:
This information is required by the tool, as shown in its help output:
From the help information attached in the preceding code, the roles of the arguments selected for the test command can be investigated. An inspection of the tool's source code is also recommended, given that no C file has more than 1550 lines, the biggest one being the swabber.c file.
Wic is a command-line tool that can also be seen as an extension of the BitBake build system. It was developed out of the need for a partitioning mechanism and a description language. As can easily be concluded, BitBake lacks in these areas, and although initiatives were taken to make such functionality available inside the BitBake build system, this was only possible to an extent; for more complex tasks, Wic can be an alternative solution.
When an image is built using BitBake, the work is done inside an image recipe that inherits image.bbclass for a description of its functionality. Inside this class, the do_rootfs() task is the one responsible for creating the root filesystem directory, which will later be included in the final package and contains all the sources necessary to boot a Linux image on various boards. With the do_rootfs() task finished, a number of commands are invoked to generate an output for each of the defined image types. The image types are defined through the IMAGE_FSTYPES variable, and for each image output type, there is a corresponding IMAGE_CMD_type variable, defined either in an external layer for extra types or in the image_types.bbclass file for base types.
The mechanism implemented in the Yocto Project is visible inside the image_types.bbclass file through the IMAGE_CMD_type variables, and has this form:
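For illustration, a base type definition in image_types.bbclass of that era looked roughly like the following; this is reproduced from memory, so treat the exact helper name and arguments as an approximation and check the file itself:

```bitbake
# Simple base type: one shell command producing the ext3 image
IMAGE_CMD_ext3 = "oe_mkext234fs ext3 ${EXTRA_IMAGECMD}"
```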
The preceding implementation implies that for each implemented filesystem, a series of commands is invoked. This is a good and simple method for very simple filesystem formats. However, for more complex ones, a language would be required to define the format, its state, and, in general, the properties of the image format. Various other complex image format options, such as the vmdk, live, and directdisk file types, are available inside Poky. They all define a multistage image formatting process.
Historically, the usage of directdisk was hardcoded in its first versions. For every image recipe, there was a similar implementation that mirrored the basic one, with the inheritance of the image.bbclass functionality hardcoded inside the recipe. In the case of the vmdk image format, the inherit boot-directdisk line is added.
The image_types_fsl.bbclass file defines the IMAGE_CMD commands, as follows:
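As an approximation from memory (the meta-fsl layer should be consulted for the exact definition), the sdcard type defined there follows this pattern:

```bitbake
# Sketch of the sdcard image type from image_types_fsl.bbclass;
# the real implementation also writes the bootloader and partition table.
IMAGE_CMD_sdcard () {
    # Fail early if the rootfs to be wrapped has not been defined
    if [ -z "${SDCARD_ROOTFS}" ]; then
        bberror "SDCARD_ROOTFS is undefined; set it to use the sdcard type"
        exit 1
    fi
    # ... partition the image and copy in bootloader, kernel and rootfs ...
}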
There are multiple methods to create various filesystems, spread over a large number of existing Yocto layers, with some documentation available for the general public. There are even a number of scripts used to create a filesystem suited to a developer's needs. One such example is the scripts/contrib/mkefidisk.sh script, which is used to create an EFI-bootable direct disk image from another image format, that is, a live.hddimg one. However, the main idea remains: this kind of activity should be done without intermediate image filesystems generated along the way, and with a partitioning language that is able to handle complicated scenarios.
From the Fedora kickstart project, the most used and interesting components were clearpart, part, and bootloader, and these are useful for our purposes as well. They are also available inside the Yocto Project's Wic tool through its configuration files. While the configuration files of the Fedora kickstart project use the .ks extension, the ones read by Wic use the .wks extension. One such configuration file is structured as follows:
def pre():
    free-form python or named 'plugin' commands

clearpart commands
part commands
bootloader commands
named 'plugin' commands

def post():
    free-form python or named 'plugin' commands
The idea behind the preceding script is very simple: the clearpart component is used to clear the disk of any partitions, while the part component is used for the reverse, that is, for creating and installing the filesystem. The third component defined is the bootloader, which is used for installing the bootloader and also handles the corresponding information received from the part component. It also makes sure that the boot process is done as described inside the configuration file. The functions defined as pre() and post() are used for pre- and post-image-creation processing, staging image artefacts, or other complex tasks.
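To make the processing flow concrete, here is a minimal, hypothetical sketch, not Wic's actual implementation, of how such a configuration file can be split into its part and bootloader commands; the sample lines resemble Poky's canned directdisk.wks:

```python
# Minimal illustrative parser for a kickstart-style Wic configuration.
# This is NOT Wic's real code; it only demonstrates the structure of a
# .wks file: one command per line, identified by its first word.
WKS_SAMPLE = """\
part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024
part / --source rootfs --ondisk sda --fstype=ext3 --label platform --align 1024
bootloader --timeout=0 --append="rootwait rootfstype=ext3 console=tty0"
"""

def parse_wks(text):
    """Group non-empty, non-comment lines by their leading command name."""
    commands = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, rest = line.partition(" ")
        commands.setdefault(name, []).append(rest)
    return commands

cmds = parse_wks(WKS_SAMPLE)
print(sorted(cmds))       # ['bootloader', 'part']
print(len(cmds["part"]))  # 2
```

A real implementation, such as the pykickstart library, also validates each command's options and supports the pre() and post() sections; this sketch only groups commands by name.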
As shown in the preceding description, the interaction with the Fedora kickstart project was very productive and interesting, but the source code of the Wic project is written in Python. This is due to the fact that a Python implementation of a similar tool was sought, and it was found in the form of the pykickstart library. That is not all: the same library was also used by the MeeGo project inside its MeeGo Image Creator (MIC) tool, which served a MeeGo-specific image creation process. Later, this project was inherited by the Tizen project.
Wic, the tool I promised to present in this section, is derived from the MIC project, and both of them use the kickstart project, so all three are based on plugins that define the behavior of the process of creating various image formats. In its first implementation, Wic was mostly a copy of the MIC functionality; here, I am referring to the Python classes MIC defines, which were almost entirely copied into Poky. However, over time, the project started to have its own implementation and its own personality. From version 1.7 of the Poky repository, no direct references to the MIC Python classes remain, making Wic a standalone project with its own plugins and implementations. Here is how you can inspect the various configuration formats available inside Wic:
tree scripts/lib/image/canned-wks/
scripts/lib/image/canned-wks/
├── directdisk.wks
├── mkefidisk.wks
├── mkgummidisk.wks
└── sdimage-bootpart.wks
These are the configurations currently defined inside Wic. However, considering that interest in this tool has grown in the last few years, we can only hope that the number of supported configurations will increase.
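With one of these canned configurations, a Wic invocation follows this pattern; core-image-minimal here is just an example of an already-built image whose artifacts are reused:

```shell
# Create a direct-disk image from the canned directdisk.wks definition,
# reusing the build artifacts of the core-image-minimal image (-e)
wic create directdisk -e core-image-minimal
```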
I mentioned previously that the MIC and Fedora kickstart project dependencies were removed, but a quick search inside the Poky scripts/lib/wic directory will reveal otherwise. This is because Wic and MIC share the same foundation, the pykickstart library. Though Wic was once heavily based on MIC, and both have the same parent, the kickstart project, their implementations, functionalities, and various configurations make them different entities which, although related, have taken different paths of development.
LAVA (Linaro Automation and Validation Architecture) is a continuous integration system that concentrates on physical or virtual hardware deployment, where a series of tests is executed. The executed tests vary greatly, from simple ones that only require booting a target to very complex scenarios that require external hardware interaction.
LAVA represents a collection of components used for automated validation. The main idea behind the LAVA stack is to create a quality-controlled testing and automation environment that is suitable for projects of all sizes. For a closer look at a LAVA instance, the reader can inspect an already created one, the official production instance hosted by Linaro in Cambridge. You can access it at https://validation.linaro.org/. I hope you enjoy working with it.
The LAVA framework offers support for the following functionalities:
- It supports scheduled automatic testing for multiple packages on various hardware platforms
- It makes sure that after a device crashes, the system restarts automatically
- It conducts regression testing
- It conducts continuous integration testing
- It conducts platform enablement testing
- It provides support for both local and cloud solutions
- It provides support for result bundles
- It provides measurements for performance and power consumption
For testing with the LAVA framework, the first step would be to understand its architecture. Knowing this helps not only with test definitions, but also with extending them, as well as the development of the overall project. The major components of this project are as follows:
+-------------+
|web interface|
+-------------+
      |
      v
 +--------+
 |database|
 +--------+
      |
+-----+------------[worker]--------------+
|                                        |
|  +----------------+     +----------+   |
|  |scheduler daemon|---->|dispatcher|   |
|  +----------------+     +----------+   |
|                              |         |
+------------------------------+---------+
                               |
                               v
                    +-------------------+
                    | device under test |
                    +-------------------+
The first component, the web interface, is responsible for user interaction. It stores data and submitted jobs using an RDBMS, and it is also responsible for displaying results, for device navigation, and for receiving job submissions through the XML-RPC API. Another important component is the scheduler daemon, which is responsible for the allocation of jobs. Its activity is quite simple: it polls data from the database and reserves devices for the jobs offered to them by the dispatcher, another important component. The dispatcher is the component responsible for running the actual jobs on the devices; it also manages communication with a device, downloads images, and collects results.
To install LAVA or any LAVA-related packages on a 64-bit Ubuntu 14.04 machine, new package dependencies are required, in addition to enabling the universe repositories and the deb http://people.linaro.org/~neil.williams/lava jessie main repository, besides the installation process described previously for the Debian distribution. I must mention that when the lava-dev package is installed, the user is prompted with a menu asking for the nullmailer mailname. I chose to keep the default, which is actually the hostname of the computer running the nullmailer service. I also kept the default configuration for smarthost, and the installation process continued. The following are the commands necessary to install LAVA on an Ubuntu 14.04 machine:
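A sketch of the installation commands, assuming the repository line quoted above; the package names follow the official documentation of that era and should be checked against the current guide:

```shell
# Enable the universe repository and add the Linaro LAVA repository
sudo add-apt-repository universe
echo "deb http://people.linaro.org/~neil.williams/lava jessie main" | \
    sudo tee /etc/apt/sources.list.d/lava.list
sudo apt-get update

# Install the LAVA server and the development package
sudo apt-get install lava-server lava-dev
```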
Note
Information about the LAVA installation process is available at https://validation.linaro.org/static/docs/installing_on_debian.html#. Here, you can also find the installation processes for both Debian and Ubuntu distributions.