A toolchain is a compiler together with its associated utilities, used to produce the kernels, drivers, and applications for a specific target. The tools in a toolchain are usually chained together: the output of one becomes the input of the next. It typically consists of gcc, glibc, and binutils, plus optional tools, such as a debugger or additional compilers for specific programming languages, such as C++, Ada, Java, Fortran, or Objective-C.
In a toolchain environment, three different machines are involved:
- The build machine, on which the toolchain is built
- The host machine, on which the toolchain is executed
- The target machine, for which the toolchain generates binary code
These three machines are used to define four different toolchain build procedures:
- A native toolchain: This is usually available on a normal Linux distribution or desktop system. It is compiled and run on one architecture and generates code for that same architecture.
- A cross-native toolchain: This is a toolchain built on one system that runs on, and produces binary code for, the target system. A typical use case is when a native gcc is needed on the target system without building it on the target platform.
- A cross-compilation toolchain: This is the most widespread toolchain type used for embedded development. It is compiled and run on one architecture type, usually x86, and produces binary code for the target architecture.
- A cross-canadian build: This is a process that involves building a toolchain on system A; the toolchain then runs on another system, B, and produces binary code for a third system, C. It is one of the least commonly used build processes.
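The four procedures are easiest to keep apart through the GNU configure triplets for the build, host, and target machines. The sketch below uses illustrative triplet values (they are assumptions, not output of a real build):

```shell
# Illustrative triplets; real values depend on your systems.
BUILD=x86_64-linux-gnu     # machine the toolchain is built on
HOST=x86_64-linux-gnu      # machine the toolchain runs on
TARGET=arm-linux-gnueabi   # machine the toolchain generates code for

# native:         build = host = target
# cross:          build = host, target differs
# cross-native:   host = target, build differs
# cross-canadian: build, host, and target all differ
echo "cross toolchain: --build=$BUILD --host=$HOST --target=$TARGET"
```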
Toolchains are the tools that make most of the great projects available today, including open source ones, possible. This diversity would not exist without corresponding toolchains. The same holds in the embedded world, where newly available hardware needs the components and support of a corresponding toolchain for its Board Support Package (BSP).
The GNU toolchain is a term used for a collection of programming tools under the GNU Project umbrella. This suite of tools is what is normally called a toolchain, and is used for the development of applications and operating systems. It plays an important role in the development of embedded systems and Linux systems, in particular.
The following projects are included in the GNU toolchain:
- GNU make: This represents an automation tool used for compilation and build
- GNU Compiler Collection (GCC): This represents a compiler's suite that is used for a number of available programming languages
- GNU Binutils: This contains tools, such as linkers, assemblers, and so on - these tools are able to manipulate binaries
- GNU Bison: This is a parser generator
- GNU Debugger (GDB): This is a code debugging tool
- GNU m4: This is an m4 macro processor
- GNU build system (autotools): This consists of the following:
- Autoconf
- Autoheader
- Automake
- Libtool
The projects included in the toolchain are described in the following diagram:
An embedded development environment needs more than a cross-compilation toolchain. It needs libraries, target system-specific packages, such as programs, libraries, and utilities, and host-specific debuggers, editors, and utilities. In some cases, usually in a company environment, a number of servers host the target devices, and certain hardware probes are connected to the host through Ethernet or other methods. This emphasizes the fact that an embedded distribution includes a great number of tools, many of which require customization. Presenting each of these would take up more than a chapter of a book.
I will start by introducing the first item on this list, the GNU Binutils package. Developed under the GNU GPL license, it is a set of tools used to create and manage binary files, object code, assembly files, and profile data for a given architecture. Here is a list of the tools available in the GNU Binutils package, along with their functionality:
- The GNU linker, that is, ld
- The GNU assembler, that is, as
- A utility that converts addresses into filenames and line numbers, that is, addr2line
- A utility to create, extract, and modify archives, that is, ar
- A tool used to list the symbols available inside object files, that is, nm
- A utility for copying and translating object files, that is, objcopy
- A utility for displaying information from object files, that is, objdump
- A utility for generating an index for the contents of an archive, that is, ranlib
- A utility for displaying information from any ELF format object file, that is, readelf
- A utility for listing the section sizes of an object or archive file, that is, size
- A utility for listing the printable strings from files, that is, strings
- A utility for discarding symbols, that is, strip
- A utility for filtering and demangling encoded C++ symbols, that is, c++filt
- A utility for creating the files needed to build and use DLLs, that is, dlltool
- A new, faster, ELF-only linker, which is still in beta testing, that is, gold
- A utility for displaying profiling information, that is, gprof
- A utility for converting object code into an NLM, that is, nlmconv
- A Windows-compatible message compiler, that is, windmc
- A compiler for Windows resource files, that is, windres
The majority of these tools use the Binary File Descriptor (BFD) library for low-level data manipulation, and many of them also use the opcodes library to assemble and disassemble operations.
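To make a few of these tools concrete, here is a small walk-through on a freshly built object file. It assumes a native compiler and binutils are installed on the host; the file names are illustrative:

```shell
# Build a tiny object file, then inspect it with a few binutils tools.
cat > hello.c <<'EOF'
int answer(void) { return 42; }
EOF
cc -c hello.c -o hello.o   # compile and assemble (as runs behind the scenes)

nm hello.o                 # list symbols; 'answer' appears in the text section
size hello.o               # print section sizes
objdump -d hello.o         # disassemble the generated code
strip hello.o              # discard the symbol table
```

After the final strip, running nm on the same file reports no symbols, which is exactly what you want for release binaries on space-constrained targets.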
In the toolchain generation process, the next items on the list are the kernel headers, which are needed by the C library for interaction with the kernel. Before compiling the corresponding C library, the kernel headers need to be supplied so that they can offer access to the available system calls, data structures, and constant definitions. Of course, any C library defines a set of specifications that is specific to each hardware architecture; here, I am referring to the application binary interface (ABI).
As a general rule, the ABI must be respected in interactions with external components. With regard to interactions between internal modules, however, the user is free to do whatever he or she wants; basically, they are able to reinvent the ABI, bounded only by the limitations of the machine. A simple analogy involves citizens of a country or region: they have learned and known the language of that region since they were born, so they are able to understand one another and communicate without problems. An external citizen needs to learn the language of the region to be able to communicate, and once inside the community, communicating seems natural and does not constitute a problem. Compilers, similarly, are able to design their own custom calling conventions when they know the limitations of the functions that are called within a module. This exercise is typically done for optimization reasons. However, it can be considered an abuse of the ABI term.
The GNU Compiler Collection, also known as GCC, is a compiler system that constitutes the key component of the GNU toolchain. Although it was originally named the GNU C Compiler, because it handled only the C programming language, it soon began to cover a collection of languages, such as C, C++, Objective-C, Fortran, Java, Ada, and Go, as well as libraries for other languages (such as libstdc++, libgcj, and so on).
This changed in 1997, when a group of developers gathered as the Experimental/Enhanced GNU Compiler System (EGCS) workgroup and started merging several forks into one project. They had so much success in this venture, and gathered so many features, that they made the Free Software Foundation (FSF) halt its development of GCC version 2 and appoint EGCS the official GCC version and maintainers by April 1999. The two lines of development were united with the release of GCC 2.95. More information on the history and release history of the GNU Compiler Collection can be found at https://www.gnu.org/software/gcc/releases.html and http://en.wikipedia.org/wiki/GNU_Compiler_Collection#Revision_history.
The GCC interface follows the Unix convention: users call a language-specific driver, which interprets arguments and invokes the compiler proper. The driver then runs the assembler on the resulting output and, if necessary, runs the linker to obtain the final executable. For each language, a separate compiler program performs the reading of the source code.
Obtaining an executable from source code involves several steps. After the first step, an abstract syntax tree is generated, and at this stage, compiler optimizations and static code analysis can be applied. Optimizations and static analysis can be applied both on the architecture-independent GIMPLE representation, or its superset GENERIC, and on the architecture-dependent Register Transfer Language (RTL) representation, which is similar to the LISP language. The machine code is generated using a pattern-matching algorithm written by Jack Davidson and Christopher Fraser.
Each available frontend generates a tree from the given source code. Using this abstract tree form, different languages can share the same backend. GCC initially used Look-Ahead LR (LALR) parsers generated with Bison, but moved to recursive-descent parsers for C, C++, and Objective-C in 2006. Today, all available frontends use handwritten recursive-descent parsers.
GENERIC is the more complex intermediate representation, while GIMPLE is a simplified GENERIC targeted at all GCC frontends. Languages such as C, C++, and Java produce GENERIC tree representations directly in the frontend. Others use different intermediate representations that are then parsed and converted to GENERIC.
The middle stage of GCC involves code analysis and optimization, and works independently of both the compiled language and the target architecture. It starts from the GENERIC representation and continues to the Register Transfer Language (RTL) representation. The optimizations mostly involve jump threading, instruction scheduling, loop optimization, common subexpression elimination, and so on. The RTL optimizations are less important than those done on the GIMPLE representation, which include dead code elimination, global value numbering, partial redundancy elimination, sparse conditional constant propagation, scalar replacement of aggregates, and even automatic vectorization or automatic parallelization.
The last element that needs to be introduced here is the C library. It is the interface between the Linux kernel and the applications used on a Linux system. At the same time, it makes the development of applications easier. There are several C libraries available in this community, and the following sections cover the most relevant ones.
The choice of the C library used with the GCC compiler is made in the toolchain generation phase, and it is influenced not only by the size and application support offered by the library, but also by standards compliance, completeness, and personal preference.
The first library that we'll discuss here is glibc, which is designed for performance, standards compliance, and portability. It was developed by the Free Software Foundation for the GNU/Linux operating system and is still present today on all actively maintained GNU/Linux host systems. It is released under the GNU Lesser General Public License.
The glibc library was initially written by Roland McGrath in the 1980s, and it continued to grow until the 1990s, when Linux kernel developers forked glibc into what was called Linux libc. It was maintained separately until January 1997, when the Free Software Foundation released glibc 2.0. glibc 2.0 contained so many features that it did not make any sense to continue the development of Linux libc, so its developers discontinued their fork and returned to using glibc. Some changes made in Linux libc were never merged into glibc because of problems with the authorship of the code.
The glibc library is quite large and not a suitable fit for small embedded systems, but it provides the functionality required by the Single UNIX Specification (SUS), POSIX, ISO C11, ISO C99, the Berkeley Unix interfaces, the System V Interface Definition, and the X/Open Portability Guide, Issue 4.2, with all extensions common to X/Open System Interface compliant systems, along with the X/Open UNIX extensions. In addition, glibc provides extensions that have been deemed useful or necessary while developing GNU.
In 2009, Debian and a number of its derivatives chose to move from the GNU C Library to eglibc. One reason was the difference in licensing between the GNU LGPL and eglibc, which permitted them to accept patches that glibc developers might reject. Since 2014, the official eglibc homepage has stated that the development of eglibc was discontinued because glibc moved to the same licensing, and the release of Debian Jessie marked the move back to glibc. The same happened with Yocto support, when glibc was again made the primary library support option.
The newlib library is another C library, developed with the intention of being used in embedded systems. It is a conglomeration of library components under free software licenses. Developed by Cygnus Support and maintained by Red Hat, it is one of the preferred versions of the C library for non-Linux embedded systems.
Bionic is a derivative of the BSD C library, developed by Google for Android, which is based on the Linux kernel. Its development is independent of Android code development. It is licensed under a 3-clause BSD license and its goals are publicly available. These include the following:
- Small size: Bionic is smaller in size compared to glibc
- Speed: It is designed for CPUs that work at low frequencies
- BSD license: Google wished to isolate Android apps from the effects of the GPL and LGPL licenses, which is why it moved to a non-copyleft license. The reasons are as follows:
  - Android is based on the Linux kernel, which is covered by the GPLv2 license
  - glibc is covered by the LGPL, which permits linking with proprietary code only dynamically, not statically
Bionic also has a list of restrictions compared to glibc, as follows:
- It does not include C++ exception handling, mainly because most of the code used for Android is written in Java.
- It does not have wide character support.
- It does not include a Standard Template Library, although it can be included manually.
- Some of its POSIX functions and even system call headers are wrappers for, or stubs of, Android-specific functions, which may sometimes lead to odd behavior.
- When Android 4.2 was released, it included support for the glibc FORTIFY_SOURCE features. These features are very often used in Yocto, and in embedded systems in general, but are only present in the gcc version for Android devices with ARM processors.
The next C library that will be discussed is musl. It is a C library intended for use with Linux operating systems on embedded and mobile systems. It has an MIT license and was developed from scratch with the idea of having a clean, standards-compliant, and time-efficient libc. As a C library, it is optimized for static linking. It is compatible with the C99 standard and POSIX 2008, and it implements Linux, glibc, and BSD non-standard functions.
Next, we'll discuss uClibc, a C standard library designed for Linux embedded systems and mobile devices. Although initially developed for μClinux and aimed at microcontrollers, it gained traction and became the weapon of choice for anyone with limited space on their device. It has become popular for the following reasons:
- It focuses on size rather than performance
- It has a GNU Lesser General Public License (LGPL) free license
- It is much smaller than glibc and reduces compilation time
- It is highly configurable, because many of its features can be enabled through a menuconfig interface similar to the one available in packages such as the Linux kernel, U-Boot, or even BusyBox
The dietlibc library is a standard C library developed by Felix von Leitner and released under the GNU GPL v2 license. Although it also contains some commercially licensed components, its design is based on the same idea as uClibc: the possibility of compiling and linking software with the smallest possible size. It has another resemblance to uClibc: it was developed from scratch and implements only the most used and best-known standard functions. Its primary usage is in the embedded devices market.
The last item in the C libraries list is the klibc standard C library. It was developed by H. Peter Anvin to be used as part of the early user space during the Linux startup process. It is used by components that run during the kernel startup process but do not run in kernel mode and, hence, do not have access to the standard C library.
The development of klibc started in 2002 as an initiative to move the Linux initialization code out of the kernel. Its design makes it suitable for use in embedded devices. It also has another advantage: it is optimized for small size and correctness of data. The klibc library is loaded during the Linux startup process from initramfs (a temporary RAM filesystem) and is incorporated into initramfs by default using the mkinitramfs script on Debian and Ubuntu-based systems. It also provides a small set of utilities, such as mount, mkdir, dash, mknod, fstype, nfsmount, run-init, and so on, which are very useful in the early init stage.
When generating a toolchain, the first thing that needs to be done is the establishment of an ABI used to generate binaries. This means that the kernel needs to understand this ABI and, at the same time, all the binaries in the system need to be compiled with the same ABI.
When working with the GNU toolchain, a good source for gathering information and understanding how work is done with these tools is the GNU coding standards. The coding standards' purpose is very simple: to make sure that work within the GNU ecosystem is performed in a clean, easy, and consistent manner. This is a guideline for people interested in working with GNU tools who want to write reliable, solid, and portable software. The main focus of the GNU toolchain is the C language, but the rules applied here are also very useful for any programming language. The purpose of each rule is explained, so that the logic behind it is passed on to the reader.
It is better to use the int type, even though you might consider defining a narrower data type. There are, of course, a number of special cases where this is hard to do. One such example is the dev_t system type, because it is shorter than int on some machines and wider on others. The only way to support such non-standard C types involves checking the width of dev_t using Autoconf and then choosing the argument type accordingly. However, it may not be worth the trouble.
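Such an Autoconf check could look roughly like the following configure.ac fragment. This is an illustrative sketch, not taken from any particular project:

```
AC_INIT([demo], [1.0])
dnl Defines SIZEOF_DEV_T, which the sources can test to pick an argument type.
AC_CHECK_SIZEOF([dev_t])
AC_OUTPUT
```

The generated configure script then records the width, and the sources can branch on the SIZEOF_DEV_T preprocessor definition.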
The POSIX.2 standard mentions that commands, such as du and df, should output sizes in units of 512 bytes. However, users want units of 1 KB, so this is the default behavior that is implemented. Anyone interested in the behavior required by the POSIX standard needs to set the POSIXLY_CORRECT environment variable.
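The difference is easy to demonstrate. A sketch assuming GNU coreutils; the exact numbers depend on the filesystem's allocation, but the POSIX count is always twice the 1 KB count:

```shell
dd if=/dev/zero of=blob bs=1024 count=1024 2>/dev/null  # a 1 MiB file
du -s blob                     # default GNU behavior: 1 KB units
POSIXLY_CORRECT=1 du -s blob   # POSIX behavior: 512-byte units
```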
To make sure that you write robust code, a number of guidelines should be mentioned. The first is that no fixed limits, especially arbitrary ones, should be imposed on any data structure, including files, file names, lines, and symbols. All data structures should be allocated dynamically. One reason for this is that most Unix utilities silently truncate long lines; GNU utilities do not do this kind of thing.
To decode program arguments, the getopt_long function can be used.
For error checks that identify impossible situations, just abort the program, since there is no need to print any message. These types of checks bear witness to the existence of bugs. To fix such bugs, a developer will have to inspect the source code and perhaps start a debugger. The best approach is, therefore, to describe the bugs and problems in comments inside the source code; the relevant information can then be found in variables after examining them with a debugger.
Temporary files can also be created using the mkstemps function, which is made available by Gnulib.
After the introduction of the packages that comprise a toolchain, this section introduces the steps needed to obtain a custom toolchain. The toolchain that will be generated contains the same sources as those available inside the Poky dizzy branch; here, I am referring to gcc version 4.9, binutils version 2.24, and glibc version 2.20. For Ubuntu systems, there are also shortcuts available: a generic toolchain can be installed using the available package manager, and there are alternatives, such as downloading custom toolchains available inside Board Support Packages, or even from third parties, including CodeSourcery and Linaro. More information on toolchains can be found at http://elinux.org/Toolchains. The architecture used as the demo here is ARM.
The toolchain build process has eight steps. I will only outline the activities required for each of them, but I must mention that they are all automated inside the Yocto Project recipes. Inside the Yocto Project, the toolchain is generated behind the scenes. For interaction with the generated toolchain, the simplest task is to call meta-ide-support, but this will be presented in the appropriate section. The eight steps are as follows:
- The setup: This is the step in which the top-level build directories and source subdirectories are created. In this step, variables, such as TARGET, SYSROOT, ARCH, COMPILER, PATH, and others, are defined.
- Getting the sources: This is the step in which packages, such as binutils, gcc, glibc, the Linux kernel headers, and various patches, are made available for use in later steps.
- GNU Binutils setup: This is the step in which the interaction with the binutils package takes place, as shown here:
  - Unpack the sources available from the corresponding release
  - Patch the sources accordingly, if this applies
  - Configure the package accordingly
  - Compile the sources
  - Install the result in the corresponding location
- Linux kernel headers setup: This is the step in which the interaction with the Linux kernel sources takes place, as shown here:
  - Unpack the kernel sources
  - Patch the kernel sources, if this applies
  - Configure the kernel for the selected architecture. In this step, the corresponding kernel config file is generated. More information about the Linux kernel will be presented in Chapter 4, Linux Kernel.
  - Compile the Linux kernel headers and copy them to the corresponding location
  - Install the headers in the corresponding locations
- Glibc headers setup: This is the step used to set up the glibc build area and install the headers, as shown here:
  - Unpack the glibc archive and header files
  - Patch the sources, if this applies
  - Configure the sources accordingly, enabling the --with-headers option to link the library to the corresponding Linux kernel headers
  - Compile the glibc header files
  - Install the headers accordingly
- GCC first stage setup: This is the step in which the C runtime files, such as crti.o and crtn.o, are generated:
  - Unpack the gcc archive
  - Patch the gcc sources, if necessary
  - Configure the sources, enabling the needed features
  - Compile the C runtime components
  - Install the result accordingly
- Build the glibc sources: This is the step in which the glibc sources are built and the necessary ABI setup is done, as shown here:
  - Configure the glibc library, setting the -mabi and -march options accordingly
  - Compile the sources
  - Install glibc accordingly
- GCC second stage setup: This is the final setup phase, in which the toolchain configuration is finished, as shown here:
  - Configure the gcc sources
  - Compile the sources
  - Install the binaries in the corresponding location
After these steps are performed, a toolchain will be available for the developer to use. The same strategy and build procedure is followed inside the Yocto Project.
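The first of the eight steps can be sketched as plain shell; the later steps each add an unpack/patch/configure/compile/install cycle in their own build directory. Every name and path below is an illustrative assumption, not a fixed convention:

```shell
# Step 1, the setup: define the key variables and create the build layout.
export TARGET=arm-linux-gnueabi      # triplet of the machine we build for
export PREFIX=$HOME/cross-toolchain  # where the finished toolchain lands
export SYSROOT=$PREFIX/$TARGET/sysroot
export PATH=$PREFIX/bin:$PATH        # later stages must find stage-1 tools

mkdir -p "$PREFIX/bin" "$SYSROOT"
mkdir -p src build/binutils build/gcc build/glibc
echo "building for $TARGET into $PREFIX"
```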
As I have mentioned, the major advantage and defining feature of the Yocto Project environment is that a Yocto Project build does not use the host's available packages, but builds and uses its own. This is done to make sure that a change in the host environment does not influence the available packages, and that builds are made to generate a custom Linux system. The toolchain is one of these components, because almost all packages that constitute a Linux distribution need the use of toolchain components.
The GNU CC and GCC C compiler package, which consists of all the preceding packages, is split into multiple parts, each with its own purpose and scope, such as the sdk components. However, as I mentioned in the introduction to this chapter, there are multiple toolchain build procedures that need to be assured and automated with the same source code. The support available inside Yocto is for gcc versions 4.8 and 4.9. A quick look at the available gcc recipes shows this information:
The GNU Binutils package represents the binary tools collection, such as the GNU linker, the GNU assembler, addr2line, ar, nm, objcopy, objdump, and other tools and related libraries. The Yocto Project offers support for Binutils version 2.24, which also depends on the available toolchain build procedures, as can be seen from an inspection of the source code:
uClibc is used as an alternative to the glibc C library because it generates smaller executable footprints. At the same time, uClibc is the only package among those presented in the preceding list that has a bbappend applied to it, since it extends the support for two machines, genericx86-64 and genericx86. The switch between glibc and uClibc can be made by setting the TCLIBC variable accordingly: TCLIBC = "uclibc".
Set the MACHINE variable to the value qemuarm inside the conf/local.conf file:
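Putting the two variables together, the relevant conf/local.conf lines would look something like the following sketch; qemuarm is the machine chosen for this demo, and the TCLIBC line is only needed when departing from the glibc default:

```
MACHINE ?= "qemuarm"
# Optional: switch the toolchain C library away from the glibc default
TCLIBC = "uclibc"
```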
The default C library used for the generation of the toolchain is glibc, but it can be changed according to the developer's needs. As seen in the previous section, the toolchain generation process inside the Yocto Project is very simple and straightforward. It also avoids all the trouble and problems involved in the manual toolchain generation process, making it very easy to reconfigure as well.