
I am part of a small team (4-5 people) working on an embedded Linux project. We are using Buildroot and the Linaro toolchain to build for our target. We use git for version control and Jenkins for nightly builds.

This is our first go at a project like this, and I've been unsuccessful at finding any resources describing development models for this kind of environment.

Right now, after a nightly build, I create a tarball of the Buildroot 'output' directory, which contains the u-boot images and the root filesystem. This can be downloaded directly from the Jenkins 'archive' page for the last successful build.

Some of us will be working on lower-level development and some on user space development (Qt). Our problem is deciding on the most efficient/streamlined approach to developing in an environment like this, given that people will be working on different areas within the project scope. The userland folks could download the tarball with everything and incorporate their applications into the root filesystem to run on the board and debug, but how should we handle work done on the lower-level development? Basically, how should we distribute the artifacts to the team? I greatly appreciate any thoughts.

1 Answer


I recently spent some time restructuring the build environment for an OpenEmbedded-based Linux project. I have no direct experience with Buildroot, but I expect OpenEmbedded is similar enough to what you're using. I'll describe my setup and, with any luck, you'll find something here useful...

The Problem

There are three software components that can be installed separately (i.e. independent of each other): the bootloader (u-boot); the kernel (linux); and the filesystem image. Our end product is shipped with a packaged release of these three components. That is, a version of u-boot, linux, and filesystem image that have been QA-tested and are known to work together. However, it is possible to independently upgrade any one of the components (e.g. install a new kernel image) to create a combination of software components that haven't been tested together.

This problem also exists for user space applications. Once a filesystem image has been installed on the target, it becomes possible to update one or more user space binaries independently of other filesystem objects (assuming your filesystem isn't read-only). How do you know that the specific combination of user space applications now installed works together? How can you be certain that the combination of binaries running in this particular unit is the same combination that was QA-certified? How do you know what "version" the software is?

The other problem I needed to solve, the same problem you outlined in your question, was how to let developers working on different parts of the software stack (kernel, root filesystem, user space Qt apps, etc.) work together.

A Solution

I addressed this and the "version" problem by:

  1. Storing the rootfs and sysroot in a git repository.
  2. Liberal use of git submodules.

Storing the target's root filesystem and system root files in a git repository initially rubbed me the wrong way (storing output files in version control, what!?!) but it provides the following advantages:

  1. A JFFS2 filesystem image (rootfs + our custom user space applications) can be built in the time it takes to build the user space applications (i.e. tens of seconds). A developer is no longer required to first build the rootfs from scratch (which takes several hours with OpenEmbedded).
  2. All the other advantages of version control (changes to the rootfs can be easily tracked over time, tags for release, branches, etc).
  3. I initially considered storing the rootfs and sysroot as tarballs but I like the idea of git tracking changes on a per-file basis.

The directory structure looks something like this (some names have been changed to protect the innocent):

\---proj [*]             # Name of your project
    +---u-boot [*]
    +---linux [*]
    +---toolchain [*]
    \---fs [*]           # Filesystem stuff.
        +---oe [*]       # OpenEmbedded.
        +---qt [*]       # Qt framework.
        +---apps [*]     # Custom user-space applications.
        \---bin [*]      # Revision-controlled binaries.
            +---rootfs   # Target root filesystem, output of OpenEmbedded.
            \---sysroot  # System root, output of OpenEmbedded (headers, etc).

Each starred directory [*] is a git repository and each git repository is a submodule of its parent.
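
For reference, a tree like this is assembled with git submodule add, roughly as follows (the server URL and repository names here are placeholders, not our real ones):

$ cd proj
$ git submodule add git@server:proj/u-boot u-boot
$ git submodule add git@server:proj/linux linux
$ # ...and likewise for toolchain and fs...
$ cd fs
$ git submodule add git@server:proj/rootfs-bin bin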

The build environment is initialised from a top-level Makefile which essentially does a recursive git submodule init and git submodule update. All developers would do:

$ git clone git@server:proj proj
$ cd proj
$ make git-init
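
The git-init target itself needs nothing exotic; on a reasonably recent git it can boil down to something like this sketch (the target name is just what I happened to pick):

git-init:
	git submodule update --init --recursive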

A user-space developer can then build immediately:

$ make --directory proj/fs/apps all       # Build apps
$ make --directory proj/fs install        # Create JFFS2 image
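
For what it's worth, the install target is little more than overlay-then-mkfs. A rough sketch (the staging directory, image name, and eraseblock size are illustrative; match the eraseblock to your flash geometry):

install:
	mkdir -p staging
	cp -a bin/rootfs/. staging/                # Start from the committed rootfs.
	$(MAKE) --directory apps DESTDIR=$(abspath staging) install   # Overlay our apps.
	mkfs.jffs2 --root staging --output rootfs.jffs2 --eraseblock 128KiB --pad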

The filesystem maintainer can update the rootfs:

$ cd proj/fs/oe
$ # Modify build recipes and other OpenEmbedded black magic stuff.
$ make
$ # Go make coffee while oe builds every package on the planet.
$ cd proj/fs/bin # OE writes output files here.
$ git commit     # Commit latest rootfs and sysroot.
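
One submodule gotcha worth calling out: committing inside bin isn't enough on its own; each parent repository also has to record the new submodule commit, along the lines of:

$ cd proj/fs
$ git add bin
$ git commit -m "Pick up new rootfs/sysroot"
$ cd ..
$ git add fs
$ git commit -m "Pick up new filesystem submodule"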

Software Versioning

From the top-level makefile (proj/Makefile) it is possible to build all software components (kernel, u-boot, filesystem image). Using the following git commands, the makefile exports to all sub-make processes a single environment variable (e.g. VER_TAG) that describes the current software version. The version is either a tag from the git repository or a SHA (e.g. v1.0, 471087ec254e8e353bb46c533823fc4ece3485b4, or 471087ec254e8e353bb46c533823fc4ece3485b4-modified).

git rev-parse HEAD                 # Get current SHA
git status --porcelain | wc -c     # Is working copy modified?
git describe --exact-match HEAD    # Is the working copy a tag?

If even a single file in any of the project subdirectories has been modified, then VER_TAG will always carry the -modified suffix. This single VER_TAG variable is then passed as a compile-time constant to all builds (u-boot, kernel, user space apps, etc.).
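
Glued together in the top-level makefile, that logic looks something like the sketch below (variable names are illustrative, not a verbatim copy of mine):

# Sketch: derive VER_TAG from the three git commands above.
SHA   := $(shell git rev-parse HEAD)
DIRTY := $(shell git status --porcelain | wc -c)
TAG   := $(shell git describe --exact-match HEAD 2>/dev/null)

ifeq ($(TAG),)
  VER_TAG := $(SHA)
else
  VER_TAG := $(TAG)
endif
ifneq ($(DIRTY),0)
  # Dirty submodules also surface in the parent's status output.
  VER_TAG := $(VER_TAG)-modified
endif
export VER_TAG

# Each sub-make then bakes it in as a compile-time constant, e.g.:
# CFLAGS += -DVER_TAG=\"$(VER_TAG)\"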

At run time, a custom user space application accumulates the VER_TAG values from all components; if they all report the same value, that string becomes the official version reported by the product. If even one VER_TAG value differs from the others, the software stack wasn't built from the same top-level SHA and can't be released into the wild (to QA for testing, to production for manufacture, etc.).

If a software component isn't built from the top-level makefile (e.g. make --directory proj/fs/apps all), then VER_TAG for that component will be undefined and the resulting software stack is for "internal use only". That is to say, a "release" of all software components can only be made by building from the top-level makefile.

For reference, the kernel reports VER_TAG through a custom file in procfs, u-boot reports it via the kernel command line (/proc/cmdline), and each user space application reports it via interprocess communication.
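
As a rough illustration of the kernel/u-boot half of that run-time check (the /proc/ver_tag file and the ver_tag= cmdline key are made-up names; the real checker also gathers each app's value over IPC):

#!/bin/sh
# Compare the VER_TAG reported by the kernel and by u-boot.
KERNEL_VER=$(cat /proc/ver_tag)
UBOOT_VER=$(sed -n 's/.*ver_tag=\([^ ]*\).*/\1/p' /proc/cmdline)

if [ "$KERNEL_VER" = "$UBOOT_VER" ]; then
    echo "Official version: $KERNEL_VER"
else
    echo "Version mismatch -- internal use only"
fi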

Summary

A caveat: I only developed this build environment a month ago, so I can't make claims about its robustness, but for now it seems to be holding together...

If you have specific questions or points you want clarified I'll be happy to update my answer.

  • Thanks for the detailed answer. A lot of this makes great sense and I'll probably be doing something similar with Buildroot.
    – PhilBot
    Commented Jun 25, 2012 at 19:27
  • Would you still recommend this solution this many years later?
    – Alan Mimms
    Commented May 15, 2018 at 0:28
