Read about the project goals at the project home page.
This package can do a deterministic build of a package inside a VM.
This performs a build inside a VM, with deterministic inputs and outputs. If the build script takes care of all sources of non-determinism (mostly caused by timestamps), the result will always be the same. This allows multiple independent verifiers to sign a binary with the assurance that it really came from the source they reviewed.
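As a toy illustration of the timestamp problem (not part of Gitian itself; assumes GNU tar), packing the same tree twice only yields identical archives once file order, mtimes, and ownership are pinned:

```shell
# Create a small source tree
mkdir -p demo/src
echo 'int main(void){return 0;}' > demo/src/main.c

# Pack it deterministically: fixed file order, fixed mtime, fixed ownership
deterministic_tar() {
  tar --sort=name --mtime='2016-01-01 00:00Z' \
      --owner=0 --group=0 --numeric-owner \
      -cf "$1" -C demo src
}

deterministic_tar a.tar
deterministic_tar b.tar
sha256sum a.tar b.tar  # the two digests match
```

Gitian's build scripts apply the same idea to the whole toolchain and filesystem of the guest VM.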
```
sudo pacman -S python2-cheetah qemu rsync
sudo pacman -S lxc libvirt bridge-utils # for lxc mode
```
Also, I had to modify the default `/etc/sudoers` file to uncomment the `secure_path` line, because `vmbuilder` isn't found otherwise when the `env -i ... sudo vmbuilder ...` line is executed (the `-i` flag resets the environment variables, including `PATH`).
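For reference, the relevant stanza looks roughly like this (the exact path list varies by distro; edit with `visudo`):

```
# Uncommenting secure_path makes sudo use this PATH, so vmbuilder
# (typically under /usr/sbin or /usr/bin) is found when run via sudo
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
```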
```
layman -a luke-jr # needed for vmbuilder
sudo emerge dev-vcs/git net-misc/apt-cacher-ng app-emulation/vmbuilder dev-lang/ruby
sudo emerge app-emulation/qemu
export KVM=qemu-system-x86_64
```
This pulls in all pre-requisites for KVM building on Ubuntu:
```
sudo apt-get install git apache2 apt-cacher-ng python-vm-builder ruby qemu-utils
```
If you’d like to use LXC mode instead, install it as follows:
```
sudo apt-get install lxc
```
See Ubuntu, and also run the following on Debian Jessie or newer:
```
sudo apt-get install ubuntu-archive-keyring
```
On Debian Wheezy you run the same command, but you must first add backports to your system, because the package is only available in wheezy-backports.
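On Wheezy, adding backports would look something like this (the mirror URL is illustrative; Wheezy packages have since moved to archive.debian.org, so adjust for your setup):

```
echo 'deb http://archive.debian.org/debian wheezy-backports main' | \
  sudo tee /etc/apt/sources.list.d/wheezy-backports.list
sudo apt-get update
sudo apt-get -t wheezy-backports install ubuntu-archive-keyring
```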
```
sudo port install ruby coreutils
export PATH=$PATH:/opt/local/libexec/gnubin # needed for sha256sum
```
```
brew install ruby coreutils
export PATH=$PATH:$(brew --prefix coreutils)/libexec/gnubin # needed for sha256sum
```
Install VirtualBox from http://www.virtualbox.org, and make sure `VBoxManage` is in your `PATH`.
Gitian now supports Debian guests in addition to Ubuntu guests. Note that this does not mean builders may choose between Debian and Ubuntu guests: the person creating the Gitian descriptor chooses a particular distro and suite for the guest, and all builders must use that same distro and suite, otherwise the software won't reproduce for everyone.
The official vmbuilder only includes support for Ubuntu guests, so you need to install Joseph Bisch’s fork of vmbuilder, which adds a Debian plugin.
To create a Debian guest:
```
bin/make-base-vm --distro debian --suite jessie
```
There is currently no support for LXC Debian guests; only KVM is supported. LXC support for Debian guests is planned. Only Debian Jessie guests have been tested with Gitian (Jessie is the current stable release of Debian at this time). If you have success (or trouble) with other versions of Debian, please let us know.
If you are creating a Gitian descriptor, you can now specify a distro. If no distro is provided, the default is to assume Ubuntu. Since Ubuntu is assumed, older Gitian descriptors that don’t specify a distro will still work as they always have.
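As a sketch, the start of a descriptor targeting a Debian guest might look like this (values are illustrative, and required fields such as the build script are omitted):

```
--- # excerpt of a hypothetical <package>.yml
name: "example"
distro: "debian"   # omit this line and Ubuntu is assumed
suites:
- "jessie"
architectures:
- "amd64"
```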
Create the base VM for use in further builds (this requires sudo, so please review the script):
```
bin/make-base-vm
bin/make-base-vm --arch i386
bin/make-base-vm --lxc
bin/make-base-vm --lxc --arch i386
```
Set the `USE_LXC` environment variable (`export USE_LXC=1`) to use LXC instead of KVM.
To use VirtualBox, `VBoxManage` must be in your `PATH`.
`make-base-vm` cannot yet make VirtualBox virtual machines (patches welcome; it should be possible to use `VBoxManage`, boot-from-network Linux images, and PXE booting to do it). So you must either get or manually create VirtualBox machines that:

- are named `Gitian-<suite>-<arch>`, e.g. `Gitian-xenial-i386` for a 32-bit Ubuntu 16.04 machine
- have a snapshot named `Gitian-Clean`; the build script resets the VM to that snapshot to get reproducible builds
- forward `localhost:2223` on the host machine to port `22` of the VM, e.g.:

```
VBoxManage modifyvm Gitian-xenial-i386 --natpf1 "guestssh,tcp,,2223,,22"
```
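Once such a VM is installed and in a pristine state, the snapshot can also be created with `VBoxManage`, e.g.:

```
VBoxManage snapshot Gitian-xenial-i386 take Gitian-Clean
```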
The final setup needed is to create an ssh key that will be used to log in to the virtual machine:

```
ssh-keygen -t rsa -f var/id_rsa -N ""
ssh -p 2223 ubuntu@localhost 'mkdir -p .ssh && chmod 700 .ssh && cat >> .ssh/authorized_keys' < var/id_rsa.pub
```
Then log into the VM and copy the ssh keys to root's `.ssh/authorized_keys`:

```
ssh -p 2223 ubuntu@localhost
# Now in the vm
sudo bash
mkdir -p .ssh && chmod 700 .ssh && cat ~ubuntu/.ssh/authorized_keys >> .ssh/authorized_keys
```
Set the `USE_VBOX` environment variable (`export USE_VBOX=1`) to use VirtualBox instead of KVM or LXC.
If you have everything set up properly, you should be able to run the following sanity checks:

```
PATH=$PATH:$(pwd)/libexec
make-clean-vm --suite xenial --arch i386

# on-target needs $DISTRO to be set to debian if using a Debian guest
# (when running gbuild, $DISTRO is set based on the descriptor, so this line isn't needed)
DISTRO=debian

# For LXC:
LXC_ARCH=i386 LXC_SUITE=xenial on-target ls -la

# For KVM:
start-target 32 xenial-i386 &
# wait a few seconds for VM to start
on-target ls -la
stop-target
```
Copy any additional build inputs into a directory named `inputs`.
Then execute the build using a YAML description file (can be run as non-root):

```
export USE_LXC=1 # LXC only
bin/gbuild <package>.yml
```
or if you need to specify a commit for one of the git remotes:

```
bin/gbuild --commit <dir>=<hash> <package>.yml
```
The resulting report will appear in `result/<package>-res.yml`.
To sign the result, perform:
```
bin/gsign --signer <signer> --release <release-name> <package>.yml
```
where `<signer>` is your signing PGP key ID and `<release-name>` is the name for the current release. This will put the result and signature in the `sigs/<package>/<release-name>` directory. The `sigs/<package>` directory can be managed through git to coordinate multiple signers.
After you’ve merged everybody’s signatures, verify them:
```
bin/gverify --release <release-name> <package>.yml
```
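Since `sigs/<package>` is just a directory, coordinating signers through git can be sketched as follows (the remote URL, release name, and signer name are hypothetical):

```
cd sigs/mypackage
git add 0.1/alice
git commit -m "alice's signature for release 0.1"
# merge another signer's signatures, then re-verify
git pull https://example.com/bob/mypackage-sigs.git master
```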
- Start the target VM with `start-target 32 xenial-i386` or `start-target 64 xenial-amd64`.
- Log in with `on-target` (after setting `$DISTRO` to debian if using a Debian guest) or `on-target -u root`.
- The build script in `<package>.yml` starts with any environment setup you would need to manually compile things on the target.
Launching the container calls `lxc-start`, which may require root. If you are in the admin group, you can add the following sudoers lines to prevent asking for the password every time:

```
%admin ALL=NOPASSWD: /usr/bin/lxc-execute
%admin ALL=NOPASSWD: /usr/bin/lxc-start
```
`lxc-start` is the default, but you can force `lxc-execute` (useful for Ubuntu 14.04) with `export LXC_EXECUTE=lxc-execute`.
Recent distributions allow `lxc-execute` / `lxc-start` to be run by non-privileged users, so you might be able to rip out the `sudo` calls in `libexec/*`.
If you have a runaway `lxc-start` command, just use `kill -9` on it.
The machine configuration requires access to `br0` and assumes that the host address is `10.0.2.2`:

```
sudo brctl addbr br0
sudo ifconfig br0 10.0.2.2/24 up
```
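If `brctl` and `ifconfig` are unavailable (they ship in the legacy bridge-utils and net-tools packages), the equivalent bridge setup with iproute2 should be something like:

```
sudo ip link add name br0 type bridge
sudo ip addr add 10.0.2.2/24 dev br0
sudo ip link set br0 up
```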
Not very extensive, currently.
```
python -m unittest discover test
```