Raspberry Pi Virtual Machine Automation

Aug 23 2014 Published under howto, linux

Several months ago now I was doing some development for Raspberry Pi. I guess that shows how busy with life things have been. (I have a backlog of lots of things I would like to blog about that didn’t get blogged yet, pardon my grammar!)

Now, the Pi runs off an SD card, and some things start to get very tedious after a while: write performance is not exactly fast, and doing extensive work would probably start to wear cards out fairly quickly.

So I looked into running Raspberry Pi in a Qemu virtual machine, which would let me do builds on my main workstation without needing to set up a cross-compiling buildroot. This has the further advantage that I could test automating full installations for real Raspberry Pis.

Overview

Because I really dislike having to do lots of manual steps, I automated the whole process. The scripts I used are on GitHub (https://github.com/pastcompute/pi_magic); use at your own risk, etc.

There are three scripts: pi_qemu_build.sh which generates a pretend SD card image suitable for use by Qemu; pi_qemu_setup.sh which will SSH into a fresh image and perform second stage customisation; and pi_qemu_run.sh which launches the actual Qemu VM.

My work is based on that described at http://xecdesign.com/qemu-emulating-raspberry-pi-the-easy-way/.

Image Generation

The image generation works as follows:

  • start with an unzipped Raspbian SD card image downloaded from http://www.raspbian.org/
  • convert the image to a Qemu qcow2 image
  • extend the size out to 4GB to match the SD card
  • mount using Qemu NBD and extend the ext2 partition to the rest of the disk
  • patch the image to work with Qemu quirks (more on that in a moment)
  • create a snapshot so that changes made later inside Qemu can be rolled back if required
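
Condensed, the core of pi_qemu_build.sh amounts to something like this sketch (file names are illustrative; the real script in the repo adds error handling):

# convert the Raspbian image and grow it to match a 4GB SD card
qemu-img convert -f raw -O qcow2 raspbian.img raspbian.qcow2
qemu-img resize raspbian.qcow2 4G

# expose the image as a block device so the root filesystem can be grown
sudo modprobe nbd max_part=16
sudo qemu-nbd -c /dev/nbd0 raspbian.qcow2
# ...grow the root partition to fill the disk (fdisk/parted), then:
sudo resize2fs /dev/nbd0p2
sudo qemu-nbd -d /dev/nbd0

# snapshot so later changes inside the VM can be rolled back
qemu-img snapshot -c pristine raspbian.qcow2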

Now, Raspbian is set up to preload a shared library, /usr/lib/arm-linux-gnueabihf/libcofi_rpi.so, which needs to be disabled under Qemu; it is also set up to look for an MMC device, so we need symlinks making Qemu’s /dev/sda appear under the /dev/mmc* names it expects.
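
The patch itself boils down to something like the following, applied to the image’s root filesystem while it is still mounted via NBD (the mount point is illustrative):

# comment out the preload entry so ld.so skips libcofi_rpi.so under Qemu
sudo sed -i 's/^/#/' /mnt/pi/etc/ld.so.preload

# map Qemu's /dev/sda* back to the /dev/mmcblk0* names Raspbian expects
sudo tee /mnt/pi/etc/udev/rules.d/90-qemu.rules <<'EOF'
KERNEL=="sda", SYMLINK+="mmcblk0"
KERNEL=="sda?", SYMLINK+="mmcblk0p%n"
EOF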

Qemu Execution

The stock Raspberry Pi kernel won’t work inside Qemu, so we need to launch with a different kernel.

I haven’t yet had time to make the build process produce one. Instead, I launch Qemu using the kernel downloadable from XEC Design.

Qemu is invoked as qemu-system-arm -kernel path/to/kernel -cpu arm1176 -m 256 -M versatilepb ..., so that the emulated processor matches the one used by the Raspberry Pi. The script also maps HTTP and SSH ports through to the VM so you can connect in from localhost, and runs using the snapshot.
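
In full, the invocation is roughly the following (port numbers are illustrative; -redir is the port forwarding syntax of that qemu era):

# forward localhost:2222 to the guest's ssh and localhost:8080 to its http;
# -snapshot keeps all changes out of the base image
qemu-system-arm \
    -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb \
    -append "root=/dev/sda2 panic=1" \
    -hda raspbian.qcow2 -snapshot \
    -redir tcp:2222::22 -redir tcp:8080::80

After boot, ssh -p 2222 pi@localhost gets you into the VM.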

This was tested on Debian Wheezy with qemu 1.7 from backports.

[Image: Pi VM screenshot]


Using a Brother network scanner with Linux

Feb 23 2014 Published under howto, linux

For a while now we have had a Brother MFC-J415W all-in-one wireless printer / fax / scanner thingy. It prints fine using CUPS, and we have used the scanner with SD cards in sneaker-net mode.

Linux support

Brother actually do a reasonably good job supporting all of their recent equipment under Linux. All the drivers for CUPS and for the scanner functionality are available as packages for Debian, with usable documentation, from the Brother website [1].

I finally got around to setting things up for network scanning.

First you need to install the scanner driver; in my case that meant the brscan3 driver, which is configured with the brsaneconfig3 tool.
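
Registering the scanner is then a one-liner, something like this (the name and IP address here are made up; use whatever suits your network):

brsaneconfig3 -a name=MFC_SCANNER model=MFC-J415W ip=192.168.1.20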

Running brsaneconfig3 -q will output a (large number of) supported devices, and any scanners it finds on your network.
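
From memory the output looks roughly like this (abridged; treat the format as illustrative):

$ brsaneconfig3 -q
...
Devices on network
 0 SCANNER "MFC_SCANNER"  MFC-J415W  (IP address: 192.168.1.20)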

You can then run up gimp or your favourite graphical scanner tool and test that scanning works as expected.

Having done this, I then set up remote scanning. This involves running a service on the destination computer, which I set up to run as the logged-in user from the openbox autostart. For some reason the (undocumented) tool requires the argument ‘2’ to get its help to show… The simplest method is as follows:
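
A minimal sketch, assuming openbox reads ~/.config/openbox/autostart:

# ~/.config/openbox/autostart: start the scan-key listener as the logged-in user
brscan-skey &

# and the undocumented help, for the curious:
brscan-skey 2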

After this setup, you simply need to run brscan-skey when you want to scan to your PC. From the MFC’s LCD panel, choose ‘Scan’, then ‘File’, and it should find your computer (it displays the current user name as the computer name, for some reason).

Files get saved into $HOME/brscan by default.

Improved remote scanning integration

Well of course I didn’t want to stop there. To make the system more user friendly, we expect:
* notifications when a document is received
* conversion to a useful format – the default is PNM for some reason, possibly this is the native scanner output

So I wrote a script, which starts with my X session for openbox.

These are the essentials:
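
The real script lives in the repo; this is a stripped-down sketch, assuming (as described below) that brscan-skey reports each received file on stdout:

#!/bin/bash
# Sketch: wrap brscan-skey, notify the desktop, and convert each received scan
SCANDIR="$HOME/brscan"
LOG="$SCANDIR/scans.log"
mkdir -p "$SCANDIR"

while true; do
    brscan-skey 2>&1 | while read -r line; do
        # pick the received .pnm path out of the output line
        f=$(echo "$line" | grep -o '/[^ ]*\.pnm')
        [ -z "$f" ] && continue
        notify-send "Scanner" "Received $f"
        # ImageMagick converts the PNM into something more useful
        convert "$f" "${f%.pnm}.jpg"
        convert "$f" "${f%.pnm}.pdf"
        echo "$(date -R): $f" >> "$LOG"
    done
done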

The way this works is as follows:
* brscan-skey outputs received file information on stdout
* so we run it in a forever loop and parse that information
* The notify-send program will pop up a notification in your desktop window manager
* I then convert from PNM to both JPEG and PDF using imagemagick
* I also keep a log

I have pushed this script to the blog github repository, https://github.com/oldcomputerjunk/blogscripts

[1] http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/index.html


OQGraph – bazaar adventures in migrating to git

Jan 27 2014 Published under oqgraph

Background

I have been acting as a ‘community maintainer’ for a project called OQGraph for a bit over a year now.

OQGraph provides graph traversal algorithms over SQL data for MariaDB.  Briefly, it is very difficult, if not impossible, to write a SQL query to find the shortest path through a graph where the edges are, fairly typically, represented as two columns of vertex identifiers.  OQGraph as of v3 provides a virtual table that allows such an operation via a single SQL query, giving a simple mechanism for existing data to be queried in new and novel ways.  OQGraph v3 was merged into MariaDB 10.0.7 on 16 December 2013.

Aside: I did a talk [1][2][3] about this subject at the Linux.conf.au OpenProgramming miniconf. I really didn’t do a very good job, especially compared with my SysAdmin [4][5][6] miniconf talk; I lost my place, “ummed” way too much, etc., although audience members kindly seemed to ignore this when I talked to them later :-) I know I was underprepared, and in hindsight I tried to cover way too much ground in the time available which resulted in a not-really coherent story arc; I should have focused on MTR for the majority of the talk. But I digress…

Correction: I also had a snafu in my slides; OQGraph v3 theoretically supports any underlying storage engine. Unit test coverage currently only exercises MyISAM, but we plan to extend it to the other major storage engines in the near future.

Launchpad and BZR

MariaDB is maintained on Launchpad, which uses bazaar (bzr) for version control.  Bazaar already has a reputation for poor performance, and my own experience using it with MariaDB backs this up.  It doesn’t help that the MariaDB history is a long one: converted to git it shows some 87,000 commits, and the .git directory weighs in at nearly 6 GB!  The MariaDB team is considering a migration [7] to GitHub, but in the meantime I needed a way to work on OQGraph using git to preserve my productivity, as I am only working on the project in my spare time.

Github

So here’s what I wanted to achieve:

  1. Maintain the ‘bleeding edge’ development of OQGraph on Github
  2. Bring the entire history of all OQGraph v3 development across to Github, including any MariaDB tags
  3. Maintain the code on Github without the entirety of MariaDB
  4. Be able to push changes from github back to Launchpad+bzr

Items 1 & 3 will give me a productivity boost.  The resulting OQGraph-only repository with entire history almost fits on a 3½inch floppy! Item 1 may help make the project more accessible in the future.  Item 2 will of course allow me to go back in time and see what happened in the past.  Item 3 also has other advantages: it may make it easier to backport OQGraph to other versions of MariaDB if it becomes useful to do so.  And item 4 is critical, as for bug fixes and features to be accepted for merging into MariaDB in the short term it is still easiest to maintain a bzr branch on launchpad.

To this end, I first created a maintenance branch on Launchpad: https://code.launchpad.net/~andymc73/maria/oqgraph-maintenance. I will regularly merge the latest MariaDB trunk into this branch, along with completed changes (tested bugfixes, etc.) from the github repository, for final testing before proposing for merging into MariaDB trunk.

Then I created a standalone git repository.  The OQGraph code is self-contained in a subdirectory of MariaDB, storage/oqgraph.  The result should be a git repository where the original content of storage/oqgraph is the root directory, but with the history preserved.
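
The heart of that operation, once the bzr history has been converted into a git repository, is a subdirectory history rewrite; in essence something like:

# rewrite history so storage/oqgraph becomes the repository root,
# pruning commits that never touched it
git filter-branch --prune-empty --subdirectory-filter storage/oqgraph -- --all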

Doing this was a bit of a journey and really tested my git skills, and my workstation!  I will describe it in more detail in a subsequent blog entry.

I finally pushed the resulting repository up to GitHub; it can be found at https://github.com/andymc73/oqgraph. I also worked out the procedure for merging changes back to Launchpad; see the file Synchronising_with_Launchpad.md.

[1] https://lca2014.linux.org.au/wiki/index.php?title=Miniconfs/Open_Programming#Developing_OQGRAPH:_a_tool_for_graph_based_traversal_of_SQL_data_in_MariaDB

[2] Video: http://mirror.linux.org.au/linux.conf.au/2014/Tuesday/139-Developing_OQGRAPH_a_tool_for_graph_based_traversal_of_SQL_data_in_MariaDB_-_Andrew_McDonnell.mp4

[3] Slides: http://andrewmcdonnell.net/slides/lca2014_oqgraph_talk.pdf

[4] http://sysadmin.miniconf.org/presentations14.html#AndrewMcDonnell

[5] Video: http://mirror.linux.org.au/linux.conf.au/2014/Monday/167-Custom_equipment_monitoring_with_OpenWRT_and_Carambola_-_Andrew_McDonnell.mp4

[6] Slides: http://andrewmcdonnell.net/slides/lca2014_sysadmin_talk.pdf

[7] https://mariadb.atlassian.net/browse/MDEV-5240


Something new: dcfldd – a more advanced dd for data transfer

Aug 30 2013 Published under howto

Today I discovered dcfldd, and it was right there in Debian already: apt-get install dcfldd.

This was while building up a new Raspberry Pi image for a project I have in mind.

Specifically, dcfldd provides a progress meter, unlike the ubiquitous dd command. It also appears to be aimed at forensics and advanced data recovery, featuring on-the-fly hashing of data as well.

You can use dcfldd to write an image onto an SD card in exactly the same way as dd:

dcfldd bs=4096 if=2013-07-26-wheezy-raspbian.img of=/dev/sdl

Sample Progress Output:

141568 blocks (553Mb) written.
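
The forensics heritage means you can image and checksum in a single pass, too; for example:

# image an SD card while recording a SHA-256 of everything read
dcfldd if=/dev/sdl of=sdcard-backup.img bs=4096 hash=sha256 hashlog=sdcard.sha256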

It looks like you can get a Windows binary as well.


Fixing sluggish write performance of USB flash (thumb) drives

Nov 13 2012 Published under linux, tech

This has been noted in various places around the web, but in practice what I did was a combination of several writings, so I have documented my own experience here.

Background

I recently acquired (yet another) USB flash drive, this one a 16 GB “Dolphin” brand. The actual device reports as “048d:1165 Integrated Technology Express, Inc.” when interrogated using lsusb. I am using it to transfer transcoded Kaffeine PVR recordings from my PC to the set top box in the lounge, for more comfortable watching.

On first use, however, it took what seemed like forever to transfer a 250 MB AVI file over USB2; looking at the GKrellM chart, the write rate appeared to be a very poor 350 kB/sec. So it seemed that, yet again, I needed to optimise a USB disk before it was adequate for use.

In theory, to simplify things to one sentence, flash disks (and in particular, modern SSDs) should be faster than spinning disks, since access is a true random access operation with no waiting for heads to reach the right spot. In practice this is undermined by the blocky nature of flash writes.

The actual reason for the poor write speed is that the default partition starts at the 63rd sector (byte 32256) of the disk, while USB flash drives, SD cards, etc. are designed to write data in chunks of, say, 128 kB at a time. Even if you only write one sector, the entire 128 kB (256 sectors) must be re-read and written back. So when a partition is not aligned on a 128 kB boundary, more writes than necessary are required, slowing performance.

USB flash drives generally employ FAT32 so they are usable on the widest variety of devices (including set top boxes), and the general experience with FAT32 is that write performance is severely affected if the alignment does not match the flash write size, for both the partition and the FAT master table itself.

Procedure

The procedure I follow for fixing misaligned flash drives is:

1. Find a Linux computer, or reboot using a live Linux distribution such as SystemRescueCd.
2. Destroy the existing partition.
3. Recreate a single partition, ensuring it starts at the 256th sector (byte 131072, or 128 kB).
4. Format the partition as FAT32, with the following non-default options:
  • override the default sectors per cluster to ensure clusters are aligned (this comes at some expense of apparent usable space, but the performance gain for writing large files such as video is more than worth it);
  • adjust the “reserved” sectors so that the FAT table itself is aligned to 128 kB.

Detailed Steps

The following command sequence will accomplish this under Linux. It assumes your drive is at /dev/sdd; this will vary depending on what other disks you have.

1. Run GNU fdisk with units in sector mode, not cylinder mode, then print the existing partition table (enter p when prompted). Below you can see that the start sector of the existing partition is sector 63. Note this is also a primary partition. This is typical of USB flash disks you might purchase at the local supermarket…
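
Something like this (output abridged; the sizes are from my 16 GB drive and will differ for yours):

$ sudo fdisk -u /dev/sdd

Command (m for help): p
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              63    31567724    15783831    c  W95 FAT32 (LBA)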

2. Delete the existing partition.
3. Recreate the partition, aligned at sector 256 (131072 bytes), and set the type back to FAT32 LBA (type ‘c’, i.e. 0x0c), in this case matching what previously existed. Using FAT32 LBA allows us to start the filesystem at an arbitrary sector, bearing no relationship to legacy cylinders, etc. The final sector depends on the disk size.
4. Save the changes.
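
Steps 2 to 4 inside fdisk go something like this (sector values again from my drive):

Command (m for help): d            # delete the existing partition

Command (m for help): n            # recreate, primary partition 1
First sector: 256                  # aligned at 128 kB
Last sector: 31567724              # accept the default (end of disk)

Command (m for help): t            # change the partition type
Hex code: c                        # W95 FAT32 (LBA)

Command (m for help): w            # write the changes and exit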
5. Format the partition, setting the number of reserved sectors so that the FAT table remains aligned on a 128 kB boundary. Assuming sectors per cluster s=128 (65536 bytes) and our partition length of 31567469 sectors, we want the first FAT to start at the 256th sector within the partition (which works, as the partition itself is aligned). For some sizes of flash disk this can be an iterative process, but generally setting the number of reserved sectors to 256 will achieve what we want.
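
With dosfstools, the format command comes out something like this:

# -F 32: force FAT32; -s 128: 128 sectors (64 kB) per cluster;
# -R 256: reserved sectors, putting the first FAT 128 kB into the partition
sudo mkfs.vfat -F 32 -s 128 -R 256 /dev/sdd1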

6. This is the most important step: verify that the chosen number of reserved sectors has resulted in an aligned FAT table and an aligned data area.
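
Running dosfsck in verbose mode prints the layout; the lines to check look like this (byte values are from my drive):

$ sudo dosfsck -v /dev/sdd1
...
First FAT starts at byte 131072 (sector 256)
...
Data area starts at byte 2228224 (sector 4352)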


The important figure here is the data area sector: it must be an integer multiple of 256, and 4352 = 256 × 17 in this example.
7. Test the result. I copied a 256 MB file onto the drive, and GKrellM now reports ~2.5 MB/sec. More importantly, the copy finished in roughly one eighth of the time it took before reformatting.

The improved write performance should be just as noticeable from Windows.

