Using a Brother network scanner with Linux

Feb 23 2014

For a while now we have had a Brother MFC-J415W all-in-one wireless printer / fax / scanner thingy. It prints fine using CUPS and we have used it as a scanner with SD cards in sneaker-net mode.

Linux support

Brother actually do a reasonably good job supporting all of their recent equipment under Linux. All the drivers for CUPS and for the scanner functionality are available as packages for Debian, with usable documentation, from the Brother website [1].

I finally got around to setting things up for network scanning.

First you need to install the scanner driver; in my case this is the brscan3 driver, configured using the brsaneconfig3 tool. You then set this up as follows:
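The original command listing is gone from this page, but registering a networked scanner with the Brother tools looks roughly like this (the friendly name and IP address here are placeholders for your own setup):

```shell
# Register the scanner with the Brother SANE backend.
# "HomeScanner" and the IP address are placeholders -- substitute your own.
sudo brsaneconfig3 -a name=HomeScanner model=MFC-J415W ip=192.168.1.50
```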

Running brsaneconfig3 -q will output the (large number of) supported devices, along with any it finds on your network.

You can then fire up gimp, or your favourite graphical scanning tool, and test that scanning works as expected.

Having done this, I then set up remote scanning. This involves running a service on the destination computer, which I start as the logged-in user from the openbox autostart. For some reason the (undocumented) tool requires the argument '2' to get its help text to show. The simplest method is as follows:
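My exact autostart isn't reproduced here, but the essential piece is just launching the listener in the background, e.g. in ~/.config/openbox/autostart (a sketch; brscan-skey 2 is the oddity that prints its usage text):

```shell
# ~/.config/openbox/autostart (sketch):
# start Brother's scan-to-PC listener as the logged-in user
brscan-skey &
```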

After this setup, you simply run brscan-skey whenever you want to scan to your PC. From the MFC's LCD panel, choose 'Scan', then 'File', and it should find your computer (for some reason it displays the current user name as the computer name).

Files get saved into $HOME/brscan by default.

Improved remote scanning integration

Well of course I didn’t want to stop there. To make the system more user friendly, we expect:
* notifications when a document is received
* conversion to a useful format – the default is PNM for some reason, possibly this is the native scanner output

So I wrote a script, which starts with my X session under openbox.

The way it works is as follows:
* brscan-skey outputs received file information on stdout
* so we run it in a forever loop and parse that information
* The notify-send program pops up a notification in your desktop window manager
* I then convert from PNM to both JPEG and PDF using imagemagick
* I also keep a log

I have pushed this script to the blog github repository.


No responses yet

Launchpad to Github round trip, saving 5 gigabytes on the way

Jan 31 2014

As alluded to in my previous entry, as part of working on the OQGraph storage engine I have set up a development tree on GitHub, with beta and released code remaining hosted on Launchpad using bzr as part of the MariaDB official codebase.  This arrangement is subject to the following constraints:

  • I will be doing most new development via Github
  • I will regularly need to integrate changes from Github back to Launchpad
  • I occasionally may need to forward-port changes from Launchpad to Github; these may arise from changes in MariaDB itself.
  • The codebase on GitHub consists solely of the storage engine, without the overhead of the rest of MariaDB.
  • This provides a much more accessible route to development or testing of the latest OQGraph code – downloading just the source tree of MariaDB and replacing the released OQGraph directory with the development code, rather than having to bzr branch or git clone all of MariaDB.

Until I have gone around the loop a couple of times, I expect some glitches in the process to arise and need resolving.

Some resulting branches in GitHub:

  • baseline : this branch marks the point in the bzr history where we started
  • master : this branch is the result of merges from launchpad-tracking with ongoing development via github
  • other branches such as maria-mdev-xxx and maria-launchpad-xxx for work on specific bugs or features

But things had to start somewhere, so here is how I extracted the storage engine code from MariaDB whilst retaining full past history.

First things first – cloning MariaDB from launchpad

Since sometime around version 1.8, a new remote helper has been available for git, for cloning from and pushing to bzr repositories with a local git repository.  On Debian Wheezy this is provided by the package git-bzr.  So the first step is to clone the mainline MariaDB repository into a git repository.  It turns out you need multiple gigabytes of local storage for this; the resulting .git directory is 5.7 gigabytes.  It also takes rather a long time, even over a 12 Mbps link – the bottleneck is not the network, but seems to be bzr itself.

Prior to cloning to git, however, I ensured I also had a tracking branch created inside Launchpad, to facilitate upstream integration.  First I followed the instructions on the MariaDB wiki [1] to create a local bzr repository, and created my maintenance branch from lp:maria.  The end result should look like this:
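I haven't reproduced the bzr output here, but the setup was essentially this (the directory and branch names are mine, not significant):

```shell
# A local shared bzr repository, then a maintenance branch of the
# MariaDB mainline (lp:maria) for upstream integration.
bzr init-repo maria-bzr
cd maria-bzr
bzr branch lp:maria maria-maintenance
```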

Now it is possible to make a git repository from the bzr repository.
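With the git-bzr remote helper installed, the clone itself is a one-liner; be prepared for the multi-hour wait and the nearly 6 GB .git directory mentioned above. The same bzr:: prefix also works against a local bzr branch path.

```shell
# Clone the bzr branch into a new git repository via the remote helper.
git clone bzr::lp:maria maria-git
```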

MariaDB is showing its long and venerable history, with over 80000 commits in a very complex branch structure; the downside of this includes significant lag on simple operations such as git status or firing up the GUI gitg.  But otherwise the history is complete and corresponds with that in the bzr repository; the git-bzr remote helper does its job well.

We also have remotes that actually point to the bzr repository:

One thing I had to do was clean up an extraneous tag named HEAD; I removed its ref to avoid ambiguous-ref errors (this doesn't interfere with round-trip operations):
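The original listing is gone, but the operation amounts to deleting the fully qualified ref (a sketch; the function name is mine, and using the full ref avoids any ambiguity about which HEAD is meant):

```shell
# Remove a stray tag literally named HEAD, which shadows the real HEAD
# and causes "refname 'HEAD' is ambiguous" warnings.
delete_head_tag() {
    git update-ref -d refs/tags/HEAD
}
```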


Lossless Data Reduction

Now at this point I could have created a new repository on Github, pushed the entire clone to it, and been done with it.  But that's a lot of data, clogging up both Github and my connections.  And not much use if I wanted to do a quick clone for some hacking in a spare half hour waiting for a plane or something – see bullet five in the introduction above.

One way to do this is to use git filter-branch with a subdirectory filter.  However, for a lot of its history the OQGraph code existed in two unrelated directories, and the past history of the files in the second directory would be lost.  Specifically, the test harnesses for the storage engine previously existed under mysql-test/suite/oqgraph and now live under storage/oqgraph/mysql-test. The trick is to rewrite history with a 'fake' directory, so that the files previously under mysql-test/suite/oqgraph now appear to have resided under storage/oqgraph/maria/mysql-test/suite/oqgraph.  This preserves not only past changes in those files, but even the move to the final location, through the eventual subdirectory filter.  Note, at this point we are not ready to apply a subdirectory filter just yet.

The subdirectory filter operation takes a very long time, so we also look for a way to quickly remove a lot of code.  The history goes back to 2000, and OQGraph entered the scene in about 2010, so we can run a different filter-branch operation to remove all commits older than that point.  However, because we are preserving the merge history, the default method for doing that still leaves behind a lot of cruft back to 2007. So we need to apply yet another filter to remove the empty merge commits.

Finally we can apply a subdirectory filter.

Putting it all together – remove old commits and free up space

I started by making a new clone.  We need to retain the original for round-trip integration, and for starting again if we make a mistake.  Having done that, we want to find the oldest commit in any part of the OQGraph software.
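The log inspection was along these lines (a sketch, wrapped in a function for clarity; the pathspecs cover both historical locations of the code):

```shell
# Show every commit touching either OQGraph directory; with --reverse the
# oldest relevant commit appears first.
log_oqgraph_commits() {
    git log --reverse --oneline -- storage/oqgraph mysql-test/suite/oqgraph
}
```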

From a visual examination of the log output, the oldest relevant commit might be, for example, 7de3c2e1d02a2a4b645002c9cebdaa673f05e767.  It would be feasible to write a script to work this out by parsing and sorting the dates, but I didn't spend the time at this point.  To remove the old commits, we need to make a new 'start of history', which we do by adding a graft and then running a filter-branch command.  We could run just filter-branch by itself, but at the same time we also remove everything that is not part of the OQGraph storage engine.

(Note that each command is one long line; I have separated the commands with blank lines here.)
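The original listing was lost along the way; the following is a reconstruction as a sketch, with the index-filter being the standard "keep only these paths" recipe, wrapped in a function for readability (the function name is mine, and the commit id is the example from above):

```shell
# Sketch: trim history to start at the oldest OQGraph-related commit, keeping
# only the two OQGraph directories. Run inside the cloned repository.
trim_to_oqgraph() {
    local oldest="$1"    # e.g. 7de3c2e1d02a2a4b645002c9cebdaa673f05e767

    # Step 1: graft -- declare that commit to be a new, parentless root.
    echo "$oldest" > .git/info/grafts

    # Step 2: rewrite all branches: empty the index, then re-add only the two
    # OQGraph paths; carry tags across; prune commits that end up empty.
    git filter-branch \
        --index-filter '
            git rm -r --cached --ignore-unmatch -q . >/dev/null;
            git reset -q "$GIT_COMMIT" -- storage/oqgraph mysql-test/suite/oqgraph || true
        ' \
        --tag-name-filter cat \
        --prune-empty \
        -- --all
}
```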

Here, the index-filter option manipulates the git index, one entry at a time; we unstage the current commit and re-commit just the files of interest.  This is faster than a subdirectory filter because we don't check out the entire tree, and we can also keep two directories instead of just one.  The tag-name-filter is crucial; without it we lose all the MariaDB history tags. The prune-empty option removes commits left with no data, and -- --all ensures we look at all branches rather than just the direct ancestors of HEAD.

This is followed up by space reclamation: in three steps, we remove backup refs created by filter-branch, prune all obsolete refs, then garbage collect to reclaim space.
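The commands themselves were lost with the original listing; the usual sequence for those three steps looks like this (a sketch, with the function name mine):

```shell
# Reclaim space after filter-branch (run inside the rewritten repository).
reclaim_space() {
    # 1. delete the refs/original/* backups that filter-branch left behind
    git for-each-ref --format='%(refname)' refs/original/ |
        while read -r ref; do git update-ref -d "$ref"; done
    # 2. expire every reflog entry still pinning the old objects
    git reflog expire --expire=now --all
    # 3. repack and prune the now-unreachable objects
    git gc --prune=now --aggressive
}
```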

Although faster than a subdirectory filter, it still takes a couple of hours to run, but the results are amazing: the entire repository has shrunk to just 7 megabytes!

Cleaning up cruft and normalising the directory structure

There is still some work to do, however; a large number of merge commits remain, all of which are empty, comprising history both before and after the graft point, and obscuring our OQGraph commits.  The prune-empty argument used previously doesn't remove merge commits, so we need to do that differently.  The data is now dramatically reduced, so a subdirectory filter will be quite fast, and it also removes merge commits; but we still need to retain the history out in mysql-test/suite/oqgraph, which would be lost if we just did a subdirectory filter.  So we need to rewrite the related paths first.

(Note again that each command is one long line; I have separated the commands with blank lines.)

Here, running an index-filter with git update-index, we can 'rename' the file paths affected by a commit without affecting the actual commit. So I used sed to change mysql-test/suite/oqgraph paths to storage/oqgraph/maria/mysql-test/suite/oqgraph. These then survive the subdirectory filter, and we retain a complete history of the OQGraph files without really changing anything.
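Reconstructed as a sketch (the index-filter is the standard ls-files / update-index rename recipe from the filter-branch documentation, adapted to the paths above; the function name is mine):

```shell
# Rewrite the historical test paths, then cut the repository down to the
# storage/oqgraph subdirectory. Run inside the rewritten repository.
relocate_and_filter() {
    # 'Rename' mysql-test/suite/oqgraph/* in every commit by rewriting the
    # index entries; the commits' contents are otherwise untouched.
    git filter-branch -f \
        --index-filter '
            git ls-files -s |
            sed "s|\tmysql-test/suite/oqgraph/|\tstorage/oqgraph/maria/mysql-test/suite/oqgraph/|" |
            GIT_INDEX_FILE="$GIT_INDEX_FILE.new" git update-index --index-info &&
            if [ -f "$GIT_INDEX_FILE.new" ]; then mv "$GIT_INDEX_FILE.new" "$GIT_INDEX_FILE"; fi
        ' \
        --tag-name-filter cat -- --all

    # Now make storage/oqgraph the new repository root; this also drops the
    # empty merge commits left over from the earlier rewrite.
    git filter-branch -f \
        --subdirectory-filter storage/oqgraph \
        --tag-name-filter cat --prune-empty -- --all
}
```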

I ran one final clean up.

Now I had a complete history of OQGraph, and only OQGraph, including contemporary MariaDB tags, which I pushed to Github.
The results are on Github, and you can find the corresponding commits in the MariaDB trunk on Launchpad if you desire.

The draft procedure for round-tripping back to Launchpad is described in the Github maintainer docs; this is still a work in progress.



OQGraph – bazaar adventures in migrating to git

Jan 27 2014


I have been acting as a ‘community maintainer’ for a project called OQGraph for a bit over a year now.

OQGraph provides graph traversal algorithms over SQL data for MariaDB.  Briefly, it is very difficult, if not impossible, to write a SQL query to find the shortest path through a graph where the edges are, as is typical, represented as two columns of vertex identifiers.  OQGraph as of v3 provides a virtual table that allows such an operation via a single SQL query.  This provides a simple mechanism for existing data to be queried in new and novel ways.  OQGraph v3 is now merged into MariaDB 10.0.7 as of 16 December 2013.

Aside: I did a talk [1][2][3] about this subject at the OpenProgramming miniconf. I really didn’t do a very good job, especially compared with my SysAdmin [4][5][6] miniconf talk; I lost my place, “ummed” way too much, etc., although audience members kindly seemed to ignore this when I talked to them later :-) I know I was underprepared, and in hindsight I tried to cover way too much ground in the time available which resulted in a not-really coherent story arc; I should have focused on MTR for the majority of the talk. But I digress…

Correction: I also had a snafu in my slides; OQGraph v3 theoretically supports any underlying storage engine. The unit test coverage is currently only over MyISAM, but we plan to extend it to test the other major storage engines in the near future.

Launchpad and BZR

MariaDB is maintained on Launchpad. Launchpad uses bazaar (bzr) for version control.  Bazaar already has a reputation for poor performance, and my own experience using it with MariaDB backs this up.  It doesn't help that the MariaDB history is a long one: converted to git it shows 87000 commits, and the .git directory weighs in at nearly 6 GBytes!  The MariaDB team is considering migration [7] to github, but in the meantime I needed a way to work on OQGraph using git to preserve my productivity, as I am only working on the project in my spare time.


So here's what I wanted to achieve:

  1. Maintain the ‘bleeding edge’ development of OQGraph on Github
  2. Bring the entire history of all OQGraph v3 development across to Github, including any MariaDB tags
  3. Maintain the code on Github without the entirety of MariaDB.
  4. Be able to push changes from github back to Launchpad+bzr

Items 1 & 3 will give me a productivity boost.  The resulting OQGraph-only repository with its entire history almost fits on a 3½-inch floppy! Item 1 may help make the project more accessible in the future.  Item 2 will of course allow me to go back in time and see what happened in the past.  Item 3 also has other advantages: it may make it easier to backport OQGraph to other versions of MariaDB if that becomes useful.  And item 4 is critical, as for bug fixes and features to be accepted for merging into MariaDB in the short term, it is still easiest to maintain a bzr branch on Launchpad.

To this end, I first created a maintenance branch on Launchpad: I will regularly merge the latest MariaDB trunk into this branch, along with completed changes (tested bugfixes, etc.) from the github repository, for final testing before proposing for merging into MariaDB trunk.

Then I created a standalone git repository.  The OQGraph code is self-contained in a subdirectory of MariaDB, storage/oqgraph.  The result should be a git repository where the original content of storage/oqgraph is the root directory, but with the history preserved.

Doing this was a bit of a journey, and really tested out my git skills, and my workstation!  I will describe this in more detail in a subsequent blog entry.

I finally pushed the resulting repository up to Github. I also determined the procedure for merging changes back to Launchpad; see the maintainer docs in the repository.



[2] Video:

[3] Slides:


[5] Video:

[6] Slides:



On and Presentation logistics

Jan 06 2014

How to diff LibreOffice Impress presentations in git

These days I keep track of presentation material using git (like many things). One handy trick is you can `git diff` an ODP file and have the text changes show like any other code.

The following procedure will enable this for Debian, YMMV.

  1. sudo apt-get install odt2txt
  2. Edit the file .gitattributes in the repository containing the ODP files. Note that this file is interpreted at each level in the directory structure
  3. Add the following lines to the file (odt2txt can extract text from Impress .odp documents as well):
    *.odt diff=odt
    *.odp diff=odt
  4. You should probably commit .gitattributes as well.
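For the diff=odt attribute to take effect, git also needs to know what the odt driver is; without a textconv setting the attribute has no effect. That is one git config command away (a sketch; the function name is mine, and with --global it could go in ~/.gitconfig instead):

```shell
# Define the "odt" diff driver referenced by .gitattributes: convert the
# ODF document to plain text with odt2txt before diffing.
setup_odt_diff() {
    git config diff.odt.textconv odt2txt
}
```

After this, `git diff` and `git log -p` show readable text changes for any file matching the attribute.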

I haven’t yet worked out a way to make this global using ‘git config’ which is a little annoying.

Another quick tip: to redock the 'slides' pane if you manage to set it "loose", hold CTRL and double-click the word 'Slides' at the top, _inside_ the pane window.

And lastly, if anyone can tell me how to get Impress Presenter Mode to actually work with xrandr without obscuring the actual presentation that would be awesome…

Managing to avoid making last minute tweaks to a presentation is another problem not solvable with code ;-) The problem is I always think of improvements each time I re-read my talk, and also once the conference has started I always think of good techniques to try after watching the other speakers…


(One miniconf talk down, one to go!)


Quick Tip – AspireOne xrandr

Nov 26 2013

Remember this to get the projector working:

xrandr --output VGA1 --mode 1440x900 --right-of LVDS1

