Using a Brother network scanner with Linux

Feb 23 2014, filed under howto, linux

For a while now we have had a Brother MFC-J415W all in one wireless printer / fax / scanner thingy. It prints fine using CUPS and we have used it as a scanner with SD cards in sneaker-net mode.

Linux support

Brother actually do a reasonably good job supporting all of their recent equipment under Linux. All the drivers for CUPS and for the scanner functionality are available as packages for Debian, with usable documentation, from the Brother website [1].

I finally got around to setting things up for network scanning.

First you need to install the scanner driver; in my case this was the brscan3 driver, which is configured with the brsaneconfig3 tool. You then set it up as follows:
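A typical invocation looks like this (the name and IP address here are placeholders; substitute your own device's details):

```shell
# Register the networked Brother scanner with the brscan3 SANE backend.
# "SCANNER" and the IP address are placeholders for your own values.
brsaneconfig3 -a name=SCANNER model=MFC-J415W ip=192.168.1.20
```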

Running brsaneconfig3 -q will output a (large) list of supported devices, plus any it finds on your network:

You can then run up gimp or your favourite graphical / scanner tool and test that scanning works as expected.

Having done this, I then set up remote scanning. This involves running a service on the destination computer, which I set up to run as the logged-in user from the openbox autostart. For some reason the (undocumented) tool requires the argument ‘2’ to make its help text show… The simplest method is as follows:
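For the record, the autostart entry itself is a one-liner (assuming the usual per-user openbox autostart file):

```shell
# ~/.config/openbox/autostart
# Start the Brother scan-key listener for the logged-in user.
# (Oddly, running "brscan-skey 2" is what makes the help text show.)
brscan-skey &
```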

After this setup, you simply need to run brscan-skey when you want to scan to your PC. From the MFC LCD panel, choose ‘Scan’, then ‘File’, and it should find your computer (for some reason it displays the current user name as the computer name).

Files get saved into $HOME/brscan by default.

Improved remote scanning integration

Well of course I didn’t want to stop there. To make the system more user friendly, we expect:
* notifications when a document is received
* conversion to a useful format – the default is PNM; possibly this is the native scanner output

So I wrote a script, which starts with my X session under openbox.

This is the essentials:

The way this works is as follows:
* brscan-skey outputs received file information on stdout
* so we run it in a forever loop and parse that information
* The notify-send program will popup a notification in your desktop window manager
* I then convert from PNM to both JPEG and PDF using imagemagick
* I also keep a log
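Putting those points together, the script is roughly the following sketch (the exact brscan-skey output format is an assumption here – any line mentioning a .pnm path is treated as a completed scan, and the paths and log location are likewise assumptions):

```shell
#!/bin/bash
# Watch brscan-skey output forever and post-process each received scan.
SCANDIR="$HOME/brscan"
LOG="$SCANDIR/scan.log"
mkdir -p "$SCANDIR"

# Pull the first /path/to/file.pnm out of a line of brscan-skey output.
extract_pnm() {
  printf '%s\n' "$1" | grep -o '/[^ ]*\.pnm' | head -n 1
}

handle_scan() {
  local pnm="$1" base="${1%.pnm}"
  notify-send "Scan received" "$pnm"   # desktop popup notification
  convert "$pnm" "$base.jpg"           # imagemagick: PNM -> JPEG
  convert "$pnm" "$base.pdf"           # imagemagick: PNM -> PDF
  echo "$(date) $pnm" >> "$LOG"        # keep a log
}

# Run brscan-skey in a forever loop, restarting it if it ever exits.
# (Guarded so the loop only starts when the Brother tools are installed.)
if command -v brscan-skey >/dev/null 2>&1; then
  while true; do
    brscan-skey 2>&1 | while read -r line; do
      pnm=$(extract_pnm "$line")
      [ -n "$pnm" ] && [ -f "$pnm" ] && handle_scan "$pnm"
    done
    sleep 5
  done
fi
```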

I have pushed this script to the blog github repository.



OQGraph – bazaar adventures in migrating to git

Jan 27 2014, filed under oqgraph


I have been acting as a ‘community maintainer’ for a project called OQGraph for a bit over a year now.

OQGraph provides graph traversal algorithms over SQL data for MariaDB.  Briefly, it is very difficult, if not impossible, to write a SQL query to find the shortest path through a graph where the edges are (fairly typically) represented as two columns of vertex identifiers.  OQGraph as of v3 provides a virtual table that allows such an operation via a single SQL query, giving a simple mechanism for existing data to be queried in new and novel ways.  OQGraph v3 is now merged into MariaDB 10.0.7 as of 16 December, 2013.
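To give a flavour of what that looks like, here is a sketch based on the OQGraph v3 documentation (database, table and column names are made up for illustration): a normal edge table, the OQGraph virtual table over it, and a single-query shortest path.

```shell
# Requires a MariaDB (>= 10.0.7) server with the OQGRAPH engine enabled.
mysql mydb <<'SQL'
-- An ordinary edge table: one row per directed edge.
CREATE TABLE edges (
  origid INT UNSIGNED NOT NULL,
  destid INT UNSIGNED NOT NULL,
  PRIMARY KEY (origid, destid),
  KEY (destid)
);
-- The OQGraph v3 virtual table over it; no data is stored here.
CREATE TABLE edges_graph (
  latch  VARCHAR(32) NULL,
  origid BIGINT UNSIGNED NULL,
  destid BIGINT UNSIGNED NULL,
  weight DOUBLE NULL,
  seq    BIGINT UNSIGNED NULL,
  linkid BIGINT UNSIGNED NULL,
  KEY (latch, origid, destid) USING HASH,
  KEY (latch, destid, origid) USING HASH
) ENGINE=OQGRAPH data_table='edges' origid='origid' destid='destid';
-- Shortest path from vertex 1 to vertex 6, one result row per hop.
SELECT * FROM edges_graph
 WHERE latch = 'dijkstras' AND origid = 1 AND destid = 6;
SQL
```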

Aside: I did a talk [1][2][3] about this subject at the OpenProgramming miniconf. I really didn’t do a very good job, especially compared with my SysAdmin [4][5][6] miniconf talk; I lost my place, “ummed” way too much, etc., although audience members kindly seemed to ignore this when I talked to them later :-) I know I was underprepared, and in hindsight I tried to cover way too much ground in the time available which resulted in a not-really coherent story arc; I should have focused on MTR for the majority of the talk. But I digress…

Correction: I also had a snafu in my slides; OQGraph v3 theoretically supports any underlying storage engine. The unit test coverage currently only covers MyISAM, but we plan to extend it to the other major storage engines in the near future.

Launchpad and BZR

MariaDB is maintained on Launchpad. Launchpad uses bazaar (bzr) for version control.  Bazaar already has a reputation for poor performance, and my own experience using it with MariaDB backs this up.  It doesn’t help that the MariaDB history is a long one: converted to git it shows 87000 commits, and the .git directory weighs in at nearly 6 GBytes!  The MariaDB team is considering migration [7] to github, but in the meantime I needed a way to work on OQGraph using git to preserve my productivity, as I am only working on the project in my spare time.


So here’s what I wanted to achieve:

  1. Maintain the ‘bleeding edge’ development of OQGraph on Github
  2. Bring the entire history of all OQGraph v3 development across to Github, including any MariaDB tags
  3. Maintain the code on Github without the entirety of MariaDB.
  4. Be able to push changes from github back to Launchpad+bzr

Items 1 & 3 will give me a productivity boost.  The resulting OQGraph-only repository with its entire history almost fits on a 3½-inch floppy!  Item 1 may help make the project more accessible in the future.  Item 2 will of course allow me to go back in time and see what happened in the past.  Item 3 also has other advantages: it may make it easier to backport OQGraph to other versions of MariaDB if that becomes useful.  And item 4 is critical: for bug fixes and features to be accepted for merging into MariaDB in the short term, it is still easiest to maintain a bzr branch on Launchpad.

To this end, I first created a maintenance branch on Launchpad: I will regularly merge the latest MariaDB trunk into this branch, along with completed changes (tested bugfixes, etc.) from the github repository, for final testing before proposing for merging into MariaDB trunk.

Then I created a standalone git repository.  The OQGraph code is self-contained in a subdirectory of MariaDB, storage/oqgraph.  The result should be a git repository where the original content of storage/oqgraph is the root directory, but with the history preserved.

Doing this was a bit of a journey and really tested out my git skills, and my workstation!  I will describe it in more detail in a subsequent blog entry.
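In outline, though, the core of it was git filter-branch. A sketch follows; "mariadb-converted" is a placeholder for a local git conversion of the bzr branch (made with, e.g., git-remote-bzr or bzr fast-export):

```shell
# Start from a git conversion of the MariaDB bzr branch, then keep only the
# storage/oqgraph subtree, rewriting it to become the repository root.
git clone mariadb-converted oqgraph-git &&
cd oqgraph-git &&
git filter-branch --prune-empty --subdirectory-filter storage/oqgraph -- --all &&
# Drop the backup refs filter-branch leaves behind, then repack aggressively.
git for-each-ref --format='%(refname)' refs/original/ |
  xargs -r -n 1 git update-ref -d &&
git gc --aggressive --prune=now
```

Tags pointing at rewritten commits survive the filter, which is what keeps the MariaDB release tags in the standalone history.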

I finally pushed the resulting repository up to github. I also determined the procedure for merging changes back to Launchpad; see the file in the repository for details.



[2] Video:

[3] Slides:


[5] Video:

[6] Slides:



Something new: dcfldd – a more advanced dd for data transfer

Aug 30 2013, filed under howto

Today I discovered dcfldd, and it was right there in Debian already: apt-get install dcfldd.

This was while building up a new Raspberry Pi image for a project I have in mind.

Specifically, dcfldd provides a progress meter, unlike the ubiquitous dd command. It also appears to be aimed at forensics and advanced data recovery, featuring on-the-fly hashing of the data as well.
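For example, you can hash the data as it is copied using dcfldd's hash= and hashlog= options (device name as per the example below; check the hash against the source image afterwards):

```shell
# Write the image and compute a SHA-256 of the data as it streams through;
# the digest is written to transfer.sha256 for later comparison.
dcfldd bs=4096 if=2013-07-26-wheezy-raspbian.img of=/dev/sdl \
       hash=sha256 hashlog=transfer.sha256
```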

You can use dcfldd to write an image onto an sdcard in exactly the same way as dd:

dcfldd bs=4096 if=2013-07-26-wheezy-raspbian.img of=/dev/sdl

Sample Progress Output:

141568 blocks (553Mb) written.

It looks like you can get a Windows binary as well.


Fixing sluggish write performance of USB flash (thumb) drives

Nov 13 2012, filed under linux, tech

    This has been noted in various places around the web, but what I did in practice combines several of those writings, so I have documented my own experience here.


    I recently acquired (yet another) USB flash drive, this one a 16 GB “Dolphin” brand. The actual device reports as “048d:1165 Integrated Technology Express, Inc.” when interrogated using lsusb. I am using it to transfer transcoded Kaffeine PVR recordings from my PC to the set top box in the lounge for more comfortable watching.

    On first use, however, it took what seemed like forever to transfer a 250MB AVI file over USB2, and looking at the GKrellM chart the write data rate appeared to be a very poor 350 kB/sec. So it seemed, yet again, I needed to optimise a USB disk before it was adequate for use.

    In theory, to simplify things to one sentence, flash disk (and in particular, modern SSD) should be faster than spinning disk, as access is a true random access operation, with no waiting for heads to reach the right spot. In practice this is undermined by the blocky nature of flash disk writes. The actual reason for the poor write speed is that the default partition starts at the 63rd sector (byte 32256) of the disk, while USB flash drives, SD cards, etc. are designed to write data in chunks of, say, 128kB at a time. Even if you only write one sector, the entire 128kB (256 sectors) must be read first and then rewritten. So when a partition is not aligned on a 128kB boundary, more writes than necessary are required, slowing performance. USB flash drives generally employ FAT32 so they are usable on the widest variety of devices (including set top boxes), and the general experience with FAT32 is that write performance suffers severely if the partition alignment does not match the flash write size – for both the partition and the FAT master table itself.


    The procedure I follow for fixing misaligned flash drives is:

    1. Find a Linux computer, or reboot using a live Linux distribution such as SysRescueCD
    2. Destroy the existing partition.
    3. Recreate a single partition, ensuring it starts at the 256th sector (byte 131072, or 128kB)
    4. Format the partition to FAT32, with the following non-default options:
      • override the default sectors per cluster to ensure clusters are aligned. This comes at some expense of apparent usable space, but the performance gain for writing large files such as video files is more than worth it.
      • Adjust the “reserved” sectors so that the FAT table itself is aligned to 128kB.

    Detailed Steps

    The following command sequence will accomplish this under Linux. This assumes your drive is at /dev/sdd, this will vary depending on what other disks you have.

    1. Run GNU fdisk with units in sector mode, not cylinder mode, then print the existing partition table (enter p at the prompt). Below you can see the start sector of the existing partition is at sector 63. Note this is also a primary partition. This is typical of USB flash disks you might purchase at the local supermarket…

    2. Delete the partition:
    3. Recreate the partition, aligned at sector 256 (131072 bytes), and set the type back to FAT32 LBA (type ‘c’, or 0x0c – in this case matching what previously existed). Using FAT32 LBA allows us to start the filesystem on an arbitrary sector bearing no relationship to legacy cylinders, etc. The final sector depends on the disk size.
    4. Save changes:
    5. Format the partition, setting the number of reserved sectors so that the FAT table remains aligned at a 128kB boundary. Assuming sectors per cluster s=128 (65536 bytes), and our partition length of 31567469 sectors, we want the first FAT to start at the 256th sector within the partition (which works because the partition itself is aligned.) For some sizes of flash disk this can be an iterative process, but generally setting the number of reserved sectors to 256 will achieve what we want.

    6. This is the most important step – verify that the chosen number of reserved sectors has resulted in an aligned FAT table and aligned data area.

      The important figure here is the data area sector – it must be an integer multiple of 256, and 256 x 17 == 4352 in this example.
    7. Test the result. I copied a 256 MB file onto the drive, and GKrellM is now reporting ~2.5MB/sec. More importantly, it finished in approx. one eighth of the time compared to before reformatting.
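Pulling the detailed steps together, the whole session is roughly as follows (the device name and the exact fdisk answers will vary with your disk):

```shell
# WARNING: destroys all data on the drive; /dev/sdd is an example device name.
fdisk -u /dev/sdd   # -u = work in sectors; then: p (print), d (delete),
                    # n (new primary partition, first sector 256),
                    # t then c (type 0x0c, FAT32 LBA), w (write and exit)

# Format with 128-sector (64kB) clusters and 256 reserved sectors so that
# both the FAT tables and the data area land on 128kB boundaries:
mkfs.vfat -F 32 -s 128 -R 256 /dev/sdd1

# Verify: the reported data area start must be an integer multiple of 256.
fsck.vfat -v /dev/sdd1
```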

    The improved write performance should be just as noticeable from Windows.


Patching and Building a custom Linux Kernel in Debian

Apr 10 2012, filed under linux

These posts cover a topic which seems to be documented to varying degrees across the net, but nothing quite exactly matched what I wanted to do. In the end this is a result of multiple sources of information and inspiration (and perspiration…)

For some time I had been getting a Kernel fault report popup with irritating regularity. In the end I isolated it to something going wrong with my external Firewire drive after my computer was resuming from suspend (specifically Suspend to RAM.)
In the end chasing this down required working through the following tasks:

  1. Disabling the proprietary NVidia driver and activating ‘nv’ (I was unable to successfully configure nouveau to work with my particular dual head configuration), so that my kernel was no longer ‘TAINTED’, which would have led me into a brick wall had I been required to report a kernel bug.
  2. Consistently replicating the fault, which included learning about a bunch of stuff in the Linux /sys filesystem.
  3. Finally getting a 3-series kernel to work on Debian Squeeze – it turns out 3.2 has by now been packaged into Debian backports, which gets me past an earlier roadblock with kernel upgrades. Upgrading to the latest kernel would establish whether the problem had already been resolved (it had not, at least as of 3.2.9)
  4. Rebuilding the kernel from source (something I have done many times before, but it doesn’t hurt to recap) and applying the patches needed
  5. Re-enabling NVidia – which involved verifying my DKMS setup was still working.

I haven’t blogged recently due to various family mini-crises to do with pets, sickness and other issues, as well as extra busyness at work.

As it is getting late this post will conclude with the command line used to build and install my kernel, and I will expand on this in the next post.
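The command line in question was roughly the following (a sketch; the exact package and config file names are those from squeeze-backports at the time and will differ on your machine):

```shell
# Fetch the backports kernel source and the make-kpkg tooling.
apt-get install linux-source-3.2 kernel-package
cd /usr/src
tar -xjf linux-source-3.2.tar.bz2
cd linux-source-3.2
# Reuse the configuration of the installed backports kernel.
cp /boot/config-$(uname -r) .config
make oldconfig
# Build Debian packages for the kernel image and headers, then install them.
make-kpkg --initrd --revision=1.custom kernel_image kernel_headers
dpkg -i ../linux-image-3.2*custom*.deb ../linux-headers-3.2*custom*.deb
```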

Things to note:

  • The above will build a kernel using the same configuration as an installed Debian backports 3.2 kernel, assuming the backports kernel and source packages have been installed. There are no changes or patches yet.
  • Your user must be in the ‘src’ group for the make-kpkg command to work as-is.
  • The 3.2 kernel in backports (as of March 2012) was in fact version 3.2.9 although this is not indicated in the Debian version for some reason.

