Booting a Windows 7/Vista Recovery partition when Windows is broken

Apr 22 2014

Even though I am “pretty much” an open source advocate, I still have to use Windows professionally when required, and of course I am the IT support for the extended family :-) In this case, I needed to rebuild a laptop for my mum from scratch.

The laptop in question, a Benq Joybook A52, had previously been my dad’s, and had been through incarnations of Windows Vista, downgraded to XP, then back up to Vista. I decided it would be safer to start with a clean slate: perform a factory restore, then apply the service packs and all the recent security updates anew.

This laptop had previously been cleansed of the usual ‘crapware’ and other default programs, including the factory recovery icon. Early on I had installed the very useful tool EasyBCD from http://neosmart.net/EasyBCD/ to dual boot Vista and XP. (Aside: EasyBCD used to be free; it seems you can still get it free for non-commercial use but you have to dig a bit.) Using a Linux bootable USB I was able to detect that the recovery partition, a FAT32 primary partition labelled ‘PQSERVICE’ but set to type 0xde (Dell Utility), was (luckily) still present. However, regardless of which settings I tried, I was unable to immediately get the factory recovery partition to start, either from the rescue USB or via EasyBCD. Being at mum & dad’s for dinner, I didn’t have a lot of time to get into the nuts and bolts.

Surprisingly, a quick search for ‘PQSERVICE’, ‘booting benq recovery partition’ or various other combinations didn’t really bring up anything useful. So for the moment /dev/sda4 remained stubbornly inaccessible to the Windows boot machinery.

A couple of weeks later, now having the laptop in my possession to deal with this, I took an image so that I could experiment. Luckily the drive was only 80GB, such an expansive size from circa 2006!

The first thing I did was create an image to play with in qemu. Figuring that the important parts were simply the recovery partition and the boot sector, I managed this as follows:

(Aside – these instructions may or may not also work on other flavours of laptop of this vintage!)

Firstly, upon mounting the recovery partition, you can see what appears to be a bog standard cut down Windows filesystem:
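For example (done against the device from the rescue USB; the equivalent loop mount on the image needs an offset option):

    mkdir -p /mnt/recovery
    mount -t vfat -o ro /dev/sda4 /mnt/recovery
    ls /mnt/recovery     # a cut-down Windows tree; listing omitted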

Examine the partition layout:
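The original listing is lost, but parted (or fdisk -l) against the image shows it:

    parted laptop.img print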

Results:
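Reconstructed for illustration (exact sizes elided):

    Disk .../laptop.img: 80.0GB
    Partition Table: msdos

    Number  Start   End     Size    Type     File system  Flags
     1      ...     ...     ...     primary  ntfs         boot
     ...
     4      ...     80.0GB  ...     primary  fat32        diag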

The recovery partition is #4. Note that the ‘diag’ flag reported by parted is actually type 0xde when checked using fdisk.

Second, assemble a fresh experimental disk:
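The gist of it, with START4 and SECTORS4 standing in for the start sector and length of partition 4 from the table above:

    # empty sparse file, the same size as the original image
    truncate -s $(stat -c%s laptop.img) test.bin

    # copy the boot sector (MBR + partition table)
    dd if=laptop.img of=test.bin bs=512 count=1 conv=notrunc

    # copy the recovery partition to the same offset it occupies on the real disk
    dd if=laptop.img of=test.bin bs=512 skip=$START4 seek=$START4 count=$SECTORS4 conv=notrunc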

As a check, the sizes of test.bin and laptop.img should be identical.

Now, attempt to boot the image in QEMU.

This is achievable using Grub2, and in this case I chose Super Grub2 Disk, http://www.supergrubdisk.org.
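Something like the following, with the Super Grub2 Disk ISO in the emulated CD drive (-boot d boots from the CD first; adjust the qemu binary name for your distro):

    qemu-system-i386 -m 512 -hda test.bin -cdrom super_grub2_disk.iso -boot d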

After the boot screen starts, choose ‘c’ for a command line, and use the following:
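From memory, the incantation was along these lines (Grub2 counts partitions from 1; if chainloading complains, the partition flags can be toggled with parttool):

    set root=(hd0,msdos4)
    chainloader +1
    boot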

For once, this worked first time, starting the PowerQuest recovery software.

So now I simply had to repeat this process, using a USB key with Grub2 on the laptop itself.

Epilogue

For some reason the BIOS in this laptop did not understand my favourite bootable USB with Grub2, so I ended up burning a CD-R for the first time in a little while.

I also had to wipe the other partitions out of the partition table; it seems the recovery program just unpacks a partition to C: rather than rebuilding the partition table! Things to note: remember to set the partition type of what will become C: (/dev/sda1) to NTFS and mark it bootable; and for good measure, zero the first sectors of that partition, as sketched below.
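Something along these lines did the trick (after recreating /dev/sda1 with fdisk):

    # blow away any stale filesystem metadata at the start of the new C: partition
    dd if=/dev/zero of=/dev/sda1 bs=512 count=2048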


OpenWRT WDS between legacy WRT54G and recent TP-Link devices

Mar 30 2014

For a while now I have had multiple wifi routers, all providing access points and connected to each other using a feature called WDS. All of the routers run OpenWRT. Recently one of them died and everything kind of stopped working properly. I actually had the following configuration:

TP-LINK <--wired,bridged--> ASUS WL500G <--wireless,WDS,bridged--> Linksys WRT54G

Now, this all worked because the conventional wisdom appears to be that WDS works when the device at each end is the same, i.e. Broadcom to Broadcom (which I had in the case of the WL500G and WRT54G), and is problematic at best otherwise.

Documentation alluding to this includes:

http://wiki.openwrt.org/doc/recipes/broadcomwds

http://wiki.openwrt.org/doc/recipes/atheroswds

http://wiki.openwrt.org/doc/howto/clientmode

So with the WL500G out of the picture, the TP-Link and WRT54G were no longer able to see each other, and I experienced a loss of connectivity from my workshop where the WRT54G was physically located.

However if you look under the covers a bit, it seems this configuration can be made to work, as suggested by the folk over at DD-WRT.

So with a bit of tweaking I was able to get WDS working between the TP-LINK and WRT54G as follows.

TP-LINK Router (WR1043ND, OpenWRT 10.03 Backfire)

Content of /etc/config/wireless:
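The original listing is gone, so this is a reconstruction from memory; the SSID, key and channel are placeholders, and the option names are my best recollection for Backfire, so treat it as a sketch rather than gospel:

    config wifi-device radio0
        option type       mac80211
        option channel    6

    config wifi-iface
        option device     radio0
        option network    lan
        option mode       ap
        option ssid       MYSSID
        option encryption psk2
        option key        SECRETKEY
        option wds        1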

WRT54G Router (WRT54G, OpenWRT 10.03 with legacy 2.4 kernel for broadcom wireless)

Content of /etc/config/wireless:
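Again a sketch reconstructed from memory; the key part is the extra wds interface pointing at the WR1043ND’s MAC (see the notes below). I have left the encryption of the WDS link itself out; the broadcomwds recipe linked above covers securing it:

    config wifi-device wl0
        option type       broadcom
        option channel    6

    config wifi-iface
        option device     wl0
        option network    lan
        option mode       ap
        option ssid       LOCALSSID
        option encryption psk2
        option key        SECRETKEY

    config wifi-iface
        option device     wl0
        option network    lan
        option mode       wds
        option bssid      xx:xx:xx:xx:xx:xx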

Notes

  • xx:xx:xx:xx:xx:xx is the MAC address of the WR1043ND, and yy:yy:yy:yy:yy:yy the MAC address of the WRT54G
  • Both devices have to be on the same channel
  • On the secondary router (WRT54G) you actually connect on the ssid LOCALSSID but all DHCP, etc. comes from the TP-LINK

Note also you need to ensure that DHCP on the LAN is disabled on the second router!

I am unsure why this configuration has apparently been problematic for some – perhaps it is quite router specific as to whether it works.


Using a Brother network scanner with Linux

Feb 23 2014

For a while now we have had a Brother MFC-J415W all in one wireless printer / fax / scanner thingy. It prints fine using CUPS and we have used it as a scanner with SD cards in sneaker-net mode.

Linux support

Brother actually do a reasonably good job supporting all of their recent equipment under Linux. All the drivers for CUPS and for the scanner functionality are available as packages for Debian, with usable documentation, from the Brother website [1].

I finally got around to setting things up for network scanning.

First you need to install the scanner driver; in my case it was the brscan3 driver, configured with the brsaneconfig3 tool. You then set this up as follows:
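You register the device with the SANE backend, giving it a friendly name and the MFC’s IP address (the name and address here are examples; a static DHCP lease for the printer helps):

    brsaneconfig3 -a name=MFC-J415W model=MFC-J415W ip=192.168.1.23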

Running brsaneconfig3 -q outputs a (large) list of supported devices, along with any it finds on your network.

You can then run up gimp or your favourite graphical scanning tool and test that scanning works as expected.

Having done this, I then set up remote scanning. This involved running a service on the destination computer, which I set up to run as the logged-in user from the openbox autostart. For some reason the (undocumented) tool requires the argument ’2′ to get help to show… The simplest method is as follows:
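That is, just launch the daemon from the autostart file (default options; files land in $HOME/brscan):

    # in ~/.config/openbox/autostart
    brscan-skey &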

After this setup, you simply need to have brscan-skey running when you want to scan to your PC. From the MFC LCD panel, you choose ‘Scan’ then ‘File’, and it should find your computer; for some reason it displays the current user name as the computer name.

Files get saved into $HOME/brscan by default.

Improved remote scanning integration

Well of course I didn’t want to stop there. To make the system more user friendly, we expect:
* notifications when a document is received
* conversion to a useful format – the default is PNM for some reason, possibly this is the native scanner output

So I wrote a script, which starts with my X session for openbox.

These are the essentials:
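A condensed sketch of the mechanism; the real script (linked below) does the parsing properly, whereas the filename extraction here is deliberately crude:

    #!/bin/sh
    # Run brscan-skey in the foreground, parse its stdout, notify the
    # desktop, and convert each received PNM to JPEG and PDF.
    LOGFILE=$HOME/brscan/scan.log
    mkdir -p "$HOME/brscan"
    while true; do
        brscan-skey 2>&1 | while read -r line; do
            echo "$(date) $line" >> "$LOGFILE"          # keep a log
            f=$(echo "$line" | grep -o '/[^ ]*\.pnm')   # crude filename extraction
            [ -n "$f" ] || continue
            notify-send "Scan received" "$f"            # desktop popup
            convert "$f" "${f%.pnm}.jpg"                # imagemagick: PNM -> JPEG
            convert "$f" "${f%.pnm}.pdf"                # imagemagick: PNM -> PDF
        done
        sleep 1   # restart brscan-skey if it exits
    done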

The way this works is as follows:
* brscan-skey outputs received file information on stdout
* so we run it in a forever loop and parse that information
* The notify-send program will pop up a notification in your desktop window manager
* I then convert from PNM to both JPEG and PDF using imagemagick
* I also keep a log

I have pushed this script to the blog github repository, https://github.com/oldcomputerjunk/blogscripts

[1] http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/index.html


Launchpad to Github round trip, save 5 gigabytes on the way

Jan 31 2014

As alluded to in my previous entry, as part of working on the OQGraph storage engine I have set up a development tree on GitHub, with beta and released code remaining hosted on Launchpad using bzr as part of the MariaDB official codebase.  This arrangement is subject to the following constraints:

  • I will be doing most new development via Github
  • I will regularly need to integrate changes from Github back to Launchpad
  • I occasionally may need to forward-port changes from Launchpad to Github; these may arise from changes in MariaDB itself.
  • The codebase on GitHub consists solely of the storage engine without the overhead of the rest of MariaDB.
  • This provides a much more accessible route to development or testing of the latest OQGraph code – downloading just the source tree of MariaDB and replacing the released OQGraph directory with the development code, rather than having to bzr branch or git clone all of MariaDB.

Until I have gone around the loop a couple of times, I expect some glitches in the process to arise and need resolving.

Some resulting branches in GitHub:

  • baseline : this branch locates where we started from in bzr
  • master : this branch is the result of merges from launchpad-tracking with ongoing development via github
  • other branches such as maria-mdev-xxx and maria-launchpad-xxx for work on specific bugs or features

But things had to start somewhere, so here is how I extracted the storage engine code from MariaDB whilst retaining full past history.

First things first – cloning MariaDB from launchpad

Since sometime around version 1.8, a new remote helper has been available for git, for cloning from and pushing to bzr repositories with a local git repository. On Debian Wheezy this is provided by the package git-bzr. So the first step is to clone the mainline MariaDB repository into a git repository. It turns out you need multiple gigabytes of local storage for this; the resulting .git directory is 5.7 gigabytes. It also takes rather a long time, even over a 12 Mbps link – the bottleneck is not the network, but seems to be bzr itself.

Prior to cloning to git, however, I ensured I also had a tracking branch created inside Launchpad, to facilitate upstream integration.  First I followed the instructions on the MariaDB wiki [1] to create a local bzr repository, and created my maintenance branch from lp:maria.  The end result should look like this:
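That is, roughly (paths illustrative):

    ~/src/maria/
    |-- trunk/                   # local mirror of lp:maria
    `-- oqgraph-maintenance/     # my maintenance branch, pushed to Launchpad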

Now it is possible to make a git repository from the bzr repository.
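This is a plain clone via the remote helper; you can point it at lp:maria directly, or at the local bzr branch created above:

    git clone bzr::lp:maria maria-git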

MariaDB is showing its long and venerable history, with over 80000 commits in a very complex branch structure; the downside of this includes significant lag on simple operations such as git status, or firing up the GUI gitg. But otherwise the history is complete and corresponds with that in the bzr repository; the git-bzr remote helper does its job well.

We also have remotes that actually point to the bzr repository:
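For example (the remote name for the maintenance branch is my own choice):

    $ git remote -v
    origin       bzr::lp:maria (fetch)
    origin       bzr::lp:maria (push)
    maintenance  bzr::lp:~andymc73/maria/oqgraph-maintenance (fetch)
    maintenance  bzr::lp:~andymc73/maria/oqgraph-maintenance (push)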

One thing I had to do was clean up an extraneous tag named HEAD; I removed its ref to avoid ambiguous ref errors (this doesn’t interfere with round trip operations):
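Deleting the ref directly does the job:

    git update-ref -d refs/tags/HEAD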


Lossless Data Reduction

Now at this point I could have created a new repository on Github, pushed the entire clone to it, and been done with it. But that’s a lot of data, clogging up both Github and my connection. And it’s not much use if I want to do a quick clone for some hacking in a spare half hour waiting for a plane or something – see bullet five in the introduction above.

One way to do this is to use git filter-branch with a subdirectory filter. However, some of the OQGraph code existed for a lot of its history in two unrelated directories, and the past history of the files in the second directory would be lost. Specifically, the test harnesses for the storage engine previously lived under mysql-test/suite/oqgraph and now live under storage/oqgraph/mysql-test. The trick is to rewrite history with a ‘fake’ directory, so that the files previously under mysql-test/suite/oqgraph appear to have resided under storage/oqgraph/maria/mysql-test/suite/oqgraph all along. This preserves not only past changes in those files, but even the move to the final location, through the eventual subdirectory filter. Note that at this point we are not ready to apply the subdirectory filter just yet.

The subdirectory filter operation takes a very long time, so we also look for a way to quickly remove a lot of code. The history goes back to 2000, and OQGraph entered the scene in about 2010, so we can run a different filter-branch operation to remove all commits older than that point. However, because we are preserving the merge history, the default method for doing this still leaves behind a lot of cruft going back to 2007. So we need to apply yet another filter to remove the empty merge commits.

Finally we can apply a subdirectory filter.

Putting it all together – remove old commits and free up space

I started by making a new clone. We need to retain the original for round trip integration, and for starting again if we make a mistake. Having done that, we want to find the oldest commit touching any part of the OQGraph software.
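A sketch of both steps (repository names are mine):

    # work on a fresh clone; keep the original for round trips and do-overs
    git clone maria-git maria-oqgraph && cd maria-oqgraph

    # list commits touching either OQGraph directory, oldest first
    git log --reverse --date=short --format='%h %ad %s' -- storage/oqgraph mysql-test/suite/oqgraph | head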

From a visual examination of the log output, the oldest relevant commit might be, for example, 7de3c2e1d02a2a4b645002c9cebdaa673f05e767. It would be feasible to write a script to work this out by parsing and sorting the dates, but I didn’t bother spending the time at this point. To remove the old commits, we need to make a new ‘start of history’, which we do by adding a graft and then running a filter-branch command. We could run just filter-branch by itself, but at the same time we also remove everything that is not part of the OQGraph storage engine.

(Note that each command is one long line; I have separated the commands with blank lines here.)
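Reconstructed from memory, the pair of commands was equivalent to:

    # 1. graft: declare the oldest relevant commit to be the new root commit
    echo 7de3c2e1d02a2a4b645002c9cebdaa673f05e767 > .git/info/grafts

    # 2. rewrite every branch, keeping only the two OQGraph directories
    git filter-branch --index-filter 'git rm -r -q --cached --ignore-unmatch . ; git reset -q $GIT_COMMIT -- storage/oqgraph mysql-test/suite/oqgraph' --tag-name-filter cat --prune-empty -- --all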

Here, the index-filter option manipulates the git index, one entry at a time: we unstage the current commit, then re-commit just the files of interest. This is faster than a subdirectory filter because we don’t check out the entire tree, and we can also keep two directories instead of just one. The tag-name-filter is crucial; without it we lose all the MariaDB history tags. prune-empty removes commits with no data, and -- --all ensures we look at all branches rather than just the direct ancestors of HEAD.

This is followed up by space reclamation, in three steps: we remove the backup refs created by filter-branch, expire the now-obsolete reflogs, then garbage collect to reclaim the space.
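In other words, something like:

    # 1. remove the refs/original/* backups left behind by filter-branch
    git for-each-ref --format='%(refname)' refs/original/ | xargs -n 1 git update-ref -d

    # 2. expire the reflogs that still pin the old history
    git reflog expire --expire=now --all

    # 3. repack and reclaim the space
    git gc --aggressive --prune=now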

Although this is faster than a subdirectory filter, it still takes a couple of hours to run; but the results are amazing: the entire repository shrinks to just 7 megabytes!

Cleaning up cruft and normalising the directory structure

There is still some work to do, however: a large number of merge commits remain, all of which are empty, comprising history both before and after the graft point, and obscuring our OQGraph commits. The prune-empty argument used previously doesn’t remove empty merge commits, so we need to do that differently. The data is now dramatically reduced, so a subdirectory filter will be quite fast, and it also removes the merge commits; but we still need to retain the history out in mysql-test/suite/oqgraph, which would be lost if we just did a subdirectory filter. So we need to rewrite the related paths first.

(Note again, each command is one long line; I have separated them with blank lines.)
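The path rewrite follows the recipe in the git-filter-branch man page, then the subdirectory filter finishes the job; both reconstructed here from memory:

    # 'rename' mysql-test/suite/oqgraph into the fake directory, commit by commit
    git filter-branch -f --index-filter 'git ls-files -s | sed "s@\tmysql-test/suite/oqgraph/@\tstorage/oqgraph/maria/mysql-test/suite/oqgraph/@" | GIT_INDEX_FILE=$GIT_INDEX_FILE.new git update-index --index-info && mv "$GIT_INDEX_FILE.new" "$GIT_INDEX_FILE"' --tag-name-filter cat -- --all

    # now make storage/oqgraph the repository root; this pass also drops the empty merges
    git filter-branch -f --subdirectory-filter storage/oqgraph --tag-name-filter cat --prune-empty -- --all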

Here, running the index-filter with git update-index, we can ‘rename’ the file paths affected by a commit without affecting the actual commit. So I used sed to change mysql-test/suite/oqgraph paths to storage/oqgraph/maria/mysql-test/suite/oqgraph. These files now survive the subdirectory filter, and we retain a complete history of the OQGraph files without really changing anything.

I ran one final clean up:
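Essentially a repeat of the reclamation sequence, plus removing the graft now that it is baked into the rewritten history:

    rm -f .git/info/grafts
    git for-each-ref --format='%(refname)' refs/original/ | xargs -n 1 git update-ref -d
    git reflog expire --expire=now --all
    git gc --aggressive --prune=now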

Now I had a complete history of OQGraph, and only OQGraph, including the contemporary MariaDB tags, which I pushed to Github.
The results are at https://github.com/andymc73/oqgraph, and you can find the corresponding commits in the MariaDB trunk on Launchpad if you desire.

The draft procedure for round-tripping back to Launchpad is described in the Github maintainer-docs; this is still a work in progress.

[1] https://mariadb.com/kb/en/getting-the-mariadb-source-code/


OQGraph – bazaar adventures in migrating to git

Jan 27 2014

Background

I have been acting as a ‘community maintainer’ for a project called OQGraph for a bit over a year now.

OQGraph provides graph traversal algorithms over SQL data for MariaDB. Briefly, it is very difficult, if not impossible, to write a SQL query to find the shortest path through a graph where the edges are, fairly typically, represented as two columns of vertex identifiers. OQGraph as of v3 provides a virtual table that allows such an operation via a single SQL query. This provides a simple mechanism for existing data to be queried in new and novel ways. OQGraph v3 was merged into MariaDB 10.0.7 as of 16 December 2013.

Aside: I did a talk [1][2][3] about this subject at the Linux.conf.au OpenProgramming miniconf. I really didn’t do a very good job, especially compared with my SysAdmin miniconf talk [4][5][6]; I lost my place, “ummed” way too much, etc., although audience members kindly seemed to ignore this when I talked to them later :-) I know I was underprepared, and in hindsight I tried to cover way too much ground in the time available, which resulted in a not really coherent story arc; I should have focused on MTR for the majority of the talk. But I digress…

Correction: I also had a snafu in my slides; OQGraph v3 theoretically supports any underlying storage engine, but the unit test coverage is currently only over MyISAM. We plan to extend it to test the other major storage engines in the near future.

Launchpad and BZR

MariaDB is maintained on Launchpad, which uses bazaar (bzr) for version control. Bazaar already has a reputation for poor performance, and my own experience using it with MariaDB backs this up. It doesn’t help that the MariaDB history is a long one: converted to git it shows 87000 commits, and the .git directory weighs in at nearly 6 GBytes! The MariaDB team is considering migration [7] to github, but in the meantime I needed a way to work on OQGraph using git to preserve my productivity, as I am only working on the project in my spare time.

Github

So here’s what I wanted to achieve:

  1. Maintain the ‘bleeding edge’ development of OQGraph on Github
  2. Bring the entire history of all OQGraph v3 development across to Github, including any MariaDB tags
  3. Maintain the code on Github without the entirety of MariaDB.
  4. Be able to push changes from github back to Launchpad+bzr

Items 1 & 3 will give me a productivity boost. The resulting OQGraph-only repository with its entire history almost fits on a 3½-inch floppy! Item 1 may also help make the project more accessible in the future. Item 2 will of course allow me to go back in time and see what happened in the past. Item 3 has other advantages as well: it may make it easier to backport OQGraph to other versions of MariaDB if it becomes useful to do so. And item 4 is critical, as for bug fixes and features to be accepted for merging into MariaDB in the short term, it is still easiest to maintain a bzr branch on Launchpad.

To this end, I first created a maintenance branch on Launchpad: https://code.launchpad.net/~andymc73/maria/oqgraph-maintenance. I will regularly merge the latest MariaDB trunk into this branch, along with completed changes (tested bugfixes, etc.) from the github repository, for final testing before proposing for merging into MariaDB trunk.

Then I created a standalone git repository. The OQGraph code is self-contained in a subdirectory of MariaDB, storage/oqgraph. The result should be a git repository where the original content of storage/oqgraph is the root directory, but with the history preserved.

Doing this was a bit of a journey and really tested my git skills, and my workstation! I will describe this in more detail in a subsequent blog entry.

The resulting repository I finally pushed up to github; it can be found at https://github.com/andymc73/oqgraph. I also determined the procedure for merging changes back to Launchpad; see the file Synchronising_with_Launchpad.md.


[1] https://lca2014.linux.org.au/wiki/index.php?title=Miniconfs/Open_Programming#Developing_OQGRAPH:_a_tool_for_graph_based_traversal_of_SQL_data_in_MariaDB

[2] Video: http://mirror.linux.org.au/linux.conf.au/2014/Tuesday/139-Developing_OQGRAPH_a_tool_for_graph_based_traversal_of_SQL_data_in_MariaDB_-_Andrew_McDonnell.mp4

[3] Slides: http://andrewmcdonnell.net/slides/lca2014_oqgraph_talk.pdf

[4] http://sysadmin.miniconf.org/presentations14.html#AndrewMcDonnell

[5] Video: http://mirror.linux.org.au/linux.conf.au/2014/Monday/167-Custom_equipment_monitoring_with_OpenWRT_and_Carambola_-_Andrew_McDonnell.mp4

[6] Slides: http://andrewmcdonnell.net/slides/lca2014_sysadmin_talk.pdf

[7] https://mariadb.atlassian.net/browse/MDEV-5240

