Wednesday 28 December 2016

Augmenting Apple iCloud PhotoStream

As discussed previously, if you want to share in the photos of an Apple user, you have to do it the Apple way.

But once my windows PC has an incrementally downloading folder of photos, I want to upload them to Google Photos where everyone can view them.

I can upload the photos, but Google Photos uses the file system dates when ordering the photos and ignores any timestamps in the EXIF tags; so something needs coding to extract that information from the tags and touch the files.

Furthermore, photos (and more likely, movies) that aren't tagged are totally without timestamp information.

And yet... iCloud photos viewer on windows has this information, along with comments -- none of which is stored in the images or movies.

The task

My self-appointed task is to find the source of this information for iCloud Photostream, embed it into the images and movies, and update file timestamps with it prior to upload to Google Photos.


There are some clues here, but I'd already got that far by using Process Explorer to examine which files iCloud Photo was accessing.

The App

My original intention was to write a Windows app to upload to the Windows App Store, as the only one there to help with this seems highly priced.

But having examined the database schema I see that this could be about 10 lines of bash scripting that works for just me; is it worth the extra work to produce a windows app?

No; but I'll do both anyway, so I download Visual Studio Express Community Edition.

The bash script

The bash script is a plain sqlite3 query, with the results piping into a loop that updates each file; with the usual bash presumptions that filenames won't contain a newline character, etc.

#! /bin/bash

ICLOUD=AppData/Roaming/Apple\ Computer/MediaStream
# DB: path to the MediaStream sqlite database under $ICLOUD (path elided)

sq() {
  sqlite3 -line "$DB" "$@"
}

albums() {
  sq "select albumName from MSASAlbums"
}

photos() {
  sq "select assetfilepath, caption, downloaded, deleted, datetime(dateCreated + 978307200, 'unixepoch') || 'Z' as datetime, createdbyme from MSASAlbumAssets left join MSASAlbums on MSASAlbumAssets.albumGuid = MSASAlbums.albumGuid where albumName = '$1' order by dateCreated"
}

readRecord() {
  local field value count
  while read -r field _ value && test -n "$field"
  do printf -v "$field" "%s" "$value"
     count=$(( count + 1 ))
  done
  test -n "$count"
}

updatePhotos() {
  while readRecord
  do file="${assetfilepath//C://mnt/c}"
     echo update "$datetime $file"
     touch -c -d "$datetime" "$file"
  done
}

if test -z "$1"
then albums
     exit
fi

photos "$@" | updatePhotos

And that works well enough.
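The readRecord trick depends on the sqlite3 -line output format: one field = value pair per line, with a blank line between rows, so each field lands in a like-named shell variable. A minimal sketch with made-up field names:

```shell
#! /bin/bash
# parse "field = value" lines (sqlite3 -line style) into shell variables;
# readRecord returns success while a record was actually read
readRecord() {
  local field value count=
  while read -r field _ value && test -n "$field"
  do printf -v "$field" "%s" "$value"   # assign value to a variable named after the field
     count=x
  done
  test -n "$count"                      # fail once input is exhausted
}

# a hypothetical two-field record, as sqlite3 -line would print it
printf 'caption = Holiday snap\ndatetime = 2016-08-01 10:00:00Z\n\n' | {
  readRecord && echo "$caption @ $datetime"
}
```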

Wednesday 14 December 2016

Two touchpads for one

The solution was to add, in /etc/modprobe.d/blacklist.conf:
blacklist i2c_designware-platform
and reboot the system. After that syndaemon works fine.

and I added this X configuration entry:

# Disable generic Synaptics device, as we're using
# "DLL0704:01 06CB:76AE Touchpad"
# Having multiple touchpad devices running confuses syndaemon
Section "InputClass"
        Identifier "SynPS/2 Synaptics TouchPad"
        MatchProduct "SynPS/2 Synaptics TouchPad"
        MatchIsTouchpad "on"
        MatchOS "Linux"
        MatchDevicePath "/dev/input/event*"
        Option "Ignore" "on"
EndSection

Wednesday 7 December 2016

What is wrong with Apple?

Apple make easy things very very hard

Sharing photos is easy with google

If I want to share my photos with someone who has a google account, I just name them in the share dialog, and then they can see my photos, via their own google account from any web browser.

Or if they don't have a google account, I can share the "link" to the album so that any one who has the hard-to-guess link can see my photos.

Anyone with a vaguely modern web browser.

That's all that's needed for 21st century communication.

But not with Apple

A family member uses an ipad and wants to share photos with me from her "photo stream".

So she sends me an invitation to subscribe to her photostream.

Luckily I have an icloud account from when I tried to use itunes to buy music (that's another story).

And I click on the link, to be greeted with:
To subscribe to ... photo stream on your iPhone, iPad, iPod touch or Mac, open your invitation in the Mail app and click the Subscribe button in the message.
To subscribe you need to be signed in to iCloud on:
  • an iPhone, iPad, or iPod touch with iOS 6 or later or
  • a Mac with macOS 10.8.2 or later and iPhoto 9.4 or Aperture 3.4 or later
I actually need to possess an apple device to view some photographs that someone has taken using their apple device!

I get my paws on an apple device

I manage to lay my hands on a MacBook Air. I give it all the updates. It's running Lion, the latest release of MacOS that it can run.

I install iCloud.

Do you want to guess if I can subscribe to the photostream?

I can't -- I need iPhoto 9. Free upgrades to iPhoto 9.4 are available if I have iPhoto 9, but I don't. It has iPhoto 8.

I can't buy iPhoto 9 from the Apple app store because it has been discontinued.

Apple won't let me pay them the money that they are extorting from me because... it isn't enough.

I track down iPhoto 9

I find that I can get iPhoto 9 if I buy a CDROM containing iLife 11. (iLife 11 has also been discontinued from the app store).

iLife 11 typically sells for $50 or £50 on Amazon, but I track down a copy for around £20 on ebay.

I'm not confident, and neither is my wife.

And then I notice: iPhoto 9 might not be enough.

The small print in the email says: with macOS 10.8.2 or later
and I only have macOS 10.6.6 (Lion), and this MacBook Air won't take a newer version.

So now I'm an apple user (with this MacBook Air I got hold of) and I still can't view the photostream. I need to shell out a few hundred pounds to buy a newer macbook or ipad.

But iCloud is available for Windows.

iCloud for Windows

I install iCloud for windows.

It doesn't work.

After logging in (authenticating me successfully against the apple authentication servers) it then declares that I am not connected to the internet. c_a_murphy4 comments:
I hope you receive more professional support than I did. Their senior team took more than a week with this issue and then blamed Microsoft (in a rather backhand way by saying there was nothing more they could do from an Apple standpoint). I called up Microsoft, and they in turn blamed Apple. If you ever do find a solution to this problem, please let me know. Apple and Microsoft just aren't coming through with the goods - at least not for me.
Some people seem to suggest that it is only the windows 10 anniversary edition that has this problem. Well I'm not downgrading.

Windows 7

So I try again on an old windows 7 laptop and get exactly the same problem.

I suppose I can't blame Apple that now it's not just windows 10 but apparently no version of windows can run the latest version of their software; it's surely not their fault.

Taking advice, I downgrade to the previous version of iCloud for windows.

This time I get the Error occurred during authentication error.

So I try the previous version of iCloud for windows.

Surely at some point Apple have managed to produce working software for windows? And it works.

But no photostream

Now iCloud works but isn't any use. The photostream subscription has to be accepted as described in the sucks-be-to-you message I had when I originally tried to subscribe from an unclean non-apple system:
a Mac with macOS 10.8.2 or later and iPhoto 9.4 or Aperture 3.4 or later
So I look up virtual mac machines in the cloud. I could pay a few dollars an hour and perhaps get it working. Or pay $20 a month with the first day free, and then cancel right away.

Via a friend, I get long distance access to a mac, with all the right software, and the photostream subscription is accepted.

Back to windows

Now the photostream subscription is accepted I can begin to download it on the windows 7 PC and then use google photo uploader to sync that folder to a photo album.

Back to google

And so now I can share the google photo album with other non-apple family members.

But this is normal for Apple

They break your stuff on purpose

If you sacrifice at the wrong altar and have a non-Apple vendor repair your phone, Apple prevent your phone from working.

If you jailbreak your phone so that you can install software that you didn't pay Apple for, they will brick your phone on the next firmware update.

I showed here how an Apple user with a MacBook Air is prevented from subscribing to the photostream of another Apple user, because they didn't sacrifice recently enough at the Apple altar.

But Apple made a mistake 

Their new much-cursed top-of-the-range laptop is missing a load of ports: no HDMI, no USB2, no sdcard, etc.

Now some independent developers funded by kickstarter have put together The HyperDrive.
It’s a $100 dongle that slips neatly into both USB Type-C slots to give you a whole lot more connectivity. Not only do you get your two USB Type-C ports back, but you gain a couple of USB 3.1 ports; a microSD and SD slot; and an HDMI video port.
Did you notice? "slips neatly into both USB Type-C slots"

Because of a stupid Apple oversight (providing standard USB-C ports) users have overcome Apple! How are they going to stop that?

Of course the most annoying thing is that there will have been room inside the case to fit this dongle.

Afraid of perfection

Did you ever know a company so afraid of perfection that they spent so much time and effort deliberately lousing things up for their customers?

Or a bunch of captive customers so suffering from Stockholm Syndrome that they keep paying so much money for such rubbish?

Stockholm syndrome, or capture-bonding, is a psychological phenomenon first described in 1973 in which hostages express empathy and sympathy and have positive feelings toward their captors, sometimes to the point of defending and identifying with the captors. These feelings are generally considered irrational in light of the danger or risk endured by the victims, who essentially mistake a lack of abuse from their captors for an act of kindness.
And we are not seeing that much lack of abuse.
The FBI's Hostage Barricade Database System shows that roughly eight percent of victims show evidence of Stockholm syndrome.
And yet Apple's market share is quite a bit more than 8%: around 12%.

I wonder, what is the excuse of the other 4%?

Friday 2 December 2016

The End

The time is past
  the bell has rung
The deeds to do
  that have been done
    are gathered up
      into the store
The rest: undone,
  for ever more

(C) Sam Liddicott 2016

Wednesday 30 November 2016

Calibri and Cambria on Linux

I'm no font fanatic. I can't tell the difference between Calibri and Carlito, but I can tell a busted document layout.

Fortunately, Carlito has the same font metrics as Calibri (and Caladea has the same metrics as Cambria).

The font-substitution table in LibreOffice then allows me to work on documents using these fonts while getting a layout the same as my colleagues on Microsoft Office.

Quoting from the Debian Wiki:

LibreOffice font substitution

To install them, issue these commands as root in a shell:
# apt-get update
# apt-get install fonts-crosextra-carlito fonts-crosextra-caladea

In LibreOffice, you exchange Calibri and Cambria with Carlito and Caladea this way:
  • Open the "Extras" menu
  • Go to "Options"
  • Choose "LibreOffice"
  • Choose "Fonts"
  • Define a substitution for each of the two fonts (Calibri -> Carlito, Cambria -> Caladea).
  • Remember to check "Always" in the substitution lines.

Once the program is restarted, documents sent from MS Office look almost the same on your screen and printouts.

Thursday 24 November 2016

async promises - just in time to be late for the node.js show

I'm late to the node.js show; and that's a good thing.

It means I don't have to worry about CPS callback hell

do_something(..., function(err, result) {

It means I don't even have to do much overt promising
do_something(...).then( (r) => {
}).catch( (e) => {

I can just use async/await with automatic exception propagation.
return await do_something(...);

I'm not averse to the odd promise for the interface to an old-style CPS function; after all, only an async function can use await. But an async function call also returns a promise. So I can convert this:
function expressHandler(req, res, next) {
  do_something(..., function(err, result) {
    if (err) return next(err); // return avoids else
    ...
  });
}

function do_something(..., callback) {
  try {
    ...
    return callback(undefined, result);
  } catch (e) {
    return callback(e);
  }
}

into this:

function expressHandler(req, res, next) {
  do_something(...).then((r) => {
    ...
  }).catch(next);
}

async function do_something(...) {
  return await something_else(...);
}

do_something is called as a promise and can be an async function that can await on other async functions or on promises. Async functions all the way up; callbacks all the way down (where I can't see).

Thursday 8 September 2016

Windows 10 DNS Suffixes

To fix the problem of domain suffixes being applied to an already well-dotted fqdn, set the group policy reg keys for the searchlist:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\"SearchList"="domain1,domain2,domain3"
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows NT\DNSClient\"SearchList"="domain1,domain2,domain3"

Use these keys to prefix ., (dot comma) to the front of the search list.

Thus each name will first be tried in its current form, without any domain suffixes appended.

And so a well-dotted fqdn will resolve.
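For illustration, the policy value with the prefix applied would look like this in .reg form (the domain names here are placeholders):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient]
"SearchList"=".,domain1,domain2,domain3"
```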

Tuesday 7 June 2016

Defining bash functions in a Makefile

As I answered here, a makefile recipe may depend on bash functions rather than bash scripts.

Because of bash's export -f function-name feature, which exports a bash function definition into the environment where it will be picked up by child invocations of bash, we can define a variable of the same form in the environment from the Makefile.

If we define bash functions in the Makefile using the same format bash does, and export them into the environment, and set SHELL=bash then we can use these bash functions in the make recipe.

Let's see how bash formats functions exported in the environment:

$ # define the function
$ something() { echo do something here ; }
$ export -f something

$ # the Makefile
$ cat > Makefile <<END
SHELL=bash
all: ; something
END

Try it out

$ make
do something here

How does this look in the environment?

$ env | grep something
BASH_FUNC_something%%=() { echo do something here

The pattern is (currently):
  BASH_FUNC_function-name%%=() { function-body }

So we do this in the Makefile:

define BASH_FUNC_something-else%%
() {
  echo something else
}
endef
export BASH_FUNC_something-else%%
SHELL=bash

all: ; something-else

and try it:

$ make
something else

It's a bit ugly in the Makefile but presents no ugliness for make -n
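The whole round trip can be scripted as a self-contained check (assuming GNU make is installed; here the function is exported from the calling shell, as in the first example):

```shell
#! /bin/bash
# export a bash function into the environment and call it from a make recipe
dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT

printf 'SHELL=bash\nall: ; something\n' > "$dir/Makefile"

something() { echo do something here ; }
export -f something

( cd "$dir" && make -s )
```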

Thursday 26 May 2016

Parsing paths & deleting empty items from a bash list

This spell will split a path into directories:

IFS=/ read -d "" -r -a path <<<"$path"

but because of the action of <<<, the final item will have a newline appended -- which can be removed thus (bash 4.3+ negative subscript):

path[-1]=${path[-1]%$'\n'}
If the path began with a /, then path[0] will be empty; as the leading / is important, we'll recover it:

test -z "${path[0]}" && path[0]="/"

This spell will remove empty items from the list (and also renumber the indexes):

IFS= eval 'path=(${path[@]})'

(We could avoid the eval by saving IFS, but eval is safe enough here not to bother).

Now it is simple enough to iterate over "${path[@]}" and perform a chdir on each stage.
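Putting the spells together (a sketch assuming bash 4.3+ for the negative subscript; note the unquoted ${path[@]} means elements containing glob characters could still expand):

```shell
#! /bin/bash
# split a path into directories, stripping the <<< newline and empty items
path="/usr//local/bin"

IFS=/ read -d "" -r -a path <<<"$path" || : # read reports EOF; ignore it
path[-1]=${path[-1]%$'\n'}                  # remove the appended newline
test -z "${path[0]}" && path[0]="/"         # recover the leading /
IFS= eval 'path=(${path[@]})'               # drop empty items, renumber

printf '%s\n' "${path[@]}"
```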

Monday 16 May 2016

Update on old DisplayLink drivers

An update on my earlier post of 2012.

Driver 1.1.62 (15 May 2016) is now available, and does not require any fixups for wrong USB vendor ID numbers, or patches for kernels > 4.5.0.

For my HP displaylink hub I have:


# DisplayLink devices always have the active configuration on configuration #1
ATTR{idVendor}=="17e9", ATTR{idProduct}=="01d4", RUN+="/usr/bin/dlconfig"

and /usr/bin/dlconfig:

#! /bin/bash

echo 1 > "/sys/$DEVPATH/bConfigurationValue"

However, I must insert the USB device AFTER logging in to X, or I get two fb devices added, one owned by udl and one by evdi. I suspect this is a race condition.

In any case, none of it works for me -- I'm not able to view anything at all on the frame buffer. I guess my USB device is too old.

However, if I modprobe udlfb and ignore the evdi and udl drivers, then I at least get a working framebuffer that can display images with fbi -- but the colour depth is stuck at 16.

I'm able to use it as a primary device for an X11 session with this /etc/X11/xorg.conf

Section "ServerLayout"
    Identifier     " Configured"
    Screen      0  "Screen0" 0 0
    InputDevice    "Mouse0" "CorePointer"
    InputDevice    "Keyboard0" "CoreKeyboard"
EndSection

Section "Files"
    ModulePath   "/usr/lib/xorg/modules"
    FontPath     "/usr/share/fonts/X11/misc"
    FontPath     "/usr/share/fonts/X11/cyrillic"
    FontPath     "/usr/share/fonts/X11/100dpi/:unscaled"
    FontPath     "/usr/share/fonts/X11/75dpi/:unscaled"
    FontPath     "/usr/share/fonts/X11/Type1"
    FontPath     "/usr/share/fonts/X11/100dpi"
    FontPath     "/usr/share/fonts/X11/75dpi"
    FontPath     "built-ins"
EndSection

Section "Module"
    Load  "glx"
EndSection

Section "InputDevice"
    Identifier  "Keyboard0"
    Driver      "kbd"
EndSection

Section "InputDevice"
    Identifier  "Mouse0"
    Driver      "mouse"
    Option        "Protocol" "auto"
    Option        "Device" "/dev/input/mice"
    Option        "ZAxisMapping" "4 5 6 7"
EndSection

Section "Monitor"
    Identifier   "Monitor0"
    VendorName   "Monitor Vendor"
    ModelName    "Monitor Model"
EndSection

Section "Device"
    Identifier  "Card1"
    Driver    "fbdev"
    BusID    "USB"
    Option    "fbdev"    "/dev/fb1"
    Option    "ReportDamage"    "true"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Card1"
    Monitor    "Monitor0"
    DefaultDepth    16
    SubSection "Display"
        Viewport   0 0
        Depth     1
    EndSubSection
    SubSection "Display"
        Viewport   0 0
        Depth     4
    EndSubSection
    SubSection "Display"
        Viewport   0 0
        Depth     8
    EndSubSection
    SubSection "Display"
        Viewport   0 0
        Depth     15
    EndSubSection
    SubSection "Display"
        Viewport   0 0
        Depth     16
    EndSubSection
EndSection

Monday 7 March 2016

I want some Bluetooth headphones

I want them to support:

  • A2DP - audio
  • AVRCP  - remote control
  • HSP, HFP - telephony
  • CTP - cordless telephony (why not?)

That's the easy bit.
  • Built in mic
  • and voice-dialler.
  • and noise-cancelling mics AND playback
  • and volume-boost for quiet sources (I'm looking at you, DVD player)
  • and AVC/compressor for classical music on noisy trains, planes and auto-mobiles.

That's easy too.

Other inputs:
  • I want it to take a micro-SD card for a built-in mp3 player
  • And even have built-in storage, MTP or FAT32 block device.
  • And have a built-in FM/DAB+ radio.
  • It must charge from micro-USB, and be a USB audio-interface when plugged via USB.
  • I also want a 3.5mm stereo line-in socket so I can use them as normal headphones. But I want it to take a 4-connector 3.5mm jack so that it can function as headphones with mic.
  • It should detect when a 3-connector 3.5mm jack is inserted, and offer a 3.5mm microphone out socket.
  • I also want a 3.5mm line-out that feeds whatever audio source is selected, so that I can feed into an amplifier or to a friend who has the same headphones (so line-out should not necessarily cut off the built-in speakers).

Did I miss anything?
  • It should be able to record to micro-SD or built-in storage. In stereo. From the stereo noise-cancelling microphones, or any of the other audio sources, including USB and the phone.
  • I should be able to play from any source to the phone. While recording the phone conversation.
  • And waterproof for jogging in the rain. (Not for me, for joggers).
But not Chrome-cast or Mira-cast. That requires WLAN, and that would be a step too far.

For well less than £100.

On a stick. From standard parts.

BT interface. USB interface. Cross-bar audio mixer. SD interface. GPIO controlled radios.

Wednesday 10 February 2016

Android 6 semi-adopted storage

How to split your SD card between adopted internal storage and portable external storage.

(see how-to instructions below)

The pain

Fed up of waiting on Motorola for the Marshmallow upgrade on my XT1072, short of internal storage, and very fed up of having to move-to-sd apps after each upgrade (along with occasional tricks such as deleting all data for Drive, Google+ and Chrome to scavenge extra memory when clear-cache wasn't enough), I decided to upgrade to CyanogenMod 13 to get Marshmallow that way.

I was really looking forward to being able to use my SD card as internal memory and have no more problems about storage.

What really happens is that with Marshmallow you cannot move-to-sd without adopting your SD card as internal. And then the only apps that will move-to-sd (now called Change) are those that could already move-to-sd.

And one of those apps that won't move-to-sd (even after the adoption of the SD card as internal storage) is Google Music.

Only now there is no SD card for Google Music to store the music on. So it stores it on the *internal* memory.

Whaaaat! My music collection is way bigger than my app collection; how does storing my music internally instead of some apps (all of which could move to SD anyway) help anything?

After messing about with Links2SD, Apps2SD, root shells and mount points, and reading about the volume manager and changing APIs, I decided that even loopback fat32 file systems in the adopted storage probably wouldn't work.

So I looked at re-partitioning an adopted storage card to shrink the adopted partition to make room for a fat32 partition.

In searching for how to access the encrypted partition outside of Android (so that I could resize the file system within it), I came across the sm command, which can create a mixed, public, or private volume.

Mixed turned out to be exactly what I was looking for!

How to split your card

WARNING: Before you do this, be sure to eject the card from the Settings/Storage & USB menu.

WARNING: Regardless of whether you split your card, or just adopt as encrypted, or even just re-format as FAT, it may not be good for the life of your card. It seems that some SD cards have long-life flash allocated to the first few blocks where the FAT is kept. Using a non-FAT file system, or having the FAT file system further down the device, loses that benefit; possibly even altering the pre-shipped format could do that too.

I found how to partition my SD card to give 8G as internal storage to which all apps that can be moved will be moved and leave ~20GB as portable storage to hold music, etc.

First, you need adb working, and your SD card inserted and formatted as portable.

$ adb shell sm list-disks adoptable

disk:179,64 is my SD card that can be made adoptable, I want 90% as external SD:

$ adb shell sm partition disk:179,64 mixed 90

Note: Your card may be listed with an underscore _ instead of a comma, e.g. disk:179_64 in which case, that is what you type.

This erases the entire SD card, and then gives me 90% as portable storage and the rest as adopted internal storage.

The partition table looks like this:
Number  Start   End     Size    File system  Name            Flags
 1      1049kB  57.5GB  57.5GB  fat32        shared          msftdata
 2      57.5GB  57.5GB  16.8MB               android_meta
 3      57.5GB  63.9GB  6369MB               android_expand

The fat32 partition is not encrypted and can be mounted on a computer (provided it can handle the new GUID partition table format).

I advise a reboot after setting the new music storage location, as Google Music may get the wrong idea about how much space is available.

Rename the adopted storage: if it has the same name as the portable storage partition then it may prevent one of the partitions from being available over USB MTP.

For reasons I don't understand, my disk label gets set as some junk similar to this: 82^GM-^KM-^?N-q^Xa^Oo and although I can change this by inserting it into a computer: mlabel -i /dev/sdf1 :: if I put it back into my phone, it looks right until I soft-eject and re-insert it (from the menu) - and then the weird label is back.

I wonder if this can be avoided by swipe-dismissing the notice that a new SD card is discovered (which shows after the mixed partition is complete) instead of selecting it.

If you want apps to be installed on the adopted SD partition by default, then you need to choose the Migrate Data option from the menu:

For CM13 this seems to work as an alternative to my original suggestion below which required the phone to be rooted, but for stock it seems ineffective.

# You won't need this if you chose migrate data above
$ adb root
$ adb shell pm set-install-location 2

to have apps installed on the storage by default where possible. It is very effective. (Location 1 means internal, and 0 means auto-choose, but I don't know on what criteria).

However that command requires you to have rooted your phone. I wish this could be set another way.

(If you want to root your XT1072, follow the top-post instructions here.)

This lets you use USB devices as adopted/portable (if you are rooted):
$ adb shell sm set-force-adoptable true

Monday 1 February 2016

Using flock in bash without invoking a subshell

flock -c can call external commands but not bash functions. Consequently users mess about with file descriptors, often making a mess of it.
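The usual descriptor dance looks something like this (the lock file name is hypothetical; needs the util-linux flock):

```shell
#! /bin/bash
# taking a lock around a shell function the manual way:
# open a spare descriptor on the lock file and flock that descriptor
my_thing() { echo "have the lock, doing the thing" ; }

{
  flock 9 || exit 1   # block until the lock on fd 9 is ours
  my_thing
} 9>/tmp/process.lock
```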

Inspired by Ivan's post, I've written a flock wrapper for bash that uses flock underneath but allows flock -c to work for bash functions.

It can be called just like the regular flock command, with the benefit that the -c invocation is supported for bash functions; so you can use it like this:

flock -o /tmp/process my_thing "$@"

and I strongly recommend the -o option so that the file descriptor used for the lock is not passed to any sub-processes, which could be problematic if long-lived sub-processes (e.g. re-spawned daemons) keep it open.

It will pass through and invoke the regular flock if the command isn't a bash function, or if you aren't trying to execute a function.

Sadly, it doesn't recognize bash built-ins.

But the good news is, you can use flock -o ...file... on a shell function inside your shell script without having to worry about file descriptors.

# Helper function (in order to preserve $@ in the caller)
# If this isn't used to call a shell function then return 0
# otherwise return (as $?) the argument number which represents
# the command/function to be called
_is_standard_flock() {
  local args=$#
  # find the first argument that doesn't begin with a -
  while test $# != 0
  do case "$1" in
     -*) shift ; continue ;;
     *) break ;;
     esac
  done

  # if it is numeric and there are no additional arguments
  test $# = 1 -a -n "$1" -a -z "${1//[0-9]}" && return 0
  # or (skipping -c if present) the following argument is not
  # a shell function, then use the original flock
  if test "$1" = "-c"
  then declare -F "$2" >/dev/null || return 0
  else declare -F "$1" >/dev/null || return 0
  fi

  # we can't have shifted many args if this is a legitimate use of flock
  # so we will be in range of the exit code
  return $(( args - $# + 1 ))
}

# Helper function to determine if -o or --close was given in the flock arguments
_wants_close() {
  test "${*/#--close/}" != "$*" && return # will also match bogus arguments like --closed
  # remove any -- options
  set -- "${@/#--*/}"
  # look for options with o
  test "${*/#-*o/}" != "$*" && return
  return 1
}

flock() {
  if _is_standard_flock "$@"
  then : # do outside the if-clause so bash can optimise exec where possible
  else # save the exit code (offset) as $1
       set -- $? "$@"
       # ${!1} is the lock file
       # ${@:$(($1 + 1))} might be -c
       test "${@:$(($1 + 1)):1}" = "-c" && set -- "${@:1:$1}" "${@:$(($1 + 2))}"
       if _wants_close "${*:2:$(( $1 - 1))}"
       then { set -- "$1" "$_" "${@:2}" ; command flock "${@:3:$(( $1 - 2))}" $2 && eval '"${@:$(( $1 + 2))}"' "$2>&-" ; set -- $? $2 ; command flock -u $2 ; return $1 ; } {_}<"${!1}"
       else { set -- "$1" "$_" "${@:2}" ; command flock "${@:3:$(( $1 - 2))}" $2 &&       "${@:$(( $1 + 2))}"          ; set -- $? $2 ; command flock -u $2 ; return $1 ; } {_}<"${!1}"
       fi
  fi
  command flock "$@"
}


Wednesday 27 January 2016

Per-PC font sizes, etc

My home network has NFS homedirs, and so it doesn't matter which computer or laptop family members log into. They get all their files.

Now, one of the PC's has a 40 inch monitor, and users prefer a text-scaling-factor of 2 when using that PC.

Changing their personal settings would mean changing them back again; fortunately a system-wide default of 2 can be set on that PC, applying to all users who don't override the system value.

Edit the text-scaling-factor in the schema:
sudo nano /usr/share/glib-2.0/schemas/org.gnome.desktop.interface.gschema.xml

and rebuild the schema:
sudo glib-compile-schemas /usr/share/glib-2.0/schemas


And then to reset the users custom value (if any):

gsettings reset org.gnome.desktop.interface text-scaling-factor

The same needs doing for org.cinnamon.desktop as for org.gnome.desktop.


Also set default-zoom-level to larger and default-use-tighter-layouts to true, in /usr/share/glib-2.0/schemas/org.nemo.gschema.xml.

Bash: setting and testing $?

You want to set $? in bash

?() {
  return ${1:-$?}
}

and then:

$ \? 2
$ echo $?
2

You also want to test $?:

$ \? 2
$ \? && echo yes $? || echo no $?
no 2 

which is simpler than
$ test $? = 0 && echo yes $? || echo no $?
no 1 
which also replaces $? with the result of the test.

But how about this extended form that allows you to run a command while preserving (or forcing) the return code?

?() {
  set -- $? "$@"
  if test "$#" -le 2 -a -z "${2//[0-9]}"
  then return ${2:-$1}
  else "${@:2}"
       return $1
  fi
}


$ tar -xzf "$tar"
$ \? rm -fr "$tar"

which leaves $? set to the result of the tar extraction.

Of course a command which is purely numeric with no arguments is mistaken for an exit code.

Note the use of set -- $? "$@": function arguments are the only lexically scoped variables in bash.
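Pulled together as a runnable sketch (assuming no errexit is in force, since the whole point is carrying a non-zero $? forward):

```shell
#! /bin/bash
# a ? function that sets, preserves or forces $?
?() {
  set -- $? "$@"
  if test "$#" -le 2 -a -z "${2//[0-9]}"
  then return ${2:-$1}  # numeric (or absent) argument: force or preserve $?
  else "${@:2}"         # otherwise run the command...
       return $1        # ...and restore the prior $?
  fi
}

\? 3                    # force $? to 3
\? echo "cleaning up"   # echo runs, but $? remains 3
echo "exit code is still $?"
```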

Timing bash commands

Try this: it runs a simple command with arguments, puts the times in $real $user $sys, and preserves the exit code. It does not fork subshells or trample on any variables except real user sys, and does not otherwise interfere with the running of the script.

timer () {
  { time { "$@" ; } 2>&${_} {_}>&- ; } {_}>&2 2>"/tmp/$$.$BASHPID.${#FUNCNAME[@]}"
  set -- $?
  read -d "" _ real _ user _ sys _ < "/tmp/$$.$BASHPID.${#FUNCNAME[@]}"
  rm -f "/tmp/$$.$BASHPID.${#FUNCNAME[@]}"
  return $1
}


  timer find /bin /sbin /usr rm /tmp/
  echo $real $user $sys

note: it only times a simple command, not any part of a pipeline (all parts of which are run in a sub-shell).

This version allows you to specify as $1 $2 $3 the name of the variables that should receive the 3 times:

timer () {
  { time { "${@:4}" ; } 2>&${_} {_}>&- ; } {_}>&2 2>"/tmp/$$.$BASHPID.${#FUNCNAME[@]}"
  set -- $? "$@"
  read -d "" _ "$2" _ "$3" _ "$4" _ < "/tmp/$$.$BASHPID.${#FUNCNAME[@]}"
  rm -f "/tmp/$$.$BASHPID.${#FUNCNAME[@]}"
  return $1
}


  timer r u s find /bin /sbin /usr rm /tmp/
  echo $r $u $s

and may be useful if it ends up being called recursively, to avoid trampling on times; but then r u s etc should be declared local in their use.

/tmp/$$.$BASHPID.${#FUNCNAME[@]} is a way of specifying a temporary file name that will not be trampled on until after this function exits.

Shared here:
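As a self-contained check, here is the first version again with its closing brace, assuming the dup operator 2>&${_} (plain 2>${_} would redirect to a file named after the descriptor number):

```shell
#! /bin/bash
# time a simple command into $real $user $sys, preserving its exit code
timer () {
  { time { "$@" ; } 2>&${_} {_}>&- ; } {_}>&2 2>"/tmp/$$.$BASHPID.${#FUNCNAME[@]}"
  set -- $?
  read -d "" _ real _ user _ sys _ < "/tmp/$$.$BASHPID.${#FUNCNAME[@]}"
  rm -f "/tmp/$$.$BASHPID.${#FUNCNAME[@]}"
  return $1
}

timer sleep 0.1
echo "real=$real user=$user sys=$sys"
```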

Filtering stderr

This helper function takes $1 as a simple command to be used as a filter on the stderr of the rest of the command line. The filtered output is emitted on stderr.

e.g. stderr "sed -e s/^/tar: /" tar -xvzf -

The function is short, but obscure, making use of a few tricks

stderr() {
  { set -- $_ "$@" ; } {_}>&1

  { eval '"${@:3}"' "$1>&-" ; } 2>&1 >&${1} | eval '$2' ">&2" "$1>&-"

  set -- $1 ${PIPESTATUS[0]} "${@:2}"
  eval "exec $1>&-"
  return $2
}

An explanation follows, taking it line by line:

stderr() {
  { set -- $_ "$@" ; } {_}>&1

The first line uses the temporary variable _ (underscore), which cannot generally be relied upon, but is safe enough in this context. This variable is used to avoid the helper leaving any imprint: trampling on variables or declaring any local variables could destroy transparency and potentially affect the rest of the script.

So _ becomes a copy of stdout; and then inside the { ... } we update the function arguments so that this copy of stdout is now argument 1.

The function arguments are the only lexically scoped variables in bash. We can set them here in this function knowing that they will not have any effect anywhere else.

So $1 now refers to a copy of stdout, $2 is now the filter to be applied to stderr, and "${@:3}" ($3 and onwards) is the command to be filtered.

We want to run the command with the $1 copy of stdout closed, in case the command spawns other processes that might inherit this copy and leave it open. It's a private copy, and as bash doesn't support close-on-exec we must close it ourselves.

We want to do this: "${@:3}" $1>&- but bash can't take a parameter variable on the left hand side of a redirector (not even as ${!1}) so we must use eval. We put the command in single quotes to prevent it being interpolated at all prior to eval, but the redirector is in double quotes so that the interpolated string is passed to eval; thus: eval '"${@:3}"' "$1>&-"

We want to run this command with stdout passed to the spare copy we made in $1  because we will redirect stderr to stdout to be fed into the filter. This redirection specification is: 2>&1 >&${1} (variables are allowed on the right hand side of a redirector).

However we can't append these redirectors to the previous one which already closed $1, so we use a brace scope { ... ; }  in which $1 is closed.

This gives us so far: { eval '"${@:3}"' "$1>&-" ; } 2>&1 >&${1} which has stdout going to our copy of stdout, and stderr going to actual stdout ready to pipe to the next stage.

The next stage also wants to close $1 for the same reason as before, and is:
eval '$2' ">&2" "$1>&-"

So the whole invocation is:
{ eval '"${@:3}"' "$1>&-" ; } 2>&1 >&${1} | eval '$2' ">&2" "$1>&-"

We now want to close $1  for the rest of the script without losing the exit code. As we have finished calling other commands we could save $? in a local variable, but I use the function arguments again to save as $2. Note that PIPESTATUS[0] holds the result of the first stage of the pipeline.

  set -- $1 ${PIPESTATUS[0]} "${@:2}"
  eval "exec $1>&-"
  return $2

And there it is.
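Putting it together, a quick check with the function reassembled including its closing brace (the filter and the probe command are illustrative): stdout passes through untouched while stderr comes out prefixed.

```shell
#!/bin/bash
# The stderr() helper, reassembled, plus a check of both streams.
stderr() {
  { set -- $_ "$@" ; } {_}>&1
  { eval '"${@:3}"' "$1>&-" ; } 2>&1 >&${1} | eval '$2' ">&2" "$1>&-"
  set -- $1 ${PIPESTATUS[0]} "${@:2}"
  eval "exec $1>&-"
  return $2
}

# stdout is captured; the filtered stderr goes to a scratch file.
out=$(stderr "sed -e s/^/err:/" sh -c 'echo plain; echo noisy >&2' \
      2>"/tmp/stderr-demo.$$")
filtered=$(cat "/tmp/stderr-demo.$$"; rm -f "/tmp/stderr-demo.$$")
```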

Monday 18 January 2016

Check status of entire bash pipeline

My useful answer here:

pipestatus() {
  local S=(${PIPESTATUS[*]})

  if test -n "$*"
  then test "$*" = "${S[*]}"
  else ! [[ "${S[@]}" =~ [^0\ ] ]]
  fi
}

Note that S is not set to ("${PIPESTATUS[@]}"); this is so that we can re-create an array if PIPESTATUS is passed as a string, like this:

PIPE_STATUS="${PIPESTATUS[*]}" pipestatus

because an array cannot be passed in that fashion. Why would anyone want to do that? Probably not directly, but other helper commands (just and also) may want to preserve PIPESTATUS as best as possible to permit an: also pipestatus combination.

Usage examples:

1. get_bad_things must succeed, but it should produce no output; we still want to see any output that it does produce:

get_bad_things | grep '^'
pipestatus 0 1 || return

2. the whole pipeline must succeed:

thing | something -q | thingy
pipestatus || return
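As a concrete check, here is the helper reassembled with its closing fi and brace, run against a pipeline whose middle stage fails:

```shell
#!/bin/bash
# pipestatus from above, made whole, exercised both ways.
pipestatus() {
  local S=(${PIPESTATUS[*]})
  if test -n "$*"
  then test "$*" = "${S[*]}"
  else ! [[ "${S[@]}" =~ [^0\ ] ]]
  fi
}

true | false | true
pipestatus            # no arguments: fails, as one stage returned 1
all_ok=$?

true | false | true
pipestatus 0 1 0      # expected statuses match exactly, so it succeeds
expected=$?
```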

Thursday 7 January 2016

Switch hdmi audio when TV is off using cec-client

I have an HDMI TV used as a computer monitor.

When the TV is turned on, the audio plays through the TV, so that the TV remote volume control works.

When the TV is turned off, I want the audio to play through the analog audio output so that I can still hear it.

I use the Pulse-Eight CEC adapter and the cec-client program to talk to the TV.

I have cec-client run as root under socat to broadcast the CEC data by UDP.

I have another socat process running as the logged-in user to pick up the data, monitor it for TV on/off status, and move the playing audio streams.

It works, but is very rough.

In rc.local: /usr/local/bin/cec -s &

In gnome/cinnamon startup programs: /usr/local/bin/cec

And then the script:

#! /bin/bash

MYADDR=8 # because it is. I should probably read this from the cec-client output
# 0 and 1 are the audio sink indexes associated with the device names below.
# Later I will normalise these too, but "pacmd list" will show yours.
ext() {
  pacmd set-default-sink alsa_output.pci-0000_00_1f.3.analog-stereo
  for input in $( pacmd list-sink-inputs | sed -e 's/index: //;t;d' )
  do pacmd move-sink-input $input 1
  done
}

hdmi() {
  pacmd set-default-sink alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1
  for input in $( pacmd list-sink-inputs | sed -e 's/index: //;t;d' )
  do pacmd move-sink-input $input 0
  done
}

cec_client() {
  socat -u UDP4-RECV:4224,reuseaddr - | while read type _ _ dir rest
  do # echo "$type    $dir    :$rest"
     case "$rest" in
       "0f:36") echo "POWER OFF" ; ext ;;
       "(0): power status changed"*"to 'on'") echo "POWER ON" ; hdmi ;;
#       *"($MYADDR) as inactive source"*) echo INACTIVE ; ext ;;
#       *"($MYADDR) the active source"*) echo ACTIVATED ; hdmi ;;
#       *active*) echo "$type    $dir    $rest" ;;
     esac
  done
}

cec_server() {
  read hostname _ < <( hostname )
  hex_hostname=$( echo -n "${hostname%.*}" | od -tx1 | sed -e '1!d;s/0* //' )

  <<< "tx 80 47 $hex_hostname" exec -a CEC socat -u -L/tmp/cec \
      EXEC:'cec-client -t p -p 2' UDP-DATAGRAM:,broadcast &
}


main() {
  if test "$1" = "-s"
  then cec_server
  else cec_client
  fi
}

main "$@"
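The od/sed trick in cec_server is worth a note: od -tx1 dumps the bytes as hex, '1!d' keeps only the first output line, and 's/0* //' strips the leading offset column, leaving space-separated hex bytes for the CEC OSD-name message. A minimal sketch (the sample string is illustrative):

```shell
#!/bin/bash
# Sketch: encode a short string as space-separated hex bytes,
# as cec_server does for the hostname.
to_hex() { echo -n "$1" | od -tx1 | sed -e '1!d;s/0* //' ; }

to_hex tv     # the bytes of "t" (0x74) and "v" (0x76)
```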