— tomauger.com

Here’s the scenario: you have a smaller solid-state drive (SSD) as your boot and application drive, and a larger HDD for your files. You don’t want to gum up your SSD with temporary internet files and other cruft, so you want your TEMP directory to be on the HDD, rather than in its default location under Users. In fact, you don’t even want Users to be on your boot drive, because that’s where My Documents and the other libraries are stored by default.

The solution? Move Users to your data drive, but create a HARD LINK on your boot drive to fool Windows (and Adobe Creative Suite) into thinking that everything’s still on the boot drive.

One additional note (that caused a major complication for me) was that I generally like my TEMP directory to be at the root of my drive. This makes it more visible and therefore more likely to get cleaned out (by me) regularly, as opposed to being buried inside my %USERPROFILE% directory where it will never see the light of day. This has always been C:\TEMP in the past, but under this setup, I wanted it on D:\TEMP.

It turns out that moving your Users to the data drive is easier than you might think. The one complication is that you can’t be logged in as that user while you do it. The best option, then, is to boot from your Windows install disk (which should be handy, because in all likelihood you’re doing this right after a clean install of Windows, right?) and do it from a command prompt there.

Before we get started

One thing you might want to do is get your drive letters in order now, before you do any hard-linking, because experience shows that changing drive letters after the fact could be problematic.

Boot into Windows and open up the Disk Management console (diskmgmt.msc) to assign each drive the letter you want it to end up with.

Booting to a command prompt

Use your Windows install disk to boot to a command prompt. To get there, choose the “Repair your computer” option. You’ll get a scary message saying that Windows has detected a problem with your install (this is not true) and asking whether you want to “Repair and Restart”. NO. (Go ahead, click the “No” button).

In the list of recovery tools, there is your command prompt. Click it.

Moving USERS to your secondary drive

Now that you’re at a command prompt and not logged in as a user, you can move the Users directory. There are a ton of hidden files and junctions in there, so we’ll be using robocopy, a powerful copy tool that’s bundled with Windows 7.

This is where things get a little weird. In Recovery mode, your logical drive letters can be different from what you expect, so spend some time at the command prompt figuring out which is which.

I recommend using the DISKPART utility, which is available at the command prompt. Type “DISKPART” (without the quotes, of course; capitalization is optional), then type “list volume” (yes, I know it’s grammatically incorrect, Niles). You should now see a listing of all your volumes with the drive letters they’re mapped to. Look at the drive sizes – hopefully that will help you identify each drive. Type “exit” to return to your command prompt for the next step.

If that fails to be conclusive, you will have to rock it old skool and use the dir command to list the contents of your various drive letters. Type the drive letter followed by a colon (eg: C:) and then ENTER to switch drives. You might find that some drives are unavailable or are protected system partitions. Just start at C: and work your way through the alphabet, doing a dir each time you actually land on a physical drive, and determine which is your boot drive and which is your data drive.

You’ll know you found your boot drive when the dir command lists directories such as “Program Files”, “Windows” and “Users”.

Write these letters down. You might get confused because they are often quite different from what you would expect. When this process is complete, they will go back to normal, so just bear with me.

In my scenario, it turned out that my boot drive was E: and my data drive was D:, so those are the letters I used in the following commands.

1. Copy your Users directory from the boot drive to the data drive

robocopy /copyall /mir /xj E:\Users D:\Users

2. Remove the old Users directory (it can’t be there or you can’t create the hardlink)

rmdir /S /Q E:\Users

3. Create the hardlink (properly called a junction in NTFS jargon – technically a reparse point rather than a true hard link). This is like a shortcut, only at a file-system level.

mklink /J E:\Users D:\Users

Now reboot and navigate to your data drive – you’ll see Users there. Then, to test it out, open another Windows Explorer window, click on Documents, and put something there.

Important: Windows Explorer will show C:\Users\YourUserName because we’ve completely fooled it. Don’t take my word for it: navigate manually to your data drive, open Users\YourUserName\Documents, and you should see the file you just saved.

For more details on the various commands I’ve listed and to see my original reference, follow this link to lifehacker.com.

Photoshop Scratch Disk Errors with relocated TEMP file

In theory, if you haven’t messed with your TEMP folder, it will be %USERPROFILE%\AppData\Local\Temp, which means that it’s been relocated, too. In my case I moved my TEMP folder to D:\TEMP for the reasons I described in my preamble. This also required going into the Windows environment variables and pointing the TEMP directory at the new location. That’s easy to do via Start Menu > Search > “View advanced system settings”, then the Advanced tab > Environment Variables. You can then edit each occurrence of TMP and TEMP to point to your new location.

This causes no end of grief for the Adobe Creative Suite applications.

Photoshop will fail on launch giving you this error: “Could not open a scratch file because the file is locked or you do not have the necessary access privileges”

(Aside: for you Googlenauts who just landed on this page because you’re having this error, you might want to try trashing your prefs by holding down Ctrl + Alt + Shift immediately after launching Photoshop. That is the recommended fix for a different, and much more common, situation that also leads to the same error message. The solution below only applies if you’ve moved your TEMP folder to another location.)

Bridge will crash shortly after launch with a similar scratch disk error.

InDesign will crash a moment after it boots up.

It’s bad.

It’s actually bad design on Adobe’s part. They don’t respect the environment variable and for some bizarre reason need the TEMP folder to be on the boot drive.

The solution: create another hardlink to TEMP.

mklink /J C:\TEMP D:\TEMP

Then point the environment variables at the hardlink, not the physical location. Open your Start Menu, right-click Computer, and choose Properties. Click “Advanced system settings” in the top-left navigation pane. Choose the Advanced tab and click “Environment Variables”. Now look in both the User variables and the System variables for any occurrence of “TEMP” or “TMP” and press the corresponding Edit button. Type C:\TEMP in each instance (there are usually 4 edits to make).

Leave a comment if this worked for you!

Note: I’m not a systems guy, so if you have other issues I probably don’t have the answers. I’m just posting this here because it took me a lot of Google-fu to cobble this info together, and the hardlink to TEMP is something I came up with (though I’m sure others have, too).

Read More

In continuing my tradition of MySQL / bash shell one-liners (and posts with arguably SEO-friendly but embarrassingly long titles), here’s one that just popped up today. I wanted to dump only selected tables from my database (full disclosure: all the tables relating to a particular WordPress plugin). Knowing I had done something similar before, I popped over to this post and modified it accordingly.

mysql -uUSERNAME -pPASS DBNAME --skip-column-names -e "select table_name from information_schema.tables where table_schema not like 'information_schema' and table_name like 'wp\_visual%';" | xargs -t -I {} mysqldump -uUSERNAME -pPASS DBNAME {} | gzip > dump_VFB-only_2012-11-20.sql.gz

Xargs and I are still dating on-and-off (well, more off than on), but when I need her special kind of loving to get my freak on, there’s really no other game in town. Don’t be confused by the -t output. I was. It looked like 3 separate files, and I was basically expecting only the last of the tables to actually find its way into my gzip (that sounds ruder than I intended. Excellent), but that was not the case.
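If you want to convince yourself that each per-table mysqldump really does land in the same gzip stream, here’s a minimal sketch of the same plumbing with echo standing in for mysqldump (the table names here are made up):

```shell
# Three fake "table names", one xargs invocation per name (just like -I {} above),
# with all stdout concatenated into a single gzip stream.
printf 'wp_visual_a\nwp_visual_b\nwp_visual_c\n' \
  | xargs -I {} echo "-- dump of {}" \
  | gzip > /tmp/demo_dump.sql.gz

# One file, three "dumps" inside it:
gunzip -c /tmp/demo_dump.sql.gz
```

All three lines come back out of the single archive, which is exactly what happens with the real mysqldump output.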

If you find yourself vaguely offended or disturbed by my thinly-veiled sexual references to Unix shell commands, then you do not love your code. Or sex. Not sure which is worse.

Read More

During one of the wrap-up talks at WordPress Community Summit 2012, @nacin dropped the hint that a crack team was working on modifying the i18n/l10n system within WordPress to allow incremental / on-demand deployment of “language packs” with new .mo files, independently of the core update cycle.

This is a huge step toward enabling complete localization of WordPress, but the scary-cool part is for plugin developers – opening the door for third-party translations to be easily pushed out without necessitating a full dot revision.

[Skip all the verbiage and take me to the pretty UI pictures]

Many (okay, all) of the details were clouded in mystery, but it seems that there will be some kind of server / repo component (I envision something like Trac, only not like Trac) where designated community translators can log in, create translations for core, plugins, themes, whatever, and then push them out as language packs, which would then pop up in your Updates panel whenever your wp-cron did an update check.

Cool. But There’s a Bottleneck.

The bottleneck, of course, is the community translators themselves. They carry the responsibility of translating all those strings from all those plugins, which gets particularly tricky when a plugin addresses functionality from some knowledge domain outside the purview of the translator’s expertise. It could end up like building a state-of-the-art sports arena that never gets filled to capacity.

Read More

For a recent client, we found ourselves with 11,000+ registered WordPress users with a boatload of meta data that we wanted to be searchable in the Admin backend (All Users screen). Additionally, we wanted to be able to search on a user’s email address as well, to help the client’s Customer Support folk locate a user record quickly.

Now, out of the box, WordPress only searches the following fields:

  • user_login
  • user_nicename

This short post walks through the process of extending the WordPress Admin’s user search without mucking about in any core files.

Read More

Okay, so this is a pretty edge case, I’ll admit it, but I recently found myself having to do a manual swap of a taxonomy through direct manipulation of the database. Short story: we had used the default post category instead of a custom taxonomy, and needed to preserve all the existing terms, so we registered the new custom taxonomy and then went into the DB and did a quick UPDATE, setting the ‘taxonomy’ field of wp_term_taxonomy to the new slug. All appeared well.

Problems with hierarchical taxonomies

Strangely though, when we tried to do a tax query using get_terms( array( ‘parent’ => $parent_id ) ), we kept getting empty result sets. Checking the database, that made no sense: the hierarchy was there, the parent_ids were set, all was good in $wpdb-land. But each query yielded no results, even with ‘hide_empty’ set to false, which should have returned the entire list rather than just the terms actually in use. Sigh.

After much digging through core, the culprit turned out to be _get_term_hierarchy(), defined in wp-includes/taxonomy.php. This core function, marked as ‘private’ by the underscore in front of its name, is responsible for drafting up an array where the keys are the term_ids of all the terms in the taxonomy and the values are the parent_ids of those terms. It’s just a shortcut, really, and in my opinion probably a bit redundant – EXCEPT that somebody decided to use a primitive form of caching here. It must predate the object cache, which probably should have been used instead. What it does is store the resulting hierarchy in the wp_options table of the database, under the option_name {$taxonomy}_children.

So the _get_term_hierarchy() function first checks to see whether such an option record exists. If it does, it just uses that and returns the result. If it does not, it performs a proper query on the DB and stores that in the wp_options record for future use, before returning the new result.

So here’s where things fell apart. I suppose this option gets updated every time you add a term using the admin interface. Clearly, if you short-circuit this by writing directly to the DB, you won’t think of updating the serialized array in wp_options. And when you register the taxonomy for the first time, WordPress does write that option, so it will be there from the get-go: an empty array waiting to be filled with happy children like Mother Hubbard’s shoe. And that short-circuits the _get_term_hierarchy() call, which is used whenever you specify ‘parent’ or ‘child_of’ in your args to get_terms().

The solution in this case was simply to delete the option from the database and then run the query again. This forces _get_term_hierarchy() to re-build the list and cache it in wp_options, and all is well with the world.
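If you’d rather not hunt for the record in phpMyAdmin, the delete can be done straight from the shell. This is just a sketch: the taxonomy slug, table prefix, and credentials below are placeholders for your own.

```shell
# Hypothetical taxonomy slug and the standard wp_ table prefix -- substitute your own.
TAXONOMY="my_custom_taxonomy"
SQL="DELETE FROM wp_options WHERE option_name = '${TAXONOMY}_children';"
echo "$SQL"

# Then feed it to mysql (commented out here; USERNAME and DBNAME are placeholders):
# mysql -uUSERNAME -p DBNAME -e "$SQL"
```

The next get_terms() call with ‘parent’ or ‘child_of’ will then rebuild and re-cache the hierarchy for you.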

So remember this little gotcha whenever you’re manipulating a hierarchical taxonomy directly on the database – not that this should happen very often, but I could see it cropping up during an import of terms from another DB, or, as in my case, a switch of taxonomy name.

Read More

Digital Download

cc:me is an hour-and-seventeen-minute-long continuous electroacoustic composition commissioned by Elaine Whittaker for her Cc:me art installation, which premiered in March of 2012 in Toronto. It’s available to the public for high-quality download at Bandcamp.com. You can listen to it for free by clicking the Play button at the top of this post (Flash required). This version is slightly modified from the installation version: it has an extra 5 minutes at the tail, allowing the piece to end rather than loop continuously as it does in the installation.

Listening Notes

Because it was originally composed as an ambient accompaniment to a physical art installation, listening to the piece as a stand-alone composition for its full duration can be an extreme experience, primarily due to its length and slow pace. The piece evolves through 7 episodes, with the opening theme resurfacing and subsiding, before restating itself in its entirety to bracket and close the piece. Episodes transition into each other gradually and seamlessly, evoking a journey through a highly textured landscape with significant auditory events as landmarks along the way.

Read More

Check this code snippet out. Include jQuery and the most excellent lettering.js jQuery plugin by Dave Rupert and the guys at Paravel, Inc., and then paste this into a script tag, or whatever you do to get JavaScript onto your page. Then just customize the list of classes that will be “bumped”.

// Bump.js by Tom Auger (http://www.tomauger.com). Free to use for anything. Maybe leave this line intact?
// Set up some customization here for the randomness
var classes = [
	// ['css selector', vertical-align variance (in pixels) eg: 2 = -2px to +2px, % chance of getting 'bumped']
	['.text', 1, 10],
	['#hours', 1, 50],
	['.textlist li', 1, 20]
];

jQuery(function($){
	// Everything's been loaded, let's have some fun!
	for (var i = 0, l = classes.length; i < l; i++){
		var selector = classes[i][0], shift = classes[i][1], chance = classes[i][2];
		// lettering.js wraps each character in a span; randomly bump some of them
		$(selector).lettering().children('span').each(function(){
			if (chance / 100 > Math.random()) $(this).css('vertical-align', Math.floor(Math.random() * (shift * 2 + 1)) - shift + "px");
		});
	}
});
Read More

If you’re using DreamWeaver without its built-in Subversion support, or for whatever reason need to add newly-created files (that may still be checked out) to the repository, this one liner will do the trick:

svn st | grep "^\?" | grep -v ".LCK$" | awk '{print $2}' | xargs svn add

Let’s break it down, by pipes (reading left to right):

  1. Give me a subversion status: a list of all files that have been added, modified, or otherwise
  2. Using grep, only select those that start with a “?”, namely those files added to the working copy, but not yet under version control
  3. Using reverse grep (the -v switch), exclude any of those records where the filename ends with “.LCK” (the DreamWeaver lock file that indicates something is checked out)
  4. Using AWK, only output the second field (which translates to the filename without all the SVN flags in front of it)
  5. Using XARGS, pipe this (now filtered) list of files to the svn add command.
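You can watch steps 2 through 4 do their filtering without touching a repository at all: feed the pipeline some canned svn st output (the filenames here are invented for the demo) and see what survives.

```shell
# Fake `svn st` output: one modified file, two unversioned files,
# one of which is a DreamWeaver .LCK lock file.
printf 'M       index.php\n?       new-page.php\n?       new-page.php.LCK\n' \
  | grep "^\?" | grep -v ".LCK$" | awk '{print $2}'
# Prints: new-page.php -- the only file that would reach `svn add`.
```

Swap the printf for the real `svn st` (and tack the `| xargs svn add` back on) once you’re happy with what comes out.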
Read More

I had no idea, but in researching the expression “Yak Shaving”, it appears that it applies to many, if not entirely all, of my daily activities.

Including writing this post, as a natural result of trying to import a new project into my SVN repo while ignoring a directory. Totally relevant of course.

Read More

Since I’ve gotten back into Magic: The Gathering, a lot of things have changed, mostly for the better (though, seriously, who came up with the whole Planeswalkers idea? I thought we tried that with those ugly oversized “Character” cards back in the day).

One of the great new things Wizards has given us is the Gatherer, their online card-search database. I was surprised to learn that it allows Regular Expressions in its search terms, which makes for some seriously powerful search possibilities.

Since I realize that not all MtG aficionados might be programmers or even care what RegExps are, I am compiling a list of cool searches using Regular Expressions that will give you comprehensive card lists you just wouldn’t be able to get any other way.

I’ll be adding to this list from time to time, so be sure to bookmark it for your own deck building / collecting research!

All red cards that deal direct damage:



All cards that put a card from a graveyard directly into play

(plus a bunch of others that put Zombie tokens into play instead – I’ll need to refine this one):


Note that there’s a bit of a bug with Gatherer right now: it loses the “?” in the URL when you click on any of the pagination links. So, in order to get to the second, third, etc. pages, you need to modify the URL by adding “&page=X”, where X is the page number minus 1. So to get to page TWO:



All cards that grant an additional combat/attack phase:



All cards that donate a creature (token) to your opponent:



All cards that can be sacrificed on ETB:



All cards that have a player gain or lose life equal to a creature’s Power or Toughness:

Read More

Named Ranges in Excel are pretty cool. They enable things like drop-down lists in your Data Validation. You can define a series of values in a column, give that column a name, and then you can refer to that range by name instead of its coordinates (ie: A1:B5).

One of the frustrations with maintaining these lists is that as soon as you add a new value, you have to go back to Formulas > Name Manager and redefine the range to include it. To avoid this, you can create what’s called a Dynamic Range by using a formula instead of a hard-coded set of coordinates. This is most often handled using the OFFSET( ) function, as you’ll see below. Googling “Excel dynamic range” will return tons of results that are all variants of:

=OFFSET(Sheet1!$A$1, 0, 0, COUNTA(Sheet1!$A:$A), 1)

The Basic Formula, Explained

‘OFFSET’ will return a range based on a STARTING POSITION, a ROW OFFSET (added to the starting position), a COLUMN OFFSET (also added to the starting position), a HEIGHT (number of rows down) and a WIDTH (number of columns across).

STARTING POSITION is usually the top left cell of your named range

ROW OFFSET is usually 0, since we have defined the starting position of our range

COLUMN OFFSET is also usually 0, for the same reason

HEIGHT is the number of rows to include in the range (here we’re using another formula – more on that below)

WIDTH is the number of columns to include in the range (must be at least 1)

Expanding the Range Dynamically

So clearly, OFFSET by itself doesn’t do much for us – in fact, if you’re just going to plug static numbers into OFFSET, there’s actually no use for it – just type in your range directly and leave it at that. The reason you use OFFSET is that it allows you to substitute any of its parameters (arguments) with some other formula. This is where the ‘dynamic’ part comes in.

The typical dynamic range formula you’ll find on the Interwebs uses COUNT( ) (for numerical columns) or COUNTA( ) (for text columns). COUNT counts the number of cells containing numbers; its text cousin COUNTA counts the number of non-blank cells of any kind. If we count the number of non-blank cells in a column and use that as the HEIGHT parameter of OFFSET, we get a range that starts at the starting coordinate and continues down to the last cell that contains an entry. In theory.

In actuality, if you read the fine print, what you get is a range of cells that starts at the first cell and continues down for exactly as many cells as COUNTA counted. If there are blank cells in the middle of your range for some reason, this number will be too small: the blank cells aren’t counted, so your range will be shorter than it should be, and the last cells in the range will be left off. So be aware of this subtlety.
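If the Excel behaviour seems abstract, here’s a rough shell analogy of the same off-by-blanks bug: count the non-blank “cells” in a column, then take that many rows from the top.

```shell
# A four-row "column" with a blank cell in the middle.
printf 'apple\nbanana\n\ncherry\n' > /tmp/column.txt

# COUNTA analog: count non-blank lines (3, not 4).
NONBLANK=$(grep -c . /tmp/column.txt)

# OFFSET-with-COUNTA analog: a "range" of height 3 measured from the top...
head -n "$NONBLANK" /tmp/column.txt
# ...which stops at the blank row, so cherry is cut off --
# just like the last cells of a dynamic range with gaps in it.
```

The count is right; it’s using it as a height from the top that goes wrong when there are holes.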

Read More

For those of you that just want the straight goods, here’s how we delete everything EXCEPT the ‘backup’ directory:

find . -maxdepth 1 ! -name 'backup' ! -name '.*' | xargs rm -rf

Here’s a bit of a (simplistic) explanation:

  1. ‘find’ is the magic sauce here. We’re using it to recursively search through all the files (and directories) starting at ‘.’ (the current directory)
  2. the ‘!’ is the negation operator, which tells find that the operator that follows (-name) should actually perform a negative match (match everything that does NOT match this criterion)
  3. we also need to set up a negative match for ‘.’, ‘..’ etc., since find returns those as well. Do note that one side effect here is that it won’t delete, for example, ‘.htaccess’. You may need to modify this if you want to kill those ‘hidden’ files as well
  4. find is recursive, so even though it won’t attempt to delete the ‘backup’ directory itself, it would still traverse into ‘backup’ and delete all the files inside it, leaving an empty directory! To avoid this, we use -maxdepth 1, which effectively turns recursion off. We then make sure we recursively delete the files in the OTHER directories by using the -r flag on ‘rm’
  5. xargs is one way to ‘do’ something useful with the list of files and directories returned by find (find also has an -exec operator which will be able to do almost exactly the same thing, but I like xargs’ syntax better, personally). We pipe the output to xargs and then follow it with the command that we would want to run once per file/directory, in this case ‘rm -rf’
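Before running the destructive version, it’s worth dry-running the find half on its own: everything it prints is exactly what rm -rf would receive. Here’s a sketch in a throwaway directory (the file names are invented):

```shell
# Build a sandbox: a backup dir, another dir, a regular file, and a hidden file.
SANDBOX=$(mktemp -d)
cd "$SANDBOX"
mkdir backup other
touch file.txt .htaccess backup/keep.txt

# Dry run: list what WOULD be deleted (no rm yet).
find . -maxdepth 1 ! -name 'backup' ! -name '.*'
# Prints ./other and ./file.txt -- backup and .htaccess are spared.
```

Only once that list looks right should you bolt the `| xargs rm -rf` back on.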

Have fun! And please backup your shit before trying this, cause one mis-step and you can trash a whole lotta stuff!

Read More