All hail the glorious vim master race

in code

OR: “Moving from Sublime Text 2 to vim.”

Click here to revel in my glorious vim master race desktop.

Dual head vim setup

I have used Sublime Text since 2011. I don’t know exactly when I first used it, beyond clear memories of having it open in class during my first year at I.T. Sligo. I asked on Facebook for suggestions for a prettier editor than Notepad++, and received a link to Sublime. The rest, as the cliché goes, is history.

I have used vim and vim-style modes alongside GUI editors since ~2001. I have used:

  1. gvim.
  2. Kate in vi mode.
  3. Sublime Text in vintage mode.
  4. Vimium in Google Chrome.

And so on, ad infinitum; vim-style modes and bindings are popular, and easy to shoehorn into almost any software with an API.

Sublime Text is awesome: flexible, extensible and packed with all sorts of great features. I have relied on Sublime and the Sublime SFTP plugin since I began work on Tuairisc in August 2014. Sublime SFTP is, like Sublime, awesome: it supports not only FTP, but also SFTP, FTPS and straight SSH connections. It can sync files and folders in both directions. It’s easy. The problem is that, in the end, my dependence on vintage mode and Sublime SFTP has led to bad habits:

  1. Why version control and sandbox the test code when I can just upload shit and see what runs?
  2. Why bother to learn vim in depth when Sublime Does a Good Enough Job?

These and other things have led me to reassess Sublime over the past few weeks:

  • Many of Sublime’s keybinds aren’t friendly to my wrist on OS X. I fucked my right arm with RSI when I was bedridden during my 2012 illness. I have been careful about mouse use since then, to the point that I keybound everything in World of Warcraft. While I know I’m free to change every keybind in Sublime Text, that is a time-intensive exercise I don’t relish.
  • Sublime Text and Sublime SFTP are nagware and I’m cheap.
  • Sublime has become an intermediary between my true workspace and me. When I work, I feel like I am building a ship in a bottle while wearing oven mitts. Chris Hadfield’s tales of working in thick spacesuit gloves are a good analogy.

So as of Monday I went all-vim, all the time. I now work on my server through SSH, screen and tmux. I dove in headfirst as I yodeled “YOLO”. These tools let me work on my code with vim, manage commits with tig, and save a large amount of time. Nor do I have to fuck around with GUI applications anymore.

Here is what I have learned:

  • Having a keyboard and mouse with which to copy and paste makes one lazy, and it is also slow. You can see what you need to copy and where you need to paste it, but you don’t think about their positions: you click first on one and then the other. vim, for lack of a better description, forces you to be aware of the spatial position of characters in the file. What do I need to copy? What are its line and column numbers? Where is the target area in relation to this? How many lines and columns away is the destination?
  • vim isn’t as scary as I feared. I mean, fuck, whenever I think of vim, I conjure up a hoary veteran of the kernel mailing lists. But additions to the .vimrc (here’s mine) have proved straightforward. Adding new plugins and colour schemes (I use oxeded) has proved to be as simple as untarring them into the respective folders.
  • Macros are a lifesaver. To give an example, I use the phpdoc style of code comments, so I can save the same boilerplate once and paste it over and over without having to type it out for the nth time.
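To sketch the boilerplate trick (this is an illustration of the technique, not my exact setup): yank the phpdoc block into a named register once, then paste it on demand.

```vim
" With the cursor on the first line of a 5-line phpdoc block,
" yank it (current line plus the next 4) into register d:
:.,+4y d

" Later, paste it above the current line:
"dP

" Alternatively, record typing the block as a macro: qd ... q,
" then replay with @d, and repeat the last replay with @@.
```

Registers survive for the whole session, so one yank covers a day of commenting.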

vim has been good to me and this move has proven good for me.


Get an array of per-year and per-month counts of your WordPress posts

in code

PHP archive function

I had to research this for a client. Their project requires a list of posts per month and per year over the duration of the blog but, like mine, they have a lot of posts spread over an extended period of time. It would benefit me to have the same count, so I approached a solution for both sites. As of this article, I have published just shy of 1800 posts over an inclusive 11 years (2005-2015). There are a few methods to grab a post count for the current month, or for a given month, using WP_Query or get_posts. These methods take up to a second to run for a blog of my age and size, which is just too slow. While I could save the results using update_option, that brings in date checking, incremental generation, variable validation and a host of overhead that wouldn’t work as well as one efficient method.
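For context, the slow per-month count looks something like this. date_query and found_posts are standard WP_Query features, but the specific year and month values are only an illustration:

```php
<?php
// Count published posts for one month the slow way: a full query per month.
$query = new WP_Query(array(
    'post_status'    => 'publish',
    'posts_per_page' => 1,
    'date_query'     => array(
        array('year' => 2014, 'month' => 8),
    ),
));

// found_posts reports the total number of matches, ignoring pagination.
$count = $query->found_posts;
```

Run once, it’s fine; run for all 132 months of an 11-year blog, and the delay adds up.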

This Stackoverflow thread gave me the seeds of a solution, which I expanded into a function that:

  1. Determines the year of the first post.
  2. Determines the current year.
  3. Inclusively iterates through each year, using the $wpdb global variable to find the number of posts for each month of that year. A month isn’t added if there were no posts during it.

I’m going to assume that most calendar archive widgets use something like this. Thing is, Google didn’t turn up anyone who talked about an easy and copy-pastable solution, so here you go:

GitHub gist!

The function returns a multidimensional array ordered by year in descending order, with each month keyed by name and mapped to its post count.

/**
 * Convert Number to Month
 * -----------------------------------------------------------------------------
 * See:
 * @param   int         $number             The month of the year as a number.
 * @return  string                          The month as a word.
 */

function get_month_from_number($number) {
    return date_create_from_format('!m', $number)->format('F');
}

/**
 * Generate Dated Archive Post Count
 * -----------------------------------------------------------------------------
 * Generate the count of posts by year and month. Generating this can be
 * resource intensive, so it makes sense to cache the result.
 * See:
 * @return  array       $counts             Post counts keyed by year, then by
 *                                          month name.
 */

function timed_archives_count() {
    global $wpdb;
    $counts = array();

    /* Get the year of the first post: 
     * -------------------------------------------------------------------------
     * 1. Get 1 post in ascending order. This is the first post on the blog.
     * 2. Extract the date of the post.
     * 3. Parse that down to the year alone. */
    $from_date = preg_replace('/-.*/', '', get_posts(array(
        'posts_per_page' => 1,
        'order' => 'ASC'
    ))[0]->post_date);

    for ($i = date('Y'); $i >= $from_date; $i--) {
        $counts[$i] = array();

        $months = $wpdb->get_results($wpdb->prepare(
            "SELECT MONTH(post_date) AS post_month, count(ID) AS post_count from " .
            "{$wpdb->posts} WHERE post_status = 'publish' AND YEAR(post_date) = %d " .
            "GROUP BY post_month;", $i
        ), OBJECT_K);

        foreach ($months as $m) {
            $counts[$i][get_month_from_number($m->post_month)] = $m->post_count;
        }
    }

    return $counts;
}

Here is a snippet of how the returned array appears:

array(5) {
    string(1) "8"
    string(2) "14"
    string(2) "12"
    string(2) "17"
    string(1) "6"
}

There’s lots more I can do from this nub of a function. Here is a quick example that spits out an ordered list:

$archive_counts = timed_archives_count();

foreach ($archive_counts as $year => $months) {
    printf('<br />%s<br />', $year);

    foreach ($months as $month => $count) {
        printf('%s: %s<br />', $month, $count);
    }
}
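Because the count only changes when a post is published, the function pairs well with the update_option caching mentioned above. A minimal sketch — the 'timed_archives' option key is my own placeholder, not from the original function:

```php
<?php
function cached_archives_count() {
    // Reuse the stored count where possible; otherwise generate and store it.
    $counts = get_option('timed_archives');

    if ($counts === false) {
        $counts = timed_archives_count();
        update_option('timed_archives', $counts);
    }

    return $counts;
}

// Hook delete_option('timed_archives') to publish_post to refresh the cache.
```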

Find duplicate files in a directory

in code

When I photographed heavily/professionally, I was rigorous in how I handled my imported raw files and master processed (PSD/XCF) files. I was much less rigorous in how I sorted and stored my processed JPG files, to the point that I’ve found several directories with anywhere between hundreds and thousands of images, some or many of them straight duplicates.

For the hell of it, and also because I haven’t touched C# since early 2013, I drew up a simple console application in C# to search for duplicate files in a given directory. I made a good start on it in Bash, but…fuck. Bash is slow, and interacting with arrays in Bash leaves me wanting to murder somebody.

Order of the program:

  1. Check directory was provided. Check directory exists. Check it has more than one file.
  2. Get list of files in directory.
  3. Generate MD5 checksums for each given file.
  4. For each checksum:
    i. Check each file after this in the list to see if it has the same sum.
    ii. If a duplicate is found, check if it is on the recorded dupe list.
    iii. If it isn’t on the dupe list, add it.
  5. Run through the file list once for each dupe checksum. Print all file names with the same checksum.

I need to find more little projects like this in C#; it was fun to dust off what I knew.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Security.Cryptography;

public class findDupes {
    static void Main(string[] args) {
        CheckBeforeProceeding(args);

        string[] files = Directory.GetFiles(args[0]);
        List<string> filesums = new List<string>();

        // Generate an MD5 checksum for each file in the directory.
        foreach (string file in files)
            filesums.Add(GetFileSum(file));

        List<string> dupes = SearchForDupes(filesums);
        PrintDupes(filesums, dupes, files);
    }

    static void PrintDupes(List<string> sums, List<string> dupes, string[] files) {
        // Print each duplicated checksum, followed by every file that shares it.
        foreach (string dupe in dupes) {
            Console.WriteLine("{0}\n----------", dupe);

            for (int i = 0; i <= (files.Length - 1); i++)
                if (sums[i] == dupe)
                    Console.WriteLine(files[i]);

            Console.WriteLine();
        }
    }

    static List<string> SearchForDupes(List<string> sums) {
        // Search for duplicate files within the given list of sums.
        List<string> dupes = new List<string>();

        for (int i = 0; i <= (sums.Count - 2); i++)
            for (int j = (i + 1); j <= (sums.Count - 1); j++)
                if (sums[i] == sums[j])
                    if (!dupes.Contains(sums[i]))
                        dupes.Add(sums[i]);

        return dupes;
    }

    static void CheckBeforeProceeding(string[] args) {
        // Check things are good with the target dir before proceeding.
        if (args.Length == 0) {
            Console.WriteLine("Error: No directory provided");
            Environment.Exit(1);
        }

        if (!Directory.Exists(args[0])) {
            Console.WriteLine("Error: '{0}' is not a valid directory", args[0]);
            Environment.Exit(1);
        }

        if (Directory.GetFiles(args[0]).Length == 0) {
            Console.WriteLine("Error: '{0}' does not contain any files", args[0]);
            Environment.Exit(1);
        }

        if (Directory.GetFiles(args[0]).Length == 1) {
            Console.WriteLine("Error: '{0}' only contains 1 file", args[0]);
            Environment.Exit(1);
        }
    }

    static string GetFileSum(string file) {
        // Function scalped from
        using (var sum = MD5.Create())
            using (var stream = File.OpenRead(file))
                return BitConverter.ToString(sum.ComputeHash(stream)).Replace("-","").ToLower();
    }
}
Here is some example output:

[mark][new_instagram] $ ~/dupe_find.exe .



Neater output from the program is left as an exercise to the reader.

Makeout Point

in the website

Makeout Point, menu closed

Thank my housemate Alanna for this theme. Alanna has wanted to blog for a while, but she has also wanted to procrastinate and play video games in her free time. :) She hopes that because she asked for a unique theme for her site, and drove me to build it, she will be more likely to blog out of a sense of responsibility to the work undertaken.

And here I am two weeks later, just about finished with Makeout Point. The theme is clean, minimal, responsive, and not half-bad compared to my earlier efforts. The theme is built for Anchor, a WordPress-lite in PHP. The available functions are bare bones compared to WordPress, but the upshot is that you don’t have WordPress’ cruft. Want to loop comments? Loop comments. Want a count of comments? Get a count of comments. There’s a really nice straightforwardness to Anchor that I’ve enjoyed working with.
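For a flavour of that straightforwardness, an article loop in an Anchor theme looks roughly like this. I’m writing the helper names from memory, so treat them as assumptions and check the Anchor theme docs before copying:

```php
<?php if (has_posts()): while (posts()): ?>
    <article>
        <!-- Helper names (article_url, article_title, article_markdown)
             are from memory; verify against the Anchor docs. -->
        <h2><a href="<?php echo article_url(); ?>"><?php echo article_title(); ?></a></h2>
        <?php echo article_markdown(); ?>
    </article>
<?php endwhile; endif; ?>
```

No setup, no globals to juggle: the loop is the whole API.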

The design is clean, responsive (smartphone through to desktop), fast, has a strong focus on code snippets, and makes use of burger menus for both site navigation and article comments.

Makeout Point, menu open

Anchor isn’t perfect: you can freely mix HTML and Markdown, and this leads to occasions where previously-escaped HTML and XML code snippets in <pre> tags are parsed again. Whoops. It’s nice though, don’t mistake me! I’d love to work with Anchor more in future.

You can fork or pull Makeout Point from its GitHub repo.