Fast path finding with priorityq

The power of Nostalgia

I really miss Weewar!   Weewar, for those who don’t remember, was a very simple, light-themed, turn-based strategy war game.   There have been many clones since, but nothing comes close.  Here’s a video to give an idea of the mayhem:

So it all came flooding back, and I did what I always do when flooded with nostalgia.  I recreated it!    Plus, now I had the added incentive of exposing my kids to some good old-fashioned fun and strategy!

Getting there fast!

The first thing I wanted to teach my kid was the concept of finding your way, say in a maze.   And here again, Weewar provides us with good life lessons.   Weewar, like other turn-based war games, offers different kinds of units (eg Soldier, Tank, Plane, Mortar etc) that can move around on different kinds of terrain (eg river, mountain, plains, swamp etc).    Each unit has a number of movement points (MP), and each terrain type has a different cost that eats away at a unit’s movement points.    So clearly it is strategically paramount to get your units to the enemy base as quickly as possible.

Given the different costs and movement points, this requires a bit of careful thought.   You cannot adopt a greedy approach that always picks the terrain with the lowest cost and hope it gets you there, as that could be a trap leading you onto a “costly” mountain!   There needs to be a way to compute the cheapest/shortest path between two given points, and this is where Dijkstra’s shortest path algorithm comes in!   It looks something like this (courtesy of Skiena’s The Algorithm Design Manual):

In line 7, in order to find the vertex with the smallest distance, a Priority Queue is usually used.   And in line 8, as the distances of nodes are modified, they must be reorganized within the Priority Queue based on the new distances (this is the decrease-key operation on a Priority Queue).
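Concretely, here is a minimal Python sketch of the algorithm using the standard heapq module (my own rendition, not Skiena’s listing; the toy terrain graph and its costs are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """graph maps node -> list of (neighbor, movement_cost) pairs."""
    dist = {source: 0}
    visited = set()
    heap = [(0, source)]  # entries are (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue  # a stale entry: a cheaper path was found later
        visited.add(u)
        for v, cost in graph.get(u, []):
            nd = d + cost
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                # heapq has no decrease-key, so we push a duplicate
                # entry and skip the stale one when it surfaces.
                heapq.heappush(heap, (nd, v))
    return dist

# Invented terrain costs, roughly in the spirit of Weewar:
terrain = {
    'base':     [('plains', 1), ('mountain', 4)],
    'plains':   [('swamp', 2), ('enemy', 5)],
    'mountain': [('enemy', 1)],
    'swamp':    [('enemy', 1)],
    'enemy':    [],
}
print(dijkstra(terrain, 'base'))  # cheapest path to 'enemy' costs 4
```

The duplicate-push trick works, but it is exactly the kind of workaround the rest of this post is about.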

Current hurdles

A Python implementation of the above algorithm (with the standard heapq module) highlights two problems:

  1. Finding an element in the heap is inefficient (an O(n) scan).
  2. Adjusting the position of an item in the heap after the item has been modified (the decrease-key operation) is not supported.

The best one can do is remove the node (itself an O(n) operation) and then re-insert it into the heap.   Worse, Python’s heapq module does not even offer an easy way to remove a node.
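To make this concrete, here is a sketch of the workaround heapq forces on you (the decrease_key helper below is mine, not part of heapq):

```python
import heapq

def decrease_key(heap, node, new_priority):
    """Simulate decrease-key on a heapq-managed list of (priority, node)."""
    # 1. Finding the entry requires a linear scan: O(n).
    index = next(i for i, (_, n) in enumerate(heap) if n == node)
    # 2. heapq has no removal API either; overwrite the slot with the
    #    last element and restore the heap invariant: O(n) heapify.
    heap[index] = heap[-1]
    heap.pop()
    heapq.heapify(heap)
    # 3. Finally, re-insert with the new priority: O(log n).
    heapq.heappush(heap, (new_priority, node))

heap = [(5, 'a'), (7, 'b'), (9, 'c')]
heapq.heapify(heap)
decrease_key(heap, 'c', 1)
print(heap[0])  # (1, 'c') now sits at the top
```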

PriorityQ to the rescue

To fix this I have developed the priorityq Python module.   It has the following advantages over the traditional heapq module:

  1. Finding a value is an O(1) operation.
  2. Once a value is updated, its position can be re-adjusted without a removal and re-insertion.
  3. The heap storage strategy can be changed transparently, without peeking into the internals of the implementation.
  4. Seamless integration of custom comparators.

The new implementation with the priorityq module is very simple and can be found here.

How does it work?

The reason this works is delightfully simple.  Existing implementations refer directly to the object that is stored in the heap.   PriorityQ does something slightly different.  Instead of dealing with the objects directly, it wraps each object in an opaque handle, which is what is returned to the user (of the library).

The user, in turn, passes these handles back to the queue to, well, do things!    Though a value can be modified, its handle remains the same and, more importantly, the handle (opaquely) contains all the information the underlying heap storage implementation needs to find and identify an existing element.
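To illustrate the pattern, here is a self-contained sketch (not priorityq’s actual internals; all names are mine, and it assumes distinct values for brevity).  Each value is wrapped in a handle that records its current slot, and a side map from value to handle makes find O(1), while adjust sifts the handle back into place with no removal and re-insertion:

```python
class Handle:
    """Opaque handle returned to the caller; remembers the value's slot."""
    def __init__(self, value, index):
        self.value = value
        self.index = index

class HandleHeap:
    """A min-heap that hands out handles (illustrative sketch only)."""
    def __init__(self):
        self.items = []       # Handle objects kept in heap order
        self.by_value = {}    # value -> Handle, so find is O(1)

    def push(self, value):
        handle = Handle(value, len(self.items))
        self.items.append(handle)
        self.by_value[value] = handle
        self._sift_up(handle.index)
        return handle

    def find(self, value):
        return self.by_value.get(value)

    def adjust(self, handle, new_value):
        """Change a key in place: sift the handle, no removal needed."""
        del self.by_value[handle.value]
        handle.value = new_value
        self.by_value[new_value] = handle
        self._sift_up(handle.index)
        self._sift_down(handle.index)

    def pop(self):
        top = self.items[0]
        last = self.items.pop()
        if self.items:
            self.items[0] = last
            last.index = 0
            self._sift_down(0)
        del self.by_value[top.value]
        return top.value

    def _swap(self, i, j):
        self.items[i], self.items[j] = self.items[j], self.items[i]
        self.items[i].index, self.items[j].index = i, j

    def _sift_up(self, i):
        while i > 0 and self.items[i].value < self.items[(i - 1) // 2].value:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < len(self.items) and self.items[c].value < self.items[smallest].value:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

pq = HandleHeap()
pq.push(5)
handle = pq.push(9)
pq.push(7)
pq.adjust(handle, 1)   # decrease-key through the handle, in place
print(pq.pop())        # -> 1
```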


The examples folder (in the package) contains a few examples that exercise real use cases.   Thankfully the 9th DIMACS Implementation Challenge provided just the arena, with a few really good graphs.

The example can be run as follows:

python examples/dijkstra/      


python examples/dijkstra/ examples/dijkstra/data/   10

The above runs 10 queries over randomly selected source and destination nodes in a graph containing about 320,000 nodes and 800,000 edges.   The times were (as expected) proportional to the number of nodes processed (ie searched and adjusted).  The following times were recorded for the different heap storage types:

Storage Strategy | Total Nodes Processed | Average Nodes Per Second
List Heap        | –                     | –
Binary Heap      | –                     | –
Binomial Heap    | TBD                   | TBD
Pairing Heap     | TBD                   | TBD

What’s coming next?

In the upcoming parts of this series, I will talk about:

  1. The JS (and Swift) ports of this library
  2. More heap storage strategies (like binomial, pairing and Fibonacci heaps).
  3. Adding more “standard” algorithms that can leverage the “find” and “decrease-key” features.

Useful links

Python Heapq Module

Java PriorityQueue

PriorityQueue Java Source

The skyline problem

Tradeoffs in messaging oriented architectures

Messaging systems are a very important component of the data pipeline in any large scale system.   Consider a simple web application that serves all its requests online (ie directly from the source of truth data store without any preprocessing or offline index generation):


Pretty soon, as load increases and horizontal scaling schemes (partitioning, replication, LB etc) have been exhausted, strict consistency has to be relaxed by separating the read and write paths.  This necessitates message passing (or event dispatching) across the different components (services) in your ecosystem, resulting in (a broader version of) the following, where each tier is actually load-balanced as a cluster for high availability:

Message passing in a service-oriented architecture (SOA) is critical for scalability and availability (from enabling eventual consistency, to offline processing and analytics, to buffering of event firehoses and so on).   In the simplest and most abstract terms, message passing can be thought of as firing off events (user actions, metrics, errors, conditions, sensor data, stock prices etc) for asynchronous consumption and processing by one or more consumers, depending on application needs.    Several systems enable messaging, eg Kafka, Google Pub/Sub, Amazon SQS, RabbitMQ, ZeroMQ and others.  While the overarching goal is asynchronous message delivery and consumption, these systems vary greatly in delivery semantics, persistence, ordering, access-pattern optimization and so on.

Delivery Semantics

When messages are sent/fired, their delivery is never guaranteed in a distributed system (due to failures, corruption, unbounded network partitions etc).   At the highest level there are three delivery guarantees:

  1. exactly-once – Messages are delivered, processed and “verified/acknowledged” exactly once, regardless of failures in the system.  Providing this in a scalable and cost-efficient manner is hard: in practice it requires heavy use of idempotency constraints/tricks, plus expensive infrastructure and coordination.
  2. at-most-once – Messages are delivered, processed and verified at most once (including never).   Unfortunately, if an acknowledgment is never received by the event processor, or the receiver terminates during processing, the message is lost with no trace of its status.
  3. at-least-once – Messages can be delivered repeatedly until a successful verification – by far the simplest guarantee to provide.

Most systems provide the at-least-once guarantee, and really, it takes us a very long way.   By easing the strictness of the guarantees, implementation costs are lowered, and stronger semantics can be built *on top*.  A simple and widespread example of this in practice is TCP, which provides ordering and reliability by building on top of IP (which is unreliable and unordered).
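As a toy illustration of building stronger semantics on top of at-least-once delivery, consider deduplicating redeliveries by message id (a sketch of my own, not any particular broker’s API; in a real system the seen-set would live in durable storage, updated atomically with the side effect):

```python
class IdempotentConsumer:
    """Builds effectively-once processing on top of at-least-once delivery
    by deduplicating on a message id.  In a real system the seen-set would
    live in durable storage, updated atomically with the side effect."""

    def __init__(self, handler):
        self.handler = handler
        self.seen = set()

    def on_message(self, msg_id, payload):
        if msg_id in self.seen:
            return False   # a redelivery: already processed, safely ignored
        self.handler(payload)
        self.seen.add(msg_id)
        return True

processed = []
consumer = IdempotentConsumer(processed.append)
consumer.on_message(1, "debit $10")
consumer.on_message(1, "debit $10")   # broker redelivers after a lost ack
print(processed)   # -> ['debit $10']
```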

Persistence/Durability and Consumer state

Persistence and durability of messages affect delivery and processing requirements for consumers.   This is a major tradeoff between append-only logs and message brokers.  For example, log-oriented systems like Kafka (and Amazon Kinesis, IBM Message Hub and others) are extremely producer/publisher-centric: messages are published (reliably and in order) at very high rates, but it is ultimately up to the design of the consumers (in the architecture) to ensure meaningful processing of messages and advancement of offsets in the message logs/queues.  That is, write scaling is independent of the number (or online/offline state) of consumers.  Conversely, higher durability and persistence guarantees are required, since messages may be consumed offline at a different time from when they were actually published (which puts upward pressure on write latencies).

In comparison, typical message brokers (like RabbitMQ and the other *MQs) are great for enabling complicated topic and routing strategies, but come at the expense of state management across the consumers (performed by the broker).   Additionally, depending on the persistence configuration, the extra cost of state management affects how inexpensively writes can be scaled (hint: it is expensive), so brokers are better suited to low write rates and to “online” consumers rather than offline consumption and processing of messages.   If asynchronicity is needed only for online consumers that consume messages immediately, then persistence constraints can be relaxed, as the need for and significance of “historical” messages may be low (eg in online learning systems).

Interface to a messaging system

Although a messaging system is essentially an abstraction for storing messages for asynchronous consumption, the tradeoffs above influence different usage scenarios.   It so happens that building on top of a simpler abstraction (like a high-throughput, durable, ordered queue) can enable interesting routing strategies and workflows like those possible with a more complicated message broker.   What this boils down to is letting the topology of consumers (and subsequent publishers) guide your guarantees at different parts of the system, which requires a more fine-tuned and manual approach to topology selection and design.   There are ways to approach this more systematically, which is the topic of another blog (spoiler alert – Stream Processing).



Machine learning @ Coursera

[TL;DR] Machine learning is not as hard as I imagined, it is a great and fun toolset to learn, and Andrew Ng’s course on Coursera is the best place to start!

There is something about darkness that brings to life all our fears. It is tempting to say this (illogical) manifestation of fears only occurs during our childhood. But the slightest rattle in the attic or creak on the floorboards is enough to unlock your wildest and most horrific imaginings: perhaps the bogeyman hiding under your bed, or the *monster* lurking outside your house waiting to pounce on you (something I am still terrified of as I put out the garbage bins on collection night).

*But*, shine a light on the darkness the next morning and your fears vanish! The strange, seemingly inexplicable noises and phenomena you observed earlier suddenly seem absurd and can be explained (raccoons on the roof causing the rattle, somebody walking to the kitchen to get a drink, etc).

As strange as this may seem, this fear of darkness and all that lies within is rooted in our prehistoric past.   Weirdly, and worse, annoyingly, this fear transcends the realm of darkness and caves into our modern daily lives, plainly, simply and even boringly, as the FEAR OF THE UNKNOWN.

For me that fear, for years, has been Machine Learning.   Actually, it was a combination of fear, laziness and simply brushing off ML as statistics plus a whole lot of heuristics.   The long Thanksgiving weekend was upon me and I really, really wanted to try something different and “new”.   A lunch with a machine-learning expert-colleague-friend somehow got me curious enough to give it a shot.  And since not a day goes by without hearing how awesome Andrew Ng is in the field, his ML course on Coursera seemed like the most natural place to start!

Beset by notions of the course being heavy on statistics, I’d be lying if I proclaimed any eagerness for it.   However, I was curious to see what the fuss was about!   It seemed every developer, her gardener and his dog were using ML for *something*, so it couldn’t really be as fearsome as I had made it out to be?

You see where this is going, don’t you?   Like a true [hb]ollywood flick, this has all the promise of a carefree ladies’ man relinquishing his bachelor lifestyle when he falls for “the one”!   Well, ok, let us not kid ourselves.    This is real life, not Bollywood!    More like a man learning to appreciate friends he seldom focussed on… or better yet, a hardworking family man learning to appreciate all the things his partner does for him!

Firstly, on ML itself.   It is surprisingly not as heavy on statistics as I had feared.   On the contrary, it is pleasantly focussed on linear algebra and, from what my limited experience has shown me, applies the concept of linear regression to devastating effect.   As scary as that sounds, this is actually the most fun part of ML (apart from the large-scale infra side of things).   Knowing this 10 years ago would have exponentially altered my life, but better now than *in* 10 years’ time!

Secondly, the course is a testament to the hard work Andrew Ng has put in personally, not only in the study of the field, but also in improving himself as a presenter.   He has clearly taken a topic of great scope and distilled it in a clear and structured manner.   It is almost as if he has a knack for predicting the kinds of questions you would have about a topic, and he proceeds to address them immediately.   Each lesson builds on the previous chapters and is actually very easy to follow.   Even the math is layered in such a way that you can take as much as you like and go as deep as you want without it being a hurdle to proceeding (for example, the derivations of the cost and gradient functions, while not explained, are extremely fun exercises that you should try out – coming in another blog).

Finally, the choice of language.   At first, being faced with Octave incited the same gasp it would incite in any self-respecting developer.  But not having to implement (and optimise) numeric computing routines is a huge blessing!    Even for an Octave/Matlab newbie (or at least someone who had not used it in 20 years), the language and environment were painless to get (back) into.

There *are* a couple of ways the course could be improved.   Firstly, the language.  While Octave was a pretty appropriate environment for this exercise, I am not sure the same could not have been achieved with NumPy/SciPy or other “popular” languages.   This is not a big deal, just putting it out there.   It definitely gives me a chance to implement a lot of the algorithms in another language like Haskell or Swift!   And if I had to *really* pick on two things about Octave, they would be:

  1. The lack of a good debugger.  The only way to set breakpoints, it seems, is via the dbstop function, which is extremely unintuitive.  How about a pythonesque “break” or “set_trace” statement?
  2. This was a major annoyance for me: 1-based indexing.   This is fine when your equations all refer to and start with “1”.  But when you in fact introduce a 0th term, just the cognitive load of translating between the indexes added a good 30% to my assignment times!

Then there were the quizzes.  Personally, I am not a fan of “exam”-type assessments – I could do assignments/projects all day!   Even within the quizzes, I loved the numerical ones, as I knew how to apply the formulas I had learnt, but the ones requiring more analysis of the wording left me less at ease.   My OCD mind always ends up dissecting the hidden meaning behind how questions are worded, and I end up making wrong assumptions.

But overall, a very effective and awesome course for teaching the concepts of ML, whether you are an experienced developer or a mathematician.    I definitely recommend it (if there is still anybody out there who hasn’t done the course yet!).

Design of an equation editor

Each time I buy a new laptop, I copy over and back up the contents of my old laptop, and it all just keeps accumulating.   I was taking a trip down memory lane and stumbled upon some really old code I had written (about 16 years old to be precise – now here on github). You know the code is *really* old because it is a collection of applets (yep, Java applets).   One thing that really caught my attention was an equation editor I wrote back then (you can get it up and running from the repo, but here is a screenshot of it):


I am still surprised it worked as well as it did back then, given how certifiably little I knew.

And given that I wanted to blog a few of my learnings from my first machine-learning course, not having an easy way to write equations on WordPress (short of taking screenshots of rendered latex) rekindled my interest in an equation editor (for the modern web?).   So in the coming posts I’ll go through the design of a system that allows one to easily author and host equations (and math) for the web without having to a) create multiple copies or b) copy and paste between different rendered outputs.

Our math editor must have the following essential “features”:

  1. Perform as much as possible with the keyboard (ie latex style) without having to insert a symbol by clicking at some icon (see above screenshot).
  2. No “rendering” step ie any output must be directly reflected in the editor area so that the author can see the result live in the one place as they are typing the equations/math.
  3. Allow for both short snippets of math as well as reasonably sized proofs that can span paragraphs.
  4. Hosting, sharing and privacy controls.
  5. Renderable as an image in any blogging platform without requiring plugins to be installed.
  6. Customizable templates (ie new summation templates etc) in a secure manner.

That seems like a reasonable starting point.   In the next blog, I’ll go into a bit more detail on what a possible API/data model for our content might look like.   It is worth noting that a lot of this will build on top of existing awesome tools like latex and MathML.


A libev tutorial and wrapper

I’ve been asked to help people out with libev example code and usage for a while, but I never got around to it since I hadn’t used libev in ages.  However, I now have a need for a simple server framework (more on this in another post), so I figured I’d publish a simple libev tutorial.  More importantly, I will create a C library that wraps libev to handle and maintain multiple connections in a clean and extensible way.

A quick introduction to libev is in order.  Back in the day, handling multiple network client connections was done either with multiple threads (one thread per connection) or via asynchronous APIs that multiplexed io events across the various connections.  Several APIs existed to enable the latter (select/poll, epoll, kqueue etc).  Each method had its own performance characteristics and, worse still, platform-dependent compatibility issues.  To get around this, libevent was developed to provide a uniform interface hiding platform-specific details.  Libev, developed later, was designed to simplify and clean up several aspects of libevent (eg a stripped-down core, thread safety, low watcher overhead etc).

The core concept in libev is the event loop – a loop that runs continuously, listening for registered events and calling the associated callback functions.   Events can be io events like file descriptors becoming ready for reads/writes, socket connections being accepted, and so on.

With that it is time to dive in.

Let us first start with a socket server structure that is the context for all things related to a multi-client TCP server:

struct LEWSocketServer {
    // A pointer to the event loop that is handling event on 
    // this server socket.
    struct ev_loop *eventLoop;

    // Host the server socket is bound to (not used for now)
    char *host;

    // Port the server is running on
    int port;

    // The socket associated with the server.
    int serverSocket;
    struct ev_io acceptWatcher;

    // A listener structure for events on this socket.
    LEWSocketServerListener *listener;
};

A simple server socket API would begin with being able to start a server:

LEWSocketServer *lew_start_server(const char *host,
                                  int port,
                                  LEWSocketServerListener *listener);

LEWSocketServerListener is a structure that helps clients manage the callbacks associated with individual connections, instead of having to deal directly with the libev API.

typedef struct LEWSocketServerListener {
    /**
     * Called when a connection was accepted.  This is an opportunity
     * for the handler of this method to create any connection specific data
     * to be created and returned so that any further activities on the connection
     * will be invoked on this object.
     * This method must NOT return NULL.  If it returns NULL, then the connection is refused.
     */
    void *(*createConnectionContext)();

    /**
     * Called when data has been received for this connection from a client.
     */
    void (*processData)(LEWConnection *connection, const char *data, size_t length);

    /**
     * Called to indicate connection was closed.
     */
    void (*connectionClosed)(LEWConnection *connection);

    /**
     * Called to ask the connection handler for data that can be written.
     * The buffer is an output parameter to be updated by the listener.
     * Return the number of bytes available in the buffer.
     */
    size_t (*writeDataRequested)(LEWConnection *connection, const char **buffer);

    /**
     * Called to indicate that nWritten bytes of data have been written and that the connection
     * object should update its write buffers to discard this data.
     */
    void (*dataWritten)(LEWConnection *connection, size_t nWritten);
} LEWSocketServerListener;

The general structure of a connection follows:

  1. The server socket is in listen state.
  2. When a new connection is accepted, a client socket is created and added to libev’s event loop for read events.
  3. Write events for the client socket are initially not enabled. The level-triggered nature of libev (by default) would cause unnecessary write-event callbacks even when there is no data to be sent, so a design choice was made to make the API caller responsible for initiating writes when it has data to send.
  4. When data is available to be read, it is passed to the listener’s processData callback (along with the connection object associated with the client).
  5. When the caller has data to write, it sets the connection’s writeable attribute.
  6. When the writeable attribute on a connection is set, write events on the client socket are enabled, which invokes the writeDataRequested method on the caller until it returns 0 (bytes).
  7. Additionally, the library calls the dataWritten callback on the listener so that the client can update its own write buffers (to pop off the written/sent data).

With this the echo server now looks like:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "server.h"

typedef struct EchoConnection {
    char readBuffer[4096];
    int length;
} EchoConnection;

/**
 * Called when a connection was accepted.  This is an opportunity
 * for the handler of this method to create any connection specific data
 * to be created and returned so that any further activities on the connection
 * will be invoked on this object.
 * This method must NOT return NULL.  If it returns NULL, then the connection is refused.
 */
void *createConnectionContextCallback()
{
    EchoConnection *out = calloc(1, sizeof(EchoConnection));
    return out;
}

/**
 * Called when data has been received for this connection from a client.
 */
void processDataCallback(LEWConnection *connection, const char *data, size_t length)
{
    EchoConnection *echoconn = (EchoConnection *)lew_connection_get_context(connection);
    memcpy(echoconn->readBuffer, data, length);
    echoconn->length = length;
}

/**
 * Called to indicate connection was closed.
 */
void connectionClosedCallback(LEWConnection *connection)
{
    printf("Connection closed...\n");
}

/**
 * Called to ask the connection handler for data that can be written.
 * The buffer is an output parameter to be updated by the listener.
 * Return the number of bytes available in the buffer.
 */
size_t writeDataRequestedCallback(LEWConnection *connection, const char **buffer)
{
    printf("Write data requested...\n");
    EchoConnection *echoconn = (EchoConnection *)lew_connection_get_context(connection);
    *buffer = echoconn->readBuffer;
    return echoconn->length;
}

/**
 * Called to indicate that nWritten bytes of data have been written and that the connection
 * object should update its write buffers to discard this data.
 */
void dataWrittenCallback(LEWConnection *connection, size_t nWritten)
{
    EchoConnection *echoconn = (EchoConnection *)lew_connection_get_context(connection);
    echoconn->length -= nWritten;
}

int main(void)
{
    LEWSocketServerListener listener;
    listener.createConnectionContext = createConnectionContextCallback;
    listener.processData = processDataCallback;
    listener.connectionClosed = connectionClosedCallback;
    listener.writeDataRequested = writeDataRequestedCallback;
    listener.dataWritten = dataWrittenCallback;

    lew_start_server(NULL, 9999, &listener);
    return 0;
}

While there is still some work to be done to handle edge cases, that would be relegated to the library rather than leaking out to the client code, and most of the client code deals with connection logic rather than messing about with event loops.

Full source code is available at:

Elements of Programming Interviews

This is an index of problems from THE book, solved in Haskell (and counting). Some of the trivial ones will only be attempted on demand!

Chapter 5 – Primitive Types

EPI 5.1: Find the number of bits set in a number (and Parity)

EPI 5.4: Closest integers with the same weight

EPI 5.5 – Generating Power Sets

EPI 5.6 String to Integer inter conversion functions

EPI 5.7 – Generalised conversion between bases

EPI 5.8 – Spreadsheet Column Encoding

EPI 5.9 – Elias encoding and decoding

EPI 5.10 – Greatest Common Divisor

Chapter 6 – Strings and Arrays

EPI 6.1 Dutch Flag Partition

EPI 6.3 – Max Difference

EPI 6.4 – Generalized Max Difference

EPI 6.5 – Subset summing to 0 mod n

EPI 6.6 – Longest contiguous increasing subarray

EPI 6.13 – Rotating an array

EPI 6.15 – Printing a 2D array in spiral order

EPI 6.18 – Run-Length Encoding

EPI 6.19 – Reversing words in a string

EPI 6.18 – Run-length Encoding


Implement functions to run length encode a string and decode the RLE value of an encoded string.

For example “aaaabcccaa” should be encoded to “4a1b3c2a”, while “3e4f2e” would be decoded to “eeeffffee”.


Encoding is straightforward:

run_length_encode :: [Char] -> [Char]
run_length_encode xs = run_length_encode' 0 '|' xs
    where
        run_length_encode' 0 _ [] = []
        run_length_encode' 0 _ (x:xs) = run_length_encode' 1 x xs
        run_length_encode' count curr_ch [] = (show count) ++ [curr_ch]
        run_length_encode' count curr_ch (x:xs)
            | curr_ch == x = run_length_encode' (count + 1) curr_ch xs
            | otherwise = (show count) ++ [curr_ch] ++ run_length_encode' 1 x xs

Decoding is also fairly straightforward, except we just need to accumulate the “count” before that many characters can be output:

import Data.Char (isDigit, digitToInt)

run_length_decode :: [Char] -> [Char]
run_length_decode xs = run_length_decode' 0 xs
    where
        run_length_decode' _ [] = []
        run_length_decode' count (x:xs)
            | isDigit x = run_length_decode' ((count * 10) + (digitToInt x)) xs
            | otherwise = (replicate count x) ++ run_length_decode' 0 xs

EPI 6.15 – Printing a 2D array in spiral order


Given an nxm 2D array of integers, print it in spiral order.

Eg for a 3×3 matrix (with the following implicit values):

1 2 3
4 5 6
7 8 9

Would be printed as “1 2 3 6 9 8 7 4 5”


The straightforward solution is to have 4 iterations, and in each iteration print along one of the two directions (horizontal and vertical) alternately.

However, a more intuitive solution is to use direction vectors to make a tour starting from index (0,0), computing the next point from the current direction and position.

First we create the concept of directions along with a couple of helper functions:

data Direction = North | East | South | West
-- Return the clockwise direction of a given direction
clockwise_of North = East
clockwise_of East = South
clockwise_of South = West
clockwise_of West = North

-- Returns the vector representing the direction
vector_of North = (0, -1)
vector_of South = (0, 1)
vector_of East = (1, 0)
vector_of West = (-1, 0)

-- The primary coordinate that will be updated when traversing in a particular direction
primary_coord North = 1
primary_coord South = 1
primary_coord East = 0
primary_coord West = 0

-- Given a direction and point, returns the next and previous points in the direction.
next_pt dir (x,y) = ((x + fst (vector_of dir)),(y + snd (vector_of dir)))
prev_pt dir (x,y) = ((x - fst (vector_of dir)),(y - snd (vector_of dir)))

Now we simply use the above helpers to start and iterate through a tour:

print_spirally width height = print_spirally' East (0,0) 0 [width, height]
    where
        print_spirally' dir (x,y) t [w,h]
            | w == 0 || h == 0 = []
            | t < [w,h] !! curr_coord = (y * width + x + 1) : print_spirally' dir (next_pt dir (x,y)) (t+1) [w,h]
            | otherwise = print_spirally' next_dir (next_pt next_dir (prev_pt dir (x,y))) 0 [next_w,next_h]
            where
                curr_coord = primary_coord dir
                next_dir = clockwise_of dir
                next_dir_vector = vector_of next_dir
                next_w = w - (abs (fst next_dir_vector))
                next_h = h - (abs (snd next_dir_vector))

Couple of things to note:

* w,h represent the “remaining” width and height in the spiral as each time there is a change in direction, the available coordinate size reduces by one (height if beginning vertically, width if beginning horizontally).
* t is the number of values printed in a given direction (when this value reaches “w” or “h” depending on the direction, direction is rotated and t is reset to 0).
* When the direction needs to change (in the otherwise case), the “current” point is one beyond the last point in that direction. For this reason the next point is evaluated by applying the new direction to the previous point.

EPI 6.13 – Rotate an array


Given an Array of n elements, design an algorithm for rotating an Array right by i positions. Only O(1) additional storage is allowed.


The natural solution is to repeatedly move the value at index k to index k + i (mod n), n times. This works in a single pass only when gcd(n, i) = 1. A general solution, however, performs m jumps of size i, l times, each time starting from the next index.

The first helper function is to perform m jumps of “size” each starting from a given index, wrapping around if necessary. For example if A = [1,2,3,4,5,6,7,8,9,10],

m_rotations A 0 3 3

would cause the following jumps: 1 -> 4 -> 7 (each of those values moves 3 slots to the right, with 7 overwriting the 10 at index 9), with A resulting in:

[1, 2, 3, 1, 5, 6, 4, 8, 9, 7]
import Data.Array (elems, (!), (//))

-- arrayFromList is a small helper (defined elsewhere) that builds a
-- Data.Array from a list, starting at the given index.
m_rotations xs index size m = elems (m_rotations' (arrayFromList xs 0) index (xs!!index) size m)
    where
        len = length xs
        m_rotations' arr curr_index curr_value size numleft
            | curr_index < 0 || size <= 0 || numleft <= 0 = arr
            | otherwise = m_rotations' (arr // [(next_index, curr_value)]) next_index next_value size (numleft - 1)
            where
                next_index = mod (curr_index + size) len
                next_value = arr!next_index

Now we create the actual rotator method that calls m_rotations k times, where k = gcd(|A|, j).

This is:

rotate_array xs j = rotate_array' xs 0
    where
        lxs = length xs
        j' = mod j lxs
        gcd_lxs_j = greatest_common_divisor lxs j'
        numtimes = div lxs gcd_lxs_j
        rotate_array' xs start_index
            | start_index >= gcd_lxs_j = xs
            | otherwise = m_rotations ys (j' - (start_index + 1)) j' numtimes
            where ys = rotate_array' xs (start_index + 1)

A simpler algorithm to perform this is very similar to reversing words in a sentence:

rotate_array_simple xs j = reverse (take j rxs) ++ reverse (drop j rxs) 
        where rxs = reverse xs

EPI 5.10 – Greatest Common Divisor


Design an algorithm for computing the GCD of two numbers without using multiplication, division or the modulus operator.

The GCD of two numbers can be computed by recursively subtracting the smaller number from the larger until one of the numbers is 0, at which point the non-zero value is the GCD.
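The plain subtraction scheme described above can be sketched directly (a minimal sketch; gcdSub is my name for it, not from the post):

```haskell
-- Subtraction-only GCD: repeatedly subtract the smaller from the larger.
-- gcdSub is a hypothetical name used for illustration.
gcdSub :: Int -> Int -> Int
gcdSub x 0 = x
gcdSub 0 y = y
gcdSub x y
    | x > y = gcdSub (x - y) y
    | y > x = gcdSub x (y - x)
    | otherwise = x  -- x == y, which is the GCD
```

For example, gcdSub 12 18 reduces through 12,6 then 6,6 and returns 6.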

However this can be improved by “quickly” eliminating factors of two (by inspecting the least significant bits) and doubling and halving values (via left and right shifting by 1 respectively).

import Data.Bits

greatest_common_divisor x 0 = x
greatest_common_divisor 0 y = y
greatest_common_divisor x y
    | x_is_even && y_is_even = (greatest_common_divisor (x `shiftR` 1) (y `shiftR` 1)) `shiftL` 1
    | x_is_odd && y_is_even = greatest_common_divisor x (y `shiftR` 1)
    | x_is_even && y_is_odd = greatest_common_divisor (x `shiftR` 1) y
    | x > y = greatest_common_divisor (x - y) y
    | x < y = greatest_common_divisor x (y - x)
    | otherwise = x
    where
        x_is_even = (x .&. 1) == 0
        y_is_even = (y .&. 1) == 0
        x_is_odd = not x_is_even
        y_is_odd = not y_is_even

EPI 6.6 – Longest contiguous increasing subarray


An array is increasing if each element is less than its succeeding element except for the last element.

Given an array A of n elements return the beginning and ending indices of a longest increasing subarray of A.


Let S[i] be the longest increasing subarray ending at index i. Then:

S[i] = (a, i) if A[i] > A[i – 1], where (a, b) = S[i – 1]
S[i] = (i, i) otherwise

and the answer is the S[i] with the largest span.

In haskell this would be:

longest_contig_inc_subarray :: (Ord a) => [a] -> (Int, Int)
longest_contig_inc_subarray [] = (-1, -1)
longest_contig_inc_subarray (x:xs) = longest_contig_inc_subarray' (0, 0) (0, x, 0) xs
    where
        -- track the best run seen so far, not just the current one
        better (bi, bj) (i, j) = if j - i > bj - bi then (i, j) else (bi, bj)
        longest_contig_inc_subarray' best (i, aj, j) [] = better best (i, j)
        longest_contig_inc_subarray' best (i, aj, j) (x:xs)
            | x > aj = longest_contig_inc_subarray' best (i, x, j + 1) xs
            | otherwise = longest_contig_inc_subarray' (better best (i, j)) (j + 1, x, j + 1) xs

A heuristic to improve the best case complexity (it does nothing in the worst case) is to realise that if the length of the longest subarray seen up to i is L (and A[i + 1] < A[i], indicating the end of that subarray), then a longer increasing subarray must contain *at least* L + 1 elements. So we can skip L items ahead and check backwards.

The code for this is (here the start index and the length of the subarray are returned instead):

-- Returns the size of the largest increasing "prefix" in an array
largest_inc_prefix [] = 0
largest_inc_prefix (x:[]) = 1
largest_inc_prefix (x:y:xs)
        | x <= y = 1 + largest_inc_prefix (y:xs)
        | otherwise = 1

-- Returns the size of the largest decreasing "prefix" in an array
largest_dec_prefix [] = 0
largest_dec_prefix (x:[]) = 1
largest_dec_prefix (x:y:xs)
        | x >= y = 1 + largest_dec_prefix (y:xs)
        | otherwise = 1

lcisa :: (Ord a) => [a] -> (Int, Int)
lcisa [] = (-1,-1)
lcisa xs = lcisa' (0,1) 0 xs
    where
        lcisa' (start,maxlen) i [] = (start,maxlen)
        lcisa' (start,maxlen) i xs
            | nextlen > maxlen = lcisa' nextbest
                                    (i + maxlen + inc_prefix)
                                    (drop inc_prefix rest)
            | otherwise = lcisa' (start,maxlen) (i + maxlen) rest
            where
                first_l = take maxlen xs
                rest = drop maxlen xs
                dec_prefix = largest_dec_prefix (reverse first_l)
                inc_prefix = largest_inc_prefix rest
                nextlen = inc_prefix + dec_prefix
                nextbest = (i + maxlen - dec_prefix, nextlen)

EPI 6.19 – Reversing words in a sentence


Reverse the words in a sentence.


We can define a word as any substring separated by one or more spaces. To do this in constant space, simply reverse the entire sentence, and then reverse each word within.

In haskell, the recursive solution would be:

import Data.Char (isSpace)

reverse_words :: [Char] -> [Char]
reverse_words [] = []
reverse_words xs = initial_spaces ++ (reverse after_spaces) ++ reverse_words remaining
    where
        initial_spaces = takeWhile isSpace xs
        spaces_dropped = dropWhile isSpace xs
        after_spaces = takeWhile (\x -> not (isSpace x)) spaces_dropped
        remaining = drop (length after_spaces) spaces_dropped

-- reverse_words reverses each word in place; composing it with a full
-- reverse gives the actual word-order reversal:
reverse_sentence :: [Char] -> [Char]
reverse_sentence = reverse_words . reverse

This is clearly not O(1) in space usage. The constant space solution would require maintaining array structures and swapping entries within the array.
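A sketch of that array-based approach (the names reverseRange and reverseWordsArr are mine, assuming the Data.Array API used elsewhere in these posts): reverse the whole array first, then reverse each maximal run of non-space characters in place.

```haskell
import Data.Array
import Data.Char (isSpace)

-- Reverse the elements of arr between indexes i and j (inclusive) by swapping.
reverseRange :: Array Int Char -> Int -> Int -> Array Int Char
reverseRange arr i j
    | i >= j = arr
    | otherwise = reverseRange (arr // [(i, arr!j), (j, arr!i)]) (i + 1) (j - 1)

-- Reverse the words of a sentence: reverse everything, then each word.
reverseWordsArr :: String -> String
reverseWordsArr s = elems (go rev 0)
    where
        n = length s
        rev = reverseRange (listArray (0, n - 1) s) 0 (n - 1)
        go arr i
            | i >= n = arr
            | isSpace (arr!i) = go arr (i + 1)
            | otherwise = let j = wordEnd arr i in go (reverseRange arr i j) (j + 1)
        wordEnd arr i
            | i + 1 < n && not (isSpace (arr!(i + 1))) = wordEnd arr (i + 1)
            | otherwise = i
```

With immutable arrays each swap still copies, so this only mirrors the in-place structure; a true O(1)-space version would need mutable arrays (e.g. Data.Array.ST).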

EPI 6.5 – Subset summing to 0 mod n


Given an array A of n integers, find a subset of indices S = (i0, i1, i2, … ik) such that (A[i0] + A[i1] + … + A[ik]) mod n = 0. This is the 0 mod n subset problem.


The general case (0 mod k for arbitrary k) is NP-complete, however 0 mod n can be computed in O(n) time and O(n) space.  Moreover, for the modulus n a solution is always guaranteed to exist, as seen below.

First create the array mod_till_k such that

mod_till_k[i] = Sum(A[0] .. A[i]) mod n

This is implemented below:

mod_till_k :: [Integer] -> [Integer]
mod_till_k xs = mod_till_k' 0 xs
    where
        n = length xs
        mod_till_k' :: Integer -> [Integer] -> [Integer]
        mod_till_k' _ [] = []
        mod_till_k' prev (x:xs) = curr : (mod_till_k' curr xs)
            where curr = mod (x + prev) (toInteger n)

Now if mod_till_k[x] == 0, then the subset [0,x] is a solution.
If there is no x such that mod_till_k[x] == 0, then there WILL exist some a < b such that mod_till_k[a] == mod_till_k[b] (pigeonhole: n prefix values all falling in the n − 1 non-zero residues). In this case the solution is [a + 1, b].

This search can be done in a single linear pass:

zero_mod_n_subset :: [Integer] -> (Integer, Integer)
zero_mod_n_subset xs = zero_mod_n_subset' 0 index_table (mod_till_k xs)
    where
        n = length xs
        index_table = array (0, toInteger (n - 1)) [(toInteger i, -1) | i <- [0..n - 1]]
        zero_mod_n_subset' i index_table (x:xs)
            | x == 0 = (0,i)
            | index_table!x /= -1 = (index_table!x + 1, i)
            | otherwise = zero_mod_n_subset' (i + 1) (index_table // [(x, i)]) xs

Here index_table, which is initialised to -1, keeps track of previous positions where a particular mod was last seen.

EPI 6.4 – Generalized Max Difference

Following on from EPI 6.3, we have three generalized versions of max difference:

Problem 1. Compute the maximum value of (A[j0] – A[i0]) + (A[j1] – A[i1]) such that i0 < j0 < i1 < j1.

The simple solution is O(n^4), by iterating over all combinations of i0, j0, i1 and j1.

This could be improved to be O(n^2) by applying the O(n) solution to each entry in the array.

An O(n) solution with O(n) space is to compute the best solution in the forward and reverse directions (similar to max_increase_from_index in EPI 6.3) and use the two values in conjunction.  This is:

max_increase_forward :: (Num a, Ord a) => [a] -> [(Integer, a, a)]
max_increase_forward [] = []
max_increase_forward (x:xs) = max_increase_forward' 0 (0,x) (x:xs)
    where
        max_increase_forward' i curr_max [] = []
        max_increase_forward' i (j,aj) (x:xs)
            | x >= aj = (j,aj,x - aj) : (max_increase_forward' (i + 1) (j,aj) xs)
            | otherwise = (i,x,0) : (max_increase_forward' (i + 1) (i,x) xs)

max_increase_backward :: (Num a, Ord a) => [a] -> [(Integer, a, a)]
max_increase_backward xs = max_increase_backward' 0 xs

max_increase_backward' i [] = []
max_increase_backward' i [x] = [(i,x,0)]
max_increase_backward' i (x:xs)
    | x <= aj = (j,aj,aj - x) : rest
    | otherwise = (i,x,0) : rest
    where
        rest = max_increase_backward' (i + 1) xs
        (j,aj,diff) = head rest

Now it is a matter of iterating the two results to find the maximum – done in O(n):

max_increase_k2 :: (Num a, Ord a) => [a] -> a
max_increase_k2 xs = maximumBy compare (max_increase_k2_iter fs rs)
    where
        fs = drop 1 (take (length xs - 1) (max_increase_forward xs))
        rs = drop 2 (max_increase_backward xs)
        max_increase_k2_iter [] [] = []
        max_increase_k2_iter ((a1,a2,a3):fs) ((b1,b2,b3):rs) = (a3 + b3) : (max_increase_k2_iter fs rs)

Problem 2. Compute the maximum value of Sum(A[jt] – A[it]) for t = 0 … k − 1, such that i0 < j0 < i1 < j1 < … < ik−1 < jk−1, for a fixed k.


Problem 3. Repeat Problem 2 where k is any value between 0 and floor(n / 2).


EPI 6.3 – Max Difference

A robot needs to travel along a path that includes several ascents and descents.  As it ascends, it uses up energy stored in its battery.  As it descends, it converts the potential energy to charge the battery.  Assume the conversion is perfect (i.e. descending X units restores as much energy as was expended in an ascent of X units).  Also assume that the “distance” travelled is irrelevant and only the height determines the energy lost and gained.


Given a set of 3 dimensional coordinates which plot the ascents and descents during a robot’s path, compute the minimum battery capacity required.


The O(n^2) solution (for each index find the maximum “increase” going forward, then pick the index with the maximum increase) is:

third (a,b,c) = c

max_diff_nsquared xs = maximumBy (comparing third) 
                        [(i,j,xj - xi) | (i,xi) <- zip [0.. length xs - 1] xs, 
                                         (j,xj) <- zip [0.. length xs - 1] xs, i < j]

In the naive method, finding the maximum increase at an index is O(n^2). This can be brought down to a linear algorithm with O(n) space:

max_increase_from_index :: [Int] -> [(Int,Int)]
max_increase_from_index [] = []
max_increase_from_index (x:[]) = [(0,x)]
max_increase_from_index (x:xs)
    | x <= aj = (j,aj):max_inc_from_next
    | otherwise = (length max_inc_from_next, x) : max_inc_from_next
    where
        max_inc_from_next = max_increase_from_index xs
        (j, aj) = head max_inc_from_next

This returns a list of pairs (j, A[j]) such that for each index i, j is the index of the maximum element from i onwards. Note that the indexes j are counted from the “reverse” (end) of the list.

To get the best i, j such that i < j and A[j] – A[i] is maximised:

lxs = length xs - 1
maximumBy (comparing third) [(i,lxs - j,aj-ai) | (i,ai,(j,aj))<- zip3 [0..((length ms) - 1)] ms ms2]

Giving the final solution of:

minimum_robot_battery :: [(Int,Int,Int)] -> (Int,Int,Int)
minimum_robot_battery xs = maximumBy (comparing third) [(i,lxs - j,aj-ai) | 
                            (i,ai,(j,aj)) <- zip3 [0..((length ms) - 1)] ms ms2]
    where
        ms = map third xs
        ms2 = max_increase_from_index ms
        lxs = length xs - 1

Note that we don’t actually have to compute the array of differences. We could simply have passed along and maintained a “curr_max” variable, which would have returned the max difference at the completion of the call.
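That single-pass variant could be sketched like this (maxDiff is my naming, not from the post; it returns only the best difference, carrying the running maximum from the right):

```haskell
-- Best A[j] - A[i] with i < j, in one pass:
-- fold from the right, tracking the largest element seen so far.
maxDiff :: (Num a, Ord a) => [a] -> a
maxDiff [] = 0
maxDiff xs = snd (foldr step (last xs, 0) (init xs))
    where step x (bestJ, bestDiff) = (max x bestJ, max bestDiff (bestJ - x))
```

Note that for a strictly decreasing input it returns 0, which in the robot setting is the sensible answer (the battery never needs capacity while only descending).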

EPI 6.1 Dutch Flag Partition


Write a function that takes an Array A and an index i into A, and rearranges the elements such that all elements less than A[i] appear first, followed by elements equal to A[i], followed by elements greater than A[i].


First we define a couple of auxiliary helper functions to create haskell Arrays from haskell lists and a method to swap elements at two indexes in an Array:

import Data.Array

arrayFromList input startIndex = array bounds [(i + startIndex, x) | (i,x) <- (zip indexes input)]
    where
        bounds = (startIndex, startIndex + (length input) - 1)
        indexes = [0 .. length input - 1]

swapItems xs a b = xs // [(b, xa)] // [(a, xb)]
    where
        xa = xs!a
        xb = xs!b

Now the algorithm. The core of the algorithm is to maintain 4 partitions (within the original Array):

smaller – All items smaller than the pivot element (originally A[i])
equal – All items equal to the pivot item
larger – All items larger than the pivot item
unclassified – Items that have not yet been classified into one of the above arrays.

Intuitively #unclassified = |A| – (#smaller + #equal + #larger), and this becomes 0 at the end.

Initially the region boundaries s, e and l are 0, 0 and |A| – 1 respectively. The following tail recursive solution updates one or more of these regions in each step as it iterates through the array (stopping when the “equal” region “hits” the “larger” region).

dutch_flag_partition xs i = elems (dutch_flag_partition' arrayXS i 0 0 ((length xs) - 1))
    where
        arrayXS = (arrayFromList xs 0)
        pivot = arrayXS ! i
        dutch_flag_partition' xs i s e l
            | e > l = xs
            | (xs!e) < pivot = dutch_flag_partition' (swapItems xs s e) i (s + 1) (e + 1) l
            | (xs!e) == pivot = dutch_flag_partition' xs i s (e + 1) l
            | otherwise = dutch_flag_partition' (swapItems xs e l) i s e (l - 1)

A quick note. In this example (and others) we are using immutable collections. From an efficiency point of view, mutable collections would be, well, more efficient. However we are striving for a balance between efficiency and reasonably simple code that stays close to the guaranteed algorithmic complexities when these subtle differences (immutable vs mutable) are ignored.

EPI 5.9 – Elias encoding and decoding


The Elias encoded version of an integer X is X in binary, PREPENDED by N − 1 zeroes, where N is the number of bits in the binary representation.

So Elias of encoding of 13 (binary = 1101) would be 000 1101 (3 zeroes as length of 1101 = 4).


elias_encode_int :: Int -> [Char]
elias_encode_int x = (take (len - 1) (repeat '0')) ++ xbase2
    where
        xbase2  = intToString 2 x
        len     = (length xbase2)

elias_decode_str :: Int -> [Char] -> Int
elias_decode_str size xs = stringToInt 2 (take size xs)

elias_encode :: [Int] -> [Char]
elias_encode xs = concat (map elias_encode_int xs)

elias_decode_helper :: Int -> [Char] -> [Int]
elias_decode_helper  nzeroes [] = []
elias_decode_helper  nzeroes (f:fs)
        | f == '0' = elias_decode_helper  (1 + nzeroes) fs
        | otherwise = (elias_decode_str (1 + nzeroes) (f:fs)) : (elias_decode_helper  0 (drop nzeroes fs))

elias_decode = elias_decode_helper 0

EPI 5.8 – Spreadsheet Column Encoding


Write a function that converts excel column IDs to corresponding integers with “A” corresponding to 1 and so on.


We will modify the solution in EPI 5.6 slightly so that we have:

indexOf :: Eq a => a -> [a] -> Int
indexOf item xs
    | length firstMatched == 0 = -1
    | otherwise = fst (head firstMatched)
    where
        matchesItem x = snd x /= item
        firstMatched = dropWhile matchesItem 
                        (zip [0 .. (length xs) - 1] xs)

digitIndices :: [Char] -> [Int]
digitIndices digits = [ indexOf (chr i) digits | i <- [0 .. 255]]

spreadsheet_column_encoding :: [Char] -> Int
spreadsheet_column_encoding xs = 
        sum [ digit_to_value digit pos | (pos, digit) <- dig_positions xs]
    where
        reverse_indexes xs = [(length xs) - 1, (length xs) - 2 .. 0]
        dig_positions xs = zip (reverse_indexes xs) xs
        symbols = ['A' .. 'Z']
        ourIndexes = digitIndices symbols
        digit_to_value digit pos = (1 + (ourIndexes !! (ord digit))) * 
                                    floor (26 ** fromIntegral pos)

EPI 5.7 – Generalised conversion between bases


Continuing on from EPI 5.6, this problem is to convert an integer encoded as S1 in base b1, to S2 in base b2:


The easiest way is to use the techniques in EPI 5.6 and simply do:

 base1ToBase2 b1 s1 b2 = intToString b2 (stringToInt b1 s1)

Anybody looking for a version without the intermediate conversion to base 10?
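One possible answer (entirely a sketch of mine, not from the post): do grade-school long division of the base-b1 digit string by b2, collecting the remainders as base-b2 digits, so no base-10 intermediate integer is ever materialised.

```haskell
import Data.Char (digitToInt, intToDigit)

-- Divide a base-b1 digit string by b2: returns the quotient digits
-- (leading zeroes stripped) and the remainder.
divmodDigits :: Int -> Int -> [Int] -> ([Int], Int)
divmodDigits b1 b2 ds = (dropWhile (== 0) qs, r)
    where (qs, r) = foldl step ([], 0) ds
          step (acc, rem0) d = let v = rem0 * b1 + d
                               in (acc ++ [v `div` b2], v `mod` b2)

-- Repeatedly divide by b2, collecting remainders as the output digits.
convertBase :: Int -> Int -> String -> String
convertBase b1 b2 s = go (map digitToInt s) []
    where go [] out = if null out then "0" else map intToDigit out
          go ds out = let (q, r) = divmodDigits b1 b2 ds in go q (r : out)
```

For example, convertBase 2 16 "1101" yields "d". The quotient shrinks on every division, so the loop terminates; intToDigit restricts this sketch to bases up to 16.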

EPI 5.6 String to Integer inter conversion functions


Implement the following methods, which convert (signed) integers to a string and vice versa respectively:

intToString :: Int -> [Char]

stringToInt :: [Char] -> Int

Additionally invalid strings (when converting to Int) must return an error of some sort.


Even though the question was specific to converting to and from base 10, this can be generalised to any base with *very* little changes.  The straightforward solution is:

intToString :: Int -> Int -> [Char]
intToString base x 
            | x < 0 = "-" ++ intToString base (-x)
            | x < base = [intToDigit x]
            | otherwise = (intToString base (div x base)) ++ [intToDigit (mod x base)]

stringToInt :: Int -> [Char] -> Int
stringToInt base (x:xs)
            | x == '-' = - (stringIntHelper base xs)
            | otherwise = stringIntHelper base (x:xs)
            where stringIntHelper base (x:xs)
                    | dx >= base = error ("Invalid digit: " ++ [x])
                    | length xs == 0 = digitToInt x
                    | otherwise = (dx * (floor (fromIntegral base ** fromIntegral digitsLeft))) + (stringIntHelper base xs)
                    where
                        digitsLeft = (length xs)
                        dx = digitToInt x

Here is an alternative and more intuitive version of stringIntHelper above:

stringToIntHelper2 base xs = 
            sum [ digit_to_value digit position |
                (position, digit) <- (zip dig_positions xs)]
    where
        dig_positions = [(length xs) - 1, (length xs) - 2 .. 0]
        digit_to_value digit position = (digitToInt digit) * 
                            floor (base ** fromIntegral position)

This version does not check that each digit is between 0 and “base”, but that is a trivial check to add via an “any” check on the original input (xs).
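That check could look like this (my sketch; the isHexDigit guard protects digitToInt from characters it cannot handle):

```haskell
import Data.Char (digitToInt, isHexDigit)

-- True when every character is a digit whose value is below the base.
-- digitToInt understands 0-9, a-f and A-F, hence the isHexDigit guard.
validDigits :: Int -> String -> Bool
validDigits base = all (\c -> isHexDigit c && digitToInt c < base)
```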

EPI 5.5 – Generating Power Sets


The power set of an alphabet set S is the list of all strings (of any length) that can be formed from the symbols in S.  However, strings that differ only in the order of their characters count as the same entry (i.e. each output corresponds to a subset of S).

For instance, for S = “ABC”, one possible power set is:

“”, “A”, “B”, “C”, “AB”, “AC”, “BC”, “ABC”


The simplest way to think about this is iterating through the numbers 0 to 2^|S| − 1, where each bit in the integer represents whether the symbol at that index of S is present in the output.  So a solution for this is:

import Data.Bits

lowest_set_bit :: Int -> Int
lowest_set_bit x = x .&. complement (x - 1)

lowest_set_bit_index :: Int -> Int
lowest_set_bit_index x = floor (logBase (fromIntegral 2) (fromIntegral (lowest_set_bit x)))

-- clear_lowest_set_bit x = x .&. (x - 1), from the EPI 5.1 post below
set_bit_positions :: Int -> [Int]
set_bit_positions 0 = []
set_bit_positions x = (lowest_set_bit_index x) : (set_bit_positions (clear_lowest_set_bit x))

power_set :: [Char] -> [[Char]]
power_set [] = []
power_set alphabet = [str_from_int x | x <- [0 .. ((1::Int) `shiftL` numAlpha) - 1]]
    where
        numAlpha = length alphabet
        str_from_int x = [alphabet !! i | i <- set_bit_positions x]

EPI 5.4: Closest integers with the same weight

Problem (EPI 5.4):

The weight, W(x) of an integer, x, is the number of bits set (to 1) in binary.

Given a 64 bit unsigned integer x (where W(x) != 0 and W(x) != 64) find y such that W(x) = W(y) and |x – y| is minimized.


An example would make this clear, eg if

X = 4 (dec) = 0100 (bin)

The candidates are:

0001   –   Diff = 3

0010   –   Diff = 2

1000   –   Diff = 4

So Y = 0010 (bin) = 2 (dec).

The solution is to start with X, find the first “01” or “10” pair of adjacent bits starting from the RIGHT (the least significant bit), and then swap those two bits.

In haskell this is (assuming a 64 bit number)

import Data.Bits
closest_neighbour_by_weight x = closest_neighbour_by_weight_aux x 0
    where closest_neighbour_by_weight_aux x i 
            | two_digits /= 0 && two_digits /= 3 = xor x (shiftL 3 i)
            | otherwise = closest_neighbour_by_weight_aux x (i + 1)
              where two_digits = (x `shiftR` i) .&. 3

Essentially starting from the least significant bit (i = 0), we see if the bit at position i and the bit at position i + 1 are the same. If they are the same, we recursively continue with i + 1. If they are NOT the same, the bits are swapped (via the xor x (shiftL 3 i) in the first guard) and that is the solution. The xor with 3 (binary 11, shifted into place) is just an easy way to flip two consecutive bits.

EPI 5.1: Find the number of bits set in a number (and Parity)

The goal is to find the number of set bits in a number, when converted to binary.

So 4 (dec) -> 100 (bin)  –  has 1 bit set.

Similarly 7 -> 111 -> 3

The straightforward solution shown below has complexity of O(n) where n is the length of the input ie 8 bits for char, 16 bits for short, and 32 bits for int (ignoring hardware and compiler specific sizes here).

import Data.Bits

num_set_bits_simple 0 = 0
num_set_bits_simple x = case x .&. 1 of
                          1 -> 1 + num_set_bits_simple (x `shiftR` 1)
                          0 -> num_set_bits_simple (x `shiftR` 1)

Apart from being O(n) this also has the disadvantage of not being tail-call recursive and requires a conditional check in each “iteration”.

A solution that is O(s), where s is the number of set bits, is:

num_set_bits 0 = 0
num_set_bits x = 1 + num_set_bits (clear_lowest_set_bit x)


clear_lowest_set_bit x = x .&. (x - 1)

Here x & (x – 1) clears the lowest set bit in a number!
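The parity mentioned in the title drops out of the same trick (a small sketch of mine): flip the answer once per set bit cleared, which keeps it O(s).

```haskell
import Data.Bits

-- Parity of the set-bit count: xor 1 for each lowest set bit cleared.
parity :: Int -> Int
parity 0 = 0
parity x = 1 `xor` parity (x .&. (x - 1))
```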

A Mercurial Branching Workflow

The StreetHawk code is broken up into multiple repositories: the backend webapp, the iPhone UI, the iPhone API layer that accesses the database and handles all logical/operational aspects of the iPhone app, the Android app (with the same breakdown), tools etc.  Up until now all code has been checked straight into the “default” branch (HEAD).  Going forward we have decided to adopt the following scheme, inspired by and based on workflows prescribed by Steve Losh and Vincent Driessen.  I would particularly recommend Steve’s tutorial on branching, which was critical in helping me understand general branching techniques in mercurial.  If anything, this post is a summary of their articles to act as a “manual” for ourselves, and is prone to change given all the nasties we may find in the future.

Command Summary

Firstly a recap of some of the branching and tagging commands that we will use later on.

Creating a new branch

# Get into the branch from which the new branch is to be created.  
# This would usually be "default" if it is a new future development or "stable" if a bugfix
hg checkout <parent_branch_name>
# Create the new branch
hg branch <branch_name>
# commit it - at this point it is a local branch
hg commit -m "Added <branch_name> branch"
# push it to the server to create a remote branch
hg push -b <branch_name> --new-branch

Merging from another branch

From time to time you may find the need to merge from another branch.  This is simple.  The following ensures the source_branch is merged into the target_branch, NOT the other way around (even after you do a push).

# Get into the branch "into" which you want changes merged 
hg checkout <target_branch>
# merge from the source branch
hg merge <source_branch>
# commit the merge
hg commit -m "Merged <source_branch> into <target_branch>"

Closing a branch

Sometimes (quite often actually) you may find that a branch was created unnecessarily.  To close the branch:

# Get into the branch you want to close (by now you can see the general pattern here!)
hg checkout <target_branch>
# close it
hg commit --close-branch

Tagging a branch

Tag a branch on every release.  Simply do:

# Get into the branch you want to tag
hg checkout <branch_name>
hg tag <tag_name>

The convention used here is: <PUBLIC_VERSION>_<PRIVATE_VERSION>

MAJOR_VERSION - indicates a backwards-incompatible version
MINOR_VERSION - new features added
BUGFIX_NUMBER - bugfix updates
PURPOSE - purpose of the build - eg "APPSTORE", "TESTFLIGHT", "ADHOC" etc...
BUILD_NUMBER - The build attempt for a given PUBLIC_VERSION and purpose.

Branching and Release Workflow

In this scheme we have the following branches:

“default” – This is the dev branch that contains all the checked-in features and bugfixes that are *ready* to go into the next stable release.

“stable” – The branch that tracks the stable production releases.  Code from “default” is always merged into stable before a release.

“feature_xyz” – New features are added and developed on their own branches (eg feature_camera, feature_location etc) and merged into “default” upon completion, but NOT before they are ready to be accepted into a release.  So only merge these into default when they are ready for the next release.

“bugfix_xyz” – Similar to the feature branches but these are usually branched from stable and then merged back into both “default” and “stable”.  These are usually urgent changes/fixes to be applied on released code.


Given the above branches, an appropriate workflow would contain the following roles (could be the same person):

Developer – Is responsible for adding new features and bug fixes and most likely would be adding to “default” or “feature” branches (or onto stable via bugfix branches).

For example to add the new “awesome” feature, one would do:
# get into default
hg checkout default
# do an update
hg pull -u
# create the feature branch
hg branch feature_awesome
# commit it - at this point it is a local branch
hg commit -m "Added feature_awesome branch"
# add the awesome feature
# do a few commits A, B, C
# push it to the server to create a remote branch ONLY 
# when ready for release
hg push -b feature_awesome --new-branch

Release Creator/Owner – Creates new stable production releases.  Note DO NOT do a rebase here as that would detach local changes.  We specifically want to keep track of all past activity.

# get the latest on all branches
hg pull -u
# get into stable
hg checkout stable
# Merge default into stable to get all production ready 
# features into stable
hg merge default
# commit the merge before tagging
hg commit -m "Merged default into stable"
# Do the testing and when time to release, tag it (you are still in stable)
hg tag <tag_name>
# push the new tag
hg push