April 28, 2011

It’s Okay. I’ll Improve It Later.

My speed of development definitely increased, and I think the quality of my code did as well, when I switched mindsets: from thinking "this is the last time I could be in this code, so make it perfect" to "I'll be back in this code at a later date, so do the good thing, but not necessarily the perfect thing".  This was a difficult change to make, as I always aim for perfection in my work; anything less seems like a cop-out.  But with the former mindset, my desire to perfect things immediately, since I thought I might never work with the code again, often led to a lot of upfront design, and even analysis paralysis, as I'd agonize over the ideal way to write a piece of code.  It had to be robust and highly flexible to handle all future needs, it had to perform well, and of course, it had to look good for posterity.  As you can see, I was doing a lot of premature polishing and optimization, and spending an ungodly amount of time doing it.

But now that I approach code with the assurance that I will be revisiting it at some point, probably sooner rather than later, I feel free to write things faster, knowing that I can and will improve them eventually.  I don't have to make the perfect design decision the first time; instead, I'll be making incremental improvements along the way, using the elapsed time to gain more insight into a better way to write the code.  I just need to identify a good way to write it now, without dwelling on it and straining to find a way to do it better.  Just keep it simple, remembering, of course, that you ain't gonna need it.  However, I'm also aware that since I will be revisiting the code, I have to keep it clean and expressive, so constant refactoring and good test coverage are a must.  I'm not advocating simply writing sloppy code as fast as you can, going with the first idea you have.  You should still put thought into the code you're writing and have the discipline to write it well; just don't get caught up worrying about making it perfect.

In short, don’t obsess over the ideal.  Just start writing some code.  Write it as well as you can now with the information and ideas you currently have, but know in the back of your mind that you'll return to this code again soon.  So if it's not as good as you think it can be, that's okay.  It eventually will be.

April 13, 2011

The Right Tool for the Job

I remember a time when my father was looking to buy a reciprocating saw for something he was working on around the house.  As he already owned at least five other saws, I questioned why he wanted to buy another.  Couldn't he use one of his other ones?  He replied that this particular type of saw would allow him to finish his work faster and with a better cut.  So yes, he could use one of his other saws, but a reciprocating saw is the better one for the job. 

So where am I going with this?  Well, Alan Skorkin has an interesting blog post on the Dropbox programming challenges and his use of dynamic programming to solve one of the problems.  This was certainly interesting, but while I was reading his summary of the problem, I immediately began thinking that this was right in Prolog's wheelhouse.

The problem is essentially the subset sum problem: given a set of integers, is there a non-empty subset whose sum is exactly zero?  Now, with your typical imperative language, once you understand the nature of the problem, you go about specifying the exact steps needed to solve it.  But with a declarative language like Prolog, you describe what a valid solution looks like, providing facts and rules about the problem, but you don't actually have to solve it.  Prolog does the heavy lifting.
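For contrast, here is what an imperative take might look like.  This is just a quick Python sketch of the dynamic programming idea, deciding reachability over possible sums; Skorkin's actual solution may well differ:

```python
def has_subset_sum(nums, target):
    """Return True if some non-empty subset of nums sums to target."""
    reachable = set()  # sums achievable with a non-empty subset so far
    for n in nums:
        # each previously reachable sum can be extended by n,
        # and n by itself is also a valid subset
        reachable |= {s + n for s in reachable} | {n}
    return target in reachable

# The numbers from the session below; [316, 150, -466] sums to zero:
print(has_subset_sum([802, 421, 143, -302, 137, 316, 150,
                      -611, -466, -42, -195, -295], 0))  # True
```

Note that this only answers yes or no; recovering the subsets themselves takes extra bookkeeping, which is exactly the sort of work Prolog's search does for you.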

With the subset sum problem, I knew what needed to be accomplished, but I wasn't immediately sure of an algorithm to do it.  With Prolog, though, I didn't need to know an algorithm.  I merely needed to describe the nature of the problem, which was actually easy.  So the solution to the subset sum problem in Prolog is:

subset_sum(List, Subset, Sum) :-
    sublist(Subset, List),
    sum_list(Subset, Sum).

That's it, folks.  Pretty sweet, eh?  You're basically telling Prolog that a solution is valid if all of the members of the Subset list are in the full List, and the sum of the members in Subset equals Sum.  With this in place, Prolog will go about finding possible solutions to the problem.
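One portability note: whether sublist/2 and sum_list/2 are built in depends on your Prolog system.  If sublist/2 isn't available, a subsequence-style definition along these lines should work (my sketch, not part of the original post):

```prolog
% sublist(?Sub, ?List): Sub consists of elements of List, in order.
% At each element of List, either keep it in Sub or skip it.
sublist([], []).
sublist([X|Sub], [X|Rest]) :- sublist(Sub, Rest).
sublist(Sub, [_|Rest]) :- sublist(Sub, Rest).
```

On backtracking, this enumerates every subsequence of List, which is what lets Prolog search the subsets one by one.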

| ?- subset_sum([802, 421, 143, -302, 137, 316, 150, -611, -466, -42, -195, -295], Subset, 0).

Subset = [] ? a

Subset = [316,150,-466]

(4 ms) no

Or if I want to find all subsets from a list that total 11:

| ?- subset_sum([7, 8, -4, -2, 3, 9, 12, 5], Subset, 11).

Subset = [-4,3,12] ? a

Subset = [-4,-2,12,5]

Subset = [-4,-2,3,9,5]

Subset = [8,3]

Subset = [8,-2,5]

Subset = [8,-4,-2,9]

Subset = [7,-4,3,5]

Subset = [7,8,-4]


This is a great example of polyglot programming: leveraging the right language to solve a particular problem.  Prolog excels at problems like the one above, so I was able to implement a solution much faster than I could have in any other language.  But for other problems, Prolog would be a horrible choice.  Thus, understanding various languages and the kinds of situations where they are best utilized can lead to faster, simpler, and better solutions than relying on a single general-purpose language.  It's all about using the right tool for the job.