June 18, 2009

Notes From Chicago ALT.NET Meeting On Git

Last week I attended the June Chicago ALT.NET meeting, where Git was presented. While listening to the discussion, I kept notes on how Git compares to SVN (our current source control at work) and on whether Git might be a viable replacement for SVN down the road. Here's what stood out:


  • Allows you to work offline.
  • Gives each user the whole repository, so most operations are local and therefore faster.
  • Easily facilitates concurrent development cycles, so developers can work on different projects at different times without complicated branches (you still have branches, but they’re much easier to work with).
  • Has support for hooks.
  • Saves disk space compared to SVN – the Mozilla repository, for example, was reportedly drastically smaller after switching to Git.
  • Has better history tracking than SVN.
  • Has GUI options – TortoiseGit and Git Extensions.
  • Has a staging area that allows you to stage portions of a file before committing (so if you mess something up, you don’t have to revert the entire file and start over).
  • Supports a variety of workflows. You could set up your distribution layout so you have one central repository that everyone pushes to, similar to SVN. However, I found the “benevolent dictator” model used by the Linux kernel development community interesting. Basically, developers would push their changes to lead developers, who would then push all of their changes to the architect/boss, who would then do the commit to the main source. All developers would then get latest from this main source.
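The staging area was the feature I most wanted to try. Here's a minimal sketch of it from the command line (assuming Git is installed; the file names are hypothetical):

```shell
# Sketch of Git's staging area (file names are hypothetical).
repo=$(mktemp -d) && cd "$repo"
git init -q
echo "finished change" > done.txt
echo "experimental change" > wip.txt

git add done.txt        # stage only done.txt; wip.txt stays out of the next commit
git status --short      # "A  done.txt" is staged, "?? wip.txt" is untracked

# To stage selected hunks *within* a single file, use interactive mode:
# git add -p
```

And if you stage something by mistake, `git reset HEAD <file>` unstages it without touching your working copy.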

I’m definitely interested in Git, and it looks like a promising replacement for SVN. I think I’ll try using it for some of my personal work to get a better feel for it.

Lastly, some links I’m looking at:

June 16, 2009

Notes From Hot Topics In Technology Debate

Last week I attended ThoughtWorks’s Hot Topics in Technology Panel Debate, where a group of ThoughtWorkers discussed trends in the software industry. I took a few notes while I was there, so I’ve decided to recap some of the discussion.

The first topic was cloud computing. One of the first issues raised was how cloud computing can be used in existing enterprise applications and not just in small startup applications, which led to the supposition that hybrids utilizing both physical and cloud infrastructure will become the most common form of cloud computing. How a company uses cloud computing, however, will depend on its strategy and costs. Some of the example usages discussed were performance testing (where you don’t have to set up new environments to test), handling peak usage periods (where intensive computation is needed), and Gmail (where ease of deployment is important). An intriguing suggestion for getting started in the cloud is to set up a sandbox for prototypes of marketing ideas. This way you can easily deploy prototypes with little to no risk to your production environments while gathering customer feedback much faster. Lastly, one of the more interesting insights was that computation scales well in the cloud, but I/O does not. Thus databases generally don’t perform any better in the cloud. As a result, the paradigms we use for persisting data could change – for example, non-relational data structures may become more widespread.

The dialogue then shifted to language workbenches. In the past DSLs have been troubled by a lack of support by conventional programming languages. Language workbenches, though, are a compilation of DSLs along with conventional languages. They allow you to design a language, plus they provide an execution environment for the language – thereby enabling you to work with multiple languages across multiple domains (e.g., a payroll processing language, an expense reporting language, etc.). Thus language workbenches will further help switch the focus to customer needs by enabling languages that are easier for the customer to understand and use (though it will still be a while before customers start writing their own DSLs). Interestingly, Excel-like DSLs and language workbenches (with their ability to easily facilitate anecdotal testing) will be one way that DSLs gain more traction in the marketplace.

Developer certification was discussed next, and there seemed to be a consensus that there are too many incapable programmers plaguing the software industry. These programmers are net-negative contributors, and while certification could help identify and weed out these people, current certification programs are inadequate. Current certifications are decent HR filters, but they are in no way good at determining the aptitude of a programmer. A good certification needs to test a programmer’s ability to do actual work – meaning it needs an observational element that considers multiple people’s input on the programmer over an extended period of time. This is why an apprenticeship/craftsman approach that encourages continuous learning is a promising model. Lastly, it was stressed that this is not a problem academia can solve on its own, as a computer science/software curriculum already has too much to teach in too little time. Our industry needs to recognize this and accept the burden of encouraging (possibly enforcing) continuous education – look at how other industries handle ongoing education (doctors, lawyers, even actuaries!). The divide between academia and industry contributes to the problem, as it leads to some professors having little to no enterprise experience.

The debate ended with a brief discussion of polyglot programming. Currently, the language used for a project is often determined by what team members are most familiar with (coupled with the language of the existing code base). This often leads developers to shoehorn every problem into the language they’re most comfortable with, when they should instead be using the language best suited to the problem at hand. Following this approach leads to applications that use multiple languages, each tailored to solve a particular problem, all running on the same managed runtime. Some examples include Mingle (JRuby and Java) and Twitter (Ruby on Rails for the frontend and Scala for the backend). However, when choosing a language, pragmatism is key; we need to deliver business value and not just chase the next trendy language. We also need to remember that a language isn’t just syntax; to get the most out of a language, we need to use it correctly. That is why understanding the semantics behind a language matters more – once you understand the semantics, the syntax and expression flow naturally.

Overall the talk was good. I do think they tried to cover a little too much ground in too short a time, so it might have been better to focus on just a couple of the topics in more depth. It was interesting nonetheless, and it was cool to see the panelists in person. The talk also reminded me that I need to keep striving for a well-rounded view of the software development industry, try not to get too focused on a particular technology or framework, and continue to learn a variety of things while mastering principles and fundamentals.

UPDATE: The recording of the debate has now been posted.

June 9, 2009

Place An Enumeration In A Separate File

Regardless of its size, I place each enumeration in its own file, with each enumeration having the namespace of the class(es) using it. I do this for a few reasons:
  • It follows my preference for having many small classes and files over having fewer large ones.
  • It’s consistent with how I handle classes and interfaces – and I see no reason why enumerations should be handled differently.
  • Most importantly, it makes it easier to find the enumeration, particularly when you’re outside the IDE and can’t use its navigation features.
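As a sketch, the convention looks like this in C# (all names here are made up for illustration): the enumeration gets its own file, named after the enumeration, under the same namespace as the classes that consume it.

```csharp
// File: OrderStatus.cs -- a hypothetical enumeration in its own file,
// under the namespace of the classes that use it.
namespace MyCompany.Orders
{
    public enum OrderStatus
    {
        Pending,
        Shipped,
        Delivered,
        Cancelled
    }
}

// File: Order.cs -- a consumer of the enumeration, in the same namespace.
namespace MyCompany.Orders
{
    public class Order
    {
        public OrderStatus Status { get; set; }
    }
}
```

With this layout, anyone looking for OrderStatus can find OrderStatus.cs directly – even from a plain file listing, with no IDE navigation needed.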
Others advocate placing the enumeration in the same file as the class that’s using it. But what happens when multiple classes use the same enumeration – do you then move the enumeration into its own file, or do you continue to keep the enumeration in the same file as one of the classes using it? Either way, it’s inconsistent and confusing. Another common approach is to place all enumerations within one file, with the enumerations all under a single namespace. Again, I feel this convention makes it harder to locate an enumeration, plus it moves the enumerations out of the namespaces they’re logically related to.

Thus I recommend placing each enumeration in its own file. But regardless of the approach you use, the most important thing is to have everyone on the team follow the same convention.