Software Tools

Do one thing well, and work together.

My experience with Master Craftsman Teams


“Uncle Bob” wrote yesterday about Master Craftsman Teams.

The thing that struck me most about this article is how closely it matched the way my own team of coworkers, who came and went, contributed and moved on, helped me develop two significant software build systems over ten years.

Without a doubt I had the role of “master”, and as Peter Naur would say, had the theory of the program in my head. I developed the build system from empty directories twice, to build products for two different software systems. Each time I had key responsibility to fit the system to customers’ needs, and to decide how it would evolve internally. (I’ve also been programming for thirty years, since BASIC on my first computers in grade school.)

I delegated what I saw fit to a handful of people who grew in responsibility as they demonstrated they understood the system and could make creative improvements to it (journeymen). One of the journeymen became the committer when I recently left. Since no one had the theory as well as I did, architectural responsibility devolved from my fiat to a change control board, so that everyone could bring bits and pieces of their expertise together to make better decisions.

Hundreds of small changes were made over the years by a few dozen people who did not really grasp the principles of the system, so they needed a significant amount of guidance. I reviewed every line of code in the system, and know how it all works together. I expect that some of this efficiency will suffer in the future; the advantage, to the organization that owns the system, is that “the keys to the kingdom” no longer rest in one set of hands.

I’m not sure how well this approach would sit with people as a formalized organization chart and pay scale, but I’ve seen it work very well in practice with a software project I managed. Further, I believe that if you look closely, most software projects in practice run this way (eg, Linux has Linus, Alan, various subsystem committers, and a hoi polloi of patchers). Even further, a single person may be an apprentice on one project, a journeyman on another, and a master on a third.

Perhaps the thing to do is to compensate people outside the company with bounties for the code they contribute; set up contractor arrangements for journeymen; and actually hire only masters. This keeps companies small, focused on their core competencies and most valuable people. Even better for the community of programmers, it provides a structure for programmers to put food on the table while contributing to as many projects, and at whatever depth, as suit their inclinations and abilities.

Agile practice one-liners, and Alexander on refactoring


A few one-line descriptions of Agile practices, because I feel they get to the heart of each.

Iterative development with feature teams: a few developers complete the development cycle (refactor, code, integrate, build, and test), for each of a few features, in two weeks to two months.

Refactoring: instead of a static, outdated architecture, rework the codebase with each feature.

Test-driven development: write tests of functionality for user stories, then write code to pass these tests.

Pairing and collective ownership: two developers introduce fewer errors to review and test for, and many developers distribute knowledge of the code.

Frequent integration: merge and build new changes at least daily, test at least weekly.

Automated regression test: collect all test cases into push-button test suites, run shortly after each deliverable build.
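
As a sketch of that last one-liner, in the shell-and-make style I use for my build systems (the script and directory names here are hypothetical, not from any real project): collect the test cases under one directory, and give them a single push-button entry point that the build can run after each deliverable.

    #!/bin/sh
    # run_tests.sh -- hypothetical push-button regression suite.
    # Run every test case under tests/ and report a pass/fail summary.
    mkdir -p logs
    failures=0
    for tc in tests/*.sh; do
        if sh "$tc" > "logs/$(basename "$tc").log" 2>&1; then
            echo "PASS $tc"
        else
            echo "FAIL $tc"
            failures=$((failures + 1))
        fi
    done
    echo "$failures test case(s) failed"
    exit "$failures"

Hooked onto the end of the deliverable build, this gives the run-shortly-after-each-build property with one command.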

I also found a passage on refactoring from an OOPSLA keynote by Christopher Alexander:

It turns out that these living structures can only be produced by an unfolding wholeness. That is, there is a condition in which you have space in a certain state. You operate on it through things that I have come to call “structure-preserving transformations,” maintaining the whole at each step, but gradually introducing differentiations one after the other. And if these transformations are truly structure-preserving and structure-enhancing, then you will come out at the end with living structure. Just think of an acorn becoming an oak. The end result is very different from the start point but happens in a smooth unfolding way in which each step clearly emanates from the previous one.

Alexander, C. 1999. The origins of pattern theory: The future of the theory, and the generation of a living world. IEEE Software 16, 5 (Sept/Oct), 71–82. http://doi.ieeecomputersociety.org/10.1109/52.795104

Written by catena

24 August 2007 at 1823

Blacksmithing a Train Engine


Yet another analogy comparing software development to a hardware activity: in this case, train engines.

(Programs|engines) run within the bounds of a (computer|track). The strength of the (computer|track) determines the rate at which they can run. The design and internal processes of the (program|engine) determine the rate at which they do run. Certain designs of (programs|engines) make it easier to handle various loads of (data|cars), since (data|cars) have different properties when placed under the stress of running.

If (data|cars) properly interface with the (file|rail) format, then the (program|engine) can handle them. If not, then the (program|engine) will fail to make any progress, or crash. The more subtle the difference between the format expected by the (program|engine) and that implemented by the (data|cars), the longer it will run before the difference creates a problem. Some problems might not be found at all, if no situation arises for which the difference has any noticeable effect. The (program|engine) handles a certain amount of (data|cars) before slowing to a crawl. Users can handle more (data|cars) in the same amount of time by running additional (programs|engines) in parallel, but improvement is limited by the time required to coordinate final and intermediate delivery.

Standard (file|rail) formats are better, because more (programs|engines) and (data|cars) are built to use them, their properties are better known, and users can even reuse them in different systems. However, vendors must differentiate themselves to drive business their way, so they often attempt to lock users into a particular format, in addition to (sometimes instead of) competing on the merits of their (program|engine).

In the early days of (programs|engines), each part was laboriously crafted and fit into place by hand, and often failed spectacularly from the additional pressures of running. Each part had to be crafted with a knowledge of all the stresses under which it would run, from materials suitable for the basic processes which powered the (program|engine). Nonetheless, its appearance made a more significant impact on users and bystanders, unless its performance differed drastically from others’.

The tools with which the craftsmen created (program|engine) parts were themselves created by the same process, so a craftsman’s product quality varied dramatically with his knowledge of toolmaking.

There were common parts, interfaces, and controls expected by users who had used similar (programs|engines). There was a great need to explain to users who had not: how they could expect the (program|engine) to behave with their (data|cars), and to make sure they were compatible with the (file|rail) format; how to control the (program|engine) at a basic level, and how to get a feel for its advanced capabilities; and how to get the (data|cars) to the (program|engine), and how to pass them off to others.

Smithing Tools to Smith Tools

The following text is from the Wikipedia article on blacksmiths.

Over the centuries (blacksmiths|programmers) have taken no little pride in the fact that theirs is one of the few crafts that allows them to make the tools that are used for their craft. Time and tradition have provided some fairly standard basic tools which vary only in detail around the world.

There are many other tools used by smiths, so many that even a brief description of the types is beyond the scope of this article and the task is complicated by a variety of names for the same type of tool. Further complicating the task is that making tools is inherently part of the smith’s craft and many custom tools are made by individual smiths to suit particular tasks and the smith’s inclination.

With that caveat one category of tools should be mentioned. A (jig|hack) is generally a custom built tool, usually made by the smith, to perform a particular operation for a particular task or project.

More on software blacksmithing, apprenticeship, and artistry: Craftsmanship as a Bridge.

Written by catena

8 May 2007 at 1402

Posted in Software Design

Pave the Pathways


Roger von Oech provides an example of backing off the design process, to observe usage patterns.

For example, designer Christopher Williams tells a story about an architect who built a cluster of large office buildings that were set on a central green. When construction was completed, the landscape crew asked him where he wanted the pathways between the buildings.

“Not yet,” the architect said. “Just plant the grass solidly between the buildings.”

This was done, and by late summer pedestrians had worn paths across the lawn, connecting building to building. The paths turned in easy curves rather than right angles, and were sized according to traffic.

In the fall, the architect simply paved the pathways. Not only did the new pathways have a design beauty, they responded directly to user needs.

In software, this argues for releasing to users in iterations, incorporating their feedback, and simplifying their use cases. Common use cases added late tend not to fit the software’s architecture elegantly, and so increase the maintenance burden.

Kathy Sierra reminds us that providing what’s good for users is good business.

Ask the question we keep bringing up here, “What will this help the user do?” Not, “How can we make a great product?” Nobody cares about your company, and nobody cares about your product. Not really. They care about themselves in relation to your product. What it means to them. What it does for them. What it says about them that they use your product or believe in your company.

And that users don’t want us to pave the entire courtyard.

How much control should users have?

Obviously this is a big “it depends”, but the main point is to focus on the relationship between user control and user capability. As user capability (knowledge, skill, expertise) increases, so should control — at least for a lot of things we make, especially software, and especially when we’re aiming not just for satisfied users but potentially passionate users. The big problem is that we make our beginning users suffer just so our advanced users can tweak and tune their configurations, workflow, and output. [For the record, I’m a big fan of splitting capabilities into different products, or having really good user-level modes, where you use wizards or simpler interfaces for new users, etc. Yes, they’re often done badly, but they don’t have to be.]

In the build systems I write, the interface tends to be one shell script, whose options behave like make options (passed through to the underlying make process), and whose parameters are make targets (with an optional subtarget syntax). So if you know how to ask make to build what you want, then you know how to ask my build system.

There is a lot of room in this interface style for build system developers (eg, you can ask the build system to run any internal target), application developer experts (eg, custom-build individual tasks or libraries, or even single files), and application developer novices (eg: all-encompassing clean, build, and package targets; wrapper scripts which set the shell environment).
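
A sketch of what those three levels of use look like in practice, with a hypothetical build.sh standing in for the real script (the option, target, and subtarget names are illustrative, not the actual interface):

    # Novice: the all-encompassing targets, with make-style options passed through.
    ./build.sh clean all package

    # Expert: custom-build a single library or file, using the target:subtarget syntax.
    ./build.sh libfoo:static
    ./build.sh -k src/bar.o

    # Build system developer: run any internal target directly.
    ./build.sh internal-gen-version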

Best of all, novice features tend to form the foundation of automation. If a novice developer can easily run a use case, then the process scheduler (cron) probably can also run it.
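
For example, a nightly build can be nothing more than the novice use case handed to cron (the path and schedule here are made up):

    # crontab entry: run the "clean, build, package" use case at 02:00 every night.
    0 2 * * *  cd /home/build/product && ./build.sh clean all package > nightly.log 2>&1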

Written by catena

18 February 2007 at 0202

Notes on “AOP: Radical Research in Modularity”


Notes on Aspect Oriented Programming: Radical Research in Modularity, by Gregor Kiczales, for Google TechTalks on 16 May 2006.

Pointcut Language and Overall Program Structure

Abelson and Sussman: We have a language when we have primitives, a way to compose those primitives to perform useful work, and a mechanism to abstract and refer to compositions elsewhere.

Pointcut syntax is a little language, with syntax to select a set of join points (primitives), construct logical expressions with several sets (composition), and label an expression (abstraction).

This is the sole advantage of modularity: abstract a set of primitives in one distinct place, and combine them with tools in the language. Using the language, instead of hand-weaving, provides a more structured way to consider how changing the base concern may change the pointcuts.
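
To make the primitives/composition/abstraction reading concrete in my own shell-and-sed setting (a hypothetical sketch, not Kiczales’ notation, and GNU sed syntax at that): a primitive is a pattern that selects join points, composition combines patterns, and abstraction names the combination so that advice can refer to it.

    # Primitives: each pattern selects a set of join points (lines) in a base script.
    build_start='^# begin build phase'
    package_start='^# begin package phase'

    # Composition: the logical "or" of the two sets, written as one alternation.
    phase_start="$build_start\|$package_start"

    # Abstraction: the advice refers to the named pointcut, not the raw patterns.
    # Here the advice appends a timestamp command after every matching join point.
    sed "/$phase_start/a\\
    date" base.sh > woven.sh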

The dominant model applies crosscutting concerns to join points in base concerns structured as block models within a hierarchy, instead of composing orthogonal sets of crosscutting concerns, where the base concerns are themselves structured as crosscutting concerns.

Procedural, Object, and Aspect Programming Methods

Say we describe the ordinary process of functional/procedural programming as a problem expert (eg, in the task at hand) collecting knowledge from other disciplines (security, optimization, distribution, synchronization, persistence, debug/log/trace, quality of service) to apply to the problem. Obviously the programmer can integrate all of this knowledge on a deep level, since all the knowledge appears before the analytic capability of the human mind. However, there is a limit to the size and number of disparate disciplines which can be integrated before the scale and complexity produce an error rate unacceptable to customers.

Object-oriented programming does not alleviate this situation, since each class must still integrate all the crosscutting concerns with methods, or statements within methods, in the class definition.

Aspect-oriented programming allows a team of specialists to each apply expertise to many basic problems. One person could do everything the team does, but as mentioned can only work on a few very large problems at once before reaching human limits; whereas a specialist can dispense information as needed across many large systems. This optimizes resources, at the cost of the deep integration of specialist knowledge mentioned above. That integration is left up to the resource manager (the weaver), who schedules and reconciles the contributions of the specialists and the basic problem’s manager. The basic problem’s manager, in aspect-oriented programming, can represent the customer, verify correctness of the integration, and validate the integrated solution.

Consider Pointcuts in Aspects, and Pointcut Specifiers in Base Concerns, When Changing Base Concerns

We must be careful, when editing base concerns, not to break pointcuts; or, if we do break them, to remember to update the pointcut. Even so, this is less likely to cause error than the original method of weaving crosscutting concerns into the base concerns by hand.

With respect to my weaver for shell scripts and makefiles, I am hesitant to use explicit comments in the base scripts as search patterns for the sed scripts. However, these comments make a contract with any transforming program: we can consider these points reliable enough to anchor transforms. Without such anchors, especially if the search patterns refer to code statements, additional pointcuts may not find the points they specify still in the program by the time the weaver gets to them (but at least they will fail safe and apply no advice).
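
A minimal sketch of the comment-as-anchor contract, with made-up file and function names: the base script carries a comment that promises a stable join point, the aspect’s sed script appends its advice there, and if the anchor is absent the pattern matches nothing and no advice is applied.

    # --- base.sh (base concern): the comment below is the anchored join point ---
    compile_sources()
    {
        cc -c src/*.c
        # joinpoint: after-compile
    }

    # --- logging.sed (crosscutting concern): advice anchored on the comment,
    # --- not on any code statement that might be edited away
    # (log_event is a hypothetical helper defined elsewhere in the woven script)
    /# joinpoint: after-compile/a\
    log_event "compile finished"

    # --- the weave step applies the sed script to a copy of the base script ---
    sed -f logging.sed base.sh > woven/base.sh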

The base concern’s comments should only refer to the base concern, not any crosscutting concern, and should certainly not say “at this point, insert that statement from the other crosscutting concern.” Those kinds of comments defeat the whole purpose of modularization, and do not recognize that we may eventually weave completely different crosscutting concerns into the base concern.

We should not try to apply a single comment across many base concerns, to reduce the effort to write the join point specifier. This binds, in a single relation, all the base concerns, and all the crosscutting concerns which use the comment. If the advice changes for half the base concerns, then half the instances of the single comment must change. If instead the pointcut is a combination of several comments, then we more easily split off a subset of these into a new pointcut, to receive new advice.

The Weaving-Loom Analogy

weave applies one crosscutting concern to one base concern. The variation lace customizes the base concern for an application (eg, by laying sed changes over the file).

loom weaves many crosscutting concerns into many base concerns, adding each crosscutting concern to the base concern as modified by previous crosscutting concerns. Under user control, the loom orders the crosscutting concerns applied to each base concern. User control is essential since the user writes the crosscutting concerns, and knows which crosscutting concerns apply to other crosscutting concerns, and so should be woven last.

warp finds crosscutting concerns which modify a given base concern. It also finds crosscutting concerns and advice which modify a given base concern’s given code statement.

weft finds base concerns modified by a given crosscutting concern. It also finds base concerns’ code statements modified by a given crosscutting concern’s given advice. woof is already taken as a s(n)ide-acronym for another function in my build system, and is slightly silly anyway, although it does have the advantage of turning loom upside-down and backward.

Weaving New Aspects into Reused Legacy Code

AOP removes all code for crosscutting concerns from base concerns. This is better than macros or libraries, which require at least calls in the base concern. The base concern can be completely oblivious to crosscutting concerns, which allows us to weave unanticipated crosscutting concerns as easily as we add the ones we had in mind when we wrote the base concern.

This is key to reusability; in fact, aspects should unlock the entire pre-aspect code base to modification, without editing the current working, tested, and delivered code files. I don’t believe in silver bullets, but the separation of concerns here is the first opportunity I’ve seen to keep the old code separate from the new code, yet use them together in a new program, while easily incorporating any further changes to the old code.

For new code, aspects promote writing code for reuse, since each concern is a separate file, and can be bundled and released separately. Reuse is available for base concerns since they do not refer to crosscutting concerns (low coupling), and completely implement one concern of the program (high cohesion). Reuse is easier for crosscutting concerns because: (1) if the aspect has no join point to match, it fails safe and applies no advice to the base concern; and (2) the advice code refers only to methods and fields exposed by the pointcut, which hides (low coupling) the remainder of the base concern.

Integrating Proprietary and Open-Source Code

This method could be very useful in incorporating open-source code into proprietary codebases. If we can show that we did not directly modify the original open-source code, then we may not need to release our proprietary changes to the open-source code.

Before compilation, the reused open-source code is in its files with its copyrights, in exactly the same state as when downloaded. Proprietary code is in its files with its copyrights. Since we must redistribute the open-source code files with any modifications to those files (the files under copyright), we just redistribute the original, unmodified open-source files, since they were not changed.

I am not a lawyer, so this analysis could be way off: specifically, we could still consider the distributed, woven version a derived work, which means that all code that went into producing it must be distributed. I definitely must speak to an IP lawyer before basing any decisions on this.

Dynamic Join Points

Since I’m writing a weaver, I should consider join points other than code statements (including comments) in shell scripts and makefiles. These implement a static join point model, which changes the code before it runs. Dynamic join point models create advice which executes only upon certain runtime behavior: for example, if a sub-script or particular makefile is called with certain parameters or targets, or a certain shell variable or makefile macro receives a particular value.

We could of course explicitly write more complicated advice to implement this. But it would be simpler to write the advice if the pointcut matched the runtime state of the program. To do this, we could either (1) monitor the execution state at runtime (better suited to modeling languages) or (2) extend the pointcut language to describe a runtime state. With (2), the weaver translates the runtime state description into static code to detect the state and apply the simpler advice.
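
A sketch of option (2) in my shell setting, with every name hypothetical: the pointcut describes a runtime condition, and the weaver expands it into ordinary static code that tests the condition and runs the advice only when it holds.

    # What the aspect author might write (a made-up pointcut syntax):
    #   when TARGET = release : apply notify_release_team
    #
    # What the weaver would emit into the woven script: static code that
    # detects the runtime state and applies the simpler advice only then.
    if [ "$TARGET" = "release" ]; then
        notify_release_team    # hypothetical advice function
    fi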

Prototyping

Since the crosscutting concern in my implementation “owns” the final executable form, I can prototype new changes to the base concern and affect only the product or platform which needs the change. Once it’s proven, I can move any new variables or code in the interface back to the base concern, to become a default part of the interface for other products or platforms. At this point, I may need to write advice to change the interface for products and platforms which must define its values differently.
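
For instance (file names and values made up): while a change is a prototype, an aspect overrides a setting for just the one platform that needs it; once the change proves out, the new value moves back into the base script as the default, and advice remains only for the platforms that must differ.

    # base.sh -- default interface value, shared by every product and platform.
    OPT_LEVEL="-O2"

    # prototype_arm.sed -- during prototyping, only the ARM build sees the change.
    s/^OPT_LEVEL="-O2"$/OPT_LEVEL="-Os"/

    # After the prototype proves out, -Os becomes the default in base.sh, and
    # advice is written only for the platforms which must define it differently.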

We can extend this thinking to consider prototyping itself a crosscutting concern, and maintain several prototypes concurrently, weaving them in according to control options. Of course, this begs me to make control options crosscutting concerns.

Effect on Software Configuration Management

AOP introduces the weaving step between code checkin and compilation. For C/C++, this step constructs woven header and source code files by combining (1) base header and source code files with (2) header and source code file content in advice gathered in files which encapsulate each crosscutting concern. The order of application of crosscutting concerns should be specified in the target makefile.
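
A sketch of where that step could sit, using the sed-script style of my weaver rather than a real aspect compiler (the file names and the ordering are illustrative; in practice the list of concerns would come from the target makefile): each crosscutting concern is applied in order, and the compiler only ever sees the woven copies.

    # Order matters: later concerns see the output of earlier ones.
    CONCERNS="logging.sed tracing.sed qos.sed"

    mkdir -p woven
    for src in src/*.c src/*.h; do
        cp "$src" "woven/$(basename "$src")"
        for concern in $CONCERNS; do
            sed -f "$concern" "woven/$(basename "$src")" > woven/tmp &&
                mv woven/tmp "woven/$(basename "$src")"
        done
    done

    # Compile the woven copies, never the checked-in originals.
    cc -c woven/*.c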

Alternatively, we could make the Aspect C and C++ compilers available as alternative platforms (eg, *_aspect), and let the compilers sort it out. The caveat here is that I haven’t used one of these languages yet. I do know that they would integrate most cleanly if they were actually preprocessors which created new *.c/*.cpp (C/C++ source code) or *.i/*.ii (preprocessed C/C++ source code) files suitable for compilation with the build system’s cross-compilers.

Steve Jobs on Microsoft, 1996


The only problem with Microsoft is they just have no taste, they have absolutely no taste, and what that means is – I don’t mean that in a small way I mean that in a big way. In the sense that they they don’t think of original ideas and they don’t bring much culture into their product ehm and you say why is that important – well you know proportionally spaced fonts come from type setting and beautiful books, that’s where one gets the idea – if it weren’t for the Mac they would never have that in their products and ehm so I guess I am saddened, not by Microsoft’s success – I have no problem with their success, they’ve earned their success for the most part. I have a problem with the fact that they just make really third rate products.

Video transcript – © 1995–2006 PBS. All rights reserved.

Written by catena

15 October 2006 at 0111

Baselining from Daily Builds


Branching. Agile asks us to trade baseline stability for ease of integration. Take the latest baseline available, make it work with your change, and merge it as soon as it passes your tests. This reduces the number of branches active at once, and therefore the amount of simultaneous variability in the code base. Developers will not need to rebaseline from the last official build to include changes already merged to the integration branch.

Testing. This does mean that problems are more likely to be merged and affect others, so automated unit testing helps avoid defects not due to the interaction with other components. Merging a change sooner gives it more second-order testing with other components before major builds, which increases the likelihood that interaction problems will be found before they can delay a release.

Quicker feedback about integration problems refocuses a developer’s attention on a change sooner, while the developer is still thinking about the problem, instead of at the next major build after a developer has moved on to other issues.

Written by catena

5 September 2006 at 1538