Alan Zeichick’s proposal of an organizational Threading Maturity Model is an excellent contribution. As with object orientation, it is not enough for a single person to have mastery or near-mastery: the average ability of the team must be reasonably high in order to maintain quality, one or two unaware developers can wreak chaos, and reaching the higher levels requires cultivating talent long enough for mastery to become part of the culture. However, the TMM says little about individual progress and offers no guidance to individuals. I thought that I’d take a crack at a Personal Thread Maturity Model (PTMM) as a complementary effort. Opinions more than welcome.
- Unaware: Single-threaded, may occasionally use threaded libraries unawares
The first level of concurrent programming is no conscious use of concurrency at all. Since no mainstream language makes threads a first-class concern, threads appear only as, essentially, side effects of library and infrastructure calls. No useful recommendations can be made for programmers in this category; by definition, they are not part of the conversation.
- Casual: Conscious use of high-level abstractions (components, libraries) to solve specific problems
Today, the large majority of programmers are in this category. They are aware of threads and of their ability to run a lengthy calculation or I/O operation while maintaining a responsive UI. They may use a drag-and-drop component or a Thread object that allows a calculation to run to completion. Their experience with threading is generally positive: the UI remains responsive, the network calls back, and so on. They may be taken aback by the dire warnings about multithreading that are so common in discussions of the subject.
Recommendations: Continue! “The simplest multithreading that can possibly work” often does! Become comfortable with callbacks and asynchronous patterns (in .NET, the Event-based Asynchronous Pattern). Avoid guaranteed trouble spots: don’t update the UI from a worker thread, don’t rely on shared data maintaining its state, don’t try to roll your own synchronization techniques.
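To make the “simplest multithreading that can possibly work” concrete, here is a minimal Java sketch (my own illustration; the post shows no code, and names like `sumAsync` are invented): a lengthy calculation runs on a worker thread and reports its result through a callback, leaving the calling thread free.

```java
import java.util.function.LongConsumer;

public class BackgroundSum {
    // Run a lengthy calculation off the caller's thread and deliver the
    // result through a callback -- the casual-level pattern.
    static Thread sumAsync(long n, LongConsumer onDone) {
        Thread worker = new Thread(() -> {
            long total = 0;
            for (long i = 1; i <= n; i++) total += i;  // stand-in for real work
            onDone.accept(total);  // in a GUI app, marshal back to the UI thread here
        });
        worker.start();
        return worker;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] result = new long[1];
        Thread t = sumAsync(100_000, v -> result[0] = v);
        t.join();  // this demo waits; a real UI would simply stay responsive
        System.out.println(result[0]);  // 5000050000
    }
}
```

Note the comment about marshaling: the callback runs on the worker thread, so in a GUI program it must hand the result back to the UI thread (e.g. via the toolkit’s invoke-later mechanism) rather than touch widgets directly.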
- Rigid: Significant use of multiple cores, coordination based on locking
In this stage, programmers move from using threading as a pain-relief mechanism to treating it as a benefit that can be actively exploited. Asynchronous processing becomes an intentional part of design. They face their first wicked problems and gnarly bugs and become initiates in the “threading is hard” camp. Their designs center on locking to coordinate their programs; while they may occasionally use other techniques, locking is the hammer with which they pound the nail of concurrency.
Recommendations: The problem for this group is complacency. Having triumphed over real problems, they may have reached a plateau. They discover that core logic is often non-parallel and, although willing, may have a difficult time seeing places where asynchrony can contribute. At this level of mastery, and on today’s hardware, it may not be clear that further learning is necessary. Unfortunately, since manycore machines are not yet available and few of us have access to parallel clusters, moving beyond this level of maturity is driven more by faith and reading than by hands-on experience.
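As an illustration of lock-based coordination at this level, here is a minimal Java sketch (my own, not from the TMM or PTMM): a counter whose shared state is guarded by a single lock. Remove the synchronized blocks and concurrent increments race, silently losing updates.

```java
public class LockedCounter {
    // Lock-based coordination: mutual exclusion around shared state.
    private long count = 0;
    private final Object lock = new Object();

    void increment() {
        synchronized (lock) {  // only one thread mutates count at a time
            count++;
        }
    }

    long get() {
        synchronized (lock) { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        LockedCounter c = new LockedCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 25_000; j++) c.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(c.get());  // 100000 -- deterministic because of the lock
    }
}
```

The lock makes the result deterministic, but it also serializes the hot path, which is exactly the ceiling that the next level tries to break through.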
- Flexible: Attempt to maximize use of cores, use of lock-free algorithms and data structures
This level is characterized by a deeper theoretical understanding of asynchronous processing, mastery of basic techniques, and determination to achieve the best results possible. Programmers at this level seek not just to exploit their processors but to saturate them. This stage is also characterized by a shift from programming-language-based thinking to hardware-based thinking.
Recommendations: This is a very difficult phase to move through. Popular texts will not guide you past it; advancement requires a combination of textbook theory, hardware knowledge, and hands-on experience. (I’m not going to claim to have moved beyond this phase myself.)
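For readers wondering what the lock-free style looks like in practice, here is a hedged Java sketch of a compare-and-set retry loop, the basic building block of lock-free algorithms. (`java.util.concurrent.atomic.AtomicLong` already provides `incrementAndGet`; the explicit loop is written out purely for illustration.)

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasCounter {
    // Lock-free increment: no thread ever blocks; instead each thread
    // retries its compare-and-set until its update wins.
    private final AtomicLong value = new AtomicLong(0);

    void increment() {
        long current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));  // retry on contention
    }

    long get() { return value.get(); }

    public static void main(String[] args) throws InterruptedException {
        CasCounter c = new CasCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 25_000; j++) c.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(c.get());  // 100000 -- no locks involved
    }
}
```

A counter is the trivial case; real lock-free data structures (queues, stacks, maps) demand careful reasoning about memory ordering and the ABA problem, which is precisely why this level requires theory and hardware knowledge, not just patterns.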
- Optimizing: Optimal use of parallel architecture
Parallelism is fully incorporated into one’s mental approach to solving problems via symbols. Specific techniques are considered not along any “better-worse” axis, but for their individual benefits and drawbacks. Multiple approaches may be incorporated into a single design. The world appears as falling strings of glowing numbers, bullets can be dodged, and Agent Smith can be defeated.