How Much of the Industry Will Go Parallel?

Michael Seuss ponders one of my favorite questions: How much of the software industry will have to deal with the concurrent computing [opportunity]? He hits the vital points:

  • 2, 4, and maybe 8 cores may be usefully exploited by system services (anti-virus, disk indexing and searching, etc.), but beyond that, any program for which performance is any kind of issue simply cannot ignore the capacity. (This is why I distinguish between our current “multicore” transitional phase and the coming “manycore” era.)
  • Media programming (games, A/V processing) has an essentially infinite appetite for processing power.
  • The manycore era provides an opportunity for new types of functionality. He mentions concurrent semantic analysis of your input, both typed and spoken, and the accumulation of context documents. For instance, as I type this, my computer might be gathering all my blog posts, OneNote notes, source code, etc. relating to concurrency. (And then wouldn’t it be cool if it offered them for my perusal, maybe with, I dunno, a goggle-eyed paperclip?)

But I think the $64 question is whether such services will be provided in a service-oriented, cross-application manner, or whether broad opportunities for them will be found within individual applications. For instance, mail programs and word processors have had search functionality for a long time, but if you were designing such a program from scratch, you would probably be better advised to say “Hey, I won’t implement a complete search subsystem; I’ll just make sure I can be indexed by Windows and Google Desktop Search. If I want to add value, I’ll layer on top of those systems if at all possible.”

Conversely, if you had some powerful new value proposition (semantic analysis, task recognition, visual input), wouldn’t it be vastly better for you and your customers if you could provide it to applications other than those that you happen to have written? In other words, of course value in the manycore era will derive from increased parallelism, but maybe that parallelism will still be very coarse-grained. Maybe software organizations will face a choice: either develop client-oriented value with the best practices of “traditional” non-parallel development, or develop broader, system-oriented value using whatever emerges as the set of best practices for system-level parallel development. Maybe that choice will become increasingly orthogonal.

Now, the final part of the thought experiment is this: if that scenario is reasonable, what kind of platform services / APIs would one desire?

Microsoft Unveils "Surface" Multi-Touch Table Interface

Bill Gates has gone public on Microsoft’s commercializing a multi-touch table interface called “Surface”. This has been shown before, but only as one of the (many) prototypes of which you see brief glimpses and which often are never commercialized (I think “Surface” and the device-pairing stuff was shown at some demo relating to digital identity).

I doubt that the first few generations of Surface will be what I want, but I bet in about a decade professionals will be able to work at a desk with a blotter-sized 133-DPI display (as well as vertically-oriented screens). Sweet.

Comment:Code > 1:3?

Andrew Binstock adds to his pithy series on quality ratios (unit tests per method, unit test coverage) with a post saying that high-quality code is likely to have around 35%, and perhaps even more than 45%, of its lines devoted to comments.

He also mentions two “commenting” practices that drive me batty: the boilerplate license-agreement header when a URI would do, and commented-out source, which is like handing in your homework with cross-outs all over the page. (Is a metaphor invoking hand-written homework hopelessly anachronistic?)

I’m somewhat contrarian on the common wisdom regarding comments, an attitude that developed from writing so much code for print publication. Source code has traditionally been very difficult to format and inflexible on the page, so when writing code for publication, you use very few comments, explicit-as-possible names, and straightforward-as-possible control structures. Of course you explain the “why” in the article, but you want the “how” to be evident. If a comment is needed within a function written for publication, that’s suspicious. But the thing is, that’s not a bad attitude to take in the real world! Documentation comments? Absolutely vital. But within a function, I’m skeptical.

The most confusing thing within functions is the combination of flow-control state (the if … else … if … case tangle that requires a comment saying “Okay, we’ve figured out that the situation is …”) and the invocation of the consequences (“… so therefore, we know that the correct parameters are …, we execute the call …, and we store the return in this variable relating to flow-control”). But the solution is not comments, it’s refactoring: create a function that determines the flow-control (or, even better, use a proper object structure with virtual function calls, so that much of the flow-control becomes implicit), and another function that incorporates the call-and-return logic.
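Here’s a minimal sketch of that refactoring in Python; the domain (shipping orders), the names, and the rates are all hypothetical, purely to show the shape:

```python
# Hypothetical rate table; the consequences of each decision live here.
RATES = {
    "express_international": 45.00,
    "standard_international": 20.00,
    "overnight": 25.00,
    "ground": 8.00,
}

def shipping_method(order: dict) -> str:
    """One function names the flow-control decision, replacing the
    'Okay, we've figured out that the situation is ...' comment."""
    if order["country"] != "US":
        return "express_international" if order["rush"] else "standard_international"
    return "overnight" if order["rush"] else "ground"

def shipping_cost(order: dict) -> float:
    # A second function owns the call-and-return logic; neither function
    # needs a within-function comment explaining the other.
    return RATES[shipping_method(order)]
```

The object-structure variant would go a step further: subclasses per shipping situation, with a virtual `cost()` override, so the dispatch never appears as an explicit if/else at all.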

Having said that, I don’t doubt that the occasional within-function comment can do a world of good, especially in those situations when, due to library constraints, the names of the functions being invoked aren’t clearly related to the immediate programming goals.
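A small example of that justified case, in Python: the standard library’s `struct` format strings are terse enough that a comment genuinely carries information the names can’t. (The wire format here is hypothetical.)

```python
import struct

def read_record_length(header: bytes) -> int:
    # ">I" is opaque unless you live in the struct module: it means a
    # big-endian unsigned 32-bit integer, which is what this
    # (hypothetical) wire format uses for its length prefix.
    (length,) = struct.unpack(">I", header[:4])
    return length
```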