(HOTEL_BLOCK_RESERVATION(HOTEL_ID (IATA_NODE MDW) (PHONE_NUMBER 708 563 0200) (HOTEL_NAME Courtyard by Marriot)) (DAY_BLOCK (DATE_NODE 08 31 07) (FLIGHT_BLOCK (PAIRING_NODE (PAIRI… etc…
“VisitorTests.parsesTheWholeThing(): Lex & Parse: 2765 ms. ReservationCountWalker 282 ms. to find 27327 reservations”
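The "ReservationCountWalker" above is presumably a tree walker over the parsed output; as an illustration only (the node names and structure here are invented to mirror the S-expression sample, not taken from the actual project), a count-by-walking pass might look like:

```python
# Hypothetical sketch of a reservation-counting tree walk.
# Node tags and tree shape are invented to echo the sample output above.

class Node:
    def __init__(self, tag, children=()):
        self.tag = tag
        self.children = list(children)

def count_reservations(node):
    """Depth-first walk, counting HOTEL_BLOCK_RESERVATION nodes."""
    count = 1 if node.tag == "HOTEL_BLOCK_RESERVATION" else 0
    for child in node.children:
        count += count_reservations(child)
    return count

tree = Node("ROOT", [
    Node("HOTEL_BLOCK_RESERVATION", [Node("DAY_BLOCK")]),
    Node("HOTEL_BLOCK_RESERVATION"),
])
print(count_reservations(tree))  # -> 2
```

The point of the visitor/walker split is that the 2765 ms lex-and-parse cost is paid once, while cheap passes like this one (282 ms in the quoted run) can then be run repeatedly over the same tree.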
Man, if I could just get my clients to fund it, I could make that Silverlight “air travel” demo look like a joke…
Bill de hÓra makes an intriguing pitch that programming will be impacted more by increasing data volumes than by the transition to multi-/many-core. His basis is anecdotal — we don’t have the same metaphysical certainty that all of us will be dealing with much-larger datasets as we have the certainty that we will all be dealing with multiple and then many cores — but it is logical. The speed of a single stream of in-cache instructions is blazing: short of chaotic functions, it’s hard to imagine perceptibly slow scenarios that don’t involve large amounts of data.
What I find especially thought-provoking about this argument is that it stands in opposition to another post I was going to make about YAGNI infrastructure. Not long ago, Alan Zeichick ranked databases, and Ian Griffiths questioned whether he took price-performance into account. Even allowing that there are costs for OSS (training, tools, administration, etc.), I’ve noticed that few real-world CEOs understand where their companies stand in relation to scaling. In my experience, they often over-buy software and hardware capacity and under-buy contingency capacity.
It seems to me that nowadays we work more and more with data streams and not data sets. On a transaction-to-transaction basis, I think it’s an uncommon application that uses more data than can fit into several gigabytes of RAM (obvious exception: multimedia data). Never mind multi-node Map-Reduce; I’m saying that it seems to me that many “real” business systems could have a single-node non-relational data access layer.
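To make the "single-node, non-relational data access layer" idea concrete: if the working set fits in RAM, plain in-process dictionaries can stand in for a database for many transactional lookups. This is a minimal sketch of the notion, with all names hypothetical:

```python
# Sketch of an in-memory, non-relational data layer: a primary store
# plus a hand-maintained secondary "index". Names are invented.

orders_by_id = {}
orders_by_customer = {}   # secondary index: customer -> list of orders

def put_order(order_id, customer, total):
    order = {"id": order_id, "customer": customer, "total": total}
    orders_by_id[order_id] = order
    orders_by_customer.setdefault(customer, []).append(order)

put_order(1, "acme", 120.0)
put_order(2, "acme", 75.0)
put_order(3, "globex", 40.0)

# A "query": total spend per customer, no SQL involved.
acme_total = sum(o["total"] for o in orders_by_customer["acme"])
print(acme_total)  # -> 195.0
```

Of course, this trades away durability, concurrency control, and ad hoc querying — which is exactly the trade the "maybe we don’t need a relational DB" heresy asks you to weigh.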
It seems that what I’m saying is in direct contrast to what de hÓra is describing, and yet points to the same “maybe we ought not to start from the assumption of a relational DB” heresy. No conclusion… food for thought…
Reflection: I think I let my attention wander — the world de hÓra is describing is that of high-performance computing, and I wandered into general business computing. The two intersect, of course, but are not generally the same. So the thought then becomes that powerful relational databases are being squeezed from both the low end (“eh, just put it in memory”) and the high end (“ok, so this is our distributed tuple-space…”).
Dustin Campbell does a good job explaining the mechanics of currying in C#, although I’m afraid he stops before truly explaining why currying is considered an essential building block of functional programming. He promises to get to that in “the next post” so I won’t offer my own take. As with recursion, simple examples often seem pointlessly complex (“Why would I want to calculate a factorial with recursion? Why would I want to add numbers with a curried function?”), so you’re not likely to get the actual “art” of currying until he does his follow-up. (via .NET Languages Weblog)
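Without stealing his follow-up's thunder, the mechanics are easy to show in a few lines. Campbell's examples are in C#; this is just the same idea sketched in Python, with invented names:

```python
# A minimal sketch of currying: transforming a function of several
# arguments into a chain of single-argument functions.

def curry2(f):
    """Turn f(a, b) into a function usable as f(a)(b)."""
    return lambda a: lambda b: f(a, b)

def add(a, b):
    return a + b

add_five = curry2(add)(5)   # partially applied: waits for the second argument
print(add_five(3))          # -> 8
```

The payoff isn’t adding numbers, any more than the payoff of recursion is computing factorials — it’s that partially applied functions compose cleanly, letting you build specialized functions from general ones without wrapper boilerplate.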
Antonia Vargas sent me links to some new “demos” (very small programs from which complex graphics and music emerge):
Purdue researchers have applied the idea of an ionic breeze to cooling computer chips. This seems like a slam-dunk to me: more structural flexibility than a fan, the ability to generate airflow without a dead layer near the surface, and therefore greater efficiency. Presumably much quieter, too, since there would be no ball bearings, and increased efficiency means less total air displacement. A couple of years to commercialize, they say…
In the past 96 hours, I’ve been exposed to:
I feel like I should return that tiki to the cave…
We stayed over at Tom & Lana Plum’s house last night, hoping to take advantage of their very dark skies to see the Perseids. We slept on the porch on thin mattresses, with a beautiful Milky Way / Sagittarius / Scorpio in the evening hours. I saw a couple of nice earth-skimmers around midnight, and then dozed fitfully until 4 AM for the “big show.” Unfortunately, large portions of the sky were overcast so that only 1st-magnitude stars shone through, and there was only a keyhole near Orion that gave a glimpse of a few meteors.
Drove home early to talk to our wall construction guys, and I now see that Hurricane Flossie is projected to remain a Category 3 as it passes by; if it’s north of the projected track at all, it will hit the Big Island. We should get significant protection from the Mauna Loa and Hualalai mountains as long as the eye stays south of us and we’re hit by the East-to-West portion. If the eye tracks north and we get West-to-East, that could potentially suck.
Jon Skeet wonders:
I’ve been looking at C# 3 in a fair amount of detail recently, and likewise going over the features of C# 2….I feel sorry for someone wanting to learn C# 3 from scratch. It’s becoming quite a big language….It’s often been said in the newsgroups (usually when someone has been moving from another language to C#) that C# itself only takes a few days to learn….I suspect it would be hard to do it any sort of justice in less than about 700 pages, which is a pretty off-putting size (at least for me).
Compiling an object-oriented “Hello, World” in C# (1, much less 3) carries a huge conceptual load: namespaces, classes, references, the difference between static and instance methods and variables (which, to really understand, requires a digression into the this pointer and v-tables, which opens a can of worms about how the CLR differs from the underlying memory model)…. Think about how many concepts you have to understand just to understand public static void Main(string[] args).
That so many of us learned C# after knowing Java after knowing C++ after knowing C has perpetuated the myth that any of those languages are “learnable in days.” It’s just not so. I used to teach a hugely successful 5-day course on Java, which worked great… as long as the students weren’t COBOL programmers. I imagine I would face the exact same problems trying to teach newcomers coming from, say, ColdFusion or Flash. Of course some people would “get” it, but I guarantee that those people would be the ones who had primed themselves on C-derived syntax and object orientation.
Harry Pierson wonders “Where Have All the SOA Mashups Gone?” Well, this one went well. I’m not sure if it counts as a “mashup” in that all the data I was working with was XML and the interdiction / mashup was programmed in Ruby.
For weeks, I’ve been chewing over this post by Raymond Chen, in which he says:
[T]he real cost of compatibility is not in the hacks. The hacks are small potatoes. Most hacks are just a few lines of code (sometimes as few as zero), so the impact on performance is fairly low.
The idea of a non-backward-compatible version of Windows is something that I’ve mused about (as has Alan Zeichick). I’m not going to pick an argument with Chen, of course, but I wonder if he’s not being a little disingenuous. Even a few lines of code in a core routine can have an effect if they alter cache behavior; okay, that’s niggling… But still, he says a non-compatible version wouldn’t be much faster, yet goes on to say:
[T]he real cost of compatibility is in the design.
If you’re going to design a feature that enhances the window manager in some way, you have to think about how existing programs are going to react to your feature. These are programs that predate your feature and naturally know nothing about it. Does your feature alter the message order? Does it introduce a new point of re-entrancy? Does it cause a function to begin dispatching messages that previously did not? You may be forced to design your feature differently in order to accommodate these concerns. These issues aren’t things you can “take out”; they are inherently part of the feature design.
Well, yeah. But isn’t that kind of like saying “the real cost of compatibility is not how fast you can type in the code; it’s in the work”?
Surely (well, not surely, but surely “likely”) a version of Windows where backwards compatibility was negotiable would have more flexibility for the type of redesign / refactoring that Windows will need for the manycore era. If nothing else, one would intuitively think that the very concept of the Windows message loop (much less message ordering) would become highly problematic when trying to figure out how to exploit many cores.