Smalltalkers sneer at my LINQ-thusiasm

James Robertson thinks that I’m too breathless about LINQ in my recent article about C#’s popularity on the CLR. For some reason, I can’t post to his comment section, so I’ll just respond here and shoot him a trackback:

What got through the asbestos was the comment that I “confused s-expressions with function pointers.” C’mon, be fair to the context. I think that paragraph is pretty good for, what, 80 words, at making the point about code-data equivalence while trying to acknowledge languages like LISP and Smalltalk. Feel free to abhor C#, my code, or my conclusions, but please don’t ignore the fact that I’m one of the few industry analysts who bends over backwards to talk about languages outside the C family.

As to the issues of type, perhaps my statement on explicit-implicit vs. static-dynamic was not as clear as it could be. (Clear or not, though, I note that some commenters taking potshots at my accuracy perpetuate the static-dynamic confusion on their own blogs.) I think James took my point in his OP, where he acknowledges that the mainstream has voted for explicitness. He then goes on to (essentially) say “Look how much verbiage results!” Further, LINQ introduces type inference, but you still have to (finger-)type these as var. Implicit-typers, feel free to roll your eyes. However, in contrast with what I take to be the popular sentiment [@ James’ blog], I don’t believe implicit vs. explicit typing plays the central role in language popularity. Some role, yes, but not nearly as dominant in practice as it’s presented by advocates.
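To make the point concrete, here’s a minimal sketch of the kind of local type inference I mean (the names are illustrative, not from the article): the compiler infers every type, but you still have to finger-type the var keyword.

```csharp
using System;
using System.Linq;

class VarDemo
{
    static void Main()
    {
        // The compiler infers both types, yet 'var' must still be typed:
        var numbers = new[] { 1, 2, 3, 4 };   // inferred as int[]
        var evens = from n in numbers
                    where n % 2 == 0
                    select n;                 // inferred as IEnumerable<int>
        Console.WriteLine(string.Join(",", evens));   // prints "2,4"
    }
}
```

So the typing is implicit in the static-typing sense, while the declaration remains explicit on the page, which is exactly why the explicit-implicit and static-dynamic axes shouldn’t be conflated.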

The article is about why C# is the most popular language for the CLR. I argued that there were three main issues: the popularity of the C language family, the evolution of the CLR from a technology with relatively narrow goals to one that aims to be a platform for broad development, and the role of Anders Hejlsberg in the evolution of the CLR. I suggested that the role of Hejlsberg is the most intriguing, because one can clearly see an alignment between his interests and opinions and the evolution, not just of C#, but of the CLR. Therefore, I would think fans of languages other than C# would do well to pay attention to Hejlsberg’s latest work, precisely because it likely foreshadows the evolution of a very important platform.

Finally, the comment that the industry “blindly stumbled into” a preference for the C family is very disappointing. I hope Patrick Logan isn’t so tired of fighting the good fight that this is the best he can muster. C# is all of 5 years old, Java 10; the old excuse that these are languages of Moore’s-Law generations in the distant past has entirely lost its persuasiveness.

A question for those who believe explicit-implicit typing is central to language popularity: on the CLR, why has C# grown to be more popular than VB?

Origami: Textbook viral marketing

Apparently, “it’s been pretty much established as fact that Origami is a new ‘ultramobile lifestyle PC’ from Microsoft.”

This fits a couple of other data points, but we’ll just have to see. One way or the other, though, whoever organized this “mystery” sure earned their pay. I suspect that 90% of the bloggers linking to the Origami project knew the truth.

What this reminds me of is the yearly pre-MacWorld speculation. It’s been the type of thing Microsoft has not, traditionally, been very good at.

On the other hand, if it turns out the Origami project is a COBOL compiler, a lot of people will be disappointed.

The myth of better programming languages

I like Andy Hunt, and I like Ruby, but his post perpetuates a myth that I think is harmful. He says of Ruby, “First, more than any other language I’ve used, it stays out of your way. You don’t have to spend any effort ‘satisfying the compiler’…. I can type in an absurd amount of code in Ruby and have it work the first time. Not 2-3 passes …. It just works.”

What I object to is a small thing: his use of the word “you” in the first sentence (“out of your way…You don’t…”). I have no problem with the second part of the quote (“I can type…”). The myth is that programmers share a psychology and that, therefore, the tool that “fits” my mind will “fit” your mind if only you give it a try. Now, that may prove to be the case for particular instances of “I” and “you,” and I applaud reasoned advocacy of whatever-the-heck you love in life, but I’ve come to believe that there is not a shared psychology of computer programming.

Please don’t reduce my point to Microsoft’s insulting “Mort, Elvis, Einstein” scheme, which combines the same sweeping generality I’m condemning with a heap of condescension (do you think anyone working in Redmond classifies themselves as “Mort”?). I’m talking about programmers perfectly capable of tackling the same problems with the same productivity.

What I’m suggesting is that there is not a “best” programming language, nor perhaps are there even “better” programming languages once you get beyond a certain level of functionality. Certain programming languages are better at certain tasks, without a doubt (If you want to scrape a Web page, use a language in the Perl family. If you want to keep your CPU toasty-warm, use C++ and assembler).

Are there languages that “stay out of [my] way,” and in which my code “works the first time…It just works”? Absolutely. In my career, I’ve felt that way about Basic, Prolog, PAL, C++, Java, and, lately, C# (although the other day I ran into an anonymous-delegate construct that I still can’t parse). I used to write lengthy Prolog programs on the bus and type them in when I got to work. For a brief while, I thought that would be true for everyone, if only they gave the language a chance. In retrospect, I was fortunate to fall for a language that so few loved.

Lisp is the king of languages touted as “if only you gave it a chance.” But what Lisp advocates fail to acknowledge is that Lisp was given a chance by virtually everyone exposed to computer science in the 70s and 80s. Lisp and Basic are the most abandoned languages in the field. Lisp was the second or third language I learned (after Basic, and pretty much simultaneously with Fortran) and I worked with it professionally in the late 80s and early 90s. Back then, it didn’t fit my psychology. Now, though, in the Jolt Awards, my vote for best book of the year will go to Peter Seibel’s “Practical Common Lisp” and I’ve caught myself thinking about burying a Lisp interpreter in an upcoming project.

Ruby is the belle of the ball currently, largely because of Rails. It’s a recurring theme in programming language popularity that the differences between languages, which are very real, are masked by libraries and toolsets. Smalltalk, for instance, may or may not “fit” your mind, but the Smalltalk browser and workspace were, without a doubt, years beyond other IDEs. Ruby may or may not “fit” your mind, but Rails is without a doubt the most influential framework in several years. Java brought unit-testing into the mainstream, VB brought GUI builders, etc.

The shame is that when advocates conflate the benefits of their libraries and tools with the psychological aspects of their language, they focus attention incorrectly. What you get is “Hey, let’s port Rails to .NET,” or whatever, when what is needed is more discussion of what makes Ruby “fit” in certain approaches. Especially frustrating is that we don’t even have a decent vocabulary for discussing language differences. People talk about “dynamic languages” and that, to many, means “dynamic typing,” which, to many, means “implicit typing.” And implicit vs. explicit typing so dominates the discussion of programming languages that IT MAKES ME WANT TO F***ING SHOOT MYSELF!!!!! THERE ARE OTHER ISSUES, PEOPLE!!!!!!

Okay, sorry.

But, for instance, Prolog relies on sequences of predicates: true, true, false … oh, back up, try something else … true, true, true, etc. Is that something that “fits” your mind or not? I haven’t tried any languages that implement predicate dispatch, but I’m thoroughly convinced such languages would appeal to me.
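The true-true-false-back-up flow can be sketched in C# — this is a toy illustration, not real Prolog, and the “each value must exceed the last” goal is invented purely for the example:

```csharp
using System;
using System.Collections.Generic;

class BacktrackDemo
{
    // Prolog-style search in miniature: try candidate bindings in order;
    // when a predicate comes up false, back up and try something else.
    static bool Solve(List<int> bound, int depth)
    {
        if (depth == 3) return true;               // all goals satisfied
        foreach (var candidate in new[] { 1, 2, 3, 4 })
        {
            bound.Add(candidate);
            // predicate: each value must be strictly larger than the last
            bool ok = depth == 0 || candidate > bound[depth - 1];
            if (ok && Solve(bound, depth + 1)) return true;
            bound.RemoveAt(bound.Count - 1);       // false -> back up
        }
        return false;                              // exhausted: fail upward
    }

    static void Main()
    {
        var bound = new List<int>();
        Solve(bound, 0);
        Console.WriteLine(string.Join(",", bound));   // prints "1,2,3"
    }
}
```

In Prolog the backtracking machinery is the language, not twenty lines of bookkeeping, which is exactly the kind of “fit” question that has nothing to do with implicit vs. explicit typing.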

Another example: As I’ve been reinvestigating Lisp, I’ve found myself actively liking Lots of Irritating Superfluous Parentheses. Now, I’m like “Yeah, why should I type a ‘)’ to end a call, a ‘;’ to end a statement, and a ‘}’ to end a block? They’re not ‘superfluous’; there are rarely more parentheses than there would be operators-or-punctuators.” But is seeing distinct delimiters important to how quickly you grasp structure? (For me, I think a major reason for my change in attitude is that after 17 years of OO, flow-control in my programs is now much more governed by the structure of the object-graph, not the values of local variables.)

[…Wow. I didn’t intend to rant like this… Okay, finishing it off abruptly…]

The important thing is to realize that different languages engage your mind in different ways. There are languages that you will find allow you to be profoundly more productive solving certain problems. Search for those languages, and do not conflate tool and environmental benefits (equally interesting, equally worth pursuing) with language benefits. Also realize that your own mental approach to programming is always evolving and may change the way that a certain language strikes you.

Okay. Off to walk the dog.

Refactoring vs. Procedural Code: Would you refactor this?

When my application starts, it sets in motion a process by which an event is received back at the main form. It’s an event that’s common enough (let’s say, “Maximize Window”) but the first time I receive it, I have to process it in a special manner. So, I have code like this:

class MyMainForm {
    // ... big long complex code ...

    bool firstTimeThroughEventHandler = true;

    void MyEventHandler(object o, EventArgs e) {
        lock (this) {
            if (firstTimeThroughEventHandler == true) {
                firstTimeThroughEventHandler = false;
                // ... one time code ...
            }
        }
    }

    // ... lots more code ...
}

So this is just begging for refactoring, right? You have one-time behavior, an instance variable used as a flag, a (slight) performance hit that is not needed 99.9% of the time, etc.

But the thing is, I can’t shake the feeling that anything I do to refactor it will make the code less clear. Of course I can refactor a new class to hold the event handler, and attach and detach it to the event, and, y’know, yeah, fine. But, geez, adding a class to a project introduces a mental burden, too, one that seems to me at least as great as “oh, that variable’s a flag that’s used by that event handler. It doesn’t mean anything anywhere else.”
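For the record, the self-detaching alternative I’m resisting looks something like this. The event name and wiring are hypothetical, and note that it drops the lock, which is only safe if the event is always raised on a single (UI) thread; the counter exists just to make the one-time behavior visible.

```csharp
using System;

class MyMainForm
{
    // Hypothetical event standing in for whatever the real form receives.
    public event EventHandler Maximized;

    public int OneTimeRuns;   // instrumentation so the effect is visible

    public MyMainForm()
    {
        Maximized += FirstTimeHandler;   // attach the one-shot handler
    }

    void FirstTimeHandler(object o, EventArgs e)
    {
        Maximized -= FirstTimeHandler;   // detach itself: no flag, no lock
        OneTimeRuns++;                   // ... one time code goes here ...
    }

    public void RaiseMaximized()
    {
        Maximized?.Invoke(this, EventArgs.Empty);
    }
}

class Demo
{
    static void Main()
    {
        var form = new MyMainForm();
        form.RaiseMaximized();
        form.RaiseMaximized();
        Console.WriteLine(form.OneTimeRuns);   // prints "1"
    }
}
```

It eliminates the flag and the per-event lock, but at the cost of spreading the one-time logic across the constructor and a handler, which is precisely the mental-burden trade-off I’m weighing.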

Also, note that I don’t have any other behavior in the event handler. If it were “if (firstTimeThrough) { code } else { other code }”, the decision to refactor would be easy. But this is trivial cyclomatic complexity: 1 entry point, 1 decision, 1 exit point.

What do you think?

Novell a good fit for ex-Borland Delphi Group

Of all the scenarios being mooted on the purchase of Borland’s line of compilers and IDEs, the one that I like the most is Novell.

Novell’s Mono project includes an implementation of the .NET CLI that runs on Windows, Linux, OS X, and a number of UNIXes. The Delphi group has an IDE, compilers and compiler expertise (including top-quality optimization), and experience with VM development. Technology-wise, the two groups are highly complementary.

The only thing I wonder is whether Novell has the fire in the belly to recognize that now is a viable time for an alternative to the Visual Studio-Eclipse polarity. Few people realize that the window on the next phase of development is opening now. We’re entering a disruptive phase in software development (ref. Windows 3.1 and the rise of OO, the WWW and the rise of Java, the dot-com crash and the rise of agile techniques). The change in programming context to multicore will cause the next such disruption.