Archive for April 2006

Laterooms, GC, and Mono

Ouch. A company called Laterooms.com gave up on Mono, saying that its GC was essentially non-existent (maybe because it didn’t pack memory?). Via [Cook Computing]

Predicate Dispatch

Okay, since I’ve spent the whole damn day talking about other people’s languages, let me tell you something that I would give a lot of thought to if I were designing a language. Consider C# 3.0 extension syntax:

static void Foo(this String s) { … }
static void Foo(this Int32 i) { … }

string s = "hello";
int i = 42;
s.Foo(); // resolves to Foo(this String s)
i.Foo(); // resolves to Foo(this Int32 i)

Now consider:

static List Sort(this List where this.Length < 1000000) { … }
static List Sort(this List where this.Length >= 1000000) { … }

Or:

static Double SquareRoot(this Double i where i >= 0) { … }
static Complex SquareRoot(this Double i where i < 0) { … }

Or:

static int Minimum(this TreeNode where this.left.InstanceOf(EmptyNode)) { return this.data; }
static int Minimum(this TreeNode) { return this.left.Minimum(); }

This idea of dispatching not just on the type of this, but on more complex (and dynamic) conditions, is called “predicate dispatch.” Putting aside the specifics of syntax, I think that predicate dispatch is to object dispatch what unit testing is to explicit typing: a more general and more domain-specific alternative that provides more flexibility, although with potentially serious runtime implications. But maybe not as serious as you’d think.
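
C# has nothing like this today, but to make the runtime semantics concrete, here is a minimal sketch of predicate dispatch emulated with an ordered list of guard / body pairs. The PredicateDispatcher type and all of its names are my invention, not any real API:

using System;
using System.Collections.Generic;

// Hypothetical helper: tries each (guard, body) pair in registration
// order and invokes the first body whose guard accepts the argument.
class PredicateDispatcher<T, TResult>
{
    private readonly List<KeyValuePair<Predicate<T>, Func<T, TResult>>> cases =
        new List<KeyValuePair<Predicate<T>, Func<T, TResult>>>();

    public void Add(Predicate<T> guard, Func<T, TResult> body)
    {
        cases.Add(new KeyValuePair<Predicate<T>, Func<T, TResult>>(guard, body));
    }

    public TResult Dispatch(T arg)
    {
        foreach (var c in cases)
            if (c.Key(arg))
                return c.Value(arg);
        throw new InvalidOperationException("No predicate matched");
    }
}

class Demo
{
    static void Main()
    {
        // The SquareRoot example above; both cases return a string here
        // because the hypothetical syntax lets the return types differ.
        var squareRoot = new PredicateDispatcher<double, string>();
        squareRoot.Add(i => i >= 0, i => Math.Sqrt(i).ToString());
        squareRoot.Add(i => i < 0, i => "0 + " + Math.Sqrt(-i) + "i");

        Console.WriteLine(squareRoot.Dispatch(4.0));  // 2
        Console.WriteLine(squareRoot.Dispatch(-4.0)); // 0 + 2i
    }
}

The linear scan of guards on every call is exactly the kind of runtime implication I mean.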

Note the similarity to COmega’s chords:

public class Buffer {
  public async Put(string s);
  public string Get() & Put(string s) { return s; }
}

Calls to Buffer.Get() block until at least one call to Buffer.Put() has concluded, and then Get() actually has access to the s parameter of Put().
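
COmega compiles chords down to state machines over locks. A rough hand-rolled C# approximation of the Buffer chord above (my sketch, not COmega’s actual translation) might look like:

using System.Collections.Generic;
using System.Threading;

// Hand-written approximation of the COmega Buffer chord:
// Get() blocks until some Put() has supplied a string.
public class Buffer
{
    private readonly Queue<string> pending = new Queue<string>();
    private readonly object gate = new object();

    public void Put(string s)    // "async" in COmega: returns immediately
    {
        lock (gate)
        {
            pending.Enqueue(s);
            Monitor.Pulse(gate); // wake one blocked Get()
        }
    }

    public string Get()          // blocks until a Put() has occurred
    {
        lock (gate)
        {
            while (pending.Count == 0)
                Monitor.Wait(gate);
            return pending.Dequeue();
        }
    }
}

The attraction of the chord syntax is that the compiler writes this bookkeeping, and gets it right, for you.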

An abstraction that increases structural flexibility, allows domain-specific flexibility a la unit testing, and might address concurrency? Me likey.

Trends in language syntax

Who said I wasn’t fond of Delphi? Look: Delphi is a great tool. I wish absolutely nothing but the best for DevCo and the Delphi development community.

However…

I have yet to hear an argument that makes me think that the Delphi language is going to experience a renaissance in popularity that will make it anywhere near as influential as, say, Turbo Pascal.

A correspondent who tells me that C# is more a descendant of Delphi than of C++ argues my point for me. Let’s take a look at how Anders Hejlsberg’s languages have differed from their immediate predecessors:

  • Turbo Pascal : IDE
  • Delphi : Visual builders, components
  • J++ : C-style syntax, delegates, declarative interop
  • C# 1.0 : attribute metadata, Common Language Runtime, relaxed exception specificity (no checked exceptions)
  • C# 2.0 : generics, partial classes, anonymous delegates
  • C# 3.0 : type inference, relational operators including projection and selection (i.e., dynamic classes), full closures, extension methods

So if you were to say “trends in Anders Hejlsberg’s work,” I think it would certainly be fair to say that, aside from the obvious switch to a C-style syntax, you also see a trend away from Niklaus Wirth’s philosophy of an explicit nested structuring of programmatic components. It’s absolutely true that Microsoft’s imprimatur is a major part of C#’s popularity, but let’s also give credit to Anders Hejlsberg for having a pretty darn good sense of the market (as both a reflection of what is needed and as a thought-leader in advocating what he believes).

Two more examples of the trend away from structural explicitness: In C# 3.0, there are extension methods, which allow an instance method to be specified independently of the file(s) in which the class was originally declared. For instance, static void Foo(this String s) { … body … } makes it possible to call Foo() on all instances of type String (in other words, string s = “hello”; s.Foo(); becomes legal code).
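
In compilable form (real C# 3.0 additionally requires the extension method to live in a static class), that example is:

using System;

static class StringExtensions
{
    // Extension method: callable as if it were an instance method on any
    // string, though declared entirely outside the String class.
    public static void Foo(this string s)
    {
        Console.WriteLine("Foo called on: " + s);
    }
}

class Demo
{
    static void Main()
    {
        string s = "hello";
        s.Foo(); // resolves, at compile time, to StringExtensions.Foo(s)
    }
}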

Similarly, here’s a simple class in Ruby:

class Foo
  @bar
  def Baz(x)
    @bar = x
  end
end

Simple enough, right? Class Foo has instance variable @bar that’s assigned in method Baz(). But because Ruby has syntax to indicate that a symbol is an instance variable (the @ prefix), the above class can be equivalently declared as:

class Foo
  def Baz(x)
    @bar = x
  end
end

Why is an explicit declaration of instance variables necessary? The instance variable @bar is part of the programmatic structure that is Foo, but its declaration (to the extent that it has one at all) is implicit in the use of that instance variable within methods.

Now, let’s talk about files. In the days when C and Turbo Pascal were very influential, there was a close correspondence between a compilation unit, a file, and a programmatic component. The matching up of symbolic names between compilation units was a big deal — in the world of C, this link stage is often the lengthiest stage of a build. The associated language syntax actively worked against the design-time experience of the programmer, since the programmer has a very dynamic set of interests that span multiple (what today we’d call) namespaces. You still see remnants of this in IDEs today: a parser used to provide Intellisense / code completion is much more complex and fragile than a parser used to generate code (the Intellisense parsers in Visual Studio 2005 seem to be a primary source of bugs reported for that product).

The correspondence between files, compilation units, and programmatic components made a lot of sense in the days before capacious hard drives and RAM. Nowadays, the correspondence makes far less sense: we can keep windows open with however many buffers we need (buffers? Hah! Not only do I date myself, I show how the file / RAM false dichotomy is still part of me). When we search for a symbol, we expect the IDE to resolve it across files and namespaces. The main justification for files today is as units of source control!

Templates / generics and components (especially GUI-related) cracked, within the mainstream, the file / compilation unit / programmatic component correspondence. GUI builders inherently provide a design-time view of structures under construction. We increasingly expect the same type of design-time access to data, XML representations, object graphs, etc.

Why would anyone think that this trend towards increasing design-time views of programmatic components will stop at the class boundary? Surely mainstream languages will evolve to have an increasingly blurry distinction between design-time and run-time programmatic structures.

Smalltalk-80 (as in 1980, mind you) anticipated all of this.

Put aside the Smalltalk syntax and just look at this browser (well, it’s Squeak, but it’s the first image returned by Google). The panes along the top are, I think, self-explanatory and the pane at the bottom is the pane in which you edit. Note the absence of a “project view” or a file / class correspondence. Are they missed? Only a little, in that your classes and code can have a tendency to disappear into the forest of library functions. But in general, this browser remains a view of the mainstream future.

Smalltalk browser

The other relevant aspect of Smalltalk’s model is the non-distinction between program construction and execution. In Smalltalk, the view of your program (whether as part of construction or delivered to the customer) is not necessarily built from first principles when you run a program. During program construction, this means that you can shut things down in the middle of debugging and return at a later date to the exact same situation. When the program is deployed, it means that there’s much more flexibility about how to approach issues like customization.

Any edits to classes made in one view will propagate automatically to other views, not due to machinations of a separate IDE, but due to the fact that the IDE is part of the image in which you are working (you have the flexibility to strip down the image for deployment, if desired).

<pause>

Darn it, I can’t find a picture that communicates how the Workspace / Image work. Perhaps a Smalltalker can offer something in the comments?

</pause>

Although a solid argument can be made that Smalltalk (and Lisp, as well) may enjoy a renaissance based on these types of structural advantages (“Smalltalk is the next big thing: always has been, always will be”), I want to move away from languages entirely and talk about two really important trends that will, undoubtedly, influence future language design: unit testing and concurrency.

A major argument of those who favor implicitly-typed languages is that “unit testing replaces type information,” that double SquareRoot(uint i){ … } has no advantages over def SquareRoot(i){ … } so long as the compilation process includes executing something along the lines of { i = -1; SquareRoot(i); … } and that execution results in the appropriate condition (whether that’s returning NaN or throwing an exception or whatever).

Today, unit tests are invariably developed as components external to the class being tested. The existence or non-existence of a unit test is not reflected in the tested class. The obvious problem is the possibility of absence: not only are non-comprehensive test suites a possibility, it takes considerable effort to ensure the existence of an even trivially comprehensive test suite (simple coverage, much less comprehensive coverage of corner cases). It is inevitable that a metadata-based connection between unit tests and classes under test will arise. I can imagine a language that looks something like:

[Fit(pre = { i = 0 }, post = { result = 0 })]
[Fit(pre = { i = 4 }, post = { result = 2 })]
[Fit(pre = { i = -1 }, post = { throws ArgumentException })]
…etc…
def SquareRoot(i) { … }

Now, my syntax just violated the trend away from structural explicitness, so maybe it would be, instead: Fit(Library.SquareRoot, pre = { i = 0}, post = { result = 0 }), but the upshot would be that the method Library.SquareRoot would always carry the association of its preconditions, postconditions, and invariants. The Eiffel language already has this design-by-contract metadata association, although it doesn’t (I believe) support Fit-style pre-post value pairings.
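
To make that concrete, here is a sketch of how such pre / post metadata could be attached and checked in today’s C# with a custom attribute and a reflective runner. FitAttribute and everything around it is hypothetical, not an existing framework:

using System;
using System.Reflection;

// Hypothetical attribute pairing an input value with its expected result.
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
class FitAttribute : Attribute
{
    public readonly double Pre;
    public readonly double Post;
    public FitAttribute(double pre, double post) { Pre = pre; Post = post; }
}

static class Library
{
    [Fit(0, 0)]
    [Fit(4, 2)]
    public static double SquareRoot(double i)
    {
        if (i < 0) throw new ArgumentException("negative input");
        return Math.Sqrt(i);
    }
}

class FitRunner
{
    static void Main()
    {
        MethodInfo m = typeof(Library).GetMethod("SquareRoot");
        foreach (FitAttribute fit in m.GetCustomAttributes(typeof(FitAttribute), false))
        {
            double result = (double)m.Invoke(null, new object[] { fit.Pre });
            Console.WriteLine("SquareRoot({0}) == {1}: {2}",
                fit.Pre, fit.Post, result == fit.Post);
        }
    }
}

(The throws ArgumentException case is omitted for brevity; a real runner would need a try / catch around the Invoke.)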

I find it amusing when implicit typing / unit-testing advocates say explicit typing doesn’t have benefits. What are unit tests but explicit constraints on the behavior and range of variables? Unit tests are type information. Very detailed, very domain-specific type information. Implicit typers often try to have it both ways: praising in one breath the speed of implicit typing and then speaking of the confidence resulting from a “green bar” comprehensive test suite, but those are two diametrically opposed development scenarios. Developing a comprehensive test suite in an implicitly typed language is faster than developing such a suite in an explicitly typed language, but a thorough test suite takes longer than “finger-typing” explicit type information. (I think there’s a happy medium in a test suite that is comprehensive on publicly visible components and not necessarily so with internally visible components. This is why I hesitate to advocate test-driven development.)

On to concurrency. The coming era of manycore machines is going to hit us like a tsunami. The major impediment to the historical success of Lisp and Smalltalk was relative performance. I’m not at all saying that Lisp and Smalltalk couldn’t be “fast enough” for their programmers, but they always faced scenarios where they consumed many times the resources of code written in lower-level languages (Ruby today faces the same problem: it will be interesting to see if enough Moore’s Law generations have passed that no one notices). In the future, when we get past 2 or 4 cores, performance will depend on concurrency abstractions that must be in the syntax. You can’t library your way out (well, you can, but only in the sense that you need a library plus you have to avoid a bunch of things that are in your language. If you want to know how successful that type of approach is, talk to someone who knows the history of J2EE).

I don’t have a strong opinion about the form of the concurrency abstractions that will succeed, but I guarantee you that by 2016, any language that doesn’t have such abstractions will not be considered professional quality.

For the past 18 months, during which I’ve been writing quite a few columns on trends in language syntax, I’ve been expecting someone to challenge me to explain the state of JavaScript (yeah, yeah: ECMAScript. Blech.): a C-derived syntax, implicit typing, a flexible object model, Web awareness, and essentially universal deployment. By my logic, JavaScript should be much more popular than it is. I gotta admit, it’s a head scratcher. Sure, there’s not a great JavaScript IDE, but to me that’s a cart-before-the-horse thing: if a lot of people were slavering to code JavaScript, a good IDE would have emerged by now.

Further, JavaScript is the language for Flash and, aside from Macromedia’s incompetent marketing, why Flash doesn’t absolutely own the professional “rich Internet application” market is utterly beyond me. Which touches on AJAX. If I were in a debating club, I could say “JavaScript is successful, it’s the language of AJAX, and AJAX is the cool new thing.”

But I don’t think that’s the truth. I don’t think you can look at JavaScript and say, “yeah, in the world of professional programming, that’s a tool that people embrace.” I think people kind of begrudgingly say “Yeah, okay, if we need to do that in the browser I’ll do it in JavaScript,” and then they sigh and check the NBA finals (hey, the Larry O’Brien Trophy!).

Is it the lack of an IDE? The association of JavaScript with the browser and the resulting ugliness of all that <script> tag, single-quote / double-quote stuff? The libraries? The lack of marketing?

Hey, it’s a blog entry. I don’t have to come to any kind of conclusion.

Mort/Elvis/Einstein: The Humpty Dumpty Personae

There’s been a rash of criticism about Microsoft’s Mort/Elvis/Einstein personas. A few months ago, I swiped at M/E/E and triggered some correspondence. Part of that was the surprising lack of results when searching for any document that actually defines the Mort/Elvis/Einstein personas! One certainly justifiable criticism of M/E/E is that they (especially Mort) seem to mean whatever the speaker wishes them to mean, just as for Humpty Dumpty. Does Mort program in more than one language? Is a former C++ programmer who’s now a product manager a “Mort”? Would Mort use a DSL if one were available? I think the answer to these questions is, “whichever answer furthers your argument.”

David Intersimone on DevCo, the viability of Delphi, and Turbo Ruby

I just got off the phone with David Intersimone. My recent SD Times column on “DevCo” (the codename for the spin-off of Borland’s languages and database teams) ruffled some feathers, particularly when I described the Delphi language as “well past its peak, and with its Pascal roots … on the wrong side of trends in syntax.”

To address that line specifically, David I said that was “like saying that BASIC died in 1968…. Languages don’t die unless [language designers] stop innovating in them.” Which is true. But I wasn’t speaking of missing features; I was speaking primarily about the trend away from structural explicitness and secondarily about the prevalence of C-language syntax. Is Delphi something that a lot of people are intrigued by? Not if this treemap of sales of books about programming languages (taken from Tim O’Reilly) reflects broad trends (and I think it does). Perhaps I’m holding Borland / DevCo to too high a standard, but I think it speaks poorly of the language that there are more people reading about VBScript than Delphi. (I take as a given the response that “they don’t read books because they’re too busy making money.” Sure.)

I highly doubt that the addition of features such as generics, closures, or even LINQ to Delphi will be sufficient to cause a resurgence in popularity (although they’ll undoubtedly be welcomed by existing users). A resurgence in the popularity of Lisp or Smalltalk is unlikely, but to my mind either is more likely than a resurgence in the popularity of Delphi. I just don’t see this decade’s market embracing the explicitness of Pascal-like language design. It’s possible to imagine, though, a language that was backwards-compatible with today’s Delphi but which supported looser styles of creating program entities (just as VB’s “option explicit” essentially supports two different philosophical approaches). Such a language is what I meant by a “Delphi-in-name-only.”

We talked briefly about Delphi’s class helper technology, which is exactly the sort of thing that I see as being important to future growth.

The growth of Delphi aside, David I shared several interesting points about the spin-off that may provide some food for thought:

  • “DevCo” consists of the development tools, the database technologies, and some “legacy products”: obviously Delphi, JBuilder, and C++Builder, but also Interbase, JDataStore, Kylix, Turbo Assembler, and some others.
  • They have a .NET embeddable SQL database that they’ve shown but not announced as a product.
  • “DevCo” has licenses for a number of Borland products (Together, RequisitePro, etc.) so that they can continue to sell the IDE in an integrated manner.
  • “Borland Developer Studio” has been the internal name for what the DVD installs. In some places they’ve marketed this as BDS, in other places as Delphi. This was the source of a little confusion on my part in the article, as I thought “Delphi” had become the overarching brand. One way or the other, the name for DevCo’s products has not been determined and may even be determined by a contest among users! (Cute.)
  • David I says that Borland has a vested interest in the success of DevCo, as failure of DevCo would reflect poorly on Borland.
  • The leadership team of DevCo includes Nigel Brown, Michael Swindell, Steve Todd, Alan Bauer, Ben Netick, and David I, plus an internal Board of Directors.
  • They have 3-year roadmaps for Delphi, JBuilder, and Interbase.
  • Sarbanes-Oxley has lots of implications for software and other high-tech companies. It may be that under SOX you cannot add features in an update. (If this is the case, wow.)
  • The size of DevCo will be ~250-300 at the start.
  • DevCo will be headquartered in Scotts Valley; Borland will be HQ’ed in Cupertino. “That’s the plan” (but investors / buyers might overrule).
  • Delphi: generics are coming, and it will eventually support LINQ.
  • David I doesn’t like referring to VCL as cross-platform. He prefers to say “leveraging skills across implementations … .NET is not a platform, it’s a layer on top of Win32.”
  • In response to a question about VCL on the Mac: “We’re looking into that.” It’s not on the roadmap; time will tell. First and foremost is supporting existing customers. With Boot Camp, will people develop in Windows on Mac hardware? wondered David I.
  • He “wouldn’t be surprised” if DevCo eventually supported more languages than are currently supported. (He didn’t bite hard on my “Turbo Ruby” enthusiasms.) He mentioned IntraBuilder, a before-its-time JavaScript-and-data tool: “Some of that DNA is still in DevCo.”
  • He sees a “healthy market” for “the JBuilder experience,” no matter what technology that experience is built on (originally, JBuilder was built on Delphi, then on PrimeTime, and in the future, on Eclipse).

We talked a little bit about the tension between spending resources on languages / IDEs and on databases. Traditionally, this has been a problem for Borland. David I is more on the languages side and didn’t really have an opinion on whether the world was looking for a new dBase / Paradox.

Finally, we heartily agreed that “developers matter” and that there were golden opportunities for bold companies producing great development tools.

Fucking Shit

We just found out that the fucking margins on Tina’s second fucking lumpectomy aren’t clear, meaning that she either has to get another fucking lumpectomy or a fucking mastectomy. Total fucking shit. And today’s our fourteenth fucking wedding anniversary.

This is the type of situation for which I keep swearing in reserve.

Learning to Program

John Montgomery wonders what would be good non-traditional ways to learn to program (where “traditional” == “text-based tutorial”). This is a subject dear to my heart and I started to write a post, but it looks like that’s turning into an article. So here I’ll just make the observation that the expectation of what is intriguing / cool about computers has changed dramatically in the past 20 years, and this has created a greater tension between opposing desires: the desire to give power to the student, and the desire to teach the key to computing power, which is the utter plasticity of Turing machines.

LaPlante leaving Microsoft, Replaced by Outsider Andrew Kass

Rick LaPlante, who was largely responsible for Microsoft’s strategic embrace of Application Lifecycle Management and the “super-sizing” of the IDE into VSTS, is leaving Microsoft and turning over the keys to Andrew Kass. Kass is most recently SVP of Product Development at an Atlanta company called S1 that “delivers customer interaction software for financial and payment services.”

Kass has also held positions with Oracle, PeopleSoft and Living.com. As far as I can tell, he’s basically an expert in CRM development. He got some notice for writing a caching mechanism for ATG Dynamo, presumably in Java.

I don’t know Kass, so it’s hard to say how much of a change this represents, but certainly it seems like a significant shakeup.

Sun Pops Stack: Turnaround Unlikely

Scott McNealy, co-founder and long-time CEO of Sun, is stepping aside in the wake of a miserable financial quarter, elevating Jonathan Schwartz to the top spot at the company. “Stepping aside” may be a euphemism for “shown the door” by a frustrated board, but my take is that, directionally, there doesn’t appear to be any shakeup: Schwartz and Papadopoulos move up a level and that’s about it. Alan Zeichick suggests that McNealy’s exit may foreshadow a sale. Maybe so, although I have a hard time figuring out for whom Sun would be a good investment.

Sun’s always been difficult to parse. There have been two eras when I thought they were going to take over the world (the early 90s and the late 90s) and other eras where I thought they were entirely irrelevant. With IBM clearly the most influential corporate entity in the Java world, now is one of those “irrelevant” periods. On the other hand, Sun has some brilliant people working for them, the type of people who can create industry-changing technologies. I thought that clockless CPUs might be that type of technology for Sun, but it was AMD, not Sun, that emerged as Intel’s major competitor (and, incidentally, showed yet again how conservative the market is about CPU instruction sets).

Notes on Sharks, Written While Waiting for My Wife to Emerge from Cancer Surgery

With sharks, there is no theme music. Between Hollywood and The Discovery Channel, you can become familiar with sharks: their grace, their lethality, their cartilaginous skeletal systems. An actual encounter, as far as detail or knowledge goes, is largely superfluous. The sinuous way they move and the apparent lack of effort in currents that force others to retreat, the frame of open ocean around the shark — perhaps you don’t get that without getting in the water.

The major impression one gets from a shark encounter is, in fact, that there is no background music, no Mantovani by way of John Williams, no escalating narrative. No overarching theme of death, or science, or stalking. There’s just, suddenly, a shark — more often, two — invariably closer than one expects and moving in utter silence. Not that SCUBA diving itself is quiet; the bubbles from your regulator are surprisingly loud. But fish — even large, deadly fish — move in silence. This is, perhaps, the most unnerving aspect of a shark encounter. Not the confrontation with a dangerous animal, but the appearance, so close, with no announcement. It is a failure of our senses and this, more than the vanishingly small potential of a bite, causes us fear.

Once, in the Sea of Cortez, my wife and I were diving in the open ocean off a seamount. It was late summer and the water was thick with plankton. We were deep, too, with no reference to the bottom and a plankton layer above us that made the surface invisible. It was as dark, I suppose, as evening.

Afterwards, we both described the same experience. The wall of hammerheads, hundreds strong, as far up, as far down, as far left and right as could be seen. We had both seen it for some seconds before it sorted out in our minds. And then in four or five breaths — the metronome of diving — each hammerhead in the wall changed direction, turned its tail to us, and moved effortlessly into the dark. We never saw the school again.

Another time, we were surfacing at the end of a dive — again, a deep dive far from shore — and again we had lost the bottom. This time it was a current and poor air management that had forced us to surface in blue water, with no references and far, far from the boat. We were shallow, perhaps fifteen feet below the surface, dawdling for the mandatory adjustment of partial pressures necessary to avoid “the bends.” I don’t remember why, I certainly don’t remember any “sense of being watched” or rising hairs, but I suddenly twisted around very fast. A shark was swimming towards me, with deliberate intention.

Since then, I’ve been menaced by sharks three times (the dive boat joke is to parse the difference between “threatened” and “menaced,” with the punch line being that “menaced means: regardless of the facts, I was scared”). Each time, the eventual turning away, filled with grace and disinterest, was the most fearsome moment. The shark turns and swims away, no faster, no slower, with no acknowledgement of the current that pulls you. At some point, the shark has been gone from sight for some breaths and your mind sorts it out and there’s again only the frame of the blue ocean.
