Knowing .NET Sometimes a coder who writes, sometimes a writer who codes. Formerly, a coder who edited magazines.

January 31, 2007

Service-Oriented Systems That Actually Do Something

Filed under: Uncategorized — admin @ 9:36 am

Sam Gentile says:

[W]hen people bitch about WS-*, I don’t get how its not obvious that “the main characteristics of Web services is communication over unreliable communication channels such as the Internet employing unreliable data transfer protocols such as HTTP, SMTP and FTP” and many of us need things like WS-RM and other standards to build real service-oriented systems that actually do something.

Since I’ve stopped being polite about WS-*, I’ll bite:

It’s just not the case that “real service-oriented systems that actually do something” require the WS-* stack. Very reliable, very scalable, very large systems accomplish transactions over Internet protocols without using WS-RM. iTunes has now executed over a billion transactions without using WS-RM. And, subjectively, while Internet protocols are technically unreliable, they are reliable enough for me to order books from Amazon, track packages via FedEx, send my articles and invoices via email, etc. Professionally, they’re reliable enough for a POX system I architected to transact ~$100M a year in airline tickets.

WS-* advocates will probably say that the transactions involved in the examples cited above are relatively simple — high volume, perhaps, but simple. “Financial services! Trading partners! Supply chains!” they will say. It is to these which, for years, I gave a “To be sure…” exemption. “REST works in most situations,” I would say, “Although, to be sure…” and then I would capitulate. Perhaps with a high-enough volume, or a large-enough amount of money, or a time-sensitive enough clock, or a complex-enough transaction, one needed WS-*.

I no longer concede that. I say that, six or seven years into the Web Services era, the onus is on the WS-* advocates to prove the need, because the advocates of KISS approaches have, I think, amply demonstrated the viability of their approaches.

Further, let’s posit for the sake of argument that reliability and re-ordering may be problems. I say that, in both those cases, the solution lies in higher, not lower, abstraction levels. If reliability is a problem, implement some form of visible ACK/NACK functionality (if you think about it, the idiom of “shopping cart checkout” that has evolved involves just such a higher-level ACK/NACK: “Press submit to finalize,” “Here is the page acknowledging your order; an email has been sent to you acknowledging the order as well.”). If reordering is a problem, first check for excessive conversational state, and second, put a frackin’ messageOrder element in your XML. Visible, visible, visible.
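To make the "higher abstraction level" point concrete, here is a minimal sketch of application-level reordering with a visible per-message ACK. Everything here is invented for illustration — the `message_order` field name, the class, the hash shape — it is not any WS spec or real protocol, just the kind of thing a messageOrder element in your XML would drive:

```ruby
# Hypothetical sketch: application-level reordering driven by a plain
# messageOrder-style field, instead of relying on WS-RM. All names invented.
class OrderedReceiver
  def initialize
    @next_expected = 1
    @buffer = {}      # messages that arrived out of order, keyed by sequence
    @processed = []
  end

  # Each message is a hash like { message_order: 2, body: "..." }.
  # Returns a visible ACK the sender can check, message by message.
  def receive(msg)
    @buffer[msg[:message_order]] = msg[:body]
    # Drain the buffer in order as gaps fill in.
    while @buffer.key?(@next_expected)
      @processed << @buffer.delete(@next_expected)
      @next_expected += 1
    end
    { ack: msg[:message_order] }
  end

  attr_reader :processed
end
```

The point is that the mechanism is entirely visible: the sender can see which sequence numbers were acknowledged and resend the rest, with no opaque vendor stack in between.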

Would it be better to have such functionality “for free” in a library or tool? Would such functionality be burdensome and error-prone? Sure, why not? All programming is burdensome and error-prone. But I assert that the odds of being stumped by a problem are lower in a Keep It Simple Stupid, highly-visible, higher-abstraction-level solution than they are in a WS-* architecture involving more than one vendor’s service stack.

Further, I would argue that WS-* is unlikely to ultimately triumph for the very reason that it’s attempting to inject a low-abstraction-level layer between a (posited-for-the-sake-of-argument) insufficient infrastructure and the business-programming domain. That may work for a single vendor, with a unified analysis of the supposed shortcomings of the infrastructure and the business-programming domain. But with multiple vendors, who not only don’t share a single view, but whose view of the business-programming domain is inherently biased by their commercial interests, the confusion and slow progress that has characterized WS-* is more likely than not to continue.

January 30, 2007

Turing Award Recipient Jim Gray Missing At Sea

Filed under: Offtopic — admin @ 8:24 am

Jim Gray, who did fundamental work on transaction processing and won the Turing Award, is missing off California’s Farallon Islands. The good news is that weather has been good and he was sailing in a 40′ yacht, which ought to provide ample shelter for a few days. The bad news is that he was sailing alone and the ocean there can be nasty (cold, choppy, etc.).

Vista Install Problems

Filed under: Uncategorized — admin @ 8:06 am

I lag behind in this brave new era.

I’ve been running Vista in VMWare virtual machines and having an acceptable, but not good, experience. No glass, no NUMA (one of the few interesting APIs targeting concurrency), performance less than stellar.

However, with the time at hand to install Vista to the actual boot disk, I am stymied. I have a Tyan K8W S2885, an uncommon-to-rare motherboard with an SSI EEB 3.0 form factor (12″ x 13″) that is pretty dang tight once heatsinks and cables are attached.

Vista informs me that I need to get a driver for the “Primary AMD IDE Channel” which confuses even my friends at AMD.

January 29, 2007

Ruby In Steel’s Optional Type Assertions

Filed under: Uncategorized — admin @ 10:36 am

In order to provide Intellisense for Ruby, a language that does not have explicit typing, Ruby In Steel turns to type inference. The built-in inferencing can be aided by adding type assertions to a function, for instance:

#:return: => nil
#:arg: c => String
def Bar(c)
  @field = c
  puts @field
end

The type assertion block can be automatically added by typing “##” in the line above the function/method declaration (it fills in the type with “Object” to start). I’m a proponent of explicit typing in non-trivial projects so this is potentially a big deal to me. What I need, though (consider this a feature request, SapphireSteel) is some form of FxCop-like reporting / enforcement of type assertion “coverage.”

That is, I would like to enforce a business rule “Ruby programs longer than 500 lines must have type assertions on all functions.” To me, this would be a win-win: you can develop as fast-and-loose as you want, but if you want to check code into a team project, you have to add type information (which, in my mind, is extremely important to the dominant task of understanding code).
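No such tool exists, as far as I know, so consider this a hypothetical sketch of what the coverage check might look like. Only the `#:arg:`/`#:return:` comment convention comes from Ruby In Steel; the 500-line threshold, the method name, and the reporting shape are all my own invention:

```ruby
# Hypothetical sketch of an FxCop-style type-assertion "coverage" check.
# Only the #:arg:/#:return: comment convention is Ruby In Steel's; the
# rule, threshold, and names here are invented for illustration.
LINE_THRESHOLD = 500

def assertion_coverage(source)
  lines = source.lines
  defs = 0
  annotated = 0
  lines.each_with_index do |line, i|
    next unless line.strip.start_with?("def ")
    defs += 1
    # Look upward past blank lines for a type-assertion comment block.
    j = i - 1
    j -= 1 while j >= 0 && lines[j].strip.empty?
    annotated += 1 if j >= 0 && lines[j].strip.start_with?("#:")
  end
  { defs: defs, annotated: annotated,
    passes: lines.size <= LINE_THRESHOLD || annotated == defs }
end
```

Run over a checkin, a report like `{ defs: 40, annotated: 31, passes: false }` would be enough to bounce the commit — fast-and-loose locally, typed at the team boundary.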

To be sure, in my experience the “DocComments” facility in VS/C# (typing “///” triggers a documentation block for the parameters) is widely ignored and FxCop enforcement is resented, but I think documenting parameters in a strongly typed language often seems gratuitous (“string firstName: a string representing the first name” and so forth), while I think everyone admits that type information is helpful for comprehension.

First Look: Ruby In Steel

Filed under: Languages/Ruby — admin @ 9:36 am

Here’s the Ruby In Steel editing / debugging experience. Intellisense works dynamically — as soon as you define a function, it becomes available to Intellisense. The debugging experience seems to be the standard VS one (that is, pretty darned good).

REPL functionality is provided by IRB in a console window: not ideal, but convenient. There’s quick access to Rails, Rake, and Gems (see the second screenshot).

The install silently guessed wrong on my Ruby install location, which caused my very, very first “puts 2+2” to fail, but it was easy enough to guess that the issue could be fixed under Tools | Options…

So far, so good: more to come as experiences develop. I look forward to putting this head-to-head against Komodo.


January 28, 2007

I’m Greg Benford — Nice!

Filed under: Offtopic — admin @ 3:30 pm
I am:
Gregory Benford

A master literary stylist who is also a working scientist.

Which science fiction writer are you?

January 27, 2007

Software Productivity: The Only Two Things That Matter

Filed under: Uncategorized — admin @ 9:28 am

Joel Spolsky’s review of Dreaming in Code makes the point that Chandler is yet another high-quality data point showing that, contrary to the initial exhortations, Open Source is not a significantly-more-productive development methodology. It turns out that Open Source is an interesting business model (somewhat to my surprise) and that free-as-in-beer is a killer competitive strategy (Eclipse or, for that matter, IE; not surprising to anyone).

This is not to bash Chandler’s ultimate deliverable: the Mozilla project would be a similar data point, and Firefox is a great piece of software (and my choice of browser). OSS can certainly be high-quality (Apache being another exemplar). But at this point it’s clear that open-source development is not inherently fast. Joel fingers lack of analysis and design as Chandler’s shortcoming, but veterans (should) know that promoting A&D as inherently speedy is laughable.

I’m all for spending vast amounts of energy debating the incremental issues of languages, tools, development methodologies, design paradigms, and so forth, but let’s be clear that of all the things we know about software development, there are only two things that we know to be inherently highly productive:

  • Well-treated talented programmers; and
  • Iterative development incorporating client feedback

IDEs are Noise Compared to Version Control, Build System, and Bugtracking

Filed under: Languages — admin @ 8:45 am

I was struck by the statement “the version control system is a first order effect on software, along with two others – the build system and the bugtracker. Those choices impact absolutely everything else. Things like IDEs, by comparison, don’t matter at all,” in a post by Bill de hÓra. It’s not 100% true: innovations like integrated debugging (Turbo Pascal?), refactoring (IDEA), and the Smalltalk browser can be enormous. But there is certainly more than a grain of truth to it.

IT Windfall from Vista == Consumer Costs

Filed under: Uncategorized — admin @ 8:36 am

Alan Zeichick points out the absurdity of the position, touted by Microsoft, that Windows Vista will “generate” $10 billion in new revenue for the California IT industry this year. Alan observes that IT revenue means that “someone else’s costs have to go up” and that it’s perverse to “celebrate a software update when its creator boasts that it will increase the cost of IT.” Good for those who charge for technology per se, but bad for dentists, manufacturers, schools, restaurants, and others in the large majority of businesses that do not directly profit from work in IT.

January 26, 2007

Caffeinated Donut == 2 Cups of Coffee: American Ingenuity in the 21st Century

Filed under: Offtopic — admin @ 9:08 am

There are those who say that America’s time is past. That this great country, this bastion of ingenuity, has lost its spark, its insight, its entrepreneurial spirit. To those people, I give molecular biologist Robert Bohannon, who has figured out how to mask the bitterness of caffeine in pastry, opening the way to The Buzz Donut.

I think this is a complete answer to the question of America’s greatness in the 21st century.
