I’ve recently been working with a client who uses a very well-known Web-tier library (which shall remain anonymous for certain reasons). Under heavy load, their system started responding terribly, and they asked me to take a look.
To make a long story short, the library has a race condition in the assignment of a Web session’s state to the activated code-handling block. Under heavy concurrent load, a handler may be activated with a null session (and, delightfully, a few microseconds later another handler may find that session state assigned to it). Not surprisingly, my client’s code can’t do anything logical with a null session and throws an exception. Worse, though, the library thread, which is pooled, becomes permanently corrupted.
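A minimal sketch of the failure mode, not the real library’s code: a pooled handler gets activated before its session field is assigned. The `PooledHandler` name is invented, and the bad interleaving (which really involved two threads) is replayed deterministically for illustration:

```python
class PooledHandler:
    """Toy model: session state is assigned separately from activation."""
    def __init__(self):
        self.session = None  # the dispatcher is supposed to fill this in

    def handle(self):
        if self.session is None:
            raise RuntimeError("activated with null session")
        return f"served {self.session}"

h = PooledHandler()

# The bad interleaving: the handler runs before the dispatcher's
# session assignment lands.
try:
    h.handle()
except RuntimeError as e:
    print(e)              # activated with null session

h.session = "user-42"     # "a few microseconds later" the assignment arrives
print(h.handle())         # served user-42, but the first request already failed
```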
So once the first failure happens and a thread is corrupted, the load on the remaining threads increases, making their failure more likely. However, as the thread-pool corruption grows, the overall application spends more of its time in states away from the race condition, requests start to queue, and the system limps along (kind of).
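The feedback loop is easy to see with back-of-the-envelope arithmetic. The pool size and request rate here are made up, but the shape is the point: each corrupted thread pushes more load onto the survivors.

```python
# Invented numbers: a 10-thread pool serving 100 requests/second total.
pool_size, total_rps = 10, 100

for corrupted in range(6):
    healthy = pool_size - corrupted
    per_thread = total_rps / healthy   # load shifts onto healthy threads
    print(f"{corrupted} corrupted -> {per_thread:.1f} req/s per healthy thread")
```

At zero corrupted threads each thread carries 10 req/s; by five it’s 20 req/s, doubling the pressure that makes the race more likely in the first place.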
One of the reasons why the company couldn’t figure out what was going on was that when they turned Debug logging on, the serialization of debugging statements to the logfile inadvertently throttled the system so that the deadly race condition didn’t occur.
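Why Debug logging masked the bug: loggers typically serialize their writes through a shared lock, so every handler thread queues up at the log call and the racy window rarely lines up. A toy sketch of the effect, with invented names and the actual file write elided:

```python
import threading

log_lock = threading.Lock()       # appenders commonly serialize like this

def debug_log(msg):
    with log_lock:                # every thread queues here...
        pass                      # (write to the log file elided)

shared = {"session": None}

def activate(value, debug_enabled):
    if debug_enabled:
        debug_log("activating")   # ...which throttles the threads
    shared["session"] = value     # and shrinks the race window
    return shared["session"]
```

Note that the lock only changes the timing; it does nothing to make the session assignment atomic with activation, which is why this hides the bug rather than fixing it.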
So, after figuring out all this, my first recommendation was to change the use of the Web-tier library. In the meantime, though, I added a few logging calls at the “Info” log level, gave them a graphical “dashboard” showing the system at that level, and tested the system to data-tier saturation without triggering the race condition! (Oh, and I gave them a very detailed explanation of why they shouldn’t view the “synchronization by logging statement” trick as a fix!)
Mitch Walker provides an excellent screencast showing the use of components within XNA GSE. However, looking at it I kept thinking, “Shouldn’t this be a domain-specific language?” I have to be careful here because, obviously, drag-and-drop designers have proven to be successful. But using the design surface as nothing but a bag for instances and the Properties window as a declarative manipulator… I’m just not sure that’s any clearer (and it certainly seems limited in flexibility) than a DSL.
This morning, even before coming across the screencast, I was thinking about the tension on this blog between: concurrency (the issue that I think is going to come to dominate professional programming), Ruby (a language which I think is coming to influence the mainstream), and Domain-Specific Languages (a technique that’s one of the “lost treasures” of the 70s-80s programming era).
One of the most regrettable things about current mainstream languages is that exploring language possibilities is a huge task: you want to explore “what if sprites were first-class” and you have to start by defining whitespace and digit tokens. One of the reasons “Little Languages”/DSLs are not as common an approach as they were in the 70s and 80s is because, in those days, whitespace and digits were a lot closer to the problem domains! Dealing with weird character sets, packed data, and custom binary representations was a very customary part of problem-solving. Nowadays, those types of issues are rarely at the forefront.
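For contrast, here’s a hedged sketch of how cheap the “what if sprites were first-class” experiment becomes when you piggyback on a host language instead of defining tokens: an internal mini-language riding on the host grammar. The `Sprite` class and its chained `at`/`moving` vocabulary are invented for illustration:

```python
class Sprite:
    """Toy 'sprites are first-class' vocabulary, no tokenizer required."""
    def __init__(self, name):
        self.name, self.pos, self.vel = name, (0, 0), (0, 0)

    def at(self, x, y):             # declare a starting position
        self.pos = (x, y)
        return self

    def moving(self, dx, dy):       # declare a velocity
        self.vel = (dx, dy)
        return self

    def step(self):                 # advance one tick
        self.pos = (self.pos[0] + self.vel[0], self.pos[1] + self.vel[1])

ship = Sprite("ship").at(10, 10).moving(1, 0)
ship.step()
print(ship.pos)  # (11, 10)
```

The whole “language” is a class and some chained methods; whitespace, digits, and parsing come free from the host, which is exactly the leverage the old Little Languages had to build by hand.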
This put me in mind of this post on Modular Compilers and, even more so, LISP macros. Code generation goes a little way, but my big problem with tag-based code generation is that once you get into semantic complexity, the difficulty inverts and suddenly you think, “Why am I not generating this with a compiler tool?” (Which, perhaps, points the way toward a possible answer based on refactoring?)
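A sketch of where template-based generation sits, with names invented for illustration: it handles boilerplate like record classes nicely, but every bit of semantics you add (validation, inheritance, cross-field rules) starts pushing you toward a real compiler tool.

```python
# Tag/template-based generation: pleasant for pure boilerplate.
TEMPLATE = "class {name}:\n    def __init__(self{args}):\n{body}"

def gen_record(name, fields):
    """Generate source for a simple record class from a field list."""
    args = "".join(f", {f}" for f in fields)
    body = "".join(f"        self.{f} = {f}\n" for f in fields) or "        pass\n"
    return TEMPLATE.format(name=name, args=args, body=body)

src = gen_record("Point", ["x", "y"])
namespace = {}
exec(src, namespace)            # compile the generated source
p = namespace["Point"](3, 4)
print(p.x, p.y)  # 3 4
```

String templates have no notion of the semantics of what they emit, so the moment the generated code has to reason about itself, the template grows conditionals faster than the code it produces, which is the inversion point.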
This is one of those “no conclusion” posts…