Dr. Dobb’s Changes

When Software Development was killed, I predicted that Dr. Dobb’s wouldn’t change markedly. Boy, was I wrong. Editor-in-Chief Jonathan Erickson and Publisher Stan Barnes clearly decided that the time had come to create what is essentially a new magazine: I don’t think Dobb’s has changed this much since at least the late 80s.

The “new” Dobb’s really has taken a page from SD‘s playbook and dramatically increased the amount of technical-management-focused editorial. Only a fraction of the source code that used to be Dobb’s signature remains.

They’ve dramatically changed the column lineup, which is a real surprise given Erickson’s loyalty to his long-term writers. Mike Swaine is still there, Scott Ambler and Rick Wayne came over from SD, and Pete Becker is writing the C++ column. This was a bold move and must have been a hard one, both for the columnists and for Erickson. But I think it was a good choice.

They also redesigned the pages. Note to publishers: Do we really need to go through the whole “the new font is too small,” “you’re right: we’ve changed it back!” charade every time? It looks like the new page layouts are more flexible, although at least initially, I think the readability has gone down.

Speaking as an old-time competitor to the “old” Dr. Dobb’s, I can say they’ve walked away from some of the things that made them hard to compete against: the “signifiers” of technical depth that came from their source-code and low-level articles. Pages of source code cue programmers that “there is immediate value here.” When flipping through a magazine, an article on, say, computer security is much more eye-catching if there is accompanying source code: the programming reader stops and “checks out” the source code to see what’s going on. “Soft” articles, on the other hand, have a harder time catching the eye and coming to mind when the renew/resubscribe decision comes around.

It’s gotta be tough managing the editorial content of a programmer’s magazine nowadays.

“The Core”: Worst. Movie. Ever?

One of the cable channels (FX, I think) has been playing “The Core” in medium rotation lately. I’ve been trying to expose myself to it in small amounts, to inoculate myself and learn to accept it as cheesy “so bad it’s good” fun (ref. “The Fast and the Furious,” one of my favorite movies of the past few years).

A few years ago, during a bout of business travel, I saw “The Day After Tomorrow,” like, 7 times on various airplanes and I really thought that was the worst that could be done, but I just saw the last 10 minutes or so of “The Core” yesterday. OMG. So bad. So very, very bad.

“More use of assembly” — Dubious prediction

InfoWorld’s Tom Yager wrote a column on the benefits of native code, but then went off the deep end with:

Here’s a native code prediction that’s way under your radar: We’ll see more use of assembly language. …Developers coding for new, controlled deployments can afford to set high requirements that include a 64-bit CPU, OS, and drivers. And if you know you’re coding for Opteron and you’re ready to write to that architecture, baby, life is a highway.

[via James Robertson]

I worked with assembly language. I knew assembly language. Assembly language was a friend of mine. And I have this to say: Assembly language isn’t coming back to the mainstream.

The point that native-OS code is worthwhile is dead on. The point that native-hardware code is worthwhile is, for numerics and media-programming specialists, true: if you’re working with huge blocks of 8- and 16-bit integer data, packing the registers and using the wide-data ops is going to be worthwhile (assuming we’re talking about code that will be run hundreds or thousands of times). But the only reason to drop down to that level is parallelism, and assembly has very little (if any) advantage for expressing that. Even in the graphics world, where concurrent ops are already the norm, the trend has been towards higher-level languages (relatively speaking: shader languages look like C).
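
To make that concrete, here’s a rough sketch of the kind of thing I mean (my own illustrative example, not Yager’s; the function name and buffer layout are made up for the purpose): you can get at the packed registers and wide-data ops from C++ through compiler intrinsics, no hand assembly required.

    #include <emmintrin.h>  // SSE2 intrinsics
    #include <cstddef>
    #include <cstdint>

    // Saturated add of two 8-bit pixel buffers, 16 bytes per iteration.
    // Illustrative only: assumes n is a multiple of 16.
    void add_pixels(const uint8_t* a, const uint8_t* b, uint8_t* out, size_t n) {
        for (size_t i = 0; i < n; i += 16) {
            __m128i va  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a + i));
            __m128i vb  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(b + i));
            __m128i sum = _mm_adds_epu8(va, vb);  // sixteen saturated 8-bit adds at once
            _mm_storeu_si128(reinterpret_cast<__m128i*>(out + i), sum);
        }
    }

You still get the packed registers and the wide ops, the compiler handles register allocation and scheduling, and the code still reads (roughly) like C.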

The idea that the concurrency revolution is going to be solved by old tools is dead wrong. Low-level C-derived tools? Possibly. (Or possibly not: a higher-level language that did for concurrency what Java did for memory management [solve 90+% of the problem in a relatively performant manner] could very well sweep the industry.)

Programming Quantum Computers

When I feel listless, I sometimes try to whet my brain by rubbing it on quantum mechanics, which requires math that’s absurdly difficult for a dilettante to understand. For years I’ve tinkered at implementing a simulator for programming quantum computers and really haven’t gotten anywhere. Well, now I can use Andr
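
For context, the bookkeeping at the heart of such a simulator is simple enough to sketch: an n-qubit register is a vector of 2^n complex amplitudes, and a single-qubit gate mixes the pairs of amplitudes that differ only in the target bit. This is my own toy version, not any particular library’s:

    #include <complex>
    #include <vector>
    #include <cmath>
    #include <cstddef>
    #include <cstdio>

    using Amp = std::complex<double>;

    // Apply a 2x2 gate [[g00, g01], [g10, g11]] to one qubit of a state vector.
    void apply_single_qubit_gate(std::vector<Amp>& state, int target,
                                 Amp g00, Amp g01, Amp g10, Amp g11) {
        const size_t stride = size_t(1) << target;
        for (size_t i = 0; i < state.size(); i += 2 * stride) {
            for (size_t j = i; j < i + stride; ++j) {
                Amp a = state[j];           // amplitude with target bit = 0
                Amp b = state[j + stride];  // amplitude with target bit = 1
                state[j]          = g00 * a + g01 * b;
                state[j + stride] = g10 * a + g11 * b;
            }
        }
    }

    int main() {
        std::vector<Amp> state = {1.0, 0.0};          // one qubit, starting in |0>
        const double h = 1.0 / std::sqrt(2.0);
        apply_single_qubit_gate(state, 0, h, h, h, -h);  // Hadamard gate
        std::printf("P(0) = %.3f, P(1) = %.3f\n",
                    std::norm(state[0]), std::norm(state[1]));  // 0.5 each
        return 0;
    }

The hard part, of course, isn’t this bookkeeping: the state vector doubles with every qubit you add, and the interesting algorithms (and the math behind them) are where the dilettante runs aground.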

Genetic Algorithms Outperform Humans In…

The Catalogue of Variable Frequency and Single-Resistance-Controlled Oscillators Employing A Single Differential Difference Complementary Current Conveyor, which I imagine is self-explanatory to electrical engineers. Silver went to “Multiobjective Genetic Algorithms for Multiscaling Excited-State Direct Dynamics in Photochemistry.” Bronze prizes went to two things that I could actually understand: “A multi-population genetic algorithm for robust and fast ellipse detection” and “Using Evolution to Learn How to Perform Interest Point Detection.”

Posted in AI

When Your Nutshell Gets to 1300 Pages…

“Java in a Nutshell” weighs in at 1264 pages. Matt Croyden sez:

[Y]our programming language just might be complicated when you have trouble telling the difference between its Nutshell book and a telephone book.

[via James Robertson]

This is somewhat unfair, as the bulk of “JiaN” is a library reference, but it’s certainly true that Java and C# have grown more complex as they’ve evolved, while certain other languages (Lisp, Smalltalk) treat continued simplicity as a feature of the language.

I think that one force in play in the market for programming-language popularity is pressure towards a “collapse toward simplicity.” It’s not the only force, and in my opinion it’s not likely to be the major force, but it certainly played a part in the rise of Java. Of course, Java was equally an example of the force towards familiarity: it seemed quite like C++. Similarly, I think one reason why Ruby is currently the belle of the ball is its similarity to Perl.

Most Useful UML Diagrams

According to “How UML Is Used,” an article in the May 2006 issue of CACM, the UML diagrams that most commonly “provide new info” above-and-beyond use-case narratives are:

  1. Class diagrams
  2. Statechart diagrams
  3. Sequence diagrams

Interestingly, “usage rates are not well explained by how much new information is provided.” Statecharts, the 2nd most useful diagram, are used in most of their projects by only perhaps a quarter of practitioners. Use-case diagrams, in comparison, are the 2nd most commonly used type of diagram, but are among the least effective at adding value to use-case narratives (well, yeah…). Class diagrams are both the most useful and the most used, while sequence diagrams are commonly used by about half of practitioners.

Rounding out the studied diagrams (they skipped Object, Component, and Deployment diagrams), Collaboration and Activity diagrams are, when used, considered useful by more than 60% of practitioners.

Getting Things Done With OneNote 12


A year ago, I wrote about how I used OneNote flags to coordinate tasks according to the "Getting Things Done" philosophy. OneNote 12 goes worlds beyond the original OneNote as a platform for "GTD," so I thought I’d write about how I’ve adapted my original system.

 

One of the essential ideas in "GTD" is maintaining "as few collection buckets as you can get by with." Within Office 12, the two programs most likely to be used as collection buckets are Outlook and OneNote; my premise is that while Outlook has "tasks," OneNote is by far the superior program for managing them. In my system, Outlook is used for its Inbox, Calendar, and Contacts list, while OneNote is the central organizing tool.

 

The key to using OneNote as a GTD tool is that it can instantly gather and summarize flagged items, group them by flag name, and filter them so that only unchecked items are visible. Once set up, this gives you immediate access to your "next action" items:

 

To do this, you have to customize your OneNote flags, a simple process that is marred only by the fact that instead of acting on the underlying notebook (which you’ll share between computers, as we’ll discuss later), customization is on a per-machine basis. So you have to perform this process on every machine.

 

In "GTD" every multistep task is a "project," every single task is an "action," and the next physical action you need to do is the "next action." The heart of GTD is breaking projects down into actions and next actions, so that your  to-do list is a set of achievable tasks "Buy 10 pounds of nails at Home Depot" rather than overwhelming things like "Build the house."

 

Additionally, I break down my projects into 3 categories: "Urgent" projects on which I should be concentrating, "Ongoing" projects, and "Deferred" projects (some people call these "Fallow" projects).

 

With that in mind, I customize my note flags. I use open checkboxes for actions, and starred checkboxes to indicate projects. I use green, blue, and yellow to indicate urgent, ongoing, and deferred categories:

You’ll notice that I additionally have a "Waiting" flag assigned to Ctrl-9 and that the "Next Action" and "To Do" flags have an @ prepended so that they "sort" to the top of my "Note Flag Summary" view. Another important keyboard shortcut is Ctrl-0, which clears all flags on an item. So now you have assignment of actions and projects near at hand.

 

Organizing Projects

The original OneNote had a design philosophy of using a single notebook, with many sections, many pages, and many subpages. OneNote 2007 has a much more flexible philosophy, with multiple notebooks and hierarchical sections. One of the biggest decisions you can make in a OneNote-based GTD system is how you will organize projects — with notebooks, sections, or pages/subpages?

 

To be clear, you can make a project just using a hierarchy and note flags:

But generally, "real" projects involve gathering data and thoughts and meeting people and lots of sub-projects: in other words, they typically involve gathering all the other stuff OneNote excels at. And this is really the key reason why OneNote is perfect for "Getting Things Done": it’s not just a "To Do List" manager or an outliner. Unlike dedicated outliners, it doesn’t impose an outline or hierarchy on everything you do. That’s very important: to be able to take the note, capture the thought, etc. before it’s categorized / placed within a hierarchy.

 

For me, projects are best organized as either page/subpage combinations or as sections/subsections. Do not create a section for every project: it clutters your notebooks too quickly. Currently, I primarily use page/subpage combinations for personal projects and ongoing themes (blog entries, exercise goals, shopping lists, etc.) and use sections/subsections to organize clients and projects (as a contractor, I create a subsection for each billable contract, and use "Print to OneNote" to keep convenient copies of the estimate / invoice / payment process).

 

I use a minimum of notebooks: Personal, Work, and Archive for my GTD-oriented activities, and then a couple of others dedicated to my creative outlets and hobbies. When a task is checked off as completed, it is filtered out of the "Note Flag Summary," but during the Weekly Review, I delete completed trivial tasks and move finished projects / sections to the Archive notebook. (Of course, I also revisit and re-prioritize my projects and tasks.)

 

Perhaps my favorite feature in OneNote 12 is sharing notebooks between machines. With 7 machines, including 3 Tablet PCs, I may be an outlier, but even if you just have two machines, shared notebooks are an incredible boon. Essentially, this is one of those "it just works" facilities — when you create a notebook, say that you are going to share it between machines, and, bang!, OneNote keeps them synchronized — even when both are open simultaneously! It’s fantastic: I can be writing on my Tablet out on the porch, get stuck, go inside and do some keyboard-intensive research, paste it into OneNote, go back outside, and everything is synched perfectly.

 

Special bonus productivity program:

The other essential program to keep me productive is Sciral Consistency, which is almost perfect for tracking repeating tasks with soft deadlines.

As you can probably infer, you create a task and set "minimum" and "maximum" days for each cycle: do the bills every 10 to 15 days, exercise every 1-2 days, download Website logs every 20-40 days, etc. Here, you can see that I haven’t been exercising enough :-) and that I should haul trash and sweep the driveway in the next couple of days.
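
If you wanted to roll your own version of that logic, the soft-deadline rule itself is tiny. This sketch is my own guess at the behavior, not Sciral's actual code:

    #include <cstdio>

    // My own guess at the "soft deadline" rule behind Consistency-style
    // repeating tasks: each task has a minimum and maximum number of days
    // per cycle, and urgency depends on how long it's been since it was done.
    const char* soft_deadline_status(int days_since_done, int min_days, int max_days) {
        if (days_since_done < min_days)  return "too soon";   // green zone
        if (days_since_done <= max_days) return "due";        // yellow zone
        return "overdue";                                      // red zone
    }

    int main() {
        // "Do the bills every 10 to 15 days" -- last done 12 days ago.
        std::printf("bills: %s\n", soft_deadline_status(12, 10, 15));  // prints "due"
        return 0;
    }

The real program obviously does much more than this, but that scheduling core is the part I find so useful.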

 

There are only two improvements I’d desperately love for Consistency: a version for my PDA (synchronized, of course) and the ability to attach a note to a "check," which would make Consistency an awesome training log.

 


0.2% of Patents Earn Out

According to an article in the May 2006 CACM, quoting Peter Drucker, “no more than one in 100 patents earn enough to pay back its development costs and patent fees, and no more than one in 500 recover all its expenses.”

The Language-Action Perspective: AI is Impossible?

With all my AI posts lately, I’m sorry I hadn’t realized that the May 2006 issue of the CACM had a theme on the language-action perspective, a critique by Terry Winograd and Fernando Flores that dates from 1986, whose essential point the CACM summarizes neatly:

[S]killful action always occurs in a context set by conversations, and in the conversations people perform speech acts by which they commit to and generate the action. Expert behavior requires an extensive sensitivity to context and an ability to know what to commit to. Computing machines, which are purposely designed to process symbols independent of their context, have no hopes of becoming experts.

It’s a cutting insight and goes, I think, to why expert systems, for instance, initially seem very exciting but, in the real world, generally fail to provide a lot of value. (They’re great for training operators, though!)

Posted in AI