The good news is that I got a contract to write an article discussing content-aware resizing (or rather, I got a contract for an article on performance tuning; I’ll use the resizing algorithm as the sample).
The bad news is that the profiler I’ll be discussing is native-code only, so it looks like I’ll be writing my program in C++.
I would still like to do a managed-code version, and I’ll write my C++ with an eye toward portability, but unless I can figure out a simple seam-generation algorithm, I doubt I’ll have the time. Right now, I think Boykov-Veksler-Zabih graph cutting is the answer as far as runtime efficiency goes, but it’s not a trivial algorithm to implement. I spent last evening noodling around with other approaches that might not be as effective but with which I’m more familiar; unfortunately, I didn’t see anything to convince me they would work.
Regular readers will not be surprised to hear that I spent the morning working on an implementation of this.
Figuring out the “next” seam is simple enough, but coming up with an optimal sequence is going to take more time. Stay tuned…
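For anyone curious what the “next seam” step can look like, here’s a minimal dynamic-programming sketch in C++, assuming a precomputed per-pixel energy grid (e.g., gradient magnitude); the function name and structure are illustrative, not taken from the paper:

```cpp
#include <vector>
#include <algorithm>

// Returns, for each row, the column of a minimum-total-energy vertical seam.
// A seam moves down one row at a time and may shift at most one column
// left or right between rows.
std::vector<int> findVerticalSeam(const std::vector<std::vector<double>>& energy) {
    const int rows = static_cast<int>(energy.size());
    const int cols = static_cast<int>(energy[0].size());

    // cost[r][c] = minimal total energy of any seam ending at (r, c)
    std::vector<std::vector<double>> cost = energy;
    for (int r = 1; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            double best = cost[r - 1][c];
            if (c > 0)        best = std::min(best, cost[r - 1][c - 1]);
            if (c + 1 < cols) best = std::min(best, cost[r - 1][c + 1]);
            cost[r][c] += best;
        }
    }

    // Find the cheapest endpoint in the bottom row, then walk back up,
    // choosing the cheapest of the (up to) three predecessors at each step.
    std::vector<int> seam(rows);
    seam[rows - 1] = static_cast<int>(
        std::min_element(cost[rows - 1].begin(), cost[rows - 1].end())
        - cost[rows - 1].begin());
    for (int r = rows - 2; r >= 0; --r) {
        int prev = seam[r + 1];
        int best = prev;
        for (int c = std::max(0, prev - 1); c <= std::min(cols - 1, prev + 1); ++c)
            if (cost[r][c] < cost[r][best]) best = c;
        seam[r] = best;
    }
    return seam;
}
```

This is O(rows × cols) per seam; the harder problem mentioned above — an optimal *sequence* of seams, especially when mixing horizontal and vertical removals — doesn’t fall out of this single pass.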
Now, would you be most interested in seeing this implemented in:
I just glanced over at my RSS aggregator and was surprised to see a full-page Web ad for something I obviously wouldn’t be interested in. I clicked the “back” button and saw the normal newspaper layout and a nondescript blogpost that had a Feedburner-supplied ad at the bottom.
Several people have let me know that comments often fail to post or validate. I’m planning to update to the latest dasBlog in a few weeks, after I do some traveling. Until then, I can’t afford to turn off CAPTCHA, and I know that not every comment fails. My suggestion (irritating, I know) is to Ctrl-A, Ctrl-C your comment before trying to post it; if it fails, you can at least Ctrl-V it and try again.
This video of an image manipulation algorithm shown at SIGGRAPH is jaw-dropping. They calculate paths through an image that have low entropy and either delete or interpolate them, creating images that shrink or grow while not distorting the “interesting” elements. (via John Lam)
Here’s the paper from the conference.
I used to frequently attend, speak at, and help organize software development conferences. As a former magazine editor, I couldn’t help but compare the “information bandwidth” (if not the ultimate efficacy) of lectures with that of magazine articles and books. I concluded that a successful one-hour lecture delivered about as much information as an 1,800-word technical article (about 1.5x the length I believe is most effective for a technical article). Attempts to deliver more information in an hour-long lecture forced skipping important details; attempts to deliver much less made the lecture too fluffy. (This observation was validated by attendee ratings. Not that attendee ratings are primarily correlated with information delivered, but that’s the topic of another post.)
This reinforces the not-as-common-as-it-should-be wisdom that lectures are the least valuable aspect of professional conferences. Not that lectures are unimportant: you should attend conferences with great lectures, and even go out of your way to be in the hall as a lecture begins and ends, but you ought not to be terribly concerned about actually sitting through hour-long lectures.
Many conferences today are moving towards much shorter lectures: 15, 20, and 30 minutes. I think this dramatically increases value to the attendee (while making things significantly harder for the presenters and vastly harder for the organizers).
Philip Greenspun, in a post on improving the quality of undergraduate CS education that holds for professional training as well, concludes:
- Lecturing has been found to be extremely ineffective by all researchers. The FAA limits lectures to 20 minutes or so in U.S. flight schools.
- Lab and project work are where students learn the most. The school that adopted lab/projects as the core of their approach quickly zoomed to the first position among American undergrad schools of engineering (www.olin.edu).
- Engineers learn by doing progressively larger projects, not by doing what they’re told in one-week homework assignments or by doing small pieces of a big project.
- Everything that is part of a bachelor’s in CS can be taught as part of a project that has all phases of the engineering cycle, e.g., teach physics and calculus by assigning students to build a flight simulator.
- It makes a lot of sense to separate teaching/coaching from grading and, in the Internet age, it is trivial to do so. Define the standard, but let others decide whether or not your students have met the standard.
- A student who graduates with a portfolio of 10 systems, each of which he or she understands completely and can show off the documentation as well as the function (on the Web or on the student’s laptop), is a student who will get a software development job.
Congratulations to my friends (and employers) at BZ Media, which has made the cut for Inc. Magazine’s “5,000 list,” ranking the fastest-growing private companies in the US.
The working groups of the C++0x committee are working hard to complete a major new standard for C++ (there’s a big meeting here in Kona in October). If you’re not intimate with C++, you may be surprised that such an important language has never had a standard threading model and that such a model is a major part of C++0x. This is actually part and parcel of the design philosophy that made C and C++ so important: the set of libraries dictated by the C and C++ standards is much smaller than the .NET BCL or the Java SE class libraries. That minimalism is what allows standard C and C++ to be available for hundreds of processors.
I recently read the public C++0x papers on threading (links below). The proposed threading model is non-radical and is based on Boost.Thread. The reasonable perspective is that this is a conservative decision thoroughly in keeping with C/C++’s long tradition of minimal hardware/OS assumptions.
The emotional perspective is that they’ve let slip a golden opportunity to incorporate the best thinking about memory models. “Multithreading with locking” is, I would argue, demonstrably broken for business programming. It just doesn’t work in a world of systems built from a combination of libraries and user code; while you can create large applications on this model, large multithreaded applications based on locking require not just care but sophistication at every level of coding. By standardizing an established systems-level model, C++0x forgoes an opportunity for leadership, albeit a radical one.
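To make the “care at every level” complaint concrete, here’s a minimal sketch of the lock-based model being standardized, written with the std:: names the Boost.Thread-derived proposal ultimately took (assuming a C++11-capable compiler; the Account type is illustrative):

```cpp
#include <mutex>
#include <thread>
#include <vector>

struct Account {
    std::mutex m;
    long balance = 0;

    void deposit(long amount) {
        // RAII scoped lock: released automatically when `lock` goes out of scope.
        std::lock_guard<std::mutex> lock(m);
        balance += amount;
    }
};

// The fragility: correctness depends on *every* caller, in every library
// layer, remembering to take the same mutex before touching `balance`.
// Nothing in the language or type system enforces that discipline.
```

The model is proven at the systems level, but note that the invariant (“hold `m` before touching `balance`”) lives only in convention and comments, which is exactly the property that doesn’t scale to large compositions of libraries and user code.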
One of the real thought leaders when it comes to more sophisticated concurrency semantics is Herb Sutter. His Concur model (here’s a talk on Concur from PDC ’05) is, I think, a substantial step forward and I’ve really hoped to see it influence language design. Is Sutter, though, just an academic with flighty thoughts and little understanding of the difficulties of implementation? It seems unlikely, given that he’s the Chair of the ISO C++ standards committee. So you can see why there might have been an opportunity.
Multithreading proposals for C++0x:
I don’t really follow the discussion about social networking (I guess I’m either a little too old or a little too antisocial to “get” it), but it seems to me that FOAF + OpenID is an “ob hack.” All that has to happen is for someone to write a local, smart-client editor that handles the kind of nit-picky details you have to deal with in a blog editor (FTP upload, WordPress plugin, etc.), and Bob’s your uncle.
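For illustration, here’s roughly what that combination looks like on the wire: a FOAF document asserting an identity (via the FOAF vocabulary’s `foaf:openid` property) and a social graph. All names and URLs below are hypothetical:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:name>Alice Example</foaf:name>
    <!-- the OpenID URL Alice can prove she controls -->
    <foaf:openid rdf:resource="http://alice.example.org/"/>
    <foaf:knows>
      <foaf:Person>
        <foaf:name>Bob Example</foaf:name>
        <!-- pointer to Bob's own FOAF file, letting crawlers walk the graph -->
        <rdfs:seeAlso rdf:resource="http://bob.example.org/foaf.rdf"/>
      </foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>
```

A smart-client editor would only need to round-trip a file like this and push it somewhere public — which is exactly the FTP-upload/plugin plumbing blog editors already do.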
It looks like the only programming tool for Tilera’s 64-core CPU is a C compiler, but the day is fast approaching when we’re going to start seeing more and more of these kinds of tools in the mainstream.