Apparently, they have two low-cost catalysts that help dissociate water into hydrogen and oxygen, which is an advantage because “a lot of the cost of a solar panel is in the wiring, the packaging.” I don’t know how much H2 and O2 I’d want bubbling around on my roof…
In which D. Gentry makes the excellent point that high programming productivity is unlikely to result from a single cause, but from a multiplicity. He lauds typing speed and criticizes excessive swearing (dammit).
I am skeptical about “typing speed,” but I’ve come to think that “navigation speed” is a key talent — I work with some guys who have mad zsh and vi skillz, and I have to admit that their speed at searching and querying across multiple directories is well beyond what I can do. Since questions that span multiple modules are often the more challenging ones, their advantage in navigation speed multiplies their effectiveness.
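To make the “navigation speed” point concrete, here is a minimal sketch of the kind of cross-directory query I mean. The project layout and the `parse_config` symbol are made up for illustration; the point is that one recursive `grep` answers a question that spans modules.

```shell
# Hypothetical two-module layout (everything here is invented for the demo).
mkdir -p /tmp/nav_demo/module_a /tmp/nav_demo/module_b
echo 'parse_config(path)' > /tmp/nav_demo/module_a/loader.py
echo 'parse_config(DEFAULT)' > /tmp/nav_demo/module_b/main.py

# One command to find every use of a symbol across all modules:
# -r recurses, -n prints line numbers, --include restricts to source files.
grep -rn --include='*.py' 'parse_config' /tmp/nav_demo
```

Someone fluent in this style answers “who calls this, and from where?” in seconds, which is exactly the multiplier I see in practice.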
Text, though, is not the only way to achieve high “navigation speed.” There are many programmers, including myself, who have more of a visual intuition. I quite like semantically-meaningful diagrams. Perhaps one of the reasons why CASE tools move in and out of fashion is that they are only valuable to that sizeable-but-not-universal group.
Perhaps this is yet another aspect of the field of software development where we see pendulum swings in popularity based not, as the proponents always argue, on fundamental advantages, but on personal and sociological reasons.
Yesterday, I heard a presentation on the Large Synoptic Survey Telescope, a planned telescope that will take a wide-angle photo of the sky, read the data out, and then move on to the next patch of the sky. This is in marked contrast to the highly planned targeting and relatively long observing time that is more traditional.
The LSST will generate 30 terabytes of data nightly and build up a 70-petabyte catalog (that’s a spicy meatball — 70,000,000 gigabytes). They expect to pick up 1,000,000 transient events per night. You could put all the astronomers and students in the world to the task, and they still couldn’t manually keep up with the data flow.
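A quick back-of-envelope check on those numbers. The data rates are the ones quoted in the talk; the count of working astronomers is my own rough guess, so treat the per-person figure as order-of-magnitude only.

```python
TB_PER_NIGHT = 30                # raw data per night (from the talk)
CATALOG_PB = 70                  # final catalog size (from the talk)
EVENTS_PER_NIGHT = 1_000_000     # transient events per night (from the talk)
ASTRONOMERS = 10_000             # rough guess at astronomers worldwide (my assumption)

catalog_gb = CATALOG_PB * 10**6                     # 1 PB = 1,000,000 GB (decimal units)
nights_to_fill = CATALOG_PB * 10**3 / TB_PER_NIGHT  # 1 PB = 1,000 TB
events_each = EVENTS_PER_NIGHT / ASTRONOMERS

print(f"{catalog_gb:,} GB catalog")                       # 70,000,000 GB
print(f"~{nights_to_fill:,.0f} nights of raw data")       # ~2,333 nights, roughly 6-7 years
print(f"~{events_each:.0f} events/astronomer/night")      # ~100 each, every night
```

Even with my generous headcount, every astronomer on Earth would have to triage about a hundred events per night, forever — hence the need for automated analysis.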
To make things even more challenging, the game-changing science is going to come from the most unusual stuff — the faintest stuff, the shortest-lived stuff, the rarest stuff — since if it were bright, long-lived, and common, someone might have noticed it already.
The solution will call for wonderfully powerful parallelized machine learning systems. Watson ain’t in it, although it’s interesting to think how Watson-like analysis of publications might create initial starting places for partitioning the data.
Particle physics has already faced an onslaught of data on this scale, so it didn’t surprise me to see Fermilab on the LSST subscriber list. Nor, when I heard the data volumes and the computational challenges, was it surprising that another subscriber was Google.