Friday Five:

1. What was your biggest accomplishment this year?

Writing Thinking in C#, a 1,000-page technical book. Although I started it in September of 2001, I did 80% of the work in 2002.  

2. What was your biggest disappointment?

Having the publication of Thinking in C# delayed for a year, despite the content being essentially finished in July.

3. Will you be making any New Year’s resolutions?

Does “I’ve got to find some way to pay off these *#$@$ing credit cards!” count as a resolution?

4. Where will you be at midnight? Do you wish you could be somewhere else?

Not sure yet, but maybe Bald Hill in San Anselmo, overlooking San Francisco Bay. I’d rather be lots of places: either someplace snowy or someplace tropical. This is a dark and rainy season in the Bay Area.

5. Aside from (possibly) staying up late, do you have any other New Year’s traditions?

We go for a hike on New Year’s Day, but that’s just what we do on every holiday!

But that doesn’t mean I’m not happy to see you, too

The LAT notes the conviction this week of a man who smuggled rare monkeys into the U.S. in his pants. Upon discovering four endangered birds and 50 exotic orchids in his suitcase, officials at Los Angeles International Airport stopped the man for questioning. When asked if he was hiding anything else, the man responded, “Yes. I’ve got monkeys in my pants.”

A comment on this post asked whether my argument meant that AI is impossible:

I don’t think that follows from my argument. The greatest challenge with AI is that we have no good theories of the non-material representations comprising consciousness. We’re beginning to get a handle on brain mechanics, and you’ll find a lot of people who agree that consciousness is an emergent property of a number of largely independent sub-systems, but there is no compelling theory that says “To get to AI, we need to start here, and then go here, and then go here…” There are appeals to emotion — Doug Lenat’s Cycorp says “It’s just common sense…” that a massive database of facts is necessary, while MIT’s Rodney Brooks says that Kismet-style “emotional robots” are the best route — but I could just as easily argue that the problems of internal representation or language are the first step.

As a matter of fact, I do believe that language is the key — once we have a system that can reliably interpret Web pages (say), I think that it will be a small step to a system that can generate them and, in my opinion, that will bring us into the gray area of “maybe we have AI and maybe we don’t.” For instance, the algorithms that produce Google News have a surprising penchant for cricket — neither the world’s most popular sport nor the world’s most written-about sport. It’s quirky — and that’s a very interesting thing to say about a program.