Programmed my first Alexa skill: I was shocked by what I found!

Although I am pretty deeply entrenched in the Apple ecosystem, the recently-announced $50 Dot was so inexpensive I could not resist checking it out. (Before I go further: I work for Microsoft, so take that into account as you see fit.)

Out of the box, the Echo is very easy to set up for basic queries: “Alexa, what’s my latitude and longitude?” and so forth. The Echo has a relatively lo-fi speaker and the integration with Sonos (what Amazon calls an “Alexa Skill”) is not yet available, so I haven’t used it all that much.

But there’s an API so you know I had to program something. My preferred solution for “computations in the cloud” is definitely Azure Functions written in F#, but for my first Alexa Skill I used Amazon Lambda running Python.

The first thing to focus on is that Alexa Skills are a separate service that can be programmed many ways, so there’s always going to be a certain amount of integration overhead in the form of multiple tabs open, jumping back and forth between the Alexa Skills and the Web server/service where you are handling the computation.

The Alexa Skills documentation is good, but there are a good number of moving parts, and I think it’s wise to write your first skill using Amazon Lambda, as I did. Amazon Lambda is often the default service in the documentation, and there are often hyperlinks to the Lambda-specific page for doing “X.”


A Skill for Gravity

A friend was talking to me about riflery and astonishing me with the flight times he was talking about. Alexa failed to answer some basic questions about ballistics (Alexa seems to me less capable than Google Assistant, Cortana, or Siri at answering freeform questions), offering me the perfect simple use-case for my first skill.

Minimum viable query: "What is the speed of an object that has fallen for 1.5 seconds?"

SWAG achievable: "How long would it take for an object dropped from the height of the Empire State Building to fall to the ground on Mars?"

The nice thing about my minimal query is that it’s both stateless and easy to answer with a little math: all you need is the duration of the drop and a gravitational acceleration of 9.81 m/s². (Conversions from meters/second can come later.)
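In code, the whole computation is one multiplication; a quick sketch in plain Python (separate from the skill code itself):

```python
# Speed of an object after falling for t seconds from rest,
# ignoring air resistance: v = g * t
G = 9.81  # magnitude of gravitational acceleration, m/s^2

def falling_speed(seconds):
    return G * seconds

print('%.1f meters per second' % falling_speed(1.5))  # prints 14.7 meters per second
```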

I followed the documentation on building an Alexa skill with a Lambda function to create an Alexa Skill named “Gravity.” After naming, the next page of the Skill development site is “Interaction Model.” This is where I was shocked to discover:

Alexa doesn’t do natural language processing!

I ASS-U-ME’d that I would be receiving some programmatic structure that told me the “nominal subject” of the sentence was the noun speed and would allow me to search for a “prepositional modifier” whose “object” was the noun seconds and extract its modifier. That would allow me to recognize either of these sentences:

  • What is the speed of an object that has fallen for 1.5 seconds?; or
  • What's the velocity of an apple after 1.5 seconds?

Or any of a large number of other sentences. Foxtype will show you such parsing in action at this (fascinating) page.

But no! As you can see in the screenshot below, the mapping of a recognized sentence to a programmatic “intent” is nothing but a string template! You either have to anticipate every single supported structure or you have to use wildcards and roll your own. (Honestly, I imagine that it’s not a long road before the wisest interaction model is Parse {utterance}.)

intents1
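For the record, the sample-utterances format is nothing more than lines of intent name followed by a template string. Mine looked something like this (reconstructed from memory, not an exact paste from the console):

```text
FallingSpeedIntent what is the speed of an object that has fallen for {duration} seconds
FallingSpeedIntent how fast is an object falling after {duration} seconds
FallingSpeedIntent what's the velocity of an object after {duration} seconds
```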

To be clear: ‘just’ voice recognition is extraordinarily hard and doing it in ambient environmental noise is insane. It’s only because Alexa already does this very, very hard task that it’s surprising to me that they don’t provide for some amount of the (also hard) task of parsing. The upside, of course, is that sound->utterance is decoupled from utterance->sentence. As far as I know, no one today provides “NLP as a Service” but it’s easy to imagine. (Although latency… Nope, nope, staying on topic…)

Returning to the screenshot above, you can see that it contains the bracketed template {duration}. The matching value will be associated with the key duration in calls to the Lambda function. And, to be honest, this is a place where the Alexa Skills Kit does do some NLP.

You can help Alexa by specifying the type of the variables in your template text. For instance, I specified the duration variable as a NUMBER. Alexa does use NLP to transform the utterances meaningfully — so “one and a half” becomes “1.5” and so forth. I haven’t really explored the extent of this — does it turn “the Tuesday after New Year’s Day” into a well-formed date and so forth?
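The slot typing lives in the intent schema, a small JSON document alongside the sample utterances. For this skill it would look something like the following (a sketch of the schema format, using the built-in AMAZON.NUMBER slot type):

```json
{
  "intents": [
    {
      "intent": "FallingSpeedIntent",
      "slots": [
        { "name": "duration", "type": "AMAZON.NUMBER" }
      ]
    }
  ]
}
```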

Alexa packages session data relating to an ongoing conversation and intent data and performs an RPC-like call (I actually don’t know the details) to the endpoint of your choice. In the case of Amazon Lambda, that’s the Amazon Resource Name (ARN) of your function.

The data structures it passes look like this:

{
  "session": {
    "sessionId": "SessionId.07dc1151-eb4e-4e12-98fa-64af3f59d82a",
    "application": {
      "applicationId": "amzn1.ask.skill.443f7cb5-ETC-dbecb288ff2d"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account.ETC"
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.13cf7a2b-0789-4244-879f-f4fae08f315f",
    "locale": "en-US",
    "timestamp": "2016-11-18T17:24:09Z",
    "intent": {
      "name": "FallingSpeedIntent",
      "slots": {
        "duration": {
          "name": "duration",
          "value": "1.5"
        }
      }
    }
  },
  "version": "1.0"
}

The values in the session object relate to a conversation and the values in the request object belong to a specific intent — in this case the FallingSpeedIntent with the duration argument set to “1.5”.

On the Lambda side of things

Amazon Lambda has a template function called ColorIs that provides an easy starting point. It supports session data, which my Gravity skill doesn’t require, so I actually ended up mostly deleting code (always my favorite thing). Given the JSON above, here’s how I route the request to a specific function:

def on_intent(intent_request, session):
    """ Called when the user specifies an intent for this skill """

    print("on_intent requestId=" + intent_request['requestId'] +
          ", sessionId=" + session['sessionId'])

    intent = intent_request['intent']
    intent_name = intent_request['intent']['name']

    # Dispatch to your skill's intent handlers
    if intent_name == "FallingSpeedIntent":
        return get_falling_speed(intent, session)


def get_falling_speed(intent, session):
    session_attributes = {}
    reprompt_text = None
    should_end_session = True

    g = 9.81  # magnitude of gravitational acceleration, meters per second squared

    if "duration" in intent['slots']:
        duration = float(intent['slots']['duration']['value'])
        velocity = g * duration  # v = g * t for a drop from rest
        speech_output = "At the end of " + str(duration) + \
            " seconds, an object will be falling at " + \
            ('%.1f' % velocity) + " meters per second. Goodbye."
    else:
        speech_output = "Pretty fast I guess."

    return build_response(session_attributes, build_speechlet_response(
        intent['name'], speech_output, reprompt_text, should_end_session))

(Boilerplate not shown)
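The boilerplate I elided is close to what ships in Amazon’s Python template; it is roughly this (paraphrased from the template, not my exact code):

```python
def build_speechlet_response(title, output, reprompt_text, should_end_session):
    # Shape the spoken response (and companion-app card) the way the Alexa service expects
    return {
        'outputSpeech': {'type': 'PlainText', 'text': output},
        'card': {'type': 'Simple', 'title': title, 'content': output},
        'reprompt': {'outputSpeech': {'type': 'PlainText', 'text': reprompt_text}},
        'shouldEndSession': should_end_session
    }

def build_response(session_attributes, speechlet_response):
    # Top-level envelope returned from the Lambda handler
    return {
        'version': '1.0',
        'sessionAttributes': session_attributes,
        'response': speechlet_response
    }
```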

My Westworld prediction

var k = Convert.FromBase64String("vlqnRQo8YYXdqt3c7CahDninF6MgvRnqNEU+/tcbWdM=");
var iv = Convert.FromBase64String("gaXwv734Tu3+Jw1hgtNrzw==");
DecryptStringFromBytes(Convert.FromBase64String("Yr2XWzCxceStAF1BaUgaqmWcqFjzWskDDN4foaxfGEO5JHc/oKvgukkMHZuOiw+dK0JxnOhzC1ZA3QLqZZsQxFtjX+qvu0VRM0p6VEfcv18="), k, iv);

Review: 11-Day Diving on the Galapagos Master

My wife and I recently returned from 11 days on The Galapagos Master, a 16-passenger liveaboard vessel whose itinerary includes Wolf and Darwin Islands.

The first thing to say about Galapagos diving is… Well, okay, the first thing to say about Galapagos diving is how incredible the fish life is. More on that in a minute…

The second thing to say about Galapagos diving is the temperature: descriptions generally say something like “60-76F” and you might think “Well, I’ll plan for the middle of that estimate: 68F.” But that’s not right: the diving here is 60F or 76F, depending on where you dive. And even though almost exactly half the dives were in water in the mid-70s, the “feel” of the trip was determined by those in the 60F range. So: 7mm hooded wetsuits, and I envied the one person on our boat who dove in a drysuit. (My wife says her 5mm with a 3mm hooded steamer and a LavaCore was also okay, and she had more flexibility on the warmer dives.)

The other thing, for me, is gloves. I never wear gloves since in general I have no need to touch the reef. But in the Galapagos the large majority of dives involve tucking in to rocks and holding on in strong currents. Additionally, at Darwin, Wolf, and Cabo Douglas (Fernandina) there was surge.

And the rocks are covered in barnacles. I didn’t wear gloves for the first several dives and my hands got sliced up.

Dives are limited to 50 minutes. We were all diving nitrox and spending the majority of our dives at 60-80’, so I thought that duration was good: long enough to linger when the sights were good, short enough so that air consumption wasn’t a limiting factor, and brief enough that no-deco was very manageable (I had a few 4th dives where I was deco-limited.) In the cold water at Cape Douglas (marine iguanas) and Punta Vicente Roca (molas and penguins) the dives were shorter — 40 minutes. (The water was cold, but with the marine iguanas my max depth was 19’! I would have happily spent more time with them.)

Each buddy-team was given a DiveRite audible alarm powered by the low-pressure BCD inflator and a Nautilus Lifeline GPS/VHF radio. We never had any cause to use either, but the Nautilus, in particular, struck me as showing a good concern for safety.

Itinerary and Diving

Our 11-day itinerary: board the boat in Puerto Baquerizo Moreno on San Cristobal. The same day we had a 20-minute check-out dive in the harbor: cold, poor vis, just a chance to get your weighting correct. (Some sea lions came in and played with us, which was fun.)

The 2nd day was a dive apiece at Mosquera and North Seymour Island. I thought these would be more “check out” style, but actually the dive at Mosquera was excellent! First hammerhead of the trip, a big school of mobula, schools of barracuda, steel pompano, big blue-spotted trevally, “some kind of bonito.” Our 2nd dive, at North Seymour, is apparently more commonly a highlight, but we got somewhat skunked.

We did a brief land excursion on North Seymour and, for me, it was one of the highlights of the trip — our only chance to see nesting blue-footed boobies and frigate birds. We saw several males displaying, a few pairs “dancing,” and even one sitting on an egg. (I’m kicking myself at missing a post-dive-trip day-trip to Isla Lobos from San Cristobal to see more breeding birds.)

Then we steamed for Darwin Island. We apparently got a late start for this, leaving North Seymour at 4:30PM when we were “expected” to get out of there at 1:30PM. (But that’s a little confusing to me, as we may have dawdled an extra 30 minutes or whatever on the land expedition, but the overall schedule was set by the boat.)

The upshot was that we didn’t arrive at Darwin until 8:30AM and dove immediately. The next 4 days (2 at Darwin, 2 at Wolf, 4 dives per day) were amazing. Warm, with occasional hints of a thermocline, moderate-strong currents (I think once we had a spot with something like 2 knots), insane density of fish. Jacks, hammerheads, Galapagos sharks, yellowfin tuna, smaller tuna… just amazing.

Visibility was not “murky” but it was definitely “hazy.” Maybe 25-35’-ish total, but things at the limits of visibility were definitely more silhouette-y than defined. So even though there was tons of wildlife, you would really only very-clearly see maybe 3-4 close passes per dive.

Our panga group (“Jaguar Sharks!”) was quite experienced (professional divemasters, a marine biologist, etc.; with just over a thousand dives, I was by far the least experienced). I think on a difficulty scale of 1-10 for recreational diving, this was 7-8 stuff: cold, currents and surge, limited viz. This would not be a place for divers uncomfortable with their gear.

Additionally, particularly at the south side of Darwin’s Arch, if you drifted away from the group and were not in the panga right around that 50-minute mark, you could get close to some extremely dangerous wave breaks. The dive guides knew the topography and drifts very well and if you paid attention to the rules and stuck with them, it was all fine. But again, it was the type of place where a mistake that separated you from the group could get very serious, very quickly.

I could go on for thousands of words detailing the diving, but suffice it to say that it was great. There were endless schools of predators such as jacks and bonito as well as reef fish such as creole fish. The sharks varied depending on the specific dive location and time of day, but typically you’d settle in at 3 or 4 stops along the reefs and usually when you settled in, some amount of hammerheads and Galapagos would come by. Sometimes, when the current was strong, you’d be in a perfect situation where the hammerheads were slowly making their way up-current and it was just an unending conveyor belt.

Another fun thing to do at Darwin was to drift past the boat on its mooring: there must have been 20 silky sharks swimming along in its eddy and if you could hold on to a panga line or get into the eddy at the stern of the ship, you would just be surrounded by silkies. The islands and birdlife of Darwin and Wolf were fascinating to observe with binoculars from the stern of the ship.

After Darwin & Wolf, the diving was one location per day, usually a single dive site dived 3 times. In our case, we often dove at 7:45AM, 9:45AM, and 11:45AM. The diving did seem to deteriorate as the morning progressed, so the only way I’d change that schedule would be a dive before breakfast, but that was never presented as an option to us. I think there was one more day when we had an after-lunch dive.

Fernandina had a beautiful deep-dive to see horn sharks and red-lipped batfish (coldest dive, with 58F on my computer and 95’ of neoprene compression). After that, we did 2 dives in 5 meters to see marine iguanas feeding. Absolutely amazing. I do want to say that when we first got in, I experienced the most powerful surge of the trip: the surge was so strong that it caused my octopus to freeflow and then, even with a good grip, I got peeled off a rock and pushed a solid 10 yards. Again, this is a situation where a less experienced person could make a serious mistake and try to re-grip rather than accept spending the next few waves being washed back and forth.

Another highlight of the trip then occurred: while crossing from Fernandina to Isabella I spotted a pod of orcas in the distance. They approached the boat and checked us out for ten minutes or more, swimming right alongside the boat, tilting on their sides to look up at us, etc.

As with the iguanas at Fernandina, the next two days were destination dives as well: one day to see molas (ocean sunfish) and penguins (snorkeled with one) and the next to see pelagic manta rays. These were fine and again the walls were beautiful, with abundant black coral and bushy brown gorgonians teeming with long-nose hawkfish.

Then a long cruise to Cousin’s Rock near Santiago and the final 2 dives of the trip. I feel silly downplaying any diving that involves a cave filled with white-tipped reef sharks and sea lions but compared to the other diving on the trip, this was anti-climactic.

The final half-day of the trip involved a bus ride to a farm in the Santa Cruz highlands to see giant tortoises. This was quite good: they were free-ranging and it seemed more natural than seeing them in pens. As a reminder of the threat of introduced species, I was bitten by a fire ant as I watched a giant tortoise.

Then we went down to Puerto Ayora and spent several hours, having a couple drinks and lazily shopping for souvenirs. The bus picked us up at 6:30PM and got us back to the boat near 8PM, where we had a final cocktail reception where the “tipping” occurred (more on that below) and then dinner. (Again, this was a case where the schedule was set by the boat, so the fact that we were all starving by the time we were fed near 9PM seems like something they could adjust.)

The next day we were back in Puerto Baquerizo Moreno and taken off the boat at 8:30AM.

The [official itinerary] describes the cadence as an early dive, breakfast, morning dive, lunch, and two afternoon dives. That’s not at all how ours worked: first, we never had an opportunity to dive before breakfast, and on 3-dive days (most days not at Darwin and Wolf) we often did all 3 dives before lunch with only 45-minute breaks. That was fine, since diving was generally better in the morning and getting in and out of a cold wetsuit is a pain.

Accommodations

The Galapagos Master is the former Deep Blue (so you might search for other reviews under that name). There are 4 below-deck and 4 above-deck cabins. We had an above-deck cabin, but I do not think they were worth the extra cost: the windows did not open, and being that far above the ship’s center of gravity may have made the motion a little more obvious. Other than the standard shipboard reaction of “Oh my god, where are we going to put all our stuff?” the two things that stand out are the beds, which were very uncomfortable (pads over wood, with two single beds pushed together so that the rails created a “chastity bump”), and the electric toilet, which was absolutely incapable of clearing solids (if you know what I mean) with anything less than 4-5 flushes. Toilet paper goes into a container at the side.

There is a salon where the indoor socializing happens, with a big-screen TV with HDMI inputs, so it was easy to do slideshows or watch movies from computers. The mess had one large table and a few smaller ones and was well-configured for socializing. The sundeck was the major gathering place; it clearly once sported a bar and chairs, but only one chair remained attached. Instead, you just lounged along the rails.

I’m not a “foodie” and I thought the food was fine, but I think there was a little eye-rolling from some more refined people. There was always a salad and some amount of vegetables, and then usually a fish and a meat dish with some starch like potatoes or rice or plantains. Often meals started with a soup and there was always a dessert. There were two vegetarians on our trip and the galley seemed to be able to accommodate them well enough. Soft drinks were complimentary, beers were $3 apiece and cocktails and spirits were $6 apiece. Bottles of wine were $25.

The dive deck was quite good, with individual stations along the rails, a wetsuit cleaning-tank and rack in the middle, 2 hot-water hoses with shower nozzles, and a large cameras-only tank. Under the tanks were cubby-holes with milk crates in which you kept your miscellaneous dive gear. Up a few steps from the dive deck was a passage with a long camera bench with 2 air blowers (well, 3, but one wasn’t working). On the other side there would be post-dive snacks and hot chocolate or iced tea. A nice touch was the post-dive towels, neatly labeled with your station number, so you were assured of getting one.

On the first day our nitrox was a little low, at 29+, but mostly we dove around 32% O2. Again, I thought the nitrox vs. time vs. depth balance was just right: you ended the 4-dive days close to deco-limited but I never got close to depth-limited.

“Tipping”

This is a pet peeve of mine. I’m from the US and I tip well because I know that, when “tipping” is a big part of how workers make their money, workers get absolutely screwed. Our trip had people from the US, England, and Germany, all of which have vastly different attitudes and expectations about tipping. And although “it’s absolutely up to you” there is a “recommended 10% tip” on your $600-a-day-per-person dive trip that is “an important part of how the crew make money.” This is total BS! If a fair wage for the crew amounts to $60 per day per guest, charge $660 per day and make tipping truly optional.

As one guest from England, who was not prepared to tip in cash (which is the only way), said “Half the trip fee goes to the cost of fuel for the ship. Why am I paying 10% of the fuel cost for ‘service’?”

Also, “tipping” this way creates perverse incentives. After safety, the most important role of the diveguides is to enforce the conservation rules, but that’s made more difficult when you rely on “tips” as a major part of your income for the trip. Which brings us to…

“That Guy”

One of the things we were told as we got ready to board the boat is that “there’s always one guy.” In our case, he was a German who fancied himself a “photo-journalist.” What that meant was that, in his mind, because he made a few thousand Euros per year from stock photography fees, he was justified in breaking the rules: he didn’t stay in a line so that all divers were at an equal distance from the skittish hammerheads, he dive-bombed other photographers, he pushed my wife out of the way when she had the temerity to videotape a marine iguana with “just” a GoPro, and, worst of all, he would swim up to skittish animals such as hammerheads or molas and blast them with his strobes. He was clearly out of line time and time again, and the dive guides never confronted him.

This became a topic of every post-dive talk and we talked to our dive guide about it. He spoke at dinner about the importance of obeying the no-harassment rules and the dire consequences of breaking them. Then again, at breakfast the next day, he reiterated the importance of not chasing the animals.

And then, an hour later “That Guy” bombed a Mola and chased it away. Back on the panga, the other divers were saying stuff and when it became clear that the dive guide wasn’t going to say anything, I gave the guy both barrels. The upshot? Well, he didn’t dive with us anymore, but essentially he got a private dive guide and (according to reports) had a great time swimming up to and blasting pelagic mantas.

Such behavior will continue as long as the rest of us divers, whether photographers or not, tolerate it. We all want to see the animals as well as possible, we all paid a lot of money, we all would love a photo. But sometimes nature doesn’t accommodate our wishes. What we do in those circumstances is the test of our character and, if you label yourself a “photographer” (much less a “photo-journalist”), a test of your ethics.

Don’t be “that guy.”

Tracking Apple Pencil angles and pressure with Xamarin

Rumor has it that Apple will support the Apple Pencil in the forthcoming iPad. If so, more developers will want to use the new features of UITouch — force, angle, and elevation — supported by the incredibly-precise stylus.

Basically, it’s trivial:

— Force is UITouch.Force;
— Angle is UITouch.GetAzimuthAngle(UIView); and
— Angle above horizontal is UITouch.AltitudeAngle

(The UIView objects are there, I think, to make it easier to create a custom angular transform that is more natural to the task at hand — i.e., an artist could “rotate” the page slightly to accommodate the angle with which they like to work. I think.)

Anyhow, here’s some code:


namespace UITouch0

open System
open UIKit
open Foundation
open System.Drawing
open CoreGraphics

type ContentView(color : UIColor) as this = 
   inherit UIView()
   do this.BackgroundColor <- color

   let MaxRadius = 200.0
   let MaxStrokeWidth = nfloat 10.0

   //Mutable!
   member val Circle : (CGPoint * nfloat * nfloat * nfloat ) option = None with get, set

   member this.DrawTouch (touch : UITouch) = 
      let radius = (1.0 - (float touch.AltitudeAngle) / (Math.PI / 2.0)) * MaxRadius |> nfloat
      this.Circle <- Some (touch.LocationInView(this), radius, touch.GetAzimuthAngle(this), touch.Force)
      this.SetNeedsDisplay()


   override this.Draw rect = 

      match this.Circle with
      | Some (location, radius, angle, force) ->
         let rectUL = new CGPoint(location.X - radius, location.Y - radius)
         let rectSize = new CGSize(radius * (nfloat 2.0), radius * (nfloat 2.0))
         use g = UIGraphics.GetCurrentContext()
         let strokeWidth = force * MaxStrokeWidth
         g.SetLineWidth(strokeWidth)
         let hue = angle / nfloat (Math.PI * 2.0)
         let color = UIColor.FromHSB(hue, nfloat 1.0, nfloat 1.0) 
         g.SetStrokeColor(color.CGColor)
         g.AddEllipseInRect <| new CGRect(rectUL, rectSize)
         g.MoveTo (location.X, location.Y)
         let endX = location.X + nfloat (cos(float angle)) * radius
         let endY = location.Y + nfloat (sin(float angle)) * radius
         g.AddLineToPoint (endX, endY)
         g.StrokePath()
      | None -> ()

type SimpleController() = 
   inherit UIViewController()
   override this.ViewDidLoad() = 
      this.View <- new ContentView(UIColor.Blue)

   override this.TouchesBegan(touches, evt) =
      let cv = this.View :?> ContentView
      touches |> Seq.map (fun o -> o :?> UITouch) |> Seq.iter cv.DrawTouch

   override this.TouchesMoved(touches, evt) = 
      let cv = this.View :?> ContentView
      touches |> Seq.map (fun o -> o :?> UITouch) |> Seq.iter cv.DrawTouch
   


[<Register("AppDelegate")>]
type AppDelegate() = 
   inherit UIApplicationDelegate()
   let window = new UIWindow(UIScreen.MainScreen.Bounds)

   override this.FinishedLaunching(app, options) = 
      let viewController = new SimpleController()
      viewController.Title <- "F# Rocks"
      let navController = new UINavigationController(viewController)
      window.RootViewController <- navController
      window.MakeKeyAndVisible()
      true

   
module Main = 
   [<EntryPoint>]
   let main args = 
      UIApplication.Main(args, null, "AppDelegate")
      0

And it looks like this:

Airport Time Capsule considered harmful

The premise of the Apple ecosystem is “It just works.” It is a world of hardware and software in which you pay a premium for not having to worry about fiddling with configurations and command-line options and incompatibility.

The Airport Time Capsule is a wireless router that also contains a hard drive for backups and media sharing. Bizarrely, though, the hard drive it contains is not accessible to OS X’s Disk Utility program, so a run-of-the-mill filesystem error can cause the disk to be inaccessible. It’s the antithesis of “It Just Works.” It’s “It Just Will Not Work.”

Don’t buy an Airport Time Capsule.

Experiment in Auto-Generated UML as a Documentation Tool

I wrote a program to automatically generate class diagrams, filtered by coupling. Here is the result for CoreBluetooth in iOS:

 

Screenshot 2014-12-20 08.33.14

 

You can see there are clusters around CBPeer, CBPeripheral, and CBCentral and that CBCharacteristic is another class with lots of references.

Obviously, huge class diagrams are more noise than signal, but if I further filtered this down to specific topics…?
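The core of such a coupling filter is tiny. A hypothetical sketch (toy data and invented class list, not my actual tool) that counts reference edges per class, keeps only classes above a cutoff, and emits Graphviz DOT:

```python
from collections import Counter

# Toy reference pairs (source class references destination class);
# a real tool would harvest these from the framework's type metadata
references = [
    ("CBPeripheral", "CBPeer"), ("CBCentral", "CBPeer"),
    ("CBPeripheral", "CBCharacteristic"), ("CBService", "CBCharacteristic"),
    ("CBCentralManager", "CBPeripheral"),
]

# Coupling = number of reference edges touching the class
coupling = Counter()
for src, dst in references:
    coupling[src] += 1
    coupling[dst] += 1

CUTOFF = 2
keep = {cls for cls, n in coupling.items() if n >= CUTOFF}

# Emit a Graphviz digraph containing only the highly-coupled classes
lines = ["digraph classes {"]
lines += [f'  "{src}" -> "{dst}";' for src, dst in references
          if src in keep and dst in keep]
lines.append("}")
print("\n".join(lines))
```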

I dunno.

 

P.S. Yeah, yeah, they should be open diamonds, not filled diamonds.

 

Good Bye, Dr. Dobb’s

Today comes the shitty news that Dr. Dobb’s (…Journal of Computer Calisthenics and Orthodontia) is shutting down.

I would not have had the career I have had without DDJ: first as an inspiration, then as a competitor, and then as the last torch of technically rigorous, personally-voiced but professionally edited high-quality programming articles.

DDJ was the last of the great programming magazines and was, probably, the greatest. Only Byte could, perhaps, have an equal claim to the crown. All the rest of us, an entire industry, envied their columnists, technical editors, and authors. Even the standouts (Micro Cornucopia, Programmer’s Journal, C/C++ Users Journal, Unix Review, PC Techniques, WinTech Journal, and, … hell, it’s my feed… Computer Language, Software Development, and Game Developer) could only occasionally match their quality.

Perhaps what I admired most about Dobb’s was that it never wavered from being a programming magazine. In the early 90s, I decreed that Computer Language would never again refer to our profession as “programming,” it would only be referred to as “software development.” We published articles about management, about architecture and design, we boasted (boasted) of how little source code we published (because we talked about “the real issues”). And while I think there was a valid point to be made, the truth is that programming — the infinitely challenging alchemy of turning sparks traveling through blocks of sand into computation and information  — is what drew me to the profession, why I will code when I retire, and why I would have a computer under the floorboards if programming were illegal. Dr. Dobb’s understood, and celebrated, that mysterious joy. Perhaps that is why it out-lasted all the rest.

Now, it seems like, if our industry has a face, it’s the face of an arrogant Silicon Valley douchebag who knows everything about monetization, socialization, and micro-localization and nothing about algorithms, memory models, and programming languages. Dr. Dobb’s wasn’t a magazine for venture capitalists or “Digital Prophet”s or “Brand-Story Architect”s. It was a magazine for hard-core coders, people who could appreciate the trade-offs in the design of a macro preprocessor, get an “ah-hah!” moment from reading an assembly language listing for a chip they didn’t know, or grasp the theme of an implementation discussed over a year of columns.

It will be missed.

For Immediate Release…

FOR IMMEDIATE RELEASE:

NEANY Inc. to Exhibit Unmanned Solutions at AUVSI’s Unmanned Systems North America 2013

~ Arrow UAV, Ground Control Station, and Unmanned Surface Vehicle will be on display~

Hollywood, MD – August 07, 2013 (myPressManager.com) —

NEANY Inc., an industry leader in providing time-sensitive tactical solutions for a variety of missions, is a proud supporter at this year’s AUVSI’s Unmanned Systems North America 2013 in Washington, DC August 12 – 15. Conference attendees will see firsthand NEANY’s flagship UAS, the Arrow, integrated with Raytheon’s Pyros™, a UAS weapon specially designed for tactical level missions, and the aXiom™ 9000 Series, Tachyon’s state-of-the-art Beyond Line-of-Sight communications system. NEANY’s display will also feature its latest autonomous surface vehicle, the DragonSpy, equipped with the ARES Inc. 7.62mm Externally Powered Gun (EPG) mounted on L-3 Communications IOS’s Advanced Remote Weapon Station (ARWS). The DragonSpy is ideal for providing rapid response capabilities in maritime/littoral environments. In addition, visitors will have the opportunity to operate one of NEANY’s signature ground control stations to further experience the capabilities of these systems.

NEANY Inc. is a minority-owned, SBA 8(a)-certified research, design, test and evaluation engineering firm specializing in unmanned systems with integrated payloads supporting a variety of global missions. These missions include homeland defense and security, border and port patrol, urban mapping, counter-narcotics applications, disaster preparedness, and Intelligence, Surveillance, and Reconnaissance (ISR). In addition to unmanned systems, NEANY’s expertise includes ground control stations, systems integration, rapid prototype fabrication, pilot training, and theater deployment and logistics. NEANY continues to demonstrate unprecedented in-theater expertise that includes deployment-to-extraction logistical support as nearly 50% of NEANY’s personnel are currently forward deployed. In a period where financial resources are limited, NEANY is confident in its ability to offer cost-effective unmanned solutions capable of supporting national and international defense applications.

NEANY will have literature and personnel on hand to demonstrate and discuss our full line of available products and systems. Please take time to visit NEANY at booth #2103.

For more information on the NEANY advantage, please visit www.neanyinc.com.

For more information on AUVSI’s Unmanned Systems North America 2013, please visit

How many programmers are there?

According to Evans Data, the worldwide developer community will reach 29M by 2019. The largest growth is expected to come from China and, to a lesser extent, other developing economies.

I tend to be very skeptical about quantitative analysis of the developer community, more so when it comes to global analysis and forecasting, but I have no prima facie reason to criticize those numbers.

As always, I turn my attention to questions of the distribution of developer productivity. Is the distribution of talent among these 29M more like:

curves

A normal distribution would imply that the most effective team structures would be fairly democratic.

The “superprogrammer” distribution, in which an elite (but not vanishingly small) population is vastly more productive than the median, would imply that the most effective structure is the surgical team (the team is structured in service to the elite member).

The “incompetent” distribution, in which a good number of exceptionally bad programmers manage to stay employed, implies that instead of seeking out “rockstars” and “ninjas,” teams should take a satisficing approach. In this world, the median professional programmer is pretty darn good, but sees a lot of unacceptable crap.

A belief in the “superprogrammer” distribution is prevalent, but the “incompetent” distribution best explains the world I’ve seen over the past 30 years.
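To make the three shapes concrete, here is a quick simulation sketch. The units are arbitrary “productivity points” and the parameters are invented; the shapes, not the numbers, are the point:

```python
import random

random.seed(1)
N = 100_000

def median(xs):
    return sorted(xs)[len(xs) // 2]

# Normal: talent clusters symmetrically around the mean
normal = [random.gauss(100, 15) for _ in range(N)]

# "Superprogrammer": a heavy right tail -- a lognormal puts a small
# elite many times above the median
superprogrammer = [random.lognormvariate(4.5, 0.6) for _ in range(N)]

# "Incompetent": most professionals are pretty good, but a sizable lump
# of exceptionally unproductive programmers stays employed
incompetent = [random.gauss(110, 10) if random.random() > 0.2
               else random.gauss(30, 10) for _ in range(N)]

for name, xs in [("normal", normal),
                 ("superprogrammer", superprogrammer),
                 ("incompetent", incompetent)]:
    p99 = sorted(xs)[int(N * 0.99)]
    print(f"{name:16s} median={median(xs):6.1f}  99th pct={p99:6.1f}")
```

In the first two worlds the argument is about how far the right tail stretches; in the third, the interesting feature is the lump on the left.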

How Many Python Programmers Are There?

Giles Thomas makes the case that the Python programming community numbers in the low millions. This seems right to me: that’s a large community, but it’s not quite at the level of the most popular programming languages. That size is supported by this chart, which has impressed me as “feeling right” when it comes to the popularity of various languages.

One point, though, is that Python has made very significant inroads in the scientific community, which I believe is a key influencer and leading indicator: the libraries that scientists build become building blocks for future work. When you look at the history of programming languages, you see that scientists and engineers were clear driving forces behind FORTRAN and although C and C++ were broadly popular, their performance benefits made them extremely popular in labs as well.

I’m not sure that the popularity of Python in labs is going to be captured by metrics that focus on the professional programming community, so if anything, I suspect that the Python community might even be a moderate amount larger than Giles suggests.