The Simplest Deep Learning Program That Could Possibly Work

Once upon a time, when I, a C programmer, first learned Smalltalk, I remember lamenting to J.D. Hildebrand “I just don’t get it: where’s the main()?” Eventually I figured it out, but the lesson remained: Sometimes when learning a new paradigm, what you need isn’t a huge tutorial, it’s the simplest thing possible.

With that in mind, here is the simplest Keras neural net that does something “hard” (learning and solving XOR):

import numpy as np
from keras.models import Sequential
from keras.layers.core import Activation, Dense
from keras.optimizers import SGD

# Allocate the input and output arrays
X = np.zeros((4, 2), dtype='uint8')
y = np.zeros(4, dtype='uint8')

# Training data: X[i] -> y[i]
X[0] = [0, 0]
y[0] = 0
X[1] = [0, 1]
y[1] = 1
X[2] = [1, 0]
y[2] = 1
X[3] = [1, 1]
y[3] = 0

# Create a 2 (inputs) : 2 (middle) : 1 (output) model, with sigmoid activation
model = Sequential()
model.add(Dense(2, input_dim=2))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))

# Train using stochastic gradient descent
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)

# Run through the data `epochs` times
history = model.fit(X, y, epochs=10000, batch_size=4, verbose=0)

# Test the result (uses same X as used for training)
print (model.predict(X))

If you run this, there will be a startup delay of several seconds while the libraries load and the model is built, and then the call to fit will grind through the data (silently, since verbose=0 is set). After the data has been run through 10,000 times, the model predicts the output for the same four inputs. As you’ll see, the neural network has learned a set of weights that solves the XOR logic gate.
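The exact numbers depend on the random weight initialization, but on a typical run the four predictions come out close to the target [0, 1, 1, 0] (values on the order of 0.04, 0.96, 0.96, 0.05); an occasional unlucky run can settle into a local minimum and miss one of the cases.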

Now draw the rest of the owl.


Writing to Azure Storage With F#

This last weekend I participated in the “Hack for the Sea” hackathon. As part of that, I needed to store images and structured data to Azure Storage. The process is very straightforward using F#’s async capabilities.

First, you’ll need the connection string for your Azure Storage:
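(A placeholder sketch only; substitute your storage account’s real name and key from the Azure Portal, and keep the real string out of source control.)

// Placeholder values, not a real account
let connectionString =
    "DefaultEndpointsProtocol=https;AccountName=<your-account>;AccountKey=<your-key>"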

Use that to instantiate a CloudStorageAccount object:


let csa = CloudStorageAccount.Parse connectionString

Then, write method(s) to store the data in either Blob storage or Table storage:


// Put directly in Azure blob storage
let photoSubmissionAsync (cloudStorageAccount : CloudStorageAccount) imageType (maybePhoto : IO.Stream option) imageName =
    async {
        match maybePhoto with
        | Some byteStream ->
            let containerName = "marinedebrispix"
            let ctb = cloudStorageAccount.CreateCloudBlobClient()
            let container = ctb.GetContainerReference containerName
            let blob = container.GetBlockBlobReference(imageName)
            blob.Properties.ContentType <- imageType
            do! blob.UploadFromStreamAsync(byteStream) |> Async.AwaitTask
            return true
        | None -> return false
    }

// Put directly in Azure table storage
let reportSubmissionAsync (cloudStorageAccount : CloudStorageAccount) report photoName =
    async {
        let ctc = cloudStorageAccount.CreateCloudTableClient()
        let table = ctc.GetTableReference("MarineDebris")
        let record = new ReportStorage(report)
        let insertOperation = record |> TableOperation.Insert
        let! tr = table.ExecuteAsync(insertOperation) |> Async.AwaitTask
        return tr.Etag |> Some
    }

The object passed to TableOperation.Insert must be a subclass of TableEntity:


type ReportStorage(report : Report) =
    inherit TableEntity("MainPartition", report.Timestamp.ToString("o"))
    member val public Report = report |> toJson with get, set
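To round things out, here is a sketch of how the two functions might be composed by the caller. The Report value, the photo stream, and the blob-naming scheme here are illustrative stand-ins, not the actual hackathon code:

// Illustrative only: submit a photo and its report, returning both results
let submitAsync (csa : CloudStorageAccount) (report : Report) (maybePhoto : IO.Stream option) =
    async {
        let photoName = sprintf "%s.jpg" (System.Guid.NewGuid().ToString "N")
        let! photoStored = photoSubmissionAsync csa "image/jpeg" maybePhoto photoName
        let! maybeEtag = reportSubmissionAsync csa report photoName
        return photoStored, maybeEtag
    }

// e.g.: submitAsync csa report maybePhoto |> Async.RunSynchronously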


Xamarin: You must explicitly unsubscribe from NSNotifications if IDisposable

In Xamarin, if you observe / subscribe to a particular named NSNotification in an object that is IDisposable (this includes any class descended from NSObject!), you MUST explicitly unsubscribe from it in your Dispose method, or you will get a segfault (the system will attempt to call a method at a memory location that is no longer valid). The pattern looks like this:

class MyClass : NSObject
{
    // Token returned by AddObserver; we need it to unsubscribe in Dispose
    private NSObject notificationObservationHandle;

    MyClass()
    {
        notificationObservationHandle = NSNotificationCenter.DefaultCenter.AddObserver(notificationName, NotificationHandler);
    }

    void NotificationHandler(NSNotification notification)
    {
        // ... etc ...
    }

    private bool disposed = false;
    protected override void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                NSNotificationCenter.DefaultCenter.RemoveObserver(notificationObservationHandle);
            }
            disposed = true;
        }
        base.Dispose(disposing);
    }
}

I Didn’t Like “Enlightenment Now”

They say to never write a negative review of a book until it has received too many positive ones. Which brings us to “Enlightenment Now: The Case For Reason, Science, Humanism, and Progress,” by Steven Pinker.

The tl;dr is that he doesn’t actually argue this case, he just presents a bunch of under-reported optimistic curves and, in the face of problems that cannot be swept under the rug, assures us that if only we treat them as problems to be solved and not get depressed about them, all will be well. Hoo-rah!

If you say “Gee, that sounds like Pinker’s book ‘The Better Angels of Our Nature’, which was a good book!” I’d agree with you. If this book had been called “Even Better Angels of Our Nature” I’d have no problem with it. But Pinker’s “Case for Reason, etc.” is essentially “these curves happened, they correlate (kind of) with periods when ‘Enlightenment ideals’ were popular, therefore, Enlightenment ideals caused the curves!” That’s bad logic.

The only reason I’m criticizing this book is that I would love to engage a book that actually made the case for these ideals and wrestled with the question of why they seem to have lost traction in terms of driving societal action, even while still being broadly paid lip service (the climate deniers don’t say “Science is wrong!”; they claim that science is on their side). Or, perhaps more in the vein of things Pinker likes to do, to discover that “no, history is always an ebb and flow and the tide of Enlightenment continues to roll in.” (I’d be happy to have that case made.)

Pinker wants us to believe that the curves of the book — global poverty, lifespan, wealth, etc. — are strongly predictive of future improvement and, over and over, frames the thought ‘But will that continue?’ as one of pessimism versus optimism. I am temperamentally an optimist, and can rationalize that (“Optimism gives you agency! Pessimism is demotivating!”). But optimism bias is a cognitive mistake. The Enlightenment ideal is to put aside optimism and pessimism and engage with the facts. Yes, it’s true that the Malthusians have been wrongly predicting “we’re just about to run out of capacity!” for 200 years, and “doom is unlikely” should be your starting point. But maybe humanity’s time on Earth is like that of an individual — ups and downs, and heartbreakingly limited, potentially with a long period of decline before the end. Hypochondriacs are consistently wrong, but in the end all of them can put “I told you so” on their gravestones.

Beyond the problems with what the book engages in is what it just plain ignores. “The case for Enlightenment” is essentially a philosophical task, and the proper balance of reason and passion has been debated since (at least) the days of Plato and Aristotle. The word “Romanticism” occurs only twice in the book, in brief dismissals, and I’m not sure which is the worse reason to ignore it: failing to engage with its explicitly anti-Enlightenment philosophy, or deliberately ignoring it, knowing that many people happily identify themselves as romantics and might be less receptive to your position if it were posed as a choice?

“Enlightenment Now” isn’t a bad book. As “Even Better Angels of Our Nature” it’s fine. But ultimately it’s as shallow as a “pull yourselves up by your bootstraps!” self-help book.

fun-ny Faces : Face-based Augmented Reality with F# and the iPhone X

Each year, the F# programming community creates an advent calendar of blog posts, coordinated by Sergey Tihon on his blog. This is my attempt to battle Impostor Syndrome and share something that might be of interest to the community, or at least amusing…

I was an Augmented Reality (AR) skeptic until I began experimenting with iOS 11’s ARKit framework. There’s something very compelling about seeing computer-generated imagery mapped into your physical space.

A feature of the iPhone X is the face-tracking sensor array on the front of the phone. While the primary use-case for these sensors is unlocking the phone, they additionally expose the facial geometry (2,304 triangles) to developers. That mesh can be used to create AR apps that place computer-generated imagery on top of the user’s face at up to 60 FPS.

Getting Started

In Visual Studio for Mac, choose “New solution…” and “Single-View App” for F#:

The resulting solution is a minimal iOS app, with an entry point defined in Main.fs, a UIApplicationDelegate in AppDelegate.fs, and a UIViewController in ViewController.fs. The iOS programming model is not only object-oriented but essentially a Smalltalk-style architecture, with a classic Model-View-Controller approach (complete with frustratingly little emphasis on the “Model” part) and a delegate-object pattern for customizing object life-cycles.

Although ARKit supports low-level access, by far the easiest way to program AR is to use an ARSCNView, which automatically handles the combination of camera and computer-generated imagery. The following code creates an ARSCNView, makes it full-screen (arsceneview.Frame <- this.View.Frame), and assigns its Delegate property to an instance of type ARDelegate (discussed later). When the view is about to appear, we specify that the AR session should use an ARFaceTrackingConfiguration and that it should Run:

[<Register("ViewController")>]
type ViewController (handle:IntPtr) =
    inherit UIViewController (handle)

    let mutable arsceneview : ARSCNView = new ARSCNView()

    let ConfigureAR() =
       let cfg = new ARFaceTrackingConfiguration()
       cfg.LightEstimationEnabled <- true
       cfg

    override this.DidReceiveMemoryWarning () =
      base.DidReceiveMemoryWarning ()

    override this.ViewDidLoad () =
      base.ViewDidLoad ()

      match ARFaceTrackingConfiguration.IsSupported with
      | false -> raise <| new NotImplementedException()
      | true ->
        arsceneview.Frame <- this.View.Frame
        arsceneview.Delegate <- new ARDelegate (ARSCNFaceGeometry.CreateFaceGeometry(arsceneview.Device, false))
        //arsceneview.DebugOptions <- ARSCNDebugOptions.ShowFeaturePoints + ARSCNDebugOptions.ShowWorldOrigin

        this.View.AddSubview arsceneview

    override this.ViewWillAppear willAnimate =
        base.ViewWillAppear willAnimate

        // Configure ARKit
        let configuration = new ARFaceTrackingConfiguration()

        // This method is called subsequent to `ViewDidLoad`, so we know arsceneview is instantiated
        arsceneview.Session.Run (configuration, ARSessionRunOptions.ResetTracking ||| ARSessionRunOptions.RemoveExistingAnchors)


Once the AR session is running, it adds, removes, and modifies SCNNode objects that bridge the 3D scene-graph architecture of iOS’s SceneKit with real-world imagery. As it does so, it calls various methods of the ARSCNViewDelegate class, which we subclass in the previously-mentioned ARDelegate class:

// Delegate object for AR: called on adding and updating nodes
type ARDelegate(faceGeometry : ARSCNFaceGeometry) =
   inherit ARSCNViewDelegate()

   // The geometry to overlay on top of the ARFaceAnchor (recognized face)
   let faceNode = new Mask(faceGeometry)

   override this.DidAddNode (renderer, node, anchor) = 
      match anchor <> null && anchor :? ARFaceAnchor with 
      | true -> node.AddChildNode faceNode
      | false -> ignore()   

   override this.DidUpdateNode (renderer, node, anchor) = 

      match anchor <> null && anchor :? ARFaceAnchor with 
      | true -> faceNode.Update (anchor :?> ARFaceAnchor)
      | false -> ignore()


As you can see in DidAddNode and DidUpdateNode, we’re only interested when an ARFaceAnchor is added or updated. (This would be a good place for an active pattern if things got more complex.) As its name implies, an ARFaceAnchor relates the AR subsystem’s belief about a face’s real-world location and geometry to SceneKit values.

The Mask class is the last piece of the puzzle. We define it as a subtype of SCNNode, which means that it can hold geometry, textures, animations, and so forth. It’s passed the ARSCNFaceGeometry that was ultimately instantiated back in the ViewController (new ARDelegate (ARSCNFaceGeometry.CreateFaceGeometry(arsceneview.Device, false))). As the AR subsystem recognizes face movement and changes (blinking eyes, the mouth opening and closing, etc.), calls to ARDelegate.DidUpdateNode are passed to Mask.Update, which updates the geometry with the latest values from the camera and AR subsystem:

member this.Update(anchor : ARFaceAnchor) =
    let faceGeometry = this.Geometry :?> ARSCNFaceGeometry

    faceGeometry.Update anchor.Geometry
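The rest of the Mask type isn’t shown here, but a minimal sketch consistent with the Update member above might look like this (the constructor, in particular assigning the face geometry to the node’s Geometry property, is my assumption about the wiring; the real class also sets up the material, as discussed next):

// Sketch: a minimal Mask node wrapping the ARSCNFaceGeometry (constructor is assumed)
type Mask(geometry : ARSCNFaceGeometry) as this =
    inherit SCNNode()
    do this.Geometry <- geometry

    member this.Update(anchor : ARFaceAnchor) =
        let faceGeometry = this.Geometry :?> ARSCNFaceGeometry
        faceGeometry.Update anchor.Geometry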

While SceneKit geometries can have multiple SCNMaterial objects, and every SCNMaterial multiple SCNMaterialProperty values, we can make a simple red mask with:

let mat = geometry.FirstMaterial
mat.Diffuse.ContentColor <- UIColor.Red // Basic: single-color mask

Or we can engage in virtual soccer-hooligan face painting with mat.Diffuse.ContentImage <- UIImage.FromFile "fsharp512.png":

[Image: facepaint, the fsharp512.png texture mapped onto the face]

The real opportunity here is undoubtedly for makeup, “face-swap,” and plastic surgery apps, but everyone also loves a superhero. The best mask in comics, I think, is that of Watchmen’s Rorschach, which presented ambiguous patterns matching the black-and-white morality of its wearer, Walter Kovacs.

We can set our face geometry’s material to an arbitrary SKScene SpriteKit animation with mat.Diffuse.ContentScene <- faceFun // Arbitrary SpriteKit scene.

I’ll admit that so far I have been stymied in my attempt to procedurally generate a proper Rorschach mask. The closest I have gotten is a function that uses 3D Improved Perlin Noise and draws black where the noise is negative and white where it is positive. That looks like this:

Which is admittedly more Let That Be Your Last Battlefield than Watchmen.
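The thresholding itself is the easy part. Here is a minimal sketch, assuming a noise3 function (your Improved Perlin Noise implementation, returning values in roughly [-1, 1]) and leaving the byte-array-to-texture conversion aside:

// Sketch only: threshold 3D noise at zero into a black/white byte array
// (noise3 is an assumed Perlin-noise implementation; texture creation not shown)
let thresholdSlice (noise3 : float -> float -> float -> float) (size : int) (z : float) =
    Array.init (size * size) (fun i ->
        let x = float (i % size) / 64.0
        let y = float (i / size) / 64.0
        if noise3 x y z < 0.0 then 0uy else 255uy)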

Other things I’ve considered for face functions are: cellular automata, scrolling green code (you know, like the hackers in movies!), and the video feed from the back-facing camera. Ultimately though, all of that is just warm-up for the big challenge: deformation of the facial geometry mesh. If you get that working, I’d love to see the code!

All of my code is available on Github.

Programmatic AutoLayout Constraints Basics for Xamarin

  1. Create element without an explicit Frame.
  2. Set TranslatesAutoresizingMaskIntoConstraints = false
  3. Create an array of NSLayoutConstraints
  4. Work top-to-bottom, left-to-right, or vice versa. Do this consistently throughout program
  5. Use Layout Anchors
  6. Use the top-level UIView’s SafeAreaLayoutGuide to position relative to the Window / screen
  7. For each dimension, set its location (LeadingAnchor / TopAnchor or TrailingAnchor / BottomAnchor)
  8. Either set the other location anchor or set the internal dimension (WidthAnchor / HeightAnchor)
  9. Call NSLayoutConstraint.ActivateConstraints only after the UIView and any referenced UIView objects have been added to the View Hierarchy (activating too early compiles OK but throws a runtime exception)
toolbar = new UIToolbar();
toolbar.TranslatesAutoresizingMaskIntoConstraints = false;
var tbConstraints = new[]
{
    toolbar.LeadingAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.LeadingAnchor),
    toolbar.TrailingAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.TrailingAnchor),
    toolbar.TopAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.TopAnchor),
    toolbar.HeightAnchor.ConstraintEqualTo(toolbar.IntrinsicContentSize.Height)
};
View.AddSubview(toolbar);
NSLayoutConstraint.ActivateConstraints(tbConstraints);

label = new UILabel();
label.Text = "This is the detail view";
label.TranslatesAutoresizingMaskIntoConstraints = false;
var lblConstraints = new[]
{
    label.LeadingAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.LeadingAnchor, 20.0f),
    label.WidthAnchor.ConstraintEqualTo(label.IntrinsicContentSize.Width),
    label.TopAnchor.ConstraintEqualTo(this.toolbar.BottomAnchor, 20.0f),
    label.HeightAnchor.ConstraintEqualTo(label.IntrinsicContentSize.Height)
};
View.AddSubview(label);
NSLayoutConstraint.ActivateConstraints(lblConstraints);

Notes on installing TensorFlow with GPU Support

The best Tensorflow is the one you have on your machine.

In my opinion, the bottleneck in a DNN solution is not training, but data preparation and iterating your model to the point where it’s reasonable to start investing kilowatt-hours of electricity in training. So I have Tensorflow on all my machines, including my Macs, even though as of Tensorflow 1.2 GPU support is simply not available for Tensorflow on the Mac. (I’m not sure what’s going on, but suspect it may have something to do with licensing NVidia’s CuDNN library.)

Having said that, GPU support for TensorFlow is much faster than CPU-only Tensorflow (in some quick tests on my Windows laptops, ~8x). With GPU-supported Tensorflow, it’s that much easier to iterate your model until your training and validation curves start to look encouraging. At that point, in my opinion it makes sense to move your training to the cloud. There’s a little more friction in terms of moving data and starting and stopping runs and you’re paying for processing, but hopefully you’ve gotten to the point where training time is the bottleneck.

Anyway…

Mac Tensorflow GPU: I’d like to think this will change in the future, but as of August 2017: Nope.

There are a very few people who seem to have figured out how to build Tensorflow with GPU support on the Mac from sources, but the hoop-jumping and yak shaving involved seems like too much to me.

Windows Tensorflow GPU: Yes, but it’s a little finicky. Here are some install notes:

– Install NVidia Cuda 8 (not the Cuda 9 RC)
– Install NVidia CuDNN 5.1 (not the CuDNN 7!)
– Copy the CuDNN .dll to your Cuda /bin directory (probably /Program Files/NVidia GPU Computing Toolkit/Cuda/v8.0/bin/)
– Create an Anaconda environment from an administrative shell. Important: use --python=3.5
– Install tensorflow using:

pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-win_amd64.whl

I think the “cp35” is the hint that you have to use Python 3.5, so if the page at https://www.tensorflow.org/install/install_windows changes to show a different .whl file, you’d have to set the python in your Anaconda environment differently.
– Validate that you’ve got GPU capability:

import tensorflow as tf
tf.Session().run(tf.constant('hello'))

This should result in a cascade of messages, many of which say that Tensorflow wasn’t compiled with various CPU instructions, but most importantly, towards the end you should see a message that begins:

Creating Tensorflow device (/gpu:0)

which indicates that, sure enough, Tensorflow is going to run on your GPU.


Hope this helps!


Dell Infuriates Me

Sunday rant: I have a 2016 Dell XPS (high-end) laptop. I bought a Dell 25″ 4K monitor. And on Friday received a Dell Thunderbolt dock for the monitor. I plug it all together and although the monitor displays wonderfully, the dock is not passing USB through. So I start fiddling around with “unplug from dock, plug into laptop, confirm the peripheral is working,” stuff. And then the laptop BSODs. Machine boots, connects to dock, everything’s fine for 30 seconds, BSOD. Start to Google. “Update your laptop BIOS.” (For a fucking docking station!). It takes a goddamn hour to find the BIOS update on the Dell Website for their goddamn flagship laptop, but whatever.

Still BSODs. Now it’s telling me that I have to update the firmware on the dock. But I cannot update the firmware because if I attach the dock to the laptop to update it, it BSODs. So, there’s this few-second window before the BSOD where I see that I have to update my Thunderbolt Driver on the laptop.

So I download the driver and run the installer for the Thunderbolt Driver. The installer doesn’t give any option other than “Uninstall.” So I say “OK, I’ll uninstall and reinstall.” I uninstall. Fine. I go to reinstall. I’m told I don’t have sufficient permission. So I run as administrator. I still don’t have sufficient permission. So I end up editing the registry to turn off user protection. (Remember, this is all for a docking station.)

I now can run the “install” option, but it refuses to continue because it sees some pre-existing value in the registry. (Which I take to mean its “Uninstall” function didn’t actually, you know, uninstall.) It then rolls back the Thunderbolt install and leaves me with my current situation:

A half-upgraded machine with user access protection turned off, less functionality than it had before, and it still BSODs whenever I turn on the dock. All with a respected company’s flagship hardware.

God.

Programmed my first Alexa skill: I was shocked by what I found!

Although I am pretty deeply entrenched in the Apple ecosystem, the recently-announced $50 Dot was so inexpensive I could not resist checking it out. (Before I go further: I work for Microsoft, so take that into account as you see fit.)

Out of the box, the Echo is very easy to set up for basic queries (“Alexa, what’s my latitude and longitude?” and so forth). The Echo has a relatively lo-fi speaker, and the integration with Sonos (what Amazon calls an “Alexa Skill”) is not yet available, so I haven’t used it all that much.

But there’s an API so you know I had to program something. My preferred solution for “computations in the cloud” is definitely Azure Functions written in F#, but for my first Alexa Skill I used Amazon Lambda running Python.

The first thing to focus on is that Alexa Skills are a separate service that can be programmed many ways, so there’s always going to be a certain amount of integration overhead in the form of multiple tabs open, jumping back and forth between the Alexa Skills site and the Web server/service where you handle the computation.

The Alexa Skills documentation is good, but there are a good number of parts, and I think it’s wise to write your first skill using Amazon Lambda, as I did. Amazon Lambda is often the default service in the documentation, and there are often hyperlinks to the Lambda-specific page to do “X.”


A Skill for Gravity

A friend was talking to me about riflery and astonishing me with the flight times involved. Alexa failed to answer some basic questions about ballistics (Alexa seems to me less capable than Google Assistant, Cortana, or Siri at answering freeform questions), which offered me the perfect simple use-case for my first skill.

Minimum viable query: "What is the speed of an object that has fallen for 1.5 seconds?"

SWAG achievable: "How long would it take for an object dropped from the height of the Empire State Building to fall to the ground on Mars?"

The nice thing about my minimal query is that it’s both stateless and easy to answer with some math: all you need is the duration of the drop and a gravitational constant of -9.81 m/s². (Conversions from meters/second can come later.)
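(Concretely: for an object falling from rest, speed is just g × t, so after 1.5 seconds it is moving at about 9.81 × 1.5 ≈ 14.7 meters per second.)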

I followed the documentation on building an Alexa skill with a Lambda function to create an Alexa Skill named “Gravity.” After naming, the next page of the Skill development site is “Interaction Model.” This is where I was shocked to discover:

Alexa doesn’t do natural language processing!

I ASS-U-ME’d that I would be receiving some programmatic structure that told me the “nominal subject” of the sentence was the noun speed and would allow me to search for a “prepositional modifier” whose “object” was the noun seconds and extract its modifier. That would allow me to recognize either of these sentences:

  • What is the speed of an object that has fallen for 1.5 seconds?; or
  • What's the velocity of an apple after 1.5 seconds?

Or any of a large number of other sentences. Foxtype will show you such parsing in action at this (fascinating) page.

But no! As you can see in the screenshot below, the mapping of a recognized sentence to a programmatic “intent” is nothing but a string template! You either have to anticipate every single supported structure or you have to use wildcards and roll your own. (Honestly, I imagine that it’s not a long road before the wisest interaction model is Parse {utterance}.)

[Screenshot: intents1, the sample utterance templates for the skill’s intents]

To be clear: ‘just’ voice recognition is extraordinarily hard and doing it in ambient environmental noise is insane. It’s only because Alexa already does this very, very hard task that it’s surprising to me that they don’t provide for some amount of the (also hard) task of parsing. The upside, of course, is that sound->utterance is decoupled from utterance->sentence. As far as I know, no one today provides “NLP as a Service,” but it’s easy to imagine. (Although latency… Nope, nope, staying on topic…)

Returning to the screenshot above, you can see that it contains the bracketed template {duration}. The matching value will be associated with the key duration in calls to the Lambda function. And, to be honest, this is a place where the Alexa Skills Kit does do some NLP.

You can help Alexa by specifying the type of the variables in your template text. For instance, I specified the duration variable as a NUMBER. Alexa does use NLP to transform the utterances meaningfully — so “one and a half” becomes “1.5” and so forth. I haven’t really explored the extent of this — does it turn “the Tuesday after New Year’s Day” into a well-formed date and so forth?
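For reference, the interaction model for this skill boils down to roughly the following. (This is an approximation of the 2016-era format from memory: a JSON intent schema plus a plain-text list of sample utterances, with AMAZON.NUMBER as the built-in numeric slot type; the second utterance is simply an illustration of the kind of variant you have to enumerate by hand.)

[code lang="javascript"]
{
  "intents": [
    {
      "intent": "FallingSpeedIntent",
      "slots": [
        { "name": "duration", "type": "AMAZON.NUMBER" }
      ]
    }
  ]
}
[/code]

FallingSpeedIntent what is the speed of an object that has fallen for {duration} seconds
FallingSpeedIntent how fast is an object falling after {duration} seconds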

Alexa packages session data relating to an ongoing conversation and intent data and performs an RPC-like call (I actually don’t know the details) to the endpoint of your choice. In the case of Amazon Lambda, that’s the Amazon Resource Name (ARN) of your function.

The data structures it passes look like this:

[code lang="javascript"]
{
  "session": {
    "sessionId": "SessionId.07dc1151-eb4e-4e12-98fa-64af3f59d82a",
    "application": {
      "applicationId": "amzn1.ask.skill.443f7cb5-ETC-dbecb288ff2d"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account.ETC"
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.13cf7a2b-0789-4244-879f-f4fae08f315f",
    "locale": "en-US",
    "timestamp": "2016-11-18T17:24:09Z",
    "intent": {
      "name": "FallingSpeedIntent",
      "slots": {
        "duration": {
          "name": "duration",
          "value": "1.5"
        }
      }
    }
  },
  "version": "1.0"
}
[/code]

The values in the session object relate to a conversation and the values in the request object belong to a specific intent — in this case the FallingSpeedIntent with the duration argument set to “1.5”.

On the Lambda side of things

Amazon Lambda has a template function called ColorIs that provides an easy starting point. It supports session data, which my Gravity skill doesn’t require, so I actually ended up mostly deleting code (always my favorite thing). Given the JSON above, here’s how I route the request to a specific function:

[code lang="python"]
def on_intent(intent_request, session):
    """ Called when the user specifies an intent for this skill """

    print("on_intent requestId=" + intent_request['requestId'] +
          ", sessionId=" + session['sessionId'])

    intent = intent_request['intent']
    intent_name = intent_request['intent']['name']

    # Dispatch to your skill's intent handlers
    if intent_name == "FallingSpeedIntent":
        return get_falling_speed(intent, session)


def get_falling_speed(intent, session):
    session_attributes = {}
    reprompt_text = None
    should_end_session = True

    g = -9.81  # meters per second squared

    if "duration" in intent['slots']:
        duration = float(intent['slots']['duration']['value'])
        velocity = g * duration  # v = g * t for an object falling from rest

        # Report the magnitude; g is negative (downward)
        speech_output = "At the end of " + str(duration) + \
            " seconds, an object will be falling at " + \
            ('%.1f' % abs(velocity)) + " meters per second. Goodbye."
    else:
        speech_output = "Pretty fast I guess."

    return build_response(session_attributes, build_speechlet_response(
        intent['name'], speech_output, reprompt_text, should_end_session))
[/code]

(Boilerplate not shown)

My Westworld prediction

[code lang="csharp"]
var k = Convert.FromBase64String("vlqnRQo8YYXdqt3c7CahDninF6MgvRnqNEU+/tcbWdM=");
var iv = Convert.FromBase64String("gaXwv734Tu3+Jw1hgtNrzw==");
DecryptStringFromBytes(Convert.FromBase64String("Yr2XWzCxceStAF1BaUgaqmWcqFjzWskDDN4foaxfGEO5JHc/oKvgukkMHZuOiw+dK0JxnOhzC1ZA3QLqZZsQxFtjX+qvu0VRM0p6VEfcv18="), k, iv);
[/code]
DecryptStringFromBytes(Convert.FromBase64String("Yr2XWzCxceStAF1BaUgaqmWcqFjzWskDDN4foaxfGEO5JHc/oKvgukkMHZuOiw+dK0JxnOhzC1ZA3QLqZZsQxFtjX+qvu0VRM0p6VEfcv18="), k, iv);[/code]