fun-ny Faces : Face-based Augmented Reality with F# and the iPhone X

Each year, the F# programming community creates an advent calendar of blog posts, coordinated by Sergey Tihon on his blog. This is my attempt to battle Impostor Syndrome and share something that might be of interest to the community, or at least amusing…

I was an Augmented Reality (AR) skeptic until I began experimenting with iOS 11’s ARKit framework. There’s something very compelling about seeing computer-generated imagery mapped into your physical space.

A feature of the iPhone X is the set of face-tracking sensors on the front of the phone. While the primary use case for these sensors is unlocking the phone, they also expose the facial geometry (2,304 triangles) to developers. This geometry can be used to create AR apps that place computer-generated imagery on top of the user’s face at up to 60 FPS.

Getting Started

In Visual Studio for Mac, choose “New solution…” and “Single-View App” for F#:

The resulting solution is a minimal iOS app, with an entry point defined in Main.fs, a UIApplicationDelegate in AppDelegate.fs, and a UIViewController in ViewController.fs. The iOS programming model is not only object-oriented but essentially a Smalltalk-style architecture, with a classic Model-View-Controller approach (complete with frustratingly little emphasis on the “Model” part) and a delegate-object pattern for customizing object life-cycles.
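
For orientation, here’s a minimal sketch of what that entry point in Main.fs looks like (the namespace is illustrative):

[code lang="fsharp"]
namespace FunnyFaces

open UIKit

module Main =
    [<EntryPoint>]
    let main args =
        // Hands control to the UIApplicationDelegate registered as "AppDelegate"
        UIApplication.Main (args, null, "AppDelegate")
        0
[/code]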

Although ARKit supports low-level access, by far the easiest way to program AR is to use an ARSCNView, which automatically handles the combination of camera and computer-generated imagery. The following code creates an ARSCNView, makes it full-screen (arsceneview.Frame <- this.View.Frame), and assigns its Delegate property to an instance of type ARDelegate (discussed later). When the view is about to appear, we specify that the AR session should use an ARFaceTrackingConfiguration and that it should Run:

[<Register ("ViewController")>]
type ViewController (handle : IntPtr) =
    inherit UIViewController (handle)

    let mutable arsceneview : ARSCNView = new ARSCNView()

    let ConfigureAR() =
        let cfg = new ARFaceTrackingConfiguration()
        cfg.LightEstimationEnabled <- true
        cfg

    override this.DidReceiveMemoryWarning () =
        base.DidReceiveMemoryWarning ()

    override this.ViewDidLoad () =
        base.ViewDidLoad ()

        match ARFaceTrackingConfiguration.IsSupported with
        | false -> raise <| new NotImplementedException()
        | true ->
            arsceneview.Frame <- this.View.Frame
            arsceneview.Delegate <- new ARDelegate (ARSCNFaceGeometry.CreateFaceGeometry(arsceneview.Device, false))
            //arsceneview.DebugOptions <- ARSCNDebugOptions.ShowFeaturePoints ||| ARSCNDebugOptions.ShowWorldOrigin

            this.View.AddSubview arsceneview

    override this.ViewWillAppear willAnimate =
        base.ViewWillAppear willAnimate

        // Configure ARKit
        let configuration = new ARFaceTrackingConfiguration()

        // This method is called subsequent to `ViewDidLoad`, so we know arsceneview is instantiated
        arsceneview.Session.Run (configuration, ARSessionRunOptions.ResetTracking ||| ARSessionRunOptions.RemoveExistingAnchors)

Once the AR session is running, it adds, removes, and modifies SCNNode objects that bridge the 3D scene-graph architecture of iOS’s SceneKit with real-world imagery. As it does so, it calls various methods of the ARSCNViewDelegate class, which we subclass in the previously mentioned ARDelegate class:

// Delegate object for AR: called on adding and updating nodes
type ARDelegate(faceGeometry : ARSCNFaceGeometry) =
    inherit ARSCNViewDelegate()

    // The geometry to overlay on top of the ARFaceAnchor (recognized face)
    let faceNode = new Mask(faceGeometry)

    override this.DidAddNode (renderer, node, anchor) =
        match anchor <> null && anchor :? ARFaceAnchor with
        | true -> node.AddChildNode faceNode
        | false -> ()

    override this.DidUpdateNode (renderer, node, anchor) =
        match anchor <> null && anchor :? ARFaceAnchor with
        | true -> faceNode.Update (anchor :?> ARFaceAnchor)
        | false -> ()

As you can see in DidAddNode and DidUpdateNode, we’re only interested when an ARFaceAnchor is added or updated. (This would be a good place for an active pattern if things got more complex; see the sketch below.) As its name implies, an ARFaceAnchor relates the AR subsystem’s belief about a face’s real-world location and geometry to SceneKit values.
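
If the anchor handling did grow more complex, a hedged sketch of such an active pattern might look like this (the pattern and its names are my own, not from the post):

[code lang="fsharp"]
// Classify anchors so the delegate methods can pattern-match directly
let (|FaceAnchor|OtherAnchor|) (anchor : ARAnchor) =
    match anchor with
    | :? ARFaceAnchor as face -> FaceAnchor face
    | _ -> OtherAnchor

// DidUpdateNode could then read:
//   match anchor with
//   | FaceAnchor face -> faceNode.Update face
//   | _ -> ()
[/code]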

The Mask class is the last piece of the puzzle. We define it as a subtype of SCNNode, which means it can hold geometry and textures, have animations, and so forth. It’s passed the ARSCNFaceGeometry that was ultimately instantiated back in the ViewController (new ARDelegate (ARSCNFaceGeometry.CreateFaceGeometry(arsceneview.Device, false))). As the AR subsystem recognizes face movement and changes (blinking eyes, the mouth opening and closing, etc.), calls to ARDelegate.DidUpdateNode are passed to Mask.Update, which updates the geometry with the latest values from the camera and AR subsystem:

member this.Update(anchor : ARFaceAnchor) =
    let faceGeometry = this.Geometry :?> ARSCNFaceGeometry

    faceGeometry.Update anchor.Geometry
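
The rest of the Mask type isn’t shown here; a minimal sketch, assuming the geometry is simply attached in the constructor, might be:

[code lang="fsharp"]
type Mask(geometry : ARSCNFaceGeometry) as this =
    inherit SCNNode()
    do
        // Render the face geometry at this node
        this.Geometry <- geometry
[/code]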

While SceneKit geometries can have multiple SCNMaterial objects, and every SCNMaterial multiple SCNMaterialProperty values, we can make a simple red mask with:

let mat = geometry.FirstMaterial
mat.Diffuse.ContentColor <- UIColor.Red // Basic: single-color mask

Or we can engage in virtual soccer-hooligan face painting with mat.Diffuse.ContentImage <- UIImage.FromFile "fsharp512.png" :


The real opportunity here is undoubtedly for makeup, “face-swap,” and plastic surgery apps, but everyone also loves a superhero. The best mask in comics, I think, is that of Watchmen’s Rorschach, which presented ambiguous patterns matching the black-and-white morality of its wearer, Walter Kovacs.

We can set our face geometry’s material to an arbitrary SKScene SpriteKit animation with mat.Diffuse.ContentScene <- faceFun // Arbitrary SpriteKit scene.
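
A minimal sketch of such a scene, assuming a SpriteKit type of our own (FaceFunScene and its contents are illustrative, and assume open System, open SpriteKit, open CoreGraphics, and open UIKit):

[code lang="fsharp"]
type FaceFunScene() as this =
    inherit SKScene(new CGSize(nfloat 512.0, nfloat 512.0))
    do
        this.BackgroundColor <- UIColor.Black
        // Any SpriteKit content works; here, a slowly spinning label
        let label = new SKLabelNode()
        label.Text <- "F#"
        label.FontSize <- nfloat 200.0
        label.Position <- new CGPoint(nfloat 256.0, nfloat 200.0)
        label.RunAction (SKAction.RepeatActionForever (SKAction.RotateByAngle (nfloat Math.PI, 2.0)))
        this.AddChild label

// mat.Diffuse.ContentScene <- new FaceFunScene()
[/code]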

I’ll admit that so far I have been stymied in my attempt to procedurally generate a proper Rorschach mask. The closest I have gotten is a function that uses 3D Improved Perlin Noise and draws black where the noise value is negative and white where it is positive. That looks like this:

Which is admittedly more Let That Be Your Last Battlefield than Watchmen.
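
The thresholding step itself is simple; a minimal sketch, assuming some 3D noise function noise : float -> float -> float -> float returning values in [-1, 1] (the name and signature are placeholders, not the actual implementation):

[code lang="fsharp"]
let maskColor (noise : float -> float -> float -> float) x y time =
    // Black where the noise is negative, white where it is positive
    if noise x y time < 0.0 then UIColor.Black else UIColor.White
[/code]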

Other things I’ve considered for face functions are: cellular automata, scrolling green code (you know, like the hackers in movies!), and the video feed from the back-facing camera. Ultimately though, all of that is just warm-up for the big challenge: deformation of the facial geometry mesh. If you get that working, I’d love to see the code!

All of my code is available on GitHub.

Programmatic AutoLayout Constraints Basics for Xamarin

  1. Create the element without an explicit Frame.
  2. Set TranslatesAutoresizingMaskIntoConstraints = false.
  3. Create an array of NSLayoutConstraints.
  4. Work top-to-bottom, left-to-right, or vice versa, and do so consistently throughout the program.
  5. Use layout anchors.
  6. Use the top-level UIView's SafeAreaLayoutGuide to position relative to the window / screen.
  7. For each dimension, set its location (LeadingAnchor / TopAnchor or TrailingAnchor / BottomAnchor).
  8. Either set the other location anchor or set the internal dimension (WidthAnchor / HeightAnchor).
  9. Call NSLayoutConstraint.ActivateConstraints after the UIView and any referenced UIView objects have been added to the view hierarchy (activating too early compiles OK, but throws a runtime exception).
toolbar = new UIToolbar();
toolbar.TranslatesAutoresizingMaskIntoConstraints = false;
this.View.AddSubview(toolbar); // must be in the view hierarchy before activation
var tbConstraints = new[]
{   // illustrative constraints; the originals were elided from the post
    toolbar.LeadingAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.LeadingAnchor),
    toolbar.TrailingAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.TrailingAnchor),
    toolbar.TopAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.TopAnchor)
};
NSLayoutConstraint.ActivateConstraints(tbConstraints);

label = new UILabel();
label.Text = "This is the detail view";
label.TranslatesAutoresizingMaskIntoConstraints = false;
this.View.AddSubview(label);
var lblConstraints = new[]
{
    label.LeadingAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.LeadingAnchor, 20.0f),
    label.TopAnchor.ConstraintEqualTo(this.toolbar.BottomAnchor, 20.0f)
};
NSLayoutConstraint.ActivateConstraints(lblConstraints);

Debugging provisioning profiles on the command line

Raise your hand if you’ve ever struggled with getting your app’s bundle identifier, info.plist, and entitlements.plist to match up with your provisioning profile.

I tried to explain provisioning profiles using the ten-hundred most common words, but in slightly-less-common words, a development prov-pro associates: a team, a developer, an application identifier, privacy and security entitlements, and development devices.

While there’s no silver bullet, there is a way to dump the contents of a provisioning profile into a readable plist format. From the command-line, run:

security cms -D -i some.mobileprovision

Here, for instance, is the output of a provisioning profile for an app that uses SiriKit to trigger a workout:


As you can see, this is a convenient way to confirm the associations in the prov-pro, particularly entitlements, the app ID, and provisioned devices.

Mysterious crashes in your iOS 10 program? Check your info.plist

If you’re developing for iOS 10 and your app “silently” crashes (especially if it’s an older app), the culprit could well be the increased privacy requirements in iOS 10. Namespaces such as HomeKit now require specific privacy-related keys to be in your info.plist (for instance, NSHomeKitUsageDescription). If you don’t have them, the system automatically closes your application without an exception or console message (if you run in the simulator, you may see a PRIVACY_VIOLATION notice in the stack trace).
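
For example, a HomeKit app needs an entry along these lines in its info.plist (the description string here is illustrative):

[code lang="xml"]
<key>NSHomeKitUsageDescription</key>
<string>This app uses HomeKit to control your home's accessories.</string>
[/code]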

Streaming a Web video to AppleTV with Xamarin

If you have the URL of a streaming video, it’s easy to display on an AppleTV, even though tvOS does not have a UIWebView (which would make it really easy). You have to use some AVFoundation code, such as:

[code lang="csharp"]
var src = NSUrl.FromString("https://somevideo");
var asset = AVAsset.FromUrl(src);
var playerItem = new AVPlayerItem(asset);
var player = new AVPlayer (playerItem);
var playerLayer = AVPlayerLayer.FromPlayer (player);
// Might want to modify this so that it's the same size as the source video
var frame = new CGRect (0, 0, this.View.Frame.Width, this.View.Frame.Height);
playerLayer.Frame = frame;
this.View.Layer.AddSublayer (playerLayer);
player.Play ();
[/code]

Note: This won’t work with normal YouTube page URLs since the YouTube stream URLs are not directly accessible.

WWDC Remote Viewing Protips

I attended the 2015 WWDC and made these notes afterwards. Aside from the specifics regarding the Apple Watch and AppleTV, they may be of value to those who are considering streaming sessions next week:

WWDC: Post-show Streaming is the Key to Value

From an editorial perspective, one thing that is clear about WWDC is that the main audience for the sessions is not the developers in attendance, but the much more diverse, more diffuse, and more transient on-line audience that will view the videos over the next months and even years.

WWDC Session Videos are great as overviews, poor as references

What I’ve come to realize is that WWDC sessions are great as overviews, but poor for depth. They are very much worth watching when you’re new to a framework, they’re somewhat worth watching if you haven’t programmed in the framework lately (you might see some class you hadn’t appreciated), but they are not the place to discover a way out of some corner-case or programming limitation.

Microsoft explicitly labels the depth of their conference talks as being 100-, 200-, or 300-level, and 300-level content at WWDC was vanishingly rare. (As I write this, I can only speak to the talks I physically attended, but several talks definitely promised more depth than they delivered.)

I wonder if this is an artifact of the dog that didn’t bark, a.k.a. the Apple TV. It must have been pulled very late: both Xcode and Apple’s developer site, which had to be updated to support the new OS betas, are littered with Apple TV references. Perhaps some of these talks were put together quickly. (Although you wouldn’t guess it from the universally well-practiced speakers.)

The real keynote was the Platform State of the Union

Monday’s keynote was covered by news vans and live blogs and all that crap. There was, perhaps, 5 minutes of developer content in this 2.5-hour stemwinder. From the audience, anyway, the music stuff was awkward to the point of embarrassment.

Skip it and watch the Platform State of the Union instead. This was the true developer’s keynote and contains an excellent overview of El Capitan, iOS 9, and watchOS. (By the way, the witty kids pronounce “watchOS” so that it rhymes with “nachos.”)

The Shocking Secret You Can Use to Determine Which Videos to Stream

Is that a proper 21st century headline?

Anyway, here’s the key: many sessions followed a standard naming practice:

— “Introduction to…” talks are 100-level (if that) “tables of content.” They hardly have any code on screen, but contain references to other videos that provide the 200- or 300-level content. If you’ve ever programmed in the namespace before, you can skip these talks.

— “What’s New In…” talks are 100-level “Release Notes.” There may be some code, but what you’re really looking for here are the new classes and general new capabilities. This is the video with which you should start if you have programmed in the framework before, even if you’re pretty comfortable. Again, all of these talks are good at referencing other, more substantive, talks. This is my main recommended tactic for finding deep content on frameworks with which you are familiar: it’s much more effective than guessing from session titles and descriptions.

— Beware talks that have the words “tips”, “tricks,” or “practices.” These were the talks that disappointed me. Such words traditionally mean 300-level content. If you’re an attendee and you’re budgeting precious in-conference time to “tricks” and “practices,” that’s a strong indicator that you’re familiar with the framework and are encountering its limitations and corner cases. But at WWDC, these sessions appear to be more focused on the newcomer or relatively inexperienced framework user.

Tracking Apple Pencil angles and pressure with Xamarin

Rumor has it that Apple will support the Apple Pencil in the forthcoming iPad. If so, more developers will want to use the new features of UITouch — force, angle, and elevation — supported by the incredibly precise stylus.

Basically, it’s trivial:

— Force is UITouch.Force;
— Azimuth angle (rotation about the vertical axis) is UITouch.GetAzimuthAngle(UIView); and
— Angle above horizontal is UITouch.AltitudeAngle.

(The UIView objects are there, I think, to make it easier to create a custom angular transform that is more natural to the task at hand — i.e., an artist could “rotate” the page slightly to accommodate the angle with which they like to work. I think.)

Anyhow, here’s some code:

[code lang="fsharp"]

namespace UITouch0

open System
open UIKit
open Foundation
open System.Drawing
open CoreGraphics

type ContentView(color : UIColor) as this =
    inherit UIView()
    do this.BackgroundColor <- color

    let MaxRadius = 200.0
    let MaxStrokeWidth = nfloat 10.0

    member val Circle : (CGPoint * nfloat * nfloat * nfloat) option = None with get, set

    member this.DrawTouch (touch : UITouch) =
        let radius = (1.0 - (float touch.AltitudeAngle) / (Math.PI / 2.0)) * MaxRadius |> nfloat
        this.Circle <- Some (touch.LocationInView(this), radius, touch.GetAzimuthAngle(this), touch.Force)
        this.SetNeedsDisplay() // trigger a redraw (assumed; omitted in the extracted listing)

    override this.Draw rect =
        match this.Circle with
        | Some (location, radius, angle, force) ->
            let rectUL = new CGPoint(location.X - radius, location.Y - radius)
            let rectSize = new CGSize(radius * (nfloat 2.0), radius * (nfloat 2.0))
            use g = UIGraphics.GetCurrentContext()
            let strokeWidth = force * MaxStrokeWidth
            let hue = angle / nfloat (Math.PI * 2.0)
            let color = UIColor.FromHSB(hue, nfloat 1.0, nfloat 1.0)
            g.AddEllipseInRect <| new CGRect(rectUL, rectSize)
            g.MoveTo (location.X, location.Y)
            let endX = location.X + nfloat (cos (float angle)) * radius
            let endY = location.Y + nfloat (sin (float angle)) * radius
            g.AddLineToPoint (endX, endY)
            // Stroke the accumulated path (assumed; omitted in the extracted listing)
            g.SetLineWidth strokeWidth
            color.SetStroke()
            g.StrokePath()
        | None -> ()

type SimpleController() =
    inherit UIViewController()

    override this.ViewDidLoad() =
        this.View <- new ContentView(UIColor.Blue)

    override this.TouchesBegan(touches, evt) =
        let cv = this.View :?> ContentView
        // Cast each NSObject in the NSSet to a UITouch
        touches |> Seq.cast<UITouch> |> Seq.iter cv.DrawTouch

    override this.TouchesMoved(touches, evt) =
        let cv = this.View :?> ContentView
        touches |> Seq.cast<UITouch> |> Seq.iter cv.DrawTouch

type AppDelegate() =
    inherit UIApplicationDelegate()

    let window = new UIWindow(UIScreen.MainScreen.Bounds)

    override this.FinishedLaunching(app, options) =
        let viewController = new SimpleController()
        viewController.Title <- "F# Rocks"
        let navController = new UINavigationController(viewController)
        window.RootViewController <- navController
        window.MakeKeyAndVisible() // assumed; needed to show the window
        true

module Main =
    [<EntryPoint>]
    let main args =
        UIApplication.Main(args, null, "AppDelegate")
        0
[/code]


And it looks like this:

Tearing-Hair-Out-Thing Explainer (Provisioning Profiles):

There is a company called Round Red Food. They make brain-phones and brain-watches and brain-televisions. These brain-things run brain-books written by Round Red Food. But Round Red Food also allows other people to write brain-books.

Round Red Food wants to control what brain-books run on their brain-things. To do this, they give each brain-thing its own long name. This is called the Brain-Thing-Name.

Brain-books are written by many people, who come and go all the time, but the brain-book is owned by a thing Round Red Food calls the Team. Round Red Food knows all the teams and gives each one its own funny name. This is called the Team-Name.

Every brain-book needs a name, too. This name is added to the Team-Name to make the Book-Name.

Round Red Food wants to control what brain-books do so that bad Teams cannot make their brain-books do bad things like listen to you without your okay. Each brain-book has to do things. Every brain-book needs to run, but some brain-books also need to know where they are. Some need to take pictures. All sorts of stuff, but you should always be able to say okay or “No, I don’t want to allow you to do that.” The things that a brain-book needs to do are called its Needs-Doing-Things.

The people who write brain-books for a Team are coming and going all the time. So Round Red Food wants to know who is working for what Teams. Instead of saying “Keep your Team-Name and your Book-Names all to yourself and change them every time someone comes or goes,” Round Red Food lets the people who work for a team hold onto a special thing. This special thing is a Something-Everyone-Knows/Something-Only-You-Know numbers thing. As long as both the Team and the person working for the team agree, this thing makes a promise that the person works for the team. This Promise-Paper can be broken by either the Team (if they make the person go) or the person (if they don’t want to work with the Team anymore).

So, remember:

* the brain-thing has a Brain-Thing-Name;
* the brain-book has a Book-Name and a Needs-Doing-Things thing;
* the person working for the Team has a Promise-Paper.

The people writing the brain-book send all this stuff to Round Red Food’s Brain-Book Writer’s Place. Round Red Food sends them back something that says “OK, this person, who works for Team, can put the brain-book named Book-Name on Brain-Thing-Name, and the person reading the brain-book will be asked whether they want to allow the brain-book to do its Needs-Doing-Things things.”

This is called the Tearing-Hair-Out-Thing.

Brain-Thing-Name -> UDID
Book-Name -> AppID
Needs-Doing-Things -> Entitlements
Promise-Paper -> Certificate
Tearing-Hair-Out-Thing -> Provisioning Profile

Animating the stroke color of a CAShapeLayer with Xamarin

I wanted to highlight the most recent move in an AI-on-AI game of TicTacToe. The Xs and Os are CAShapeLayer objects.

Here’s the code to do it, featuring a very ugly hack to cast an IntPtr to an NSObject, including the use of SetTo and SetFrom to use a type that is not an NSObject in a CABasicAnimation (thanks, Sebastien!):

[code lang="csharp"]
var layer = mark == 'X' ? ShapeLayer.XLayer (endFrame) : ShapeLayer.OLayer (endFrame);
layer.Position = origin;
this.Layer.AddSublayer (layer);

var animation = CABasicAnimation.FromKeyPath ("strokeColor");
animation.Duration = 0.5;
// The ugly hack: CGColor is not an NSObject, so wrap its handle via
// ObjCRuntime.Runtime (the from/to colors here are illustrative)
animation.SetFrom (Runtime.GetNSObject (UIColor.Red.CGColor.Handle));
animation.SetTo (Runtime.GetNSObject (UIColor.Black.CGColor.Handle));

layer.AddAnimation (animation, "animateStrokeColor");
[/code]

TideMonkey: Development Diary 0

I am publicly committing to developing “TideMonkey,” a tide-prediction application that will run on (at least) iOS and watchOS.

TideMonkey will be based on Xtide, an excellent piece of software developed by David Flater. At the moment, my hope is that it will be a very loose port, or what Flater refers to as a “non-port,” that reuses the harmonics files of Xtide but is otherwise only loosely based on the source code. On the other hand, I know virtually nothing about the domain, so it is likely that I will have to hew pretty closely to Xtide’s algorithms, at least initially. Ideally, I would like to be able to plug in different algorithms and compare their results with the canonical Xtide. Neural nets are a particular interest of mine, and one would think that a harmonic series would be the type of thing one could successfully train against (if this ever happens, it won’t be for months and months and months).

I am battling the urge to dive right into coding. Instead, I know that I will be happy to have invested in:

  • automation,
  • testing, and
  • continuous integration

All of which argues for me to begin my journey by getting Xtide, which is written in C++, up and running in a CI server. For no particular reason (but it’s free for personal use) I’ve chosen to use TeamCity for my CI server.


There are several Xtide ports on GitHub for iOS or Android. The first one I tried was last updated in 2013 and doesn’t run on iOS 9 (it looks like a simple permissions issue, but it doesn’t run “straight from the cloud”), and I don’t know if I want to deal with a port rather than just go with the original, “straight from the horse’s mouth” Xtide source.

At the moment, I think I’ll work all inside a single “TideMonkey” GitHub repo. I’ll have to check license restrictions on that, and I don’t know how it will work out once the project structure starts to become more complicated, with testing and mobile development as part of it.


(Screenshot: creating the TideMonkey GitHub repo, with the MIT License selected.)

The TideMonkey repo is on GitHub.