Writing to Azure Storage With F#

This last weekend I participated in the “Hack for the Sea” hackathon. As part of that, I needed to store images and structured data in Azure Storage. The process is straightforward using F#’s async capabilities.

First, you’ll need the connection string for your Azure Storage account:
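
A minimal sketch of the setup, assuming the WindowsAzure.Storage package and that the connection string lives in an environment variable (the variable name here is hypothetical):

open System
open Microsoft.WindowsAzure.Storage
open Microsoft.WindowsAzure.Storage.Blob
open Microsoft.WindowsAzure.Storage.Table

// Assumption: the connection string is kept out of source control,
// e.g., in an environment variable
let connectionString =
    Environment.GetEnvironmentVariable "AZURE_STORAGE_CONNECTION_STRING"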

Use that to instantiate a CloudStorageAccount object:


let csa = CloudStorageAccount.Parse connectionString

Then, write method(s) to store the data in either Blob storage or Table storage:


// Put directly in Azure blob storage
let photoSubmissionAsync (cloudStorageAccount : CloudStorageAccount) imageType (maybePhoto : IO.Stream option) imageName =
    async {
        match maybePhoto with
        | Some byteStream ->
            let containerName = "marinedebrispix"
            let ctb = cloudStorageAccount.CreateCloudBlobClient()
            let container = ctb.GetContainerReference containerName
            let blob = container.GetBlockBlobReference(imageName)
            blob.Properties.ContentType <- imageType
            do! blob.UploadFromStreamAsync(byteStream) |> Async.AwaitTask
            return true
        | None -> return false
    }

// Put directly in Azure table storage
let reportSubmissionAsync (cloudStorageAccount : CloudStorageAccount) report photoName =
    async {
        let ctc = cloudStorageAccount.CreateCloudTableClient()
        let table = ctc.GetTableReference("MarineDebris")
        let record = new ReportStorage(report)
        let insertOperation = record |> TableOperation.Insert
        let! tr = table.ExecuteAsync(insertOperation) |> Async.AwaitTask
        return tr.Etag |> Some
    }
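
Calling these is just another async workflow. A minimal sketch of a hypothetical call site (csa is the account from above; report is whatever your domain’s Report type is):

// Hypothetical call site: runs both uploads in one async workflow
let submitAsync report =
    async {
        use photo = IO.File.OpenRead "debris.jpg"
        let! photoStored = photoSubmissionAsync csa "image/jpeg" (Some (photo :> IO.Stream)) "debris.jpg"
        let! maybeEtag = reportSubmissionAsync csa report "debris.jpg"
        return photoStored, maybeEtag
    }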

The object passed to TableOperation.Insert must be a subclass of TableEntity:


type ReportStorage(report : Report) =
    inherit TableEntity("MainPartition", report.Timestamp.ToString("o"))

    member val public Report = report |> toJson with get, set
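
The toJson helper isn’t shown above; a minimal sketch, assuming Json.NET and that it’s defined before ReportStorage:

open Newtonsoft.Json

// Hypothetical helper: serialize the Report to a JSON string for the table column
let toJson value = JsonConvert.SerializeObject (box value)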

fun-ny Faces : Face-based Augmented Reality with F# and the iPhone X

Each year, the F# programming community creates an advent calendar of blog posts, coordinated by Sergey Tihon on his blog. This is my attempt to battle Impostor Syndrome and share something that might be of interest to the community, or at least amusing…

I was an Augmented Reality (AR) skeptic until I began experimenting with iOS 11’s ARKit framework. There’s something very compelling about seeing computer-generated imagery mapped into your physical space.

A feature of the iPhone X is the face-tracking sensors on the front side of the phone. While the primary use-case for these sensors is unlocking the phone, they additionally expose the facial geometry (2,304 triangles) to developers. This geometry can be used to create AR apps that place computer-generated geometry on top of the facial geometry at up to 60FPS.

Getting Started

In Visual Studio for Mac, choose “New solution…” and “Single-View App” for F#.

The resulting solution is a minimal iOS app, with an entry point defined in Main.fs, a UIApplicationDelegate in AppDelegate.fs, and a UIViewController in ViewController.fs. The iOS programming model is not only object-oriented but essentially a Smalltalk-style architecture, with a classic Model-View-Controller approach (complete with frustratingly little emphasis on the “Model” part) and a delegate-object pattern for customizing object life-cycles.

Although ARKit supports low-level access, by far the easiest way to program AR is to use an ARSCNView, which automatically handles the combination of camera and computer-generated imagery. The following code creates an ARSCNView, makes it full-screen (arsceneview.Frame <- this.View.Frame) and assigns its Delegate property to an instance of type ARDelegate (discussed later). When the view is about to appear, we specify that the AR session should use an ARFaceTrackingConfiguration and that it should Run:

[<Register ("ViewController")>]
type ViewController (handle:IntPtr) =
    inherit UIViewController (handle)

    let mutable arsceneview : ARSCNView = new ARSCNView()

    let ConfigureAR() =
       let cfg = new ARFaceTrackingConfiguration()
       cfg.LightEstimationEnabled <- true
       cfg

    override this.DidReceiveMemoryWarning () =
      base.DidReceiveMemoryWarning ()

    override this.ViewDidLoad () =
      base.ViewDidLoad ()

      match ARFaceTrackingConfiguration.IsSupported with
      | false -> raise <| new NotImplementedException()
      | true ->
        arsceneview.Frame <- this.View.Frame
        arsceneview.Delegate <- new ARDelegate (ARSCNFaceGeometry.CreateFaceGeometry(arsceneview.Device, false))
        //arsceneview.DebugOptions <- ARSCNDebugOptions.ShowFeaturePoints + ARSCNDebugOptions.ShowWorldOrigin

        this.View.AddSubview arsceneview

    override this.ViewWillAppear willAnimate = 
        base.ViewWillAppear willAnimate

        // Configure ARKit 
        let configuration = new ARFaceTrackingConfiguration()

        // This method is called subsequent to `ViewDidLoad` so we know arsceneview is instantiated
        arsceneview.Session.Run (configuration, ARSessionRunOptions.ResetTracking ||| ARSessionRunOptions.RemoveExistingAnchors)


Once the AR session is running, it adds, removes, and modifies ARSCNNode objects that bridge the 3D scene-graph architecture of iOS’s SceneKit with real-world imagery. As it does so, it calls various methods of the ARSCNViewDelegate class, which we subclass in the previously-mentioned ARDelegate class:

// Delegate object for AR: called on adding and updating nodes
type ARDelegate(faceGeometry : ARSCNFaceGeometry) =
   inherit ARSCNViewDelegate()

   // The geometry to overlay on top of the ARFaceAnchor (recognized face)
   let faceNode = new Mask(faceGeometry)

   override this.DidAddNode (renderer, node, anchor) = 
      match anchor <> null && anchor :? ARFaceAnchor with 
      | true -> node.AddChildNode faceNode
      | false -> ignore()   

   override this.DidUpdateNode (renderer, node, anchor) =
      match anchor <> null && anchor :? ARFaceAnchor with
      | true -> faceNode.Update (anchor :?> ARFaceAnchor)
      | false -> ignore()


As you can see in DidAddNode and DidUpdateNode, we’re only interested when an ARFaceAnchor is added or updated. (This would be a good place for an active pattern if things got more complex.) As its name implies, an ARFaceAnchor relates the AR subsystem’s belief about a face’s real-world location and geometry to SceneKit values.
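
If things did get more complex, a minimal sketch of such an active pattern (hypothetical; this project doesn’t use it) might be:

// Hypothetical active pattern: matches any non-null ARFaceAnchor
let (|FaceAnchor|_|) (anchor : ARAnchor) =
    match anchor with
    | :? ARFaceAnchor as faceAnchor -> Some faceAnchor
    | _ -> None

// Usage: match anchor with
//        | FaceAnchor face -> faceNode.Update face
//        | _ -> ()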

The Mask class is the last piece of the puzzle. We define it as a subtype of SCNNode, which means that it can hold geometry and materials, be animated, and so forth. It’s passed the ARSCNFaceGeometry that was ultimately instantiated back in the ViewController (new ARDelegate (ARSCNFaceGeometry.CreateFaceGeometry(arsceneview.Device, false))). As the AR subsystem recognizes face movement and changes (blinking eyes, the mouth opening and closing, etc.), calls to ARDelegate.DidUpdateNode are passed to Mask.Update, which updates the geometry with the latest values from the camera and AR subsystem:

member this.Update(anchor : ARFaceAnchor) =
    let faceGeometry = this.Geometry :?> ARSCNFaceGeometry

    faceGeometry.Update anchor.Geometry
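
The full class is little more than that Update method; a minimal sketch, assuming the geometry is simply attached in the constructor:

type Mask(geometry : ARSCNFaceGeometry) as this =
    inherit SCNNode()
    // Attach the face geometry so SceneKit renders it wherever the node is placed
    do this.Geometry <- geometry

    member this.Update(anchor : ARFaceAnchor) =
        let faceGeometry = this.Geometry :?> ARSCNFaceGeometry
        faceGeometry.Update anchor.Geometry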

While SceneKit geometries can have multiple SCNMaterial objects, and every SCNMaterial can have multiple SCNMaterialProperty values, we can make a simple red mask with:

let mat = geometry.FirstMaterial
mat.Diffuse.ContentColor <- UIColor.Red // Basic: single-color mask

Or we can engage in virtual soccer-hooligan face painting with mat.Diffuse.ContentImage <- UIImage.FromFile "fsharp512.png".

The real opportunity here is undoubtedly for makeup, “face-swap,” and plastic surgery apps, but everyone also loves a superhero. The best mask in comics, I think, is that of Watchmen’s Rorschach, which presented ambiguous patterns matching the black-and-white morality of its wearer, Walter Kovacs.

We can set our face geometry’s material to an arbitrary SKScene SpriteKit animation with mat.Diffuse.ContentScene <- faceFun.
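
A minimal sketch of such a face-function (faceFun here is hypothetical; any SKScene will do):

open SpriteKit

// Hypothetical face-function: a 512x512 SpriteKit scene used as the mask texture
let faceFun = SKScene.FromSize (new CGSize (nfloat 512.0, nfloat 512.0))
faceFun.BackgroundColor <- UIColor.Black
let label = SKLabelNode.FromText "F#"
label.FontSize <- nfloat 300.0
label.Position <- new CGPoint (nfloat 256.0, nfloat 256.0)
faceFun.AddChild label

mat.Diffuse.ContentScene <- faceFun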

I’ll admit that so far I have been stymied in my attempt to procedurally generate a proper Rorschach mask. The closest I have gotten is a function that uses 3D Improved Perlin Noise and draws black where the noise is negative and white where it is positive. The result is admittedly more Let That Be Your Last Battlefield than Watchmen.

Other things I’ve considered for face functions are: cellular automata, scrolling green code (you know, like the hackers in movies!), and the video feed from the back-facing camera. Ultimately though, all of that is just warm-up for the big challenge: deformation of the facial geometry mesh. If you get that working, I’d love to see the code!

All of my code is available on Github.

Programmatic AutoLayout Constraints Basics for Xamarin

  1. Create the element without an explicit Frame.
  2. Set TranslatesAutoresizingMaskIntoConstraints = false.
  3. Create an array of NSLayoutConstraints.
  4. Work top-to-bottom, left-to-right, or vice versa, and do so consistently throughout the program.
  5. Use Layout Anchors.
  6. Use the top-level UIView's SafeAreaLayoutGuide to position relative to the Window / screen.
  7. For each dimension, set its location (LeadingAnchor / TopAnchor or TrailingAnchor / BottomAnchor).
  8. Either set the other location anchor or set the internal dimension (WidthAnchor / HeightAnchor).
  9. Call NSLayoutConstraint.ActivateConstraints only after the UIView and any referenced UIView objects have been added to the View Hierarchy (forgetting this compiles fine but fails with a runtime exception).
toolbar = new UIToolbar();
toolbar.TranslatesAutoresizingMaskIntoConstraints = false;
var tbConstraints = new[]
{
    toolbar.LeadingAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.LeadingAnchor),
    toolbar.TrailingAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.TrailingAnchor),
    toolbar.TopAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.TopAnchor),
    toolbar.HeightAnchor.ConstraintEqualTo(toolbar.IntrinsicContentSize.Height)
};
View.AddSubview(toolbar);
NSLayoutConstraint.ActivateConstraints(tbConstraints);

label = new UILabel();
label.Text = "This is the detail view";
label.TranslatesAutoresizingMaskIntoConstraints = false;
var lblConstraints = new[]
{
    label.LeadingAnchor.ConstraintEqualTo(this.View.SafeAreaLayoutGuide.LeadingAnchor, 20.0f),
    label.WidthAnchor.ConstraintEqualTo(label.IntrinsicContentSize.Width),
    label.TopAnchor.ConstraintEqualTo(this.toolbar.BottomAnchor, 20.0f),
    label.HeightAnchor.ConstraintEqualTo(label.IntrinsicContentSize.Height)
};
View.AddSubview(label);
NSLayoutConstraint.ActivateConstraints(lblConstraints);

Tracking Apple Pencil angles and pressure with Xamarin

Rumor has it that Apple will support the Apple Pencil in the forthcoming iPad. If so, more developers will want to use the new features of UITouch (force, azimuth angle, and altitude angle) supported by the incredibly precise stylus.

Basically, it’s trivial:

  • Force is UITouch.Force;
  • Azimuth angle is UITouch.GetAzimuthAngle(UIView); and
  • Angle above horizontal is UITouch.AltitudeAngle.

(The UIView parameters are there, I think, to make it easier to create a custom angular transform that is more natural to the task at hand; e.g., an artist could “rotate” the page slightly to accommodate the angle at which they like to work.)

Anyhow, here’s some code:

[code lang="fsharp"]

namespace UITouch0

open System
open UIKit
open Foundation
open System.Drawing
open CoreGraphics

type ContentView(color : UIColor) as this =
    inherit UIView()
    do this.BackgroundColor <- color

    let MaxRadius = 200.0
    let MaxStrokeWidth = nfloat 10.0

    //Mutable!
    member val Circle : (CGPoint * nfloat * nfloat * nfloat) option = None with get, set

    member this.DrawTouch (touch : UITouch) =
        let radius = (1.0 - (float touch.AltitudeAngle) / (Math.PI / 2.0)) * MaxRadius |> nfloat
        this.Circle <- Some (touch.LocationInView(this), radius, touch.GetAzimuthAngle(this), touch.Force)
        this.SetNeedsDisplay()

    override this.Draw rect =
        match this.Circle with
        | Some (location, radius, angle, force) ->
            let rectUL = new CGPoint(location.X - radius, location.Y - radius)
            let rectSize = new CGSize(radius * (nfloat 2.0), radius * (nfloat 2.0))
            use g = UIGraphics.GetCurrentContext()
            let strokeWidth = force * MaxStrokeWidth
            g.SetLineWidth(strokeWidth)
            let hue = angle / nfloat (Math.PI * 2.0)
            let color = UIColor.FromHSB(hue, nfloat 1.0, nfloat 1.0)
            g.SetStrokeColor(color.CGColor)
            g.AddEllipseInRect <| new CGRect(rectUL, rectSize)
            g.MoveTo (location.X, location.Y)
            let endX = location.X + nfloat (cos (float angle)) * radius
            let endY = location.Y + nfloat (sin (float angle)) * radius
            g.AddLineToPoint (endX, endY)
            g.StrokePath()
        | None -> ignore()

type SimpleController() =
    inherit UIViewController()

    override this.ViewDidLoad() =
        this.View <- new ContentView(UIColor.Blue)

    override this.TouchesBegan(touches, evt) =
        let cv = this.View :?> ContentView
        touches |> Seq.map (fun o -> o :?> UITouch) |> Seq.iter cv.DrawTouch

    override this.TouchesMoved(touches, evt) =
        let cv = this.View :?> ContentView
        touches |> Seq.map (fun o -> o :?> UITouch) |> Seq.iter cv.DrawTouch

[<Register("AppDelegate")>]
type AppDelegate() =
    inherit UIApplicationDelegate()

    let window = new UIWindow(UIScreen.MainScreen.Bounds)

    override this.FinishedLaunching(app, options) =
        let viewController = new SimpleController()
        viewController.Title <- "F# Rocks"
        let navController = new UINavigationController(viewController)
        window.RootViewController <- navController
        window.MakeKeyAndVisible()
        true

module Main =
    [<EntryPoint>]
    let main args =
        UIApplication.Main(args, null, "AppDelegate")
        0

[/code]

The result is a circle whose radius tracks the Pencil’s altitude, whose stroke width tracks its force, and whose hue tracks its azimuth angle.

Animating the stroke color of a CAShapeLayer with Xamarin

I wanted to highlight the most recent move in an AI-on-AI game of TicTacToe. The Xs and Os are CAShapeLayer objects.

Here’s the code to do it, featuring a very ugly hack to cast an IntPtr to an NSObject: the use of SetTo and SetFrom to use a type that is not an NSObject in a CABasicAnimation (thanks, Sebastien!):

[code lang="csharp"]
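// Assumed context (not shown in the post): mark is a char ('X' or 'O'),
// endFrame and origin come from the surrounding layout code, and ShapeLayer
// is a helper in the post's project that builds the X and O CAShapeLayers.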
var layer = mark == 'X' ? ShapeLayer.XLayer (endFrame) : ShapeLayer.OLayer (endFrame);
layer.Position = origin;
this.Layer.AddSublayer (layer);

var animation = CABasicAnimation.FromKeyPath ("strokeColor");
animation.SetFrom(UIColor.Green.CGColor);
animation.SetTo(layer.StrokeColor);
animation.Duration = 0.5;

layer.AddAnimation (animation, "animateStrokeColor");
[/code]

TideMonkey: Development Diary 0

I am publicly committing to developing “TideMonkey,” a tide-prediction application that will run on (at least) iOS and watchOS.

TideMonkey will be based on Xtide, an excellent piece of software developed by David Flater. At the moment, my hope is that it will be a very loose port, or what Flater refers to as a “non-port,” that reuses the harmonics files of Xtide but is otherwise only loosely based on the source code. On the other hand, I know virtually nothing about the domain, so it is likely that I will have to hew pretty closely to Xtide’s algorithms, at least initially. Ideally, I would like to be able to plug in different algorithms and compare their results with the canonical Xtide. Neural nets are a particular interest of mine, and one would think that a harmonic series would be the type of thing one could successfully train (if this ever happens, it won’t be for months and months and months).

I am battling the urge to dive right into coding. Instead, I know that I will be better served by investing up front in:

  • automation,
  • testing, and
  • continuous integration

All of which argues for me to begin my journey by getting Xtide, which is written in C++, up and running in a CI server. For no particular reason (but it’s free for personal use) I’ve chosen to use TeamCity for my CI server.


Hmm…

There are several ports of Xtide to iOS or Android on Github. The first one I tried was last updated in 2013 and doesn’t run on iOS 9 (it looks like a simple permissions issue, but it doesn’t run “straight from the cloud”), and I don’t know that I want to deal with a port rather than just go with the original “straight from the horse’s mouth” Xtide source.

At the moment, I think I’ll work all inside the single “TideMonkey” Github repo. I’ll have to check license restrictions on that, and I don’t know how it will work out once the project structure starts to become more complicated, with testing and mobile development as part of it.

Still, the first steps are done:

  • Created the TideMonkey Github repo
  • MIT License
  • TideMonkey repo

GameplayKit path-finding in iOS 9 with Xamarin.iOS

Easy-peasy, lemon-squeezy:

[code lang="csharp"]
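// Assumes using GameplayKit and using OpenTK (the source of Vector2 in
// Xamarin.iOS of this era). Build a small graph, connect nodes, ask for paths.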
var a = GKGraphNode2D.FromPoint (new Vector2 (0, 5));
var b = GKGraphNode2D.FromPoint (new Vector2 (3, 0));
var c = GKGraphNode2D.FromPoint (new Vector2 (2, 6));
var d = GKGraphNode2D.FromPoint (new Vector2 (4, 6));
var e = GKGraphNode2D.FromPoint (new Vector2 (6, 5));
var f = GKGraphNode2D.FromPoint (new Vector2 (6, 0));

a.AddConnections (new [] { b, c }, false);
b.AddConnections (new [] { e, f }, false);
c.AddConnections (new [] { d }, false);
d.AddConnections (new [] { e, f }, false);

var graph = GKGraph.FromNodes(new [] { a, b, c, d, e, f });

var a2e = graph.FindPath (a, e); // [ a, c, d, e ]
var a2f = graph.FindPath (a, f); // [ a, b, f ]
[/code]

FizzBuzz with iOS 9 GameplayKit Expert System in C# with Xam.iOS

OK, so this is silly, but:

[code lang="csharp"]
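// Assumed context (not shown in the post): instance fields along the lines of
//   int input; string output = ""; bool reset;
// plus a helper that builds divisibility predicates, e.g.
//   Func<GKRuleSystem, bool> mod (int m) => rules => input % m == 0;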
var clearRule = GKRule.FromPredicate ((rules) => reset, rules => {
    output = "";
    reset = false;
});
clearRule.Salience = 1;

var fizzRule = GKRule.FromPredicate (mod (3), rules => {
    output += "fizz";
});
fizzRule.Salience = 2;

var buzzRule = GKRule.FromPredicate (mod (5), rules => {
    output += "buzz";
});
buzzRule.Salience = 2;

var outputRule = GKRule.FromPredicate (rules => true, rules => {
    System.Console.WriteLine (output == "" ? input.ToString () : output);
    reset = true;
});
outputRule.Salience = 3;

var rs = new GKRuleSystem ();
rs.AddRules (new [] {
    clearRule,
    fizzRule,
    buzzRule,
    outputRule
});

for (input = 1; input < 16; input++) {
    rs.Evaluate ();
    rs.Reset ();
}
[/code]

Output:

[code]
2015-08-04 13:08:47.164 GameplayKit0[46277:18357203] 1
2015-08-04 13:08:47.164 GameplayKit0[46277:18357203] 2
2015-08-04 13:08:50.338 GameplayKit0[46277:18357203] fizz
2015-08-04 13:08:50.338 GameplayKit0[46277:18357203] 4
2015-08-04 13:08:51.089 GameplayKit0[46277:18357203] buzz
2015-08-04 13:08:51.934 GameplayKit0[46277:18357203] fizz
2015-08-04 13:08:51.934 GameplayKit0[46277:18357203] 7
2015-08-04 13:08:51.935 GameplayKit0[46277:18357203] 8
2015-08-04 13:08:52.589 GameplayKit0[46277:18357203] fizz
2015-08-04 13:08:53.256 GameplayKit0[46277:18357203] buzz
2015-08-04 13:08:53.256 GameplayKit0[46277:18357203] 11
2015-08-04 13:08:53.872 GameplayKit0[46277:18357203] fizz
2015-08-04 13:08:53.873 GameplayKit0[46277:18357203] 13
2015-08-04 13:08:53.873 GameplayKit0[46277:18357203] 14
2015-08-04 13:08:55.005 GameplayKit0[46277:18357203] buzzfizz
[/code]

The important thing I learned is that you have to call GKRuleSystem.Reset() if you want evaluated GKRules to be re-evaluated.

How to: Handoff to a Xamarin iPhone app from Apple Watch

There are two ways to activate the parent (aka container) app from an Apple Watch app. You can either directly activate the container app using WKInterfaceController.OpenParentApplication or you can use Handoff.

Using Handoff is a little more complex, so I thought I’d write a quick little how-to. There are a few different Handoff scenarios, but perhaps the most common for the Watch is: “On my watch, I want to begin a task that I complete later on my iPhone.” For instance, a task that requires either more data entry than is appropriate for the watch or capabilities that aren’t available on it.

I want to keep the focus on the APIs, so instead of a real-world sample, I’m going to create a minimal example: a button on the Watch activates handoff and a status label on the phone app is updated when the user activity is continued on the phone.

As always, we have a single Solution with 3 projects: the parent App, the Watch extension, and the Watch App.

Every handoff activity has a unique identifier. By convention, this is a domain-reversed string such as: com.xamarin.HandOffDemo.verb.

To trigger the Handoff behavior, the watch extension calls the WKInterfaceController.UpdateUserActivity method, with its first argument set equal to this identifier:

[code lang="csharp"]
partial void ActivateHandoffClicked (WatchKit.WKInterfaceButton sender)
{
    var userInfo = NSDictionary.FromObjectAndKey (new NSString ("value"), new NSString ("key"));
    this.UpdateUserActivity ("com.xamarin.HandOffDemo.verb", userInfo, null);
}
[/code]

The third argument is an NSUrl object that can be used for Handoff tasks that should be handled by Safari. In our case, though, we’re passing the userInfo dictionary containing the very complex data associated with our handoff.

Your parent app registers its interest in this type of handoff in its info.plist. In the parent app’s info.plist, add a new array called NSUserActivityTypes and add to it a string with the value com.xamarin.HandOffDemo.verb.

It’s possible that an app could be interested in certain handoffs, but not always be in a position to actually handle them. That logic can be placed in an override of the UIApplicationDelegate.WillContinueUserActivity method:

[code lang="csharp"]
public override bool WillContinueUserActivity (UIApplication application, string userActivityType)
{
    //Yeah, we can handle it
    return true;
}
[/code]

Assuming that we return true from that method, the next step is to override the UIApplicationDelegate.ContinueUserActivity method:

An architectural issue that needs to be addressed is that this Handoff re-entry point is in the UIApplicationDelegate object, which of course does not have an associated user interface. There are several ways to handle this, but as a fan of reactive programming, I think the proper design is to create either an IObservable sequence or a more traditional C# event, which is what I do here:

[code lang="csharp"]
public event EventHandler<NSDictionary> HandoffOccurred = delegate {};

public override bool ContinueUserActivity (UIApplication application, NSUserActivity userActivity, UIApplicationRestorationHandler completionHandler)
{
    HandoffOccurred?.Invoke (this, userActivity.UserInfo);
    return true;
}
[/code]

The third parameter, completionHandler, is used if you have references to custom UIResponder objects that should handle the user activity. In that case, you put those references in an NSArray and pass them to completionHandler, which causes each of their ContinueUserActivity methods to be called. (Update: Apple would probably prefer that technique to my event, but I am not sure it’s necessary, and it requires the UIApplicationDelegate to maintain a reference to the subscribing UIResponder, so you either have an ugly dependency or you have to implement some kind of Observer / Subscriber pattern. So I still suggest an event or IObservable as the better solution.)

In the parent app’s main UIViewController class, I have:

[code lang="csharp"]
public override void ViewDidLoad ()
{
    base.ViewDidLoad ();

    //Configure user experience for Handoff
    var myAppDel = (AppDelegate) UIApplication.SharedApplication.Delegate;
    myAppDel.HandoffOccurred += HandoffOccurred;
}

public void HandoffOccurred (object sender, NSDictionary userInfo)
{
    InvokeOnMainThread (() => statusLabel.Text = userInfo ["key"].ToString ());
}
[/code]

And that’s all there is to it. Obviously, a real use-case would involve building a more complex NSDictionary holding the context of the watch interaction and a similarly complex handler in the parent app.

Now, when the Handoff is activated from the Apple Watch, my iPhone lock screen shows the icon of the parent app in the lower-left corner. If I drag that up, the parent app opens and the re-entry process begins, with UIApplicationDelegate.WillContinueUserActivity and UIApplicationDelegate.ContinueUserActivity called in turn.

Programming WatchKit with F#

Disclaimer: This is just a hack. I’m not in any position to make announcements about stuff, but Xamarin loves F# and I’m sure that better solutions than this are forthcoming. But this was fun to get running, so…

Xamarin just released its Preview of Watch Kit support and naturally, I had to see if it was possible to use F# to program the forthcoming Apple Watch. Yes, it is.

As always with Watch Kit Apps, the Xamarin solution consists of three projects:

  1. A Parent app that is a normal iOS app;
  2. An Extension that runs on a connected iPhone and executes the program logic; and
  3. A Watch App that runs on the Watch and is essentially a remote display for the Extension App

You can read much more about this at Xamarin’s Watch Kit Documentation site.

To create an F# Watch solution, first create an F#-based Parent App. Then add an Extension project to that app: an F#-based Custom Keyboard Extension, which we’ll switch over to Watch Kit below. Finally, add a Watch App from the C#/iOS/Unified/Apple Watch solution template.

The Watch App consists only of a storyboard and resources. It doesn’t actually have any C# (or F#) code in it.

Follow the instructions in Xamarin’s docs to set project references and identifiers.

Switching the Extension from Custom Keyboard to Watch Kit

You will have to manually edit the info.plist of the Watch Extension:

  • In NSExtension, switch the NSExtensionPointIdentifier to com.apple.watchkit; and
  • Under NSExtensionAttributes, add a WKAppBundleIdentifier key whose value is the bundle identifier of your Watch App (e.g., com.xamarin.FWatch1.watchkitapp)

Now you can get rid of the template F# code and replace it with something like this:

[code lang="fsharp"]
namespace WatchX

open System
open UIKit
open Foundation
open WatchKit

type InterfaceController(ip : IntPtr) =
    inherit WKInterfaceController(ip)

    override this.Awake (context) =
        System.Console.WriteLine("Hello F#")
        this.myLabel.SetText("F# |> I ♡")
[/code]

Again, this is covered in much more detail in Xamarin’s docs, but every scene in the Watch App’s storyboard is backed by a subtype of WKInterfaceController. Since it’s loaded from a Storyboard, it uses the constructor that takes an IntPtr. The Awake method is called when the controller is instantiated.

The Hacky Part

Xamarin has not yet released designer support for Watch Kit, so for now, you need to edit your Watch App’s Storyboard in Xcode’s Interface Builder.

That’s not the hacky part.

Once you’ve designed your UI, you have to hand-edit the Storyboard XML, adding connections elements that define your outlets (properties) and actions (event-handlers). You have to set the destination attribute of each to the id of the associated control.

But really, that’s the only ugly part! ;-)

Back in your Extension app, you now have to use attributes to link up your F# code with elements within the Storyboard. The RegisterAttribute on your WKInterfaceController links to the customClass attribute of the controller element, the names of your OutletAttribute properties must correspond to the property attributes of the outlet elements, and the selector attributes of your action elements must each have a corresponding ActionAttribute:

[code lang="fsharp"]
namespace WatchX

open System
open UIKit
open Foundation
open WatchKit

[<Register ("InterfaceController")>]
type InterfaceController(ip : IntPtr) =
    inherit WKInterfaceController(ip)

    let mutable label : WKInterfaceLabel = null
    let mutable button : WKInterfaceButton = null

    let mutable clickCount = 0

    [<Outlet>]
    member this.myLabel
        with get() = label
        and set(v) = label <- v

    [<Outlet>]
    member this.myButton
        with get() = button
        and set(v) = button <- v

    [<Action("OnButtonPress")>]
    member this.OnButtonPush () =
        clickCount <- clickCount + 1
        sprintf "Pressed %d times" clickCount
        |> this.myLabel.SetText

    override this.Awake (context) =
        System.Console.WriteLine("Hello F#")
        this.myLabel.SetText("F# |> I ♡")
[/code]

And that’s really all there is to putting F# on your wrist!

Github project