Archive for the ‘Languages’ Category.

My Favorite iOS 7 APIs Part 3 : CoreMotion (iPhone 5S only)

The new M7 coprocessor in the iPhone 5S makes pedometer apps trivial:

    if(CMStepCounter.IsStepCountingAvailable)
    {
        var counter = new CMStepCounter();
        //Last 8 hours
        counter.QueryStepCount(
            NSDate.FromTimeIntervalSinceNow(-8 * 60 * 60),
            NSDate.Now,
            NSOperationQueue.CurrentQueue,
            StepQueryHandler);
    }

    void StepQueryHandler(int nssteps, NSError error)
    {
        Console.WriteLine(nssteps);
    }

My Favorite iOS 7 APIs: Multipeer Connectivity

Multipeer Connectivity allows you to discover and share data with other iOS devices within Bluetooth radio range or on the same WiFi subnet. It is much easier to use than Bonjour.

I wrote a simple MPC chat program in Xamarin.iOS.

There are necessarily a few hundred lines of code, but 90% of it is just the scaffolding needed to support a four-view application. The actual discovery and communication take only a handful of lines.

There are two phases in Multipeer Connectivity: the Discovery phase and the Session phase. During the Discovery phase, one device acts as a coordinator, or browser, while any number of devices advertise their interest in connecting. Devices advertise their interest in a shared protocol identified by a string.

I created a base class DiscoveryViewController : UIViewController for both the advertising and browsing:

//Base class for browser and advertiser view controllers
public class DiscoveryViewController : UIViewController
{
	public MCPeerID PeerID { get; private set; }

	public MCSession Session { get; private set; }

	protected const string SERVICE_STRING = "xam-chat";

	public DiscoveryViewController(string peerID) : base()
	{
		PeerID = new MCPeerID(peerID);
	}

	public override void ViewDidLoad()
	{
		base.ViewDidLoad();

		Session = new MCSession(PeerID);
		Session.Delegate = new DiscoverySessionDelegate(this);
	}

	public void Status(string str)
	{
		StatusChanged(this, new TArgs<string>(str));
	}

	public event EventHandler<TArgs<string>> StatusChanged = delegate {};
}

This base class holds a PeerID (essentially, the nickname for the device in the chat), an MCSession (the actual connection), and a SERVICE_STRING that specifies what type of MPC session I support (“xam-chat”). Additionally, it exposes a StatusChanged event, which is subscribed to by a UILabel in the DiscoveryView class (not shown, because it’s trivial).
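One small helper: the TArgs<T> event-args wrapper used throughout these snippets isn’t shown in the post, but from its usage it’s presumably just a one-value generic EventArgs, something like this:

//Assumed implementation of the TArgs<T> helper used by the events in this post
public class TArgs<T> : EventArgs
{
	public T Value { get; private set; }

	public TArgs(T value)
	{
		Value = value;
	}
}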

Events relating to the MCSession are handled by ChatSessionDelegate, but those occur after discovery, so let’s put that aside for now and look at how simple the AdvertiserController and BrowserController subtypes of DiscoveryViewController are:

public class AdvertiserController : DiscoveryViewController
{
	MCNearbyServiceAdvertiser advertiser;

	public AdvertiserController(string peerID) : base(peerID)
	{
	}

	public override void DidReceiveMemoryWarning()
	{
		// Releases the view if it doesn't have a superview.
		base.DidReceiveMemoryWarning();

		// Release any cached data, images, etc that aren't in use.
	}

	public override void ViewDidLoad()
	{
		base.ViewDidLoad();

		View = new DiscoveryView("Advertiser", this);
		var emptyDict = new NSDictionary();
		Status("Starting advertising...");

		advertiser = new MCNearbyServiceAdvertiser(PeerID, emptyDict, SERVICE_STRING);
		advertiser.Delegate = new MyNearbyAdvertiserDelegate(this);
		advertiser.StartAdvertisingPeer();
	}
}

class MyNearbyAdvertiserDelegate : MCNearbyServiceAdvertiserDelegate
{
	AdvertiserController parent;

	public MyNearbyAdvertiserDelegate(AdvertiserController parent)
	{
		this.parent = parent;
	}

	public override void DidReceiveInvitationFromPeer(MCNearbyServiceAdvertiser advertiser, MCPeerID peerID, NSData context, MCNearbyServiceAdvertiserInvitationHandler invitationHandler)
	{
		parent.Status("Received Invite");
		invitationHandler(true, parent.Session);
	}
}

public class BrowserController : DiscoveryViewController
{
	MCNearbyServiceBrowser browser;

	public BrowserController(string peerID) : base(peerID)
	{
	}

	public override void DidReceiveMemoryWarning()
	{
		// Releases the view if it doesn't have a superview.
		base.DidReceiveMemoryWarning();

		// Release any cached data, images, etc that aren't in use.
	}

	public override void ViewDidLoad()
	{
		base.ViewDidLoad();

		View = new DiscoveryView("Browser", this);

		browser = new MCNearbyServiceBrowser(PeerID, SERVICE_STRING);
		browser.Delegate = new MyBrowserDelegate(this);

		Status("Starting browsing...");
		browser.StartBrowsingForPeers();
	}

	class MyBrowserDelegate : MCNearbyServiceBrowserDelegate
	{
		BrowserController parent;
		NSData context;

		public MyBrowserDelegate(BrowserController parent)
		{
			this.parent = parent;
			context = new NSData();
		}

		public override void FoundPeer(MCNearbyServiceBrowser browser, MCPeerID peerID, NSDictionary info)
		{
			parent.Status("Found peer " + peerID.DisplayName);
			browser.InvitePeer(peerID, parent.Session, context, 60);
		}

		public override void LostPeer(MCNearbyServiceBrowser browser, MCPeerID peerID)
		{
			parent.Status("Lost peer " + peerID.DisplayName);
		}

		public override void DidNotStartBrowsingForPeers(MCNearbyServiceBrowser browser, NSError error)
		{
			parent.Status("DidNotStartBrowingForPeers " + error.Description);
		}
	}
}

Quite a few lines, but very straightforward: the advertiser uses the iOS class MCNearbyServiceAdvertiser and the browser uses the class MCNearbyServiceBrowser. The browser’s delegate responds to discovery by calling MCNearbyServiceBrowser.InvitePeer and the advertiser’s delegate responds to an invitation by passing true to the invitationHandler.

The Chat Session

When the invitation is accepted, it’s time for the ChatSessionDelegate to take over:

public class ChatSessionDelegate : MCSessionDelegate
{
	public DiscoveryViewController Parent{ get; protected set; }

	public ChatViewController ChatController
	{
		get; 
		set;
	}

	public ChatSessionDelegate(DiscoveryViewController parent)
	{
		Parent = parent;
	}

	public override void DidChangeState(MCSession session, MCPeerID peerID, MCSessionState state)
	{
		switch(state)
		{
		case MCSessionState.Connected:
			Console.WriteLine("Connected to " + peerID.DisplayName);
			InvokeOnMainThread(() => Parent.NavigationController.PushViewController(new ChatViewController(Parent.Session, Parent.PeerID, peerID, this), true));
			break;
		case MCSessionState.Connecting:
			Console.WriteLine("Connecting to " + peerID.DisplayName);
			break;
		case MCSessionState.NotConnected:
			Console.WriteLine("No longer connected to " + peerID.DisplayName);
			break;
		default:
			throw new ArgumentOutOfRangeException();
		}
	}

	public override void DidReceiveData(MCSession session, MonoTouch.Foundation.NSData data, MCPeerID peerID)
	{

		if(ChatController != null)
		{
			InvokeOnMainThread(() => ChatController.Message(String.Format("{0} : {1}", peerID.DisplayName, data.ToString())));
		}
	}

	public override void DidStartReceivingResource(MCSession session, string resourceName, MCPeerID fromPeer, MonoTouch.Foundation.NSProgress progress)
	{
		InvokeOnMainThread(() => new UIAlertView("Msg", "DidStartReceivingResource()", null, "OK", null).Show());

	}

	public override void DidFinishReceivingResource(MCSession session, string resourceName, MCPeerID formPeer, MonoTouch.Foundation.NSUrl localUrl, out MonoTouch.Foundation.NSError error)
	{
		InvokeOnMainThread(() => new UIAlertView("Msg", "DidFinishReceivingResource()", null, "OK", null).Show());
		error = null;

	}

	public override void DidReceiveStream(MCSession session, MonoTouch.Foundation.NSInputStream stream, string streamName, MCPeerID peerID)
	{
		InvokeOnMainThread(() => new UIAlertView("Msg", "DidReceiveStream()", null, "OK", null).Show());

	}
}

Again, this is mostly scaffolding, but be sure to note that it expects to be called on a background thread and uses InvokeOnMainThread to manipulate the UI. It also relies on the ChatViewController:

public class ChatViewController : UIViewController, IMessager
{
	protected MCSession Session { get; private set; }

	protected MCPeerID Me { get; private set; }

	protected MCPeerID Them { get; private set; }

	ChatView cv;

	public ChatViewController(MCSession session, MCPeerID me, MCPeerID them, ChatSessionDelegate delObj) : base()
	{
		this.Session = session;
		this.Me = me;
		this.Them = them;

		delObj.ChatController = this;
	}

	public override void DidReceiveMemoryWarning()
	{
		// Releases the view if it doesn't have a superview.
		base.DidReceiveMemoryWarning();

		// Release any cached data, images, etc that aren't in use.
	}

	public override void ViewDidLoad()
	{
		base.ViewDidLoad();

		cv = new ChatView(this);
		View = cv;

		cv.SendRequest += (s, e) => {
			var msg = e.Value;
			var peers = Session.ConnectedPeers;
			NSError error = null;
			Session.SendData(NSData.FromString(msg), peers, MCSessionSendDataMode.Reliable, out error);
			if(error != null)
			{
				new UIAlertView("Error", error.ToString(), null, "OK", null).Show();
			}
		};
	}

	public void Message(string str)
	{
		MessageReceived(this, new TArgs<string>(str));
	}

	public event EventHandler<TArgs<string>> MessageReceived = delegate {};
}

Again, it’s the simplicity that stands out: Session.SendData is used to transmit a string. The SendRequest event is wired to a UITextField and the MessageReceived event is wired to a UILabel:

public class ChatView : UIView
{
	readonly UITextField message;
	readonly UIButton sendButton;
	readonly UILabel incoming;

	public ChatView(IMessager msgr)
	{
		BackgroundColor = UIColor.White;

		message = new UITextField(new RectangleF(10, 54, 100, 44)) {
			Placeholder = "Message"
		};
		AddSubview(message);

		sendButton = new UIButton(UIButtonType.System) {
			Frame = new RectangleF(220, 54, 50, 44)
		};
		sendButton.SetTitle("Send", UIControlState.Normal);
		AddSubview(sendButton);

		incoming = new UILabel(new RectangleF(10, 114, 100, 44));
		AddSubview(incoming);

		sendButton.TouchUpInside += (sender, e) => SendRequest(this, new TArgs<string>(message.Text));
		msgr.MessageReceived += (s, e) => incoming.Text = e.Value;
	}

	public event EventHandler<TArgs<string>> SendRequest = delegate {};
}

The ChatViewController.Message method is called by the ChatSessionDelegate.DidReceiveData method.
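The IMessager interface that ChatView takes isn’t shown either; reconstructed from its usage, it only needs to expose the incoming-message event:

//Assumed shape of the IMessager interface (not shown in the post);
//ChatView only needs to subscribe to incoming messages
public interface IMessager
{
	event EventHandler<TArgs<string>> MessageReceived;
}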

And that’s really all there is to it.

Dynamic Type in iOS 7: Not Quite as “Dynamic” as You Might Think

One of the nice features in iOS 7 for old fogeys such as myself is that the user can use the general Settings to increase and decrease the fonts used in apps. This is called “Dynamic Type.” Judging by developer forums, I’m not the only one who thought that this was something that was built in to the various widgets. It’s not. To do this in your own app, you have to respond to the ContentSizeCategoryChanged notification and invalidate the layout in any widgets you want to have change size. In Xamarin.iOS, the code looks like this:

public class ContentView : UIView
{
    UITextView txt;

    public ContentView()
    {
        txt = new UITextView(UIScreen.MainScreen.Bounds);
        txt.Text = "Lorem ipsum dolor ...";
        ResetDynamicType();
        //Respond to notification of change
        UIApplication.Notifications.ObserveContentSizeCategoryChanged((s, e) => {
            ResetDynamicType();
        });
        AddSubview(txt);
    }

    public void ResetDynamicType()
    {
        txt.Font = UIFont.PreferredFontForTextStyle(UIFontTextStyle.Body);
    }
}

The crucial point is that you have a ResetDynamicType method (or whatever you want to call it) that you call both at initialization and again every time you’re notified of a font-size change (if you want, you can read the new size from the e in the lambda). So “Dynamic Type” isn’t really anything special in terms of display: it’s still up to the application developer to provide a function that gets called. What is dynamic is the value returned by UIFont.PreferredFontForTextStyle, which varies based on the user’s Settings.

Xamarin Code for iBeacons

Did I mention how easy it is to track an iBeacon using Xamarin?

locationManager = new CLLocationManager();
var beaconId = new NSUuid("E437C1AF-36CE-4BBC-BBE2-6CE802977C46");
var beaconRegion = new CLBeaconRegion(beaconId, "My Beacon");
locationManager.RegionEntered += (s, e) => {
    if(e.Region.Identifier == "My Beacon")
    {
        Console.WriteLine("Found My Beacon");
        //Fire up ranging
        locationManager.StartRangingBeacons(beaconRegion);
        locationManager.DidRangeBeacons += (lm, rangeEvents) => {
            switch(rangeEvents.Beacons[0].Proximity)
            {
            case CLProximity.Far: 
                Console.WriteLine("You're getting colder!");
                break;
            case CLProximity.Near:
                Console.WriteLine("You're getting warmer!");
                break;
            case CLProximity.Immediate:
                Console.WriteLine("You're red hot!");
                break;
            case CLProximity.Unknown: 
                Console.WriteLine("I can't tell");
                break;
            default:
                throw new ArgumentOutOfRangeException();
            }
        };
    }
};
locationManager.StartMonitoring(beaconRegion);
//Create a beacon
var peripheralManager = new CBPeripheralManager(new MyPeripheralDelegate(), DispatchQueue.DefaultGlobalQueue, new NSDictionary());
var beaconOptions = beaconRegion.GetPeripheralData(null);
peripheralManager.StartAdvertising(beaconOptions);
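The MyPeripheralDelegate used when creating the beacon isn’t shown above; a minimal assumed version only has to satisfy the required state-change callback:

//Assumed minimal delegate for the beacon-creation snippet above; the original
//MyPeripheralDelegate isn't shown in the post
class MyPeripheralDelegate : CBPeripheralManagerDelegate
{
    public override void StateUpdated(CBPeripheralManager peripheral)
    {
        //Advertising only takes effect once the state reaches PoweredOn
        Console.WriteLine("Peripheral manager state: " + peripheral.State);
    }
}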

My Favorite iOS 7 APIs Part 1: iBeacons and Multipeer Connectivity

Since Xamarin provides full native capabilities, developers don’t need to wait for us to exploit iOS 7’s awesome new APIs, such as:

  • iBeacon: This, to my mind, is the stealth API of the release. An iBeacon is a Bluetooth device (just iOS devices for now, but Apple says they’ll release a Bluetooth profile for h/w manufacturers) that broadcasts a UUID (the UUID is intended to be shared between many devices, e.g., a store-chain will have a UUID and all their stores will broadcast it: new store’s geofence works instantly). The UUID travels up to Apple and apps that monitor for that UUID get alerted when they enter a geofence around the beacon. Within the beacon’s region, BT, not GPS, is used to indicate proximity. Pair that with…

  • Multipeer Connectivity: Ad hoc messaging and data with none of the hassle. Broadcast a protocol string (“com.MyCompany.MyApp”) and everyone in BT range or on the same WiFi network advertising their interest in that protocol string gets an alert and, boom!, you’ve got Birds of a Feather. (Whoever writes the “Fetish Friend Finder” app using iBeacon and MPC is going to retire early. Of course, there are only 2^122 GUIDs, so you couldn’t track every kink.) (UPDATE: a sample chat app I wrote.)

iBeacons can be combined to create many actionable zones within a physical location.

Here’s some Xamarin code; see the “Xamarin Code for iBeacons” post above.

3D Maps in iOS 7 with Xamarin

It’s trivially simple to show 3D maps in iOS 7:

    var target = new CLLocationCoordinate2D(37.7952, -122.4028);
    var viewPoint = new CLLocationCoordinate2D(37.8009, -122.4100);
    //Enable 3D buildings
    mapView.ShowsBuildings = true;
    mapView.PitchEnabled = true;

    var camera = MKMapCamera.CameraLookingAtCenterCoordinate(target, viewPoint, 500);
    mapView.Camera = camera;


Full Screen Content and EdgesForExtendedLayout in iOS 7

One of the differences that jumps out dramatically to a programmer — especially those of us who typically build our UIs in code rather than using a visual design surface — is the new “full-screen content” concept.

This is particularly evident with UINavigationControllers. This picture shows the difference between the default mode (UIViewController.EdgesForExtendedLayout = UIRectEdge.All) and the “iOS 6”-style (UIViewController.EdgesForExtendedLayout = UIRectEdge.None).

You can see that in UIRectEdge.All mode, the current UIView‘s drawing rectangle covers the whole screen — you can see the diagonals extend under the navigation bar, toolbar, and even the status bar, and you can see the blue tint coming up through those elements (they are also blurred, which you cannot see in the image).
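Switching back to the old behavior is a one-liner in your view controller. Here’s a minimal sketch (the controller class name is just for illustration):

//Minimal sketch: opt a view controller out of full-screen content so its view
//is laid out between the navigation bar and toolbar, iOS 6-style
public class LegacyLayoutViewController : UIViewController
{
    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        //iOS 7 default is UIRectEdge.All: the view extends under the bars
        EdgesForExtendedLayout = UIRectEdge.None;
        View.BackgroundColor = UIColor.White;
    }
}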

ChromeCast Xamarin Binding and Sample Source Code on GitHub

Due to popular demand…

Here is source code for a preliminary Xamarin.iOS binding for Google’s ChromeCast

and

Here is C# source code for a simple iOS app that casts a video URL

In order for this to work, you’ll need:

This is just source code, not a step-by-step walkthrough. Everything associated with this is in beta and I don’t want to invest a lot of time making things just so at this point.

You can read an overview of the programming model here.

ChromeCast Home Media Server: Xamarin.iOS FTW!

As I blogged about last weekend, I got a ChromeCast and had a simple-enough time creating an iOS-binding library for Xamarin.iOS, allowing me to program the ChromeCast in C# (or F#, maybe next weekend…).

This weekend, I wrote a simple Home Media Server that allows me to stream… well, all my ChromeCast-compatible media, primarily mp4s. Here’s how I did it…

ChromeCast Programming: Intro

Essentially the ChromeCast is nothing but a Chrome browser on your TV. If you want to display HTML, no problem, but what you probably want to display is a great big video element:

<video id="vid" style="position:absolute; top:100px; left:0; height:80%; width:100%"></video>

But where does this HTML come from? Here’s the first kind-of-bummer about ChromeCast: Every ChromeCast application is associated with a GUID that Google provides you. Google maintains a map of GUID->URLs. And, since you have to send them your ChromeCast serial to get a GUID, it’s a safe bet they check the hardware, too. When you start an application with session.StartSessionWithApplication("93d43262-ffff-ffff-ffff-fff9f0766cc1"), the ChromeCast always loads the associated URL (in my case, “http://10.0.1.35/XamCast”).

So, as a prerequisite, you need:

  • A ChromeCast that’s been “whitelisted” for development by Google;
  • A Google-supplied GUID that maps to a URL on your home network (a URL you specified when applying to Google for whitelisting);
  • A WebServer at that URL.

It’s important to realize that what’s at that URL is not your media but your “receiver app,” which might be plain HTML but is more likely HTML with some JavaScript using the ChromeCast Receiver API, which lets you manipulate things like volume, playback position, and so on. I basically just use this file from Google’s demo, with minor tweaks.

Home Media Server : Intro

So if you want to stream your home media, you need a WebServer configured to serve your media. This doesn’t have to be the same as your App Server (it probably will be, but conceptually it doesn’t have to be).

The structure is straightforward:

  1. The mobile controller gets a list of media from the Media Server
  2. The application user selects a piece of media
  3. The controller sends the selected URL (and other data) to the ChromeCast
  4. The ChromeCast loads the media-URL from the Media Server

For me, the “App Server” and “Media Server” are the same thing: an Apache instance running on my desktop Mac.

ChromeCast Media-Serving : Components and Life-Cycle

Here, roughly, is the sequence of steps for getting a piece of media playing on the ChromeCast using the Xamarin.iOS binding:

  1. Initialization
    1. Create a GCKContext;
    2. Create a GCKDeviceManager, passing the GCKContext;
    3. Create a GCKDeviceManagerListener; hand it to the GCKDeviceManager;
    4. Call GCKDeviceManager.StartScan
  2. Configuring a session
    1. When GCKDeviceManagerListener.CameOnline is called…
    2. Create a GCKApplicationSession;
    3. Create a GCKSessionDelegate, passing the GCKApplicationSession
  3. Playing media
    1. After GCKSessionDelegate.ApplicationSessionDidStart is called…
    2. Create a GCKMediaProtocolMessageStream;
    3. Get the Channel property of the GCKApplicationSession (type GCKApplicationChannel);
    4. Attach the GCKMediaProtocolMessageStream to the GCKApplicationChannel
    5. Create a GCKContentMetadata with the selected media’s URL
    6. Call GCKMediaProtocolMessageStream.LoadMediaWithContentID, passing in the GCKContentMetadata.
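The code below only covers step 3; the discovery and session setup of steps 1 and 2 would look roughly like this against my preliminary binding. The constructor arguments and the listener hookup here are assumptions, so check the binding source for the exact signatures:

//Rough sketch of steps 1 and 2; ctor arguments and AddListener are assumptions
public class CastController
{
    GCKContext context;
    GCKDeviceManager deviceManager;
    GCKApplicationSession session;

    public void StartScanning()
    {
        context = new GCKContext("XamCast");                     //assumed argument
        deviceManager = new GCKDeviceManager(context);
        deviceManager.AddListener(new MyDeviceListener(this));   //assumed hookup
        deviceManager.StartScan();
    }

    //Called by the listener once a ChromeCast comes online
    public void OpenSession(GCKDevice device)
    {
        session = new GCKApplicationSession(context, device);    //assumed ctor
        //MySessionDelegate is the GCKSessionDelegate whose core code appears below
        session.Delegate = new MySessionDelegate(session);
        session.StartSessionWithApplication("93d43262-ffff-ffff-ffff-fff9f0766cc1");
    }
}

class MyDeviceListener : GCKDeviceManagerListener
{
    readonly CastController parent;

    public MyDeviceListener(CastController parent)
    {
        this.parent = parent;
    }

    public override void CameOnline(GCKDevice device)
    {
        parent.OpenSession(device);
    }
}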

Here’s the core code:

public override void ApplicationSessionDidStart()
{
    var channel = session.Channel; 
    if(channel == null)
    {
        Console.WriteLine("Channel is null");
    }
    else
    {
        Console.WriteLine("We have a channel");
        mpms = new GCKMediaProtocolMessageStream();
        Console.WriteLine("Initiated ramp");
        channel.AttachMessageStream(mpms);

        LoadMedia();
    }
}

private void LoadMedia()
{
    Console.WriteLine("Loading media...");
    var mediaUrl = Media.Url;
    var mediaContentId = mediaUrl.ToString();
    var dict = new NSDictionary();
    var mData = new GCKContentMetadata(Media.Title, Media.ThumbnailUrl, dict);

    Console.WriteLine(mData);
    var cmd = mpms.LoadMediaWithContentID(mediaContentId, mData, true);
    Console.WriteLine("Command executed?  " + cmd);
}

Plans

The core of a real home media server for the ChromeCast is the Web Server and the UI of the mobile application that browses it and chooses media. To turn this hack into a turnkey solution, you’d need to:

  • Run a public Chromecast application server that
    • Deferred the URL of the media server to the client
  • Write the media server, with all the necessary admin
  • Write a nice client app, that stored the mapping between the public ChromeCast app server and the (strictly-local) media server
  • Make a great user interface for selecting media
  • Make a great user interface for controlling the media

I have no plans on doing any of that stuff. What I plan on doing once ChromeCast and iOS 7 are out of beta is:

  • Make a nicer binding of the ChromeCast API and put it up for free on the Xamarin Component Store; and
  • Play around with serving media and blogging about anything interesting that comes up

Conclusion

The real thing that I wanted to do was see if Xamarin.iOS worked well with ChromeCast (resounding “Yes!”) and come up with a hack for my own use.

Achievement Unlocked.

ChromeCast Home Media Server with Xamarin

Code to follow…
