Josh Smith and I have been busy building a tool to help us tell a story while live coding in our presentations and workshops. We’ve decided to open source it and we’re ready to share it with you.
A while back Apple wrote a Mac app called Demo Monkey that gave you a list of things you could click on to put them on the pasteboard. It made presenting code a little bit easier since you could break up what you wanted to show into chunks, weave it into a narrative, and paste as you go to demonstrate.
But that required having a window visible on your Mac with a list of clips. And if you’re like me, you still needed script notes somewhere nearby to stay on track and remember what clip goes where in the flow.
That’s why we wrote KeyGrip. After iterating on a few different ideas, we settled on a Mac server and a Universal iOS client. Run the client on an iPad Mini and it shows your presenter notes interspersed with code snippets—all generated from a Markdown file. If you tap on any of the code snippets, it instantly shows up on your Mac’s pasteboard.
The Mac and iOS apps communicate seamlessly over Bonjour. All you have to do is make sure they have the same string identifier so they can find each other. The Mac server also live pushes changes to the iOS client while you work on the Markdown script. You can get into a slick editing workflow where you adjust your notes and try out your code examples.
Yeah, I’m biased. But I gotta say…this thing is like magic.
You can download a binary of the Mac server right from the README. You can download the source and build the iOS app to install it on your favorite device. Enjoy!
Oh, and special thanks to Derek Briggs for the icon. He’s got plans to help us polish up the interface a bit over time, too. :)
The UI Screen Shooter scripts have been updated for Xcode 5.1! I’m quite pleased with the results. The instruments command line tool now lets us specify the simulator device directly from the command line. I’ve cleaned up the scripts and they are much easier to follow. Kudos to Apple on this! And thanks to Christoph Koehler’s issue that brought all this to my attention.

If you’d like more details on what changed, read on.
You no longer have to force Xcode to pick the iPhone architecture with the TARGETED_DEVICE_FAMILY configuration parameter. Previously, the instruments command line tool would not let you pick whether you wanted to run on the iPad or iPhone simulators. If an app was marked as universal, Instruments would always launch the app in the iPad simulator. The hack to get around that was to set TARGETED_DEVICE_FAMILY to 1, which would force Xcode to build the app as iPhone only. Instruments would then oblige and only launch the iPhone simulator.

In order to get screenshots on both iPad and iPhone, you had to build twice with different TARGETED_DEVICE_FAMILY settings. It was a real pain, but it worked.
You also no longer have to muck with the simulator preference files to force the simulator to launch in a specific language. Previously, I hacked together a shell script that used PlistBuddy to alter the preference files, forcing the simulator to think only a specific language and locale was available. But thanks to a post by Ole Begemann on NSUserDefaults, I realized that I can force the simulator to pick a locale by just passing special command line arguments.
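Concretely, that approach looks something like this (a sketch, not runnable as-is: the app name, script path, and language values are placeholders for your own setup):

```shell
# Force German by passing arguments the app reads through the
# NSUserDefaults argument domain. "MyApp.app" and the UIASCRIPT path
# are placeholders; substitute your own app and automation script.
instruments -w "iPhone Retina (4-inch) - Simulator - iOS 7.1" \
  -t Automation \
  "MyApp.app" \
  -e UIASCRIPT "screenshot.js" \
  -AppleLanguages "(de)" \
  -AppleLocale "de_DE"
```

The key detail is that `-AppleLanguages` and `-AppleLocale` are passed to the app itself, after the app path, rather than to instruments.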
And that’s not all! You also no longer need to force the simulator to a specific device model with AppleScript! Previously (you can see a theme here), I used an AppleScript that launched the simulator and picked the proper device type from the Hardware menu. When Instruments next launched, it would use the previous simulator setting. Again, it worked but it was a horrible hack.
Update: Brad Grzesiak just pointed out to me that there’s no longer the need for my pty/tty hack in the unix_instruments wrapper since the Instruments command line tool no longer buffers its output when piped. We still need the wrapper script, though, because Instruments doesn’t return a useful exit status when errors occur. Still, I’ll take any opportunity I can get to remove my workaround code.
All that changed in Xcode 5.1 because Instruments now supports specifying the simulator hardware type and iOS version all from the command line! To find out what options you currently have on your machine, just type the following:
instruments -w help
And then you’ll see something like this:
iPhone Retina (3.5-inch) - Simulator - iOS 7.1
iPhone Retina (4-inch) - Simulator - iOS 7.1
iPhone Retina (4-inch 64-bit) - Simulator - iOS 7.1
iPad Retina - Simulator - iOS 7.1
iPad Retina (64-bit) - Simulator - iOS 7.1
...
Finally! It doesn’t matter what you put after the -w flag. You just need to pass something invalid and instruments gives you the valid options. Pass one of these strings in like so to use it:
instruments -w "iPad Retina - Simulator - iOS 7.1" ...
Note that you need the quotation marks because of the spaces in the full name of the simulator hardware type and version. Also, the -w flag must come at the start of the command line, before any other flags. Otherwise you get an error.
Check out the full screen shooter repository for more details. Use this as the basis to write your own screen shooting scripts. Enjoy!
With the release of Ash Furrow’s new book on Functional Reactive Programming in iOS, I’ve seen a lot of debate about whether or not this whole paradigm shift is a good idea. Why do it? Isn’t what Apple provides with their specific flavor of Model-View-Controller sufficient? What the heck is Model-View-ViewModel?
OMG! The acronyms! It burns us!1
That’s what leads me to this post. I would like to build a bridge that stretches across the divide between the traditional way to build iOS applications and the strange new world with MVVM, Reactive Cocoa, RXCollections and the like.
First off, you are not stupid if you have trouble grokking the README and documentation of Reactive Cocoa. It’s a very different paradigm than what grew up around Objective-C. It’s a clever abuse of the C preprocessor—a layer on top of Objective-C, which itself is a layer on top of C, etc.
All this extra syntax makes it hard to just jump in and read the code. It helps to know how and why the syntaxes grow into what they are today. After all, we’re quite forgiving when it comes to the block syntax Apple gave to us, right? Right?!?!
The hacked and layered nature of things like Reactive Cocoa have downsides, for sure. But I’m excited to see experiments giving us new primitives with which to compose our application logic. And it’s not just academic. Github is behind Reactive Cocoa and applies it to real world problems in their Mac and iOS applications.
So, back to view-models.
Think about what a table view data source does in your application.
It’s not your “model” layer in a classic MVC sense. Your models might be, say, NSManagedObjects as part of a Core Data store. These models have validation rules, transformed properties, and relationships that construct what the core of the application does.
A table view data source is none of these things. It’s purely a layer between the table view and the model. The model defines lists of things, but the table view data source transforms those lists into sections and rows. It also returns the actual table view cells, but that’s not what I’m focusing on here. The key is its role as a middle-tier data transformer.
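To make that transformation concrete, here’s a minimal sketch in Swift (the Contact model and the group-by-first-letter rule are illustrative inventions, not code from any of the projects mentioned):

```swift
// A flat model list, as the model layer might hand it to us.
struct Contact {
    let name: String
}

// The data source's job: transform the flat list into sections and rows.
// Here we group contacts by the first letter of their name.
struct ContactSections {
    let sectionTitles: [String]
    let rows: [[Contact]]

    init(contacts: [Contact]) {
        let grouped = Dictionary(grouping: contacts) { String($0.name.prefix(1)) }
        self.sectionTitles = grouped.keys.sorted()
        self.rows = sectionTitles.map { grouped[$0]! }
    }

    func numberOfRows(inSection section: Int) -> Int {
        return rows[section].count
    }
}
```

A UITableViewDataSource implementation would answer numberOfSections and row counts by delegating to a structure like this; the table view itself never sees the flat model list.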
By default, because of Apple’s templates and documentation, we typically use table view controllers as the data source. There’s nothing wrong with that in itself, but it can lead to very large view controllers that both manage representations of screen state (navigation, transitions, and so forth) and feed data to the table views.
I’ve found great success creating separate NSObjects that implement the UITableViewDataSource protocol and dedicating those as data sources of table views. We end up with more focused objects, they’re simpler to test, and it leads us to this realization…
We’ve just made a bunch of view-models!
Yes, the people who work with MVVM all the time will rightly criticize that statement. I’m overgeneralizing a bit, but bear with me. My point is that the table view data source is a “model” layer between the actual list of data and the table view that displays it.
I bring this up to show that the core of MVVM isn’t as foreign a concept on iOS as it seems. If you read Ash’s book, you’ll get a heavy dose of all the other things that MVVM brings with it, but you can at least get a head start in your understanding by thinking of the way the data source sits as a transforming layer between your model and your view.
Imagine this same mechanism generalized to other areas. Instead of bogging down NSManagedObject subclasses with extra methods to display data in different formats, you build intermediate view-model objects that your views could consume and watch. If you update a primitive integer on your model, the view-model sees that change and updates the formatted representation. And the view sees that change and updates the text label on screen.
Next, imagine you had some sort of syntax to describe the bindings between the model, the view-model, and the view. If you’re still tracking with me then you’re ready to take a look at things like Reactive Cocoa. It’s a syntax built on top of Objective-C through which you bind subscribers to producers of events. It’s like KVO but with broader implications.
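If “binding subscribers to producers of events” feels abstract, here’s a toy sketch in plain Swift (this is not Reactive Cocoa’s actual API, just an illustration of the model → view-model → view flow):

```swift
// A toy observable: stores a value and notifies subscribers on change.
// Not Reactive Cocoa's API, just an illustration of the binding idea.
final class Observable<T> {
    private var subscribers: [(T) -> Void] = []
    var value: T {
        didSet { subscribers.forEach { $0(value) } }
    }
    init(_ value: T) { self.value = value }
    func subscribe(_ handler: @escaping (T) -> Void) {
        handler(value)                // fire with the current value
        subscribers.append(handler)   // and on every future change
    }
}

// Model: a primitive integer. View-model: its formatted representation.
let unreadCount = Observable(0)
let unreadText = Observable("")

// Bind model -> view-model (formatting), then view-model -> "view" (a label).
unreadCount.subscribe { unreadText.value = "\($0) unread" }
var labelText = ""
unreadText.subscribe { labelText = $0 }

unreadCount.value = 3
// labelText is now "3 unread"
```

A library like Reactive Cocoa gives you far more than this (composition, transformation, scheduling), but the core idea of a value producer pushing changes down a chain of subscribers is the same.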
I hope this helps a bit if you’ve been curious about the Reactive Cocoa craze on iOS. I don’t think it’s above critique, but don’t write it off just because it isn’t the traditional way. We need more experiments like this that help us describe problems with primitive structures.
Count me in.
You know, if we wanted to order the acronym based on the flow of data it would be Model-ViewModel-View…but that’s just cruel.↩