Capturing a photo programmatically with Objective-C using QTKit

I have been quite bored lately, and when you are bored you want to make things. One thing I have always wanted to do is take a photo programmatically (that is, have my code use the built-in camera to capture photos for my application to use), with my Mac's built-in iSight camera, for some fun image manipulation.

A quick Google search returns mostly outdated results: old programs and material about capturing video with deprecated APIs. However, when I recently dove into QTKit (QuickTime's Objective-C API) I found some promising things.

This post is a straightforward demo of how to grab photos from your iSight camera (or any other connected camera). I'll post some code and explain the different steps and what they mean. And the best part about reading tutorials on QTKit and the computer's camera is that you get to see a lot of random pictures of the demoer himself. Here is my contribution, i.e. what you should have at the end of this tutorial:

Yours truly, testing some code

Doing this you notice how silly you look while compiling code.

Prerequisites

You should know some Objective-C, how to use Xcode, and have some basic knowledge of Cocoa. (This code uses QTKit and AppKit, so it will not work on iOS.)

Code

The first thing you have to do is add the QTKit framework to your Xcode project. While you're at it, you'll need the QuartzCore framework too (for image processing). Adding frameworks in Xcode 4 is a bit different from Xcode 3: select the project name in the file navigator, open your target's build phases, and under "Link Binary With Libraries" press the + button and find your framework.

To keep this simple I’ll just post a working code sample with lots and lots of comments.

//
//  PhotoGrabber.h
//  By Erik Rothoff Andersson <erikrothoff.com>
//

#import <Foundation/Foundation.h>
#import <QTKit/QTKit.h>

@protocol PhotoGrabberDelegate <NSObject>
- (void)photoGrabbed:(NSImage*)image;
@end

@interface PhotoGrabber : NSObject {
    CVImageBufferRef currentImage;
    QTCaptureDevice *video;
    QTCaptureDecompressedVideoOutput *output;
    QTCaptureInput *input;
    QTCaptureSession *session;
    id<PhotoGrabberDelegate> delegate;
}

@property (nonatomic, assign) id<PhotoGrabberDelegate> delegate;

- (void)grabPhoto;
- (NSString*)deviceName;

@end
//
//  PhotoGrabber.m
//  By Erik Rothoff Andersson <erikrothoff.com>
//

#import "PhotoGrabber.h"

@implementation PhotoGrabber

@synthesize delegate;

- (id)init
{
    if ( (self = [super init]) )
    {
        NSError *error = nil;

        // Acquire a device. We will also create one object for getting
        // input from the device and another for receiving its output.
        video = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];

        BOOL success = [video open:&error];
        if ( ! success || error )
        {
            NSLog(@"Did not succeed in acquiring device: %d", success);
            NSLog(@"Error: %@", [error localizedDescription]);
            [self release];
            return nil;
        }

        // QTCaptureDeviceInput is the object that will use the
        // device as input, i.e. handle the photo-taking
        input = [[QTCaptureDeviceInput alloc] initWithDevice:video];

        // The session connects the input and output objects
        session = [[QTCaptureSession alloc] init];

        // Add our input object as input for this particular session
        success = [session addInput:input error:&error];
        if ( ! success || error )
        {
            NSLog(@"Did not succeed in connecting input to session: %d", success);
            NSLog(@"Error: %@", [error localizedDescription]);
            [self release];
            return nil;
        }

        // Create an object for outputting the video.
        // The input tells the session that it has captured some
        // data, which the session in turn passes to the output
        // object, which notifies its delegate.
        output = [[QTCaptureDecompressedVideoOutput alloc] init];

        // This is the delegate. Note the
        // captureOutput:didOutputVideoFrame:...-method below:
        // that is the method that will be called when a frame
        // has been captured.
        [output setDelegate:self];

        // Add the output object to the session
        success = [session addOutput:output error:&error];
        if ( ! success || error )
        {
            NSLog(@"Did not succeed in connecting output to session: %d", success);
            NSLog(@"Error: %@", [error localizedDescription]);
            [self release];
            return nil;
        }

        // Because the input is a video stream we will receive
        // many frames in rapid succession; we store the first
        // one we get and ignore the rest once we have one
        currentImage = nil;
    }
    return self;
}

// This is the method to call when you want to initiate a grab
- (void)grabPhoto
{
    [session startRunning];
}

// The device name will most likely be "Built-in iSight camera"
- (NSString*)deviceName
{
    return [video localizedDisplayName];
}

// QTCapture delegate method, called when a frame has been captured by the camera
- (void)captureOutput:(QTCaptureOutput *)captureOutput didOutputVideoFrame:(CVImageBufferRef)videoFrame withSampleBuffer:(QTSampleBuffer *)sampleBuffer fromConnection:(QTCaptureConnection *)connection
{
    // If we already have an image we should use that instead
    if ( currentImage )
        return;

    // Retain the videoFrame so it won't disappear.
    // Don't forget to release it later!
    CVBufferRetain(videoFrame);

    // The Apple docs state that this access must be synchronized,
    // because this method is run on another thread
    @synchronized (self) {
        currentImage = videoFrame;
    }

    // As stated above, this method is called on another thread, so
    // we perform the selector that handles the image on the main thread
    [self performSelectorOnMainThread:@selector(saveImage) withObject:nil waitUntilDone:NO];
}

// Called from the QTCapture delegate method
- (void)saveImage
{
    // Stop the session so we don't record anything more
    [session stopRunning];

    // Convert the frame to an NSImage with a JPEG representation.
    // This is a bit tricky: we wrap the raw pixel buffer in a
    // CIImage, render that into an NSImage, and re-encode the
    // result as JPEG
    NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:currentImage]];
    NSImage *image = [[NSImage alloc] initWithSize:[imageRep size]];
    [image addRepresentation:imageRep];

    NSData *bitmapData = [image TIFFRepresentation];
    NSBitmapImageRep *bitmapRep = [NSBitmapImageRep imageRepWithData:bitmapData];
    NSData *imageData = [bitmapRep representationUsingType:NSJPEGFileType properties:nil];

    [image release];
    image = [[NSImage alloc] initWithData:imageData];

    // Call the delegate callback
    if ( [self.delegate respondsToSelector:@selector(photoGrabbed:)] )
        [self.delegate photoGrabbed:image];

    // Clean up after ourselves
    [image release];
    CVBufferRelease(currentImage);
    currentImage = nil;
}

- (void)dealloc
{
    self.delegate = nil;

    // Just close/turn off everything if it's running
    if ( [session isRunning] )
        [session stopRunning];
    if ( [video isOpen] )
        [video close];

    // Remove input/output and release what we allocated
    // (the device itself was autoreleased, so we don't release it)
    [session removeInput:input];
    [session removeOutput:output];
    [input release];
    [output release];
    [session release];

    [super dealloc];
}

@end

This class presents you with an easy interface for grabbing photos. As illustrated:

// How to use the Gist from https://gist.github.com/1038480
// By Erik Rothoff Andersson <erikrothoff.com>

#import "PhotoGrabber.h"

@implementation MyClass

- (void)doSomething
{
    // Keep a reference to the grabber (here assumed to be an ivar
    // declared in the header) until the callback has fired;
    // releasing it immediately would tear down the capture
    // session before a frame arrives
    grabber = [[PhotoGrabber alloc] init];
    grabber.delegate = self;
    [grabber grabPhoto];
}

- (void)photoGrabbed:(NSImage*)image
{
    // image is the image from the camera:
    // store it to a file, show it, manipulate it, have fun
    [grabber release];
    grabber = nil;
}

@end
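
If all you want is the photo on disk, the delegate callback is a natural place to write it out. Here is a minimal sketch of such a callback; the output path is just an example, and the re-encoding mirrors what PhotoGrabber already does internally:

```objc
// Hypothetical delegate implementation that writes the grabbed
// photo to disk as a JPEG. The path is only an example.
- (void)photoGrabbed:(NSImage*)image
{
    // Re-encode the NSImage as JPEG data via a bitmap rep
    NSData *tiffData = [image TIFFRepresentation];
    NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:tiffData];
    NSData *jpegData = [rep representationUsingType:NSJPEGFileType properties:nil];

    // Write atomically so a half-written file never appears
    [jpegData writeToFile:@"/tmp/snapshot.jpg" atomically:YES];
}
```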


9 responses to “Capturing a photo programmatically with Objective-C using QTKit”

  1. Christopher Dawes says:

    Thanks so much for an extremely useful bit of code!

  2. Ken says:

    Thanks for this piece of code. It saved me a lot of stumbling and aggravation as I was trying to figure this out on my own.

    However, when I test this in a small application, I see that something is getting allocated in memory that is not being released. I’ve tried to figure out what is happening and where, but just can’t seem to get it.
    I’m not sure if it makes a difference when using garbage collection.
    Any ideas?

    Thanks again.

  3. Dat says:

Any ideas on how to measure the ambient light using your iSight camera?

  4. Shiela Dixon says:

Thank you Erik – very good of you to publish a complete class. Are you happy for people to use this in commercial projects (with credit of course)?

    Dat – I have tried what you have asked. By grabbing an image from the camera, reducing it to a single pixel and then measuring the brightness of that pixel. It works to a point, but the problem is that the camera adjusts its exposure for the light conditions. So you can measure very light and very dark but it’s very flat in between. Much better to access the computer’s ambient light sensor (Google IOServiceGetMatchingService and AppleLMUController). This works really well but I believe the calibration is different on different models of computer.

  5. Ashish says:

What if I want to do this for iOS?

  6. Sadiq says:

I have a problem running this code on OS X 10.9.
    I have compiled it with ARC flags, but the photoGrabbed: delegate method is not getting called, and the didOutputVideoFrame: delegate method is also not getting called.

    Thanks
    -Sadiq

  7. Hernando says:

Is it possible to use my iPhone as the camera instead of the Mac's default camera, the way QuickTime can mirror-view the phone?

  8. Le@rner says:

This is very useful code.

    I want to use it in my application.

    Please explain how I can use it in my NSWindowController class, and which type of control I should use to display the camera feed from which I capture an image.
