iPhone augmented reality screenshot capture with AVCaptureVideoPreviewLayer

I have a small augmented reality app in development, and I would like to know how to save a screenshot of what the user sees with the tap of a button or on a timer.

The app works by overlaying the live camera feed on top of another UIView. I can save screenshots by using the power button + home button; these are saved to the camera roll. However, Apple will not render the AVCaptureVideoPreviewLayer, even if I ask the window to save itself: it produces a transparent patch of canvas where the preview layer sits.

What is the proper way for an augmented reality app to save screenshots, including transparency and subviews?
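To be concrete, the kind of trigger I have in mind looks like this (snapshotButton is a placeholder outlet; saveScreenshot is whatever capture method ends up doing the work):

// Hypothetical wiring: snapshotButton is an assumed IBOutlet, and
// saveScreenshot is the capture method shown further down.
- (void)viewDidLoad
{
    [super viewDidLoad];
    // capture on a button tap...
    [snapshotButton addTarget:self
                       action:@selector(saveScreenshot)
             forControlEvents:UIControlEventTouchUpInside];
    // ...or automatically every five seconds
    [NSTimer scheduledTimerWithTimeInterval:5.0
                                     target:self
                                   selector:@selector(saveScreenshot)
                                   userInfo:nil
                                    repeats:YES];
}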
//displaying a live preview on one of the views
-(void)startCapture
{
    captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *videoCaptureDevice = nil;
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in videoDevices) {
        if (useFrontCamera) {
            if (device.position == AVCaptureDevicePositionFront) {
                //front-facing camera exists
                videoCaptureDevice = device;
                break;
            }
        } else {
            if (device.position == AVCaptureDevicePositionBack) {
                //rear-facing camera exists
                videoCaptureDevice = device;
                break;
            }
        }
    }
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if (videoInput) {
        [captureSession addInput:videoInput];
    } else {
        // Handle the failure.
    }
    // the output must exist before the session can be asked to accept it;
    // videoSettings is assumed to be an ivar configured for 32BGRA (see below)
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);
    if ([captureSession canAddOutput:captureOutput]) {
        [captureSession addOutput:captureOutput];
    } else {
        //handle failure
    }
    previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    UIView *aView = arOverlayView;
    previewLayer.frame = CGRectMake(0, 0, arOverlayView.frame.size.width, arOverlayView.frame.size.height); // Assume you want the preview layer to fill the view.
    [aView.layer addSublayer:previewLayer];
    [captureSession startRunning];
}
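One note on configuration: the still-image path further down special-cases 640x480 frames, so it may help to pin the session to that preset. A sketch, guarded for devices that don't support it:

// Optional sketch: request 640x480 so performImageCaptureFrom: can take its
// fast branch. Not part of the original method.
if ([captureSession canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    [captureSession setSessionPreset:AVCaptureSessionPreset640x480];
}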
//ask the entire window to draw itself in a graphics context. This call will not render
//the AVCaptureVideoPreviewLayer. It has to be replaced with a UIImageView or a GL view.
//see the following code for creating a dynamically updating UIImageView
- (void)saveScreenshot
{
    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(screenshot, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
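A side note: UIGraphicsBeginImageContext captures at a scale of 1.0. On Retina devices the scale-aware variant keeps the screenshot at full resolution; passing 0.0 uses the main screen's scale:

// Scale-aware replacement for the first line of saveScreenshot; 0.0 means
// "use the device's main screen scale", so Retina screenshots stay sharp.
UIGraphicsBeginImageContextWithOptions(appDelegate.window.bounds.size, NO, 0.0);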
//image saved to camera roll callback
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error
  contextInfo:(void *)contextInfo
{
    // Was there an error?
    if (error != nil)
    {
        // Show error message...
        NSLog(@"save failed");
    }
    else // No errors
    {
        NSLog(@"save successful");
        // Show message image successfully saved
    }
}
Here is the code for creating the image:

// you need to add the view controller as a delegate of the camera output in order to be notified of buffered data
-(void)activateCameraFeed
{
    //this is the code responsible for capturing the feed for still image processing
    dispatch_queue_t queue = dispatch_queue_create("com.AugmentedRealityGlamour.ImageCaptureQueue", NULL);
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);
    //......configure audio feed, add inputs and outputs
}
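The delegate below discards any buffer that is not 32BGRA, so videoSettings (assumed here to be an ivar) has to request that pixel format. A sketch of the dictionary:

// Request 32BGRA frames so the CGBitmapContextCreate calls in
// performImageCaptureFrom: can consume the buffer directly.
videoSettings = [NSDictionary dictionaryWithObject:
                    [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                            forKey:(id)kCVPixelBufferPixelFormatTypeKey];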
//buffer delegate callback
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (ignoreImageStream)
        return;
    [self performImageCaptureFrom:sampleBuffer];
}
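For this callback to fire, the receiving class has to declare conformance to the sample-buffer delegate protocol. A sketch with a placeholder class name:

// Hypothetical interface; ARViewController is a placeholder name.
@interface ARViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
// ... ivars such as captureSession, captureOutput, arOverlayView ...
@end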
Creating a UIImage:
- (void)performImageCaptureFrom:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer;
    if (CMSampleBufferGetNumSamples(sampleBuffer) != 1)
        return;
    if (!CMSampleBufferIsValid(sampleBuffer))
        return;
    if (!CMSampleBufferDataIsReady(sampleBuffer))
        return;
    imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (CVPixelBufferGetPixelFormatType(imageBuffer) != kCVPixelFormatType_32BGRA)
        return;
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGImageRef newImage = NULL;
    if (cameraDeviceSetting == CameraDeviceSetting640x480)
    {
        // the buffer is already 640x480, so wrap it directly
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGColorSpaceRelease(colorSpace);
        CGContextRelease(newContext);
    }
    else
    {
        // copy the buffer, then scale it down to 640x480; size the copy to the
        // actual buffer (a fixed 640*4*480 would overflow for larger frames)
        uint8_t *tempAddress = malloc(bytesPerRow * height);
        memcpy(tempAddress, baseAddress, bytesPerRow * height);
        baseAddress = tempAddress;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);
        newContext = CGBitmapContextCreate(baseAddress, 640, 480, 8, 640 * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGContextScaleCTM(newContext, (CGFloat)640 / (CGFloat)width, (CGFloat)480 / (CGFloat)height);
        // draw the full-size rect in user space; the CTM scale maps it onto the 640x480 bitmap
        CGContextDrawImage(newContext, CGRectMake(0, 0, width, height), newImage);
        CGImageRelease(newImage);
        newImage = CGBitmapContextCreateImage(newContext);
        CGColorSpaceRelease(colorSpace);
        CGContextRelease(newContext);
        free(tempAddress);
    }
    if (newImage != NULL)
    {
        //modified for iOS 5.0 with ARC
        tempImage = [[UIImage alloc] initWithCGImage:newImage scale:(CGFloat)1.0 orientation:cameraImageOrientation];
        CGImageRelease(newImage);
        //this call creates the illusion of a preview layer, while we are actively switching images created with this method
        [self performSelectorOnMainThread:@selector(newCameraImageNotification:) withObject:tempImage waitUntilDone:YES];
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
Updating the interface with a UIView that can actually be rendered into a graphics context:
- (void)newCameraImageNotification:(UIImage *)newImage
{
    if (newImage == nil)
        return;
    [arOverlayView setImage:newImage];
    //or do more advanced processing of the image
}
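With the AVCaptureVideoPreviewLayer replaced by this UIImageView, the renderInContext: approach from saveScreenshot finally works, because the camera frame is now an ordinary view that Core Animation can rasterize alongside every overlay subview. A sketch that snapshots just the AR hierarchy instead of the whole window (it assumes arOverlayView's superview contains both the camera image view and the overlays):

// Sketch: snapshot only the AR container. Assumes arOverlayView.superview
// holds the camera image view plus all overlay subviews.
UIView *arContainer = arOverlayView.superview;
UIGraphicsBeginImageContextWithOptions(arContainer.bounds.size, NO, 0.0);
[arContainer.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(composite, nil, NULL, NULL);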
So you want a snapshot of what is on screen, taken programmatically instead of by pressing home/power? I'll post the code anyway :) –
Please check: http://stackoverflow.com/questions/3397899/avcapturevideopreviewlayer-taking-a-snapshot/13576530#13576530. Hope it helps! –