iOS Audio and Video: A First Look at AVFoundation

AVFoundation is Apple's high-level framework on macOS (OS X) and iOS for processing time-based media data. Within Apple's media stack it sits above the lower-level Core Audio, Core Video, and Core Media frameworks and below higher-level interfaces such as AVKit.

Capture Session

  • AVCaptureSession: the capture session that coordinates the flow of data from inputs to outputs
  • AVCaptureDevice: a capture device, such as a camera or microphone
  • AVCaptureDeviceInput: a capture input that feeds a device's media into the session
  • AVCaptureOutput: the abstract base class for capture outputs, inheriting from NSObject; you always use one of its concrete subclasses
    • AVCaptureAudioDataOutput: a capture output that records audio and provides access to the audio sample buffers as they are recorded.
    • AVCaptureAudioPreviewOutput: a capture output associated with a Core Audio output device, used to play back audio being captured by the session.
    • AVCaptureDepthDataOutput: a capture output that records scene depth information on compatible camera devices.
    • AVCaptureMetadataOutput: a capture output for processing timed metadata produced by an AVCaptureSession.
    • AVCaptureStillImageOutput: a capture output for capturing still photos on macOS. The class was deprecated in iOS 10.0 and does not support newer camera capture features such as RAW image output and Live Photos; on iOS 10.0 and later, use AVCapturePhotoOutput instead.
    • AVCapturePhotoOutput: a capture output for still photos, Live Photos, and other photography workflows.
    • AVCaptureVideoDataOutput: a capture output that records video and provides access to the video frames for processing.
    • AVCaptureFileOutput: the abstract superclass for capture outputs that record captured data to a file.
    • AVCaptureMovieFileOutput: a subclass of AVCaptureFileOutput that records video and audio to a QuickTime movie file.
    • AVCaptureAudioFileOutput: a subclass of AVCaptureFileOutput that records audio and saves it to a file.
  • AVCaptureConnection: a connection between a capture input and a capture output within a session
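
Putting these classes together: an input wraps a device, the session routes the input's data to one or more outputs, and the session creates the AVCaptureConnection objects implicitly when outputs are added. A minimal sketch (error handling and camera-permission checks omitted; the session would normally be kept alive as a property):

AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];

// Batch configuration changes between beginConfiguration/commitConfiguration
[session beginConfiguration];
if (input && [session canAddInput:input]) {
    [session addInput:input];
}
if ([session canAddOutput:output]) {
    [session addOutput:output];
}
[session commitConfiguration];

// The implicitly created connection can be inspected through the output
AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeVideo];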

Delegates

The main delegate protocols in AVFoundation are the following:

  • AVCaptureAudioDataOutputSampleBufferDelegate & AVCaptureVideoDataOutputSampleBufferDelegate

These two delegates handle video and audio respectively: they give you access to live video frames and live audio samples as they are captured, a typical use case being live streaming. Both protocols declare the same delegate method, as the sketch below shows:

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
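
A minimal sketch of wiring up the video variant (assuming self.captureSession is already configured with a camera input and self adopts AVCaptureVideoDataOutputSampleBufferDelegate):

AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
// Drop late frames rather than queueing them; the usual choice for live processing
videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
// Deliver frames on a private serial queue to keep the main thread free
dispatch_queue_t frameQueue = dispatch_queue_create("camera.frame.queue", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:frameQueue];
if ([self.captureSession canAddOutput:videoDataOutput]) {
    [self.captureSession addOutput:videoDataOutput];
}

Inside the callback, CMSampleBufferGetImageBuffer(sampleBuffer) yields the CVPixelBuffer that holds the frame's pixel data.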
  • AVCapturePhotoCaptureDelegate

AVCapturePhotoCaptureDelegate was introduced in iOS 10.0. Before iOS 10.0, photo capture was typically implemented with AVCaptureStillImageOutput; in iOS 10.0 that class was deprecated in favor of AVCapturePhotoOutput. The delegate methods are:

// API available from iOS 10.0, deprecated in iOS 11.0
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhotoSampleBuffer:(nullable CMSampleBufferRef)photoSampleBuffer previewPhotoSampleBuffer:(nullable CMSampleBufferRef)previewPhotoSampleBuffer resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings bracketSettings:(nullable AVCaptureBracketedStillImageSettings *)bracketSettings error:(nullable NSError *)error API_DEPRECATED_WITH_REPLACEMENT("-captureOutput:didFinishProcessingPhoto:error:", ios(10.0, 11.0)) API_UNAVAILABLE(macos, macCatalyst);
// API for iOS 11.0 and later
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhoto:(AVCapturePhoto *)photo error:(nullable NSError *)error API_AVAILABLE(ios(11.0));
  • AVCaptureFileOutputRecordingDelegate

AVCaptureFileOutputRecordingDelegate is the protocol adopted by the delegate of an AVCaptureFileOutput, used to respond to events that occur while recording a single file. An AVCaptureFileOutput object's delegate must implement this protocol's required method.

Methods called as recording finishes (note that the willFinish… variant is unavailable on iOS):

- (void)captureOutput:(AVCaptureFileOutput *)output willFinishRecordingToOutputFileAtURL:(NSURL *)fileURL fromConnections:(NSArray<AVCaptureConnection *> *)connections error:(nullable NSError *)error API_UNAVAILABLE(ios, watchos, tvos);
- (void)captureOutput:(AVCaptureFileOutput *)output didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray<AVCaptureConnection *> *)connections error:(nullable NSError *)error;

Method called when recording starts:

- (void)captureOutput:(AVCaptureFileOutput *)output didStartRecordingToOutputFileAtURL:(NSURL *)fileURL fromConnections:(NSArray<AVCaptureConnection *> *)connections;
  • AVCaptureMetadataOutputObjectsDelegate

AVCaptureMetadataOutputObjectsDelegate defines the method for receiving the metadata produced by an AVCaptureMetadataOutput. Face detection and QR-code scanning both go through this delegate; see the sketch after the method signature below.

- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection;
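
The face-detection example later in this article shows this delegate in full; for QR codes only the metadata types differ. A minimal sketch (assuming self.captureSession already has a camera input and self adopts AVCaptureMetadataOutputObjectsDelegate):

- (void)setupQRCodeScanning {
    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([self.captureSession canAddOutput:metadataOutput]) {
        [self.captureSession addOutput:metadataOutput];
        // metadataObjectTypes may only be set after the output joins a session
        metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];
        [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    }
}

- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    // Machine-readable codes carry their decoded payload in stringValue
    for (AVMetadataMachineReadableCodeObject *code in metadataObjects) {
        NSLog(@"QR payload: %@", code.stringValue);
    }
}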
  • AVCaptureDepthDataOutputDelegate

AVCaptureDepthDataOutputDelegate, introduced in iOS 11.0, receives the scene depth information recorded by an AVCaptureDepthDataOutput on supported camera devices:

- (void)depthDataOutput:(AVCaptureDepthDataOutput *)output didOutputDepthData:(AVDepthData *)depthData timestamp:(CMTime)timestamp connection:(AVCaptureConnection *)connection;
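
A minimal setup sketch (iOS 11+; it assumes the session's input comes from a depth-capable device such as the TrueDepth or dual camera, that the device format is configured for depth delivery, and that self adopts AVCaptureDepthDataOutputDelegate):

AVCaptureDepthDataOutput *depthDataOutput = [[AVCaptureDepthDataOutput alloc] init];
if ([self.captureSession canAddOutput:depthDataOutput]) {
    [self.captureSession addOutput:depthDataOutput];
    // Smooth frame-to-frame noise in the delivered depth maps
    depthDataOutput.filteringEnabled = YES;
    [depthDataOutput setDelegate:self
                   callbackQueue:dispatch_queue_create("depth.queue", DISPATCH_QUEUE_SERIAL)];
}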
  • AVCaptureDataOutputSynchronizerDelegate

In iOS 11, Apple added a new synchronization object named AVCaptureDataOutputSynchronizer. For a given presentation time, it delivers all available data from multiple outputs in a single unified callback, packaged in a collection object called AVCaptureSynchronizedDataCollection.

You designate one output as the master output: the one whose timing matters most and that all the other outputs should be synchronized against. The synchronizer then waits as long as necessary so that all data for a given presentation time arrives together in the single unified callback; if a particular output has no data for that time, the collection is simply delivered without it.

- (void)dataOutputSynchronizer:(AVCaptureDataOutputSynchronizer *)synchronizer didOutputSynchronizedDataCollection:(AVCaptureSynchronizedDataCollection *)synchronizedDataCollection;
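
A minimal sketch (iOS 11+; it assumes videoDataOutput and depthDataOutput have already been added to the session and self adopts AVCaptureDataOutputSynchronizerDelegate):

// The first output in the array acts as the master output that the
// others are synchronized against
AVCaptureDataOutputSynchronizer *synchronizer =
    [[AVCaptureDataOutputSynchronizer alloc]
        initWithDataOutputs:@[videoDataOutput, depthDataOutput]];
[synchronizer setDelegate:self
                    queue:dispatch_queue_create("sync.queue", DISPATCH_QUEUE_SERIAL)];

In the callback, each output's data is pulled out of the collection, e.g. [synchronizedDataCollection synchronizedDataForCaptureOutput:videoDataOutput], which returns an AVCaptureSynchronizedSampleBufferData for a video data output.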

Code Examples

Taking Photos

AVCapturePhotoOutput

#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface ViewController ()<AVCapturePhotoCaptureDelegate>
@property (nonatomic, strong) AVCaptureSession *captureSession;
@property (nonatomic, strong) AVCaptureDeviceInput *cameraInput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *videoPreviewLayer;
@property (nonatomic, strong) AVCapturePhotoOutput *photoOutput;
@property (nonatomic, strong) AVCapturePhotoSettings *photoSettings;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    // Initial setup
    [self setupPhoto];
    // Start the capture session
    [self.captureSession startRunning];
    // Add a tap gesture: a single tap takes a photo
    UITapGestureRecognizer *tapOne = [[UITapGestureRecognizer alloc]initWithTarget:self action:@selector(tapOneAction:)];
    // Single tap
    tapOne.numberOfTapsRequired = 1;
    // Single finger
    tapOne.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:tapOne];
    
}

- (void)tapOneAction:(UITapGestureRecognizer *)gestureRecognizer{
    // A fresh AVCapturePhotoSettings instance must be passed in for every capture
    [self.photoOutput capturePhotoWithSettings:[AVCapturePhotoSettings photoSettingsFromPhotoSettings:self.photoSettings] delegate:self];
}

- (void)setupPhoto{
    // Wrap the default camera device in a device input
    self.cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
    
    // Create the photo output
    self.photoOutput = [[AVCapturePhotoOutput alloc]init];
    // Configure the output image format as JPEG
    self.photoSettings = [AVCapturePhotoSettings photoSettingsWithFormat:@{AVVideoCodecKey:AVVideoCodecTypeJPEG}];
    [self.photoOutput setPhotoSettingsForSceneMonitoring:self.photoSettings];
    
    [self.captureSession beginConfiguration];
    // Add the input if the session accepts it
    if ([self.captureSession canAddInput:self.cameraInput]) {
        [self.captureSession addInput:self.cameraInput];
    }
    // Add the output if the session accepts it
    if ([self.captureSession canAddOutput:self.photoOutput]) {
        [self.captureSession addOutput:self.photoOutput];
    }
    [self.captureSession commitConfiguration];
    
    // Preview layer so the camera feed is visible on screen
    self.videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:self.captureSession];
    self.videoPreviewLayer.frame = self.view.bounds;
    self.videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.videoPreviewLayer];
    
}
#pragma mark - Lazy initialization
- (AVCaptureSession *)captureSession{
    if (!_captureSession) {
        _captureSession = [[AVCaptureSession alloc]init];
    }
    return _captureSession;
}
#pragma mark - AVCapturePhotoCaptureDelegate
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhoto:(AVCapturePhoto *)photo error:(nullable NSError *)error{
    NSData *data = photo.fileDataRepresentation;
    // The captured image; display or save it here
    UIImage *image = [UIImage imageWithData:data];
}
@end

AVCaptureStillImageOutput

#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface ViewController ()

@property (nonatomic, strong) AVCaptureSession *captureSession;
@property (nonatomic, strong) AVCaptureDeviceInput *cameraInput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *videoPreviewLayer;
@property (strong, nonatomic) AVCaptureStillImageOutput *imageOutput;

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    // Initial setup
    [self setupPhoto];
    // Start the capture session
    [self.captureSession startRunning];
    // Add a tap gesture: a single tap takes a photo
    UITapGestureRecognizer *tapOne = [[UITapGestureRecognizer alloc]initWithTarget:self action:@selector(tapOneAction:)];
    // Single tap
    tapOne.numberOfTapsRequired = 1;
    // Single finger
    tapOne.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:tapOne];
    
}

- (void)tapOneAction:(UITapGestureRecognizer *)gestureRecognizer{
    [self captureStillImage];
}

- (void)captureStillImage {
    // Get the connection that carries video into the image output
    AVCaptureConnection *connection = [self.imageOutput connectionWithMediaType:AVMediaTypeVideo];
    
    // Completion handler: converts the returned sample buffer into JPEG NSData
    id handler = ^(CMSampleBufferRef sampleBuffer, NSError *error) {
        if (sampleBuffer != NULL) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
            // The captured image; display or save it here
            UIImage *image = [[UIImage alloc]initWithData:imageData];
        } else {
            NSLog(@"NULL sampleBuffer: %@", [error localizedDescription]);
        }
    };
    
    // Capture the still image asynchronously
    [self.imageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:handler];
}

- (void)setupPhoto{
    // Wrap the default camera device in a device input
    self.cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
    
    // AVCaptureStillImageOutput captures still images from the camera
    self.imageOutput = [[AVCaptureStillImageOutput alloc]init];
    // Output settings dictionary: capture JPEG images
    self.imageOutput.outputSettings = @{AVVideoCodecKey:AVVideoCodecJPEG};
    
    [self.captureSession beginConfiguration];
    // Add the input if the session accepts it
    if ([self.captureSession canAddInput:self.cameraInput]) {
        [self.captureSession addInput:self.cameraInput];
    }
    // Add the output if the session accepts it
    if ([self.captureSession canAddOutput:self.imageOutput]) {
        [self.captureSession addOutput:self.imageOutput];
    }
    
    [self.captureSession commitConfiguration];
    
    // Preview layer so the camera feed is visible on screen
    self.videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:self.captureSession];
    self.videoPreviewLayer.frame = self.view.bounds;
    self.videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.videoPreviewLayer];
    
}

#pragma mark - Lazy initialization
- (AVCaptureSession *)captureSession{
    if (!_captureSession) {
        _captureSession = [[AVCaptureSession alloc]init];
    }
    return _captureSession;
}
@end

Recording Video

AVCaptureMovieFileOutput

#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
#import <AVKit/AVKit.h>

@interface ViewController ()<AVCaptureFileOutputRecordingDelegate>

@property (nonatomic, strong) AVCaptureSession *captureSession;
@property (nonatomic, strong) AVCaptureDeviceInput *cameraInput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *videoPreviewLayer;

@property (nonatomic, strong) AVCaptureDeviceInput *videoDataInput;
@property (nonatomic, strong) AVCaptureDeviceInput *frontCamera;
@property (nonatomic, strong) AVCaptureDeviceInput *backCamera;

@property (nonatomic, strong) AVCaptureInput *audioDeviceInput;

@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;
@property (nonatomic, strong) AVCaptureConnection *videoConnection;

@property (strong, nonatomic) AVCaptureMovieFileOutput *movieOutput;

@property (nonatomic, strong) NSURL *outputURL;

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    
    [self setupVideo];
    [self setupAudio];
    [self.captureSession startRunning];
    
    // Add a tap gesture: tap to start/stop recording
    UITapGestureRecognizer *tapOne = [[UITapGestureRecognizer alloc]initWithTarget:self action:@selector(tapOneAction:)];
    // Single tap
    tapOne.numberOfTapsRequired = 1;
    // Single finger
    tapOne.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:tapOne];
}

#pragma mark - Lazy initialization
- (NSURL *)outputURL {
    if (!_outputURL) {
        NSFileManager *fileManager = [NSFileManager defaultManager];

        // Build a mkdtemp(3) template inside the temporary directory
        NSString *mkdTemplate =
            [NSTemporaryDirectory() stringByAppendingPathComponent:@"kamera.XXXXXX"];

        const char *templateCString = [mkdTemplate fileSystemRepresentation];
        char *buffer = (char *)malloc(strlen(templateCString) + 1);
        strcpy(buffer, templateCString);

        NSString *directoryPath = nil;

        // mkdtemp replaces the XXXXXX suffix and creates a unique directory
        char *result = mkdtemp(buffer);
        if (result) {
            directoryPath = [fileManager stringWithFileSystemRepresentation:buffer
                                                                     length:strlen(result)];
        }
        free(buffer);

        if (directoryPath) {
            NSString *filePath =
                [directoryPath stringByAppendingPathComponent:@"kamera_movie.mov"];
            _outputURL = [NSURL fileURLWithPath:filePath];
        }
    }
    return _outputURL;
}

- (AVCaptureSession *)captureSession{
    if (!_captureSession) {
        _captureSession = [[AVCaptureSession alloc]init];
    }
    return _captureSession;
}

// Tap to toggle recording
- (void)tapOneAction:(UITapGestureRecognizer *)gestureRecognizer{
    if (!self.movieOutput.isRecording) {
        // Remove any leftover file at the output URL, then start recording
        [[NSFileManager defaultManager] removeItemAtURL:self.outputURL error:nil];
        [self.movieOutput startRecordingToOutputFileURL:self.outputURL recordingDelegate:self];
    } else {
        [self.movieOutput stopRecording];
    }
}

- (void)captureOutput:(AVCaptureFileOutput *)output didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray<AVCaptureConnection *> *)connections error:(nullable NSError *)error{
    NSLog(@"Recording finished");
    NSURL *saveUrl = outputFileURL;

    // Load the recorded file as an asset via its URL
    AVURLAsset *avAsset = [[AVURLAsset alloc] initWithURL:saveUrl options:nil];
    // AVAssetExportSession re-encodes the asset; check which presets it supports
    NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:avAsset];

    // Re-export the recording as MP4 if the preset we want is available
    if ([compatiblePresets containsObject:AVAssetExportPresetHighestQuality]) {
        // Create an export session from the asset with the chosen preset
        AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:avAsset presetName:AVAssetExportPresetHighestQuality];
        // Build a timestamped output path in the Documents directory
        NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
        [formatter setDateFormat:@"yyyy-MM-dd-HH-mm-ss"];
        NSDate *date = [[NSDate alloc] init];
        NSString *outPutPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, true) lastObject] stringByAppendingPathComponent:[NSString stringWithFormat:@"output-%@.mp4",[formatter stringFromDate:date]]];
        exportSession.outputURL = [NSURL fileURLWithPath:outPutPath];

        // Optimize the file layout for network streaming
        exportSession.shouldOptimizeForNetworkUse = true;

        // Write an MP4 container
        exportSession.outputFileType = AVFileTypeMPEG4;

        // Export asynchronously; the block runs when the export finishes
        [exportSession exportAsynchronouslyWithCompletionHandler:^{
            if ([exportSession status] == AVAssetExportSessionStatusCompleted) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    // Play the exported file with AVPlayerViewController
                    AVPlayer *player = [AVPlayer playerWithURL:[NSURL fileURLWithPath:outPutPath]];
                    AVPlayerViewController *playerViewController = [AVPlayerViewController new];
                    playerViewController.player = player;
                    [self presentViewController:playerViewController animated:YES completion:nil];
                    [playerViewController.player play];
                });
            }
        }];
    }
}

#pragma mark - Video setup
- (void)setupVideo{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == AVCaptureDevicePositionBack) {
            self.backCamera = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        }else{
            self.frontCamera = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        }
    }
    
    self.videoDataInput = self.backCamera;
    self.movieOutput = [[AVCaptureMovieFileOutput alloc]init];
    [self.captureSession beginConfiguration];
    if ([self.captureSession canAddInput:self.videoDataInput]) {
        [self.captureSession addInput:self.videoDataInput];
    }
    if ([self.captureSession canAddOutput:self.movieOutput]) {
        [self.captureSession addOutput:self.movieOutput];
    }
    if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset1920x1080]) {
        self.captureSession.sessionPreset = AVCaptureSessionPreset1920x1080;
    }
    [self.captureSession commitConfiguration];
    
    self.videoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.videoPreviewLayer.frame = self.view.bounds;
    self.videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.videoPreviewLayer];
    
}

#pragma mark - Audio setup
- (void)setupAudio{
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    
    self.audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
    
    [self.captureSession beginConfiguration];
    if ([self.captureSession canAddInput:self.audioDeviceInput]) {
        [self.captureSession addInput:self.audioDeviceInput];
    }
    [self.captureSession commitConfiguration];
}
@end

Face Detection

#import "FaceViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface FaceViewController ()<AVCaptureMetadataOutputObjectsDelegate>

@property (nonatomic, strong) AVCaptureSession *captureSession;
@property (nonatomic, strong) AVCaptureDeviceInput *cameraInput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *videoPreviewLayer;

@property (nonatomic, strong) AVCaptureDeviceInput *videoDataInput;
@property (nonatomic, strong) AVCaptureDeviceInput *frontCamera;
@property (nonatomic, strong) AVCaptureDeviceInput *backCamera;

@property(nonatomic,strong)AVCaptureMetadataOutput  *metadataOutput;

@property(nonatomic,strong)CALayer *overlayLayer;

@property(strong,nonatomic)NSMutableDictionary *faceLayers;

@end

@implementation FaceViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    [self setupVideo];
    [self.captureSession startRunning];
}

#pragma mark - Video setup
- (void)setupVideo{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == AVCaptureDevicePositionBack) {
            self.backCamera = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        }else{
            self.frontCamera = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        }
    }
    
    self.videoDataInput = self.frontCamera;
    
    self.metadataOutput = [[AVCaptureMetadataOutput alloc]init];
    
    
    [self.captureSession beginConfiguration];
    if ([self.captureSession canAddInput:self.videoDataInput]) {
        [self.captureSession addInput:self.videoDataInput];
    }
    if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset1920x1080]) {
        self.captureSession.sessionPreset = AVCaptureSessionPreset1920x1080;
    }
    if ([self.captureSession canAddOutput:self.metadataOutput]){
        [self.captureSession addOutput:self.metadataOutput];
        // We are only interested in face metadata
        NSArray *metadatObjectTypes = @[AVMetadataObjectTypeFace];
        
        // metadataObjectTypes specifies which metadata types the output
        // emits. Restricting the set to the types you actually care about is
        // an optimization: the output supports many metadata types, and this
        // limits processing to faces only.
        self.metadataOutput.metadataObjectTypes = metadatObjectTypes;
        
        // Use the main queue: face detection is hardware accelerated, and
        // the delegate callback updates layers, so delivering on the main
        // queue keeps the UI work simple.
        dispatch_queue_t mainQueue = dispatch_get_main_queue();
        
        // Setting the metadata output's delegate delivers the detected
        // metadata objects frame by frame.
        [self.metadataOutput setMetadataObjectsDelegate:self queue:mainQueue];
    }
    [self.captureSession commitConfiguration];
    
    self.videoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.videoPreviewLayer.frame = self.view.bounds;
    self.videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.videoPreviewLayer];
    
    self.faceLayers = [NSMutableDictionary dictionary];
    self.overlayLayer = [CALayer layer];
    
    // The overlay covers the whole view
    self.overlayLayer.frame = self.view.bounds;
    
    // Perspective transform applied to all sublayers (Core Animation)
    self.overlayLayer.sublayerTransform = CATransform3DMakePerspective(1000);
    
    // Add the overlay on top of the preview layer
    [self.videoPreviewLayer addSublayer:self.overlayLayer];
}

#pragma mark - Lazy initialization
- (AVCaptureSession *)captureSession{
    if (!_captureSession) {
        _captureSession = [[AVCaptureSession alloc]init];
    }
    return _captureSession;
}

#pragma mark - AVCaptureMetadataOutputObjectsDelegate
// Called with the metadata objects detected in each frame
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    // Log each detected face
    for (AVMetadataFaceObject *face in metadataObjects) {
        NSLog(@"Face detected with ID:%li",(long)face.faceID);
        NSLog(@"Face bounds:%@",NSStringFromCGRect(face.bounds));
    }
    
    [self didDetectFaces:metadataObjects];
}

#pragma mark - Face handling
// Visualize the detected faces
- (void)didDetectFaces:(NSArray *)faces {

    // Convert the face objects into view-space coordinates
    NSArray *transformedFaces = [self transformedFacesFromFaces:faces];
    
    // Collect the current faceLayers keys to determine which faces left the
    // frame, so their layers can be removed (up to 10 faces are tracked
    // simultaneously).
    NSMutableArray *lostFaces = [self.faceLayers.allKeys mutableCopy];
    
    // Iterate over each transformed face object
    for (AVMetadataFaceObject *face in transformedFaces) {
        
        // faceID uniquely identifies a detected face across frames
        NSNumber *faceID = @(face.faceID);
        
        // This face is still visible, so it is not lost
        [lostFaces removeObject:faceID];
        
        // Look up the layer for this faceID
        CALayer *layer = self.faceLayers[faceID];
        
        // If there is no layer for this faceID yet, create one
        if (!layer) {
            
            // makeFaceLayer builds a new face-highlight layer
            layer = [self makeFaceLayer];
            
            // Add the new layer to the overlay
            [self.overlayLayer addSublayer:layer];
            
            // Remember the layer in the dictionary
            self.faceLayers[faceID] = layer;
        }
        
        // Reset to the identity transform to clear any previously applied
        // rotation
        layer.transform = CATransform3DIdentity;
        
        // The layer's frame matches the face's bounds
        layer.frame = face.bounds;
        
        // If the face has a valid roll angle (head tilt around the z-axis)...
        if (face.hasRollAngle) {
            
            // ...get the corresponding CATransform3D value
            CATransform3D t = [self transformForRollAngle:face.rollAngle];
            
            // and concatenate it with the current transform
            layer.transform = CATransform3DConcat(layer.transform, t);
        }
        
        // If the face has a valid yaw angle (head turn around the y-axis)...
        if (face.hasYawAngle) {
            
            // ...apply the corresponding rotation as well
            CATransform3D t = [self transformForYawAngle:face.yawAngle];
            layer.transform = CATransform3DConcat(layer.transform, t);
        }
    }
    
    // Remove the layers of faces that are no longer visible
    for (NSNumber *faceID in lostFaces) {
        CALayer *layer = self.faceLayers[faceID];
        [layer removeFromSuperlayer];
        [self.faceLayers removeObjectForKey:faceID];
    }
}


// Convert face objects from device coordinate space to view coordinate space
- (NSArray *)transformedFacesFromFaces:(NSArray *)faces {

    NSMutableArray *transformedFaces = [NSMutableArray array];
    
    for (AVMetadataObject *face in faces) {
        
        // The camera's coordinate system runs from (0,0) to (1,1) and differs
        // from UIKit's, so each face must be converted. The conversion has to
        // account for the layer, mirroring, video gravity, and orientation;
        // before iOS 6.0 you had to compute this yourself, but since iOS 6.0
        // the preview layer provides a method for it.
        AVMetadataObject *transformedFace = [self.videoPreviewLayer transformedMetadataObjectForMetadataObject:face];
        
        [transformedFaces addObject:transformedFace];
    }
    return transformedFaces;
}

- (CALayer *)makeFaceLayer {

    // A bare layer with a red, 5-point border to outline the face
    CALayer *layer = [CALayer layer];
    layer.borderWidth = 5.0f;
    layer.borderColor = [UIColor redColor].CGColor;
    
//    layer.contents = (id)[UIImage imageNamed:@"551.png"].CGImage;
    
    return layer;
}



// Convert a roll angle in degrees into a CATransform3D
- (CATransform3D)transformForRollAngle:(CGFloat)rollAngleInDegrees {

    // The face object reports the roll angle in degrees; Core Animation
    // needs radians
    CGFloat rollAngleInRadians = THDegreesToRadians(rollAngleInDegrees);

    // Rotate around the z-axis (x,y,z = 0,0,1) by the roll angle
    return CATransform3DMakeRotation(rollAngleInRadians, 0.0f, 0.0f, 1.0f);
}


// Convert a yaw angle in degrees into a CATransform3D
- (CATransform3D)transformForYawAngle:(CGFloat)yawAngleInDegrees {

    // Convert degrees to radians
    CGFloat yawAngleInRadians = THDegreesToRadians(yawAngleInDegrees);
    
    // Rotate around the y-axis (x,y,z = 0,-1,0). Because the overlay applies
    // a sublayerTransform, the layer is projected along the z-axis, giving a
    // 3D effect as a face turns from side to side.
    CATransform3D yawTransform = CATransform3DMakeRotation(yawAngleInRadians, 0.0f, -1.0f, 0.0f);
    
    // The app's UI is fixed to portrait, so concatenate a rotation that
    // compensates for the device orientation; otherwise the yaw effect would
    // be wrong when the device is rotated.
    return CATransform3DConcat(yawTransform, [self orientationTransform]);
}

- (CATransform3D)orientationTransform {

    CGFloat angle = 0.0;
    // Map the current device orientation to a compensating rotation angle
    switch ([UIDevice currentDevice].orientation) {
            
        // Upside down
        case UIDeviceOrientationPortraitUpsideDown:
            angle = M_PI;
            break;
            
        // Landscape right
        case UIDeviceOrientationLandscapeRight:
            angle = -M_PI / 2.0f;
            break;
        
        // Landscape left
        case UIDeviceOrientationLandscapeLeft:
            angle = M_PI / 2.0f;
            break;

        // Anything else: no rotation
        default:
            angle = 0.0f;
            break;
    }
    
    return CATransform3DMakeRotation(angle, 0.0f, 0.0f, 1.0f);
}

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunused"

// Convert degrees to radians
static CGFloat THDegreesToRadians(CGFloat degrees) {
    return degrees * M_PI / 180;
}

// Build a perspective transform for the overlay's sublayerTransform.
// CATransform3DIdentity is the identity matrix: no scale, rotation, skew,
// or perspective. Perspective (near objects appear larger, far objects
// smaller) is controlled by m34 = -1.0/D, where D is the distance from the
// eye to the projection plane; the smaller D is, the stronger the effect.
static CATransform3D CATransform3DMakePerspective(CGFloat eyePosition) {
    CATransform3D transform = CATransform3DIdentity;
    transform.m34 = -1.0 / eyePosition;
    return transform;
}

#pragma clang diagnostic pop
@end
