
iOS QR Code Scanning: Transparent Center Area and Scan Area Configuration

1. QR Code Scanning

Before starting QR code scanning, first import the header AVFoundation/AVFoundation.h and conform to the AVCaptureMetadataOutputObjectsDelegate protocol. Then declare the required device, session, input, output, video preview layer, background view, and rect-of-interest properties, as shown below:
           
@property(nonatomic, strong) AVCaptureDevice *device;
@property(nonatomic, strong) AVCaptureSession *session;
@property(nonatomic, strong) AVCaptureDeviceInput *input;
@property(nonatomic, strong) AVCaptureMetadataOutput *output;
@property(nonatomic, strong) AVCaptureVideoPreviewLayer *videoLayer;
@property(nonatomic, strong) ContextView *backgroundView;
@property(nonatomic, assign) CGRect interestRect;
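
For reference, the import and the protocol conformance typically sit at the top of the scanning view controller's .m file. A minimal sketch follows; the class name ScanViewController is only a placeholder, not something defined in this article:

#import <AVFoundation/AVFoundation.h>
#import "ContextView.h"

// Placeholder class name; substitute your own scanning view controller.
@interface ScanViewController () <AVCaptureMetadataOutputObjectsDelegate>
// ... the properties listed above go here ...
@end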
           

Next, configure the device, input, output, session, and preview layer, and start scanning:

- (void)opentAVCaptureSession {
    NSError *inputError = nil;
    // Camera device and its input
    self.device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    self.input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:&inputError];
    if (inputError) {
        NSLog(@"AVCaptureDeviceInput Error:%@", inputError.localizedDescription);
        return;
    }
    // Metadata output, with delegate callbacks delivered on the main queue
    self.output = [[AVCaptureMetadataOutput alloc] init];
    [self.output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    // Wire up the session
    self.session = [[AVCaptureSession alloc] init];
    [self.session setSessionPreset:AVCaptureSessionPresetHigh];
    if ([self.session canAddInput:self.input]) {
        [self.session addInput:self.input];
    }
    if ([self.session canAddOutput:self.output]) {
        [self.session addOutput:self.output];
    }
    // The metadata types must be set after the output has been added to the session
    self.output.metadataObjectTypes = @[AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeCode128Code];
    // Preview layer that shows the camera feed
    self.videoLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
    self.videoLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    self.videoLayer.frame = self.view.bounds;
    [self.view.layer addSublayer:self.videoLayer];
    //[self setInterestingRect];
    //self.backgroundView = [[ContextView alloc] initWithFrame:self.view.frame];
    //[self.view addSubview:self.backgroundView];
    //[self addLayout];
    [self.session startRunning];
}
           

When a code is scanned successfully, the delegate method - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection is called. In it, stop the scanning session and read the decoded string:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    [self.session stopRunning];
    //[self.backgroundView invalidateTimer];
    if (metadataObjects.count == 0) {
        return;
    }
    AVMetadataMachineReadableCodeObject *metadataObject = [metadataObjects objectAtIndex:0];
    NSLog(@"%@", metadataObject.stringValue);
}
           

At this point basic QR code scanning works. To run it on a real device, you also need to add a camera usage description (the NSCameraUsageDescription key) to the app's Info.plist.
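
Besides the Info.plist entry, it can be useful to check camera authorization at runtime before starting the session. The sketch below is an optional addition, not part of the original flow; it simply calls opentAVCaptureSession once access has been granted:

// Ask for camera permission before starting the capture session.
AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
if (status == AVAuthorizationStatusNotDetermined) {
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        if (granted) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [self opentAVCaptureSession];
            });
        }
    }];
} else if (status == AVAuthorizationStatusAuthorized) {
    [self opentAVCaptureSession];
}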


2. The Transparent Center Area and the Scan Line

Create a custom ContextView that subclasses UIView. In its initializer, make the view transparent; in drawRect:, draw the semi-transparent mask with a clear center, the four corner markers of that clear area, and the moving scan line:

- (instancetype)initWithFrame:(CGRect)frame{
    self = [super initWithFrame:frame];
    if (self) {
        self.backgroundColor = [UIColor clearColor];
        self.opaque = NO; // transparent, so the camera preview underneath shows through
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    CGRect mainRect = [UIScreen mainScreen].bounds;
    [self addClearRect:mainRect];
    [self addFourBorder:mainRect];
    [self addMovingLine:mainRect];
}
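
The scan-line methods further down refer to lineView, timer, lineY, and a LineMovingDuration constant. The original does not show their declarations; one way to declare them inside ContextView is the following sketch (the interval value is an assumption):

#define LineMovingDuration 0.02 // scan-line update interval; the original value is not shown

@interface ContextView () {
    UIImageView *lineView; // the moving scan line
    NSTimer *timer;        // drives the line movement
    CGFloat lineY;         // current y position of the line
}
@end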
           

Draw the semi-transparent mask with a clear center:

- (void)addClearRect:(CGRect)mainRect {
    CGFloat mainRectWidth = mainRect.size.width;
    CGFloat mainRectHeight = mainRect.size.height;
    // Fill the whole view with a translucent black mask
    [[UIColor colorWithWhite:0 alpha:0.5] setFill];
    UIRectFill(mainRect);
    // Scan square: 3/4 of the screen width, horizontally centered, with its bottom
    // edge on the vertical center of the screen (reconstructed values; tune as needed)
    CGRect clearRect = CGRectMake(mainRectWidth/8, mainRectHeight/2 - 3*mainRectWidth/4, 3*mainRectWidth/4, 3*mainRectWidth/4);
    CGRect clearIntersection = CGRectIntersection(clearRect, mainRect);
    // Punch the transparent hole in the middle
    [[UIColor clearColor] setFill];
    UIRectFill(clearIntersection);
}
           

Draw the four corner markers:

- (void)addFourBorder:(CGRect)mainRect {
    CGFloat mainRectWidth = mainRect.size.width;
    CGFloat mainRectHeight = mainRect.size.height;
    // Corner geometry mirrors the clear rect in addClearRect:.
    // The line width and 20 pt marker length are reconstructed values; adjust as needed.
    CGFloat left = mainRectWidth/8;
    CGFloat right = 7*mainRectWidth/8;
    CGFloat top = mainRectHeight/2 - 3*mainRectWidth/4;
    CGFloat bottom = mainRectHeight/2;
    CGFloat markerLength = 20;
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(ctx, 2);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextSetLineCap(ctx, kCGLineCapSquare);
    // Each corner is two short segments (4 points = 2 segments per corner)
    CGPoint upLeftPoints[] = {CGPointMake(left, top), CGPointMake(left + markerLength, top), CGPointMake(left, top), CGPointMake(left, top + markerLength)};
    CGPoint upRightPoints[] = {CGPointMake(right - markerLength, top), CGPointMake(right, top), CGPointMake(right, top), CGPointMake(right, top + markerLength)};
    CGPoint belowLeftPoints[] = {CGPointMake(left, bottom), CGPointMake(left, bottom - markerLength), CGPointMake(left, bottom), CGPointMake(left + markerLength, bottom)};
    CGPoint belowRightPoints[] = {CGPointMake(right, bottom), CGPointMake(right - markerLength, bottom), CGPointMake(right, bottom), CGPointMake(right, bottom - markerLength)};
    CGContextStrokeLineSegments(ctx, upLeftPoints, 4);
    CGContextStrokeLineSegments(ctx, upRightPoints, 4);
    CGContextStrokeLineSegments(ctx, belowLeftPoints, 4);
    CGContextStrokeLineSegments(ctx, belowRightPoints, 4);
}
           

Add the moving scan line:

- (void)addMovingLine:(CGRect)mainRect {
    if (!lineView) {
        [self initLineView:mainRect];
    }
    // Guard against creating a second timer if drawRect: runs again
    if (!timer) {
        timer = [NSTimer scheduledTimerWithTimeInterval:LineMovingDuration target:self selector:@selector(moveLine) userInfo:nil repeats:YES];
    }
}

- (void)initLineView:(CGRect)mainRect {
    CGFloat mainRectWidth = mainRect.size.width;
    CGFloat mainRectHeight = mainRect.size.height;
    // The line starts at the top of the scan square and is 2 pt tall (reconstructed values)
    lineView = [[UIImageView alloc] initWithFrame:CGRectMake(mainRectWidth/8, mainRectHeight/2 - 3*mainRectWidth/4, 3*mainRectWidth/4, 2)];
    lineView.image = [UIImage imageNamed:@"line"];
    [self addSubview:lineView];
    lineY = lineView.frame.origin.y;
}

- (void)moveLine {
    [UIView animateWithDuration:LineMovingDuration animations:^{
        CGRect rect = lineView.frame;
        rect.origin.y = lineY;
        lineView.frame = rect;
    } completion:^(BOOL finished) {
        CGRect mainRect = [UIScreen mainScreen].bounds;
        CGFloat mainRectHeight = mainRect.size.height;
        CGFloat mainRectWidth = mainRect.size.width;
        // Bottom of the scan square; jump back to the top once the line reaches it
        CGFloat maxLineY = mainRectHeight/2;
        if (lineY >= maxLineY) {
            lineY = mainRectHeight/2 - 3*mainRectWidth/4;
        } else {
            lineY++;
        }
    }];
}
           

Finally, invalidate the timer in dealloc, and also expose an invalidateTimer method so it can be stopped from outside. Note that a repeating NSTimer retains its target, so dealloc will not actually run while the timer is still scheduled; the view controller should therefore call invalidateTimer (for example, right after a successful scan) rather than rely on dealloc alone.

- (void)invalidateTimer {
    if (timer) {
        [timer invalidate];
        timer = nil;
    }
}

- (void)dealloc {
    if (timer) {
        [timer invalidate];
        timer = nil;
    }
}
           

Finally, declare backgroundView as the custom ContextView, and in opentAVCaptureSession add the following two lines between [self.view.layer addSublayer:self.videoLayer]; and [self.session startRunning];:

self.backgroundView = [[ContextView alloc] initWithFrame:self.view.frame];
[self.view addSubview:self.backgroundView];
           

3. Setting the Scan Area

The rectOfInterest property of AVCaptureMetadataOutput limits the region in which metadata is detected. Its four components are normalized values between 0 and 1, and the default is (0, 0, 1, 1), so until you set it, the whole frame is scanned. The rectangle is expressed relative to the landscape (sensor) coordinate space, which is why x/y and width/height are swapped below:

- (void)setInterestingRect {
    CGRect mainRect = [UIScreen mainScreen].bounds;
    CGFloat mainRectWidth = mainRect.size.width;
    CGFloat mainRectHeight = mainRect.size.height;
    // Same rectangle as the transparent hole drawn by ContextView
    CGRect rect = CGRectMake(mainRectWidth/8, mainRectHeight/2 - 3*mainRectWidth/4, 3*mainRectWidth/4, 3*mainRectWidth/4);
    // rectOfInterest is normalized and referenced to the landscape top-left corner,
    // so x/y and width/height are swapped relative to screen coordinates
    self.interestRect = CGRectMake(rect.origin.y/mainRectHeight, rect.origin.x/mainRectWidth, rect.size.height/mainRectHeight, rect.size.width/mainRectWidth);
    [self.output setRectOfInterest:self.interestRect];
}
           

With the above in place, scanning is limited to the chosen region. Remember to uncomment the //[self setInterestingRect]; call in opentAVCaptureSession so that the rect is actually applied.
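
As a side note (not part of the original write-up), AVCaptureVideoPreviewLayer can also do this conversion for you: metadataOutputRectOfInterestForRect: maps a rect in preview-layer coordinates to the normalized metadata space, avoiding the manual x/y swap. A minimal sketch, assuming the same videoLayer and scan rectangle as above and a running session:

// Convert a preview-layer rect into the normalized rectOfInterest space.
// The conversion is only meaningful once the preview layer is attached to a running session.
CGRect mainRect = [UIScreen mainScreen].bounds;
CGFloat w = mainRect.size.width;
CGFloat h = mainRect.size.height;
CGRect scanRect = CGRectMake(w/8, h/2 - 3*w/4, 3*w/4, 3*w/4);
self.output.rectOfInterest = [self.videoLayer metadataOutputRectOfInterestForRect:scanRect];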