Detect object on tap gesture

I have been working on a project in Swift that lets the user detect the object the camera is looking at when they tap the screen. I followed this guide to understand how to integrate the machine learning pipeline on iOS. It shows a camera preview on the screen with a description at the bottom that says what the item could be. Now I want to use a tap gesture so the label of the object is only fetched when the screen is tapped once.



I am a bit unsure where the


@objc func doubleTapped() {
    //code
}


goes in the code. Here is what I have tried:


import UIKit
import AVKit
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    let identifierLabel: UILabel = {
        let label = UILabel()
        label.backgroundColor = .white
        label.textAlignment = .center
        label.translatesAutoresizingMaskIntoConstraints = false
        return label
    }()

    override func viewDidLoad() {
        super.viewDidLoad()

        let captureSession = AVCaptureSession()
        captureSession.sessionPreset = .photo

        let tap = UITapGestureRecognizer(target: self, action: #selector(doubleTapped))
        tap.numberOfTapsRequired = 1
        view.addGestureRecognizer(tap)

        guard let captureDevice = AVCaptureDevice.default(for: .video) else { return }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }
        captureSession.addInput(input)

        captureSession.startRunning()

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = view.frame

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.addOutput(dataOutput)

        // VNImageRequestHandler(cgImage: <#T##CGImage#>, options: [:]).perform(<#T##requests: [VNRequest]##[VNRequest]#>)

        setupIdentifierConfidenceLabel()
    }

    fileprivate func setupIdentifierConfidenceLabel() {
        view.addSubview(identifierLabel)
        identifierLabel.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -32).isActive = true
        identifierLabel.leftAnchor.constraint(equalTo: view.leftAnchor).isActive = true
        identifierLabel.rightAnchor.constraint(equalTo: view.rightAnchor).isActive = true
        identifierLabel.heightAnchor.constraint(equalToConstant: 50).isActive = true
    }

    @objc func doubleTapped() {
        print("hello")
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // print("Camera was able to capture a frame:", Date())

        guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // !!!Important
        // make sure to go download the models at https://developer.apple.com/machine-learning/ scroll to the bottom
        guard let model = try? VNCoreMLModel(for: Resnet50().model) else { return }
        let request = VNCoreMLRequest(model: model) { (finishedReq, err) in

            guard let results = finishedReq.results as? [VNClassificationObservation] else { return }

            guard let firstObservation = results.first else { return }

            print(firstObservation.identifier, firstObservation.confidence)

            DispatchQueue.main.async {
                self.identifierLabel.text = "\(firstObservation.identifier) \(firstObservation.confidence * 100)"
            }
        }

        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }
}

When I do not add the tap gesture, the label updates as the object the camera faces changes. When I add the tap gesture and its handler, the code breaks.
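What I am trying to end up with is roughly the idea below: keep the capture session running, but only run the classification and update the label after a tap. This is only a sketch of what I have in mind, not working code; the shouldClassify flag and the one-frame-per-tap reset are my own assumptions, not something from the guide.

    // Sketch only: gate the per-frame classification behind a flag that the tap handler sets.
    // shouldClassify is a hypothetical property I would add to ViewController; it is not thread-safe as written.
    var shouldClassify = false

    @objc func doubleTapped() {
        shouldClassify = true   // the next captured frame should be classified
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Ignore frames until the user taps the screen, then classify a single frame.
        guard shouldClassify else { return }
        shouldClassify = false

        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        guard let model = try? VNCoreMLModel(for: Resnet50().model) else { return }

        let request = VNCoreMLRequest(model: model) { (finishedReq, err) in
            guard let results = finishedReq.results as? [VNClassificationObservation],
                  let firstObservation = results.first else { return }
            DispatchQueue.main.async {
                self.identifierLabel.text = "\(firstObservation.identifier) \(firstObservation.confidence * 100)"
            }
        }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }

Is something like that the right way to hook the tap into the capture pipeline, or is the gesture recognizer itself conflicting with the session setup in viewDidLoad?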



I really appreciate the help.








