I am attempting to find the depth data at a certain point in the captured image and return the distance in meters.
I have enabled depth data and am capturing it alongside the image. I take the point from the X,Y coordinates of the center of the image (when pressed) and convert it to a buffer index using

Int((width - touchPoint.x) * (height - touchPoint.y))

with width and height being the dimensions of the captured image. I am not sure whether this is the correct method to achieve this, though.
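For comparison, the conventional way to index a row-major pixel buffer is y * width + x, after first scaling the touch point from image coordinates into the depth map's (usually smaller) coordinates. A minimal sketch of that arithmetic, with the function name depthIndex and all sizes purely illustrative:

```swift
// Convert a touch point (in image coordinates) to an index into a
// row-major depth buffer. Assumes no row padding; illustrative only.
func depthIndex(x: Int, y: Int,
                imageWidth: Int, imageHeight: Int,
                mapWidth: Int, mapHeight: Int) -> Int {
    // Scale from image coordinates to depth-map coordinates
    let mapX = x * mapWidth / imageWidth
    let mapY = y * mapHeight / imageHeight
    // Row-major layout: each row holds mapWidth values
    return mapY * mapWidth + mapX
}
```

For the center of a 4032 x 3024 photo with a 768 x 576 depth map, this lands on the center of the map rather than a point derived from multiplying the two offsets together.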
I handle the depth data as such:
func handlePhotoDepthCalculation(point: Int) {
    guard let depth = self.photo, let photoDepthData = depth.depthData else { return }

    //
    // Convert disparity to depth
    //
    let depthData = photoDepthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let depthDataMap = depthData.depthDataMap // AVDepthData -> CVPixelBuffer

    //
    // Set accuracy feedback
    //
    let accuracy = depthData.depthDataAccuracy
    switch accuracy {
    case .absolute:
        // Values within the depth map are absolutely accurate
        // within the physical world.
        self.accuracyLbl.text = "Absolute"
    case .relative:
        // Values within the depth data map are usable for
        // foreground/background separation, but are not absolutely
        // accurate in the physical world. iPhone always produces this.
        self.accuracyLbl.text = "Relative"
    }

    //
    // We convert the data
    //
    CVPixelBufferLockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
    let depthPointer = unsafeBitCast(CVPixelBufferGetBaseAddress(depthDataMap),
                                     to: UnsafeMutablePointer<Float32>.self)

    //
    // Get depth value for image center
    //
    let distanceAtXYPoint = depthPointer[point]

    //
    // Set UI
    //
    self.distanceLbl.text = "\(distanceAtXYPoint) m" // Returns distance in meters?
    self.filteredLbl.text = "\(depthData.isDepthDataFiltered)"
}
I am not convinced I am getting the correct position. From my research it also looks like accuracy is only reported as .relative or .absolute, not as a float/integer?
1 Answer
Values indicating the general accuracy of a depth data map.
The accuracy of a depth data map is highly dependent on the camera calibration data used to generate it. If the camera's focal length cannot be precisely determined at the time of capture, scaling error in the z (depth) plane will be introduced. If the camera's optical center can't be precisely determined at capture time, principal point error will be introduced, leading to an offset error in the disparity estimate. These values report the accuracy of a map's values with respect to its reported units.
case relative
Values within the depth data map are usable for foreground/background separation, but are not absolutely accurate in the physical world.
case absolute
Values within the depth map are absolutely accurate within the physical world.
You should take the width and height from the AVDepthData buffer itself (not from the captured image) when converting your CGPoint to an index, as in the following code.
// Useful data
let width = CVPixelBufferGetWidth(depthDataMap)
let height = CVPixelBufferGetHeight(depthDataMap)
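Putting that together, here is a minimal sketch of reading the depth at a point already expressed in depth-map coordinates. It assumes a kCVPixelFormatType_DepthFloat32 map, and uses CVPixelBufferGetBytesPerRow to step between rows, since pixel-buffer rows can be padded beyond width * 4 bytes; the function name depthValue is illustrative:

```swift
import AVFoundation

// Read a single Float32 depth value from a DepthFloat32 pixel buffer.
// `point` is assumed to already be in depth-map coordinates.
func depthValue(at point: CGPoint, in depthDataMap: CVPixelBuffer) -> Float32 {
    CVPixelBufferLockBaseAddress(depthDataMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthDataMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthDataMap)
    let height = CVPixelBufferGetHeight(depthDataMap)

    // Clamp to valid coordinates
    let x = min(max(Int(point.x), 0), width - 1)
    let y = min(max(Int(point.y), 0), height - 1)

    // Rows may be padded, so advance by bytes-per-row, not width * 4
    let rowBytes = CVPixelBufferGetBytesPerRow(depthDataMap)
    let base = CVPixelBufferGetBaseAddress(depthDataMap)!
    let row = (base + y * rowBytes).assumingMemoryBound(to: Float32.self)
    return row[x]
}
```

With the buffer's own width and height in hand, you would scale your touch point from image coordinates down to map coordinates before calling this, rather than indexing with dimensions taken from the full-resolution photo.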