Showing posts with label avcapturesession. Show all posts

Sunday, November 19, 2017

How to read depth data at a CGPoint from AVDepthData buffer


I am attempting to find the depth data at a certain point in the captured image and return the distance in meters.

I have enabled depth data and am capturing it alongside the image. I take the point from the X,Y coordinates of the center of the image (or from a touch when the screen is pressed) and convert it to the buffer's index using

Int((width - touchPoint.x) * (height - touchPoint.y)) 

where width and height are the dimensions of the captured image. I am not sure whether this is the correct method to achieve this, though.

I handle the depth data as such:

func handlePhotoDepthCalculation(point: Int) {
    guard let depth = self.photo else {
        return
    }

    //
    // Convert Disparity to Depth
    //
    let depthData = (depth.depthData as AVDepthData!).converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let depthDataMap = depthData.depthDataMap // AVDepthData -> CVPixelBuffer

    //
    // Set Accuracy feedback
    //
    let accuracy = depthData.depthDataAccuracy
    switch (accuracy) {
    case .absolute:
        /*
         NOTE - Values within the depth map are absolutely
         accurate within the physical world.
        */
        self.accuracyLbl.text = "Absolute"
        break
    case .relative:
        /*
         NOTE - Values within the depth data map are usable for
         foreground/background separation, but are not absolutely
         accurate in the physical world. iPhone always produces this.
        */
        self.accuracyLbl.text = "Relative"
    }

    //
    // We convert the data
    //
    CVPixelBufferLockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
    let depthPointer = unsafeBitCast(CVPixelBufferGetBaseAddress(depthDataMap), to: UnsafeMutablePointer<Float32>.self)

    //
    // Get depth value for image center
    //
    let distanceAtXYPoint = depthPointer[point]

    //
    // Set UI
    //
    self.distanceLbl.text = "\(distanceAtXYPoint) m" // Returns distance in meters?
    self.filteredLbl.text = "\(depthData.isDepthDataFiltered)"
}

I am not convinced I am getting the correct position. From my research, it also looks like accuracy is only reported as .relative or .absolute, not as a float or integer?

1 Answer

Answer 1

Values indicating the general accuracy of a depth data map.

The accuracy of a depth data map is highly dependent on the camera calibration data used to generate it. If the camera's focal length cannot be precisely determined at the time of capture, scaling error in the z (depth) plane will be introduced. If the camera's optical center can't be precisely determined at capture time, principal point error will be introduced, leading to an offset error in the disparity estimate. These values report the accuracy of a map's values with respect to its reported units.

case relative

Values within the depth data map are usable for foreground/background separation, but are not absolutely accurate in the physical world.

case absolute

Values within the depth map are absolutely accurate within the physical world.

To map a CGPoint into the AVDepthData buffer, you first need the buffer's width and height, like the following code.

// Useful data
let width = CVPixelBufferGetWidth(depthDataMap)
let height = CVPixelBufferGetHeight(depthDataMap)
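With those dimensions, the buffer index for a point in a row-major map is y * width + x, not the (width - x) * (height - y) formula from the question. A minimal sketch, assuming the map is kCVPixelFormatType_DepthFloat32 and that `point` has already been scaled from view coordinates into the buffer's pixel coordinates (the function name and that scaling step are my own, not from the question):

```swift
import AVFoundation
import CoreVideo
import UIKit

// Sketch: read the depth value at a CGPoint from a CVPixelBuffer
// in kCVPixelFormatType_DepthFloat32 format.
// Assumes `point` is already in the buffer's pixel coordinates.
func depthValue(at point: CGPoint, in depthDataMap: CVPixelBuffer) -> Float32 {
    CVPixelBufferLockBaseAddress(depthDataMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthDataMap, .readOnly) }

    // Rows may be padded, so step by the actual bytes-per-row
    // rather than assuming width * MemoryLayout<Float32>.size.
    let rowBytes = CVPixelBufferGetBytesPerRow(depthDataMap)
    let baseAddress = CVPixelBufferGetBaseAddress(depthDataMap)!

    let rowStart = baseAddress + Int(point.y) * rowBytes
    let rowPointer = rowStart.assumingMemoryBound(to: Float32.self)
    return rowPointer[Int(point.x)]
}
```

Note that the plain index y * width + x is only safe when the buffer has no row padding; indexing through bytes-per-row, as above, works either way.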

Friday, May 6, 2016

Why are my app's videos uploading to Facebook as a blank green screen?


Don't know where else to ask this, so I thought to start here.

I have two video clips. Neither video clip has any audio.

One video clip is captured from the iPhone's camera via AVCaptureSession.

The second video clip is stored locally on the device.

I want to merge the two videos in a way that plays the captured video in its entirety, followed immediately by one second of the second video clip. I then merge the new video clip with a predetermined audio file and segue to my share menu where I save to the camera roll.

The final result plays exactly as it should in the camera roll. However, when I share the video to Facebook, the first video clip is distorted as a green or sometimes gray screen. The second clip plays fine when its time arrives. And the audio is fine throughout the entire thing.

I have no idea what is causing this.

Any help?

func mergeVideos() {

    let videoAsset = AVAsset(URL: recordedVideoURL)
    let videoAsset2 = AVAsset(URL: NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("Credits", ofType: "mp4")!))
    let audioAsset = AVAsset(URL: finalAudioURL)

    // 1 - Create AVMutableComposition object.
    let mixComposition = AVMutableComposition()

    // 2 - Audio track
    do {
        let audioTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: 0)
        try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, audioAsset.duration + CMTime(seconds: 1, preferredTimescale: 30)),
                                       ofTrack: audioAsset.tracksWithMediaType(AVMediaTypeAudio)[0],
                                       atTime: kCMTimeZero)
    } catch {
        print(error)
    }

    // 3 - Video tracks
    do {
        let videoTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        try videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration),
                                       ofTrack: videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                       atTime: kCMTimeZero)
        try videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, audioAsset.duration - videoAsset.duration + CMTime(seconds: 1, preferredTimescale: 30)),
                                       ofTrack: videoAsset2.tracksWithMediaType(AVMediaTypeVideo)[0],
                                       atTime: videoAsset.duration)
    } catch {
        print(error)
    }

    // 5 - Create Exporter
    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    deleteFileAtURL(videoToShareURL)
    exporter!.outputURL = videoToShareURL
    exporter!.outputFileType = AVFileTypeMPEG4
    exporter!.shouldOptimizeForNetworkUse = true

    // 6 - Perform the Export
    exporter!.exportAsynchronouslyWithCompletionHandler() {
        dispatch_async(dispatch_get_main_queue(), { () -> Void in
            hideSpinner()
            self.performSegueWithIdentifier("backToShare", sender: self)
        })
    }
}

1 Answer

Answer 1

This is counterintuitive, but are you sure this is a problem with the iPhone and not with Facebook? There are known cases where Facebook videos fail to play properly, showing a green screen while the audio plays correctly.

It's worth a shot to follow these links on the Facebook Help Center and confirm:

1. Green screen on videos on Facebook

2. When I try to play a video I get sound but the screen turns green


Friday, March 25, 2016

How to configure AVCaptureSession for high res still images and low res (video) preview?


I'd like to capture high resolution still images using AVCaptureSession, so the AVCaptureSession preset is set to Photo.

This is working well so far. On an iPhone 4 the final still image resolution is at its maximum of 2448x3264 pixels and the preview (video) resolution is 852x640 pixels.

Now, because the preview frames are analyzed to detect objects in the scene, I'd like to lower their resolution. How can this be done? I've tried to set AVVideoSettings with a lower width/height to AVCaptureVideoDataOutput, but this leads to the following error message:

-[AVCaptureVideoDataOutput setVideoSettings:] - videoSettings dictionary contains one or more unsupported (ignored) keys: (AVVideoHeightKey, AVVideoWidthKey)

So it seems this is not the right approach for configuring the size of the preview frames received by AVCaptureVideoDataOutput / AVCaptureVideoDataOutputSampleBufferDelegate. Do you have any idea how the resolution of the preview frames can be configured?

Any advice is welcome. Thank you.

2 Answers

Answer 1

If you want to specify the settings manually, you need to set activeFormat on the AVCaptureDevice. This implicitly sets the session preset to AVCaptureSessionPresetInputPriority.

The activeFormat property takes an AVCaptureDeviceFormat, but you can only choose one from the list in AVCaptureDevice.formats. You'll need to go through the list and find one that fits your needs. Specifically, check that highResolutionStillImageDimensions is high enough for the desired still capture, and that formatDescription (which needs to be inspected with CMFormatDescription* functions, e.g., CMVideoFormatDescriptionGetDimensions) matches your desired preview settings.
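The search described above can be sketched as follows. The function name and the dimension thresholds (3264 for stills, matching the iPhone 4 numbers in the question, and 640 for preview) are illustrative assumptions, not values prescribed by the answer:

```swift
import AVFoundation

// Sketch: pick a device format whose still-image resolution is high
// while its video (preview) dimensions stay small, then make it active.
// Thresholds are illustrative, not prescriptive.
func configureLowResPreview(device: AVCaptureDevice) throws {
    let candidate = device.formats.first { format in
        let stillDims = format.highResolutionStillImageDimensions
        let videoDims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return stillDims.width >= 3264 && videoDims.width <= 640
    }
    guard let format = candidate else { return }

    // Setting activeFormat requires locking the device for configuration,
    // and implicitly switches the session preset to InputPriority.
    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()
}
```

If no format satisfies both constraints on a given device, you would have to relax one of them; the available formats vary per hardware model.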

Answer 2

To lower the size of the output of AVCaptureVideoDataOutput, you can set a lower bitrate, which produces smaller samples.

Commonly used keys for AVCaptureVideoDataOutput are:

AVVideoAverageBitRateKey
AVVideoProfileLevelKey
AVVideoExpectedSourceFrameRateKey
AVVideoMaxKeyFrameIntervalKey

For example:

private static let videoCompressionOptionsMedium = [AVVideoAverageBitRateKey: 1750000,
                                                    AVVideoProfileLevelKey: AVVideoProfileLevelH264BaselineAutoLevel,
                                                    AVVideoExpectedSourceFrameRateKey: Int(30),
                                                    AVVideoMaxKeyFrameIntervalKey: Int(30)]
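One place these keys are accepted (an addition to the answer, not stated in it) is nested under AVVideoCompressionPropertiesKey in an AVAssetWriterInput's output settings when re-encoding the frames. A sketch with illustrative values, using the same Swift 2-era constants as the rest of this post:

```swift
import AVFoundation

// Sketch: compression keys like AVVideoAverageBitRateKey normally live
// under AVVideoCompressionPropertiesKey in an AVAssetWriterInput's
// output settings; width/height and bitrate values here are illustrative.
let outputSettings: [String: AnyObject] = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: 640,
    AVVideoHeightKey: 480,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1750000,
        AVVideoProfileLevelKey: AVVideoProfileLevelH264BaselineAutoLevel,
        AVVideoExpectedSourceFrameRateKey: Int(30),
        AVVideoMaxKeyFrameIntervalKey: Int(30)
    ]
]
let writerInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo,
                                     outputSettings: outputSettings)
```

This lowers the size of what you write out; the frames delivered to the AVCaptureVideoDataOutputSampleBufferDelegate itself remain at the resolution of the device's active format.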
Read More