I've created the method below to merge (join, union — I'm not sure which is the right word) two or more audio files into one: I want them mixed so they all play at once, not one after another. As input I have multiple audio files in .wav format, and I want a single .wav file as output.
    func merge(audioUrls: [NSURL], resultName: String = "result") {
        let resultNameWithExtension = resultName + ".wav"

        // Create AVMutableComposition object. This object will hold our multiple AVMutableCompositionTracks.
        let composition = AVMutableComposition()

        // Create new file to receive data
        //let documentDirectoryURL = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask).first!
        let outputFilePath = NSTemporaryDirectory().stringByAppendingPathComponent(resultNameWithExtension)
        let fileDestinationUrl = NSURL(fileURLWithPath: outputFilePath)
        print(fileDestinationUrl)

        StorageManager.sharedInstance.deleteFileAtPath(NSTemporaryDirectory().stringByAppendingPathComponent(resultNameWithExtension))

        var avAssets: [AVURLAsset] = []
        var assetTracks: [AVAssetTrack] = []
        var timeRanges: [CMTimeRange] = []

        for audioUrl in audioUrls {
            let compositionAudioTrack: AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)

            let avAsset = AVURLAsset(URL: audioUrl, options: nil)
            avAssets.append(avAsset)

            let assetTrack = avAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
            assetTracks.append(assetTrack)

            let duration = assetTrack.timeRange.duration
            let timeRange = CMTimeRangeMake(kCMTimeZero, duration)
            timeRanges.append(timeRange)

            do {
                try compositionAudioTrack.insertTimeRange(timeRange, ofTrack: assetTrack, atTime: kCMTimeZero)
            } catch let error as NSError {
                print("compositionAudioTrack insert error: \(error)")
            }
        }

        let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetPassthrough)!
        assetExport.outputFileType = AVFileTypeWAVE
        assetExport.outputURL = fileDestinationUrl
        assetExport.exportAsynchronouslyWithCompletionHandler({
            self.delegate?.assetExportSessionDidFinishExport(assetExport, outputFilePath: outputFilePath)
        })
    }
My problem is that it's not working and I don't know why. The error I get:
Error Domain=AVFoundationErrorDomain Code=-11838 "Operation Stopped" UserInfo={NSLocalizedDescription=Operation Stopped, NSLocalizedFailureReason=The operation is not supported for this media.})
When I change the preset and output type to .m4a it works (roughly as in the snippet below), but I need .wav. It should work with .wav when my inputs are in the same format, right? Thanks for any help.
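Concretely, the change that works is just swapping the preset and the output file type, roughly like this (assuming the Apple M4A preset and an output path ending in .m4a):

    // Working variant: exporting the same composition as AAC in an .m4a container
    // (assuming the Apple M4A preset; the output path should end in .m4a)
    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetAppleM4A)!
    assetExport.outputFileType = AVFileTypeAppleM4A
    assetExport.outputURL = fileDestinationUrl
    assetExport.exportAsynchronouslyWithCompletionHandler({
        self.delegate?.assetExportSessionDidFinishExport(assetExport, outputFilePath: outputFilePath)
    })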
1 Answer
Refer to this question; it appears this has been an outstanding bug since iOS 7. The advice there from DTS, to file a bug report, unfortunately still appears to be applicable.
What you could try instead is exporting with AVAssetWriter, along the lines of the code here: Converting CAF to WAV.
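A minimal, untested sketch of that approach, assuming the same composition and fileDestinationUrl your merge method builds (the exportToWav name, the completion closure, and the PCM settings are mine — adjust the settings to match your source files):

    import AVFoundation

    // Sketch: read the mixed audio out of the composition and write it as
    // linear PCM into a WAV container using AVAssetReader + AVAssetWriter.
    func exportToWav(asset: AVAsset, destinationUrl: NSURL, completion: (Bool) -> Void) {
        do {
            let reader = try AVAssetReader(asset: asset)
            let writer = try AVAssetWriter(URL: destinationUrl, fileType: AVFileTypeWAVE)

            // Mix every audio track in the composition into a single decoded PCM stream.
            let audioTracks = asset.tracksWithMediaType(AVMediaTypeAudio)
            let readerOutput = AVAssetReaderAudioMixOutput(audioTracks: audioTracks, audioSettings: nil)
            reader.addOutput(readerOutput)

            // Assumed PCM settings -- match them to your source .wav files.
            let pcmSettings: [String: AnyObject] = [
                AVFormatIDKey: Int(kAudioFormatLinearPCM),
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2,
                AVLinearPCMBitDepthKey: 16,
                AVLinearPCMIsFloatKey: false,
                AVLinearPCMIsBigEndianKey: false,
                AVLinearPCMIsNonInterleaved: false
            ]
            let writerInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: pcmSettings)
            writerInput.expectsMediaDataInRealTime = false
            writer.addInput(writerInput)

            writer.startWriting()
            writer.startSessionAtSourceTime(kCMTimeZero)
            reader.startReading()

            let queue = dispatch_queue_create("wav.export.queue", DISPATCH_QUEUE_SERIAL)
            writerInput.requestMediaDataWhenReadyOnQueue(queue) {
                while writerInput.readyForMoreMediaData {
                    if let buffer = readerOutput.copyNextSampleBuffer() {
                        writerInput.appendSampleBuffer(buffer)
                    } else {
                        // No more samples: close the file and report the result.
                        writerInput.markAsFinished()
                        writer.finishWritingWithCompletionHandler {
                            completion(writer.status == .Completed)
                        }
                        break
                    }
                }
            }
        } catch let error as NSError {
            print("WAV export setup error: \(error)")
            completion(false)
        }
    }

AVAssetReaderAudioMixOutput mixes all of the composition's audio tracks into one decoded PCM stream, and AVAssetWriter will happily write linear PCM into a WAV container, which is exactly what AVAssetExportSession refuses to do here. You could call your existing delegate method in place of the completion closure.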