Swift: video records at full screen but renders in the wrong dimensions

The goal is to capture full-screen video on a device with Swift. In the code below, video capture does appear to happen at full screen (the camera preview fills the entire screen), but the rendered video comes out at a different resolution. On a 5S in particular, capture appears to happen at 320x568 while rendering happens at 320x480.

How can you capture and display full-screen video?

Code for video capture:
private func initPBJVision() {
    // Store PBJVision in var for convenience
    let vision = PBJVision.sharedInstance()

    // Configure PBJVision
    vision.delegate = self
    vision.cameraMode = PBJCameraMode.Video
    vision.cameraOrientation = PBJCameraOrientation.Portrait
    vision.focusMode = PBJFocusMode.ContinuousAutoFocus
    vision.outputFormat = PBJOutputFormat.Preset
    vision.cameraDevice = PBJCameraDevice.Back

    // Let taps start/pause recording
    let tapHandler = UITapGestureRecognizer(target: self, action: "doTap:")
    view.addGestureRecognizer(tapHandler)

    // Log status
    print("Configured PBJVision")
}
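For context, a minimal sketch of what the `doTap:` handler referenced above might look like (it is not part of the original post; the `recording`, `startVideoCapture`, and `pauseVideoCapture` names are assumed from PBJVision's API):

```swift
// Hypothetical tap handler (not in the original question).
// Assumes PBJVision's startVideoCapture/pauseVideoCapture methods.
func doTap(sender: UITapGestureRecognizer) {
    let vision = PBJVision.sharedInstance()
    if !vision.recording {
        vision.startVideoCapture()   // begin a new clip
    } else {
        vision.pauseVideoCapture()   // pause; resumeVideoCapture() continues
    }
}
```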
private func startCameraPreview() {
    // Store PBJVision in var for convenience
    let vision = PBJVision.sharedInstance()

    // Connect PBJVision camera preview to <videoView>
    // -- Get preview width
    let deviceWidth = CGRectGetWidth(view.frame)
    let deviceHeight = CGRectGetHeight(view.frame)

    // -- Configure PBJVision's preview layer
    let previewLayer = vision.previewLayer
    previewLayer.frame = CGRectMake(0, 0, deviceWidth, deviceHeight)
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    ...
}
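Note that `AVLayerVideoGravityResizeAspectFill` makes the preview fill the screen regardless of what the session actually records, so the preview can look full-screen while the file is a different size. If the mismatch comes from PBJVision's default capture session preset, explicitly setting one may help. This is a sketch of that idea, not a confirmed fix; `captureSessionPreset` is part of PBJVision's API, but whether the preset is the cause here is an assumption:

```swift
// Sketch: force a known capture preset before recording.
// Assumes the 320x480 output stems from a 4:3 default preset.
private func configureCapturePreset() {
    let vision = PBJVision.sharedInstance()
    // AVCaptureSessionPresetHigh records at the camera's native 16:9
    // resolution (e.g. 1920x1080 on a 5S) instead of a 4:3 preset.
    vision.captureSessionPreset = AVCaptureSessionPresetHigh
}
```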
Code for video rendering:
func exportVideo(fileUrl: NSURL) {
    // Create main composition object
    let videoAsset = AVURLAsset(URL: fileUrl, options: nil)
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // -- Extract and apply video & audio tracks to composition
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Video error: \(error).")
    }
    do {
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Audio error: \(error).")
    }

    // Add text to video
    // -- Create video composition object
    let renderSize = compositionVideoTrack.naturalSize
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = renderSize
    videoComposition.frameDuration = CMTimeMake(Int64(1), Int32(videoFrameRate))

    // -- Add instruction to video composition object
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Define video frame
    let videoFrame = CGRectMake(0, 0, renderSize.width, renderSize.height)
    print("Video Frame: \(videoFrame)") // <-- Prints frame of 320x480 so render size already wrong here
    ...
}
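Since `renderSize` is taken directly from `naturalSize`, it is worth checking what the recorded file itself contains before blaming the composition. A diagnostic sketch (the `logTrackInfo` helper is hypothetical; it only uses standard AVFoundation properties):

```swift
// Diagnostic sketch: inspect the recorded file's video track.
// If naturalSize is already 480-based here, the problem is at
// capture time, not in the composition code.
func logTrackInfo(fileUrl: NSURL) {
    let asset = AVURLAsset(URL: fileUrl, options: nil)
    guard let track = asset.tracksWithMediaType(AVMediaTypeVideo).first else { return }
    print("naturalSize: \(track.naturalSize)")               // size as stored in the file
    print("preferredTransform: \(track.preferredTransform)") // rotation applied on playback
    // The size you see on screen is the natural size with the transform applied
    let displayRect = CGRectApplyAffineTransform(
        CGRectMake(0, 0, track.naturalSize.width, track.naturalSize.height),
        track.preferredTransform)
    print("display size: \(displayRect.size)")
}
```

For portrait recordings, `naturalSize` is often landscape with a 90-degree `preferredTransform`, so reading it raw can be misleading.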
Did you check the 'naturalSize' value? What is it? And what do you do next? Are you using 'AVAssetExportSession'? – rkyr
@rkyr yes, 'renderSize' equals 'naturalSize'. By the time it is printed the render size is already wrong (320x480), which is why no more code was included. Any suggestions? – Crashalot
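If the capture size cannot be changed, one possible direction is to scale the composition up to the intended screen size with a layer-instruction transform. This is only a sketch under that assumption (`targetSize` is hypothetical; note that scaling 320x480 to 320x568 is non-uniform and will stretch the image, so fixing the capture preset is the cleaner route):

```swift
// Sketch: scale the rendered video to the target screen size.
// Assumes compositionVideoTrack, videoLayerInstruction, and
// videoComposition from the exportVideo(_:) code above.
let targetSize = CGSizeMake(320, 568) // 5S screen in points (assumed target)
let naturalSize = compositionVideoTrack.naturalSize
let scale = CGAffineTransformMakeScale(
    targetSize.width / naturalSize.width,
    targetSize.height / naturalSize.height)
videoLayerInstruction.setTransform(scale, atTime: kCMTimeZero)
videoComposition.renderSize = targetSize
```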