MeiCam SDK For iOS
3.14.0
Assuming the video is shot vertically at a resolution of 1280*720 and the user wants to generate a 720*720 video:
1) Create the timeline.

    NvsVideoResolution videoEditRes;
    videoEditRes.imageWidth = 720;
    videoEditRes.imageHeight = 720;
    videoEditRes.imagePAR = (NvsRational){1, 1};
    NvsRational videoFps = {25, 1};
    NvsAudioResolution audioEditRes;
    audioEditRes.sampleRate = 48000;
    audioEditRes.channelCount = 2;
    audioEditRes.sampleFormat = NvsAudSmpFmt_S16;
    // Create the timeline.
    m_timeline = [m_streamingContext createTimeline:&videoEditRes videoFps:&videoFps audioEditRes:&audioEditRes];
2) Create the track and clip. path is the absolute path of the clip.

    NvsVideoTrack *videoTrack = [m_timeline appendVideoTrack];
    NvsVideoClip *clip = [videoTrack appendClip:path];
3) Zoom in on the video.

    [clip setPan:0 andScan:1];

For detailed settings, please refer to Pan and Scan.
4) Generate the video. path is the output file path.

    [m_streamingContext compileTimeline:m_timeline
                              startTime:0
                                endTime:m_timeline.duration
                         outputFilePath:path
                   videoResolutionGrade:NvsCompileVideoResolutionGrade720
                      videoBitrateGrade:NvsCompileBitrateGradeHigh
                                  flags:0];
1) Create the timeline, track, and clip. This part is the same as in question one.

2) Add the beauty effect.

    [clip appendBeautyFx];

3) Generate the video.
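The compile call is the same as in question one, repeated here for convenience:

    [m_streamingContext compileTimeline:m_timeline
                              startTime:0
                                endTime:m_timeline.duration
                         outputFilePath:path
                   videoResolutionGrade:NvsCompileVideoResolutionGrade720
                      videoBitrateGrade:NvsCompileBitrateGradeHigh
                                  flags:0];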
1) When creating the track and clips, append multiple source files to create multiple clips. path1 through path5 are the absolute paths of the clips.

    NvsVideoTrack *videoTrack = [m_timeline appendVideoTrack];
    NvsVideoClip *clip1 = [videoTrack appendClip:path1];
    NvsVideoClip *clip2 = [videoTrack appendClip:path2];
    NvsVideoClip *clip3 = [videoTrack appendClip:path3];
    NvsVideoClip *clip4 = [videoTrack appendClip:path4];
    NvsVideoClip *clip5 = [videoTrack appendClip:path5];
2) Generate the video.

    [m_streamingContext compileTimeline:m_timeline
                              startTime:0
                                endTime:m_timeline.duration
                         outputFilePath:path
                   videoResolutionGrade:NvsCompileVideoResolutionGrade720
                      videoBitrateGrade:NvsCompileBitrateGradeHigh
                                  flags:0];
In this way, the clips are merged and compiled into a single file.
A simple picture-in-picture effect superimposes two videos (or images) with different resolutions, such as a horizontally-shot video and a vertically-shot video, by adding them to two separate tracks. In addition, the Transform 2D effect can scale, rotate, and adjust the opacity of a video.

    NvsVideoTrack *videoTrack1 = [m_timeline appendVideoTrack];
    NvsVideoTrack *videoTrack2 = [m_timeline appendVideoTrack];
    NvsVideoClip *clip1 = [videoTrack1 appendClip:path1];
    NvsVideoClip *clip2 = [videoTrack2 appendClip:path2];
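As an illustration, the clip on the upper track (tracks appended later are drawn on top) can be shrunk and repositioned with the built-in Transform 2D effect. The sketch below is not from this document: the effect name "Transform 2D" and the parameter names "Scale X", "Scale Y", "Trans X", "Trans Y", and "Opacity" are assumptions based on the SDK's built-in effect list and should be verified against your SDK version.

    // Illustrative sketch: effect and parameter names are assumptions.
    NvsVideoFx *transform2D = [clip2 appendBuiltinFx:@"Transform 2D"];
    [transform2D setFloatVal:0.5 forName:@"Scale X"];   // scale to half width
    [transform2D setFloatVal:0.5 forName:@"Scale Y"];   // scale to half height
    [transform2D setFloatVal:160 forName:@"Trans X"];   // horizontal offset in timeline coordinates
    [transform2D setFloatVal:160 forName:@"Trans Y"];   // vertical offset in timeline coordinates
    [transform2D setFloatVal:0.8 forName:@"Opacity"];   // slightly transparent overlay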
There are two ways to add a watermark. The first uses the sticker function: the user provides the watermark image to Meishe, which builds it into a sticker package, a file named with a UUID and the .animatedsticker extension. With this file, the watermark can be added through the API.

    NSMutableString *m_stickerId = [NSMutableString string];
    NSString *packagePath = [appPath stringByAppendingPathComponent:@"89740AEA-80D6-432A-B6DE-E7F6539C4121.animatedsticker"];
    NvsAssetPackageManagerError error = [m_streamingContext.assetPackageManager installAssetPackage:packagePath
                                                                                            license:nil
                                                                                               type:NvsAssetPackageType_AnimatedSticker
                                                                                               sync:YES
                                                                                     assetPackageId:m_stickerId];
    if (error != NvsAssetPackageManagerError_NoError && error != NvsAssetPackageManagerError_AlreadyInstalled) {
        NSLog(@"Failed to install the sticker package!");
        package1Valid = NO;
    }

    [m_timeline addAnimatedSticker:0 duration:m_timeline.duration animatedStickerPackageId:m_stickerId];
The second way is to invoke the addWatermark() interface of the NvsTimeline class. path is the path of the watermark file, which must be in PNG or JPG format.

    [m_timeline addWatermark:path displayWidth:0 displayHeight:0 opacity:1 position:NvsTimelineWatermarkPosition_TopRight marginX:0 marginY:0];
Check whether the connectCapturePreviewWithLiveWindow() interface of the NvsStreamingContext class was called successfully, and whether stop() was called on the NvsStreamingContext after startCapturePreview(). Similarly, a black screen when switching from the recording view to the playback view may be caused by calling stop() on the NvsStreamingContext after playbackTimeline(), or by the connectTimelineWithLiveWindow() method of the NvsStreamingContext not having been called, or having been called abnormally.
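A minimal sketch of the expected call order for timeline playback; the selector connectTimeline:withLiveWindow: and the playbackTimeline overload below are assumptions and should be checked against your SDK headers:

    // Reconnect the timeline to the live window before starting playback
    // (selector assumed; verify against your SDK version).
    [m_streamingContext connectTimeline:m_timeline withLiveWindow:m_liveWindow];
    [m_streamingContext playbackTimeline:m_timeline
                               startTime:0
                                 endTime:m_timeline.duration
                           videoSizeMode:NvsVideoPreviewSizeModeLiveWindowSize
                                 preload:YES
                                   flags:0];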
The fields of NvsColor are of float type, and R, G, B, and A take values from 0 to 1. If the given color values are 100, 100, 100, each of them needs to be divided by 255.
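For example, converting 8-bit channel values to an NvsColor (a small sketch using the struct's r/g/b/a fields):

    // Map 0-255 channel values into the 0-1 float range expected by NvsColor.
    NvsColor color;
    color.r = 100 / 255.0f;
    color.g = 100 / 255.0f;
    color.b = 100 / 255.0f;
    color.a = 1.0f; // fully opaque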
Calling playbackTimeline to start playback may briefly flash a black frame. To avoid this problem, first call the seekTimeline interface to seek to position 0; then the black flash will not occur.
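A minimal sketch of that order, assuming the seekTimeline:timestamp:videoSizeMode:flags: overload:

    // Seek to the first frame before playback to avoid the black flash.
    [m_streamingContext seekTimeline:m_timeline
                           timestamp:0
                       videoSizeMode:NvsVideoPreviewSizeModeLiveWindowSize
                               flags:0];
    // Then start playback with playbackTimeline as usual.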
The reason may be that some mobile phone players do not support automatic rotation, which can make the image orientation appear abnormal during video playback and mislead users.
When using code obfuscation, be careful not to obfuscate the following classes. The correct way to avoid this error is to add the following keep rules:

    -keep class com.cdv.** {*;}
    -keep class com.meicam.** {*;}
When using the effect SDK alone, be careful not to obfuscate the following classes. The correct way to avoid this error is to add the following keep rules:

    -keep class com.cdv.effect.** {*;}
    -keep class com.meicam.effect.** {*;}
The use of H.265 (HEVC) for video recording is as follows:

    NSMutableDictionary *config = [[NSMutableDictionary alloc] init];
    [config setValue:@"hevc" forKey:NVS_COMPILE_VIDEO_ENCODEC_NAME];
    [context startRecording:filePath withFlags:0 withRecordConfigurations:config];
The use of H.265 (HEVC) for video generation is as follows:

    NSMutableDictionary *config = [[NSMutableDictionary alloc] init];
    [config setValue:@"hevc" forKey:NVS_COMPILE_VIDEO_ENCODEC_NAME]; // H.265 mode
    context.compileConfigurations = config; // Must be set before compileTimeline is invoked
    [context compileTimeline:timeline
                   startTime:0
                     endTime:timeline.duration
              outputFilePath:outputPath
        videoResolutionGrade:NvsCompileVideoResolutionGrade720
           videoBitrateGrade:NvsCompileBitrateGradeHigh
                       flags:0];
Attention: The SDK evaluates the processing capability of the user's phone. If the phone is capable of processing the recorded video, the video is recorded at the resolution grade as set. If not, the SDK lowers the resolution grade to a level the phone can handle. For example, on certain phone models, if the grade is set to SUPER_HIGH but the phone cannot support it, the SDK lowers the grade to HIGH or MEDIUM, so the recorded grade may differ from the grade the user set. Likewise, when recording without special effects (using the system's built-in camera), the resolution grade is determined by the camera's capability; if the camera cannot satisfy the grade that was set, the SDK lowers the resolution grade while recording.