MeiCam SDK For iOS
3.14.0
The Meishe SDK is dedicated to lowering the technical barrier of mobile video development. It helps users, including programmers with only iOS UI development experience, quickly build video recording and editing features with excellent performance and rich rendering effects. The Meishe SDK provides the following functionality:
All Meishe SDK APIs should be called on the UI thread; otherwise unforeseen errors may occur in the application. The only exception is getFrameAtTime: in the NvsVideoFrameRetriever class.
The required operating environment is iOS 9.0 or higher.
If you do not have an SDK yet, please download the latest iOS development version from the Meishe official website: https://www.meishesdk.com/downloads. The Xcode development guide is available at: https://www.meishesdk.com/ios/doc_ch/html/content/PortingGuide_8md.html
For recording, the relevant APIs are in the NvsStreamingContext class, including startCapturePreview:videoResGrade:flags:aspectRatio:, which starts the capture preview; startRecording:, which starts recording; and appendBuiltinCaptureVideoFx:, which applies special effects to the video capture. Note: all class names in the Meishe SDK start with "Nvs".
Please pay attention to the following two points when recording video:
If you want to see a concrete implementation of video recording, it is recommended to refer to the video capture module of SdkDemo. Note: only .mov and .mp4 files are supported for video recording and video compilation.
NvsStreamingContext is the streaming context class of the Meishe SDK and can be regarded as the entry point of the entire SDK framework. When you start using the Meishe SDK, initialize the NvsStreamingContext class first, then obtain the NvsStreamingContext object and use it elsewhere. NvsStreamingContext is a singleton class; please destroy the NvsStreamingContext object when the Meishe SDK is no longer used or the program exits. Make sure not to destroy the NvsStreamingContext object while it is still in use.
The code for initializing NvsStreamingContext is as follows:
_context = [NvsStreamingContext sharedInstance];
Destroy the NvsStreamingContext object:
_context = nil;
[NvsStreamingContext destroyInstance];
Note: before NvsStreamingContext is initialized, you need to call verifySdkLicenseFile: to verify the authorization file. The "sdkLicenseFilePath" parameter is the path of the authorization file; if there is no authorization file, pass an empty string.
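A minimal sketch of this initialization order, assuming verifySdkLicenseFile: is a class method on NvsStreamingContext as described above; the license file name "meishesdk.lic" and its location in the app bundle are assumptions, so substitute the path of your own authorization file:

```objectivec
//The license file name and bundle location below are assumptions;
//use the path of your own authorization file.
NSString *licPath = [[NSBundle mainBundle] pathForResource:@"meishesdk" ofType:@"lic"];
//Verify the license BEFORE the first call to sharedInstance;
//pass an empty string if there is no authorization file.
[NvsStreamingContext verifySdkLicenseFile:(licPath ?: @"")];
_context = [NvsStreamingContext sharedInstance];
```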
The NvsLiveWindow class is used for preview during recording or editing. The aspect ratio of the NvsLiveWindow should be 1:1, 4:3, 16:9, 9:16, etc., and should preferably match the "aspectRatio" parameter of startCapturePreview:videoResGrade:flags:aspectRatio:. Otherwise, the previewed image will be cropped relative to the recorded video.
NvsLiveWindow fill mode:
typedef enum {
    //The image is uniformly scaled to fill the window, trimmed if necessary (default mode)
    NvsLiveWindowFillModePreserveAspectCrop = 0,
    //The image is uniformly scaled to fit the window, without trimming
    NvsLiveWindowFillModePreserveAspectFit,
    //The image is stretched to fill the window
    NvsLiveWindowFillModeStretch
} NvsLiveWindowFillMode;
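Assuming NvsLiveWindow exposes the fill mode as a property named fillMode (verify the property name against the NvsLiveWindow class reference), switching modes might look like:

```objectivec
//Scale uniformly to fit the window, leaving blank bars instead of cropping.
//The "fillMode" property name is an assumption based on the enum above;
//check it against the NvsLiveWindow header.
self.liveWindow.fillMode = NvsLiveWindowFillModePreserveAspectFit;
```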
The three fill modes are illustrated below:
The code is as follows:
//The count of available capture devices
if ([_context captureDeviceCount] == 0) {
    NSLog(@"No device available for capture");
    return;
}
//Connect the capture preview to the NvsLiveWindow control
if (![_context connectCapturePreviewWithLiveWindow:self.liveWindow]) {
    NSLog(@"Failed to connect the preview window");
    return;
}
//Set the delegate for NvsStreamingContext (users must set this!)
_context.delegate = self;
A capture preview is required before the user starts recording video. The code is as follows:
//Start the capture preview
_aspectRatio.den = 1;
_aspectRatio.num = 1;
if (![_context startCapturePreview:0 videoResGrade:NvsVideoCaptureResolutionGradeHigh flags:0 aspectRatio:&_aspectRatio]) {
    ......
}
There are two ways to record video: with effects and without effects. For more details, please refer to: https://www.meishesdk.com/ios/doc_ch/html/content/videoRecorderMode_8md.html
[_context startRecording:outputFilePath];
[_context startRecordingWithFx:outputFilePath];
The "outputFilePath" parameter of startRecording: and startRecordingWithFx: is the path of the recorded video file.
Stop recording video:
[_context stopRecording];
Set whether the flash is on:
[_context toggleFlash:YES];
Auto focus:
CGPoint point = CGPointMake(100, 200);
[_context startAutoFocus:point];
Set the exposure bias:
[_context setExposureBias:1];
Set value of zooming:
[_context setZoomFactor:0.8];
After adding a beauty effect, you can see it in the preview window. When recording, the user should decide whether to record with the beauty effect according to the phone's performance. Beauty effects support strength, whitening, reddening, the basic beauty effect and its intensity, and sharpening. For specific beauty effects, please refer to the "Video Capture" module of SdkDemo.
The code is as follows:
//Add the beauty effect
NvsCaptureVideoFx *beautyFx = [_context appendBeautyCaptureVideoFx];
//Set the value of "Strength"
[beautyFx setFloatVal:@"Strength" val:0.5];
//Set the value of "Whitening"
[beautyFx setFloatVal:@"Whitening" val:0.5];
//Set the value of "Reddening"
[beautyFx setFloatVal:@"Reddening" val:0.5];
//Set the value of "Default Beauty Enabled"
[beautyFx setBooleanVal:@"Default Beauty Enabled" val:YES];
//Set the value of "Default Sharpen Enabled"
[beautyFx setBooleanVal:@"Default Sharpen Enabled" val:YES];
//Set the value of "Default Intensity"
[beautyFx setFloatVal:@"Default Intensity" val:0.5];
There are two types of capture effects: built-in effects and extended package effects, which are obtained by installing resource packages.
If you want to get the name of a built-in capture effect, please refer to the list: https://www.meishesdk.com/ios/doc_ch/html/content/FxNameList_8md.html
Add and remove effects:
[_context appendBuiltinCaptureVideoFx:fxName];
[_context removeAllCaptureVideoFx];
When using an extended package effect, the user must first install the resource package and obtain the resource package ID, and then add the effect. In the example below the package is installed synchronously; if the resource package is large, or as needed, asynchronous installation can be used.
_fxPackageId = [[NSMutableString alloc] initWithString:@""];
NSString *appPath = [[NSBundle mainBundle] bundlePath];
NSString *packagePath = [appPath stringByAppendingPathComponent:@"7FFCF99A-5336-4464-BACD-9D32D5D2DC5E.videofx"];
NvsAssetPackageManagerError error = [_context.assetPackageManager installAssetPackage:packagePath
                                                                              license:nil
                                                                                 type:NvsAssetPackageType_VideoFx
                                                                                 sync:YES
                                                                       assetPackageId:_fxPackageId];
if (error != NvsAssetPackageManagerError_NoError && error != NvsAssetPackageManagerError_AlreadyInstalled) {
    NSLog(@"Failed to install");
}
//Append the packaged video effect
[_context appendPackagedCaptureVideoFx:_fxPackageId];
General steps to implement video editing:
First initialize the NvsStreamingContext class. If it has already been initialized, the object can be obtained directly.
NvsStreamingContext *_context = [NvsStreamingContext sharedInstance];
Creating a timeline is critical for editing. The video resolution of the timeline determines the maximum resolution (size) at which the video file can be compiled. Please match the resolution of the timeline with the aspect ratio of the NvsLiveWindow.
NvsVideoResolution videoEditRes;
videoEditRes.imageWidth = 1280;  //video resolution width
videoEditRes.imageHeight = 720;  //video resolution height
videoEditRes.imagePAR = (NvsRational){1, 1};  //pixel aspect ratio, set to 1:1
NvsRational videoFps = {25, 1};  //frame rate, 25 or 30, generally 25
NvsAudioResolution audioEditRes;
audioEditRes.sampleRate = 48000;  //audio sample rate, 48000 or 44100
audioEditRes.channelCount = 2;  //number of audio channels
audioEditRes.sampleFormat = NvsAudSmpFmt_S16;  //audio sample format
Create a timeline:
NvsTimeline *_timeline = [_context createTimeline:&videoEditRes videoFps:&videoFps audioEditRes:&audioEditRes];
Connect the timeline to the NvsLiveWindow control to preview images on the timeline:
if (![_context connectTimeline:_timeline withLiveWindow:self.liveWindow]) {
    NSLog(@"Failed to connect timeline to liveWindow!");
    return;
}
In general, create a video track first and then add images or videos to it. The materials added to a track are called clips. Both image and video clips are added to the track via file paths. Please note: if an image is too large, reduce its size first; the reduced image should preferably match the resolution used to create the timeline.
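The SDK does not resize images for you. A small helper like the following (our own code, using plain UIKit) can downscale an oversized UIImage toward the timeline resolution, e.g. 1280x720, before writing it to a file and appending it as a clip:

```objectivec
#import <UIKit/UIKit.h>

//Downscale an image so that it fits within maxSize, preserving aspect ratio.
//Returns the original image unchanged if it is already small enough.
static UIImage *ScaledImage(UIImage *image, CGSize maxSize) {
    CGFloat scale = MIN(maxSize.width / image.size.width,
                        maxSize.height / image.size.height);
    if (scale >= 1.0)
        return image; //already small enough
    CGSize newSize = CGSizeMake(image.size.width * scale,
                                image.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```

For example, ScaledImage(photo, CGSizeMake(1280, 720)) yields an image no larger than the 1280x720 timeline created above.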
Append video track:
NvsVideoTrack *_videoTrack = [_timeline appendVideoTrack];
Append audio track:
NvsAudioTrack *_audioTrack = [_timeline appendAudioTrack];
Append a clip:
NSString* videoUrl = @"file:///var/mobile/Media/DCIM/102APPLE/IMG_2625.MOV";
[_videoTrack appendClip:videoUrl];
For the playback and seek interfaces, the "videoSizeMode" parameter is recommended to be set to NvsVideoPreviewSizeModeLiveWindowSize; unless there is a special requirement, setting it to NvsVideoPreviewSizeModeFullSize will affect performance. "preload" indicates preloading and is set to YES. Note: the time unit of the Meishe SDK is microseconds, i.e. 1/1,000,000 of a second.
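Because every SDK timestamp is expressed in microseconds, a small helper constant (our own convention, not part of the SDK) avoids magic numbers:

```objectivec
//1 second = 1,000,000 microseconds; all Meishe SDK timestamps use this unit.
static const int64_t kMicrosecondsPerSecond = 1000000;

//e.g. an 8-second trim-out point:
int64_t eightSeconds = 8 * kMicrosecondsPerSecond; //8000000
```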
For video playback, the "endTime" parameter of playbackTimeline:startTime:endTime:videoSizeMode:preload:flags: can be set to _timeline.duration or -1.
[_context playbackTimeline:_timeline startTime:startTime endTime:_timeline.duration videoSizeMode:NvsVideoPreviewSizeModeLiveWindowSize preload:YES flags:0];
Video seeking:
[_context seekTimeline:_timeline timestamp:0 videoSizeMode:NvsVideoPreviewSizeModeLiveWindowSize flags:NvsStreamingEngineSeekFlag_ShowCaptionPoster | NvsStreamingEngineSeekFlag_ShowAnimatedStickerPoster];
Trim a clip by changing its trim-in and trim-out points.
NvsVideoClip *clip = [_videoTrack getClipWithIndex:0];
[clip changeTrimInPoint:data.startTime affectSibling:YES];
[clip changeTrimOutPoint:data.endTime affectSibling:YES];
Remove the clip:
[_videoTrack removeClip:0 keepSpace:NO];
Clips on a track can be swapped; the "clipIndex" and "destClipIndex" parameters of moveClip:destClipIndex: are the indexes of the two clips to be swapped.
[_videoTrack moveClip:0 destClipIndex:1];
The NvsVideoTrack class provides appendClip:trimIn:trimOut:, which lets you freely set the display duration of an image on the track. The "filePath" parameter is the path of the image material; with "trimIn" set to 0 and "trimOut" set to 8000000, the image is displayed for 8 seconds.
[_videoTrack appendClip:asset.localIdentifier trimIn:0 trimOut:8000000];
If the image is added by appendClip:, the default display time of the image is 4 seconds.
If the created timeline or the added video and audio tracks are no longer needed, remove them as follows:
Remove the timeline:
[_context removeTimeline:_timeline];
Remove the video track:
[_timeline removeVideoTrack:0];
Remove the audio track:
[_timeline removeAudioTrack:0];
Adding music to a video is done by adding audio clips to the audio track. Once the timeline is created, add an audio track via appendAudioTrack and add the music file as an audio clip to that track. Multiple pieces of music can be added, and they will play continuously.
//Add an audio track
NvsAudioTrack *_audioTrack = [_timeline appendAudioTrack];
[_audioTrack appendClip:asset.localIdentifier];
Music trimming works the same way as video trimming: it is also done by setting the trim-in and trim-out points.
NvsAudioClip *clip = [_audioTrack getClipWithIndex:0];
[clip changeTrimInPoint:1000000 affectSibling:YES];
[clip changeTrimOutPoint:5000000 affectSibling:YES];
Adding, deleting, and getting captions are all performed on the timeline. You can refer to the caption editing module of the SdkDemo example.
Add a caption and set its display duration.
[_timeline addCaption:@"Meishe SDK" inPoint:1000000 duration:5000000 captionStylePackageId:_captionStylePackageId];
Remove a caption; the next caption on the timeline is returned, or nil if there is no next caption.
NvsTimelineCaption *caption = [_timeline getFirstCaption];
while (caption) {
    caption = [_timeline removeCaption:caption];
}
There are several ways to get captions:
//Get the first caption on the timeline
NvsTimelineCaption *firstCaption = [_timeline getFirstCaption];
//Get the last caption on the timeline
NvsTimelineCaption *lastCaption = [_timeline getLastCaption];
//Get the previous caption of the current caption on the timeline
NvsTimelineCaption *prevCaption = [_timeline getPrevCaption:currentCaption];
//Get the next caption of the current caption on the timeline
NvsTimelineCaption *nextCaption = [_timeline getNextCaption:currentCaption];
Get captions by their position on the timeline; a list of the captions at that position is returned. The sorting rules of the returned caption list are as follows:
1. If the in points of the captions differ, they are ordered by in point;
2. If the in points are the same, they are ordered by the order in which they were added.
NSArray *captionArray = [_timeline getCaptionsByTimelinePosition:1000000];
Caption properties can be modified through the methods of the NvsTimelineCaption class. After getting a caption, you can set its text, color, bold, italic, stroke, and so on.
Take the example of modifying the caption text:
[currentCaption setText:@"Meishe SDK"];
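Other appearance setters follow the same pattern. The selector names below (setBold:, setTextColor:) and the NvsColor layout are assumptions to be verified against the NvsTimelineCaption class reference:

```objectivec
//Make the caption bold and red. The selector names and the NvsColor
//struct layout (normalized r/g/b/a floats) are assumptions -- check
//them against the NvsTimelineCaption header before use.
[currentCaption setBold:YES];
NvsColor red = {1.0, 0.0, 0.0, 1.0};
[currentCaption setTextColor:red];
```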
If it's a panorama caption, you can also set the polar angle of the caption center point, the azimuth of the caption center point, and so on. Take the polar angle of the center point of the caption as an example:
[currentCaption setCenterPolarAngle:1.2];
After captions are acquired, you can modify the in points, out points, and offset values of the captions on the timeline.
//Change the in point
[currentCaption changeInPoint:1000000];
//Change the out point
[currentCaption changeOutPoint:5000000];
//Change the display position (the in and out points both shift by "offset")
[currentCaption movePosition:1000000];
Adding, deleting, and getting animated stickers are also performed on the timeline. See the sticker module of SdkDemo.
Add an animated sticker:
[_timeline addAnimatedSticker:1000000 duration:5000000 animatedStickerPackageId:_stickerPackageId];
Remove an animated sticker; the next sticker is returned, or nil if there is no next sticker.
NvsTimelineAnimatedSticker *nextSticker = [_timeline removeAnimatedSticker:currentSticker];
There are several ways to get the animated stickers added on the timeline:
//Get the first animated sticker on the timeline
NvsTimelineAnimatedSticker *firstSticker = [_timeline getFirstAnimatedSticker];
//Get the last animated sticker on the timeline
NvsTimelineAnimatedSticker *lastSticker = [_timeline getLastAnimatedSticker];
//Get the previous animated sticker of the current animated sticker on the timeline
NvsTimelineAnimatedSticker *prevSticker = [_timeline getPrevAnimatedSticker:currentSticker];
//Get the next animated sticker of the current animated sticker on the timeline
NvsTimelineAnimatedSticker *nextSticker = [_timeline getNextAnimatedSticker:currentSticker];
Get animated stickers by their position on the timeline; a list of the animated sticker objects at that position is returned. The sorting rules of the returned sticker list are as follows:
1. If the in points of the stickers differ, they are ordered by in point;
2. If the in points are the same, they are ordered by the order in which they were added.
NSArray *stickerArray = [_timeline getAnimatedStickersByTimelinePosition:5000000];
Sticker properties can be modified through the methods of the NvsTimelineAnimatedSticker class. After getting a sticker, you can set its scale, horizontal flip, rotation angle, translation, and so on.
Take modifying the sticker scale as an example:
[currentSticker setScale:1.2];
If it's a panorama animation sticker, you can also set the polar angle of the center point for the sticker, the azimuth angle of the center point for the sticker, and so on. Take the polar angle of the center point as an example:
[currentSticker setCenterPolarAngle:0.8];
After getting the sticker, you can modify the in point, out point and offset value of the animated sticker on the timeline.
//Change the in point
[currentSticker changeInPoint:1000000];
//Change the out point
[currentSticker changeOutPoint:5000000];
//Change the display position (the in and out points both shift by "offset")
[currentSticker movePosition:1000000];
When editing a video, if you need to apply a theme, you can add and remove it through the timeline.
Apply a theme:
[_timeline applyTheme:_themePackageId];
Remove the current theme:
[_timeline removeCurrentTheme];
Get the package Id of current theme:
[_timeline getCurrentThemeId];
After applying a theme, you can set the theme title, trailer, theme music volume, and so on. Take setting the theme title as an example:
[_timeline setThemeTitleCaptionText:@"Meishe SDK"];
Transitions include video transitions and audio transitions. Video transitions are set on the video track, and audio transitions are set on the audio track.
Video transitions include built-in transitions and package transitions. Here, set a built-in video transition:
[_videoTrack setBuiltinTransition:0 withName:transName];
Video package transition:
[_videoTrack setPackagedTransition:1 withPackageId:m_transiPackageId];
Similarly, audio transitions are used in the same way as video transitions; users can refer to the video transition usage.
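For example, assuming NvsAudioTrack mirrors the video-track selector (verify against the NvsAudioTrack class reference), a built-in audio transition would be set as:

```objectivec
//Set a built-in transition between audio clip 0 and clip 1.
//The selector name mirrors NvsVideoTrack and is an assumption;
//check it against the NvsAudioTrack header.
[_audioTrack setBuiltinTransition:0 withName:transName];
```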
In subsequent video editing, several kinds of effects are often used: video effects (NvsVideoFx), audio effects (NvsAudioFx), and timeline video effects (NvsTimelineVideoFx).
Video effects are used on video clips; several video effects can be added to each video clip. Video effects include built-in effects, package effects, and beauty effects.
Add a built-in video effect:
[videoClip appendBuiltinFx:fxName];
Add a package video effect:
[videoClip appendPackagedFx:_videoFxPackageId];
Add a beauty video effect:
[videoClip appendBeautyFx];
Removing video effects includes removing the effect at a specified index and removing all video effects.
Remove the video effect at the specified index:
[videoClip removeFx:0];
Remove all video effects:
[videoClip removeAllFx];
Audio effects are used on audio clips; several audio effects can be added to each audio clip.
Add an audio effect:
[audioClip appendFx:fxName];
Remove the audio effect at the specified index:
[audioClip removeFx:0];
Timeline video effects are effects applied on the timeline itself, including built-in effects and package effects. Several timeline video effects can be added to a timeline.
Add timeline effects:
[_timeline addBuiltinTimelineVideoFx:1000000 duration:5000000 videoFxName:_fxName];
[_timeline addPackagedTimelineVideoFx:1000000 duration:5000000 videoFxPackageId:_fxPackageId];
There are several ways to get timeline effects.
//Get the first timeline video effect on the timeline
NvsTimelineVideoFx *firstTimelineFx = [_timeline getFirstTimelineVideoFx];
//Get the last timeline video effect on the timeline
NvsTimelineVideoFx *lastTimelineFx = [_timeline getLastTimelineVideoFx];
//Get the previous timeline video effect of the current timeline video effect on the timeline
NvsTimelineVideoFx *prevTimelineFx = [_timeline getPrevTimelineVideoFx:currentTimelineFx];
//Get the next timeline video effect of the current timeline video effect on the timeline
NvsTimelineVideoFx *nextTimelineFx = [_timeline getNextTimelineVideoFx:currentTimelineFx];
Get timeline video effects by their position on the timeline; an array of the video effect objects at that position is returned. The ordering rules of the returned array are as follows:
1. If the in points of the timeline video effects differ, they are ordered by in point;
2. If the in points are the same, they are ordered by the order in which they were added.
NSArray *timelineFxArray = [_timeline getTimelineVideoFxByTimelinePosition:5000000];
After you get the timeline effects, you can modify the in points, out points, and offset values of the timeline effects on the timeline.
//Change the in point
[currentTimelineFx changeInPoint:1000000];
//Change the out point
[currentTimelineFx changeOutPoint:5000000];
//Change the display position (the in and out points both shift by "offset")
[currentTimelineFx movePosition:1000000];
The Meishe SDK uses compileTimeline:startTime:endTime:outputFilePath:videoResolutionGrade:videoBitrateGrade:flags: to compile a new video from the clip on the timeline.
Compiling video:
[_context compileTimeline:_timeline
                startTime:0
                  endTime:_timeline.duration
           outputFilePath:_outputFilePath
     videoResolutionGrade:NvsCompileVideoResolutionGrade720
        videoBitrateGrade:NvsCompileBitrateGradeHigh
                    flags:0];
The Meishe SDK provides a rich asset library, including animated stickers, themes, caption styles, transitions, and more. Packages can be downloaded from the web or provided by the Meishe SDK team, and users can use them as needed. The Meishe SDK manages these asset packages through the NvsAssetPackageManager class, which can install, upgrade, and uninstall packages, and obtain a package's status, version number, and so on.
Package installation:
//Synchronous installation is used here; if the package is large, asynchronous mode can be used.
NvsAssetPackageManagerError error = [_context.assetPackageManager installAssetPackage:package1Path
                                                                              license:nil
                                                                                 type:NvsAssetPackageType_VideoFx
                                                                                 sync:YES
                                                                       assetPackageId:_fxPackageId];
Package upgrade:
//Synchronous upgrade is used here; if the package is large, asynchronous mode can be used.
NvsAssetPackageManagerError error = [_context.assetPackageManager upgradeAssetPackage:package1Path
                                                                              license:nil
                                                                                 type:NvsAssetPackageType_VideoFx
                                                                                 sync:YES
                                                                       assetPackageId:_fxPackageId];
Package uninstallation:
NvsAssetPackageManagerError error = [_context.assetPackageManager uninstallAssetPackage:_fxPackageId type:NvsAssetPackageType_VideoFx];
The Meishe SDK provides many delegate interfaces. To query the capture device status, recording status, playback status, compile status, resource package installation status, and so on, you must set the delegate and implement the corresponding delegate methods after creating the NvsStreamingContext object.
Set delegate:
_context.delegate = self;
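A sketch of what a delegate implementation might look like. The protocol name NvsStreamingContextDelegate and the two callback selectors shown are assumptions to be verified against the delegate protocol in the SDK headers:

```objectivec
//Adopt the delegate protocol in your view controller (protocol name assumed):
//@interface MyViewController () <NvsStreamingContextDelegate> ... @end

//Called when timeline playback reaches the end (selector name assumed):
- (void)didPlaybackEOF:(NvsTimeline *)timeline {
    //e.g. reset the play button to its initial state
}

//Called periodically while compiling a video file (selector name assumed):
- (void)didCompileProgress:(NvsTimeline *)timeline progress:(int)progress {
    //e.g. update a progress bar with the 0-100 value
}
```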
The Meishe SDK comes in the speed version, the standard version, and the full-featured PRO version. For the features of each version, please refer to the detailed introduction at: https://www.meishesdk.com/editsdk. Choose the version that fits your needs, and contact us for details.