MeiCam SDK For Web  3.12.1
NveEffectContext Class Reference

Effect context is the entry to the effect SDK framework; nveGetEffectContextInstance() is used to get the unique effect context instance. More...

Public Member Functions

 constructor ()
 
 verifySdkLicenseFile (licenseFilePath)
 Verifies the SDK license file. More...
 
async verifySdkLicenseFileUrl (licenseFileUrl)
 Verifies the SDK license file URL. More...
 
 setMaxQueuedRenderTask (maxQueuedRenderTask)
 Set the maximum number of queued render tasks. More...
 
 getMaxQueuedRenderTask ()
 Get the maximum number of queued render tasks. More...
 
 createVideoEffect (fxName, aspectRatio, workingInRealtimeMode=true)
 Create video effect object. More...
 
 createVideoTransition (fxName, aspectRatio)
 Create video transition object. More...
 
 createAnimatedSticker (inPoint, duration, isPanoramic, packageId, aspectRatio)
 Create animated sticker object. More...
 
 createCompoundCaption (inPoint, duration, packageId, aspectRatio)
 Create compound caption object. More...
 
 createCaption (text, inPoint, duration, isPanoramic, packageId, aspectRatio)
 Create caption object. More...
 
 createModularCaption (text, inPoint, duration, aspectRatio)
 Create modular caption object. More...
 
 renderEffects (effectInstanceArray, inputImageData, timestampMs, flags=0, hostBufferInfoExtArray=[], renderRect={})
 Render the specified array of effects on the input image. More...
 
 renderEffectsWithMultiInputs (effectInstanceArray, inputImageDataArray, timestampMs, flags=0, hostBufferInfoExtArray=[], renderRect={})
 Render the specified array of effects on multiple input images. More...
 
 renderEffect (effectInstance, inputImageData, timestampMs, flags=0, hostBufferInfoExtArray=[])
 Render an effect on the input image. More...
 
 renderEffectWithMultiInputs (effectInstance, inputImageDataArray, timestampMs, flags=0, hostBufferInfoExtArray=[])
 Render an effect on multiple input images. More...
 
 initHumanDetection (modelFilePath, licenseFilePath, features)
 Initializes human detection. Needs to be called only once. More...
 
 initHumanDetectionExt (modelFilePath, licenseFilePath, features)
 Initializes the human detection extension. initHumanDetection must be called first. More...
 
 setupHumanDetectionData (dataType, dataFilePath)
 Set up human detection data. More...
 
 closeHumanDetection ()
 Close the human detection mechanism. More...
 
 getAssetPackageManager ()
 Get asset package manager. More...
 
 inferenceTest (imageFilePath, modelFilePath, forwardType, threadNum, loopCount, flags)
 

Detailed Description

Effect context is the entry to the effect SDK framework; nveGetEffectContextInstance() is used to get the unique effect context instance.
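
A minimal usage sketch, assuming the SDK script has been loaded and exposes nveGetEffectContextInstance() globally; 'meishe.lic' is a placeholder for your license file path:

    // Obtain the unique effect context instance (the entry to the SDK).
    const effectContext = nveGetEffectContextInstance();

    // Verify the SDK license before creating or rendering any effect.
    // 'meishe.lic' is a placeholder path; substitute your real license file.
    if (!effectContext.verifySdkLicenseFile('meishe.lic')) {
        console.warn('License verification failed; effects will be ineffective.');
    }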

Member Function Documentation

◆ closeHumanDetection()

NveEffectContext::closeHumanDetection ( )
inline

Close the human detection mechanism.

Returns
{}

◆ constructor()

NveEffectContext::constructor ( )
inline

Constructor.

◆ createAnimatedSticker()

NveEffectContext::createAnimatedSticker (   inPoint,
  duration,
  isPanoramic,
  packageId,
  aspectRatio 
)
inline

Create animated sticker object.

Parameters
{Number}inPoint In point
{Number}duration Duration
{Boolean}isPanoramic Panoramic or not
{String}packageId Animated sticker package id
{String}aspectRatio Aspect ratio
Returns
{NveEffectInstance} Effect instance object
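
A hedged example; 'STICKER_PACKAGE_ID' is a placeholder for an installed sticker package id, and the '16:9' aspect ratio string and millisecond time unit are assumptions:

    // Create a non-panoramic animated sticker starting at 0 and lasting
    // 3000 time units (assumed to be milliseconds here).
    const sticker = effectContext.createAnimatedSticker(
        0, 3000, false, 'STICKER_PACKAGE_ID', '16:9');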

◆ createCaption()

NveEffectContext::createCaption (   text,
  inPoint,
  duration,
  isPanoramic,
  packageId,
  aspectRatio 
)
inline

Create caption object.

Parameters
{String}text Caption text
{Number}inPoint In point
{Number}duration Duration
{Boolean}isPanoramic Panoramic or not
{String}packageId Caption style package id
{String}aspectRatio Aspect ratio
Returns
{NveEffectInstance} Effect instance object
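
A hedged example; 'CAPTION_STYLE_PACKAGE_ID' is a placeholder for an installed caption style package id, and the '16:9' aspect ratio string and millisecond time unit are assumptions:

    // Create a non-panoramic caption shown from 0 for 2000 time units
    // (assumed to be milliseconds here).
    const caption = effectContext.createCaption(
        'Hello, world', 0, 2000, false, 'CAPTION_STYLE_PACKAGE_ID', '16:9');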

◆ createCompoundCaption()

NveEffectContext::createCompoundCaption (   inPoint,
  duration,
  packageId,
  aspectRatio 
)
inline

Create compound caption object.

Parameters
{Number}inPoint In point
{Number}duration Duration
{String}packageId Compound caption package id
{String}aspectRatio Aspect ratio
Returns
{NveEffectInstance} Effect instance object

◆ createModularCaption()

NveEffectContext::createModularCaption (   text,
  inPoint,
  duration,
  aspectRatio 
)
inline

Create modular caption object.

Parameters
{String}text Caption text
{Number}inPoint In point
{Number}duration Duration
{String}aspectRatio Aspect ratio
Returns
{NveEffectInstance} Effect instance object

◆ createVideoEffect()

NveEffectContext::createVideoEffect (   fxName,
  aspectRatio,
  workingInRealtimeMode = true 
)
inline

Create video effect object.

Parameters
{String}fxName For built-in video effects, it is the name of the effect. If it is a resource package effect, it is the resource package id.
{String}aspectRatio Aspect ratio
{Boolean}workingInRealtimeMode Working in realtime mode or not, default value is true.
Returns
{NveEffectInstance} Effect instance object
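
A hedged example; 'Gaussian Blur' stands in for a built-in effect name (check the SDK's effect list for real names) and '16:9' for the aspect ratio string:

    // Built-in effect: pass the effect name. For a package effect, pass the
    // installed resource package id instead.
    const blurFx = effectContext.createVideoEffect('Gaussian Blur', '16:9');

    // Pass false as the third argument to work in non-realtime mode.
    const offlineFx = effectContext.createVideoEffect('Gaussian Blur', '16:9', false);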

◆ createVideoTransition()

NveEffectContext::createVideoTransition (   fxName,
  aspectRatio 
)
inline

Create video transition object.

Parameters
{String}fxName For built-in video transitions, it is the name of the transition. If it is a resource package transition, it is the resource package id.
{String}aspectRatio Aspect ratio
Returns
{NveEffectInstance} Effect instance object

◆ getAssetPackageManager()

NveEffectContext::getAssetPackageManager ( )
inline

Get asset package manager.

Returns
{NveAssetPackageManager} Asset package manager object

◆ getMaxQueuedRenderTask()

NveEffectContext::getMaxQueuedRenderTask ( )
inline

Get the maximum number of queued render tasks.

Returns
{Number} Maximum value

◆ initHumanDetection()

NveEffectContext::initHumanDetection (   modelFilePath,
  licenseFilePath,
  features 
)
inline

Initializes human detection. Needs to be called only once.

Parameters
{String}modelFilePath Model file path
{String}licenseFilePath License file path
{Number}features Features, see NveHumanDetectionFeatureEnum for details
Returns
{Boolean} true indicates success, false indicates failure.
See also
NveEffectContext#initHumanDetectionExt NveEffectContext#closeHumanDetection
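
A hedged sketch; the model and license file paths are placeholders, and the features value is a hypothetical combination of NveHumanDetectionFeatureEnum flags (see that enum for the real values):

    // Initialize human detection once, before rendering effects that need it.
    const features = 1; // hypothetical NveHumanDetectionFeatureEnum value
    if (!effectContext.initHumanDetection('face.model', 'face.lic', features)) {
        console.warn('Human detection initialization failed.');
    }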

◆ initHumanDetectionExt()

NveEffectContext::initHumanDetectionExt (   modelFilePath,
  licenseFilePath,
  features 
)
inline

Initializes the human detection extension. initHumanDetection must be called first.

Parameters
{String}modelFilePath Model file path
{String}licenseFilePath License file path
{Number}features Features, see NveHumanDetectionFeatureEnum for details
Returns
{Boolean} true indicates success, false indicates failure.
See also
NveEffectContext#initHumanDetection NveEffectContext#closeHumanDetection

◆ renderEffect()

NveEffectContext::renderEffect (   effectInstance,
  inputImageData,
  timestampMs,
  flags = 0,
  hostBufferInfoExtArray = [] 
)
inline

Render an effect on the input image.

Parameters
{NveEffectInstance}effectInstance Effect instance
{ImageData|VideoFrame}inputImageData Input image data
{Number}timestampMs Current rendering timestamp in millisecond
{Number}flags Flags
Returns
{Promise} Returns a Promise. On failure it rejects with an Error string. When the OutputImageBitmap flag is set, it resolves with {imageBitmap, spentTime}, where imageBitmap is the output image as an ImageBitmap object and spentTime is the time spent on the current rendering in milliseconds. Otherwise it resolves with {imageData, spentTime}, where imageData is the output image as an ImageData object and spentTime is the rendering time in milliseconds.
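
A hedged sketch; it assumes an effect instance created earlier (see createVideoEffect()), an inputImageData obtained by the host page (for example from a canvas 2D context), and a hypothetical CanvasRenderingContext2D named ctx for displaying the result:

    // Render one effect at timestamp 0 ms with the default flags.
    effectContext.renderEffect(blurFx, inputImageData, 0)
        .then(({ imageData, spentTime }) => {
            // With the default flags the result is an ImageData object.
            ctx.putImageData(imageData, 0, 0);
            console.log('Rendered in', spentTime, 'ms');
        })
        .catch((err) => console.error('Render failed:', err));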

◆ renderEffects()

NveEffectContext::renderEffects (   effectInstanceArray,
  inputImageData,
  timestampMs,
  flags = 0,
  hostBufferInfoExtArray = [],
  renderRect = {} 
)
inline

Render the specified array of effects on the input image.

Parameters
{NveEffectInstance[]}effectInstanceArray Effect instance array
{ImageData|VideoFrame}inputImageData Input image data
{Number}timestampMs Current rendering timestamp in millisecond
{Number}flags Flags
Returns
{Promise} Returns a Promise. On failure it rejects with an Error string. When the OutputImageBitmap flag is set, it resolves with {imageBitmap, spentTime}, where imageBitmap is the output image as an ImageBitmap object and spentTime is the time spent on the current rendering in milliseconds. Otherwise it resolves with {imageData, spentTime}, where imageData is the output image as an ImageData object and spentTime is the rendering time in milliseconds.
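
A hedged sketch applying several effects in one call; the effect instances and the input frame are assumed to have been created as in the earlier examples:

    // Apply multiple effects to the same input frame in a single render call.
    const effects = [blurFx, caption];
    effectContext.renderEffects(effects, inputImageData, 40)
        .then(({ imageData, spentTime }) => {
            console.log('Rendered', effects.length, 'effects in', spentTime, 'ms');
        })
        .catch((err) => console.error('Render failed:', err));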

◆ renderEffectsWithMultiInputs()

NveEffectContext::renderEffectsWithMultiInputs (   effectInstanceArray,
  inputImageDataArray,
  timestampMs,
  flags = 0,
  hostBufferInfoExtArray = [],
  renderRect = {} 
)
inline

Render the specified array of effects on multiple input images.

Parameters
{NveEffectInstance[]}effectInstanceArray Effect instance array
{ImageData[]|VideoFrame[]}inputImageDataArray Input image data array
{Number}timestampMs Current rendering timestamp in millisecond
{Number}flags Flags
Returns
{Promise} Returns a Promise. On failure it rejects with an Error string. When the OutputImageBitmap flag is set, it resolves with {imageBitmap, spentTime}, where imageBitmap is the output image as an ImageBitmap object and spentTime is the time spent on the current rendering in milliseconds. Otherwise it resolves with {imageData, spentTime}, where imageData is the output image as an ImageData object and spentTime is the rendering time in milliseconds.

◆ renderEffectWithMultiInputs()

NveEffectContext::renderEffectWithMultiInputs (   effectInstance,
  inputImageDataArray,
  timestampMs,
  flags = 0,
  hostBufferInfoExtArray = [] 
)
inline

Render an effect on multiple input images.

Parameters
{NveEffectInstance}effectInstance Effect instance
{ImageData[]|VideoFrame[]}inputImageDataArray Input image data array
{Number}timestampMs Current rendering timestamp in millisecond
{Number}flags Flags
Returns
{Promise} Returns a Promise. On failure it rejects with an Error string. When the OutputImageBitmap flag is set, it resolves with {imageBitmap, spentTime}, where imageBitmap is the output image as an ImageBitmap object and spentTime is the time spent on the current rendering in milliseconds. Otherwise it resolves with {imageData, spentTime}, where imageData is the output image as an ImageData object and spentTime is the rendering time in milliseconds.
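
A hedged sketch pairing this method with a video transition, which blends two input frames; the 'Fade' transition name and the two frame variables are assumptions:

    // A transition blends two clips, so it takes two input frames.
    const transition = effectContext.createVideoTransition('Fade', '16:9');
    effectContext.renderEffectWithMultiInputs(
        transition, [outgoingFrame, incomingFrame], 500)
        .then(({ imageData }) => {
            // imageData holds the blended frame (an ImageData object by default).
        })
        .catch((err) => console.error('Transition render failed:', err));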

◆ setMaxQueuedRenderTask()

NveEffectContext::setMaxQueuedRenderTask (   maxQueuedRenderTask)
inline

Set the maximum number of queued render tasks.

Parameters
{Number}maxQueuedRenderTask Maximum value. Effect render tasks are scheduled to an internal thread and executed asynchronously, so too many queued pending render tasks can consume excessive memory. This method sets a limit on the number of queued render tasks; once the current number of queued tasks reaches the limit, newly created render tasks are dropped unless the DontDropFrame render flag is set.
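
A hedged sketch; the limit of 4 is an arbitrary illustration:

    // Cap the number of pending render tasks to bound memory usage.
    // Once 4 tasks are queued, newly submitted render tasks will be dropped
    // (unless the DontDropFrame render flag is set on the render call).
    effectContext.setMaxQueuedRenderTask(4);
    console.log('Queue limit:', effectContext.getMaxQueuedRenderTask());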

◆ setupHumanDetectionData()

NveEffectContext::setupHumanDetectionData (   dataType,
  dataFilePath 
)
inline

Set up human detection data.

Parameters
{Number}dataType Data type of human detection, see NveHumanDetectionDataTypeEnum for details
{String}dataFilePath Data file path
Returns
{Boolean} true indicates success, false indicates failure.

◆ verifySdkLicenseFile()

NveEffectContext::verifySdkLicenseFile (   licenseFilePath)
inline

Verifies the SDK license file.

Parameters
{String}licenseFilePath License file path
Returns
{Boolean} true indicates that the authorization verification succeeded; false indicates that it failed. If verification fails, effects are ineffective.

◆ verifySdkLicenseFileUrl()

async NveEffectContext::verifySdkLicenseFileUrl (   licenseFileUrl)
inline

Verifies the SDK license file URL.

Parameters
{String}licenseFileUrl License file url
Returns
{Boolean} true indicates that the authorization verification succeeded; false indicates that it failed. If verification fails, effects are ineffective.
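
A hedged sketch; the URL is a placeholder, and the call is awaited because this method is async (run it inside an async function or a module):

    // Verify the license from a URL instead of a local file path.
    const licensed = await effectContext.verifySdkLicenseFileUrl(
        'https://example.com/meishe.lic'); // placeholder URL
    if (!licensed) {
        console.warn('License verification failed; effects will be ineffective.');
    }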
