AWSRekognitionGetFaceDetectionResponse Class Reference

Inherits from AWSModel : AWSMTLModel
Declared in AWSRekognitionModel.h
AWSRekognitionModel.m

  faces

An array of faces detected in the video. Each element contains details about a detected face and the time, in milliseconds from the start of the video, at which the face was detected.

@property (nonatomic, strong) NSArray<AWSRekognitionFaceDetection*> *faces

Declared In

AWSRekognitionModel.h

  jobStatus

The current status of the face detection job.

@property (nonatomic, assign) AWSRekognitionVideoJobStatus jobStatus

Declared In

AWSRekognitionModel.h

  nextToken

If the response is truncated, Amazon Rekognition Video returns this token, which you can use in a subsequent request to retrieve the next set of faces.

@property (nonatomic, strong) NSString *nextToken

Declared In

AWSRekognitionModel.h

  statusMessage

If the job fails, StatusMessage provides a descriptive error message.

@property (nonatomic, strong) NSString *statusMessage

Declared In

AWSRekognitionModel.h

  videoMetadata

Information about a video that Amazon Rekognition Video analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

@property (nonatomic, strong) AWSRekognitionVideoMetadata *videoMetadata

Declared In

AWSRekognitionModel.h
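
Usage

The properties above arrive together in each page of a GetFaceDetection response. A minimal sketch of paging through results with the AWS SDK for iOS is shown below; it assumes the default service client is already configured with credentials and that `jobId` came from a prior StartFaceDetection call. The recursive paging helper is illustrative, not part of the SDK.

```objc
#import <AWSRekognition/AWSRekognition.h>

- (void)fetchFacesForJob:(NSString *)jobId nextToken:(nullable NSString *)nextToken {
    AWSRekognition *rekognition = [AWSRekognition defaultRekognition];

    AWSRekognitionGetFaceDetectionRequest *request = [AWSRekognitionGetFaceDetectionRequest new];
    request.jobId = jobId;            // returned by StartFaceDetection
    request.maxResults = @100;
    request.nextToken = nextToken;    // nil on the first call

    [[rekognition getFaceDetection:request]
        continueWithBlock:^id (AWSTask<AWSRekognitionGetFaceDetectionResponse *> *task) {
        AWSRekognitionGetFaceDetectionResponse *response = task.result;
        if (task.error != nil || response.jobStatus == AWSRekognitionVideoJobStatusFailed) {
            // statusMessage carries a descriptive error when the job fails.
            NSLog(@"Job failed: %@", response.statusMessage ?: task.error.localizedDescription);
            return nil;
        }
        // Each element pairs a face's details with its millisecond offset into the video.
        for (AWSRekognitionFaceDetection *detection in response.faces) {
            NSLog(@"Face at %@ ms, confidence %@",
                  detection.timestamp, detection.face.confidence);
        }
        // A non-nil nextToken means the response was truncated; fetch the next page.
        if (response.nextToken != nil) {
            [self fetchFacesForJob:jobId nextToken:response.nextToken];
        }
        return nil;
    }];
}
```

Note that videoMetadata is present on every page, so it can be read from whichever page is convenient rather than accumulated across requests.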