AWSRekognitionGetFaceSearchResponse Class Reference

Inherits from AWSModel : AWSMTLModel
Declared in AWSRekognitionModel.h
AWSRekognitionModel.m

  jobStatus

The current status of the face search job.

@property (nonatomic, assign) AWSRekognitionVideoJobStatus jobStatus

Declared In

AWSRekognitionModel.h
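
A minimal sketch of inspecting this property, combining it with statusMessage below. The enum case names (AWSRekognitionVideoJobStatusSucceeded, ...Failed) follow the SDK's usual prefixing convention but are assumptions here; verify them against the generated header.

```objectivec
#import <AWSRekognition/AWSRekognition.h>

// Sketch: branch on the job status of a GetFaceSearch response.
// Enum case names are assumed from SDK naming conventions.
void handleFaceSearchResponse(AWSRekognitionGetFaceSearchResponse *response) {
    switch (response.jobStatus) {
        case AWSRekognitionVideoJobStatusSucceeded:
            NSLog(@"Face search finished with %lu person match(es).",
                  (unsigned long)response.persons.count);
            break;
        case AWSRekognitionVideoJobStatusFailed:
            // statusMessage carries a descriptive error when the job fails.
            NSLog(@"Face search failed: %@", response.statusMessage);
            break;
        default:
            NSLog(@"Face search not finished yet.");
            break;
    }
}
```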

  nextToken

If the response is truncated, Amazon Rekognition Video returns this token, which you can use in a subsequent request to retrieve the next set of search results.

@property (nonatomic, strong) NSString *nextToken

Declared In

AWSRekognitionModel.h
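
A pagination sketch, assuming the companion request type and client method (AWSRekognitionGetFaceSearchRequest, -getFaceSearch:, and a hypothetical jobId value) follow the SDK's standard naming; check the generated headers before relying on them.

```objectivec
#import <AWSRekognition/AWSRekognition.h>

// Sketch: page through face search results by feeding nextToken back
// into the next request. Request/method names are assumptions.
void fetchNextPage(AWSRekognition *rekognition, NSString *jobId, NSString *token) {
    AWSRekognitionGetFaceSearchRequest *request = [AWSRekognitionGetFaceSearchRequest new];
    request.jobId = jobId;
    request.nextToken = token; // nil on the first call

    [[rekognition getFaceSearch:request]
        continueWithBlock:^id(AWSTask<AWSRekognitionGetFaceSearchResponse *> *task) {
            AWSRekognitionGetFaceSearchResponse *response = task.result;
            // ...process this page of response.persons...
            if (response.nextToken != nil) {
                // More results remain; repeat with the returned token.
                fetchNextPage(rekognition, jobId, response.nextToken);
            }
            return nil;
        }];
}
```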

  persons

An array of person matches (PersonMatch) in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes a time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.

@property (nonatomic, strong) NSArray<AWSRekognitionPersonMatch*> *persons

Declared In

AWSRekognitionModel.h
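
A sketch of walking this array. The PersonMatch sub-properties used here (timestamp, person, faceMatches) mirror the FaceMatches/Person fields described above, but treat the exact property names as assumptions to confirm in AWSRekognitionModel.h.

```objectivec
// Sketch: iterate the matched persons; the timestamp is the time,
// in milliseconds from the start of the video, the person was matched.
// Property names are assumed from the field names in the docs above.
for (AWSRekognitionPersonMatch *match in response.persons) {
    NSLog(@"Person %@ matched at %@ ms against %lu collection face(s).",
          match.person.index,
          match.timestamp,
          (unsigned long)match.faceMatches.count);
}
```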

  statusMessage

If the job fails, StatusMessage provides a descriptive error message.

@property (nonatomic, strong) NSString *statusMessage

Declared In

AWSRekognitionModel.h

  videoMetadata

Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

@property (nonatomic, strong) AWSRekognitionVideoMetadata *videoMetadata

Declared In

AWSRekognitionModel.h
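
A sketch of reading the metadata. The AWSRekognitionVideoMetadata field names used here (durationMillis, frameRate, frameWidth, frameHeight) are assumptions modeled on the service's VideoMetadata shape; confirm them in the generated model header.

```objectivec
// Sketch: log basic properties of the analyzed video.
// Field names on AWSRekognitionVideoMetadata are assumed.
AWSRekognitionVideoMetadata *meta = response.videoMetadata;
NSLog(@"Video: %@x%@ px, %@ fps, %@ ms long.",
      meta.frameWidth, meta.frameHeight,
      meta.frameRate, meta.durationMillis);
```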