Ameba-Pro KVS Getting Started Guide
Getting Started Guide All information provided in this document is subject to legal disclaimers. © REALTEK 2020. All rights reserved.
8.5 Get the Result from Kinesis Data Stream Locally
Rekognition can now analyze the video stream from AmebaPro and put the results of its analysis onto a Kinesis Data Stream. For each frame it
analyzes, Rekognition Video may find many faces, and each face may have many potential matches. This information is detailed in JSON
documents that Rekognition Video puts onto the Kinesis Data Stream.
Get records from the data stream in Python (the boto3 Kinesis client is created here so the snippet is self-contained):
import boto3

kinesis_client = boto3.client('kinesis')

# Start reading from the latest record of the stream's first shard
shard_iterator_response = kinesis_client.get_shard_iterator(
    StreamName=KinesisDataStream_name,
    ShardId='shardId-000000000000',
    ShardIteratorType='LATEST'
)
get_response = kinesis_client.get_records(
    ShardIterator=shard_iterator_response['ShardIterator']
)
8.6 Display Video Frames Rendered with Bounding Boxes
The Rekognition results provide the position of the bounding box for each face detected in the frame. If there are many faces in the video, you
can display all the detection results returned by the get_records() API.
The scenario of our demo is as follows:
- AmebaPro monitors the PC's left screen; the sample photos are generated entirely by AI
- AmebaPro puts the video to KVS
- The KVS stream is fed as input to Rekognition
- Rekognition's results are sent to the Kinesis Data Stream
- A Python script runs locally to get the video frames from KVS and the Rekognition results from the Data Stream (Python packages: AWS Boto3 + OpenCV)
- The video frames are displayed rendered with bounding boxes
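The last step can be sketched as follows. Rekognition reports each BoundingBox as fractions of the frame size (Left, Top, Width, Height), so the values must be scaled to pixel coordinates before drawing. The helper box_to_pixels is our illustrative name, and the OpenCV calls are shown commented since they need a decoded frame:

```python
def box_to_pixels(box, frame_w, frame_h):
    """Convert a fractional Rekognition BoundingBox to pixel corner coordinates."""
    x1 = int(box['Left'] * frame_w)
    y1 = int(box['Top'] * frame_h)
    x2 = int((box['Left'] + box['Width']) * frame_w)
    y2 = int((box['Top'] + box['Height']) * frame_h)
    return x1, y1, x2, y2

# Drawing on a frame decoded with OpenCV:
# import cv2
# x1, y1, x2, y2 = box_to_pixels(box, frame.shape[1], frame.shape[0])
# cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
# cv2.imshow('AmebaPro', frame)
```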