Video Processing in AWS with Kinesis Video Streams

In this article, I'll explain the basics of cloud streaming and how Amazon Kinesis Video Streams can help handle real-time video content end-to-end.

By Eddie Segal · Aug. 12, 2021 · Tutorial


Video content is dominating the web and is increasingly demanded by users. In addition, organizations are generating video content from a variety of sources: mobile devices, surveillance cameras, and internet of things (IoT) sensors deployed in the field. Receiving all this data, storing it, and processing it to derive value is a huge technical challenge.

Luckily, cloud providers like Amazon have risen to the challenge. In this article, I’ll explain the basics of cloud streaming and how Amazon Kinesis Video Streams can help handle real-time video content end-to-end, from ingestion through processing and streaming to end-users.

What Is Cloud Video Streaming?

Historically, organizations wanting to stream video content to viewers had to set up a complex infrastructure, with dedicated hardware and software. This infrastructure had a high initial cost and was expensive to maintain, creating a high barrier to entry for the production of video content.

Today, cloud video platforms provide pre-configured, managed video streaming solutions. These cloud services typically involve the following elements:

  • Hardware is hosted in the cloud, scaled on-demand, and billed per actual use
  • Video content is uploaded to cloud storage
  • Video processing and streaming are performed by a cluster of servers dedicated to managing video content
  • Encoding and transcoding are fully automated, converting video inputs into formats that can be consumed by viewers

Now, many organizations are taking things one step further. Instead of just streaming a known set of video content to end-users, they are allowing users themselves to contribute video content. In other cases, audiovisual content comes from surveillance cameras, internet of things (IoT) devices, or even medical systems. All this content must be ingested in real time, analyzed to derive insights, categorized or tagged, and delivered to viewers if necessary. This is where Kinesis Video Streams comes in.

Amazon Kinesis Video Streams

This fully managed AWS service, part of the extensive AWS big data ecosystem, lets you stream video from any device directly to the cloud and build applications that process or analyze video content, either in real time or in batches. The service has two main elements:

  • Ingestion and storage for high volumes of real-time video data
  • Ability to access and manage video content to build custom applications

While AWS pricing is typically based on payment for compute instances, storage, and networking, Kinesis Video Streams is packaged as a platform as a service (PaaS) offering and charges a fixed rate per GB of video content ingested, consumed, or stored via the service.
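To make the per-GB model concrete, here is a minimal back-of-the-envelope estimator in Python. The rates below are placeholders invented for illustration, not actual AWS prices; consult the Kinesis Video Streams pricing page for current figures.

```python
def estimate_monthly_cost(gb_ingested: float, gb_consumed: float, gb_stored: float,
                          rate_ingest: float, rate_consume: float, rate_store: float) -> float:
    """Sum the three per-GB charges in the Kinesis Video Streams pricing model."""
    return (gb_ingested * rate_ingest
            + gb_consumed * rate_consume
            + gb_stored * rate_store)

# Hypothetical rates (USD per GB), purely for illustration:
print(estimate_monthly_cost(100, 250, 50,
                            rate_ingest=0.01, rate_consume=0.01, rate_store=0.02))  # 4.5
```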

Kinesis Video Streams can be used to:

  • Capture video data from mobile devices, security cameras, or internet of things (IoT) devices
  • Gain access to frame-by-frame video data for fast processing
  • Store video data for a configurable retention period
  • Add time indexing to video, enabling batch processing and ad hoc access to historical data

How the Kinesis Video Streams APIs Work

Kinesis Video Streams offers application programming interfaces (APIs) designed to help you create and manage video streams and write media data into them (the Producer API). The service also provides APIs dedicated to reading media data from your streams (the Consumer APIs). Let’s dive into each of these in a bit more detail.

Producer API 

To write media data to your Kinesis video streams, you use the PutMedia API. A PutMedia request tells the producer to send a stream of media fragments: self-contained frame sequences. Note that frames belonging to one fragment must not depend on frames in any other fragment.
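One practical wrinkle: PutMedia is not called on the regional endpoint directly. A producer first calls GetDataEndpoint to discover a stream-specific endpoint, then sends PutMedia there. Here is a minimal boto3 sketch; the stream name and region are assumptions, and most real producers use the Producer SDK rather than hand-rolling PutMedia.

```python
def put_media_endpoint_params(stream_name: str) -> dict:
    """Parameters for GetDataEndpoint when the goal is to call PutMedia."""
    return {"StreamName": stream_name, "APIName": "PUT_MEDIA"}

def discover_put_media_endpoint(stream_name: str, region: str = "us-west-2") -> str:
    import boto3  # assumes AWS credentials are configured in the environment
    client = boto3.client("kinesisvideo", region_name=region)
    return client.get_data_endpoint(**put_media_endpoint_params(stream_name))["DataEndpoint"]
```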

Consumer APIs 

Here are several APIs that enable data consumers to get information from video streams:

  • GetMedia—lets consumers get data by identifying a starting fragment. In response, the API returns fragments in the same order they were added to the video stream.

  • GetMediaForFragmentList and ListFragments—let offline consumers, such as batch processing applications, fetch specific media fragments or time ranges. An application first calls ListFragments to identify the fragments in a given fragment-number or timestamp range, then passes those fragments to GetMediaForFragmentList, fetching them sequentially or in parallel.

After creating a Kinesis video stream, you can send data to it. Producer libraries in your application code extract data from media sources and upload it to the stream.
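On the consumer side, GetMedia also sits behind an endpoint returned by GetDataEndpoint. A hedged boto3 sketch of the real-time read path follows; the stream name and region are assumptions.

```python
def get_media_params(stream_name: str) -> dict:
    """Start reading at the newest fragment; other selector types let you
    resume from a saved fragment number or a timestamp."""
    return {"StreamName": stream_name,
            "StartSelector": {"StartSelectorType": "NOW"}}

def read_stream(stream_name: str, region: str = "us-west-2"):
    import boto3  # assumes AWS credentials are configured in the environment
    kv = boto3.client("kinesisvideo", region_name=region)
    endpoint = kv.get_data_endpoint(StreamName=stream_name,
                                    APIName="GET_MEDIA")["DataEndpoint"]
    media = boto3.client("kinesis-video-media", endpoint_url=endpoint,
                         region_name=region)
    resp = media.get_media(**get_media_params(stream_name))
    return resp["Payload"]  # streaming body of MKV-formatted bytes
```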

Kinesis APIs vs Other Video APIs

As you ease into Kinesis, it might help to understand the differences and similarities between Kinesis Video Streams and other common APIs used with video content:

  • Kinesis is similar to the YouTube API and Facebook Live API, because it focuses on the ingestion of video content and does not directly perform actions on video content.
  • Kinesis is unlike the Watson API or the Cloudinary video API, which apply AI algorithms to automatically classify or transform video content.

Streaming Video With the Kinesis Streams API

Step 1: Get the IAM Access Key

Before you can access the Kinesis Streams API, you need to create an Identity and Access Management (IAM) user with the appropriate permissions:

  1. In your AWS account, select or create an IAM User with administrative permissions.
  2. Open the IAM console, select Users, click on the administrative user, and click the Security credentials tab.
  3. Click Create access key and save the value of the access key for the next steps.
  4. Under the Secret access key, click Show. Save the secret key securely for the next steps.
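The steps above use an administrative user for brevity; for anything beyond a quick experiment, a narrower IAM policy is preferable. A sketch along these lines (the action list and stream name are illustrative, not exhaustive):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesisvideo:CreateStream",
        "kinesisvideo:DescribeStream",
        "kinesisvideo:GetDataEndpoint",
        "kinesisvideo:PutMedia",
        "kinesisvideo:GetMedia",
        "kinesisvideo:ListFragments",
        "kinesisvideo:GetMediaForFragmentList"
      ],
      "Resource": "arn:aws:kinesisvideo:*:*:stream/MyKinesisVideoStream/*"
    }
  ]
}
```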

Step 2: Create a Video Stream

To create a video stream, use the AWS CLI to run the following command. 

Shell
 
$ aws kinesisvideo create-stream --stream-name "MyKinesisVideoStream" --data-retention-in-hours "24"


The --stream-name flag defines the name of the stream, which you can later use via the API. The --data-retention-in-hours flag defines how long Kinesis Video Streams should store the video data.
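The same call is available programmatically. A boto3 sketch equivalent to the CLI command above (the region is an assumption):

```python
def create_stream_params(stream_name: str, retention_hours: int) -> dict:
    """Mirrors the CLI flags --stream-name and --data-retention-in-hours."""
    return {"StreamName": stream_name, "DataRetentionInHours": retention_hours}

def create_stream(stream_name: str, retention_hours: int = 24,
                  region: str = "us-west-2") -> str:
    import boto3  # assumes AWS credentials are configured in the environment
    client = boto3.client("kinesisvideo", region_name=region)
    resp = client.create_stream(**create_stream_params(stream_name, retention_hours))
    return resp["StreamARN"]
```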

Step 3: Construct GStreamer Media Pipeline

GStreamer is a media framework used by many types of cameras and video sources. AWS provides a GStreamer plugin for Kinesis, which makes it easy to stream video from a webcam or other camera using Real-Time Streaming Protocol (RTSP) to Kinesis Video Streams. Typically, video streamed from GStreamer is encoded using H.264. 

This section shows how to construct a GStreamer media pipeline that takes video from a source, such as a web camera or an RTSP stream, passes it through intermediate H.264 encoding stages, and delivers it to Kinesis Video Streams.

To construct the media pipeline on your local machine:

  1. Download and install the Amazon Kinesis Video Streams C++ producer library, which includes the GStreamer plugin. It supports macOS, Ubuntu, Windows, and Raspberry Pi.
  2. To load the GStreamer plugin, run this command:

export GST_PLUGIN_PATH=`pwd`/build        

  3. Run GStreamer using the gst-launch-1.0 command (see the GStreamer documentation for options). Specify the Kinesis Video Streams Producer SDK as your “sink”, meaning that video output should be sent there. The sink element is kvssink. It has the following required parameters:

  • stream-name—the name of the video stream you created in Step 2 above.

  • storage-size—the device storage size in KB.

  • access-key—the IAM access key you obtained in Step 1.

  • secret-key—the IAM secret key you obtained in Step 1.

  • credential-path—the path to a file containing your IAM credentials. You can supply credential-path instead of hard-coding access-key and secret-key; do not use both.

Step 4: Stream Video

Finally, the fun part! We now get to stream video from a camera on the local machine directly to Kinesis Video Streams.

On Ubuntu, GStreamer commands can create media pipelines that do any of the following:

  • Stream video from a network RTSP camera, encoding it in H.264 format
  • Stream video from a local USB camera, encoding it in H.264 format
  • Stream pre-encoded H.264 video from a USB camera

Commands for other operating systems and encoding patterns are available in the AWS Kinesis Video Streams documentation.
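Because the exact commands vary by operating system and camera, here is a sketch of the general shape of such a pipeline, assembled as a string in Python for readability. The device path, encoder settings, and storage size are assumptions for an Ubuntu machine with a V4L2 USB camera; substitute your own stream name and credentials.

```python
def build_kvssink_pipeline(stream_name: str, access_key: str, secret_key: str,
                           device: str = "/dev/video0") -> str:
    """Assemble a gst-launch-1.0 command that captures raw video from a USB
    camera, encodes it to H.264, and hands it to the kvssink element."""
    elements = [
        f"v4l2src device={device}",                            # capture from the camera
        "videoconvert",                                        # normalize the pixel format
        "x264enc bframes=0 key-int-max=45 tune=zerolatency",   # encode to H.264
        "video/x-h264,stream-format=avc,alignment=au",         # caps expected by kvssink
        (f'kvssink stream-name="{stream_name}" storage-size=512 '
         f'access-key="{access_key}" secret-key="{secret_key}"'),
    ]
    return "gst-launch-1.0 " + " ! ".join(elements)

print(build_kvssink_pipeline("MyKinesisVideoStream", "YOUR_ACCESS_KEY", "YOUR_SECRET_KEY"))
```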

Step 5: Post-Processing

Once your video stream is available in Kinesis, you can perform a range of operations on the video content, including processing, storage, playback, and analysis, using the Kinesis Video Streams Parser Library. The library provides the following tools:

  • StreamingMkvReader—reads MKV elements from Kinesis video streams.
  • FragmentMetadataVisitor—retrieves metadata from video fragments and tracks.
  • OutputSegmentMerger—merges fragments in a video stream. 
  • KinesisVideoExample—a sample application that shows you how to use the library.

A detailed discussion of video post-processing with Kinesis Video Streams is beyond our scope—learn more about using these components in the documentation.

Conclusion

In this article I covered the basics of Kinesis Video Streams, explained the role of the Producer and Consumer APIs, and showed how to ingest, stream, and post-process video content in five steps:

  1. Set up IAM access: created an IAM access key and secret key
  2. Create a video stream: used the AWS CLI to create a video stream entity, providing your IAM credentials
  3. Construct a GStreamer media pipeline: used the popular, open source GStreamer framework to generate video content on an end user device and ingest it into Kinesis
  4. Stream video: ran a GStreamer pipeline to send video from a local camera directly to Kinesis Video Streams
  5. Post-processing: retrieved video fragments from Kinesis and analyzed them using custom code or third-party applications

I hope this article helps you take your ability to manage and process video streams to the next level.


