This use case describes the processing of video data for tasks such as fish counting, anomaly detection and summarization, dehazing, and other image processing techniques that can provide insight into the undersea environment.
The project files and source code are stored in a GitHub repository, which will be made public when it is ready.
This project provides two Python 3 use cases for video and image processing with the ONC Oceans 3.0 API and Sandbox. An Oceans 3.0 account (free) is required to obtain an API token to run the examples. The token can be obtained by creating an account and accessing the Web Services API tab. Update the params.json file(s) with your token to run the use cases.
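As a reference, here is a minimal sketch of reading the token out of params.json before calling the API. The key names ("token", "locationCode", "deviceCategoryCode") are assumptions for illustration, not necessarily the repository's actual schema:

```python
import json

# Hypothetical params.json layout -- field names are illustrative only.
EXAMPLE_PARAMS = """
{
  "token": "YOUR-ONC-API-TOKEN",
  "locationCode": "BACAX",
  "deviceCategoryCode": "VIDEOCAM"
}
"""

def load_params(text):
    """Parse the params JSON and confirm a real token has been filled in."""
    params = json.loads(text)
    token = params.get("token", "")
    if not token or token.startswith("YOUR"):
        raise ValueError("Set your Oceans 3.0 API token in params.json")
    return params
```

A script would call `load_params(open("params.json").read())` once at startup and fail fast if the placeholder token was never replaced.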
ONC has video and camera feeds located above and below water. Images captured below water often suffer from suboptimal viewing conditions due to poor lighting, sediment, mechanical malfunctions, etc. This makes it difficult for ONC scientists to analyze the images when evaluating biodiversity and other properties of the undersea environment.
Dehazing is the process of removing haze from an image so that objects that are difficult to see become visible.
The dehazing algorithm is in the main script dehaze_script.py. The tunable parameters for a) connecting to the API, b) searching the API, and c) the dehaze algorithm are contained in params.json.
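The text does not say which dehazing method dehaze_script.py implements; a common choice for single-image dehazing is the dark channel prior (He et al.), sketched below with NumPy. The parameter names (`omega`, `patch`, `t_min`) and the simplified transmission estimate are assumptions, not the script's actual code:

```python
import numpy as np

def dehaze(img, omega=0.95, patch=15, t_min=0.1):
    """Dark-channel-prior style dehazing sketch.

    img: float RGB image in [0, 1], shape (H, W, 3).
    Returns the recovered scene radiance, clipped to [0, 1].
    """
    h, w, _ = img.shape
    # Dark channel: per-pixel minimum over color channels,
    # then a local minimum filter over a patch x patch window.
    dark = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dark, pad, mode="edge")
    dc = np.stack([padded[i:i + h, j:j + w]
                   for i in range(patch)
                   for j in range(patch)]).min(axis=0)
    # Atmospheric light A: mean color of the brightest 0.1% of
    # dark-channel pixels (at least one pixel).
    n = max(1, int(0.001 * h * w))
    idx = np.argsort(dc.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Simplified transmission estimate (omits the second local-min
    # filter of the full method), clipped away from zero.
    t = 1.0 - omega * (img / A).min(axis=2)
    t = np.clip(t, t_min, 1.0)[..., None]
    # Recover radiance: J = (I - A) / t + A.
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

The `t_min` floor prevents division blow-ups in dense haze, which is also where the chromatic shifts mentioned below tend to appear.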
Params
Workflow
Parameter values are normalized to the range [-1, 1]; refine defaults to 0.
Note: The chromatic values are often adjusted by this process, so the original colors are not preserved exactly.
ONC deploys video feeds at multiple locations and has accumulated terabytes of video data. However, not all of this data is useful, as it may have poor visibility or no events of interest, and looking through months or years of footage is time-consuming. Storage is also a problem: a single day of recording can produce 3 GB of video. One way to handle this automatically is to use video summarization techniques to preprocess the data, removing uninteresting frames and keeping those that might be of interest to ONC scientists.
The algorithm used is by Dash &amp; Albu. It models the background using a Gaussian Mixture Model (GMM). This produces a lot of noise in underwater videos because of currents and particulates in the water. To offset this, per-pixel activation and adjustment functions were added. When a certain number of pixels are activated, the sampled frames are added to the output summarized video. In most cases, the video is reduced by roughly 50%.
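The selection logic described above can be sketched in a few lines. This toy version substitutes a running-mean background model for the GMM of Dash &amp; Albu, and the parameter names and default values (`bg_alpha`, `diff_thresh`, `act_frac`) are illustrative, not taken from the project:

```python
import numpy as np

def summarize(frames, bg_alpha=0.05, diff_thresh=0.1, act_frac=0.02):
    """Activity-based frame selection sketch.

    frames: sequence of grayscale frames with values in [0, 1].
    Returns the indices of frames kept for the summary.
    """
    kept = []
    bg = frames[0].astype(float)  # initialize background from first frame
    for i, frame in enumerate(frames):
        f = frame.astype(float)
        # Per-pixel activation: pixel differs enough from the background.
        active = np.abs(f - bg) > diff_thresh
        # Keep the frame when enough of the image is active.
        if active.mean() > act_frac:
            kept.append(i)
        # Slowly adapt the background toward the current frame.
        bg = (1 - bg_alpha) * bg + bg_alpha * f
    return kept
```

The `act_frac` threshold is what suppresses the per-pixel noise from currents and particulates: scattered single-pixel activations never push the active fraction over the bar, while a moving object does.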
Params
Workflow
Parameter values lie in the range [0, 1].