
Tuesday, March 1, 2022

Experiment - merge / traffic networks

The goal of this project was to explore how to merge or fuse together real time video streams. It's about reducing a scene down to colors and movement and repurposing this into an abstract representation. Once that info has been stripped away, the scenes can be merged together. The first two networks are both a direct representation of the flow of traffic at different locations. The third network is an attempt to merge and fuse these two together.

It's a proof of concept of analysing and gathering info from live streams. To develop this further I would've done it with a camera where visitors can see themselves / their surroundings being analysed, taken apart and "merged" with other locations. As this is only a proof of concept, I chose to analyse already available live streams with a steady, continuous flow of movement. The live streams are taken from spatic.go.kr.
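A rough sketch of how grabbing frames from a stream like this could look in Python with OpenCV (the URL below is just a placeholder, the real spatic.go.kr endpoints differ per camera):

```
import cv2

# Placeholder URL -- the actual spatic.go.kr endpoints differ per camera
STREAM_URL = "https://example.com/seoul_cam_01/playlist.m3u8"

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open live stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; a real version would try to reconnect
    cv2.imshow("live stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```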

This experiment grabs two random street cams around Seoul. Traffic from each camera is analysed, and the color of each registered car is extracted and displayed as a small part of an abstract network. Traffic cameras are passive and run 24/7, so the experiment gives different results depending on the time of day. The third network represents the merge of the two scenes.

Analysis of scene -> breakdown of each scene -> merge of two scenes.

In the first two "networks" the x and y positions are chosen at random, while the z position is decided by the saturation of the color. In the third network the x position is controlled by brightness, the z position by saturation, and the y position is randomly chosen.
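In rough Python this mapping would look something like the following (positions are normalized to 0-1 here; how they get scaled to the scene is up to the renderer):

```
import colorsys
import random

def vertex_position(rgb, merged=False):
    """Map an extracted car color (r, g, b in 0-255) to a 3D vertex.

    First two networks: x, y random, z from saturation.
    Merged network:     x from brightness, y random, z from saturation.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # s = saturation, v = brightness
    if merged:
        return (v, random.random(), s)
    return (random.random(), random.random(), s)
```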

Issues
Originally I wanted to use traffic cameras / live streams from different parts of the world, e.g. Norway and Korea, but unfortunately I couldn't find any high quality streams from Norway. As a result the experiment uses different traffic cams around Seoul instead.

Unfortunately the tracker is not that accurate. It uses contour / difference tracking via OpenCV, where it looks for pixels that differ from a set background image. As a result it also picks up light differences such as the headlights of cars. I realised halfway through the project that a more effective / accurate approach would be to use a machine learning model with TensorFlow trained on specific scenes. If I had to redo this project I would've gone with that approach instead, or with Google's "Teachable Machine" project.
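The tracking itself boils down to something like this sketch (the threshold of 30 and the 400 px minimum contour area are illustrative guesses, not the values actually used):

```
import cv2

# The "set background image" that frames are compared against
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)

def detect_moving_objects(frame, min_area=400):
    """Contour / difference tracking: find pixels that differ from the background."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)
    _, thresh = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    thresh = cv2.dilate(thresh, None, iterations=2)  # close small gaps in blobs
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes of anything big enough -- headlights and shadows pass
    # this test too, which is exactly the accuracy problem described above
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```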

Another issue is that there is no limit to the number of vertices. Ideally it would be time based, where every x seconds a vertex is deleted from the network. This way the network would be more representative of the current traffic: when there is a red light, for example, most of the vertices would be deleted before the light turns green.
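A minimal sketch of that idea, where vertices older than a fixed ttl fall out of the network (the 10 second ttl is an arbitrary placeholder):

```
import time
from collections import deque

class Network:
    """Holds vertices that expire after `ttl` seconds."""

    def __init__(self, ttl=10.0):
        self.ttl = ttl
        self.vertices = deque()  # (timestamp, position, color), oldest first

    def add(self, position, color):
        self.vertices.append((time.time(), position, color))

    def prune(self):
        # Drop the oldest vertices once they pass the ttl, so the network
        # only ever reflects the most recent traffic
        cutoff = time.time() - self.ttl
        while self.vertices and self.vertices[0][0] < cutoff:
            self.vertices.popleft()
```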

Top left - original live stream, top right - greyscale copy used for thresholding, bottom left - background image, bottom right - threshold image used for tracking.
