In an earlier post, I wrote about collecting audio and video. Just as a human learns from experience, Big Bubba uses his senses to learn what happens when a traffic stop goes bad, which person in a group poses the greatest threat, and so on. That video can be seen here.
Big Bubba needs data to learn. Just as a child learns by experiencing an environment, Big Bubba learns how to prioritize threats. For example, if someone is wearing a vest packed with explosives, Big Bubba learns the steps to take to neutralize that threat.
But before any of that learning can take place, data needs to be collected. The data must fairly represent the scenarios Big Bubba will encounter, and it must be in a format that machine learning algorithms can process.
We use a package called Caffe2. To use the package and to leverage GPU processing on NVIDIA hardware, we need to get our data into a specific format. The following video shows one way in which we satisfy this requirement.
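For readers who want a concrete picture of what "a specific format" means, below is a minimal sketch assuming the common Caffe2 approach: serializing image/label pairs as TensorProtos records into an LMDB database that Caffe2's readers can consume. The function name, array shapes, and key scheme here are illustrative assumptions, not our exact pipeline; the video above shows what we actually do.

```python
import lmdb
import numpy as np
from caffe2.proto import caffe2_pb2


def write_lmdb(db_path, images, labels):
    """Serialize (image, label) pairs into an LMDB that Caffe2 can read.

    images: iterable of HxWxC uint8 numpy arrays (hypothetical input)
    labels: iterable of integer class labels (hypothetical input)
    """
    env = lmdb.open(db_path, map_size=1 << 40)  # generous max DB size in bytes
    with env.begin(write=True) as txn:
        for idx, (img, label) in enumerate(zip(images, labels)):
            protos = caffe2_pb2.TensorProtos()

            # Image tensor: HWC uint8 -> CHW float32, the layout Caffe2 ops expect.
            img_tensor = protos.protos.add()
            chw = img.transpose(2, 0, 1).astype(np.float32)
            img_tensor.dims.extend(chw.shape)
            img_tensor.data_type = caffe2_pb2.TensorProto.FLOAT
            img_tensor.float_data.extend(chw.flatten().tolist())

            # Label tensor: a single INT32 value per record.
            label_tensor = protos.protos.add()
            label_tensor.data_type = caffe2_pb2.TensorProto.INT32
            label_tensor.int32_data.append(int(label))

            # Keys only need to be unique; zero-padded indices keep them sorted.
            txn.put("{:08d}".format(idx).encode("ascii"), protos.SerializeToString())
    env.close()
```

Once a database like this exists, Caffe2 can stream batches from it during training (for example via its TensorProtosDBInput operator) and hand the tensors off to the GPU.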
Any questions? Call 603-505-6500.