“Don’t worry when you are not recognized, but strive to be worthy of recognition.”
I have been at this programming thing for more than four decades and still find things to get wildly excited about. For better or worse, my recent excitement about AI image recognition has bordered on obsession. It began with my work on multimedia text messages (MMS). At first, I was happy creating applications that sent and received images, but that quickly evolved into wanting to programmatically analyze those images. Once I had a decent recognition platform in place, I wanted something that took action when particular images were encountered. I am now ready to take what I learned from working with MMS into the world of real-time image recognition, analysis, and workflow processing.
Long-time followers of my blog will remember previous posts where I shared my adventures in IoT (Internet of Things) technology. I used telemetry data from various sensors to drive different workflows. For example, I created a ServiceNow work item when the temperature of a blood storage system rose above 6 degrees Celsius. The work item would subsequently be used to schedule a technician to investigate and correct any problems.
It isn’t a stretch to think of a camera as a sensor and its images as telemetry data. Seeing a gun in an image can be just as effective at stopping a shooting as hearing a gunshot.
Of course, cameras do not know what a gun is. They simply gather light and create digital approximations of that light. For any kind of understanding, you need to combine cameras with AI analysis. That analysis would then feed into a workflow that takes the necessary actions — create a work item, send a text or email, make a call, sound an alarm, open a gate, etc.
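To make the analysis-to-workflow idea concrete, here is a minimal sketch of how detections might be mapped to actions. The labels, action names, and confidence threshold are all hypothetical placeholders, not part of any real system I describe above:

```python
# Hypothetical rule table mapping detected labels to workflow actions.
# Every label and action name here is illustrative only.
ACTIONS = {
    "gun": ["sound_alarm", "notify_security"],
    "fallen_person": ["create_work_item", "call_nurse_station"],
    "unauthorized_person": ["close_gate", "send_text"],
}

def actions_for(detections, min_confidence=0.8):
    """Return the workflow actions triggered by a list of
    (label, confidence) detections from the AI analysis."""
    triggered = []
    for label, confidence in detections:
        if confidence >= min_confidence:
            triggered.extend(ACTIONS.get(label, []))
    return triggered
```

A rule table like this keeps the camera/AI side decoupled from the workflow side: swapping in a new action (say, a ServiceNow integration) is just another entry in the table.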
To take these thoughts out of my head and into something demonstrable, I built a test system that used my PC’s webcam as the input stream into an AI-driven application. For the ingress data stream, I configured the Windows camera application to take a picture every two seconds. Those pictures are then intercepted and analyzed by my AI application in near real-time. Instead of simply dumping the output to the application console, I send the raw data as a chat message to an Avaya Spaces room. As I explain in the video below, this room could be used for dynamic team formation to handle a specific issue. In the near future, I plan on building an integration into ServiceNow. For now, though, understanding how the AI engine interprets what the camera sees is good enough.
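The capture-and-analyze loop could be sketched roughly as follows. This is not my actual application code; the `analyze` and `post_to_room` functions are stand-ins for the AI call and the Avaya Spaces message, and the folder-polling approach simply mirrors the two-second capture cadence described above:

```python
import time
from pathlib import Path

def new_images(directory, seen, extensions=(".jpg", ".png")):
    """Return image files in `directory` that have not been processed yet."""
    found = []
    for path in sorted(Path(directory).glob("*")):
        if path.suffix.lower() in extensions and path.name not in seen:
            seen.add(path.name)
            found.append(path)
    return found

def analyze(image_path):
    """Placeholder for the AI recognition call -- swap in a real vision service."""
    return {"image": image_path.name, "labels": ["person"], "confidence": 0.9}

def post_to_room(result):
    """Placeholder for the chat integration (e.g., an Avaya Spaces message)."""
    print(f"[room] {result['image']}: {result['labels']} ({result['confidence']:.0%})")

def watch(directory, interval=2.0, cycles=None):
    """Poll the capture folder on the camera's two-second cadence."""
    seen = set()
    count = 0
    while cycles is None or count < cycles:
        for image in new_images(directory, seen):
            post_to_room(analyze(image))
        time.sleep(interval)
        count += 1
```

The `seen` set is what makes the loop near real-time rather than reprocessing everything: each poll only forwards pictures that appeared since the last pass.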
To see a prototype of my application’s code in action, check out this Cheapo-Cheapo Productions video.
The possibilities for this technology are endless. Schools and businesses could use it to spot guns before they are fired. It could be applied to healthcare to recognize that a patient has fallen. Security can be enhanced by catching unauthorized people in secure areas. There is no shortage of ideas.