With Meraki MV Cameras, Seeing is Believing

Everything has beauty, but not everyone sees it.

Confucius

The best part of my job is when I get to explore different technologies to decide what is worth pursuing and what is best left ignored.  While playing with the former is more gratifying, it can also be quite fulfilling the moment I decide that I’ve done enough exploration and it’s time to set my work aside for something more promising.

I was recently asked to spend some time working with Meraki MV cameras.  Cisco makes and sells a number of different camera models, but they all essentially do the same three things – object detection, classification (currently person or vehicle), and tracking.

Using the Meraki dashboard, an administrator can divide a camera’s view into separate zones.  For example, a store might use a camera to create separate zones for the store’s entrance, exit, and cash register.  The MV camera will then report on how many people are in each zone and how long they’ve been there.

It is important to know that people detection is not facial recognition.  The camera does not know who you are.  It simply knows that you are a human being.  The same goes for vehicle detection.  A Meraki camera can recognize that an object is a generic car, but it cannot tell you the car’s make, model, license plate number, etc.

You can see a very informative explanation and demonstration of MV cameras in action here.

Meraki APIs

If you have been reading my blog for even the tiniest amount of time, you know that I love open systems and RESTful APIs (Application Programming Interfaces).  Thankfully, Meraki cameras provide a plethora of APIs for me to play with.  I can do everything from examining configuration settings to capturing live and snapshot views and retrieving analytics reports.

For instance, the MV Sense “Get Device Camera Analytics Zone History” API returns the historical records for a particular zone.  If I run it against my camera, I receive a JSON array of analytics objects.  Each object represents one minute of viewing.
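As a sketch of what that call looks like in code (the endpoint path and API-key header follow the Meraki v0 dashboard API; the serial number, zone ID, and key below are placeholders, not real values):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ZoneHistory {

    // Build a GET request for the "Get Device Camera Analytics Zone History"
    // endpoint. The serial, zone ID, and API key are placeholders you would
    // replace with your own.
    static HttpRequest buildRequest(String serial, String zoneId, String apiKey) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://api.meraki.com/api/v0/devices/" + serial
                        + "/camera/analytics/zones/" + zoneId + "/history"))
                .header("X-Cisco-Meraki-API-Key", apiKey)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildRequest("Q2XX-1234-ABCD", "1", "my-api-key");
        // Sending this with HttpClient.newHttpClient().send(...) returns the
        // JSON array of per-minute analytics objects.
        System.out.println(request.uri());
    }
}
```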

Note: For simplicity’s sake, I truncated the number of returned objects.

[
  {
    "startTs": "2019-10-29T17:32:45.933Z",
    "endTs": "2019-10-29T17:33:45.933Z",
    "averageCount": 1.029,
    "entrances": 32
  },
  {
    "startTs": "2019-10-29T17:33:45.986Z",
    "endTs": "2019-10-29T17:34:45.986Z",
    "averageCount": 1.494,
    "entrances": 32
  },
  {
    "startTs": "2019-10-29T17:34:46.052Z",
    "endTs": "2019-10-29T17:35:46.052Z",
    "averageCount": 2.184,
    "entrances": 40
  },
  {
    "startTs": "2019-10-29T17:35:46.078Z",
    "endTs": "2019-10-29T17:36:46.078Z",
    "averageCount": 1.657,
    "entrances": 29
  },
  {
    "startTs": "2019-10-29T17:36:46.191Z",
    "endTs": "2019-10-29T17:37:46.191Z",
    "averageCount": 1.225,
    "entrances": 16
  },
  {
    "startTs": "2019-10-29T17:37:46.242Z",
    "endTs": "2019-10-29T17:38:46.242Z",
    "averageCount": 1.297,
    "entrances": 22
  },
  {
    "startTs": "2019-10-29T17:38:46.292Z",
    "endTs": "2019-10-29T17:39:46.292Z",
    "averageCount": 1.066,
    "entrances": 13
  }
]

The averageCount field is defined as:

“How many people were in view of the camera on average over that minute (sampled at 5fps). For example, if one person stood there for 30 seconds, the result would be 0.5.”
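That definition is just the mean of the per-sample people counts: at 5 fps the camera takes 300 samples per minute, and averageCount is the sum of those counts divided by 300.  A quick sketch of the arithmetic:

```java
public class AverageCount {

    // averageCount is the mean number of people across all samples in the
    // minute. At 5 fps, one minute yields 300 samples.
    static double averageCount(int[] peoplePerSample) {
        int sum = 0;
        for (int count : peoplePerSample) {
            sum += count;
        }
        return (double) sum / peoplePerSample.length;
    }

    public static void main(String[] args) {
        // One person in view for 30 of 60 seconds: 150 samples of 1, 150 of 0.
        int[] samples = new int[300];
        for (int i = 0; i < 150; i++) {
            samples[i] = 1;
        }
        System.out.println(averageCount(samples)); // 0.5, matching the definition
    }
}
```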

MQTT

While APIs are important, they are essentially a way to poll for data.  More exciting to me is that Meraki supports an MQTT (MQ Telemetry Transport) interface that allows me to capture a real-time view of the data.

If you aren’t familiar with MQTT, the basic concept is quite simple.  A device that wants to publish information (e.g. camera activity) sends data, tagged with a topic, to an MQTT broker.  Client applications that wish to receive that data subscribe to the topics they are interested in seeing.  Pictorially, it looks like this:

In my case, I configured my Meraki camera to use a Mosquitto broker.  To make life simple, I used the cloud broker that Eclipse provides.  It can be found at:  tcp://test.mosquitto.org:1883

I then wrote a Java MQTT client that connected to the broker and subscribed to all the topics Meraki cameras published.  For MQTT functionality, I used the Eclipse Paho library.

The code looks like this:

 

// Connect to the broker with a generated client id, register the callback,
// and subscribe to every topic my camera publishes.
client = new MqttAsyncClient("tcp://test.mosquitto.org:1883", MqttClient.generateClientId(), null);
client.setCallback(new SimpleMqttCallback());
IMqttToken token = client.connect();
token.waitForCompletion();
client.subscribe("/merakimv/<my camera's serial number>/#", 0);

The SimpleMqttCallback object implements MqttCallback (org.eclipse.paho.client.mqttv3.MqttCallback) and supports three required methods:

  • messageArrived()
  • deliveryComplete()
  • connectionLost()

The work of the callback is done in messageArrived().  This method is invoked every time the broker receives topic data from my Meraki camera.  Specifically, it is invoked for these four topics:

  1. Number of people entrances in the camera’s complete field of view
  2. Number of people entrances in a specific zone
  3. Raw detection, a list containing person identifiers (oids) and their x and y coordinates
  4. Lux light values
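Inside messageArrived(), the first job is figuring out which of those topics just arrived.  The camera’s topics all look like /merakimv/&lt;serial&gt;/&lt;subtopic&gt;, so the last path segment tells you what you got.  A minimal sketch of that dispatch logic (the subtopic names are my reading of what my camera publishes and may vary; “0” is the zone covering the complete field of view):

```java
public class TopicDispatch {

    // Classify an incoming Meraki MQTT message by the last segment of its
    // topic. Subtopic names here are illustrative of what my camera sends.
    static String classify(String topic) {
        String subtopic = topic.substring(topic.lastIndexOf('/') + 1);
        switch (subtopic) {
            case "raw_detections":
                return "raw detection";       // person oids and coordinates
            case "light":
                return "lux";                 // ambient light level
            case "0":
                return "full-view entrances"; // zone 0 = complete field of view
            default:
                return "zone entrances";      // a configured zone id
        }
    }

    public static void main(String[] args) {
        System.out.println(classify("/merakimv/Q2XX-1234-ABCD/raw_detections"));
    }
}
```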

Of these four topics, I am most interested in raw detection.  This topic informs me of how many people my camera sees and what their coordinates are.
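The raw detection payload is a small JSON document; on my camera it looked roughly like {"ts": ..., "objects": [{"oid": 101, "x0": ..., "y0": ..., "x1": ..., "y1": ...}, ...]}, though the exact field names may differ by firmware version.  A quick-and-dirty way to count the people in a message, without pulling in a JSON library, is to count the oid keys:

```java
public class RawDetections {

    // Count detected objects in a raw-detection payload by counting
    // occurrences of the "oid" key. This is a sketch, not a JSON parser;
    // the payload shape is illustrative and may vary by firmware.
    static int countDetections(String payload) {
        int count = 0;
        int index = payload.indexOf("\"oid\"");
        while (index >= 0) {
            count++;
            index = payload.indexOf("\"oid\"", index + 1);
        }
        return count;
    }

    public static void main(String[] args) {
        String payload = "{\"ts\":1572370365933,\"objects\":["
                + "{\"oid\":101,\"x0\":0.32,\"y0\":0.41,\"x1\":0.38,\"y1\":0.77},"
                + "{\"oid\":102,\"x0\":0.61,\"y0\":0.40,\"x1\":0.66,\"y1\":0.79}]}";
        System.out.println(countDetections(payload)); // 2 people in view
    }
}
```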

The Fun Stuff

Creating an MQTT client and watching data flow into it is fun, but it’s mostly geeky fun.  To make my code more practical, I need to massage that data and bring it to the attention of not-so-nerdy people.  For that, I wrote a bot for WebEx Teams that can relay information from the client into a Teams room.  I can now use WebEx Teams to report on what the camera sees and doesn’t see:

Note how my bot reports when it sees human motion and when it does not.  At present, the bot is not reporting on how many people it sees, but that will be coming soon.

Since I like my bots to be both chatty and good listeners, I wrote a number of commands that the bot understands and responds to.  Specifically, I can ask the bot for a live camera view, a snapshot at the current time, and a statistics report.  It looks like this:

Note that live views and snapshots are returned as links.  A user would then click on the link to see what the camera sees.
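The command handling itself is a simple dispatch: match what the user typed against a known command and build the reply the bot posts back to the room.  A sketch of that logic (the command words and reply text here are illustrative; the links would come from the Meraki live-view and snapshot APIs):

```java
import java.util.Locale;

public class BotCommands {

    // Map a command typed in the Teams room to the bot's reply. The live
    // and snapshot replies are links (placeholders here); a real bot would
    // fetch them from the Meraki API before posting to the room.
    static String respond(String command) {
        switch (command.trim().toLowerCase(Locale.ROOT)) {
            case "live":
                return "Live view: <link to the camera's live video page>";
            case "snapshot":
                return "Snapshot: <link to an image captured just now>";
            case "stats":
                return "Statistics: <people counts for the recent past>";
            default:
                return "I understand: live, snapshot, stats";
        }
    }

    public static void main(String[] args) {
        System.out.println(respond("snapshot"));
    }
}
```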

The thought behind this is that a WebEx Teams room could be established to monitor one or more Meraki cameras.  The people who join the room would not only see Meraki camera events, but they could work with the bot to see camera details.

For example, a camera might be used to monitor a secure area.  Using my MQTT client and bot, a Teams room would report on people entering and leaving the room.  If someone entered after hours or lingered too long, the room would be notified and the people in the room would be able to take the appropriate actions.

Mischief Managed

I have only been working with Meraki MV cameras for a short period of time, but my mind is already brimming with thoughts and wild ideas.  Besides WebEx Teams, I can envision all sorts of communications mediums – voice calls, SMS text, web bridges, etc.  The idea is to capture and demystify complicated camera data and report it to humans (and possibly machine learning platforms) in as many ways as are needed.

Stay tuned for future developments.
