Image Analysis From AWS Rekognition and Avaya OneCloud CPaaS

“Everything has beauty, but not everyone sees it.”


These days, my programming is all about composability — combining different cloud services into something unique. In my previous two blogs, I combined Google Dialogflow ES and CX natural language processing with SMS text messages. Today, I want to blend image processing from Amazon with multimedia text message (MMS) services from Avaya. Although they are different and unrelated platforms, with a little programming orchestration, they fit together like hand in glove.

To get the most out of this article, I highly recommend that you familiarize yourself with AWS Rekognition (yes, that’s how they spell it). Rekognition is a very powerful multimedia processing and categorization platform that works on images, video segments, text, and streaming video. I’ve only scratched the surface of what it can do, but my initial thoughts are “Wow!” I’ve worked with several other image processing platforms and none compare with the deep analysis that Rekognition provides. All vendors’ solutions can recognize a flower, but Rekognition lets you know that a person is smelling that flower. That’s pretty amazing.

Make it So

To help understand the technology, I wrote a node.js Express application that integrates AWS Rekognition with Avaya OneCloud CPaaS. Specifically, the application receives an MMS image (a texted photograph), uploads the image to an AWS S3 Bucket, applies AWS Rekognition to that image, deletes the image from the S3 Bucket, and returns the Rekognition labels (think of labels as the attributes of a photograph) to the sender. I would have loved to go straight from CPaaS to Rekognition, but that’s not the way the platform works. Rekognition only operates on images stored in S3 Buckets.

While my application is meant to be a teaching tool, there are quite a few use cases for image classification. Picture (pardon the pun) a contact center that can route incoming texts based on what it sees. Is healthcare too far a stretch for that? This technology can also be used to help agents better serve their customers by “screen popping” an image’s attributes. Additionally, several image processing platforms have the ability to determine if an image is NSFW. This enables a workflow to take the appropriate action when a questionable image is encountered.

I deployed the application on my Linux server and texted in quite a few images. Below is a sample run using my Avaya Cloud Office (i.e. RingCentral) account. Note how accurately Rekognition categorizes each image. In the first image, it knows that this is a gathering of military sailors. In the second image, it not only sees the camera, it recognizes that it is being used for photography. I have yet to work with a platform that provides so much detail.

The naval image is from 1943 and that’s my father’s ship, the U.S.S. Dallas, receiving a Presidential Citation. Perhaps Dad is the “person” Rekognition has singled out. You can read about it in my article, A Memorial Day Remembrance.

The words after “I see:” are the Name values from the Labels array returned by the Rekognition query. Note the Confidence and BoundingBox values in the returned data. They might be useful in an application more sophisticated than mine.

The Gory Details

Before you deploy the application, you need an Avaya OneCloud CPaaS account. From that account you need your:

CPaaS Account SID

CPaaS Account Token

To use any AWS API or SDK, you need an AWS account and the credentials that come with it. Since Rekognition works closely with AWS S3, S3 Bucket information is required, too. For this application, you need:

AWS Account Key

AWS Secret Key

AWS S3 Region

AWS S3 Bucket Name

Enough talk. Here is the node.js code.

/*
 * This Avaya CPaaS application accepts incoming MMS images and returns image recognition labels:
 *
 * 1. Receive MMS message from Avaya CPaaS
 * 2. Upload image to AWS S3 Bucket
 * 3. Process image with AWS Rekognition
 * 4. Delete image from AWS S3 Bucket
 * 5. Text AWS Rekognition labels to sender
 */

const express = require('express');
const request = require('request-promise');
const bodyParser = require('body-parser');
const cpaas = require('@avaya/cpaas');
var enums = cpaas.enums;
var ix = cpaas.inboundXml;
const AWS = require('aws-sdk');
const https = require('https');

// Change the following constants to match your environment
const CPAAS_URL = "";  // Your CPaaS REST API base URL
const CPAAS_USER = "CPaaS Account SID";
const CPAAS_TOKEN = "CPaaS Auth Token";
const AWS_ACCESS_KEY_ID = "AWS Account Key";
const AWS_SECRET_ACCESS_KEY = "AWS Secret Key";
const AWS_REGION = "AWS S3 Region -- e.g. us-east-2";
const AWS_BUCKET = "AWS S3 Bucket Name -- e.g. my-cpaas-bucket";
const URL_PORT = 5097;  // Find an available port on your system

const CPAAS_SEND_SMS = "/SMS/Messages.json";
const basicAuth = "Basic " + Buffer.from(`${CPAAS_USER}:${CPAAS_TOKEN}`, "utf-8").toString("base64");

var app = express();

// Middleware to parse the URL-encoded webhook bodies that CPaaS sends
app.use(bodyParser.urlencoded({
	extended: true
}));

// Tell the server to listen on the configured port
var server = app.listen(URL_PORT, function() {
	var host = server.address().address;
	var port = server.address().port;
	console.log("AWSBot is listening on port %s", port);
});

// Initialize AWS objects
const imageClient = new AWS.Rekognition({
	accessKeyId: AWS_ACCESS_KEY_ID,
	secretAccessKey: AWS_SECRET_ACCESS_KEY,
	region: AWS_REGION
});

const s3 = new AWS.S3({
	accessKeyId: AWS_ACCESS_KEY_ID,
	secretAccessKey: AWS_SECRET_ACCESS_KEY
});

// Entry point for a GET from a web browser
app.get('/', function(req, res) {
	res.send("AWSBot is running.");
});

// Entry point for MMS texts
app.post('/cpaas-mms/', function(req, res) {
	processImage(req.body.From, req.body.To, req.body.Body, req.body.MediaUrl, res);
});

// Entry point for SMS texts
app.post('/cpaas-sms/', function(req, res) {
	processText(req.body.From, req.body.To, req.body.Body, res);
});

async function processImage(from, to, body, imageUrl, res) {
	await downloadFile(imageUrl, from, to);
	res.status(200).end();  // The labels are texted back asynchronously
}

async function downloadFile(imageUrl, from, to) {
	const filename = imageUrl.slice(imageUrl.lastIndexOf('/') + 1, imageUrl.indexOf('?'));
	const chunks = [];
	https.get(imageUrl, function(response) {
		response.on('data', chunk => chunks.push(Buffer.from(chunk)))
			.on('end', () => {
				const buffer = Buffer.concat(chunks);  // Assemble chunks into a single Buffer
				awsTransfer(filename, buffer, from, to);
			});
	});
}

function awsTransfer(filename, buffer, from, to) {
	var message = "I see: ";
	const mmsMetadata = {
		"type": "CPaaS MMS File",
		"from": from,
		"to": to
	};

	// Upload file to S3 Bucket
	const params = {
		Bucket: AWS_BUCKET,
		Key: filename,
		Body: buffer,
		Metadata: mmsMetadata
	};
	s3.upload(params, function(err, data) {
		if (err) {
			throw err;
		}

		// Send to AWS Rekognition for label detection
		const imageParams = {
			Image: {
				S3Object: {
					Bucket: AWS_BUCKET,
					Name: params.Key
				}
			},
			MaxLabels: 10,
			MinConfidence: 75
		};
		imageClient.detectLabels(imageParams, function(err, response) {
			if (err) {
				console.log(err, err.stack);
			} else {
				if (response.Labels.length > 0) {
					for (let i = 0; i < response.Labels.length; i++) {
						if (i < response.Labels.length - 1) {
							message += response.Labels[i].Name + ", ";
						} else {
							message += response.Labels[i].Name;
						}
					}
				} else {
					message = "Label detection failed.";
				}

				// Delete file object from S3 Bucket -- comment out if you wish to preserve the file
				const deleteParams = {
					Bucket: AWS_BUCKET,
					Key: filename
				};
				s3.deleteObject(deleteParams, function(err, data) {
					if (err) {
						console.log(err, err.stack);
					}
				});

				// Text the image labels to the "from" number
				const options = {
					url: CPAAS_URL + CPAAS_USER + CPAAS_SEND_SMS,
					body: `From=${to}&To=${from}&Body=${message}`,
					headers: {
						'Content-Type': 'application/x-www-form-urlencoded',
						'Accept': 'application/json',
						'Authorization': basicAuth
					},
					method: 'POST'
				};
				request.post(options, function(e, r, body) {});
			}
		});
	});
}

async function processText(from, to, body, res) {
	returnTextResponse(`Please text an image to obtain labels.`, from, to, res);
}

async function returnTextResponse(prompt, from, to, res) {
	var xmlDefinition = generateXMLText(from, to, prompt);
	var serverResponse = await buildCPaaSResponse(xmlDefinition);
	res.type('application/xml');
	res.send(serverResponse);
}

function generateXMLText(customer, cpaas, body) {
	var sms = ix.sms({
		text: body,
		to: customer,
		from: cpaas
	});

	var xml_content = [sms];
	var xmlDefinition = ix.response({
		content: xml_content
	});
	return xmlDefinition;
}

async function buildCPaaSResponse(xmlDefinition) {
	var result = await ix.build(xmlDefinition).then(function(xml) {
		return xml;
	}).catch(function(err) {
		console.log('The generated XML is not valid!', err);
	});
	return result;
}
After the application is deployed and running on a computer with a public IP address (something similar to my Linux server), you need to assign it to one of your CPaaS numbers. Note that I attached it to both the SMS and MMS weblinks (two separate POST endpoints). The MMS weblink does the image processing and the SMS weblink simply tells the user to text in a photo.

Mischief Managed

My workdays aren’t always filled with exciting and productive programming, but the best of them are. I had to stretch myself a tad to pull this all together, but I am glad that I took the time. I can’t think of a better way of expressing how thrilled I am with cloud services than using them myself to build cool applications.

As always, feel free to reach out to me with any questions, comments, or suggestions for future articles. My best writing comes from your feedback.
