Talk to Me, Goose — UX in a Bot World

Top Gun is one of my all-time favorite movies. My parents recorded a TV broadcast on VHS tape and I wore it out. It’s got everything you could possibly want in an 80s flick: two Toms (Cruise and Skerritt), popped collars on polo shirts, and a completely awesome Kenny Loggins soundtrack. If you hear the first few bars of “Danger Zone” and do not feel pumped, you are a robot. Report for your scheduled maintenance and emotion chip upgrade.

Nerd nitpick side comment: In the film, the Navy F-14 fighter jets are up against “MiG-28” fighters. There’s no such thing! The bad guy airplanes in the film are American F-5 jets painted black.

I wanted desperately to be a pilot. Two things about pilot-hood intrigued me. First, you get to have a cool call sign. There is zero downside to people referring to you as “Iceman” or “Viper” (I could never think of a sufficiently cool call sign for myself). Second, you get to have a Radar Intercept Officer (RIO). In Top Gun, Goose was Maverick’s RIO. To my 10-year-old eyes, having a RIO meant having your best buddy with you all the time. He could call out relevant information about incoming bogeys, crack jokes about the bad guys, and help you keep your head in the game. A RIO was a whole other brain that you could add to your own while you flew. The pilot kept the plane in the air and pointed its weapons at bad guys. The RIO handled everything else.

As today’s technology makes great strides in artificial intelligence (AI) capabilities, we can expect our computer systems to take on more of a RIO role. We’re already seeing it with smart assistant technologies. Instead of typing out precisely what we need the computer to do, we’re starting to be able to speak vague commands: instead of searching specifically for the Top Gun IMDb page, we can just ask Google Assistant what year Top Gun was released.

(Screenshot: Google Assistant answering the year of Top Gun’s release)

It’s possible, right now, to connect almost any smart assistant tool to SAP software through SAPUI5 and OData services in SAP Gateway. This makes your current (and future!) SAP Fiori apps prime candidates for smart assistant integration. I’ll give you a little know-how on what’s possible, and then discuss some tips and tricks for using the current state-of-the-art smart assistants.

Data Without a Screen

Since SAP Fiori apps depend on OData services for their back-end connectivity, any app you develop comes with a ready-made API that can technically be called from places other than SAPUI5. This means your SAP Fiori apps come ready to make conversation. Several enterprising developers have written blogs on SCN with instructions.
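For instance, a webhook could assemble an OData query URL like this. This is a sketch only; the gateway host, service name, entity set, and field names are all hypothetical:

```javascript
// Build an OData query URL for a hypothetical SAP Gateway service.
// The service path and entity set below are made up for illustration.
function buildODataUrl(baseUrl, entitySet, { filter, top, orderby } = {}) {
  const params = new URLSearchParams({ $format: "json" });
  if (filter) params.set("$filter", filter);
  if (top) params.set("$top", String(top));
  if (orderby) params.set("$orderby", orderby);
  return `${baseUrl}/${entitySet}?${params.toString()}`;
}

// e.g. "top five US customers by sales", as a voice agent might request:
const url = buildODataUrl(
  "https://gateway.example.com/sap/opu/odata/sap/ZSALES_SRV",
  "Customers",
  { filter: "Country eq 'US'", top: 5, orderby: "Sales desc" }
);
```

Note that `URLSearchParams` percent-encodes the `$` in OData system query options, which Gateway accepts as standard URL encoding.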

The high-level flow goes like this:

  1. Using an Amazon Alexa-enabled device, you initiate a conversation and call up your smart agent. (Developers can make smart agents starting here.)
  2. The Alexa infrastructure handles a back-and-forth between you and the service, until you’ve provided the right information to match the configured intents and custom slots.
  3. Alexa then calls out to a webhook that you provide. This can be an AWS Lambda serverless function (remarkably easy to build) or your own RESTful service.
  4. You then program that webhook to call your OData service and interpret the resulting data.
  5. Next, you turn that data into a spoken word response and send it back to Alexa.
  6. The Alexa-enabled device speaks out whatever you’ve set up as a response.
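
Steps 4 and 5 might look roughly like this in plain JavaScript. The entity fields and spoken phrasing are made-up assumptions; the response JSON follows the standard Alexa custom-skill format:

```javascript
// Step 4–5 sketch: turn an OData result into a spoken Alexa response.
// The fields (Name, Sales) belong to a hypothetical sales service.
function toAlexaResponse(odataResults) {
  const names = odataResults.map(c => c.Name).join(", ");
  const text = odataResults.length
    ? `Your top customers by sales are: ${names}.`
    : "I could not find any customers.";
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text },
      shouldEndSession: true
    }
  };
}

// In a Lambda handler you would fetch the OData service first, then:
const reply = toAlexaResponse([
  { Name: "Acme Corp", Sales: 90000 },
  { Name: "Globex", Sales: 75000 }
]);
```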

There’s so much you could do with this! You could ask Alexa for your top customers by sales, a list of the workflow approvals due today, or your favorite work-from-home excuse.

Voice-Controlled Screen

SAPUI5 provides the right API tools to make this work a different way: Instead of having your smart device read information back to you, you can have it drive your screen to the right place. Imagine describing in natural language the dimensions of your query and what sort of graph you’d like to see, and then the screen just plops into place.

The high-level flow goes like this:

  1. Follow all the same steps as the previous flow, up to the webhook.
  2. In the server that powers your webhook, enable web sockets. Here is a Windows example you can set up quickly.
  3. Have your SAPUI5 code use the SAPUI5 WebSocket API (sap.ui.core.ws.WebSocket).
  4. The WebSocket pushes events to the listening browser, and your SAPUI5 code responds to those events with the corresponding navigation calls.
  5. If you want a quick way to do this, you can also use Firebase to handle the websocket and data sync stuff.
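
A minimal sketch of step 4, assuming a simple JSON command protocol of your own design (the message shape and route names here are invented for illustration):

```javascript
// Step 4 sketch: map an incoming WebSocket message to a navigation target.
// The command names and route patterns are assumptions about your protocol.
function messageToRoute(rawMessage) {
  const msg = JSON.parse(rawMessage);
  switch (msg.command) {
    case "showChart":
      return `#/analytics/${encodeURIComponent(msg.dimension)}/${msg.chartType}`;
    case "goHome":
      return "#/home";
    default:
      return null; // unrecognized commands are ignored
  }
}

// Inside a SAPUI5 controller, you might wire it up roughly like this:
//   var ws = new sap.ui.core.ws.WebSocket("wss://your-webhook.example.com/ui");
//   ws.attachMessage(function (oEvent) {
//     var sRoute = messageToRoute(oEvent.getParameter("data"));
//     if (sRoute) { window.location.hash = sRoute; }
//   });
```

Keeping the routing logic in a plain function like this makes it easy to test outside the browser.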

Done right, your screen now follows your commands. Imagine setting up a board room to respond to analytic queries and impress the C-level folks, or a quick way to hide the open Candy Crush window while away from your desk.

Tips and Tricks

Knowing that your voice can now make systems do whatever you want can be intoxicating. Be smart in how you use this power!

●     Do not give voice agents low-level SAP system change access. Current Google and Amazon tools do not perform speaker recognition to distinguish who is issuing a voice command. “Log off all current users” means the same whether your Basis administrator or her six-year-old son said it.

●     Do not give voice agents the ability to respond with sensitive data. Who knows who will be listening when the question is asked?

●     Make sure that self-description is one of the first skills you give your agent. Users will need to know what their RIO can do.

●     You still have to program everything that the agents will do, so make smart choices about what will enhance and complement the SAP Fiori experience. It makes no sense to create a sales order with 30 required fields using voice automation.

SAP CoPilot

Of course, you may want to avoid writing code of your own to do this smart stuff. SAP is actively playing in this space. When SAP CoPilot takes off into general release, I expect that it will have far more SAP-specific capabilities than any one-off chatbot that you can make. In SAP’s demo video, note that SAP CoPilot holds business data context across devices.

Context will be a key differentiator between general consumer smart devices and SAP CoPilot. Google Assistant is brilliant at knowing what you’re talking about even across multiple requests, but has no window into your business systems.

Looking Ahead

Time to take off my flight helmet and put on my wizard hat. Looking into my crystal ball, I predict the following with respect to bots:

●     Voice- and text-operated bots will continue to advance in capabilities. Within a year or two, you’ll be surprised at how much smarter they are.

●     Your users will come to the table already knowing a lot about working with bots. Consumer-facing bots are popping up everywhere, and will be out in front of internal enterprise bots.

●     The degree to which a business system can interoperate with voice and text interfaces will be a key differentiator in purchase decisions for large systems. You should be ready to have bots calling your system.

●     Soon thereafter, bots’ knowledge of other bots will grow in importance. SAP CoPilot should know how to talk to the Microsoft Dynamics bot, which should know how to talk to the Salesforce bot, and so on.

●     You won’t teach bots rote, explicit knowledge. You will teach them where to go to find information.

Get ready to have your RIO sitting right behind you, pointing out bogeys and cracking jokes.

