The Google Assistant for Android developers – PART 1

Everybody knows about the Google Assistant, available on most Android and iPhone devices. Like “Hey Siri”, the well-known “Ok Google” has entered everyday language. Most of us have used it at least once, if only to try it.

However, its scope and the ways to hook into it remain mysterious to many of us developers. At least, they did for me until recently.

Is it only a mobile feature? Can we use it in our applications? What kinds of interactions are possible?

We will answer these questions and more throughout this article, and then focus on what interests us here: how to build an interaction with our Android application through the Assistant.

The Google Assistant?

The Google Assistant is a conversational interface that we mainly interact with using our voice. Though popular on smartphones, it is present on more than a billion devices such as speakers, smart displays, cars, TVs and connected watches. Its main goal is to improve the discoverability of, and the interactions with, our applications and websites.

We can invoke it, for instance, with a long press on the “Home” button of our smartphone, or simply by saying “Ok Google” or “Hey Google”.

It is based on Natural Language Processing (NLP) and artificial intelligence to transform a voice input into a request that a computer program can interpret. To put it simply, when a user’s request matches a specific grammar, the Assistant extracts the request parameters into schema.org entities and generates Android deep link URLs using the mapping declared in the actions.xml file. For example, a request like “open feature one in my app” can be matched to a built-in intent, with “one” extracted as a parameter and injected into the corresponding deep link.

Many familiar interactions already exist, such as launching a video on YouTube or a song on Spotify, displaying a route in Google Maps, setting a timer in our Clock application, or simply triggering a web search.

But there are way more possibilities.

Implementation on Android

But then, can I, as an Android developer, build my own interactions with the Assistant? The answer is yes, but to a certain extent.

Google offers developers the possibility to build their own interactions using actions called App Actions: intents that carry a user request and link it to our application.

There are several ways to build these actions. We will focus on the two that are relevant to Android development:

  • The “Solutions” part, which contains actions provided by Google to simplify everything for us. These actions are built-in intents that let us build interactions in no time. However, they do not let us build conversations with the Assistant: see them as triggers, simple orders. They can only be used to launch features in our applications, with very few visual responses.
  • The “Custom Conversations” part, far more interesting from a developer’s perspective. These actions are built by the developer through Dialogflow and allow us to create “real” conversations with the Assistant. They can also provide visual interactions without ever requiring the application to be launched.

First, we will focus on the “Solutions” part in order to understand the concept of a simple App Action through its implementation in an application. We will then cover the “Custom Conversations” part in a later article.

Implementation of a simple App Action

Before starting, it is important to know that, as of this writing, App Actions are still in developer preview. We will thus be able to build and test our actions, but it will be impossible to trigger them through a voice command in the Assistant. Don’t worry though: we will still be able to visualize the result of our work in it.

We will create a simple application that will be launched using one of the built-in intents provided by Google. The goal is to launch a feature by specifying its name.

It will look like this:

  • a main activity which is a hub leading to 3 features through 3 buttons
  • one activity per feature displaying a title, a subtitle and an image

Finally, we will improve our interaction with the Assistant by displaying a Slice holding the information of the requested feature.

Let’s do this!

Prerequisites

In order to build an App Action for our application, we need to set up a few things. First, it is important to know that App Actions are only available starting with Android 5 (API 21).
Also, it is necessary to implement deep links in our application so that Google can link our actions to our activities. We will not cover this part in detail here, but it is quick and easy to generate these deep links through the App Links Assistant, accessible from the “Tools” menu of Android Studio (see the sketch after this paragraph).
Finally, it is essential to have our application uploaded to the Google Play Console (a draft is enough) in order to be able to test our actions, and to be logged in with the same account on the Console, in Android Studio and on our device / emulator.
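
As a reminder, a deep link intent-filter generated this way typically looks like the following sketch; the host appactions.example.com and the path are placeholders, the App Links Assistant generates the real values for your application:

<intent-filter android:autoVerify="true">
    <!-- Handles https://appactions.example.com/feature?... (placeholder values). -->
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data
        android:scheme="https"
        android:host="appactions.example.com" />
</intent-filter>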

The application

First, let’s build a basic application. Since the code is very simple and of no specific interest, it is not shown here, but it is available on GitHub. You will find there an AppActionsActivity which sets 3 listeners on 3 buttons, each launching one of the features. You will also notice the presence of an intent-filter in the manifest to handle deep links.

Adding an App Action

Among the built-in intents provided by Google, we can find a lot of generic actions such as START_EXERCISE, CREATE_TAXI_RESERVATION or GET_ACCOUNT, each one launching a specific feature of our application with the appropriate parameters.

We will use here the most generic of them all: OPEN_APP_FEATURE.

In order to do that, we need to create a new “xml” resource directory inside the “res” directory of our application and add a new actions.xml file there. This file describes the actions available in our application and the different ways to fulfill them (deep link or Slice), along with the accepted and / or required parameters.

So let’s add our action to the actions.xml file:
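
Here is a minimal sketch of what this entry can look like under the developer-preview actions.xml schema; the URL and the parameter names are placeholders matching the deep link of our sample and must be adapted to your own application:

<?xml version="1.0" encoding="utf-8"?>
<actions>
    <!-- Placeholder values: adapt the urlTemplate to your own deep link. -->
    <action intentName="actions.intent.OPEN_APP_FEATURE">
        <fulfillment
            fulfillmentMode="actions.fulfillment.DEEPLINK"
            urlTemplate="https://appactions.example.com/feature{?feature}">
            <!-- Maps the intent parameter to the {?feature} placeholder above. -->
            <parameter-mapping
                intentParameter="feature"
                urlParameter="feature" />
        </fulfillment>
    </action>
</actions>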

You can notice the different parts:

  • intentName: the name of the built-in intent used
  • fulfillmentMode: the way the action is fulfilled, here a deep link
  • urlTemplate: the URL template used to build our deep link, with its parameters
  • intentParameter: the name of the parameter, taken from the URL, that will be passed to the intent sent to our application
  • urlParameter: the name of the matching placeholder in the URL template

We now need to point to this file in our AndroidManifest.xml:
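
This is done with a meta-data entry inside the application tag, pointing to our resource file:

<application ... >
    ...
    <!-- Registers res/xml/actions.xml so the Assistant can find our actions. -->
    <meta-data
        android:name="com.google.android.actions"
        android:resource="@xml/actions" />
</application>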

Warning: at the moment, it is impossible to upload to the Google Play Console an APK or an AAB whose AndroidManifest.xml points to an actions.xml file. Simply remove the entry from the manifest before uploading, then put it back locally to be able to test your App Actions.

We still need to handle the received intent within our application. To do this, let’s create a private method in the AppActionsActivity, called from onCreate, that extracts the data from the intent, checks that its action is Intent.ACTION_VIEW, checks that it contains the necessary parameters if any and, in our case, redirects to the specified feature.
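
A minimal sketch of what this could look like in Kotlin; the feature activity names are hypothetical, and the “feature” query parameter must match the urlParameter declared in actions.xml:

import android.content.Intent
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class AppActionsActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_app_actions)
        handleIntent(intent)
    }

    // Inspects the deep link carried by the incoming intent and redirects
    // to the requested feature, if any.
    private fun handleIntent(intent: Intent) {
        if (intent.action == Intent.ACTION_VIEW) {
            // "feature" must match the urlParameter declared in actions.xml.
            when (intent.data?.getQueryParameter("feature")) {
                "one" -> startActivity(Intent(this, FeatureOneActivity::class.java))
                "two" -> startActivity(Intent(this, FeatureTwoActivity::class.java))
                "three" -> startActivity(Intent(this, FeatureThreeActivity::class.java))
                // Unknown or missing feature: stay on the hub activity.
            }
        }
    }
}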

For the sake of this article, we decided to handle the redirection from the AppActionsActivity, but we could have created one deep link per feature to avoid this redirection. Built-in intents are only pre-formatted entry points: the resulting behavior inside the application is the responsibility of the developer, which offers great freedom.

It is time to test our App Action. To do so, you need to install the “App Actions Test Tool” plugin and launch it from the “Tools” menu in Android Studio. When you click the “Create Preview” button, Google checks that an application with the same application ID exists on the Google Play Console, then generates the necessary deep links automatically.

You can notice that our OPEN_APP_FEATURE App Action has been configured and that all we have left to do is type the name of the feature we want to launch before clicking “Run”. The Assistant then launches on our device / emulator, opens our application and redirects us to the right feature.

Note that if the requested feature doesn’t exist, the application will still be launched by default.

Displaying a Slice

In order to make our interaction with the Assistant more visual, we are now going to add a step between the request to the Assistant and the launching of our application, by implementing Android Slices.

Without going into the details of implementing Slices, let’s look at the configuration needed for the Slice to be sent to the Assistant.

We need to add one entry point into our actions.xml:
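
Under the same developer-preview schema, the new fulfillment entry could look like the following sketch; the Slice authority is a placeholder to replace with the one declared for your SliceProvider in the manifest:

<action intentName="actions.intent.OPEN_APP_FEATURE">
    <!-- Slice fulfillment: the authority below is a placeholder. -->
    <fulfillment
        fulfillmentMode="actions.fulfillment.SLICE"
        urlTemplate="content://com.example.appactions.slices/feature{?feature}">
        <parameter-mapping
            intentParameter="feature"
            urlParameter="feature" />
    </fulfillment>
    <!-- The DEEPLINK fulfillment added earlier remains as a fallback. -->
</action>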

As previously with the DEEPLINK entry point, we can see:

  • fulfillmentMode: the way the action is fulfilled, here a Slice
  • urlTemplate: the URL template to use for our Slice, with its parameters (it must follow the format “content://{slice_authority}/…”, the Slice authority being declared in the manifest)
  • intentParameter: the name of the parameter, taken from the URL, that will be passed to the intent sent to our application
  • urlParameter: the name of the matching placeholder in the URL template

Thus, when we run the App Action from the test tool, the application sends the Assistant a Slice holding the information of the requested feature (title, subtitle and image), which redirects us to that feature when clicked.

We also need to grant the Assistant permission to access our Slices at launch, from an Application class:
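
A minimal sketch using the androidx.slice library; the Slice authority is a placeholder, and com.google.android.googlequicksearchbox is the package name of the Google app hosting the Assistant:

import android.app.Application
import android.net.Uri
import androidx.slice.SliceManager

class AppActionsApplication : Application() {

    override fun onCreate() {
        super.onCreate()
        // Grant the Google app (which hosts the Assistant) access to our Slices.
        // The authority is a placeholder: use the one declared for your
        // SliceProvider in the manifest.
        SliceManager.getInstance(this).grantSlicePermission(
            "com.google.android.googlequicksearchbox",
            Uri.parse("content://com.example.appactions.slices")
        )
    }
}

Don’t forget to declare this Application class in the manifest through the android:name attribute of the application tag.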

Conclusion

In this article, we have seen what the Google Assistant is, mainly from an Android perspective. We have built a small application allowing us, first, to access a specific feature through the Assistant and then to display a Slice in the Assistant that redirects to the requested feature when clicked. All of this quite easily, using the “Solutions” part offered by Google, whose built-in intents automatically handle deep links.

We will cover the “Custom Conversations” part in an upcoming article, in which we will focus on Dialogflow.

During my research, I ran into some difficulties to which I still don’t have answers. For instance, I wanted to use a single FeatureActivity configured through an extra holding the requested feature, but my activity always kept the configuration of the first invocation (some caching?). I also experienced a lot of instability with the Assistant on an emulator. For the time being, I would advise testing your App Actions directly on a device (you may need to set its locale to en-US).

Note that, when your App Action is ready, it is possible to submit it to Google through a form in order to deploy it to production.

Finally, I would like to recall the message of Elaine Batista Dias, whom I thank for her help, from her talk at the 2019 edition of Android Makers:

“Google Assistant is still new, think outside the box.”

See you in the next article.
