Develop a secure Android application

30/09/2015, by Rémi Pradal
Tags: Software Engineering

Android applications are commonly used to process very sensitive data. It is the developer's responsibility to make sure that the information entered by the user cannot be easily intercepted by a malicious party. The Open Web Application Security Project (OWASP) [9,10] enumerates the potential security issues of a mobile application. Some of them are the system architect's responsibility (such as issues related to weak server-side controls), some are the back-end developer's responsibility (issues related to authentication checks) and, finally, some are purely related to the mobile application itself. In this article we will focus on the issues that the Android developer can address directly. We will therefore cover three potential sources of vulnerability: risks when communicating with a webservice (WS), potential information leaks when storing data on the device, and the vulnerability of having an application that is easily modifiable by a third party.

1. Secure webservice calls

In sensitive applications that use a WS, the most important thing is to make sure that the data exchanged with the backend is protected. Indeed, the safest application is useless if the requests made over the Internet can easily be intercepted.

Threat: man-in-the-middle attack (MITM)

There are two major risks when an application is vulnerable to a MITM attack.

  1. Information leak. If an attacker controls the local network on which the application is used, he can stealthily intercept all communications between the app and the WS.
  2. Webservice (WS) mimicking. Someone with some knowledge of the WS format can block the application's call and provide his own fake response. In that case the user thinks the request has been performed, whereas it never reached the backend.

It is quite easy to test how vulnerable your application is to a MITM attack: install proxy software (for instance Charles Proxy [12]) on a machine and configure your device to use that machine as its proxy. If your application is not protected against MITM attacks, you will be able to see every request it performs. Now imagine that one of your users connects to your webservices through an unsafe network: it is effortless for an attacker to install a proxy on the network router and sniff all the requests in clear text.

Attack origin: the TLS/SSL certificate chain

The bare minimum to ensure that our communications are safe is to use the HTTPS protocol, i.e. a communication encrypted with Transport Layer Security (TLS) or its predecessor, Secure Sockets Layer (SSL). However, while this condition is necessary, it is not sufficient. To understand why, let's look at how the SSL protocol works.

An SSL certificate chain is composed of (at least) three certificates:

  • Root certificate. It is a certificate issued by a Certification Authority (CA), i.e. a trusted organization whose role is to guarantee that the whole transaction is safe.
  • Intermediate certificate(s). There can be several intermediate certificates; they link the end-user certificate to the root certificate. An intermediate certificate is dedicated to the server exposing the WS and is signed by the root certificate.
  • End-user certificate. The end-user certificate is specific to the physical server exposing the WS.

Android's native SSL protection:

The Android network layer has an embedded list of CA certificates (more than one hundred; you can check the list in your device's settings). Every HTTPS network call must have one of these CA certificates at the root of its certificate chain.

However, nothing ensures that the rest of the chain corresponds to the server we want to contact. For instance, an attacker can mount a man-in-the-middle attack by buying an intermediate certificate from a CA: the whole network transaction will then be seen as valid by the system. This vulnerability is very common: a study has shown [1] that 73% of applications using the HTTPS protocol do not check the certificate properly.

How can we make sure that we are connected to our backend and that this connection is safe?

The solution to the issue described above is to manually check that the intermediate certificate (which is specific to our server) is a known certificate. This means that we have to store this certificate in the application, either in a resource file or directly in a source file as a constant.
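
As a rough illustration, assuming the certificate is shipped as a raw resource, it can be loaded with a standard CertificateFactory (the class name and resource name below are hypothetical):

    import java.io.InputStream;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    import android.content.Context;

    public final class PinnedCertificateLoader {

        // Loads the intermediate certificate shipped with the application.
        // R.raw.intermediate_ca is a hypothetical resource: a DER- or PEM-encoded
        // certificate file placed in res/raw by the developer.
        public static X509Certificate load(Context context) throws Exception {
            CertificateFactory factory = CertificateFactory.getInstance("X.509");
            InputStream in = context.getResources().openRawResource(R.raw.intermediate_ca);
            try {
                return (X509Certificate) factory.generateCertificate(in);
            } finally {
                in.close();
            }
        }
    }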

One may wonder why we check the intermediate certificate instead of the end-user one. There are two reasons for that. The first, as we will see later, is that end-user certificates have a short lifetime. The second is a security reason: imagine that a hacker takes full control of your server; he will then own your private key (which your server needs to sign its traffic). Your application will see that the connection is signed with the correct end-user key and will allow it. If the verification is based on the intermediate certificate, it is possible to remotely revoke the compromised certificate by contacting the intermediate CA.

In Java, secure SSL connections are created through an SSLSocketFactory. To obtain an SSLSocketFactory that performs this intermediate certificate check, we have to follow these steps (a simplified sketch of both steps is given after the list).

  1. Create a class implementing X509TrustManager. This interface, from the javax.net.ssl package, is dedicated to checking the server-side validity of an SSL connection.

    <script src="https://gist.github.com/rpradal/a2e30f9e7fe9988f6971.js"></script>

  2. Set a new default SSLSocketFactory. This code should be run before any network call.

    <script src="https://gist.github.com/rpradal/682ad4a88fa725b5f587.js"></script>

Potential drawbacks of the certificate check

  1. An intermediate certificate can expire (its lifetime is approximately 10 years). A solution is to anticipate the renewal by adding the new certificate to the whitelist well before the old one is replaced.
  2. The intermediate CA can be compromised. In that case, the security mechanism described above becomes completely useless: if the private key of the intermediate CA is held by a hacker, he can forge a certificate chain containing the same intermediate certificate as yours, and thus perform a MITM attack. Even if CAs are in theory safe, it can happen, as in 2011 when DigiNotar was compromised [13]. If it does, the only thing to do is to change the server's whole SSL certificate chain and push a new application version embedding the new intermediate certificate.
  3. The SSLSocketFactory trust policy is applied to all network calls made by the application. If a third-party SDK is embedded, it is also necessary to embed the intermediate certificates of that SDK's remote servers. This can be problematic, as it is not easy to anticipate certificate changes on those servers. One way to cope with this is to inject certificates dynamically: the application ships with only one pinned certificate (the main server's), retrieves a list of additional authorized intermediate certificates when it starts, and adds them to the SSLContext's trust manager.

To conclude, in most situations this intermediate-certificate check will protect against MITM attacks: when a hacker intercepts the communication he has to include his own certificate in the chain, the TrustManager will not recognize it and will refuse the HTTPS connection.

2. Safe storage on device

The Android platform provides a convenient way to store preferences, and even larger data, through the SharedPreferences interface. Even though this data is kept in the application's private directory, it can easily be retrieved if the device is rooted.

Consequently, if the information stored by the application is sensitive, it might be necessary to encrypt the data stored in the shared preferences. It is possible to do so in two ways:

  1. Use a cryptographic library to encrypt/decrypt the values (and possibly the keys) of the SharedPreferences. There are several state-of-the-art Java cryptographic libraries: javax.crypto, Bouncy Castle [2] and Conceal [3]. A minimal javax.crypto sketch is given after this list.

  2. Use a library providing a SharedPreferences wrapper. These libraries are very convenient as the developer does not have to care about which algorithm to use. However, they can lead to a lack of flexibility, and some of them do not use safe algorithms; consequently they may not be trusted to store very sensitive data. One of the most used libraries providing this kind of wrapper is SecurePreferences [4]. If you choose this solution, you can instantiate a SecurePreferences (which implements SharedPreferences) in a very straightforward way:

    <script src="https://gist.github.com/anonymous/285b948ba77727c94298.js"></script>

Both methods rely on a symmetric cipher such as AES (with an appropriate key size). This raises a question: which key should we use? If we use a static key, the preferences can be decrypted by reverse-engineering the application. The best solution would therefore be to derive the key from a pin code or passphrase that the user types when the application starts. Another possibility is to use the Fingerprint API [15] (available since API 23), which provides a safe and fluent way to authenticate. Unfortunately, this approach does not fit every application's user experience: for instance, if we need to display stored information before the pin code is typed, we cannot use this encryption scheme.
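
When the key is derived from a pin code or passphrase, a common way to do it is PBKDF2, available through javax.crypto. The sketch below (hypothetical class name; the iteration count and key size are illustrative) derives a 256-bit AES key; the salt is random, generated once, and can be stored unencrypted.

    import javax.crypto.SecretKey;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;

    public final class PassphraseKeyDerivation {

        // Derives a 256-bit AES key from the user's pin code or passphrase.
        public static SecretKey deriveKey(char[] passphrase, byte[] salt) throws Exception {
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            PBEKeySpec spec = new PBEKeySpec(passphrase, salt, 10000, 256);
            byte[] keyBytes = factory.generateSecret(spec).getEncoded();
            return new SecretKeySpec(keyBytes, "AES");
        }
    }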

Fortunately, Android provides a safe way to generate a key that is unique to each application/device couple: the KeyStore. The Android KeyStore's goal is to let applications put private keys in a place where they cannot be retrieved by another application or by physically accessing the data stored on the device. The mechanism is pretty simple: the first time your application runs, it checks whether a private key linked to it is already present. If not, it generates one and stores it in the KeyStore. Once the private key is present, it can be used as a cryptographically safe key to encrypt and decrypt SharedPreferences data with the algorithms described above. Obaro Ogbo wrote a detailed article [11] describing in depth how to use the KeyStore to generate a public/private key pair. The main drawback of the KeyStore is that it is available only since API 18. Still, there is a backport library which provides compatibility down to API 14 [14] (it is not an official backport, so use it at your own risk).
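
A minimal sketch of this mechanism, using the KeyPairGeneratorSpec API available since API 18 (the alias and class names are hypothetical):

    import java.math.BigInteger;
    import java.security.KeyPairGenerator;
    import java.security.KeyStore;
    import java.util.Calendar;

    import javax.security.auth.x500.X500Principal;

    import android.content.Context;
    import android.security.KeyPairGeneratorSpec;

    public final class AppKeyStore {

        private static final String ALIAS = "my_app_key"; // hypothetical alias

        // Returns true if the application already owns a key pair in the AndroidKeyStore.
        public static boolean hasKey() throws Exception {
            KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
            keyStore.load(null);
            return keyStore.containsAlias(ALIAS);
        }

        // Generates an RSA key pair whose private key never leaves the AndroidKeyStore.
        public static void generateKey(Context context) throws Exception {
            Calendar start = Calendar.getInstance();
            Calendar end = Calendar.getInstance();
            end.add(Calendar.YEAR, 25);
            KeyPairGeneratorSpec spec = new KeyPairGeneratorSpec.Builder(context)
                    .setAlias(ALIAS)
                    .setSubject(new X500Principal("CN=" + ALIAS))
                    .setSerialNumber(BigInteger.ONE)
                    .setStartDate(start.getTime())
                    .setEndDate(end.getTime())
                    .build();
            KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA", "AndroidKeyStore");
            generator.initialize(spec);
            generator.generateKeyPair();
        }
    }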

Consequently, we can propose the following decision diagram for choosing which type of preference storage to use:

[Figure: secure preferences decision flowchart]

3. Protect the application against source code analysis and modification

Sometimes an Android developer wants to make sure that his application cannot be analyzed, read and possibly modified by anyone. There can be several reasons for this:

  • We may want to prevent a hacker from removing a lock in the application that keeps non-paying users away from some features.
  • A risk when developing a sensitive application is that a hacker modifies it so that all typed information is sent back to him. Even if this cannot easily happen on the Play Store, there are many other places where a user could download a forged application that steals all his data in a perfectly transparent way.

What every Android developer should keep in mind when developing a sensitive application is that reverse-engineering an Android application is (quite) easy for an experienced person. That is particularly true if you build a "native" Android application the standard way: since most Android apps are made of Java bytecode, it is easy to decompile that bytecode, read it, modify it and finally rebuild a modified application [5].

In this part we will highlight some technical tools and architectural rules that can mitigate these risks. However, we have to keep in mind that, since the application is executed on a client device, there is no 100% reliable way to eliminate them.

1. Write your valuable algorithms on the server side. This is an architectural guideline. If all the value of your application lies in an algorithm, you obviously do not want anyone to be able to read it, copy it and embed it in his own application. In that case the best solution is to implement the algorithm on the server: the application only feeds a WS with the data to be processed and retrieves the result. The obvious drawback of such an architecture is that the central feature of your app cannot be used offline.

2. Do not keep your WS wide open. If the value of your application lies in data retrieved through a WS, you have to secure that WS, either by sending a session token obtained during an authentication phase or by passing the user/password pair in each request. If you only rely on an authentication flag stored in the app preferences, it is really easy to modify your application code to force this flag to the "always connected" state. The drawback is that the user will have to type his user id and password regularly to renew the session. A minimal sketch of a token-authenticated call is given below.
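
As a simple illustration (the URL, header name and class name are hypothetical), each WS call can carry the session token obtained during the authentication phase, so that access control is enforced by the backend rather than by a local flag:

    import java.net.URL;

    import javax.net.ssl.HttpsURLConnection;

    public final class AuthenticatedCalls {

        // Opens a WS connection that carries the session token in an HTTP header.
        public static HttpsURLConnection openAuthenticatedConnection(String sessionToken)
                throws Exception {
            HttpsURLConnection connection =
                    (HttpsURLConnection) new URL("https://api.example.com/orders").openConnection();
            connection.setRequestProperty("Authorization", "Bearer " + sessionToken);
            return connection;
        }
    }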

3. Use ProGuard to obfuscate your code. ProGuard is a very common tool in Java projects. It performs three operations: a shrinking step (unused code removal), an optimization step (some methods are inlined, unused method parameters are removed, etc.) and an obfuscation step. In the last one, the tool renames all class, attribute and method names so that the decompiled bytecode becomes unreadable. Of course, ProGuard makes sure that the runtime will still be able to identify the different compiled elements. This tool is very interesting as it makes the decompiled bytecode much harder to read. However, even if the code elements are renamed, it is still possible to guess the role of the obfuscated methods and attributes by reverse-engineering them. ProGuard also generates a mapping file which can be used to convert an obfuscated stack trace into a readable one [6]. There are plenty of tutorials on the web explaining in detail how to configure ProGuard, for instance in the Android documentation [7].
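
For reference, enabling ProGuard on release builds usually amounts to a few lines in the module's build.gradle (a sketch; the exact default file name may differ depending on the Android Gradle plugin version):

    android {
        buildTypes {
            release {
                // Enables the shrinking, optimization and obfuscation steps for release builds.
                minifyEnabled true
                proguardFiles getDefaultProguardFile('proguard-android.txt'),
                              'proguard-rules.pro'
            }
        }
    }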

4. Use compiled libraries. Thanks to the Java Native Interface (JNI) [8], it is possible to use native (compiled) code written in C or C++ and interface it with Java code. When developing an Android application this is even easier thanks to the Native Development Kit (NDK), which provides facilities for using compiled code in your applications. The overall mechanism is simple: you compile your C/C++ code (which must contain standard JNI entry points) into a .so file, then include this library in your application project along with its Java interface. The major interest of compiled libraries is that the decompiled code is much less readable, as the .so library is native machine code rather than Java bytecode. A good practice (when practically convenient) is to develop the highly sensitive parts of the application in C or C++ (such as a top-secret algorithm or a security layer) and interface them with the rest of the application, classically written in Java. Still, there are several drawbacks to using the NDK: the native library must be compiled for every hardware architecture the application targets, we lose any chance of getting a decent stack trace when a crash happens, and it makes the code architecture significantly more complex.
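
The Java side of such an integration can look like the following sketch (the package, library and method names are hypothetical); the native symbol itself must be implemented in C/C++ and compiled with the NDK for every targeted ABI:

    package com.example.security; // hypothetical package

    public final class SecretAlgorithm {

        static {
            // Loads libsecretalgorithm.so, built with the NDK for each targeted ABI.
            System.loadLibrary("secretalgorithm");
        }

        // Implemented in native code; the corresponding JNI entry point is
        // Java_com_example_security_SecretAlgorithm_computeScore.
        public static native int computeScore(byte[] input);
    }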

Conclusion

In this article, we proposed solutions covering three of OWASP's top ten mobile security issues [9]. As we said in the introduction, an application can be secure only if the system architecture around it is secure as well: one can develop a technically safe application, but if the server-side authentication system is poorly designed, all these efforts are pointless. Nevertheless, it is the mobile developer's responsibility to make sure that his own security perimeter is flawless, and this article proposed solutions to cover that perimeter.

References

[1] https://www.fireeye.com/blog/threat-research/2014/08/ssl-vulnerabilities-who-listens-when-android-applications-talk.html

[2] http://www.bouncycastle.org/

[3] https://code.facebook.com/posts/1419122541659395/introducing-conceal-efficient-storage-encryption-for-android/

[4] https://github.com/scottyab/secure-preferences

[5] http://geeknizer.com/decompile-reverse-engineer-android-apk/

[6] http://proguard.sourceforge.net/manual/retrace/examples.html

[7] http://developer.android.com/tools/help/proguard.html

[8] http://www.javaworld.com/article/2076513/java-concurrency/enhance-your-java-application-with-java-native-interface--jni-.html

[9] https://www.owasp.org/index.php/OWASP_Mobile_Security_Project#tab=Top_10_Mobile_Risks

[10] https://www.owasp.org/index.php/About_OWASP

[11] http://www.androidauthority.com/use-android-keystore-store-passwords-sensitive-information-623779/

[12] http://www.charlesproxy.com/

[13] https://threatpost.com/final-report-diginotar-hack-shows-total-compromise-ca-servers-103112/77170/

[14] https://github.com/pprados/android-keychain-backport

[15] https://developer.android.com/about/versions/marshmallow/android-6.0.html#fingerprint-authentication