Brave Mobile World: OAuth in Native Apps

Curity
8 min read · Jul 15, 2022

This article was written by Curity’s Michal Trojanowski and was originally published on The New Stack.

Mobile apps are ubiquitous these days, and unsurprisingly, many require users to log in.

Maybe an app needs to save the user’s preferences, access the user’s resources on a server or save the user’s data remotely. Whatever the case, apps need a way to securely authenticate users to access or manage their resources.

OAuth and OpenID Connect (OIDC), the two well-established standards for authorization and authentication, are very helpful for this task. Even though OAuth and OIDC were initially created with browsers in mind, they are just as useful in the native app world as on the web.

However, implementing OAuth and OIDC in native apps introduces new challenges and risks that developers and architects should be aware of. To keep these protocols secure, a few things need to be considered.

Confidential vs. Public Clients

Applications that use OAuth or OIDC are called “clients” in OAuth terms and “relying parties” in OIDC terms. Whatever the term, these applications fall into two groups: confidential clients and public clients.

Confidential clients are applications capable of keeping a secret private. Such clients can authenticate to the authorization server using their client ID and a secret, as they are sure that neither users nor malicious parties can read that secret. In most cases, these will be applications running on the backend, where no one has access to the application’s code.

On the other hand, public clients are applications that run in an unsafe environment, where anyone can access the application code. These clients cannot keep a secret and should never use flows that require authenticating to the authorization server with one. Such applications might run entirely in the browser as single-page applications, or they may be mobile or desktop apps.
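
As a point of reference, the following Kotlin sketch shows the kind of token request only a confidential client can safely make, authenticating with its client ID and secret over HTTP Basic. The endpoint, client ID and secret are placeholders rather than values from any real system; a public client must omit the secret entirely.

import java.net.HttpURLConnection
import java.net.URL
import java.net.URLEncoder
import java.util.Base64

// Hypothetical confidential client exchanging an authorization code for tokens.
// The client secret lives only on a backend server, never in distributed app code.
fun requestTokenAsConfidentialClient(code: String, redirectUri: String): String {
    val credentials = Base64.getEncoder()
        .encodeToString("example-backend-client:example-secret".toByteArray())
    val connection = URL("https://idsvr.example.com/oauth/token")
        .openConnection() as HttpURLConnection
    connection.requestMethod = "POST"
    connection.doOutput = true
    connection.setRequestProperty("Authorization", "Basic $credentials")
    connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded")
    val body = "grant_type=authorization_code" +
        "&code=${URLEncoder.encode(code, "UTF-8")}" +
        "&redirect_uri=${URLEncoder.encode(redirectUri, "UTF-8")}"
    connection.outputStream.use { it.write(body.toByteArray()) }
    return connection.inputStream.bufferedReader().readText() // JSON with the tokens
}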

Mobile Clients Are Public Clients

It’s essential to adhere to this rule: a mobile application should never be registered as a confidential client at an authorization server. If the app were issued a secret, that secret would have to be compiled into the code distributed to end users’ mobile devices. This would mean that anyone could decompile the application, retrieve the secret and use it to impersonate the legitimate app.

The ability to impersonate an app can have various ramifications. Less serious ones, at least from the security and identity point of view, might mean that a malicious party uses your game app’s credentials to post tampered results to a scoreboard. At best, this might be discouraging to your other players. At worst, it might mean that someone will unjustly claim rewards for winning a ranking. More severe consequences can occur if a malicious app is distributed to users to steal their credentials or access their resources, as users would think that they are logging in to the legitimate app.

The Proof Key for Code Exchange (PKCE) standard was created to mitigate some of the issues of mobile applications being public clients. Using PKCE helps ensure the client that exchanges the authorization code for tokens is the same client that initiated the OAuth flow in the first place. It prevents malicious apps from stealing the authorization code and, subsequently, gaining access to a user’s resources.

According to current best practices, using PKCE is highly recommended for mobile apps. Still, PKCE is not a means of authenticating an app and shouldn’t be treated as a replacement for authentication. A malicious application can still use a legitimate client ID to log a user in; in that case, PKCE merely guarantees that the same malicious application is the one that exchanges the authorization code for tokens.
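
To make the mechanism concrete, here is a minimal Kotlin sketch of the PKCE building blocks defined in RFC 7636: a random code_verifier and the S256 code_challenge derived from it. It is illustrative only, not a complete flow.

import java.security.MessageDigest
import java.security.SecureRandom
import java.util.Base64

// Generate a high-entropy code_verifier for this authorization attempt.
fun generateCodeVerifier(): String {
    val bytes = ByteArray(32)
    SecureRandom().nextBytes(bytes)
    return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes)
}

// Derive the S256 code_challenge that is sent with the authorization request.
fun deriveCodeChallenge(verifier: String): String {
    val digest = MessageDigest.getInstance("SHA-256").digest(verifier.toByteArray())
    return Base64.getUrlEncoder().withoutPadding().encodeToString(digest)
}

// The authorization request carries code_challenge and code_challenge_method=S256;
// the later token request carries the original code_verifier, which the
// authorization server hashes and compares before issuing tokens.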

Redirect URI as a Way of Proving Provenance

An essential part of the security of an OAuth flow is using a user agent, like a browser, and the redirect Uniform Resource Identifier (URI). After the authorization server authenticates the user and collects any required consents, the server redirects the user back to the client. At this point, the authorization server needs to verify that the authorization code (or tokens, in an implicit flow) is sent back to the correct client, not to an imposter. This is where the browser and the redirect URI are useful. By leveraging the redirect URI, the server verifies that it sends the data back to the correct client, and the browser ensures this request is redirected to the right place. We all trust that the browser uses accurate DNS information about the domain and redirects to the correct target.

The same mechanism is used in native apps, but here the user is redirected back to an app instead of a web page. Redirection to a mobile app can be implemented in two ways: via custom URI schemes or via claimed HTTPS links (known as App Links on Android and Universal Links on iOS).

With a custom URI scheme, your application registers its own scheme, such as myapp://, and the operating system knows which app to open when a URI with that scheme is called. However, this approach has security issues, as malicious applications can register the same scheme as a legitimate app and intercept redirects meant for it.

The other method, claimed HTTPS links, uses the https:// scheme and associates an application with a given domain. The application’s signature and package are verified against a file hosted on that domain, providing another level of control. Because the association is anchored in the domain and protected by TLS, the operating system can be sure that a genuine redirect is in play and hand it to the legitimate app. Claimed HTTPS links are, therefore, currently the recommended way to handle redirect URIs in mobile applications.
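
The sketch below shows, in Kotlin, what receiving such a redirect can look like on Android: an Activity registered in the manifest for a claimed HTTPS redirect URI reads the authorization code and state from the incoming intent. The domain app.example.com and the surrounding handling are illustrative assumptions, not part of any particular product.

import android.app.Activity
import android.os.Bundle

// Hypothetical Activity registered for https://app.example.com/callback.
class RedirectActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val redirectUri = intent?.data
        val code = redirectUri?.getQueryParameter("code")
        val state = redirectUri?.getQueryParameter("state")
        if (code != null && state != null) {
            // Verify that state matches the value sent in the authorization request,
            // then exchange the code (together with the PKCE code_verifier) for tokens.
        }
        finish()
    }
}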

WebView vs. In-App vs. External Browsers

Some time ago, Android and iOS introduced a way for developers to open web pages straight in their apps. WebViews were meant to boost the user experience, as the need to open a separate browser was removed. However, this solution has some security flaws. When a page is opened straight inside a mobile app, the user cannot verify whether they are browsing the correct domain, because there is no address bar to inspect. This makes the user more vulnerable to phishing attacks.

What is more, the host app could read the form inputs from the opened page and harvest a user’s credentials. Therefore, it is now recommended to use only in-app browser tabs (such as Chrome Custom Tabs on Android, or SFSafariViewController and ASWebAuthenticationSession on iOS) or the external system browser when an app needs to open a page. By following this best practice, the user can verify the domain loaded by the app. Since the page is opened in a browser (even if through an in-app browser tab), a certain level of sandboxing is assured: the host app has no access to the user’s inputs, nor can it read cookies exchanged with the authorization server.
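
As an example of that recommendation, the Kotlin sketch below opens the authorization request in an in-app browser tab (Chrome Custom Tabs) rather than a WebView. The authorization server URL, client ID and redirect URI are placeholders, and a real request would also carry state, nonce and the PKCE code_challenge.

import android.content.Context
import android.net.Uri
import androidx.browser.customtabs.CustomTabsIntent

// Launch the (hypothetical) authorization endpoint in a Custom Tab, so the user
// keeps the browser's security indicators and the host app cannot read the page.
fun launchAuthorizationRequest(context: Context) {
    val authorizeUri = Uri.parse(
        "https://idsvr.example.com/oauth/authorize" +
            "?client_id=example-client" +
            "&response_type=code" +
            "&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback" +
            "&scope=openid%20profile"
    )
    CustomTabsIntent.Builder()
        .build()
        .launchUrl(context, authorizeUri)
}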

Browsers, claimed HTTPS links and PKCE combine to create a robust solution for handling OAuth flows in native mobile applications. Users can verify the domain of the authorization server they are redirected to. Malicious apps also have a much harder time using a legitimate client ID to perform a login: thanks to the claimed HTTPS redirect URI, the authorization code is handed back to the legitimate app, and PKCE ensures that no app can use a code originally issued to another.

Granted, using an additional browser application can sometimes hinder usability. For example, claimed HTTPS links don’t return the user to the app without a user gesture, such as a button click. Also, occasional browser changes or bugs can make OIDC for native apps challenging to maintain.

Dynamic Client Registration

One recommended method to handle the authentication of mobile apps is to implement Dynamic Client Registration (DCR). When using DCR, every instance of a mobile application registers itself at the authorization server as a separate client and gets its own secret. The app can then store that secret securely so that it can’t be stolen through code decompilation. Even if that secret is stolen, it represents only that specific instance of the application and thus has limited usefulness.
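
A registration call following RFC 7591 is essentially an HTTP POST carrying the client’s metadata. The Kotlin sketch below assumes a hypothetical registration endpoint and an initial access token used to authorize the call; the metadata values are placeholders.

import java.net.HttpURLConnection
import java.net.URL

// Register this app instance as its own client and return the server's JSON response,
// which contains the instance-specific client_id and client_secret.
fun registerClientInstance(initialAccessToken: String): String {
    val metadata = """
        {
          "redirect_uris": ["https://app.example.com/callback"],
          "grant_types": ["authorization_code"],
          "token_endpoint_auth_method": "client_secret_basic"
        }
    """.trimIndent()
    val connection = URL("https://idsvr.example.com/oauth/register")
        .openConnection() as HttpURLConnection
    connection.requestMethod = "POST"
    connection.doOutput = true
    connection.setRequestProperty("Authorization", "Bearer $initialAccessToken")
    connection.setRequestProperty("Content-Type", "application/json")
    connection.outputStream.use { it.write(metadata.toByteArray()) }
    // The returned client_id and client_secret should be stored in secure,
    // device-local storage, for example backed by the platform keystore.
    return connection.inputStream.bufferedReader().readText()
}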

DCR provides a decent solution to the mobile client authentication problem, but it has its drawbacks. To register itself at the DCR endpoint of the authorization server, the application might need some kind of authorization so that the DCR functionality is not abused. This leads back to the problem of how to authorize that DCR call, which often means, once again, baking some kind of credential into the application code. DCR also means that the database of registered clients can grow to an enormous size if many instances of the mobile application register themselves. This can make management difficult, for example when you want to add a new scope to the mobile app, unless the authorization server provides features for dealing with this.

Attestation as Client Authentication Method

More modern versions of Android and iOS have introduced functionality that can be used in OAuth flows for client authentication: client attestation. The client attestation API uses asymmetric cryptographic keys stored securely on the device in a specialized hardware module. An application then uses these keys to sign assertions about the device and the application.

The resulting message proves that the user is running the legitimate application published in an application store. This can be achieved by asserting the application’s signature digest and package name. The message also proves that the device is not rooted or jailbroken. The keys used for attestation are themselves signed with certificates that have a trust chain up to root certificates published by Google and Apple. This enables any party that receives the signed message to verify its trustworthiness.
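
On Android, for example, the building block is key attestation in the hardware-backed keystore. The Kotlin sketch below generates a signing key with a server-supplied attestation challenge and reads back the certificate chain that a server can validate against Google’s attestation roots. The key alias and challenge handling are illustrative assumptions.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore
import java.security.cert.Certificate

// Create a hardware-backed EC key with an attestation challenge and return its
// certificate chain, which embeds statements about the device and the app.
fun generateAttestedKey(challenge: ByteArray): Array<Certificate> {
    val spec = KeyGenParameterSpec.Builder("client-attestation-key", KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setAttestationChallenge(challenge)
        .build()
    KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore").apply {
        initialize(spec)
        generateKeyPair()
    }
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    return keyStore.getCertificateChain("client-attestation-key")
}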

In an OAuth flow, mobile attestation can be used to sign JSON Web Tokens (JWTs) that serve as client assertions for client authentication. Using this method, the authorization server can ensure that it is dealing with the legitimate application, even though no hard-coded secret is used.
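
In practice, that means the token request carries the signed JWT using the client assertion parameters from RFC 7523. The Kotlin sketch below assumes the assertion has already been produced and signed with the attestation-backed key; the endpoint, client ID and redirect URI are placeholders.

import java.net.HttpURLConnection
import java.net.URL
import java.net.URLEncoder

// Exchange the authorization code for tokens, authenticating the client with a
// JWT client assertion instead of a static secret.
fun redeemCodeWithClientAssertion(code: String, codeVerifier: String, clientAssertion: String): String {
    val form = listOf(
        "grant_type" to "authorization_code",
        "code" to code,
        "code_verifier" to codeVerifier,
        "redirect_uri" to "https://app.example.com/callback",
        "client_id" to "example-client",
        "client_assertion_type" to "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion" to clientAssertion
    ).joinToString("&") { (name, value) -> "$name=${URLEncoder.encode(value, "UTF-8")}" }
    val connection = URL("https://idsvr.example.com/oauth/token")
        .openConnection() as HttpURLConnection
    connection.requestMethod = "POST"
    connection.doOutput = true
    connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded")
    connection.outputStream.use { it.write(form.toByteArray()) }
    return connection.inputStream.bufferedReader().readText() // JSON with the tokens
}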

Future of OAuth Native App Security

RFC 8252, OAuth for Native Apps, was published in 2017, and the world has since moved on. Additional security is required without the usability and reliability problems introduced by browsers. That is where attestation proves its power. Attestation is treated as a first-class citizen in the mobile world, to the point where new solutions have emerged, such as Google SafetyNet API, to fill the gap for devices for which hardware attestation is not possible.

Attestation allows a mobile public client to be authenticated. Thus, the browser is no longer required to prove the client’s legitimacy. This allows developers to create OAuth flows that do not have to rely on an external user agent. At Curity, this has allowed us to develop the Hypermedia Authentication API (HAAPI), a way for clients to perform OAuth and OpenID Connect flows without the browser. Our HAAPI implementation shows how mobile attestation proves to be a useful and powerful tool in building new features. The API enables any authentication method to be used, including App2App handover to specialist third-party authentication systems, such as BankID. We believe that implementations that drop their reliance on the browser in favor of attestation are the future of OAuth security in native apps.

Conclusion

Native applications differ significantly from backend applications that run securely on web servers. This influences many architecture and security decisions, and OAuth is no different. Native app developers should remember to follow best practices when implementing security with OAuth flows. Remembering that mobile applications cannot be treated as confidential OAuth clients is paramount to creating secure solutions. This calls for using other features that increase security, such as PKCE, claimed HTTPS redirect URIs and attestation.

Written by Curity

Curity is the leading supplier of API-driven identity management, providing unified security for digital services. Visit curity.io or contact info@curity.io