
Prototype Pollution, an overlooked application security hole

Several well-known NPM packages, including ember.js, xmldom, loader-utils and deep-object-diff, have recently been found to contain prototype pollution vulnerabilities. I decided to investigate these vulnerabilities to see whether there were any common patterns we could identify and steer clear of.

All of these NPM packages are open source, which allows us to see where the vulnerabilities were introduced and, by reviewing the fixes, how they were remediated.

A Brief Overview of the Prototype Pollution Vulnerability

Before we dig into these vulnerable libraries to look for common patterns, let's review some fundamental concepts: what a prototype is in JavaScript, how prototype pollution vulnerabilities are introduced, and how they can be exploited.

What is a Prototype in JavaScript

Every object in JavaScript has a built-in property, which is called its prototype. The prototype is itself an object, so the prototype will have its own prototype, making what’s called a prototype chain. The chain ends when we reach a prototype that has null for its own prototype.

The following code snippet defines an object called Student. The prototype of Student is the built-in Object, which comes with many predefined properties.
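
A minimal version of such a snippet could look like the following (a sketch; the Student constructor and the studObj instance name match the references used later in this post):

    // Student is a plain constructor function; its instances inherit from
    // Student.prototype, which in turn inherits from Object.prototype.
    function Student(name) {
      this.name = name;
    }

    const studObj = new Student('Alice');

    // The prototype chain: studObj -> Student.prototype -> Object.prototype -> null
    console.log(Object.getPrototypeOf(Student.prototype) === Object.prototype); // true
    console.log(studObj.toString()); // "[object Object]" - inherited from Object.prototype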

One power of the prototype is that it allows an object to inherit properties/attributes from its prototype. In this case, the Student object can inherit and use all the predefined properties of its prototype, Object.

Key takeaway 1: an object can access the properties/attributes of its prototype through prototype inheritance. In this example, the toString() function is defined on Object.prototype (which sits in Student's prototype chain) and can be accessed by any Student object.

Key takeaway 2: an object can have many instances, and they all share the same prototype properties.

Key takeaway 3: JavaScript allows some object attributes, including the prototype property, to be changed at runtime, though overwriting the prototype of a built-in object is considered bad practice.

How does prototype pollution occur and how is it exploited

As the term “prototype pollution” suggests, it happens when an attacker is able to manipulate and alter an object's prototype. Once an attacker can modify an object's prototype, all the instances that share that prototype's properties are affected.

To clarify, let's tweak the code above by replacing the toString() property of the Object prototype with a customized function.
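
The tweak can be sketched as:

    // Walk up studObj's prototype chain (Student.prototype, then
    // Object.prototype) and overwrite toString with a custom function.
    studObj.__proto__.__proto__.toString = function () { return "I am changed"; };

    console.log(studObj.toString()); // "I am changed"
    console.log({}.toString());      // "I am changed" - a brand new object is affected too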

When we run this code in the browser console and update the toString() function on the Object prototype, the studObj.toString() statement executes the new toString() function, and the same is true for a newly created object {}, since both share the same Object prototype.

If you look at the payload we used in the example above, you'll see that it has three parts:

Part 1: studObj.__proto__.__proto__.

This part obtains a reference to the target Object.prototype; in real payloads you may see many more .__proto__. segments if the prototype chain is long.

Part 2: .toString 

This part specifies which function we want to change or add on Object.prototype. In this example, we want to override the toString() function defined on Object.prototype.

Part 3: = function() { return "I am changed" }

This part is used to set up the new value for the prototype property. 

In a nutshell, a successful exploit requires access to Object.prototype and the ability to modify or add its properties. You may wonder how any app would allow these conditions to be met. In the next section, we will examine four recent prototype pollution vulnerabilities to 1) see how they were introduced into the code and 2) see what mitigation methods were employed to fix them.

Case studies for Prototype Pollution Vulnerability & Remediation

Case 1: Prototype Pollution Vulnerability in Ember.js < 4.4.3 (PR)

Root Cause

Ember.js provides two functions, EmberObject.setProperties and EmberObject.set, to set properties of an object.
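
The vulnerable usage pattern looks roughly like the sketch below (untrustPath stands for a path string taken from user input, and getUserInput is a hypothetical helper; this is an illustration, not Ember's own source):

    import EmberObject, { set } from '@ember/object';

    const obj = EmberObject.create();

    // untrustPath is attacker-controlled, e.g. "__proto__.__proto__.srcdoc"
    const untrustPath = getUserInput();   // hypothetical helper returning user input
    set(obj, untrustPath, 'polluted');    // walks the path and assigns the value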

As there is no validation on the untrustPath variable, an attacker who sets the path to __proto__.__proto__.srcdoc can modify a property on the fundamental Object prototype, which affects every object that inherits from it.

Mitigation

Referring to the remediation PR, the mitigation is to forbid the specific keywords __proto__ and constructor in order to block access to the prototype chain.
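
A simplified sketch of this kind of keyword check (not Ember's exact code) could look like the following:

    // Reject any path segment that would walk into the prototype chain.
    function assertSafePath(path) {
      for (const part of path.split('.')) {
        if (part === '__proto__' || part === 'constructor') {
          throw new Error('Property set failed: disallowed key "' + part + '" in path');
        }
      }
    }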

Case 2: Potential Prototype Pollution vulnerability in xmldom < 0.7.7

Root Cause

The potential prototype pollution vulnerability (CVE-2022-37616) lies in the following function, which this library provides to copy one DOM element to another. (I marked it as potential because a valid POC has not been provided and this copy does NOT perform a deep clone.)
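
In simplified form, the copy pattern in question boils down to something like this (an illustrative sketch, not the library's exact code):

    // A for...in shallow copy: it enumerates every enumerable property of src,
    // including properties inherited from src's prototype chain, and writes
    // them all onto dest.
    function copy(src, dest) {
      for (var p in src) {
        dest[p] = src[p];
      }
      return dest;
    }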

Mitigation

In the mitigation (PR), the hasOwnProperty() method is used to determine whether the src object has the requested property as its own property (as opposed to inheriting it from its prototype). If the property is only inherited from the prototype, it is not copied, preventing a malicious user from reaching the prototype properties.
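
With the hasOwnProperty() guard applied, the sketch above becomes (again an illustration, not the exact patch):

    // Only copy properties that src owns directly; anything merely inherited
    // through src's prototype chain is skipped.
    function copy(src, dest) {
      for (var p in src) {
        if (Object.prototype.hasOwnProperty.call(src, p)) {
          dest[p] = src[p];
        }
      }
      return dest;
    }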

Case 3: Prototype Pollution in webpack loader-utils < 2.0.3

Root Cause

This potential vulnerability (CVE-2022-37601) was caused by the parseQuery() function, which parses query parameters and composes a new object called result from the parameter names and values.
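
The risky pattern can be sketched roughly like this (a simplified illustration, not loader-utils' actual code):

    // Simplified sketch: build a result object directly from query-string
    // names and values. Nothing filters names such as "__proto__" or
    // "constructor", so a name like "__proto__" is handled by the inherited
    // prototype accessor rather than stored as an ordinary own key.
    function parseQueryNaive(query) {
      const result = {};
      for (const pair of query.replace(/^\?/, '').split('&')) {
        const [name, value] = pair.split('=');
        result[decodeURIComponent(name)] = decodeURIComponent(value);
      }
      return result;
    }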

Mitigation

The mitigation is straightforward: create the object with Object.create(null), because objects created this way do NOT inherit from Object.prototype. This means the created object cannot reach the prototype properties at all.
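
A quick illustration of the difference:

    // An object created with Object.create(null) has no prototype at all,
    // so there is no inherited __proto__ accessor to abuse.
    const result = Object.create(null);

    console.log(Object.getPrototypeOf(result)); // null
    result['__proto__'] = 'value';              // just an ordinary own key now
    console.log(result['__proto__']);           // "value" - Object.prototype untouched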

Case 4: Prototype Pollution in deep-object-diff

Root Cause

This weakness was introduced when two objects were deeply compared and the difference between them was returned as a new object. Because the difference may contain a prototype property and is under the attacker's control, it enables prototype pollution.
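
A simplified sketch of the risky diffing pattern (an illustration only, not deep-object-diff's actual code):

    // Naive deep diff: keys of the attacker-controlled "updated" object are
    // written straight onto a plain result object.
    function naiveDiff(original, updated) {
      const result = {};                 // inherits from Object.prototype
      for (const key of Object.keys(updated)) {
        if (original[key] !== updated[key]) {
          // key may literally be "__proto__": the assignment below then goes
          // through the inherited __proto__ setter instead of creating an
          // ordinary own property, so the result's prototype ends up
          // attacker-controlled.
          result[key] = updated[key];
        }
      }
      return result;
    }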

Mitigation

Similar to Case 3, the mitigation is to use Object.create(null) to create an object that does not inherit any prototype properties.

Going through all the reported vulnerabilities, it seems they are most likely to be introduced when one of the following operations occurs:

  1. Path assignment to set the property of an Object.  (Case 1, Case 3)
  2. Clone/Copy an object   (Case 2, Case 4)

The common mitigation methods include:

  1. Create objects without prototype inheritance using Object.create(null) (Case 3, Case 4)
  2. Use the hasOwnProperty() function to check whether a property is on the actual object or inherited via the prototype (Case 2)
  3. Validate the user input with specific keyword filtering (Case 1)

Prototype Vulnerabilities, a much wider security issue

After reviewing several real-world examples of how prototype pollution vulnerabilities are introduced and exploited, I went on to examine some NPM open source libraries to determine whether prototype pollution is a widely ignored issue.

Without spending too much effort, I discovered and verified that two open source libraries used for merging objects are vulnerable to prototype pollution, by searching GitHub source code for specific patterns like the one sketched below.
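
One representative shape of such a vulnerable merge, sketched for illustration (the actual library code is not reproduced here):

    // Recursive merge without key filtering: a source key of "__proto__"
    // lets the nested attacker object be merged into the target's prototype.
    function merge(target, source) {
      for (const key of Object.keys(source)) {
        if (typeof source[key] === 'object' && source[key] !== null) {
          if (typeof target[key] !== 'object' || target[key] === null) {
            target[key] = {};
          }
          merge(target[key], source[key]);
        } else {
          target[key] = source[key];
        }
      }
      return target;
    }

    merge({}, JSON.parse('{"__proto__": {"polluted": true}}'));
    console.log({}.polluted); // true - Object.prototype has been polluted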

These two libraries appear to be rather dated, though they still see hundreds of downloads every week. I have contacted the maintainers but have not yet received a response.

All of these instances, in my opinion, are really the tip of the iceberg in terms of how JavaScript libraries are vulnerable to prototype pollution.

I see two possible explanations for why this vulnerability is so prevalent: 1) Given that this vulnerability is very recent, few developers appear to be completely aware of it. 2) At the moment, automation methods to find this kind of vulnerability are not very mature. 

How to Identify OAuth2 Vulnerabilities and Mitigate Risks

OAuth2 Vulnerability Case Studies based on HackerOne Public Disclosure Reports

Even though OAuth2 has been the industry-standard authorization framework since it replaced OAuth1 in 2012, its many complexities have led to potential security issues. For security engineers, it’s vital to understand what OAuth2 is, how it works, and how poor implementation can lead to vulnerabilities.

In this article, we'll highlight some common attack vectors or vulnerabilities against OAuth2 by referring to HackerOne public disclosure reports with actual cases, and explain how to mitigate them.

What is OAuth2?

OAuth2 is a widely used framework for access delegation, which allows users to grant one application (the client application) limited access to their resources hosted on another application or website (the resource application).

If you have some basic knowledge about OAuth2 but have never implemented OAuth2 in your application, the two diagrams below might help you to refresh the concepts of OAuth2 and the mechanism of how it works. The diagrams illustrate the workflow for two common OAuth2 grant types, authorization code grant (Figure 1) and the still-in-use but deemed insecure implicit grant (Figure 2).

OAuth2 itself is fundamentally complicated, as it is designed to solve the vital access-delegation problem across many complex web environments (mobile apps, web servers, etc.). Due to this complexity, many security engineers may not fully understand how OAuth2 works. As a consequence, we see many security issues caused by misconfiguration or poor implementation of OAuth2. To make it worse, some exploits against these OAuth2 misconfigurations are extremely simple and easy to launch.

Common OAuth2 Vulnerabilities from HackerOne’s Public Disclosure

Let's take a look at some common attack vectors or vulnerabilities against OAuth2 by referring to HackerOne public disclosure reports. We hope that referencing the actual exploitation disclosures makes the explanations of these vulnerabilities clear. At the end of each of the following sections, you will also learn how to mitigate these vulnerabilities.

Vulnerability 1: Missing validation in redirect_uri leads to access token takeover

HackerOne Reports:

https://hackerone.com/reports/665651

https://hackerone.com/reports/405100

The redirect_uri parameter in the OAuth2 workflow is used by the authorization server as the location to which the access_token or auth_code is delivered by means of a browser redirect. In Figure 1, the redirect_uri parameter is initialized by the client application as part of the request to the authorization server in step 2, when a web user clicks the login button. After the authorization server validates the credentials (step 6), it sends back the auth_code (or the access_token for an implicit grant, step 7 in Figure 2) as a parameter appended to the redirect_uri used in step 2.

If a malicious user can trick the victim into sending a request to the authorization server with an attacker-controlled redirect_uri, and the authorization server does NOT validate the redirect_uri, the access_token will be delivered to a URI the attacker controls.

The case of stealing users’ OAuth tokens via redirect_uri is, unfortunately, a typical one, where the authorization server performs a poor validation on the redirect_uri and the attacker is able to bypass the validation with a malicious link they control.

Mitigation

Implement a robust redirect_uri validation on the authorization server by considering the following approach:

  1. Perform a match between client_id and redirect_uri to ensure the redirect_uri matches the one registered for that client_id on the authorization server (a minimal sketch follows this list).
  2. Use a whitelist approach if the number of client applications is manageable.
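
A minimal sketch of such a validation on the authorization server (illustrative only; registeredClients stands in for the server's own store of client registrations):

    // Exact-match redirect_uri validation against the URIs registered for a client.
    const registeredClients = {
      'client-123': ['https://app.example.com/oauth/callback'],
    };

    function isRedirectUriAllowed(clientId, redirectUri) {
      const allowed = registeredClients[clientId] || [];
      // Require an exact string match; avoid prefix or substring checks,
      // which are what attackers typically bypass.
      return allowed.includes(redirectUri);
    }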

Vulnerability 2: Missing state parameter validation leads to CSRF attack

HackerOne Reports

https://hackerone.com/reports/111218

https://hackerone.com/reports/13555

In OAuth2 implementation, the state parameter (initialized under step 2) allows client applications to restore the previous state of the user. The state parameter preserves some state object set by the client in the authorization request and makes it available to the client in the response. 

Here is the correct implementation of the state parameter (a minimal code sketch follows the steps below):

  1. The client application initializes the request to the authorization server with a state parameter in the request URL (Step 2).
  2. The client application stores the state parameter value in the current session (Step 2).
  3. The authorization server sends the access_token back to the client application (Step 7 in Figure 2) together with the state parameter.
  4. The client application compares the state stored in the current session with the state parameter sent back by the authorization server. If they match, the access_token is consumed by the client application; otherwise, it is discarded, which prevents the CSRF attack.
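
A minimal sketch of these steps on the client application's back end (Express-style pseudocode; names such as buildAuthorizeUrl and the session object are assumptions, not a specific library's API):

    const crypto = require('crypto');

    // Step 2: generate an unguessable state, keep it in the user's session,
    // and send it along with the authorization request.
    function startLogin(req, res) {
      const state = crypto.randomBytes(16).toString('hex');
      req.session.oauthState = state;
      res.redirect(buildAuthorizeUrl({ state }));   // hypothetical URL builder
    }

    // Callback: only accept the response if the state echoes back unchanged.
    function handleCallback(req, res) {
      if (req.query.state !== req.session.oauthState) {
        return res.status(403).send('Invalid state - possible CSRF');
      }
      // ...exchange the code / consume the token...
    }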

However, since the state parameter is not required for a successful OAuth2 workflow, it is very often omitted or ignored during implementation. Without validation of the state parameter, a CSRF attack can easily be launched against the client application.

This HackerOne report is a good example of how an attacker can attach their account to a different account on the client application due to the lack of a state parameter. Sometimes, even when the state parameter is present in the callback request from the authorization server, it is still not validated, leaving the application vulnerable to a CSRF attack.

Mitigation

Ensure the state parameter is passed between requests and that state validation is implemented, so that an attacker cannot attach their account to the victim's account.

Vulnerability 3: Client_secret mistakenly disclosed to the public 

HackerOne Reports

https://hackerone.com/reports/272824

https://hackerone.com/reports/397527

The client_secret is used by the client application to make a request to the authorization server to exchange the auth code to the access token (step 8 in Figure 1). The client_secret is a secret known only to the client application and the authorization server. 

Some developers accidentally disclose the client_secret to end users because the access_token retrieval request (step 8 in Figure 1) is mistakenly executed by front-end JavaScript code rather than through the back-end channel.

In this HackerOne report about token disclosure, the client_secret is publicly exposed in the HTML page because the exchange of the auth_code for an access_token (step 8 in Figure 1) is performed by a piece of front-end JavaScript code.

Mitigation

To avoid disclosing the client_secret to the public, developers should understand which OAuth2 grant type fits their application, as different applications call for different options. If your client application has a back-end server, the client_secret should never be exposed to the public, since all interaction with the authorization server can be completed through the back-end channel. If your client application is a single-page web application or a mobile app, choose a grant type that does not rely on a client_secret; for example, use the Authorization Code grant with PKCE instead.
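
For reference, a minimal sketch of generating the PKCE values on a public client (using Node's crypto module; an illustration, not a complete OAuth2 client):

    const crypto = require('crypto');

    // The verifier stays on the client; only its hash travels in the
    // authorization request.
    const codeVerifier = crypto.randomBytes(32).toString('base64url');
    const codeChallenge = crypto
      .createHash('sha256')
      .update(codeVerifier)
      .digest('base64url');

    // codeChallenge (+ code_challenge_method=S256) goes in the authorization
    // request; codeVerifier is only sent later when exchanging the auth code,
    // so a stolen code alone is useless and no client_secret is ever shipped.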

Vulnerability 4: Pre-account takeover 

HackerOne Report

https://hackerone.com/reports/1074047

A pre-account takeover could occur when the following two conditions are met: 

  1. The client application supports multiple authentication methods: login with a password, and login through a third-party service (like Facebook or Google) acting as an OAuth provider.
  2. Either the client application or the third-party service does not perform email verification during the signup process.

This HackerOne report details how a misconfigured OAuth can lead to pre-account takeover:

  1. Attacker creates an account with a victim’s email address and the attacker’s password before the victim has registered on the client application.
  2. The victim then logs in through a third-party service, like Google or Facebook.
  3. The victim performs some sensitive actions in the client application. The client application will save these actions, and it will probably use the email address as an identifier for its users in the database.
  4. Now the attacker can log in as the victim, using the victim's email address and the attacker's password created in step 1, and read the sensitive data added by the victim.

Mitigation

Perform email validation when creating a new user.  

Vulnerability 5: OAuth2 access_token is leaked through referrer header 

HackerOne Reports

https://hackerone.com/reports/835437

https://hackerone.com/reports/787160

https://hackerone.com/reports/202781

One design weakness of OAuth2 itself is that the implicit grant type passes the access_token in the URL. Once you put sensitive data in a URI, you risk exposing it to third-party applications, and OAuth2 implementations are no exception.

In this HackerOne report about access_token smuggling, for example, the access_token was exposed to a third-party website controlled by the attacker after a chained redirection by taking advantage of the referrer header.

Mitigation

As this is a design issue of OAuth2 itself, the easiest mitigation is to strengthen the referrer policy with <meta name="referrer" content="origin" />.

Vulnerability 6: OAuth2 login bypass due to lack of access_token validation

HackerOne Report

https://hackerone.com/reports/245408

A lack of access_token validation by the client application makes it possible for an attacker to log in to other users’ accounts without knowing their password.

Once the authorization server sends the access_token back to the client application, the client application sometimes needs to bind the access_token to a user identity so that it can be stored in a session. The exploitation of this vulnerability happens when an attacker binds their access_token to any user identity and then impersonates that user without logging in.

In this HackerOne report, the security researcher was able to log in as any user simply by supplying the victim's email address, because the client application did not validate whether the access_token belonged to that user.

Mitigation

The client application should validate that the access_token actually belongs to the user it is being bound to.
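
A minimal sketch of that check (illustrative; the userinfo endpoint URL and the createSession helper are assumptions, not a specific provider's API):

    // Derive the user identity from the token itself by asking the
    // authorization server, instead of trusting an email supplied alongside it.
    async function loginWithAccessToken(accessToken) {
      const resp = await fetch('https://auth.example.com/oauth/userinfo', {
        headers: { Authorization: `Bearer ${accessToken}` },
      });
      if (!resp.ok) {
        throw new Error('access_token rejected by the authorization server');
      }
      const profile = await resp.json();
      // Create the session for the identity returned by the provider,
      // never for an email the caller chose to send with the request.
      return createSession(profile);
    }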

Summary

The OAuth2 framework is complicated and offers a lot of implementation flexibility. Because of this flexibility, however, the security of an OAuth2 implementation is largely in the hands of its developers. Developers with a strong security mindset can make an implementation more secure, while developers with less security training are likely to introduce security holes during OAuth2 implementation. For any organization, it is vital to train and educate developers in the latest security best practices to reduce risk during OAuth2 implementation.