November 2015


HTTP Evasions Explained - Part 8 - Borderline Robustness


This is part eight in a series explaining the evasions done by HTTP Evader. This part looks at the excessive and inconsistent robustness attempts of the browser vendors and how these can be used to evade firewalls. As an example, in the following HTTP response the character "\000" inside the field name is simply ignored by Chrome and Opera, while several firewalls do not understand it and pass the response through:

   HTTP/1.1 200 ok
   Transfer\000-Encoding: chunked
   malware with chunked encoding
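The gap between the two parsers can be sketched as follows. This is an illustrative model only, not the code of any real firewall or of Chrome; the function names are made up for this example:

```python
# A strict matcher compares the raw field name verbatim, as a naive
# inspection engine might; a browser-like lenient parser strips "\000"
# bytes before interpreting the header field.

RAW_FIELD = b"Transfer\000-Encoding: chunked"

def strict_field_name(line: bytes) -> bytes:
    """Take the field name verbatim, as a naive inspection engine might."""
    return line.split(b":", 1)[0]

def lenient_field_name(line: bytes) -> bytes:
    """Drop NUL bytes first, mimicking the tolerant behavior described above."""
    return line.replace(b"\000", b"").split(b":", 1)[0]

print(strict_field_name(RAW_FIELD))   # still contains the NUL -> no match
print(lenient_field_name(RAW_FIELD))  # b'Transfer-Encoding'   -> chunked body
```

The strict matcher never sees a "Transfer-Encoding" header and thus never applies the chunked decoding that the browser will apply.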

The previous article in this series was Part 7 - Lucky Numbers and the next part is Part 9 - How To Fix Inspection.

The Robustness Principle

The Robustness Principle effectively means that software should be strict (i.e. standard conforming) in what it sends but tolerant in what it accepts.

Unfortunately browsers are more tolerant of bad HTTP than most firewalls expect. Combined with the common behavior of firewalls to pass everything which does not really look bad, this leads to several evasions. Let's take a closer look into the abyss of robustness.


The character "\000" (i.e. byte 0x00) has a long tradition of causing security problems. The Chrome browser (and other browsers based on the same engine, like Opera) ignores this character in almost all places, so even the following is an HTTP response header for which Chrome expects a chunked body encoding:

   \000HTTP/1\000.1 200 ok
   Transfer\000-Encoding: chunked

Ignoring \000 inside field names and field values is unique to Chrome and Opera, but the other cases are accepted by more browsers as well. Sure enough, this kind of behavior is usually not expected by firewalls.

HTTP/1.1 vs. http/1.1 vs. HTTP/2.1 vs. HTTP/0.9 ...

For HTTP 1.0 and HTTP 1.1 the status line of the response header starts with the string "HTTP/" (all upper case!) followed by the HTTP version. No versions other than 1.0 and 1.1 are allowed, but browsers happily accept variations:
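The difference between the standard grammar and this kind of tolerance can be sketched with two regular expressions. The strict pattern follows the standard; the lenient one merely illustrates the sort of acceptance the section title hints at (lower case "http/", versions like 2.1 or 0.9), and is not an exact model of any particular browser:

```python
import re

# Strict: upper-case "HTTP/", version 1.0 or 1.1, then a 3-digit status code.
STRICT = re.compile(rb"^HTTP/1\.[01] \d{3}")

# Lenient (illustrative): any case, any numeric version.
LENIENT = re.compile(rb"^http/\d+(\.\d+)? \d{3}", re.IGNORECASE)

for status in (b"HTTP/1.1 200 ok", b"http/1.1 200 ok", b"HTTP/2.1 200 ok"):
    print(status, bool(STRICT.match(status)), bool(LENIENT.match(status)))
```

A firewall that only recognizes the strict form will not treat the lenient variants as HTTP responses at all, and thus may not inspect their bodies.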

End of the header

While Attack of the White-Space covers different line endings, I did not notice then how differently the browsers behave regarding the end of the header. According to the standard the end of the header is an empty line, i.e. "\r\n\r\n", and all browsers accept "\n\n" too. But for IE and Edge "empty" has a slightly different meaning: they accept space and tab as part of the emptiness. This behavioral difference is best exploited with compression, because in that case the compressed data must start with the first byte of the body.

Thus this HTTP response works for IE and Edge:

   HTTP/1.1 200 ok\r\n
   Content-Encoding: gzip\r\n
   \t\r\n
   gzipped data

All the other browsers need an additional, truly empty line after the pseudo-empty line containing the tab:

   HTTP/1.1 200 ok\r\n
   Content-Encoding: gzip\r\n
   \t\r\n
   \r\n
   gzipped data

Of course firewalls behave differently here and can often be bypassed this way. Another variation of the header end is "\n\r\r\n" instead of "\r\n\r\n" (first two bytes swapped), which is accepted by IE, Edge and Safari.
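The difference between the standard rule and the tolerant "whitespace counts as empty" rule can be sketched like this. The function names are illustrative, and the tolerant variant models only the space/tab case described above, not every browser quirk:

```python
def strict_header_end(data: bytes) -> int:
    """Only the standard empty line "\r\n\r\n" terminates the header."""
    pos = data.find(b"\r\n\r\n")
    return pos if pos >= 0 else -1

def tolerant_header_end(data: bytes) -> int:
    """Accept a line holding only spaces/tabs as 'empty', as IE/Edge do.

    Returns the offset where the terminating (pseudo-)empty line starts,
    or -1 if no header end is found.
    """
    offset = 0
    for line in data.split(b"\r\n"):
        if line.strip(b" \t") == b"":
            return offset
        offset += len(line) + 2  # account for the "\r\n" separator
    return -1

# A response whose "empty" line contains a tab, as in the IE/Edge example:
response = b"HTTP/1.1 200 ok\r\nContent-Encoding: gzip\r\n\t\r\nbody"
print(strict_header_end(response), tolerant_header_end(response))
```

A strict inspection engine never finds the header end here and so never locates the body it should decompress and scan, while the tolerant client does.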

Other broken HTTP

Experiments with spaces, non-ASCII characters or completely invalid lines (i.e. lines not of the form "key: value") in the header show that most browsers simply ignore such lines. Other broken lines get interpreted creatively, as in the case of a double colon between key ("Transfer-Encoding") and value ("chunked"):

   HTTP/1.1 200 ok
   Transfer-Encoding:: chunked

With this HTTP response Firefox and Safari treat the data as chunked, while all the other browsers treat it as plain (non-chunked) data. Interpretations in the firewalls differ as well, so that about 25% of the firewalls in my test reports can be bypassed.
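The ambiguity comes down to what a parser does with the value after splitting on the first colon. A sketch, where the extra clean-up step merely illustrates one way to arrive at the Firefox/Safari result:

```python
# Splitting "Transfer-Encoding:: chunked" on the first colon leaves a value
# that still starts with a colon. Whether that value is later recognized as
# "chunked" depends on how aggressively it is cleaned up.

line = b"Transfer-Encoding:: chunked"
name, value = line.split(b":", 1)

plain_value = value.strip()            # b': chunked' -> not a known encoding
cleaned_value = value.lstrip(b": \t")  # b'chunked'   -> chunked body

print(name, plain_value, cleaned_value)
```

A firewall and a browser that land on different sides of this choice will disagree about where the chunk boundaries, and thus the scanned payload, are.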

There is more

These are just a few examples where the protocol handling in firewalls is no match for the overly broad and inconsistent robustness of browsers. If you are interested in looking at these browser quirks in more detail, or want to see how the firewall at your site deals with these problems, you can find out yourself at the HTTP Evader test site. Or maybe you want to know more about HTTP Evader and read about other bypasses.

Robustness Principle: More Harm than Good?

From a security point of view the robustness principle is a nightmare, because it means you have to protect a client whose software behaves in unknown ways. Apart from that, nobody really notices when the behavior of a web server conflicts with the standard, because the browsers handle it anyway. Thus you will find enough broken servers out there. And there is no good way for the browsers to become strict again, because then they might fail to work with existing servers.

With HTML it is widely known that browsers behave differently in edge cases or with invalid HTML, and it is extensively documented how this can be abused to evade filters in the XSS Filter Evasion Cheat Sheet or the HTML5 Security Cheat Sheet. I also recommend reading The Tangled Web by Michal Zalewski, which gives a deep look into the associated problems of securing web applications.

I'm not aware of any similar documentation for borderline robustness at the HTTP level. But apart from the firewall evasion tests, HTTP Evader also provides an extensive test suite to examine the robustness behavior of web browsers.