An Attacker’s Perspective Discuss how an attacker looks at the system
Solution
The Good Old Days
In 1996, computer security expert Elias Levy, better known by the alias Aleph One, submitted an article to the hacker magazine Phrack entitled “Smashing the Stack for Fun and Profit.” It was the first detailed public description both of how buffer overruns worked and of how to leverage them to gain total control over a target system (this was the same class of vulnerability exploited by the Morris worm).
On many C implementations it is possible to corrupt the execution stack by writing past the end of an array… Code that does this is said to smash the stack, and can cause return from the routine to jump to a random address. This can produce some of the most insidious data-dependent bugs known to mankind.
This flaw, caused by a programmer allowing more data to be written than space was allocated for, corrupts the internal state of the program. In the days of Aleph One, once a hacker found such a bug, he would proceed by overwriting the return address of a function. Understanding this requires a brief digression into compilers:
When a function is called, the program sets up a semi-isolated environment in which that function is run (known as the function’s stack space). Before the function begins, however, the program records its current location and writes that address to memory, the idea being that once the function ends, the program can simply look up the saved address of where it was before the function call (the stored return address) and resume where it left off.
However, as the aforementioned buffer overrun allows an attacker to change the program’s internal state, hackers can change the stored address to redirect execution to their own malicious code!
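The mechanism described above can be modeled without touching real memory. The sketch below simulates a stack frame as a byte array; the buffer size, offsets, and addresses are illustrative toy values, not those of any real program:

```python
import struct

# Toy model of a stack frame: a 16-byte local buffer followed
# immediately by the 8-byte saved return address.
BUF_SIZE = 16
frame = bytearray(BUF_SIZE + 8)
struct.pack_into("<Q", frame, BUF_SIZE, 0x401234)  # legitimate return address

def vulnerable_copy(frame, data):
    """Copies attacker data into the buffer with no bounds check, like strcpy()."""
    frame[:len(data)] = data  # writes past BUF_SIZE if data is too long

# Attacker input: fill the buffer, then keep writing into the saved address.
payload = b"A" * BUF_SIZE + struct.pack("<Q", 0xDEADBEEF)
vulnerable_copy(frame, payload)

saved_ret = struct.unpack_from("<Q", frame, BUF_SIZE)[0]
print(hex(saved_ret))  # the saved return address now points where the attacker chose
```

When the function returns, the program would "resume" at 0xdeadbeef instead of the caller, which is exactly the redirection of execution described above.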
The Dawn of Exploit Mitigations
Before a hacker can fully exploit a buffer overrun, he needs malicious code to already be present in the program’s memory. While this may seem like it would stop would-be hackers in their tracks, early computers made absolutely no distinction between data and executable code, allowing hackers to disguise injected code as ordinary input. In response to the proliferation of attacks exploiting buffer overruns, vendors began shipping protections, first in software (the PaX team in 2000) and later in hardware (Intel and AMD in 2004), known variously as DEP, W^X, NX, and XD. This protection enforced a systematic distinction between code and data, mandating that any given page of memory could be writable or executable, but never both. It was believed that this would forever stop programs from being exploited, but all it did was start an arms race.
Payload Already Inside: Introduction to Code Reuse Attacks
Now that they could no longer directly insert and run arbitrary code, hackers realized that they still had a way to make the target application execute instructions of their choosing: ret2libc. The crux is that since attackers could shape the program’s internal state however they wanted, they could forge calls to powerful system-level libraries already loaded alongside the program, in effect telling the program to “resume execution” inside the library itself!
The canonical example, and the one from which the technique draws its name, is forging a call to libc’s system("/bin/sh"), which hands the hacker an interactive command shell on the target machine.
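A ret2libc payload is just carefully arranged bytes. The sketch below builds the classic 32-bit layout; all of the addresses are hypothetical placeholders (in a real attack they would be read out of the target’s libc), and the buffer size is an assumed example:

```python
import struct

# Hypothetical, illustrative addresses -- real ones come from the target's libc.
SYSTEM_ADDR = 0xB7ECFFB0  # assumed address of libc's system()
EXIT_ADDR   = 0xB7EC60C0  # assumed address of exit(), used as system()'s "return"
BINSH_ADDR  = 0xB7FB63BF  # assumed address of the string "/bin/sh" inside libc
BUF_SIZE    = 64          # assumed distance from the buffer to the saved return address

# Classic 32-bit ret2libc layout: overflow up to the saved return address,
# then make the function "return" into system("/bin/sh").
payload  = b"A" * BUF_SIZE                 # filler up to the saved return address
payload += struct.pack("<I", SYSTEM_ADDR)  # overwritten return address
payload += struct.pack("<I", EXIT_ADDR)    # where system() itself will "return"
payload += struct.pack("<I", BINSH_ADDR)   # system()'s argument: the "/bin/sh" string
```

Note that no new machine code appears anywhere in the payload: it is nothing but addresses of code the program already has, which is what makes the attack immune to W^X.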
Randomization-Based Defenses
The next wave of protections centered on denying the attacker critical knowledge. In direct response to the rise of ret2libc, operating system vendors began shuffling the addresses of loaded libraries every time a program runs. Aptly named ASLR, or Address Space Layout Randomization, this for a time completely threw a wrench into the underlying idea of ret2libc: how can you tell the program to resume execution in the library if you don’t know where the library is? However, the story doesn’t end here. Just as hackers had learned to reuse code from loaded libraries rather than injecting their own, particularly savvy researchers and attackers noticed that they could chain together small snippets of machine instructions drawn from the program’s own code, a technique now known as return-oriented programming (ROP), essentially making the program a bootloader for a custom virtual machine! The machine instructions that compose a program have no idea that they are being executed in a very odd order; each snippet happily computes its small piece and then passes execution on to the next. Significantly, this requires absolutely no interaction with a third-party library at all.
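The gadget-chaining idea described above can be sketched as a toy interpreter. In this simulation, each “gadget” is a tiny operation that, like a real ret instruction, finishes by handing control to whatever address sits next on the attacker-controlled stack; the addresses and register names are invented for illustration:

```python
# Toy model of code reuse via gadget chaining (illustrative only).
# Each gadget mimics a tiny instruction sequence ending in `ret`.
def pop_rax(st):     st["rax"] = st["stack"].pop(0)   # gadget: pop rax; ret
def pop_rbx(st):     st["rbx"] = st["stack"].pop(0)   # gadget: pop rbx; ret
def add_rax_rbx(st): st["rax"] += st["rbx"]           # gadget: add rax, rbx; ret

# Hypothetical "addresses" where these snippets live in the program's own code.
GADGETS = {0x1000: pop_rax, 0x2000: pop_rbx, 0x3000: add_rax_rbx}

def run(stack):
    """Drive execution purely from the attacker-supplied stack of addresses."""
    st = {"rax": 0, "rbx": 0, "stack": stack}
    while st["stack"]:
        addr = st["stack"].pop(0)  # the `ret`: jump to the next address on the stack
        GADGETS[addr](st)
    return st["rax"]

# Attacker "payload": compute 2 + 3 using only the program's existing code.
chain = [0x1000, 2, 0x2000, 3, 0x3000]
print(run(chain))  # → 5
```

The chain contains no executable code at all, only addresses and data, which is why W^X has nothing to forbid: every instruction that runs was already part of the legitimate program.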
