
What, if any, are the security considerations of deciding to use an x64 vs x86 architecture?

  • x64 is an extension of x86. You should disambiguate your question by clarifying whether you're actually asking about 64-bit apps. As your question stands, it's less of a "versus" and more of a "whether or not you want guacamole on your burrito".
    – MonkeyZeus
    Commented Sep 15, 2021 at 16:40

1 Answer


EDIT: To be clear, this answer addresses the question of "Is it more secure to compile my app for 32-bit x86 vs x86-64?", and does not address the question of which hardware is more secure.

This is a partial duplicate of Do 64-bit applications have any security merits over 32-bit applications?, but this question is broader and that one is quite old.

In most ways, x64 is better. A few reasons:

  • Because size_t is 64 bits, it's far less likely that an integer overflow will occur due to addition or multiplication, which makes some common buffer overflow scenarios harder to trigger.
  • Because the actual amount of physical or virtual memory that can be allocated is much lower than SIZE_MAX, some scenarios that might otherwise lead to overflows will instead lead to memory allocation failures, which are easier to detect. For example, suppose an attacker can instantiate an arbitrary number of 32-byte structs in a buffer based on the length of an input string. Requesting 0x0800 0001 of them overflows a 32-bit size_t (the product is just over 2^32 bytes) while requiring a long but not impossibly long input, whereas overflowing a 64-bit size_t would take 0x0800 0000 0000 0001 of them, an utterly impossible input length. (A sketch of this appears after this list.)
  • Because the address space is so much larger, higher-entropy ASLR can be used. With low-entropy ASLR, an attacker that can quickly try the same attack multiple times, or that can try attacking many victims at once, may succeed simply out of sheer luck. The smaller address space also sometimes limits what relocations are even possible for loaded libraries, which also impairs ASLR. This is the main aspect of the question linked above.
  • Because a 32-bit process can only address 4 GB of virtual memory (minus kernel-reserved address space), a large but finite memory leak or expensive allocation can starve or crash it, where a 64-bit process would be fine (assuming there's enough physical RAM).
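
As a concrete illustration of the first two points, here is a minimal C sketch. The struct record type and its 32-byte size are assumptions matching the example above, not anything from a real codebase:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical 32-byte record, matching the example above. */
struct record { char data[32]; };

/* Unchecked allocation: with a 32-bit size_t, count = 0x08000001 makes
 * count * 32 wrap around to 32, so malloc() returns a tiny buffer and
 * later writes to it overflow. With a 64-bit size_t the product doesn't
 * wrap; the huge request simply makes malloc() fail, which the caller
 * can detect. */
struct record *alloc_records(size_t count)
{
    return malloc(count * sizeof(struct record)); /* may wrap on 32-bit */
}

/* Checked version: reject counts whose product would overflow. This is
 * correct on both 32-bit and 64-bit builds. */
struct record *alloc_records_checked(size_t count)
{
    if (count > SIZE_MAX / sizeof(struct record))
        return NULL; /* multiplication would overflow size_t */
    return malloc(count * sizeof(struct record));
}
```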

However, it's not all upside. One advantage of 32-bit processes is that a misbehaving one simply can't consume all available RAM on a modern system. I once found my PC running extremely slowly, and realized it was because a chat app had a remotely triggerable memory leak and had consumed all 32 GB of RAM on the machine. While in this case it wasn't actually malicious, it was a sober reminder that a bug (or vuln, even if not usable for code execution) in a single misbehaving app, running in the background, can severely impair the functioning of the whole computer. If the app had been 32-bit, it would only have been able to consume a few GB, and the performance of the machine wouldn't have been meaningfully impacted at all.

That said, the correct way to achieve this advantage is with limits and/or sandboxes, not by compiling for a 32-bit target. All major consumer and server OSes have ways to put limits on how many resources a process can consume, which are more customizable than simply relying on the 32-bit limit, and which can be applied to any program regardless of how it was compiled. Furthermore, sandboxing can reduce the impact of other vulnerabilities (or non-exploitable bugs).
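
On POSIX systems, for example, a process can cap its own address space with setrlimit(). A minimal sketch, with the 4 GB figure chosen arbitrarily for illustration:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap this process's virtual address space at 4 GB. This gives a
     * 64-bit process the same "can't eat all the RAM" property that a
     * 32-bit one gets for free, but with a deliberately chosen limit. */
    struct rlimit lim = { .rlim_cur = 4ULL << 30, .rlim_max = 4ULL << 30 };
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return EXIT_FAILURE;
    }
    /* ... allocations beyond the cap now fail instead of starving the
     * rest of the system ... */
    return EXIT_SUCCESS;
}
```

Windows offers the same capability through job objects (see the comments below).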

  • Incidentally, I once saw a buffer overrun in 6502 code attacking x64 Ubuntu; the first-stage shellcode was indeed 6502 machine code, yet it ran, and this was necessary for the exploit to work at all, as x64 shellcode would not have done anything useful.
    – Joshua
    Commented Sep 14, 2021 at 19:37
  • Sounds like the vulnerable system in that case was running a 6502 chip, not actually x64 at all, and the x64 target simply trusted the 6502 system? An AMD64/EM64T processor isn't going to execute 6502 machine code (outside of an emulator). In any case, there's no vulnerability there that has to do with the instruction set; just a vulnerable program using an 8-bit CPU that is trusted by another system (possibly in the same physical chassis, but almost certainly a distinct chip) that happens to run on a modern CPU. The x64 CPU could instead have been x86, ia64, Alpha, MIPS, ARM64, whatever.
    – CBHacking
    Commented Sep 14, 2021 at 22:26
  • A practical point worth raising is that choosing an x86_32 architecture in 2021 probably means you're using a very old CPU, unless you're using embedded stuff from a vendor like VIA. Both Intel and AMD had switched to 64-bit CPU architectures by 2007. That predates a wide range of security-relevant features, including IOMMUs and virtualisation. Going back just a few years before the x86_64 switchover, you wouldn't even have been guaranteed NX bit support in all consumer processors.
    – Polynomial
    Commented Sep 15, 2021 at 14:31
  • @Polynomial Good point, I was answering the question from the perspective of "compile for x86 vs. x64", rather than talking about hardware decisions. I should make that clearer.
    – CBHacking
    Commented Sep 15, 2021 at 17:21
  • @TheD Windows provides the APIs to limit process memory usage (among other things) - see SetInformationJobObject - and since the question (as I answered it) is from the developer's perspective, the developer is free (and expected) to set any limits they want. External processes can also do that, as the tools (both first- and third-party) in your linked question are doing. It would be cool if MS provided an in-box tool for it on all SKUs, but the APIs are the critical thing.
    – CBHacking
    Commented Sep 15, 2021 at 18:34
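
For reference, the job-object approach mentioned in the last comment looks roughly like this in C; the 4 GB cap is an arbitrary illustrative value, and error handling is omitted for brevity:

```c
#include <windows.h>

int main(void)
{
    /* Create an anonymous job object and set a per-process memory limit. */
    HANDLE job = CreateJobObjectW(NULL, NULL);
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = {0};
    info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    info.ProcessMemoryLimit = 4ULL * 1024 * 1024 * 1024; /* 4 GB; 64-bit build */
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &info, sizeof(info));
    /* Put the current process in the job; commits beyond the limit
     * will now fail. */
    AssignProcessToJobObject(job, GetCurrentProcess());
    return 0;
}
```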
