AUTOSAR FO EXP SecurityOverview
AUTOSAR FO R23-11
Disclaimer
This work (specification and/or software implementation) and the material contained in
it, as released by AUTOSAR, is for the purpose of information only. AUTOSAR and the
companies that have contributed to it shall not be liable for any use of the work.
The material contained in this work is protected by copyright and other types of intellectual property rights. The commercial exploitation of the material contained in this work requires a license to such intellectual property rights.
This work may be utilized or reproduced without any modification, in any form or by
any means, for informational purposes only. For any other purpose, no part of the work
may be utilized or reproduced, in any form or by any means, without permission in
writing from the publisher.
The work has been developed for automotive applications only. It has neither been
developed, nor tested for non-automotive applications.
The word AUTOSAR and the AUTOSAR logo are registered trademarks.
Contents
1 Introduction
1.1 Objectives
1.2 Scope
2 Definition of terms and acronyms
2.1 Acronyms and abbreviations
3 Related Documentation
3.1 Input documents & related standards and norms
4 Security Overview
4.1 Protected Runtime Environment
4.1.1 Introduction
4.1.2 Protection against Memory Corruption Attacks
4.1.3 Overview
4.1.4 Secure Coding
4.1.5 Attacks and Countermeasures
4.1.5.1 Code Corruption Attack
4.1.5.2 Control-flow Hijack Attack
4.1.5.3 Data-only Attack
4.1.5.4 Information Leak
4.1.6 Existing Solutions
4.1.6.1 Write xor Execute, Data Execution Prevention (DEP)
4.1.6.2 Stack Smashing Protection (SSP)
4.1.6.3 Address Space Layout Randomization (ASLR)
4.1.6.4 Control-flow Integrity (CFI)
4.1.6.5 Code Pointer Integrity (CPI), Code Pointer Separation (CPS)
4.1.6.6 Pointer Authentication
4.1.7 Isolation
4.1.8 Horizontal Isolation
4.1.8.1 Virtual Memory
4.1.9 OS-Level Virtualization
4.1.10 Vertical Isolation
1 Introduction
This explanatory document provides additional information regarding secure design for the AUTOSAR standards. This document is currently limited to the AUTOSAR Adaptive Platform. Support for the AUTOSAR Classic Platform may be added in a future release of this document.
1.1 Objectives
This document explains security features which can be utilized within the AUTOSAR Adaptive Platform. The motivation is to provide standardized and portable security for Adaptive Applications as well as for the whole AUTOSAR Adaptive Platform.
1.2 Scope
This document shall be explanatory and help the security engineer to identify security-related topics within Adaptive Applications and the AUTOSAR Adaptive Platform.
The content of this document will address the following topics:
• Protection against memory corruption attacks.
• Isolation of software components between each other.
• Isolation of the operating system from software components.
• Existing security solutions.
3 Related Documentation
[20] Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors
[21] Drammer: Deterministic Rowhammer Attacks on Mobile Platforms
[22] ANVIL: Software-Based Protection Against Next-Generation Rowhammer Attacks
[23] A seccomp overview
https://lwn.net/Articles/656307/
[24] Frequently Asked Questions for FreeBSD 10.X and 11.X
https://www.freebsd.org/doc/en/books/faq/security.html
[25] pledge(2)
https://man.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man2/pledge.2
4 Security Overview
4.1 Protected Runtime Environment
4.1.1 Introduction
4.1.3 Overview
The exploitation of vulnerabilities and their mitigation is a complex topic. Computer security researchers continuously develop new attacks and corresponding defenses. A general model for memory corruption attacks and the corresponding protection techniques is described in [2]. The model (cf. Figure 4.1) summarizes the general causes of vulnerabilities, the ways to exploit them according to the targeted impact, as well as mitigation policies on the individual attack stages for four types of attacks: Code corruption attack, Control-flow hijack attack, Data-only attack, and Information leak. On each attack stage the authors define several policies that must hold to prevent a successful attack.
The first two stages are common to all attack types and describe the root cause of vulnerabilities. In the first stage a memory corruption manipulates a pointer. When this invalid pointer is then dereferenced, a corruption is triggered. A pointer is invalid if it is an out-of-bounds pointer, i.e. pointing out of the bounds of a previously allocated memory region.
Figure 4.1: Attack model from [2] demonstrating the four attack types and the policies mitigating the attacks in the different attack stages
A first measure to counter vulnerabilities at their root is to avoid mistakes and errors in the first place. To reach this goal programmers have to take care of many pitfalls during the development process. A simple example is the usage of unsafe functions from the standard C library like strcpy(). It copies a null-terminated character string to a buffer until the null character is reached. If the allocated destination buffer is not large enough, the function still copies characters past the end of the buffer and thus overwrites other data. This is one of many pitfalls commonly known as a buffer overflow (cf. http://cwe.mitre.org/data/definitions/659.html) and can be used by an attacker, for example, to overwrite the stored return pointer if the buffer is allocated on the stack. For the given example a programmer should use safer variants instead. To that end, many standard C library functions have been supplemented with versions including a bounds check; for strcpy() this is strncpy(). With this the length of the input is limited and the buffer, if it is allocated properly, does not get overflowed. While such supposedly safer functions exist, strncpy() among them, they come with their own quirks and flaws. But also more complex, context-related issues must be considered.
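As an illustration, the following minimal C++ sketch (buffer sizes and the overlong input are chosen arbitrarily for demonstration) contrasts the unsafe call with a bounded copy, including the quirk that strncpy() does not null-terminate the destination if the input fills it completely:

#include <cstdio>
#include <cstring>

void unsafe_copy(const char* input) {
    char buf[8];
    // If input holds more than 7 characters plus the terminator,
    // strcpy() writes past the end of buf and corrupts adjacent
    // stack memory, e.g. the saved return address.
    strcpy(buf, input);
    printf("%s\n", buf);
}

void bounded_copy(const char* input) {
    char buf[8];
    // strncpy() writes at most the given number of bytes, but it
    // does not null-terminate buf if the input is too long, so the
    // terminator must be set explicitly.
    strncpy(buf, input, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    printf("%s\n", buf);
}

int main() {
    bounded_copy("this input is far too long for the buffer");
    return 0;
}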
To avoid memory leaks, dangling pointers, or multiple deallocations of memory, usage of the "resource acquisition is initialization" (RAII) idiom is strongly encouraged. The goal is to make sure every previously allocated resource will be deallocated properly. Applying RAII to C++ means to encapsulate the allocation and the corresponding deallocation of a resource into a dedicated class. While the resource is allocated in the constructor, it is deallocated by the destructor of the same object. This ensures proper resource deallocation at the end of the object's lifetime. However, it must be made sure that the encapsulating object's lifetime is limited to the time the resource in question is in use. Usually, this is achieved by creating block-scope objects placed on the stack. While RAII can also be used for memory allocations on the heap, it is not limited to this use case. RAII also helps to prevent deadlocks when dealing with mutexes, where the whole logic protected by the mutex is executed in the context of the object acquiring and releasing the lock. It is also noteworthy that RAII does not necessarily require an encapsulating class, but can also be implemented by means of compiler extensions, where a stack object, e.g. a local variable or structure, is associated with a corresponding cleanup function. Such extensions are available for LLVM and other compilers like GCC supporting GNU syntax. This makes it possible to apply RAII to plain C.
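A minimal sketch of the idiom, using a hypothetical FileHandle wrapper around the C file API (the class name and error handling are illustrative; the standard library applies the same pattern in classes such as std::lock_guard):

#include <cstdio>
#include <stdexcept>

// RAII wrapper: the constructor acquires the resource, the
// destructor releases it.
class FileHandle {
public:
    explicit FileHandle(const char* path) : file_(fopen(path, "r")) {
        if (file_ == nullptr) {
            throw std::runtime_error("cannot open file");
        }
    }
    // Non-copyable, so the resource has exactly one owner.
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;
    ~FileHandle() { fclose(file_); }  // always runs at end of scope

    FILE* get() const { return file_; }

private:
    FILE* file_;
};

void process(const char* path) {
    FileHandle f(path);  // block-scope object on the stack
    // ... use f.get() ...
}   // f goes out of scope here; the file is closed even if an
    // exception is thrown inside process()

For mutexes, std::lock_guard from <mutex> provides the same guarantee for releasing the lock.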
In practice programmers should have coding guidelines at hand, such as the MISRA guidelines for safety-related systems. Unfortunately the C++ Core Guidelines do not cover a rule set for security-related issues [3]. But there are third-party guidelines which deal with these issues, like the SEI CERT C++ Coding Standard (https://www.securecoding.cert.org/confluence/pages/viewpage.action?pageId=637).
Since these guides are very comprehensive, only few programmers will follow them completely in practice due to time constraints. However, there are some tools that can check a program against some of the rules by static code analysis, e.g. Flawfinder (https://www.dwheeler.com/flawfinder/), RATS (https://security.web.cern.ch/security/recommendations/en/codetools/rats.shtml), or CodeSonar (https://www.grammatech.com/products/codesonar). In any case the application of tools does not guarantee vulnerability-free code, as runtime conditions (or execution contexts) are out of scope for such tools. Similarly, identified vulnerabilities may not be fixed sufficiently with respect to runtime behavior.
For a second line of defense it is assumed that vulnerabilities are present and that some will always be inserted by developers. Therefore additional runtime protection mechanisms are required; the attacks they address and the corresponding countermeasures are described below.
4.1.5 Attacks and Countermeasures
4.1.5.1 Code Corruption Attack
A Code Corruption Attack intends to manipulate the executable instructions in the text segment of the virtual memory space and thereby to breach the Code Integrity policy (cf. Figure 4.1).
The countermeasure is to set the memory pages containing code to read-only, following the W ⊕ X (write xor execute) principle. It has to be implemented on both system levels, i.e. the processor as well as the operating system. The MMU of the CPU has to provide a fine-grained memory permission layout (e.g. NX-bit [4, p.248]), or the operating system has to emulate this. Further, the operating system has to support the underlying permission layout, e.g. W ⊕ X [5] or Data Execution Prevention (DEP). Care has to be taken if self-modifying code or Just-In-Time (JIT) compilation is used, as the generated code must first be written to writable pages, which are then set to be executable.
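To illustrate the last point, the following POSIX sketch (the function name is illustrative; mmap() and mprotect() are standard POSIX calls) shows how a JIT engine can stay compliant with W ⊕ X: the code buffer is first mapped writable but not executable, filled, and only then remapped executable and no longer writable:

#include <cstddef>
#include <cstring>
#include <sys/mman.h>

void* emit_code(const unsigned char* code, size_t len) {
    // 1. Map the region writable, but deliberately NOT executable.
    void* buf = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return nullptr;

    // 2. Write the generated instructions while the pages are writable.
    memcpy(buf, code, len);

    // 3. Flip the pages to read+execute; they are no longer writable,
    //    so data written later can never be executed and the emitted
    //    code can no longer be overwritten.
    if (mprotect(buf, len, PROT_READ | PROT_EXEC) != 0) {
        munmap(buf, len);
        return nullptr;
    }
    return buf;
}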
4.1.5.2 Control-flow Hijack Attack
A Control-flow Hijack Attack starts with the exploitation of a memory corruption to modify a code pointer so that it points to an attacker-defined address, harming the Code Pointer Integrity policy (cf. Figure 4.1). This pointer is then used by an indirect control flow transfer in the original code. Thereby the control-flow is diverted from the original one and its Control-flow Integrity is violated. The last step is the execution of the exploit payload. The literature distinguishes between two approaches: code-injection attacks and code-reuse attacks. While code-injection attacks [6] are based on injecting arbitrary and custom instructions (a.k.a. shellcode) into the memory as exploit payload, code-reuse attacks, such as Return Oriented Programming (ROP) [7], Jump Oriented Programming (JOP) [8], and return-to-libc [9], utilize existing code in the process memory to construct so-called gadgets, which enable the targeted malicious functionality.
All in all, a control-flow hijack attack will be successful if the integrity of a code pointer and of the control-flow are broken. Further, the target address of the malicious functionality must be known, and in the case of code-injection the memory pages holding the injected code must be executable.
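A minimal sketch of the root cause (the struct layout and names are illustrative): a code pointer stored in data memory is corrupted by a linear overflow and later used in an indirect call:

#include <cstring>

struct Connection {
    char name[16];
    void (*on_close)(Connection*);  // code pointer stored in data memory
};

// Vulnerable: an overlong name overflows 'name' and reaches the
// adjacent code pointer 'on_close'.
void set_name(Connection* c, const char* name) {
    strcpy(c->name, name);  // no bounds check
}

void close_connection(Connection* c) {
    // Indirect control-flow transfer through the possibly corrupted
    // code pointer: if an attacker overwrote it via set_name(),
    // execution is diverted to an address of their choosing.
    c->on_close(c);
}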
The Code Pointer Integrity policy is satisfied if sensitive pointers, i.e. code pointers and pointers used to access them, cannot be modified (cf. [10]). There are a few recent feasible approaches which detect the alteration of code pointers and references to code pointers at this early stage. The Code Pointer Integrity (CPI) [10] mechanism provides full memory safety, but only for code pointers and references to them. According to the authors, this approach protects against all control-flow hijack attacks. CPI is a combination of static code analysis to identify all sensitive pointers, rewriting of the program to place the identified pointers in a safe memory region (called the safe stack in the case of the stack), and instruction-level isolation that controls the access to the safe region. These mechanisms require both compiler and runtime support and come with a runtime overhead of ca. 8%. An additional requirement is Code Integrity.
Code Pointer Separation (CPS) [10] is a relaxation of CPI. Among others, CPS limits the set of protected pointers to code pointers (no indirections) to lower the overhead to ca. 2%. It still provides strong detection guarantees.
A further approach to detect modifications of code pointers is Pointer Authentication [11]. This approach uses cryptographic functions to authenticate and verify pointers before dereferencing them. Pointers are replaced by a generated signature and cannot be dereferenced directly. This requires compiler and instruction set support. To reduce the overhead, hardware acceleration of the cryptographic primitives is required.
With Stack Smashing Protection (SSP), stack-based buffer overflows which overwrite the saved return address can be detected. To this end, a pseudo-random value, also known as a Stack Canary or Stack Cookie, is placed on the stack in front of the saved base pointer and the saved return address as part of the function prologue. When the function returns and the function epilogue is executed, the value is compared to a protected copy. If the value has been overwritten by a buffer overflow targeting the return address, the values do not match anymore and the program aborts.
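Conceptually, the compiler-inserted instrumentation (enabled for example via the -fstack-protector family of options in GCC and Clang) behaves like the following hand-written sketch; this is a simplification, as a real implementation controls the exact stack layout and keeps the reference value in protected storage such as thread-local data:

#include <cstdlib>
#include <cstring>

// Reference canary, set to a pseudo-random value at program start.
extern const unsigned long stack_canary_reference;

void vulnerable(const char* input) {
    unsigned long canary = stack_canary_reference;  // prologue: place canary
    char buf[16];

    // A linear overflow of buf has to cross the canary before it
    // can reach the saved base pointer and return address.
    strcpy(buf, input);

    if (canary != stack_canary_reference) {  // epilogue: verify canary
        abort();  // corruption detected, terminate before returning
    }
}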
As mentioned before, code-injection attacks require executable memory pages for the injected instructions. With the principle of W ⊕ X, also called Data Execution Prevention (DEP), a memory page is either flagged as writable or as executable, but never both. This prevents an attacker from writing instructions into data memory such as the stack or the heap and executing them afterwards. The approach requires fine-grained page permission support, either by the MMU of the CPU and the so-called NX-bit (No-execute bit) or emulated in software as described in Section 4.1.5.1. Moreover, the employed operating system must support it.
Code-reuse attacks are not affected by the W ⊕ X mechanism, since existing memory regions marked as executable are utilized and no additional code must be injected into the memory. To mitigate this kind of attack, currently deployed countermeasures are implemented on earlier attack stages. For the fourth attack stage it is stated that the target address of the malicious functionality must be known. In general, the attacker knows or can simply estimate an address in the virtual address space, since it is static for a binary after compilation. A countermeasure in practice is the obfuscation of the address space layout by Address Space Layout Randomization (ASLR) [12]. To this end, the locations of the various memory segments are randomized, which makes it harder to predict the correct target addresses. ASLR requires high entropy to prevent brute-force de-randomization attacks and depends on the prevention of unintended Information Leaks (Section 4.1.5.4), which are used by dynamically constructed exploit payloads. To guarantee high entropy, ASLR should be implemented on 64-bit architectures (or above). Additionally, every memory area must be randomized, including stack, heap, main code segment, and libraries.
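The effect can be observed with a small sketch like the following: built as a position-independent executable (e.g. with -fPIE and -pie) and run on a system with ASLR enabled, it prints different addresses on every run:

#include <cstdio>
#include <memory>

int global_data = 42;
static void probe() {}

int main() {
    auto heap_obj = std::make_unique<int>(0);
    int stack_var = 0;
    // Under ASLR each run of a PIE binary prints different values
    // for all four memory areas.
    printf("code : %p\n", reinterpret_cast<void*>(&probe));
    printf("data : %p\n", static_cast<void*>(&global_data));
    printf("heap : %p\n", static_cast<void*>(heap_obj.get()));
    printf("stack: %p\n", static_cast<void*>(&stack_var));
    return 0;
}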
In addition to ASLR, the Control-flow Integrity policy intends to detect a diversion from the original control-flow. Established techniques are the shadow stack and Control-flow Integrity (CFI) by Abadi et al. [13]. The idea of the shadow stack is to push a copy of the saved return address onto a separate shadow stack, so that it can be compared upon function return. In addition, CFI protects indirect calls and jumps as well. The original CFI approach creates a static control-flow graph by determining valid targets statically and giving them a unique identity (ID). Afterwards, calls and returns are instrumented to compare the target address to the ID before jumping there. The valid targets and their IDs must be protected from being overwritten by means of W ⊕ X.
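The following hand-written sketch (the target set and names are illustrative) mimics what an instrumented indirect call does; in practice the instrumentation is emitted by the compiler, e.g. by Clang's -fsanitize=cfi in combination with link-time optimization:

#include <cstdlib>

using Handler = void (*)(int);

void on_open(int) { /* ... */ }
void on_close(int) { /* ... */ }

// Statically determined set of valid targets for this call site; real
// CFI derives it from the control-flow graph and embeds IDs at the
// targets, protected from overwrites by W xor X.
static const Handler kValidTargets[] = { &on_open, &on_close };

void dispatch(Handler h, int arg) {
    // Instrumented indirect call: transfer control only if the target
    // is one of the statically known valid targets.
    for (Handler valid : kValidTargets) {
        if (h == valid) {
            h(arg);  // target verified, perform the indirect call
            return;
        }
    }
    abort();  // control-flow hijack attempt detected
}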
4.1.5.3 Data-only Attack
A memory corruption can also be exploited to modify security-critical data that is not related to control-flow data. For instance, exploiting a buffer overflow to alter the variable of a conditional construct can lead to unintended program behavior. Thereby the Data Integrity policy is violated. Techniques such as Data Space Randomization and Write Integrity Testing (WIT) make it harder to perform such attacks, but they are not yet established in practice.
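A minimal sketch of such a flaw (the struct layout and names are illustrative): a bounds-unsafe write flips an adjacent authorization flag without touching any code pointer, so the control flow itself remains unchanged:

#include <cstring>

struct Session {
    char user[16];
    bool is_admin;  // security-critical, non-control-flow data
};

void set_user(Session* s, const char* name) {
    // An overlong name overflows 'user' and overwrites 'is_admin'
    // with a non-zero byte: the attacker gains elevated rights
    // although no code pointer was modified and every branch taken
    // exists in the original program.
    strcpy(s->user, name);
}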
4.1.5.4 Information Leak
Memory corruption attacks are also used to leak memory contents. Thereby probabilistic countermeasures like ASLR can be circumvented, since the randomly generated data becomes known to the attacker. As for the data-only attack, Data Space Randomization might help to mitigate information leakage.
4.1.7 Isolation
Isolation mechanisms can be divided into horizontal isolation between applications and vertical isolation between applications and the OS; both can be combined accordingly.
4.1.8 Horizontal Isolation
4.1.8.1 Virtual Memory
The oldest and most prevalent isolation mechanism is the concept of virtual memory, i.e. presenting each running software component of a system with its own virtual address space, which is mapped to the available physical memory by the operating system. Its origins date back to the 1950s, with the original intention of hiding the fragmentation of physical memory, but it also offers the possibility of isolating software components. As each component operates in its own virtual address space, it cannot access the memory of other components (unless explicitly allowed by the operating system). This feature requires hardware support, e.g. by a Memory Management Unit (MMU), but this support is nearly ubiquitous in computer systems, except for very small microcontrollers.
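The isolation can be observed with a small POSIX sketch: after fork(), parent and child print the same virtual address for a variable but see different values once the child writes to it, because the identical virtual address is mapped to different physical pages in the two address spaces:

#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int value = 1;

int main() {
    pid_t pid = fork();
    if (pid == 0) {          // child: operates on its own address space
        value = 2;           // modifies only the child's copy
        printf("child : addr=%p value=%d\n",
               static_cast<void*>(&value), value);
        return 0;
    }
    waitpid(pid, nullptr, 0);
    // Same virtual address, but still the original value: the child's
    // write never reached the parent's physical memory.
    printf("parent: addr=%p value=%d\n",
           static_cast<void*>(&value), value);
    return 0;
}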
An extension of this concept is that of virtual machines (or virtualization), whereby multiple virtual machines (VMs) are emulated by a hypervisor. Each VM may run a complete operating system, depending on the degree of virtualization even in an unmodified state. The hypervisor controls the access of each VM to the physical hardware components and may even emulate certain components such as network interfaces. Similar to virtual memory, virtualization requires dedicated hardware support to achieve an appropriate level of performance and security.
Note that both approaches have limitations. The isolation provided by virtual memory or virtualization is only as strong as the operating system or hypervisor itself, as a malicious application might take control over the OS or hypervisor through a programming error (the measures described in Section 4.1.10 intend to minimize this attack surface). Similarly, a malicious application might use its access to other hardware components to circumvent the isolation, as some hardware components may have unrestricted access to the system's memory. "IOMMUs", a technique which presents hardware components with a virtual address space, can be used to counter this. Lastly, an attacker might use the volatile properties of the physical memory itself to circumvent the isolation. For example, [20] describes the "rowhammer" attack, which is capable of flipping bits in memory locations usually inaccessible to an application. The attack uses prolonged reads of a memory location performed in quick succession, which in the case of "DRAM" memory can cause neighbouring memory cells to change state. This effect has been shown to be capable of raising the privileges of an application on Linux and Android systems (cf. [21]). System designers must consider using one or more mitigations against such attacks, for example memory with error correction, or software mitigations as shown in [22].
4.1.10 Vertical Isolation
Isolating the operating system from applications, often called sandboxing, is another important aspect of a protected runtime environment. The basic idea is to limit the capabilities of a process, i.e. to restrict what a process can do (not to be confused with what a process does, as the behaviour of a process is changed by an attack). The classic way to
do this is by dropping privileges as soon as they are not needed anymore, i.e. privilege revocation. For example, the ping command requires root privileges on UNIX systems to create a raw network socket, but drops its privileges to those of a regular user after creating it. Ideally, a software component should drop (or never be given) any privileges it does not require, or drop them as soon as they are no longer required. As in the example of ping, such a software component must then be structured with a setup phase in which all advanced privileges are used and subsequently dropped. The following table shows a few examples of operating system functionalities which allow an application to drop capabilities or privileges.
OS Family | Solution | Description
Linux | Seccomp Mode 1 (Strict) [23] | Processes on the Linux operating system can limit their set of allowed system calls in a very easy manner. The allowed subset is extremely strict, limiting the process to read, write, exit and sigreturn. Once activated, this Seccomp mode cannot be deactivated again (a minimal sketch follows this table).
Linux | Seccomp Mode 2 (Filter) [23] | A more recent version of Seccomp, the seccomp filter mode, allows much more fine-grained control. The concept is to allow a process to attach a small filter program, which will check each system call. This filter program must be a "Berkeley Packet Filter" (BPF) program, a very restricted form of byte-code, which is executed by the kernel before each system call. The filter can then examine each system call, for example check which system call is made or how its parameters are set. The filter then returns a decision as to what the kernel should do. The system call can be allowed or blocked, and if the call is blocked, several choices can be made as to how it is blocked: the filter may decide to immediately kill the process, to simply let the system call return an error, to send a signal to the offending process, or to notify a tracer attached to the program (such as a debugger). A notable property of these filters is that they are inherited by spawned child processes, which enables the setup of a filter before a potentially dangerous process is started.
FreeBSD | Securelevel [24, chapter 13] | This functionality limits the possibilities of all processes running on a system in incremental levels, which cannot be lowered once entered (until a reboot of the system).
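As the sketch referenced above, the following Linux-specific example enters Seccomp Mode 1; it assumes a kernel built with seccomp support. After the prctl() call, any system call other than read, write, exit and sigreturn terminates the process:

#include <cstdio>
#include <linux/seccomp.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main() {
    // All privileged setup work must happen before this point.
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }
    // From here on only read, write, exit and sigreturn are allowed,
    // and the restriction cannot be lifted for the process lifetime.
    const char msg[] = "sandboxed\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    // Terminate via the raw exit(2) system call; glibc's _exit()
    // would issue exit_group(2), which strict mode does not allow.
    syscall(SYS_exit, 0);
}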