Security threats are outlined below; some of the information is derived
from Rescorla's presentation[2] on network security and the Internet.
- Basic Considerations
- Many programs are exploitable due to programming errors called "bugs"
- Buffer overflows
- Stack smashes
- Heap smashes
- Race conditions
- Tempfile races
- Non-tempfile races
- Periods of weakness (such as before a security policy is applied
but after a service is running)
- Format string bugs
- Potential information leak to help evade mitigation methods
- Potential attack method which may be able to evade solutions
intended to stop other attacks based on other bugs
- Integer overflows
- Cross site scripting (XSS)
- Bugs trigger various exploit techniques
- Code injection and shellcode
- Return-to-Libc
- Basic information leaking
- Basic privilege escalation
- Network Considerations
- All RFCs must have security considerations since RFC 1543[1]
- These sections are often thin, sometimes recommending little more
than running the protocol over a VPN such as IPsec
- Many insecure protocols exist
- Internet cannot be trusted
- Attackers can read your traffic
- Attackers can forge your traffic
- Attackers have reasonable computational ability, and will put it to
work against your network
- End systems must be assumed to be secure after any reasonable
checks
- Many types of attacks exist
- Blind active attacks
- Passive attacks
- Password sniffing
- Confidential information sniffing (credit card numbers, SSNs, etc)
- Data gathering (for example, of ciphertext to use in breaking crypto
keys)
- Active attacks
- Password guessing (brute force or dictionary)
- Brute force (of ASLR, SSP, etc)
- Basic countermeasures are important
- Confidentiality of non-public data from non-authorized access
- Authentication/integrity of data to prevent secure systems from
consuming untrusted or malicious data[3][4][5]
- Authorization of users accessing the system, such as for shell
access, FTP, SMTP, HTTPS, etc.
- Object and channel security
- Public key encryption and signing using asymmetric algorithms such
as RSA[6] and DSA/ElGamal[7][8]
- Private keys are per-entity; a private key is created for an
entity (a session, node, human, or organization), and no other
entity should ever share that private key
- Object security guarantees the integrity of objects
- Signing objects such as e-mail or documents prevents undetected
alteration: the signature is derived from a private piece of data,
and thus cannot be reproduced after an alteration
- Objects can be encrypted with public key (asymmetric) encryption
to assure confidentiality when transferred over insecure channels
- Application level integration is often necessary
- Examples include S/MIME, PGP, and SSL
- Channel security guarantees the integrity of a channel
- Channels cannot be signed; however, signed pieces of random data
can be exchanged within a secure channel at the beginning to
facilitate authentication and effectively act as a signature for the
session
- Peers establish an encrypted channel, e.g. by exchanging randomly
generated public keys (SSL) or utilizing previously established
public keys (VPN)
- A channel is only secure while it is up; store-and-forward e-mail,
for example, cannot use channel security
- Any data can be exchanged over an encrypted channel
- Examples include SSL, TLS, and IPsec (VPNs)
- Policy may restrict access to certain materials, such as racist,
violent, or pornographic content; but content control is always imperfect
- Host Security
- Squashing bugs on the host
- Routinely security-audit source code
- Stack smashes
- Mitigatable with ProPolice[9]
- Overhead
- 8% theoretical maximum
- In practice, overhead is minimal, usually less than 1%
- Overhead cannot be definitively quantified, as it varies with
program flow
- Maintenance overhead: programs must be rebuilt with ProPolice
enabled in gcc
- Protects
- Local variables
- Passed arguments
- Stack frame pointer
- Return pointer
- Fails to protect
- Data in poorly ordered structures
- Buffer clobbering from other buffers
- Data in other stackframes
- Mitigatable with LibSafe[10]
- Overhead
- Each call to a protected function triggers a complex check;
therefore, overall overhead cannot be directly quantified
- Experiments indicate that the performance overhead is
negligible
- Maintenance: programs can be linked with libsafe via LD_PRELOAD,
or libsafe can be merged with glibc
- Protects
- Protections are limited to standard library functions:
strcpy(), strcat(), getwd(), gets(), realpath(), [vf]scanf(),
[v]sprintf()
- All data outside the stack frame in which the buffer
resides
- Although the stack frame pointer and return pointer are not
protected, overflows cannot supply enough data to load shellcode
or alternative stack frames for ret2libc
- Fails to protect
- Data inside the same stack frame as the buffer and above the
buffer
- Although injected stack frames and shellcode cannot fit on the
stack, carefully crafted exploits involving ill processing of
legitimately loaded data[3][4][5] may still succeed
- Temporary file races
- GrSecurity[11] can mitigate tempfile races with "Linking
Restrictions"
- Overhead
- Overhead is negligible: simple extra checks done at symlink
following and hardlink creation
- Maintenance overhead: enable a single option in the kernel;
no userspace changes needed
- Protects
- Symlinks are considered dangerous if they are in world-writable
sticky (+t) directories, the owner of the link is not the owner of
the directory, and the owner of the link is not the owner of the
process trying to follow it
- All users, including root, are forbidden from following symlinks
in situations where the link may have been created
illegitimately
- Users may not hardlink to files they do not own
- Fails to protect
- If the directory is world-writable without the sticky bit (-t),
the restrictions do not apply, and races may still happen
- This protection scheme works based on privilege separation by
user; MAC systems which implement multiple root users in different
security contexts may still have problems under ill-designed
systems and policy
- mkstemp() and mkdtemp()
- Supplied functions mandated by POSIX 1003.1
- Create files and directories with secure permissions in a way
which prevents race conditions
- Overhead
- Overhead is negligible: these functions are packaged code to
perform all necessary checks needed to securely create temporary
files
- Maintenance overhead: programs must be written to use these
functions for temporary files; therefore, periodic audits are
needed to find other methods of creating temporary files, which
must be replaced as a security precaution
- Protects
- Using these functions prevents race conditions and information
leaks involving the temporary files and directories they
create
- Fails to protect
- Programs not written to use these functions will not benefit
from the security and simplicity they offer in implementing
temporary files and may contain bugs which allow race conditions
or information leaks
- Preventing exploits on the host
- Code injection
- PaX Executable Space Protections
- Overhead
- Negligible: uses hardware NX bit
- <1% on x86 with emulated NX bit: 0.7% measured for
SEGMEXEC, almost 0 in most cases for PAGEEXEC
- Maintenance overhead: Minimal; some apps must be marked with
administrative tools to remove protections
- Maintenance overhead: Broken application code can usually be
rewritten later so that protections may be reapplied
- Protects
- PaX protects against all forms of code injection directly into
memory by creating a separation between memory which is created
writable and memory which is created executable
- Fails to protect
- Executable space protections alone won't protect against
ret2libc attacks used for indirect code injection
- Programs may still mprotect() memory so that it becomes
writable and executable, or executable after shellcode is written
to it
- Some programs and libraries trigger the protections and need
them disabled; programs using affected libraries must be marked
and lose protections globally
- PaX mprotect() restrictions
- Overhead
- Negligible: single added decision
- Protects
- Enhances executable space protections by increasing the
separation to a separation between memory which may have been
writable and memory which may be executed
- Allows for administrative control over memory policy so that
the administrator has a definite idea of which programs may need
to be more frequently audited, or which may need to be fixed so
that they work with executable space protections
- Fails to protect
- Some programs want or need to generate code during runtime in
memory. Many JIT and Mono programs generate code in memory
needlessly, as they can use proper temporary files mmap()ed into
memory for a one-time cost per run; but realtime machine emulators
such as Qemu and VMWare need to generate code in memory by design,
else they suffer from continuous heavy performance loss. These
programs must have mprotect() restrictions disabled
- It is still possible to perform a ret2libc attack that uses
open(), write(), and mmap() to write executable code to a
file and map it into memory, the same way JIT programs can
generate code under full restrictions
- PaX trampoline emulation
- Trampolines are small bits of code that get executed on the
stack
- Trampolines are also referred to as "nested functions"
- Trampolines require an executable stack
- Overhead
- Negligible: added codepath in trapping execution on the stack
to test for a trampoline and allow it if it's setting the
protection off
- Protects
- Enhances executable space protections by allowing a simple and
semi-common form of runtime code generation to occur without
requiring reduction of security
- Fails to protect
- Does not allow generic runtime code generation, hence there are
still tasks which will require reduced security to run
- Address Space Layout Randomization
- PaX supplies high-quality randomization
- Anonymous mappings: 16 bits (32b), 26 bits (64b)
- Heap (ET_EXEC): 14 bits
- Heap (ET_DYN): 24 bits (32b), 32 bits (64b)
- Main executable (ET_EXEC): 16 bits (32b), 25 bits (64b)
- Main executable (ET_DYN): 16 bits (32b), 25 bits (64b)
- Shared library/mmap(): 16 bits (32b), 25 bits (64b)
- Stack randomization: 24 bits (32b), 32 bits (64b)
- Overhead
- Negligible: random number generation added to the routines
used to choose the base of various segments of memory in
virtual memory space
- Protects
- A random heap and stack base prevents injected code and data
from being easily located if the randomization is significant
- Random mmap() bases prevent the addresses of libraries and of
ET_DYN position independent executables from being known,
preventing ret2libc attacks
- Random mmap() bases also allow large malloc() calls to be
randomized
- Fails to protect
- Daemons which fork() or continuously respawn can be brute
forced within a given period to probabilistically defeat ASLR; however,
brute force deterrence such as that supplied by GrSecurity extends
this period significantly, for example from 216 seconds to roughly
3 weeks
- Process brute force deterrance
- GrSecurity provides process brute force deterrence
- Prevents the brute forcing of ASLR
- Prevents the brute forcing of canary-based Stack Smash
Protection such as ProPolice
- Only for fork()ing daemons which retain the same address
space
- Overhead
- Negligible: single added check and codepath
- Heavy when triggered: fork() calls for the highest parent of
the same binary are queued and one is executed every 30 seconds
until the administrator restarts the daemon
- Protects
- Daemons which fork() children to handle connections have the
period in which ASLR can be broken extended dramatically (from
roughly 216 seconds to about 3 weeks; much longer on 64b archs)
- Fails to protect
- Each attack has a probability of guessing the relevant
addresses; entropy is effectively lost with each failed attempt,
so administrators must still respond quickly to detected attacks
- Return-to-libc
- Address Space Layout Randomization
- Same ASLR supplied by PaX to aid in preventing code injection
attacks
- Overhead: same as for ASLR above (negligible)
- Protects
- Stack address (code injection)
- mmap() base (libraries, et_dyn executables, anonymous
mappings, and large malloc()s)
- Heap base (malloc())
- Fails to protect
- Daemons which fork() or continuously respawn can be brute
forced; see ASLR and brute force deterrance above
- Basic information leaking
- Information leaking can reveal important and useful information,
such as the address space layout
- This can invalidate other protections, such as ASLR
- Task obscurity
- Prevent users from viewing other users' processes
- Supplied by GrSecurity as /proc filesystem restrictions
- Overhead
- Negligible: simple tests added to existing codepaths
- Protects
- Prevents address space information leaks, such as from
/proc/[pid]/maps
- /proc/[pid]/maps can be obscured directly
- Enhances ASLR and task randomization by protecting information
that continuously changes even if discovered in another
session
- Fails to protect
- Tasks under the same user in different security contexts may
be visible across security contexts, unless the MAC system
enforcing those contexts also enforces task
obscurity
- Process IDs can be fairly predictable with no applied
randomization, which can provide some information to attack tasks
if certain APIs aren't properly restricted
- Task randomization
- Randomly assign Process IDs to tasks
- Overhead
- Negligible: random number generation
- Protects
- Prevents users from guessing the PIDs of certain interesting
tasks, especially daemons
- Fails to protect
- Only useful in the presence of task obscurity and thus requires
the same considerations with regard to security contexts
- OS obscurity through randomization
- Network countermeasure to complicate uncoordinated attacks on the
host from uninformed sources
- Randomly assign various network data such as TCP source ports and
IP sequence numbers
- Overhead
- Negligible: random number generation
- Protects
- Prevents network tools from performing OS fingerprinting, which
an attacker can use to select the proper exploits to attack
protections such as ASLR via brute force
- Fails to protect
- Only useful if the random number generator can't be
fingerprinted
- Only useful if many operating systems obscure themselves
- Attackers with prior knowledge of the system won't be thrown by
this obscurity, as the information being obscured is a single
unchanging variable rather than a continuously randomized
variable
- Host Insecurity
- Wild bugs still on the host
- Heap smashes
- Non-tempfile filesystem races
- Program races
- Format string bugs
- Integer overflows
- XSS
- Working exploits still on the host
- Information leaks: only certain basic leaks are plugged
- Basic privilege escalation: kernel flaws can still cause this
- It is not likely that all classes of bugs or all classes of exploits
can be prevented
- Privilege escalation can likely only be stopped by source code
auditing, as it is often caused by kernel bugs
- Privilege escalation can be contained by the fine-grained security
provided by mandatory access control systems; however, this is not
appropriate in all environments and still affords some added privileges
to an attacker
- There are some considerations for the future that this threat model
must address:
- This threat model does not yet incorporate an explanation of fork()
bombs and fork() bomb defusers
- This threat model has not addressed viruses and code signing[12]
- Advanced isolation should be studied