Security

Phasic Linux will offer the best available generic security. Security will be based on analysis of known exploitation methods and of the potential entry points of threatening data, in order to design a security net that prevents intruders from getting malicious data inside the network, whether that network is a single machine connected to a cable modem or a multi-hundred-thousand-node internal LAN.

Threat Model

A threat model is needed to deploy security solutions effectively. Threat models let you visualize and understand the environment and pinpoint the problems it creates. From there, you can decide where to focus and how to solve those problems.

Below is a simple diagram of a potential threat model. Threat models are not always diagrams, but a diagram aids understanding of what threats exist and where they come from. A complete threat model would normally produce a very large diagram, much bigger than the reader's likely screen resolution; a diagram that also included the solutions would be enormous.

Phasic Threat Model

Even without a threat model, basic security issues such as buffer overflows and code injection can be examined easily. These issues are obvious and easily fixed. Without a real threat model, however, the chosen solutions may not cover every method of triggering such exploits, leaving the protection vulnerable, or may simply leave other, previously unconsidered methods open to an attacker.

By examining the situation deeply and designing a threat model, rather than opportunistically protecting well-known attack vectors, a constant picture of potential threats is maintained, allowing people to work continuously toward protection against current threats and the discovery of new threats not previously accounted for.

Attack Points

The Phasic Linux Threat Model defines attack points based on the point nearest the system at which the exploit occurs. This is usually the protocol or delivery method for bad data, or the node from which the attack originates.

Below is a list of the attack points we have identified. All attack points are assumed to potentially trigger all forms of exploits in all application spaces and in kernel space, depending on the nature of the exploit used from the attack point.

  • The Internet

    The Internet is an attack point. Malicious data designed to trigger exploits may arrive in files or corrupted protocol data sent over HTTP (Web browsing), IM protocols, SMTP to outside servers, FTP, streaming media, or any other data pulled down by explicit network requests to nodes outside the internal network. The attack doesn't have to happen immediately; downloaded files may be trojans, or may contain worms or viruses. Most often this leads to local user access.

  • E-Mail

    Worms, viruses, trojans, and malicious content can be delivered through attachments or malformed content in e-mail designed to exploit various programs or the e-mail client itself. E-mail coming from internal SMTP/POP3 servers can still contain malicious content, and steps should be taken to remove potentially dangerous material. Most often this leads to local user access.

  • Server Requests

    Production servers host e-mail, HTTP, FTP, and other services to the internal LAN or to the Internet. In either case, the production server is exposed as a potential attack point. Exploitation here can be serious.

  • Local User

    There is no such thing as a local exploit. Local attacks can come from compromised local accounts. Users may have worms or trojans installed unknowingly, or may be drones for attackers who have manually broken user security and obtained a local shell. Local exploits usually mean root, but can also mean other user accounts or micro-rooting (race conditions).

Bugs

Bugs in software allow a program's flow to be altered to follow a previously unconsidered path, which may potentially lead to an exploit. The impact of bugs can be mitigated by redefining previously undefined execution paths as immediate program termination. In simple terms, bugs need to be detected, and the program needs to be terminated when they are triggered.

Below is a list of the classes of bugs we have identified. All types of bugs are assumed to potentially trigger all forms of exploits in all application spaces and in kernel space, depending on the nature of the exploit the bug enables.

  • Bad Design

    Bad design is the simplest class of bug. It consists of guarantees made by the design of a process or system that unauthenticated users can execute a custom program flow before an authenticated user has a chance to examine it. Notable examples include the original behavior of Outlook Express with regard to attached script files, and various e-mail clients' handling of JavaScript in HTML mail: these packages automatically executed script upon loading a message, without giving the user a chance to prevent it.

  • Stack Smash

    Stack smashes are buffer overflows in areas allocated on the stack. They create a condition in which the data controlling program execution flow can be freely altered, allowing an attacker to execute code out of order or to bring his own code. In some cases, simple changes to local variables during a function call can produce results desirable to the attacker.

  • Heap Smash

    Heap smashes are buffer overflows on the program heap. Depending on the nature of the affected area, program flow can be altered by modifying other data on the heap. This is especially dangerous when pointers are altered, as it may allow an attacker to modify arbitrary memory addresses, including the stack, allowing program data to be loaded onto the heap and used as part of a subsequent heap or stack smash.

  • Integer Overflows

    Integer overflows occur when integer values wrap around: upon increasing above the maximum value, or decreasing below the minimum value, that a storage class can hold, the value wraps to the opposite end of its range. This is sometimes intentional in certain execution flows; often it is not, and the result is undefined, manipulable behavior.

  • Cross Site Scripting (XSS)

    XSS occurs when scripts from one site are executed on another site, in the context of the second site. This may be client- or server-side script execution, and may lead to information leaks and further arbitrary code execution.

  • Filesystem Races

    Filesystem races come in two flavors: tempfile and non-tempfile. Tempfile races occur on temporary files, while non-tempfile races occur on permanent files. They result from ill-set permissions and poor program flow, which allow what should be the creation of a new file to become the editing or replacement of an existing file, potentially destroying data or writing confidential information to files whose permissions let other users access them.

  • Program Races

    Program races are races in programs that do not involve the filesystem. For example, such a race could involve stopping a program at a particular moment, while it has access to confidential information, in order to expand the window for a filesystem race.

  • Format String Bugs

    Format strings supplied by the user can contain invalid combinations of format specifiers, producing undefined behavior. Such bugs can be used to arbitrarily read and alter memory, evading buffer overflow protections and creating information leaks.

Exploits

Exploits are the end result of bugs or of other exploits.

  • Code Injection

    Code injection, shellcode, execution of arbitrary code: all of these describe a situation in which an attacker has managed to execute code with privileges the system did not grant him. The attacker may be a user exploiting another user, or an external entity such as a cracker or a worm.

  • Return-to-Libc

    Return-to-libc (ret2libc) attacks involve returning into preexisting code, resulting in arbitrary code execution. Among other things, this enables memory protection evasion techniques that allow code injection.

  • Privilege Escalation

    Most if not all attacks aim for privilege escalation. Any successful exploit is indirectly privilege escalation, as the attacker gains privileges he previously lacked by accessing data he was not authorized to access. More generally, however, privilege escalation describes the result of attacks that leave the attacker with a stable process holding access to data he is not authorized for. Normally privilege escalation involves kernel exploits to change users or grant capabilities.

  • Information Leak

    Information leaks reveal confidential information: documents, passwords, or information about other processes that may be useful in attacking them. Leaking an arbitrary chunk of memory may give an attacker a frobnicated version of a password or credit card number, while reading /proc/[pid]/maps for processes owned by other users may reveal addresses useful in an attack.

Threats

The security threats are outlined below; some information is derived from Rescorla's presentation[2] on network security and the Internet.

  • Basic Considerations
    • Many programs are exploitable due to programming errors called "bugs"
      • Buffer overflows
        • Stack smashes
        • Heap smashes
      • Race conditions
        • Tempfile races
        • Non-tempfile races
        • Periods of weakness (such as before a security policy is applied but after a service is running)
      • Format string bugs
        • Potential information leak to help evade mitigation methods
        • Potential attack method which may be able to evade solutions intended to stop other attacks based on other bugs
      • Integer overflows
      • Cross site scripting (XSS)
    • Bugs trigger various exploit techniques
      • Code injection and shellcode
      • Return-to-Libc
      • Basic information leaking
      • Basic privilege escalation
  • Network Considerations
    • All RFCs must have security considerations since RFC 1543[1]
      • These sections are often thin, sometimes recommending running the protocol over VPN such as IPSec
      • Many insecure protocols exist
    • Internet cannot be trusted
      • Attackers can read your traffic
      • Attackers can forge your traffic
      • Attackers have reasonable computational ability, and will put it to work against your network
      • End systems must be assumed to be secure after any reasonable checks
      • Many types of attacks exist
        • Blind active attacks
          • SYN/ping flooding
        • Passive attacks
          • Password sniffing
          • CI sniffing (CCN, SSN, etc)
          • Data gathering (for example, of ciphertext to use to break crypto keys)
        • Active attacks
          • Password guessing (brute force or dictionary)
          • Brute force (of ASLR, SSP, etc)
      • Basic countermeasures are important
        • Confidentiality of non-public data from non-authorized access
        • Authentication/integrity of data to prevent reading untrusted data on secure systems, in case of malicious data[3][4][5]
        • Authorization of users accessing the system, such as for shell access, FTP, SMTP, HTTPS, etc.
      • Object and channel security
        • Public key encryption and signing using asymmetric algorithms such as RSA[6] and DSA/ElGamal[7][8]
          • Private keys are per-entity; a private key is created for an entity (a session, node, human, or organization), and no other entity should ever share that private key
        • Object security guarantees the integrity of objects
          • Signing objects such as e-mail or documents prevents them from being altered, as the signature is based on a private piece of data, and thus cannot be reproduced after an alteration
          • Objects can be encrypted with public key (asymmetric) encryption to assure confidentiality when transferring over insecure channels
          • Application level integration is often necessary
          • Examples include S/MIME, PGP, and SSL
        • Channel security guarantees the integrity of a channel
          • Channels cannot be signed; however, signed pieces of random data can be exchanged within a secure channel at the beginning to facilitate authentication and effectively act as a signature for the session
          • Peers establish an encrypted channel, e.g. by exchanging randomly generated public keys (SSL) or by utilizing previously established public keys (VPN)
          • A channel is only secure as long as it is up; e-mail store-and-forward, for example, cannot use channel security
          • Any data can be exchanged over an encrypted channel
          • Examples include SSL/TLS and IPSec (VPN)
      • Policy may restrict access to certain material, such as racist, violent, or pornographic content; but content control is always imperfect
  • Host Security
    • Squashing bugs on the host
      • Always security audit source code routinely
      • Stack smashes
        • Mitigatable with ProPolice[9]
          • Overhead
            • 8% theoretical maximum
            • In practice, overhead is minimal, usually less than 1%
            • Overhead cannot be definitely quantified as it varies based on program flow
            • Maintenance overhead: programs must be rebuilt with ProPolice enabled in gcc
          • Protects
            • Local variables
            • Passed arguments
            • Stack frame pointer
            • Return pointer
          • Fails to protect
            • Data in poorly ordered structures
            • Buffer clobbering from other buffers
            • Data in other stackframes
        • Mitigatable with LibSafe[10]
          • Overhead
            • Each call to protected functions causes a complex check; therefore, overall overhead cannot be directly quantified
            • Experiments indicate that the performance overhead is negligible
            • Maintenance: Programs can be linked with libsafe using an LD_PRELOAD, or libsafe can be merged with glibc
          • Protects
            • Protections are limited to standard library functions: strcpy(), strcat(), getwd(), gets(), realpath(), [vf]scanf(), [v]sprintf()
            • All data outside the stack frame which the buffer resides in
            • Although the stack frame pointer and return pointer are not protected, overflows cannot supply enough data to load shellcode or alternative stack frames for ret2libc
          • Fails to protect
            • Data inside the same stack frame as the buffer and above the buffer
            • Although alternate stack frames and shellcode cannot be made to fit on the stack, careful exploits involving ill processing of legitimately loaded data[3][4][5] may still succeed
      • Temporary file races
        • GrSecurity[11] can mitigate tempfile races with "Linking Restrictions"
          • Overhead
            • Overhead is negligible: simple extra checks done at symlink following and hardlink creation
            • Maintenance overhead: enable a single option in the kernel; no userspace changes needed
          • Protects
            • Symlinks are considered dangerous if they reside in world-writable sticky (+t) directories, the owner of the link is not the owner of the directory, and the owner of the link is not the owner of the process trying to follow it
            • All users, including root, are forbidden to follow symlinks in situations where the link may have been created illegitimately
            • Users may not hardlink to files they do not own
          • Fails to protect
            • If the directory is world-writable but not sticky (-t), the restrictions do not apply, and races may still happen
            • This protection scheme works based on privilege separation by user; MAC systems which implement multiple root users in different security contexts may still have problems under ill-designed systems and policy
        • mkstemp() and mkdtemp()
          • Supplied functions mandated by POSIX 1003.1
          • Create files and directories with secure permissions in a way which prevents race conditions
          • Overhead
            • Overhead is negligible: these functions are packaged code to perform all necessary checks needed to securely create temporary files
            • Maintenance overhead: programs must be written to use these functions to make temporary files; therefore, auditing must be done periodically to look for other methods of creating temporary files, which must be replaced as a security precaution
          • Protects
            • Using these functions makes it impossible to create a race condition or information leak involving temporary files and directories
          • Fails to protect
            • Programs not written to use these functions will not benefit from the security and simplicity they offer in implementing temporary files and may contain bugs which allow race conditions or information leaks
    • Preventing exploits on the host
      • Code injection
        • PaX Executable Space Protections
          • Overhead
            • Negligible: uses hardware NX bit
            • <1% on x86 with emulated NX bit: 0.7% measured for SEGMEXEC, almost 0 in most cases for PAGEEXEC
            • Maintenance overhead: Minimal; some apps must be marked with administrative tools to remove protections
            • Maintenance overhead: Broken application code can usually be rewritten later so that protections may be reapplied
          • Protects
            • PaX protects against all forms of code injection directly into memory by creating a separation between memory which is created writable and memory which is created executable
          • Fails to protect
            • Executable space protections alone won't protect against ret2libc attacks used for indirect code injection
            • Programs may still mprotect() memory so that it becomes writable and executable, or executable after shellcode is written to it
            • Some programs and libraries trigger the protections and need them disabled; programs using affected libraries must be marked and lose protections globally
        • PaX mprotect() restrictions
          • Overhead
            • Negligible: single added decision
          • Protects
            • Enhances executable space protections by increasing the separation to a separation between memory which may have been writable and memory which may be executed
            • Allows for administrative control over memory policy so that the administrator has a definite idea of which programs may need to be more frequently audited, or which may need to be fixed so that they work with executable space protections
          • Fails to protect
            • Some programs want or need to generate code in memory at runtime. Many JIT and Mono programs generate code in memory needlessly, as they could use proper temporary files mmap()ed into memory at a one-time cost per run; but realtime machine emulators such as QEMU and VMware must generate code in memory by design, or else suffer continuous heavy performance loss. Such programs must have mprotect() restrictions disabled
            • It is still possible to perform a ret2libc attack and use open(), write(), and mmap() to spit executable code out into a file and map it into memory in the same way that JIT programs can generate code under full restrictions
        • PaX trampoline emulation
          • Trampolines are small bits of code that get executed on the stack
          • Trampolines are also referenced as "nested functions"
          • Trampolines require an executable stack
          • Overhead
            • Negligible: an added codepath in the trap handler tests whether execution on the stack is a trampoline, and allows it if that is what triggered the protection
          • Protects
            • Enhances executable space protections by allowing a simple and semi-common form of runtime code generation to occur without requiring reduction of security
          • Fails to protect
            • Does not allow generic runtime code generation, hence there are still tasks which will require reduced security to run
        • Address Space Layout Randomization
          • PaX supplies high-quality randomization
            • Anonymous mappings: 16 bits (32b), 26 bits (64b)
            • Heap (ET_EXEC): 14 bits
            • Heap (ET_DYN): 24 bits (32b), 32 bits (64b)
            • Main executable (ET_EXEC): 16 bits (32b), 25 bits (64b)
            • Main executable (ET_DYN): 16 bits (32b), 25 bits (64b)
            • Shared library/mmap(): 16 bits (32b), 25 bits (64b)
            • Stack randomization: 24 bits (32b), 32 bits (64b)
          • Overhead
            • Negligible: random number generation added to the selection routines used to choose the base of various segments of memory in virtual memory space
          • Protects
            • A random heap and stack base prevents injected code and data from being easily located if the randomization is significant
            • Random mmap() bases prevent the addresses of libraries and of ET_DYN position independent executables from being known, preventing ret2libc attacks
            • Random mmap() bases also allow large malloc() calls to be randomized
          • Fails to protect
            • Daemons which fork() or continuously respawn can be brute forced over a given period to probabilistically defeat ASLR; however, brute force deterrence such as that supplied by GrSecurity extends this period significantly, for example from 216 seconds to likely 3 weeks
        • Process brute force deterrence
          • GrSecurity provides process brute force deterrence
            • Prevents the brute forcing of ASLR
            • Prevents the brute forcing of canary-based Stack Smash Protection such as ProPolice
            • Only for fork()ing daemons which retain the same address space
          • Overhead
            • Negligible: single added check and codepath
            • Heavy when triggered: fork() calls for the highest parent of the same binary are queued and one is executed every 30 seconds until the administrator restarts the daemon
          • Protects
            • Daemons which fork() children to handle connections have the period in which ASLR can be broken extended by approximately 140 times (from 216 seconds to 3 weeks; much longer on 64-bit archs)
          • Fails to protect
            • Each attack has a probability of guessing the relevant addresses correctly; entropy is lost after each attempt, so administrators must still respond quickly to detected attacks
      • Return-to-libc
        • Address Space Layout Randomization
          • Same ASLR supplied by PaX to aid in preventing code injection attacks
          • Overhead
            • Negligible: see above
          • Protects
            • Stack address (code injection)
            • mmap() base (libraries, et_dyn executables, anonymous mappings, and large malloc()s)
            • Heap base (malloc())
          • Fails to protect
            • Daemons which fork() or continuously respawn can be brute forced; see ASLR and brute force deterrence above
      • Basic information leaking
        • Information leaking can reveal important and useful information, such as the address space layout
        • This can invalidate other protections, such as ASLR
        • Task obscurity
          • Prevent users from viewing other users' processes
          • Supplied by GrSecurity as /proc filesystem restrictions
          • Overhead
            • Negligible: simple tests added to existing codepaths
          • Protects
            • Prevents address space information leaks, such as from /proc/[pid]/maps
            • /proc/[pid]/maps can be obscured directly
            • Enhances ASLR and task randomization by protecting information that continuously changes even if discovered in another session
          • Fails to protect
            • Tasks under the same user in different security contexts may be visible across security contexts, unless the MAC system enforcing these security contexts also enforces task obscurity
            • Process IDs can be fairly predictable with no applied randomization, which can provide some information to attack tasks if certain APIs aren't properly restricted
        • Task randomization
          • Randomly assign Process IDs to tasks
          • Overhead
            • Negligible: random number generation
          • Protects
            • Prevents users from guessing the PIDs of certain interesting tasks, especially daemons
          • Fails to protect
            • Only useful in the presence of task obscurity and thus requires the same considerations with regard to security contexts
        • OS obscurity through randomization
          • Network countermeasure to complicate uncoordinated attacks on the host from uninformed sources
          • Randomly assign various network data such as TCP source ports and IP sequence numbers
          • Overhead
            • Negligible: random number generation
          • Protects
            • Prevents network tools from utilizing OS fingerprinting, which can be used by an attacker to select the proper exploits to attack security models such as ASLR via brute force
          • Fails to protect
            • Only useful if the random number generator can't be fingerprinted
            • Only useful if many operating systems obscure themselves
            • Attackers with prior knowledge of the system won't be thrown by this obscurity, as the information being obscured is a single unchanging variable rather than a continuously randomized variable
  • Host Insecurity
    • Wild bugs still on the host
      • Heap smashes
      • Non-tempfile filesystem races
      • Program races
      • Format string bugs
      • Integer overflows
      • XSS
    • Working exploits still on the host
      • Information leaks: only certain basic leaks are plugged
      • Basic privilege escalation: kernel flaws can still cause this
    • It is not likely that all classes of bugs or all classes of exploits can be prevented
    • Privilege escalation can likely only be stopped by source code auditing, as it is often caused by kernel bugs
    • Privilege escalation can be contained by fine-grained security provided by mandatory access control systems; however, this is not appropriate in all environments and still affords some added privileges to an attacker
    • There are some considerations for the future that this threat model must address:
      • This threat model does not yet incorporate an explanation of fork() bombs and fork() bomb defusers
      • This threat model has not addressed viruses and code signing[12]
      • Advanced isolation should be studied

References

  1. J. Postel. RFC 1543: Instructions to RFC Authors. October 1993.
  2. Eric Rescorla. Guidelines for Authors of Security Considerations Sections.
  3. US-CERT. Technical Cyber Security Alert TA04-217A: Multiple Vulnerabilities in libpng. August 4, 2004.
  4. US-CERT. Technical Cyber Security Alert TA04-260A: Microsoft Windows JPEG component buffer overflow. September 16, 2004.
  5. Assaf Reshef to Bugtraq. Windows ANI File Parsing Proof Of Concept (MS05-002). Jan 12, 2005.
  6. Wikipedians. RSA.
  7. Wikipedians. Digital Signature Algorithm.
  8. Wikipedians. ElGamal Discrete Logarithm Cryptosystem.
  9. Hiroaki Etoh, Kunikazu Yoda. Protecting from Stack-smashing Attacks. June 19, 2000.
  10. libsafe Web Site.
  11. Brad Spengler. GrSecurity Web Site.
  12. The DigSig team. The DigSig Project. March, 2004.