Thursday, August 7, 2014

Application Security

Von Neumann = A fundamental aspect of the von Neumann architecture, on which most computers today are based, is that there is no inherent difference between the representation of data and of programming (instructions) in memory. Therefore, we cannot tell whether the pattern 4Eh (01001110) is the letter N or a decrement operation code (commonly known as an opcode). Similarly, the pattern 72h (01110010) may be the letter r or the first byte of the "jump if below" opcode. Without proper input validation, then, an attacker can provide input data that is actually an instruction for the system to do something unintended. Linus's law is based on the premise that with more people reviewing the source code (as in the case of open source), more security bugs can be detected, which improves security.
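A rough illustration of this ambiguity (my own sketch, not from the text): the same bytes can be decoded as printable text or read as the opcodes mentioned above.

# Hypothetical illustration: one byte pattern viewed as data or as code.
raw = bytes([0x4E, 0x72])

# Interpreted as data (ASCII text):
print(raw.decode("ascii"))        # -> "Nr"

# Interpreted as instructions, the same bytes are opcodes
# (as the text notes: 0x4E decrement, 0x72 first byte of "jump if below").
print(" ".join(f"{b:02X} ({b:08b})" for b in raw))   # -> "4E (01001110) 72 (01110010)"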

The Clark-Wilson model is an integrity model from which entity and referential integrity (RDBMS integrity) rules are derived.

Important characteristic of bytecode = A programming language like Java compiles source code into a sort of pseudo-object code called bytecode. The bytecode is then processed by an interpreter (called the Java Virtual Machine, or JVM) for the CPU to run. Because the bytecode is already fairly close to object code, the interpretation process is much faster than for other interpreted languages. And because bytecode still undergoes interpretation, a given Java program will run on any machine that has a JVM. Memory management and sandboxing are important security aspects that apply to the Java language and runtime, but not to bytecode itself.

Whether a pseudo-object (bytecode) representation can be easily reverse engineered is debatable and inconclusive. Because bytecode is a more pseudo-object representation of the source code, reversing it to source code is in fact considered less difficult than reversing object or executable code.

A covert channel, or confinement problem, is an information flow issue. It is a communication channel that allows two cooperating processes to transfer information in a way that violates the system's security policy. There are two types of covert channels: storage and timing. A covert storage channel involves the direct or indirect writing of a storage location by one process and the direct or indirect reading of the same storage location by another process. Typically, a covert storage channel involves a finite resource, such as a memory location or sector on a disk, that is shared by two subjects at different security levels. A covert timing channel depends on being able to influence the rate at which some other process is able to acquire resources, such as the CPU, memory, or I/O devices. Covert channels, as opposed to what should be the case (overt channels), can lead to denial of service; object reuse, by contrast, has to do with disclosure protection when objects in memory are reused by different processes.


TOC/TOU (time of check/time of use) is a common type of attack that occurs when some control changes between the time the system's security functions check the contents of variables and the time the variables are actually used during operations. For instance, a user logs on to a system in the morning and is later fired. As a result of the termination, the security administrator removes the user from the user database, but because the user did not log off, he or she still has access to the system and might try to get even. Logic bombs are software modules set up to run in a quiescent state, monitoring for a specific condition or set of conditions and activating their payload when those conditions occur. Remote-access trojans are malicious programs designed to be installed, usually remotely, after systems are installed and working. Phishing attempts to get the user to provide information that will be useful for identity-theft-type fraud.

The most effective defense against a buffer overflow attack is bounds checking, not output encoding. Buffer overflows can result when a program fills the assigned buffer of memory with more data than the buffer can hold. When the program begins to write beyond the end of the buffer, the program's execution path can be changed, or data can be written into areas used by the operating system itself. A buffer overflow is caused by improper (or missing) bounds checking on input to a program. By checking the bounds (boundaries) of allowable input size, buffer overflow can be mitigated. Disallowing dynamic construction of queries is a defense against injection attacks, and encoding the output mitigates scripting attacks. The collection of dangling objects in memory (garbage) can be requested but not necessarily forced, and proper memory management can help mitigate buffer overflow attacks, but the most effective defenses against buffer overflow are bounds checking and proper error checking.
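A minimal sketch of the bounds-checking idea (illustrative only, with an assumed buffer size; the real issue arises in languages like C, but the principle of validating length before copying is the same):

BUFFER_SIZE = 64  # assumed fixed-size buffer for illustration

def copy_to_buffer(data: bytes) -> bytearray:
    """Copy input into a fixed-size buffer only after a bounds check."""
    if len(data) > BUFFER_SIZE:
        # Reject (or truncate) oversized input instead of writing past the end.
        raise ValueError(f"input of {len(data)} bytes exceeds buffer of {BUFFER_SIZE}")
    buf = bytearray(BUFFER_SIZE)
    buf[:len(data)] = data
    return buf

copy_to_buffer(b"A" * 64)      # fits
# copy_to_buffer(b"A" * 65)    # would raise ValueError instead of overflowing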

Defect prevention rather than defect removal is characteristic of which software development methodology? Cleanroom. In the cleanroom software development methodology, the goal is to write code correctly the first time, rather than trying to find the problems once they are there; essentially, it focuses on defect prevention rather than defect removal. The waterfall methodology is extremely structured, and its key distinguishing characteristic is that each phase (stage) must be completed before moving on to the next, in order to prevent ad hoc scope creep. A distinguishing feature of the spiral model is that each phase of the waterfall contains four substages, based on the common Deming PDCA (Plan-Do-Check-Act) model; in particular, a risk assessment review (Check).

CASE (computer-aided software engineering) is the technique of using computers and computer utilities to help with the systematic analysis, design, development, implementation, and maintenance of software.

Sandboxing: One of the control mechanisms for mobile code is the sandbox. The sandbox provides a protective area for program execution; limits are placed on the amount of memory and processor resources the program can consume. If the program exceeds these limits, the Web browser terminates the process and logs an error code. This helps ensure the safety of the browser's performance.

Salami scam: A variant on the concept of logic bombs is what is known as the salami scam. The basic idea involves siphoning off small amounts of money (in some versions, fractions of a cent) credited to a specific account, over a large number of transactions.

Hoaxes use an odd kind of social engineering, relying on people's naturally gregarious nature and desire to communicate, on a sense of urgency and importance, and on the ambition people have to be the first to provide important new information.

The two most common forms of attack against databases are aggregation and inference.

Aggregation is the ability to combine nonsensitive data from separate sources to create sensitive information. For example, a user takes two or more unclassified pieces of data and combines them to form a classified piece of data that then becomes unauthorized for that user. Thus, the combined data sensitivity can be greater than the classification of the individual parts. Inference is the ability to deduce (infer) sensitive or restricted information from observing available information. Essentially, users may be able to determine unauthorized information from what they can access and may never need to directly access unauthorized data. For example, if a user is reviewing authorized information about patients, such as the medications they have been prescribed, the user may be able to determine the illness. Inference is one of the hardest threats to control.
Web application attacks:
a. Injection and scripting
b. Session hijacking and cookie poisoning
d. Bypassing authentication and insecure cryptography

A property that ensures only valid or legal transactions that do not violate any user-defined integrity constraints in DBMS technologies is known as consistency.

The ACID test, which stands for atomicity, consistency, isolation, and durability, is an important DBMS concept. Atomicity means all the parts of a transaction's execution are either all committed or all rolled back: do it all or not at all.
Consistency means the database is transformed from one valid state to another valid state; a transaction is allowed only if it follows user-defined integrity constraints. Illegal transactions are not allowed, and if an integrity constraint cannot be satisfied, the transaction is rolled back to its previously valid state and the user is informed that the transaction has failed. Isolation guarantees that the results of a transaction are invisible to other transactions until the transaction is complete. Durability ensures the results of a completed transaction are permanent and can survive future system and media failures; that is, once they are done, they cannot be undone. This is similar to transaction persistence.
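A small sketch of atomicity and consistency using Python's built-in sqlite3 (an illustration with an assumed accounts table, not from the text): either every statement in the transaction commits, or the whole transaction rolls back when a constraint is violated.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # the connection context manager commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE name = 'alice'")  # violates CHECK
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE name = 'bob'")
except sqlite3.IntegrityError:
    pass  # the failed transfer was rolled back as a unit

print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'alice': 100, 'bob': 50}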

Expert systems are composed of a knowledge base (modeled human experience) and an inference engine. The expert system uses the knowledge base (a collection of all the data, or knowledge, on a particular matter) and a set of algorithms or rules that infer new facts from knowledge and incoming data. The knowledge base could be the human experience that is available in an organization. Because the system reacts to a set of rules, if the rules are faulty, the response will also be faulty. Also, because human decision making is removed from the point of action, if an error were to occur, the reaction time from a human would be longer.

The best defense against session hijacking and man-in-the-middle (MITM) attacks is unique and random identification: the use of non-predictable (randomized) and unique identifiers for sessions between two communicating parties.
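A minimal sketch of generating such identifiers in Python (illustrative, not prescribed by the text), using the standard library's cryptographically secure secrets module:

import secrets

def new_session_id() -> str:
    """Return a unique, unpredictable session identifier (about 256 bits of randomness)."""
    return secrets.token_urlsafe(32)

print(new_session_id())  # different and unguessable on every call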

Encryption provides disclosure protection.
Prepared statements or stored procedures at the database layer reduce the likelihood of injection attacks.
A database view is a preventive security control measure against disclosure attacks.
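A short sketch of a parameterized (prepared) query with Python's sqlite3, assuming a hypothetical users table; the user-supplied value is bound as data rather than concatenated into the SQL string.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attempted injection

# Vulnerable pattern (dynamic query construction), shown only as a contrast:
#   conn.execute(f"SELECT role FROM users WHERE username = '{user_input}'")

# Parameterized query: the driver treats user_input purely as a literal value.
rows = conn.execute("SELECT role FROM users WHERE username = ?", (user_input,)).fetchall()
print(rows)  # [] -- the injection string matches no user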

Cross-site scripting / buffer overflow / SQL injection countermeasures: Without appropriate output encoding, injected script can be read and executed by the browser, potentially causing denial of service. Encoding the output mitigates scripting attacks; disallowing dynamic construction of queries is a defense against injection attacks; and bounds checking with proper error checking is the most effective defense against buffer overflows (garbage collection can be requested but not forced, and proper memory management only helps mitigate overflows).
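A minimal illustration of output encoding against cross-site scripting using Python's standard library (my example, not from the text):

import html

untrusted = '<script>alert("xss")</script>'

# Encode the output before placing it in an HTML page so the browser
# renders it as text instead of executing it as script.
print(html.escape(untrusted))
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;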

Locard's principle of exchange: when a crime is committed, the perpetrators leave something behind and take something with them, hence the exchange. This principle allows us to identify aspects of the persons responsible, even with a purely digital crime scene.

Single sign on is also known as Password Synchronization.

Probing: To give an attacker a road map of the network

Redundant servers: A primary server that mirrors its data to a secondary server

Server cluster: A group of independent servers that are managed as a single system.

802.5: defines a token-passing ring access method
802.11a: wireless networking in the 5GHz band with speeds of up to 54 Mbps
802.11b: a wireless LAN in the 2.4 GHz band with speeds up to 11 Mbps.
802.3: describes a bus topology using CSMA/CD at 10 Mbps.

UTP cable categories:
Category 4: rated for 16 Mbps (common in later Token Ring networks)
Category 5: 100 Mbps
Category 6: 155 Mbps
Category 7: 1 Gbps

UTP: consists of two insulated wires wrapped around each other in a regular spiral pattern.
Fiber-optic cable: carries signals as light waves.
Coax: requires fixed spacing between connections.

Unicast: describes a packet sent from a single source to a single destination
Broadcast: describes a packet sent to all nodes on the network segment.
Anycast: refers to communication between any sender and the nearest of a group of receivers in a network

10Base-2: 10 Mbps thinnet coax cabling rated to 185 meters maximum length
10Base-5: 10 Mbps thicknet coax cabling rated to 500 meters maximum length
10Base-F: 10 Mbps baseband optical fiber
100Base-T: 100 Mbps unshielded twisted pair cabling

OSI reference model Session Layer protocol, standard, or interface: SQL, RPC, ASP, DNA SCP

MIDI --> Presentation Layer--> Video+ Audio

The 48-bit, 12-digit hexadecimal number known as the Media Access Control (MAC) address: the first three bytes (first half) of the six-byte MAC address are the manufacturer's identifier. This can be a good troubleshooting aid if a network device is acting up, as it isolates the brand of the failing device.
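A small sketch (assumed colon- or dash-separated address format, not from the text) of pulling the manufacturer identifier (OUI) out of a MAC address string:

def oui(mac: str) -> str:
    """Return the first three bytes (the manufacturer/OUI portion) of a MAC address."""
    parts = mac.replace("-", ":").split(":")
    return ":".join(parts[:3]).upper()

print(oui("00:1a:2b:3c:4d:5e"))  # -> '00:1A:2B' (look this prefix up in an OUI registry)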

802.1D: the Spanning Tree Protocol (STP) is an Ethernet link-management protocol that provides link redundancy while preventing loops.
Because only one active path can exist for an Ethernet network to function properly, the STP algorithm calculates and maintains the best loop-free path through the network.

IEEE 802.5 specifies a token-passing ring access method for LANs.
IEEE 802.3 specifies an Ethernet bus topology using Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
IEEE 802.11 is the IEEE standard that specifies 1 Mbps and 2 Mbps wireless connectivity in the 2.4 GHz ISM (Industrial, Scientific, Medical) band.

Private IP address ranges specified by RFC 1918:
a. 10.0.0.0 - 10.255.255.255
b. 172.16.0.0 - 172.31.255.255
c. 192.168.0.0 - 192.168.255.255
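A quick way to check these ranges with Python's ipaddress module (an illustration, not part of the original notes):

import ipaddress

for addr in ("10.1.2.3", "172.16.5.9", "192.168.0.10", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
# 10.1.2.3 True, 172.16.5.9 True, 192.168.0.10 True, 8.8.8.8 False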

ISDN Basic Rate Interface (BRI): offers two B channels, which carry user data at 64 Kbps each, and one D channel.
ISDN Primary Rate Interface (PRI) for North America and Japan: 23 B channels at 64 Kbps and one 64 Kbps D channel, for a total throughput of 1.544 Mbps.
ISDN PRI for Europe, Australia, and other parts of the world: 30 B channels (64 Kbps each) and one D channel, for a total throughput of 2.048 Mbps.

DoD model layers and their corresponding OSI layers (the Host-to-Host layer conforms to the OSI Transport layer):
Host-to-Host layer -- Transport layer
Application layer -- Application, Presentation, and Session layers
Internet layer -- Network layer
Network Access layer -- Data Link and Physical layers

Internetwork packet routing --> Network Layer

802.11a: 5 GHz band, up to 54 Mbps
802.11b: 2.4 GHz band, up to 11 Mbps
802.11g: 2.4 GHz band, up to 54 Mbps
802.15: 2.4-2.5 GHz, WPAN

UTP wiring is rated for 100BaseT --> Category 5   100 Mbps

Open Shortest Path First (OSPF) is a link-state, hierarchical routing algorithm intended as a successor to RIP; its features include least-cost routing, multipath routing, and load balancing.


Cat 1: under 1 MHz (analog voice, ISDN BRI)
Cat 2: 1 MHz (IBM 3270, AS/400, Apple LocalTalk)
Cat 3: 16 MHz (10BaseT, 4 Mbps Token Ring)
Cat 4: 20 MHz (16 Mbps Token Ring)
Cat 5: 100 MHz (10/100BaseT)

Isochronous: a data transmission method in which data is sent continuously and does not use either an internal clocking source or start/stop bits for timing.

Asynchronous: a data transmission method using a start bit at the beginning of the data value and a stop bit at the end of the value.

Synchronous: a message-framed transmission method that uses clocking pulses to match the speed of the data transmission.

Plesiochronous: a transmission method that uses more than one timing source, sometimes running at different speeds. This method may require master and slave clock devices.

RAID LEVEL DESCRIPTION
RAID 0 Multiple Drive Striping
RAID 1 Disk Mirroring
RAID 3 Single Parity Drive
RAID 5 Distributed Parity Information


Cut-through and store-and-forward switching:

a. A store-and-forward switch reads the whole packet and checks its validity before sending it to the next destination.
b. Both methods operate at layer 2 of the OSI reference model.
c. A cut-through switch reads only the header of the incoming data packet.
d. A cut-through switch introduces less latency than a store-and-forward switch.

SOCKS protocol
It is sometimes referred to as an application-level proxy.
It operates in the transport layer of the OSI model.
Network applications need to be SOCKS-ified to operate.

dial-up hacking:
War Dialing
 Demon Dialing
ToneLoc

War Walking is not a dial-up hacking method.

War Walking (or War Driving) refers to scanning for 802.11-based wireless network information, by either driving or walking with a laptop, a wireless adapter in promiscuous mode, some type of scanning software such as NetStumbler or AiroPeek, and a Global Positioning System (GPS).

War Dialing is a method used to hack into computers by using a software program to automatically call a large pool of telephone numbers to search for those that have a modem attached.

Demon Dialing, similar to War Dialing, is a tool used to attack one modem using brute force to guess the password and gain access.

ToneLoc was one of the first war-dialing tools used by phone phreakers.

A back doorŽ into a network means --> Mechanisms created by hackers to gain network access at a later time.

A honey pot uses a dummy server with bogus applications as a decoy for intruders.

Like a dual-homed host, a screened-host firewall uses two network cards to connect to the trusted and untrusted networks, but a screened-host firewall adds a screening router between the host and the untrusted network.

A screened-subnet firewall also uses two NICs, but has two screening routers, with the host acting as a proxy server on its own network segment. One screening router controls traffic local to the network, while the second monitors and controls incoming and outgoing Internet traffic.

The most common drawback to using a dual-homed host firewall is that internal routing may accidentally become enabled.
A dual-homed host uses two NICs to attach to two separate networks, commonly a trusted network and an untrusted network. It is important that the internal routing function of the host be disabled to create an application-layer chokepoint and filter packets. Many systems come with routing enabled by default, such as IP forwarding, which makes the firewall useless.

Firewall type that uses a dynamic state table to inspect the content of packets --> stateful-inspection firewall

A stateful-inspection firewall intercepts incoming packets at the network level, then uses an inspection engine to extract state-related information from the upper layers. It maintains this information in a dynamic state table and evaluates subsequent connection attempts against it.

X.25, defines an interface to the first commercially successful connection-oriented packet-switching network, in which the packets travel over virtual circuits.

Frame Relay, was a successor to X.25, and offers a connection-oriented packet-switching network.

Asynchronous Transfer Mode (ATM) was developed as an outgrowth of ISDN standards and is a fast-packet, connection-oriented, cell-switching technology.

Switched Multimegabit Data Service (SMDS) is a high-speed, connectionless, packet-switching public network service that extends LAN-like performance to a metropolitan area network (MAN) or a wide area network (WAN). It is generally delivered over a SONET ring with a maximum effective service radius of around 30 miles.

802.3 uses a LengthŽ field which indicates the number of data bytes that are in the data field.
Ethernet II uses a TypeŽ field in the same 2 bytes to identify the message protocol type.
Both frame formats use a 8-byte Preamble field at the start of the packet, and a 4- byte Frame Check Sequence (FCS) field at the end of the packet, so

Standards that use fiber-optic cabling as their physical medium:

a. 100BaseFX
b. 1000BaseLX
c. 1000BaseSX

100BaseFX specifies a 100 Mbps baseband fiber-optic CSMA/CD LAN.
1000BaseLX specifies a 1000 Mbps CSMA/CD LAN over long-wavelength fiber optics.
1000BaseSX specifies a 1000 Mbps CSMA/CD LAN over short-wavelength fiber optics.

Which routing method commonly broadcasts its routing table information to all other routers every minute?
Distance vector routing.

Distance vector routing: uses the routing information protocol (RIP) to maintain a dynamic table of routing information that is updated regularly. It is the oldest and most common type of dynamic routing.

Static routing: defines a specific route in a configuration file on the router and does not require the routers to exchange route information dynamically.

Link-state routers: function like distance-vector routers, but use only firsthand information when building routing tables, by maintaining a copy of every other router's Link State Protocol (LSP) frame. This helps to eliminate routing errors and considerably lessens convergence time.

Difference between 802.11b WLAN ad hoc and infrastructure modes:
Wireless nodes can communicate peer-to-peer (directly) in ad hoc mode.

Nodes on an IEEE 802.11b wireless LAN can communicate in one of two modes: ad hoc or infrastructure.
In ad hoc mode, the wireless nodes communicate directly with each other, without establishing a connection to an access point on a wired LAN.
In infrastructure mode, the wireless nodes communicate to an access point, which operates similarly to a bridge or router and manages traffic between the wireless network and the wired network.

Most common type for recent Ethernet installations--> Twisted Pair

Category 5 Unshielded Twisted Pair (UTP) is rated for very high data throughput (100 Mbps) at short distances (up to 100 meters), and is the standard cable type for Ethernet installations.

ThickNet, also known as 10Base5, uses traditional thick coaxial (coax) cable at data rates of up to 10 Mbps.

ThinNet, uses a thinner gauge coax, and is known as 10Base2. It has a shorter maximum segment distance than ThickNet, but is less expensive to install.

Twinax, is like ThinNet, but has two conductors, and was used in IBM Systems.

Describes SSL:
It allows an application to have authenticated, encrypted communications across a network.

Which backup method will probably require the backup operator to use the greatest number of tapes for a complete system restoration, if a different tape is used every night in a five-day rotation?
The Incremental Backup Method.
Most backup methods use the Archive file attribute to determine whether a file should be backed up. The backup software determines which files need to be backed up by checking whether the Archive file attribute has been set, and then resets the Archive bit after the backup procedure. The Incremental Backup Method backs up only files that have been created or modified since the last backup was made, because the Archive file attribute is reset. This can result in the backup operator needing several tapes to do a complete restoration, as every tape with changed files, as well as the last full backup tape, will need to be restored.


The difference between an incremental backup and a differential backup is that the Archive file attribute is not reset after the differential backup is completed, therefore the changed file is backed up every time the differential backup is run.

The backup set grows in size as changed files continue to be backed up during each subsequent differential backup, until the next full backup occurs. The advantage of this backup method is that the backup operator needs only the last full backup and the most recent differential backup to restore the system.

Fiber optic cable has three basic physical elements, the core, the cladding, and the jacket. The core is the innermost transmission medium, which can be glass or plastic. The next outer layer, the cladding is also made of glass or plastic, but has different properties, and helps to reflect the light back into the core. The outermost layer, the jacket, provides protection from heat, moisture, and other environmental elements.


Subnetting a Class B network (e.g., 172.16.0.0) with mask 255.255.x.0:

255.255.224.0 (11100000.00000000): 2^3 - 2 = 6 subnets, 2^13 - 2 = 8190 hosts each
255.255.240.0 (11110000.00000000): 2^4 - 2 = 14 subnets, 2^12 - 2 = 4094 hosts each
255.255.248.0 (11111000.00000000): 2^5 - 2 = 30 subnets, 2^11 - 2 = 2046 hosts each
255.255.252.0 (11111100.00000000): 2^6 - 2 = 62 subnets, 2^10 - 2 = 1022 hosts each

Given the network 172.16.0.0, which subnet mask would allow us to divide the network into the maximum number of subnets with at least 600 host addresses per subnet?
255.255.252.0 (each subnet supports 1022 hosts).
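A quick check of that answer with Python's ipaddress module (illustrative; note the classic exam convention of subtracting the all-zeros and all-ones subnets):

import ipaddress

network = ipaddress.ip_network("172.16.0.0/16")
subnets = list(network.subnets(new_prefix=22))      # mask 255.255.252.0

print(len(subnets))                      # 64 total (62 by the "subtract 2" convention)
print(subnets[0].num_addresses - 2)      # 1022 usable hosts per subnet (>= 600)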

DHCP snooping: ensures DHCP servers can assign IP addresses only to selected systems.

Benefits of MPLS:
Performance characteristics can be set.
Layer 2 services can be overlaid.
Multiple layers can be eliminated.

An attack that sends a flood of UDP packets from a spoofed source address so that the resulting flood of ICMP unreachable replies overwhelms the victim.

An attacker inserts an invalid value into an oversized (fragmented) packet, making it difficult for the destination router to reassemble it.

ICMP message types:
0: echo reply (ping reply)
3: delivery failure (host unknown, network unreachable)
4: source quench
8: echo request (ping request)
11: time exceeded (TTL expired; used by tracert)
12: bad IP header
13: communication administratively prohibited

IGMP is based on a publish-and-subscribe model, which is how hosts share group membership information with multicast routers.
IGMP is a method of enabling multicast transmission in a LAN environment.
Multicast is a combination of one-to-many and many-to-many delivery.

Instant messaging (IM) applications are often designed to bypass firewalls.

DCE (Distributed Computing Environment) is similar to Kerberos; DCE specifies authorization techniques that were lacking in Kerberos.

The network perimeter concept restricts access from segment to segment via choke points.

Benefits of packet filtering:
1. Network scalability
2. Performance
3. Application independence

Drawback:
Security

Primary goals of QoS:
1. Different types of data and information (video, voice, data) can coexist.
2. Jitter and latency are managed.
3. Dedicated bandwidth is maintained.

Many applications are able to transmit over one physical medium at the same time using multiplexing.

In TCP, sequence numbers ensure ordered and complete message delivery.

SIP --> user agent client (UAC) and user agent server (UAS).

Autonomous system --> a single, hierarchical routing domain in which one governing entity manages traffic flow.

Technical attacks on cryptosystems: algebraic, linear, and differential cryptanalysis.
Non-technical: rubber-hose cryptanalysis, which involves assault and battery;
someone is physically threatened or possibly tortured so that they will hand over the key.

PGP, S/MIME, PEM, and MSP are email security standards.

DES mode used for ATM PIN encryption --> ECB.

Which protocol protects the communication channel and the message between a server and a client?
RPC

Block ciphers use S-boxes to perform mathematical functions and permutations on message bits.
Substitution boxes (S-boxes) use lookup tables that determine how bits should be scrambled and substituted.

Point multiplication (repeated addition) in an elliptic curve system is the analog of modular exponentiation, which defines the elliptic curve discrete logarithm problem.

In a digitally signed message transmission using a hash function:
The message digest is encrypted with the private key of the sender.

The strength of RSA public key encryption is based on the difficulty of finding the prime factors of very large numbers.

Elliptic curve cryptosystems have a higher strength per bit than RSA.

A cryptographic attack in which portions of the ciphertext are selected for trial decryption, while the attacker has access to the corresponding decrypted plaintext, is known as a chosen-ciphertext attack.

The Secure Hash Algorithm (SHA-1) of the Secure Hash Standard (NIST FIPS PUB 180) processes data in block lengths of 512 bits.
If a block is shorter than 512 bits, padding bits are added to make the block length equal to 512 bits.
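These parameters can be confirmed from Python's hashlib (illustration only):

import hashlib

h = hashlib.sha1(b"OVERLORD")
print(h.block_size * 8)    # 512  -- SHA-1 processes 512-bit blocks
print(h.digest_size * 8)   # 160  -- and produces a 160-bit message digest
print(h.hexdigest())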

The technique of confusion, proposed by Claude Shannon, is used in block ciphers to:
Conceal the statistical connection between ciphertext and plaintext.

The Advanced Encryption Standard, the Rijndael cipher, can be described as:
An iterated block cipher

The Rijndael cipher employs a round transformation that is itself composed of three layers of transformations:
Key addition layer
Linear mixing layer
Non-linear layer

A secret mechanism that enables the implementation of the reverse function in a one-way function is called a trap door.

Data diode: a mechanism, usually in multilevel security systems, that limits the flow of classified information to one direction.

Digital certification, certification authority, timestamping, Lightweight Directory Access Protocol (LDAP), non-repudiation support --> Public Key Infrastructure (PKI)

The vulnerability associated with the requirement to change security protocols at a carrier's Wireless Application Protocol (WAP) gateway from the Wireless Transport Layer Security (WTLS) protocol to SSL or TLS over the wired network is called the WAP Gap.

The Transport Layer Security (TLS) 1.0 protocol is based on the SSL 3.0 protocol specification.
The differences between TLS and SSL are not great, but there is enough of a difference that TLS 1.0 and SSL 3.0 are not operationally compatible. If interoperability is desired, there is a capability in TLS that allows it to function as SSL.

The primary goal of the TLS Protocol is to provide:
Privacy and data integrity between two communicating applications.

The TLS Protocol is comprised of the TLS Record and Handshake Protocols. The TLS Record Protocol is layered on top of a transport protocol such as TCP and provides privacy and reliability to the communications. The privacy is implemented by encryption using symmetric key cryptography such as DES or RC4. The secret key is generated anew for each connection; however, the Record Protocol can be used without encryption. Integrity is provided through the use of a keyed Message Authentication Code (MAC) using hash algorithms such as SHA or MD5. The TLS Record Protocol is also used to encapsulate a higher-level protocol such as the TLS Handshake Protocol. The Handshake Protocol is used by the server and client to authenticate each other; the authentication can be accomplished using asymmetric key cryptography such as RSA or DSS. The Handshake Protocol also sets up the encryption algorithm and cryptographic keys to enable the application protocol to transmit and receive information.



Elliptic curves and the elliptic curve discrete logarithm problem:
The elliptic curve used in cryptography is defined over a finite field (not over the real, complex, or rational numbers).
The points on an elliptic curve form a group under addition.
Multiplication (repeated addition) of a point in an elliptic curve system is the analog of modular exponentiation, thus defining a discrete logarithm problem.

In communications between two parties, encrypting the hash of a message with a symmetric key algorithm is equivalent to generating a keyed Message Authentication Code (MAC).
A MAC is used to authenticate files between users. If the sender and receiver both have the secret key, they are the only ones who can verify the hash. If a symmetric key algorithm is used to encrypt the one-way hash, the one-way hash becomes a keyed MAC.
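A short sketch of a keyed MAC using the standard hmac module (an HMAC construction, the usual modern way to key a hash; shown as an illustration rather than the text's exact method):

import hmac, hashlib

secret_key = b"shared-secret"          # known only to sender and receiver
message = b"transfer 100 to bob"

tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag with the same key and compares in constant time.
print(hmac.compare_digest(tag, hmac.new(secret_key, message, hashlib.sha256).hexdigest()))  # True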

Properties of a cryptographic hash function H(m), where m denotes the message being hashed by the function H:

a. H (m) is collision free.
b. The output is of fixed length.
c. H (m) is a one-way function.

A message of < 2^64 bits is input to the Secure Hash Algorithm (SHA), and the resultant message digest of 160 bits is fed into the DSA, which generates the digital signature of the message.

If the application of a hash function results in an m-bit fixed-length output, an attack on the hash function that attempts to achieve a collision after about 2^(m/2) trial input values is called a birthday attack.

This problem is analogous to asking, "How many people must be in a room for the probability of two people having the same birthday to equal 50%?" The answer is 23. Thus, trying 2^(m/2) possible inputs to a hash function gives roughly a 50% chance of finding two inputs that have the same hash value.
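A quick verification of the 23-person figure (illustration only):

def prob_shared_birthday(n: int) -> float:
    """Probability that at least two of n people share a birthday (365 equally likely days)."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(round(prob_shared_birthday(22), 3))  # ~0.476
print(round(prob_shared_birthday(23), 3))  # ~0.507 -- the first n that crosses 50%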

The minimum information necessary on a digital certificate is the name, public key, and digital signature of the certifier:
the name of the individual is certified and bound to his or her public key, and this binding is validated by the digital signature of the certifying agent.

Message digest algorithms MD2, MD4, and MD5 have in common that they all take a message of arbitrary length and produce a 128-bit message digest.


Clipper chip (Escrowed Encryption Standard) message recovery: decrypt the LEAF with the family key, Kf; recover U; obtain a court order to obtain the two halves of Ku; recover Ku; and then recover Ks, the session key. Use the session key to decrypt the message.

The message is encrypted with the symmetric session key, Ks. To decrypt the message, then, Ks must be recovered. The LEAF contains the session key, but the LEAF is encrypted with the family key, Kf, that is common to all Clipper chips. The authorized agency has access to Kf and decrypts the LEAF. However, the session key is still encrypted by the 80-bit unit key, Ku, which is unique to each Clipper chip and is identified by the unique identifier, U. Ku is divided into two halves, and each half is deposited with an escrow agency. The law enforcement agency obtains the two halves of Ku by presenting the escrow agencies with a court order for the key identified by U. The two halves obtained under the court order are XORed together to obtain Ku. Then Ku is used to recover the session key, Ks, and Ks is used to decrypt the message.

The decryption sequence to obtain Ks can be summarized as:
Kf -> U -> [(1/2)Ku XOR (1/2)Ku] -> Ku -> Ks

What BEST describes the National Security Agency-developed Capstone? A chip that implements the U.S. Escrowed Encryption Standard.

Capstone is a Very Large Scale Integration (VLSI) chip that employs the Escrowed Encryption Standard and incorporates the Skipjack algorithm, similar to the Clipper chip. As such, it has a LEAF. Capstone also supports public key exchange and digital signatures. At this time, Capstone products have their LEAF function suppressed, and a certifying authority provides for key recovery.

Block cipher: A symmetric key algorithm that operates on a fixed-length block of plaintext and transforms it into a fixed-length block of ciphertext. A block cipher breaks the plaintext into fixed-length blocks, commonly 64 bits, and encrypts them into fixed-length blocks of ciphertext. Another characteristic of a block cipher is that, if the same key is used, a particular plaintext block will always be transformed into the same ciphertext block. Examples of block ciphers are DES, Skipjack, IDEA, RC5, and AES. An example of a block cipher mode in a symmetric key cryptosystem is the Electronic Code Book (ECB) mode of operation. In ECB mode, a plaintext block is transformed into a ciphertext block, and if the same key is used for each transformation, a "code book" can be compiled for each plaintext block and its corresponding ciphertext block.

An iterated block cipher encrypts by breaking the plaintext block into two halves and, with a subkey, applying a "round" transformation to one of the halves. The output of this transformation is then XORed with the remaining half, and the round is completed by swapping the two halves. This type of cipher is known as a Feistel cipher. The question stem describes one round of a Feistel cipher. The algorithm was developed by an IBM team led by Horst Feistel (H. Feistel, "Cryptography and Computer Privacy," Scientific American, v. 228, n. 5, May 1973). It was called Lucifer and was the basis for the Data Encryption Standard (DES).
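A toy sketch of one Feistel round as just described (purely illustrative: the round function and subkey here are arbitrary choices, not DES/Lucifer):

def round_function(half: int, subkey: int) -> int:
    """Arbitrary toy round function F (a real cipher uses a much stronger F)."""
    return ((half * 31) ^ subkey) & 0xFFFF

def feistel_round(left: int, right: int, subkey: int) -> tuple[int, int]:
    """One Feistel round: XOR one half with F(other half, subkey), then swap."""
    return right, left ^ round_function(right, subkey)

def feistel_round_inverse(left: int, right: int, subkey: int) -> tuple[int, int]:
    """Decryption reuses the same F: undo the swap and the XOR."""
    return right ^ round_function(left, subkey), left

L, R = 0x1234, 0xABCD
L2, R2 = feistel_round(L, R, subkey=0x0F0F)
print((L, R) == feistel_round_inverse(L2, R2, subkey=0x0F0F))  # True -- the round is invertible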

RC4 is a variable key-size stream cipher developed by Ronald Rivest. In this type of cipher, a sequence of keystream bits derived from the key is bitwise XORed with the plaintext.
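A compact, illustrative RC4 sketch (for study only; RC4 is considered broken and should not be used in practice):

def rc4(key: bytes, data: bytes) -> bytes:
    """XOR data with the RC4 keystream; running it twice with the same key decrypts."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA) combined with the XOR step
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
print(ct.hex())           # bbf316e8d940af0ad3 (well-known test vector)
print(rc4(b"Key", ct))    # b'Plaintext'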

A key schedule is a set of subkeys derived from a secret key.

The subkeys are typically used in iterated block ciphers. In this type of cipher, the plaintext is broken into fixed-length blocks and enciphered in "rounds"; in each round, the same transformation is applied using one of the subkeys of the key schedule.

The Wireless Transport Layer Security (WTLS) protocol in the Wireless Application Protocol (WAP) stack is based on which Internet security protocol?

TLS

WTLS has to incorporate functionality that is provided for TLS by TCP in the TCP/IP protocol suite, in that WTLS can operate over UDP. WTLS supports data privacy, authentication, and integrity. Because WTLS has to incorporate a large number of handshakes when security is implemented, significant delays may occur. During a WTLS handshake session, WTLS can set up the following security classes:
Class 1: No certificates
Class 2: The client does not have a certificate; the server has a certificate
Class 3: The client and the server both have certificates

The Advanced Encryption Standard (Rijndael) block cipher requirements regarding keys and block sizes have now evolved to which configuration?
The block size is 128 bits, and the key can be 128, 192, or 256 bits.

AES comprises three key sizes (128, 192, and 256 bits) with a fixed block size of 128 bits. The Advanced Encryption Standard (AES) was announced on November 26, 2001, as Federal Information Processing Standard Publication 197 (FIPS PUB 197). FIPS PUB 197 states that "This standard may be used by Federal departments and agencies when an agency determines that sensitive (unclassified) information (as defined in P.L. 100-235) requires cryptographic protection. Other FIPS-approved cryptographic algorithms may be used in addition to, or in lieu of, this standard." Depending on which of the three key sizes is used, the standard may be referred to as AES-128, AES-192, or AES-256.

The number of rounds used in the Rijndael cipher is a function of the key size, as follows:
256-bit key: 14 rounds
192-bit key: 12 rounds
128-bit key: 10 rounds

Rijndael has a symmetric and parallel structure that provides for flexibility of implementation and resistance to cryptanalytic attacks. Attacks on Rijndael would involve the use of differential and linear cryptanalysis.

The Wireless Transport Layer Security (WTLS) protocol in the Wireless Application Protocol (WAP) stack provides for security between the WAP client and the gateway.
Transport Layer Security (TLS) provides for security between the content server on the Internet and the WAP gateway.

The MIME protocol specifies a structure for the body of an email message. MIME supports a number of formats in the email body, including graphics, enhanced text, and audio, but does not provide security services for these messages. S/MIME defines such services for MIME as digital signatures and encryption, based on a standard syntax.

Digital cash refers to the electronic transfer of funds from one party to another. When digital cash is referred to as anonymous or identified, it means: anonymous, the identity of the cash holder is not known; identified, the identity of the cash holder is known.

Anonymous implementations of digital cash do not identify the cash holder and use blind signature schemes; identified implementations use conventional digital signatures to identify the cash holder. In looking at these two approaches, anonymous schemes are analogous to cash, since cash does not allow tracing of the person who made the payment, while identified approaches are the analog of credit or debit card transactions.

Key recovery methods:
a. A message is encrypted with a session key, and the session key is, in turn, encrypted with the public key of a trustee agent. The encrypted session key is sent along with the encrypted message. The trustee, when authorized, can then decrypt the message by recovering the session key with the trustee's private key.
b. A message is encrypted with a session key. The session key, in turn, is broken into parts, and each part is encrypted with the public key of a different trustee agent. The encrypted parts of the session key are sent along with the encrypted message. The trustees, when authorized, can then decrypt their portions of the session key and provide their respective parts to a central agent. The central agent can then decrypt the message by reconstructing the session key from the individual components.
c. A secret key or a private key is broken into a number of parts, and each part is deposited with a trustee agent. The agents can then provide their parts of the key to a central authority when presented with appropriate authorization. The key can then be reconstructed and used to decrypt messages encrypted with that key.

Encrypting parts of the session key with the private keys of the trustee agents would provide no security for the message, since the message could be decrypted by recovering the key components of the session key using the public keys of the respective agents, which are available to anyone.

Theoretically, quantum computing offers the possibility of factoring the products of large prime numbers and calculating discrete logarithms in polynomial time. These calculations can be accomplished in such a compressed time frame because a quantum bit in a quantum computer is actually a linear superposition of both the one and zero states and, therefore, can theoretically represent both values in parallel. This phenomenon allows computation that usually takes exponential time to be accomplished in polynomial time, since different values of the binary pattern of the solution can be calculated simultaneously.

In digital computers, a bit is in either a one or a zero state. In a quantum computer, through linear superposition, a quantum bit can be in both states essentially simultaneously; thus, computations consisting of trial evaluations of binary patterns can take place simultaneously. The probability of obtaining a correct result is increased through a phenomenon called constructive interference, while the probability of obtaining an incorrect result is decreased through destructive interference.

Public Key Cryptography Standards (PKCS): a set of public-key cryptography standards that support algorithms such as Diffie-Hellman and RSA, as well as algorithm-independent standards.

PKCS supports algorithm-independent and algorithm-specific implementations as well as digital signatures and certificates. It was developed by a consortium including RSA Laboratories, Apple, DEC, Lotus, Sun, Microsoft, and MIT. At this writing, there are 15 PKCS standards. Examples of these standards are:
PKCS #1: Defines mechanisms for encrypting and signing data using the RSA public-key system
PKCS #3: Defines the Diffie-Hellman key agreement protocol
PKCS #10: Describes a syntax for certification requests
PKCS #15: Defines a standard format for cryptographic credentials stored on cryptographic tokens


An interface to a library of software functions that provide security and cryptography services is called a cryptographic application programming interface (CAPI).
A CAPI is designed for software developers to call functions from the library and thus make it easier to implement security services. An example of a CAPI is the Generic Security Service API (GSS-API). The GSS-API provides data confidentiality, authentication, and data integrity services and supports the use of both public and secret key mechanisms. The GSS-API is described in the Internet Proposed Standard RFC 2078.


British Standard 7799 / ISO Standard 17799 discusses cryptographic policies. It states, "An organization should develop a policy on its use of cryptographic controls for protection of its information. . . . When developing a policy, the following should be considered:"

a. The management approach toward the use of cryptographic controls across the organization
b. The approach to key management, including methods to deal with the recovery of encrypted information in the case of lost, compromised, or damaged keys
c. Roles and responsibilities

A policy is a general statement of management's intent; therefore, a policy would not specify the encryption scheme to be used.

The main chapter headings of the standard are:
Security Policy
Organizational Security
Asset Classification and Control
Personnel Security
Physical and Environmental Security
Communications and Operations Management
Access Control
Systems Development and Maintenance
Business Continuity Management
Compliance

The Number Field Sieve (NFS) is a general-purpose factoring algorithm that can be used to factor large numbers.

The NFS has been successful in efficiently factoring numbers larger than 115 digits, and a version of NFS has successfully factored a 155-digit number. Clearly, factoring is an attack that can be used against the RSA cryptosystem, in which the public and private keys are calculated based on the product of two large prime numbers.

DESX is a variant of DES in which the input plaintext is bitwise XORed with 64 bits of additional key material before encryption with DES, and the output of DES is also bitwise XORed with another 64 bits of key material.
DESX was developed by Ron Rivest to increase the resistance of DES to brute-force key search attacks; however, the resistance of DESX to differential and linear attacks is equivalent to that of DES with independent subkeys.

The ANSI X9.52 standard defines a variant of DES encryption with keys k1, k2, and k3 as:
C = Ek3[Dk2[Ek1[M]]]
This DES variant is triple DES in the EDE mode. It performs an encryption (E) of plaintext message M with key k1, a decryption (D) with key k2 (essentially, another encryption), and a third encryption with key k3. Another implementation of DES EDE keeps keys k1 and k2 independent but makes keys k1 and k3 identical. That implementation is written as:
C = Ek1[Dk2[Ek1[M]]]

Using a modulo 26 substitution cipher where the letters A to Z of the alphabet are given values 0 to 25, respectively, encrypt the message "OVERLORD BEGINS." Use the key K = NEW and D = 3, where D is the number of repeating letters representing the key. The encrypted message is:

BZAEPKEH XRKEAW

OVERLORD becomes 14 21 4 17 11 14 17 3, BEGINS becomes 1 4 6 8 13 18, and the key NEW becomes 13 4 22.
Adding the key repetitively to OVERLORD BEGINS modulo 26 yields 1 25 0 4 15 10 4 7 23 17 10 4 0 22, which translates to BZAEPKEH XRKEAW.
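A short script that reproduces the computation (illustrative; it simply adds key and plaintext values modulo 26, passing the space through):

def mod26_encrypt(plaintext: str, key: str) -> str:
    """Add repeating key letters to plaintext letters modulo 26 (A=0 .. Z=25)."""
    out, i = [], 0
    for ch in plaintext.upper():
        if not ch.isalpha():
            out.append(ch)              # pass spaces through without consuming a key letter
            continue
        k = ord(key[i % len(key)].upper()) - ord("A")
        out.append(chr((ord(ch) - ord("A") + k) % 26 + ord("A")))
        i += 1
    return "".join(out)

print(mod26_encrypt("OVERLORD BEGINS", "NEW"))  # BZAEPKEH XRKEAW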

The algorithm of the 802.11 wireless LAN standard that is used to protect transmitted information from disclosure is called Wired Equivalent Privacy (WEP).
WEP is designed to prevent the violation of the confidentiality of data transmitted over the wireless LAN. Another feature of WEP is to prevent unauthorized access to the network.

In the Wireless Application Protocol (WAP) stack, the security layer protocol is WTLS.

The Wired Equivalent Privacy (WEP) algorithm of the 802.11 wireless LAN standard uses which of the following to protect the confidentiality of information being transmitted on the LAN?
A secret key that is shared between a mobile station (e.g., a laptop with a wireless Ethernet card) and a base station access point.

The transmitted packets are encrypted with a secret key, and an Integrity Check (IC) field comprised of a CRC-32 checksum is attached to the message. WEP uses the RC4 variable key-size stream cipher encryption algorithm. RC4 was developed in 1987 by Ron Rivest and operates in output feedback mode. Researchers at the University of California at Berkeley (wep@isaac.cs.berkeley.edu) have found that the security of the WEP algorithm can be compromised, particularly with the following attacks:
Passive attacks to decrypt traffic based on statistical analysis
Active attacks to inject new traffic from unauthorized mobile stations, based on known plaintext
Active attacks to decrypt traffic, based on tricking the access point
Dictionary-building attacks that, after analysis of about a day's worth of traffic, allow real-time automated decryption of all traffic

The Berkeley researchers found that these attacks are effective against both the 40-bit and the so-called 128-bit versions of WEP, using inexpensive off-the-shelf equipment. The attacks can also be used against networks that use the 802.11b standard, which is the extension to 802.11 that supports higher data rates but does not change the WEP algorithm.

The weaknesses in WEP and 802.11 are being addressed by the IEEE 802.11i Working Group. WEP will be upgraded to WEP2 with the following proposed changes:
Modifying the method of creating the initialization vector (IV)
Modifying the method of creating the encryption key
Protection against replays
Protection against IV collision attacks
Protection against forged packets

In the longer term, it is expected that the Advanced Encryption Standard (AES) will replace the RC4 encryption algorithm currently used in WEP.

In a block cipher, diffusion can be accomplished through permutation. Diffusion is aimed at obscuring redundancy in the plaintext by spreading the effect of the transformation over the ciphertext. Permutation, also known as transposition, operates by rearranging the letters of the plaintext.

The National Computer Security Center (NCSC) is a branch of the National Security Agency (NSA) that initiates research and develops and publishes standards and criteria for trusted information systems.

The NCSC promotes information systems security awareness and technology transfer through many channels, including the annual National Information Systems Security Conference. It was founded in 1981 as the Department of Defense Computer Security Center, and its name was changed to NCSC in 1985. It developed the Trusted Product Evaluation Program and the Rainbow Series for evaluating commercial products against information system security criteria.

A portion of a Vigenère cipher square uses five (rows 1, 2, 14, 16, 22) of the possible 26 alphabets. Using the keyword bow, which of the following is the encryption of the word "advance" using the Vigenère cipher?
a. b r r b b y h
b. b r r b j y f
c. b r r b b y f
d. b r r b c y f
Answer: c
The Vigenère cipher is a polyalphabetic substitution cipher. The keyword bow indicates which alphabets to use: the letter b indicates the alphabet of row 1, the letter o indicates the alphabet of row 14, and the letter w indicates the alphabet of row 22. To encrypt, arrange the keyword repetitively over the plaintext. Thus, the letter a of the plaintext is transformed into b of the alphabet in row 1, the letter d is transformed into r of row 14, the letter v is transformed into r of row 22, and so on.

There are two fundamental security protocols in IPsec: the Authentication Header (AH) and the Encapsulating Security Payload (ESP). Their functions are:

ESP: a data-encrypting and source-authenticating protocol that also validates the integrity of the transmitted data.
AH: a source-authenticating protocol that also validates the integrity of the transmitted data.

ESP has a source authentication and integrity capability through the use of a hash algorithm and a secret key, and it provides confidentiality by means of secret key cryptography. DES and triple DES secret key block ciphers are supported by IPsec, and other algorithms will also be supported in the future. AH uses a hash algorithm in the packet header to authenticate the sender and validate the integrity of the transmitted data.

A disadvantage of a stream cipher is that the receiver and transmitter must be synchronized. Advantages of a stream cipher include:
a. The same equipment can be used for encryption and decryption.
b. It is amenable to hardware implementations that result in higher speeds.
c. Since encryption takes place bit by bit, there is no error propagation.

The transmitter and receiver must be synchronized, since they must use the same keystream bits for the same bits of the text that are to be enciphered and deciphered. Usually, synchronizing frames must be sent to effect the synchronization; thus, additional overhead is required for the transmissions.

Properties of a public key cryptosystem (let P represent the private key, Q the public key, and M the plaintext message):
a. Q[P(M)] = M
b. P[Q(M)] = M
c. It is computationally infeasible to derive P from Q.

A form of digital signature where the signer is not privy to the content of the message is called a blind signature.

A blind signature algorithm for the message M uses a blinding factor, f; a modulus, m; the private key, s, of the signer; and the public key, q, of the signer. The sender, who generates f and knows q, presents the message to the signer in the form M·f^q (mod m). Thus, the message is not in a form readable by the signer, since the signer does not know f. The signer signs M·f^q (mod m) with his or her private key, returning (M·f^q)^s (mod m). This reduces to f·M^s (mod m), since s and q are inverses of each other. The sender then divides f·M^s (mod m) by the blinding factor, f, to obtain M^s (mod m), which is the message M signed with the private key, s, of the signer.
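A small numerical sketch of this blinding scheme using toy RSA parameters (illustrative only; the modulus, keys, and blinding factor are chosen by me and are far too small for real use):

# Toy RSA parameters: m = 61 * 53, public exponent q, private exponent s with q*s = 1 mod phi(m)
m, q, s = 3233, 17, 2753
M, f = 65, 7                                # message and blinding factor chosen by the sender

blinded = (M * pow(f, q, m)) % m            # sender submits M * f^q (mod m)
signed_blinded = pow(blinded, s, m)         # signer returns (M * f^q)^s = f * M^s (mod m)
signature = (signed_blinded * pow(f, -1, m)) % m   # sender removes f to recover M^s (mod m)

print(signature == pow(M, s, m))            # True -- a valid signature on M, never seen by the signer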


The following compilation represents what facet of cryptanalysis?

A 8.2   B 1.5   C 2.8   D 4.3   E 12.7   F 2.2   G 2.0   H 6.1   I 7.0
J 0.2   K 0.8   L 4.0   M 2.4   N 6.7    O 7.5   P 1.9   Q 0.1   R 6.0
S 6.3   T 9.1   U 2.8   V 1.0   W 2.4    X 0.2   Y 2.0   Z 0.1
Frequency analysis

The compilation is from a study by H. Becker and F. Piper, originally published in Cipher Systems: The Protection of Communication. The listing shows the relative frequency, in percent, of the letters of the English alphabet in a large number of passages taken from newspapers and novels. Thus, in a substitution cipher, an analysis of the frequency of appearance of certain letters may give clues to the actual letters before transformation. Note that the letters E, A, and T have relatively high frequencies in English text.
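A tiny frequency-analysis sketch with the standard library (my illustration; the sample string is made up):

from collections import Counter

sample = "SOME INTERCEPTED CIPHERTEXT OR KNOWN PLAINTEXT GOES HERE"
letters = [c for c in sample.upper() if c.isalpha()]

counts = Counter(letters)
for letter, n in counts.most_common(5):
    print(letter, f"{100 * n / len(letters):.1f}%")
# The most frequent ciphertext letters are candidate substitutions for E, T, A, ...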


Superscalar computer architecture is characterized by a processor that enables concurrent execution of multiple instructions in the same pipeline stage.

Memory space insulated from other running processes in a multiprocessing system is part of a protection domain.

In the discretionary portion of the Bell-LaPadula model that is based on the access matrix, how the access rights are defined and evaluated is called authorization, since authorization is concerned with how access rights are defined and how they are evaluated.

A computer system that employs the necessary hardware and software assurance measures to enable it to process multiple levels of classified or sensitive information is called a trusted system.

For fault-tolerance to operate, a system must be:
Capable of detecting and correcting the fault.

Which of the following choices describes the four phases of the National Information Assurance Certification and Accreditation Process (NIACAP)?
Definition, Verification, Validation, and Post Accreditation

What is a programmable logic device (PLD)?
An integrated circuit with connections or internal logic gates that can be changed through a programming process.

The termination of selected, non-critical processing when a hardware or software failure occurs and is detected is referred to as:
Fail soft.

Which of the following are the three types of NIACAP accreditation?
Site, type, and system

Content-dependent control makes access decisions based on the object's data (the sensitivity of the content).

The term failover refers to switching to a duplicate, "hot" backup component; a hot backup system maintains a duplicate state with the primary system.

Primary storage is the:
Memory directly addressable by the CPU, which is for the storage of instructions and data that are associated with the program being executed.

In the Common Criteria, a Protection Profile:
Specifies the security requirements and protections of the products to be evaluated.

Context-dependent control uses which of the following to make decisions?
Subject or object attributes or environmental characteristics

What is a computer bus?
A group of conductors for the addressing of data and control.

Increasing performance in a computer by overlapping the steps of different instructions is called:
Pipelining.

The addressing mode in which an instruction accesses a memory location whose contents are the address of the desired data is called:

Indirect addressing.

The addressing mode in which the address location that is specified in the program instructions contains the address of the final desired location is called:
Indirect addressing.

Indexed addressing determines the desired memory address by adding the contents of the address defined in the program's instruction to that of an index register.

Index registers are usually contained inside the CPU.

Absolute addressing addresses the entire primary memory space.
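
A toy memory model makes the difference between these addressing modes concrete (a sketch only; the memory contents, addresses, and index-register value below are invented for illustration):

# Toy memory: addresses 0..4. A stored value may be data or, for indirect
# addressing, the address of the final operand.
memory = {0: 42, 1: 7, 2: 99, 3: 5, 4: 2}
index_register = 3

def direct(address):
    # Direct/absolute addressing: the instruction supplies the operand's address.
    return memory[address]

def indirect(address):
    # Indirect addressing: the addressed location holds the address of the data.
    return memory[memory[address]]

def indexed(base_address):
    # Indexed addressing: effective address = address in the instruction + index register.
    return memory[base_address + index_register]

print(direct(0))    # 42
print(indirect(4))  # memory[4] = 2, so the result is memory[2] = 99
print(indexed(1))   # memory[1 + 3] = memory[4] = 2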

Processes are placed in a ring structure according to:
 Least privilege.

The MULTICS operating system is a classic example of:
Ring protection system.

Multics is based on the ring protection architecture.

What are the hardware, firmware, and software elements of a Trusted Computing Base (TCB) that implement the reference monitor concept called?
A security kernel

The memory hierarchy in a typical digital computer, in order, is:
CPU, cache, primary memory, secondary memory

A processor in which a single instruction specifies more than one CONCURRENT operation is called:
Very Long Instruction Word processor

Superscalar processor, performs a concurrent execution of multiple instructions in the same pipeline stage.

A scalar processor, executes one instruction at a time.

The standard process to certify and accredit U.S. defense critical information systems is called:
DITSCAP

Defense Information Technology Security Certification and Accreditation Process.

The property that states, "Reading or writing is permitted at a particular level of sensitivity, but not to either higher or lower levels of sensitivity" is called the:
Strong * (star) Property

The Discretionary Security Property specifies discretionary access control in the Bell-LaPadula model by the use of an access matrix.

Three major parts of the Common Criteria (CC)?
Introduction and General Model
Security Functional Requirements
Security Assurance Requirements

In the Common Criteria, an implementation-independent statement of security needs for a set of IT security products that could be built is called a:

Protection Profile (PP).

Components of a CC Protection Profile:
a. Target of Evaluation (TOE) description
b. Threats against the product that must be addressed
c. Security objectives

When microcomputers were first developed, the instruction fetch time was much longer than the instruction execution time because of the relatively slow speed of memory accesses. This situation led to the design of the:

CISC
Complex Instruction Set Computer

The logic was that since it took a long time to fetch an instruction from memory relative to the time required to execute that instruction in the CPU, then the number of instructions required to implement a program should be reduced. This reasoning naturally resulted in densely coded instructions with more decode and execution cycles in the processor. This situation was ameliorated by pipelining the instructions wherein the decode and execution cycles of one instruction would be overlapped in time with the fetch cycle of the next instruction.
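
A rough sketch of that overlap, assuming an idealized two-stage pipeline (instruction labels are invented; real processors use more stages and must also handle hazards):

# Simplified two-stage pipeline: while instruction i is being decoded/executed,
# instruction i+1 is being fetched. Each row shows what each unit does per cycle.
instructions = ["I1", "I2", "I3", "I4"]

def pipeline_schedule(instrs):
    schedule = []
    for cycle in range(len(instrs) + 1):
        fetching = instrs[cycle] if cycle < len(instrs) else "-"
        executing = instrs[cycle - 1] if cycle >= 1 else "-"
        schedule.append((cycle, fetching, executing))
    return schedule

for cycle, fetch, execute in pipeline_schedule(instructions):
    print(f"cycle {cycle}: fetch {fetch:>2}, decode/execute {execute:>2}")

Four instructions finish in five cycles instead of eight, which is the benefit of the overlapping described above.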

 RISC, evolved when packaging and memory technology advanced to the point where there was not much difference in memory access times and processor execution times. Thus, the objective of the RISC architecture was to reduce the number of cycles required to execute an instruction. Accordingly, this increased the number of instructions in the average program by approximately 30%, but it reduced the number of cycles per instruction on the average by a factor of four. Essentially, the RISC architecture uses simpler instructions but makes use of other features such as optimizing compilers to reduce the number of instructions required and large numbers of general purpose registers in the processor and data caches.

A superscalar processor allows concurrent execution of instructions in the same pipeline stage. A scalar processor is defined as a processor that executes one instruction at a time. The term superscalar denotes multiple, concurrent operations performed on scalar values, as opposed to the vectors or arrays that are used as objects of computation in array processors.
In a very-long-instruction-word (VLIW) processor, multiple, concurrent operations are performed in a single instruction. Because multiple operations are performed in one instruction rather than using multiple instructions, the number of instructions is reduced relative to those in a scalar processor.
However, for this approach to be feasible, the operations in each VLIW instruction must be independent of each other.

The main objective of the Java Security Model (JSM) is to: Protect the user from hostile, network mobile code

Components of a general enterprise security architecture model for an organization:
a. Information and resources to ensure the appropriate level of risk management
b. Consideration of all the items that comprise information security, including distributed systems, software, hardware, communications systems, and networks
c. A systematic and unified approach for evaluating the organization's information systems security infrastructure and defining approaches to implementation and deployment of information security controls.

The Bell-LaPadula model addresses which one of the following items:
Information flow from high to low

Information flow from high to low is addressed by the *-property of the Bell-LaPadula model, which states that a subject cannot write data from a higher level of classification to a lower level of classification. This property is also known as the confinement property or the no-write-down property.
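
A minimal sketch of the two mandatory Bell-LaPadula checks, using an invented numeric ordering of sensitivity levels (illustration only, not a full implementation of the model):

# Hypothetical ordering of sensitivity levels (higher number = more sensitive).
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level, object_level):
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property: no write down (information must not flow from high to low).
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("Secret", "Unclassified"))   # True  - reading down is allowed
print(can_write("Secret", "Unclassified"))  # False - writing down is blocked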


To address the practical aspects of multilevel security in which, for example, an unclassified paragraph in a Secret document has to be moved to an Unclassified document, the Bell-LaPadula model introduces the concept of a:
Trusted subject

The model permits a trusted subject to violate the *-property but to comply with the intent of the *-property. Thus, a person who is a trusted subject could move unclassified data from a classified document to an unclassified document without violating the intent of the *-property. Another example would be for a trusted subject to downgrade the classification of material when it has been determined that the downgrade would not harm national or organizational security and would not violate the intent of the *-property.

In a refinement of the Bell-LaPadula model, the strong tranquility property states that:
Objects never change their security level.

The weak tranquility property states that objects never change their security level in a way that would violate the system security policy.

As an analog of confidentiality labels, integrity labels in the Biba model are assigned according to:

Subjects are assigned classes according to their trustworthiness; objects are assigned integrity labels according to the harm that would be done if the data were modified improperly.

The Clark-Wilson Integrity Model ("A Comparison of Commercial and Military Computer Security Policies," Proceedings of the 1987 IEEE Computer Society Symposium on Research in Security and Privacy, Los Alamitos, CA, IEEE Computer Society Press, 1987) focuses on what two concepts?

Separation of duty and well-formed transactions

The Clark-Wilson Model is focused on the needs of the commercial world and is based on the theory that integrity is more important than confidentiality for commercial organizations. Further, the model incorporates the commercial concepts of separation of duty and well-formed transactions. The well-formed transaction of the model is implemented by the transformation procedure (TP). A TP is defined in the model as the mechanism for transforming the set of constrained data items (CDIs) from one valid state of integrity to another valid state of integrity. The Clark-Wilson Model defines rules for separation of duty that denote the relations between a user, TPs, and the CDIs that can be operated upon by those TPs. The model describes the access triple: the user, the program (TP) that is permitted to operate on the data, and the data (CDI).
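
The access triple can be sketched as a simple policy lookup (a sketch of the idea only; the user, TP, and CDI names below are invented):

# Access triples: (user, transformation procedure, constrained data item).
# A user may change a CDI only through a TP that the policy binds to both.
ACCESS_TRIPLES = {
    ("alice", "post_payment", "accounts_payable"),
    ("bob",   "approve_payment", "accounts_payable"),
}

def tp_allowed(user, tp, cdi):
    # The CDI may only be operated on through a TP listed for this user.
    return (user, tp, cdi) in ACCESS_TRIPLES

print(tp_allowed("alice", "post_payment", "accounts_payable"))     # True
print(tp_allowed("alice", "approve_payment", "accounts_payable"))  # False - separation of duty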

The model that addresses the situation wherein one group is not affected by another group using specific commands is called the:
Non-interference model

In the non-interference model, security policy assertions are defined in the abstract. The process of moving from the abstract to developing conditions that can be applied to the transition functions that operate on the objects is called unwinding.

The secure path between a user and the Trusted Computing Base (TCB) is called:
Trusted path

The Common Criteria terminology for the degree of examination of the product to be tested is:
Evaluation Assurance Level (EAL)

A difference between the Information Technology Security Evaluation Criteria (ITSEC) and the Trusted Computer System Evaluation Criteria (TCSEC) is:
ITSEC addresses integrity and availability as well as confidentiality.

Describe the standards addressed by Title II, Administrative Simplification, of the Health Insurance Portability and Accountability Act:

Transaction Standards, to include Code Sets; Unique Health Identifiers; Security and Electronic Signatures; and Privacy.

The principles of Notice, Choice, Access, Security, and Enforcement refer to:
Privacy

Simple security property:
"A user has access to a client company's information, c, if and only if for all other information, o, that the user can read, either x(c) ∉ y(o) or x(c) = x(o), where x(c) is the client company of c and y(o) is the set of competitors of the company associated with o."

Chinese wall (Brewer-Nash) model
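
In code form, the rule amounts to checking every object the user has already read (a sketch with invented company names and conflict classes):

# Brewer-Nash ("Chinese wall") read check: a new object may be read only if it
# belongs to the same company as, or to no competitor of, anything already read.
CONFLICT_CLASSES = {
    "BankA": "banks", "BankB": "banks",
    "OilCo": "oil",
}

def may_read(already_read_companies, new_company):
    for company in already_read_companies:
        same_company = (company == new_company)
        same_conflict_class = (CONFLICT_CLASSES[company] == CONFLICT_CLASSES[new_company])
        if same_conflict_class and not same_company:
            return False   # new_company competes with something already read
    return True

print(may_read({"BankA"}, "OilCo"))   # True  - different conflict-of-interest class
print(may_read({"BankA"}, "BankB"))   # False - competitor of BankA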

Two categories of the policy of separation of duty are:
Dual control and functional separation

Dual control requires that two or more subjects act together simultaneously to authorize an operation. A common example is the requirement that two individuals turn their keys simultaneously in two physically separated areas to arm a weapon. Functional separation implies a sequential approval process, such as requiring the approval of a manager to send a check generated by a subordinate.
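
A sketch of the dual-control check (the approver names and the threshold of two are illustrative assumptions):

# Dual control: an operation proceeds only when at least two distinct
# authorized subjects have approved it.
AUTHORIZED = {"officer_1", "officer_2", "officer_3"}

def dual_control_satisfied(approvals):
    distinct_authorized = set(approvals) & AUTHORIZED
    return len(distinct_authorized) >= 2

print(dual_control_satisfied(["officer_1"]))               # False
print(dual_control_satisfied(["officer_1", "officer_2"]))  # True
print(dual_control_satisfied(["officer_1", "officer_1"]))  # False - same person twice

Functional separation, by contrast, would be modeled as a sequence of distinct steps performed by different subjects rather than a simultaneous check.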

In NIACAP, a type accreditation:
Evaluates an application or system that is distributed to a number of different locations.

The minimum national standards for certifying and accrediting national security systems:

NIACAP


Serial data transmission in which information can be transmitted in two directions, but only one direction at a time, is called:
Half-duplex

The time required to switch transmission directions in a half-duplex line is called the turnaround time.

Simplex refers to communication that takes place in one direction only. Full-duplex can transmit and receive information in both directions simultaneously.

Transmissions can be asynchronous or synchronous. In asynchronous transmission, a start bit is used to indicate the beginning of transmission. The start bit is followed by data bits and then by one or two stop bits to indicate the end of the transmission. Since start and stop bits are sent with every unit of data, the actual data transmission rate is lower, because these "overhead" bits are used for synchronization and do not carry information. In this mode, data is sent only when it is available; it is not transmitted continuously. In synchronous transmission, the transmitter and receiver have synchronized clocks and the data is sent in a continuous stream. The clocks are synchronized by using transitions in the data, so start and stop bits are not required for each unit of data sent.
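
The framing overhead of asynchronous transmission is easy to quantify (a sketch assuming one start bit, eight data bits, one stop bit, and an illustrative 9600 bps line rate):

# Effective data rate of an asynchronous link: only the data bits carry
# information; the start and stop bits are framing overhead.
line_rate_bps = 9600     # raw line speed, assumed for illustration
data_bits = 8
start_bits = 1
stop_bits = 1

bits_per_character = start_bits + data_bits + stop_bits
efficiency = data_bits / bits_per_character
effective_rate_bps = line_rate_bps * efficiency

print(f"{efficiency:.0%} efficient, {effective_rate_bps:.0f} bps of user data")
# 80% efficient, 7680 bps of user data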

The ANSI ASC X12 Standard version 4010 applies to which one of the following HIPAA categories?
Transactions

The transactions addressed by HIPAA are:
Health claims or similar encounter information
Health care payment and remittance advice
Coordination of Benefits
Health claim status
Enrollment and disenrollment in a health plan
Eligibility for a health plan
Health plan premium payments
Referral certification and authorization
The HIPAA EDI transaction standards that address these HIPAA transactions include the following:
Health care claims or coordination of benefits:
    Retail drug: NCPDP (National Council for Prescription Drug Programs) v. 3.2
    Dental claim: ASC X12N 837 (dental)
    Professional claim: ASC X12N 837 (professional)
    Institutional claim: ASC X12N 837 (institutional)
Payment and remittance advice: ASC X12N 835
Health claim status: ASC X12N 276/277
Plan enrollment: ASC X12 834
Plan eligibility: ASC X12 270/271
Plan premium payments: ASC X12 820
Referral certification: ASC X12N 278

The American National Standards Institute (ANSI) was founded in 1918 and coordinates the U.S. voluntary standards system. The ANSI Accredited Standards Committee (ASC) X12 was chartered in 1979 and is responsible for cross-industry standards for electronic documents.

The HIPAA privacy standards were finalized in April 2001, and implementation had to be accomplished by April 14, 2003. The privacy rule covers individually identifiable health care information transmitted, stored in electronic or paper form, or communicated orally. Protected health information (PHI) may not be disclosed unless disclosure is approved by the individual, permitted by the legislation, required for treatment, part of health care operations, required by law, or necessary for payment. PHI is defined as individually identifiable health information that is transmitted by electronic media, maintained in any medium described in the definition of electronic media under HIPAA, or transmitted or maintained in any other form or medium.


A 1999 law that addresses privacy issues related to health care, insurance and finance and that will be implemented by the states is:
Gramm-Leach-Bliley (GLB)

To implement privacy practices on Web sites:

The latest W3C working draft of P3P is P3P 1.0, 28 January 2002 (www.w3.org/TR). An excerpt of the W3C P3P Specification states: "P3P enables Web sites to express their privacy practices in a standard format that can be retrieved automatically and interpreted easily by user agents. P3P user agents will allow users to be informed of site practices (in both machine- and human-readable formats) and to automate decision-making based on these practices when appropriate. Thus users need not read the privacy policies at every site they visit."

With P3P, an organization can post its privacy policy in machine-readable form (XML) on its Web site. This policy statement includes:
Who has access to collected information
The type of information collected
How the information is used
The legal entity making the privacy statement

P3P also supports user agents that allow a user to configure a P3P-enabled Web browser with the user's privacy preferences. Then, when the user attempts to access a Web site, the user agent compares the user's stated preferences with the privacy policy in machine-readable form at the Web site. Access will be granted if the preferences match the policy. Otherwise, either access to the Web site will be blocked or a pop-up window will appear notifying the user that he or she must change their privacy preferences. Usually, this means that the user has to lower his or her privacy threshold.
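
The comparison a P3P user agent performs can be sketched roughly as a set comparison (the purpose values below are illustrative simplifications; real P3P policies are expressed in the P3P XML vocabulary and matched by the browser's preference engine):

# Rough sketch of a P3P-style user agent decision: grant access only if the
# site's declared data practices fall within what the user has accepted.
user_acceptable_purposes = {"current", "admin"}                  # what the user tolerates
site_declared_purposes = {"current", "admin", "telemarketing"}   # from the site's policy

def access_decision(user_ok, site_declares):
    if site_declares <= user_ok:   # every declared practice is acceptable to the user
        return "allow"
    return "block or prompt the user"

print(access_decision(user_acceptable_purposes, site_declared_purposes))
# block or prompt the user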

What process is used to accomplish high-speed data transfer between a peripheral device and computer memory, bypassing the Central Processing Unit (CPU)?
Direct memory access

The DMA controller essentially takes control of the memory buses and manages the data transfer directly.

An associative memory operates in which way:
Searches for a specific data value in memory

Direct or absolute addressing mode, by contrast, returns the value stored at the memory location whose address is specified in the instruction.

The following concerns usually apply to what type of architecture?
1. Desktop systems can contain sensitive information that may be at risk of being exposed.
2. Users may generally lack security awareness.
3. Modems present a vulnerability to dial-in attacks.
4. Lack of proper backup may exist.

Distributed

Additional concerns associated with distributed systems include:
1. A desktop PC or workstation can provide an avenue of access into critical information systems of an organization.
2. Downloading data from the Internet increases the risk of infecting corporate systems with a malicious code or an unintentional modification of the databases.
3. A desktop system and its associated disks may not be protected from physical intrusion or theft.

In a centralized system, the characteristics cited do not apply, because a central host is used rather than PCs or workstations with large amounts of memory attached. Also, the vulnerability presented by a modem attached to a PC or workstation would not exist.

An open system or architecture is composed of vendor-independent subsystems that have published specifications and interfaces in order to permit operation with the products of other suppliers. One advantage of an open system is that it is subject to review and evaluation by independent parties.


The definition A relatively small amount (when compared to primary memory) of very high speed RAM, which holds the instructions and data from primary memory, that has a high probability of being accessed during the currently executing portion of a programŽ refers to: Cache

The organization that "establishes a collaborative partnership of computer incident response, security, and law enforcement professionals who work together to handle computer security incidents and to provide both proactive and reactive security services for the U.S. Federal government" is called the:

Federal Computer Incident Response Center

According to the FedCIRC charter, "FedCIRC provides assistance and guidance in incident response and provides a centralized approach to incident handling across agency boundaries." Specifically, the mission of FedCIRC is to:

Provide civil agencies with technical information, tools, methods, assistance, and guidance
Be proactive and provide liaison activities and analytical support
Encourage the development of quality products and services through collaborative relationships with Federal civil agencies, the Department of Defense, academia, and private industry
Promote the highest security profile for government information technology (IT) resources
Promote incident response and handling procedural awareness within the Federal government

The CERT Coordination Center (CERT/CC) is a unit of the Carnegie Mellon University Software Engineering Institute (SEI). SEI is a Federally funded R&D center. CERT/CC's mission is to alert the Internet community to vulnerabilities and attacks and to conduct research and training in the areas of computer security, including incident response.

Benefits of employing an incident-handling capability:
a. It enhances internal communications and the readiness of the organization to respond to incidents.
b. It assists an organization in preventing damage from future incidents.
c. Security training personnel will have a better understanding of users' knowledge of security issues.


The primary benefits of employing an incident-handling capability are containing and repairing damage from incidents and preventing future damage. Additional benefits related to establishing an incident-handling capability are:

Enhancement of the risk assessment process. An incident-handling capability will allow organizations to collect threat data that may be useful in their risk assessment and safeguard selection processes (e.g., in designing new systems). Statistics on the numbers and types of incidents in the organization can be used in the risk-assessment process as an indication of vulnerabilities and threats.

Enhancement of internal communications and of the readiness of the organization to respond to any type of incident, not just computer security incidents. Internal communications will be improved, management will be better organized to receive communications, and contacts within public affairs, legal staff, law enforcement, and other groups will have been pre-established.

Security training personnel will have a better understanding of users' knowledge of security issues. Trainers can use actual incidents to vividly illustrate the importance of computer security. Training that is based on current threats and controls recommended by incident-handling staff provides users with information more specifically directed to their current needs, thereby reducing the risks to the organization from incidents.

Which statement is accurate about Evaluation Assurance Levels (EALs) in the Common Criteria (CC)?
EALs are predefined packages of assurance components that make up the security confidence rating scale.

Operational assurance:
Operational assurance is the process of reviewing an operational system to see that security controls, both automated and manual, are functioning correctly and effectively. It addresses whether the system's technical features are being bypassed or have vulnerabilities and whether required procedures are being followed.
To maintain operational assurance, organizations use two basic methods: system audits and monitoring. A system audit is a one-time or periodic event to evaluate security. Monitoring refers to an ongoing activity that examines either the system or the users.

Covert Storage Channel:
An information transfer that involves the direct or indirect writing of a storage location by one process and the direct or indirect reading of the storage location by another process.

A covert storage channel typically involves a finite resource (e.g., sectors on a disk) that is shared by two subjects at different security levels. One way to think of the difference between covert timing channels and covert storage channels is that covert timing channels are essentially memoryless, whereas covert storage channels are not. With a timing channel, the information transmitted from the sender must be sensed by the receiver immediately, or it will be lost. However, an error code indicating a full disk, which is exploited to create a storage channel, may stay constant for an indefinite amount of time, so a receiving process is not as constrained by time.
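
At a toy level, the mechanism can be sketched as two processes observing one bit of shared state, with a flag standing in for the disk-full error code (purely illustrative; no real resource or security boundary is involved):

# Toy covert storage channel: the sender leaks one bit per round by setting or
# clearing a shared resource flag; the receiver reads the same flag. In a real
# channel the "flag" would be something like a disk-full error code.
shared_flag = {"resource_exhausted": False}

def sender_send_bit(bit):
    shared_flag["resource_exhausted"] = bool(bit)   # write the storage location

def receiver_read_bit():
    return 1 if shared_flag["resource_exhausted"] else 0   # read the same location

message_bits = [1, 0, 1, 1]
received = []
for bit in message_bits:
    sender_send_bit(bit)
    received.append(receiver_read_bit())

print(received)   # [1, 0, 1, 1] - data crossed a boundary the policy never intended

Unlike a timing channel, the flag keeps its value until the sender changes it, which is the "not memoryless" property described above.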

Description of a Protection Profile (PP), as defined by the Common Criteria (CC)?
A reusable definition of product security requirements

The Common Criteria (CC) is used in two ways:
a. As a standardized way to describe security requirements for IT products and systems
b. As a sound technical basis for evaluating the security features of these products and systems
