Friday, August 8, 2014

Operation Security

The threat of unauthorized disclosure of sensitive information by a disgruntled employee is the one most likely to go unnoticed in the absence of auditing.
Insiders (employees, contractors, etc.) can have access to information that they should not be allowed to see, and in the absence of auditing (logging) their actions can go unnoticed.

What provides controlled and un-intercepted interfaces into privileged user functions
Trusted paths
Trusted paths provide trustworthy interfaces into privileged user functions and are intended to provide a way to ensure that any communications over that path cannot be intercepted or corrupted.

The doors of a data center open in the event of a fire. This is an example of a. Fail-safe

Fail-safe mechanisms focus on failing with a minimum of harm to personnel, while fail-secure focuses on failing in a controlled manner to block access while the system is in an inconsistent state. For example, data center door systems will fail safe to ensure that personnel can escape the area when the electrical power fails. A fail-secure door would prevent personnel from using the door at all, which could put them in jeopardy. Fail-open and fail-closed are fail-safe mechanisms.

To ensure constant redundancy and fault tolerance, which type of spare is recommended ....Hot spare
Hot sites are generally rented, while redundant sites are not.

If speed is preferred over resilience, which RAID configuration is the most suitable ...RAID 0
In a RAID 0 configuration, files are written in stripes across multiple disks without the use of parity information. This technique allows for fast reading and writing to disk, since all of the disks can typically be accessed in parallel. However, without the parity information, it is not possible to recover from a hard drive failure. This technique does not provide redundancy and should not be used for systems with high availability requirements.
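As a toy illustration of the striping described above (a hypothetical sketch, not a real RAID driver): data is split into fixed-size stripes and written round-robin across disks, so reads and writes can proceed in parallel, but losing any one disk loses the whole data set because there is no parity.

```python
# Toy sketch of RAID 0 striping. STRIPE_SIZE and the disk layout are
# invented for illustration; real arrays use much larger stripes.

STRIPE_SIZE = 4  # bytes per stripe (real arrays use e.g. 64 KiB)

def stripe_write(data: bytes, num_disks: int) -> list[list[bytes]]:
    """Split data into stripes and distribute them round-robin across disks."""
    disks = [[] for _ in range(num_disks)]
    for i in range(0, len(data), STRIPE_SIZE):
        disks[(i // STRIPE_SIZE) % num_disks].append(data[i:i + STRIPE_SIZE])
    return disks

def stripe_read(disks: list[list[bytes]]) -> bytes:
    """Reassemble the original data by reading stripes in round-robin order."""
    out = []
    for round_ in range(max(len(d) for d in disks)):
        for d in disks:
            if round_ < len(d):
                out.append(d[round_])
    return b"".join(out)

disks = stripe_write(b"ABCDEFGHIJKLMNOP", 2)
assert stripe_read(disks) == b"ABCDEFGHIJKLMNOP"
# If either disks[0] or disks[1] is lost, the data is unrecoverable:
# there is no parity stripe to rebuild from.
```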

Database shadowing: Database shadowing is a technique in which updates are shadowed in multiple locations; it is like copying the entire database to a remote location. Backups should be conducted on a regular basis and are useful in recovering information or a system in the event of a disaster.

Archiving is the storage of data that is not in continual use for historical purposes.

Data mirroring is a RAID technique that duplicates all disk writes from one disk to another to create two identical drives.

When the backup window is not long enough to back up all of the data and restoration from backup must be as fast as possible, which type of high-availability backup strategy is recommended...Differential

A full backup would not be possible since the backup window is not long enough for all the data to be backed up. Additionally, it is unlikely that the backup window can be increased to allow for a full backup, which is both time consuming and costly from a storage perspective. In an incremental backup, only the files that changed since the last backup are backed up. In a differential backup, only the files that changed since the last full backup are backed up. In general, differentials require more space than incremental backups, while incremental backups are faster to perform. On the other hand, restoring data from incremental backups takes more time than from differential backups. To restore from incremental backups, the last full backup and all of the incremental backups performed since are combined. In contrast, restoring from a differential backup requires only the last full backup and the latest differential.
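The two restore chains can be sketched as follows (a hypothetical model; the backup names and kinds are invented for illustration):

```python
# Restore chains for the two strategies:
#   incremental:  last full backup + EVERY incremental taken since it
#   differential: last full backup + ONLY the latest differential

def restore_chain(strategy: str, backups: list[tuple[str, str]]) -> list[str]:
    """backups is an ordered list of (kind, name), kind in {'full','incr','diff'}."""
    last_full = max(i for i, (k, _) in enumerate(backups) if k == "full")
    chain = [backups[last_full][1]]
    later = backups[last_full + 1:]
    if strategy == "incremental":
        chain += [n for k, n in later if k == "incr"]   # all of them
    else:  # differential
        diffs = [n for k, n in later if k == "diff"]
        if diffs:
            chain.append(diffs[-1])                     # only the latest
    return chain

week_diff = [("full", "sun"), ("diff", "mon"), ("diff", "tue"), ("diff", "wed")]
assert restore_chain("differential", week_diff) == ["sun", "wed"]

week_incr = [("full", "sun"), ("incr", "mon"), ("incr", "tue")]
assert restore_chain("incremental", week_incr) == ["sun", "mon", "tue"]
```

The assertions show the trade-off directly: the differential restore touches two backup sets no matter how long the week runs, while the incremental restore grows by one set per day.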

When you approach a restricted facility, you are asked for identification and verified against a pre-approved list by the guard at the front gate before being let in. This is an example of checking for the principle of Least privilege.

The major benefit of information classification is to identify the appropriate level of protection needed
Information classification refers to the practice of differentiating between different types of information assets and providing some guidance as to how classified information will need to be protected. Vulnerability scans can be used to map out the computing ecosystem. Threat modeling is used to identify threats and vulnerabilities. Configuration management can be used to determine the software baseline.

When information, once classified highly sensitive, is no longer critical or highly valued, that information must be Declassified.

Information classification also includes the processes and procedures to declassify information. For example, declassification may be used to downgrade the sensitivity of information. Over the course of time, information once considered sensitive may decline in value or criticality. In these instances, declassification efforts should be implemented to ensure that excessive protection controls are not used for nonsensitive information. When declassifying information, marking, handling, and storage requirements will likely be reduced. Organizations should have declassification practices well documented for use by individuals assigned with the task. Information may still be needed and so it cannot be destroyed, degaussed, or deleted.

The main benefit of placing users into groups and roles is  .....Ease of user administration

The likelihood of an individual’s compliance with the organization’s policy can be determined by their ...Clearance level.

Clearances are a useful tool for determining the trustworthiness of an individual and the likelihood of their compliance with organization policy. Job rank, title, or role may be tied to a clearance level, but this may not always be the case.

Reports must be specific on both the message and Intended audience 
Reporting is also fundamental to successful security operations. It can take a variety of forms depending on the intended audience. Technical reporting tends to be designed for technical specialists or managers with direct responsibility for service delivery. Management reporting will provide summaries of multiple systems as well as key metrics for each of the services covered by the report. Executive dashboards are intended for the executive who is interested in seeing only the highlights across multiple services, and provide simple summaries of current state, usually in a highly visual form such as charts and graphs.

What can help with ensuring that only the needed logs are collected for monitoring....Clipping level
Clipping levels are used to ensure that only needed logs are collected. This is mainly done because, even on a single system, logs can grow to be very large. An example of a clipping level is that only failed access attempts are logged.
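A minimal sketch of a clipping level, assuming a hypothetical login-monitoring scenario where only repeated failures are worth logging (the threshold of 3 is invented for illustration):

```python
# Only emit a log entry once failed attempts cross the clipping level,
# so routine one-off noise never reaches the log.
from collections import Counter

CLIPPING_LEVEL = 3  # hypothetical threshold
failed = Counter()
log = []

def record_login(user: str, success: bool) -> None:
    if success:
        failed[user] = 0          # a success resets the counter
        return
    failed[user] += 1
    if failed[user] >= CLIPPING_LEVEL:
        log.append(f"ALERT: {failed[user]} failed logins for {user}")

for _ in range(4):
    record_login("alice", success=False)
record_login("bob", success=False)

assert len(log) == 2                       # alice trips the level on attempts 3 and 4
assert not any("bob" in e for e in log)    # a single failure is below the level
```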

The main difference between a security event management (SEM) system and a log management system is that SEM systems are useful for log collection, collation, and analysis....in real time.

Security event management (SEM) solutions are intended to provide a common platform for log collection, collation, and analysis in real-time to allow for more effective and efficient response.

Log management systems are similar in that they also collect logs and provide the ability to report against them, although their focus tends to be on the historical analysis of log information rather than real-time analysis.

They may be combined with SEM solutions to provide both historical and real-time functions.

When normal traffic is flagged as an attack, it is an example of a False-positive
False-positives occur when the IDS or IPS identifies something as an attack, but it is in fact normal traffic. False-negatives occur when the system fails to interpret something as an attack when it should have. In these cases, intrusion systems must be carefully “tuned” to ensure that these are kept to a minimum.

The best way to ensure that there is no data remanence of sensitive information that was once stored on a burn-once DVD media is by Destruction

Optical media such as CDs and DVDs must be physically destroyed to make sure that there is no residual data that can be disclosed. Since the media mentioned in this context is read-only (burn-once) DVD media, the information on it cannot be overwritten or deleted.

Degaussing can reduce or remove data remanence in magnetic non-optical media.

What is Concerned with not only identifying the root cause but also addressing the underlying issue ...Problem management

While incident management is concerned primarily with managing an adverse event, problem management is concerned with tracking that event back to a root cause and addressing the underlying problem. Maintaining system integrity is accomplished through the process of change control management. Configuration management is a process of identifying and documenting hardware components, software, and the associated settings.

Before applying a software update to production systems, it is extremely important that...The production systems are backed up.
Prior to deploying updates to production servers, make certain that a full system backup is conducted. In the regrettable event of a system crash due to the update, the server and data can then be recovered without a significant loss of data. Additionally, if the update involved proprietary code, it will be necessary to provide a copy of the server or application image to the media librarian. The presence or absence of full disclosure information is good to have but not a requirement, as the patching process will have to be a risk-based decision as it applies to the organization. Documentation of the patching process is the last step in patch management processes. Independent third-party assessments are not usually related to attesting patch validity.


Elements of a physical protection system ....deterrence, detection, delay, and response

To successfully complete a vulnerability assessment, it is critical that protection systems are well understood. This objective includes .....Threat definition, target identification, and facility characterization

At the beginning, a good assessment requires the security professional to determine specific protection objectives. These objectives include threat definition, target identification, and facility characterization.

Laminated glass is made from two sheets of ordinary glass bonded to a middle layer of resilient plastic. When it is struck it may crack, but the pieces of glass tend to stick to the plastic inner material. This glass is recommended in what type of locations......Street-level windows, doorways, and other access areas

The strategy of forming layers of protection around an asset or facility is known as....Defense-in-depth
In the concept of defense-in-depth, barriers are arranged in layers, with the level of security growing progressively higher as one comes closer to the center or the highest protective area. Defending an asset with a multiple-layer posture can reduce the likelihood of a successful attack; if one layer of defense fails, another layer of defense will hopefully prevent the attack, and so on.

What crime reduction technique is used by architects, city planners, landscapers, interior designers, and security professionals with the objective of creating a physical environment that positively influences human behavior?........Crime prevention through environmental design

Crime prevention through environmental design (CPTED) is a crime reduction technique that has several key elements applicable to the analysis of the building function and site design against physical attack. It is used by architects, city planners, landscapers, interior designers, and security professionals with the objective of creating a climate of safety in a community by designing a physical environment that positively influences human behavior.

The key to a successful physical protection system is the integration of ........people, procedures, and equipment

The key to a successful system is the integration of people, procedures, and equipment into a system that protects the targets from the threat. A well-designed system provides protection-in-depth, minimizes the consequences of component failures, and exhibits balanced protection.

What is the primary objective of controlling entry into a facility or area
Ensure that only authorized persons are allowed to enter

The primary function of an access control system (ACS) is to ensure that only authorized personnel are permitted inside the controlled area. This can also include the regulation and flow of materials into and out of specific areas. Persons subject to control can include employees, visitors, customers, vendors, and the public. Access control measures should be different for each application to fulfill specific security, cost, and operational objectives.

Security lighting for CCTV monitoring generally requires at least 1 to 2 footcandles
(fc) of illumination. What is the required lighting needed for safety
considerations in perimeter areas such as parking lots or garages?  5 fc
Lights used for CCTV monitoring generally require at least one to two footcandles of illumination, whereas the lighting needed for safety considerations in exterior areas such as parking lots or garages is substantially greater (at least 5 fc).

What would be the most appropriate interior sensor used for a building that has windows along the ground floor .........Acoustic and shock wave glass-break sensors

Glass-break sensors are a good intrusion detection device for buildings with a lot of glass windows and doors with glass panes. The use of dual-technology glass-break sensors (acoustic and shock wave) is most effective. The reason is that if only acoustic is used and an employee pulls the window blinds up, it can set off a false alarm; but if it is set as a dual-alarm system, both acoustic and shock sensors will need to be activated before an alarm is triggered.


CCTV technologies make possible four distinct yet complementary functions. The first is visual assessment of an alarm or other event, which permits the operator to assess the nature of the alarm before initiating a response. What are the other three functions of CCTV.........Surveillance, deterrence, and evidentiary archives
Uses of CCTV systems for security services include several different functions: surveillance, assessment, deterrence, and evidentiary archives.

Businesses face new and complex physical security challenges across the full spectrum of operations. Although security technologies are not the answer to all organizational security problems, if applied appropriately, what will they provide?

They can enhance the security envelope and, in the majority of cases, will save the organization money.

These days, all businesses face new and complex physical security challenges across the full spectrum of operations. Although security technologies are not the answer to all organizational security problems, if applied appropriately, they can enhance the security envelope and in the majority of cases will save the organization money.


Information Security:

Legal
Five rules of evidence:
Evidence must be authentic, accurate, complete, convincing, and admissible.

Phases of an incident response:
a. Documentation
b. Containment
c. Investigation

The abstract concepts of law influenced by the writings of legal scholars and academics ...Civil/tort law

Intellectual property covers the expression of ideas rather than the ideas themselves ......Copyright

Intellectual property protects the goodwill a merchant or vendor invests in its products....Trademark

Like incident response, computer forensics follows a model informed by various computer forensics guidelines:
International Organization on Computer Evidence (IOCE), Scientific Working Group on Digital Evidence (SWGDE), and Association of Chief Police Officers (ACPO). These guidelines formalize the computer forensic process by breaking it into numerous phases or steps.

Categories of software licensing:
a. Freeware
b. Commercial
c. Academic
d. shareware

Within these categories, there are specific types of agreements. Master agreements and end-user licensing agreements (EULAs) are the most prevalent.

The triage sub-phase of incident response encompasses:
Detection, identification, and notification

The integrity of a forensic bit-stream image is often determined by comparing hash totals to the original source

Ensuring the authenticity and integrity of evidence is critical. If the courts feel the evidence or its copies are not accurate or lack integrity, it is doubtful that the evidence or any information derived from it will be admissible. The current protocol for demonstrating authenticity and integrity relies on hash functions that create unique numerical signatures that are sensitive to any bit changes. Currently, if these signatures match the original or have not changed since the original collection, the courts will accept that integrity has been established.
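The hash-comparison protocol can be sketched with Python's standard hashlib (the image bytes here are placeholders, not real forensic data):

```python
# Verify a forensic bit-stream image: hash the original and the copy with
# the same algorithm and compare digests. Any single-bit change produces a
# completely different digest.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"\x00\x01\x02 stand-in for raw disk image bytes"
copy_good = bytes(original)
copy_bad = original[:-1] + b"\xff"   # one byte altered

assert sha256_of(copy_good) == sha256_of(original)   # integrity holds
assert sha256_of(copy_bad) != sha256_of(original)    # tampering detected
```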

When dealing with digital evidence: 
The crime scene: Must have the least amount of contamination that is possible

Given the importance of the evidence available at a crime scene, the ability to deal with the scene in a manner that minimizes the amount of disruption, contamination, or destruction of evidence is critical. Once a scene has been contaminated, there is no undo or redo button to push; the damage is done.

Categories of inappropriate activities:
Loss incurred unintentionally though the lack of operator training.....accidental loss
Theft of information or trade secrets for profit or unauthorized disclosure......intentionally illegal computer activity
Data scavenging through the resources available to normal system users......keyboard attack
Computer behavior that might be grounds for a job action or dismissal....Inappropriate computer activities

Maintenance accounts are a threat to operations controls:
Maintenance accounts are commonly used by hackers to access network devices.

To guarantee that transaction records are retained in accordance with compliance requirements.....record retention.

A weakness in a system that could be exploited ....vulnerability
A company resource that could be lost due to an incident...asset
The minimization of loss associated with an incident ...risk management
A potential incident that could cause harm......threat

HIGHEST level of operator privilege .......Access Change

Object-oriented system:

is a group of independent objects that can be requested to perform certain operations or exhibit specific behaviors. These objects cooperate to provide the system's required functionality. The objects have an identity and can be created as the program executes (dynamic lifetime).

To provide the desired characteristics of object-oriented systems, the objects are encapsulated, i.e., they can only be accessed through messages sent to them to request performance of their defined operations. The object can be viewed as a "black box" whose internal details are hidden from outside observation and cannot normally be modified. Objects also exhibit the substitution property, which means that objects providing compatible operations can be substituted for each other.

In summary, an object-oriented system contains objects that exhibit the following properties:
Identity -- each object has a name that is used to designate that object.
Encapsulation -- an object can only be accessed through messages to perform its defined operations.
Substitution -- objects that perform compatible operations can be substituted for each other.
Dynamic lifetimes -- objects can be created as the program executes.
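The four properties above can be sketched in a short Python example (the class and method names are invented for illustration):

```python
# A minimal sketch of the four object properties.

class Account:                      # identity: each instance is a distinct, named object
    def __init__(self, owner: str):
        self.owner = owner
        self._balance = 0           # encapsulated internal state, not touched directly

    def deposit(self, amount: int) -> None:   # accessed only via its operations ("messages")
        self._balance += amount

    def balance(self) -> int:
        return self._balance

class SavingsAccount(Account):      # substitution: offers compatible operations,
    pass                            # so it can stand in wherever Account is expected

def pay_in(acct: Account) -> int:
    acct.deposit(10)
    return acct.balance()

a = Account("alice")                # dynamic lifetime: objects created as the program runs
assert pay_in(a) == 10
assert pay_in(SavingsAccount("bob")) == 10   # substitution in action
```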

Functional programming uses only mathematical functions to perform computations and solve problems. This approach is based on the assumption that any algorithm can be described as a mathematical function. Functional languages have these characteristics:
1. They support functions and allow them to be manipulated by being passed as arguments and stored in data structures.
2. Functional abstraction is the only method of procedural abstraction.
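Both characteristics can be sketched in Python, which supports a functional style (the function names here are illustrative):

```python
# Functions passed as arguments, stored in data structures, and combined
# purely by functional composition -- no mutable state involved.
from functools import reduce

def compose(f, g):
    return lambda x: f(g(x))        # a new function built from other functions

square = lambda n: n * n
double = lambda n: 2 * n

pipeline = [double, square]          # functions stored in a data structure
apply_all = reduce(compose, pipeline)

assert apply_all(3) == 18            # square(3) -> 9, then double -> 18
```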

In software engineering, the term verification is defined as:
To establish the truth of correspondence between a software product and its specification.

The discipline of identifying the components of a continually evolving system for the purposes of controlling changes to those components and maintaining integrity and traceability throughout the life cycle is called:
Configuration management

Release control involves deciding which requests are to be implemented in the new release, performing the changes, and conducting testing.

Change control involves the analysis and understanding of the existing code, the design of changes, and corresponding test procedures.

The basic version of the Construction Cost Model (COCOMO), which proposes quantitative life-cycle relationships, performs what function:

The Basic COCOMO Model:
The number of man-months (MM) required to develop the most common type of software product, in terms of the number of thousands of delivered source instructions (KDSI) in the software product: MM = 2.4(KDSI)^1.05

The development schedule (TDEV) in months: TDEV = 2.5(MM)^0.38

In addition, Boehm has developed an intermediate COCOMO Model that also takes into account hardware constraints, personnel quality, use of modern tools, and other attributes and their aggregate impact on overall project costs. A detailed COCOMO Model, by Boehm, accounts for the effects of the additional factors used in the intermediate model on the costs of individual project phases.
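The basic COCOMO relationships above translate directly into code (the 32-KDSI product is a hypothetical example):

```python
# Basic COCOMO: effort and schedule as functions of product size.
# KDSI = thousands of delivered source instructions, MM = man-months,
# TDEV = development schedule in months.

def basic_cocomo(kdsi: float) -> tuple[float, float]:
    mm = 2.4 * kdsi ** 1.05     # effort:   MM = 2.4(KDSI)^1.05
    tdev = 2.5 * mm ** 0.38     # schedule: TDEV = 2.5(MM)^0.38
    return mm, tdev

mm, tdev = basic_cocomo(32)     # a hypothetical 32-KDSI product
assert 90 < mm < 95             # roughly 91 man-months of effort
assert 13 < tdev < 15           # roughly a 14-month schedule
```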

The basic version of the Construction Cost Model (COCOMO), which proposes quantitative, life-cycle relationships, performs what function:
Estimates software development effort and cost as a function of the size of the software product in source instructions.


The software development effort is determined using the following five user functions:
External input types
External output types
Logical internal file types
External interface file types
External inquiry types

Rayleigh curve:
applied to software development cost and effort estimation. In this method, estimates based on the number of lines of Source code are modified by the following two factors:
The manpower buildup index (MBI), which estimates the rate of buildup of staff on the project
A productivity factor (PF), which is based on the technology used.

Incremental development:
A refinement to the basic Waterfall Model that states that software should be developed in increments of functional capability.

The advantages of incremental development include the ease of testing increments of functional capability and the opportunity to incorporate user experience into a successively refined product.

The Spiral Model of the software development process uses the following metric relative to the spiral: the radial dimension represents cumulative cost.

The radial dimension represents cumulative cost and the angular dimension represents progress made in completing each cycle of the spiral. The spiral model is actually a meta-model for software development processes.
A summary of the stages in the spiral is as follows:
1. The spiral begins in the top, left-hand quadrant by determining the objectives of the portion of the product being developed, the alternative means of implementing this portion of the product, and the constraints imposed on the application of the alternatives.
2. Next, the risks of the alternatives are evaluated based on the objectives and constraints. Following this step, the relative balances of the perceived risks are determined.
3. The spiral then proceeds to the lower right-hand quadrant where the development phases of the projects begin. A major review completes each cycle and then the process begins anew for succeeding phases of the project. Typical succeeding phases are software product design, integration and test plan development, additional risk analyses, operational prototype, detailed design, code, unit test, acceptance test, and implementation.

In the Capability Maturity Model (CMM) for software, the definition "describes the range of expected results that can be achieved by following a software process" is that of:
Software process capability

A software process is a set of activities, methods, and practices that are used to develop and maintain software and associated products.

Software process capability is a means of predicting the outcome of the next software project conducted by an organization.
Software process performance is the result achieved by following a software process.
Thus, software capability is aimed at expected results, while software performance is focused on results that have been achieved.

Software process maturity is the extent to which a software process is:
Defined
Managed
Measured
Controlled
Effective

Initial -- the software process is ad hoc and most processes are undefined.
Repeatable -- fundamental project management processes are in place.
Defined -- the software process for both management and engineering functions is documented, standardized, and integrated into the organization.
Managed -- the software process and product quality are measured, understood, and controlled.
Optimizing -- continuous process improvement is being performed.

Software process assessment vs. software capability evaluation

Software process assessments determine the state of an organization's current software process and are used to gain support from within the organization for a software process improvement program.

Software capability evaluations are used to identify contractors who are qualified to develop software or to monitor the state of the software process in a current software project.

Common term in object-oriented systems:
a. Behavior
b. Message
c. Method

Behavior is a characteristic of an object. An object is defined as a collection of operations that, when selected, reveal or manipulate the state of the object. Thus, consecutive invocations of an object may result in different behaviors, based on the last operations selected.

Message is a request sent to an object to carry out a particular operation.

Method is the code that describes what the object will do when sent a message.

In object-oriented programming, when all the methods of one class are passed on to a subclass, this is called: Inheritance

All the methods of one class, called a superclass, are inherited by a subclass. Thus, all messages understood by the superclass are understood by the subclass.

In other words, the subclass inherits the behavior of the superclass.

Delegation: if an object does not have a method to satisfy a request it has received, it can delegate the request to another object.
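Inheritance and delegation can be contrasted in a short sketch (the class names are hypothetical):

```python
# Inheritance: a subclass understands every message its superclass does.
# Delegation: an object without a suitable method forwards the request.

class Shape:                          # superclass
    def describe(self) -> str:        # method inherited by all subclasses
        return "a shape"

class Circle(Shape):                  # subclass: inherits describe() unchanged
    pass

class Drawing:                        # not a Shape; has no describe() logic of its own...
    def __init__(self, shape: Shape):
        self._shape = shape
    def describe(self) -> str:
        return self._shape.describe() # ...so it delegates the request to its shape

assert Circle().describe() == "a shape"            # inheritance
assert Drawing(Circle()).describe() == "a shape"   # delegation
```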

object-oriented language:
a. Smalltalk
b. Simula 67
c. C++

Lisp is a functional language that processes symbolic expressions rather than numbers


Thursday, August 7, 2014

Cryptography:

Message integrity:
Create a checksum, append it to the message, encrypt the message, then send it to the recipient. A simple error-detecting code, checksum, or frame check sequence is often used along with symmetric key cryptography for message integrity.
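The checksum-then-encrypt pattern can be sketched as follows. This is illustrative only: the XOR with a repeating key stands in for a symmetric cipher, and a real design would use an authenticated encryption mode rather than a bare CRC.

```python
# Sketch of: compute checksum, append it, encrypt, send; the receiver
# decrypts and re-verifies the checksum.
import zlib

KEY = b"hypothetical-shared-key"   # invented for illustration

def xor(data: bytes) -> bytes:
    """Toy stand-in for a symmetric cipher (XOR is its own inverse)."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def send(msg: bytes) -> bytes:
    checksum = zlib.crc32(msg).to_bytes(4, "big")     # compute checksum
    return xor(msg + checksum)                        # append, then "encrypt"

def receive(blob: bytes) -> bytes:
    plain = xor(blob)                                 # "decrypt"
    msg, checksum = plain[:-4], plain[-4:]
    if zlib.crc32(msg).to_bytes(4, "big") != checksum:
        raise ValueError("integrity check failed")
    return msg

assert receive(send(b"wire transfer: $100")) == b"wire transfer: $100"
```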

Certificate authority provides what benefit to a user:----Validation that a public key is associated with a particular user
A certificate authority (CA) “signs” an entity's digital certificate to certify that the certificate content accurately represents the certificate owner.

Digital certificate provides -----Proof of non-repudiation of origin.

RIPEMD-160 hash -- output length ---160 bits (the same output length as SHA-1)

ANSI X9.17 is concerned primarily with ----- Protection and secrecy of keys

Protection and secrecy of keys is the primary concern of ANSI X9.17.
ANSI X9.17 was developed to address the need of financial institutions to transmit securities and funds securely using an electronic medium. Specifically, it describes the means to ensure the secrecy of keys.

Certificate is revoked, what is the proper procedure ------Updating the certificate revocation list
When a key is no longer valid the certificate revocation list should be updated. A certificate revocation list (CRL) is a list of non-valid certificates that should not be accepted by any member of the PKI.
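CRL checking can be sketched as a simple membership test (the serial numbers are hypothetical):

```python
# A verifier consults the revocation list before trusting an
# otherwise-valid certificate.

crl: set[str] = set()                 # revoked certificate serial numbers

def revoke(serial: str) -> None:
    crl.add(serial)                   # the CA updates the CRL

def is_trusted(serial: str, signature_valid: bool) -> bool:
    return signature_valid and serial not in crl

assert is_trusted("cert-001", signature_valid=True)
revoke("cert-001")
assert not is_trusted("cert-001", signature_valid=True)   # revoked: rejected
```

Real PKIs distribute the CRL as a signed, timestamped structure (or use OCSP), but the trust decision reduces to the same membership check.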

Link encryption:
a. Link encryption encrypts routing information.
b. Link encryption is often used for Frame Relay or satellite links.
c. Link encryption provides better traffic flow confidentiality.

Link encryption is not suitable for high-risk environments due to a possible privacy weakness at each node. It is possible that an attacker could view decrypted data, as the encrypt/decrypt function is performed at each node along the data path.

The sequence that controls the operation of the cryptographic algorithm ----- Cryptovariable/ Key

Process used in most block ciphers to increase their strength ------SP-network

The SP-network is the process described by Claude Shannon used in most block ciphers to increase their strength. SP stands for substitution and permutation (transposition), and most block ciphers do a series of repeated substitutions and permutations to add confusion and diffusion to the encryption process.

Cryptography supports these core principles of information security:
a. Availability
b. Confidentiality
c. Integrity
(Not authenticity.)

A way to defeat frequency analysis as a method to determine the key is to use---Polyalphabetic ciphers

The use of several alphabets for substituting the plaintext is called a polyalphabetic cipher. It is designed to make breaking a cipher by frequency-analysis attacks more difficult.

Running key cipher is based on ---- Modular arithmetic

The use of modular mathematics and the representation of each letter by its numerical place in the alphabet are the key to many modern ciphers including running key ciphers.
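Both ideas, modular arithmetic on letters and polyalphabetic substitution, come together in the classic Vigenere cipher, sketched here:

```python
# Vigenere cipher: each letter is a number mod 26, and each key letter
# selects a different shift. Repeated plaintext letters map to different
# ciphertext letters, which flattens the frequency profile that a
# frequency-analysis attack relies on. Uppercase A-Z only, for brevity.

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    out = []
    for i, ch in enumerate(text):
        shift = ord(key[i % len(key)]) - ord("A")
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

ct = vigenere("ATTACKATDAWN", "LEMON")
assert ct == "LXFOPVEFRNHR"          # the repeated A's encrypt to different letters
assert vigenere(ct, "LEMON", decrypt=True) == "ATTACKATDAWN"
```

With a one-letter key this degenerates to a simple Caesar shift; a running key cipher is the same modular arithmetic with a key as long as the message.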

Risk Management: Risk management identifies risks and calculates their impacts on the organization.

Risk management minimizes loss to information assets due to undesirable events through identification, measurement, and control. It encompasses the overall security review, risk analysis, selection, and evaluation of safeguards, cost–benefit analysis, management decision, safeguard identification and implementation, along with ongoing effectiveness review. Risk management provides a mechanism to the organization to ensure that executive management knows current risks, and informed decisions can be made.

Tactical security plans -----Deploy new security technology

Tactical plans provide the broad initiatives to support and achieve the goals specified in the strategic plan. These initiatives may include deployments such as establishing an electronic policy development and distribution process, implementing robust change control for the server environment, reducing vulnerabilities residing on the servers using vulnerability management, implementing a “hot site” disaster recovery program, or implementing an identity management solution. These plans are more specific and may consist of multiple projects to complete the effort. Tactical plans are shorter in length, such as 6 to 18 months, to achieve a specific security goal of the company.


Business Continuity and Disaster Recovery Planning

Define business continuity/disaster recovery plan: The adequate preparations and procedures for the continuation of all business functions.

Business continuity planning (BCP) and Disaster recovery planning (DRP) address the preparation, processes, and practices required to ensure the preservation of the business in the face of major disruptions to normal business operations.

Regardless of industry, which element of legal and regulatory requirements are all industries subject to: Prudent man rule (exercise the same care in managing company affairs as in managing one’s own affairs)

The extent to which an organization should address business continuity or disaster recovery planning ------
Continuity planning is a significant corporate issue and should include all parts or functions of the company

Business continuity planning and Disaster recovery planning involve the identification, selection, implementation, testing, and updating of  prudent processes and specific actions necessary to protect critical business processes from the effects of major system and network disruptions and to ensure the timely restoration of business operations if significant disruptions occur.

Business impact analysis is performed to identify: The exposures to loss to the organization
The business impact analysis is what is going to help the company decide what needs to be recovered and how quickly it needs to be recovered.

During the risk analysis planning phase, actions that could manage threats or mitigate the effects of an event ---- Implementing procedural controls
The third element of risk is mitigating factors. Mitigating factors are the controls or safeguards the planner will put in place to reduce the impact of a threat.

The reason to implement additional controls or safeguards is to ....... reduce the impact of the threat
Preventing a disaster is always better than trying to recover from one. If the planner can recommend controls to be put in place to prevent the most likely of risks from having an impact on the organization’s ability to do business, then the planner will have fewer actual events to recover from.

Business impact analysis:  A business impact analysis establishes the effect of disruptions on the organization.
All business functions and the technology that supports them need to be classified based on their recovery priority. Recovery time frames for business operations are driven by the consequences of not performing the function. The consequences may be the result of business lost during the down period; contractual commitments not met, resulting in fines or lawsuits; or lost goodwill with customers.

Term disaster recovery commonly refers to--- The recovery of the technology environment. Once computers became part of the business landscape, it quickly became clear that we could not return to our manual processes if our computers failed. If those computer systems failed, there were not enough people to do the work, nor did the people in the business still have the skill to do it manually anymore. This was the start of the disaster recovery industry. Still today, the term "disaster recovery" or "DR" commonly means recovery of the technology environment.

The effort to determine the consequences of disruptions that could result from a disaster
BIA
BIA helps the company decide what needs to be recovered and how quickly it needs to be recovered

Elements of risk------ Threats, assets and mitigating controls

Most efficient restore from tape backup-----Full backup
If a company wants the backup and recovery strategy to be as simple as possible, then they should only use full backups. They take more time and hard drive space to perform, but they are the most efficient in recovery.

Advantages of a hot site recovery solution is ---Highly available
Among the advantages of an internal or external hot site: recovery can be tested, it is highly available, and the site can be operational within hours.

Primary desired result of any well-planned business continuity exercise ---- Identifies plan strengths and weaknesses

Business continuity plan should be updated and maintained:
a. Immediately following an exercise.
b. Following a major change in personnel.
c. After installing new software.


DRP:
1. develop continuity planning policy statement
2. conduct the BIA
3. identify preventive controls
4. develop recovery strategies
5. develop the contingency plan
6. test the plan and conduct training
7. maintain the plan


BIA Identifications:
1. Which areas would suffer the greatest operational and financial loss in the event of a disaster.
2. Which systems are critical to the company and must be highly protected.
3. What amount of outage time the company can endure before it is permanently crippled.

Disaster recovery: actions taken just after the disaster.
Business continuity: actions to keep operations running over a long period of time.

Steps in DR and CP:
1. project initiation
2. strategy development
3. BIA
4. plan development
5. implementation
6. testing
7. maintenance

Business case:


Walk-through:
the team reviews the plan collectively.

Continuity planning policy statement:
scope of the BCP project, team members' roles, goals of the project.

Acting out a specific scenario: simulation test.

BIA:
1. identify threat
2. identifying critical functions of the company
3. calculating RISK


To create a document to be used to help understand what impact a disruptive event would have on the business: Business Impact Assessment (BIA)

To define a strategy to minimize the effect of disturbances and to allow for the resumption of business processes: business continuity planning

The information needed to define the continuity strategy:
a. A strategy needs to be defined to preserve computing elements, such as hardware, software, and networking elements.
b. The strategy needs to address facility use during a disruptive event.
c. The strategy needs to define personnel roles in implementing continuity.

Element of BCP plan approval and implementation:
Creating an awareness of the plan
Obtaining senior management approval of the results
Updating the plan regularly and as needed

Most accurate about the results of the disaster recovery plan test:
If no deficiencies were found during the test, then the test was probably flawed.

Company/employee relations during and after a disaster.
The organization has a responsibility to continue salaries or other funding to the employees and/or families affected by the disaster.

Disbursement of funds during and after a disruptive event.
Authorized, signed checks should be stored securely off-site for access by lower-level managers in the event senior-level or financial management is unable to disburse funds normally.

post-disaster salvage team
a. The salvage team manages the cleaning of equipment after smoke damage.
b. The salvage team identifies sources of expertise to employ in the recovery of equipment or supplies.
c. The salvage team may be given the authority to declare when operations can resume at the disaster site.

role of the recovery team during the disaster
The recovery teams primary task is to get predefined critical business functions operating at the alternate processing site.
The recovery team will need full access to all backup media.

Continuity of operations plan:
establishes senior management and a headquarters after a disaster. It outlines roles and authorities, orders of succession, and individual role tasks.

Information system contingency plan provides key information needed for system recovery:
roles and responsibilities
inventory
assessment procedures
recovery procedure
Testing a system



RA (risk analysis) assesses risk in all areas.
BIA assesses potential loss from a disaster.


Crisis communication plan:
1. Provides procedures for disseminating internal and external communications, a means to provide critical status information, and rumor control.
2. Addresses communication with personnel and the public; not system specific.
3. An incident-based plan activated with a COOP or BCP, but may be used alone during a public exposure event.

Restoration of an organization's mission essential functions is the focus of:
Continuity of operations plan

Contingency plan includes:
system recovery
roles and responsibilities
testing procedures

Difference between ISCP and DRP:
An ISCP can be developed for information system recovery regardless of site or location.

A DRP can be developed for information system recovery at the current site or a temporary alternate site.

Longer disruption = more cost.
Shorter RTO = more expensive solution.
Calculating the cost-balance point shows the optimal point between disruption and recovery costs.

Data backup policy specification:
1. minimum frequency of backups
2. location where backup data is stored
3. file naming conventions
4. media rotation frequency

MOU---memorandum of understanding

System Environment:
low-impact..................tabletop
moderate-impact...........functional
high-impact........full functional

Business continuity Functional Analysis:
collect data
document function
develop hierarchy
apply data classification

Electronic vaulting characteristics:

1. transfers changes in bulk (batch process)
2. backup is not real time (asynchronous)
3. no parallel processing to the alternate site

Remote journaling characteristics:
1. the journal or transaction log is moved to a remote site
2. in real time (synchronous)
3. parallel processing to the alternate site


































Application Security

Von Neumann = A fundamental aspect of the von Neumann architecture, on which most computers today are based, is that there is no inherent difference between data and programming (instruction) representations in memory. Therefore, we cannot tell whether the pattern 4Eh (01001110) is the letter N or a decrement operation code (commonly known as an opcode). Similarly, the pattern 72h (01110010) may be the letter r or the first byte of the "jump if below" opcode. Therefore, without proper input validation, an attacker can provide input data that may actually be an instruction for the system to do something unintended.

Linus' law is based on the premise that with more people reviewing the source code (as in the case of open source), more security bugs can be detected, improving security.
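The data-versus-instruction ambiguity can be seen directly: the same bytes decode as the letters N and r as text, and are plain integers (potential opcodes) otherwise. A small Python illustration:

```python
# The same byte patterns mean different things depending on interpretation:
data = bytes([0x4E, 0x72])

# Interpreted as text, the bytes are ordinary ASCII characters.
assert data.decode("ascii") == "Nr"

# Interpreted as numbers (as opcodes would be), they are just integers;
# nothing in memory marks them as "data" rather than "instructions".
assert list(data) == [78, 114]
```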

The Clark-Wilson model is an integrity model from which entity and referential integrity (RDBMS integrity) rules are derived.

Important characteristic of bytecode = A programming language like Java compiles source code into a sort of pseudo-object code called bytecode. The bytecode is then processed by the interpreter (called the Java Virtual Machine, or JVM) for the CPU to run. Because the bytecode is already fairly close to object code, the interpretation process is much faster than for other interpreted languages. And because bytecode still undergoes interpretation, a given Java program will run on any machine that has a JVM. Memory management and sandboxing are important security aspects that apply to the programming language Java, but not to bytecode itself.

Whether a pseudo-object (bytecode) representation can be easily reverse engineered is debatable and inconclusive. Because bytecode is a pseudo-object representation of the source code, reversing it to source code is in fact considered less difficult than reversing object or executable code.

A covert channel or confinement problem is an information flow issue. It is a communication channel allowing two cooperating processes to transfer information in such a way that it violates the system's security policy. There are two types of covert channels: storage and timing. A covert storage channel involves the direct or indirect writing of a storage location by one process and the direct or indirect reading of the same storage location by another process. Typically, a covert storage channel involves a finite resource, such as a memory location or sector on a disk, that is shared by two subjects at different security levels. This scenario is a description of a covert storage channel. A covert timing channel depends upon being able to influence the rate at which some other process is able to acquire resources, such as the CPU, memory, or I/O devices. Covert channels, as opposed to what should be the case (overt channels), could lead to denial of service, and object reuse has to do with disclosure protection when objects in memory are reused by different processes.


TOC/TOU is a common type of attack that occurs when some control changes between the time that the system security functions check the contents of variables and the time the variables actually are used during operations. For instance, a user logs on to a system in the morning and later is fired. As a result of the termination, the security administrator removes the user from the user database. Because the user did not log off, he or she still has access to the system and might try to get even. Logic bombs are software modules set up to run in a quiescent state, but to monitor for a specific condition or set of conditions and to activate their payload under those conditions. Remote-access trojans are malicious programs designed to be installed, usually remotely, after systems are installed and working. Phishing attempts to get the user to provide information that will be useful for identity theft-type frauds.

The most effective defense against a buffer overflow attack is bounds checking on input. Buffer overflows can result when a program fills up the assigned buffer of memory with more data than its buffer can hold. When the program begins to write beyond the end of the buffer, the program's execution path can be changed, or data can be written into areas used by the operating system itself. A buffer overflow is caused by improper (or lacking) bounds checking on input to a program. By checking the bounds (boundaries) of allowable input size, buffer overflow can be mitigated. Disallowing dynamic construction of queries is a defense against injection attacks, and encoding the output mitigates scripting attacks. The collection of dangling objects in memory (garbage) can be requested but not necessarily forced, and proper memory management can help mitigate buffer overflow attacks, but the most effective defenses against buffer overflow are bounds checking and proper error checking.
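The bounds-checking idea can be sketched as follows (Python for illustration only; real overflows occur in memory-unsafe languages like C, and the buffer size and function name here are hypothetical):

```python
BUFFER_SIZE = 16  # hypothetical fixed-size buffer

def copy_to_buffer(user_input: bytes) -> bytearray:
    """Reject input that exceeds the buffer BEFORE copying (bounds checking)."""
    if len(user_input) > BUFFER_SIZE:
        raise ValueError("input exceeds buffer bounds")
    buf = bytearray(BUFFER_SIZE)
    buf[:len(user_input)] = user_input
    return buf

# Input within bounds is copied; oversized input is rejected with an error
# instead of silently overwriting adjacent memory.
assert copy_to_buffer(b"hello")[:5] == bytearray(b"hello")
```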

Defect prevention rather than defect removal is characteristic of which software development methodology = Cleanroom. In the cleanroom software development methodology, the goal is to write code correctly the first time, rather than trying to find the problems once they are there. Essentially, it focuses on defect prevention rather than defect removal. The waterfall methodology is extremely structured, and its key distinguishing characteristic is that each phase (stage) must be completed before moving on to the next, in order to prevent ad hoc scope creep. A distinguishing feature of the spiral model is that in each phase of the waterfall there are four substages, based on the common Deming PDCA (Plan-Do-Check-Act) model; in particular, a risk assessment review (Check).

CASE is the technique of using computers and computer utilities to help with the systematic analysis, design, development, implementation, and maintenance of software.

Sandboxing: One of the control mechanisms for mobile code is the sandbox. The sandbox provides a protective area for program execution. Limits are placed on the amount of memory and processor resources the program can consume. If the program exceeds these limits, the Web browser terminates the process and logs an error code. This can ensure the safety of the browser's performance.

Salami scam: A variant on the concept of logic bombs involves what is known as the salami scam. The basic idea involves siphoning off small amounts of money (in some versions, fractions of a cent) credited to a specific account, over a large number of transactions.

Hoaxes use an odd kind of social engineering, relying on people's naturally gregarious nature and desire to communicate, and on a sense of urgency and importance, using the ambition that people have to be the first to provide important new information.

Two most common forms of attacks against databases are  Aggregation and inference

Aggregation is the ability to combine nonsensitive data from separate sources to create sensitive information. For example, a user takes two or more unclassified pieces of data and combines them to form a classified piece of data that then becomes unauthorized for that user. Thus, the combined data sensitivity can be greater than the classification of the individual parts. Inference is the ability to deduce (infer) sensitive or restricted information from observing available information. Essentially, users may be able to determine unauthorized information from what information they can access and may never need to directly access unauthorized data. For example, if a user is reviewing authorized information about patients, such as the medications they have been prescribed, the user may be able to determine the illness. Inference is one of the hardest threats to control.
Web applications attacks
a. Injection and scripting
b. Session hijacking and cookie poisoning
c. Bypassing authentication and insecure cryptography

A property that ensures only valid or legal transactions that do not violate any user-defined integrity constraints in DBMS technologies is known as Consistency.

The ACID test, which stands for atomicity, consistency, isolation, and durability, is an important DBMS concept. Atomicity is when all the parts of a transaction's execution are either all committed or all rolled back: do it all or not at all.
Consistency occurs when the database is transformed from one valid state to another valid state. A transaction is allowed only if it follows user-defined integrity constraints. Illegal transactions are not allowed, and if an integrity constraint cannot be satisfied, the transaction is rolled back to its previously valid state and the user is informed that the transaction has failed. Isolation is the process guaranteeing the results of a transaction are invisible to other transactions until the transaction is complete. Durability ensures the results of a completed transaction are permanent and can survive future system and media failures; that is, once they are done, they cannot be undone. This is similar to transaction persistence.
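The rollback behavior behind atomicity and consistency can be demonstrated with SQLite (a minimal sketch; the account table and its balance constraint are hypothetical):

```python
import sqlite3

# In-memory database with a user-defined integrity constraint (balance >= 0).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account (id INTEGER PRIMARY KEY, "
    "balance INTEGER CHECK (balance >= 0))"
)
conn.execute("INSERT INTO account VALUES (1, 100)")
conn.commit()

# An illegal transaction violates the constraint and is rolled back,
# leaving the database in its previous valid state.
try:
    with conn:  # the with-block commits on success, rolls back on error
        conn.execute("UPDATE account SET balance = balance - 500 WHERE id = 1")
except sqlite3.IntegrityError:
    pass

balance = conn.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
assert balance == 100  # unchanged: the failed transaction was rolled back
```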

Expert systems are comprised of a knowledge base (modeled human experience) and an inference engine. The expert system uses a knowledge base (a collection of all the data, or knowledge, on a particular matter) and a set of algorithms or rules that infer new facts from knowledge and incoming data. The knowledge base could be the human experience that is available in an organization. Because the system reacts to a set of rules, if the rules are faulty, the response will also be faulty. Also, because human decision is removed from the point of action, if an error were to occur, the reaction time from a human would be longer.

Best defense against session hijacking and man-in-the-middle (MITM) attacks is unique and random identification.
The use of non-predictable (randomized) and unique identifiers for sessions between two communicating parties is the best defense against session hijacking and man-in-the-middle attacks.

Encryption provides disclosure protection.
Prepared statements or procedures at the database layer, reduces the likelihood of injection attacks.
A database view is a preventive security control measure against disclosure attacks.

Cross-site scripting / buffer overflow / SQL injection countermeasures:
Without appropriate output encoding, a script can be actively read and executed by the browser, causing denial of service.
The collection of dangling objects in memory (garbage) can be requested but not necessarily forced, and proper memory management can help mitigate buffer overflow attacks, but the most effective defenses against buffer overflow are bounds checking and proper error checking.

Disallowing dynamic construction of queries is a defense against injection attacks and encoding the output mitigates scripting attacks.

Locard's principle of exchange: Locard's principle of exchange states that when a crime is committed, the perpetrators leave something behind and take something with them, hence the exchange. This principle allows us to identify aspects of the persons responsible, even with a purely digital crime scene.

Single sign-on (SSO) is sometimes equated with password synchronization, but they differ: SSO authenticates the user once for access to multiple systems, while password synchronization keeps the same password across systems that still require separate logins.

Probing: To give an attacker a road map of the network

Redundant servers: A primary server that mirrors its data to a secondary server

Server cluster: A group of independent servers that are managed as a single system.

802.5: defines a token-passing ring access method
802.11a: wireless networking in the 5GHz band with speeds of up to 54 Mbps
802.11b: a wireless LAN in the 2.4 GHz band with speeds up to 11 Mbps.
802.3: describes a bus topology using CSMA/CD at 10 Mbps.

UTP cable categories: Category 4 is rated for 16 Mbps. [UTP Category 4 cabling is common in later Token Ring networks.]
Category 5 : 100Mbps
Category 6: 155 Mbps
Category 7: 1Gbps

UTP consists of two insulated wires wrapped around each other in a regular spiral pattern.
Fiber-optic cable carries signals as light waves.
[Coax requires fixed spacing between connections.]

Unicast: describes a packet sent from a single source to a single destination
Broadcast: describes a packet sent to all nodes on the network segment.
Anycast: refers to communication between any sender and the nearest of a group of receivers in a network

10Base-2: 10 Mbps thinnet coax cabling rated to 185 meters maximum length
10Base-5: 10 Mbps thicknet coax cabling rated to 500 meters maximum length
10Base-F: 10 Mbps baseband optical fiber
100Base-T: 100 Mbps unshielded twisted pair cabling

OSI reference model Session Layer protocol, standard, or interface: SQL, RPC, ASP, DNA SCP

MIDI --> Presentation Layer--> Video+ Audio

48-bit, 12-digit hexadecimal number known as the Media Access Control (MAC) address:  The first three bytes (or first half) of the six-byte MAC address is the manufacturers identifier. This can be a good troubleshooting aid if a network device is acting up, as it will isolate the brand of the failing device.
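For example, the manufacturer identifier can be split out of a MAC address string (a small illustrative helper, not a standard API):

```python
def oui(mac: str) -> str:
    """Return the manufacturer identifier (first three bytes) of a MAC address."""
    parts = mac.replace("-", ":").split(":")
    assert len(parts) == 6, "a MAC address has six bytes"
    # The first half identifies the manufacturer; the second half is the
    # interface-specific serial number assigned by that manufacturer.
    return ":".join(p.upper() for p in parts[:3])

print(oui("00-1A-2B-3C-4D-5E"))  # -> 00:1A:2B
```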

802.1D : spanning tree protocol is an Ethernet link-management protocol that provides link redundancy while preventing routing loops.
Since only one active path can exist for an Ethernet network to route properly, the STP algorithm calculates and manages the best loop-free path through the network.

IEEE 802.5, specifies a token-passing ring access method for LANs.
IEEE 802.3, specifies an Ethernet bus topology using Carrier Sense Multiple Access Control/Carrier Detect (CSMA/CD).
IEEE 802.11, is the IEEE standard that specifies 1 Mbps and 2 Mbps wireless connectivity in the 2.4 GHz ISM (Industrial, Scientific, Medical) band.

legal private IP address ranges specified by RFC 1918:
a. 10.0.0.0 - 10.255.255.255
b. 172.16.0.0 - 172.31.255.255
c. 192.168.0.0 - 192.168.255.255
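These RFC 1918 ranges can be checked with Python's standard ipaddress module, for example:

```python
import ipaddress

# The three private (non-routable) ranges listed above, in CIDR form.
private_nets = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """True if the address falls in one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in private_nets)

assert is_rfc1918("172.31.255.255")   # top of the 172.16/12 block
assert not is_rfc1918("172.32.0.1")   # just outside it
```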

ISDN Basic Rate Interface (BRI): offers two B channels, which carry user data at 64 Kbps each, and one 16 Kbps D channel.
ISDN Primary Rate Interface (PRI) for North America and Japan, with 23 B channels at 64 Kbps and one 64 Kbps D channel, for a total throughput of 1.544 Mbps.
ISDN PRI for Europe, Australia, and other parts of the world, with 30 B( 64 Kbps)  channels and one D channel, for a total throughput of 2.048 Mbps.
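The throughput figures above can be checked with simple arithmetic (the framing overhead values are the standard T1/E1 figures, added here for completeness):

```python
# BRI: two 64 Kbps B channels carry the user data.
bri_user_data = 2 * 64        # 128 Kbps of user data

# North American/Japanese PRI: 23 B + 1 D channels at 64 Kbps,
# plus 8 Kbps of T1 framing overhead.
pri_na = 23 * 64 + 64 + 8     # 1544 Kbps = 1.544 Mbps (a T1)

# European/Australian PRI: 30 B + 1 D channels at 64 Kbps,
# plus one 64 Kbps framing/synchronization slot.
pri_eu = 30 * 64 + 64 + 64    # 2048 Kbps = 2.048 Mbps (an E1)

assert (bri_user_data, pri_na, pri_eu) == (128, 1544, 2048)
```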

DOD layers conforms to the OSI transport layer:
Host-to-Host layer -- transport layer
Application layer -- Application, Presentation, and Session layers
Internet layer-- Network layer
Network Access Layer --- Data Link and Physical layers

Internetwork packet routing --> Network Layer

802.11a 5 GHz band 54 Mbps
802.11b 2.4 GHz 11 Mbps.
802.11g 2.4 GHz up to 54 Mbps
802.15 2.4-2.5 GHz WPAN

UTP wiring is rated for 100BaseT --> Category 5   100 Mbps

Open Shortest Path First (OSPF) is a link-state hierarchical routing algorithm intended as a successor to RIP. Its features include least-cost routing, multipath routing, and load balancing.


Cat1 Under 1 MHz Analog voice, ISDN BRI
Cat2 1 MHz IBM 3270, AS/400, Apple LocalTalk
Cat3 16 MHz 10BaseT, 4 Mbps Token Ring
Cat4 20 MHz 16 Mbps Token Ring
Cat5 100 MHz 10/100BaseT

Isochronous: a data transmission method in which data is sent continuously and doesn't use either an internal clocking source or start/stop bits for timing.

asynchronous, is a data transmission method using a start bit at the beginning of the data value, and a stop bit at the end of the value.

synchronous, is a message-framed transmission method that uses clocking pulses to match the speed of the data transmission.

plesiochronous, is a transmission method that uses more than one timing source, sometimes running at different speeds. This method may require master and slave clock devices.

RAID LEVEL DESCRIPTION
RAID 0 Multiple Drive Striping
RAID 1 Disk Mirroring
RAID 3 Single Parity Drive
RAID 5 Distributed Parity Information
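The parity used by RAID levels 3 and 5 rests on XOR: parity is the XOR of the data stripes, so any single lost stripe can be rebuilt from the survivors. A minimal sketch (the byte values are arbitrary):

```python
# Three data "stripes" and their XOR parity, computed byte by byte.
d0, d1, d2 = b"\x0f\xa0", b"\x33\x55", b"\xc1\x02"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# If one drive (say d1) is lost, XORing the surviving stripes with the
# parity rebuilds the missing data.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d0, d2, parity))
assert rebuilt == d1
```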


cutthrough and store-and-forward switching

a. A store-and-forward switch reads the whole packet and checks its validity before sending it to the next destination.
b. Both methods operate at layer two of the OSI reference model.
c. A cut-through switch reads only the header on the incoming data packet.
d. A cut-through switch introduces less latency than a store-and-forward switch.

SOCKS protocol
It is sometimes referred to as an application-level proxy.
It operates in the transport layer of the OSI model.
Network applications need to be SOCKS-ified to operate.

dial-up hacking:
War Dialing
 Demon Dialing
ToneLoc

not War Walking

War Walking (or War Driving) refers to scanning for 802.11-based wireless network information, by either driving or walking with a laptop, a wireless adapter in promiscuous mode, some type of scanning software such as NetStumbler or AiroPeek, and a Global Positioning System (GPS).

War Dialing, is a method used to hack into computers by using a software program to automatically call a large pool of telephone numbers to search for those that have a modem attached.

Demon Dialing, similar to War Dialing, is a tool used to attack one modem using brute force to guess the password and gain access.

ToneLoc, was one of the first war-dialing tools used by phone phreakers.

A back doorŽ into a network means --> Mechanisms created by hackers to gain network access at a later time.

honey pot uses a dummy server with bogus applications as a decoy for intruders.

Like a dual-homed host, a screened-host firewall uses two network cards to connect to the trusted and untrusted networks, but screened-host firewall adds a screening router between the host and the untrusted network.

screened-subnet firewall :uses two NICs also, but has two screening routers with the host acting as a proxy server on its own network segment. One screening router controls traffic local to the network while the second monitors and controls incoming and outgoing Internet traffic.

Most common drawbacks to using a dual-homed host firewall --> Internal routing may accidentally become enabled.
A dual-homed host uses two NICs to attach to two separate networks, commonly a trusted network and an untrusted network. It is important that the internal routing function of the host be disabled to create an application-layer chokepoint and filter packets. Many systems come with routing enabled by default, such as IP forwarding, which makes the firewall useless.

Firewall type that uses a dynamic state table to inspect the content of packets --> stateful-inspection firewall

A stateful-inspection firewall intercepts incoming packets at the network level, then uses an inspection engine to extract state-related information from upper layers. It maintains the information in a dynamic state table and evaluates subsequent connection attempts.
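The state-table idea can be sketched as a toy lookup keyed by the connection's endpoints (all names and the tuple layout here are illustrative, not any real firewall's API):

```python
# A toy state table keyed by (src, sport, dst, dport, proto).
state_table = {}

def record_outbound(src, sport, dst, dport, proto="tcp"):
    """Track an outbound connection so its replies will be allowed back in."""
    # Store the key as the reply packet will present it (swapped endpoints).
    state_table[(dst, dport, src, sport, proto)] = "ESTABLISHED"

def allow_inbound(src, sport, dst, dport, proto="tcp"):
    """Permit an inbound packet only if it matches a tracked connection."""
    return (src, sport, dst, dport, proto) in state_table

record_outbound("10.0.0.5", 40000, "93.184.216.34", 443)
assert allow_inbound("93.184.216.34", 443, "10.0.0.5", 40000)    # reply: allowed
assert not allow_inbound("203.0.113.9", 443, "10.0.0.5", 40000)  # unsolicited: dropped
```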

X.25, defines an interface to the first commercially successful connection-oriented packet-switching network, in which the packets travel over virtual circuits.

Frame Relay, was a successor to X.25, and offers a connection-oriented packet-switching network.

Asynchronous Transfer Mode (ATM), was developed from an outgrowth of ISDN standards, and is fast-packet, connection-oriented, cell-switching technology.

Switched Multimegabit Data Service (SMDS) is a high-speed, connectionless, packet-switching public network service that extends LAN-like performance to a metropolitan area network (MAN) or a wide area network (WAN). It is generally delivered over a SONET ring with a maximum effective service radius of around 30 miles.

802.3 uses a LengthŽ field which indicates the number of data bytes that are in the data field.
Ethernet II uses a TypeŽ field in the same 2 bytes to identify the message protocol type.
Both frame formats use a 8-byte Preamble field at the start of the packet, and a 4- byte Frame Check Sequence (FCS) field at the end of the packet, so

Fiber optic cabling as its physical media:

a. 100BaseFX
b. 1000BaseLX
c. 1000BaseSX

100BaseFX, specifies a 100 Mbps baseband fiber optic CSMA/CD LAN.
1000BaseLX, specifies a 1000Mbps CSMA/CD LAN over long wavelength fiber optics.
1000BaseSX, specifies a 1000Mbps CSMA/CD LAN over short wavelength fiber optics.

Which routing method commonly broadcasts its routing table information to all other routers every minute?
Distance vector routing

Distance vector routing: uses the routing information protocol (RIP) to maintain a dynamic table of routing information that is updated regularly. It is the oldest and most common type of dynamic routing.

Static routing: defines a specific route in a configuration file on the router and does not require the routers to exchange route information dynamically.

Link state routers: function like distance vector routers, but use only firsthand information when building routing tables, by maintaining a copy of every other router's Link State Protocol (LSP) frame. This helps to eliminate routing errors and considerably lessens convergence time.

Difference between 802.11b WLAN ad hoc and infrastructure modes
Wireless nodes can communicate peer-to-peer (direct) in the ad hoc mode.

Nodes on an IEEE 802.11b wireless LANs can communicate in one of two modes: ad hoc or infrastructure.
In ad hoc mode, the wireless nodes communicate directly with each other, without establishing a connection to an access point on a wired LAN.
In infrastructure mode, the wireless nodes communicate to an access point, which operates similarly to a bridge or router and manages traffic between the wireless network and the wired network.

Most common type for recent Ethernet installations--> Twisted Pair

Category 5 Unshielded Twisted Pair (UTP) is rated for very high data throughput (100 Mbps) at short distances (up to 100 meters), and is the standard cable type for Ethernet installations.

ThickNet, also known as 10Base5, uses traditional thick coaxial (coax) cable at data rates of up to 10 Mbps.

ThinNet, uses a thinner gauge coax, and is known as 10Base2. It has a shorter maximum segment distance than ThickNet, but is less expensive to install.

Twinax, is like ThinNet, but has two conductors, and was used in IBM Systems.

Describes SSL:
It allows an application to have authenticated, encrypted communications across a network.

Which backup method listed below will probably require the backup operator to use the largest number of tapes for a complete system restoration, if a different tape is used every night in a five-day rotation?
Incremental Backup Method.
Most backup methods use the Archive file attribute to determine whether the file should be backed up or not. The backup software determines which files need to be backed up by checking whether the Archive file attribute has been set, and then resets the Archive bit after the backup procedure. The Incremental Backup Method backs up only files that have been created or modified since the last backup was made, because the Archive file attribute is reset. This can result in the backup operator needing several tapes to do a complete restoration, as every tape with changed files as well as the last full backup tape will need to be restored.


The difference between an incremental backup and a differential backup is that the Archive file attribute is not reset after the differential backup is completed, therefore the changed file is backed up every time the differential backup is run.

The backup set grows in size until the next full backup as these files continue to be backed up during each subsequent differential backup, until the next complete backup occurs. The advantage of this backup method is that the backup operator should only need the full backup and the one differential backup to restore the system.
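The incremental-versus-differential behavior can be simulated with the archive bit (a simplified sketch; the file names and the five-day cycle with one changed file per day are hypothetical):

```python
def tapes_needed(method, days=5):
    """Simulate nightly backups: full on day 0, then one changed file per day."""
    archive = {}   # file -> archive bit (True = changed since last reset)
    tapes = []     # contents written to each night's tape
    for day in range(days):
        archive[f"file{day}"] = True             # a new/changed file each day
        if day == 0:                             # full backup: everything, reset bits
            tapes.append(sorted(archive))
            archive = {f: False for f in archive}
        else:
            changed = sorted(f for f, bit in archive.items() if bit)
            tapes.append(changed)
            if method == "incremental":          # only incremental resets the bit
                archive = {f: False for f in archive}
    return tapes

# Restoring after day 4: incremental needs the full tape plus every nightly tape;
# differential needs only the full tape plus the last (cumulative) tape.
assert len(tapes_needed("incremental")) == 5
assert tapes_needed("differential")[-1] == ["file1", "file2", "file3", "file4"]
```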

Fiber optic cable has three basic physical elements, the core, the cladding, and the jacket. The core is the innermost transmission medium, which can be glass or plastic. The next outer layer, the cladding is also made of glass or plastic, but has different properties, and helps to reflect the light back into the core. The outermost layer, the jacket, provides protection from heat, moisture, and other environmental elements.


224.0: 2^3 - 2 = 6 subnets (11100000.00000000) with 2^13 - 2 = 8190 hosts

240.0: 2^4 - 2 = 14 subnets (11110000.00000000) with 2^12 - 2 = 4094 hosts

248.0: 2^5 - 2 = 30 subnets (11111000.00000000) with 2^11 - 2 = 2046 hosts

252.0: 2^6 - 2 = 62 subnets (11111100.00000000) with 2^10 - 2 = 1022 hosts

Given network 172.16.0.0, which subnet mask below would allow us to divide the network into the maximum number of subnets with at least 600 host addresses per subnet?
252.0 (i.e., 255.255.252.0, giving 62 subnets of 1022 hosts each)
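The subnet arithmetic above can be checked with a short script. The helper name and the subtract-two convention for subnets and hosts follow these notes, not any standard library.

```python
# Subnet arithmetic for a /16 network such as 172.16.0.0, using the
# "subtract 2" convention for both subnets and hosts as in the notes.

def subnet_math(borrowed_bits, total_host_bits=16):
    subnets = 2 ** borrowed_bits - 2
    hosts = 2 ** (total_host_bits - borrowed_bits) - 2
    return subnets, hosts

for bits, mask in [(3, "255.255.224.0"), (4, "255.255.240.0"),
                   (5, "255.255.248.0"), (6, "255.255.252.0")]:
    s, h = subnet_math(bits)
    print(f"{mask}: {s} subnets, {h} hosts")
```

For at least 600 hosts per subnet, the smallest qualifying host count is 1022, which is why 255.255.252.0 maximizes the number of subnets.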

DHCP snooping:
Ensures that DHCP servers can assign IP addresses only to selected systems, by filtering DHCP traffic on untrusted ports.

Benefits of MPLS:
Performance characteristics can be set.
Layer 2 services can be overlaid.
Multiple layers can be eliminated.

Attack that sends a flood of UDP packets from a spoofed source address so that the resulting flood of ICMP unreachable replies overwhelms the victim.

Attacker inserts an invalid value into an oversized packet, making it difficult for the destination router to reassemble it.

ICMP message types:
0: echo reply (ping reply)
3: delivery failure (host unknown, network unreachable)
4: source quench
8: echo request (ping request)
11: time exceeded (TTL expired; used by tracert)
12: parameter problem (bad IP header)
13: communication administratively prohibited (strictly, type 3 code 13)

IGMP is based on a publish-and-subscribe model; this is how hosts share group membership information with multicast routers.
IGMP is a method of allowing multicast transmission in a LAN environment.
Multicast combines one-to-many and many-to-many delivery methods.

IMs (instant messengers) are often provisioned to bypass firewalls.

DCE is similar to Kerberos.
DCE specifies its own authorization techniques, which are lacking in Kerberos.

Network perimeter concept restricts access from segment to segment via choke points.

Benefits of packet filtering:
1. Network scalability
2. Performance
3. Application independence

Drawback:
Security

Primary goals of QoS:
1. Different types of data and information coexist (video, voice, data)
2. Jitter and latency are managed.
3. Dedicated bandwidth is maintained.

Many applications are able to transmit over one physical medium at the same time using multiplexing.

In TCP, sequence numbers ensure ordered, complete message delivery.

SIP --> user agent client (UAC), user agent server (UAS).

Autonomous system --> one governing entity manages traffic flow; organized hierarchically.

Technical attacks: algebraic, linear, differential.
Non-technical: rubber hose. It involves assault and battery:
someone is physically threatened or possibly tortured so that they will hand over the key.

PGP, S/MIME, PEM, MSP: email security standards.

DES Mode used for ATM PIN---> ECB

Which protocol protects the communication channel and the message between a server and a client?
RPC

Block ciphers use S-boxes to perform mathematical functions and permutations on message bits.
Substitution boxes, or S-boxes, use lookup tables that determine how bits should be scrambled and substituted.

Multiplication (repeated addition) of points on an elliptic curve is the analog of modular exponentiation, and the elliptic curve discrete logarithm problem is the analog of the modular discrete logarithm problem.

In a digitally signed message transmission using a hash function:
The message digest is encrypted with the private key of the sender.

The strength of RSA public key encryption is based on the:
Difficulty in finding the prime factors of very large numbers

Elliptic curve cryptosystems:
Have a higher strength per bit than RSA.

A cryptographic attack in which portions of the ciphertext are selected for trial decryption while having access to the corresponding decrypted plaintext is known as what type of attack:
Chosen ciphertext

The Secure Hash Algorithm (SHA-1) of the Secure Hash Standard (NIST FIPS PUB 180) processes data in block lengths of
512 bits.
If a block length is fewer than 512 bits, padding bits are added to make the block length equal to 512 bits.
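These SHA-1 parameters can be confirmed with Python's standard hashlib module, whose hash objects expose the internal block size and digest size in bytes.

```python
# Checking the stated SHA-1 parameters with hashlib: 512-bit processing
# blocks and a 160-bit message digest.
import hashlib

h = hashlib.sha1(b"abc")
print(h.block_size * 8)    # 512
print(h.digest_size * 8)   # 160
print(h.hexdigest())       # the 160-bit digest as 40 hex characters
```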

The technique of confusion, proposed by Claude Shannon, is used in block ciphers to:
Conceal the statistical connection between ciphertext and plaintext.

The Advanced Encryption Standard, the Rijndael cipher, can be described as:
An iterated block cipher

The Rijndael cipher employs a round transformation that is itself comprised of three layers of transformations. What are these layers?
Key addition layer
Linear mixing layer
Non-linear layer

A secret mechanism that enables the implementation of the reverse function in a one-way function is called a
Trap door

Data diode: a mechanism, usually in multilevel security systems, that limits the flow of classified information to one direction.

Digital certification, Certification authority, Timestamping, Lightweight Directory Access Protocol (LDAP), Non-repudiation support------->Public Key Infrastructure (PKI)

The vulnerability associated with the requirement to change security protocols at a carrier's Wireless Application Protocol (WAP) gateway from the Wireless Transport Layer Security Protocol (WTLS) to SSL or TLS over the wired network is called:
Wireless Application Protocol (WAP) Gap.

The Transport Layer Security (TLS) 1.0 protocol is based on which Protocol Specification: SSL-3.0
The differences between TLS and SSL are not great, but there is enough of a difference such that TLS 1.0 and SSL 3.0 are not operationally compatible. If interoperability is desired, there is a capability in TLS that allows it to function as SSL.

The primary goal of the TLS Protocol is to provide:
Privacy and data integrity between two communicating applications.

The TLS Protocol is comprised of the TLS Record and Handshake Protocols. The TLS Record Protocol is layered on top of a transport protocol such as TCP and provides privacy and reliability to the communications. The privacy is implemented by encryption using symmetric key cryptography such as DES or RC4. The secret key is generated anew for each connection; however, the Record Protocol can be used without encryption. Integrity is provided through the use of a keyed Message Authentication Code (MAC) using hash algorithms such as SHA or MD5. The TLS Record Protocol is also used to encapsulate a higher-level protocol such as the TLS Handshake Protocol. This Handshake Protocol is used by the server and client to authenticate each other. The authentication can be accomplished using asymmetric key cryptography such as RSA or DSS. The Handshake Protocol also sets up the encryption algorithm and cryptographic keys to enable the application protocol to transmit and receive information.



Elliptic curves and the elliptic curve discrete logarithm problem:
An elliptic curve can be defined over the real, complex, or rational numbers, but for cryptography it is defined over a finite field.
The points on an elliptic curve form a group under addition.
Multiplication (or repeated addition) in an elliptic curve system is the analog of modular exponentiation, thus defining a discrete logarithm problem.

In communications between two parties, encrypting the hash function of a message with a symmetric key algorithm is equivalent to

Generating a keyed Message Authentication Code (MAC)
A MAC is used to authenticate files between users. If the sender and receiver both have the secret key, they are the only ones that can verify the hash function. If a symmetric key algorithm is used to encrypt the one-way hash function, then the one-way hash function becomes a keyed MAC.
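A keyed MAC is available directly in Python's standard library through the hmac module (HMAC is a standardized keyed-hash construction, used here as a concrete stand-in for the generic keyed MAC described above). The key and message values are invented for illustration.

```python
# A keyed MAC in practice: a shared secret key combined with a hash
# function. Only holders of the key can compute or verify the tag.
import hashlib
import hmac

key = b"shared-secret"            # known only to sender and receiver
message = b"transfer 100 units"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag with the same key and compares.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))   # True
```

A party without the key cannot produce a matching tag, which is what turns the plain hash into an authentication mechanism.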

Cryptographic hash function, H (m), where m denotes the message being hashed by the function H:

a. H (m) is collision free.
b. The output is of fixed length.
c. H (m) is a one-way function.

A message of < 2^64 bits is input to the Secure Hash Algorithm (SHA), and the resultant message digest of 160 bits is fed into the DSA, which generates the digital signature of the message.

If the application of a hash function results in an m-bit fixed-length output, an attack on the hash function that attempts to achieve a collision after 2^(m/2) possible trial input values is called: Birthday attack

This problem is analogous to asking the question "How many people must be in a room for the probability of two people having the same birthday to be equal to 50%?" The answer is 23. Thus, trying 2^(m/2) possible trial inputs to a hash function gives a 50% chance of finding two inputs that have the same hash value.
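The 23-person figure follows from a short probability calculation, sketched below with the usual 365-day simplification.

```python
# Probability that, among n people, at least two share a birthday.
# Multiply the chances that each successive person avoids all earlier
# birthdays, then take the complement.

def collision_probability(n, days=365):
    p_unique = 1.0
    for i in range(n):
        p_unique *= (days - i) / days
    return 1 - p_unique

print(round(collision_probability(23), 3))   # ~0.507, just past 50%
```

The same square-root effect is why an m-bit hash offers only about m/2 bits of collision resistance.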

The minimum information necessary on a digital certificate is:
Name, public key, and digital signature of the certifier
The name of the individual is certified and bound to his/her public key; this binding is validated by the digital signature of the certifying agent.

Message digest algorithms MD2, MD4 and MD5 have in common:
They all take a message of arbitrary length and produce a message digest of 128-bits.


Clipper Chip message recovery: decrypt the LEAF with the family key, Kf; recover U; obtain a court order to obtain the two halves of Ku; recover Ku; and then recover Ks, the session key. Use the session key to decrypt the message.

The message is encrypted with the symmetric session key, Ks. In order to decrypt the message, then, Ks must be recovered. The LEAF contains the session key, but the LEAF is encrypted with the family key, Kf, that is common to all Clipper Chips. The authorized agency has access to Kf and decrypts the LEAF. However, the session key is still encrypted by the 80-bit unit key, Ku, that is unique to each Clipper Chip and is identified by the unique identifier, U. Ku is divided into two halves, and each half is deposited with an escrow agency. The law enforcement agency obtains the two halves of Ku by presenting the escrow agencies with a court order for the key identified by U. The two halves of the key obtained by the court order are XORed together to obtain Ku. Then, Ku is used to recover the session key, Ks, and Ks is used to decrypt the message.

The decryption sequence to obtain Ks can be summarized as:
Kf -> U -> [1/2 Ku XOR 1/2 Ku] -> Ku -> Ks
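The XOR split of the unit key is the one step in this scheme that is easy to demonstrate directly. The sketch below assumes a 10-byte (80-bit) Ku and uses random bytes as stand-ins; the escrow "agencies" are just two variables.

```python
# Sketch of the Clipper-style key split: the unit key Ku is held as two
# XOR shares, one per escrow agency. Neither share alone reveals Ku,
# but XORing the shares together reconstructs it exactly.
import secrets

ku = secrets.token_bytes(10)                        # 80-bit unit key
share1 = secrets.token_bytes(10)                    # escrow agency 1
share2 = bytes(a ^ b for a, b in zip(ku, share1))   # escrow agency 2

recovered = bytes(a ^ b for a, b in zip(share1, share2))
print(recovered == ku)   # True
```

Because share1 is uniformly random, each share in isolation is statistically independent of Ku, which is why both court-ordered halves are required.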

What BEST describes the National Security Agency-developed Capstone: A chip that implements the U.S. Escrowed Encryption Standard

Capstone is a Very Large Scale Integration (VLSI) chip that employs the Escrowed Encryption Standard and incorporates the Skipjack algorithm, similar to the Clipper Chip. As such, it has a LEAF. Capstone also supports public key exchange and digital signatures. At this time, Capstone products have their LEAF function suppressed and a Certifying Authority provides for key recovery.

Block cipher: A symmetric key algorithm that operates on a fixed-length block of plaintext and transforms it into a fixed-length block of ciphertext. A block cipher breaks the plaintext into fixed-length blocks, commonly 64-bits, and encrypts the blocks into fixed-length blocks of ciphertext. Another characteristic of the block cipher is that, if the same key is used, a particular plaintext block will be transformed into the same ciphertext block. Examples of block ciphers are DES, Skipjack, IDEA, RC5 and AES. An example of a block cipher in a symmetric key cryptosystem is the Electronic Code Book (ECB) mode of operation. In the ECB mode, a plaintext block is transformed into a ciphertext block. If the same key is used for each transformation, then a "code book" can be compiled for each plaintext block and corresponding ciphertext block.

An iterated block cipher encrypts by breaking the plaintext block into two halves and, with a subkey, applying a "round" transformation to one of the halves. The output of this transformation is XORed with the remaining half, and the round is completed by swapping the two halves. This type of cipher is known as:
Feistel
The question stem describes one round of a Feistel cipher. This algorithm was developed by an IBM team led by Horst Feistel (H. Feistel, "Cryptography and Computer Privacy," Scientific American, v. 228, n. 5, May 1973). The algorithm was called Lucifer and was the basis for the Data Encryption Standard (DES).
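The structure described above can be captured in a toy one-round sketch. The round function F below is invented purely for illustration (real ciphers use many rounds with a subkey schedule); the point is that the construction inverts cleanly regardless of what F is.

```python
# One-round toy Feistel network over 8-bit halves. F need not be
# invertible: the XOR-and-swap structure makes the round reversible.

def F(half, subkey):
    return (half * 31 + subkey) % 256   # arbitrary illustrative round function

def feistel_round(left, right, subkey):
    """(L, R) -> (R, L XOR F(R, k)), i.e., transform, XOR, then swap."""
    return right, left ^ F(right, subkey)

def feistel_round_inverse(left, right, subkey):
    """Undo one round: swap back and cancel the XOR with the same F."""
    return right ^ F(left, subkey), left

L, R = 0x12, 0x34
L2, R2 = feistel_round(L, R, subkey=0x5A)
print(feistel_round_inverse(L2, R2, subkey=0x5A) == (L, R))   # True
```

This self-inverting structure is why DES decryption is just DES encryption with the subkeys applied in reverse order.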

RC4 is a variable-key-size stream cipher developed by Ronald Rivest. In this type of cipher, a sequence of bits serving as the keystream is bitwise XORed with the plaintext.
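The well-known RC4 algorithm itself is short enough to write out: a key-scheduled permutation S drives a keystream generator whose output is XORed with the data. This is for illustration only; RC4 is considered broken and should not be used in practice.

```python
# RC4: key-scheduling algorithm (KSA) builds a permutation S from the
# key; the pseudo-random generation algorithm (PRGA) walks S to emit
# keystream bytes, which are XORed with the data. Encryption and
# decryption are the same operation.

def rc4(key: bytes, data: bytes) -> bytes:
    # KSA: scramble the identity permutation using the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: generate keystream bytes and XOR them with the data
    out, i, j = [], 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
print(rc4(b"Key", ct))   # b'Plaintext' -- applying RC4 again decrypts
```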

A key schedule is: A set of subkeys derived from a secret key

The subkeys are typically used in iterated block ciphers. In this type of cipher, the plaintext is broken into fixed-length blocks and enciphered in "rounds." In each round, the same transformation is applied using one of the subkeys of the key schedule.

The Wireless Transport Layer Security (WTLS) Protocol in the Wireless
Application Protocol (WAP) stack is based on which Internet Security
Protocol?

TLS

WTLS has to incorporate functionality that TCP provides for TLS in the TCP/IP protocol suite, because WTLS can operate over UDP. WTLS supports data privacy, authentication, and integrity. Because WTLS has to incorporate a large number of handshakes when security is implemented, significant delays may occur. During a WTLS handshake session, WTLS can set up the following security classes:
Class 1. No certificates
Class 2. The client does not have a certificate; the server has a certificate
Class 3. The client and server both have certificates

The Advanced Encryption Standard (Rijndael) block cipher requirements regarding keys and block sizes have now evolved to which configuration?
The block size is 128 bits, and the key can be 128, 192, or 256 bits.

AES is comprised of three key sizes, 128, 192, and 256 bits, with a fixed block size of 128 bits. The Advanced Encryption Standard (AES) was announced on November 26, 2001, as Federal Information Processing Standard Publication 197 (FIPS PUB 197). FIPS PUB 197 states that "This standard may be used by Federal departments and agencies when an agency determines that sensitive (unclassified) information (as defined in P.L. 100-235) requires cryptographic protection. Other FIPS-approved cryptographic algorithms may be used in addition to, or in lieu of, this standard." Depending upon which of the three keys is used, the standard may be referred to as "AES-128," "AES-192," or "AES-256."

The number of rounds used in the Rijndael cipher is a function of the key size:
256-bit key: 14 rounds
192-bit key: 12 rounds
128-bit key: 10 rounds

Rijndael has a symmetric and parallel structure that provides for flexibility of implementation and resistance to cryptanalytic attacks. Attacks on Rijndael would involve the use of differential and linear cryptanalysis.

The Wireless Transport Layer Security Protocol (WTLS) in the Wireless
Application Protocol (WAP) stack provides for security:
Between the WAP client and the gateway
Transport Layer Security (TLS) provides for security between the
content server on the Internet and the WAP gateway.

The MIME protocol specifies a structure for the body of an email
message. MIME supports a number of formats in the email body,
including graphic, enhanced text and audio, but does not provide
security services for these messages. S/MIME defines such services
for MIME as digital signatures and encryption based on a standard
syntax.

Digital cash refers to the electronic transfer of funds from one party to
another. When digital cash is referred to as anonymous or identified, it
means that:
Anonymous: the identity of the cash holder is not known;
Identified: the identity of the cash holder is known

Anonymous implementations of digital cash do not identify the
cash holder and use blind signature schemes; identified implementations
use conventional digital signatures to identify the cash holder.
In looking at these two approaches, anonymous schemes are analogous
to cash since cash does not allow tracing of the person who
made the cash payment while identified approaches are the analog of
credit or debit card transactions.

Key recovery methods:
a. A message is encrypted with a session key and the session key is, in
turn, encrypted with the public key of a trustee agent. The
encrypted session key is sent along with the encrypted message. The
trustee, when authorized, can then decrypt the message by recovering
the session key with the trustees private key.
b. A message is encrypted with a session key. The session key, in turn,
is broken into parts and each part is encrypted with the public key
of a different trustee agent. The encrypted parts of the session key
are sent along with the encrypted message. The trustees, when
authorized, can then decrypt their portion of the session key and
provide their respective parts of the session key to a central agent.
The central agent can then decrypt the message by reconstructing
the session key from the individual components.
c. A secret key or a private key is broken into a number of parts and
each part is deposited with a trustee agent. The agents can then
provide their parts of the key to a central authority, when presented
with appropriate authorization. The key can then be reconstructed
and used to decrypt messages encrypted with that key.

Encrypting parts of the session key with the private keys of the
trustee agents provides no security for the message since the
message can be decrypted by recovering the key components of
the session key using the public keys of the respective agents. These
public keys are available to anyone.

Theoretically, quantum computing offers the possibility of factoring the
products of large prime numbers and calculating discrete logarithms in
polynomial time. These calculations can be accomplished in such a
compressed time frame because:

A quantum bit in a quantum computer is actually a linear
superposition of both the one and zero states and, therefore, can
theoretically represent both values in parallel. This phenomenon
allows computation that usually takes exponential time to be
accomplished in polynomial time since different values of the binary
pattern of the solution can be calculated simultaneously.

In digital computers, a bit is in either a one or a zero state. In a quantum computer, through linear superposition, a quantum bit can be in both states essentially simultaneously. Thus, computations consisting of trial evaluations of binary patterns can take place simultaneously, collapsing work that would classically take exponential time. The probability of obtaining a correct result is increased through a phenomenon called constructive interference of light, while the probability of obtaining an incorrect result is decreased through destructive interference.

Describe the Public Key Cryptography Standards (PKCS)
A set of public-key cryptography standards that support algorithms
such as Diffie-Hellman and RSA as well as algorithm independent
standards.

PKCS supports algorithm-independent and algorithm-specific
implementations as well as digital signatures and certificates. It was
developed by a consortium including RSA Laboratories, Apple, DEC,
Lotus, Sun, Microsoft and MIT. At this writing, there are 15 PKCS
standards. Examples of these standards are:
PKCS #1. Defines mechanisms for encrypting and signing data
using the RSA public-key system
PKCS #3. Defines the Diffie-Hellman key agreement protocol
PKCS #10. Describes a syntax for certification requests
PKCS #15. Defines a standard format for cryptographic
credentials stored on cryptographic tokens.


An interface to a library of software functions that provide security and cryptography services is called:
A cryptographic application programming interface (CAPI)
CAPI is designed for software developers to call functions from
the library and, thus, make it easier to implement security services.
An example of a CAPI is the Generic Security Service API (GSS-API).
The GSS-API provides data confidentiality, authentication, and
data integrity services and supports the use of both public and secret key mechanisms. The GSS-API is described in the Internet Proposed Standard RFC 2078.


The British Standard 7799/ISO Standard 17799 discusses cryptographic policies. It states, "An organization should develop a policy on its use of cryptographic controls for protection of its information. . . . When developing a policy, the following should be considered:"

a. The management approach toward the use of cryptographic controls
across the organization
b. The approach to key management, including methods to deal with
the recovery of encrypted information in the case of lost,
compromised or damaged keys
c. Roles and responsibilities

A policy is a general statement of management's intent; therefore, a policy would not specify the encryption scheme to be used.

The main chapter headings of the standard are:
Security Policy
Organizational Security
Asset Classification and Control
Personnel Security
Physical and Environmental Security
Communications and Operations Management
Access Control
Systems Development and Maintenance
Business Continuity Management
Compliance

The Number Field Sieve (NFS) is a:
General purpose factoring algorithm that can be used to factor large numbers.

The NFS has been successful in efficiently factoring numbers larger than 115 digits and a version of NFS has successfully factored a 155-digit number. Clearly, factoring is an attack that can be used against the RSA cryptosystem in which the public and private keys are calculated based on the product of two large prime numbers.

DESX is a variant of DES in which:

Input plaintext is bitwise XORed with 64 bits of additional key material before encryption with DES, and the output of DES is also bitwise XORed with another 64 bits of key material.
DESX was developed by Ron Rivest to increase the resistance of DES to brute force key search attacks; however, the resistance of DESX to differential and linear attacks is equivalent to that of DES with independent subkeys.

The ANSI X9.52 standard defines a variant of DES encryption with keys
k1, k2, and k3 as:
C = Ek3 [Dk2 [Ek1 [M]]]
What is this DES variant?
Triple DES in the EDE mode
This version of triple DES performs an encryption (E) of plaintext message M with key k1, a decryption (D) with key k2 (essentially, another encryption), and a third encryption with key k3. Another implementation of DES EDE is accomplished with keys k1 and k2 being independent, but with keys k1 and k3 being identical. This implementation of triple DES is written as:
C = Ek1 [Dk2 [Ek1 [M]]]
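The EDE composition can be sketched generically. Python's standard library has no DES, so a toy XOR "cipher" stands in for E and D below; the point is the structure C = Ek3[Dk2[Ek1[M]]] and the fact that setting k1 = k2 = k3 collapses EDE to a single encryption, which is how triple DES stays backward compatible with single DES.

```python
# EDE composition with a toy XOR block "cipher" standing in for DES.
# All keys and messages are illustrative 8-bit values.

def E(m, k):   # toy block encryption: XOR with the key
    return m ^ k

def D(c, k):   # XOR is its own inverse
    return c ^ k

def ede(m, k1, k2, k3):
    """Encrypt: C = E_k3[ D_k2[ E_k1[M] ] ]"""
    return E(D(E(m, k1), k2), k3)

def ded(c, k1, k2, k3):
    """Decrypt: M = D_k1[ E_k2[ D_k3[C] ] ]"""
    return D(E(D(c, k3), k2), k1)

m = 0b10110010
print(ede(m, 0x3C, 0x3C, 0x3C) == E(m, 0x3C))   # True: EDE with one key = single E
```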

Using a modulo 26 substitution cipher where the letters A to Z of the alphabet are given values 0 to 25, respectively, encrypt the message "OVERLORD BEGINS." Use the key K = NEW and D = 3, where D is the number of repeating letters representing the key. The encrypted message is:

BZAEPKEH XRKEAW

OVERLORD becomes 14 21 4 17 11 14 17 3
BEGINS becomes 1 4 6 8 13 18
The key NEW becomes 13 4 22
Adding the key repetitively to OVERLORD BEGINS modulo 26 yields 1 25 0 4 15 10 4 7 23 17 10 4 0 22, which translates to BZAEPKEH XRKEAW.
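The worked example can be checked with a short script. Note the second letter: V = 21 plus E = 4 gives 25, which is Z.

```python
# Modulo-26 additive cipher with a repeating key, A=0 ... Z=25.
# Non-letters (like the space) pass through without consuming key letters.

def encrypt(plaintext: str, key: str) -> str:
    out, i = [], 0
    for ch in plaintext:
        if ch.isalpha():
            shift = ord(key[i % len(key)]) - ord("A")
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
            i += 1
        else:
            out.append(ch)
    return "".join(out)

print(encrypt("OVERLORD BEGINS", "NEW"))   # BZAEPKEH XRKEAW
```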

The algorithm of the 802.11 Wireless LAN Standard that is used to protect transmitted information from disclosure is called:
Wired Equivalency Privacy (WEP)
WEP is designed to prevent the violation of the confidentiality of data transmitted over the wireless LAN. Another feature of WEP is to prevent unauthorized access to the network.

In the Wireless Application Protocol (WAP) stack, the security layer is WTLS.

The Wired Equivalency Privacy algorithm (WEP) of the 802.11 Wireless LAN Standard uses which of the following to protect the confidentiality of information being transmitted on the LAN?
A secret key that is shared between a mobile station (e.g., a laptop with a wireless Ethernet card) and a base station access point.

The transmitted packets are encrypted with a secret key and an
Integrity Check (IC) field comprised of a CRC-32 check sum that is
attached to the message. WEP uses the RC4 variable key-size
stream cipher encryption algorithm. RC4 was developed in 1987 by
Ron Rivest and operates in output feedback mode. Researchers at
the University of California at Berkeley (wep@isaac.cs.berkeley.edu)
have found that the security of the WEP algorithm can be
compromised, particularly with the following attacks:
Passive attacks to decrypt traffic based on statistical analysis
Active attack to inject new traffic from unauthorized mobile
stations, based on known plaintext
Active attacks to decrypt traffic, based on tricking the access
point
Dictionary-building attack that, after analysis of about a day's
worth of traffic, allows real-time automated decryption of all
traffic
The Berkeley researchers have found that these attacks are
effective against both the 40-bit and the so-called 128-bit versions of
WEP using inexpensive off-the-shelf equipment. These attacks can
also be used against networks that use the 802.11b Standard, which
is the extension to 802.11 to support higher data rates, but does not
change the WEP algorithm.
The weaknesses in WEP and 802.11 are being addressed by the
IEEE 802.11i Working Group. WEP will be upgraded to WEP2 with
the following proposed changes:
Modifying the method of creating the initialization vector (IV)
Modifying the method of creating the encryption key
Protection against replays
Protection against IV collision attacks
Protection against forged packets
In the longer term, it is expected that the Advanced Encryption
Standard (AES) will replace the RC4 encryption algorithm currently used in WEP.

In a block cipher, diffusion can be accomplished through: Permutation. Diffusion is aimed at obscuring redundancy in the plaintext by spreading the effect of the transformation over the ciphertext. Permutation is also known as transposition and operates by rearranging the letters of the plaintext.

The National Computer Security Center (NCSC) is: A branch of the National Security Agency (NSA) that initiates research and develops and publishes standards and criteria for trusted information systems.

The NCSC promotes information systems security awareness and technology transfer through many channels, including the annual National Information Systems Security Conference. It was founded in 1981 as the Department of Defense Computer Security Center, and its name was changed in 1985 to NCSC. It developed the Trusted Computer System Evaluation Criteria (TCSEC) Rainbow Series for evaluating commercial products against information system security criteria.

A portion of a Vigenère cipher square is given below using five (1, 2, 14,
16, 22) of the possible 26 alphabets. Using the key word bow, which of
the following is the encryption of the word advanceŽ using the
Vigenère cipher in Table A.10?
a. b r r b b y h
b. b r r b j y f
c. b r r b b y f
d. b r r b c y f
Answer: c
The Vigenère cipher is a polyalphabetic substitution cipher. The key
word bow indicates which alphabets to use. The letter b indicates the
alphabet of row 1, the letter o indicates the alphabet of row 14, and
the letter w indicates the alphabet of row 22. To encrypt, arrange the
key word repetitively over the plaintext as shown in Table A.11.
Thus, the letter a of the plaintext is transformed into b of the alphabet in
row 1, the letter d is transformed into r of row 14, the letter v is transformed
into r of row 22, and so on.

There are two fundamental security protocols in IPSEC. These are the
Authentication Header (AH) and the Encapsulating Security Payload
(ESP). Which of the following correctly describes the functions of each?

ESP-data encrypting and source authenticating protocol that also
validates the integrity of the transmitted data; AH-source
authenticating protocol that also validates the integrity of the
transmitted data

ESP does have a source authentication and integrity capability
through the use of a hash algorithm and a secret key. It provides confidentiality
by means of secret key cryptography. DES and triple DES
secret key block ciphers are supported by IPSEC and other algorithms
will also be supported in the future. AH uses a hash algorithm
in the packet header to authenticate the sender and validate the
integrity of the transmitted data.

Which of the following is NOT an advantage of a stream cipher?
The receiver and transmitter must be synchronized.
Advantages:
a. The same equipment can be used for encryption and decryption.
b. It is amenable to hardware implementations that result in higher speeds.
c. Since encryption takes place bit by bit, there is no error propagation.

The transmitter and receiver must be synchronized since they must use the same keystream bits for the same bits of the text that are to be enciphered and deciphered. Usually, synchronizing frames must be sent to effect the synchronization, and thus additional overhead is required for the transmissions.

Property of a public key cryptosystem? (Let P represent the private key, Q represent the public key and M the plaintext message.)
a. Q[P(M)] = M
b. P[Q(M)] = M
c. It is computationally infeasible to derive P from Q.

A form of digital signature where the signer is not privy to the content of the message is called a:
Blind signature.

A blind signature algorithm for the message M uses a blinding
factor, f; a modulus m; the private key, s, of the signer and the public
key, q, of the signer. The sender, who generates f and knows q,
presents the message to the signer in the form:
M·f^q (mod m)

Thus, the message is not in a form readable by the signer, since the signer does not know f. The signer signs M·f^q (mod m) with his/her private key, returning

(M·f^q)^s (mod m)

This factor can be reduced to f·M^s (mod m), since s and q are inverses of each other. The sender then divides f·M^s (mod m) by the blinding factor, f, to obtain

M^s (mod m)

which is the message, M, signed with the private key, s, of the signer.
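The flow can be demonstrated with textbook RSA and tiny, insecure parameters chosen purely for illustration (p = 61, q = 53, so n = 3233, e = 17, d = 2753). Here e and d play the roles of the signer's public exponent q and private exponent s in the notation above, and the "division" by f is multiplication by its modular inverse.

```python
# Textbook RSA blind signature with toy parameters. Demonstration only:
# real implementations use large keys and padded, hashed messages.

n, e, d = 3233, 17, 2753   # signer's modulus, public exponent, private exponent
M = 42                     # message the signer must not see
f = 7                      # sender's secret blinding factor, gcd(f, n) = 1

blinded = (M * pow(f, e, n)) % n           # M * f^e mod n, unreadable to signer
signed_blinded = pow(blinded, d, n)        # signer applies the private key
signature = (signed_blinded * pow(f, -1, n)) % n   # sender removes the blinding

print(signature == pow(M, d, n))   # True: equals M signed directly
```

Because (M·f^e)^d = M^d·f^(ed) = M^d·f (mod n), multiplying by f's inverse leaves exactly M^d, the ordinary signature, even though the signer never saw M.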


The following compilation represents what facet of cryptanalysis?
A 8.2    J 0.2    S 6.3
B 1.5    K 0.8    T 9.1
C 2.8    L 4.0    U 2.8
D 4.3    M 2.4    V 1.0
E 12.7   N 6.7    W 2.4
F 2.2    O 7.5    X 0.2
G 2.0    P 1.9    Y 2.0
H 6.1    Q 0.1    Z 0.1
I 7.0    R 6.0

Frequency analysis

The compilation is from a study by H. Becker and F. Piper that was
originally published in Cipher Systems: The Protection of Communication.
The listing shows the relative frequency in percent of the appearance
of the letters of the English alphabet in large numbers of
passages taken from newspapers and novels. Thus, in a substitution
cipher, an analysis of the frequency of appearance of certain letters
may give clues to the actual letter before transformation. Note that
the letters E, A, and T have relatively high percentages of appearance
in English text.


Superscalar computer architecture is characterized by a:
Processor that enables concurrent execution of multiple instructions in the same pipeline stage.

Memory space insulated from other running processes in a multiprocessing
system is part of a:
Protection domain.

In the discretionary portion of the Bell-LaPadula model that is based on the access matrix, how the access rights are defined and evaluated is called:
Authorization
Authorization is concerned with how access rights are defined and how they are evaluated.

A computer system that employs the necessary hardware and software assurance measures to enable it to process multiple levels of classified or sensitive information is called a:
Trusted system.

For fault-tolerance to operate, a system must be:
Capable of detecting and correcting the fault.

Which of the following choices describes the four phases of the National Information Assurance Certification and Accreditation Process (NIACAP)?
Definition, Verification, Validation, and Post Accreditation

What is a programmable logic device (PLD)?
An integrated circuit with connections or internal logic gates that can be changed through a programming process.

The termination of selected, non-critical processing when a hardware or software failure occurs and is detected is referred to as:
Fail soft.

Which of the following are the three types of NIACAP accreditation?
Site, type, and system

Content-dependent control makes access decisions based on:
The object's data (sensitivity of content)

The term failover refers to:
Switching to a duplicate, "hot" backup component.

A "hot" backup system maintains duplicate states with the primary system.

Primary storage is the:
Memory directly addressable by the CPU, which is for the storage of instructions and data that are associated with the program being executed.

In the Common Criteria, a Protection Profile:
Specifies the security requirements and protections of the products to be evaluated.

Context-dependent control uses which of the following to make decisions?
Subject or object attributes or environmental characteristics

What is a computer bus?
A group of conductors for the addressing of data and control.

Increasing performance in a computer by overlapping the steps of different instructions is called:
Pipelining.
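A rough sketch of why pipelining helps (idealized: it ignores hazards and stalls, and the function names are mine): with k stages, a sequential machine spends k cycles per instruction, while a pipeline fills once and then retires one instruction per cycle.

```python
def cycles_sequential(n_instructions, n_stages):
    # No overlap: every instruction occupies all stages before the next starts.
    return n_instructions * n_stages

def cycles_pipelined(n_instructions, n_stages):
    # The first instruction fills the pipeline; each later one retires
    # one cycle after its predecessor.
    return n_stages + (n_instructions - 1)

print(cycles_sequential(100, 5))  # 500
print(cycles_pipelined(100, 5))   # 104
```

For long instruction streams the pipelined cycle count approaches one cycle per instruction, which is the speedup pipelining is designed to deliver.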

The addressing mode in which an instruction accesses a memory location whose contents are the address of the desired data is called:
Indirect addressing.

The addressing mode in which the address location that is specified in the program instruction contains the address of the final desired location is also called:
Indirect addressing.

Indexed addressing determines the desired memory address by adding the contents of the address defined in the program's instruction to that of an index register.

Registers are usually contained inside the CPU.

Absolute addressing addresses the entire primary memory space.
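The addressing modes above can be illustrated with a toy memory model (the memory contents and function names are hypothetical, chosen only to show the lookups):

```python
# Toy memory: list index = address, list value = contents of that address.
memory = [10, 42, 3, 99, 7, 1, 55, 0]

def direct(addr):
    # Direct/absolute addressing: the instruction's address field
    # points straight at the data.
    return memory[addr]

def indirect(addr):
    # Indirect addressing: the addressed location holds the address
    # of the desired data, so two memory accesses are needed.
    return memory[memory[addr]]

def indexed(addr, index_register):
    # Indexed addressing: effective address = address field + index register.
    return memory[addr + index_register]

print(direct(1))       # 42
print(indirect(2))     # memory[memory[2]] = memory[3] = 99
print(indexed(1, 3))   # memory[1 + 3] = 7
```

The extra memory reference in the indirect case is exactly what the definition means by "a memory location whose contents are the address of the desired data."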

Processes are placed in a ring structure according to:
 Least privilege.

The MULTICS operating system is a classic example of:
A ring protection system.

Multics is based on the ring protection architecture.

What are the hardware, firmware, and software elements of a Trusted Computing Base (TCB) that implement the reference monitor concept called?
A security kernel

The memory hierarchy in a typical digital computer, in order, is:
CPU, cache, primary memory, secondary memory

A processor in which a single instruction specifies more than one CONCURRENT operation is called:
Very Long Instruction Word processor

A superscalar processor performs concurrent execution of multiple instructions in the same pipeline stage.

A scalar processor executes one instruction at a time.

The standard process to certify and accredit U.S. defense critical information systems is called:
DITSCAP

Defense Information Technology Security Certification and Accreditation Process.

The property that states, "Reading or writing is permitted at a particular level of sensitivity, but not to either higher or lower levels of sensitivity," is called the:
Strong * (star) Property

The Discretionary Security Property specifies discretionary access control in the Bell-LaPadula model through the use of an access matrix.

Three major parts of the Common Criteria (CC)?
Introduction and General Model
Security Functional Requirements
Security Assurance Requirements

In the Common Criteria, an implementation-independent statement of security needs for a set of IT security products that could be built is called a:
Protection Profile (PP).

Components of a CC Protection Profile:
a. Target of Evaluation (TOE) description
b. Threats against the product that must be addressed
c.  Security objectives

When microcomputers were first developed, the instruction fetch time was much longer than the instruction execution time because of the relatively slow speed of memory accesses. This situation led to the design of the:
CISC
Complex Instruction Set Computer

The logic was that since it took a long time to fetch an instruction from memory relative to the time required to execute that instruction in the CPU, then the number of instructions required to implement a program should be reduced. This reasoning naturally resulted in densely coded instructions with more decode and execution cycles in the processor. This situation was ameliorated by pipelining the instructions wherein the decode and execution cycles of one instruction would be overlapped in time with the fetch cycle of the next instruction.

RISC evolved when packaging and memory technology advanced to the point where there was not much difference between memory access times and processor execution times. Thus, the objective of the RISC architecture was to reduce the number of cycles required to execute an instruction. This increased the number of instructions in the average program by approximately 30%, but it reduced the average number of cycles per instruction by a factor of four. Essentially, the RISC architecture uses simpler instructions but makes use of other features, such as optimizing compilers, large numbers of general-purpose registers in the processor, and data caches, to reduce the number of instructions required.

A superscalar processor allows concurrent execution of instructions in the same pipeline stage. A scalar processor is defined as a processor that executes one instruction at a time. The term superscalar denotes multiple, concurrent operations performed on scalar values, as opposed to the vectors or arrays used as objects of computation in array processors.
In a very-long-instruction-word (VLIW) processor, multiple concurrent operations are performed in a single instruction. Because multiple operations are performed in one instruction rather than in multiple instructions, the number of instructions is reduced relative to those in a scalar processor.
However, for this approach to be feasible, the operations in each VLIW instruction must be independent of one another.

The main objective of the Java Security Model (JSM) is to: Protect the user from hostile, network mobile code

Components of a general enterprise security architecture model for an organization:
a. Information and resources to ensure the appropriate level of risk management
b. Consideration of all the items that comprise information security, including distributed systems, software, hardware, communications systems, and networks
c. A systematic and unified approach for evaluating the organization's information systems security infrastructure and defining approaches to the implementation and deployment of information security controls.

The Bell-LaPadula model addresses which one of the following items:
Information flow from high to low

Information flow from high to low is addressed by the *-property of the Bell-LaPadula model, which states that a subject cannot write data from a higher level of classification to a lower level of classification. This property is also known as the confinement property or the no-write-down property.
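The two Bell-LaPadula rules can be sketched in a few lines (a minimal illustration; the level names and function names are mine, not part of the model's formal definition):

```python
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level, object_level):
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property (confinement property): no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("Secret", "Confidential"))   # True: reading down is allowed
print(can_write("Secret", "Unclassified"))  # False: would leak high to low
```

The asymmetry is the point: a Secret subject may read Confidential data but may not write into an Unclassified object, which is exactly the high-to-low information flow the model blocks.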


The practical aspects of multilevel security in which, for example, an unclassified paragraph in a Secret document has to be moved to an Unclassified document, the Bell-LaPadula model introduces the concept of a:
Trusted subject

The model permits a trusted subject to violate the *-property but to comply with the intent of the *-property. Thus, a person who is a trusted subject could move unclassified data from a classified document to an unclassified document without violating the intent of the *-property. Another example would be for a trusted subject to downgrade the classification of material when it has been determined that the downgrade would not harm national or organizational security and would not violate the intent of the *-property.

In a refinement of the Bell-LaPadula model, the strong tranquility property states that:
Objects never change their security level

The weak tranquility property states that objects never change their security level in a way that would violate the system security policy.

As an analog of confidentiality labels, integrity labels in the Biba model are assigned according to:

Subjects are assigned classes according to their trustworthiness; objects are assigned integrity labels according to the harm that would be done if the data were modified improperly.

The Clark-Wilson Integrity Model ("A Comparison of Commercial and Military Computer Security Policies," Proceedings of the 1987 IEEE Computer Society Symposium on Research in Security and Privacy, Los Alamitos, CA, IEEE Computer Society Press, 1987) focuses on which two concepts:

Separation of duty and well-formed transactions

The Clark-Wilson Model is focused on the needs of the commercial world and is based on the theory that integrity is more important than confidentiality for commercial organizations. Further, the model incorporates the commercial concepts of separation of duty and well-formed transactions. The well-formed transaction of the model is implemented by the transformation procedure (TP). A TP is defined in the model as the mechanism for transforming the set of constrained data items (CDIs) from one valid state of integrity to another valid state of integrity. The Clark-Wilson Model defines rules for separation of duty that denote the relations between a user, TPs, and the CDIs that can be operated upon by those TPs. The model describes the access triple: the user, the program that is permitted to operate on the data, and the data.
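The access triple can be sketched as a simple lookup table (the user, TP, and CDI names below are hypothetical, chosen only to illustrate the idea):

```python
# Clark-Wilson access triples: (user, transformation procedure, CDI).
# A user may only touch a constrained data item through a TP that the
# triple explicitly authorizes.
triples = {
    ("alice", "post_payment", "accounts_payable"),
    ("bob", "approve_payment", "accounts_payable"),
}

def allowed(user, tp, cdi):
    return (user, tp, cdi) in triples

# Separation of duty: the user who posts a payment is not authorized
# to approve it; a second user must hold that triple.
print(allowed("alice", "post_payment", "accounts_payable"))     # True
print(allowed("alice", "approve_payment", "accounts_payable"))  # False
```

Because access is mediated entirely by the triples, integrity is preserved: no user can manipulate a CDI directly, and no single user holds both halves of a sensitive operation.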

The model that addresses the situation wherein one group is not affected by another group using specific commands is called the:
Non-interference model

In the non-interference model, security policy assertions are defined in the abstract. The process of moving from the abstract to developing conditions that can be applied to the transition functions that operate on the objects is called unwinding.

The secure path between a user and the Trusted Computing Base (TCB) is called:
Trusted path

The Common Criteria terminology for the degree of examination of the product to be tested is:
Evaluation Assurance Level (EAL)

A difference between the Information Technology Security Evaluation Criteria (ITSEC) and the Trusted Computer System Evaluation Criteria (TCSEC) is:
ITSEC addresses integrity and availability as well as confidentiality.

Describe the standards addressed by Title II, Administrative Simplification, of the Health Insurance Portability and Accountability Act:

Transaction Standards, to include Code Sets; Unique Health Identifiers; Security and Electronic Signatures and Privacy.

The principles of Notice, Choice, Access, Security, and Enforcement refer to:
Privacy

Simple security property:
"A user has access to a client company's information, c, if and only if for all other information, o, that the user can read, either x(c) ∉ z(o) or x(c) = x(o), where x(c) is the client's company and z(o) is the set of competitors of x(o)."

Chinese Wall model

Two categories of the policy of separation of duty are:
Dual control and functional separation. Dual control requires that two or more subjects act together simultaneously to authorize an operation. A common example is the requirement that two individuals turn their keys simultaneously in two physically separated areas to arm a weapon. Functional separation implies a sequential approval process, such as requiring the approval of a manager to send a check generated by a subordinate.
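Dual control can be sketched as a check on the number of distinct approvers (an illustrative fragment; the operation and approver names are invented):

```python
def authorize(operation, approvers, required=2):
    # Dual control: at least `required` *distinct* subjects must act
    # together to authorize the operation. Using a set defeats one
    # person submitting the same credential twice.
    if len(set(approvers)) < required:
        raise PermissionError(
            f"{operation} requires {required} distinct approvers"
        )
    return True

print(authorize("arm_weapon", ["officer_a", "officer_b"]))  # True
```

The `set()` call is the essential detail: without it, a single subject could satisfy the count alone, which would defeat the purpose of the control.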

In NIACAP, a type accreditation:
Evaluates an application or system that is distributed to a number of different locations.

What establishes the minimum national standards for certifying and accrediting national security systems?

NIACAP


Serial data transmission in which information can be transmitted in two directions, but only one direction at a time, is called:
Half-duplex

The time required to switch transmission directions in a half-duplex line is called the turnaround time.

Simplex refers to communication that takes place in one direction only. Full-duplex can transmit and receive information in both directions simultaneously. The transmissions can be asynchronous or synchronous. In asynchronous transmission, a start bit is used to indicate the beginning of transmission. The start bit is followed by data bits and then by one or two stop bits to indicate the end of the transmission. Since start and stop bits are sent with every unit of data, the actual data transmission rate is lower, because these "overhead" bits are used for synchronization and do not carry information. In this mode, data is sent only when it is available; the data is not transmitted continuously. In synchronous transmission, the transmitter and receiver have synchronized clocks and the data is sent in a continuous stream. The clocks are synchronized by using transitions in the data; therefore, start and stop bits are not required for each unit of data sent.
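The start/stop-bit framing described above can be sketched as follows (a toy illustration of one common format: one start bit, 8 data bits sent LSB first, one stop bit; real UARTs also offer parity and configurable stop bits):

```python
def frame_byte(value):
    """Asynchronous framing: start bit 0, 8 data bits (LSB first), stop bit 1."""
    start = [0]
    data = [(value >> i) & 1 for i in range(8)]
    stop = [1]
    return start + data + stop

frame = frame_byte(0x41)  # ASCII 'A'
print(frame)              # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(len(frame))         # 10 bits on the wire for 8 bits of data
print(8 / len(frame))     # 0.8 -> 20% of the line rate is framing overhead
```

This makes the overhead claim concrete: with one start and one stop bit per byte, only 8 of every 10 transmitted bits carry information, which is why asynchronous links have a lower effective data rate than synchronous ones.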

The ANSI ASC X12 Standard version 4010 applies to which one of the following HIPAA categories?
Transactions

The transactions addressed by HIPAA are:
Health claims or similar encounter information
Health care payment and remittance advice
Coordination of Benefits
Health claim status
Enrollment and disenrollment in a health plan
Eligibility for a health plan
Health plan premium payments
Referral certification and authorization
The HIPAA EDI transaction standards that address these HIPAA transactions include the following:
Health care claims or coordination of benefits:
Retail drug: NCPDP (National Council for Prescription Drug Programs) v. 32
Dental claim: ASC X12N 837 (dental)
Professional claim: ASC X12N 837 (professional)
Institutional claim: ASC X12N 837 (institutional)
Payment and remittance advice: ASC X12N 835
Health claim status: ASC X12N 276/277
Plan enrollment: ASC X12 834
Plan eligibility: ASC X12 270/271
Plan premium payments: ASC X12 820
Referral certification: ASC X12N 278

The American National Standards Institute was founded in 1917 and is the only source of American Standards. The ANSI Accredited Standards Committee X12 was chartered in 1979 and is responsible for cross-industry standards for electronic documents. The HIPAA privacy standards were finalized in April, 2001, and implementation must be accomplished by April 14, 2003. The privacy rule covers individually identifiable health care information transmitted, stored in electronic or paper form, or communicated orally. Protected health information (PHI) may not be disclosed unless disclosure is approved by the individual, permitted by the legislation, required for treatment, part of health care operations, required by law, or necessary for payment. PHI is defined as individually identifiable health information that is transmitted by electronic media, maintained in any medium described in the definition of electronic media under HIPAA, or is transmitted or maintained in any other form or medium.


A 1999 law that addresses privacy issues related to health care, insurance and finance and that will be implemented by the states is:
Gramm-Leach-Bliley (GLB)

To implement privacy practices on Web sites:
P3P (Platform for Privacy Preferences)

The latest W3C working draft of P3P is P3P 1.0, 28 January 2002 (www.w3.org/TR). An excerpt of the W3C P3P Specification states: "P3P enables Web sites to express their privacy practices in a standard format that can be retrieved automatically and interpreted easily by user agents. P3P user agents will allow users to be informed of site practices (in both machine- and human-readable formats) and to automate decision-making based on these practices when appropriate. Thus users need not read the privacy policies at every site they visit."
With P3P, an organization can post its privacy policy in machine-readable form (XML) on its Web site. This policy statement includes:
Who has access to collected information
The type of information collected
How the information is used
The legal entity making the privacy statement
P3P also supports user agents that allow a user to configure a P3P-enabled Web browser with the user's privacy preferences. Then, when the user attempts to access a Web site, the user agent compares the user's stated preferences with the privacy policy in machine-readable form at the Web site. Access will be granted if the preferences match the policy. Otherwise, either access to the Web site will be blocked or a pop-up window will appear notifying the user that he/she must change their privacy preferences. Usually, this means that the user has to lower his/her privacy threshold.
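The user-agent matching step can be sketched with a simplified model (real P3P policies are XML documents and real preferences use a separate language; the dicts and names below are a stand-in to show the comparison logic only):

```python
# Simplified stand-in for a P3P comparison: the site's declared practices
# and the user's acceptable practices are modeled as sets per category.
site_policy = {"purpose": {"admin", "develop"}, "recipient": {"ours"}}
user_prefs = {"purpose": {"admin", "develop", "current"}, "recipient": {"ours"}}

def access_granted(policy, prefs):
    # Grant access only if every practice the site declares is one the
    # user has said is acceptable (policy set is a subset of prefs set).
    return all(policy[k] <= prefs.get(k, set()) for k in policy)

print(access_granted(site_policy, user_prefs))  # True: policy within prefs
```

If the site declared a practice outside the user's preference sets (say, a "telemarketing" purpose), the subset test would fail and the agent would block access or prompt the user, mirroring the behavior described above.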

What process is used to accomplish high-speed data transfer between a peripheral device and computer memory, bypassing the Central Processing Unit (CPU)?
Direct memory access

A DMA controller essentially takes control of the memory buses and manages the data transfer directly.

An associative memory operates in which way?
It searches for a specific data value in memory.

Direct or absolute addressing mode: Returns values stored in a memory address location specified in the CPU address register.
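The contrast between the two lookups can be made concrete (a toy model; real associative memories perform the match in parallel across all cells in hardware, whereas this sketch scans sequentially):

```python
def associative_search(memory, value):
    """Content-addressable lookup: find every address holding `value`.

    An associative memory is queried by *contents*, not by address;
    hardware does this comparison across all cells simultaneously.
    """
    return [addr for addr, contents in enumerate(memory) if contents == value]

def direct_read(memory, addr):
    # Direct/absolute addressing: queried by *address*.
    return memory[addr]

memory = [7, 42, 7, 13]
print(associative_search(memory, 7))  # [0, 2] -- addresses whose contents match
print(direct_read(memory, 1))         # 42 -- contents at a given address
```

The direction of the query is the whole distinction: direct addressing maps address to contents, while associative memory maps contents to address.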

The following concerns usually apply to what type of architecture?
1. Desktop systems can contain sensitive information that may be at risk of being exposed.
2. Users may generally lack security awareness.
3. Modems present a vulnerability to dial-in attacks.
4. Lack of proper backup may exist.

Distributed

Additional concerns associated with distributed systems include:
1. A desktop PC or workstation can provide an avenue of access into critical information systems of an organization.
2. Downloading data from the Internet increases the risk of infecting corporate systems with a malicious code or an unintentional modification of the databases.
3. A desktop system and its associated disks may not be protected from physical intrusion or theft.

In a centralized system, the characteristics cited do not apply to a central host with no PCs or workstations with large amounts of memory attached. Also, the vulnerability presented by a modem attached to a PC or workstation would not exist.

An open system or architecture is composed of vendor-independent subsystems that have published specifications and interfaces in order to permit operation with the products of other suppliers. One advantage of an open system is that it is subject to review and evaluation by independent parties.


The definition A relatively small amount (when compared to primary memory) of very high speed RAM, which holds the instructions and data from primary memory, that has a high probability of being accessed during the currently executing portion of a programŽ refers to: Cache

The organization that establishes a collaborative partnership of computer incident response, security and law enforcement professionals who work together to handle computer security incidents and to provide both proactive and reactive security services for the U.S. Federal governmentŽ is called:

Federal Computer Incident Response Center

FedCIRC charter, FedCIRC provides assistance
and guidance in incident response and provides a centralized
approach to incident handling across agency boundaries.Ž Specifically,
the mission of FedCIRC is to:

Provide civil agencies with technical information, tools, methods, assistance, and guidance
Be proactive and provide liaison activities and analytical support
Encourage the development of quality products and services through collaborative relationships with Federal civil agencies, the Department of Defense, academia, and private industry
Promote the highest security profile for government information technology (IT) resources
Promote incident response and handling procedural awareness within the federal government

The CERT Coordination Center (CERT/CC) is a unit of the Carnegie Mellon University Software Engineering Institute (SEI). SEI is a Federally funded R&D center. The CERT/CC's mission is to alert the Internet community to vulnerabilities and attacks and to conduct research and training in the areas of computer security, including incident response.

Benefits of employing an incident-handling capability:
a. It enhances internal communications and the readiness of the organization to respond to incidents.
b. It assists an organization in preventing damage from future incidents.
c. Security training personnel would have a better understanding of users' knowledge of security issues.


The primary benefits of employing an incident-handling capability are containing and repairing damage from incidents and preventing future damage. Additional benefits related to establishing an incident handling capability are:
Enhancement of the risk assessment process:
An incident handling capability will allow organizations to collect threat data that may be useful in their risk assessment and safeguard selection processes (e.g., in designing new systems). Statistics on the numbers and types of incidents in the organization can be used in the risk-assessment process as an indication of vulnerabilities and threats.
Enhancement of internal communications and the readiness of the organization to respond to any type of incident, not just computer security incidents. Internal communications will be improved, management will be better organized to receive communications, and contacts within public affairs, legal staff, law enforcement, and other groups will have been pre-established.
Security training personnel will have a better understanding of users' knowledge of security issues. Trainers can use actual incidents to vividly illustrate the importance of computer security. Training that is based on current threats and controls recommended by incident-handling staff provides users with information more specifically directed to their current needs, thereby reducing the risks to the organization from incidents.

Which statement is accurate about Evaluation Assurance Levels (EALs) in the Common Criteria (CC)?
They are predefined packages of assurance components that make up the CC security confidence rating scale.

Operational assurance:
Operational assurance is the process of reviewing an operational system to see that security controls are functioning correctly.

Operational assurance is the process of reviewing an operational system to see that security controls, both automated and manual, are functioning correctly and effectively. Operational assurance addresses whether the system's technical features are being bypassed or have vulnerabilities and whether required procedures are being followed.
To maintain operational assurance, organizations use two basic methods: system audits and monitoring. A system audit is a one-time or periodic event to evaluate security. Monitoring refers to an ongoing activity that examines either the system or the users.

Covert Storage Channel:
An information transfer that involves the direct or indirect writing of a storage location by one process and the direct or indirect reading of the storage location by another process.

A covert storage channel typically involves a finite resource (e.g., sectors on a disk) that is shared by two subjects at different security levels. One way to think of the difference between covert timing channels and covert storage channels is that covert timing channels are essentially memoryless, whereas covert storage channels are not. With a timing channel, the information transmitted from the sender must be sensed by the receiver immediately, or it will be lost. However, an error code indicating a full disk which is exploited to create a storage channel may stay constant for an indefinite amount of time, so a receiving process is not as constrained by time.
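The disk-full example can be sketched as a toy one-bit channel (a deliberately simplified model; the class and function names are mine, and a real channel would contend with noise, timing, and other processes sharing the resource):

```python
# Toy covert storage channel: a shared "disk" whose full/not-full state
# is the stored bit. The high-level sender either fills the disk (bit 1)
# or leaves it empty (bit 0); the low-level receiver learns the bit from
# whether its own write attempt fails with a "disk full" error.

class SharedDisk:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.used = 0

    def write(self, blocks):
        if self.used + blocks > self.capacity:
            raise IOError("disk full")   # error observable by any process
        self.used += blocks

    def free(self):
        self.used = 0

def send_bit(disk, bit):
    disk.free()
    if bit:
        disk.write(disk.capacity)        # fill the disk to signal a 1

def receive_bit(disk):
    try:
        disk.write(1)                    # probe for free space
        disk.used -= 1                   # undo the probe
        return 0
    except IOError:
        return 1

disk = SharedDisk()
message = [1, 0, 1, 1]
received = [receive_bit(disk) for bit in message if send_bit(disk, bit) is None]
print(received)  # [1, 0, 1, 1]
```

Note the property discussed above: the "disk full" state persists until the sender changes it, so the receiver can probe at its leisure, unlike a timing channel where the signal must be sensed immediately or lost.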

Description of a Protection Profile (PP), as defined by the Common Criteria (CC)?
A reusable definition of product security requirements

The Common Criteria (CC) is used in two ways:
a. As a standardized way to describe security requirements for IT products and systems
b. As a sound technical basis for evaluating the security features of these products and systems