Saturday, October 28, 2006

Resume

Adam Norten
451 N. Highview Ave.
Elmhurst, IL 60126
Phone 630-279-8747
graceandglory13@sbcglobal.net

Experience PolyScience Niles, IL 2001-2002
Electrical Engineering Technician
o Aided in and implemented the construction of chillers, re-circulators, and optical designs at an ISO 9000-certified company
o Audited product safety and met UL and CE standards
o Gained knowledge of refrigeration and thermodynamics

Paige Personnel Rosemont, IL 2000-2001
Instrumentation Group Contractor
o Drafted drawings using AutoCAD and Micro Station for use in Chemical Engineering projects at UOP
o Checked purchase orders of engineering equipment against engineering schematics and mechanical flow diagrams

Cendant Mobility Danbury, CT 1999-2000
IT Help Desk Analyst
o Supported Microsoft and company software applications over a network
o Maintained MS Outlook, MS Access, Windows NT and MS Word
o Provided customer service and support

Education
Nova Southeastern University Ft. Lauderdale, FL Ph.D. in Computer Information Systems
Expected graduation: June 2009

Elmhurst College Elmhurst, IL
Master of Science in Computer Network Systems
May 2006 GPA 3.5

DeVry University Addison, IL
Bachelor of Science in Computer Engineering Technology
October 2004 GPA 3.8

Honors Received
o President’s List
o Dean’s List
o Phi Theta Kappa
o Alpha Chi
o Graduated Magna Cum Laude

The Session Token Protocol for Forensics and Traceback

Carrier, B., & Shields, C. (2004). The Session Token Protocol for Forensics and Traceback. ACM Transactions on Information and System Security, 7, 333-362. Retrieved on September 13, 2006, from the ACM Digital Library database.

In this article, Carrier and Shields describe the Session Token Protocol (STOP), a protocol designed to be used by forensic investigators and others to track the identity of an attacker. It is based on, and expands, the Identification Protocol (IDENT).

By using “stepping stones,” an attacker logs on through many host computers before launching an attack, in order to complicate any attempt to learn his identity. To seek a remedy for this problem, the authors started by reviewing two alternative but limited types of network traceback: IP traceback, which attempts to locate the source of spoofed Internet Protocol packets, and network-based connection-chain traceback, which tracks attacks performed over a chain of connections. Neither type is effective if the attacker spoofs the IP address of the victim against a system that responds using a protocol such as DNS, also known as a reflector attack. The authors then described IDENT, the protocol on which STOP is based. IDENT is a two-way protocol used by a server to identify the client side of a network connection. First, a TCP connection is established from a client port to a server port on the host computer. The host computer then connects to the IDENT TCP port on the previous host computer and sends a query containing the port pair of that connection. The previous host computer uses the source IP address and port pair to determine which process held the connection from the client port to the server port; if it can locate a process, it returns a response identifying it. IDENT’s limitations include that it may provide insecure sources with information about the service a user is using, that it can be used to collect e-mail addresses, and that the IDENT daemon does not automatically protect a user’s privacy.
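To make the IDENT exchange concrete, the following is a minimal Python sketch of an RFC 1413-style lookup; the function name and the example addresses are illustrative, not from the paper, and real deployments rarely run identd today.

# Minimal IDENT-style (RFC 1413) lookup sketch. Expect connection refusals
# on hosts that do not run an IDENT daemon.
import socket

def ident_lookup(peer_ip, peer_port, our_port, timeout=5.0):
    """Ask the peer's identd who owns the connection peer_port -> our_port."""
    with socket.create_connection((peer_ip, 113), timeout=timeout) as s:
        # Query is "<port on the queried host> , <port on the querying host>".
        s.sendall(f"{peer_port}, {our_port}\r\n".encode("ascii"))
        reply = s.recv(1024).decode("ascii", errors="replace").strip()
    # A successful reply looks like: "6191, 22 : USERID : UNIX : alice"
    return reply

# Example (assumes the peer actually runs an IDENT daemon):
# print(ident_lookup("192.0.2.10", 6191, 22))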

The authors then introduced the STOP protocol, which adds functionality to what IDENT offers and captures data that is usually missing in forensic investigations. It keeps a log of socket activity, can request that a daemon using the protocol save additional user-level and application-level data, and provides a mechanism for tracing a hacker across many hosts; in fact, STOP works most effectively when it is run on many hosts. While IDENT does not maintain privacy, STOP does so by returning a random token rather than a user’s name. Because IDENT is widely used, STOP has been built to be backward compatible with it. STOP incorporates recursive traceback in an attempt to recover information on the whole pathway of a connection, letting investigators trace an attacker back toward his home system. STOP may not always reveal the entire chain of connections, but it can lower the cost of the forensic investigation required to catch an attacker.
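A minimal Python sketch of the privacy idea described above; the token format, log file name, and record fields are assumptions made for illustration, not the STOP protocol's actual message format.

# Return an opaque random token instead of a username, and keep the
# token -> user mapping in a local log that only an investigator with
# access to this host can resolve later.
import secrets, json, time

TOKEN_LOG = "stop_tokens.jsonl"  # hypothetical local log file

def issue_token(username, src_port, dst_port):
    token = secrets.token_hex(16)
    record = {"token": token, "user": username,
              "src_port": src_port, "dst_port": dst_port,
              "time": time.time()}
    with open(TOKEN_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return token  # this opaque value is what gets sent back to the requester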

Two of the three case studies presented by the authors involved simple and complex process structures; their examples were process trees that could be analyzed by a daemon using the STOP protocol. The protocol was tested on three operating systems: Solaris, OpenBSD, and Linux. Solaris used the KVM library to read kernel memory but used a streams-based design for its in-memory network structures, which made gathering information harder than under the OpenBSD design. OpenBSD, on the other hand, used the KVM library to read the process table and network tables from kernel memory, while Linux used the /proc pseudofile system to get information about processes. In the tests, Linux showed a 973% increase in lookup time and spent 136% more time on a SV lookup than OpenBSD did, because OpenBSD could do more in kernel space while Linux had to use file I/O and scanf() parsing to recover process data.
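As a rough illustration of the kind of /proc lookup the Linux port relies on, the following Python sketch maps a local TCP port to its owning process; it is a simplified example (IPv4 only, needs sufficient privileges), not the authors' daemon.

# Find the socket inode for a local port in /proc/net/tcp, then scan
# /proc/<pid>/fd for the process holding that socket.
import os

def pid_for_local_port(port):
    inode = None
    with open("/proc/net/tcp") as f:
        next(f)  # skip header line
        for line in f:
            fields = line.split()
            local_port = int(fields[1].split(":")[1], 16)
            if local_port == port:
                inode = fields[9]
                break
    if inode is None:
        return None
    target = f"socket:[{inode}]"
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            for fd in os.listdir(fd_dir):
                if os.readlink(f"{fd_dir}/{fd}") == target:
                    return int(pid)
        except OSError:
            continue  # process exited or permission denied
    return None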

Some limitations of STOP are partial deployment, covert channels, its effects on network channels, integrity of saved data and the compromising of STOP daemons. While it is certain that the STOP protocol will not solve all traceback situations, it is a step closer to a solution.

Public-Key Cryptography and Password Protocols

Halevi, S., & Krawczyk, H. (1999). Public-Key Cryptography and Password Protocols.
ACM Transactions on Information and System Security, 2, 230-268. Retrieved on September 13, 2006, from the ACM Digital Library database.

In this article, Halevi and Krawczyk studied the combined use of weak (passwords) and strong (private key for public-key encryption) authentication and key exchange in asymmetric scenarios.

The most basic password use is to send a password from a user to a server in the clear; the server stores a file with the plain password, or an image of it, to validate the password. In remote authentication, such a password can easily be read by an eavesdropper. One of the most basic attacks is password guessing, in which an attacker uses a small dictionary of common passwords. In an off-line attack, the attacker records communications and then uses the dictionary to look for consistent passwords, while in an on-line attack he keeps trying passwords from the dictionary until he finds the correct one. The authors define security for a password-based one-way authentication protocol by describing an attacker who can watch runs of the protocol between the user and the server, prompt new authentication sessions in which he can see all messages sent between the two, intercept messages which he can change or drop, and observe whether the server accepts the authentication or not.
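A minimal Python sketch of the off-line dictionary attack just described, assuming the eavesdropper has captured an unsalted hash of the password (the hash choice and dictionary are illustrative):

import hashlib

def offline_guess(observed_hash_hex, dictionary):
    # Try every word in the dictionary until one is consistent with the image.
    for candidate in dictionary:
        if hashlib.sha256(candidate.encode()).hexdigest() == observed_hash_hex:
            return candidate
    return None

common = ["123456", "password", "letmein", "qwerty"]
image = hashlib.sha256(b"letmein").hexdigest()   # what the eavesdropper captured
print(offline_guess(image, common))              # -> "letmein"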

Several protocols for the asymmetric scenario were then presented, in which the authentication server has a pair of private and public keys and the client uses a password. The first protocol, part of a broader group of protocols called “encrypted challenge-response mechanisms,” was a simple one in which the password was encrypted with the server’s public key and then sent to the server for verification. The authors’ findings showed that additional properties were needed to maintain the security of the protocol. Next, the authors used the challenge-response approach, encrypting the user’s response under the public key of the server to fend off password guessing. They determined that while encrypting the response would appear to be an effective way of preventing password guessing, it was not. The authors stated that since they aimed to achieve a higher level of security for their protocols than semantic security, they chose to use OAEP, a simple encoding of data for use with RSA encryption, to provide protection against strong attacks.
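The following Python sketch, built on the third-party cryptography package, illustrates an encrypted challenge-response exchange in the spirit of the protocols discussed above; it is not the authors' exact construction, and the message encoding is an assumption.

# The server sends a fresh challenge; the client returns the challenge and
# password encrypted under the server's public key with RSA-OAEP, so an
# eavesdropper cannot verify guessed passwords against the transcript.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_pub = server_key.public_key()

# Server side: issue a random challenge.
challenge = os.urandom(16)

# Client side: bind the password to this challenge and encrypt to the server.
password = b"correct horse battery staple"
response = server_pub.encrypt(challenge + b"|" + password, oaep)

# Server side: decrypt and verify both the challenge and the stored password.
chal, _, pwd = server_key.decrypt(response, oaep).partition(b"|")
print(chal == challenge and pwd == password)  # True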

The authors then extended their authentication protocols to authenticate the server to the user and to exchange an authenticated session key, which they indicated provides security needed in many applications. While the protocol does not give perfect forward secrecy, because exposure of the server’s private key reveals the session key, the authors stated that perfect forward secrecy could be achieved by using a Diffie-Hellman exchange with mutual authentication. The authors next suggested giving the user a hashed version of the public key, a so-called “public password,” to be used where clients cannot verify the authenticity of the server’s public key; it extends the human password and serves as a “hand-held certificate” for the public key.
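A small Python sketch of the “public password” idea: hash the server’s public key down to a short string a human can carry and compare. The encoding and length are illustrative assumptions, not the paper’s.

import hashlib, base64

def public_password(server_public_key_bytes, length=10):
    # Short, human-comparable fingerprint of the server's public key.
    digest = hashlib.sha256(server_public_key_bytes).digest()
    return base64.b32encode(digest).decode("ascii")[:length].lower()

# e.g. printed on the back of a bank card; the client recomputes it from the
# key presented during the handshake and asks the user to compare.
print(public_password(b"-----BEGIN PUBLIC KEY----- ...example bytes..."))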

The authors next stated their definition of password protocol security and established the security of their encrypted challenge-response protocol. To do so, they created a model in which they could state and check their security requirements: a “probabilistic game” involving a user, a server, and an intruder who has great but limited power. Each game had security parameters controlling the strength of the cryptographic keys and functions, and a dictionary of passwords. Using this probabilistic game, the authors defined a secure one-way password authentication protocol by first stating that a protocol is syntactically correct when all messages are passed unchanged, and then by defining successful impersonation and authentication. Within this definition, the intruder’s strategy is to keep trying passwords until he succeeds.

Finally, the authors gave an explanation of their public-key encryption method, which can stop ciphertext-verification attacks, in terms of key-generation, probabilistic encryption, and decryption algorithms. The authors noted that users and servers in a password setting have a shared secret, and that all the strong password mechanisms they and others propose use public-key techniques.

Use of Nested Certificates for Efficient, Dynamic, and Trust-Preserving Public Key Infrastructure

Levi, A., Caglayan, M.U., & Koc, C.K. (2004). Use of Nested Certificates for Efficient, Dynamic, and Trust-Preserving Public Key Infrastructure. ACM Transactions on Information and System Security, 7, 21-59. Retrieved September 13, 2006, from the ACM Digital Library database.

The purpose of this article is to present the Nested Public Key Infrastructure (NPKI) model, an alternative to the existing Public Key Infrastructure (PKI). NPKI improves on PKI by providing a more efficient method for verifying certificates for public key distribution.

A PKI is a certificate network designed to enable verifiers to find the right public key for a user by following a path of certificates. Certificates are digitally signed bindings between a public key and attributes such as a name, e-mail address, URL, or authorization of the owner, and they are issued by trusted Certification Authorities (CAs). A verifier uses a certificate by verifying the digital signature of the CA over the certificate and then extracting the public key of the owner. Since a PKI has several CAs and a verifier does not know the public key of each, the verifier must obtain and verify a certificate path in order to locate the public key it is seeking: each certificate along the way is verified to recover the public key of the next CA, which is then used to verify the next certificate. The verifier must trust all CAs in the chain.

In order to avoid having to verify each certificate along a path just to locate a single public key, the authors proposed a new PKI, the nested-certificate-based PKI (NPKI). NPKI uses nested certificates, which are essentially certificates for other certificates, and these can be used to build certificate paths. In NPKI, classical and nested certificates can be used together: Certification Authorities (CAs) issue nested certificates over the certificates already given to their children in the PKI, allowing an existing PKI to transition into an NPKI. In this way, classical certificate paths become nested certificate paths without disturbing the trust relationships or topology already in place. In practice, NPKI yields nested certificate paths in which only the first certificate is verified cryptographically and the rest are verified by fast hashing, thus increasing verification speed.
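The verification difference can be sketched as follows in Python; the helper names (verify_signature, hash_matches, issuer_key) are placeholders rather than an API from the paper. The point is simply that a classical path needs one signature verification per certificate, while a nested path needs one signature verification plus cheap hash comparisons.

def verify_classical_path(path, trusted_ca_key, verify_signature, issuer_key):
    key = trusted_ca_key
    for cert in path:
        if not verify_signature(cert, key):      # expensive public-key operation
            return False
        key = issuer_key(cert)                   # key certified for the next hop
    return True

def verify_nested_path(path, trusted_ca_key, verify_signature, hash_matches):
    if not verify_signature(path[0], trusted_ca_key):   # one expensive operation
        return False
    for nested, subject_cert in zip(path, path[1:]):
        if not hash_matches(nested, subject_cert):      # fast hash comparison
            return False
    return True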

In order to improve verification time, many nested certificates must be issued, resulting in a trade-off between the improvement in verification and the overhead of nested certificate issuance. The authors studied this trade-off with a generic balanced-tree PKI model. They found that, although the improvement is not distributed uniformly, for a 4-level, 20-ary tree-shaped PKI the average verification became between 2.41 and 2.50 times faster while the number of certificates increased by a factor of 3.85. The authors also noted that since nested certificates are issued not for users but for other certificates, the rules for revoking them are different; they propose that two, or possibly even one, revocation mechanisms suffice for nested certificate paths, so NPKI improves on PKI with respect to certificate revocation. They also noted that nested certificate paths are quick enough to be verified by wireless users.

Consistency Analysis of Authorization Hook Placement in the Linux Security Modules Framework

Jaeger, T., Edwards, A., & Zhang, X. (2004). Consistency Analysis of Authorization Hook Placement in the Linux Security Modules Framework. ACM Transactions on Information and System Security, 7, 175-205. Retrieved on September 21, 2006, from the ACM Digital Library database.

In this article, the authors Jaeger, Edwards, and Zhang tried to verify, for Linux users and kernel developers, the correct placement of the hooks inside the Linux kernel that make up the Linux Security Modules (LSM) framework. The authors theorized that because hooks define the kinds of authorizations a module can enforce, including those for security-sensitive operations, consistency in authorizations depends on proper hook placement, making consistency an indicator of correct placement.

Whenever a security-sensitive operation is performed as part of a specific event, a set of LSM hooks must have mediated that operation. While there are benefits to locating the hooks inside the kernel, their location makes the mediation interface harder to see, so the controlled operations and their mapping to policy operations are also harder to see. The authors noted that there is no location inside the kernel, analogous to the system call interface, through which all controlled operations that access security-sensitive data must pass, which makes pinpointing such operations more difficult. In the absence of such a location, the authors sought a model to help identify controlled operations in the kernel, determine the authorization requirements of controlled operations, and compare actual hook authorizations to those requirements.

In arriving at a solution, the authors relied on the fact that LSM authorization hooks are almost always placed correctly, making inconsistencies in authorization a sign of trouble, and on the observation that consistency depends on context. To collect and analyze authorizations, they built a log-generation tool based on run-time analysis of the kernel and static analysis of its source code, and an authorization consistency analysis tool that processes the logs; they also discussed improvements to the overall analysis that could be made using JaBA (a Java static analysis tool) data-flow analysis. By analyzing the output of the logging tool they were able to identify operations that were irregular or unexpected, and in this way found four anomalies that could have been exploited but were corrected with the help of Linux Security Modules users.
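A toy Python sketch of the consistency idea, not the authors' tool: group logged controlled operations by context and flag any authorization that is usually present for an operation but occasionally missing.

from collections import defaultdict

def find_anomalies(log, threshold=0.9):
    """log: iterable of (controlled_op, context, frozenset_of_authorizations)."""
    seen = defaultdict(list)
    for op, ctx, auths in log:
        seen[(op, ctx)].append(auths)
    anomalies = []
    for key, auth_sets in seen.items():
        common = set.intersection(*map(set, auth_sets))
        for expected in set().union(*auth_sets) - common:
            rate = sum(expected in a for a in auth_sets) / len(auth_sets)
            if rate >= threshold:          # usually mediated by this authorization...
                missing = [i for i, a in enumerate(auth_sets) if expected not in a]
                anomalies.append((key, expected, missing))   # ...but not always
    return anomalies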

The Concept of Layered Proving Trees and Its Application to the Automation of Security Protocol Verification

Dojen, R., & Coffey, T. (2005). The Concept of Layered Proving Trees and Its Application to the Automation of Security Protocol Verification. ACM Transactions on Information and System Security, 8, 287-311. Retrieved September 13, 2006, from the ACM Digital Library database.

In this article, Dojen and Coffey presented the theoretical concept of Layered Proving Trees as a way of automating logic-based security protocol verification.

While security protocols are essential to maintaining secure communications, their construction leaves them vulnerable to attack. Verification of security protocols is therefore essential, but the techniques used so far have had limited success. In response, the authors proposed the use of Layered Proving Trees with modal logics. The benefits of Layered Proving Trees include that they have modest resource requirements, allow all decisions to be traced easily after a proof is complete, and allow exhaustive searches for proof traces.

The authors explained the basic structure of a Layered Proving Tree, including its levels, nodes (children and parents), and the links between them, and then presented an algorithm that can be used to construct one. They addressed implementation issues including proof of soundness (only true conclusions are reached) and proof of completeness (all true conclusions are reached), as well as avoiding non-termination, which happens when the construction algorithm never runs out of nodes to expand. Non-termination can be classified into two groups: direct non-termination and indirect non-termination (which arises in combination with other postulates). The authors suggested ways of preventing both types.
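An illustrative Python sketch of the layered construction described above; the data structures and the simplification of taking only the first applicable postulate are assumptions for brevity, not the authors' algorithm. Starting from a goal, each level holds the premises needed to prove the nodes of the level above, and expansion stops when every open node is an initial assumption.

def build_layered_proving_tree(goal, postulates, assumptions, max_levels=50):
    """postulates: dict mapping a conclusion to a list of premise lists."""
    levels = [[goal]]
    for _ in range(max_levels):                  # guard against non-termination
        frontier = [n for n in levels[-1] if n not in assumptions]
        if not frontier:
            return levels                        # proof complete
        next_level = []
        for node in frontier:
            alternatives = postulates.get(node)
            if not alternatives:
                return None                      # node cannot be proven
            next_level.extend(alternatives[0])   # simplified: take first rule only
        levels.append(next_level)
    return None                                  # gave up: possible non-termination

# Tiny example: A follows from B and C; B from D; C and D are assumptions.
postulates = {"A": [["B", "C"]], "B": [["D"]]}
print(build_layered_proving_tree("A", postulates, assumptions={"C", "D"}))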

The authors addressed the logic-specific issues of the Layered Proving Tree in a case study for the previously developed GNY logic. The issues were grammar, unification of terms, the structure of postulates, and termination. The grammar developed had statements as its basic unit; a protocol was defined as a group of statements, each consisting of a principal, an operator, and data. Data could be an atom, a conjunction of data, data encrypted under a symmetric, public, or private key, or the result of a function applied to some data. The typing of data provided the notion of assigning public and private keys to principals and of two principals sharing secret and symmetric keys. The atoms of the grammar were symmetric, public, and private keys, principals, nonces, timestamps, functions, and hash functions.

The authors then built a prototype and tested it against many security protocols, some of which had well-known problems, such as the BCY protocol. The automatic verifications from the prototype corresponded to the manual verifications, including detection of all the known problems in the protocols studied. Performance and scalability issues of the prototype were summarized. The authors’ results showed that a Layered Proving Tree has low memory and computing power requirements.

In conclusion, the Layered Proving Tree approach to modeling the process of logic-based security protocol verification was shown to be effective and correct.

A Framework for Constructing Features and Models for Intrusion Detection Systems

Lee, W., & Stolfo, S.J. (2001). A Framework for Constructing Features and Models for Intrusion Detection Systems. ACM Transactions on Information and System Security, 3, 227-261. Retrieved September 13, 2006, from the ACM Digital Library database.

In this article, Lee and Stolfo described Mining Audit Data for Automated Models for Intrusion Detection (MADAM ID), a framework that uses data mining to compute patterns of intrusion activity and create intrusion detection models, and proposed improvements to make building intrusion detection systems more systematic and automated.

Of the two main intrusion detection techniques, misuse detection focuses on identifying the patterns of known attacks. Because novel attacks do not match known patterns, misuse detection methods are not effective against them. Anomaly detection systems, which flag deviations from normal behavior, can catch novel attacks but are apt to generate a higher rate of false alarms than misuse detection systems. To build Intrusion Detection Systems (IDSs) that address these issues, MADAM ID uses data mining programs over large stores of audit data that are first processed into ASCII network packet records. After the packets are summarized into connection records, data mining programs find recurring patterns and identify essential and non-essential features, and classification algorithms are then used to create intrusion detection models. The authors proposed new tools for the framework, including replacing manually coded intrusion patterns with learned rules, using patterns found in the audit data to select system features, and using meta-learning to build a model that combines evidence from many base models and makes predictions from a number of classifiers. The reasons for using meta-learning were to improve the efficiency of combining intrusion detection models and to improve the accuracy of classification.
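A rough Python sketch of the feature-construction step described above, not the authors' code: packet events are summarized into per-connection records, and a simple statistical feature is computed over a sliding window.

from collections import namedtuple, deque

Packet = namedtuple("Packet", "time src dst service flag")
Connection = namedtuple("Connection", "time src dst service flag same_service_rate")

def build_connection_records(packets, window=100):
    recent = deque(maxlen=window)
    records = []
    for p in packets:            # simplification: one packet == one connection summary
        same = sum(c.service == p.service for c in recent)
        rate = same / len(recent) if recent else 0.0
        rec = Connection(p.time, p.src, p.dst, p.service, p.flag, rate)
        records.append(rec)
        recent.append(rec)
    return records

pkts = [Packet(t, "10.0.0.5", "10.0.0.9", "http", "SF") for t in range(5)]
for r in build_connection_records(pkts):
    print(r.same_service_rate)   # 0.0 for the first record, then 1.0 as history accumulates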

The authors’ experiments demonstrated that anomaly detection models for users could be created using recurring patterns mined from audit data. The patterns could also be used as a guide for choosing statistical features when building classification models. The authors stated that since anomaly detection models are the only means of finding novel intrusions, their future work would be creating algorithms for learning network anomaly detection models. They also indicated that ID models need to consider costs, such as the costs of development, operation, the damage done by an intrusion, and detecting and responding to an intrusion.

Effective Role Administration Model

Oh, S., Sandhu, R., & Zhang, X. (2006). An Effective Role Administration Model Using Organization Structure. ACM Transactions on Information and System Security, 9, 113-137. Retrieved September 13, 2006, from the ACM Digital Library database.

In this article Oh, Sandhu, and Zhang reviewed the Role-Based Access Control (RBAC) model, a model routinely used by enterprises, and proposed their own administrative model, Administrative RBAC ’02 (ARBAC02), which improves on existing RBAC administration.

RBAC administers security by using the organizational concept of roles to determine user access. A later improvement on RBAC, Administrative RBAC ’97 (ARBAC97), sought to allow decentralized administration. ARBAC97 was made of three parts: User-Role Assignment ’97 (URA97), which used user pools and role ranges to decentralize user-role administration; Permission-Role Assignment ’97 (PRA97), which used permission pools and role ranges to decentralize permission-role administration; and Role-Role Assignment ’97 (RRA97), which administered role-role relationships in the role hierarchy. While URA97 sought to decentralize user-role administration, its drawbacks included that it required many steps for a single user-role assignment, and even more for higher destination roles in the role hierarchy; that it allowed redundant role assignments; and that the combined use of user pools, prerequisite rules, and a role hierarchy restricted how user pools could be constructed. While PRA97 was designed to decentralize permission-role administration, it had the same problems as URA97 as well as unwanted flows of permissions.

Unlike ARBAC97, ARBAC02 builds flexible user and permission pools by using the organization structure in role administration. First, users and permissions are assigned to organizational units by human resources and the information technology department. Next, security administrators assign the users and permissions of those organizational units to regular roles. Unlike the top-down method used in ARBAC97, ARBAC02 proposes bottom-up inheritance for permission-role administration: it allots common permissions to lower positions and non-common permissions to higher positions in a permission organization structure (OS-P), which allows senior roles in the role hierarchy to inherit the common permissions.
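A minimal Python sketch of RBAC with a role hierarchy, in the spirit of the models described above; the class and role names are illustrative, not taken from the paper.

# Permissions attach to roles, users attach to roles, and senior roles
# inherit the permissions of the junior roles below them.
class RBAC:
    def __init__(self):
        self.role_perms = {}      # role -> set of permissions
        self.user_roles = {}      # user -> set of roles
        self.juniors = {}         # role -> set of roles it inherits from

    def add_role(self, role, perms=(), inherits=()):
        self.role_perms[role] = set(perms)
        self.juniors[role] = set(inherits)

    def assign_user(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def permissions_of(self, role):
        perms = set(self.role_perms.get(role, ()))
        for junior in self.juniors.get(role, ()):
            perms |= self.permissions_of(junior)          # bottom-up inheritance
        return perms

    def check(self, user, perm):
        return any(perm in self.permissions_of(r) for r in self.user_roles.get(user, ()))

rbac = RBAC()
rbac.add_role("employee", {"read_intranet"})
rbac.add_role("engineer", {"commit_code"}, inherits={"employee"})
rbac.assign_user("alice", "engineer")
print(rbac.check("alice", "read_intranet"))   # True, inherited through the hierarchy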

The authors also illustrated organization structure user pools and permission pools (OS-U/OS-P) in other access control models, such as Access Control Lists (ACLs) and Lattice-Based Access Control (LBAC), where access control decisions are made beyond the control of one individual. An OS-U contains all the users assigned by Human Resources in an organization, while a permission organization structure (OS-P) is a hierarchy of organizational units that serves as a permission pool. Since it is important that permission inheritance travels downward, an OS-P has an inverted tree structure with a maximum organizational unit and only one direct child. An OS-P holds the permissions previously granted by IT personnel within the organization. The authors thereby showed that organization structure user pools and permission pools are a comprehensive solution to security administration across different access control methods.

Access Control

Hengartner, U., & Steenkiste, P. (2005). Access Control to People Location Information. ACM Transactions on Information and System Security, 8, 424-456. Retrieved on September 13, 2006, from the ACM Digital Library database.

In this article, the authors Hengartner and Steenkiste recognized that information about a person’s location needs to be available in a ubiquitous computing environment, but acknowledged that the unauthorized release of such information is a problem. In response, they proposed a model for access control to location information that uses certificates stored in a decentralized, distributed way.

There are two basic types of people location services within a system: those that have information on the location of a person (such as a calendar service that has a person’s schedule), and those that have information on the location of devices (such as cellular telephones and laptop computers). Location policies determine which entities or persons have permission to learn a person’s location information, and for security reasons, only services that utilize access control are granted such location information.

The authors then presented a formal model requiring services to respond to a location request only after a location policy check verifies that the entity making the request has access. For forwarded requests, services need to be certain that the requesting service is trusted. The formalism the authors proposed for their decentralized trust-management architecture uses SPKI/SDSI certificates: requests carry digital certificates containing policy and/or trust statements, and access can be forwarded or delegated to another entity through chains of such statements. The components needed for access control are a client, which submits a request for a person’s location; a mediating location service, which forwards the request or creates a request for a device’s location as permitted by the location policy, and which must check that the request came from a trusted source; and leaf services, which supply location information based on whatever location technology is in use. A certificate repository holds certificates for entities; these are either policy or trust statements (both of which are stored locally in an Access Control List (ACL)) or membership statements.
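A toy Python sketch of the chain check described above; the statement representation is a simplification for illustration, not the SPKI/SDSI certificate format. A request succeeds if a chain of policy/trust statements connects the located person's policy to the requesting entity.

def chain_authorizes(statements, owner, requester):
    """statements: set of (issuer, subject) pairs, each meaning
    'issuer grants subject access to (or trust about) owner's location'."""
    granted, frontier = set(), {owner}
    while frontier:
        issuer = frontier.pop()
        for iss, subject in statements:
            if iss == issuer and subject not in granted:
                granted.add(subject)
                frontier.add(subject)   # delegation: subject may extend the chain
    return requester in granted

policy = {("alice", "calendar-service"), ("calendar-service", "bob")}
print(chain_authorizes(policy, "alice", "bob"))   # True, via delegation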

The authors showed, through a prototype that used RSA- and DSA-based signature generation, that compared to the cost of setting up a secure connection, the cost of an ACL check is small. Their findings with ACLs showed that DSA-based signatures were 41% less expensive than RSA-based signatures, so for clients with limited resources they suggested using DSA rather than RSA for signing operations (in conjunction with key caching). In their experiments, the cost of proving that a service is trusted was similar to the cost of verifying a person’s digital signature; this cost can be reduced by caching and lowered further when queries are made multiple times. With caching, DSA performance improved by 22% and RSA performance by 43%.

In conclusion, by analyzing the access control needs of a people-location system, the authors showed that their design has the following advantages: users or a central authority can create the policies; certificates do not need to be stored centrally, which avoids bottlenecks; the system needs to be shown digital certificates rather than the identity of whoever issued a query; an entire group can be given access; and access control can be delegated.

Battery Power

Chandramouli, R., Bapatla, S., & Subbalakshmi, K.P. (2006). Battery Power-Aware Encryption. ACM Transactions on Information and System Security, 9, 162-180. Retrieved on September 13, 2006, from the ACM Digital Library database.

Because the battery power of wireless devices is limited, they are vulnerable to attacks such as brute-force cryptanalysis when their security parameters cannot be sustained on low battery power. The authors’ goal is to model and measure the power usage of cryptographic algorithms in order to identify, and thus minimize, such security risks.

The stream and block ciphers the authors used for this paper were DES (a 64-bit symmetric block cipher), IDEA (a 64-bit plaintext block cipher), GOST (a 64-bit block encryption algorithm), and RC4 (a variable key-size stream cipher with a key stream independent of the plaintext). For the experimental hardware used to gather power consumption data, the authors used a laptop running a version of OProfile for Red Hat Linux 2.4.8, adapted to monitor power values for different functions. The power used by the laptop to run the encryption and decryption algorithms on ten random plaintext data sets was measured as a function of the power going into the laptop, with the power consumption value calculated as the product of the current and voltage drawn. The profiled data showed that power consumption changed linearly with the number of rounds in the DES, IDEA, and GOST encryption algorithms. (When the two-part operation of substitution and permutation is applied once with a key, this is termed a “round.”) The rate of change of power with respect to the number of rounds was largest for IDEA and smallest for GOST, and the power consumption of RC4 varied non-linearly with the length of the key.
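The reported linear relationship can be illustrated with a small least-squares fit of power against the number of rounds; the sample values below are made-up placeholders, not the authors' measurements.

def fit_linear(samples):
    # Ordinary least squares for power = a * rounds + b.
    n = len(samples)
    sx = sum(r for r, _ in samples)
    sy = sum(p for _, p in samples)
    sxx = sum(r * r for r, _ in samples)
    sxy = sum(r * p for r, p in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope: power per extra round
    b = (sy - a * sx) / n
    return a, b

samples = [(4, 1.9), (8, 3.8), (12, 5.7), (16, 7.6)]   # hypothetical (rounds, watts)
print(fit_linear(samples))   # slope 0.475, intercept 0.0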

The authors observed that, although block ciphers are widely used, no construction of them offers unconditional security. To assess the effectiveness of an encryption algorithm, they proposed subjecting it to a cryptanalysis attack, such as a brute-force attack in which all possible encryption keys are tested. Since the number of rounds and the key length affect the power consumed, a measure of security can be obtained by comparing how vulnerable a block cipher is to such an attack. Considering a linear attack on the DES algorithm, the authors defined the vulnerability of a cipher as the ratio of the maximum possible number of block-length plaintexts for such an attack to the number of plaintexts required by the cryptanalysis algorithm.

To optimally allocate battery power across a given number of data packets, each with different security requirements, without exceeding the power available, they proposed optimization formulation 1 and derived an algorithm they called GreedyAlloc_Power. To determine the optimal allocation when a relationship between plaintext-ciphertext pairs and cryptanalysis success rate is unavailable, they proposed optimization formulation 2 and derived an algorithm they called GreedyAlloc_Round. Using GreedyAlloc_Power, the authors found that allocating power equally to all packets regardless of vulnerability was inefficient, and using GreedyAlloc_Round, that a cryptanalyst needed a factor of 2^8 more plaintexts than under equal resource allocation.
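An illustrative greedy allocation sketch in Python, inspired by the description above but not the paper's GreedyAlloc algorithms: spend the power budget one unit at a time on whichever packet currently gains the most security per unit of power.

def greedy_allocate(packets, budget, unit=1.0):
    """packets: list of dicts with 'min_power', 'max_power', and 'gain', where
    gain(power) returns the security benefit of giving that much power to the packet."""
    alloc = [p["min_power"] for p in packets]        # every packet gets its minimum
    budget -= sum(alloc)
    assert budget >= 0, "budget cannot cover minimum requirements"
    while budget >= unit:
        best, best_gain = None, 0.0
        for i, p in enumerate(packets):
            if alloc[i] + unit <= p["max_power"]:
                gain = p["gain"](alloc[i] + unit) - p["gain"](alloc[i])
                if gain > best_gain:
                    best, best_gain = i, gain
        if best is None:
            break                                     # no packet can absorb more power
        alloc[best] += unit
        budget -= unit
    return alloc

pkts = [{"min_power": 1, "max_power": 6, "gain": lambda x: 2 * x},      # high-value packet
        {"min_power": 1, "max_power": 6, "gain": lambda x: 0.5 * x}]    # low-value packet
print(greedy_allocate(pkts, budget=8))   # most of the slack goes to the first packet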

In conclusion, the authors theorized that by using the algorithms they proposed, security provided by encryption algorithms can be optimized within the power limitations of a battery-powered device.

Annotated Bibliography

Park, J.S., Ahn, Gail-Joon, & Sandhu, R. (2001). Role-Based Access Control on the Web. ACM Transactions on Information and System Security, 4, 37-71. Retrieved on September 13, 2006, from the ACM Digital Library database.

In this article, authors Park, Ahn, and Sandhu observed that current access control mechanisms on Web servers, which are based on a user’s identity, are inadequate for enterprise-wide systems. In response, they proposed the use of Role-Based Access Control (RBAC) in large-scale Web environments, together with user-pull and server-pull architectures, secure cookies, and smart certificates.

Access control methods currently used on Web servers tend to rely on an individual’s identity, an approach that does not scale to enterprise-wide systems. Instead, the authors proposed the use of RBAC to manage and enforce security in such environments. With RBAC, permissions are associated with roles and users are assigned to roles; a role forms the basis of an access control policy. In RBAC, administrators can create roles, grant permissions to them, and assign users to the roles based on their job responsibilities. Users can create sessions in which they activate a subset of the roles to which they belong; each session can be assigned many roles, but it maps to only one user. In this way, RBAC guarantees that only authorized users can access certain data or resources. It also supports information hiding, least privilege, and separation of duties.

In order to manage role-based access control (RBAC) in Web environments, the authors proposed user-pull and server-pull architectures for obtaining roles. In user-pull, a user pulls his roles from the role server and then presents them to the Web servers; in server-pull, each Web server pulls the user’s roles from the role server. In the user-pull architecture, the binding of roles to identification for every user must be supported. There are three main components in both architectures: a Web server, a role server, and a client. The role server holds user-role assignment (URA) information for the domain, and the Web server has a permission-role assignment (PRA) table that states the roles required for each resource on that server.
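A tiny Python sketch of the user-pull flow described above; the tables and component functions are illustrative stand-ins for the role server and Web server, not the authors' implementation.

URA = {"alice": {"engineer", "employee"}}               # role server's table
PRA = {"/reports/quarterly": {"manager"},               # Web server's table
       "/wiki": {"employee"}}

def role_server_pull(user):                 # user-pull step: the client fetches its roles
    return URA.get(user, set())

def web_server_check(resource, presented_roles):        # enforcement at the Web server
    required = PRA.get(resource, set())
    return bool(required & presented_roles)

roles = role_server_pull("alice")
print(web_server_check("/wiki", roles))                  # True
print(web_server_check("/reports/quarterly", roles))     # False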

Another contribution of the paper is the concept of secure cookies as a way of carrying information between the browser and the Web server. Ordinary cookies are insecure because they are transmitted in clear text. The authors described how to turn regular cookies, which have no security, into secure cookies that resist cookie harvesting, network threats, and end-system threats. The use of secure cookies is transparent to users and can be applied to existing browsers and Web servers. Secure cookies use cryptographic technologies to provide integrity, authentication (authentication services verify who owns the cookies), and confidentiality. There are two types of cryptographic technologies for secure cookies: public-key based and secret-key based. The authors implemented a public-key-based solution using a Pretty Good Privacy (PGP) package with Common Gateway Interface (CGI) scripts. Secure cookies only support user-pull, since the cookies are stored on the user’s machine; LDAP and smart certificates, on the other hand, support both user-pull and server-pull architectures.
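A Python sketch of the secure-cookie idea; for brevity it uses the secret-key (HMAC) variant rather than the public-key/PGP approach the authors implemented, so the key name and encoding are assumptions.

# An HMAC over the cookie body gives integrity: a tampered cookie is rejected.
import hmac, hashlib, base64

SERVER_SECRET = b"replace-with-a-real-server-secret"    # hypothetical key

def make_secure_cookie(name, value):
    body = f"{name}={value}".encode()
    tag = hmac.new(SERVER_SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + tag

def verify_secure_cookie(cookie):
    body_b64, _, tag = cookie.rpartition(".")
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SERVER_SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() if hmac.compare_digest(tag, expected) else None

c = make_secure_cookie("role", "engineer")
print(verify_secure_cookie(c))          # "role=engineer"
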
Authors Park and Sandhu presented the concept of smart certificates in earlier work in 1999, and in this article the authors described an RBAC implementation using smart certificates in a user-pull scenario. The basic idea of X.509 certificates, the predecessor to smart certificates, is to bind users to keys. Smart certificates are extended X.509 certificates for the Web and RBAC that support both user-pull and server-pull architectures. They can support several certification authorities (CAs) without losing ease of maintenance, contain attributes, provide postdated and renewable certificates, and preserve confidentiality.

To demonstrate the use of their ideas, the authors implemented each architecture by using well known technologies (i.e. X.509, cookies, SSL and Lightweight Directory Access Protocol (LDAP)) that could be used in conjunction with Web technologies. The authors discussed the use of RBAC on the Web using different technologies on different architectures, and compared the tradeoffs of different approaches on the basis of their experiences.

The authors proposed that successfully combining RBAC and the Web can make a huge impact on the deployment of effective enterprise-wide security in large-scale systems, and believe that their contributions in this paper were important in giving strong security management based on users’ roles on the Web.