Friday, December 16, 2005

Network Project

Table of Contents

I. Introduction
    A. Scenario
    B. Functionality Requirements
        i. Windows
        ii. LINUX
    C. Services to Consider

II. Analysis
    A. Project Proposal
        i. Windows (Windows 2003 Server)
            1. Active Directory, Primary DNS, and WINS Server (IP 10.207.32.4) (Jeff H., Jeff S., & Adam)
            2. Terminal Server (IP 10.207.32.2) (Mustapha, Adam, & Jeff H.)
            3. DHCP Services and DHCP Relay Services (IP 10.207.32.3) (Tarik & Jeff H.)
            4. Router (IP 10.207.32.1 and 10.207.33.1) (Jeff H.)
        ii. LINUX (Red Hat Server)
            1. Secondary DNS and DHCP Services (IP 10.207.33.2) (Mustapha & Jeff H.)
            2. SAMBA, NFS, and CUPS Services (IP 10.207.33.3) (Steve, Adam, Jeff H. & Jeff S.)
            3. APACHE and FTP Services (IP 10.207.33.5) (Steve)
            4. SMTP & POP3 (IP 10.207.33.4) (Adam)

III. Conclusion

IV. References


I. Introduction

For its CS 545 Group Project, the Class of Fall 2005, consisting of Mustapha Aitzemkour, Steve Ehrlich, Jeff Heiden, Adam Norten, and Jeff Sarris, was assigned the task of developing a computer network for a small hypothetical company. The scenario and network requirements were defined by Professor Ali, and the class members distributed the network-building tasks among themselves. This paper recounts the project scenario, definitions, and parameters. It also presents the basic working definitions used by class participants in completing the project, explains the software and programs used, and summarizes the work completed by the class members.

A. Scenario

A small company called Manamana, which likes to have a mix of Windows XP and LINUX servers, has hired the Elmhurst College CS 545 Class of Fall 2005 as consultants. The company has a variety of computer equipment running Windows XP, a number of hubs and a few class A IPs. It utilizes a number of laser printers connected to each of its networks. The company would like the class to set up a LAN/WAN such that its current resources can be more efficiently utilized.

Manamana hired the class as its primary network consultant on a contract basis. Our assignment is to make the appropriate proposal, provide detailed configuration recommendations and support our recommendations by implementing them. At the current time cost is not an issue.

B. Functionality Requirements

i. Windows

1. The computer network consists of an ADS Windows 2003 Domain and sub-domains.
2. All client computers use a Dynamic Host Configuration Protocol (DHCP) server to obtain Internet Protocol (IP) addresses.
3. We are using the internal Windows Internet Name Service (WINS) to resolve host names.
4. We have configured the Terminal Server, so that users and administrators can remotely use and manage the network resources.
5. The users on the two networks (LINUX network and Microsoft Windows Network) have access to each other’s printers.
6. All users on the LINUX network are able to access the file and print services of the Windows 2003 network, and vice versa.
7. The Administrator is tired of reconfiguring the desktop settings on each workstation every morning. He wants us to enforce a consistent policy across all workstations such that users, except the Administrator, should not be able to save their Desktop changes.
8. Manamana wants us to make sure that internal networks are protected from security threats. The company owns a few Class C IPs, but is using Class A IPs that it does not own for its internal networks. We are to make sure these addresses are not broadcast to the outside.
9. We will create a few Organizational Units (OUs) in each domain and restrict them from accessing particular applications by implementing group policies.
10. We will use roaming profiles to allow users to access different workstations.

ii. LINUX

1. We will implement both a primary and a secondary Domain Name System (DNS) server to resolve host names and domain names.
2. All client computers have access to a printer.
3. All client computers have access to the file services of a central server via the Network File System (NFS) or Samba.
4. All client computers have access to e-mail; both sending (SMTP) and receiving (POP3), as well as having aliases and virtual accounts configured for users and groups.
5. There is File Transfer Protocol (FTP) access for an administrator.

C. Services To Consider

- Dynamic Host Configuration Protocol (DHCP)
- Domain Name System (DNS) (primary and secondary / forward zone, reverse zone, cache zone)
- Printing services (CUPS)
- Network File System (NFS)
- Samba
- Simple Mail Transfer Protocol (SMTP)
- Post Office Protocol version 3 (POP3), configured using software called Webmin
- Apache for WWW, set up and configured using Webmin
- WU-FTP
NOTE:
o Every user should have an account on the system (e.g., an email address) based on the first initial + last name schema, such that a user by the name of Jeff Heiden has the account jheiden.


II. Analysis

A. Project Proposal



i. Windows (Windows 2003 Server)

1. Active Directory, Primary DNS, and WINS Server (IP 10.207.32.4) (Jeff H., Jeff S., & Adam)

Per the project requirements, the class was to implement a primary DNS server, an Active Directory server, and a WINS server.
Before describing the work done on this portion of the project, the following is a brief explanation of the terms “Domain Name System” and “Windows Internet Name Service.” The Domain Name System (DNS) is a database that contains information about all computers in a TCP/IP network. It helps Internet users access the Internet more easily by allowing them to specify meaningful names for web sites and/or other users with whom they want to communicate. When computers talk to each other via the Internet, they use the Internet Protocol (IP), which distinguishes hosts from one another by IP address. Therefore, a DNS server is needed by software applications to convert human-readable names into machine-readable names (IP addresses) and provide the end user with an easier way to communicate via the Internet.
Each computer name consists of a sequence of alphanumeric segments separated by periods. For example, a computer name might be eccnsdns.eccns.local. A computer name is also called a “domain name,” and domain names are hierarchical, with the most significant part of the name on the right. The left-most segment of a name (eccnsdns in the example) is the name of an individual computer. The other segments of the full name identify the group that owns the individual name. In the example, the individual name eccnsdns belongs to the eccns group of names, which itself belongs to the local group.
To obtain access to a domain (eccns.local in our case), DNS helps clients locate a domain controller, which “stores the objects for the domain in which it is installed” and “accepts account logons and initiates their authentication…and controls access to network resources.” When a client wants to log on to a domain, it sends a query to the DNS server designated in the client’s TCP/IP configuration. The DNS server, which stores information about all domain controllers available in the domain through constant messages containing the availability status of these controllers, then replies by directing the client to the appropriate domain controller.
The domain controller (DC) (eccnsdns in our domain) is automatically created as a global catalog server. A Global Catalog server stores the objects and their attributes from all domains in the forest. It contains its own full, writable domain replica (all objects and all attributes) plus a partial, read-only replica of every other domain in the forest. It is built and updated automatically by the Active Directory replication system. The object attributes that are copied to global catalog servers are the attributes that are most likely to be used to search for the object in Active Directory Service (ADS).
The main role of the global catalog is to make it possible for clients to search the Active Directory without having to be referred from server to server until a domain controller that has the domain directory partition storing the requested object is found. Hence, all clients’ requests to the DNS are actually directed by the ADS to the global catalog servers.
To provide fault tolerance in eccns.local, we assigned a secondary (backup) DNS server (dns2). The secondary DNS stores the contents of the zone file located in the primary DNS. Both servers (eccnsdns and dns2) are synchronized through their zone files, enabling the secondary DNS server to perform name resolution and to be available for clients in case the primary DNS server fails to respond.
Windows Internet Name Service (WINS) provides the equivalent of a DNS server for the NetBIOS namespace in that it resolves NetBIOS names into IP addresses by using the WINS dynamic database to call the exact name records. It offers a distributed database for registering and querying dynamic mappings of NetBIOS names for computers and objects on a network.
In the following example scenario:
1. HOST-A, a WINS client, registers any of its local NetBIOS names with its configured WINS server, WINS-A.
2. HOST-B, another WINS client, queries WINS-A to locate the IP address for HOST-A on the network.
3. WINS-A replies with the IP address for HOST-A.
WINS reduces the use of local IP broadcasts for NetBIOS name resolution and enables users to locate systems on remote networks easily. Because WINS registrations are done automatically each time clients start and join the network, the WINS database is automatically updated when dynamic address configuration changes are made. For example, when a DHCP server issues a new or changed IP address to a WINS-enabled client computer, WINS information for the client is updated. This requires no manual changes to be made by either a user or network administrator.
Since we have designed our network to host two operating systems (LINUX and Windows), any client that is not a Windows client will use the WINS server designated in its TCP/IP configuration for any NetBIOS name queries.
Installation & Configuration

Before configuring these services, Windows Server 2003 had to be installed and configured. The IP address of the server (10.207.32.4) was statically entered. This is important since this is a server on the network, but even more important because it is also the Primary DNS server. Clients must be able to communicate with all the servers in the network at all times. If a server has a dynamic IP address, the clients will not know that address at all times. The DNS server can track any changes, but obviously this DNS Server address then has to be known to the entire network and cannot change. When setting up a network, dynamic IP addressing should be utilized only for the client PCs. Since this server will also be the primary DNS server, the DNS setting is set to the same as the IP address (10.207.32.4).

After completion of the Windows Server 2003 installation and base configuration, the services are then installed beginning with Active Directory. The reason that this server is a DNS server along with Active Directory is because Active Directory requires DNS to also be installed in order to function properly. The Active Directory service is very closely tied to the DNS service. At the beginning of the installation for Active Directory, the prompt comes up informing of this requirement, so DNS is installed prior to Active Directory. The domain that was chosen is eccns.local in order to distinguish our internal network from any external domain such as a .com address. This is not a requirement, but a good practice to follow when creating an Active Directory domain. After these services are installed, so is WINS.

Configuration begins with DNS. First, a forward lookup zone is created, which is the zone most used for DNS queries. The forward lookup is used to resolve an easy-to-remember domain name to an IP address. The reverse lookup zone, which also needs to be created, is used for resolving an IP address to a domain name. After adding the zones, the records must then be added. When adding records to the forward zone, the corresponding reverse zone entry can be created automatically. Once all the records are added, DNS can be tested by using nslookup, as shown below. An nslookup of a domain name (for example eccnsdns.eccns.local) should resolve to an IP address (10.207.32.4) and, vice versa, an nslookup of an IP should resolve to a domain name. Once this works, the DNS server is configured properly.
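For example, both directions can be checked from a command prompt on any machine that points at this DNS server:

nslookup eccnsdns.eccns.local     (should return 10.207.32.4)
nslookup 10.207.32.4              (should return eccnsdns.eccns.local)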

Next, Active Directory is configured. Users were added to Active Directory following a previously determined naming convention: first initial + last name. Once users are added, the next task is to start joining servers and workstations to the domain. At each Windows-based computer, the domain name must be set under System Properties. By default the workgroup is set to WORKGROUP, but instead of the workgroup setting, the domain name (eccns.local) must be input into the second box. A valid login is required in order to join the domain. Once the computer is joined to the domain, the Domain Controller can resolve all future logins on that computer. At this point only logins are resolved; however, once roaming profiles are added, one can log in to one's personal desktop from any location on the network, as the roaming profile stores all this information on the server.

In order to join the Linux servers to Active Directory, the following steps had to be taken:

1. Stop the winbind and samba services:
/etc/init.d/smb stop
/etc/init.d/winbind stop
2. Edit the Kerberos configuration file, /etc/krb5.conf, so that it contains the following:

[libdefaults]
    default_realm = ECCNS.LOCAL

[realms]
    ECCNS.LOCAL = {
        kdc = eccnsdns.eccns.local
        default_domain = ECCNS.LOCAL
        kpasswd_server = eccnsdns.eccns.local
        admin_server = eccnsdns.eccns.local
    }

[domain_realm]
    .eccns.local = ECCNS.LOCAL
3. Edit the Samba configuration file, /etc/samba/smb.conf, so that it contains the following settings:

    workgroup = server
    security = ads
    realm = ECCNS.LOCAL
    encrypt passwords = yes
    username map = /etc/samba/smbusers
    winbind uid = 10000-20000
    winbind gid = 10000-20000
    winbind use default domain = yes
    winbind enum users = yes
    winbind enum groups = yes
4. Join the domain
net ads join -U administrator -S eccnsdns
5. Restart both the winbind and samba services
/etc/init.d/smb start
/etc/init.d/winbind start
6. Test the join with the following command:
/usr/bin/wbinfo -g
All the groups in the Active Directory structure should be displayed.
Once the Linux server has joined the domain, services, such as Samba, will authenticate any attempted logins to the domain. This allows for a user to be added at a central location and then be able to access, for example, file and print service from the Linux network.
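A few quick checks, in addition to wbinfo -g above, can confirm that the join is healthy (assuming the standard Samba and Kerberos client packages are installed, as they were on this server):

/usr/bin/wbinfo -u                  (lists the user accounts pulled from Active Directory)
/usr/bin/wbinfo -t                  (verifies the trust secret between the server and the domain)
kinit administrator@ECCNS.LOCAL     (confirms that a Kerberos ticket can be obtained from eccnsdns)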

Controlling the Active Directory Environment

There are many ways to accomplish different kinds of control within Active Directory. The tool for exerting most of these controls is Group Policy, the successor to NT 4’s System Policy.

With Group Policy, the administrator of the server can control a wide variety of features like allowing the user to have the “Run” command in their Start menu. Other options include controlling how a user’s desktop appears, providing a particular website within their Favorites menu inside Internet Explorer, or controlling the system time on a PC (by default, normal users cannot change the time on their workstations) as well as controlling other security features.

In our scenario, we were presented with the task of eliminating the administrative burden of reconfiguring desktops for non-administrative staff by not allowing users to save their desktop settings. The way we chose to accomplish this was to create a group policy that prevents such activity and apply it to an Organizational Unit (OU). After creating the OU shown in Figure 1, called “Domain Users,” the OU was populated with user profiles.


Figure 1.

Using the Group Policy Editor, a snap-in module used in the Microsoft Management Console for Active Directory, the policy called “No changes,” was created. See Figure 2.



Figure 2.

At this point, we attached the group policy to the OU, see Figure 3.



Figure 3.

One important step we discovered was the need to create an OU, or use an existing OU, of which the Administrator or an Administrator equivalent is not a member. If an Administrator creates a group policy in the same OU to which he/she belongs, the policy may be set against all other users and the Administrator alike, thereby forcing the Administrator to reinstall Active Directory in order to remedy the problem.

Another type of control that can be established is by using roaming profiles. Roaming profiles are used to preserve a user's configuration (desktop, background, etc) and present the user with an identical environment on any computer onto which the user logs. This is done by storing the user’s profile in a central location (in this case, Active Directory), as opposed to the traditional user profile that is stored on the local device. By storing this information in a central location, the information is always the same as long as the user logs onto the same Active Directory environment.

One of the required features of Manamana’s network design was the implementation of roaming profiles on user profiles to allow them to access different workstations. The roaming profile is used to provide the same look and feel to a user’s login profile no matter which machine he/she is logged into within the Active Directory environment.

Roaming profiles are created by pointing the user profile path to a shared location. Therefore, a shared location must be created. See Figure 4.



Figure 4.


After creating the shared location, the profile is modified to accommodate the shared location, as depicted in Figure 5.




Figure 5.
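For illustration, the profile path entered on each account's Profile tab generally takes the form shown below; the share name "profiles" is an assumption for this example, and %username% is expanded automatically for each user:

\\eccnsdns\profiles\%username%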


After the above had been completed, the final requirement was to give share permissions to the users so that the Group Policy can be accessed and modified by the user if required. Figure 6 shows the security permissions.



Figure 6.

Regardless of which machine a user is logged onto in the domain, he/she can have all of the shortcuts and desktop settings that are viewable on any other desktop. Roaming profiles are a very good choice for consistency, but the user must have the correct permissions to access the shared location; otherwise, the roaming profile will not work and the user will be logged on with a default profile.

2. Terminal Server (IP 10.207.32.2) (Mustapha)

In this portion of the project, we were required to set up Terminal Server for Manamana.
Microsoft Windows Terminal Services was introduced for Microsoft Windows NT with a separate Terminal Server Edition. Beginning with Microsoft Windows 2000, and continuing on to Microsoft Windows 2003, it became a fully integrated part of all Windows servers.
Terminal Server provides an effective and reliable way to distribute Windows-based programs with a network server. With Terminal Server, a single point of installation allows multiple users to access the desktop on a server running one of the Windows Server 2003 family operating systems. Users can run programs, save files, and use network resources as if they were sitting at that computer.
You can use Terminal Services Manager to manage and monitor users, sessions, and processes on any terminal server on the network. It can also be used to:
• Display information about servers, sessions, users, and processes.
• Connect to and disconnect from sessions.
• Monitor sessions.
• Reset sessions.
• Send messages to users.
• Log off users
• Terminate processes.
This is how it works. The service provides remote access to a Windows desktop through "thin client" software, allowing the client computer to serve as a terminal emulator. Terminal Services transmits only the user interface of the program to the client. The client then returns keyboard and mouse input to be processed by the server. Each user logs on and sees only their individual session, which is managed transparently by the server operating system and is independent of any other client session. Client software can run on a number of client hardware devices, including computers and Windows-based terminals. Other devices, such as Macintosh computers or UNIX-based workstations, can use additional third-party software to connect to a server running Terminal Server.
One of the many benefits of using Terminal Server is centralized deployment of programs. With Terminal Server, all program execution, data processing, and data storage occur on the server, centralizing the deployment of programs. Terminal Server ensures that all clients can access the same version of a program. Software is installed only once on the server, rather than on every desktop throughout the organization, reducing the costs associated with updating individual computers.
Another benefit from using Terminal Server is improved speed. Terminal Server brings Windows Server 2003 family operating systems to desktops faster. Terminal Services helps bridge the gap while older desktops are migrated to Microsoft® Windows® XP Professional, providing a virtual desktop experience of any Windows Server 2003 family operating system to computers that are running earlier versions of Windows.
Terminal Services clients are available for many different desktop platforms including Microsoft MS-DOS, Windows-based terminals, Macintosh, and UNIX. Additionally, a Web-based version of the Terminal Services client (Remote Desktop Web Connection) provides Terminal Services connectivity to computers with Web access and an Internet Explorer browser. (Connectivity for MS-DOS, Macintosh, and UNIX-based computers requires additional software.)
Terminal Services also takes full advantage of existing hardware. Terminal Services extends the model of distributed computing by allowing computers to operate as both thin clients and full-featured personal computers simultaneously. Computers can continue to be used as they have been within existing networks while also functioning as thin clients capable of emulating the Windows XP Professional desktop.
One of the functionality requirements Manamana presented was that we configure a Terminal Server so that users and administrators can use and manage the network resources remotely. Remote Desktop for Administration (formerly known as Terminal Services in Remote Administration mode) provides remote access to the desktop of any computer running one of the Windows Server 2003 family operating systems, allowing you to administer your server—even a Microsoft® Windows 2000 server—from virtually any computer on the network. Up to two remote sessions, plus the console session, can be accessed simultaneously. Terminal Server licensing is not required to use this feature.
The following is information on some additional features Terminal Services provides:
• You can configure new connections for Terminal Services, modify the settings of existing connections, and delete connections by using the Terminal Services Configuration tool (TSCC.msc) or Group Policy (gpedit.msc).
• By default, Terminal Services connections are encrypted at the highest level of security available (128-bit).
• You can monitor the actions of a client logged on to a terminal server by remotely controlling the user's session from another session. Remote control allows you to either observe or actively control another session. If you choose to actively control a session, you will be able to input keyboard and mouse actions to the session. A message can be displayed on the client session asking permission to view or take part in the session before the session is remotely controlled. You can use Terminal Services Group Policies or Terminal Services Configuration to configure remote control settings for a connection, and Terminal Services Manager to initiate remote control on a client session. Windows Server 2003 family operating systems also support Remote Assistance, which allows greater versatility for controlling another user's session. Remote Assistance also provides the ability to chat with the other user. Remote control can also be configured on a per-user basis using Group Policies or the Terminal Services Extension to Local Users and Groups and Active Directory Users and Computers.
• When you install one of the Windows Server 2003 family operating systems, the Remote Desktop Users group is one of the built-in user groups on your computer. By default, this group is not populated when you install Terminal Server on your computer. You must choose the users and groups that you want to have permission to log on remotely to the terminal server, and manually add them to the Remote Desktop Users group. This increases the security of remote connections, and also allows you to install any required programs before users start connecting to the terminal server.
As to installing Terminal Server, we determined that we only needed to install it for remote administration. We did not need to install the entire Terminal Server program, just enable remote connections. The easiest way is to add a role to the server using the Manage Your Server wizard, or to use the Windows Components Wizard accessible via the Add or Remove Programs tool in the Control Panel.
The following are the steps we followed in order to set up Terminal Server remote connections:
To enable or disable remote connections
1. Open System in Control Panel.
2. On the Remote tab, select or clear the Allow users to connect remotely to your computer check box.
The following are the steps we followed in order to set up Terminal Server remote Desktop Users Group:
To add users to the Remote Desktop Users group
1. Open Computer Management.
2. In the console tree, click the Local Users and Groups node.
3. In the details pane, double-click the Groups folder.
4. Double-click Remote Desktop Users, and then click Add....
5. On the Select Users dialog box, click Locations... to specify the search location.
6. Click Object Types... to specify the types of objects you want to search for.
7. Type the name you want to add in the Enter the object names to select (examples): box.
8. Click Check Names.
9. When the name is located, click OK.
The following are the steps we followed in order to connect to the console session of a server using the Remote Desktops MMC Snap-in:

1. Open Remote Desktops snap-in.

2. If you have not already done so, create the connection to the terminal server or computer to which you want to connect.

3. In the console tree, right-click the connection.

4. In the context menu, click Connect.

The following are the steps we followed in order to activate the Remote Desktop Connection:

1. Start->Programs->Accessories->Communications->Remote Desktop Connection.
2. Open Remote Desktop Connection.
3. In Computer, type a computer name or IP address. The computer can be a terminal server, or it can be a computer running Windows XP Professional or a Windows Server 2003 operating system that has Remote Desktop enabled and for which you have Remote Desktop permissions.
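The same connection can also be started from a command prompt. For example, to reach our terminal server directly (the optional /console switch attaches to the console session):

mstsc /v:10.207.32.2
mstsc /v:10.207.32.2 /console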
The following are things to be certain of in order to connect to the Remote Desktop for Administration:

1. Be sure you have the appropriate permission to use Remote Desktop for Administration. You must be an Administrator, or you must be a member of the Remote Desktop Users group.

2. Be sure Remote Desktop for Administration is enabled.

3. You must have the network computer name or IP address of the server.

4. You must not turn off the server.

Terminal Services permissions can be handled easily on a per-computer basis, using the Remote Desktop Users user group and the RemoteInteractiveLogon right. In some cases, however, it might be necessary to manage permissions on a per-connection basis.
The following are the steps we followed in order to remotely control a session:

1. Open Terminal Services Manager.
2. Right-click the session you want to monitor, and then click Remote Control. The Remote Control dialog box appears.
3. In Hot key, select the keys you want to use to end a remote control session, and then click OK. The default hot key is CTRL+* (using * from the numeric keypad only).
When you want to end remote control, press CTRL+* (or whatever hot key you have defined).
The following are things to be certain of when remotely controlling another session:

1. You must have Full Control permission to remotely control another session.
2. To configure remote control settings for a connection, use Terminal Services Configuration. Remote control can also be configured on a per-user basis by using the Terminal Services Extension to Local Users and Groups and Active Directory Users and Computers.
3. Before monitoring begins, the server warns the user that the session is about to be remotely controlled, unless this warning is disabled. Your session might appear to be frozen for a few seconds while it waits for a response from the user.
4. When you enter the remote control session, your current session shares every input and output with the session you are monitoring.
5. Your session must be capable of supporting the video resolution used at the session you are remotely controlling or the operation fails.
6. The console session can neither remotely control another session nor can it be remotely controlled by another session.
7. You can also use the shadow command to remotely control another session.
Each user who logs on to a Terminal Services session must have a user account either on the server or in a domain on the network that the server is on. The Terminal Services user account contains additional information about the user that determines when users log on, under what conditions, and how specific desktop settings are stored. Windows Server 2003 family operating systems contain a built-in User group called Remote Desktop Users, which is used to manage Terminal Services users.

3. DHCP Services and DHCP Relay Services (IP 10.207.32.3) (Tarik)

In this portion of the project, we were required to set up DHCP Services for Manamana.
Before proceeding with an explanation of the work done on this portion of the project, the following is a brief explanation of the term “Dynamic Host Configuration Protocol.” Dynamic Host Configuration Protocol (DHCP) is an IP standard to simplify host IP configuration management. It provides a way for DHCP servers to manage dynamic allocation of IP addresses and other related configuration details for DHCP-enabled clients on a network.
Every computer on a TCP/IP network must have a unique IP address. The IP address (together with its related subnet mask) identifies both the host computer and the subnet to which it is attached. When a computer is moved to a different subnet, the IP address must be changed. DHCP allows you to dynamically assign an IP address to a client from a DHCP server IP address database on a local network. In fact, DHCP reduces the complexity and amount of administrative work involved in reconfiguring computers.
There are many benefits to using DHCP. It requires less effort than manually configuring the assignment of IP addresses. It also makes it easier to update a default gateway or a DNS server's IP address; having to make such changes manually would be labor intensive, requiring you to visit every machine to be updated. DHCP also eliminates duplicate IP addresses, provided you correctly configure the DHCP scopes.
The DHCP server database should contain valid configuration parameters for all clients on the network, valid IP addresses maintained in a pool for assignment to clients (10.207.32.10 to 10.207.33.254 in our network), plus reserved addresses for manual assignment, and the duration of the lease offered by the server. The lease defines the length of time for which the assigned IP address can be used.
We chose to have two DHCP servers, one for Red Hat (10.207.33.2) and one for the Windows 2003 server (10.207.32.3). The following is a discussion of the work we did on the Windows 2003 DHCP server.
The first component of DHCP we looked at is the Relay Agent. The Relay Agent is a type of router. This is how it works. The Relay Agent intercepts DHCPDiscover packets from clients and then unicasts them to the DHCP server on their behalf. The secret of successful relaying is to create the appropriate scope on the DHCP server. The Relay Agent adds the source IP address when it contacts DHCP; this is how the server knows, from its list of scopes, which subnet to offer an IP address for. You find the Relay Agent in Routing and Remote Access (RRAS). Because the Relay Agent is a type of router, the RRAS location to install and configure the DHCP Relay is appropriate. Once you find and install the Relay Agent, configuring it is easy. You need to give the router or DHCP Relay Agent the IP address of the real DHCP server: right-click the DHCP Relay Agent, and then select "Properties" from the shortcut menu.
Determine how many routers lie between your client and its DHCP server, with each router representing one hop; calculate the maximum hop count that you need and configure the Relay Agent accordingly. If you wish to check the Hop Count Threshold, from the RRAS interface navigate to IP Routing, DHCP Relay Agent, and right-click on the interface, not the server.
When using Relay Agents, especially if you configure more than one, there is a possibility of duplicate IP addresses. The conflict detection feature means that the DHCP server checks by pinging the proposed address lease before actually issuing it. Naturally, if the server receives a reply, that IP address is not offered. Conflict Detection is a property of the DHCP server as a whole and not of individual scopes. To set the threshold, right-click the server icon, select Properties, and then the Advanced tab.
If DHCP cannot assign an IP address, then clients can give themselves an Automatic Private IP Address (APIPA) in the range 169.254.x.y where x and y are two random numbers between 1 and 254. While APIPA is a sign of failure, the fact that the client has a valid IP address means that it can keep on polling to see if a DHCP server has come back online.
You can use Predefined Options to set WPAD (Web Proxy Auto Detect) for XP clients. You could set the ISA server proxy with a group policy, but it may be easier to control via a DHCP option. From the DHCP server icon, select Set Predefined Options. Select Add. Next, in the Name box, enter WPAD. Change the Data Type box to String. In the Code box, type 252. Press Enter. In the Predefined Option and Values dialog box, type http://ISA-yourServer:80/wpad.dat. Note that 80 is the default port of the ISA AutoDiscovery service.
As to troubleshooting, IPCONFIG is a good tool, for example IPCONFIG /all, /release, and /renew. When you run IPCONFIG, if you see an address beginning 169.254.x.y, this is APIPA, which is discussed above.
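From a client command prompt, the three forms look like this:

ipconfig /all        (shows the full IP configuration, including the DHCP and DNS servers in use)
ipconfig /release    (gives up the current DHCP lease)
ipconfig /renew      (requests a new lease from the DHCP server)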
Since Manamana’s DHCP server will be newly installed, we checked to ensure that it was Authorized in Active Directory by an Enterprise Admin. Since it was Authorized, we then checked to ensure that the scope was activated.
It goes without saying that the very DHCP server itself must have a fixed IP address. The DHCP server cannot be its own client.

We made sure that we added the interface to the Relay Agent. The Relay Agent is found under the Routing and RAS server icon. We added the interface itself by right-clicking the Relay Agent object and selecting New Interface from the shortcut menu.
In order to install the DHCP server and configure it correctly, we navigated to: Add Remove Programs, Windows Components, Networking Services. Then we were prompted to insert the Windows server CD.
While adding the DHCP service is easy, configuring the scope options requires thought. For instance, if you make a mistake with the subnet mask, you cannot amend that scope; you would have to delete it and start again. However, you can add and change the options, such as Type 006 DNS server or Type 015 Domain name.
Here is a summary of how the DHCP service results in clients getting an IP address. These are the classic four packets that client and server exchange during a lease negotiation:

Client -> Server: DHCPDiscover
Server -> Client: DHCPOffer
Client -> Server: DHCPRequest
Server -> Client: DHCPAck

Reserving IP addresses is useful in two situations: for file and print servers and for important machines where leases are in short supply. DHCP knows which machine to lease a particular IP to by its MAC address (also called NIC or Physical address). In Windows 2003, when you enter the MAC address, DHCP strips out the hyphens if you include them amongst the HEX numbers. To find the MAC address, ping the machine then type arp -a.
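For example, to find the MAC address of a workstation that should receive a reservation (the IP shown is only a placeholder within our address range):

ping 10.207.32.57
arp -a

The arp output lists the physical (MAC) address learned for 10.207.32.57, which can then be entered into the reservation dialog in the DHCP console.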
In a Windows Server 2003 domain, all DHCP servers need to be authorized in Active Directory. We logged on (or used "Run As") as a member of the Enterprise Admins group, then right-clicked the DHCP server icon and selected Authorize. The RIS service also needs to be authorized before it becomes active.
After we authorized the server, each scope needed to be activated individually. So, we right-clicked each scope and activated it; a green arrow indicated that we were successful.
To install the DHCP server, we did the following:
1. Open Windows Components Wizard.

2. Under Components, scroll to and click Networking Services.

3. Click Details.

4. Under Subcomponents of Networking Services, click Dynamic Host Configuration Protocol (DHCP), and then click OK.

5. Click Next. If prompted, type the full path to the Windows Server 2003 distribution files, and then click Next.
Required files are copied to your hard disk.
In order to open the Windows Components Wizard, we did the following:
1. Click Start.
2. Click Control Panel.
3. Double-click Add or Remove programs.
4. Click Add/Remove Windows Components.
DHCP servers must be configured with a static IP address
In order to open the DHCP console, we did the following:
1. Click Start.
2. Click Settings.
3. Click Control Panel.
4. Double-click Administrative Tools.
5. Double-click DHCP.
The DHCP console is an administrative tool for managing DHCP servers.
In order to connect to a DHCP server, we did the following:
1. Open DHCP.
2. In the console tree, click DHCP.
3. On the Action menu, click Add Server.
4. In the Add Server dialog box, do one of the following:
a. Click This server, enter the name of the DHCP server that you want to connect to, and then click OK.
b. Click This authorized DHCP server, click the DHCP server that you want to connect to, and then click OK
To open DHCP, we did the following:
1. Click Start.
2. Click Settings.
3. Click Control Panel.
4. Double-click Administrative Tools.
5. Double-click DHCP.
If you are operating DHCP in an Active Directory environment and are connecting to a server for the first time, you need to first authorize the new DHCP server in the directory service.
If DHCP is installed and running on this computer, you do not need to make a manual connection using the wizard. In most cases, the local DHCP server automatically appears in the list of servers when the DHCP console starts, and you connect to the server by clicking it in the console tree.
Only users belonging to the following groups can connect to a DHCP server:
• DHCP Users
• DHCP Administrators
• Domain Admins
• Enterprise Admins
To start the DHCP server, we did the following:
1. Open DHCP.
2. In the console tree, click the applicable DHCP server.
3. On the Action menu, point to All Tasks and then click one of the following:
a. To start the service, click Start.
b. To stop the service, click Stop.
c. To interrupt the service, click Pause.
d. To stop and then automatically restart the service, click Restart.
After you pause or stop the server, the Resume option appears and can be clicked to immediately resume service.
You can also perform the tasks Start, Stop, Pause and Restart at a command prompt by using the following commands:
1. Net start dhcpserver
2. Net stop dhcpserver
3. Net pause dhcpserver
4. Net continue dhcpserver
You can also perform these tasks at the netsh> command prompt or in a script using the Netsh commands for DHCP.
In order to reconcile the DHCP database, we did the following:
1. Open DHCP.
2. In the console tree, click the applicable DHCP server.
3. On the Action menu, click Reconcile All Scopes.
4. In the Reconcile All Scopes dialog box, click Verify. (Inconsistencies found are reported in the status window.)
5. If the database is found to be consistent, click OK. If the database is not consistent, click the displayed addresses that need to be reconciled, and click Reconcile to repair the inconsistencies.
4. Router (IP 10.207.32.1 and 10.207.33.1) (Jeff Heiden)

Routers are devices that route packets of data destined for networks other than your local network. A packet is a network message that includes data and has a source and destination identified within it. A router can be a very expensive dedicated appliance, or it can be created from a multi-homed computer. The difference between the dedicated appliance and the multi-homed computer is that the dedicated appliance is more efficient at routing to different networks.

For the purpose of our test system, the router is a multi-homed computer that has been configured as a Windows 2003 server with "Routing and Remote Access" installed on it. Routing and Remote Access (RRAS) is used here to redirect the traffic from the Windows network to the Linux network and vice versa.
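As a simple sanity check (interface names will vary), the routing table on the multi-homed server can be displayed from a command prompt; it should show directly connected routes for both the 10.207.32.0 and 10.207.33.0 networks:

route print

Clients on the Windows side use 10.207.32.1 as their default gateway and clients on the Linux side use 10.207.33.1, so a successful ping from a host on one subnet to a host on the other confirms that RRAS is forwarding packets.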

ii. LINUX (Red Hat Server)

1. Secondary DNS and DHCP Services (IP 10.207.33.2) (Tarik, Mustapha, Jeff H. & Jeff S.)
Per the project requirements, the class was to implement a secondary DNS server as a fail-safe and a DHCP server for the Linux network.
As explained in Section II.A.i.1. above, the Domain Name System (DNS) is a database that contains information about all computers in a TCP/IP network. Section II.A.i.1. addressed our setting up the Primary DNS, and this section will address setting up the secondary DNS.
As a side note, in order to complete work on the DNS, the class members were to go to their assigned server, open the network properties, and go to the TCP/IP configuration. In the DNS section (which is below the IP address configuration), each was to add 10.207.33.2 as a secondary DNS. As a check when done, each class member was to attempt nslookup in both directions: nslookup 10.207.33.2 and nslookup server4dns.

In order to create the DNS zone, which we named eccns.local, we needed to create forward and reverse zones.

Creating the Forward Zone
To create the forward zone, we did the following:
1. Click on Create a new master zone
2. Zone type: Forward (forward names to addresses) - select it
3. Domain name / Network: eccns.local. (Note the "." at the end of the domain name; it has to be there and is not a mistake.)
4. Records file: Automatic
5. Master server: server4dns.eccns.local [/] Add NS record for Master Server?
6. Email address: root@localhost or root@eccns.local
7. Use zone template: no
8. Click on create
Once the zone was created, we proceeded to edit its properties. It took us to this panel automatically.
In order to edit the Forward Zone, we did the following:
1. Click on Address
2. Enter name: eccns.local.
3. Enter address: 10.207.33.2 (note: this is the actual IP address for the domain)
4. Time-To-Live: Default
5. Update reverse?: yes
6. Click on create

Figure 7 – Screen Shot of Address record
7. Return to record types
In order to add Name Server Records, we did the following:
1. Enter zone name: eccns.local.
2. Time-To-Live: Default
3. Enter Name server: dns2.eccns.local. (host.domain.com)
4. Click on create

Figure 8 – Screen Shot of Name Server Records
5. Return to record type.
In order to add a Name Alias Record, we did the following:
1. Name: www
2. Time-To-Live: Default
3. Real Name: eccns.local.
4. Click on create
5. Name: mail
6. Time-To-Live: Default
7. Real Name: eccns.local.
8. Click on create
9. Name: ftp
10. Time-To-Live: Default
11. Real Name: eccns.local.
12. Click on create

Figure 9 – Screen Shot of Name Alias
13. Return to Record Type
To create a Mail Exchange Record (MX record), we did the following:
1. Name: eccns.local.
2. Time-To-Live: Default
3. Mail Server: eccns.local.
4. Priority: 10
5. Click on create
6. Return to Record Types

Figure 10 – Screen Shot of Mail Record
That completed the forward zone. At the very bottom of the current panel (Edit Master Zone), we clicked on Return to zone list, and from the zone list we clicked on Apply Changes.
The next step was to create the reverse zone for eccns.local.
Creating the Reverse Zone
To create the Reverse Zone, we followed these steps:
1. Click on Create New Master Zone
2. Now the Zone type will be: Reverse
3. Domain name / Network: 10.207.33 (the last number, 2, is left out)
4. Records file: Automatic
5. Master server: server4dns.eccns.local [/] Add NS record for Master Server?
6. Email address: root@localhost or root@eccns.local
7. Use template: no
8. Refresh time: leave as default
9. Expiry time: leave as default
10. IP address for template: leave blank
11. Transfer retry time: leave as default
12. Default time to live: leave as default
13. Click on create
Next we edited the Master Zone properties for the Reverse Zone that we just created.
Next, we created a pointer as follows:
1. Click on PT
2. Now add Reverse Address Record
3. Address: 10.207.33.2 (type complete IP address here)
4. Host name: eccns.local.
5. Update forward: yes
6. Click on Create
7. Return to Record Types
We next added a name Server (NS) as follows:
1. Zone Name: 33.207.10
2. Name Server: server4dns.eccns.local.
3. Time to live: Default
4. Click create
5. Return to Record Types
In order to add a Name Alias Record (CN), we did the following:
1. Name: www
2. Time-To-Live: Default
3. Real Name: eccns.local.
4. Click on create
5. Name: mail
6. Time-To-Live: Default
7. Real Name: eccns.local.
8. Click on create
9. Name: ftp
10. Time-To-Live: Default
11. Real Name: eccns.local.
12. Click on create
13. Return to zone list
14. Click on Apply Changes
We had now completed a fully functional secondary DNS. The changes can be verified in the main configuration file, /etc/named.conf; note that a new zone has been added to the file.
Figure 11 below is a screen shot of the secondary DNS created by BIND:

Figure 11 - screen shot of the secondary DNS created by BIND

Figure 12 - screen shot of the Record Files in the secondary DNS
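For reference, the zone statements that end up in /etc/named.conf look roughly like the following; the record file names shown are the defaults Webmin generates and may differ on the actual server:

zone "eccns.local" {
    type master;
    file "/var/named/eccns.local.hosts";
};

zone "33.207.10.in-addr.arpa" {
    type master;
    file "/var/named/10.207.33.rev";
};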
Next, to activate all changes, restart the service by entering the following:
[root@server4dns root]# service named restart
Alternatively, you may reboot the system.
Since we created a working master DNS, Manamana can now use our system for almost anything. In the same manner in which we created this master DNS, you can also create a slave DNS server at a different IP address. By creating a slave DNS, the two DNS servers can replicate each other's data (fault tolerance), so that if one of the servers is down, the other one will respond to queries.

2. SAMBA, NFS, and CUPS Services (IP 10.207.33.3) (Steve, Adam, Jeff H. & Jeff S.)

Per the project requirements, all users must have access to file and print resources.
Configuration began by installing the Red Hat Linux operating system on the print server. Difficulties were immediately encountered during installation of the operating system, and the computer crashed twice. It was determined that the reason it was consistently crashing was that the drivers were not on the same CD as the operating system. One solution was to download drivers from the Internet and install them before installing the operating system. The route chosen was to install a more recent version of Linux, Fedora 3. During the installation of the OS, the packages for Samba, NFS, and CUPS were all installed along with the KDE and GNOME windowing systems for easier administration of the system.
Unlike on a Windows server, configuration of services rarely involves a graphical interface. This means that the KDE and GNOME windowing systems were not necessary to the installation, but they do offer some utilities that can aid in configuration. Although these graphical configuration utilities can streamline the process of configuration, the terminal and vi quickly become a Linux administrator's best friends. The graphical interfaces simply change the text-based configuration files. Instead of being limited by the GUI, it is very simple to configure everything through the terminal. Whenever changing any configuration files for a service, that service must be stopped and restarted; in Red Hat, the command for that is service [name of service] restart. Since configuration happens through the terminal, it is very quick to write changes with vi, restart the service, and test it out. On future file/print servers a graphical environment will probably not be installed, as it is not necessary.
With the OS installed, it was time to configure CUPS, the print server. All initial configuration of this service took place using the graphical utilities. The printer that was chosen already had an internal network card. Through the Printers section it was added as a local printer. Once it was added, the web interface for CUPS was accessed and the printer was added to the CUPS service as an available printer. The web interface is convenient since it allows for remote access to the status of the print server. It displays the current queue and allows for canceling or resubmitting of jobs. The interface is reached on port 631, so that port was used over the network both to add the printer and to access the interface. The format for accessing any available printer is as follows: fileprint.eccns.local:631/printers/[name of printer]. After some permission issues, the local network IP range was added and the printer worked just fine.
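As an example (the printer name laser1 is a placeholder), once a printer is defined in CUPS its queue can be viewed in a browser and a test job can be sent from any Linux client:

http://fileprint.eccns.local:631/printers/laser1
lpstat -p laser1            (shows the printer's status)
lp -d laser1 /etc/hosts     (sends a small test job to the queue)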
Next came the Samba configuration. The configuration of this service was handled solely through the command line and the file /etc/samba/smb.conf. First, a directory called Share was created in the root of the filesystem. It was then added to smb.conf as a common share directory to which all users have full access. Giving full access through this file, however, does not give complete access to the actual directory, just to the share; Change Mode (chmod) must then be used to also give access to the directory. Once permission has been granted, and since this server has been added to the domain, all users in Active Directory can access this share using the server's name, FILEPRINT. In Windows, the mapping is created using the following syntax: \\FILEPRINT\Share. In Linux it is handled using the following syntax: smb://fileprint/Share. A sketch of the share definition is shown below.
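The following is a minimal sketch of the share definition added to /etc/samba/smb.conf and the matching directory permissions; the exact options used on the server may have differed slightly:

[Share]
    comment = Common share for all domain users
    path = /Share
    read only = no
    browseable = yes

chmod 777 /Share
service smb restart

From a Windows client the share can then be mapped with net use S: \\FILEPRINT\Share, and from a Linux client it can be reached with smbclient //fileprint/Share -U jheiden (any valid domain account works; jheiden is simply the example account named earlier).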
Next came the NFS configuration.
NFS: Network File System

NFS, Network File System, was developed by Sun Microsystems in the 1980s. It provides a mechanism by which UNIX systems can share their disk resources. What makes NFS really useful is that it can function across multiple UNIX/Linux machines, enabling a centrally managed directory structure to be accessed by many clients by exporting access from the server and mounting it on the client. In this way, duplication is avoided and disk resources are centralized.

NFS Architecture

There are three programs that provide NFS services. They are:

rpc.portmapper - This maps RPC calls made from other machines to the correct NFS daemons. It must be running on both the server and the client machine.

rpc.nfsd – This daemon translates the NFS requests into a format understood by the
local filesystems.

rpc.mountd – This daemon services requests to mount or unmount filesystems.

The NFS services are included in the Red Hat Linux distribution and can be chosen at installation time.

Configuring both NFS Servers and Clients

The two key files that are used for NFS services are /etc/exports at the server and /etc/fstab at the client. The /etc/exports file contains information as to which directories are to be shared with which clients and the extent of that client’s access rights. The /etc/fstab file on the client machine specifies a mapping between the local client directories and the servers and directories on those servers that map to each one. Note that there can be a mix; a client may have some sub directories come from one server and some from another, creating a sort of virtual directory structure.

The /etc/exports file follows this format:

directory-to-export client-ips permissions

So each host is given a set of access permissions. The most significant ones are as follows:

rw – Read and Write Access
ro -- Read Only Access
sync – Tells the server to commit file writes to disk immediately.

In order to allow the Linux client machines to gain access to the /export/nfsshare test directory, we had to enter the following:

/export/nfsshare 10.207.33.*(ro)

Note the use of the wildcard. This is important since the exact IP of the client depends on the value that the DHCP server will grant, and that cannot be known in advance. The only thing to do is to specify the known range of IPs from which the DHCP must choose.

Once changes in the /etc/exports file are made, you can have them picked up by the following command:

exportfs -r

This sends certain signals to the rpc.nfsd daemon to reread the file.

At the client computer, we have two choices:

1) Each time we wish to access the exported files, we can mount the server's directory onto an (already existing) local directory, e.g.:

mount -t nfs fileprint:/export/nfsshare /nfstest

2) Most likely we would want that access to be established at boot time. To do that, the /etc/fstab file must be modified. The format of the /etc/fstab file is as follows:

Server:Directory-To-Export Mount-Point FileSystemType Options 0 0

as in:

fileprint:/export/nfsshare /nfstest nfs defaults 0 0

By the time you have logged in, the mount has been done.

Starting and Stopping NFS

To start the service at the server, the following command is entered:

service nfs start

This starts both the NFS daemons and runs the exportfs command.

To configure the machine so that NFS is started at the server’s boot, a determination must be made as to which run levels should be associated with this service. Normally those are levels 3 and 5, so we ran:

chkconfig --level 35 nfs on

On the client side, the “portmapper” process is required. Normally this is included and activated in the standard workstation Linux distribution, but it can be verified with the following command at the client computer:

rpcinfo -p

This will list all RPC programs running on the system.
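From a client, it is also easy to confirm what the server is exporting before attempting the mount (assuming the showmount utility from the standard NFS client tools is installed):

showmount -e fileprint

This should list /export/nfsshare along with the hosts that are allowed to mount it.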

3. APACHE and FTP Services (IP 10.207.33.5) (Steve E.)
As required of all servers, initial configuration began with installation of the server OS. Having just worked out the difficulties encountered in installing the email server (see section ii.4 below), we found that the installation of the operating system went more smoothly.
Once the operating system was successfully installed, our next task was to set up the files for the web server. As with the email server (see section ii.4 below), what we found and ultimately recommend for Manamana is a software application called Webmin, which we located on the Internet at http://www.webmin.com/.

Apache Web Server

We will try to give an overview here of the issues involved in the installation, configuration and administration of the Apache Web Server.
Server Installation
There are two options for installing the Apache Web Server: either install the RPM package that comes with the Linux installation media, or opt to compile the source code yourself. We chose the former.
Apache RPM takes about 6 MB and installs files in the following directories:
/etc/httpd/conf -- This directory contains all the Apache configuration files, such as httpd.conf, access.conf and srm.conf.

/etc/rc.d - The tree under this directory contains startup/stop scripts. These are used to start and stop the server from the command line as well as when the computer is halted, started, or rebooted.

/var/www – The RPM installs default server ICONS, CGI programs and HTML files in this directory.

/var/log/httpd -- This is the default log directory. There are, by default, two log files, access_log and error_log, which reside in this directory.

Runtime Server Configuration Settings.

Starting with Apache 1.3.4, all runtime configuration settings are stored in one file:
/etc/httpd/conf/httpd.conf
The general idea in this file is that one specifies configuration directives, which are commands to set a particular option, in the form of:
directive option option
Some directives are simple and set a single value such as a filename, but others are more complex and must be specified in sections. These larger, section directives look like HTML tags and are enclosed in angle brackets. Within the scope of the starting and ending tags, individual options can be specified that apply only to that directive. The former category includes things like the server type, port number, name of the server, and an error logfile name. The latter category includes such items as Virtual Hosting and restricting access to certain directories and resources.
The following is a listing of some of the more commonly used directives that are available; a short configuration sketch combining them follows the list:

ServerType - Can be either “standalone” or “inetd”. “inetd” will cause a new process to be spawned for every new HTTP request. Usually we want standalone.

ServerRoot - This is the absolute path to the server’s main directory, where the configuration and logfiles are kept. It defaults to /etc/httpd.

Port - Indicates which port the server should run on. This defaults to 80. If another value is used, it needs to be specified in the URL when attempting to bring up the page.

User and Group - These directives should be set to the user id and group id that the server will assume to process the requests. There are generally two ways to configure this option, either as “nobody” or as that of a specific user. Usually we chose “nobody” because of security considerations.

ServerAdmin – This directive is set to the email address of the webmaster administering the server. This setting is always a good idea in case there are problems.

ServerName - This directive sets the hostname the server will return. It should be set to a fully qualified name that has a DNS entry.

DocumentRoot - This sets the absolute path of the document tree from which the server will serve its files. The default is /var/www/html.

KeepAlive - Usually set to ON. This allows for persistent connection between client and server.
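Putting these directives together, a minimal sketch of the relevant portion of httpd.conf for Manamana’s web server might read as follows. The server name web.eccns.local is the one used later in this section; the webmaster address is an assumption for illustration only:

ServerType standalone
ServerRoot /etc/httpd
Port 80
User nobody
Group nobody
ServerAdmin webmaster@eccns.local
ServerName web.eccns.local
DocumentRoot /var/www/html
KeepAlive On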

Authentication and Access Control
There could be times when there is material on the web site that is not supposed to be made available to the general public. One needs to be able to lock out these areas from everyone but a selected group of users. There are two basic approaches to doing this, either by checking the client’s IP or by asking for a user name and password.
The IP based approach can be implemented using the “allow” and “deny” directives for that purpose. These directives can use the word “all,” a fully qualified domain name, a full or partial IP or a network/subnet mask. Thus, one might have:
allow from x.y.com
allow from 212.85.67
deny from 212.85.67.0/255.255.255.0
The default behavior of Apache is to apply all the deny directives first and only then check the allow directives. This can be changed with an “Order” statement. If one specifies “Order deny,allow”, the result is that a host which does not meet the deny criteria will be allowed. The other possibility is to specify “Order allow,deny”, which means the allow directives are evaluated first; the effect is that a host which is not specifically allowed will be denied access to the resource.
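As a sketch of how these pieces fit together, the directives can be placed inside a section directive; the directory path here is only an example, and the address ranges are Manamana’s two subnets:

<Directory /var/www/html/internal>
    Order deny,allow
    deny from all
    allow from 10.207.32
    allow from 10.207.33
</Directory>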
There are several methods of doing authentication in Apache. We will discuss only the most common: basic authentication. With this method, a user is required to submit a user name and password to be verified. In order to set up a file to check these user names/passwords against, the htpasswd command can be run with the -c option. For example, on the command line one could enter:
htpasswd -c allowedpeople alig
to create the file “allowedpeople” with the user alig listed. You would then be prompted for that user’s password. To tell Apache about this file, you would use the AuthUserFile directive. By associating different authentication files with different <Directory> sections, one can grant appropriate permissions to different groups of users.
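A sketch of how that file is then referenced follows; the protected directory and the location of the password file are assumptions for illustration:

<Directory /var/www/html/private>
    AuthType Basic
    AuthName "Restricted Area"
    AuthUserFile /etc/httpd/conf/allowedpeople
    Require valid-user
</Directory>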
For our project, we had no specific web requirements. No Virtual Hosting or dynamic content was needed, nor was there any need to restrict access. We therefore took the default settings, installed and started the service and confirmed that a test page was readable to all.
To configure the service, one can modify the httpd.conf file manually or one can use one of the Linux GUI’s. The GUI method is as follows:
System Settings -> Server Settings ->HTTP
Enter the options for ServerName, webmaster’s email address and Available Addresses (“all available addresses on port 80”)
To start the service, manually enter:
/etc/rc.d/init.d/httpd start
or use the GUI as:
System Settings -> Server Settings -> Services,
check the httpd box in the left column and then hit START.
After this, we could point the browsers to web.eccns.local and access the test web page. To be sure that the service starts from now on at boot time, we entered:
chkconfig --level 35 httpd on
FTP Services

We installed and configured Linux’s Very Secure FTP Daemon (vsftpd) to provide file transfer service. vsftpd was included in the Fedora distribution, and we chose that package as part of the (server6) installation.

There were several configuration decisions that had to be made. Configuration settings for the vsftpd service are stored in the /etc/vsftpd/vsftpd.conf file. The file contains a default group of settings that may be tweaked. However, the items in that file are not exhaustive; the complete set is documented in the vsftpd.conf manual page in section 5. The settings found in the default file are the most common and useful. They include:

1) This setting enables the Administrator to make the FTP service anonymous or standard. Standard ftp requires logins and passwords on the server in order to obtain or install files. Anonymous ftp does not and accepts a universal “anonymous” login with a provided email address as a password. For Manamana’s application, we wanted a little more security and thus chose the standard option. We thus set:

anonymous_enable=NO

2) Similarly, we want local users to be able to log in. To make that happen, we set:

local_enable=YES

3) We want them to be able to upload files, such as html documents so we set:

write_enable=YES

4) Since it is good to keep logs of this activity though, we set:

xferlog_enable=YES

5) The default file for that is set by:

xferlog_file=/var/log/vsftpd.log

There are also settings for the following (a combined configuration sketch appears after this list):

- Limiting the maximum number of simultaneous client connections (max_clients)

- Limiting the maximum rate of data transfer for both anonymous and standard logins (anon_max_rate, local_max_rate)

- Limiting the number of client connections from a single IP address (max_per_ip)

- A greeting banner message (ftpd_banner)
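A combined sketch of the /etc/vsftpd/vsftpd.conf settings discussed above is shown below. Note that vsftpd does not allow spaces around the equals sign. The limit, rate, and banner values are illustrative assumptions only, not recommendations; local_max_rate is in bytes per second:

anonymous_enable=NO
local_enable=YES
write_enable=YES
xferlog_enable=YES
xferlog_file=/var/log/vsftpd.log
max_clients=50
max_per_ip=5
local_max_rate=1000000
ftpd_banner=Welcome to the Manamana FTP server.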

Once the configuration options were set up as appropriate, we started the service.
We did so with:

service vsftpd start

To configure vsftpd to routinely start on booting the computer, we used

chkconfig vsftpd on

Since we are using standard ftp, the Administrator will need to create logins and passwords. Each ftp user will then be placed in an ftp documents directory. The root user sets this up as follows:

1) Create a group for ftp users:

groupadd ftp-users

2) Setup a directory for the ftp documents:

mkdir /home/ftp-docs

3) Make that directory accessible to the ftp-users group:

chmod 750 /home/ftp-docs
chown root:ftp-users /home/ftp-docs

4) Add the users of that group, with their default directory set to /home/ftp-docs:

useradd -g ftp-users -d /home/ftp-docs sehrlich
useradd -g ftp-users -d /home/ftp-docs jheiden
. . .

5) Assign each of them the standard password (ECcns2005), entering it at the prompts:

passwd sehrlich
passwd jheiden

6) Copy the files to be distributed into the /home/ftp-docs directory.

7) Change the permissions on the files for read-only access by the group:

chmod 740 /home/ftp-docs/*

On the client side, ftp client software comes included with standard Linux, and there is no special configuration to be done. We successfully tested the service from both the Linux and Windows clients, performing both downloads and uploads of files.
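As an illustration, a client-side session from one of the Linux workstations might look like the sketch below; the server address is the FTP server above, sehrlich is one of the accounts created earlier, and the file names are examples only:

ftp 10.207.33.5
(log in as sehrlich with the assigned password)
ftp> ls
ftp> get readme.txt
ftp> put report.html
ftp> bye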

4. SMTP, POP3, and DNS Relay Services (IP 10.207.33.4) (Adam)
Per the project requirements, all Manamana clients are to have access to email, both sending (SMTP) and receiving (POP3), as well as having aliases and virtual accounts configured for users and groups.
We began by installing the Linux/Red Hat operating system on a PC that will be used as the email server for Manamana. We first tried to install the Red Hat Linux 8.0 Publisher’s Edition found at the end of the textbook “Hello Linux!” We immediately encountered difficulties when we attempted to install it: the computer crashed, and it did so several times in a row. We determined that it was consistently crashing because the drivers were not on the same CD as the operating system. One solution suggested to us was to download drivers from the Internet and install them before installing the operating system. We wondered if we could save time by getting all the drivers at once from the newest distribution of Linux/Fedora/Red Hat, Version 3.0. We located that version on the Internet, installed it, and were able to continue with our portion of the project.
During the Linux installation process, we were prompted to specify that an email server was to be installed, and within that installation we chose SMTP for sending and POP3 for receiving emails, according to the project requirements.
Simple Mail Transfer Protocol (SMTP)
Simple Mail Transfer Protocol (SMTP) is capable of sending e-mails both within a local computer network and outside of that local network. In doing so, SMTP performs a number of tasks including that it:
• Checks to see if the mail recipient is local;
• Searches local, root, and Elmhurst DNS servers based on MX records (Mail Exchange);
• Connects to the appropriate server;
• Hands off email to POP3; and,
• Considers a message “undeliverable” if it cannot be delivered within 5 days.
The following are some of the commands used to send an email using SMTP. To send e-mail at the command prompt, the user types the commands as shown in the bullets below:
• TELNET server-name 25 (SMTP communicates on port 25)
• HELO your-domain (This is where we identify the source of the email)
• MAIL FROM: your-email-address
• RCPT TO: recipient-email-address
• DATA
• Text of e-mail goes here
• . (End of DATA section is punctuated with a single dot on its own line.)
• QUIT
Note that SMTP returns code to let the e-mail client know whether or not the command was successful.
As an example, the above commands, entered to reach the mail server at IP address 10.207.33.4 on port 25, would appear as follows:
TELNET 10.207.33.4 25
HELO mx.elmhurst.edu
MAIL FROM: mustapha@eccns.local
RCPT TO: adam@eccns.local
DATA
This is a test message
.
Post Office Protocol ver. 3 (POP3)
Post Office Protocol ver. 3 (POP3) is capable of receiving e-mails both within a local computer network and from outside of that local network. In doing so, POP3 performs a number of tasks including that it:
• Maintains a text file for each account;
• Lists the email accounts;
• Formats the messages;
• Sends a copy of the mailbox to the client;
• Separates the file into separate messages; and,
• Resets and erases the mailbox.
The following are some of the commands that must be entered in order to receive an email via POP3; a sample session appears after the list.
• TELNET pop.eccns.local 110 (110 is the receiving port; 25 is the sending port)
• USER adam
• PASS ECcns2005
• LIST (lists the messages)
• RETR n (retrieves message n)
• DELE n (deletes message n)
• QUIT
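Paralleling the SMTP example above, a POP3 session for the account adam might look like the following sketch; the message number 1 is only an example:

TELNET pop.eccns.local 110
USER adam
PASS ECcns2005
LIST
RETR 1
DELE 1
QUIT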
Webmin & Sendmail
We also had to configure the files for the email. One way of doing so was to utilize Sendmail. We soon learned that Sendmail has one of the nastiest configuration files in the Linux (or even Windows) world. We found that there is a special M4 macro language to help make modifications easier, but even that was hard to work with. What we found and ultimately recommend for Manamana is a software application that we located on the Internet at http://www.webmin.com/ called Webmin. See Figure 7, which is a screen shot of Webmin.
Webmin is a web-based interface for system administration of Unix. It can be used from any browser that supports tables and forms (and Java, for the File Manager module). Webmin consists of a simple web server and a number of CGI programs that directly update system files like “/etc/inetd.conf” and “/etc/passwd”. The web server and all CGI programs are written in Perl version 5 and use no non-standard Perl modules. Webmin thus provides a web interface into the Linux box, a “semi” Graphical User Interface (GUI) for management. One of the Webmin modules is the Sendmail module; by updating one field on its form, Manamana can have Sendmail up and running.

Figure 7
The Webmin program automatically installed the following packages for us:
• Distributions
– sendmail-..rpm
– sendmail-cf-..rpm
– sendmail-config-..rpm
– sendmail-doc-..rpm
• Procedure
– rpm -e package_name
– rpm -ivh package_name
Sendmail is set up to run via the following scripts when the computer is booted:
/etc/rc.d/init.d/sendmail
/etc/rc.d/rc3.d/S85sendmail
The following command is run when Sendmail starts (it processes the mail queue every 15 minutes):
/usr/bin/sendmail -q15m
The following is Sendmail’s main configuration file:
/etc/sendmail.cf
The following are the paths to the files within the Sendmail module that can be altered to make the email program do exactly what you want it to do, such as restricting access, creating aliases, and relaying mail. Each of these files already contains default settings, but you can edit them to override those defaults; an example of the alias and virtual user files appears after this list.
• Restrict access: /etc/mail/access
• Create aliases: /etc/mail/aliases
• Relay mail: /etc/mail/relay-domains
• Domains: /etc/mail/sendmail.cw
• Virtual users: /etc/mail/virtusertable
• Logging (a syslog entry): mail.* /var/log/mail.log
• Secure the file system with: chmod -R 600 /etc/mail/*
• Privacy flags (in sendmail.cf):
O PrivacyOptions=needmailhelo,needexpnhelo,noexpn,novrfy,restrictmailq,restrictqrun
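As an example of the alias and virtual user files, the entries below are a sketch only; the consultants group name and the webmaster address are assumptions for illustration. After editing /etc/mail/aliases, the newaliases command must be run to rebuild the alias database, and the virtual user table is rebuilt with makemap:

In /etc/mail/aliases:
consultants: adam, sehrlich, jheiden

In /etc/mail/virtusertable:
webmaster@eccns.local adam

newaliases
makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable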
In addition, the following are tools available in Sendmail that can be used to obtain information on the emails:
• mailq – prints a summary of the mail messages queued for delivery. The mail is queued in the directory /var/spool/mqueue/.
• mailstats – displays current mail statistics. Statistics are stored in the file /var/log/sendmail.st.
• purgestat – purges (clears) the saved host status information.
• praliases – displays the current mail aliases.
Up to this point, we were looking at ways in which Manamana could access their email at the command prompt. As an alternative, we looked at having Manamana set up Evolution, a GUI email client that can be used to send, receive, and organize email. Figure 8 is a screen shot of the Evolution INBOX.
Figure 8
Now clients at Manamana can access their email either at the command prompt or, if they wish to have a GUI, through Evolution at their terminals.

III. Conclusion

For its CS 545 Group Project, the Class of Fall 2005 successfully developed a computer network for Manamana, a small hypothetical company. The scenario and project requirements specified that Manamana wanted a mix of Windows Server 2003 and LINUX servers. It already had several computers running Windows XP, a number of hubs and a few class A IPs. It utilized two laser printers, one connected to each of its networks, and wanted the class to set up a LAN/WAN such that its current resources could be more efficiently utilized.

The class set up a single computer network incorporating Windows Server and LINUX operating systems. They designed the network to consist of seven servers with a router (IP 10.207.32.1 and 10.207.33.1) between the Windows and LINUX sides. They set up a client computer and printer on the LINUX side, as well as a client computer and printer on the Windows side. The class reviewed the functionality requirements for the network, and determined which server to use to resolve each requirement. On the Windows side, the class used the first server (IP 10.207.32.4) to implement Active Directory, Primary DNS and WINS services. It used the second server (IP 10.207.32.2) to implement Terminal Server, and the third server (IP 10.207.32.3) for DHCP services. On the LINUX side, they used the first server (IP 10.207.33.2) to establish the Secondary DNS and DHCP Services, and the second server (IP 10.207.33.3) to set up SAMBA, NFS and CUPS Services for printers. They used the third server (IP 10.207.33.4) for SMTP and POP3 email services. Finally, they used the fourth server (IP 10.207.33.5) for Apache web and FTP services.

The network design developed by the class provides maximum computer efficiency using the resources that Manamana had available. Most importantly, it meets the current needs proposed by Manamana for a dual operating system network which can grow as Manamana’s business grows.

IV. References

Ball, Bill and Duff, Hoyt. Red Hat Linux 9 Unleashed. Sams Publishing, 2003.

Brandon, Cameron. MCSE TCP/IP for Dummies. IDG Books Worldwide, Inc., 1998.

The Computer Technology Documentation Project. Downtownhost.com Web Hosting Services. 26 Nov. 2005.

Configuring Roaming User Profiles. Microsoft Corporation, 2005.

Hall, Jon “Maddog,” and Sery, Paul G. Red Hat Fedora Linux 3 for Dummies. Wiley Publishing, Inc., 2005.

“How to Configure a Default Gateway for Multihomed Computer with LAN and Internet Access.” 20 December 2004.

Minasi, Mark. Internet Connection Sharing. October 1999.

Minasi, Mark. Mastering Windows 2000 Server, Second Edition. Microsoft Corporation, 2000.

“Roaming Profiles.” May 2002.

“Router (definition).” Internet.com / Webopedia. Jupiter Media Corporation. 4 December 2005.

“Squid with NTLM Authentication from How To Guides.” Townsville Linux Users Group. 4 December 2005.

“Technical Overview of Terminal Services.” Microsoft Windows Server 2003. Microsoft Corporation.

Thomas, Guy. “DHCP in Windows Servers 2003.” Computer Performance. 2 December 2005.

Webmin.com. 18 Oct. 2005. Open Country/Sourceforge.net. 26 Nov. 2005.

WindowsNetworking.com. TechGenix Ltd. 26 Nov. 2005.

Zacker, Craig. 70-290 Managing and Maintaining a Microsoft® Windows Server™ 2003 Environment Textbook. Redmond, Washington: Microsoft Press, 2004.
