Information Systems Security at CLRC
by Trevor Daniels
Maintaining adequate security against the proliferating threats from Internet hackers is difficult enough for an organisation which needs only occasional access to the Internet. Achieving it when continuous Internet access is essential to the most important parts of the organisation's mission, as it is for research laboratories involved in international collaborations, is a continual challenge for technical staff. We describe in non-technical terms some of the approaches adopted by CLRC over the last three years to meet this challenge.
The main business of CLRC is to promote research, to support the advancement of knowledge and to promote public understanding in science, engineering and technology. This involves close collaboration with a wide variety of academic and research institutes and technological companies world-wide, and a free exchange of information with both these organisations and the general public is a fundamental part of most of CLRC's work.
The facilities offered by the Internet are nowadays essential to meet these requirements for collaboration and dissemination, but they can only be employed effectively by operating a relatively open Internet security policy. Often the operating regime within an international collaboration is determined by that collaboration, and it is not possible to impose security standards which would prevent the collaboration from inter-working effectively.
Furthermore, because most parts of the laboratory are involved in such collaborations, a very large number of servers of various kinds must be visible to the Internet, yet the staff involved in maintaining those servers are scientists, not security experts.
Maintaining adequate security under these conditions requires flexibility and a high degree of expertise to configure and maintain the several protection mechanisms deployed in the firewalls: these must keep intruders out yet not impede the work of the laboratory. How has this been achieved? Initially we adopted two main techniques to limit the exposure of CLRC computers to intruders.
First, we divided our computers into those that needed to provide externally visible services (let us call these Class A computers) and those that did not (Class B), and we assigned IP addresses from a specific range to Class A computers. This enabled us to block incoming connections which attempted to contact Class B computers easily and efficiently in the routers connecting our LAN to the Internet. This effectively hid over 90% of our computers from Internet intruders without limiting the ability of those machines to initiate connections themselves.
Second, we determined what precise services each Class A computer needed to provide, and limited connections to those machines to the specific port numbers which were required to deliver those services.
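These two measures amount to a simple pair of tests applied to every incoming connection at the site routers. A minimal sketch of the logic in Python follows; the address range and the per-host port lists are invented for the example (the real filters were, of course, implemented in the routers themselves):

```python
import ipaddress

# Hypothetical externally visible (Class A) range; the real addresses differed.
CLASS_A_NET = ipaddress.ip_network("192.0.2.0/28")

# Per-host registered services: incoming connections only on these ports.
ALLOWED_PORTS = {
    "192.0.2.1": {25},        # mail server
    "192.0.2.2": {80, 443},   # web server
}

def permit_incoming(dst_ip: str, dst_port: int) -> bool:
    """Return True if an incoming connection to dst_ip:dst_port is allowed."""
    if ipaddress.ip_address(dst_ip) not in CLASS_A_NET:
        return False  # Class B machines are invisible to incoming traffic
    # Class A machines accept connections only on their registered ports.
    return dst_port in ALLOWED_PORTS.get(dst_ip, set())
```

Note that the filter applies only to incoming connections: a Class B machine can still initiate connections outwards, so its ordinary use of the Internet is unaffected.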
Coupled with the requirement that the system administrators of Class A computers must maintain their systems to specific standards, these relatively simple techniques provided adequate security for most of the period up to the end of 2000, and met the objective of not interfering with the normal work of the Laboratory. However, during early 2001 the continually increasing number of vulnerabilities being actively exploited by intruders, the widening variety and increasing effectiveness of their attacks, and the appearance of automatically propagating worms necessitated further measures.
Because worms are able to propagate very rapidly it is no longer effective to rely on the manual application of patches to systems to prevent infection. The time for a propagating worm to probe the entire Internet is measured in hours or even minutes with optimal search techniques, yet reactive system patching at best takes several hours and this extends to a day or two at weekends. To successfully combat worms it is necessary to anticipate their characteristics and deploy generic preventative measures in advance.
Most worms propagate via either email or websites. It is therefore essential to be able to intercept all email and all web browsing at the site periphery in order to screen out network packets carrying worms. This requires forcing all email to first pass through a single logical receipt service and for all external web browsing to be conducted via a proxy server. During 2001 both of these measures were enforced by appropriate blocks in the main site routers.
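The effect of those router blocks can be sketched as follows. The relay and proxy addresses are hypothetical, and the rules are simplified to just the two cases described above:

```python
# Hypothetical addresses for the single mail receipt service and web proxy.
MAIL_RELAY = "192.0.2.1"
WEB_PROXY = "192.0.2.3"

def permit_at_periphery(src_ip: str, dst_ip: str, dst_port: int,
                        outbound: bool) -> bool:
    """Sketch of the periphery blocks: all SMTP traffic must involve the
    mail relay, and all external web browsing must come from the proxy."""
    if dst_port == 25:  # SMTP (email)
        if outbound:
            return src_ip == MAIL_RELAY  # only the relay sends mail out
        return dst_ip == MAIL_RELAY      # incoming mail only to the relay
    if outbound and dst_port in (80, 443):  # external web browsing
        return src_ip == WEB_PROXY           # desktops must use the proxy
    return True  # other traffic is governed by the Class A/B filters
```

Screening software on the relay and the proxy can then inspect every message and every downloaded page, which would be impossible if mail and web traffic could enter or leave by arbitrary routes.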
Once all the web and email traffic is being routed through specific machines it is possible to install screening services on those machines. These screens take two forms. The first is a standard virus checker, updated automatically at least daily and manually more frequently when necessary. This measure prevents known and established worms and viruses gaining access to the site by these routes. However, this alone is still not effective against new and rapidly propagating worms for which no signature is yet available in the virus scanners. To reduce the exposure to these it is necessary to screen out generically those file types which are likely to carry executable content. This is difficult for file types which are transmitted as part of the normal business of the Laboratory, but a number of them, e.g. those used for screen savers, are not essential to business and may be blocked.
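The two screens might be sketched as below. The blocked extension list and the signature set are purely illustrative (the text names only screen-saver files, and a real scanner matches far more than fixed byte strings):

```python
# File types blocked generically because they can carry executable content
# but are not needed for normal business; only the .scr (screen saver) type
# is from the text, the rest of the list is an illustrative assumption.
BLOCKED_EXTENSIONS = {".scr", ".pif", ".vbs", ".bat"}

# Stand-in for the signature database of the site virus scanner, here just
# the opening bytes of the standard EICAR anti-virus test string.
KNOWN_SIGNATURES = {b"X5O!P%@AP"}

def screen_attachment(filename: str, content: bytes) -> bool:
    """Return True if the attachment may pass the site screens."""
    name = filename.lower()
    if any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS):
        return False  # generic block: catches new worms with no signature yet
    if any(sig in content for sig in KNOWN_SIGNATURES):
        return False  # known worm or virus caught by the signature scanner
    return True
```

The ordering matters: the generic type block needs no knowledge of any particular worm, so it continues to work during the hours or days before a signature for a new worm becomes available.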
All these measures were introduced at CLRC during 2001. How successful have they been? In spite of them there have been a number of compromised machines within CLRC, but none of the compromises has been serious and none has propagated internally. Very little disruption to the work of the Laboratory has resulted from either the compromises or the preventative measures taken, and to this extent the adopted approach has been successful. We believe the balance between prevention and working restrictions is about right.
Nevertheless, some important lessons have been learned. The first observation is that the essential security methods are technically complex and subject to human error. Most of the intrusions we have seen would have been prevented by the procedures outlined above, but mistakes, perhaps inevitably, were made by the people responsible for their implementation: filters in routers were installed incorrectly, and system administrators failed to patch systems promptly, left services running which were not required, or misinterpreted the often complex instructions regarding system patching. The lesson is that a single line of defence is inadequate. As many blocks, detection systems and protective measures as possible must be deployed. Externally facing servers must be consolidated to reduce their number and therefore also the number of staff involved in their maintenance, so concentrating the technical expertise where it matters.
Second, in addition to installing virus detection on all computers, relays and proxies, it is now essential to install packet filters in multiple routers to safeguard against both human error and the subsequent internal propagation of infections should a system be compromised. This will require the reorganisation of the internal network to provide several levels of protection. In this respect the use of VLANs offers the simplest solution.
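Such a layered internal network can be pictured as a default-deny matrix of permitted flows between VLANs; the segment names and the flows here are invented purely for illustration:

```python
# Hypothetical VLANs and the flows permitted between them; anything not
# listed is dropped, so an infection in one segment cannot spread freely
# to the others even after a machine is compromised.
PERMITTED_FLOWS = {
    ("desktops", "servers"),   # staff machines may reach internal servers
    ("servers", "dmz"),        # internal servers may reach the visible hosts
    ("visitors", "dmz"),       # visitor machines reach only the periphery
}

def permit_internal(src_vlan: str, dst_vlan: str) -> bool:
    """Default-deny filter between internal network segments."""
    return (src_vlan, dst_vlan) in PERMITTED_FLOWS
```

The point of the default-deny rule is that a filtering mistake in one router, or a compromise in one segment, is contained rather than exposing the whole site.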
Third, specific action needs to be taken to prevent infections on the more vulnerable home machines, visitor machines and laptops from propagating should they be connected to the internal networks. Personal firewalls, specifically protected sub-nets and various security procedures all need to be considered and deployed.
All this is in addition to maintaining the virus checking, router filtering and security procedures already in place.