by Herman te Riele
Factoring large numbers is not just a nice pastime for number theorists. It is a testing ground for the world's most powerful computer systems, stimulates the design of new algorithms, and is relevant to public-key cryptography. CWI's research group in Computational Number Theory has made several outstanding contributions in this field over the past decades.
Finding the prime factors of large numbers was considered an interesting but purely theoretical problem until 1978, when Rivest, Shamir and Adleman (RSA) proposed to use the near-impossibility of reconstructing the prime factors from the product of two large prime numbers as an encryption technique to protect sensitive information. Practical experience with the best factoring algorithms so far suggests that factoring is indeed computationally intractable, although no proof of its hardness is known. The RSA method became a cornerstone of public-key cryptography and aroused great interest in, e.g., banking and military circles. A research group at CWI is working in this field.
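The principle can be illustrated with a toy sketch (the primes here are of course far too small to be secure; the point is only that encryption uses the public product n, while decryption requires its factors):

```python
# A toy RSA sketch, illustrative only: real keys use primes of hundreds
# of digits, for which no known algorithm recovers p and q from n.

p, q = 61, 53              # two (unrealistically small) primes
n = p * q                  # public modulus: safe to publish
phi = (p - 1) * (q - 1)    # computing phi requires knowing p and q
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: e*d ≡ 1 (mod phi)

m = 42                     # a message encoded as an integer < n
c = pow(m, e, n)           # encryption uses only the public pair (n, e)
assert pow(c, d, n) == m   # decryption needs d, hence the factors of n
```

Anyone may encrypt with (n, e); only the holder of d, derived from p and q, can decrypt.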
How large the constituent prime numbers should be chosen in order to keep RSA encryption `safe' depends on the time needed by the most advanced computers to recover these primes from their product. Experience over the last decades shows that expert predictions have consistently underestimated advances in hardware and factoring methods. For example, the present factoring world record is a number of 129 decimal digits which Rivest et al. challenged the public to factor in 1977; its factorization was then estimated to require 40 quadrillion years using the best algorithms and machines available, yet it was factored in 1994 after an eight-month worldwide computing effort, to which CWI contributed idle workstation cycles. The fact that in 1967 the maximum was only about 25 decimal digits illustrates the rapid development of algorithms and hardware, if we realize that for the best known factoring methods the computational effort roughly doubles when the number to be factored grows by 3-4 decimal digits.
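The doubling rule can be checked against the standard heuristic running time of sieve-type factoring methods, L(N) = exp(sqrt(ln N · ln ln N)) (this estimate is a common assumption in the literature, not stated in the text above):

```python
# Rough check of the "effort doubles every 3-4 digits" rule using the
# heuristic sieve running time L(N) = exp(sqrt(ln N * ln ln N)).
import math

def L(digits):
    # Heuristic factoring cost for a number of the given decimal size.
    ln_n = digits * math.log(10)
    return math.exp(math.sqrt(ln_n * math.log(ln_n)))

for d in (90, 100, 110):
    print(d, round(L(d + 3.5) / L(d), 2))  # ratios close to 2
```

For numbers of around 100 digits, adding 3.5 digits indeed roughly doubles the estimated effort.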
From the outset CWI was involved in attempts to break existing factoring records and thus establish lower limits for `safe' prime factors. In the 1980s two important algorithmic discoveries considerably increased the size of the numbers that can be factored within a reasonable time on a modern computer: the Quadratic Sieve (QS) method (C. Pomerance, 1985) and the Elliptic Curve Method (ECM) (H.W. Lenstra Jr., 1987). The methods are complementary in the sense that ECM finds smaller factors of larger numbers, whereas QS finds larger factors of smaller numbers. A third method, the Number Field Sieve (NFS) (J.M. Pollard, 1993), is expected to be more efficient for general numbers than QS; finding the cross-over point between NFS and QS (presently around 105 decimal digits) is the subject of intensive current research at CWI and elsewhere.
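Both sieve methods ultimately rest on the same idea: construct x and y with x² ≡ y² (mod n) but x ≢ ±y (mod n), after which gcd(x−y, n) is a proper factor of n. The sieving machinery, which finds such pairs efficiently, is the hard part; the sketch below omits it and searches naively, which only works for tiny n:

```python
# Minimal illustration of the congruence-of-squares idea underlying QS
# and NFS. The sieving step, which finds suitable pairs efficiently,
# is replaced here by a naive search.
import math

def congruence_of_squares(n):
    # Look for x > sqrt(n) whose square reduces mod n to a perfect
    # square y*y, with x not congruent to +-y mod n.
    for x in range(math.isqrt(n) + 1, n):
        r = x * x % n
        y = math.isqrt(r)
        if y * y == r and x != y and (x + y) % n != 0:
            return math.gcd(x - y, n)  # a proper factor of n
    return None

print(congruence_of_squares(1649))  # finds the factor 17 (1649 = 17 * 97)
```

Here 57² = 3249 ≡ 1600 = 40² (mod 1649), and gcd(57 − 40, 1649) = 17.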
At CWI much effort has been spent over the past ten years on the efficient implementation of the Quadratic Sieve method on large vector mainframes such as the CDC Cyber 205, the NEC SX-2, and the CRAY Y-MP and C98 computers. Over the years several factorization records were established. These, and many other factored numbers, were contributions to the Cunningham Table - a table of known factors of numbers of the form a^n ± 1, initiated in 1925 by A.J.C. Cunningham and H.J. Woodall - and an extension of this table. The latest records were obtained mid-1994 in a collaboration between Oregon State University and CWI, with the help of the CRAY C98 at the Academic Computing Centre Amsterdam SARA and many workstations. They include a 162-digit Cunningham number, factored with NFS. Ongoing work on the extended Cunningham table recently resulted in the determination of the factors of all composite numbers in the table with fewer than 90 decimal digits.
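Numbers of the form a^n ± 1 always carry algebraic factors - for instance, a^d − 1 divides a^n − 1 whenever d divides n - so the real work in the Cunningham project lies in factoring the part that remains after these are removed. A quick check of the divisibility fact:

```python
# a^d - 1 divides a^n - 1 whenever d divides n: check for a = 2, n = 12.
a, n = 2, 12
for d in range(1, n + 1):
    if n % d == 0:
        assert (a**n - 1) % (a**d - 1) == 0  # algebraic divisor

print([a**d - 1 for d in (2, 3, 4, 6)])  # 3, 7, 15, 63 all divide 4095
```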