Number Theory

Ntheory Class Reference

class sympy.ntheory.generate.Sieve

An infinite list of prime numbers, implemented as a dynamically growing sieve of Eratosthenes. When a lookup is requested involving an odd number that has not been sieved, the sieve is automatically extended up to that number.

In [ ]:
>>> from sympy import sieve
>>> from array import array # this line and next for doctest only
>>> sieve._list = array('l', [2, 3, 5, 7, 11, 13])
In [ ]:
>>> 25 in sieve
False
>>> sieve._list
array('l', [2, 3, 5, 7, 11, 13, 17, 19, 23])

extend(n)

Grow the sieve to cover all primes <= n (a real number).

In [ ]:
>>> from sympy import sieve
>>> from array import array # this line and next for doctest only
>>> sieve._list = array('l', [2, 3, 5, 7, 11, 13])
In [ ]:
>>> sieve.extend(30)
>>> sieve[10] == 29
True

extend_to_no(i)

Extend to include the ith prime number.

i must be an integer.

The list is extended by 50% if it is too short, so it is likely that it will be longer than requested.

In [ ]:
>>> from sympy import sieve
>>> from array import array # this line and next for doctest only
>>> sieve._list = array('l', [2, 3, 5, 7, 11, 13])
In [ ]:
>>> sieve.extend_to_no(9)
>>> sieve._list
array('l', [2, 3, 5, 7, 11, 13, 17, 19, 23])

primerange(a, b)

Generate all prime numbers in the range [a, b).

In [ ]:
>>> from sympy import sieve
>>> print([i for i in sieve.primerange(7, 18)])
[7, 11, 13, 17]

search(n)

Return the indices i, j of the primes that bound n.

If n is prime then i == j.

Although n can be an expression, an error will be raised if ceiling cannot convert it to an integer.

In [ ]:
>>> from sympy import sieve
>>> sieve.search(25)
(9, 10)
>>> sieve.search(23)
(9, 9)
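
The indices are 1-based positions in the sequence of primes, so the returned pair brackets n between consecutive primes (a small check following the indexing shown in the extend example above):

In [ ]:
>>> sieve[9], sieve[10]
(23, 29)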

Ntheory Functions Reference

sympy.ntheory.generate.prime(nth)

Return the nth prime, with the primes indexed as prime(1) = 2, prime(2) = 3, etc. The nth prime is approximately n*log(n) and can never be larger than 2**n.

In [ ]:
>>> from sympy import prime
>>> prime(10)
29
>>> prime(1)
2
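
As a rough illustration of the n*log(n) approximation and the 2**n bound (a quick sketch, not part of the original doctests):

In [ ]:
>>> from sympy import prime
>>> import math
>>> [(n, prime(n), round(n*math.log(n)), 2**n) for n in (5, 10, 15)]
[(5, 11, 8, 32), (10, 29, 23, 1024), (15, 47, 41, 32768)]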

sympy.ntheory.primetest.isprime : Test if n is prime
primerange : Generate all primes in a given range
primepi : Return the number of primes less than or equal to n

sympy.ntheory.generate.primepi(n)

Return the value of the prime counting function pi(n) = the number of prime numbers less than or equal to n.

In [ ]:
>>> from sympy import primepi
>>> primepi(25)
9
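
Since primepi counts primes and prime returns the nth one, primepi(prime(n)) == n; a small illustrative check:

In [ ]:
>>> from sympy import primepi, prime
>>> primepi(prime(10)), prime(10)
(10, 29)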

sympy.ntheory.primetest.isprime : Test if n is prime
primerange : Generate all primes in a given range
prime : Return the nth prime

sympy.ntheory.generate.nextprime(n, ith=1)

Return the ith prime greater than n.

i must be an integer.

Potential primes are located at 6*j +/- 1. This property is used during searching.

In [ ]:
>>> from sympy import nextprime
>>> [(i, nextprime(i)) for i in range(10, 15)]
[(10, 11), (11, 13), (12, 13), (13, 17), (14, 17)]
>>> nextprime(2, ith=2) # the 2nd prime after 2
5
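
The 6*j +/- 1 property can be checked directly: every prime greater than 3 is congruent to 1 or 5 modulo 6 (a quick sketch, not part of the original doctests):

In [ ]:
>>> from sympy import primerange
>>> all(p % 6 in (1, 5) for p in primerange(5, 10**4))
True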

prevprime : Return the largest prime smaller than n
primerange : Generate all primes in a given range

sympy.ntheory.generate.prevprime(n)

Return the largest prime smaller than n.

Potential primes are located at 6*j +/- 1. This property is used during searching.

In [ ]:
>>> from sympy import prevprime
>>> [(i, prevprime(i)) for i in range(10, 15)]
[(10, 7), (11, 7), (12, 11), (13, 11), (14, 13)]

nextprime : Return the ith prime greater than n
primerange : Generate all primes in a given range

sympy.ntheory.generate.primerange(a, b)

Generate a list of all prime numbers in the range [a, b).

If the range exists in the default sieve, the values will be returned from there; otherwise values will be returned but will not modify the sieve.

Some famous conjectures about the occurrence of primes in a given range are [1]:

Twin primes: though often not, the following will give 2 primes an infinite number of times:

primerange(6*n - 1, 6*n + 2)

Legendre's: the following always yields at least one prime

primerange(n**2, (n + 1)**2 + 1)

Bertrand's (proven): there is always a prime in the range

primerange(n, 2*n)

Brocard's: there are at least four primes in the range

primerange(prime(n)**2, prime(n + 1)**2)

The average gap between primes is log(n) [2]; the gap between primes can be arbitrarily large since sequences of composite numbers are arbitrarily large, e.g. the numbers in the sequence n! + 2, n! + 3 ... n! + n are all composite.
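
Both Bertrand's postulate and the factorial construction for prime-free gaps can be checked with small sketches (illustrative only; the bound 200 and the choice n = 7 are arbitrary):

In [ ]:
>>> from sympy import primerange, factorial
>>> all(list(primerange(n, 2*n)) for n in range(2, 200))
True
>>> list(primerange(factorial(7) + 2, factorial(7) + 8))  # 7! + 2 .. 7! + 7 are composite
[]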

In [ ]:
>>> from sympy import primerange, sieve
>>> print([i for i in primerange(1, 30)])
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

The Sieve method, primerange, is generally faster but it will occupy more memory as the sieve stores values. The default instance of Sieve, named sieve, can be used:

In [ ]:
>>> list(sieve.primerange(1, 30))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

nextprime : Return the ith prime greater than n
prevprime : Return the largest prime smaller than n
randprime : Returns a random prime in a given range
primorial : Returns the product of primes based on condition
Sieve.primerange : return range from already computed primes or extend the sieve to contain the requested range.

sympy.ntheory.generate.randprime(a, b)

Return a random prime number in the range [a, b).

Bertrand's postulate assures that randprime(a, 2*a) will always succeed for a > 1.

In [ ]:
>>> from sympy import randprime, isprime
>>> randprime(1, 30) #doctest: +SKIP
13
>>> isprime(randprime(1, 30))
True

primerange : Generate all primes in a given range

sympy.ntheory.generate.primorial(n, nth=True)

Returns the product of the first n primes (default) or the primes less than or equal to n (when "nth=False").

In [ ]:
>>> from sympy.ntheory.generate import primorial, randprime, primerange
>>> from sympy import factorint, Mul, primefactors, sqrt
>>> primorial(4) # the first 4 primes are 2, 3, 5, 7
210
>>> primorial(4, nth=False) # primes <= 4 are 2 and 3
6
>>> primorial(1)
2
>>> primorial(1, nth=False)
1
>>> primorial(sqrt(101), nth=False)
210

One can argue that the primes are infinite since if you take a set of primes and multiply them together (e.g. the primorial) and then add or subtract 1, the result cannot be divided by any of the original factors, hence one or more new primes must divide the result.

In this case, the number itself is a new prime:

In [ ]:
>>> factorint(primorial(4) + 1)
{211: 1}

In this case two new primes are the factors:

In [ ]:
>>> factorint(primorial(4) - 1)
{11: 1, 19: 1}

Here, some primes smaller and larger than the primes multiplied together are obtained:

In [ ]:
>>> p = list(primerange(10, 20))
>>> sorted(set(primefactors(Mul(*p) + 1)).difference(set(p)))
[2, 5, 31, 149]

primerange : Generate all primes in a given range

sympy.ntheory.generate.cycle_length(f, x0, nmax=None, values=False)

For a given iterated sequence, return a generator that gives the length of the iterated cycle (lambda) and the number of terms before the cycle begins (mu); if "values" is True then the terms of the sequence will be returned instead. The sequence is started with value "x0".

Note: more than the first lambda + mu terms may be returned; this is the cost of cycle detection with Brent's method. Generally, however, fewer terms are calculated than would have been calculated if the proper ending point were determined, e.g. by using Floyd's method.

In [ ]:
>>> from sympy.ntheory.generate import cycle_length

This will yield successive values of i <-- func(i):

In [ ]:
>>> def iter(func, i):
...     while 1:
...         ii = func(i)
...         yield ii
...         i = ii
...

A function is defined:

In [ ]:
>>> func = lambda i: (i**2 + 1) % 51

and given a seed of 4 and the mu and lambda terms calculated:

In [ ]:
>>> next(cycle_length(func, 4))
(6, 2)

We can see what is meant by looking at the output:

In [ ]:
>>> n = cycle_length(func, 4, values=True)
>>> list(ni for ni in n)
[17, 35, 2, 5, 26, 14, 44, 50, 2, 5, 26, 14]

There are 6 repeating values after the first 2.

If a sequence is suspected of being longer than you might wish, "nmax" can be used to exit early (and mu will be returned as None):

In [ ]:
>>> next(cycle_length(func, 4, nmax = 4))
(4, None)
>>> [ni for ni in cycle_length(func, 4, nmax = 4, values=True)]
[17, 35, 2, 5]

Code modified from:

sympy.ntheory.factor_.smoothness(n)

Return the B-smooth and B-power smooth values of n.

The smoothness of n is the largest prime factor of n; the power-smoothness is the largest divisor raised to its multiplicity.

In [ ]:
>>> from sympy.ntheory.factor_ import smoothness
>>> smoothness(2**7*3**2)
(3, 128)
>>> smoothness(2**4*13)
(13, 16)
>>> smoothness(2)
(2, 2)

factorint, smoothness_p

sympy.ntheory.factor_.smoothness_p(n, m=-1, power=0, visual=None)

Return a list of [m, (p, (M, sm(p + m), psm(p + m)))...] where:

p**M is the base-p divisor of n

sm(p + m) is the smoothness of p + m (m = -1 by default)

psm(p + m) is the power smoothness of p + m

The list is sorted according to smoothness (default) or by power smoothness if power=1.

The smoothness of the numbers to the left (m = -1) or right (m = 1) of a factor governs the results that are obtained from the p +/- 1 type factoring methods.

In [ ]:
>>> from sympy.ntheory.factor_ import smoothness_p, factorint
>>> smoothness_p(10431, m=1)
(1, [(3, (2, 2, 4)), (19, (1, 5, 5)), (61, (1, 31, 31))])
>>> smoothness_p(10431)
(-1, [(3, (2, 2, 2)), (19, (1, 3, 9)), (61, (1, 5, 5))])
>>> smoothness_p(10431, power=1)
(-1, [(3, (2, 2, 2)), (61, (1, 5, 5)), (19, (1, 3, 9))])

If visual=True then an annotated string will be returned:

In [ ]:
>>> print(smoothness_p(21477639576571, visual=1))
p**i=4410317**1 has p-1 B=1787, B-pow=1787
p**i=4869863**1 has p-1 B=2434931, B-pow=2434931

This string can also be generated directly from a factorization dictionary and vice versa:

In [ ]:
>>> factorint(17*9)
{3: 2, 17: 1}
>>> smoothness_p(_)
'p**i=3**2 has p-1 B=2, B-pow=2\np**i=17**1 has p-1 B=2, B-pow=16'
>>> smoothness_p(_)
{3: 2, 17: 1}

The table of the output logic is:

Input     visual=True    visual=False    visual=other
dict      str            tuple           str
str       str            tuple           dict
tuple     str            tuple           str
n         str            tuple           tuple
mul       str            tuple           tuple

factorint, smoothness

sympy.ntheory.factor_.trailing(n)

Count the number of trailing zero digits in the binary representation of n, i.e. determine the largest power of 2 that divides n.

In [ ]:
>>> from sympy import trailing
>>> trailing(128)
7
>>> trailing(63)
0
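
Since trailing(n) is the exponent of the largest power of 2 dividing n, it agrees with multiplicity(2, n), described next (a small cross-check, not part of the original doctests):

In [ ]:
>>> from sympy import trailing, multiplicity
>>> all(trailing(n) == multiplicity(2, n) for n in range(1, 100))
True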

sympy.ntheory.factor_.multiplicity(p, n)

Find the greatest integer m such that p**m divides n.

In [ ]:
>>> from sympy.ntheory import multiplicity
>>> from sympy.core.numbers import Rational as R
>>> [multiplicity(5, n) for n in [8, 5, 25, 125, 250]]
[0, 1, 2, 3, 3]
>>> multiplicity(3, R(1, 9))
-2

sympy.ntheory.factor_.perfect_power(n, candidates=None, big=True, factor=True)

Return "(b, e)" such that "n" == "b**e" if "n" is a perfect power; otherwise return "False".

By default, the base is recursively decomposed and the exponents collected so the largest possible "e" is sought. If "big=False" then the smallest possible "e" (thus prime) will be chosen.

If "candidates" for exponents are given, they are assumed to be sorted and the first one that is larger than the computed maximum will signal failure for the routine.

If "factor=True" then simultaneous factorization of n is attempted since finding a factor indicates the only possible root for n. This is True by default since only a few small factors will be tested in the course of searching for the perfect power.

In [ ]:
>>> from sympy import perfect_power
>>> perfect_power(16)
(2, 4)
>>> perfect_power(16, big = False)
(4, 2)

sympy.ntheory.factor_.pollard_rho(n, s=2, a=1, retries=5, seed=1234, max_steps=None, F=None)

Use Pollard's rho method to try to extract a nontrivial factor of "n". The returned factor may be a composite number. If no factor is found, "None" is returned.

The algorithm generates pseudo-random values of x with a generator function, replacing x with F(x). If F is not supplied then the function x**2 + "a" is used. The first value supplied to F(x) is "s". Upon failure (if "retries" is > 0) a new "a" and "s" will be supplied; the "a" will be ignored if F was supplied.

The sequence of numbers generated by such functions generally has a lead-up to some number and then loops around back to that number and begins to repeat the sequence, e.g. 1, 2, 3, 4, 5, 3, 4, 5 -- this leader and loop look a bit like the Greek letter rho, and thus the name, 'rho'.

For a given function, very different leader-loop values can be obtained so it is a good idea to allow for retries:

In [ ]:
>>> from sympy.ntheory.generate import cycle_length
>>> n = 16843009
>>> F = lambda x:(2048*pow(x, 2, n) + 32767) % n
>>> for s in range(5):
...     print('loop length = %4i; leader length = %3i' % next(cycle_length(F, s)))
...
loop length = 2489; leader length =  42
loop length =   78; leader length = 120
loop length = 1482; leader length =  99
loop length = 1482; leader length = 285
loop length = 1482; leader length = 100

Here is an explicit example where there is a two-element lead-up to a sequence of 3 numbers (11, 14, 4) that then repeats:

In [ ]:
>>> x=2
>>> for i in range(9):
...     x=(x**2+12)%17
...     print(x)
...
16
13
11
14
4
11
14
4
11
>>> next(cycle_length(lambda x: (x**2+12)%17, 2))
(3, 2)
>>> list(cycle_length(lambda x: (x**2+12)%17, 2, values=True))
[16, 13, 11, 14, 4]

Instead of checking the differences of all generated values for a gcd with n, only the kth and 2*kth numbers are checked, e.g. 1st and 2nd, 2nd and 4th, 3rd and 6th until it has been detected that the loop has been traversed. Loops may be many thousands of steps long before rho finds a factor or reports failure. If "max_steps" is specified, the iteration is cancelled with a failure after the specified number of steps.
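
The gcd-based detection can be sketched as follows. This is a minimal Floyd-style illustration of the idea, not SymPy's actual implementation, and the helper name rho_sketch is made up:

In [ ]:
from math import gcd

def rho_sketch(n, s=2, a=1, max_steps=10**5):
    # advance x once and y twice per step and test gcd(x - y, n)
    f = lambda v: (v*v + a) % n
    x = y = s
    for _ in range(max_steps):
        x = f(x)
        y = f(f(y))
        g = gcd(abs(x - y), n)
        if g == n:
            return None  # trivial cycle reached; a retry with new a, s would be needed
        if g > 1:
            return g
    return None

# rho_sketch(16843009) should return a nontrivial factor such as 257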

In [ ]:
>>> from sympy import pollard_rho
>>> n=16843009
>>> F=lambda x:(2048*pow(x,2,n) + 32767) % n
>>> pollard_rho(n, F=F)
257

Use the default setting with a bad value of "a" and no retries:

In [ ]:
>>> pollard_rho(n, a=n-2, retries=0)

If retries is > 0 then perhaps the problem will correct itself when new values are generated for a:

In [ ]:
>>> pollard_rho(n, a=n-2, retries=1)
257

Richard Crandall & Carl Pomerance (2005), "Prime Numbers: A Computational Perspective", Springer, 2nd edition, 229-231

sympy.ntheory.factor_.pollard_pm1(n, B=10, a=2, retries=0, seed=1234)

Use Pollard's p-1 method to try to extract a nontrivial factor of "n". Either a divisor (perhaps composite) or "None" is returned.

The value of "a" is the base that is used in the test gcd(a**M - 1, n). The default is 2. If "retries" > 0 then if no factor is found after the first attempt, a new "a" will be generated randomly (using the "seed") and the process repeated.

Note: the value of M is lcm(1..B) = reduce(ilcm, range(2, B + 1)).

A search is made for factors next to even numbers having a power smoothness less than "B". Choosing a larger B increases the likelihood of finding a larger factor but takes longer. Whether a factor of n is found or not depends on "a" and the power smoothness of the even number just less than the factor p (hence the name p - 1).
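
With M = lcm(1..B) as noted above, the core computation is a single modular exponentiation followed by a gcd; a minimal sketch (illustrative only, not SymPy's implementation; pm1_sketch is a made-up name):

In [ ]:
from math import gcd
from sympy import ilcm

def pm1_sketch(n, B=10, a=2):
    M = 1
    for i in range(2, B + 1):
        M = ilcm(M, i)
    g = gcd(pow(a, M, n) - 1, n)
    return g if 1 < g < n else None

# pm1_sketch(257*1009, B=16, a=3) gives 1009, matching the example below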

Although there is some discussion of what constitutes a good "a", some descriptions are hard to interpret. At the modular.math site referenced below it is stated that if gcd(a**M - 1, n) = N then a**M % q**r is 1 for every prime power divisor of N. But consider the following:

In [ ]:
>>> from sympy.ntheory.factor_ import smoothness_p, pollard_pm1
>>> n=257*1009
>>> smoothness_p(n)
(-1, [(257, (1, 2, 256)), (1009, (1, 7, 16))])

So we should (and can) find a root with B=16:

In [ ]:
>>> pollard_pm1(n, B=16, a=3)
1009

If we attempt to increase B to 256 we find that it doesn't work:

In [ ]:
>>> pollard_pm1(n, B=256)
>>>

But if the value of "a" is changed we find that only multiples of 257 work, e.g.:

In [ ]:
>>> pollard_pm1(n, B=256, a=257)
1009

Checking different "a" values shows that all the ones that didn't work had a gcd value not equal to "n" but equal to one of the factors:

In [ ]:
>>> from sympy.core.numbers import ilcm, igcd
>>> from sympy import factorint, Pow
>>> M = 1
>>> for i in range(2, 256):
...     M = ilcm(M, i)
...
>>> set([igcd(pow(a, M, n) - 1, n) for a in range(2, 256) if
...      igcd(pow(a, M, n) - 1, n) != n])
set([1009])

But does a**M % d give 1 for every divisor of n?

In [ ]:
>>> aM = pow(255, M, n)
>>> [(d, aM%Pow(*d.args)) for d in factorint(n, visual=True).args]
[(257**1, 1), (1009**1, 1)]

No, only one of them. So perhaps the principle is that a root will be found for a given value of B provided that:

1) the power smoothness of the p - 1 value next to the root does not exceed B

2) a**M % p != 1 for any of the divisors of n.

By trying more than one "a" it is possible that one of them will yield a factor.

With the default smoothness bound, this number can't be cracked:

In [ ]:
>>> from sympy.ntheory import pollard_pm1, primefactors
>>> pollard_pm1(21477639576571)

Increasing the smoothness bound helps:

In [ ]:
>>> pollard_pm1(21477639576571, B=2000)
4410317

Looking at the smoothness of the factors of this number we find:

In [ ]:
>>> from sympy.utilities import flatten
>>> from sympy.ntheory.factor_ import smoothness_p, factorint
>>> print(smoothness_p(21477639576571, visual=1))
p**i=4410317**1 has p-1 B=1787, B-pow=1787
p**i=4869863**1 has p-1 B=2434931, B-pow=2434931

The B and B-pow are the same for the p - 1 factorizations of the divisors because those factorizations had a very large prime factor:

In [ ]:
>>> factorint(4410317 - 1)
{2: 2, 617: 1, 1787: 1}
>>> factorint(4869863-1)
{2: 1, 2434931: 1}

Note that until B reaches the B-pow value of 1787, the number is not cracked:

In [ ]:
>>> pollard_pm1(21477639576571, B=1786)
>>> pollard_pm1(21477639576571, B=1787)
4410317

The B value has to do with the factors of the number next to the divisor, not the divisors themselves. A worst case scenario is that the number next to the factor p has a large prime divisor or is a perfect power. If these conditions apply then the power-smoothness will be about p/2 or p. The more realistic case is that there will be a large prime factor next to p, requiring a B value on the order of p/2. Although primes may have been searched for up to this level, the p/2 is a factor of p - 1, something that we don't know. The modular.math reference below states that 15% of numbers in the range of 10**15 to 10**15 + 10**4 are 10**6 power smooth so a B of 10**6 will fail 85% of the time in that range. From 10**8 to 10**8 + 10**3 the percentages are nearly reversed...but in that range the simple trial division is quite fast.

Richard Crandall & Carl Pomerance (2005), "Prime Numbers: A Computational Perspective", Springer, 2nd edition, 236-238

sympy.ntheory.factor_.factorint(n, limit=None, use_trial=True, use_rho=True, use_pm1=True, verbose=False, visual=None)

Given a positive integer "n", "factorint(n)" returns a dict containing the prime factors of "n" as keys and their respective multiplicities as values. For example:

In [ ]:
>>> from sympy.ntheory import factorint
>>> factorint(2000)    # 2000 = (2**4) * (5**3)
{2: 4, 5: 3}
>>> factorint(65537)   # This number is prime
{65537: 1}

For input less than 2, factorint behaves as follows:

"factorint(1)" returns the empty factorization, "{}"

"factorint(0)" returns "{0:1}"

"factorint(-n)" adds "-1:1" to the factors and then factors "n"

Partial Factorization:

If "limit" (> 3) is specified, the search is stopped after performing trial division up to (and including) the limit (or taking a corresponding number of rho/p-1 steps). This is useful if one has a large number and only is interested in finding small factors (if any). Note that setting a limit does not prevent larger factors from being found early; it simply means that the largest factor may be composite. Since checking for perfect power is relatively cheap, it is done regardless of the limit setting.

This number, for example, has two small factors and a huge semi-prime factor that cannot be reduced easily:

In [ ]:
>>> from sympy.ntheory import isprime
>>> from sympy.core.compatibility import long
>>> a = 1407633717262338957430697921446883
>>> f = factorint(a, limit=10000)
>>> f == {991: 1, long(202916782076162456022877024859): 1, 7: 1}
True
>>> isprime(max(f))
False

This number has a small factor and a residual perfect power whose base is greater than the limit:

In [ ]:
>>> factorint(3*101**7, limit=5)
{3: 1, 101: 7}

Visual Factorization:

If "visual" is set to "True", then it will return a visual factorization of the integer. For example:

In [ ]:
>>> from sympy import pprint
>>> pprint(factorint(4200, visual=True))
 3  1  2  1
2 *3 *5 *7

Note that this is achieved by using the evaluate=False flag in Mul and Pow. If you do other manipulations with an expression where evaluate=False, it may evaluate. Therefore, you should use the visual option only for visualization, and use the normal dictionary returned by visual=False if you want to perform operations on the factors.

You can easily switch between the two forms by sending them back to factorint:

In [ ]:
>>> from sympy import Mul, Pow
>>> regular = factorint(1764); regular
{2: 2, 3: 2, 7: 2}
>>> pprint(factorint(regular))
 2  2  2
2 *3 *7
In [ ]:
>>> visual = factorint(1764, visual=True); pprint(visual)
 2  2  2
2 *3 *7
>>> print(factorint(visual))
{2: 2, 3: 2, 7: 2}

If you want to send a number to be factored in a partially factored form you can do so with a dictionary or unevaluated expression:

In [ ]:
>>> factorint(factorint({4: 2, 12: 3})) # twice to toggle to dict form
{2: 10, 3: 3}
>>> factorint(Mul(4, 12, evaluate=False))
{2: 4, 3: 1}

The table of the output logic is:

Input     visual=True    visual=False    visual=other
dict      mul            dict            mul
n         mul            dict            dict
mul       mul            dict            dict

Algorithm:

The function switches between multiple algorithms. Trial division quickly finds small factors (of the order 1-5 digits), and finds all large factors if given enough time. The Pollard rho and p-1 algorithms are used to find large factors ahead of time; they will often find factors of the order of 10 digits within a few seconds:

In [ ]:
>>> factors = factorint(12345678910111213141516)
>>> for base, exp in sorted(factors.items()):
...     print('%s %s' % (base, exp))
...
2 2
2507191691 1
1231026625769 1

Any of these methods can optionally be disabled with the following boolean parameters:

"use_trial": Toggle use of trial division

"use_rho": Toggle use of Pollard's rho method

"use_pm1": Toggle use of Pollard's p-1 method

"factorint" also periodically checks if the remaining part is a prime number or a perfect power, and in those cases stops.

If "verbose" is set to "True", detailed progress is printed.

smoothness, smoothness_p, divisors

sympy.ntheory.factor_.primefactors(n, limit=None, verbose=False)

Return a sorted list of n's prime factors, ignoring multiplicity and any composite factor that remains if the limit was set too low for complete factorization. Unlike factorint(), primefactors() does not return -1 or 0.

In [ ]:
>>> from sympy.ntheory import primefactors, factorint, isprime
>>> primefactors(6)
[2, 3]
>>> primefactors(-5)
[5]
In [ ]:
>>> sorted(factorint(123456).items())
[(2, 6), (3, 1), (643, 1)]
>>> primefactors(123456)
[2, 3, 643]
In [ ]:
>>> sorted(factorint(10000000001, limit=200).items())
[(101, 1), (99009901, 1)]
>>> isprime(99009901)
False
>>> primefactors(10000000001, limit=300)
[101]

divisors

sympy.ntheory.factor_.divisors(n, generator=False)

Return all divisors of n sorted from 1..n by default. If generator is True an unordered generator is returned.

The number of divisors of n can be quite large if there are many prime factors (counting repeated factors). If only the number of factors is desired use divisor_count(n).

In [ ]:
>>> from sympy import divisors, divisor_count
>>> divisors(24)
[1, 2, 3, 4, 6, 8, 12, 24]
>>> divisor_count(24)
8
In [ ]:
>>> list(divisors(120, generator=True))
[1, 2, 4, 8, 3, 6, 12, 24, 5, 10, 20, 40, 15, 30, 60, 120]

This is a slightly modified version of Tim Peters' code, referenced at: http://stackoverflow.com/questions/1010381/python-factorization

primefactors, factorint, divisor_count

sympy.ntheory.factor_.divisor_count(n, modulus=1)

Return the number of divisors of "n". If "modulus" is not 1 then only those that are divisible by "modulus" are counted.

In [ ]:
>>> from sympy import divisor_count
>>> divisor_count(6)
4
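
The modulus argument restricts the count to divisors that are multiples of the given value (a small sketch based on the divisors(24) list shown earlier):

In [ ]:
>>> from sympy import divisor_count
>>> divisor_count(24, modulus=2)
6
>>> divisor_count(24, modulus=5)
0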

factorint, divisors, totient

sympy.ntheory.factor_.totient(n)

Calculate the Euler totient function phi(n)

In [ ]:
>>> from sympy.ntheory import totient
>>> totient(1)
1
>>> totient(25)
20

divisor_count

sympy.ntheory.modular.symmetric_residue(a, m)

Return the residual mod m such that it is within half of the modulus.

In [ ]:
>>> from sympy.ntheory.modular import symmetric_residue
>>> symmetric_residue(1, 6)
1
>>> symmetric_residue(4, 6)
-2

sympy.ntheory.modular.crt(m, v, symmetric=False, check=True)

Chinese Remainder Theorem.

The moduli in m are assumed to be pairwise coprime. The output is then an integer f, such that f = v_i mod m_i for each pair out of v and m. If "symmetric" is False a positive integer will be returned, else |f| will be less than or equal to the LCM of the moduli, and thus f may be negative.

If the moduli are not co-prime the correct result will be returned if/when the test of the result is found to be incorrect. This result will be None if there is no solution.

The keyword "check" can be set to False if it is known that the moduli are coprime.

As an example consider a set of residues "U = [49, 76, 65]" and a set of moduli "M = [99, 97, 95]". Then we have:

In [ ]:
>>> from sympy.ntheory.modular import crt, solve_congruence

>>> crt([99, 97, 95], [49, 76, 65])
(639985, 912285)

This is the correct result because:

In [ ]:
>>> [639985 % m for m in [99, 97, 95]]
[49, 76, 65]

If the moduli are not co-prime, you may receive an incorrect result if you use "check=False":

In [ ]:
>>> crt([12, 6, 17], [3, 4, 2], check=False)
(954, 1224)
>>> [954 % m for m in [12, 6, 17]]
[6, 0, 2]
>>> crt([12, 6, 17], [3, 4, 2]) is None
True
>>> crt([3, 6], [2, 5])
(5, 6)

Note that the order of gf_crt's arguments is reversed relative to crt, and that solve_congruence takes residue, modulus pairs.

Programmer's note: rather than checking that all pairs of moduli share no GCD (an O(n**2) test) and rather than factoring all moduli and seeing that there is no factor in common, a check that the result gives the indicated residuals is performed -- an O(n) operation.
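
That check amounts to verifying that the candidate result reproduces each residue, as in this small sketch (crt_check is a made-up helper name):

In [ ]:
def crt_check(result, moduli, residues):
    # O(n) verification: the result must reproduce every residue
    return all(result % m == v % m for m, v in zip(moduli, residues))

# crt_check(639985, [99, 97, 95], [49, 76, 65]) -> True, per the example above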

solve_congruence
sympy.polys.galoistools.gf_crt : low level crt routine used by this routine

sympy.ntheory.modular.crt1(m)

First part of Chinese Remainder Theorem, for multiple application.

In [ ]:
>>> from sympy.ntheory.modular import crt1
>>> crt1([18, 42, 6])
(4536, [252, 108, 756], [0, 2, 0])

sympy.ntheory.modular.crt2(m, v, mm, e, s, symmetric=False)

Second part of Chinese Remainder Theorem, for multiple application.

In [ ]:
>>> from sympy.ntheory.modular import crt1, crt2
>>> mm, e, s = crt1([18, 42, 6])
>>> crt2([18, 42, 6], [0, 0, 0], mm, e, s)
(0, 4536)
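
The split into crt1 and crt2 is what allows "multiple application": the precomputation from crt1 can be reused for several residue vectors over the same moduli. A sketch using the pairwise-coprime moduli from the crt example above:

In [ ]:
>>> from sympy.ntheory.modular import crt1, crt2
>>> moduli = [99, 97, 95]
>>> mm, e, s = crt1(moduli)  # precompute once for these moduli
>>> crt2(moduli, [49, 76, 65], mm, e, s)  # should match the crt result above
(639985, 912285)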

sympy.ntheory.modular.solve_congruence(remainder_modulus_pairs, *hint)

Compute the integer "n" that has the residual "ai" when it is divided by "mi" where the "ai" and "mi" are given as pairs to this function: ((a1, m1), (a2, m2), ...). If there is no solution, return None. Otherwise return "n" and its modulus.

The "mi" values need not be co-prime. If it is known that the moduli are not co-prime then the hint "check" can be set to False (default=True) and the check for a quicker solution via crt() (valid when the moduli are co-prime) will be skipped.

If the hint "symmetric" is True (default is False), the value of "n" will be within 1/2 of the modulus, possibly negative.

In [ ]:
>>> from sympy.ntheory.modular import solve_congruence

What number is 2 mod 3, 3 mod 5 and 2 mod 7?

In [ ]:
>>> solve_congruence((2, 3), (3, 5), (2, 7))
(23, 105)
>>> [23 % m for m in [3, 5, 7]]
[2, 3, 2]

If you prefer to work with all remainders in one list and all moduli in another, send the arguments like this:

In [ ]:
>>> solve_congruence(*zip((2, 3, 2), (3, 5, 7)))
(23, 105)

The moduli need not be co-prime; in this case there may or may not be a solution:

In [ ]:
>>> solve_congruence((2, 3), (4, 6)) is None
True
In [ ]:
>>> solve_congruence((2, 3), (5, 6))
(5, 6)

The symmetric flag will make the result be within 1/2 of the modulus:

In [ ]:
>>> solve_congruence((2, 3), (5, 6), symmetric=True)
(-1, 6)

crt : high level routine implementing the Chinese Remainder Theorem

sympy.ntheory.multinomial.binomial_coefficients(n)

Return a dictionary containing pairs {(k1, k2) : C_kn} where C_kn are binomial coefficients and n = k1 + k2.

Examples

In [ ]:
>>> from sympy.ntheory import binomial_coefficients
>>> binomial_coefficients(9)
{(0, 9): 1, (1, 8): 9, (2, 7): 36, (3, 6): 84,
 (4, 5): 126, (5, 4): 126, (6, 3): 84, (7, 2): 36, (8, 1): 9, (9, 0): 1}

binomial_coefficients_list, multinomial_coefficients

sympy.ntheory.multinomial.binomial_coefficients_list(n)

Return a list of binomial coefficients, i.e. the nth row of Pascal's triangle.

In [ ]:
>>> from sympy.ntheory import binomial_coefficients_list
>>> binomial_coefficients_list(9)
[1, 9, 36, 84, 126, 126, 84, 36, 9, 1]

binomial_coefficients, multinomial_coefficients

sympy.ntheory.multinomial.multinomial_coefficients(m, n)

Return a dictionary containing pairs "{(k1,k2,..,km) : C_kn}" where "C_kn" are multinomial coefficients such that "n=k1+k2+..+km".

For example:

In [ ]:
>>> from sympy.ntheory import multinomial_coefficients
>>> multinomial_coefficients(2, 5) # indirect doctest
{(0, 5): 1, (1, 4): 5, (2, 3): 10, (3, 2): 10, (4, 1): 5, (5, 0): 1}
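
The values agree with the closed form n!/(k1!*k2!*...*km!); a quick cross-check (illustrative, not part of the original doctests):

In [ ]:
>>> from math import factorial
>>> from sympy.ntheory import multinomial_coefficients
>>> mc = multinomial_coefficients(3, 4)
>>> all(c == factorial(4)//(factorial(i)*factorial(j)*factorial(k))
...     for (i, j, k), c in mc.items())
True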

The algorithm is based on a pure recurrence for the coefficients of powers of a polynomial (the J.C.P. Miller recurrence).

Code contributed to Sage by Yann Laigle-Chapuy, copied with permission of the author.

binomial_coefficients_list, binomial_coefficients

sympy.ntheory.multinomial.multinomial_coefficients_iterator(m, n, _tuple=tuple)

Multinomial coefficient iterator.

This routine has been optimized for m large with respect to n by taking advantage of the fact that when the monomial tuples t are stripped of zeros, their coefficient is the same as that of the monomial tuples from "multinomial_coefficients(n, n)". Therefore, the latter coefficients are precomputed to save memory and time.

In [ ]:
>>> from sympy.ntheory.multinomial import multinomial_coefficients
>>> m53, m33 = multinomial_coefficients(5,3), multinomial_coefficients(3,3)
>>> m53[(0,0,0,1,2)] == m53[(0,0,1,0,2)] == m53[(1,0,2,0,0)] == m33[(0,1,2)]
True
In [ ]:
>>> from sympy.ntheory.multinomial import multinomial_coefficients_iterator
>>> it = multinomial_coefficients_iterator(20,3)
>>> next(it)
((3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 1)

sympy.ntheory.partitions_.npartitions(n, verbose=False)

Calculate the partition function P(n), i.e. the number of ways that n can be written as a sum of positive integers.

P(n) is computed using the Hardy-Ramanujan-Rademacher formula, described e.g. at http://mathworld.wolfram.com/PartitionFunctionP.html

The correctness of this implementation has been tested for 10**n up to n = 8.

In [ ]:
>>> from sympy.ntheory import npartitions
>>> npartitions(25)
1958

sympy.ntheory.primetest.mr(n, bases)

Perform a Miller-Rabin strong pseudoprime test on n using a given list of bases/witnesses.

Richard Crandall & Carl Pomerance (2005), "Prime Numbers: A Computational Perspective", Springer, 2nd edition, 135-138

In [ ]:
>>> from sympy.ntheory.primetest import mr
>>> mr(1373651, [2, 3])
False
>>> mr(479001599, [31, 73])
True
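
For reference, one round of the strong-pseudoprime test for a single base looks roughly like this (a minimal sketch of the idea, not SymPy's implementation; mr_round is a made-up name):

In [ ]:
def mr_round(n, a):
    # write n - 1 as d * 2**r with d odd, then test the base a
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(r - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False

# mr(n, bases) reports n as a probable prime only if every base passes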

sympy.ntheory.primetest.isprime(n)

Test if n is a prime number (True) or not (False). For n < 10**16 the answer is accurate; greater n values have a small probability of actually being pseudoprimes.

Negative primes (e.g. -2) are not considered prime.

The function first looks for trivial factors, and if none is found, performs a safe Miller-Rabin strong pseudoprime test with bases that are known to prove a number prime. Finally, a general Miller-Rabin test is done with the first k bases which will report a pseudoprime as a prime with an error of about 4**-k. The current value of k is 46 so the error is about 2 x 10**-28.

In [ ]:
>>> from sympy.ntheory import isprime
>>> isprime(13)
True
>>> isprime(15)
False

sympy.ntheory.generate.primerange : Generate all primes in a given range
sympy.ntheory.generate.primepi : Return the number of primes less than or equal to n
sympy.ntheory.generate.prime : Return the nth prime

sympy.ntheory.residue_ntheory.n_order(a, n)

Returns the order of "a" modulo "n".

The order of "a" modulo "n" is the smallest integer "k" such that "a**k" leaves a remainder of 1 with "n".

In [ ]:
>>> from sympy.ntheory import n_order
>>> n_order(3, 7)
6
>>> n_order(4, 7)
3
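
The definition can be checked by brute force for the first example; the order of 3 modulo 7 is the smallest k with 3**k % 7 == 1:

In [ ]:
>>> min(k for k in range(1, 7) if 3**k % 7 == 1)
6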

sympy.ntheory.residue_ntheory.is_primitive_root(a, p)

Returns True if "a" is a primitive root of "p"

"a" is said to be the primitive root of "p" if gcd(a, p) == 1 and totient(p) is the smallest positive number s.t.

a**totient(p) cong 1 mod(p)

In [ ]:
>>> from sympy.ntheory import is_primitive_root, n_order, totient
>>> is_primitive_root(3, 10)
True
>>> is_primitive_root(9, 10)
False
>>> n_order(3, 10) == totient(10)
True
>>> n_order(9, 10) == totient(10)
False

sympy.ntheory.residue_ntheory.primitive_root(p)

Returns the smallest primitive root or None

[1] W. Stein "Elementary Number Theory" (2011), page 44 [2] P. Hackman "Elementary Number Theory" (2009), Chapter C

p : positive integer

In [ ]:
>>> from sympy.ntheory.residue_ntheory import primitive_root
>>> primitive_root(19)
2

sympy.ntheory.residue_ntheory.sqrt_mod(a, p, all_roots=False)

Find a root of x**2 = a (mod p).

a : integer
p : positive integer
all_roots : if True, return the list of roots (or None)

If there is no root, None is returned; otherwise the returned root is less than or equal to p // 2, though in general it is not the smallest one. The root p // 2 is returned only if it is the only root.

Use "all_roots" only when it is expected that all the roots fit in memory; otherwise use "sqrt_mod_iter".

In [ ]:
>>> from sympy.ntheory import sqrt_mod
>>> sqrt_mod(11, 43)
21
>>> sqrt_mod(17, 32, True)
[7, 9, 23, 25]

sympy.ntheory.residue_ntheory.quadratic_residues(p)

Returns the list of quadratic residues.

In [ ]:
>>> from sympy.ntheory.residue_ntheory import quadratic_residues
>>> quadratic_residues(7)
[0, 1, 2, 4]

sympy.ntheory.residue_ntheory.nthroot_mod(a, n, p, all_roots=False)

Find the solutions to x**n = a (mod p).

a : integer
n : positive integer
p : positive integer
all_roots : if False, return the smallest root; else return the list of roots

In [ ]:
>>> from sympy.ntheory.residue_ntheory import nthroot_mod
>>> nthroot_mod(11, 4, 19)
8
>>> nthroot_mod(11, 4, 19, True)
[8, 11]
>>> nthroot_mod(68, 3, 109)
23

sympy.ntheory.residue_ntheory.is_nthpow_residue(a, n, m)

Returns True if "x**n == a (mod m)" has solutions.

Hackman "Elementary Number Theory" (2009), page 76

sympy.ntheory.residue_ntheory.is_quad_residue(a, p)

Returns True if "a" (mod "p") is in the set of squares mod "p", i.e a % p in set([i**2 % p for i in range(p)]). If "p" is an odd prime, an iterative method is used to make the determination:

In [ ]:
>>> from sympy.ntheory import is_quad_residue
>>> list(set([i**2 % 7 for i in range(7)]))
[0, 1, 2, 4]
>>> [j for j in range(7) if is_quad_residue(j, 7)]
[0, 1, 2, 4]

legendre_symbol, jacobi_symbol

sympy.ntheory.residue_ntheory.legendre_symbol(a, p)

Returns 0 if a is a multiple of p

1 if a is a quadratic residue of p

-1 otherwise

p should be an odd prime by definition

In [ ]:
>>> from sympy.ntheory import legendre_symbol
>>> [legendre_symbol(i, 7) for i in range(7)]
[0, 1, 1, -1, 1, -1, -1]
>>> list(set([i**2 % 7 for i in range(7)]))
[0, 1, 2, 4]

is_quad_residue, jacobi_symbol

sympy.ntheory.residue_ntheory.jacobi_symbol(m, n)

Returns the product of the legendre_symbol(m, p) for all the prime factors, p, of n.

Returns 0 if m ≡ 0 (mod n)

1 if x**2 ≡ m (mod n) has a solution

-1 otherwise

(For composite n, a value of 1 does not guarantee that m is actually a quadratic residue of n.)

In [ ]:
>>> from sympy.ntheory import jacobi_symbol, legendre_symbol
>>> from sympy import Mul, S
>>> jacobi_symbol(45, 77)
-1
>>> jacobi_symbol(60, 121)
1

The relationship between the jacobi_symbol and legendre_symbol can be demonstrated as follows:

In [ ]:
>>> L = legendre_symbol
>>> S(45).factors()
{3: 2, 5: 1}
>>> jacobi_symbol(7, 45) == L(7, 3)**2 * L(7, 5)**1
True

is_quad_residue, legendre_symbol