# Summing up

## What else might you do?

We chose a particular algorithm to improve on trial and error in the computation of primes. So the big questions are:

- Are there other algorithms and options that might further reduce runtime?
- Is the current performance good enough for our study, or is it a problem?
- Are we missing any obvious big-ticket design changes?

There are in fact other algorithms that are more efficient at computing larger primes than our sieve, but our sieve is reasonably good for the small primes we have computed. Could we make further improvements to its basic design? Yes we can. For instance, we know that no even number above two can be prime, so we don't need to consider even numbers in our algorithm at all. This not only reduces computation, but also lets us halve the size of our data structure (you might try this as an exercise).
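One way this idea might look in practice is sketched below: the sieve stores a flag only for each odd number from three upwards, so index `i` stands for the odd number `2*i + 3`. The function name `sieve_odd_only` and the index arithmetic are illustrative choices, not code from earlier in this material.

```python
def sieve_odd_only(limit):
    """Sieve of Eratosthenes that tracks only odd numbers >= 3.

    is_prime[i] records whether the odd number 2*i + 3 is prime,
    halving the memory compared with a flag per integer.
    """
    if limit < 2:
        return []
    size = (limit - 1) // 2  # number of odd values 3, 5, ..., <= limit
    is_prime = [True] * size
    for i in range(size):
        if is_prime[i]:
            p = 2 * i + 3
            if p * p > limit:
                break  # remaining unmarked entries are all prime
            # Mark odd multiples of p, starting from p*p.
            start = (p * p - 3) // 2
            for j in range(start, size, p):
                is_prime[j] = False
    # Re-insert 2, the only even prime.
    return [2] + [2 * i + 3 for i in range(size) if is_prime[i]]
```

For example, `sieve_odd_only(30)` returns `[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]`, using a list of only 14 flags rather than 31.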

Another option, which we will explore in a later section, is `numpy`. It provides some very fast, optimised data structures and procedures that we will compare to standard Python.
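As a preview of what that comparison might involve, here is one possible `numpy` version of the sieve, assuming `numpy` is installed. The slice-assignment trick replaces the inner Python loop with a single vectorised operation; the function name is an illustrative choice.

```python
import numpy as np

def sieve_numpy(limit):
    """Sieve of Eratosthenes using a numpy boolean array."""
    if limit < 2:
        return np.array([], dtype=int)
    is_prime = np.ones(limit + 1, dtype=bool)
    is_prime[:2] = False  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Vectorised slice assignment marks every multiple
            # of p from p*p onwards in one operation.
            is_prime[p * p :: p] = False
    return np.nonzero(is_prime)[0]
```

The marking step, which dominates the runtime, now happens inside `numpy`'s compiled code rather than in a Python-level loop.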

## Conclusions

We’ve seen that a good algorithm makes a huge difference to what is feasible to compute. A trial and error approach to computing primes prevented us from finding even relatively small prime numbers in a reasonable time frame. The **Sieve of Eratosthenes** made the unfeasible suddenly feasible. We’ve also seen that the design of code can affect execution time by an order of magnitude. But not all optimisations will have a huge impact on performance, and the benefit of some optimisations may depend on the size of our problem.