[SOLVED] What optimizations should be left to the compiler?

Issue

Assume you have chosen the most efficient algorithm for a problem where performance is the first priority, and now, while implementing it, you have to decide on details like the following:

v[i*3+0], v[i*3+1] and v[i*3+2] contain the components of the velocity of particle i and we want to calculate the total kinetic energy. Given that all particles are of the same mass, one may write:

inline double sqr(double x)
{
    return x*x;
}

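/* note: mass is assumed to be defined elsewhere, e.g. as a file-scope variable */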
double get_kinetic_energy(double v[], int n)
{
    double sum = 0.0;

    for (int i=0; i < n; i++)
        sum += sqr(v[i*3+0]) + sqr(v[i*3+1]) + sqr(v[i*3+2]);

    return 0.5 * mass * sum;
}

To reduce the number of multiplications, it can be written as:

double get_kinetic_energy(double v[], int n)
{
    double sum = 0.0;

    for (int i=0; i < n; i++)
    {
        double *w = v + i*3;
        sum += sqr(w[0]) + sqr(w[1]) + sqr(w[2]);
    }

    return 0.5 * mass * sum;
}

(one may write a function with even fewer multiplications, but that’s not the point of this question)

Now my question is: since many C compilers can do this kind of optimization automatically, where should the developer rely on the compiler, and where should they try to optimize manually?

Solution

Looking at how both gcc and clang handle your code, the micro-optimisation you contemplate is pointless. The compilers already apply standard common subexpression elimination techniques that remove the overhead you are trying to eliminate.

As a matter of fact, the generated code handles two components at a time using XMM registers.

If performance is a must, then here are steps that will save the day:

  • the real judge is the wall clock. Write a benchmark with realistic data and measure performance until you get consistent results (a minimal benchmark sketch follows this list).

  • if you have a profiler, use it to determine where the bottlenecks are, if any. Changing algorithms for those parts that appear to hog performance is an effective approach.

  • try and get the best from the compiler: study the optimization options and let the compiler use more aggressive techniques if they are appropriate for the target system. For example, -mavx512f -mavx512cd lets gcc generate code that handles 8 components at a time using the 512-bit ZMM registers.

    This is a non-intrusive technique, as the code does not change, so you don’t risk introducing new bugs by hand-optimizing the code.
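As a concrete illustration of the first point, here is a minimal benchmark sketch (not part of the original answer; the data, sizes, and repeat count are arbitrary assumptions). It times get_kinetic_energy with the POSIX monotonic clock and prints the accumulated result so the calls cannot be optimized away. Rebuild it with different compiler options (for example -O2 versus -O3 -mavx512f -mavx512cd on an AVX-512 capable machine) and compare the timings. For profiling, building with -pg and running gprof, or using perf record / perf report on Linux, are common options.

/* benchmark sketch: times get_kinetic_energy over synthetic data
   compile e.g.:  gcc -O2 bench.c -o bench     (file name is arbitrary) */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static const double mass = 1.0;  /* assumed constant, as in the question */

static double get_kinetic_energy(const double v[], int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += v[i*3+0]*v[i*3+0] + v[i*3+1]*v[i*3+1] + v[i*3+2]*v[i*3+2];
    return 0.5 * mass * sum;
}

int main(void)
{
    const int n = 1000000;       /* number of particles */
    const int repeats = 100;     /* repeat to get measurable, consistent timings */
    double *v = malloc(3 * (size_t)n * sizeof *v);
    if (!v) return 1;

    for (int i = 0; i < 3 * n; i++)   /* deterministic pseudo-data */
        v[i] = (double)(i % 7) * 0.25;

    double result = 0.0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < repeats; r++)
        result += get_kinetic_energy(v, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("result = %g, %.3f ms per call\n", result, 1000.0 * seconds / repeats);

    free(v);
    return 0;
}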

Optimisation is a difficult art. In my experience, simplifying the code gets better results and far fewer bugs than adding subtle tricks that try to improve performance at the cost of readability and correctness.

Looking at the code, an obvious simplification, iterating over all 3 * n components as a flat array, seems to generate the same results and might make the optimizer’s job easier (but again, let the wall clock be the judge):

double get_kinetic_energy(const double v[], int n, double mass)
{
    double sum = 0.0;

    for (int i = 0; i < 3 * n; i++)
        sum += v[i] * v[i];

    return 0.5 * mass * sum;
}
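For reassurance that the simplified loop matches the original per-particle formulation, a quick sanity check along these lines can be used (again a sketch, not part of the original answer; the data and tolerance handling are assumptions, and the two loops add the squares in a slightly different order, so the results are compared rather than expected to be bit-identical):

/* sanity check sketch: compare the per-particle version with the flat loop */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static const double mass = 1.0;  /* assumed constant for the comparison */

static double ke_original(const double v[], int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        const double *w = v + i * 3;
        sum += w[0]*w[0] + w[1]*w[1] + w[2]*w[2];
    }
    return 0.5 * mass * sum;
}

static double ke_simplified(const double v[], int n)
{
    double sum = 0.0;
    for (int i = 0; i < 3 * n; i++)
        sum += v[i] * v[i];
    return 0.5 * mass * sum;
}

int main(void)
{
    const int n = 10000;
    double *v = malloc(3 * (size_t)n * sizeof *v);
    if (!v) return 1;

    srand(12345);
    for (int i = 0; i < 3 * n; i++)
        v[i] = (double)rand() / RAND_MAX - 0.5;

    double a = ke_original(v, n);
    double b = ke_simplified(v, n);
    printf("original = %.15g, simplified = %.15g, rel. diff = %g\n",
           a, b, fabs(a - b) / fabs(a));

    free(v);
    return 0;
}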

Answered By – chqrlie

