Sep 19, 2016
 

Acta Cryst. (2016). B72, 661-683 (Feature Article) [ doi:10.1107/S2052520616012890 ]

Direct determination of the Flack parameter as part of the structure refinement procedure usually gives different, though similar, values to post-refinement methods. The source of this discrepancy has been probed by analysing a range of data sets taken from the recent literature. Most significantly, it was observed that the directly refined Flack (x) parameter and its standard uncertainty are usually not much influenced by changes in the refinement weighting schemes, but if they are, then there are probably problems with the data or the model. Post-refinement analyses give Flack parameters strongly influenced by the choice of weights. Weights derived from those used in the main least squares lead to post-refinement estimates of the Flack parameter and its standard uncertainty very similar to those obtained by direct refinement. Weights derived from the variances of the observed structure amplitudes are more appropriate and often yield post-refinement Flack parameters similar to those from direct refinement, but always with lower standard uncertainties. Substantial disagreement between direct and post-refinement determinations is strongly indicative of problems with the data, which may be difficult to identify. Examples drawn from 28 structure determinations are provided, showing a range of different underlying problems. It seems likely that post-refinement methods taking into account the slope of the normal probability plot are currently the most robust estimators of absolute structure and should be reported along with the directly refined values.

Publisher’s copy

May 15, 2013
 

Crystallographic structure refinement can involve hundreds of millions of calculations for a single iteration; careful optimisation plays an important role in determining how efficiently the software makes use of the available CPU power. The following freely available tools help identify bottlenecks in software implementations, and allow testing of potentially faster algorithms and compiler options. On recent CPUs, a carefully optimised algorithm can easily be ten times faster than a naïve implementation. Not only does this save time, but it also enables the use of larger data sets and more complicated models to tackle ever more complicated problems. A time-critical portion of code from the crystallographic refinement package CRYSTALS is analysed here.

All the software tools discussed are free and open source, running on the Linux operating system. Some of them are not available on Windows.

Profiling

The first optimisation step is profiling the execution of the existing code and algorithms. This will reveal exactly how much time is spent in functions, lines of code, and even assembly instructions. Two approaches are common:

  1. Emulating a CPU in software and then counting every instruction executed. This is the method used in valgrind. It can also look for memory leaks. Because the CPU is emulated, it can take up to 200 times longer than normal code execution.
  2. Exploiting hardware counters directly inside the CPU. These counters can be sampled at fixed intervals and then recorded. While there is almost no performance penalty compared with normal execution, the sampling can be slightly inaccurate. The software perf from the Linux kernel can exploit them.

KCacheGrind: the coloured regions correspond to different functions in the software, and the area of each corresponds to the time spent in that function during execution.

This example uses the least-squares refinement routine (\sfls) in the crystallographic analysis package CRYSTALS. A decent-sized data set (http://dx.doi.org/10.1107/S1600536813007757) from the journal Acta Crystallographica Section E was used. The command-line version of CRYSTALS (COMPCODE=LIN) was compiled on Linux using the open-source compiler gfortran. The executable was then profiled with valgrind and the profiling data were analysed with kcachegrind. The output includes a graphical map (shown above) in which coloured regions correspond to different functions in the software and the area of each region corresponds to the time spent in that function during execution.

 

The same data have been analysed using the software perf, with the following result:

89.94% crystals crystals      [.] adlhsblock_
 3.71% crystals crystals      [.] xchols_
 2.90% crystals crystals      [.] xsflsx_
 0.89% crystals libm-2.17.so  [.] __expf_finite
 0.44% crystals crystals      [.] xzerof_

In both cases, the profile analysis reveals that about 90% of the time is spent in the adlhsblock function, which is just 21 lines long including declarations. The body of the function is shown below (accumula.F, revision 1.8).

I = 1
do ROW=1, BLOCKdimension  ! Loop over all the rows of the block
    CONST = DERIVS(ROW)   ! Get the constant term
    do COLUMN = ROW, BLOCKdimension
        MATBLOCK(I) = MATBLOCK(I) + CONST*DERIVS(COLUMN) ! Accumulate the next term.
        I = I + 1         ! Move to the next position in the matrix
    end do
end do

Instruction level analysis

The adlhsblock function forms the normal matrix from the design matrix Z; mathematically it performs the matrix multiplication ZᵀZ. To save memory, the design matrix is not stored completely and the calculation is done reflection by reflection, multiplying and accumulating the outer product of one row of Z. Furthermore, only the upper triangle of the normal matrix is stored, which reduces the number of operations but makes for convoluted row/column indexing of the elements.
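The accumulation scheme described above can be sketched in C (a minimal illustration, not the CRYSTALS Fortran itself; the names accumulate_row, matblock and derivs are invented for this example):

```c
#include <stddef.h>

/* Accumulate the outer product of one row of the design matrix Z
 * into the packed upper triangle of the normal matrix.
 * matblock holds n*(n+1)/2 elements, with the rows of the triangle
 * stored one after another: (1,1)..(1,n), (2,2)..(2,n), ... */
void accumulate_row(double *matblock, const double *derivs, size_t n)
{
    size_t i = 0;                   /* running index into packed storage */
    for (size_t row = 0; row < n; ++row) {
        double c = derivs[row];     /* constant factor for this row */
        for (size_t col = row; col < n; ++col)
            matblock[i++] += c * derivs[col];   /* upper triangle only */
    }
}
```

Calling this once per reflection accumulates ZᵀZ without ever holding the full design matrix in memory, at the cost of the awkward running index into the packed triangle.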

Further investigation of the code profiling within the function indicates that the bottleneck is on the line:

MATBLOCK(I) = MATBLOCK(I) + CONST*DERIVS(COLUMN)

The assembly instructions also revealed the use of scalar instructions.

 0.09 │70:  vmovss (%rsi),%xmm0
21.21 │     add $0x4,%rsi
      │         MATBLOCK(I) = MATBLOCK(I) + CONST*DERIVS(COLUMN)
 0.05 │78:  vmulss %xmm0,%xmm1,%xmm0
12.24 │     movslq %ecx,%rcx 
 0.12 │     lea -0x4(%rdx,%rcx,4),%rcx
20.42 │     vaddss (%rcx),%xmm0,%xmm0
13.35 │     vmovss %xmm0,(%rcx)
      │         I = I + 1
20.91 │     mov %eax,%ecx
 0.06 │     add $0x1,%eax

Modern CPUs include two kinds of processing units: scalar units (which process one input at a time) and vector units (which can process multiple inputs at the same time with the same operation). The latter instructions are called SIMD. The performance of SIMD can be outstanding compared with scalar instructions: on a Sandy Bridge Intel processor the vector instructions can operate on up to eight single-precision numbers at the same time. Compilers can automatically use these instructions based on patterns in the source code (see http://gcc.gnu.org/projects/tree-ssa/vectorization.html). However, it is advisable to always check whether a loop has been vectorised as expected, as some restrictions may apply (see http://blog.debroglie.net/2013/04/14/autovectorization-not-vectorized-not-suitable-for-gather/). The use of scalar instructions is symptomatic of suboptimal optimisation.
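As a concrete illustration, a loop of the following shape, with no dependence between iterations, is the kind of pattern an autovectoriser can turn into SIMD instructions (a hypothetical C sketch; the name axpy is not part of CRYSTALS):

```c
/* With no dependence between iterations and restrict-qualified
 * pointers ruling out aliasing, a compiler (e.g. gcc with
 * -O2 -ftree-vectorize) can apply the same multiply-add to several
 * elements per instruction. */
void axpy(float *restrict y, const float *restrict x, float a, int n)
{
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];   /* identical independent operation per element */
}
```

By contrast, a loop whose iterations feed into each other (such as the running index I in the Fortran above) gives the vectoriser nothing to work with.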

Optimisation and analysis

The standard optimisation level in the CRYSTALS Linux makefile is ‘-O2’, which does not include autovectorisation (autovectorisation is enabled at the ‘-O3’ level). CRYSTALS was therefore compiled with autovectorisation enabled (-ftree-vectorize -msse2), but this did not give any speed-up. Adding the flag -ftree-vectorizer-verbose and checking the output during compilation confirmed that no loop had been vectorised. To improve the situation, the inner loop was replaced with an array operation and the serial dependency on the index I was removed.

do ROW = 1, BLOCKdimension
   I = ((ROW-1)*(2*BLOCKdimension-ROW+2))/2 + 1
   J = I + BLOCKdimension - ROW
   MATBLOCK(I:J) = MATBLOCK(I:J) + DERIVS(ROW)*DERIVS(ROW:BLOCKdimension)
end do
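The closed-form index can be derived by noting that row r of the packed upper triangle holds n-r+1 elements, so the 1-based start of row r is 1 plus the sum of the lengths of the previous rows, which simplifies to ((r-1)(2n-r+2))/2 + 1. A small C sketch of this formula (the name packed_row_start is invented here):

```c
/* 1-based start index of row 'row' in the packed upper triangle of an
 * n x n symmetric matrix, rows of the triangle stored end to end.
 * Row r contributes n-r+1 elements, so
 * start(r) = 1 + sum over k=1..r-1 of (n-k+1) = ((r-1)*(2n-r+2))/2 + 1. */
int packed_row_start(int row, int n)
{
    return ((row - 1) * (2 * n - row + 2)) / 2 + 1;
}
```

This matches the running counter I in the original loop, which is what lets the rewritten version compute each row's slice independently.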

The new version was compared with the original at different levels of optimisation:

Compilation flags              Original code (wall-clock time, s)  New code (wall-clock time, s)
-O2                            16                                  12
-O2 -ftree-vectorize -msse2    16                                  6.7
-O2 -ftree-vectorize -mavx     16                                  5.0

The improvement without vectorisation (16 s to 12 s) is perhaps surprising: because each iteration of the loop is now independent, the scheduler can exploit the greater flexibility to reorder instructions for greater efficiency. With the SSE2 or AVX instructions enabled, the new version is much faster still. The doubled width of the AVX vectors compared with SSE is also clearly visible.

The new code was profiled using perf and compared with the original. The bottleneck remains in the adlhsblock function, but the assembly output confirms the use of the vectorised AVX instructions (vmulps and vaddps, for example).

85.75% crystals crystals     [.] adlhsblock_
 5.40% crystals crystals     [.] xchols_
 4.40% crystals crystals     [.] xsflsx_
 1.30% crystals libm-2.17.so [.] __expf_finite
 0.61% crystals crystals     [.] xzerof_
      |         MATBLOCK(i:j) = MATBLOCK(i:j)+DERIVS(ROW)*derivs(row:BLOCKdimension)
 2.35 |15a: vmovup (%r11,%rcx,1),%xmm1
 8.42 |     add $0x1,%r8
 4.73 |     vinser $0x1,0x10(%r11,%rcx,1),%ymm1,%ymm1
 7.91 |     vmulps %ymm2,%ymm1,%ymm1
 8.82 |     vaddps (%r14,%rcx,1),%ymm1,%ymm1
41.82 |     vmovap %ymm1,(%r14,%rcx,1)
12.10 |     add $0x20,%rcx 
 2.62 |     cmp %r8,%r13
      |   ? ja 15a

 

Using code profiling to identify a bottleneck, followed by optimisation of the algorithm and an appropriate choice of compiler switches, results in least-squares refinement that is up to three times faster.

Sep 21, 2012
 

J. Appl. Cryst. (2012). 45, 1057–1060. [ doi:10.1107/S0021889812035790 ]

The traditional Waser distance restraint, the rigid-bond restraint and atomic displacement parameter (ADP) similarity restraints have an equal influence on both atoms involved in the restraint. This may be inappropriate in cases where it can reasonably be expected that the precision of the determination of the positional parameters and ADPs is not equal, e.g. towards the extremities of a librating structure or where one atom is a significantly stronger scatterer than the other. In these cases, the traditional restraint feeds information from the poorly defined atom to the better defined atom, with the possibility that its characteristics become degraded. The modified restraint described here feeds information from the better defined atom to the more poorly defined atom with minimal feedback.

Electronic reprints

Publisher’s copy

 

Apr 30, 2012
 

J. Appl. Cryst. (2012). 45, 417-429. [ doi:10.1107/S0021889812015191 ]

Leverages measure the influence that observations (intensity data and restraints) have on the fit obtained in crystal structure refinement. Further analysis enables the influence that observations have on specific parameters to be measured. The results of leverage analyses are discussed in the context of the amino acid alanine and an incomplete high-pressure data set of the complex bis(salicylaldoximato)copper(II). Leverage analysis can reveal situations where weak data are influential and allows an assessment of the influence of restraints. Analysis of the high-pressure refinement of the copper complex shows that the influence of the highest-leverage intensity observations increases when completeness is reduced, but low leverages stay low. The influence of restraints, notably those applying the Hirshfeld rigid-bond criterion, also increases dramatically. In alanine the precision of the Flack parameter is determined by medium-resolution data with moderate intensities. The results of a leverage analysis can be incorporated into a weighting scheme designed to optimize the precision of a selected parameter. This was applied to absolute structure refinement of light-atom crystal structures. The standard uncertainty of the Flack parameter could be reduced to around 0.1 even for a hydrocarbon.

Electronic reprints

Publisher’s copy

Sep 13, 2011
 

J. Appl. Cryst. (2011). 44, 1017-1022. [ doi:10.1107/S0021889811034066 ]

A summary of the features for investigating absolute structure available in the crystallographic refinement program CRYSTALS is presented, together with the results of analyses of 150 light-atom structures collected with molybdenum radiation carried out with these tools. The results confirm that the Flack and Hooft parameters are strongly indicative, even when the standard uncertainties are large compared to the thresholds recommended by Flack & Bernardinelli [J. Appl. Cryst. (2000), 33, 1143–1148].

Electronic reprints

  • Oxford University Research Archive [direct pdf]

Publisher’s copy

Dec 07, 2010
 

Acta Cryst. (2011). A67, 21-34. [ doi:10.1107/S010876731004287X ]

The practical use of the average and difference intensities of Friedel opposites at different stages of structure analysis has been investigated. It is shown how these values may be properly and practically used at the stage of space-group determination. At the stage of least-squares refinement, it is shown that increasing the weight of the difference intensities does not improve their fit to the model. The correct form of the coefficients for a difference electron-density calculation is given. In the process of structure validation, it is further shown that plots of the observed and model difference intensities provide an objective method to evaluate the fit of the data to the model and to reveal insufficiencies in the intensity measurements. As a further tool for the validation of structure determinations, the use of the Patterson functions of the average and difference intensities has been investigated and their clear advantage demonstrated.

Electronic reprints

Publisher’s copy

Nov 27, 2010
 

J. Appl. Cryst. (2011). 44, 52-59. [ doi:10.1107/S0021889810042470 ]

One of the requirements for the next generation of small-molecule crystallographers is a mathematical programming infrastructure. It should provide a modelling design process, where the model formulation is kept separate from the optimization process to provide gains in reliability, scalability and extensibility, enabling the application of optimization components in general, and refinement-based applications in particular, as applied to crystallographic problems. A research project has been undertaken to design and implement an innovative toolkit library – a small-molecule toolkit (SMTK) – for crystallographic modelling and refinement. This paper provides an overview of SMTK and its object-oriented implementation. As a practical illustration, it also shows the context of use for a set of classes and discusses how the toolkit enables the user rapidly to develop, maintain and explore the full capabilities of crystallography and so create new applications. SMTK reduces the degree of effort required to construct and develop new algorithms and provides users with an easy and efficient means to test ideas, as well as to build large and maintainable models which can readily be adapted to any new situation.

Publisher’s copy:

Jun 29, 2010
 

J. Appl. Cryst. (2010). 43, 1100-1107. [ doi:10.1107/S0021889810025598 ]

Because they scatter X-rays weakly, H atoms are often abused or neglected during structure refinement. The reasons why the H atoms should be included in the refinement and some of the consequences of mistreatment are discussed along with selected real examples demonstrating some of the features for hydrogen treatment that can be found in the software suite CRYSTALS.

Hydrogen addition in CRYSTALS


Electronic reprints:

Publisher’s copy:

Apr 15, 2010
 
Susan Huth presents Kirsten Christensen with the Durward Cruickshank Award


The final dinner of the British Crystallographic Association Spring Meeting in Warwick was interrupted, as always, by the presentation of prizes. Amber Thompson was awarded the International Union of Crystallography Prize (a copy of International Tables) for her explanation of the advantages of choosing non-standard space groups. Kirsten Christensen was awarded the Durward Cruickshank prize, given to a young crystallographer who has made an outstanding contribution to crystallography.

 

 

Other contributions include:

N. David Brown, James Haestier, Mustapha Sadki, Amber L. Thompson & David J. Watkin
matchbOx:  Automatic Structure Matching to Facilitate Crystallographic Refinement (YC Presentation)

Kirsten E. Christensen, Christopher J. Serpell, Nicholas E. Evans & Paul D. Beer
Pushing the Boundaries of Small Molecule Crystallography:  The Challenging Structure of a Macrocyclic Anion Sensor (Poster)

Richard I. Cooper, Amber L. Thompson & David J. Watkin
The Hydrogen Challenge:  Where are we Now? (Poster)

Christopher J. Serpell & Paul D. Beer
Refinement of Large Supramolecular Structures (Presentation)

David J. Watkin
Dealing with Difficult Data (Session Chair)