iotbx.merging_statistics
/net/chevy/raid1/nat/src/cctbx_project/iotbx/merging_statistics.py

Routines for calculating common metrics of data quality based on merging of
redundant observations.

 
Modules
       
cStringIO
iotbx.data_plots
sys

 
Classes
       
__builtin__.object
    dataset_statistics
    filter_intensities_by_sigma
    merging_stats
    model_based_arrays
libtbx.slots_getstate_setstate(__builtin__.object)
    estimate_crude_resolution_cutoffs

 
class dataset_statistics(__builtin__.object)
    Container for overall and by-shell merging statistics, plus a table_data
object suitable for displaying graphs (or outputting loggraph format).
 
  Methods defined here:
__init__(self, i_obs, crystal_symmetry=None, d_min=None, d_max=None, anomalous=False, n_bins=10, debug=False, file_name=None, model_arrays=None, sigma_filtering=<libtbx.AutoType object>, d_min_tolerance=1e-06, estimate_cutoffs=False, log=None)
as_cif_block(self, cif_block=None)
as_remark_200(self, wavelength=None)
extract_outer_shell_stats(self)
For compatibility with iotbx.logfiles (which should probably now be
deprecated) and phenix.table_one
get_estimated_cutoffs(self)
show(self, out=None, header=True)
show_cc_star(self, out=None)
show_estimated_cutoffs(self, out=<open file '<stdout>', mode 'w'>, prefix='')
show_loggraph(self, out=None)
show_model_vs_data(self, out=None, prefix='')

Data descriptors defined here:
__dict__
dictionary for instance variables (if defined)
__weakref__
list of weak references to the object (if defined)
quality_table
signal_table
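
Example (a minimal sketch based on the signatures above; the file name is a
placeholder, and select_data is assumed to return an unmerged intensity
array suitable for i_obs):

    from iotbx import merging_statistics

    # Placeholder file name; data_labels=None lets select_data pick the
    # intensity array itself.
    i_obs = merging_statistics.select_data(
        file_name="scaled_unmerged.mtz",
        data_labels=None)

    stats = merging_statistics.dataset_statistics(
        i_obs=i_obs,
        n_bins=10,        # resolution shells (the documented default)
        anomalous=False)  # treat I(+)/I(-) as observations of one reflection
    stats.show()          # overall and per-shell statistics (stdout if out=None)
    stats.show_loggraph() # the same table_data in loggraph format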

 
class estimate_crude_resolution_cutoffs(libtbx.slots_getstate_setstate)
    Uses incredibly simplistic criteria to determine the approximate
resolution limit of the data based on the merging statistics (using much
smaller bins than normal).  Not really appropriate for routine use, but
useful for the pedantic and just-curious.
 
 
Method resolution order:
    estimate_crude_resolution_cutoffs
    libtbx.slots_getstate_setstate
    __builtin__.object

Methods defined here:
__init__(self, i_obs, crystal_symmetry=None, n_bins=100, i_over_sigma_min=2.0, r_merge_max=0.5, r_meas_max=0.5, completeness_min_conservative=0.9, completeness_min_permissive=0.5, cc_one_half_min=0.5)
show(self, out=<open file '<stdout>', mode 'w'>, prefix='')

Data descriptors defined here:
cc_one_half_cut
cc_one_half_min
completeness_cut_conservative
completeness_cut_permissive
completeness_min_conservative
completeness_min_permissive
d_min_overall
i_over_sigma_cut
i_over_sigma_min
n_bins
r_meas_cut
r_meas_max
r_merge_cut
r_merge_max

Data and other attributes defined here:
cutoffs_attr = ['i_over_sigma_cut', 'r_merge_cut', 'r_meas_cut', 'completeness_cut_conservative', 'completeness_cut_permissive', 'cc_one_half_cut']

Methods inherited from libtbx.slots_getstate_setstate:
__getstate__(self)
__setstate__(self, state)
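
Example (a hedged sketch of direct use; i_obs is an unmerged intensity
array as above, the keyword values shown are the documented defaults, and
the meaning ascribed to d_min_overall is an assumption):

    est = merging_statistics.estimate_crude_resolution_cutoffs(
        i_obs=i_obs,
        n_bins=100,            # much finer binning than dataset_statistics
        i_over_sigma_min=2.0,
        cc_one_half_min=0.5)
    est.show()                 # cutoff suggested by each criterion
    print(est.d_min_overall)   # assumed: the combined overall d_min estimate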

 
class filter_intensities_by_sigma(__builtin__.object)
    Wrapper for filtering intensities based on one of several different
conventions:
 
  - in XDS, reflections where I < -3*sigmaI after merging are deleted from
    both the merged and unmerged arrays
  - in Scalepack, the filtering is done before merging
  - SCALA and AIMLESS do not do any filtering
 
Note that ctruncate and cctbx.french_wilson (possibly among others) do their
own filtering, e.g. discarding I < -4*sigma in cctbx.french_wilson.
 
  Methods defined here:
__init__(self, array, sigma_filtering=<libtbx.AutoType object>)

Data descriptors defined here:
__dict__
dictionary for instance variables (if defined)
__weakref__
list of weak references to the object (if defined)
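
Example (a short sketch; the convention names come from
sigma_filtering_phil_str below, and dataset_statistics presumably applies
this wrapper internally, so direct construction is rarely needed):

    # Apply the XDS convention explicitly instead of the Auto default:
    # reflections with I < -3*sigmaI after merging are deleted.
    filtered = merging_statistics.filter_intensities_by_sigma(
        array=i_obs,            # unmerged intensity array
        sigma_filtering="xds")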

 
class merging_stats(__builtin__.object)
    Calculate standard merging statistics for (scaled) unmerged data.  Usually
these statistics will consider I(+) and I(-) as observations of the same
reflection, but these can be kept separate instead if desired.
 
Reflections with negative sigmas will be discarded, as will excessively
negative intensities, depending on the program being mimicked.
 
  Methods defined here:
__init__(self, array, model_arrays=None, anomalous=False, debug=None, sigma_filtering='scala')
format(self)
format_for_cc_star_gui(self)
format_for_gui(self)
format_for_model_cc(self)
show_summary(self, out=<open file '<stdout>', mode 'w'>, prefix='')
table_data(self)

Data descriptors defined here:
__dict__
dictionary for instance variables (if defined)
__weakref__
list of weak references to the object (if defined)
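
Example (merging_stats is normally constructed per resolution shell by
dataset_statistics; a hand-rolled sketch for a single shell, with
placeholder d-spacing limits):

    # One shell of unmerged data (resolution_filter is a standard
    # miller array method).
    shell = i_obs.resolution_filter(d_min=2.0, d_max=2.2)
    ms = merging_statistics.merging_stats(shell, sigma_filtering="scala")
    ms.show_summary()  # per-shell summary (stdout by default)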

 
class model_based_arrays(__builtin__.object)
    Container for observed and calculated intensities, along with the selections
for work and free sets; these should be provided by mmtbx.f_model.  It is
assumed (or hoped) that the resolution range of these arrays will be
the same as that of the unmerged data, but the current implementation does
not enforce this.
 
  Methods defined here:
__init__(self, f_obs, i_obs, i_calc, work_sel, free_sel)
cc_work_and_free(self, other)
Given a unique array of arbitrary resolution range, extract the equivalent
reflections from the observed and calculated intensities, and calculate
CC and R-factor for work and free sets.  Currently, these statistics will
be None if there are no matching reflections.

Data descriptors defined here:
__dict__
dictionary for instance variables (if defined)
__weakref__
list of weak references to the object (if defined)
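
Example (a sketch only: fmodel is assumed to be an mmtbx.f_model.manager,
and the accessors used to pull the five required arrays are illustrative
assumptions):

    f_obs = fmodel.f_obs()                  # observed amplitudes
    i_obs_m = f_obs.f_as_f_sq()             # amplitudes -> intensities
    i_calc = fmodel.f_model().amplitudes().f_as_f_sq()
    free_sel = fmodel.r_free_flags().data()
    work_sel = ~free_sel                    # complement of the free set
    model = merging_statistics.model_based_arrays(
        f_obs=f_obs, i_obs=i_obs_m, i_calc=i_calc,
        work_sel=work_sel, free_sel=free_sel)
    # Passing this to dataset_statistics enables show_model_vs_data():
    stats = merging_statistics.dataset_statistics(
        i_obs=i_obs, model_arrays=model)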

 
Functions
       
get_filtering_convention(i_obs, sigma_filtering=<libtbx.AutoType object>)
select_data(file_name, data_labels, log=None, assume_shelx_observation_type_is=None)
sqrt(...)
sqrt(x)
 
Return the square root of x.
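
Example (a brief sketch of the two module-level helpers; the file name is a
placeholder, and the return value of get_filtering_convention is assumed to
be the resolved convention name):

    i_obs = merging_statistics.select_data(
        file_name="unmerged.sca",     # placeholder path
        data_labels=None)             # or an explicit label string
    convention = merging_statistics.get_filtering_convention(i_obs)
    # convention is assumed to be one of "xds", "scala", "scalepack"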

 
Data
        Auto = <libtbx.AutoType object>
citations_str = ' Diederichs K & Karplus PA (1997) Nature Struct...plus PA & Diederichs K (2012) Science 336:1030-3.'
division = _Feature((2, 2, 0, 'alpha', 2), (3, 0, 0, 'alpha', 0), 8192)
merging_params_str = '\nhigh_resolution = None\n .type = float\n .input... with negative SigmaI will always be discarded.\n\n'
sigma_filtering_phil_str = '\nsigma_filtering = *auto xds scala scalepack\n ....s with negative SigmaI will always be discarded.\n'