Reader Functions

The following are functions which read various components of FCS files.

Dataset parsing

The majority of the functions in this section are intended to read TEXT, possibly with accompanying data, possibly from multiple datasets in an FCS file.

These are summarized below:

Function                    Parse Mode   Includes Data   Dataset Number
fcs_read_flat_text()        flat         no              singular
fcs_read_std_text()         standard     no              singular
fcs_read_flat_dataset()     flat         yes             singular
fcs_read_std_dataset()      standard     yes             singular
fcs_read_flat_texts()       flat         no              plural
fcs_read_std_texts()        standard     no              plural
fcs_read_flat_datasets()    flat         yes             plural
fcs_read_std_datasets()     standard     yes             plural

Each column denotes a category describing how each function behaves and what it is intended for:

Parse Mode:

This refers to the method used to parse TEXT. “Flat” mode treats TEXT as a flat list of keywords and does no further processing. “Standard” mode attempts to collect this flat list into a well-defined data structure, which in pyreflow is a version-specific Python class (see CoreTEXT* and CoreDataset*).

“Standard” mode requires that TEXT first be parsed in “flat” mode, which means “flat” mode is the more lenient of the two with regard to deviations from the FCS standard.

Includes Data:

If “yes”, the function will include DATA, ANALYSIS, and OTHER segments in the returned object. Otherwise it will just include the TEXT segment.

Dataset Number:

This refers to the number of datasets in an FCS file that can be parsed by the function. If a function is “singular”, it can only parse the first dataset. Otherwise it can parse multiple datasets from a file, and returns these in a list rather than a single object.

The vast majority of FCS files have only one dataset, so the singular functions are simpler to use in most cases since they do not require any flags to be set to read one dataset.

Singular functions optionally take a dataset_offset argument which can be used to “jump” to any dataset in a file (assuming one knows its offset).

Plural functions take skip and limit arguments. The former will skip the first n datasets when returning the final list (although the TEXT for all datasets will still be read to obtain $NEXTDATA). limit will stop the parser after n datasets have been parsed. Both default to None, which tells the parser to exhaustively read all datasets.
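For illustration, here is a minimal sketch of the singular/plural distinction. The file name is a placeholder; the plural function returns a list of per-dataset results as described above, and no further attribute access is shown since the result objects are documented elsewhere.

    from pathlib import Path
    from pyreflow.api import fcs_read_std_text, fcs_read_std_texts

    path = Path("example.fcs")  # placeholder path

    # Singular: read standardized TEXT from the first dataset only.
    core, std_out = fcs_read_std_text(path)

    # Plural: read standardized TEXT from every dataset; skip/limit default to
    # None, which reads all datasets exhaustively.
    all_texts = fcs_read_std_texts(path)

    # Skip the first dataset and stop after the next two have been parsed.
    some_texts = fcs_read_std_texts(path, skip=1, limit=2)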

HEADER parsing

fcs_read_header() merely reads the first HEADER in an FCS file.

There is no plural (multi-dataset) version of this function since reading multiple datasets requires TEXT to be parsed to obtain $NEXTDATA.

This function also takes a dataset_offset argument, so one can theoretically read any HEADER in the file if one knows its offset.
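A short sketch (the file name and the offset are placeholders):

    from pathlib import Path
    from pyreflow.api import fcs_read_header

    path = Path("example.fcs")  # placeholder path

    # Read the HEADER of the first dataset.
    header = fcs_read_header(path)

    # If the starting position of a later dataset is already known (for example
    # from a previously-read $NEXTDATA value), read its HEADER directly.
    later_header = fcs_read_header(path, dataset_offset=58000)  # placeholder offset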

Offline keyword repair

fcs_read_flat_dataset_with_keywords() can be used to parse a flat list of keyword pairs into a dataset.

Sometimes, the flags provided by fcs_read_flat_dataset() are not enough to repair any issues in TEXT that might make a file unreadable.

In these cases, one can read TEXT in flat mode using fcs_read_flat_text(), repair the keywords and/or offsets out-of-band, and then feed these into fcs_read_flat_dataset_with_keywords().

This only applies to flat mode. For the standardized analogue, see the from_kws methods in CoreTEXT* and CoreDataset*.
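The sketch below illustrates the intended workflow. The exact shape of FlatTEXTOutput and the signature of fcs_read_flat_dataset_with_keywords are not documented in this section, so the keywords attribute and keyword argument used here are assumptions; consult their respective references for the actual names.

    from pathlib import Path
    from pyreflow.api import fcs_read_flat_text, fcs_read_flat_dataset_with_keywords

    path = Path("broken.fcs")  # placeholder path

    # 1. Read TEXT in flat mode, as leniently as needed to get the keywords out.
    flat = fcs_read_flat_text(path, allow_nonunique=True, allow_missing_final_delim=True)

    # 2. Repair the keywords out-of-band ('keywords' is an assumed attribute name).
    kws = dict(flat.keywords)
    kws["$TOT"] = "10000"  # hypothetical repair of a bad $TOT value

    # 3. Parse the dataset from the repaired keywords ('keywords' is an assumed
    #    argument name for fcs_read_flat_dataset_with_keywords).
    dataset = fcs_read_flat_dataset_with_keywords(path, keywords=kws)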

All functions

pyreflow.api.fcs_read_flat_text(path, text_correction=(0, 0), data_correction=(0, 0), analysis_correction=(0, 0), other_corrections=[], max_other=None, other_width=8, squish_offsets=False, allow_negative=False, truncate_offsets=False, version_override=None, supp_text_correction=(0, 0), allow_overlapping_supp_text=False, ignore_supp_text=False, use_literal_delims=False, allow_non_ascii_delim=False, allow_missing_final_delim=False, allow_nonunique=False, allow_odd=False, allow_empty=False, allow_delim_at_boundary=False, allow_non_utf8=False, use_latin1=False, allow_non_ascii_keywords=False, allow_missing_supp_text=False, allow_supp_text_own_delim=False, allow_missing_nextdata=False, trim_value_whitespace=False, trim_trailing_whitespace=False, ignore_standard_keys=([], []), promote_to_standard=([], []), demote_from_standard=([], []), rename_standard_keys={}, replace_standard_key_values={}, append_standard_keywords={}, substitute_standard_key_values=({}, {}), warnings_are_errors=False, hide_warnings=False, dataset_offset=0)

Read HEADER and TEXT from the first dataset in an FCS file.

Parameters:
  • path (Path) – Path to be read.

  • text_correction (tuple[int, int]) – Corrections for TEXT offsets in HEADER. Defaults to (0, 0).

  • data_correction (tuple[int, int]) – Corrections for DATA offsets in HEADER. Defaults to (0, 0).

  • analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in HEADER. Defaults to (0, 0).

  • other_corrections (list[tuple[int, int]]) – Corrections for OTHER offsets if they exist. Each correction will be applied in order. If an offset does not need to be corrected, use (0, 0). This will not affect the number of OTHER segments that are read; this is controlled by max_other. Defaults to [].

  • max_other (int | None) – Maximum number of OTHER segments that can be parsed. None means limitless. Defaults to None.

  • other_width (int) – Width (in bytes) to use when parsing OTHER offsets. Defaults to 8.

  • squish_offsets (bool) – If True and a segment’s ending offset is zero, treat entire offset as empty. This might happen if the ending offset is longer than 8 digits, in which case it must be written in TEXT. If this happens, the standards mandate that both offsets be written to TEXT and that the HEADER offsets be set to 0,0, so only writing one is an error unless this flag is set. This should only happen in FCS 3.0 files and above. Defaults to False.

  • allow_negative (bool) – If true, allow negative values in a HEADER offset. If negative offsets are found, they will be replaced with 0. Some files will denote an “empty” offset as 0,-1, which is logically correct since the last offset points to the last byte, thus 0,0 is actually 1 byte long. Unfortunately this is not what the standards say, so specifying 0,-1 is an error unless this flag is set. Defaults to False.

  • truncate_offsets (bool) – If true, truncate offsets that exceed the end of the file. In some cases the DATA offset (usually) might exceed the end of the file by 1, which is usually a mistake and should be corrected with data_correction (or analogous for the offending offset). If this is not the case, the file is likely corrupted. This flag will allow such files to be read conveniently if desired. Defaults to False.

  • version_override (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”] | None) – Override the FCS version as seen in HEADER. Defaults to None.

  • supp_text_correction (tuple[int, int]) – Corrections for Supplemental TEXT offsets in TEXT. Defaults to (0, 0).

  • allow_overlapping_supp_text (bool) – If True allow supplemental TEXT offsets to overlap the primary TEXT offsets from HEADER or HEADER itself and raise a warning if such an overlap is found. Otherwise raise a FileLayoutError. The offsets will not be used if an overlap is found in either case. Defaults to False.

  • ignore_supp_text (bool) – If True, ignore supplemental TEXT entirely. Defaults to False.

  • use_literal_delims (bool) – If True, treat every delimiter as literal (turn off escaping). Without escaping, delimiters cannot be included in keys or values, but empty values become possible. Use this option for files where unescaped delimiters result in the ‘correct’ interpretation of TEXT. Defaults to False.

  • allow_non_ascii_delim (bool) – If True allow non-ASCII delimiters (outside 1-126). Defaults to False.

  • allow_missing_final_delim (bool) – If True allow TEXT to not end with a delimiter. Defaults to False.

  • allow_nonunique (bool) – If True allow non-unique keys in TEXT. In such cases, only the first key will be used regardless of this setting. Defaults to False.

  • allow_odd (bool) – If True, allow TEXT to contain an odd number of words. The last ‘dangling’ word will be dropped independent of this flag. Defaults to False.

  • allow_empty (bool) – If True allow keys with blank values. Only relevant if use_literal_delims is also True. Defaults to False.

  • allow_delim_at_boundary (bool) – If True allow delimiters at word boundaries. The FCS standard forbids this because it is impossible to tell if such delimiters belong to the previous or the next word. Consequently, delimiters at boundaries will be dropped regardless of this flag. Setting this to True will turn this into a warning not an error. Only relevant if use_literal_delims is False. Defaults to False.

  • allow_non_utf8 (bool) – If True allow non-UTF8 characters in TEXT. Words with such characters will be dropped regardless; setting this to True will turn these cases into warnings not errors. Defaults to False.

  • use_latin1 (bool) – If True interpret all characters in TEXT as Latin-1 (aka ISO/IEC 8859-1) instead of UTF-8. Defaults to False.

  • allow_non_ascii_keywords (bool) – If True allow non-ASCII keys. This only applies to non-standard keywords, as all standardized keywords must start with $ and may otherwise contain only letters and numbers; all compliant keys must contain only ASCII. If False, encountering such a key is an error; if True, the key will be kept as a non-standard key. Defaults to False.

  • allow_missing_supp_text (bool) – If True allow supplemental TEXT offsets to be missing from primary TEXT. Defaults to False.

  • allow_supp_text_own_delim (bool) – If True allow supplemental TEXT offsets to have a different delimiter compared to primary TEXT. Defaults to False.

  • allow_missing_nextdata (bool) – If True allow $NEXTDATA to be missing. This is a required keyword in all versions. However, most files only have one dataset in which case this keyword is meaningless. Defaults to False.

  • trim_value_whitespace (bool) – If True trim whitespace from all values. If performed, trimming precedes all other repair steps. Any values which are entirely spaces will become blanks, in which case it may also be sensible to enable allow_empty. Defaults to False.

  • trim_trailing_whitespace (bool) – If True trim whitespace off the end of TEXT. This will effectively move the ending offset of TEXT to the first non-whitespace byte immediately preceding the actual ending offset given in HEADER. Defaults to False.

  • ignore_standard_keys (tuple[list[str], list[str]]) – Remove standard keys from TEXT. The leading $ is implied so do not include it. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • promote_to_standard (tuple[list[str], list[str]]) – Promote nonstandard keys to standard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • demote_from_standard (tuple[list[str], list[str]]) – Demote standard keys to nonstandard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • rename_standard_keys (dict[str, str]) – Rename standard keys in TEXT. Keys matching the first part of the pair will be replaced by the second. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • replace_standard_key_values (dict[str, str]) – Replace values for standard keys in TEXT. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • append_standard_keywords (dict[str, str]) – Append standard key/value pairs to TEXT. All keys and values will be included as they appear here. The leading $ is implied so do not include it. Defaults to {}.

  • substitute_standard_key_values (tuple[dict[str, tuple[str, str, bool]], dict[str, tuple[str, str, bool]]]) – Apply sed-like substitution operation on matching standard keys. The leading $ is implied when matching keys. The first dict corresponds to keys which are matched literally, and the second corresponds to keys which are matched via regular expression. The members in the 3-tuple values correspond to a regular expression, replacement string, and global flag respectively. The regular expression may contain capture expressions which must be matched exactly in the replacement string. If the global flag is True, replace all found matches, otherwise only replace the first. Any references in replacement string must be given with surrounding brackets like "${1}" or "${cygnus}". Defaults to ({}, {}).

  • warnings_are_errors (bool) – If True all warnings will be regarded as errors. Defaults to False.

  • hide_warnings (bool) – If True hide all warnings. Defaults to False.

  • dataset_offset (int) – Starting position in the file of the dataset to be read. Defaults to 0.

Return type:

FlatTEXTOutput

Raises:
  • ConfigError – if other_width is less than 1 or greater than 20

  • ConfigError – if field 1 in dict value in field 1 or 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • OverflowError – if field 1 or 2 in analysis_correction, data_correction, supp_text_correction, or text_correction is less than -2**31 or greater than 2**31-1

  • OverflowError – if field 1 or 2 in any in other_corrections is less than -2**31 or greater than 2**31-1

  • ParseKeyError – if any in field 1 in demote_from_standard, ignore_standard_keys, or promote_to_standard contains non-ASCII characters or is empty

  • ConfigError – if any in field 2 in demote_from_standard, ignore_standard_keys, or promote_to_standard is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict key in append_standard_keywords, rename_standard_keys, or replace_standard_key_values contains non-ASCII characters or is empty

  • ParseKeyError – if dict key in field 1 in substitute_standard_key_values contains non-ASCII characters or is empty

  • ConfigError – if dict key in field 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict value in rename_standard_keys contains non-ASCII characters or is empty

  • ConfigError – if references in replacement string in dict value in field 1 or 2 in substitute_standard_key_values do not match captures in regular expression

  • FileLayoutError – If HEADER or TEXT are not parsable

  • ParseKeyError – If any keys from TEXT contain non-ASCII characters and allow_non_ascii_keywords is False
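As a usage sketch of fcs_read_flat_text (the file name and correction values are placeholders):

    from pathlib import Path
    from pyreflow.api import fcs_read_flat_text

    # Read HEADER and flat TEXT from the first dataset with default settings.
    out = fcs_read_flat_text(Path("example.fcs"))

    # A more lenient read for a file whose ending DATA offset in HEADER is off
    # by one and whose keyword values carry padding spaces.
    out = fcs_read_flat_text(
        Path("example.fcs"),
        data_correction=(0, -1),      # placeholder correction
        trim_value_whitespace=True,
        allow_missing_nextdata=True,
    )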

pyreflow.api.fcs_read_std_text(path, text_correction=(0, 0), data_correction=(0, 0), analysis_correction=(0, 0), other_corrections=[], max_other=None, other_width=8, squish_offsets=False, allow_negative=False, truncate_offsets=False, version_override=None, supp_text_correction=(0, 0), allow_overlapping_supp_text=False, ignore_supp_text=False, use_literal_delims=False, allow_non_ascii_delim=False, allow_missing_final_delim=False, allow_nonunique=False, allow_odd=False, allow_empty=False, allow_delim_at_boundary=False, allow_non_utf8=False, use_latin1=False, allow_non_ascii_keywords=False, allow_missing_supp_text=False, allow_supp_text_own_delim=False, allow_missing_nextdata=False, trim_value_whitespace=False, trim_trailing_whitespace=False, ignore_standard_keys=([], []), promote_to_standard=([], []), demote_from_standard=([], []), rename_standard_keys={}, replace_standard_key_values={}, append_standard_keywords={}, substitute_standard_key_values=({}, {}), trim_intra_value_whitespace=False, time_meas_pattern='^(TIME|Time)$', allow_missing_time=False, force_time_linear=False, ignore_time_optical_keys=[], date_pattern=None, time_pattern=None, allow_pseudostandard=False, allow_unused_standard=False, disallow_deprecated=False, fix_log_scale_offsets=False, nonstandard_measurement_pattern=None, ignore_time_gain=False, parse_indexed_spillover=False, disallow_localtime=False, text_data_correction=(0, 0), text_analysis_correction=(0, 0), ignore_text_data_offsets=False, ignore_text_analysis_offsets=False, allow_header_text_offset_mismatch=False, allow_missing_required_offsets=False, truncate_text_offsets=False, allow_optional_dropping=False, transfer_dropped_optional=False, integer_widths_from_byteord=False, integer_byteord_override=None, disallow_range_truncation=False, warnings_are_errors=False, hide_warnings=False, dataset_offset=0)

Read standardized TEXT from the first dataset in an FCS file.

Parameters:
  • path (Path) – Path to be read.

  • text_correction (tuple[int, int]) – Corrections for TEXT offsets in HEADER. Defaults to (0, 0).

  • data_correction (tuple[int, int]) – Corrections for DATA offsets in HEADER. Defaults to (0, 0).

  • analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in HEADER. Defaults to (0, 0).

  • other_corrections (list[tuple[int, int]]) – Corrections for OTHER offsets if they exist. Each correction will be applied in order. If an offset does not need to be corrected, use (0, 0). This will not affect the number of OTHER segments that are read; this is controlled by max_other. Defaults to [].

  • max_other (int | None) – Maximum number of OTHER segments that can be parsed. None means limitless. Defaults to None.

  • other_width (int) – Width (in bytes) to use when parsing OTHER offsets. Defaults to 8.

  • squish_offsets (bool) – If True and a segment’s ending offset is zero, treat entire offset as empty. This might happen if the ending offset is longer than 8 digits, in which case it must be written in TEXT. If this happens, the standards mandate that both offsets be written to TEXT and that the HEADER offsets be set to 0,0, so only writing one is an error unless this flag is set. This should only happen in FCS 3.0 files and above. Defaults to False.

  • allow_negative (bool) – If true, allow negative values in a HEADER offset. If negative offsets are found, they will be replaced with 0. Some files will denote an “empty” offset as 0,-1, which is logically correct since the last offset points to the last byte, thus 0,0 is actually 1 byte long. Unfortunately this is not what the standards say, so specifying 0,-1 is an error unless this flag is set. Defaults to False.

  • truncate_offsets (bool) – If true, truncate offsets that exceed the end of the file. In some cases the DATA offset (usually) might exceed the end of the file by 1, which is usually a mistake and should be corrected with data_correction (or analogous for the offending offset). If this is not the case, the file is likely corrupted. This flag will allow such files to be read conveniently if desired. Defaults to False.

  • version_override (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”] | None) – Override the FCS version as seen in HEADER. Defaults to None.

  • supp_text_correction (tuple[int, int]) – Corrections for Supplemental TEXT offsets in TEXT. Defaults to (0, 0).

  • allow_overlapping_supp_text (bool) – If True allow supplemental TEXT offsets to overlap the primary TEXT offsets from HEADER or HEADER itself and raise a warning if such an overlap is found. Otherwise raise a FileLayoutError. The offsets will not be used if an overlap is found in either case. Defaults to False.

  • ignore_supp_text (bool) – If True, ignore supplemental TEXT entirely. Defaults to False.

  • use_literal_delims (bool) – If True, treat every delimiter as literal (turn off escaping). Without escaping, delimiters cannot be included in keys or values, but empty values become possible. Use this option for files where unescaped delimiters result in the ‘correct’ interpretation of TEXT. Defaults to False.

  • allow_non_ascii_delim (bool) – If True allow non-ASCII delimiters (outside 1-126). Defaults to False.

  • allow_missing_final_delim (bool) – If True allow TEXT to not end with a delimiter. Defaults to False.

  • allow_nonunique (bool) – If True allow non-unique keys in TEXT. In such cases, only the first key will be used regardless of this setting. Defaults to False.

  • allow_odd (bool) – If True, allow TEXT to contain an odd number of words. The last ‘dangling’ word will be dropped independent of this flag. Defaults to False.

  • allow_empty (bool) – If True allow keys with blank values. Only relevant if use_literal_delims is also True. Defaults to False.

  • allow_delim_at_boundary (bool) – If True allow delimiters at word boundaries. The FCS standard forbids this because it is impossible to tell if such delimiters belong to the previous or the next word. Consequently, delimiters at boundaries will be dropped regardless of this flag. Setting this to True will turn this into a warning not an error. Only relevant if use_literal_delims is False. Defaults to False.

  • allow_non_utf8 (bool) – If True allow non-UTF8 characters in TEXT. Words with such characters will be dropped regardless; setting this to True will turn these cases into warnings not errors. Defaults to False.

  • use_latin1 (bool) – If True interpret all characters in TEXT as Latin-1 (aka ISO/IEC 8859-1) instead of UTF-8. Defaults to False.

  • allow_non_ascii_keywords (bool) – If True allow non-ASCII keys. This only applies to non-standard keywords, as all standardized keywords must start with $ and may otherwise contain only letters and numbers; all compliant keys must contain only ASCII. If False, encountering such a key is an error; if True, the key will be kept as a non-standard key. Defaults to False.

  • allow_missing_supp_text (bool) – If True allow supplemental TEXT offsets to be missing from primary TEXT. Defaults to False.

  • allow_supp_text_own_delim (bool) – If True allow supplemental TEXT offsets to have a different delimiter compared to primary TEXT. Defaults to False.

  • allow_missing_nextdata (bool) – If True allow $NEXTDATA to be missing. This is a required keyword in all versions. However, most files only have one dataset in which case this keyword is meaningless. Defaults to False.

  • trim_value_whitespace (bool) – If True trim whitespace from all values. If performed, trimming precedes all other repair steps. Any values which are entirely spaces will become blanks, in which case it may also be sensible to enable allow_empty. Defaults to False.

  • trim_trailing_whitespace (bool) – If True trim whitespace off the end of TEXT. This will effectively move the ending offset of TEXT to the first non-whitespace byte immediately preceding the actual ending offset given in HEADER. Defaults to False.

  • ignore_standard_keys (tuple[list[str], list[str]]) – Remove standard keys from TEXT. The leading $ is implied so do not include it. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • promote_to_standard (tuple[list[str], list[str]]) – Promote nonstandard keys to standard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • demote_from_standard (tuple[list[str], list[str]]) – Demote standard keys to nonstandard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • rename_standard_keys (dict[str, str]) – Rename standard keys in TEXT. Keys matching the first part of the pair will be replaced by the second. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • replace_standard_key_values (dict[str, str]) – Replace values for standard keys in TEXT. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • append_standard_keywords (dict[str, str]) – Append standard key/value pairs to TEXT. All keys and values will be included as they appear here. The leading $ is implied so do not include it. Defaults to {}.

  • substitute_standard_key_values (tuple[dict[str, tuple[str, str, bool]], dict[str, tuple[str, str, bool]]]) – Apply sed-like substitution operation on matching standard keys. The leading $ is implied when matching keys. The first dict corresponds to keys which are matched literally, and the second corresponds to keys which are matched via regular expression. The members in the 3-tuple values correspond to a regular expression, replacement string, and global flag respectively. The regular expression may contain capture expressions which must be matched exactly in the replacement string. If the global flag is True, replace all found matches, otherwise only replace the first. Any references in replacement string must be given with surrounding brackets like "${1}" or "${cygnus}". Defaults to ({}, {}).

  • trim_intra_value_whitespace (bool) – If True, trim whitespace between delimiters such as , and ; within keyword value strings. Defaults to False.

  • time_meas_pattern (str | None) – A pattern to match the $PnN of the time measurement. If None, do not try to find a time measurement. Defaults to "^(TIME|Time)$".

  • allow_missing_time (bool) – If True allow time measurement to be missing. Defaults to False.

  • force_time_linear (bool) – If True force time measurement to be linear independent of $PnE. Defaults to False.

  • ignore_time_optical_keys (list[Literal[“F”, “L”, “O”, “T”, “P”, “V”, “CALIBRATION”, “DET”, “TAG”, “FEATURE”, “ANALYTE”]]) – Ignore optical keys in the temporal measurement. These keys are nonsensical for time measurements but are not explicitly forbidden in the standard. Provided keys are the string after the “Pn” in the “PnX” keywords. Defaults to [].

  • date_pattern (str | None) – If supplied, will be used as an alternative pattern when parsing $DATE. If not supplied, $DATE will be parsed according to the standard pattern which is %d-%b-%Y. Defaults to None.

  • time_pattern (str | None) – If supplied, will be used as an alternative pattern when parsing $BTIM and $ETIM. The values "%!" or "%@" may be used to match 1/60 seconds or centiseconds respectively. If not supplied, $BTIM and $ETIM will be parsed according to the standard pattern which is version-specific. Defaults to None.

  • allow_pseudostandard (bool) – If True allow non-standard keywords with a leading $. The presence of such keywords often means the version in HEADER is incorrect. Defaults to False.

  • allow_unused_standard (bool) – If True allow unused standard keywords to be present. Defaults to False.

  • disallow_deprecated (bool) – If True throw error if a deprecated key is encountered. Defaults to False.

  • fix_log_scale_offsets (bool) – If True fix log-scale $PnE keywords which have a zero offset (ie X,0.0 where X is non-zero). Defaults to False.

  • nonstandard_measurement_pattern (str | None) – Pattern to use when matching nonstandard measurement keys. Must be a regular expression pattern with %n which will represent the measurement index and should not start with $. Otherwise should be a normal regular expression as defined in regexp-syntax. Defaults to None.

  • ignore_time_gain (bool) – If True ignore the $PnG (gain) keyword for the time measurement. This keyword should not be set according to the standard; however, this library will allow gain to be 1.0 since this equates to identity. If gain is not 1.0, this is nonsense and it can be ignored with this flag. Defaults to False.

  • parse_indexed_spillover (bool) – Parse $SPILLOVER with numeric indices rather than strings (ie names or $PnN). Defaults to False.

  • disallow_localtime (bool) – If true, require that $BEGINDATETIME and $ENDDATETIME have a timezone if provided. This is not required by the standard, but not having a timezone is ambiguous since the absolute value of the timestamp is dependent on localtime and therefore is location-dependent. Only affects FCS 3.2. Defaults to False.

  • text_data_correction (tuple[int, int]) – Corrections for DATA offsets in TEXT. Defaults to (0, 0).

  • text_analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in TEXT. Defaults to (0, 0).

  • ignore_text_data_offsets (bool) – If True ignore DATA offsets in TEXT. Defaults to False.

  • ignore_text_analysis_offsets (bool) – If True ignore ANALYSIS offsets in TEXT. Defaults to False.

  • allow_header_text_offset_mismatch (bool) – If True allow TEXT and HEADER offsets to mismatch. Defaults to False.

  • allow_missing_required_offsets (bool) – If True allow required DATA and ANALYSIS (3.1 or lower) offsets in TEXT to be missing. If missing, fall back to offsets from HEADER. Defaults to False.

  • truncate_text_offsets (bool) – If True truncate offsets that exceed end of file. Defaults to False.

  • allow_optional_dropping (bool) – If True drop optional keys that cause an error and emit warning instead. Defaults to False.

  • transfer_dropped_optional (bool) – If True transfer optional keys to non-standard dict if dropped. Defaults to False.

  • integer_widths_from_byteord (bool) – If True set all $PnB to the number of bytes from $BYTEORD. Only has an effect for FCS 2.0/3.0 where $DATATYPE is I. Defaults to False.

  • integer_byteord_override (list[int] | None) – Override $BYTEORD for integer layouts. Defaults to None.

  • disallow_range_truncation (bool) – If True throw error if $PnR values need to be truncated to match the number of bytes specified by $PnB and $DATATYPE. Defaults to False.

  • warnings_are_errors (bool) – If True all warnings will be regarded as errors. Defaults to False.

  • hide_warnings (bool) – If True hide all warnings. Defaults to False.

  • dataset_offset (int) – Starting position in the file of the dataset to be read. Defaults to 0.

Return type:

tuple[CoreTEXT2_0 | CoreTEXT3_0 | CoreTEXT3_1 | CoreTEXT3_2, StdTEXTOutput]

Raises:
  • ConfigError – if nonstandard_measurement_pattern does not have "%n"

  • ConfigError – if time_pattern does not have specifiers for hours, minutes, seconds, and optionally sub-seconds (where "%!" and "%@" correspond to 1/60 seconds and centiseconds respectively) as outlined in chrono

  • ConfigError – if date_pattern does not have year, month, and day specifiers as outlined in chrono

  • ConfigError – if other_width is less than 1 or greater than 20

  • ConfigError – if time_meas_pattern is not a valid regular expression as described in regexp-syntax

  • InvalidKeywordValueError – if integer_byteord_override is not a list of integers including all from 1 to N where N is the length of the list (up to 8)

  • ConfigError – if field 1 in dict value in field 1 or 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • OverflowError – if field 1 or 2 in analysis_correction, data_correction, supp_text_correction, text_analysis_correction, text_correction, or text_data_correction is less than -2**31 or greater than 2**31-1

  • OverflowError – if field 1 or 2 in any in other_corrections is less than -2**31 or greater than 2**31-1

  • ParseKeyError – if any in field 1 in demote_from_standard, ignore_standard_keys, or promote_to_standard contains non-ASCII characters or is empty

  • ConfigError – if any in field 2 in demote_from_standard, ignore_standard_keys, or promote_to_standard is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict key in append_standard_keywords, rename_standard_keys, or replace_standard_key_values contains non-ASCII characters or is empty

  • ParseKeyError – if dict key in field 1 in substitute_standard_key_values contains non-ASCII characters or is empty

  • ConfigError – if dict key in field 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict value in rename_standard_keys contains non-ASCII characters or is empty

  • ConfigError – if references in replacement string in dict value in field 1 or 2 in substitute_standard_key_values do not match captures in regular expression

  • FileLayoutError – If HEADER or TEXT are unparsable

  • ParseKeyError – If any keys from TEXT contain non-ASCII characters and allow_non_ascii_keywords is False

  • ExtraKeywordError – If any standard keys are unused and allow_pseudostandard or allow_unused_standard are False

  • FCSDeprecatedError – If any keywords or their values are deprecated and disallow_deprecated is True

  • ParseKeywordValueError – If any keyword values could not be read from their string encoding

  • RelationalError – If keywords that are referenced by other keywords are missing
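A usage sketch of fcs_read_std_text (the file name, date pattern, and time-channel name are placeholders):

    from pathlib import Path
    from pyreflow.api import fcs_read_std_text

    # Returns a version-specific CoreTEXT* object along with parse output.
    core, std_out = fcs_read_std_text(Path("example.fcs"))

    # Loosen the standardization pass for a less compliant file: accept an
    # ISO-style $DATE and a time measurement whose $PnN is "HDR-T".
    core, std_out = fcs_read_std_text(
        Path("example.fcs"),
        date_pattern="%Y-%m-%d",
        time_meas_pattern="^HDR-T$",
        allow_pseudostandard=True,
    )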

pyreflow.api.fcs_read_flat_dataset(path, text_correction=(0, 0), data_correction=(0, 0), analysis_correction=(0, 0), other_corrections=[], max_other=None, other_width=8, squish_offsets=False, allow_negative=False, truncate_offsets=False, version_override=None, supp_text_correction=(0, 0), allow_overlapping_supp_text=False, ignore_supp_text=False, use_literal_delims=False, allow_non_ascii_delim=False, allow_missing_final_delim=False, allow_nonunique=False, allow_odd=False, allow_empty=False, allow_delim_at_boundary=False, allow_non_utf8=False, use_latin1=False, allow_non_ascii_keywords=False, allow_missing_supp_text=False, allow_supp_text_own_delim=False, allow_missing_nextdata=False, trim_value_whitespace=False, trim_trailing_whitespace=False, ignore_standard_keys=([], []), promote_to_standard=([], []), demote_from_standard=([], []), rename_standard_keys={}, replace_standard_key_values={}, append_standard_keywords={}, substitute_standard_key_values=({}, {}), text_data_correction=(0, 0), text_analysis_correction=(0, 0), ignore_text_data_offsets=False, ignore_text_analysis_offsets=False, allow_header_text_offset_mismatch=False, allow_missing_required_offsets=False, truncate_text_offsets=False, allow_optional_dropping=False, transfer_dropped_optional=False, integer_widths_from_byteord=False, integer_byteord_override=None, disallow_range_truncation=False, allow_uneven_event_width=False, allow_tot_mismatch=False, warnings_are_errors=False, hide_warnings=False, dataset_offset=0)

Read one dataset from an FCS file in flat mode.

Parameters:
  • path (Path) – Path to be read.

  • text_correction (tuple[int, int]) – Corrections for TEXT offsets in HEADER. Defaults to (0, 0).

  • data_correction (tuple[int, int]) – Corrections for DATA offsets in HEADER. Defaults to (0, 0).

  • analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in HEADER. Defaults to (0, 0).

  • other_corrections (list[tuple[int, int]]) – Corrections for OTHER offsets if they exist. Each correction will be applied in order. If an offset does not need to be corrected, use (0, 0). This will not affect the number of OTHER segments that are read; this is controlled by max_other. Defaults to [].

  • max_other (int | None) – Maximum number of OTHER segments that can be parsed. None means limitless. Defaults to None.

  • other_width (int) – Width (in bytes) to use when parsing OTHER offsets. Defaults to 8.

  • squish_offsets (bool) – If True and a segment’s ending offset is zero, treat entire offset as empty. This might happen if the ending offset is longer than 8 digits, in which case it must be written in TEXT. If this happens, the standards mandate that both offsets be written to TEXT and that the HEADER offsets be set to 0,0, so only writing one is an error unless this flag is set. This should only happen in FCS 3.0 files and above. Defaults to False.

  • allow_negative (bool) – If true, allow negative values in a HEADER offset. If negative offsets are found, they will be replaced with 0. Some files will denote an “empty” offset as 0,-1, which is logically correct since the last offset points to the last byte, thus 0,0 is actually 1 byte long. Unfortunately this is not what the standards say, so specifying 0,-1 is an error unless this flag is set. Defaults to False.

  • truncate_offsets (bool) – If true, truncate offsets that exceed the end of the file. In some cases the DATA offset (usually) might exceed the end of the file by 1, which is usually a mistake and should be corrected with data_correction (or analogous for the offending offset). If this is not the case, the file is likely corrupted. This flag will allow such files to be read conveniently if desired. Defaults to False.

  • version_override (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”] | None) – Override the FCS version as seen in HEADER. Defaults to None.

  • supp_text_correction (tuple[int, int]) – Corrections for Supplemental TEXT offsets in TEXT. Defaults to (0, 0).

  • allow_overlapping_supp_text (bool) – If True allow supplemental TEXT offsets to overlap the primary TEXT offsets from HEADER or HEADER itself and raise a warning if such an overlap is found. Otherwise raise a FileLayoutError. The offsets will not be used if an overlap is found in either case. Defaults to False.

  • ignore_supp_text (bool) – If True, ignore supplemental TEXT entirely. Defaults to False.

  • use_literal_delims (bool) – If True, treat every delimiter as literal (turn off escaping). Without escaping, delimiters cannot be included in keys or values, but empty values become possible. Use this option for files where unescaped delimiters result in the ‘correct’ interpretation of TEXT. Defaults to False.

  • allow_non_ascii_delim (bool) – If True allow non-ASCII delimiters (outside 1-126). Defaults to False.

  • allow_missing_final_delim (bool) – If True allow TEXT to not end with a delimiter. Defaults to False.

  • allow_nonunique (bool) – If True allow non-unique keys in TEXT. In such cases, only the first key will be used regardless of this setting. Defaults to False.

  • allow_odd (bool) – If True, allow TEXT to contain an odd number of words. The last ‘dangling’ word will be dropped independent of this flag. Defaults to False.

  • allow_empty (bool) – If True allow keys with blank values. Only relevant if use_literal_delims is also True. Defaults to False.

  • allow_delim_at_boundary (bool) – If True allow delimiters at word boundaries. The FCS standard forbids this because it is impossible to tell if such delimiters belong to the previous or the next word. Consequently, delimiters at boundaries will be dropped regardless of this flag. Setting this to True will turn this into a warning not an error. Only relevant if use_literal_delims is False. Defaults to False.

  • allow_non_utf8 (bool) – If True allow non-UTF8 characters in TEXT. Words with such characters will be dropped regardless; setting this to True will turn these cases into warnings not errors. Defaults to False.

  • use_latin1 (bool) – If True interpret all characters in TEXT as Latin-1 (aka ISO/IEC 8859-1) instead of UTF-8. Defaults to False.

  • allow_non_ascii_keywords (bool) – If True allow non-ASCII keys. This only applies to non-standard keywords, as all standardized keywords must start with $ and may otherwise contain only letters and numbers; all compliant keys must contain only ASCII. If False, encountering such a key is an error; if True, the key will be kept as a non-standard key. Defaults to False.

  • allow_missing_supp_text (bool) – If True allow supplemental TEXT offsets to be missing from primary TEXT. Defaults to False.

  • allow_supp_text_own_delim (bool) – If True allow supplemental TEXT offsets to have a different delimiter compared to primary TEXT. Defaults to False.

  • allow_missing_nextdata (bool) – If True allow $NEXTDATA to be missing. This is a required keyword in all versions. However, most files only have one dataset in which case this keyword is meaningless. Defaults to False.

  • trim_value_whitespace (bool) – If True trim whitespace from all values. If performed, trimming precedes all other repair steps. Any values which are entirely spaces will become blanks, in which case it may also be sensible to enable allow_empty. Defaults to False.

  • trim_trailing_whitespace (bool) – If True trim whitespace off the end of TEXT. This will effectively move the ending offset of TEXT to the first non-whitespace byte immediately preceding the actual ending offset given in HEADER. Defaults to False.

  • ignore_standard_keys (tuple[list[str], list[str]]) – Remove standard keys from TEXT. The leading $ is implied so do not include it. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • promote_to_standard (tuple[list[str], list[str]]) – Promote nonstandard keys to standard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • demote_from_standard (tuple[list[str], list[str]]) – Demote standard keys to nonstandard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • rename_standard_keys (dict[str, str]) – Rename standard keys in TEXT. Keys matching the first part of the pair will be replaced by the second. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • replace_standard_key_values (dict[str, str]) – Replace values for standard keys in TEXT. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • append_standard_keywords (dict[str, str]) – Append standard key/value pairs to TEXT. All keys and values will be included as they appear here. The leading $ is implied so do not include it. Defaults to {}.

  • substitute_standard_key_values (tuple[dict[str, tuple[str, str, bool]], dict[str, tuple[str, str, bool]]]) – Apply sed-like substitution operation on matching standard keys. The leading $ is implied when matching keys. The first dict corresponds to keys which are matched literally, and the second corresponds to keys which are matched via regular expression. The members in the 3-tuple values correspond to a regular expression, replacement string, and global flag respectively. The regular expression may contain capture expressions which must be matched exactly in the replacement string. If the global flag is True, replace all found matches, otherwise only replace the first. Any references in replacement string must be given with surrounding brackets like "${1}" or "${cygnus}". Defaults to ({}, {}).

  • text_data_correction (tuple[int, int]) – Corrections for DATA offsets in TEXT. Defaults to (0, 0).

  • text_analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in TEXT. Defaults to (0, 0).

  • ignore_text_data_offsets (bool) – If True ignore DATA offsets in TEXT. Defaults to False.

  • ignore_text_analysis_offsets (bool) – If True ignore ANALYSIS offsets in TEXT. Defaults to False.

  • allow_header_text_offset_mismatch (bool) – If True allow TEXT and HEADER offsets to mismatch. Defaults to False.

  • allow_missing_required_offsets (bool) – If True allow required DATA and ANALYSIS (3.1 or lower) offsets in TEXT to be missing. If missing, fall back to offsets from HEADER. Defaults to False.

  • truncate_text_offsets (bool) – If True truncate offsets that exceed end of file. Defaults to False.

  • allow_optional_dropping (bool) – If True drop optional keys that cause an error and emit warning instead. Defaults to False.

  • transfer_dropped_optional (bool) – If True transfer optional keys to non-standard dict if dropped. Defaults to False.

  • integer_widths_from_byteord (bool) – If True set all $PnB to the number of bytes from $BYTEORD. Only has an effect for FCS 2.0/3.0 where $DATATYPE is I. Defaults to False.

  • integer_byteord_override (list[int] | None) – Override $BYTEORD for integer layouts. Defaults to None.

  • disallow_range_truncation (bool) – If True throw error if $PnR values need to be truncated to match the number of bytes specified by $PnB and $DATATYPE. Defaults to False.

  • allow_uneven_event_width (bool) – If True allow event width to not perfectly divide length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • allow_tot_mismatch (bool) – If True allow $TOT to not match number of events as computed by the event width and length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • warnings_are_errors (bool) – If True all warnings will be regarded as errors. Defaults to False.

  • hide_warnings (bool) – If True hide all warnings. Defaults to False.

  • dataset_offset (int) – Starting position in the file of the dataset to be read. Defaults to 0.

Return type:

FlatDatasetOutput

Raises:
  • ConfigError – if other_width is less than 1 or greater than 20

  • InvalidKeywordValueError – if integer_byteord_override is not a list of integers including all from 1 to N where N is the length of the list (up to 8)

  • ConfigError – if field 1 in dict value in field 1 or 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • OverflowError – if field 1 or 2 in analysis_correction, data_correction, supp_text_correction, text_analysis_correction, text_correction, or text_data_correction is less than -2**31 or greater than 2**31-1

  • OverflowError – if field 1 or 2 in any in other_corrections is less than -2**31 or greater than 2**31-1

  • ParseKeyError – if any in field 1 in demote_from_standard, ignore_standard_keys, or promote_to_standard contains non-ASCII characters or is empty

  • ConfigError – if any in field 2 in demote_from_standard, ignore_standard_keys, or promote_to_standard is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict key in append_standard_keywords, rename_standard_keys, or replace_standard_key_values contains non-ASCII characters or is empty

  • ParseKeyError – if dict key in field 1 in substitute_standard_key_values contains non-ASCII characters or is empty

  • ConfigError – if dict key in field 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict value in rename_standard_keys contains non-ASCII characters or is empty

  • ConfigError – if references in replacement string in dict value in field 1 or 2 in substitute_standard_key_values do not match captures in regular expression

  • FileLayoutError – If HEADER, TEXT, or DATA are unparsable

  • ParseKeyError – If any keys from TEXT contain non-ASCII characters and allow_non_ascii_keywords is False

  • FCSDeprecatedError – If an ASCII layout is used and FCS version is 3.1 or 3.2

  • ParseKeywordValueError – If any keyword values could not be read from their string encoding

  • RelationalError – If keywords are incompatible with indicated layout of DATA

  • EventDataError – If values in DATA cannot be read
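A usage sketch of fcs_read_flat_dataset (the file name is a placeholder):

    from pathlib import Path
    from pyreflow.api import fcs_read_flat_dataset

    # Read flat TEXT plus the DATA, ANALYSIS, and OTHER segments of the first dataset.
    out = fcs_read_flat_dataset(Path("example.fcs"))

    # Tolerate a $TOT that disagrees with the number of events implied by the
    # event width and the length of DATA.
    out = fcs_read_flat_dataset(Path("example.fcs"), allow_tot_mismatch=True)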

pyreflow.api.fcs_read_std_dataset(path, text_correction=(0, 0), data_correction=(0, 0), analysis_correction=(0, 0), other_corrections=[], max_other=None, other_width=8, squish_offsets=False, allow_negative=False, truncate_offsets=False, version_override=None, supp_text_correction=(0, 0), allow_overlapping_supp_text=False, ignore_supp_text=False, use_literal_delims=False, allow_non_ascii_delim=False, allow_missing_final_delim=False, allow_nonunique=False, allow_odd=False, allow_empty=False, allow_delim_at_boundary=False, allow_non_utf8=False, use_latin1=False, allow_non_ascii_keywords=False, allow_missing_supp_text=False, allow_supp_text_own_delim=False, allow_missing_nextdata=False, trim_value_whitespace=False, trim_trailing_whitespace=False, ignore_standard_keys=([], []), promote_to_standard=([], []), demote_from_standard=([], []), rename_standard_keys={}, replace_standard_key_values={}, append_standard_keywords={}, substitute_standard_key_values=({}, {}), trim_intra_value_whitespace=False, time_meas_pattern='^(TIME|Time)$', allow_missing_time=False, force_time_linear=False, ignore_time_optical_keys=[], date_pattern=None, time_pattern=None, allow_pseudostandard=False, allow_unused_standard=False, disallow_deprecated=False, fix_log_scale_offsets=False, nonstandard_measurement_pattern=None, ignore_time_gain=False, parse_indexed_spillover=False, disallow_localtime=False, text_data_correction=(0, 0), text_analysis_correction=(0, 0), ignore_text_data_offsets=False, ignore_text_analysis_offsets=False, allow_header_text_offset_mismatch=False, allow_missing_required_offsets=False, truncate_text_offsets=False, allow_optional_dropping=False, transfer_dropped_optional=False, integer_widths_from_byteord=False, integer_byteord_override=None, disallow_range_truncation=False, allow_uneven_event_width=False, allow_tot_mismatch=False, warnings_are_errors=False, hide_warnings=False, dataset_offset=0)

Read one standardized dataset from an FCS file.

Parameters:
  • path (Path) – Path to be read.

  • text_correction (tuple[int, int]) – Corrections for TEXT offsets in HEADER. Defaults to (0, 0).

  • data_correction (tuple[int, int]) – Corrections for DATA offsets in HEADER. Defaults to (0, 0).

  • analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in HEADER. Defaults to (0, 0).

  • other_corrections (list[tuple[int, int]]) – Corrections for OTHER offsets if they exist. Each correction will be applied in order. If an offset does not need to be corrected, use (0, 0). This will not affect the number of OTHER segments that are read; this is controlled by max_other. Defaults to [].

  • max_other (int | None) – Maximum number of OTHER segments that can be parsed. None means limitless. Defaults to None.

  • other_width (int) – Width (in bytes) to use when parsing OTHER offsets. Defaults to 8.

  • squish_offsets (bool) – If True and a segment’s ending offset is zero, treat entire offset as empty. This might happen if the ending offset is longer than 8 digits, in which case it must be written in TEXT. If this happens, the standards mandate that both offsets be written to TEXT and that the HEADER offsets be set to 0,0, so only writing one is an error unless this flag is set. This should only happen in FCS 3.0 files and above. Defaults to False.

  • allow_negative (bool) – If true, allow negative values in a HEADER offset. If negative offsets are found, they will be replaced with 0. Some files will denote an “empty” offset as 0,-1, which is logically correct since the last offset points to the last byte, thus 0,0 is actually 1 byte long. Unfortunately this is not what the standards say, so specifying 0,-1 is an error unless this flag is set. Defaults to False.

  • truncate_offsets (bool) – If true, truncate offsets that exceed the end of the file. In some cases the DATA offset (usually) might exceed the end of the file by 1, which is usually a mistake and should be corrected with data_correction (or analogous for the offending offset). If this is not the case, the file is likely corrupted. This flag will allow such files to be read conveniently if desired. Defaults to False.

  • version_override (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”] | None) – Override the FCS version as seen in HEADER. Defaults to None.

  • supp_text_correction (tuple[int, int]) – Corrections for Supplemental TEXT offsets in TEXT. Defaults to (0, 0).

  • allow_overlapping_supp_text (bool) – If True allow supplemental TEXT offsets to overlap the primary TEXT offsets from HEADER or HEADER itself and raise a warning if such an overlap is found. Otherwise raise a FileLayoutError. The offsets will not be used if an overlap is found in either case. Defaults to False.

  • ignore_supp_text (bool) – If True, ignore supplemental TEXT entirely. Defaults to False.

  • use_literal_delims (bool) – If True, treat every delimiter as literal (turn off escaping). Without escaping, delimiters cannot be included in keys or values, but empty values become possible. Use this option for files where unescaped delimiters result in the ‘correct’ interpretation of TEXT. Defaults to False.

  • allow_non_ascii_delim (bool) – If True allow non-ASCII delimiters (outside 1-126). Defaults to False.

  • allow_missing_final_delim (bool) – If True allow TEXT to not end with a delimiter. Defaults to False.

  • allow_nonunique (bool) – If True allow non-unique keys in TEXT. In such cases, only the first key will be used regardless of this setting. Defaults to False.

  • allow_odd (bool) – If True, allow TEXT to contain an odd number of words. The last ‘dangling’ word will be dropped independent of this flag. Defaults to False.

  • allow_empty (bool) – If True allow keys with blank values. Only relevant if use_literal_delims is also True. Defaults to False.

  • allow_delim_at_boundary (bool) – If True allow delimiters at word boundaries. The FCS standard forbids this because it is impossible to tell if such delimiters belong to the previous or the next word. Consequently, delimiters at boundaries will be dropped regardless of this flag. Setting this to True will turn this into a warning not an error. Only relevant if use_literal_delims is False. Defaults to False.

  • allow_non_utf8 (bool) – If True allow non-UTF8 characters in TEXT. Words with such characters will be dropped regardless; setting this to True will turn these cases into warnings not errors. Defaults to False.

  • use_latin1 (bool) – If True interpret all characters in TEXT as Latin-1 (aka ISO/IEC 8859-1) instead of UTF-8. Defaults to False.

  • allow_non_ascii_keywords (bool) – If True allow non-ASCII keys. This only applies to non-standard keywords, as all standardized keywords must start with $ and may otherwise contain only letters and numbers; all compliant keys must contain only ASCII. If False, encountering such a key is an error; if True, the key will be kept as a non-standard key. Defaults to False.

  • allow_missing_supp_text (bool) – If True allow supplemental TEXT offsets to be missing from primary TEXT. Defaults to False.

  • allow_supp_text_own_delim (bool) – If True allow supplemental TEXT offsets to have a different delimiter compared to primary TEXT. Defaults to False.

  • allow_missing_nextdata (bool) – If True allow $NEXTDATA to be missing. This is a required keyword in all versions. However, most files only have one dataset in which case this keyword is meaningless. Defaults to False.

  • trim_value_whitespace (bool) – If True trim whitespace from all values. If performed, trimming precedes all other repair steps. Any values which are entirely spaces will become blanks, in which case it may also be sensible to enable allow_empty. Defaults to False.

  • trim_trailing_whitespace (bool) – If True trim whitespace off the end of TEXT. This will effectively move the ending offset of TEXT to the first non-whitespace byte immediately preceding the actual ending offset given in HEADER. Defaults to False.

  • ignore_standard_keys (tuple[list[str], list[str]]) – Remove standard keys from TEXT. The leading $ is implied so do not include it. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • promote_to_standard (tuple[list[str], list[str]]) – Promote nonstandard keys to standard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • demote_from_standard (tuple[list[str], list[str]]) – Demote standard keys to nonstandard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • rename_standard_keys (dict[str, str]) – Rename standard keys in TEXT. Keys matching the first part of the pair will be replaced by the second. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • replace_standard_key_values (dict[str, str]) – Replace values for standard keys in TEXT. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • append_standard_keywords (dict[str, str]) – Append standard key/value pairs to TEXT. All keys and values will be included as they appear here. The leading $ is implied so do not include it. Defaults to {}.

  • substitute_standard_key_values (tuple[dict[str, tuple[str, str, bool]], dict[str, tuple[str, str, bool]]]) – Apply sed-like substitution operation on matching standard keys. The leading $ is implied when matching keys. The first dict corresponds to keys which are matched literally, and the second corresponds to keys which are matched via regular expression. The members in the 3-tuple values correspond to a regular expression, replacement string, and global flag respectively. The regular expression may contain capture expressions which must be matched exactly in the replacement string. If the global flag is True, replace all found matches, otherwise only replace the first. Any references in replacement string must be given with surrounding brackets like "${1}" or "${cygnus}". Defaults to ({}, {}).

  • trim_intra_value_whitespace (bool) – If True, trim whitespace between delimiters such as , and ; within keyword value strings. Defaults to False.

  • time_meas_pattern (str | None) – A pattern to match the $PnN of the time measurement. If None, do not try to find a time measurement. Defaults to "^(TIME|Time)$".

  • allow_missing_time (bool) – If True allow time measurement to be missing. Defaults to False.

  • force_time_linear (bool) – If True force time measurement to be linear independent of $PnE. Defaults to False.

  • ignore_time_optical_keys (list[Literal[“F”, “L”, “O”, “T”, “P”, “V”, “CALIBRATION”, “DET”, “TAG”, “FEATURE”, “ANALYTE”]]) – Ignore optical keys in temporal measurement. These keys are nonsensical for time measurements but are not explicitly forbidden in the standard. Provided keys are the string after the “Pn” in the “PnX” keywords. Defaults to [].

  • date_pattern (str | None) – If supplied, will be used as an alternative pattern when parsing $DATE. If not supplied, $DATE will be parsed according to the standard pattern which is %d-%b-%Y. Defaults to None.

  • time_pattern (str | None) – If supplied, will be used as an alternative pattern when parsing $BTIM and $ETIM. The values "%!" or "%@" may be used to match 1/60 seconds or centiseconds respectively. If not supplied, $BTIM and $ETIM will be parsed according to the standard pattern which is version-specific. Defaults to None.

  • allow_pseudostandard (bool) – If True allow non-standard keywords with a leading $. The presence of such keywords often means the version in HEADER is incorrect. Defaults to False.

  • allow_unused_standard (bool) – If True allow unused standard keywords to be present. Defaults to False.

  • disallow_deprecated (bool) – If True throw error if a deprecated key is encountered. Defaults to False.

  • fix_log_scale_offsets (bool) – If True fix log-scale $PnE keywords which have a zero offset (i.e. X,0.0 where X is non-zero). Defaults to False.

  • nonstandard_measurement_pattern (str | None) – Pattern to use when matching nonstandard measurement keys. Must be a regular expression pattern with %n which will represent the measurement index and should not start with $. Otherwise should be a normal regular expression as defined in regexp-syntax. Defaults to None.

  • ignore_time_gain (bool) – If True ignore the $PnG (gain) keyword for the time measurement. This keyword should not be set according to the standard; however, this library will allow gain to be 1.0 since this equates to identity. Any other gain value is nonsensical for the time measurement and can be ignored with this flag. Defaults to False.

  • parse_indexed_spillover (bool) – Parse $SPILLOVER with numeric indices rather than strings (i.e. names or $PnN). Defaults to False.

  • disallow_localtime (bool) – If true, require that $BEGINDATETIME and $ENDDATETIME have a timezone if provided. This is not required by the standard, but not having a timezone is ambiguous since the absolute value of the timestamp is dependent on localtime and therefore is location-dependent. Only affects FCS 3.2. Defaults to False.

  • text_data_correction (tuple[int, int]) – Corrections for DATA offsets in TEXT. Defaults to (0, 0).

  • text_analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in TEXT. Defaults to (0, 0).

  • ignore_text_data_offsets (bool) – If True ignore DATA offsets in TEXT. Defaults to False.

  • ignore_text_analysis_offsets (bool) – If True ignore ANALYSIS offsets in TEXT. Defaults to False.

  • allow_header_text_offset_mismatch (bool) – If True allow TEXT and HEADER offsets to mismatch. Defaults to False.

  • allow_missing_required_offsets (bool) – If True allow required DATA and ANALYSIS (3.1 or lower) offsets in TEXT to be missing. If missing, fall back to offsets from HEADER. Defaults to False.

  • truncate_text_offsets (bool) – If True truncate offsets that exceed end of file. Defaults to False.

  • allow_optional_dropping (bool) – If True drop optional keys that cause an error and emit warning instead. Defaults to False.

  • transfer_dropped_optional (bool) – If True transfer optional keys to non-standard dict if dropped. Defaults to False.

  • integer_widths_from_byteord (bool) – If True set all $PnB to the number of bytes from $BYTEORD. Only has an effect for FCS 2.0/3.0 where $DATATYPE is I. Defaults to False.

  • integer_byteord_override (list[int] | None) – Override $BYTEORD for integer layouts. Defaults to None.

  • disallow_range_truncation (bool) – If True throw error if $PnR values need to be truncated to match the number of bytes specified by $PnB and $DATATYPE. Defaults to False.

  • allow_uneven_event_width (bool) – If True allow event width to not perfectly divide length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • allow_tot_mismatch (bool) – If True allow $TOT to not match number of events as computed by the event width and length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • warnings_are_errors (bool) – If True all warnings will be regarded as errors. Defaults to False.

  • hide_warnings (bool) – If True hide all warnings. Defaults to False.

  • dataset_offset (int) – Starting position in the file of the dataset to be read. Defaults to 0.

Return type:

tuple[CoreDataset2_0 | CoreDataset3_0 | CoreDataset3_1 | CoreDataset3_2, StdDatasetOutput]

Raises:
  • ConfigError – if nonstandard_measurement_pattern does not have "%n"

  • ConfigError – if time_pattern does not have specifiers for hours, minutes, seconds, and optionally sub-seconds (where "%!" and "%@" correspond to 1/60 seconds and centiseconds respectively) as outlined in chrono

  • ConfigError – if date_pattern does not have year, month, and day specifiers as outlined in chrono

  • ConfigError – if other_width is less than 1 or greater than 20

  • ConfigError – if time_meas_pattern is not a valid regular expression as described in regexp-syntax

  • InvalidKeywordValueError – if integer_byteord_override is not a list of integers including all from 1 to N where N is the length of the list (up to 8)

  • ConfigError – if field 1 in dict value in field 1 or 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • OverflowError – if field 1 or 2 in analysis_correction, data_correction, supp_text_correction, text_analysis_correction, text_correction, or text_data_correction is less than -2**31 or greater than 2**31-1

  • OverflowError – if field 1 or 2 in any in other_corrections is less than -2**31 or greater than 2**31-1

  • ParseKeyError – if any in field 1 in demote_from_standard, ignore_standard_keys, or promote_to_standard contains non-ASCII characters or is empty

  • ConfigError – if any in field 2 in demote_from_standard, ignore_standard_keys, or promote_to_standard is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict key in append_standard_keywords, rename_standard_keys, or replace_standard_key_values contains non-ASCII characters or is empty

  • ParseKeyError – if dict key in field 1 in substitute_standard_key_values contains non-ASCII characters or is empty

  • ConfigError – if dict key in field 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict value in rename_standard_keys contains non-ASCII characters or is empty

  • ConfigError – if references in replacement string in dict value in field 1 or 2 in substitute_standard_key_values do not match captures in regular expression

  • FileLayoutError – If HEADER, TEXT, or DATA are unparsable

  • ParseKeyError – If any keys from TEXT contain non-ASCII characters and allow_non_ascii_keywords is False

  • FCSDeprecatedError – If any keywords or their values are deprecated and disallow_deprecated is True

  • ParseKeywordValueError – If any keyword values could not be read from their string encoding

  • RelationalError – If keywords are incompatible with indicated layout of DATA or if keywords that are referenced by other keywords do not exist

  • EventDataError – If values in DATA cannot be read

  • ExtraKeywordError – If any standard keys are unused and allow_pseudostandard or allow_unused_standard are False
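
Example (a minimal sketch): the entry above presumably documents fcs_read_std_dataset(), the singular standardized reader listed earlier. Assuming a file named "example.fcs" whose $DATE value uses "/" instead of "-", the sed-like substitute_standard_key_values parameter can repair the value while reading:

    from pathlib import Path
    import pyreflow.api as pf

    # The outer tuple is (literal key matches, regex key matches); each value
    # is a (pattern, replacement, global) triple. The leading $ is implied,
    # so "DATE" matches $DATE.
    core, output = pf.fcs_read_std_dataset(
        Path("example.fcs"),
        substitute_standard_key_values=({"DATE": ("/", "-", True)}, {}),
        date_pattern="%d-%m-%Y",  # parse the repaired value with this pattern
    )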

pyreflow.api.fcs_read_flat_texts(path, skip=None, limit=None, text_correction=(0, 0), data_correction=(0, 0), analysis_correction=(0, 0), other_corrections=[], max_other=None, other_width=8, squish_offsets=False, allow_negative=False, truncate_offsets=False, version_override=None, supp_text_correction=(0, 0), allow_overlapping_supp_text=False, ignore_supp_text=False, use_literal_delims=False, allow_non_ascii_delim=False, allow_missing_final_delim=False, allow_nonunique=False, allow_odd=False, allow_empty=False, allow_delim_at_boundary=False, allow_non_utf8=False, use_latin1=False, allow_non_ascii_keywords=False, allow_missing_supp_text=False, allow_supp_text_own_delim=False, allow_missing_nextdata=False, trim_value_whitespace=False, trim_trailing_whitespace=False, ignore_standard_keys=([], []), promote_to_standard=([], []), demote_from_standard=([], []), rename_standard_keys={}, replace_standard_key_values={}, append_standard_keywords={}, substitute_standard_key_values=({}, {}), warnings_are_errors=False, hide_warnings=False)

Read HEADER and TEXT from multiple datasets in FCS file.

Parameters:
  • path (Path) – Path to be read.

  • skip (int | None) – Number of datasets to skip. The HEADER and TEXT from skipped datasets will still be read to obtain $NEXTDATA for the next dataset in the file. Defaults to None.

  • limit (int | None) – Parse up to this many datasets. Defaults to None.

  • text_correction (tuple[int, int]) – Corrections for TEXT offsets in HEADER. Defaults to (0, 0).

  • data_correction (tuple[int, int]) – Corrections for DATA offsets in HEADER. Defaults to (0, 0).

  • analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in HEADER. Defaults to (0, 0).

  • other_corrections (list[tuple[int, int]]) – Corrections for OTHER offsets if they exist. Each correction will be applied in order. If an offset does not need to be corrected, use (0, 0). This will not affect the number of OTHER segments that are read; this is controlled by max_other. Defaults to [].

  • max_other (int | None) – Maximum number of OTHER segments that can be parsed. None means limitless. Defaults to None.

  • other_width (int) – Width (in bytes) to use when parsing OTHER offsets. Defaults to 8.

  • squish_offsets (bool) – If True and a segment’s ending offset is zero, treat entire offset as empty. This might happen if the ending offset is longer than 8 digits, in which case it must be written in TEXT. If this happens, the standards mandate that both offsets be written to TEXT and that the HEADER offsets be set to 0,0, so only writing one is an error unless this flag is set. This should only happen in FCS 3.0 files and above. Defaults to False.

  • allow_negative (bool) – If true, allow negative values in a HEADER offset. If negative offsets are found, they will be replaced with 0. Some files will denote an “empty” offset as 0,-1, which is logically correct since the last offset points to the last byte, thus 0,0 is actually 1 byte long. Unfortunately this is not what the standards say, so specifying 0,-1 is an error unless this flag is set. Defaults to False.

  • truncate_offsets (bool) – If true, truncate offsets that exceed the end of the file. In some cases the DATA offset (usually) might exceed the end of the file by 1, which is usually a mistake and should be corrected with data_correction (or analogous for the offending offset). If this is not the case, the file is likely corrupted. This flag will allow such files to be read conveniently if desired. Defaults to False.

  • version_override (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”] | None) – Override the FCS version as seen in HEADER. Defaults to None.

  • supp_text_correction (tuple[int, int]) – Corrections for Supplemental TEXT offsets in TEXT. Defaults to (0, 0).

  • allow_overlapping_supp_text (bool) – If True allow supplemental TEXT offsets to overlap the primary TEXT offsets from HEADER or HEADER itself and raise a warning if such an overlap is found. Otherwise raise a FileLayoutError. The offsets will not be used if an overlap is found in either case. Defaults to False.

  • ignore_supp_text (bool) – If True, ignore supplemental TEXT entirely. Defaults to False.

  • use_literal_delims (bool) – If True, treat every delimiter as literal (turn off escaping). Without escaping, delimiters cannot be included in keys or values, but empty values become possible. Use this option for files where unescaped delimiters result in the ‘correct’ interpretation of TEXT. Defaults to False.

  • allow_non_ascii_delim (bool) – If True allow non-ASCII delimiters (outside 1-126). Defaults to False.

  • allow_missing_final_delim (bool) – If True allow TEXT to not end with a delimiter. Defaults to False.

  • allow_nonunique (bool) – If True allow non-unique keys in TEXT. In such cases, only the first key will be used regardless of this setting. Defaults to False.

  • allow_odd (bool) – If True, allow TEXT to contain an odd number of words. The last ‘dangling’ word will be dropped regardless of this flag. Defaults to False.

  • allow_empty (bool) – If True allow keys with blank values. Only relevant if use_literal_delims is also True. Defaults to False.

  • allow_delim_at_boundary (bool) – If True allow delimiters at word boundaries. The FCS standard forbids this because it is impossible to tell if such delimiters belong to the previous or the next word. Consequently, delimiters at boundaries will be dropped regardless of this flag. Setting this to True will turn this into a warning not an error. Only relevant if use_literal_delims is False. Defaults to False.

  • allow_non_utf8 (bool) – If True allow non-UTF8 characters in TEXT. Words with such characters will be dropped regardless; setting this to True will turn these cases into warnings not errors. Defaults to False.

  • use_latin1 (bool) – If True interpret all characters in TEXT as Latin-1 (aka ISO/IEC 8859-1) instead of UTF-8. Defaults to False.

  • allow_non_ascii_keywords (bool) – If True allow non-ASCII keys. This only applies to non-standard keywords, as all standardized keywords must start with $ and may only contain ASCII letters and numbers. Regardless, all compliant keys must contain only ASCII. If False, an error will be emitted when such a key is encountered; if True, the key will be kept as a non-standard key. Defaults to False.

  • allow_missing_supp_text (bool) – If True allow supplemental TEXT offsets to be missing from primary TEXT. Defaults to False.

  • allow_supp_text_own_delim (bool) – If True allow supplemental TEXT offsets to have a different delimiter compared to primary TEXT. Defaults to False.

  • allow_missing_nextdata (bool) – If True allow $NEXTDATA to be missing. This is a required keyword in all versions. However, most files only have one dataset in which case this keyword is meaningless. Defaults to False.

  • trim_value_whitespace (bool) – If True trim whitespace from all values. If performed, trimming precedes all other repair steps. Any values which are entirely spaces will become blanks, in which case it may also be sensible to enable allow_empty. Defaults to False.

  • trim_trailing_whitespace (bool) – If True trim whitespace off the end of TEXT. This will effectively move the ending offset of TEXT to the first non-whitespace byte immediately preceding the actual ending offset given in HEADER. Defaults to False.

  • ignore_standard_keys (tuple[list[str], list[str]]) – Remove standard keys from TEXT. The leading $ is implied so do not include it. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • promote_to_standard (tuple[list[str], list[str]]) – Promote nonstandard keys to standard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • demote_from_standard (tuple[list[str], list[str]]) – Demote standard keys to nonstandard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • rename_standard_keys (dict[str, str]) – Rename standard keys in TEXT. Keys matching the first part of the pair will be replaced by the second. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • replace_standard_key_values (dict[str, str]) – Replace values for standard keys in TEXT. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • append_standard_keywords (dict[str, str]) – Append standard key/value pairs to TEXT. All keys and values will be included as they appear here. The leading $ is implied so do not include it. Defaults to {}.

  • substitute_standard_key_values (tuple[dict[str, tuple[str, str, bool]], dict[str, tuple[str, str, bool]]]) – Apply sed-like substitution operation on matching standard keys. The leading $ is implied when matching keys. The first dict corresponds to keys which are matched literally, and the second corresponds to keys which are matched via regular expression. The members in the 3-tuple values correspond to a regular expression, replacement string, and global flag respectively. The regular expression may contain capture expressions which must be matched exactly in the replacement string. If the global flag is True, replace all found matches, otherwise only replace the first. Any references in replacement string must be given with surrounding brackets like "${1}" or "${cygnus}". Defaults to ({}, {}).

  • warnings_are_errors (bool) – If True all warnings will be regarded as errors. Defaults to False.

  • hide_warnings (bool) – If True hide all warnings. Defaults to False.

Return type:

list[FlatTEXTOutput]

Raises:
  • OverflowError – if limit or skip is less than 0 or greater than 2**64-1

  • ConfigError – if other_width is less than 1 or greater than 20

  • ConfigError – if field 1 in dict value in field 1 or 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • OverflowError – if field 1 or 2 in analysis_correction, data_correction, supp_text_correction, or text_correction is less than -2**31 or greater than 2**31-1

  • OverflowError – if field 1 or 2 in any in other_corrections is less than -2**31 or greater than 2**31-1

  • ParseKeyError – if any in field 1 in demote_from_standard, ignore_standard_keys, or promote_to_standard contains non-ASCII characters or is empty

  • ConfigError – if any in field 2 in demote_from_standard, ignore_standard_keys, or promote_to_standard is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict key in append_standard_keywords, rename_standard_keys, or replace_standard_key_values contains non-ASCII characters or is empty

  • ParseKeyError – if dict key in field 1 in substitute_standard_key_values contains non-ASCII characters or is empty

  • ConfigError – if dict key in field 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict value in rename_standard_keys contains non-ASCII characters or is empty

  • ConfigError – if references in replacement string in dict value in field 1 or 2 in substitute_standard_key_values do not match captures in regular expression

  • FileLayoutError – If HEADER or TEXT are not parsable

  • ParseKeyError – If any keys from TEXT contain non-ASCII characters and allow_non_ascii_keywords is False
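
Example (a minimal sketch, assuming "example.fcs" exists and contains at least three datasets): read HEADER and TEXT for the second and third datasets only:

    from pathlib import Path
    import pyreflow.api as pf

    # Skip the first dataset, then parse at most two more; the TEXT of the
    # skipped dataset is still read so that $NEXTDATA can be followed.
    texts = pf.fcs_read_flat_texts(
        Path("example.fcs"),
        skip=1,
        limit=2,
    )
    print(len(texts))  # a list[FlatTEXTOutput], one entry per parsed dataset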

pyreflow.api.fcs_read_std_texts(path, skip=None, limit=None, text_correction=(0, 0), data_correction=(0, 0), analysis_correction=(0, 0), other_corrections=[], max_other=None, other_width=8, squish_offsets=False, allow_negative=False, truncate_offsets=False, version_override=None, supp_text_correction=(0, 0), allow_overlapping_supp_text=False, ignore_supp_text=False, use_literal_delims=False, allow_non_ascii_delim=False, allow_missing_final_delim=False, allow_nonunique=False, allow_odd=False, allow_empty=False, allow_delim_at_boundary=False, allow_non_utf8=False, use_latin1=False, allow_non_ascii_keywords=False, allow_missing_supp_text=False, allow_supp_text_own_delim=False, allow_missing_nextdata=False, trim_value_whitespace=False, trim_trailing_whitespace=False, ignore_standard_keys=([], []), promote_to_standard=([], []), demote_from_standard=([], []), rename_standard_keys={}, replace_standard_key_values={}, append_standard_keywords={}, substitute_standard_key_values=({}, {}), trim_intra_value_whitespace=False, time_meas_pattern='^(TIME|Time)$', allow_missing_time=False, force_time_linear=False, ignore_time_optical_keys=[], date_pattern=None, time_pattern=None, allow_pseudostandard=False, allow_unused_standard=False, disallow_deprecated=False, fix_log_scale_offsets=False, nonstandard_measurement_pattern=None, ignore_time_gain=False, parse_indexed_spillover=False, disallow_localtime=False, text_data_correction=(0, 0), text_analysis_correction=(0, 0), ignore_text_data_offsets=False, ignore_text_analysis_offsets=False, allow_header_text_offset_mismatch=False, allow_missing_required_offsets=False, truncate_text_offsets=False, allow_optional_dropping=False, transfer_dropped_optional=False, integer_widths_from_byteord=False, integer_byteord_override=None, disallow_range_truncation=False, warnings_are_errors=False, hide_warnings=False)

Read standardized TEXT from multiple datasets in FCS file.

Parameters:
  • path (Path) – Path to be read.

  • skip (int | None) – Number of datasets to skip. The HEADER and TEXT from skipped datasets will still be read to obtain $NEXTDATA for the next dataset in the file. Defaults to None.

  • limit (int | None) – Parse up to this many datasets. Defaults to None.

  • text_correction (tuple[int, int]) – Corrections for TEXT offsets in HEADER. Defaults to (0, 0).

  • data_correction (tuple[int, int]) – Corrections for DATA offsets in HEADER. Defaults to (0, 0).

  • analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in HEADER. Defaults to (0, 0).

  • other_corrections (list[tuple[int, int]]) – Corrections for OTHER offsets if they exist. Each correction will be applied in order. If an offset does not need to be corrected, use (0, 0). This will not affect the number of OTHER segments that are read; this is controlled by max_other. Defaults to [].

  • max_other (int | None) – Maximum number of OTHER segments that can be parsed. None means limitless. Defaults to None.

  • other_width (int) – Width (in bytes) to use when parsing OTHER offsets. Defaults to 8.

  • squish_offsets (bool) – If True and a segment’s ending offset is zero, treat entire offset as empty. This might happen if the ending offset is longer than 8 digits, in which case it must be written in TEXT. If this happens, the standards mandate that both offsets be written to TEXT and that the HEADER offsets be set to 0,0, so only writing one is an error unless this flag is set. This should only happen in FCS 3.0 files and above. Defaults to False.

  • allow_negative (bool) – If true, allow negative values in a HEADER offset. If negative offsets are found, they will be replaced with 0. Some files will denote an “empty” offset as 0,-1, which is logically correct since the last offset points to the last byte, thus 0,0 is actually 1 byte long. Unfortunately this is not what the standards say, so specifying 0,-1 is an error unless this flag is set. Defaults to False.

  • truncate_offsets (bool) – If true, truncate offsets that exceed the end of the file. In some cases the DATA offset (usually) might exceed the end of the file by 1, which is usually a mistake and should be corrected with data_correction (or analogous for the offending offset). If this is not the case, the file is likely corrupted. This flag will allow such files to be read conveniently if desired. Defaults to False.

  • version_override (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”] | None) – Override the FCS version as seen in HEADER. Defaults to None.

  • supp_text_correction (tuple[int, int]) – Corrections for Supplemental TEXT offsets in TEXT. Defaults to (0, 0).

  • allow_overlapping_supp_text (bool) – If True allow supplemental TEXT offsets to overlap the primary TEXT offsets from HEADER or HEADER itself and raise a warning if such an overlap is found. Otherwise raise a FileLayoutError. The offsets will not be used if an overlap is found in either case. Defaults to False.

  • ignore_supp_text (bool) – If True, ignore supplemental TEXT entirely. Defaults to False.

  • use_literal_delims (bool) – If True, treat every delimiter as literal (turn off escaping). Without escaping, delimiters cannot be included in keys or values, but empty values become possible. Use this option for files where unescaped delimiters result in the ‘correct’ interpretation of TEXT. Defaults to False.

  • allow_non_ascii_delim (bool) – If True allow non-ASCII delimiters (outside 1-126). Defaults to False.

  • allow_missing_final_delim (bool) – If True allow TEXT to not end with a delimiter. Defaults to False.

  • allow_nonunique (bool) – If True allow non-unique keys in TEXT. In such cases, only the first key will be used regardless of this setting. Defaults to False.

  • allow_odd (bool) – If True, allow TEXT to contain an odd number of words. The last ‘dangling’ word will be dropped regardless of this flag. Defaults to False.

  • allow_empty (bool) – If True allow keys with blank values. Only relevant if use_literal_delims is also True. Defaults to False.

  • allow_delim_at_boundary (bool) – If True allow delimiters at word boundaries. The FCS standard forbids this because it is impossible to tell if such delimiters belong to the previous or the next word. Consequently, delimiters at boundaries will be dropped regardless of this flag. Setting this to True will turn this into a warning not an error. Only relevant if use_literal_delims is False. Defaults to False.

  • allow_non_utf8 (bool) – If True allow non-UTF8 characters in TEXT. Words with such characters will be dropped regardless; setting this to True will turn these cases into warnings not errors. Defaults to False.

  • use_latin1 (bool) – If True interpret all characters in TEXT as Latin-1 (aka ISO/IEC 8859-1) instead of UTF-8. Defaults to False.

  • allow_non_ascii_keywords (bool) – If True allow non-ASCII keys. This only applies to non-standard keywords, as all standardized keywords must start with $ and may only contain ASCII letters and numbers. Regardless, all compliant keys must contain only ASCII. If False, an error will be emitted when such a key is encountered; if True, the key will be kept as a non-standard key. Defaults to False.

  • allow_missing_supp_text (bool) – If True allow supplemental TEXT offsets to be missing from primary TEXT. Defaults to False.

  • allow_supp_text_own_delim (bool) – If True allow supplemental TEXT offsets to have a different delimiter compared to primary TEXT. Defaults to False.

  • allow_missing_nextdata (bool) – If True allow $NEXTDATA to be missing. This is a required keyword in all versions. However, most files only have one dataset in which case this keyword is meaningless. Defaults to False.

  • trim_value_whitespace (bool) – If True trim whitespace from all values. If performed, trimming precedes all other repair steps. Any values which are entirely spaces will become blanks, in which case it may also be sensible to enable allow_empty. Defaults to False.

  • trim_trailing_whitespace (bool) – If True trim whitespace off the end of TEXT. This will effectively move the ending offset of TEXT to the first non-whitespace byte immediately preceding the actual ending offset given in HEADER. Defaults to False.

  • ignore_standard_keys (tuple[list[str], list[str]]) – Remove standard keys from TEXT. The leading $ is implied so do not include it. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • promote_to_standard (tuple[list[str], list[str]]) – Promote nonstandard keys to standard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • demote_from_standard (tuple[list[str], list[str]]) – Demote standard keys to nonstandard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • rename_standard_keys (dict[str, str]) – Rename standard keys in TEXT. Keys matching the first part of the pair will be replaced by the second. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • replace_standard_key_values (dict[str, str]) – Replace values for standard keys in TEXT. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • append_standard_keywords (dict[str, str]) – Append standard key/value pairs to TEXT. All keys and values will be included as they appear here. The leading $ is implied so do not include it. Defaults to {}.

  • substitute_standard_key_values (tuple[dict[str, tuple[str, str, bool]], dict[str, tuple[str, str, bool]]]) – Apply sed-like substitution operation on matching standard keys. The leading $ is implied when matching keys. The first dict corresponds to keys which are matched literally, and the second corresponds to keys which are matched via regular expression. The members in the 3-tuple values correspond to a regular expression, replacement string, and global flag respectively. The regular expression may contain capture expressions which must be matched exactly in the replacement string. If the global flag is True, replace all found matches, otherwise only replace the first. Any references in replacement string must be given with surrounding brackets like "${1}" or "${cygnus}". Defaults to ({}, {}).

  • trim_intra_value_whitespace (bool) – If True, trim whitespace between delimiters such as , and ; within keyword value strings. Defaults to False.

  • time_meas_pattern (str | None) – A pattern to match the $PnN of the time measurement. If None, do not try to find a time measurement. Defaults to "^(TIME|Time)$".

  • allow_missing_time (bool) – If True allow time measurement to be missing. Defaults to False.

  • force_time_linear (bool) – If True force time measurement to be linear independent of $PnE. Defaults to False.

  • ignore_time_optical_keys (list[Literal[“F”, “L”, “O”, “T”, “P”, “V”, “CALIBRATION”, “DET”, “TAG”, “FEATURE”, “ANALYTE”]]) – Ignore optical keys in temporal measurement. These keys are nonsensical for time measurements but are not explicitly forbidden in the standard. Provided keys are the string after the “Pn” in the “PnX” keywords. Defaults to [].

  • date_pattern (str | None) – If supplied, will be used as an alternative pattern when parsing $DATE. If not supplied, $DATE will be parsed according to the standard pattern which is %d-%b-%Y. Defaults to None.

  • time_pattern (str | None) – If supplied, will be used as an alternative pattern when parsing $BTIM and $ETIM. The values "%!" or "%@" may be used to match 1/60 seconds or centiseconds respectively. If not supplied, $BTIM and $ETIM will be parsed according to the standard pattern which is version-specific. Defaults to None.

  • allow_pseudostandard (bool) – If True allow non-standard keywords with a leading $. The presence of such keywords often means the version in HEADER is incorrect. Defaults to False.

  • allow_unused_standard (bool) – If True allow unused standard keywords to be present. Defaults to False.

  • disallow_deprecated (bool) – If True throw error if a deprecated key is encountered. Defaults to False.

  • fix_log_scale_offsets (bool) – If True fix log-scale $PnE keywords which have a zero offset (i.e. X,0.0 where X is non-zero). Defaults to False.

  • nonstandard_measurement_pattern (str | None) – Pattern to use when matching nonstandard measurement keys. Must be a regular expression pattern with %n which will represent the measurement index and should not start with $. Otherwise should be a normal regular expression as defined in regexp-syntax. Defaults to None.

  • ignore_time_gain (bool) – If True ignore the $PnG (gain) keyword for the time measurement. This keyword should not be set according to the standard; however, this library will allow gain to be 1.0 since this equates to identity. Any other gain value is nonsensical for the time measurement and can be ignored with this flag. Defaults to False.

  • parse_indexed_spillover (bool) – Parse $SPILLOVER with numeric indices rather than strings (i.e. names or $PnN). Defaults to False.

  • disallow_localtime (bool) – If true, require that $BEGINDATETIME and $ENDDATETIME have a timezone if provided. This is not required by the standard, but not having a timezone is ambiguous since the absolute value of the timestamp is dependent on localtime and therefore is location-dependent. Only affects FCS 3.2. Defaults to False.

  • text_data_correction (tuple[int, int]) – Corrections for DATA offsets in TEXT. Defaults to (0, 0).

  • text_analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in TEXT. Defaults to (0, 0).

  • ignore_text_data_offsets (bool) – If True ignore DATA offsets in TEXT. Defaults to False.

  • ignore_text_analysis_offsets (bool) – If True ignore ANALYSIS offsets in TEXT. Defaults to False.

  • allow_header_text_offset_mismatch (bool) – If True allow TEXT and HEADER offsets to mismatch. Defaults to False.

  • allow_missing_required_offsets (bool) – If True allow required DATA and ANALYSIS (3.1 or lower) offsets in TEXT to be missing. If missing, fall back to offsets from HEADER. Defaults to False.

  • truncate_text_offsets (bool) – If True truncate offsets that exceed end of file. Defaults to False.

  • allow_optional_dropping (bool) – If True drop optional keys that cause an error and emit warning instead. Defaults to False.

  • transfer_dropped_optional (bool) – If True transfer optional keys to non-standard dict if dropped. Defaults to False.

  • integer_widths_from_byteord (bool) – If True set all $PnB to the number of bytes from $BYTEORD. Only has an effect for FCS 2.0/3.0 where $DATATYPE is I. Defaults to False.

  • integer_byteord_override (list[int] | None) – Override $BYTEORD for integer layouts. Defaults to None.

  • disallow_range_truncation (bool) – If True throw error if $PnR values need to be truncated to match the number of bytes specified by $PnB and $DATATYPE. Defaults to False.

  • warnings_are_errors (bool) – If True all warnings will be regarded as errors. Defaults to False.

  • hide_warnings (bool) – If True hide all warnings. Defaults to False.

Return type:

list[tuple[CoreTEXT2_0 | CoreTEXT3_0 | CoreTEXT3_1 | CoreTEXT3_2, StdTEXTOutput]]

Raises:
  • OverflowError – if limit or skip is less than 0 or greater than 2**64-1

  • ConfigError – if nonstandard_measurement_pattern does not have "%n"

  • ConfigError – if time_pattern does not have specifiers for hours, minutes, seconds, and optionally sub-seconds (where "%!" and "%@" correspond to 1/60 seconds and centiseconds respectively) as outlined in chrono

  • ConfigError – if date_pattern does not have year, month, and day specifiers as outlined in chrono

  • ConfigError – if other_width is less than 1 or greater than 20

  • ConfigError – if time_meas_pattern is not a valid regular expression as described in regexp-syntax

  • InvalidKeywordValueError – if integer_byteord_override is not a list of integers including all from 1 to N where N is the length of the list (up to 8)

  • ConfigError – if field 1 in dict value in field 1 or 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • OverflowError – if field 1 or 2 in analysis_correction, data_correction, supp_text_correction, text_analysis_correction, text_correction, or text_data_correction is less than -2**31 or greater than 2**31-1

  • OverflowError – if field 1 or 2 in any in other_corrections is less than -2**31 or greater than 2**31-1

  • ParseKeyError – if any in field 1 in demote_from_standard, ignore_standard_keys, or promote_to_standard contains non-ASCII characters or is empty

  • ConfigError – if any in field 2 in demote_from_standard, ignore_standard_keys, or promote_to_standard is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict key in append_standard_keywords, rename_standard_keys, or replace_standard_key_values contains non-ASCII characters or is empty

  • ParseKeyError – if dict key in field 1 in substitute_standard_key_values contains non-ASCII characters or is empty

  • ConfigError – if dict key in field 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict value in rename_standard_keys contains non-ASCII characters or is empty

  • ConfigError – if references in replacement string in dict value in field 1 or 2 in substitute_standard_key_values do not match captures in regular expression

  • FileLayoutError – If HEADER or TEXT are unparsable

  • ParseKeyError – If any keys from TEXT contain non-ASCII characters and allow_non_ascii_keywords is False

  • ExtraKeywordError – If any standard keys are unused and allow_pseudostandard or allow_unused_standard are False

  • FCSDeprecatedError – If any keywords or their values are deprecated and disallow_deprecated is True

  • ParseKeywordValueError – If any keyword values could not be read from their string encoding

  • RelationalError – If keywords that are referenced by other keywords are missing
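
Example (a minimal sketch, assuming "example.fcs" exists): standardize TEXT for every dataset, tolerating files whose time channel is named unusually or is absent. The extra "HDR-T" channel name added to the default time pattern is purely illustrative:

    from pathlib import Path
    import pyreflow.api as pf

    results = pf.fcs_read_std_texts(
        Path("example.fcs"),
        time_meas_pattern="^(TIME|Time|HDR-T)$",  # widen the time-channel match
        allow_missing_time=True,                  # do not fail if no match is found
    )
    for core, std_output in results:
        # core is a version-specific CoreTEXT* object (e.g. CoreTEXT3_1)
        print(type(core).__name__)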

pyreflow.api.fcs_read_flat_datasets(path, skip=None, limit=None, text_correction=(0, 0), data_correction=(0, 0), analysis_correction=(0, 0), other_corrections=[], max_other=None, other_width=8, squish_offsets=False, allow_negative=False, truncate_offsets=False, version_override=None, supp_text_correction=(0, 0), allow_overlapping_supp_text=False, ignore_supp_text=False, use_literal_delims=False, allow_non_ascii_delim=False, allow_missing_final_delim=False, allow_nonunique=False, allow_odd=False, allow_empty=False, allow_delim_at_boundary=False, allow_non_utf8=False, use_latin1=False, allow_non_ascii_keywords=False, allow_missing_supp_text=False, allow_supp_text_own_delim=False, allow_missing_nextdata=False, trim_value_whitespace=False, trim_trailing_whitespace=False, ignore_standard_keys=([], []), promote_to_standard=([], []), demote_from_standard=([], []), rename_standard_keys={}, replace_standard_key_values={}, append_standard_keywords={}, substitute_standard_key_values=({}, {}), text_data_correction=(0, 0), text_analysis_correction=(0, 0), ignore_text_data_offsets=False, ignore_text_analysis_offsets=False, allow_header_text_offset_mismatch=False, allow_missing_required_offsets=False, truncate_text_offsets=False, allow_optional_dropping=False, transfer_dropped_optional=False, integer_widths_from_byteord=False, integer_byteord_override=None, disallow_range_truncation=False, allow_uneven_event_width=False, allow_tot_mismatch=False, warnings_are_errors=False, hide_warnings=False)

Read multiple datasets from FCS file in flat mode.

Parameters:
  • path (Path) – Path to be read.

  • skip (int | None) – Number of datasets to skip. The HEADER and TEXT from skipped datasets will still be read to obtain $NEXTDATA for the next dataset in the file. Defaults to None.

  • limit (int | None) – Parse up to this many datasets. Defaults to None.

  • text_correction (tuple[int, int]) – Corrections for TEXT offsets in HEADER. Defaults to (0, 0).

  • data_correction (tuple[int, int]) – Corrections for DATA offsets in HEADER. Defaults to (0, 0).

  • analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in HEADER. Defaults to (0, 0).

  • other_corrections (list[tuple[int, int]]) – Corrections for OTHER offsets if they exist. Each correction will be applied in order. If an offset does not need to be corrected, use (0, 0). This will not affect the number of OTHER segments that are read; this is controlled by max_other. Defaults to [].

  • max_other (int | None) – Maximum number of OTHER segments that can be parsed. None means limitless. Defaults to None.

  • other_width (int) – Width (in bytes) to use when parsing OTHER offsets. Defaults to 8.

  • squish_offsets (bool) – If True and a segment’s ending offset is zero, treat entire offset as empty. This might happen if the ending offset is longer than 8 digits, in which case it must be written in TEXT. If this happens, the standards mandate that both offsets be written to TEXT and that the HEADER offsets be set to 0,0, so only writing one is an error unless this flag is set. This should only happen in FCS 3.0 files and above. Defaults to False.

  • allow_negative (bool) – If true, allow negative values in a HEADER offset. If negative offsets are found, they will be replaced with 0. Some files will denote an “empty” offset as 0,-1, which is logically correct since the last offset points to the last byte, thus 0,0 is actually 1 byte long. Unfortunately this is not what the standards say, so specifying 0,-1 is an error unless this flag is set. Defaults to False.

  • truncate_offsets (bool) – If true, truncate offsets that exceed the end of the file. In some cases the DATA offset (usually) might exceed the end of the file by 1, which is usually a mistake and should be corrected with data_correction (or analogous for the offending offset). If this is not the case, the file is likely corrupted. This flag will allow such files to be read conveniently if desired. Defaults to False.

  • version_override (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”] | None) – Override the FCS version as seen in HEADER. Defaults to None.

  • supp_text_correction (tuple[int, int]) – Corrections for Supplemental TEXT offsets in TEXT. Defaults to (0, 0).

  • allow_overlapping_supp_text (bool) – If True allow supplemental TEXT offsets to overlap the primary TEXT offsets from HEADER or HEADER itself and raise a warning if such an overlap is found. Otherwise raise a FileLayoutError. The offsets will not be used if an overlap is found in either case. Defaults to False.

  • ignore_supp_text (bool) – If True, ignore supplemental TEXT entirely. Defaults to False.

  • use_literal_delims (bool) – If True, treat every delimiter as literal (turn off escaping). Without escaping, delimiters cannot be included in keys or values, but empty values become possible. Use this option for files where unescaped delimiters result in the ‘correct’ interpretation of TEXT. Defaults to False.

  • allow_non_ascii_delim (bool) – If True allow non-ASCII delimiters (outside 1-126). Defaults to False.

  • allow_missing_final_delim (bool) – If True allow TEXT to not end with a delimiter. Defaults to False.

  • allow_nonunique (bool) – If True allow non-unique keys in TEXT. In such cases, only the first key will be used regardless of this setting. Defaults to False.

  • allow_odd (bool) – If True, allow TEXT to contain an odd number of words. The last ‘dangling’ word will be dropped regardless of this flag. Defaults to False.

  • allow_empty (bool) – If True allow keys with blank values. Only relevant if use_literal_delims is also True. Defaults to False.

  • allow_delim_at_boundary (bool) – If True allow delimiters at word boundaries. The FCS standard forbids this because it is impossible to tell if such delimiters belong to the previous or the next word. Consequently, delimiters at boundaries will be dropped regardless of this flag. Setting this to True will turn this into a warning not an error. Only relevant if use_literal_delims is False. Defaults to False.

  • allow_non_utf8 (bool) – If True allow non-UTF8 characters in TEXT. Words with such characters will be dropped regardless; setting this to True will turn these cases into warnings not errors. Defaults to False.

  • use_latin1 (bool) – If True interpret all characters in TEXT as Latin-1 (aka ISO/IEC 8859-1) instead of UTF-8. Defaults to False.

  • allow_non_ascii_keywords (bool) – If True allow non-ASCII keys. This only applies to non-standard keywords, as all standardized keywords must start with $ and may only contain ASCII letters and numbers. Regardless, all compliant keys must contain only ASCII. If False, an error will be emitted when such a key is encountered; if True, the key will be kept as a non-standard key. Defaults to False.

  • allow_missing_supp_text (bool) – If True allow supplemental TEXT offsets to be missing from primary TEXT. Defaults to False.

  • allow_supp_text_own_delim (bool) – If True allow supplemental TEXT offsets to have a different delimiter compared to primary TEXT. Defaults to False.

  • allow_missing_nextdata (bool) – If True allow $NEXTDATA to be missing. This is a required keyword in all versions. However, most files only have one dataset in which case this keyword is meaningless. Defaults to False.

  • trim_value_whitespace (bool) – If True trim whitespace from all values. If performed, trimming precedes all other repair steps. Any values which are entirely spaces will become blanks, in which case it may also be sensible to enable allow_empty. Defaults to False.

  • trim_trailing_whitespace (bool) – If True trim whitespace off the end of TEXT. This will effectively move the ending offset of TEXT to the first non-whitespace byte immediately preceding the actual ending offset given in HEADER. Defaults to False.

  • ignore_standard_keys (tuple[list[str], list[str]]) – Remove standard keys from TEXT. The leading $ is implied so do not include it. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • promote_to_standard (tuple[list[str], list[str]]) – Promote nonstandard keys to standard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • demote_from_standard (tuple[list[str], list[str]]) – Demote standard keys to nonstandard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • rename_standard_keys (dict[str, str]) – Rename standard keys in TEXT. Keys matching the first part of the pair will be replaced by the second. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • replace_standard_key_values (dict[str, str]) – Replace values for standard keys in TEXT. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • append_standard_keywords (dict[str, str]) – Append standard key/value pairs to TEXT. All keys and values will be included as they appear here. The leading $ is implied so do not include it. Defaults to {}.

  • substitute_standard_key_values (tuple[dict[str, tuple[str, str, bool]], dict[str, tuple[str, str, bool]]]) – Apply sed-like substitution operation on matching standard keys. The leading $ is implied when matching keys. The first dict corresponds to keys which are matched literally, and the second corresponds to keys which are matched via regular expression. The members in the 3-tuple values correspond to a regular expression, replacement string, and global flag respectively. The regular expression may contain capture expressions which must be matched exactly in the replacement string. If the global flag is True, replace all found matches, otherwise only replace the first. Any references in replacement string must be given with surrounding brackets like "${1}" or "${cygnus}". Defaults to ({}, {}).

  • text_data_correction (tuple[int, int]) – Corrections for DATA offsets in TEXT. Defaults to (0, 0).

  • text_analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in TEXT. Defaults to (0, 0).

  • ignore_text_data_offsets (bool) – If True ignore DATA offsets in TEXT. Defaults to False.

  • ignore_text_analysis_offsets (bool) – If True ignore ANALYSIS offsets in TEXT. Defaults to False.

  • allow_header_text_offset_mismatch (bool) – If True allow TEXT and HEADER offsets to mismatch. Defaults to False.

  • allow_missing_required_offsets (bool) – If True allow required DATA and ANALYSIS (3.1 or lower) offsets in TEXT to be missing. If missing, fall back to offsets from HEADER. Defaults to False.

  • truncate_text_offsets (bool) – If True truncate offsets that exceed end of file. Defaults to False.

  • allow_optional_dropping (bool) – If True drop optional keys that cause an error and emit warning instead. Defaults to False.

  • transfer_dropped_optional (bool) – If True transfer optional keys to non-standard dict if dropped. Defaults to False.

  • integer_widths_from_byteord (bool) – If True set all $PnB to the number of bytes from $BYTEORD. Only has an effect for FCS 2.0/3.0 where $DATATYPE is I. Defaults to False.

  • integer_byteord_override (list[int] | None) – Override $BYTEORD for integer layouts. Defaults to None.

  • disallow_range_truncation (bool) – If True throw error if $PnR values need to be truncated to match the number of bytes specified by $PnB and $DATATYPE. Defaults to False.

  • allow_uneven_event_width (bool) – If True allow event width to not perfectly divide length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • allow_tot_mismatch (bool) – If True allow $TOT to not match number of events as computed by the event width and length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • warnings_are_errors (bool) – If True all warnings will be regarded as errors. Defaults to False.

  • hide_warnings (bool) – If True hide all warnings. Defaults to False.

Return type:

list[FlatDatasetOutput]

Raises:
  • OverflowError – if limit or skip is less than 0 or greater than 2**64-1

  • ConfigError – if other_width is less than 1 or greater than 20

  • InvalidKeywordValueError – if integer_byteord_override is not a list of integers including all from 1 to N where N is the length of the list (up to 8)

  • ConfigError – if field 1 in dict value in field 1 or 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • OverflowError – if field 1 or 2 in analysis_correction, data_correction, supp_text_correction, text_analysis_correction, text_correction, or text_data_correction is less than -2**31 or greater than 2**31-1

  • OverflowError – if field 1 or 2 in any in other_corrections is less than -2**31 or greater than 2**31-1

  • ParseKeyError – if any in field 1 in demote_from_standard, ignore_standard_keys, or promote_to_standard contains non-ASCII characters or is empty

  • ConfigError – if any in field 2 in demote_from_standard, ignore_standard_keys, or promote_to_standard is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict key in append_standard_keywords, rename_standard_keys, or replace_standard_key_values contains non-ASCII characters or is empty

  • ParseKeyError – if dict key in field 1 in substitute_standard_key_values contains non-ASCII characters or is empty

  • ConfigError – if dict key in field 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict value in rename_standard_keys contains non-ASCII characters or is empty

  • ConfigError – if references in replacement string in dict value in field 1 or 2 in substitute_standard_key_values do not match captures in regular expression

  • FileLayoutError – If HEADER, TEXT, or DATA are unparsable

  • ParseKeyError – If any keys from TEXT contain non-ASCII characters and allow_non_ascii_keywords is False

  • FCSDeprecatedError – If an ASCII layout is used and FCS version is 3.1 or 3.2

  • ParseKeywordValueError – If any keyword values could not be read from their string encoding

  • RelationalError – If keywords are incompatible with indicated layout of DATA

  • EventDataError – If values in DATA cannot be read
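
Example (a minimal sketch, assuming "example.fcs" exists): read every dataset, including DATA, ANALYSIS, and OTHER segments, in flat mode while tolerating a $TOT that disagrees with the event count computed from the event width and the length of DATA:

    from pathlib import Path
    import pyreflow.api as pf

    datasets = pf.fcs_read_flat_datasets(
        Path("example.fcs"),
        allow_tot_mismatch=True,  # downgrade the $TOT mismatch error to a warning
    )
    print(len(datasets))  # a list[FlatDatasetOutput], one entry per dataset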

pyreflow.api.fcs_read_std_datasets(path, skip=None, limit=None, text_correction=(0, 0), data_correction=(0, 0), analysis_correction=(0, 0), other_corrections=[], max_other=None, other_width=8, squish_offsets=False, allow_negative=False, truncate_offsets=False, version_override=None, supp_text_correction=(0, 0), allow_overlapping_supp_text=False, ignore_supp_text=False, use_literal_delims=False, allow_non_ascii_delim=False, allow_missing_final_delim=False, allow_nonunique=False, allow_odd=False, allow_empty=False, allow_delim_at_boundary=False, allow_non_utf8=False, use_latin1=False, allow_non_ascii_keywords=False, allow_missing_supp_text=False, allow_supp_text_own_delim=False, allow_missing_nextdata=False, trim_value_whitespace=False, trim_trailing_whitespace=False, ignore_standard_keys=([], []), promote_to_standard=([], []), demote_from_standard=([], []), rename_standard_keys={}, replace_standard_key_values={}, append_standard_keywords={}, substitute_standard_key_values=({}, {}), trim_intra_value_whitespace=False, time_meas_pattern='^(TIME|Time)$', allow_missing_time=False, force_time_linear=False, ignore_time_optical_keys=[], date_pattern=None, time_pattern=None, allow_pseudostandard=False, allow_unused_standard=False, disallow_deprecated=False, fix_log_scale_offsets=False, nonstandard_measurement_pattern=None, ignore_time_gain=False, parse_indexed_spillover=False, disallow_localtime=False, text_data_correction=(0, 0), text_analysis_correction=(0, 0), ignore_text_data_offsets=False, ignore_text_analysis_offsets=False, allow_header_text_offset_mismatch=False, allow_missing_required_offsets=False, truncate_text_offsets=False, allow_optional_dropping=False, transfer_dropped_optional=False, integer_widths_from_byteord=False, integer_byteord_override=None, disallow_range_truncation=False, allow_uneven_event_width=False, allow_tot_mismatch=False, warnings_are_errors=False, hide_warnings=False)

Read multiple standardized datasets from FCS file.

Parameters:
  • path (Path) – Path to be read.

  • skip (int | None) – Number of datasets to skip. The HEADER and TEXT from skipped datasets will still be read to obtain $NEXTDATA for the next dataset in the file. Defaults to None.

  • limit (int | None) – Parse up to this many datasets. Defaults to None.

  • text_correction (tuple[int, int]) – Corrections for TEXT offsets in HEADER. Defaults to (0, 0).

  • data_correction (tuple[int, int]) – Corrections for DATA offsets in HEADER. Defaults to (0, 0).

  • analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in HEADER. Defaults to (0, 0).

  • other_corrections (list[tuple[int, int]]) – Corrections for OTHER offsets if they exist. Each correction will be applied in order. If an offset does not need to be corrected, use (0, 0). This will not affect the number of OTHER segments that are read; this is controlled by max_other. Defaults to [].

  • max_other (int | None) – Maximum number of OTHER segments that can be parsed. None means limitless. Defaults to None.

  • other_width (int) – Width (in bytes) to use when parsing OTHER offsets. Defaults to 8.

  • squish_offsets (bool) – If True and a segment’s ending offset is zero, treat entire offset as empty. This might happen if the ending offset is longer than 8 digits, in which case it must be written in TEXT. If this happens, the standards mandate that both offsets be written to TEXT and that the HEADER offsets be set to 0,0, so only writing one is an error unless this flag is set. This should only happen in FCS 3.0 files and above. Defaults to False.

  • allow_negative (bool) – If true, allow negative values in a HEADER offset. If negative offsets are found, they will be replaced with 0. Some files will denote an “empty” offset as 0,-1, which is logically correct since the last offset points to the last byte, thus 0,0 is actually 1 byte long. Unfortunately this is not what the standards say, so specifying 0,-1 is an error unless this flag is set. Defaults to False.

  • truncate_offsets (bool) – If true, truncate offsets that exceed the end of the file. In some cases the DATA offset (usually) might exceed the end of the file by 1, which is usually a mistake and should be corrected with data_correction (or analogous for the offending offset). If this is not the case, the file is likely corrupted. This flag will allow such files to be read conveniently if desired. Defaults to False.

  • version_override (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”] | None) – Override the FCS version as seen in HEADER. Defaults to None.

  • supp_text_correction (tuple[int, int]) – Corrections for Supplemental TEXT offsets in TEXT. Defaults to (0, 0).

  • allow_overlapping_supp_text (bool) – If True, allow supplemental TEXT offsets to overlap the primary TEXT offsets from HEADER (or HEADER itself) and raise a warning if such an overlap is found. Otherwise raise a FileLayoutError. The offsets will not be used if an overlap is found in either case. Defaults to False.

  • ignore_supp_text (bool) – If True, ignore supplemental TEXT entirely. Defaults to False.

  • use_literal_delims (bool) – If True, treat every delimiter as literal (turn off escaping). Without escaping, delimiters cannot be included in keys or values, but empty values become possible. Use this option for files where unescaped delimiters result in the ‘correct’ interpretation of TEXT. Defaults to False.

  • allow_non_ascii_delim (bool) – If True allow non-ASCII delimiters (outside 1-126). Defaults to False.

  • allow_missing_final_delim (bool) – If True allow TEXT to not end with a delimiter. Defaults to False.

  • allow_nonunique (bool) – If True allow non-unique keys in TEXT. In such cases, only the first key will be used regardless of this setting. Defaults to False.

  • allow_odd (bool) – If True, allow TEXT to contain an odd number of words. The last ‘dangling’ word will be dropped independent of this flag. Defaults to False.

  • allow_empty (bool) – If True allow keys with blank values. Only relevant if use_literal_delims is also True. Defaults to False.

  • allow_delim_at_boundary (bool) – If True allow delimiters at word boundaries. The FCS standard forbids this because it is impossible to tell if such delimiters belong to the previous or the next word. Consequently, delimiters at boundaries will be dropped regardless of this flag. Setting this to True will turn this into a warning not an error. Only relevant if use_literal_delims is False. Defaults to False.

  • allow_non_utf8 (bool) – If True allow non-UTF8 characters in TEXT. Words with such characters will be dropped regardless; setting this to True will turn these cases into warnings not errors. Defaults to False.

  • use_latin1 (bool) – If True interpret all characters in TEXT as Latin-1 (aka ISO/IEC 8859-1) instead of UTF-8. Defaults to False.

  • allow_non_ascii_keywords (bool) – If True allow non-ASCII keys. This only applies to non-standard keywords, as all standardized keywords may only contain letters and numbers and must start with $. Compliant keys must contain only ASCII; if this flag is False, encountering such a key is an error, and if True the key will be kept as a non-standard key. Defaults to False.

  • allow_missing_supp_text (bool) – If True allow supplemental TEXT offsets to be missing from primary TEXT. Defaults to False.

  • allow_supp_text_own_delim (bool) – If True allow supplemental TEXT offsets to have a different delimiter compared to primary TEXT. Defaults to False.

  • allow_missing_nextdata (bool) – If True allow $NEXTDATA to be missing. This is a required keyword in all versions. However, most files only have one dataset in which case this keyword is meaningless. Defaults to False.

  • trim_value_whitespace (bool) – If True trim whitespace from all values. If performed, trimming precedes all other repair steps. Any values which are entirely spaces will become blanks, in which case it may also be sensible to enable allow_empty. Defaults to False.

  • trim_trailing_whitespace (bool) – If True trim whitespace off the end of TEXT. This will effectively move the ending offset of TEXT to the first non-whitespace byte immediately preceding the actual ending offset given in HEADER. Defaults to False.

  • ignore_standard_keys (tuple[list[str], list[str]]) – Remove standard keys from TEXT. The leading $ is implied so do not include it. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • promote_to_standard (tuple[list[str], list[str]]) – Promote nonstandard keys to standard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • demote_from_standard (tuple[list[str], list[str]]) – Demote standard keys to nonstandard keys in TEXT. The first member of the tuple is a list of strings which match literally. The second member is a list of regular expressions corresponding to regexp-syntax. Defaults to ([], []).

  • rename_standard_keys (dict[str, str]) – Rename standard keys in TEXT. Keys matching the first part of the pair will be replaced by the second. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • replace_standard_key_values (dict[str, str]) – Replace values for standard keys in TEXT. Comparisons are case insensitive. The leading $ is implied so do not include it. Defaults to {}.

  • append_standard_keywords (dict[str, str]) – Append standard key/value pairs to TEXT. All keys and values will be included as they appear here. The leading $ is implied so do not include it. Defaults to {}.

  • substitute_standard_key_values (tuple[dict[str, tuple[str, str, bool]], dict[str, tuple[str, str, bool]]]) – Apply a sed-like substitution operation on matching standard keys. The leading $ is implied when matching keys. The first dict corresponds to keys which are matched literally, and the second corresponds to keys which are matched via regular expression. The members of the 3-tuple values correspond to a regular expression, replacement string, and global flag respectively. The regular expression may contain capture expressions which must be matched exactly in the replacement string. If the global flag is True, replace all found matches, otherwise only replace the first. Any references in the replacement string must be given with surrounding brackets like "${1}" or "${cygnus}" (see the example following this function’s Raises list). Defaults to ({}, {}).

  • trim_intra_value_whitespace (bool) – If True, trim whitespace between delimiters such as , and ; within keyword value strings. Defaults to False.

  • time_meas_pattern (str | None) – A pattern to match the $PnN of the time measurement. If None, do not try to find a time measurement. Defaults to "^(TIME|Time)$".

  • allow_missing_time (bool) – If True allow time measurement to be missing. Defaults to False.

  • force_time_linear (bool) – If True force time measurement to be linear independent of $PnE. Defaults to False.

  • ignore_time_optical_keys (list[Literal[“F”, “L”, “O”, “T”, “P”, “V”, “CALIBRATION”, “DET”, “TAG”, “FEATURE”, “ANALYTE”]]) – Ignore optical keys in the temporal measurement. These keys are nonsensical for time measurements but are not explicitly forbidden in the standard. Provided keys are the string after the “Pn” in the “PnX” keywords. Defaults to [].

  • date_pattern (str | None) – If supplied, will be used as an alternative pattern when parsing $DATE. If not supplied, $DATE will be parsed according to the standard pattern which is %d-%b-%Y. Defaults to None.

  • time_pattern (str | None) – If supplied, will be used as an alternative pattern when parsing $BTIM and $ETIM. The values "%!" or "%@" may be used to match 1/60 seconds or centiseconds respectively. If not supplied, $BTIM and $ETIM will be parsed according to the standard pattern which is version-specific. Defaults to None.

  • allow_pseudostandard (bool) – If True allow non-standard keywords with a leading $. The presence of such keywords often means the version in HEADER is incorrect. Defaults to False.

  • allow_unused_standard (bool) – If True allow unused standard keywords to be present. Defaults to False.

  • disallow_deprecated (bool) – If True throw error if a deprecated key is encountered. Defaults to False.

  • fix_log_scale_offsets (bool) – If True fix log-scale $PnE keywords which have a zero offset (ie X,0.0 where X is non-zero). Defaults to False.

  • nonstandard_measurement_pattern (str | None) – Pattern to use when matching nonstandard measurement keys. Must be a regular expression pattern with %n which will represent the measurement index and should not start with $. Otherwise should be a normal regular expression as defined in regexp-syntax. Defaults to None.

  • ignore_time_gain (bool) – If True ignore the $PnG (gain) keyword. This keyword should not be set according to the standard; however, this library will allow gain to be 1.0 since this equates to identity. If gain is not 1.0, this is nonsense and it can be ignored with this flag. Defaults to False.

  • parse_indexed_spillover (bool) – Parse $SPILLOVER with numeric indices rather than strings (ie names or $PnN). Defaults to False.

  • disallow_localtime (bool) – If True, require that $BEGINDATETIME and $ENDDATETIME have a timezone if provided. This is not required by the standard, but not having a timezone is ambiguous since the absolute value of the timestamp is dependent on localtime and therefore is location-dependent. Only affects FCS 3.2. Defaults to False.

  • text_data_correction (tuple[int, int]) – Corrections for DATA offsets in TEXT. Defaults to (0, 0).

  • text_analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in TEXT. Defaults to (0, 0).

  • ignore_text_data_offsets (bool) – If True ignore DATA offsets in TEXT. Defaults to False.

  • ignore_text_analysis_offsets (bool) – If True ignore ANALYSIS offsets in TEXT. Defaults to False.

  • allow_header_text_offset_mismatch (bool) – If True allow TEXT and HEADER offsets to mismatch. Defaults to False.

  • allow_missing_required_offsets (bool) – If True allow required DATA and ANALYSIS (3.1 or lower) offsets in TEXT to be missing. If missing, fall back to offsets from HEADER. Defaults to False.

  • truncate_text_offsets (bool) – If True truncate offsets that exceed end of file. Defaults to False.

  • allow_optional_dropping (bool) – If True drop optional keys that cause an error and emit warning instead. Defaults to False.

  • transfer_dropped_optional (bool) – If True transfer optional keys to non-standard dict if dropped. Defaults to False.

  • integer_widths_from_byteord (bool) – If True set all $PnB to the number of bytes from $BYTEORD. Only has an effect for FCS 2.0/3.0 where $DATATYPE is I. Defaults to False.

  • integer_byteord_override (list[int] | None) – Override $BYTEORD for integer layouts. Defaults to None.

  • disallow_range_truncation (bool) – If True throw error if $PnR values need to be truncated to match the number of bytes specified by $PnB and $DATATYPE. Defaults to False.

  • allow_uneven_event_width (bool) – If True allow event width to not perfectly divide length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • allow_tot_mismatch (bool) – If True allow $TOT to not match number of events as computed by the event width and length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • warnings_are_errors (bool) – If True all warnings will be regarded as errors. Defaults to False.

  • hide_warnings (bool) – If True hide all warnings. Defaults to False.

Return type:

list[tuple[CoreDataset2_0 | CoreDataset3_0 | CoreDataset3_1 | CoreDataset3_2, StdDatasetOutput]]

Raises:
  • OverflowError – if limit or skip is less than 0 or greater than 2**64-1

  • ConfigError – if nonstandard_measurement_pattern does not have "%n"

  • ConfigError – if time_pattern does not have specifiers for hours, minutes, seconds, and optionally sub-seconds (where "%!" and "%@" correspond to 1/60 seconds and centiseconds respectively) as outlined in chrono

  • ConfigError – if date_pattern does not have year, month, and day specifiers as outlined in chrono

  • ConfigError – if other_width is less than 1 or greater than 20

  • ConfigError – if time_meas_pattern is not a valid regular expression as described in regexp-syntax

  • InvalidKeywordValueError – if integer_byteord_override is not a list of integers including all from 1 to N where N is the length of the list (up to 8)

  • ConfigError – if field 1 in dict value in field 1 or 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • OverflowError – if field 1 or 2 in analysis_correction, data_correction, supp_text_correction, text_analysis_correction, text_correction, or text_data_correction is less than -2**31 or greater than 2**31-1

  • OverflowError – if field 1 or 2 in any in other_corrections is less than -2**31 or greater than 2**31-1

  • ParseKeyError – if any in field 1 in demote_from_standard, ignore_standard_keys, or promote_to_standard contains non-ASCII characters or is empty

  • ConfigError – if any in field 2 in demote_from_standard, ignore_standard_keys, or promote_to_standard is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict key in append_standard_keywords, rename_standard_keys, or replace_standard_key_values contains non-ASCII characters or is empty

  • ParseKeyError – if dict key in field 1 in substitute_standard_key_values contains non-ASCII characters or is empty

  • ConfigError – if dict key in field 2 in substitute_standard_key_values is not a valid regular expression as described in regexp-syntax

  • ParseKeyError – if dict value in rename_standard_keys contains non-ASCII characters or is empty

  • ConfigError – if references in replacement string in dict value in field 1 or 2 in substitute_standard_key_values do not match captures in regular expression

  • FileLayoutError – If HEADER, TEXT, or DATA are unparsable

  • ParseKeyError – If any keys from TEXT contain non-ASCII characters and allow_non_ascii_keywords is False

  • FCSDeprecatedError – If any keywords or their values are deprecated and disallow_deprecated is True

  • ParseKeywordValueError – If any keyword values could not be read from their string encoding

  • RelationalError – If keywords are incompatible with indicated layout of DATA or if keywords that are referenced by other keywords do not exist

  • EventDataError – If values in DATA cannot be read

  • ExtraKeywordError – If any standard keys are unused and allow_pseudostandard or allow_unused_standard are False
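
For illustration, a minimal sketch of calling this function; the path, the $CYT substitution, and the other keyword arguments are hypothetical choices, not required usage.

    from pathlib import Path
    from pyreflow.api import fcs_read_std_datasets

    # Skip the first two datasets and rewrite $CYT values of the (hypothetical)
    # form "AuroraXYZ" to "Cytek AuroraXYZ". Literal key matches go in the first
    # dict; each 3-tuple is (regular expression, replacement, global flag), and
    # replacement references use the "${1}" bracket syntax.
    results = fcs_read_std_datasets(
        Path("example.fcs"),  # hypothetical path
        skip=2,
        substitute_standard_key_values=(
            {"CYT": ("^Aurora(.*)$", "Cytek Aurora${1}", False)},
            {},
        ),
        allow_missing_time=True,
    )

    # Each element pairs a version-specific CoreDataset* with a StdDatasetOutput.
    for core, _output in results:
        print(type(core).__name__)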

pyreflow.api.fcs_read_header(path, text_correction=(0, 0), data_correction=(0, 0), analysis_correction=(0, 0), other_corrections=[], max_other=None, other_width=8, squish_offsets=False, allow_negative=False, truncate_offsets=False, dataset_offset=0)

Read the HEADER of an FCS file.

Parameters:
  • path (Path) – Path to be read.

  • text_correction (tuple[int, int]) – Corrections for TEXT offsets in HEADER. Defaults to (0, 0).

  • data_correction (tuple[int, int]) – Corrections for DATA offsets in HEADER. Defaults to (0, 0).

  • analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in HEADER. Defaults to (0, 0).

  • other_corrections (list[tuple[int, int]]) – Corrections for OTHER offsets if they exist. Each correction will be applied in order. If an offset does not need to be corrected, use (0, 0). This will not affect the number of OTHER segments that are read; this is controlled by max_other. Defaults to [].

  • max_other (int | None) – Maximum number of OTHER segments that can be parsed. None means limitless. Defaults to None.

  • other_width (int) – Width (in bytes) to use when parsing OTHER offsets. Defaults to 8.

  • squish_offsets (bool) – If True and a segment’s ending offset is zero, treat the entire offset as empty. This might happen if the ending offset is longer than 8 digits, in which case it must be written in TEXT. If this happens, the standards mandate that both offsets be written to TEXT and that the HEADER offsets be set to 0,0, so only writing one is an error unless this flag is set. This should only happen in FCS 3.0 files and above. Defaults to False.

  • allow_negative (bool) – If True, allow negative values in a HEADER offset. If negative offsets are found, they will be replaced with 0. Some files will denote an “empty” offset as 0,-1, which is logically correct since the last offset points to the last byte, thus 0,0 is actually 1 byte long. Unfortunately this is not what the standards say, so specifying 0,-1 is an error unless this flag is set. Defaults to False.

  • truncate_offsets (bool) – If True, truncate offsets that exceed the end of the file. In some cases the DATA offset (usually) might exceed the end of the file by 1, which is usually a mistake and should be corrected with data_correction (or analogous for the offending offset). If this is not the case, the file is likely corrupted. This flag will allow such files to be read conveniently if desired. Defaults to False.

  • dataset_offset (int) – Starting position in the file of the dataset to be read. Defaults to 0.

Return type:

Header

Raises:
  • ConfigError – if other_width is less than 1 or greater than 20

  • OverflowError – if field 1 or 2 in analysis_correction, data_correction, or text_correction is less than -2**31 or greater than 2**31-1

  • OverflowError – if field 1 or 2 in any in other_corrections is less than -2**31 or greater than 2**31-1

  • FileLayoutError – if HEADER segment is unparsable
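
For illustration, a minimal sketch of reading a HEADER; the paths are hypothetical, and the Header and HeaderSegments classes referenced here are documented under Outputs below.

    from pathlib import Path
    from pyreflow.api import fcs_read_header

    header = fcs_read_header(Path("example.fcs"))  # hypothetical path

    print(header.version)  # e.g. "FCS3.1"
    segs = header.segments
    print(segs.text_seg, segs.data_seg, segs.analysis_seg, segs.other_segs)

    # Files that write "empty" offsets as 0,-1 can still be read by enabling
    # allow_negative, which replaces negative offsets with 0.
    legacy = fcs_read_header(Path("legacy.fcs"), allow_negative=True)  # hypothetical path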

pyreflow.api.fcs_read_flat_dataset_with_keywords(path, version, std, data_seg, analysis_seg=(0, 0), other_segs=[], text_data_correction=(0, 0), text_analysis_correction=(0, 0), ignore_text_data_offsets=False, ignore_text_analysis_offsets=False, allow_header_text_offset_mismatch=False, allow_missing_required_offsets=False, truncate_text_offsets=False, allow_optional_dropping=False, transfer_dropped_optional=False, integer_widths_from_byteord=False, integer_byteord_override=None, disallow_range_truncation=False, allow_uneven_event_width=False, allow_tot_mismatch=False, warnings_are_errors=False, hide_warnings=False, dataset_offset=0)

Read dataset from FCS file from keywords in flat mode.

Parameters:
  • path (Path) – Path to be read.

  • version (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”]) – Version to use when parsing TEXT.

  • std (dict[str, str]) – Standard keywords.

  • data_seg (tuple[int, int]) – The DATA segment from HEADER.

  • analysis_seg (tuple[int, int]) – The ANALYSIS segment from HEADER. Defaults to (0, 0).

  • other_segs (list[tuple[int, int]]) – The OTHER segments from HEADER. Defaults to [].

  • text_data_correction (tuple[int, int]) – Corrections for DATA offsets in TEXT. Defaults to (0, 0).

  • text_analysis_correction (tuple[int, int]) – Corrections for ANALYSIS offsets in TEXT. Defaults to (0, 0).

  • ignore_text_data_offsets (bool) – If True ignore DATA offsets in TEXT. Defaults to False.

  • ignore_text_analysis_offsets (bool) – If True ignore ANALYSIS offsets in TEXT. Defaults to False.

  • allow_header_text_offset_mismatch (bool) – If True allow TEXT and HEADER offsets to mismatch. Defaults to False.

  • allow_missing_required_offsets (bool) – If True allow required DATA and ANALYSIS (3.1 or lower) offsets in TEXT to be missing. If missing, fall back to offsets from HEADER. Defaults to False.

  • truncate_text_offsets (bool) – If True truncate offsets that exceed end of file. Defaults to False.

  • allow_optional_dropping (bool) – If True drop optional keys that cause an error and emit warning instead. Defaults to False.

  • transfer_dropped_optional (bool) – If True transfer optional keys to non-standard dict if dropped. Defaults to False.

  • integer_widths_from_byteord (bool) – If True set all $PnB to the number of bytes from $BYTEORD. Only has an effect for FCS 2.0/3.0 where $DATATYPE is I. Defaults to False.

  • integer_byteord_override (list[int] | None) – Override $BYTEORD for integer layouts. Defaults to None.

  • disallow_range_truncation (bool) – If True throw error if $PnR values need to be truncated to match the number of bytes specified by $PnB and $DATATYPE. Defaults to False.

  • allow_uneven_event_width (bool) – If True allow event width to not perfectly divide length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • allow_tot_mismatch (bool) – If True allow $TOT to not match number of events as computed by the event width and length of DATA. Does not apply to delimited ASCII layouts. Defaults to False.

  • warnings_are_errors (bool) – If True all warnings will be regarded as errors. Defaults to False.

  • hide_warnings (bool) – If True hide all warnings. Defaults to False.

  • dataset_offset (int) – Starting position in the file of the dataset to be read. Defaults to 0.

Return type:

FlatDatasetWithKwsOutput

Raises:
  • ValueError – if analysis_seg or data_seg has offsets which exceed the end of the file, are inverted (begin after end), or are either negative or greater than 2**64-1

  • InvalidKeywordValueError – if integer_byteord_override is not a list of integers including all from 1 to N where N is the length of the list (up to 8)

  • OverflowError – if field 1 or 2 in text_analysis_correction or text_data_correction is less than -2**31 or greater than 2**31-1

  • ValueError – if any in other_segs has offsets which exceed the end of the file, are inverted (begin after end), or are either negative or greater than 2**64-1

  • ParseKeyError – if dict key in std is empty, does not start with "$", or is only a "$"

  • FileLayoutError – If DATA is unparsable

  • FCSDeprecatedError – If an ASCII layout is used and FCS version is 3.1 or 3.2

  • ParseKeywordValueError – If any keyword values could not be read from their string encoding

  • RelationalError – If keywords are incompatible with indicated layout of DATA

  • EventDataError – If values in DATA cannot be read
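
For illustration, a minimal sketch of the repair workflow this function supports, assuming fcs_read_flat_text() returns a FlatTEXTOutput (see Outputs below); the path and the patched $TOT value are hypothetical.

    from pathlib import Path
    from pyreflow.api import fcs_read_flat_text, fcs_read_flat_dataset_with_keywords

    path = Path("broken.fcs")  # hypothetical path

    # Assumed to return a FlatTEXTOutput (see Outputs below).
    flat = fcs_read_flat_text(path)

    # Repair the keywords out-of-band; overwriting $TOT is a hypothetical example.
    std = dict(flat.kws.std)
    std["$TOT"] = "10000"

    # Reuse the HEADER segments recorded during the flat parse.
    segs = flat.parse.header_segments
    out = fcs_read_flat_dataset_with_keywords(
        path,
        flat.version,
        std,
        segs.data_seg,
        analysis_seg=segs.analysis_seg,
        other_segs=segs.other_segs,
    )

    # out is a FlatDatasetWithKwsOutput; out.data is a polars DataFrame.
    print(out.data.shape)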

Outputs

These are neatly bundled classes of data returned by each of the functions above.

class pyreflow.api.Header(version, segments)

The HEADER segment from an FCS dataset.

Variables:
  • version (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”]) – (read-only) The FCS version.

  • segments (HeaderSegments) – (read-only) The segments from HEADER.

class pyreflow.api.FlatTEXTOutput(version, kws, parse)

Parsed HEADER and TEXT.

Variables:
  • version (Literal[“FCS2.0”, “FCS3.0”, “FCS3.1”, “FCS3.2”]) – (read-only) The FCS version.

  • kws (ValidKeywords) – (read-only) Standard and non-standard keywords.

  • parse (FlatTEXTParseData) – (read-only) Miscellaneous data obtained when parsing TEXT.

class pyreflow.api.StdTEXTOutput(tot, dataset_segs, extra, parse)

Miscellaneous data when standardizing TEXT.

Variables:
  • tot (int | None) – (read-only) Value of $TOT from TEXT.

  • dataset_segs (DatasetSegments) – (read-only) Offsets used to parse DATA and ANALYSIS.

  • extra (ExtraStdKeywords) – (read-only) Extra keywords from TEXT standardization.

  • parse (FlatTEXTParseData) – (read-only) Miscellaneous data when parsing TEXT.

Raises:

OverflowError – if tot is less than 0 or greater than 2**64-1

class pyreflow.api.FlatDatasetOutput(text, dataset)

Dataset from FCS file parsed with flat mode.

Variables:
class pyreflow.api.StdDatasetOutput(dataset, parse)

Miscellaneous data when standardizing TEXT.

Variables:
class pyreflow.api.FlatDatasetWithKwsOutput(data, analysis, others, dataset_segs)

Dataset from parsing flat TEXT.

Variables:
  • data (polars.DataFrame) – (read-only) A dataframe encoding the contents of DATA. Number of columns must match number of measurements. May be empty. Types do not necessarily need to correspond to those in the data layout but mismatches may result in truncation.

  • analysis (bytes) – (read-only) Contents of the ANALYSIS segment.

  • others (list[bytes]) – (read-only) A list of byte strings encoding the OTHER segments.

  • dataset_segs (DatasetSegments) – (read-only) Offsets used to parse DATA and ANALYSIS.

Raises:

EventDataError – If data contains columns which are not unsigned 8/16/32/64-bit integers or 32/64-bit floats
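
For illustration, a short sketch of consuming this output, assuming out was produced as in the fcs_read_flat_dataset_with_keywords example above:

    # DATA as a polars DataFrame; the column count matches the number of measurements.
    print(out.data.columns, out.data.shape)

    # ANALYSIS and the OTHER segments are raw bytes.
    print(len(out.analysis), [len(o) for o in out.others])

    # The offsets that were actually used to parse DATA and ANALYSIS.
    print(out.dataset_segs.data_seg, out.dataset_segs.analysis_seg)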

class pyreflow.api.StdDatasetWithKwsOutput(dataset_segs, extra)

Miscellaneous data when standardizing TEXT from keywords.

Variables:
  • dataset_segs (DatasetSegments) – (read-only) Offsets used to parse DATA and ANALYSIS.

  • extra (ExtraStdKeywords) – (read-only) Extra keywords from TEXT standardization.

Common outputs

These are classes which are reused when returning data from the functions above.

class pyreflow.api.HeaderSegments(text_seg, data_seg, analysis_seg, other_segs)

The segments from HEADER.

Variables:
  • text_seg (tuple[int, int]) – (read-only) The primary TEXT segment from HEADER.

  • data_seg (tuple[int, int]) – (read-only) The DATA segment from HEADER.

  • analysis_seg (tuple[int, int]) – (read-only) The ANALYSIS segment from HEADER.

  • other_segs (list[tuple[int, int]]) – (read-only) The OTHER segments from HEADER.

Raises:
  • ValueError – if analysis_seg, data_seg, or text_seg has offsets which exceed the end of the file, are inverted (begin after end), or are either negative or greater than 2**64-1

  • ValueError – if any in other_segs has offsets which exceed the end of the file, are inverted (begin after end), or are either negative or greater than 2**64-1

class pyreflow.api.FlatTEXTParseData(header_segments, supp_text, nextdata, delimiter, non_ascii, byte_pairs)

Miscellaneous data obtained when parsing TEXT.

Variables:
  • header_segments (HeaderSegments) – (read-only) Segments from HEADER.

  • supp_text (tuple[int, int] | None) – (read-only) Supplemental TEXT offsets if given.

  • nextdata (int | None) – (read-only) The value of $NEXTDATA.

  • delimiter (int) – (read-only) Delimiter used to parse TEXT.

  • non_ascii (list[tuple[str, str]]) – (read-only) Keywords with a non-ASCII but still valid UTF-8 key.

  • byte_pairs (list[tuple[bytes, bytes]]) – (read-only) Keywords with invalid UTF-8 characters.

Raises:

ValueError – if supp_text has offsets which exceed the end of the file, are inverted (begin after end), or are either negative or greater than 2**64-1

class pyreflow.api.ValidKeywords(std, nonstd)

Standard and non-standard keywords.

Variables:
  • std (dict[str, str]) – (read-only) Standard keywords.

  • nonstd (dict[str, str]) – (read-only) Non-standard keywords.

Raises:
  • ParseKeyError – if dict key in nonstd is empty or starts with "$"

  • ParseKeyError – if dict key in std is empty, does not start with "$", or is only a "$"

class pyreflow.api.ExtraStdKeywords(pseudostandard, unused)

Extra keywords from TEXT standardization.

Variables:
  • pseudostandard (dict[str, str]) – (read-only) Keywords which start with $ but are not part of the standard.

  • unused (dict[str, str]) – (read-only) Keywords which are part of the standard but were not used.

Raises:

ParseKeyError – if dict key in pseudostandard or unused is empty, does not start with "$", or is only a "$"

class pyreflow.api.DatasetSegments(data_seg, analysis_seg)

Segments used to parse DATA and ANALYSIS.

Variables:
  • data_seg (tuple[int, int]) – (read-only) The DATA segment from HEADER or TEXT.

  • analysis_seg (tuple[int, int]) – (read-only) The ANALYSIS segment from HEADER or TEXT.

Raises:

ValueError – if analysis_seg or data_seg has offsets which exceed the end of the file, are inverted (begin after end), or are either negative or greater than 2**64-1