trudag.score

Library for Trustable score calculations.

In practice, this means functions for interfacing between dotstop and graphalyzer.

item_sensitivity

item_sensitivity(
    graph: TrustableGraph,
    items_labels: list[str] | None = None,
)

Perform a sensitivity analysis on a given TrustableGraph, optionally restricted to a subset of nodes via the items_labels filter.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `graph` | `TrustableGraph` | Graph to be analysed. | *required* |
| `items_labels` | `list[str] \| None` | Optional filter of items to evaluate. | `None` |

Returns: (dict[str, dict[str, float]]): Dictionary mapping each item to a map of how important that item is to the other items.
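The idea of item sensitivity can be sketched with a toy finite-difference version (this is a conceptual illustration, not trudag's implementation; the equal-weight parent score and the names `parent_score`/`toy_sensitivity` are assumptions):

```python
def parent_score(leaf_scores: dict[str, float]) -> float:
    """Toy parent score: the equal-weight mean of its leaves."""
    return sum(leaf_scores.values()) / len(leaf_scores)

def toy_sensitivity(leaf_scores: dict[str, float], eps: float = 0.01) -> dict[str, float]:
    """How much the parent score moves when each leaf's score is nudged by eps."""
    base = parent_score(leaf_scores)
    sensitivities = {}
    for uid in leaf_scores:
        bumped = dict(leaf_scores, **{uid: leaf_scores[uid] + eps})
        sensitivities[uid] = (parent_score(bumped) - base) / eps
    return sensitivities

# With three equal-weight leaves, each leaf's sensitivity is 1/3.
print(toy_sensitivity({"a": 0.2, "b": 0.5, "c": 0.9}))
```

Restricting the analysis to a subset of items, as `items_labels` does, would correspond here to only bumping the leaves in that subset.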

score

score(
    graph: TrustableGraph,
    validator: Validator | None = None,
    concurrent: bool = False,
    workers: int | None = None,
    dump: Path | None = None,
) -> dict[str, ItemScore]

Compute the trustable score of graph and all its reviewed Items.

The score is calculated recursively from leaf Items, with each Item being assigned a score equal to the weighted sum of its child Items. The score for each leaf Item should be recorded in an attribute named `score` (trudag.dotstop.core.item.Item.score); otherwise its score will be assumed to be zero.

Unreviewed Items will not be scored. Contributions from child Items associated by a suspect link are ignored.

Warning

Scoring is still under heavy development. Currently:

  • Scores for non-leaf nodes are ignored.
  • All weights are assumed to be one and are normalised accordingly.

This behaviour is likely to change.
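The recursion described above, under the current behaviour (non-leaf scores ignored, all weights one and normalised), can be sketched on a toy graph. Plain dicts stand in for TrustableGraph and Item here; the names are illustrative, not trudag's API:

```python
def toy_score(children: dict[str, list[str]], leaf_scores: dict[str, float], uid: str) -> float:
    """Score of `uid`: its recorded score if it is a leaf, else the
    equal-weight (normalised) sum of its children's scores."""
    kids = children.get(uid, [])
    if not kids:
        # Leaf: use its recorded score, defaulting to zero if absent.
        return leaf_scores.get(uid, 0.0)
    # Non-leaf: any recorded score is ignored; weights are all one,
    # normalised by the number of children.
    return sum(toy_score(children, leaf_scores, k) for k in kids) / len(kids)

children = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
leaf_scores = {"a1": 1.0, "a2": 0.5, "b": 0.8}
# a = (1.0 + 0.5) / 2 = 0.75; root = (0.75 + 0.8) / 2 = 0.775
print(toy_score(children, leaf_scores, "root"))
```

Unreviewed items and suspect links, which the real function also handles, are omitted from this sketch for brevity.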

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `graph` | `TrustableGraph` | Graph to score. | *required* |
| `validator` | `Validator \| None` | Validator object to run validations with. | `None` |
| `dump` | `Path \| None` | Output file path for the Trustable Scores file. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `dict[str, ItemScore]` | Dictionary of (UID, confidence score) pairs for all reviewed Items. |

score_origin

score_origin(
    item: Item,
    premises: list[Item],
    review_statuses: dict[str, bool],
    validations: dict[str, dict],
    score: float,
) -> str

Given an item, extract its score origin.

scores_from_graph

scores_from_graph(
    premises: list[Item],
    review_statuses: dict[str, bool],
    validations: dict[str, dict],
) -> dict[str, float]

Given premises with their review statuses and validations, extract the Evidence scores.

Where items have neither an SME nor a validation score, assign them a score of zero, logging a warning. Where items have an SME score but are missing both a validation score and references, assign them a score of zero, logging a warning. Where items have both an SME and a validation score, take their product.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `premises` | `list[Item]` | List of premises to extract Evidence scores from. | *required* |
| `review_statuses` | `dict[str, bool]` | Review statuses of all items. | *required* |
| `validations` | `dict[str, dict]` | Validation results of all items. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `dict[str, float]` | Dictionary of (item_str, confidence score) pairs for Evidence items. |