Methodology
This section explains how this abstract model can be applied to XYZ in practice. Applying the Trustable Methodology is an iterative process. It has distinct stages, but these can be tackled in any order. They are:
- Setting Expectations
- Providing Evidence
- Documenting Assumptions
- Recording Reasoning
- Assessing Confidence
Whichever order(s) you perform these stages in, moving from one stage to another will always require Modification, which we discuss separately.
Setting Expectations
Stakeholders of XYZ must agree on the most important functional or non-functional requirements for XYZ within the context of the project. These may arise from or be informed by requirements outside of XYZ. This is anticipated and will be addressed directly later. Stakeholders should express these critical requirements as a set of Statements.
Thinking now about the model, we notice these are Requests that are not used to make any further Claims about XYZ. Therefore they must be Expectations. We conclude that Expectations originate from Stakeholders, who may be:
- Consumers (e.g. a customer or product owner providing requirements)
- Contributors (e.g. an architect identifying a key design goal)
- Others (e.g. the authors of a regulation or safety standard)
Tip
If you find you have high-level requirements that are important both within and outside of your project, don't worry! You can manage this using Access Specifiers, which are introduced later.
Providing Evidence
Contributors are often better placed than other Stakeholders to understand the properties of XYZ. It is recommended that, at least initially, Contributors own the process of gathering and providing Evidence.
In order to support the Claims we make about XYZ, we must express these properties as Statements and then measure or otherwise determine them. This will always require Artifacts: Identifiable components, products or byproducts of XYZ's execution and development. These Artifacts can be persistent (e.g. source code, process documentation) or transient (e.g. test results, incident response times).
Properties of transient Artifacts are best measured algorithmically. For instance, we can procedurally inspect and average the results of performance tests and record the desired performance threshold as a Statement. Our algorithm will now automatically tell us to what extent this Statement is True. We call this relationship between a Statement and an Artifact a Validation.
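As an illustration, a Validation might be implemented as a small procedure that inspects transient test results and scores a Statement against a recorded threshold. This is a minimal sketch; the class name, the latency property, and the linear scoring rule are all illustrative assumptions, not part of the methodology or its tooling:

```python
from dataclasses import dataclass

@dataclass
class Validation:
    """Links a Statement to a transient Artifact via an algorithm."""
    statement: str       # e.g. "Mean request latency is below 200 ms"
    threshold_ms: float  # the desired performance threshold, as recorded

    def score(self, latency_samples_ms: list[float]) -> float:
        """Return the extent to which the Statement is True (0.0 .. 1.0).

        Averages the performance-test results and compares them with the
        recorded threshold; a mean at or below the threshold scores 1.0,
        and the score decays linearly as the mean exceeds it.
        """
        mean = sum(latency_samples_ms) / len(latency_samples_ms)
        if mean <= self.threshold_ms:
            return 1.0
        return max(0.0, 1.0 - (mean - self.threshold_ms) / self.threshold_ms)

v = Validation("Mean request latency is below 200 ms", threshold_ms=200.0)
print(v.score([150.0, 180.0, 170.0]))  # mean ~166.7 ms -> 1.0
print(v.score([250.0, 350.0]))         # mean 300.0 ms  -> 0.5
```

Because the score is computed procedurally from the Artifact, re-running the tests automatically refreshes the extent to which the Statement holds.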
Properties of persistent Artifacts such as documentation or source code can be measured algorithmically, but often we are interested in more complex or "soft" properties when dealing with them. In such cases, we make a Statement about the Artifact and then use SME review to determine the extent to which that Statement is True. We call this relationship between a Statement and an Artifact a Reference.
Tip
When starting a project, it is often easier to begin by referencing all or most of the Artifacts you are interested in before thinking about validation; this is encouraged.
We appeal to Artifacts when we cannot support a Statement with other Statements. We call such Statements Evidence: Premises that are supported by reference to or validation of Artifacts.
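The classifications introduced so far follow mechanically from a Statement's position in the Trustable Graph. A minimal sketch, assuming we know whether a Statement has parents, children, and attached Artifacts (the function and parameter names are illustrative, not part of any tooling):

```python
def classify(has_parents: bool, has_children: bool, has_artifacts: bool) -> str:
    """Classify a Statement by its position in the Trustable Graph.

    - Expectations sit at the top: they support no further Claims.
    - Premises sit at the bottom: no Statements support them.
      With an Artifact attached (a Reference or Validation) they are
      Evidence; left dangling with no justification, they are Assumptions.
    - Statements with both parents and children are Assertions.
    """
    if not has_parents:
        return "Expectation"
    if not has_children:
        return "Evidence" if has_artifacts else "Assumption"
    return "Assertion"

print(classify(has_parents=True, has_children=False, has_artifacts=True))   # Evidence
print(classify(has_parents=True, has_children=False, has_artifacts=False))  # Assumption
```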
Documenting Assumptions
Some requirements for XYZ can never be satisfied within the context of the project. For instance, nearly all software projects will require a specific operating system or particular hardware; it is not sensible or reasonable to expect the author of a small library to also provide an OS!
These requirements should still be recorded as Statements, but we leave them "dangling". That is, they are Premises for which we have no justification at all. We refer to such Statements as Assumptions.
Recording Assumptions is just as important as recording Evidence, if not more so.
Tip
During the evolution of XYZ, you will often find Assumptions that actually reflect work that needs to be done within XYZ by its Contributors, not the Consumer. Recording these openly is encouraged, as it provides transparency for developers and users: they can clearly see what must be fixed or provided to achieve the functionality XYZ promises.
Recording Reasoning
The measurable properties of XYZ that we capture as Evidence, and the Assumptions we make about XYZ's properties, cannot always be directly Linked to our Expectations for XYZ. For instance, there may be a significant logical argument or chain of implication between Expectations and the properties of XYZ. This is particularly true for complex projects.
Instead of making large and undocumented leaps in logic, we make smaller logical steps using intermediate Statements. These Statements will necessarily have parents and children; they are therefore Assertions.
In a strictly logical sense, the set of Statements alone should be enough to capture the argument, but they will not necessarily be sufficient to explain it to Contributors and Consumers. This is the role of Informative items: passages associated with a set of Statements and Links that explain the reasoning for and thinking behind the argument represented by the subgraph.
The intermediate reasoning we make can be complex and detailed. It is not reasonable to assume it can all be broken down into simple sentences. For this reason we allow Assertions to be Qualified by Artifacts. That is, the Assertion can state that some complex property expressed in an Artifact is True, rather than reproducing the definition in the Statement itself. This relationship between Artifacts and Assertions is called Qualification.
Assessing Confidence
At any stage in the project's lifecycle and ideally every time it is changed, the Contributors to and Consumers of XYZ need to understand to what extent XYZ has achieved its stated goals. This is the role of a Confidence Assessment.
Evidence is scored, both by SME review and by executing validation algorithms. Scores for every Statement in the Trustable Graph are then calculated recursively. These scores should be used to inform where effort is focused next.
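One possible shape for the recursive calculation, assuming leaf scores come from SME review or validation algorithms and each inner Statement simply averages the scores of its supporting children. The averaging rule is an assumption for illustration; real tooling may weight Links differently:

```python
def confidence(statement, leaf_scores, children):
    """Recursively score a Statement in the Trustable Graph.

    leaf_scores: scores already assigned to Premises (Evidence or
                 Assumptions), e.g. from SME review or validation.
    children:    maps each Statement to the Statements supporting it.
    """
    kids = children.get(statement, [])
    if not kids:  # a Premise: use its assessed score directly
        return leaf_scores.get(statement, 0.0)  # unreviewed defaults to 0
    # An Assertion or Expectation: aggregate its supporting Statements.
    return sum(confidence(c, leaf_scores, children) for c in kids) / len(kids)

children = {"X1": ["A1", "A2"], "A1": ["E1"], "A2": []}
leaf_scores = {"E1": 0.8, "A2": 0.5}  # E1 is Evidence, A2 an Assumption
print(confidence("X1", leaf_scores, children))  # (0.8 + 0.5) / 2 = 0.65
```

A low score near the top of the graph points directly at the weakest supporting Premises, which is what makes the assessment useful for prioritising effort.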
Modification
While it can be helpful to consider a Trustable Graph as "requirements-as-code" in many contexts, there is a crucial difference we must bear in mind: unlike code, we cannot lint, test or otherwise automatically verify the underlying logic of the argument. Ultimately humans, not machines, have to decide whether the Links between Statements are valid.
To help us maintain the quality of the underlying logic, we introduce a new adjective to describe Statements and Links in our tooling: Suspect.
The first time a Statement is written, we confirm that it is a valid Statement. If it has references, then, when confidence is assessed, it will receive an SME review and a score. If the Statement later changes, how do we know that the Statement and its recorded score are still valid? We enforce the convention that any change to a Statement makes it Suspect, until it is reviewed by a human. By convention, we refer to items that are not Suspect as reviewed. Therefore, unlike ordinary Statements, Suspect Statements and their scores are not treated as valid.
Similarly, the first time we record a Link, we confirm it represents a logical implication. What happens if the parent or child Statement changes? The two Statements are still related, but we can't be sure that the relationship is a logical implication. The Link is now Suspect, until reviewed by a human. By convention, we refer to links that are not Suspect as clear or cleared.
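Tooling can detect Suspect items mechanically, for example by storing a fingerprint of each item's text at the moment it was last reviewed. The hashing scheme below is an assumed implementation detail for illustration, not part of the methodology:

```python
import hashlib

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

class Statement:
    def __init__(self, text: str):
        self.text = text
        self.reviewed_fingerprint = None  # no review has happened yet

    def mark_reviewed(self):
        """A human has confirmed the current text is valid."""
        self.reviewed_fingerprint = fingerprint(self.text)

    @property
    def suspect(self) -> bool:
        """True until reviewed, and again after any change to the text."""
        return self.reviewed_fingerprint != fingerprint(self.text)

s = Statement("All inputs are sanitised before use")
assert s.suspect            # never reviewed
s.mark_reviewed()
assert not s.suspect        # reviewed: the Statement and its score stand
s.text = "Most inputs are sanitised before use"
assert s.suspect            # any change makes it Suspect again
```

The same fingerprint check applies to a Link, except that a Link must also become Suspect when either of the two Statements it connects changes.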
Tip
An example process for managing modifications is provided here.
Summary
In this section we quickly recap the new objects and relationships introduced in the methodology:
- Artifacts are components, products or byproducts of XYZ
- Evidence is a Premise that is supported with an Artifact
- Assumptions are Premises that are unsupported
- Artifacts can Qualify Assertions
- Artifacts can Validate Premises, making them Evidence
- Premises can Reference Artifacts to create Evidence
Composing Projects
A key aim of Trustable is composability. Where XYZ and its dependencies both use Trustable, their respective Graphs can be easily combined.
When composing XYZ and a dependency, we simply decide which Assumptions in XYZ can be satisfied by Requests in the dependency and add a Link. What was an Assumption in the parent and a Request in the child becomes an Assertion in the context of the wider project.
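Sketched in code, composition amounts to adding Links from the parent's Assumptions to the matching Requests in the dependency; deciding which pairs match remains a human judgement. All names below are illustrative:

```python
def compose(parent_links, child_links, satisfied_by):
    """Combine two Trustable Graphs into one.

    parent_links / child_links: lists of (statement, supporting_statement)
    satisfied_by: maps an Assumption in the parent to the Request in the
                  dependency that satisfies it -- a human decision.
    Each mapped Assumption gains a supporting child, so in the combined
    Graph it is no longer dangling: it has become an Assertion.
    """
    combined = list(parent_links) + list(child_links)
    for assumption, request in satisfied_by.items():
        combined.append((assumption, request))
    return combined

parent = [("X1", "Z1")]  # Z1 dangles: an Assumption in XYZ
child = [("R1", "E1")]   # R1 is a Request in the dependency
graph = compose(parent, child, {"Z1": "R1"})
# Z1 now has both a parent (X1) and a child (R1): an Assertion
```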
Warning
Currently composing projects is done by manually vendoring items between repositories. This is a complex process that is error-prone and tedious. Tooling solutions are in active development.
Access Specifiers
Allowing consumers to depend on implementation-specific Assumptions or Assertions in your argument is a recipe for breaking changes. To avoid this, Statements can be classified with an access specifier:
- Public Statements can be freely used by consumers. Any change to Public Statements is a breaking change. A good project will ensure Consumers using Public Statements can easily move between versions of the software and argument.
- Private Statements are not intended to be used by consumers. Upstream projects may change, add or remove Private Statements as needed. Sensible Consumers will not use Private Statements from upstream sources.
Danger
This feature is not yet implemented in tooling. Instead, you should maintain a dialogue with your up- and downstream projects and communicate this information by other means. Clearly, this is a potential bottleneck for projects using TSF and great care should be taken when managing this, at least until tools are available.
Applying TSF
The Eclipse Trustable Software Framework is a special kind of project that is purely formed of requirements. It is intended to be composed with large software projects like XYZ, enabling them to audit the quality, completeness and correctness of their own arguments.
The TSF and XYZ should be managed separately. We recommend that you first perform one iteration of the Methodology directly on XYZ, before applying TSF.
This means deciding on your Expectations and building an argument for them out of Statements. At this stage, it's fine to have many broad Assumptions. The image on the right shows an example of how this may look: We have two Expectations, X1 and X2, supported by several Assertions which in turn are linked to Statements left as Assumptions, Zi.
Now we apply the TSF. Each TA in the TSF is in fact an Assumption that must be satisfied by XYZ. When we compose TSF with XYZ, we turn these Assumptions into Assertions by linking them into both new and existing argumentation.
To turn each TA from an Assumption into an Assertion, consider the Statements and Artifacts from XYZ that can be used to support the TAs. Note that this may require treating some Statements as Artifacts. For instance, in TA-BEHAVIOURS you will need to reference XYZ's Expectations. Similarly, TA-CONSTRAINTS requires you to reference XYZ's Assumptions. You may need to make new Artifacts to support TAs you have not considered before. For example, TA-ITERATIONS requires you to assemble and provide all source code with each constructed iteration of XYZ.
Note
The example below is incomplete and does not represent a sufficient argument for any TA.
The image below illustrates what this may look like for a subset of the TSF. Intermediate Statements Ui are used to tie XYZ's Statements and Artifacts into the TSF.
- U1 makes a Statement about the source code XYZ provides, supporting TA-ITERATIONS
- U2 makes a Statement about a property of XYZ's Expectations, supporting TA-BEHAVIOURS
- U3 makes a Statement about a property of XYZ's Assumptions, supporting TA-CONSTRAINTS
The Expectation in the TSF, TRUSTABLE-SOFTWARE, therefore provides a transparent and arm's-length (though not truly independent) assessment of the trust we can place in XYZ's Expectations and their score. This structure, although optional, allows upstream and downstream projects to reuse the body of argumentation independently of the TSF and to separately re-evaluate their trustability.