
Functional Safety Validation and Testing Procedures
Guide to validation testing for safety instrumented systems covering proof testing, partial stroke testing, diagnostics verification, and documentation.
Published on February 23, 2026
This guide describes validation and testing procedures for safety instrumented systems (SIS) and safety-related parts of control systems (SRP/CS). It covers proof testing, partial stroke testing (PST), diagnostics verification, fault-injection techniques, FAT/SAT/SIT acceptance, and the documentation and lifecycle controls required by international standards. The guidance below combines practical implementation tactics, numerical targets, and references to the governing standards and tools that automation engineers and project teams must apply to demonstrate and maintain functional safety.
Key Concepts
Safety Objectives, Safety Functions, and Safety Integrity
Functional safety validation begins with clear safety objectives derived from a hazard analysis and risk assessment (HARA). Each safety objective maps to one or more safety functions (for example: emergency shutdown, burner management, overpressure protection). Standards require quantitative integrity targets — Safety Integrity Levels (SIL 1-4 per IEC 61508) or Performance Levels (PL a-e per ISO 13849) — and probabilistic metrics such as PFDavg (average probability of dangerous failure on demand) or PFH (average frequency of dangerous failure per hour). According to IEC 61508, design and validation must demonstrate that the achieved PFDavg/PFH meets the SIL target and that diagnostic coverage (DC) and MTTFd assumptions used in calculations are supported by testing and FMEDA evidence [4].
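To make the PFDavg-to-SIL relationship concrete, the sketch below applies the widely used simplified approximation for a single-channel (1oo1) low-demand function, PFDavg ≈ λ_DU · TI / 2, and maps the result to its IEC 61508 low-demand SIL band. The failure rate and proof-test interval are illustrative assumptions, not values from the standard or from any vendor data.

```python
# Sketch: simplified IEC 61508 low-demand approximation for a 1oo1
# safety function, PFDavg ~= lambda_DU * TI / 2.
# The rate and interval below are illustrative assumptions only.

def pfd_avg_1oo1(lambda_du_per_hr: float, proof_interval_hr: float) -> float:
    """Average probability of dangerous failure on demand (1oo1)."""
    return lambda_du_per_hr * proof_interval_hr / 2.0

def sil_band_low_demand(pfd: float) -> int:
    """Map a PFDavg value to its SIL band (low-demand mode); 0 = below SIL 1."""
    bands = [(1e-5, 1e-4, 4), (1e-4, 1e-3, 3), (1e-3, 1e-2, 2), (1e-2, 1e-1, 1)]
    for lo, hi, sil in bands:
        if lo <= pfd < hi:
            return sil
    return 0

# Assumed values: lambda_DU = 2e-7 per hour, annual proof test (8760 h).
pfd = pfd_avg_1oo1(2e-7, 8760)
print(f"PFDavg = {pfd:.2e}, SIL band = {sil_band_low_demand(pfd)}")
```

Note how halving the proof-test interval halves the PFDavg in this model — the lever that proof-test scheduling pulls.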
Proof Testing vs Partial Stroke Testing (PST)
Proof testing is the full-stroke verification of a safety function (e.g., cycling a shutdown valve through its complete travel to confirm it moves and seats correctly). Proof tests reveal dangerous undetected failures that on-line diagnostics cannot see, and their intervals are derived from PFDavg calculations. Partial stroke testing (PST) moves actuators a limited percentage of full stroke to detect early signs of sticking or actuator degradation without taking the process offline. Typical PST amplitudes fall into two categories:
- Traditional PST: 20–50% of full valve stroke, used to validate valve and actuator movement and to detect sticking or friction that would prevent full closure [3][4].
- Micro or low-amplitude PST: 1–5% of full stroke applied frequently (daily to monthly) to catch small changes in actuator dynamics; these require high-fidelity sensors and diagnostics to preserve test effectiveness and avoid nuisance trips [1][4].
Project teams should select PST stroke and frequency based on failure mode distributions, actuator sizing, and diagnostic coverage targets (typical design targets: DC >90% for many systems; for SIL 3, aim for DC >99% where feasible) [1][2][4].
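A common simplified way to quantify the PFDavg benefit of PST is to split λ_DU into a PST-detectable share that is effectively proof tested at the short PST interval, with the remainder tested only at the full proof-test interval. The sketch below implements that split; the coverage fraction, rates, and intervals are illustrative assumptions.

```python
# Sketch: simplified model of PST benefit. The PST-detectable share of
# lambda_DU is "proof tested" at the PST interval; the remainder only at
# the full proof-test interval. All numbers are illustrative assumptions.

def pfd_avg_with_pst(lambda_du: float, pst_coverage: float,
                     pst_interval_hr: float, proof_interval_hr: float) -> float:
    covered = pst_coverage * lambda_du * pst_interval_hr / 2.0
    uncovered = (1.0 - pst_coverage) * lambda_du * proof_interval_hr / 2.0
    return covered + uncovered

base = pfd_avg_with_pst(2e-7, 0.0, 730, 8760)      # no PST, annual proof test
with_pst = pfd_avg_with_pst(2e-7, 0.7, 730, 8760)  # monthly PST, 70% coverage
print(f"without PST: {base:.2e}, with monthly PST: {with_pst:.2e}")
```

Under these assumed numbers PST cuts PFDavg by roughly a factor of three, which is why standards credit PST coverage in interval calculations — but only if the claimed coverage is demonstrated by the diagnostics verification described below.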
Diagnostics Verification and Fault Coverage
Diagnostics verification proves that built-in self-tests and monitoring logic detect the classes of faults assumed in safety analyses (single point, diagnostic, and latent faults). Verification methods include automated self-test confirmation, redundancy checks, cross-channel voting, and simulated fault injection. The validation team must demonstrate the fault detection ratio and the time-to-detect metrics that feed into PFD/PFH models and FMEDA outputs [4].
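In FMEDA terms, diagnostic coverage is DC = λ_DD / (λ_DD + λ_DU). A fault-injection campaign approximates this by tallying which injected dangerous faults the built-in diagnostics actually flag. The sketch below shows such a tally, weighting each fault equally for simplicity; the campaign records are invented illustrative data, not results from any real device.

```python
# Sketch: estimating diagnostic coverage from fault-injection results.
# In FMEDA terms DC = lambda_DD / (lambda_DD + lambda_DU); here each
# injected dangerous fault is weighted equally. Records are illustrative.

from dataclasses import dataclass

@dataclass
class FaultResult:
    fault_id: str
    dangerous: bool   # does the fault defeat the safety function?
    detected: bool    # did built-in diagnostics flag it?

def diagnostic_coverage(results: list) -> float:
    dangerous = [r for r in results if r.dangerous]
    if not dangerous:
        raise ValueError("campaign injected no dangerous faults")
    return sum(r.detected for r in dangerous) / len(dangerous)

campaign = [
    FaultResult("sensor_stuck_low", True, True),
    FaultResult("output_stuck_on", True, True),
    FaultResult("comm_crc_error", False, True),
    FaultResult("valve_sticking", True, False),  # only found by proof test
    FaultResult("channel_drift", True, True),
]
print(f"DC = {diagnostic_coverage(campaign):.0%}")
```

A real campaign would weight faults by their FMEDA failure rates rather than equally, but the bookkeeping pattern is the same.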
Testing Scope: FAT, SAT, SIT
Factory Acceptance Testing (FAT), Site Integration Testing (SIT), and Site Acceptance Testing (SAT) are distinct, mandatory test phases in process and industrial projects. FAT verifies equipment and software off-site against technical specifications; SIT verifies integration with site systems and I/O; SAT verifies performance under operational conditions and user acceptance. ISA-105 style procedures define checklists, roles (owner, vendor, contractor), and exit criteria for each phase; they exclude commissioning loop checks from their scope but require readiness checks and punch-list closure [5].
Implementation Guide
Five-Step Validation Workflow
Apply a structured 5-step validation workflow to reduce ambiguity and support auditability. The steps below synthesize ISO 13849 validation guidance and industry best practice:
- 1. Hazard Identification and Risk Assessment (HARA): Define scenarios, severity, frequency, and required risk reduction factor to determine SIL/PL/ASIL targets. Document bidirectional traceability between hazards, safety requirements, and tests [3][4].
- 2. Design Verification: Produce FMEDA, fault trees, and Markov models where required to justify component selection, redundancy, and diagnostics. Ensure assumptions (MTTFd, DC, CCF factors) match vendor data and will be proven by tests or product certificates [4].
- 3. Functional and Failure Mode Testing: Execute proof tests, PST, and diagnostics verification per safety requirements. Use fault-injection to exercise protection against single-point and latent faults; include HIL/MIL/SIL simulation for edge scenarios [1][2].
- 4. Residual Risk Analysis: Re-evaluate residual risk after implemented safety measures; update safety case and operation limits if residual risk exceeds tolerable levels [3].
- 5. Documentation, Change Control, and Release: Archive test records, acceptance certificates, traceability matrices, and FMEDA. Apply change-management processes for future modifications and schedule recurring proof/PST based on PFD/PFH calculations [4][5].
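The traceability demanded in steps 1 and 5 can be audited mechanically. The sketch below checks one direction of the chain — every hazard traces to requirements and every requirement to at least one passing test — and reports the gaps. All identifiers are invented examples of the naming scheme a project might use.

```python
# Sketch: minimal traceability completeness check for the workflow above.
# Every hazard must trace to requirements, and every requirement must be
# covered by at least one passing test. Identifiers are invented examples.

hazard_to_reqs = {"HAZ-01": ["SR-01", "SR-02"], "HAZ-02": ["SR-03"]}
req_to_tests = {"SR-01": ["TC-10"], "SR-02": ["TC-11", "TC-12"], "SR-03": ["TC-20"]}
test_results = {"TC-10": "pass", "TC-11": "pass", "TC-12": "fail", "TC-20": "fail"}

def trace_gaps(hazard_to_reqs: dict, req_to_tests: dict, test_results: dict) -> dict:
    """Return, per hazard, the requirements with no passing test."""
    gaps = {}
    for hazard, reqs in hazard_to_reqs.items():
        open_reqs = [
            r for r in reqs
            if not any(test_results.get(t) == "pass" for t in req_to_tests.get(r, []))
        ]
        if open_reqs:
            gaps[hazard] = open_reqs
    return gaps

print(trace_gaps(hazard_to_reqs, req_to_tests, test_results))
```

Running the same check in the reverse direction (every test traces back to a requirement) completes the bidirectional claim; both reports belong in the release evidence of step 5.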
Practical Test Procedures and Acceptance Criteria
Design test procedures with measurable acceptance criteria and pass/fail conditions. Example elements for a proof test on a shutdown valve:
- Pre-test readiness checklist: instrument health, control logic version, communication paths validated, operators briefed on safety (FAT checklist per ISA-105) [5].
- Execution steps: initiate test command, measure travel time, confirm seating/torque signatures, verify end-of-travel switches or positioners, record stroke profile and valve torque curve [4].
- Acceptance criteria: stroke within specified time window, seat leakage below defined threshold, no unexpected alarms, diagnostics identify nominal condition. Record evidence in the test report and link to the related safety requirement in the traceability matrix.
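The acceptance criteria above lend themselves to automated evaluation, which also produces the evidence record directly. The sketch below scores a proof-test run against stroke-time, leakage, and alarm criteria; all threshold values are illustrative assumptions — real limits come from the safety requirement specification.

```python
# Sketch: automated pass/fail evaluation of a shutdown-valve proof test
# against the acceptance criteria above. All thresholds are illustrative
# assumptions; real limits come from the safety requirement specification.

def evaluate_proof_test(stroke_time_s: float, seat_leakage_ml_min: float,
                        alarms: list, min_stroke_s: float = 1.0,
                        max_stroke_s: float = 4.0, max_leak: float = 5.0):
    """Return (passed, reasons) for one proof-test run."""
    reasons = []
    if not (min_stroke_s <= stroke_time_s <= max_stroke_s):
        reasons.append(f"stroke time {stroke_time_s}s outside "
                       f"[{min_stroke_s}, {max_stroke_s}]s")
    if seat_leakage_ml_min > max_leak:
        reasons.append(f"seat leakage {seat_leakage_ml_min} ml/min > {max_leak}")
    if alarms:
        reasons.append(f"unexpected alarms: {alarms}")
    return (len(reasons) == 0, reasons)

ok, reasons = evaluate_proof_test(2.8, 1.2, [])
print("PASS" if ok else f"FAIL: {reasons}")
```

Logging the `reasons` list alongside the raw stroke profile gives the test report the linkable evidence the traceability matrix expects.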
Fault Injection and Robustness Testing
Fault injection validates the system’s reaction to realistic faults: sensor drift, stuck-at outputs, communication errors, or single-channel failures. Tools such as Ignitarium’s Fault Campaign Manager (FCM) combined with simulators like Xcelium Fault Simulator (XFS) enable controlled fault campaigns and reproducible test runs for software-dominated systems and E/E/PE devices. Fault campaigns should include:
- Single-point fault tests and scenarios that combine latent faults with subsequent failures.
- Time-to-failure and time-to-detect metrics to validate assumptions used in FMEDA/Markov models [1].
- Regression and repeatability checks automated wherever possible to support long-term maintenance and certification evidence [2][7].
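The structure of such a campaign can be sketched generically — this is not the FCM or XFS API, just a simplified stand-in showing the pattern: each fault mutates the inputs a piece of safety logic sees, and the campaign checks that the safe state is still reached on a genuine demand. The 2oo3 voter, thresholds, and fault list are illustrative assumptions.

```python
# Sketch of a generic fault-injection harness (not the FCM/XFS APIs).
# Each fault mutates the channel readings seen by a 2oo3 trip voter; the
# campaign checks the safe state is still reached on a genuine demand.
# Voter, thresholds, and faults are illustrative assumptions.

def vote_2oo3(readings: list, trip_at: float = 100.0) -> bool:
    """Trip (True) when at least 2 of 3 channels exceed the threshold."""
    return sum(r > trip_at for r in readings) >= 2

FAULTS = {
    "ch0_stuck_low":  lambda r: [0.0, r[1], r[2]],
    "ch1_stuck_high": lambda r: [r[0], 999.0, r[2]],
    "ch2_drift":      lambda r: [r[0], r[1], r[2] - 30.0],
}

def run_campaign(logic, faults: dict, demand_readings: list) -> dict:
    """Map each fault name to True if the logic still trips on demand."""
    return {name: logic(fault(demand_readings)) for name, fault in faults.items()}

results = run_campaign(vote_2oo3, FAULTS, demand_readings=[120.0, 120.0, 120.0])
print(results)
```

Because the campaign is a pure function of (logic, faults, inputs), reruns are exactly repeatable — the property the regression and certification evidence depends on.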
Best Practices
Standards Compliance and Certification Strategy
Select a compliance and certification strategy at project outset. For general E/E/PE systems, IEC 61508 remains the primary reference for SIL justification and validation procedures; it requires probabilistic calculations, proof test intervals derived from PFDavg, and documented FMEDA and fault trees [4]. For machinery control systems, follow ISO 13849-1/2 validation steps and testing procedures. For automotive or safety-critical embedded controls, adopt ISO 26262 testing practices including unit/integration testing, HIL simulation, and fault injection, with bidirectional traceability to HARA/ASIL items [2][3]. Use ISA-105 checklists to structure FAT/SAT/SIT planning and acceptance criteria [5].
Traceability, Configuration Management, and Evidence
Maintain bidirectional traceability from hazard → safety requirement → design element → test case → test result. Use certified tools that generate trace matrices for auditability (QA Systems tools for ISO 26262 and Cantata for unit/integration testing are examples) [2]. Archive FMEDA spreadsheets, test logs, fault-injection results, and change requests. Ensure configuration management covers software versions, safety parameter settings, and calibration records; require third-party sign-off on changes that impact safety claims [2][4].
Automate Where It Matters
Automation reduces human error and increases repeatability. Automate proof-test result capture where sensors and positioners support it; integrate PST scheduling and trending in maintenance systems. Automate fault-injection campaigns and regression test suites for software using established tools (Ignitarium FCM, XFS) and include HIL/MIL environments for non-invasive edge-case validation before field deployment [1][7].
Third-Party Validation and Independent Assessment
Independent validation strengthens the safety case and reduces conflicts of interest. Consider engaging certification bodies or specialist service providers (for example, Pilz for ISO 13849 machinery validation) to conduct impartial verification, particularly for SIL 3/4 or PL e functions and where regulatory authorities require independent assessment [8].
Operational Best Practices and Maintenance
Translate validation into ongoing operational safety by:
- Scheduling and enforcing proof-test and PST intervals based on calculated PFDavg and observed degradation rates; update intervals when new field data changes failure rate assumptions [4].
- Setting diagnostic coverage targets consistent with SIL/PL goals (e.g., aim for DC >99% for SIL 3 if practical; otherwise demonstrate maintenance and proof test regimes that compensate) [1][4].
- Implementing periodic audits, trend analysis from PST data, and root-cause investigations for recurring failures. Prioritize software/firmware updates through a controlled change process with re-validation for safety-impacting changes [3].
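Updating intervals from field data amounts to re-estimating λ_DU and inverting the 1oo1 approximation: from PFDavg ≈ λ_DU · TI / 2, the interval that holds a target is TI = 2 · PFD_target / λ_DU. The fleet size, failure count, and PFD target below are illustrative assumptions.

```python
# Sketch: updating the proof-test interval when field data revises the
# failure-rate estimate. Inverts PFDavg ~= lambda_DU * TI / 2, giving
# TI = 2 * PFD_target / lambda_DU. All numbers are illustrative.

def observed_lambda_du(failures: int, devices: int,
                       service_hours_per_device: float) -> float:
    """Point estimate of lambda_DU from field experience (per hour)."""
    return failures / (devices * service_hours_per_device)

def proof_interval_for_target(pfd_target: float, lambda_du: float) -> float:
    """Longest proof-test interval (hours) that still meets the target."""
    return 2.0 * pfd_target / lambda_du

# Assumed fleet: 1 dangerous-undetected failure across 200 valves in 5 years.
lam = observed_lambda_du(1, 200, 5 * 8760)
ti_hr = proof_interval_for_target(5e-4, lam)
print(f"lambda_DU = {lam:.2e}/hr -> proof test every {ti_hr:.0f} h")
```

A point estimate like this understates uncertainty with few observed failures; in practice a confidence bound on λ_DU (e.g., chi-squared based) should drive the interval, and any shortening feeds back into the maintenance schedule through change control.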
Standards and Regulatory Requirements
This section summarizes the standards commonly used to govern validation and testing for SIS and SRP/CS. Select the standard(s) applicable to your industry and safety targets early in the project.
| Standard | Scope / Applicability | Key Requirements | Typical Metrics |
|---|---|---|---|
| IEC 61508 | E/E/PE safety systems across industries | Probabilistic failure rates, FMEDA, PFDavg/PFH, proof-test intervals, lifecycle validation | PFDavg targets by SIL (SIL 1–4), PFH in failures/hr, DC assumptions |
| ISO 13849-1/2 | Machinery SRP/CS | Performance Levels (PL a–e), categories B–4, validation by analysis & testing | MTTFd bands, DCavg, CCF analysis, PL assessment |
| ISO 26262 | Automotive E/E systems (adaptable for embedded industrial) | ASIL A–D, HARA, unit/integration testing, HW/SW fault injection, traceability | ASIL-specific test/coverage targets, documented test artefacts |
| ISA-105 (ANSI/ISA-62381) | FAT/SAT/SIT best practices for process industry | Readiness checks, checklists, owner/vendor roles, test evidence handling | FAT/SAT exit criteria, punch-list metrics |
Test Types, Frequencies and Specification Table
Below is a condensed specification table linking common test types to recommended frequency and typical objectives. Tailor frequencies to the project's PFD/PFH calculations and field experience.
| Test Type | Typical Frequency | Primary Objective | Targets / Notes |
|---|---|---|---|
| Full Proof Test | Interval defined by PFDavg (months–years) | Detect latent failures not covered by diagnostics | Complete stroke; record travel, seat, torque; used to compute proof-test effectiveness [4] |
| Partial Stroke Test (PST) | Monthly–annually (or daily for micro-PST) | Detect valve sticking/actuator degradation without shutdown | 20–50% stroke common; micro-PST 1–5% frequent. Diagnostic coverage targets >90–99% depending on SIL [1][4] |
| Diagnostics Verification | During commissioning and after changes | Validate self-tests and redundancy detect faults | Confirm declared DC, time-to-detect metrics feed FMEDA [4] |
| Fault Injection | Commissioning; periodic campaigns | Prove safety logic and error handling under failures | Include single-point, latent, and complex failure combos using FCM/XFS tools [1] |
| FAT / SIT / SAT | Project milestones | Verify vendor deliverables, integration, site performance | Use ISA-105 checklists and acceptance criteria [5] |
Documentation and Traceability
Validation stands or falls on evidence. The Safety Case must include:
- Bidirectional traceability matrices linking hazards, safety requirements, design elements, and test cases [2].
- FMEDA and fault-tree documentation used to