When NCCD staff meet with supervisors, managers, and leadership for the first time in a new partnership with a jurisdiction, we often hear a desire for an assessment tool that helps improve consistency in practice, process, and decision making. While we work to strike a balance among a set of core Structured Decision Making® (SDM) values in the development and implementation of any SDM® assessment, consistency is the quarterback, so to speak. It tends to drive the most enthusiasm behind a jurisdiction's change process.
In child welfare decision making, consistency means that if workers have the same information, they will come to the same conclusion. If multiple workers use the same assessment with the same client, we want to see them arrive at the same results. Consistency helps promote equity in decision making and helps ensure that clients receive the support they need, regardless of which social worker serves them. The SDM assessments ensure that each social worker addresses a common set of factors when making decisions at critical points throughout a case.
When developing an SDM assessment, we build in consistency in several ways. First, we task workgroups of vested local experts with customizing the wording and thresholds embedded in the assessment. This includes going over the definitions with a fine-tooth comb to ensure consistent understanding. The clearer the tool and its definitions, the more workers will understand and use them in the same way. Conversely, the more gaps, overlaps, and vagueness in the tool and its definitions, the more inconsistent and arbitrary the resulting decisions. It is important for us to strive for clarity in how the tool is interpreted and used.
Once the workgroup customization step is complete and a stable draft is produced, we conduct inter-rater reliability (IRR) testing. Volunteer IRR testers from the local jurisdiction review case vignettes and then apply the draft assessment tool and definitions to the information provided. Because every tester has the same information, our researchers can analyze how testers scored each area of the assessment and identify where there is less agreement. If we see low agreement on a draft section or item, we can hypothesize what may be confusing about that element and bring the workgroup back together to add more clarity to the finalized version of the assessment. Lastly, clear expectations and policy on when the assessment is to be used, and at which point(s) in a case, also boost consistent practice.
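To give a sense of what agreement analysis can look like, here is a minimal sketch in Python of one common chance-corrected agreement statistic, Cohen's kappa, applied to two raters. The tester names and item scores are hypothetical examples, not actual SDM or NCCD data, and this is an illustration of the general technique rather than NCCD's specific research method.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the two raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1:  # both raters gave one identical answer throughout
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical yes/no item scores from two IRR testers on eight vignettes.
tester_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
tester_2 = ["yes", "yes", "no", "no",  "no", "no", "yes", "yes"]
print(round(cohens_kappa(tester_1, tester_2), 2))  # prints 0.5
```

Low kappa on a particular item would flag it for the workgroup to revisit, along the lines described above.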
Stay tuned to SDM News in the coming months as we highlight the three other SDM values: accuracy, equity, and utility.