Neurodivergent Scale for Interacting with Robots

The Neurodivergent Scale for Interacting with Robots (NSIR), developed by Sadownik (2025), is a specialized psychometric instrument designed to measure the quality of human-robot interaction (HRI) through a neurodivergent lens. Rather than relying on traditional neurotypical performance metrics, it focuses on the user's subjective internal experience and the quality of the user-robot relationship.

Core Dimensions and Factors

The scale typically measures three to four primary dimensions of the user-robot bond, including the following (a hypothetical scoring sketch follows the list):

  • Anthropomorphic Connection/Kinship: Measures the personal bond and perceived similarity between the user and the robot (e.g., “The robot is more like me than anyone else I know” or “I gave my robot a name”).
  • Social Comfort/Trust: Assesses the robot’s perceived emotional intelligence and reliability, such as its ability to detect the user’s feelings (e.g., “My robot can tell when I am sad”).
  • Safety: Evaluates the user’s sense of security and vulnerability, often using high-trust indicators such as “I feel comfortable undressing in front of my robot”.
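The source does not reproduce the NSIR's item set, response format, or scoring procedure, so the sketch below assumes a 1-5 Likert response format, a made-up item-to-subscale mapping, and simple subscale means purely for illustration.

```python
from statistics import mean

# Hypothetical item-to-subscale mapping; the real NSIR items are not
# reproduced here, so the item keys and 1-5 Likert format are assumptions.
SUBSCALES = {
    "kinship": ["item_01", "item_02", "item_03"],
    "social_comfort": ["item_04", "item_05", "item_06"],
    "safety": ["item_07", "item_08", "item_09"],
}

def score_nsir(responses: dict[str, int]) -> dict[str, float]:
    """Return the mean 1-5 rating for each subscale covered by `responses`."""
    scores = {}
    for subscale, items in SUBSCALES.items():
        values = [responses[item] for item in items if item in responses]
        if not values:
            raise ValueError(f"no responses for subscale '{subscale}'")
        scores[subscale] = mean(values)
    return scores

# Example with made-up responses from a single participant.
participant = {
    "item_01": 5, "item_02": 4, "item_03": 5,  # kinship
    "item_04": 3, "item_05": 4, "item_06": 4,  # social comfort / trust
    "item_07": 5, "item_08": 5, "item_09": 4,  # safety
}
print(score_nsir(participant))  # per-subscale mean scores
```

Using subscale means rather than sums keeps the scores comparable even if the subscales end up with different numbers of items.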

Heuristic and Theoretical Grounding

The NSIR serves as a heuristic evaluation tool that helps designers ensure social robots and AI systems are inclusive of neurodivergent communication patterns. It is often used in conjunction with the NSIR Heuristic, which addresses the following factors (a checklist-style sketch follows the list):

  • Neural Signal Speed (N): How robotic consistency accommodates the “biological latency” or atypical signal speed in neurodivergent social processing.
  • Social Predictability (S): The preference for logic-based, non-judgmental social partners over complex, high-speed human social cues.
  • Information Density (I): Reducing cognitive load by providing a “low-complexity” social environment.
  • Regulatory Comfort (R): Reaching a state of psychological safety where the user can be their authentic self without the need for camouflaging.
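The source names the four factors but does not say how a design team would apply them in review, so the sketch below recasts the heuristic as a simple checklist; the criterion wording, data structures, and pass/fail judgment format are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class HeuristicCriterion:
    code: str      # N, S, I, or R
    name: str
    question: str  # paraphrased review question, not official NSIR wording

NSIR_HEURISTIC = [
    HeuristicCriterion("N", "Neural Signal Speed",
        "Does the robot behave consistently and tolerate atypical response latency from the user?"),
    HeuristicCriterion("S", "Social Predictability",
        "Are the robot's social behaviors logic-based and non-judgmental rather than fast, ambiguous cues?"),
    HeuristicCriterion("I", "Information Density",
        "Does the robot keep the social environment low-complexity, reducing cognitive load?"),
    HeuristicCriterion("R", "Regulatory Comfort",
        "Can the user be their authentic self without needing to camouflage?"),
]

def failed_criteria(judgments: dict[str, bool]) -> list[str]:
    """Return codes of criteria the reviewer did not mark as satisfied."""
    return [c.code for c in NSIR_HEURISTIC if not judgments.get(c.code, False)]

# Example review: the design satisfies N, S, and I but not R.
print(failed_criteria({"N": True, "S": True, "I": True, "R": False}))  # ['R']
```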

Practical Applications

  • Diagnostic for Inclusivity: It validates non-normative behaviors—such as staring at the robot or sharing thoughts without speaking—as positive markers of connection rather than deficits.
  • Technical Implementation: It is used to evaluate AI and LLM behaviors, assessing how emergent capabilities (like chain-of-thought reasoning) impact a user’s trust and perceived safety.
  • The “Autistic Grawlix”: The scale is used to justify reclassifying non-standard symbolic substitutions (e.g., “F*$king”) as logic-driven communication rather than data noise.
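The grawlix reclassification has a practical consequence for text pipelines: tokens containing symbolic substitutions should be preserved and labeled rather than stripped as noise. The following is a minimal sketch of that idea; the regex, symbol set, and tagging scheme are assumptions, not part of the NSIR.

```python
import re

# A letter-symbol-letter pattern (e.g. "F*$king") is treated as a deliberate
# grawlix substitution rather than noise. The symbol set is an assumption.
GRAWLIX = re.compile(r"[A-Za-z][*$#@%!]+[A-Za-z]")

def tag_tokens(text: str) -> list[tuple[str, str]]:
    """Label each whitespace-separated token as 'grawlix' or 'plain' instead of filtering it out."""
    return [(tok, "grawlix" if GRAWLIX.search(tok) else "plain") for tok in text.split()]

print(tag_tokens("That was F*$king brilliant"))
# [('That', 'plain'), ('was', 'plain'), ('F*$king', 'grawlix'), ('brilliant', 'plain')]
```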