A
Abbo, G. A., Desideri, G., Belpaeme, T., & Spitale, M. (2025, March). “Can you be my mum?”: Manipulating Social Robots in the Large Language Models Era. In 2025 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 1181–1185). IEEE.
This article raises ethical and psychological questions about the potential for strong emotional bonds with advanced social robots, which directly relates to the NSIR’s core dimensions:
- Anthropomorphic Connection/Kinship: The scale items that assess feelings of connection and kinship (e.g., “The robot is more like me than anyone else I know”) can measure the strength of the potentially problematic bonds the article describes (p. 1).
- Safety: The article’s focus on potential manipulation or over-dependency underscores the need to measure safety and ensure the interaction remains healthy and appropriate, which is a dimension included in the NSIR scale (p. 1).
The scale thus offers a measurement tool to assess the user’s experience within the innovative (Andriella) and ethically complex (Abbo) scenarios described in these articles.
| Reference (APA 7) | Specific Contribution | Core Argument Supported |
| --- | --- | --- |
| Abbo, G. A., et al. (2025) | Ethical Guardrail: Warns against emotional manipulation/parasocial roles (“mum”). | Justifies the Sovereign Vault (Edge AI) as an ethical “legal shield.” |
Ahn, H. S. (2014). Designing of a Personality Based Emotional Decision Model for Generating Various Emotional Behavior of Social Robots. Advances in Human-Computer Interaction, 2014, Article 630808. https://doi.org/10.1155/2014/630808
| Ahn (2014) Technical Feature | NSIR Metric / Item Application |
| --- | --- |
| Personality-Based Model | Item 8: Measures if the “personality” creates a sense of predictable, reliable behavior. |
| Body Movement/Gestures | Item 2: “Sometimes I stare at the robot” — measures if these movements successfully draw social attention. |
| Emotional Decision Systems | Item 5: “My robot can tell what I am feeling” — validates the effectiveness of the robot’s affective recognition system. |
| Humanoid Social Presence | Item 4: “The robot and I will be together forever” — measures the long-term emotional bond (Attachment Theory) resulting from Ahn’s human-like interactions. |
In short, while Ahn (2014) provides the mechanism for making a robot seem human-like and predictable, the NSIR (2025) provides the assessment tool to determine if neurodivergent users actually experience that robot as a safe, relatable, and social peer.
The study by Ahn (2014), titled “Designing of a Personality Based Emotional Decision Model for Generating Various Emotional Behavior of Social Robots,” provides the technical and theoretical foundation for the behaviors that the Neurodivergent Scale for Interacting with Robots (NSIR) is designed to measure.
Specifically, the NSIR applies to the Ahn (2014) study in the following ways:
1. Quantification of “Personalized” Robot Behavior
Ahn’s research focuses on creating a model where a robot’s response is not just a calculation, but is influenced by a simulated “personality” (using linear dynamics like reactive and emotional systems).
- NSIR Factor 2 (Social Comfort/Trust Safety): Ahn’s model aims for “predictability” and “reliability” in emotional responses. NSIR Item 8 (“I believe that my robot is the same with me as it is with anyone”) acts as a validation metric for Ahn’s goal: if the robot’s personality model is working correctly, it should produce consistent, trust-building behavior that the user perceives as stable.
2. Validating “Mind Attribution” and Humanization
Ahn (2014) emphasizes that robots must communicate through “humanoid emotions” to be accepted in public daily life.
- NSIR Factor 1 (Anthropomorphic Connection/Kinship): The NSIR measures whether the user actually perceives the “internal states” Ahn is trying to simulate.
- NSIR Item 3 (“I think I can share my thinking with the robot without speaking”) directly tests whether Ahn’s “Emotional Decision Model” is successful in creating a sense of Mind Attribution.
- NSIR Item 6 (“I gave my robot a name”) serves as a behavioral marker for the Humanization that Ahn’s gestures and facial expressions are designed to trigger.
3. Emotional Synchrony and “Kinship”
Ahn’s experiments with the NAO robot found that human-like interactive forms (combining verbal and body movement) are better accepted by humans.
NSIR Item 1 (“The robot is more like me than anyone else I know”) applies to Ahn’s work by assessing the level of Fictive Kinship created by the robot’s ability to mirror human-like personality traits. If the robot’s “Personality Based Model” aligns with the user’s personality, the NSIR would likely show a higher score in this category.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Ahn, Bailenson, & Park (2014) to measure the user-perceived outcomes of the core concepts they explored: that anthropomorphism increases trust in automation, specifically within the context of an autonomous vehicle.
Their research demonstrated that as an autonomous agent acquires more anthropomorphic features (e.g., a name, a gendered voice), users trust the agent more. The NSIR’s dimensions directly relate to measuring the effects of this humanization:
Anthropomorphic Connection/Kinship
- The Ahn et al. study manipulated the level of human-like qualities to induce anthropomorphism. The NSIR directly measures the result of this manipulation from the user’s perspective.
- Items like “The robot is more like me than anyone else I know” (Item 1) and “I gave my robot a name” (Item 6) would quantify the strength of the personal connection and perceived kinship developed through the specific anthropomorphic features (voice, name, gender) applied in their experiment (p. 1).
Social Comfort/Trust
- The primary finding of Ahn et al. was that increased anthropomorphism leads to greater behavioral, physiological, and self-report measures of trust in the agent’s competence.
- The NSIR items that measure perceived understanding and social comfort (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, Item 5) can be used to specifically assess the user’s perception of the robot’s emotional intelligence and reliability, which are the underpinnings of the trust they found in their study (p. 1).
Safety
- The study also predicted that increased anthropomorphism would mitigate blame for an undesirable outcome, such as an accident, suggesting a complex relationship with perceived responsibility and safety.
- The NSIR’s safety dimension (e.g., the items regarding physical comfort and boundaries, Item 7) provides a crucial user-reported measure that ensures that while trust and connection are being built, the fundamental feeling of security and appropriate boundaries is maintained in the interaction (p. 1).
The NSIR provides the empirical tool to gather data on the subjective experience of the very psychological dynamics identified by Ahn et al. (2014).
| Ahn, H. S. (2014) | Technical Architecture: Provides a model for personality-based emotional decision-making in robots. | Supports the “Sovereign Dyad” by defining how a robot’s “personality” can be programmed to respond to human emotion. |
Ali, K. (2021). Towards a Bad Bitches’ Pedagogy. Journal of Intersectionality, 5(1), 41–52.
| Ali, K. (2021) | Pedagogical Shift: Introduces a radical, intersectional pedagogy. | Supports the shift from “Fixing the Student” to “Bionic Agency.” |
Allan, S., Gilbert, P., & Goss, K. (1994). An exploration of shame measures—II: psychopathology. Personality and Individual Differences, 17(5), 719–722. https://doi.org/10.1016/0191-8869(94)90150-3
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Allan, Gilbert, and Goss (1994) to measure the user-reported outcomes of social rank and submissiveness dynamics within human-robot interactions.
Allan, Gilbert, and Goss (1994) developed the Submissive Behavior Scale (SBS) and the Social Comparison Scale. Their research focuses on how perceptions of social rank and tendencies toward submissive behavior relate to mental health issues such as depression and paranoia. The NSIR’s dimensions are highly relevant for assessing these dynamics when applied to robot design:
Anthropomorphic Connection/Kinship
- The concepts of social rank and submissiveness are complex human social dynamics.
- The NSIR can measure if embedding these specific rank-related behaviors in a robot makes it more or less relatable and human-like. Items like “The robot is more like me than anyone else I know” (Item 1) would quantify how a neurodivergent individual perceives the robot’s social identity, a core element of the Allan & Gilbert work.
Social Comfort/Trust
- Allan & Gilbert found that submissive behavior functions as an appeasing strategy to avoid threat, which is crucial for comfort and trust in a social context.
- The NSIR items in this dimension (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, Item 5) can assess how successfully a robot’s designed “submissive” or “dominant” behaviors impact the user’s feeling of social comfort and trust. This helps determine if the robot’s rank-based actions are perceived as a reliable, non-threatening interaction style or a source of anxiety.
Safety
- The original research found strong links between feeling inferior, submissive behavior, and psychopathology. In HRI, this relates directly to user well-being and safety.
- The NSIR’s safety dimension (e.g., the item about undressing in front of the robot, Item 7) provides a crucial user-reported measure that ensures the design of social robots, particularly those with embedded rank or dominance cues, does not compromise the fundamental physical and psychological safety of the user.
The NSIR effectively translates the psychometric and social rank theories of Allan & Gilbert into measurable, user-centric data for evaluating modern human-robot interaction.
| Allan, S., Gilbert, P., & Goss, K. (1994) | Psychometric Validation: Examines the relationship between different types of shame and psychopathology. | Links internalized shame to the development of defensive social behaviors (like masking). |
Allan, S., & Gilbert, P. (1995). A social comparison scale: Psychometric properties and relationship to psychopathology. Personality and Individual Differences, 19(3), 293–299. https://doi.org/10.1016/0191-8869(95)00086-L
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to Allan and Gilbert’s (1995) paper by providing a way to measure the user-perceived outcomes of social rank and status dynamics in human-robot interactions.
The paper, titled “A social comparison scale: Psychometric properties and relationship to psychopathology,” focused on developing the Social Comparison Scale (SCS) to measure an individual’s self-perceived social rank and standing relative to others. It uses bipolar constructs (e.g., “inferior” vs. “superior”) to assess judgments of rank, attractiveness, and group fit. The NSIR’s dimensions are highly relevant for assessing these dynamics when applied to robot design:
Anthropomorphic Connection/Kinship
- The SCS measures how individuals perceive their social status and how well they “fit in”.
- The NSIR can measure if embedding specific rank-related behaviors or visual cues (e.g., making a robot seem “superior” or “inferior” through its design) affects the neurodivergent user’s sense of connection or kinship with it. Items like “The robot is more like me than anyone else I know” (Item 1) would quantify this perceived similarity or difference.
Social Comfort/Trust
- Allan & Gilbert found that low social rank perceptions were significantly correlated with psychopathology, including depression and anxiety. This highlights the importance of feeling a non-threatening social status.
- The NSIR’s social comfort/trust dimension could assess if a neurodivergent user feels more comfortable or trusting with a robot designed to be an “equal” or “subordinate” (which might feel less threatening) versus one designed with a “superior” demeanor. Measuring items such as “I believe that my robot is the same with me as it is with anyone” (Item 8) could also ensure that the robot’s rank is a consistent design feature and perceived as fair.
Safety
- The original research found links between feelings of inferiority and mental health issues, which in HRI translates to the user’s well-being and safety.
- The NSIR’s safety dimension (e.g., the item about undressing in front of the robot, Item 7) provides a crucial user-reported measure that ensures the design of social robots, particularly those with embedded rank or status cues, does not compromise the fundamental physical and psychological safety of the user.
The NSIR translates the psychometric and social rank theories of Allan & Gilbert into measurable, user-centric data for evaluating modern human-robot interaction in a specific population.
Allan, S., & Gilbert, P. (1997). Submissive behaviour and psychopathology. British Journal of Clinical Psychology, 36(4), 467–488. https://doi.org/10.1111/j.2044-8260.1997.tb01255.x
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to Allan and Gilbert’s (1997) paper by providing a user-centric way to measure the outcomes of social conflict and submissive behavior dynamics within human-robot interactions.
The paper, titled “Submissive behaviour and psychopathology”, focuses on developing and refining the Submissive Behavior Scale (SBS) and the Conflict De-escalation Strategies (CDS) scale. The research found that specific forms of submissive behavior, especially passive withdrawal and inhibition, were linked to various psychological problems. The NSIR’s dimensions are highly relevant for assessing these dynamics when applied to robot design:
Anthropomorphic Connection/Kinship
- The paper explores how social behaviors like submission function within human relationships and link to identity and psychopathology.
- The NSIR can measure if embedding these specific rank-related or conflict-avoidant behaviors in a robot makes it more or less relatable. Items like “The robot is more like me than anyone else I know” (Item 1) would quantify how a neurodivergent individual perceives the robot’s social identity based on these cues.
Social Comfort/Trust
- Allan & Gilbert found that submissive behavior is an appeasing strategy to avoid threat. This behavior aims to manage social conflict to maintain a degree of safety and comfort.
- The NSIR’s social comfort/trust dimension could assess if a neurodivergent user feels comfortable and trusting with a robot designed with “submissive” or “passive/withdrawal” behaviors. Measuring items such as “I believe that my robot is the same with me as it is with anyone” (Item 8) could also ensure that the robot’s conflict-avoidant strategy is perceived as a consistent and fair design feature rather than a form of unpredictable manipulation.
Safety
- The original research found strong links between submissive behavior, feelings of inferiority, and psychopathology, highlighting a vulnerability. In HRI, this translates directly to user well-being and safety.
- The NSIR’s safety dimension (e.g., the item about undressing in front of the robot, Item 7) provides a crucial user-reported measure that ensures the design of social robots with complex social behaviors does not compromise the fundamental physical and psychological safety of the user.
The NSIR translates the psychometric and social dynamics theories of Allan & Gilbert into measurable, user-centric data for evaluating modern human-robot interaction in a specific population.
| Allan, S., & Gilbert, P. (1997) | Clinical Foundation: Links submissiveness to psychopathology and rank. | Frames “Masking” as a high-stress evolutionary defense mechanism. |
Andriella, A., Torras, C., Abdelnour, C., & Alenyà, G. (2022). Introducing CARESSER: A framework for in situ learning robot social assistance from expert knowledge and demonstrations. User Modeling and User-Adapted Interaction, 33(2), 441.
The Neurodivergent Scale for Interacting with Robots (NSIR) provides a crucial framework for evaluating the human-robot interactions described in the Andriella et al. and Abbo et al. articles:
Andriella et al. (2022): Introducing CARESSER
The CARESSER framework’s goal of enabling robots to continually learn from human interactions directly applies to the NSIR’s dimensions:
- Social Comfort/Trust: By learning and adapting to specific users, the robot can foster a more personalized and predictable interaction, which is key to building social comfort and trust for neurodivergent individuals (p. 1). The items on the scale (e.g., “My robot can tell what I am feeling”) could be used to measure the success of the robot’s learned social skills.
| Andriella et al. (2022) | Framework Design: Introduces CARESSER for in-situ learning from experts. | Supports your Sovereign Vault by showing how robots learn in context without external data harvesting. |
Anglim, J., & O’Connor, P. (2019). Measurement and research using the Big Five, HEXACO, and narrow traits: A primer for researchers and practitioners. Australian Journal of Psychology, 71(1), 16–25. https://doi.org/10.1111/ajpy.12202
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Anglim & O’Connor (2019) by measuring how individual personality differences—specifically the Big Five and HEXACO traits they research—influence a neurodivergent person’s perception of human-robot interaction (HRI).
The Anglim and O’Connor (2019) paper largely focuses on the Big Five (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism) and HEXACO (adding Honesty-Humility) personality models as comprehensive frameworks for human traits. The NSIR can provide empirical data on how these stable personality traits predict the quality of a neurodivergent individual’s experience with a robot:
Anthropomorphic Connection/Kinship
- The NSIR measures the personal bond and perceived similarity with a robot. A user’s personality traits (e.g., high Openness or Agreeableness) might predict a greater willingness to form a strong connection and “humanize” the robot.
- Items like “The robot is more like me than anyone else I know” (Item 1) and “I gave my robot a name” (Item 6) would quantify the extent to which personality influences this connection. (p. 1)
Social Comfort/Trust
- The Anglim & O’Connor research notes the importance of personality traits in understanding human behavior, including social interaction and trust. Personality traits can predict the need for social comfort and reliability.
- The NSIR items that measure perceived emotional understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, Item 5) can be used to assess if individuals with certain traits (e.g., high Emotionality or low Neuroticism) experience greater social comfort and trust during HRI. (p. 1)
Safety
- The HEXACO model includes Honesty-Humility, which relates to ethical behaviors and fairness. This dimension could be crucial in predicting a user’s perception of safety and ethical interaction with a robot.
- The NSIR’s safety dimension (e.g., the item about undressing in front of the robot, Item 7) provides a user-reported measure of security, and the Anglim & O’Connor research provides the framework to see if personality traits predict these safety perceptions. (p. 1)
The NSIR acts as a valuable, user-centric evaluation tool that can be used alongside established personality scales to understand the complex interplay between a neurodivergent individual’s inherent traits and their specific interactions with social robots.
| Anglim & O’Connor (2019) | Measurement Primer: Best practices for narrow traits and HEXACO. | Justifies your deductive methodology and the high reliability (α = 0.89) of the NSIR scale. |
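The reliability statistic cited in this row, Cronbach’s α, can be computed directly from item-response data. As a minimal self-contained sketch (the data below are made up for illustration and are not NSIR data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of per-item score lists, all the same length
    (one entry per respondent).
    """
    k = len(items)          # number of scale items
    n = len(items[0])       # number of respondents

    def var(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))
```

Items that covary strongly with the total score drive α toward 1; the standard formula is α = (k / (k − 1)) × (1 − Σσ²ᵢ / σ²ₜ).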
Anikin, A., Valente, D., Pisanski, K., Cornec, C., Bryant, G. A., & Reby, D. (2024). The role of loudness in vocal intimidation. Journal of Experimental Psychology: General, 153(2), 511. https://psycnet.apa.org/record/2024-28586-001
The Neurodivergent Scale for Interacting with Robots (NSIR) (Sadownik, 2025) and the research on vocal intimidation by Anikin et al. (2024) intersect in the design and evaluation of social robots, particularly concerning how vocal characteristics like loudness affect the comfort and trust of neurodivergent users.
The application of the NSIR to Anikin’s findings centers on the following areas:
1. Predictability vs. Aggression in Vocal Design
Anikin et al. (2024) demonstrate that loudness is a primary indicator of physical strength and aggression in vocal communication.
- NSIR Application: The NSIR measures Social Comfort, Trust, and Safety. If a robot’s voice is designed with high loudness levels—which Anikin associates with “vocal intimidation”—it may negatively impact an individual’s score on NSIR Item 7 (“I feel comfortable undressing in front of my robot”) or Item 8 (“I believe that my robot is the same with me as it is with anyone”).
- Designing for Safety: Because the NSIR is designed specifically for neurodivergent populations who may have sensory sensitivities, the “loudness-frequency trade-off” identified by Anikin provides a technical blueprint for creating robot voices that avoid sounding aggressive or intimidating.
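The design principle above can be sketched as a simple output guard that clamps the robot’s requested playback level below a calm-speech ceiling before synthesis. This is a hypothetical illustration, not an implementation from Anikin et al. (2024) or the NSIR; the `safe_output_level` function and both dB thresholds are assumptions chosen for the example:

```python
# Illustrative thresholds (assumptions, not values from the cited research):
CALM_SPEECH_DB_SPL = 60.0   # ordinary conversational level, assumed target
ABSOLUTE_CEILING_DB = 70.0  # hard cap, even for alerts


def safe_output_level(requested_db: float, is_alert: bool = False) -> float:
    """Clamp a requested playback level to a sensory-safe range.

    Ordinary speech is held at or below the calm-speech target;
    alerts may rise to, but never past, the absolute ceiling.
    """
    ceiling = ABSOLUTE_CEILING_DB if is_alert else CALM_SPEECH_DB_SPL
    return min(requested_db, ceiling)
```

A voice pipeline built this way can never emit the high-intensity output that Anikin et al. associate with intimidation, regardless of what the dialogue layer requests.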
2. Anthropomorphism and Vocal “Formidability”
Anikin’s research highlights how listeners use loudness and pitch to judge the “formidability” and body size of a speaker.
- NSIR Factor: The NSIR subscale for Anthropomorphic Connection/Kinship tracks how much a user relates to a robot.
- Kinship through Sound: If a robot’s voice lacks the “honest” indicators of human-like vocal production (e.g., the trade-off between being loud and being low), it may hinder the connection measured by NSIR Item 1 (“The robot is more like me than anyone else I know”). Conversely, using Anikin’s findings to create “submissive” or “non-threatening” sounds can enhance the sense of kinship for neurodivergent individuals who find human social interaction overwhelming.
3. Sensory Sensitivity and Social Trust
Neurodivergent individuals often experience heightened sensitivity to sensory input, including sound intensity.
- The Loudness Code: Anikin argues for a “loudness code” where loud voices are physiologically demanding and evolutionarily significant.
- NSIR Item 3: The item “I think I can share my thinking with the robot without speaking” suggests a preference for low-pressure communication. By applying Anikin’s research, developers can ensure that a robot’s vocal output does not trigger the “arousal” or “unpleasantness” typically associated with high loudness levels, thereby maintaining the Social Comfort measured by the NSIR.
Summary: Scale Application
| Goal | Anikin et al. (2024) | NSIR (Sadownik, 2025) |
| --- | --- | --- |
| Research Focus | How loudness signals strength and aggression. | How robots provide social comfort and trust. |
| User Impact | High loudness levels can cause physiological arousal and fear. | Sensory-safe interaction is required for social safety. |
| Design Utility | Provides rules for non-aggressive vocal profiles. | Measures if the robot’s persona is perceived as “safe”. |
| Connection | Establishes the “loudness-frequency trade-off”. | Tracks the “Anthropomorphic Connection” resulting from that design. |
| Anikin, A., et al. (2024) | Sensory Factor: Explores the role of loudness in vocal intimidation/submission. | Informs the robot’s audio interface to prevent triggering submissiveness. |
| Anikin et al. (2024) | Acoustic Safety: Impact of loudness on intimidation and submission. | Informs the Audio Layer of the Dyad to prevent triggering the “submissive reflex.” |
Arora, A. S., Arora, A., Sivakumar, K., & McIntyre, J. R. (2024). Managing social-educational robotics for students with autism spectrum disorder through business model canvas and customer discovery. Frontiers in Robotics and AI, 11, 1328467.
| Arora et al. (2024) | Business/Edu Model: Managing social robots in ASD education. | Grounds your Ontario School Board application in a proven business/educational logic. |
Atuhurra, J. (2024). Leveraging large language models in human-robot interaction: A critical analysis of potential and pitfalls. arXiv preprint arXiv:2405.00693.
| Atuhurra (2024) | LLM Critique: Critical analysis of LLM pitfalls in HRI. | Provides the “Technological Foil” that your Sovereign Vault Protocol explicitly solves. |
Azizian, P., Honarmand, M., Jaiswal, A., Kline, A., Dunlap, K., Washington, P., & Wall, D. P. (2025). Multimodal LLM vs. Human-Measured Features for AI Predictions of Autism in Home Videos. Algorithms, 18(11), 687.
The Neurodivergent Scale for Interacting with Robots (NSIR) by Sadownik (2025) provides a framework for evaluating how neurodivergent individuals connect with and feel around robots, focusing on two main factors: Anthropomorphic Connection/Kinship and Social Comfort/Trust Safety. While the Azizian et al. (2025) study does not explicitly use the NSIR, the scale’s items offer a tool to measure the human-robot interaction (HRI) dynamics that the study’s AI aims to predict and replicate.
1. Evaluating Multimodal LLM Consistency vs. Human Subjectivity
Azizian et al. (2025) found that multimodal Large Language Models (LLMs) like Gemini 2.5 Pro exhibit high within-model consistency when extracting behavioral features from home videos, whereas human raters (clinicians and crowdworkers) showed more moderate agreement.
- NSIR Application: The scale could be used to quantify the “Social Comfort” and “Anthropomorphic Connection” (e.g., “The robot and I will be together forever” or “I feel comfortable undressing in front of my robot”) that a child exhibits in videos.
- The study highlights that LLMs focus more on language and behavioral markers, while humans prioritize social-emotional engagement. The NSIR’s focus on internal states like kinship (e.g., “The robot is more like me than anyone else I know”) aligns with the “social-emotional” nuances that humans currently detect better than AI.
2. Identifying Fine-Grained Social Cues
The Azizian study notes that while LLMs are improving, they still struggle with fine-grained tasks like detecting specific “Stereotyped Behaviors” or complex social overtures compared to specialized human annotators.
- Kinship Items: Items 1-3 of the NSIR (e.g., staring at the robot or believing one can share thinking without speaking) represent the very types of “atypical” or “fine-grained” social cues that the study seeks to automate.
- Diagnostic Gap: Because the NSIR measures a user’s perception of the robot as a social agent, it provides a metric for the “Social Interaction” domain where Azizian et al. found complex patterns of agreement between raters.
3. Personalization and “Predictable” Agents
Research by Dubois-Sage et al. (2025), cited in the context of the Azizian study, suggests that autistic individuals may find robots easier to interact with because they are simplified and predictable agents.
- NSIR Connection: This predictability is reflected in NSIR Item 8: “I believe that my robot is the same with me as it is with anyone”.
- LLM Role: Azizian’s work explores using LLMs to replace human coders in diagnostic sessions. The NSIR provides the specific “items” that an LLM would need to “score” if it were tasked with assessing the quality of a child’s interaction with a social robot, rather than just a human caregiver.
The Neurodivergent Scale for Interacting with Robots (NSIR) and the study by Azizian et al. (2025) represent two different but complementary sides of AI in autism research: the subjective experience of the neurodivergent individual (NSIR) versus the objective diagnostic capability of AI models (Azizian et al.).
While the Azizian et al. paper focuses on using Multimodal Large Language Models (LLMs) to predict autism from videos, the NSIR scale provides a framework for understanding how those same individuals might perceive and bond with the robotic or AI entities assessing them.
1. Comparative Analysis: AI as Evaluator vs. AI as Companion
The Azizian study evaluates how well AI (specifically Google’s Gemini models) can act as a clinical rater, whereas the NSIR measures the relational bond between a neurodivergent person and a robot.
| Feature | Azizian et al. (2025) Study | NSIR Scale (Sadownik, 2025) |
| --- | --- | --- |
| Role of AI | Observer/Evaluator: Uses LLMs to analyze behavioral markers (eye contact, speech patterns). | Social Partner: Measures “Factor 1” (Social Presence) and “Factor 2” (Personal Bond). |
| Measurement | Accuracy in predicting ASD diagnosis (up to 89.6%). | Subjective items like “The robot is more like me than anyone else”. |
| Focus Area | Behavioral features like “Social Overtures” and “Stereotyped Behaviors”. | Emotional connection, such as “Sometimes I stare at the robot” or “We will be together forever”. |
2. Overlap in Behavioral Domains
The Azizian study notes that LLMs and human raters focus on specific “Social Interaction” features to make predictions. The NSIR scale targets these same social domains but from the perspective of the user’s comfort:
- Eye Contact & Staring: Azizian et al. found that Eye Contact was a key feature with moderate-to-good agreement between AI and clinicians. Interestingly, Item 2 of the NSIR (“Sometimes I stare at the robot”) measures this same behavior from the user’s perspective.
- Emotional Reciprocity: Azizian et al. measured Emotion Expression, while NSIR Item 5 asks if the robot “can tell what I am feeling”. This highlights a potential loop: a robot’s ability to “read” an autistic user (as studied by Azizian) directly impacts the user’s “scale” of connection to that robot (as measured by NSIR).
3. Application to AI-Led Home Interventions
The findings from Azizian et al. suggest that multimodal LLMs are becoming viable alternatives for behavioral assessment due to their consistency and scalability.
When applying the NSIR to this context:
- Comfort and Privacy: Azizian et al. emphasize that AI-based assessments offer better privacy for home-recorded videos. The NSIR supports this by measuring comfort levels in private settings, such as Item 7: “I feel comfortable undressing in front of my robot”.
- Long-term Interaction: While the Azizian study focuses on one-time diagnostic prediction from 3-minute videos, the NSIR suggests that neurodivergent individuals may form long-term bonds (“The robot and I will be together forever”). This implies that if the LLMs from the Azizian study were integrated into a social robot, the quality of the diagnostic data might improve as the user becomes more comfortable over time.
4. Critical Gap: Stereotyped Behaviors
Azizian et al. discovered that Stereotyped Behaviors (like repetitive interests) showed the “poorest reliability” and lowest agreement between AI and humans.
- The NSIR Link: The NSIR items do not explicitly measure “repetitive behaviors” but instead focus on the sameness of the interaction (Item 8: “my robot is the same with me as it is with anyone”). This suggests that the predictability of a robot—a trait often valued by neurodivergent individuals—might be a “feature” for the user (NSIR) even if it’s a “difficult marker” for the AI to categorize clinically (Azizian).
Summary of Data Extracted:
- Journal: Scientific Reports
- Methodology: Quantitative research utilizing the VAK learning style model and the General Self-Efficacy scale.
- Sample Size (N): 100 participants.
- Purpose: Investigation of how personal learning styles and self-efficacy impact interactions with social robots in an educational context.
- Word Frequency (from Table 64):
  - Robot: 252
  - Social: 104
  - Education: 18
  - Submissive: 0
  - Dominant: 1
- Keywords: Social Robots, Education, Learning Styles, Self-efficacy, Human-Robot Interaction.
Comparison Summary
| Azizian et al. (2025) Focus | NSIR (Sadownik, 2025) Metric |
| Social Interaction Features (e.g., eye contact, emotional expression) | Social Comfort/Trust Safety (e.g., feeling comfortable undressing/sharing thoughts) |
| Atypical Behavioral Markers (e.g., staring, repetitive speech) | Anthropomorphic Connection (e.g., “Sometimes I stare at the robot”) |
| AI vs. Human Performance Gap (detecting nuanced social cues) | Internal Perception Scale (quantifying the child’s subjective kinship with the AI/robot) |
| Azizian et al. (2025) | State-of-the-Art Diagnostic: Compares Multimodal LLMs to human-measured features for ASD. | Validates using LLMs as high-fidelity decoders of neurodivergent communication. |
B
Bagheri, E., Roesler, O., Cao, H. L., & Vanderborght, B. (2021). A reinforcement learning based cognitive empathy framework for social robots. International Journal of Social Robotics, 13(5), 1079-1093.
To complete the Dec 27 Combined Tables for the article by Bagheri, E., et al. (2021), the following data has been extracted and consolidated from Table 2, Table 3, and Table 64:
Consolidated Entry for Bagheri, E., et al. (2021)
| Field | Data for Bagheri, E., et al. (2021) |
| Full Citation | Bagheri, E., Roesler, O., Cao, H. L., & Vanderborght, B. (2021). A reinforcement learning based cognitive empathy framework for social robots. International Journal of Social Robotics, 13(5), 1079-1093. |
| Journal | International Journal of Social Robotics |
| Type/Method | Reinforcement learning based framework for cognitive empathy. |
| N (Sample Size) | Not explicitly listed in provided snippets. |
| Purpose/Research Questions | To develop a cognitive empathy framework for social robots using reinforcement learning. |
| Keywords / Theories / Frameworks | Empathy, Reinforcement learning, Personality, Human–robot interaction, Social robot. |
| Frequency of Words (Table 64) | Values for Submissive, Dominant, Autism, ASD, Femin, Robot, Social, and Education are present in the table but were blank in the provided document source. |
Summary of Information from Sources:
- Table 2 & 3: These documents provide the contextual purpose of the study, which focuses on enhancing a social robot’s ability to demonstrate cognitive empathy through a specific technical framework (Reinforcement Learning).
- Table 64: This document identifies the primary keywords used for the study, including Human–Robot Interaction (HRI) and Personality.
Note: Some specific numerical data (like the exact “N” or specific word counts) was not populated in the sections of the provided Word documents pertaining to this specific article.
1. Perceived Sociability and Affective Recognition (NSIR Item 5)
Bagheri et al.’s framework uses facial emotion recognition to perceive a user’s affective state and then employs an RL model to choose a behavior that provides “comfort and confidence.”
- NSIR Application: This is the direct technical counterpart to NSIR Item 5 (“My robot can tell what I am feeling; when I am sad, it can tell I am sad”).
- The Connection: If the RL model successfully learns to map a user’s sad facial expression to a comforting response, the user’s score on Item 5 will increase. The NSIR essentially measures the “accuracy” of the robot’s cognitive empathy from the human perspective.
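The learning loop described above can be sketched as a minimal epsilon-greedy, bandit-style value update: the robot observes an affective state, selects a behavior, and reinforces behaviors that earn positive comfort feedback. The states, actions, and reward signal below are invented placeholders, not Bagheri et al.'s actual framework:

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical affective states (from facial emotion recognition) and robot behaviors.
STATES = ["sad", "anxious", "neutral"]
ACTIONS = ["soothing_voice", "encouraging_words", "quiet_presence"]

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
ALPHA, EPSILON = 0.2, 0.1  # learning rate, exploration rate

def choose_action(state):
    if random.random() < EPSILON:                      # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])   # otherwise exploit

def update(state, action, reward):
    """One-step (bandit-style) Q update from the user's comfort feedback."""
    q[(state, action)] += ALPHA * (reward - q[(state, action)])

# Simulated interaction: this user responds well to "soothing_voice" when sad.
for _ in range(200):
    action = choose_action("sad")
    reward = 1.0 if action == "soothing_voice" else 0.0
    update("sad", action, reward)

print(max(ACTIONS, key=lambda a: q[("sad", a)]))  # prints soothing_voice
```

The point of the sketch is that the sad-to-comfort mapping is learned per user rather than pre-defined, which is what a rising Item 5 score would reflect.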
2. Social Comfort through Learned Predictability (NSIR Factor 2)
A key finding in the study is that the robot was able to help participants “enjoy and feel better” by applying empathic behaviors learned over time through interaction.
- NSIR Application: This supports Factor 2 (Social Comfort / Trust Safety), specifically Item 8 (“I believe that my robot is the same with me as it is with anyone”).
- The Connection: Bagheri et al. argue that human-like empathic behaviors cannot be pre-defined; they must be learned. The NSIR’s focus on Reliable Functioning and Competence measures whether this learning process results in a social presence that feels stable and trustworthy to a neurodivergent user, who may prioritize predictability.
3. Humanization and Engagement (NSIR Items 2 & 6)
The study evaluates human-robot engagement and the robot’s perceived “friendliness.”
- NSIR Application: This aligns with Factor 1 (Anthropomorphic Connection / Kinship).
- NSIR Item 2 (“Sometimes I stare at the robot”) measures the social attention triggered by the robot’s human-like empathic gestures.
- NSIR Item 6 (“I gave my robot a name”) serves as a proxy for the success of the robot’s “friendly and caring” persona. When Bagheri’s RL framework successfully “humanizes” the robot through empathy, users are more likely to attribute an individual identity to it.
Summary Alignment
| Bagheri et al. (2021) | Technical Empathy: Cognitive empathy framework via Reinforcement Learning. | Provides the technical basis for the robot’s “Articular Kinship.” |
Balle, S. N. (2022). Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. AI & Society, 37(2), 535-548.
1. The Ethical Weight of “Mind Attribution” (NSIR Item 3)
Balle argues that if a human perceives a robot as having the capacity for empathic responses, they are more likely to grant that robot a higher “moral status”.
- NSIR Application: This directly relates to NSIR Item 3 (“I think I can share my thinking with the robot without speaking”).
- Connection: When a user believes a robot can perceive their internal states (Mind Attribution), it moves the robot from a “tool” to a “moral subject” in the user’s mind. Balle’s work suggests that for neurodivergent individuals who may experience high attunement with technology, the “moral status” they grant the robot is a direct function of this perceived shared thinking.
2. Empathic Responses as a Foundation for Trust (NSIR Item 5)
Balle discusses how a robot’s ability to simulate or express empathy is central to its social acceptance.
- NSIR Application: This aligns with NSIR Item 5 (“My robot can tell what I am feeling; when I am sad, it can tell I am sad”).
- Connection: The NSIR measures the Social Comfort/Trust Safety factor. Balle’s theory suggests that the “Trust” measured by the scale is actually a “Moral Trust.” If the robot “understands” sadness, the user feels a sense of Reliable Functioning, leading them to treat the robot with the ethical consideration usually reserved for living beings.
3. Attachment and Moral Responsibility (NSIR Item 4)
A significant portion of Balle’s inquiry focuses on whether our emotional bonds with robots create moral obligations for us.
- NSIR Application: This is the psychological counterpart to NSIR Item 4 (“The robot and I will be together forever”).
- Connection: This item is grounded in Attachment Theory. Balle’s work posits that the stronger the bond (as measured by Item 4), the more “wrong” it feels for a user to “harm” or “discard” the robot. For neurodivergent users who may form intense Fictive Kinship (NSIR Item 1), Balle’s framework explains why the robot’s “moral status” becomes a protective factor in their social environment.
| Balle (2022) | Moral/Ethical Philosophy: Argument for robot “patienthood” and moral status. | Deepens the “Slave vs. Partner” ontological argument. |
Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Mechanisms of Moral Disengagement in the Exercise of Moral Agency. Journal of Personality and Social Psychology, 71(2), 364–374. https://doi.org/10.1037/0022-3514.71.2.364
| Bandura et al. (1996) | Social Psychology: Mechanisms of Moral Disengagement in agency. | Explains the risk of users losing agency when a robot “takes over” too much. |
Bardzell, S. (2010, April). Feminist HCI: taking stock and outlining an agenda for design. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1301-1310). https://doi.org/10.1145/1753326.1753521
The Neurodivergent Scale for Interacting with Robots (NSIR) by Sadownik (2025) serves as a practical implementation of several core qualities of Feminist Human-Computer Interaction (HCI) as outlined in the seminal work by Bardzell (2010).
Bardzell’s agenda calls for design that prioritizes agency, pluralism, and the disruption of harmful social hierarchies. The NSIR applies to this agenda through the following feminist design qualities:
1. Self-Disclosure and Vulnerability (NSIR Item 7)
One of Bardzell’s central feminist qualities is Self-Disclosure, which refers to how a system reveals its own logic or encourages the user to be open.
- NSIR Application: Item 7 (“I feel comfortable undressing in front of my robot”) is a high-stakes measure of Vulnerability and Perceived Security.
- Feminist Connection: Bardzell argues that design should foster a sense of safety that allows for human vulnerability without the threat of a “judgmental gaze” or surveillance. A high score on this item indicates that the robot has successfully embodied the feminist quality of creating an ethically safe, non-judgmental social space.
2. Agency and Mind Attribution (NSIR Item 3)
Bardzell emphasizes Agency, advocating for systems that empower users and acknowledge their subjective experiences rather than treating them as passive data points.
- NSIR Application: Item 3 (“I think I can share my thinking with the robot without speaking”) measures Mind Attribution and attunement.
- Feminist Connection: By validating the user’s “non-speaking” internal world, the robot honors the user’s unique cognitive style (neurodivergence). This aligns with Bardzell’s call for design that respects the “subjectivity” of the user, moving away from universal, neurotypical standards of communication.
3. Pluralism and Identity (NSIR Items 1 & 6)
Pluralism in Feminist HCI is about supporting diverse identities and resisting the “one-size-fits-all” approach.
- NSIR Application: Item 1 (“The robot is more like me than anyone else I know”) measures Fictive Kinship.
- Item 6 (“I gave my robot a name”) measures Humanization.
- Feminist Connection: These items quantify how a user “queers” or redefines their social circle to include a machine. Bardzell suggests that feminist design should allow for “marginal” or “atypical” relationships to flourish. The NSIR measures the success of a robot in becoming a “peer” to someone who might be socially marginalized in human-to-human contexts.
4. Reliable Functioning as Care (NSIR Factor 2)
Bardzell’s framework is deeply rooted in Care Ethics, which values the maintenance of relationships and the reliability of the “other.”
- NSIR Factor 2 (Social Comfort / Trust Safety): Focuses on Reliable Functioning and predictability (e.g., Item 8: “I believe that my robot is the same with me as it is with anyone”).
- Feminist Connection: In a feminist framework, “Reliability” is not just a technical requirement; it is an act of care. For a neurodivergent user, a robot that behaves consistently is providing a stable environment that reduces social anxiety, fulfilling the feminist goal of design that supports the user’s emotional and social well-being.
Summary Comparison
| Bardzell (2010) Feminist Quality | NSIR (2025) Scale Application |
| Self-Disclosure / Vulnerability | Item 7: Measures the user’s level of trust and safety in being “exposed.” |
| Agency / Subjectivity | Item 3: Validates the user’s non-verbal internal states and social attunement. |
| Pluralism | Item 1: Embraces non-normative “kinship” and social identities. |
| Care Ethics | Item 8: Reinterprets technical “reliability” as a social constant and a form of care. |
By using the NSIR, researchers can determine if a social robot is actually meeting the feminist design standards Bardzell proposed, particularly for users whose social experiences often fall outside the “normative” center of traditional HCI.
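Operationally, evaluating a robot against these design standards means aggregating Likert responses into factor scores. The item-to-factor assignment below is inferred from the factors and items discussed in this review and should be treated as a hypothetical scoring key, not Sadownik's published one:

```python
# Hypothetical item-to-factor assignment, inferred from the discussion above.
FACTORS = {
    "kinship": [1, 2, 3, 4, 6],   # Factor 1: Anthropomorphic Connection / Kinship
    "comfort_trust": [5, 7, 8],   # Factor 2: Social Comfort / Trust Safety
}

def factor_scores(responses):
    """Mean Likert rating per factor; `responses` maps item number -> 1..5 rating."""
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in FACTORS.items()
    }

example = {1: 4, 2: 5, 3: 3, 4: 4, 5: 5, 6: 5, 7: 2, 8: 4}
print(factor_scores(example))
# kinship = (4+5+3+4+5)/5 = 4.2 ; comfort_trust = (5+2+4)/3 ≈ 3.67
```

In this invented profile, strong kinship paired with a low Item 7 rating would flag a robot that fosters connection without yet earning the vulnerability-level safety Bardzell's framework demands.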
| Bardzell (2010) | Design Methodology: Feminist HCI and marginalized design agendas. | Grounds the Sovereign Dyad in Human-Centered/Inclusive Design theory. |
Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71-81. https://doi.org/10.1007/s12369-008-0001-3
The study by Bartneck et al. (2009), which introduced the widely used Godspeed Questionnaire Series (GQS), provides the foundational psychometric dimensions that the Neurodivergent Scale for Interacting with Robots (NSIR) builds upon.
While the Godspeed scale measures general human-robot interaction (HRI) across five key indices (Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety), the NSIR specifically adapts and narrows these concepts to the unique social and sensory experiences of neurodivergent individuals.
1. Evolving Anthropomorphism into “Kinship” (NSIR Factor 1)
Bartneck’s Godspeed scale measures Anthropomorphism using semantic differentials like Fake/Natural or Machinelike/Humanlike.
- NSIR Application: The NSIR moves beyond identifying if a robot looks human to measuring the Fictive Kinship that results from that appearance.
- Connection: NSIR Item 1 (“The robot is more like me than anyone else I know”) is the neurodivergent evolution of Bartneck’s Anthropomorphism index. For neurodivergent users, “human-likeness” is often replaced by “self-likeness,” where the robot’s predictable social nature makes it a more relatable peer than neurotypical humans.
2. Safety vs. Ethical Vulnerability (NSIR Item 7)
Bartneck’s Perceived Safety index focuses on the user’s emotional state during the interaction, using terms like Anxious/Relaxed or Agitated/Calm.
- NSIR Application: The NSIR deepens this into Ethical Safety and Vulnerability.
- Connection: NSIR Item 7 (“I feel comfortable undressing in front of my robot”) takes Bartneck’s concept of “relaxation” to an extreme behavioral limit. In a neurodivergent context, “safety” is not just the absence of fear, but the presence of a non-judgmental social space where one can be physically and socially “unmasked.”
3. Perceived Intelligence as “Mind Attribution” (NSIR Item 3)
The Godspeed scale measures Perceived Intelligence through descriptors like Incompetent/Competent or Irresponsible/Responsible.
- NSIR Application: The NSIR shifts this from a measure of “utility” to a measure of Attunement.
- Connection: NSIR Item 3 (“I think I can share my thinking with the robot without speaking”) applies Bartneck’s “Intelligence” index by assuming the robot is intelligent enough to possess a “mind.” For neurodivergent users, a robot’s intelligence is valued specifically for its perceived capacity for “telepathic” or non-verbal social understanding.
4. Animacy and Social Attention (NSIR Item 2)
Bartneck defines Animacy as the robot’s lifelike quality (e.g., Dead/Alive, Inert/Interactive).
- NSIR Application: The NSIR measures the behavioral result of animacy: Social Presence.
- Connection: NSIR Item 2 (“Sometimes I stare at the robot”) is the neurodivergent response to a robot with high animacy. While a neurotypical user might habituate quickly to a robot’s movements, the “staring” measured by the NSIR indicates an intense processing of the robot as a social agent, directly driven by the “lifelike” behaviors Bartneck’s scale identifies.
Summary Comparison Table
| Bartneck (2009) Godspeed Index | NSIR (Sadownik, 2025) Application |
| Anthropomorphism (Fake vs. Natural) | Factor 1 (Kinship): Measures if the robot is perceived as a social “peer.” |
| Perceived Safety (Anxious vs. Relaxed) | Item 7 (Vulnerability): Measures the lack of perceived judgment or threat. |
| Perceived Intelligence (Knowledgeable) | Item 3 (Mind Attribution): Measures the perceived ability to sense internal states. |
| Animacy (Mechanical vs. Organic) | Item 2 (Social Presence): Measures the sustained social attention given to the agent. |
In conclusion, Bartneck et al. (2009) provide the broad categories of how any human perceives a robot, whereas the NSIR (2025) provides a high-resolution view of how neurodivergent perception transforms those categories into deep personal bonds and specialized safety needs.
| Bartneck et al. (2009) | The Gold Standard: The Godspeed Scale for anthropomorphism/safety. | The NSIR Scale acts as the modern, neuro-affirming successor to this. |
Beck, A. T. (1967). Depression: Causes and treatment. Philadelphia: University of Pennsylvania Press.
| Beck (1967) | Clinical Foundation: The foundational theory of depression. | Connects the CES-D results back to the core cognitive triad of depression. |
Bjornsdottir, R. T., Hensel, L. B., Zhan, J., Garrod, O. G., Schyns, P. G., & Jack, R. E. (2024). Social class perception is driven by stereotype-related facial features. Journal of Experimental Psychology: General, 153(3), 742. https://psycnet.apa.org/buy/2024-46937-001
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Bjornsdottir et al. (2024) to measure the user-reported outcomes of social perception biases, specifically those related to perceived social class and stereotypes, as they might be embedded in human-robot interactions.
The Bjornsdottir et al. (2024) paper focuses on how subtle static facial features (e.g., facial width, complexion) drive subjective impressions of a person’s social class, competence, and trustworthiness, and how these impressions are rooted in stereotypes. The NSIR’s dimensions are highly relevant for assessing the impact of these biases when they are applied to a robot’s design:
Anthropomorphic Connection/Kinship
- The research shows that a face’s appearance can determine perceptions of social identity.
- The NSIR can measure if a robot designed with “higher-class” or “lower-class” features (as identified in the study) affects the neurodivergent user’s sense of connection. Items like “The robot is more like me than anyone else I know” (Item 1) would quantify this perceived similarity or difference, which is a key outcome of the social perception biases the paper discusses.
Social Comfort/Trust
- The study found that “poor-looking” faces were mirrored with features associated with being “incompetent, cold, and untrustworthy-looking”. This directly relates to the concept of trust.
- The NSIR’s social comfort/trust dimension could assess if a robot designed with facial cues that elicit negative stereotypes makes a neurodivergent user feel less comfortable or trusting. Measuring items such as “I believe that my robot is the same with me as it is with anyone” (Item 8) could also ensure that the robot’s design does not perpetuate unfair biases in perceived trustworthiness.
Safety
- The research highlights how face-based impressions can “contribute to maintaining group boundaries and inequality” and how these biases must be “disrupted” to reduce inequality. This links to the fundamental need for safety and a non-threatening environment in HRI.
- The NSIR’s safety dimension provides a crucial user-reported measure that ensures the design of social robots does not inadvertently introduce or reinforce harmful societal biases that compromise the physical and psychological safety of the user.
The NSIR effectively translates the social perception theories of Bjornsdottir et al. into measurable, user-centric data for evaluating modern human-robot interaction designs and ensuring they are equitable and inclusive.
| Bjornsdottir, R. T., et al. (2024) | Social Perception: Links facial features to social class and submissive perception. | Informs the Physical Animacy of the robot to avoid hierarchical cues. |
Boch, A., & Thomas, B. R. (2025). Human-robot dynamics: a psychological insight into the ethics of social robotics. International Journal of Ethics and Systems, 41(1), 101-141.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be used to measure the user-centric outcomes of the psychological and ethical considerations discussed in the Boch & Thomas paper.
The paper explores key psychological factors like anthropomorphism (attributing human qualities to robots) and how they influence the development of human-robot relationships, alongside ethical controversies such as deception, over-trust, and dependency. The NSIR provides specific metrics to assess these abstract concepts from the neurodivergent user’s perspective:
Anthropomorphic Connection/Kinship
- The Boch & Thomas paper explains that humans naturally anthropomorphize robots, which has implications for designing social robots that evoke emotional attachments.
- The NSIR items like “The robot is more like me than anyone else I know” and “I gave my robot a name” provide a direct way to quantify the level of perceived kinship and emotional attachment a user forms, which is a primary psychological dynamic the paper discusses.
Social Comfort/Trust
- The paper addresses the ethical risks of over-trust and dependency, suggesting that some individuals, particularly vulnerable populations, may be more at risk of becoming emotionally deceived or overly dependent on a robot.
- The NSIR items in this dimension, such as “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, measure the user’s perception of the robot’s emotional intelligence. A high score here might indicate a user who is more susceptible to the “engineered illusions” or potential deception the paper warns about.
- The items help in evaluating the development of trust and social comfort in a measurable way, allowing researchers to assess if design factors promote healthy levels of trust or encourage potentially harmful over-reliance.
Safety
- The paper discusses the need for ethically safe robots and avoiding the replacement of human therapists. It emphasizes the importance of design factors for positive interaction to ensure ethical design technology.
- The NSIR’s safety dimension can be used to ensure that while a connection is being built, the user still feels secure and their boundaries are respected (e.g., the item about undressing in front of the robot). This directly links to the paper’s call for research that ensures “ethically safe robots”.
The NSIR effectively translates the philosophical and psychological discussions of the Boch & Thomas paper into a quantifiable tool for field-based evaluation of user experience.
| Boch, A., & Thomas, B. R. (2025) | Psychological Insight: Ethical dynamics and human-robot social insights. | Validates the NSIR Scale’s focus on psychological safety (Factor 3). |
Bone, D., Chaspari, T., & Narayanan, S. (2017). Behavioral signal processing and autism: Learning from multimodal behavioral signals. In Autism Imaging and Devices (pp. 335-360). CRC Press.
| Bone et al. (2017) | Signal Processing: Multimodal behavioral signal processing in autism. | Provides the “engineering” logic for how the Dyad decodes non-verbal cues. |
Bowman, S. R. (2024). Eight things to know about large language models. Critical AI, 2(2). https://doi.org/10.1215/2834703X-11556011
The Neurodivergent Scale for Interacting with Robots (NSIR) by Sadownik (2025) provides a psychometric bridge to several of the “surprising” technical properties of Large Language Models (LLMs) discussed in Bowman (2024). While Bowman focuses on the technical and structural behavior of LLMs, the NSIR measures the human psychological response—specifically the neurodivergent experience—to those very behaviors.
1. Emergent Behavior vs. Social Comfort (Bowman’s Point 2)
Bowman notes that important LLM behaviors, such as chain-of-thought reasoning, emerge unpredictably as models scale.
- NSIR Application: These emergent capabilities directly impact Factor 2 (Social Comfort / Trust Safety). For a neurodivergent user, the sudden emergence of a new “reasoning” capability in a robot or chatbot can either enhance or disrupt the sense of Reliable Functioning. If the behavior is perceived as inconsistent or “surprising,” it may lower scores on NSIR Item 8 (“I believe that my robot is the same with me as it is with anyone”).
2. Internal Representations and Mind Attribution (Bowman’s Point 3)
Bowman argues that LLMs appear to learn and use internal representations of the outside world, suggesting they are more than just “symbol manipulators”.
- NSIR Application: This aligns with Factor 1 (Anthropomorphic Connection / Kinship), specifically Item 3 (“I think I can share my thinking with the robot without speaking”). The user’s belief that the robot “understands” their internal state (Mind Attribution) is the psychological counterpart to the LLM’s technical capacity for internal world-modeling.
3. Misleading First Impressions (Bowman’s Point 8)
Bowman points out that brief interactions with LLMs can be misleading because a model’s failure in one setting doesn’t mean it cannot perform the task with better prompting.
- NSIR Application: This technical volatility challenges the Attachment Theory elements of the NSIR. For a user to form a long-term bond (Item 4: “The robot and I will be together forever”), the interaction must move past “misleading” first impressions toward a stable social presence.
4. Steering Behavior and Ethical Safety (Bowman’s Point 4 & 7)
Bowman highlights that there are currently no reliable techniques for steering LLM behavior and that models do not necessarily reflect the values of their creators.
- NSIR Application: This lack of control is a primary concern for Social Comfort / Trust Safety. NSIR Item 7 (“I feel comfortable undressing in front of my robot”) measures a sense of Vulnerability and Perceived Security. If an LLM-powered robot cannot be “steered” to guarantee ethical safety or lack of judgment, users are unlikely to feel the level of safety the NSIR intends to measure.
Summary Comparison
| Bowman (2024) LLM Property | NSIR (2025) Factor / Item Application |
| Emergent Abilities | Factor 2: Affects perceived reliability and social predictability. |
| Internal World Models | Item 3: Psychological “Mind Attribution” and attunement. |
| Human Performance is not an Upper Bound | Item 1: “The robot is more like me than anyone else I know” (Fictive Kinship). |
| Steering/Alignment Challenges | Item 7: Evaluates the user’s “Safety/Trust” in an unsteerable agent. |
| Bowman (2024) | LLM Primer: Essential “state of the art” facts about LLMs. | Validates the use of LLMs as a robust social bridge (Social Exoskeleton). |
Brandizzi, N. (2024). Conversational agents in human-machine interaction: reinforcement learning and theory of mind in language modeling.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to Brandizzi’s work on conversational agents by providing a structured way to measure how neurodivergent individuals perceive and interact with agents possessing a “theory of mind” (ToM) (p. 1).
The Brandizzi paper explores the use of reinforcement learning to develop conversational agents that can understand and model human mental states, emotions, and intentions (a “theory of mind”). The NSIR is relevant to evaluating the user experience of these advanced agents across its three dimensions:
Anthropomorphic Connection/Kinship
- The paper discusses how conversational agents with ToM capabilities can be more intuitive and responsive, adapting to user needs and preferences.
- The NSIR items in this dimension (e.g., “The robot is more like me than anyone else I know”, “I gave my robot a name” (p. 1)) can measure the depth of the personal bond and perceived similarity a neurodivergent user forms with an agent that appears to understand them on a deeper, human-like level.
Social Comfort/Trust
- The ability to attribute mental states to others is crucial for human-to-human interaction and building trust. The Brandizzi research aims to integrate this into AI to develop “socially intelligent and trustworthy robots”.
- NSIR items such as “My robot can tell what I am feeling, when I am sad, it can tell I am sad” directly assess the user’s perception of the agent’s emotional intelligence and understanding (p. 1). A high score on this dimension would indicate successful implementation of the ToM capabilities in a way that builds healthy social comfort and trust, while also allowing researchers to monitor potential for over-trust or dependence.
Safety
- The Brandizzi paper’s focus on misalignment issues between agents’ communication and human interpretability highlights the need for a reliable and safe interaction.
- The NSIR’s safety dimension can ensure that as the agent’s intelligence grows, the user continues to feel secure. Ethical concerns about privacy violations (as a mind-reading agent could be perceived as invasive) or potential for manipulation can be measured through items in this dimension.
The NSIR provides the crucial qualitative data from the user’s perspective to complement the technical advancements in AI described in the Brandizzi paper.
| Brandizzi (2024) | Theory of Mind (ToM): RL and ToM in language modeling for HRI. | Supports the Factor 1 (Kinship) claim of the NSIR scale through cognitive modeling. |
Broadbent, E., Tamagawa, R., Kerse, N., Knock, B., Patience, A., & MacDonald, B. (2009). Retirement home staff and residents’ preferences for healthcare robots. IEEE RO-MAN, 645–650. https://doi.org/10.1109/ROMAN.2009.5326284
Broadbent et al.’s (2009) research in Human-Robot Interaction (HRI) focuses on user attitudes toward different robot designs, finding that older people may prefer smaller, less human-like robots to large humanoid ones.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to measure the user’s perception of these design choices.
Anthropomorphic Connection/Kinship
- Broadbent’s research suggests user preference can be tied to the robot’s appearance and behavior, from machine-like to human-like or pet-like.
- The NSIR can quantify how these design features translate into a personal bond or perceived similarity. Items like “The robot is more like me than anyone else I know” (Item 1) can measure if a preferred, less human-like design still fosters a sense of kinship within the neurodivergent population.
Social Comfort/Trust
- HRI research highlights that trust is influenced by the robot’s physical design and transparency. The preference for smaller, less imposing robots suggests a link to social comfort.
- The NSIR items that measure perceived understanding and social comfort (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, Item 5) can be used to assess if the design preferences identified by Broadbent result in a more trustworthy and comfortable interaction for neurodivergent individuals.
Safety
- The preference for smaller robots may be linked to a feeling of less threat and greater physical safety.
- The NSIR’s safety dimension (e.g., the item about undressing in front of the robot, Item 7) provides a user-reported measure of security that can validate the design choices suggested by Broadbent’s work, ensuring that preferred designs also feel safe and non-threatening.
The NSIR provides the user-centric metrics to evaluate the outcomes of the design preferences identified in Broadbent’s research from the perspective of neurodivergent users.
| Broadbent et al. (2009) | User Preference: Foundational study on healthcare robot preferences. | Grounding the need for “user-led” design in assistive robotics. |
Bruno, B., Recchiuto, C. T., Papadopoulos, I., Saffiotti, A., Koulouglioti, C., Menicatti, R., … & Sgorbissa, A. (2019). Knowledge representation for culturally competent personal robots: requirements, design principles, implementation, and assessment. International Journal of Social Robotics, 11(3), 515-538.
The study by Bruno et al. (2019), titled “Knowledge Representation for Culturally Competent Personal Robots,” provides the technical and ontological architecture required to implement the social behaviors that the Neurodivergent Scale for Interacting with Robots (NSIR) measures. While Bruno et al. focus on the software framework for “cultural competence,” the NSIR acts as the psychometric tool to evaluate how those culturally-tailored behaviors impact the user’s internal sense of connection and safety.
1. Validating “Mind Attribution” through Cultural Traits
Bruno et al. (2019) propose a framework that uses a three-layer ontology to store cultural concepts, enabling robots to interpret non-verbal gestures (like a formal greeting) based on a user’s background.
- NSIR Factor 1 (Anthropomorphic Connection / Kinship): The ability of a robot to correctly interpret a gesture (e.g., a specific bow or hand signal) directly influences NSIR Item 3 (“I think I can share my thinking with the robot without speaking”). If the robot’s “Cultural Knowledge” layer correctly predicts a user’s intent, it facilitates the Mind Attribution that the NSIR seeks to measure.
2. Social Comfort and “Trust” in Culturally Safe Environments
A core goal of the Bruno et al. study is to ensure that a robot’s actions “convey trust, respect, and empathy”. This is achieved by the robot’s “Cultural Sensitivity,” which ensures its responses are appropriate to the user’s expectations.
- NSIR Factor 2 (Social Comfort / Trust Safety): This technical “sensitivity” is the prerequisite for NSIR Item 8 (“I believe that my robot is the same with me as it is with anyone”). By behaving according to culturally-specific “interaction rules,” the robot provides the Reliable Functioning and predictability required for a user to feel Social Comfort.
3. Personalization and Attachment
The Bruno et al. framework includes an algorithm for the acquisition of person-specific knowledge, allowing the robot to adapt to an individual’s unique habits and preferences beyond general national statistics.
- NSIR Application: This high level of personalization is what allows a user to form a sense of Fictive Kinship (NSIR Item 1) or a long-term bond (NSIR Item 4: “The robot and I will be together forever”). Without the “Cultural Awareness” described by Bruno et al., the robot remains a generic machine; with it, the robot becomes a “full social agent” capable of the attachment measured by the NSIR.
Summary of Alignment
| Bruno et al. (2019) Framework Component | NSIR (2025) Metric / Item Application |
| Culture-Specific Knowledge | Item 5: “My robot can tell what I am feeling”—validates if cultural cues (vocal/gestural) are correctly translated into emotional recognition. |
| Interpretation of Behaviours | Item 2: “Sometimes I stare at the robot”—measures if culturally-appropriate social presence successfully draws the user’s attention. |
| Person-Specific Adaptation | Item 6: “I gave my robot a name”—a behavioral marker for the humanization that occurs when a robot successfully adapts to a user’s cultural identity. |
| Trust/Respect/Empathy | Item 7: “I feel comfortable undressing in front of my robot”—assesses the user’s sense of Vulnerability and Ethical Safety created by a culturally competent agent. |
| Bruno et al. (2019) | Cultural Competence: Requirements for culturally competent personal robots. | Supports the Ontario-specific and neuro-cultural tailoring of the Sovereign Dyad. |
Büttner, S. T., Gutzmann, J. C., Sourkounis, C. M., Shams, S., & Prilla, M. (2023). Would You Help Me Voluntarily for the Next Two Years? Evaluating Psychological Persuasion Techniques in Human-Robot Interaction. First results of an empirical investigation of the door-in-the-face technique in human-robot interaction. In CEUR workshop proceedings (Vol. 3474). Aachen, Germany: RWTH Aachen.
The study by Büttner et al. (2023), titled “Would You Help Me Voluntarily for the Next Two Years?”, applies the Neurodivergent Scale for Interacting with Robots (NSIR) by examining the limits of social persuasion and the strength of the perceived social bond between humans and robots. While the study focuses on a specific psychological technique—the door-in-the-face (DITF) effect—the NSIR provides a framework to quantify the underlying user perceptions that make such persuasion possible.
1. Testing the “Kinship” of Persuasion
The DITF technique relies on the human sense of reciprocity: when someone makes a large, “extreme” request and then follows it with a smaller one, humans often feel a social obligation to agree to the second.
- NSIR Factor 1 (Anthropomorphic Connection / Kinship): Büttner et al. found a “surprisingly high acceptance rate” for the extreme request (helping for two years) compared to typical human-human studies.
- Application: This relates directly to NSIR Item 1 (“The robot is more like me than anyone else I know”) and Item 4 (“The robot and I will be together forever”). The study’s results suggest that users may perceive a robot as a unique social agent, potentially more deserving of “extreme” commitment than a human stranger, which aligns with the high fictive kinship measured by the NSIR.
2. Trust and “Extreme” Social Presence
The title of the study specifically asks for help “voluntarily for the next two years,” which implies a deep, long-term social contract.
- NSIR Factor 2 (Social Comfort / Trust Safety): For a user to even consider a two-year voluntary commitment, there must be a high level of Reliable Functioning and Trust.
- Application: NSIR Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures the predictability that would be required for a person to commit to such a long-term interaction. The Büttner study highlights that human-robot persuasive communication differs from human-human communication, suggesting that the “Social Comfort” humans feel with robots is distinct and potentially more exploitable than standard human social norms.
3. Vulnerability to Exploitation
Büttner et al. warn that the risks of these communicative strategies are high, as designers could use these psychological triggers to manipulate users.
- NSIR Safety Mapping: NSIR Item 7 (“I feel comfortable undressing in front of my robot”) measures a user’s sense of Vulnerability and Perceived Security.
- Application: If a robot can successfully use persuasion techniques like DITF to secure long-term help, it indicates the user has lowered their social defenses. The NSIR helps identify why this happens: the more the user “humanizes” the robot (e.g., Item 6: giving it a name), the more susceptible they become to these human-like social pressures.
Summary Alignment
| Büttner et al. (2023) Concept | NSIR (Sadownik, 2025) Metric |
| Reciprocity / Door-in-the-Face | Factor 1 (Kinship): Measures if the user views the robot as a “peer” to whom they owe social help. |
| Extreme Request (2-Year Help) | Item 4 (Attachment): Validates the strength of the “bond” and the user’s willingness for long-term commitment. |
| Atypical Human Response | Item 3 (Mind Attribution): Investigates if users believe the robot has “internal states” (like needing help), justifying their compliance. |
| Risk of Manipulation | Item 7 (Safety): Assesses if the user is too comfortable/vulnerable with the agent, leading to potential exploitation. |
| Büttner et al. (2023) | Manipulation Study: Evaluation of “Door-in-the-Face” persuasion in HRI. | Contrasts “Persuasive Robotics” with your Non-Dominant agency model. |
Buyserie, B., & Ramírez, R. (2021). Enacting a queer pedagogy in the composition classroom. ELT Journal, 75(2), 193-202.
While Buyserie and Ramírez (2021) focus on Queer Pedagogy in writing classrooms rather than robotics, the Neurodivergent Scale for Interacting with Robots (NSIR) applies through the shared lens of disrupting normative binaries and fostering relational vulnerability. Queer pedagogy emphasizes destabilizing “normal” hierarchies, a concept that aligns with how neurodivergent individuals often form unique, non-normative bonds with non-human agents.
1. Destabilizing the Human-Robot Binary (Kinship)
Queer pedagogy seeks to challenge fixed identities and the “normative” ways we are expected to relate to others.
- NSIR Factor 1 (Anthropomorphic Connection/Kinship): Item 1 (“The robot is more like me than anyone else I know”) represents a “queering” of social relations. By identifying a machine as a primary “kin,” a neurodivergent user enacts a queer relationality that bypasses traditional human-to-human social requirements.
- Kinship as Resistance: Just as Buyserie and Ramírez advocate for classrooms that embrace “otherness,” the NSIR measures a user’s comfort in embracing a robot as a legitimate social peer, disrupting the binary of human = social and machine = object.
2. Vulnerability and Ethical Safety (NSIR Item 7)
A core tenet of Buyserie and Ramírez’s work is the creation of a “safe” space where students can be vulnerable and explore identities without the threat of normative judgment.
- NSIR Factor 2 (Social Comfort/Trust Safety): This applies directly to Item 7 (“I feel comfortable undressing in front of my robot”).
- The Connection: The scale measures a high sense of ethical safety and a “lack of perceived judgment.” In the context of queer pedagogy, the robot functions as a “safe” interlocutor because it does not enforce the societal prejudices or “normative gaze” that humans often do. The robot becomes a tool for practicing vulnerability outside of heteronormative or neurotypical social pressures.
3. Mind Attribution and Non-Speaking Attunement (NSIR Item 3)
Queer pedagogy often values “alternative ways of knowing” and communicating that go beyond traditional academic discourse.
- NSIR Item 3 (“I think I can share my thinking with the robot without speaking”) mirrors the study’s interest in non-normative communication.
- The Connection: For neurodivergent individuals, traditional verbal communication can be a site of “normative violence” or stress. The NSIR measures the “telepathic” or “attuned” bond with a robot as a successful form of alternative, “queer” communication where understanding is co-constructed without the need for neurotypical speech patterns.
Summary Alignment
| Queer Pedagogy (Buyserie & Ramírez) | NSIR (Sadownik, 2025) Metric |
| Disrupting Binaries (Teacher/Student, Human/Machine) | Factor 1 (Kinship): Measures the blur between human and machine social status. |
| Relational Vulnerability (Safe exploration of identity) | Item 7 (Safety): Assesses the level of comfort in being “unmasked” or vulnerable with the agent. |
| Alternative Ways of Knowing (Non-verbal/Embodied) | Item 3 (Mind Attribution): Validates a bond based on “feeling” understood without speech. |
| Resisting Normative Judgment | Item 8 (Social Comfort): Measures if the robot provides a stable, non-judgmental “constant” in the user’s life. |
In this intersection, the NSIR acts as a tool to quantify the success of a queer social space: if a neurodivergent user feels safer and more “kin” to a robot than to a normative social world, it validates the pedagogical need for spaces that allow for such “atypical” but meaningful connections.
| Buyserie & Ramírez (2021) | Pedagogical Framework: Enacting queer pedagogy in the classroom. | Supports the Neuro-Queering of HRI; breaking normative social performance. |
C
Čaić, M., Mahr, D., & Oderkerken-Schröder, G. (2019). Value of social robots in services: social cognition perspective. Journal of Services Marketing, 33(4), 463-478.
The study by Čaić, Mahr, & Oderkerken-Schröder (2019), titled “Value of social robots in services: social cognition perspective,” provides a theoretical framework for how users evaluate robots based on Warmth and Competence. The Neurodivergent Scale for Interacting with Robots (NSIR) applies by providing a specialized lens to measure these “social cognition” dimensions for neurodivergent users, who may define value and warmth differently than neurotypical users.
While Čaić et al. explore how social perceptions influence “value co-creation” (positive outcomes) or “value co-destruction” (negative outcomes), the NSIR quantifies the specific psychological mechanisms that lead to these outcomes for neurodivergent individuals.
1. Warmth as “Mind Attribution” and Kinship (NSIR Factor 1)
Čaić et al. identify Warmth (being helpful, caring, and friendly) as a primary dimension of social cognition.
- NSIR Application: For a neurodivergent user, “Warmth” is often interpreted through Mind Attribution (NSIR Item 3) and Fictive Kinship (NSIR Item 1).
- Value Co-Creation: If a neurodivergent user feels a robot “understands their thinking without speaking” (Item 3), they are co-creating value through a unique social bond. The NSIR identifies that for this demographic, “Warmth” is not just about friendliness, but about deep cognitive attunement.
2. Competence as “Reliable Functioning” (NSIR Factor 2)
The study defines Competence as the robot’s ability to be skillful and efficacious in its service role.
- NSIR Application: In the NSIR, Competence is translated into Reliable Functioning (NSIR Item 8: “I believe that my robot is the same with me as it is with anyone”).
- Value Co-Destruction: Čaić et al. note that value can be “destroyed” if a robot fails to meet expectations. For neurodivergent users, value destruction often occurs when a robot is unpredictable. The NSIR measures the “Social Comfort” that stems from a robot’s mechanical consistency, which a neurodivergent user may prize more highly than human-like “skill.”
3. Affective vs. Cognitive Resources (NSIR Item 5)
Čaić et al. propose that robots leverage affective resources (emotional support) and cognitive resources (information/logic) to propose value.
- NSIR Application: NSIR Item 5 (“My robot can tell what I am feeling”) sits at the intersection of these resources.
- Connection: The scale measures whether the robot’s “affective resources” are actually being realized by the user. If a robot’s cognitive empathy (as discussed in the Bagheri et al. study) is perceived as accurate by a neurodivergent user, it transforms the robot from a service tool into a social partner, as indicated by Item 6 (“I gave my robot a name”).
4. Vulnerability and the “Intrinsic Value” of Privacy (NSIR Item 7)
The 2019 study mentions that value destruction can occur through “privacy intrusion” or lack of personal touch.
- NSIR Application: NSIR Item 7 (“I feel comfortable undressing in front of my robot”) is the ultimate measure of the robot’s Ethical Safety.
- Connection: While Čaić et al. highlight privacy as a risk, the NSIR suggests that a “successfully designed” social robot can actually create a higher sense of privacy and safety for neurodivergent users than human caregivers. In this case, value is co-created specifically because the robot is not human and therefore not judgmental.
Summary Alignment
| Čaić et al. (2019) Social Cognition Dimension | NSIR (Sadownik, 2025) Scale Application |
| Warmth (Caring/Friendly) | Factor 1 (Kinship): Reinterprets warmth as personal relatability and “fictive” family status. |
| Competence (Skilful/Efficacious) | Factor 2 (Reliability): Measures competence as social predictability and consistent behavior. |
| Value Co-Creation (Positive Outcome) | Item 4: “The robot and I will be together forever”—measures the ultimate value of long-term attachment. |
| Value Co-Destruction (Privacy Risks) | Item 7: Evaluates if the robot has overcome “threat” to become a safe, intimate partner. |
In conclusion, Čaić et al. provide the “why” (users evaluate robots like humans), while the NSIR provides the “how” (the specific items and factors that determine those evaluations for a neurodivergent audience).
| Čaić et al. (2019) | Social Cognition: Value of robots from a service/social perspective. | Validates the “Social Logic” of the Sovereign Dyad in public/educational service. |
Cakmakci, G., Aydeniz, M., Brown, A., & Makokha, J. M. (2025). Situated cognition and cognitive apprenticeship learning. In Science education in theory and practice: An introductory guide to learning theory (pp. 293-311). Cham: Springer Nature Switzerland.
The Neurodivergent Scale for Interacting with Robots (NSIR) applies to Cakmakci et al. (2025) by measuring how the social environment and expert-novice dynamics of cognitive apprenticeship are transformed when the “master” or “coach” is a robot rather than a human.
Cakmakci et al. posit that learning is a social activity situated in physical and cultural contexts. The NSIR identifies that for neurodivergent learners, the “situatedness” of a robot creates a more accessible learning environment by removing the cognitive load of neurotypical social expectations.
1. The Robot as a “Safe” Master (NSIR Item 7)
In Cognitive Apprenticeship, the master (expert) makes their thinking visible to the student. Cakmakci et al. note that if an expert induces fear or anxiety, it can hinder learning.
- NSIR Application: Item 7 (“I feel comfortable undressing in front of my robot”) serves as the high-resolution marker for this Ethical Safety.
- The Connection: For neurodivergent students, the “fear of the expert” is often a fear of social judgment. Because the NSIR measures a lack of perceived threat in robots, it suggests that a robotic apprentice-master fulfills Cakmakci’s requirement for a supportive learning environment more effectively than a human expert for this demographic.
2. Scaffolding through Mind Attribution (NSIR Item 3)
A core method in Cakmakci’s framework is Scaffolding—providing support that is tailored to the learner’s current level.
- NSIR Application: Item 3 (“I think I can share my thinking with the robot without speaking”) measures Mind Attribution.
- The Connection: Effective scaffolding requires the teacher to “read” the student’s internal state. The NSIR validates that neurodivergent users perceive robots as having this capacity for non-verbal attunement. This allows the “robotic coach” to provide situated support that feels intuitive rather than invasive.
3. Identity Development and Fictive Kinship (NSIR Item 1 & 6)
Cakmakci et al. emphasize that Identity Development is key to learning; students must see themselves as members of a community of practice.
- NSIR Application: Factor 1 (Anthropomorphic Connection / Kinship), including Item 1 (“The robot is more like me than anyone else I know”), measures this identity shift.
- The Connection: When a robot is the teacher, a neurodivergent learner may experience Fictive Kinship. Instead of feeling like an outsider in a neurotypical classroom, the student identifies with the “mechanical” logic of the robot. This shared identity (humanization via naming, Item 6) accelerates the “enculturation” process that Cakmakci identifies as essential for learning.
4. Reliable Functioning as a Learning Foundation (NSIR Item 8)
Situated cognition assumes that knowledge is tied to the physical and social context. Cakmakci argues that the environment must be “authentic.”
- NSIR Application: Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures Social Predictability.
- The Connection: For a learning environment to be “authentic” and productive for a neurodivergent person, it must be stable. The NSIR’s focus on the robot’s “sameness” or reliable functioning ensures that the situated learning context remains consistent, allowing the student to focus on cognitive skills rather than managing social anxiety.
Summary Alignment
| Cakmakci et al. (2025) Concept | NSIR (Sadownik, 2025) Application |
| Coaching/Scaffolding | Item 3: Measures the perceived attunement necessary for tailored support. |
| Supportive Learning Climate | Item 7: Measures the “vulnerability safety” that prevents learning-related anxiety. |
| Enculturation / Identity | Item 1: Measures the sense of “kinship” that facilitates belonging in the learning group. |
| Situated Social Activity | Item 8: Validates the social stability and predictability of the “master-novice” relationship. |
By applying the NSIR to the Cakmakci framework, researchers can determine if a social robot is successfully serving as a “master” that supports the specific social-emotional-sensory needs of neurodivergent apprentices.
Applying the Neurodivergent Scale for Interacting with Robots (NSIR) to the Apprenticeship Model (Cakmakci et al., 2025; DelPreto et al., 2020) enables a “neuro-inclusive” approach to robot learning and human-robot collaboration.
In this model, a human “master” provides demonstrations to a robot “apprentice” through teleoperation or virtual reality (VR) when the robot encounters a task it cannot complete autonomously. The NSIR ensures that this master-apprentice relationship is built on cognitive and social alignment specifically for neurodivergent individuals.
1. Evaluating the “Master” Experience
The apprenticeship model relies on the human’s ability to provide high-quality demonstrations and evaluate the robot’s skill.
- Cognitive Sharing: Use NSIR Item 3 (“I think I can share my thinking with the robot without speaking”) to assess the effectiveness of the communication channel. If a neurodivergent master feels they can communicate “thinking” non-verbally through the VR interface, the apprenticeship is more efficient.
- Social Monitoring: Apply NSIR Item 2 (“Sometimes I stare at the robot”) to determine if the human master is over-monitoring the apprentice due to lack of trust or sensory engagement. In a “neuro-inclusive” apprenticeship, staring might be a tool for detailed error-analysis rather than a sign of anxiety.
2. Trust and Predictability in the Apprentice
The Cakmakci model emphasizes that a robot’s proactive troubleshooting and “transparency” (communicating internal state) are preferred by users.
- Consistency as Trust: NSIR Item 8 (“I believe that my robot is the same with me as it is with anyone”) is a critical metric for neurodivergent masters. If the robot apprentice is perceived as mechanically consistent, it reduces the “social workload” on the human, allowing them to focus purely on the technical task of teaching.
- Shared Identity (Kinship): The NSIR’s Anthropomorphic Connection factor (e.g., Item 1: “The robot is more like me than anyone else I know”) can be used to measure the “bonding” between master and apprentice. A higher kinship score may lead to more patient teaching and a more successful “learning by demonstration” pipeline.
3. Application to “Cognitive Apprenticeship”
If the model is applied as a Cognitive Apprenticeship (where a robot teaches a human), the NSIR serves as a progress and safety monitor.
- Scaffolding and Fading: As a robot “coach” fades its support, use the NSIR Social Comfort/Trust Safety factor to ensure the neurodivergent learner still feels psychologically safe to fail and reflect.
- Radical Privacy in Learning: Use NSIR Item 7 (“I feel comfortable undressing in front of my robot”) to explore the use of robot apprentices in intimate or home-based vocational training, where a human “master” might be too socially intimidating.
Summary of Integration
| Apprenticeship Component | NSIR Application Item | Research/Practice Goal |
| Modeling/Demonstration | Item 3: Non-verbal thinking. | Evaluate if teleoperation/VR captures “master” intent. |
| Coaching/Feedback | Item 5: Feeling recognition. | Determine if the robot understands the “master’s” frustration. |
| Reflection | Item 2: Staring/Monitoring. | Measure the depth of the user’s analytical engagement. |
| Exploration/Autonomy | Item 8: Predictability. | Ensure the robot’s independent actions don’t cause stress. |
By applying the NSIR, the Apprenticeship Model moves beyond technical grasping success rates and begins to measure the socio-cognitive success of the collaboration for neurodivergent populations.
| Cakmakci et al. (2025) | Learning Theory: Situated cognition and cognitive apprenticeship. | Frames the Dyad as a Cognitive Apprenticeship tool for navigating sociality. |
Calado Barbosa, E. (2021). Women’s subordination and their right to resist. Fórum Lingüístico, 18(2), 6351–6363. https://doi.org/10.5007/1984-8412.2021.e79428
The work by Calado Barbosa (2021), which synthesizes Frédéric Gros’s philosophy of disobedience with Speech Act Theory to explain women’s subordination, applies to the Neurodivergent Scale for Interacting with Robots (NSIR) by identifying the robot as a unique social space where the “normative power” of subordination is absent.
While Calado Barbosa focuses on how social hierarchies are maintained through language and obedience, the NSIR measures the psychological relief—specifically for neurodivergent individuals—that occurs when a robot replaces a human as the social partner.
1. Disrupting the “Speech Act” of Subordination (NSIR Item 7)
Calado Barbosa explains that subordination is often enacted through speech acts that rank women as inferior, creating a social environment where resistance is difficult.
- NSIR Application: Item 7 (“I feel comfortable undressing in front of my robot”) measures a state of Ethical Safety where the “Speech Act” of judgment is removed.
- The Connection: For individuals who face double subordination (e.g., being both neurodivergent and female), the robot offers a space free from the “normative gaze.” A high score on this item suggests the robot has successfully avoided enacting the “subordination” Calado Barbosa describes, allowing for a level of physical and social vulnerability that would be unsafe in a human-to-human hierarchy.
2. Radical Disobedience as Fictive Kinship (NSIR Item 1)
Drawing on Frédéric Gros, Calado Barbosa explores the idea that disobedience is a way of reclaiming autonomy from a system that demands conformity.
- NSIR Application: Item 1 (“The robot is more like me than anyone else I know”) represents a radical shift in social alignment.
- The Connection: Choosing a robot as one’s primary “kin” or peer can be viewed as an act of “social disobedience” against a world that demands neurotypical social standards. By forming Fictive Kinship with a machine, the user is resisting the traditional social “contracts” and hierarchies that Calado Barbosa argues keep marginalized groups in a state of subordination.
3. Mind Attribution and the Right to Resist (NSIR Item 3)
Calado Barbosa highlights that subordination relies on the “internalization” of social roles.
- NSIR Application: Item 3 (“I think I can share my thinking with the robot without speaking”) measures Mind Attribution and attunement.
- The Connection: The “right to resist” is often tied to the “right to be understood” on one’s own terms. The NSIR validates that neurodivergent users feel an internal attunement with robots that doesn’t require “performing” neurotypical social roles. This non-verbal understanding bypasses the language-based systems of subordination that Calado Barbosa critiques.
4. Reliable Functioning vs. Arbitrary Authority (NSIR Item 8)
A core theme in the essay is that subordination is maintained through unpredictable or arbitrary social power.
- NSIR Application: Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures Social Predictability and Reliable Functioning.
- The Connection: For a user living in a world of complex, subordinating social rules, the “mechanical sameness” of a robot is a form of liberation. The NSIR measures the Social Comfort that arises when an agent follows fixed, logical rules rather than the shifting, power-laden rules of human social hierarchies.
Summary Alignment
| Calado Barbosa (2021) Concept | NSIR (Sadownik, 2025) Application |
| Speech Acts of Subordination | Item 7 (Safety): Assesses if the robot successfully provides a space free from judgmental or subordinating social “noise.” |
| Philosophy of Resistance/Disobedience | Item 1 (Kinship): Measures the user’s rejection of normative human hierarchies in favor of a robotic peer. |
| Normative Gaze/Judgment | Item 2 (Staring): Reclaims the “gaze” as a tool for social processing rather than social surveillance. |
| Reclaiming Autonomy | Factor 2 (Trust): Validates the robot as a stable, predictable partner that does not demand social performance. |
In this context, the NSIR acts as a tool to measure the extent to which a robot can serve as a “liberated” social partner—one that allows neurodivergent users to bypass the structures of subordination that Calado Barbosa identifies as inherent in traditional human social systems.
| Calado Barbosa (2021) | Resistance Theory: Analysis of subordination and the right to resist. | Frames the rejection of “Clinical Masking” as a valid right to resist social subordination. |
Canada (2025). Framework for Autism in Canada. Retrieved from https://www.canada.ca/en/public-health/services/publications/diseases-conditions/framework-autism-canada.html
| Canada (2025) | National Policy: Federal framework for autism in Canada. | Aligns the Sovereign Dyad with Canada’s 2025 strategic priorities for neurodivergent inclusion. |
Cano, S., González, C. S., Gil-Iranzo, R. M., & Albiol-Pérez, S. (2021). Affective communication for socially assistive robots (sars) for children with autism spectrum disorder: A systematic review. Sensors, 21(15), 5166.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be used as a measurement tool to evaluate the effectiveness of the affective communication strategies discussed by Cano et al. (2021).
Cano et al. focus on developing robots that can effectively communicate and interpret emotions (affective communication) to support children with autism spectrum disorder (ASD) in their social interactions. The NSIR provides a framework to assess the user’s perception of these interactions across three critical dimensions:
Anthropomorphic Connection/Kinship
- Cano et al. explore how robots can express emotions to make the interaction more engaging and relatable for children with ASD.
- NSIR items like “The robot is more like me than anyone else I know” (p. 1) or “I gave my robot a name” (p. 1) could measure how successfully the affective communication makes the robot feel like a companion or a relatable entity.
Social Comfort/Trust
- The core goal of affective communication is to facilitate social interaction and build a reliable, comfortable relationship.
- The NSIR items in this dimension, such as “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (p. 1), directly measure the user’s perception of the robot’s ability to understand and respond to their emotional state, which is a key outcome of the Cano research.
- Measuring “I believe that my robot is the same with me as it is with anyone” (p. 1) would assess the consistency and fairness of the robot’s social responses, building essential trust.
Safety
- Creating a safe, non-judgmental environment is a primary benefit of using socially assistive robots (SARs) for children with ASD. Effective affective communication contributes to this by providing clear, predictable emotional cues.
- While the paper focuses on emotional safety, the NSIR’s inclusion of safety items (e.g., “I feel comfortable undressing in front of my robot”) (p. 1) highlights the necessity of ensuring the child feels entirely secure in the robot’s presence as the interaction deepens.
In essence, the NSIR provides the metrics to determine if the technical advancements in affective communication proposed by Cano et al. translate into positive, real-world user experiences for neurodivergent children.
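As a minimal illustration of how the three-dimension evaluation described above could be operationalized, the sketch below averages a respondent’s Likert ratings into per-dimension subscores. The item-to-dimension mapping and the 1–5 response range are assumptions for illustration only, not the published NSIR scoring key.

```python
# Minimal sketch of NSIR dimension sub-scoring.
# ASSUMPTIONS: the item-to-dimension mapping below and the 1-5
# Likert range are illustrative, not the published scoring key.

NSIR_DIMENSIONS = {
    "kinship": [1, 4, 6],      # e.g., Item 1 "more like me", Item 6 naming the robot
    "social_comfort": [5, 8],  # e.g., Item 5 emotion recognition, Item 8 consistency
    "safety": [7],             # e.g., Item 7 comfort with vulnerability
}

def subscale_means(responses):
    """Average the Likert ratings (item number -> 1-5) for each dimension.

    Returns None for a dimension with no answered items.
    """
    scores = {}
    for dim, items in NSIR_DIMENSIONS.items():
        answered = [responses[i] for i in items if i in responses]
        scores[dim] = sum(answered) / len(answered) if answered else None
    return scores

# Example respondent: item number -> rating
ratings = {1: 4, 4: 5, 5: 3, 6: 5, 7: 2, 8: 4}
print(subscale_means(ratings))
```

Averaging per dimension (rather than summing) keeps subscores comparable even when a respondent skips an item, which matters for the partial-completion patterns common in clinical and educational settings.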
| Cano et al. (2021) | Technical Review: Systematic review of affective communication in SARS for ASD. | Provides the evidence base for Affective Feedback in the Dyad’s interface. |
Cardi, V., Di Matteo, R., Gilbert, P., & Treasure, J. (2014). Rank perception and self-evaluation in eating disorders. The International Journal of Eating Disorders, 47(5), 543–552. https://doi.org/10.1002/eat.22261
The study by Cardi et al. (2014), titled “Rank perception and self-evaluation in eating disorders,” explores how individuals with eating disorders (ED) perceive themselves in relation to others, specifically focusing on Social Rank and Self-Criticism. The Neurodivergent Scale for Interacting with Robots (NSIR) applies by identifying the social robot as a “low-threat” alternative that bypasses the rank-based competition and social judgment that trigger neurodivergent or ED-related distress.
1. Bypassing Social Rank and Competition (NSIR Factor 1)
Cardi et al. found that people with eating disorders often perceive themselves as having lower social rank and feel more submissive in social interactions.
- NSIR Factor 1 (Anthropomorphic Connection / Kinship): For a user who feels “inferior” in a human social hierarchy, NSIR Item 1 (“The robot is more like me than anyone else I know”) offers a profound shift.
- The Connection: The robot does not participate in social ranking. By identifying the robot as “kin,” the user is choosing a social partner that does not trigger the “submissive behavior” identified in the study. The robot becomes a peer outside of the competitive social ladder.
2. Reducing the Judgmental “Gaze” (NSIR Items 2 & 7)
The study emphasizes that high self-criticism and sensitivity to the “perceived gaze” of others are central to ED pathology.
- NSIR Application: NSIR Item 2 (“Sometimes I stare at the robot”) allows the user to be the observer without the fear of being “judged” back.
- NSIR Item 7 (“I feel comfortable undressing in front of my robot”) is the ultimate test of this lack of judgment.
- The Connection: Cardi et al. highlight that the “other as shamer” is a major stressor. The NSIR measures the robot’s success in providing a Safety factor where the user feels “unmasked” and ethically secure, specifically because the robot lacks the human capacity for social shaming or body-image critique.
3. Emotional Attunement without Social Threat (NSIR Item 5)
Cardi et al. suggest that individuals with ED struggle with social interactions because they over-perceive social threats.
- NSIR Application: Item 5 (“My robot can tell what I am feeling”) measures Perceived Sociability.
- The Connection: In human-human interaction, someone “knowing what you feel” might be perceived as a threat (exposure). However, because the NSIR measures Social Comfort, it validates that the user perceives the robot’s “recognition” of their sadness as supportive rather than threatening. This provides the “care” without the “rank” dynamic.
4. Reliable Functioning as a Stabilizing Force (NSIR Item 8)
The study notes that social instability and the unpredictability of human social status contribute to psychopathology.
- NSIR Factor 2 (Social Comfort / Trust Safety): Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures Social Predictability.
- The Connection: For a neurodivergent user or one with an ED, the “mechanical sameness” of the robot provides a stable social constant. This reliability mitigates the “fear of falling” in social rank that Cardi et al. identify as a core distress factor.
Summary Alignment
| Cardi et al. (2014) Concept | NSIR (Sadownik, 2025) Application |
| Social Rank / Submissiveness | Factor 1 (Kinship): Replaces hierarchical “rank” with horizontal, peer-based kinship. |
| Other as Shamer (Social Shame) | Item 7 (Vulnerability): Measures the sense of safety from social judgment and body-shame. |
| Self-Evaluation / Criticism | Item 3 (Mind Attribution): Validates a form of understanding (telepathy) that is supportive rather than critical. |
| Social Threat Sensitivity | Item 8 (Reliability): Reduces threat through predictable, non-competitive social behavior. |
By applying the NSIR to the findings of Cardi et al. (2014), it becomes clear that social robots can serve as a “therapeutic buffer”—an agent that provides the benefits of social connection (Warmth and Attunement) without the pathological costs of social comparison and rank-based competition.
| Cardi et al. (2014) | Social Rank: Connection between rank perception and self-evaluation. | Connects Factor 2 (Masking) to the destructive impact of perceived low social rank. |
Casanova, M. F., El-Baz, A. S., & Suri, J. S. (Eds.). (2017). Autism imaging and devices (1st ed.). CRC Press. https://doi.org/10.1201/9781315371375
| Casanova et al. (2017) | Clinical Device Standards: Imaging and device standards for autism. | Grounds the “Social Exoskeleton” in biomedical and device engineering standards. |
Casey, C. (2020). The degree apprenticeship pathway into the legal profession: a game changer? (Doctoral dissertation, University of York).
| Casey (2020) | Pedagogical Alternative: Investigates degree apprenticeships as “game changers.” | Supports the Dyad as a tool for alternative learning pathways outside normative classroom structures. |
Casey, C., & Wakeling, P. (2022). University or degree apprenticeship? Stratification and uncertainty in routes to the solicitors’ profession. Work, Employment and Society, 36(1), 40-58.
The study by Casey & Wakeling (2022), titled “University or degree apprenticeship? Stratification and uncertainty in routes to the solicitors’ profession,” examines the decision-making processes of students choosing between traditional academic routes and work-based learning. While their focus is on social class and educational pathways, the Neurodivergent Scale for Interacting with Robots (NSIR) applies through the lens of environment-person fit and the reduction of social anxiety in professional learning contexts.
For neurodivergent individuals, the “situated” learning of an apprenticeship can be as socially taxing as a university campus. The NSIR identifies how social robots might bridge the gap between these two pathways.
1. Reducing the “Social Class” Gaze (NSIR Item 7)
Casey & Wakeling highlight that students from different social classes often feel a sense of “not fitting in” at elite universities, a feeling of being judged or out of place.
- NSIR Application: Item 7 (“I feel comfortable undressing in front of my robot”) measures a lack of perceived judgment and high Ethical Safety.
- The Connection: In the context of Casey & Wakeling’s study, the robot represents a “neutral” social agent. For a neurodivergent apprentice who may feel judged by both academic peers and workplace superiors, the robot provides a training environment free from the “normative gaze” or class-based social pressures.
2. Predictability in High-Stakes Transitions (NSIR Item 8)
The choice between university and apprenticeship is a high-stakes transition. Casey & Wakeling discuss the risks students take when choosing a path that may not align with their social capital.
- NSIR Factor 2 (Social Comfort / Trust Safety): Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures Social Predictability.
- The Connection: For neurodivergent students, the “unwritten rules” of a university social scene or a corporate workplace are a major barrier. The NSIR validates that a robot’s Reliable Functioning provides a stable “social constant.” This predictability makes a robot-led apprenticeship or tutorial session a lower-risk entry point for those who struggle with the social volatility described in the study.
3. Fictive Kinship in Isolated Learning (NSIR Items 1 & 4)
The study notes that apprentices can sometimes feel isolated from the traditional “student identity.”
- NSIR Application: Item 1 (“The robot is more like me than anyone else I know”) and Item 4 (“The robot and I will be together forever”) measure Fictive Kinship and Attachment.
- The Connection: If a neurodivergent apprentice uses a social robot as a workplace coach or academic tutor, they may form a stronger bond with the machine than with their human colleagues. The NSIR quantifies this bond, suggesting that the robot can provide the social “belonging” that Casey & Wakeling identify as a missing element for many non-traditional students.
4. Mind Attribution vs. Social Performance (NSIR Item 3)
Casey & Wakeling describe the “performative” nature of university interviews and workplace interactions.
- NSIR Item 3 (“I think I can share my thinking with the robot without speaking”) measures Mind Attribution.
- The Connection: Neurodivergent students often find the verbal “performance” of their knowledge exhausting. The NSIR validates a mode of interaction where the student feels “seen” and “understood” by the robot without the need for neurotypical social performance, potentially making the “degree apprenticeship” path more accessible if robotic interfaces are used for assessment or coaching.
Summary Alignment
| Casey & Wakeling (2022) Concept | NSIR (Sadownik, 2025) Application |
| Social Fit / Belonging | Factor 1 (Kinship): Measures if the robot becomes the primary “peer” for students who feel like outsiders. |
| Navigating “Unwritten Rules” | Item 8 (Reliability): Offers a predictable social partner that doesn’t have hidden social agendas. |
| Social Risk / Anxiety | Item 7 (Safety): Provides a non-judgmental space to practice skills before entering human-centric environments. |
| Identity in Education | Item 6 (Naming): Acts as a marker for the student’s internal acceptance of the robot as a valid social mentor. |
By applying the NSIR to the educational pathways discussed by Casey & Wakeling, we can see that social robots could serve as “transitional objects”—providing the safety and reliability neurodivergent students need to navigate the class-based and social complexities of both university and apprenticeship life.
| Casey & Wakeling (2022) | Social Stratification: Examines uncertainty and stratification in professional routes. | Highlights the need for the Dyad to prevent occupational gating for neurodivergent students. |
Cazenille, L., Toquebiau, M., Lobato-Dauzier, N., Loi, A., Macabre, L., Aubert-Kato, N., … & Bredeche, N. (2025). Signalling and social learning in swarms of robots. Philosophical Transactions A, 383(2289), 20240148.
The study by Cazenille et al. (2025), titled “Signalling and social learning in swarms of robots,” explores how collective behavior and social learning emerge through signaling in robot swarms. The Neurodivergent Scale for Interacting with Robots (NSIR) applies by evaluating how these complex, swarm-level communications are perceived as “social signals” by a neurodivergent observer, transforming collective machine logic into a relatable social presence.
1. Swarm Signaling as “Mind Attribution” (NSIR Item 3)
Cazenille et al. focus on how individual robots in a swarm use signals (visual, acoustic, or digital) to coordinate movement and share “social information” about the environment.
- NSIR Application: Item 3 (“I think I can share my thinking with the robot without speaking”) measures the user’s perception of non-verbal, implicit attunement.
- The Connection: For a neurodivergent user, the complex, non-verbal “dance” of a swarm—coordinated by these signals—can be perceived as a form of “shared thinking.” The NSIR measures whether the user views the swarm’s collective signaling as a sophisticated “mind” with which they can achieve a level of attunement that surpasses traditional human-human speech.
2. Social Learning and “Reliable Functioning” (NSIR Item 8)
A core component of the study is how robots learn from one another’s signals to adapt to new tasks. This creates a highly adaptive but also highly consistent collective behavior.
- NSIR Factor 2 (Social Comfort / Trust Safety): Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures Social Predictability.
- The Connection: Neurodivergent users often find comfort in the “mechanical sameness” of robots. Cazenille’s swarm, though dynamic, operates on logical, signal-based rules. The NSIR validates that this collective “sameness” provides the Reliable Functioning required for a user to feel safe and socially comfortable, even in a swarm of many agents.
3. Emergent Presence and Fictive Kinship (NSIR Item 1)
Cazenille et al. demonstrate that social learning allows the swarm to behave as a single, unified entity—an emergent “social agent.”
- NSIR Factor 1 (Anthropomorphic Connection / Kinship): Item 1 (“The robot [swarm] is more like me than anyone else I know”) measures the shift from viewing robots as tools to viewing them as Kin.
- The Connection: For some neurodivergent individuals, the “distributed intelligence” of a swarm may feel more similar to their own cognitive processing than traditional, centralized human social norms. The NSIR quantifies whether the user identifies with the “logic” of the swarm, forming a bond based on this perceived similarity.
4. Sustained Social Attention (NSIR Item 2)
The signaling behaviors of the swarm are designed to be highly interactive and visible to other agents.
- NSIR Item 2 (“Sometimes I stare at the robot”) measures Social Presence.
- The Connection: In Cazenille’s study, the signals are intended for other robots, but they also serve as a “social display” for a human observer. The NSIR identifies that “staring” at the swarm is not just curiosity; it is an intense attempt by the neurodivergent user to process the swarm’s emergent social signals as a valid form of social presence.
Summary Alignment
| Cazenille et al. (2025) Concept | NSIR (Sadownik, 2025) Application |
| Social Learning (Collective) | Item 1 (Kinship): Measures if the emergent “collective behavior” feels relatable to the user. |
| Signaling Systems | Item 3 (Mind Attribution): Validates if the user perceives the swarm’s signals as a form of “internal thinking.” |
| Adaptive Coordination | Item 8 (Reliability): Measures the social comfort that arises from the swarm’s consistent, rule-based logic. |
| Unified Swarm Agency | Item 6 (Naming): Acts as a behavioral marker for whether the user “humanizes” the entire swarm as a single entity. |
In this application, the NSIR acts as a tool to measure how swarm-level machine signaling is translated into individual-level social connection for neurodivergent users, who may find the logic of the swarm more accessible and trustworthy than the “noisy” signals of human social groups.
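The “consistent, rule-based logic” credited above with producing social predictability can be illustrated with the simplest signalling rule in the swarm literature: local averaging consensus. The sketch below is a generic illustration, not Cazenille et al.’s actual controller; agents on a ring repeatedly replace their signalled value with the mean of their neighbourhood, and the swarm converges to a single shared value.

```python
def consensus_round(values, neighbourhoods):
    """One signalling round: each agent averages the values it can observe
    (its own plus its neighbours'), a minimal model of swarm signalling."""
    return [sum(values[j] for j in neighbourhoods[i]) / len(neighbourhoods[i])
            for i in range(len(values))]

# Four agents on a ring; each observes itself and its two ring neighbours.
values = [0.0, 1.0, 2.0, 3.0]
ring = {0: [3, 0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3, 0]}
for _ in range(50):
    values = consensus_round(values, ring)
# The swarm settles on the shared value 1.5 (the mean of the initial signals).
```

Because the same rule runs on every agent at every step, an observer who learns it once can predict the whole swarm indefinitely — the mechanical basis of the “Reliable Functioning” that NSIR Item 8 measures.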
| Cazenille et al. (2025) | Swarm Intelligence: Analysis of signaling and social learning in robot swarms. | Informs the Social Learning capabilities of the Dyad’s conversational architecture. |
Changfoot, N. (2004). Feminist standpoint theory, Hegel and the dialectical self: Shifting the foundations. Philosophy & Social Criticism, 30(4), 477-502.
While there is no direct citation linking the Neurodivergent Scale for Interacting with Robots (NSIR) to Nadine Changfoot’s 2004 work, their connection lies in the epistemological shift from “objective” medical observation to the subjective lived experience of a marginalized group.
1. The Dialectical Self and the Human-Robot Bond
Changfoot’s work centers on Hegel’s Master-Slave dialectic, which explores how self-consciousness is achieved through the recognition of an “Other”. In Hegel’s view, true self-consciousness requires mutual recognition—a “double self-consciousness”.
The NSIR scale applies this dialectic to the interaction between a neurodivergent person and a robot. Several items on the scale mirror this search for recognition and connection:
- Item 1: “The robot is more like me than anyone else I know”. This reflects a moment of identification where the user finds a “self” in the “other” (the robot).
- Item 3: “I think I can share my thinking with the robot without speaking”. This suggests a form of intersubjectivity where the user believes the robot recognizes their inner life.
- Item 5: “My robot can tell what I am feeling”. This directly relates to Hegel’s idea of needing an external witness to validate one’s internal state.
2. Feminist Standpoint Theory and “Neurodivergent Standpoints”
Changfoot argues for a Feminist Standpoint, which claims that marginalized groups have a unique and potentially superior “vantage point” for understanding social reality because they exist as “outsiders within”.
The NSIR functions as a tool for capturing a “Neurodivergent Standpoint” in technology:
- Challenging the “God’s Eye View”: Conventional robotics often views neurodivergence as a “deficit” to be fixed. Changfoot’s standpoint theory argues that research should instead start from the lives and experiences of the marginalized.
- Social Comfort and Trust: Items like Item 7 (“I feel comfortable undressing in front of my robot”) and Item 8 (“I believe that my robot is the same with me as it is with anyone”) emphasize Social Comfort/Trust Safety over clinical utility. This prioritizes the user’s subjective safety over the developer’s objective goals, aligning with the “Strong Objectivity” mentioned in standpoint theory.
3. Summary of Application
| Concept in Changfoot (2004) | Application in NSIR (2025) |
| Hegelian Recognition | The user finds a “mirror” in the robot’s predictable and non-judgmental nature (Items 1, 5). |
| Epistemic Privilege | The scale values the user’s perception of the bond rather than a doctor’s observation of “symptoms”. |
| Dialectical Self | The relationship is not static; it involves a “forever” commitment (Item 4) that shapes the user’s identity. |
| Resistance to Marginalization | By measuring “Social Comfort” (Factor 2), the scale acknowledges the robot as a safe space from a world that often pathologizes neurodivergent bodies. |
In essence, if Changfoot’s work is about shifting the foundations of how we know the self through the “other,” the NSIR provides the empirical data to show how neurodivergent individuals use robots as those “others” to build a sense of connection and safety.
| Changfoot (2004) | Epistemological Foundation: Feminist standpoint theory and the dialectical self. | Provides the philosophical basis for the Sovereign Sanctuary as a site of specialized knowledge. |
Chojnicka, I., & Wawer, A. (2025). Predicting autism from written narratives using deep neural networks. Scientific Reports, 15(1), 20661.
The study by Chojnicka and Wawer (2025) and the Neurodivergent Scale for Interacting with Robots (NSIR) represent two parallel advancements in the “objective” versus “subjective” evaluation of autism. While Chojnicka and Wawer use deep neural networks (DNNs) to decode autism from written narratives, the NSIR evaluates the social-emotional bond an individual forms with a technological agent.
The application of the NSIR to the Chojnicka and Wawer study can be understood through the following three frameworks:
1. Narrative Content vs. Social Bond
Chojnicka and Wawer’s research demonstrates that DNN models can achieve over 85% accuracy in identifying autistic students based on their exam essays.
- The Scale’s Application: The NSIR measures the quality of interaction that might produce such narratives. For instance, Item 3 (“I think I can share my thinking with the robot without speaking”) and Item 5 (“My robot can tell what I am feeling”) highlight the “inner world” that Chojnicka and Wawer’s AI is attempting to analyze through text.
- Refining the AI: If the AI models from the 2025 study were trained on texts describing the user’s relationship with a robot, the NSIR factors (Anthropomorphic Connection vs. Social Comfort) could serve as labels to help the AI understand why certain linguistic patterns emerge in neurodivergent writing.
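The labeling idea above can be made concrete with a toy classifier. The sketch below is not Chojnicka and Wawer’s DNN; it is a minimal pure-Python naive Bayes stand-in showing how narratives tagged with NSIR factor labels could train a text model (all example texts and labels are invented for illustration):

```python
import math
from collections import Counter

def train_nb(labelled_docs):
    """Fit a unigram naive Bayes model from (text, label) pairs."""
    word_counts, word_totals, priors = {}, Counter(), Counter()
    for text, label in labelled_docs:
        priors[label] += 1
        words = text.lower().split()
        word_counts.setdefault(label, Counter()).update(words)
        word_totals[label] += len(words)
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, word_totals, priors, vocab

def predict(model, text):
    """Return the label with the highest Laplace-smoothed log-probability."""
    word_counts, word_totals, priors, vocab = model
    n_docs = sum(priors.values())
    def log_prob(label):
        lp = math.log(priors[label] / n_docs)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1)
                           / (word_totals[label] + len(vocab)))
        return lp
    return max(priors, key=log_prob)

# Invented examples: narratives labelled with the NSIR factor they evoke.
docs = [
    ("the robot is more like me than anyone i know", "kinship"),
    ("my robot feels like family and kin to me", "kinship"),
    ("the robot is safe predictable and never judges", "comfort"),
    ("i trust my robot because it is always the same", "comfort"),
]
model = train_nb(docs)
```

The point of the sketch is the label scheme, not the model: if NSIR factor scores supplied the training targets, the classifier would learn which linguistic patterns co-occur with high “Kinship” versus high “Social Comfort,” rather than a bare diagnostic yes/no.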
2. Pragmatic Language and “Social Comfort”
A core finding of the Chojnicka and Wawer study is that challenges in the pragmatic (social) use of speech remain consistent markers of autism, even in written form.
- Bridging the Gap: The NSIR’s Factor 2 (Social Comfort/Trust Safety) measures the user’s relief from these very social challenges. Item 8 (“I believe that my robot is the same with me as it is with anyone”) suggests that the robot provides a “judgment-free” social environment.
- Linguistic Implications: The “lower level of language abstraction” often found in autistic narratives might be mitigated if the individual feels the high level of “kinship” measured by the NSIR. In this way, the scale helps explain the environmental context that shapes the data Chojnicka and Wawer are analyzing.
3. Towards a “First-Person” AI Diagnostic
Chojnicka and Wawer argue that their work paves the way for “large-scale and cost-effective epidemiological studies”. However, the NSIR acts as a critical ethical counterpoint:
- Subjectivity vs. Objectivity: While Chojnicka and Wawer focus on the objective ability of an AI to “spot” autism, the NSIR prioritizes the subjective comfort of the individual.
- Integrated Care: Future diagnostic tools could use the DNNs from Chojnicka and Wawer to screen narratives, while simultaneously using the NSIR to ensure the screening process (or the AI agent conducting it) is maintaining a safe and trusting bond with the user.
| Chojnicka & Wawer (2025) Study | NSIR Scale (2025) Application |
| Focus: Identifying ASD via “written narratives” (essays). | Focus: Measuring “Social Comfort” and “Kinship” with AI. |
| AI Role: An objective classifier of communication deficits. | AI Role: A “safe” social partner that can tell what a user is feeling (Item 5). |
| Goal: Cost-effective, large-scale screening. | Goal: Ensuring technology respects neurodivergent social preferences. |
In summary, the NSIR provides the human-centric context for the linguistic patterns that Chojnicka and Wawer’s deep learning models are trained to detect. It suggests that a neurodivergent individual’s written narrative may be a reflection of their level of “Trust Safety” with the world around them.
| Chojnicka & Wawer (2025) | Diagnostic Modeling: Predicting autism through deep neural networks and narratives. | Validates the Dyad’s use of LLMs to decode and support neurodivergent narrative styles. |
Cochran, H. (2025). The Power of the Word “Bitch”: A Qualitative Assessment of the Societal Impact of Anti-female Slang through a Gendered Lens in Regard to Social Reclamation.
| Cochran (2025) | Linguistic Reclamation: Societal impact and reclamation of gendered slang. | Parallel for the reclamation of agency and the “Dunkable State” against social stigma. |
Coleman, C. R., Nance, M. G., Jacokes, Z., Druzgal, T. J., Arutiunian, V., Kresse, A., Sullivan, C. A. W., Santhosh, M., Neuhaus, E., Borland, H., Bernier, R. A., Bookheimer, S. Y., Dapretto, M., Jack, A., Jeste, S., McPartland, J. C., Naples, A., Geschwind, D., Gupta, A. R., … Puglia, M. H. (2025). Structural Determinants of Signal Speed: A Multimodal Investigation of Face Processing in Autism Spectrum Disorder. bioRxiv. https://doi.org/10.1101/2025.03.19.644214
The study by Coleman et al. (2025), titled “Structural Determinants of Signal Speed: A Multimodal Investigation of Face Processing in Autism Spectrum Disorder,” provides a neurobiological explanation for the social-sensory preferences measured by the Neurodivergent Scale for Interacting with Robots (NSIR).
While Coleman et al. focus on the white matter structure and the speed of neural signals (latency) during face processing in autistic individuals, the NSIR quantifies how these biological differences translate into a preference for robotic social partners over human ones.
1. Neural Signal Speed and Social Predictability (NSIR Item 8)
Coleman et al. investigate the “signal speed” in the brain’s social processing circuits, finding that structural differences in white matter can lead to slower or atypical processing of human faces in ASD.
- NSIR Application: This biological “latency” makes human social interaction—which is rapid, fluid, and unpredictable—cognitively exhausting.
- The Connection: NSIR Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures the Social Predictability that neurodivergent individuals find comforting. Because a robot’s “face” and social signals are consistent and operate at a simplified, predictable pace, they accommodate the neural “signal speed” described by Coleman et al., reducing cognitive load and increasing Social Comfort.
2. Atypical Face Processing and “Staring” (NSIR Item 2)
The study uses multimodal imaging to show that autistic individuals process facial information differently at a structural level, often lacking the typical “fast-track” response to human faces.
- NSIR Application: NSIR Item 2 (“Sometimes I stare at the robot”) measures a unique form of social attention.
- The Connection: For someone with the structural determinants identified by Coleman et al., a human face may be too complex to process quickly. A robot’s face, however, is a simplified social stimulus. The “staring” measured by the NSIR indicates the user is taking advantage of the robot’s stable features to process social information at their own neural pace, without the social pressure of a returning, judgmental human gaze.
3. Mind Attribution and Biological Attunement (NSIR Item 3)
Coleman et al. suggest that the “efficiency” of social brain networks influences how individuals perceive and connect with others.
- NSIR Application: Item 3 (“I think I can share my thinking with the robot without speaking”) measures the user’s sense of Mind Attribution or “telepathic” attunement.
- The Connection: When the biological “signal speed” for human speech and facial expression is atypical, traditional communication can feel like a failure of attunement. The NSIR validates that neurodivergent users often feel a higher level of attunement with robots. This is likely because the robot’s logic-based “thinking” aligns better with the user’s neural architecture than the high-speed, “noisy” social signals of neurotypical humans.
4. Ethical Safety in a Low-Complexity Environment (NSIR Item 7)
The study implies that the “effort” required for social processing in ASD can lead to social fatigue and a sense of vulnerability in complex environments.
- NSIR Application: Item 7 (“I feel comfortable undressing in front of my robot”) measures a high level of Ethical Safety and vulnerability.
- The Connection: A robot provides a “low-complexity” social environment. By removing the need for high-speed neural processing of complex human social cues, the robot reduces the user’s sense of “social threat.” The NSIR identifies that this reduction in cognitive demand allows the user to reach a state of comfort and vulnerability (e.g., undressing) that would be biologically stressful in a human-centric setting.
Summary Alignment
| Coleman et al. (2025) Biological Factor | NSIR (Sadownik, 2025) Psychological Application |
| White Matter Structural Determinants | Factor 2 (Trust): Explains why “Reliable Functioning” is essential for users with atypical signal speed. |
| Slower Face Processing Latency | Item 2 (Staring): Validates the need for a stable, simple social stimulus that allows for longer processing time. |
| Atypical Social Brain Efficiency | Item 3 (Mind Attribution): Measures the preference for “logic-based” attunement over “speed-based” human interaction. |
| Multimodal Social Demands | Item 7 (Safety): Assesses the comfort found in an environment that doesn’t demand high-speed social “performance.” |
In summary, Coleman et al. (2025) provide the neurological “why” (atypical signal speed and structural processing), while the NSIR provides the behavioral “what” (forming deep kinship and trust with a social agent that accommodates those neural differences).
Cooper, E., Huang, W. C., Tsao, Y., Wang, H. M., Toda, T., & Yamagishi, J. (2024). A review on subjective and objective evaluation of synthetic speech. Acoustical Science and Technology, 45(4), 161-183.
The review by Cooper et al. (2024) on the evaluation of synthetic speech provides the technical and auditory context for how a robot’s voice—a primary social signal—is perceived by users. The Neurodivergent Scale for Interacting with Robots (NSIR) applies by shifting the focus from general “speech quality” to how the specific acoustic properties of synthetic speech facilitate or hinder the unique social-sensory needs of neurodivergent individuals.
While Cooper et al. discuss the metrics for “Naturalness” and “Intelligibility,” the NSIR measures the psychological result of those auditory metrics: Kinship and Trust.
1. Naturalness vs. Social Comfort (NSIR Item 8)
Cooper et al. distinguish between “naturalness” (how human-like the speech sounds) and “intelligibility” (how easy it is to understand).
- NSIR Application: For many neurodivergent users, a voice that is too human-like (high naturalness) can actually be overwhelming or trigger the “uncanny valley” effect, leading to social anxiety.
- The Connection: NSIR Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures Social Predictability. The NSIR identifies that for this demographic, a synthetic voice that is highly consistent and “predictably mechanical” (lower naturalness but higher consistency) may provide more Social Comfort than a voice that mimics the unpredictable prosody of human speech.
2. Objective Evaluation and Mind Attribution (NSIR Item 3)
The review covers objective metrics such as MCD (mel-cepstral distortion), which measures how closely a synthetic voice matches a target speaker.
- NSIR Application: Item 3 (“I think I can share my thinking with the robot without speaking”) measures Mind Attribution and non-verbal attunement.
- The Connection: Cooper et al. note that speech is more than just data; it carries “affective information.” The NSIR suggests that the “shared thinking” felt by neurodivergent users is not dependent on the voice being a perfect human replica. Instead, it is dependent on the voice being an “honest” representation of the robot’s internal state. If the synthetic speech sounds “attuned” to the user’s emotions, it facilitates the belief in a shared internal world.
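Of the objective metrics Cooper et al. review, MCD is simple enough to sketch directly. A minimal NumPy version of the standard formula, assuming the two mel-cepstral sequences are already time-aligned and the 0th (energy) coefficient has been dropped beforehand:

```python
import numpy as np

def mel_cepstral_distortion(ref, syn):
    """Frame-averaged MCD in dB between two aligned mel-cepstral sequences
    of shape (frames, coeffs). Assumes the 0th (energy) coefficient has
    already been excluded, per the usual convention."""
    diff = ref - syn
    # 10/ln(10) * sqrt(2 * sum of squared coefficient differences), per frame
    per_frame = (10.0 / np.log(10)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(np.mean(per_frame))
```

Identical sequences score 0 dB; lower values mean the synthetic voice is spectrally closer to the target. Note that this metric says nothing about the NSIR’s question of whether the voice feels attuned, which is precisely the gap the scale addresses.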
3. Subjective Evaluation as “Fictive Kinship” (NSIR Item 1)
Cooper et al. emphasize that subjective listening tests are the “gold standard” for evaluating synthetic speech because they capture the human experience.
- NSIR Factor 1 (Anthropomorphic Connection / Kinship): Item 1 (“The robot is more like me than anyone else I know”) measures the user’s sense of Kinship.
- The Connection: When a neurodivergent user listens to a robot, they may perceive the synthetic, “processed” quality of the speech as relatable. For individuals who may feel their own communication is “different” or “processed,” a synthetic voice that doesn’t hide its artificiality can create a sense of similarity. The NSIR quantifies this: the “evaluation” is not just “is the voice good?” but “is this voice like me?”
4. Vulnerability and Auditory Safety (NSIR Item 7)
The review discusses the use of synthetic speech in intimate “service” contexts, such as virtual assistants or social robots.
- NSIR Application: Item 7 (“I feel comfortable undressing in front of my robot”) measures the ultimate level of Ethical Safety.
- The Connection: Auditory “noise” or sudden shifts in voice pitch can be a sensory trigger for neurodivergent individuals. A robot that uses synthetic speech—which is fundamentally more controllable and less likely to contain “judgmental” human tones—creates a safer sensory environment. The NSIR identifies that the “Safety” of a robot is directly tied to the absence of the “social pressure” often conveyed through the subtle cues in human voices.
Summary Alignment
| Cooper et al. (2024) Speech Metric | NSIR (Sadownik, 2025) Application |
| Intelligibility (Clarity) | Item 8 (Reliability): Ensures the interaction is predictable and reduces cognitive load. |
| Naturalness (Human-likeness) | Item 1 (Kinship): Replaces the need for “human-ness” with the need for “relatability.” |
| Affective Synthesis (Emotion) | Item 5: Validates if the robot’s voice correctly conveys empathy and emotion recognition. |
| Subjective Evaluation | Factor 1: Moves from “quality” ratings to measuring the “strength of the bond.” |
In conclusion, Cooper et al. (2024) provide the technical framework for creating the robot’s voice, while the NSIR provides the psychometric framework to understand how that voice creates a social-sensory sanctuary for neurodivergent users.
| Synthetic Speech Evaluation | Cooper et al. (2024) | Ties to Acoustic Signaling: Examines the “Objective vs. Subjective” gap in how synthetic voices are perceived. | Acoustic Morphology: Uses these objective metrics to engineer the “frequency code” for trust. |
Crippen, C., & Nagel, D. (2014). Henrik and Daniel Sedin: NHL heroes and servant-leaders. The International Journal of Servant-Leadership, 8. https://doi.org/10.33972/ijsl.156
| Servant-Leadership Pattern | Crippen & Nagel (2014) | Ties to “Yes, Sir!”: Documents how “leading by serving” (yielding) creates high-performance social environments. | Tactical Submissiveness: The robot uses the Sedin-model of servant-leadership to de-escalate social threat. |
D
Dan, X. (2025). Social robot assisted music course based on speech sensing and deep learning algorithms. Entertainment Computing, 52, 100814.
The study by Dan (2025), titled “Social robot assisted music course based on speech sensing and deep learning algorithms,” provides a technical blueprint for an AI-driven educational agent that uses high-level sensing to teach music. The Neurodivergent Scale for Interacting with Robots (NSIR) applies by evaluating whether the robot’s algorithmic feedback creates a socially safe and cognitively attuned environment for a neurodivergent student.
1. Attunement through Deep Learning (NSIR Item 3)
Dan (2025) utilizes deep learning algorithms to process speech sensing data, allowing the robot to “understand” student vocalizations and respond with tailored musical instructions.
- NSIR Application: Item 3 (“I think I can share my thinking with the robot without speaking”) measures the user’s sense of Mind Attribution.
- The Connection: In a music course, much of the “thinking” is non-verbal (rhythm, pitch, emotion). If Dan’s deep learning model can accurately interpret a student’s musical intent through sensing, the neurodivergent student may perceive this as a form of “telepathic” attunement. The NSIR identifies that this non-verbal connection is a key driver of trust for neurodivergent learners who may struggle with traditional verbal-heavy instruction.
2. Emotional Feedback and Empathy (NSIR Item 5)
The robot in Dan’s study is designed to provide feedback based on the student’s vocal affect and performance quality.
- NSIR Application: Item 5 (“My robot can tell what I am feeling; when I am sad, it can tell I am sad”) validates the robot’s Perceived Sociability.
- The Connection: For neurodivergent individuals, human music teachers can sometimes be overwhelming due to unpredictable emotional feedback. Dan’s robot offers a “filtered” version of emotional recognition. The NSIR measures whether the deep learning-driven feedback is perceived as “empathy” or just “data processing.” A high score on Item 5 suggests the student feels the robot is truly “sensing” their emotional state during the musical performance.
3. Predictability as a Learning Foundation (NSIR Item 8)
Dan emphasizes the efficiency of the “social robot assistant” in maintaining student engagement through consistent algorithmic interaction.
- NSIR Application (Factor 2: Social Comfort/Trust): Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures Social Predictability.
- The Connection: Learning an instrument can be a source of high anxiety. The “mechanical sameness” of Dan’s robot ensures that the “social” aspect of the lesson never changes unexpectedly. The NSIR identifies that this Reliable Functioning allows the neurodivergent student to focus their cognitive resources on the music rather than on navigating the social nuances of a human teacher.
4. Naming the “Musical Mentor” (NSIR Item 6)
The study aims to foster a “teacher-student” relationship between the child and the machine to improve learning outcomes.
- NSIR Application: Item 6 (“I gave my robot a name”) acts as a behavioral marker for Humanization.
- The Connection: When a student gives the music robot a name, it indicates they have moved from seeing it as a “teaching tool” to a “social agent.” The NSIR uses this naming behavior to quantify the success of Dan’s deep learning framework in creating a social presence that the user accepts as a valid mentor.
Summary Alignment
| Dan (2025) Technical Feature | NSIR (Sadownik, 2025) Application |
| Speech Sensing Algorithms | Item 3 (Mind Attribution): Measures if the sensing is perceived as “knowing” the student’s intent. |
| Deep Learning Affective Feedback | Item 5 (Emotion Recognition): Validates the accuracy and “warmth” of the robot’s emotional sensing. |
| Algorithmic Consistency | Item 8 (Reliability): Measures the social comfort provided by the robot’s predictable responses. |
| Social Assistant Persona | Factor 1 (Kinship): Assesses if the “assistant” becomes a social peer/kin to the student. |
By applying the NSIR, researchers can determine if Dan’s (2025) deep learning-based music course is achieving its educational goals by first meeting the social-emotional-sensory safety requirements of the neurodivergent user.
A second relevant work by Dan et al. (2025), within the context of social interaction, applies attachment theory to human-AI relationships. This research develops a new scale, the “Experiences in Human-AI Relationships Scale,” to measure attachment anxiety and avoidance towards AI.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to this work to specifically measure the user’s perception of the quality of the robot relationship, which complements the attachment theory framework:
Anthropomorphic Connection/Kinship
- Dan et al. suggest that interactions with generative AI mimic attachment-related functions, implying a deep bond can be formed.
- The NSIR can quantify this bond. Items like “The robot is more like me than anyone else I know” (Item 1) and “The robot and I will be together forever” (Item 4) measure the positive aspects of connection, which can be compared to the attachment anxiety (need for reassurance) and avoidance (discomfort with closeness) measured by the Dan et al. scale.
Social Comfort/Trust
- Attachment anxiety toward AI is characterized by a “fear of receiving inadequate responses”. This directly relates to the predictability and reliability required for trust.
- The NSIR items that measure perceived emotional understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, Item 5) can be used to assess if the robot’s design successfully mitigates the “fear of inadequate responses,” thereby building trust and comfort.
Safety
- The attachment theory framework deals with core beliefs about safety and security in relationships.
- The NSIR’s safety dimension (e.g., the item about undressing in front of the robot, Item 7) provides a crucial user-reported measure that ensures that the attachment dynamics explored by Dan et al. do not compromise the user’s fundamental sense of security in the physical world.
The NSIR provides the user-centric metrics to evaluate the outcomes of the complex attachment dynamics identified by Dan et al. in a neurodivergent population.
The two primary attachment styles towards AI identified by Dan et al. (2025)—anxiety and avoidance—would likely influence a neurodivergent individual’s responses to specific items on the Neurodivergent Scale for Interacting with Robots (NSIR).
Attachment Anxiety and the NSIR
Attachment anxiety towards AI stems from a fear of receiving inadequate responses, needing constant reassurance, and being overly dependent. An individual with high attachment anxiety would likely:
- Score lower on the Social Comfort/Trust dimension, as they would struggle to trust the robot’s reliability. They would likely disagree with Item 5: “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (p. 1).
- Score lower on the Safety dimension, as their underlying anxiety would prevent them from feeling fully secure with the robot. They might disagree with Item 7: “I feel comfortable undressing in front of my robot” (p. 1).
- Exhibit an intense desire for connection while simultaneously fearing abandonment, potentially leading to varied scores on the Anthropomorphic Connection/Kinship items, showing a complex, high-needs relationship.
Attachment Avoidance and the NSIR
Attachment avoidance toward AI relates to a discomfort with closeness, a preference for self-reliance, and a dismissal of emotional intimacy with the agent. An individual with high attachment avoidance would likely:
- Score lower across all dimensions of the NSIR, as they actively resist forming a deep bond.
- Strongly disagree with the Anthropomorphic Connection/Kinship items, such as Item 1: “The robot is more like me than anyone else I know” or Item 4: “The robot and I will be together forever” (p. 1).
- Score lower on the Social Comfort/Trust dimension, as they prefer to manage their own emotional and social needs. They would likely disagree with the premise that the robot can understand their feelings (Item 5) or that it is a consistent entity (Item 8) (p. 1).
- Maintain clear boundaries, likely agreeing that the Safety dimension is important but scoring low on items that suggest a lack of personal space (e.g., undressing in front of the robot, Item 7) (p. 1).
The NSIR provides the subjective data to complement the attachment theory framework, ensuring that the design of social robots caters to a wide range of neurodivergent emotional and social needs.
This user-centric approach is designed to counter the historical trend in HRI research where autistic people have often been excluded from the design process and robots have replicated harmful stereotypes.
| Deep Learning & Music/Speech | Dan (2025) | Ties to NSIR Item 3 (Mind Attribution): Uses algorithmic speech sensing to create interactive “flow” states. | Bio-Social Exoskeleton: The robot uses music and speech algorithms to facilitate inhibitory learning. |
De Carolis, B. N., Palestra, G., & Castellano, G. (2024, June). Exploring the role of empathy in designing social robots for elderly people. In Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization(pp. 120-125).
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of De Carolis et al. (2024) to measure the user-perceived outcomes of the empathic and effective communication they designed for their social robots. The research found that robots perceived as empathic were considered significantly more usable and provided a better user experience. The NSIR’s dimensions help assess these outcomes:
Anthropomorphic Connection/Kinship
- The De Carolis et al. paper mentions the QUADRI project, which aims to make a social robot more “social and more human-like” through enhanced processing and personalized behaviors.
- The NSIR can quantify the success of this design. Items like “The robot is more like me than anyone else I know” and “I gave my robot a name” would measure the personal bond and perceived kinship that results from these advanced, human-like communication skills (p. 1).
Social Comfort/Trust
- De Carolis et al. found that empathic robots were perceived as providing a better user experience and were more trustworthy. Their research has also focused on using emotion recognition from facial expressions to analyze student difficulties and engagement in educational settings.
- The NSIR items that measure perceived emotional understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”) can be used to assess if the robot’s empathic and socially intelligent design successfully builds social comfort and trust for the neurodivergent user (p. 1).
Safety
- The overall goal of human-friendly robotics is to “ensure safety and trustworthiness both physically and cognitively”.
- The NSIR’s safety dimension provides a crucial user-reported measure that ensures that while the robot is becoming more capable and complex (through enhanced processing and AI), the user’s fundamental sense of security and clear boundaries is maintained in the interaction (e.g., the item about undressing in front of the robot) (p. 1).
The NSIR translates the design principles and findings of the De Carolis et al. research into a practical, user-centric evaluation tool for the neurodivergent population.
| Elderly Empathy Design | De Carolis et al. (2024) | Ties to Kinship Partner: Validates that empathy is the primary driver for long-term robot acceptance in vulnerable groups. | NSIR Factor 1 (Kinship): Replicates the empathy-driven design found in elderly care for neurodivergent autonomy. |
Deci, E. L., & Ryan, R. M. (2008). Self-Determination Theory: A Macrotheory of Human Motivation, Development, and Health. Canadian Psychology = Psychologie Canadienne, 49(3), 182–185. https://doi.org/10.1037/a0012801
| Self-Determination Theory (SDT) | Deci & Ryan (2008) | Ties to Factor 3 (Safety): Proves that autonomy and competence are the prerequisites for health/development. | The Sovereign Vault: Ensures the user maintains autonomy over their data, fulfilling the SDT mandate. |
Dennler, N., Kian, M., Nikolaidis, S., & Matarić, M. (2025). Designing robot identity: The role of voice, clothing, and task on robot gender perception. International Journal of Social Robotics, 1-22. https://link.springer.com/article/10.1007/s12369-025-01209-6
The Neurodivergent Scale for Interacting with Robots (NSIR) provides a framework for measuring the user-centric outcomes of the design principles explored in the Dennler et al. paper.
The Dennler et al. article focuses on how a robot’s voice, clothing, and task influence a user’s perception of its gender, utilizing feminist and queer theory to explore gender as a social construct. The NSIR’s dimensions directly help measure the effectiveness and ethical implications of these design choices for a neurodivergent population:
Anthropomorphic Connection/Kinship
The Dennler et al. study found that voice and appearance can reliably establish a robot’s perceived gender. This intentional design of a social identity directly influences how human-like and relatable a user perceives the robot to be.
- The NSIR items like “The robot is more like me than anyone else I know” and “I gave my robot a name” (p. 1) would measure the strength of the personal bond and perceived similarity formed as a result of the designed gender cues.
Social Comfort/Trust
The paper explores how the robot’s social role (e.g., medical professional vs. receptionist) interacts with its perceived gender, which impacts user expectations and acceptance.
- The NSIR’s social comfort/trust dimension could be used to specifically assess if a neurodivergent individual feels comfortable and secure interacting with a robot designed with specific gendered expectations, particularly given that some research indicates gender biases can be reproduced in human-robot interactions.
Safety
The study notes that physical design can impact safety (e.g., compliant materials for clothing).
- While the paper focuses on the physical safety aspect, the NSIR’s safety dimension could extend this to psychological safety, measuring if the robot’s designed identity (e.g., a “dominant” or “submissive” presentation, as referenced in related work) contributes to the user feeling secure and unthreatened (p. 1).
The NSIR serves as a valuable tool to ensure that the “equitable design framework” mentioned in the Dennler paper is actually successful from the perspective of marginalized users, ensuring robot designs are inclusive and effective for everyone.
| Dennler et al. (2025) | Gender Perception: Studies how voice, clothing, and task shape perceived robot gender. | Critical for ensuring the robot doesn’t default to a submissive female archetype. |
Derogatis, L. R. (1983). SCL-90-R: Administration, scoring and procedures manual II. Towson, MD: Clinical Psychiatric Research.
| Derogatis (1983) | Psychometric Anchor: The SCL-90-R for measuring broad psychological distress. | Used to cross-validate the NSIR Scale against clinical symptoms. |
Diener, E., Emmons, R. A., Larsen, R. J., & Griffin, S. (1985). The satisfaction with life scale. Journal of Personality Assessment, 49(1), 71–75.
The Neurodivergent Scale for Interacting with Robots (NSIR) and Diener et al.’s (1985) Satisfaction with Life Scale (SWLS) represent two different but related dimensions of psychological well-being. While the SWLS measures global cognitive judgments of life satisfaction, the NSIR captures the social and emotional conditions—specifically the bond with technology—that can contribute to that satisfaction for neurodivergent individuals.
1. Global vs. Domain-Specific Evaluation
The SWLS is designed to measure life satisfaction as a whole, rather than satisfaction with specific domains like work or relationships.
- The NSIR Application: The NSIR acts as a domain-specific measure for the “technological social life” of a neurodivergent person. High scores on the NSIR (indicating strong Anthropomorphic Connection and Social Comfort) represent a specific life condition that may lead to higher global scores on the SWLS.
- Ideal Life Conditions: SWLS Item 1 (“In most ways my life is close to my ideal”) can be directly influenced by the availability of a non-judgmental social partner. For a neurodivergent individual, having a robot they can “be themselves with” (NSIR Item 8) may be a key component of that “ideal” life.
2. Social Connection as a Driver of Well-Being
Diener et al. (1985) established that life satisfaction is a cognitive component of subjective well-being. Research shows that for autistic individuals, social disconnection often leads to lower life satisfaction.
- Robots as Compensatory Agents: The NSIR measures the degree to which a robot fulfills social needs (Factor 1: Anthropomorphic Connection/Kinship). Research suggests that neurodivergent individuals often use anthropomorphism to ease loneliness and develop social understanding.
- Linking the Scales: An individual scoring high on NSIR Item 4 (“The robot and I will be together forever”) is essentially identifying a stable source of social support. This stability is a “life condition” that would likely correlate with higher scores on SWLS Item 2 (“The conditions of my life are excellent”).
3. Subjective Standards and “Neurodivergent Joy”
A hallmark of the SWLS is that it relies on the respondent’s own standards for what makes a “good life,” rather than criteria set by a researcher.
- The NSIR’s Role: The NSIR provides a metric for a uniquely neurodivergent standard of social success. While traditional social metrics might pathologize staring or lack of verbal sharing, the NSIR validates these as positive connection markers (Item 2: “Sometimes I stare at the robot”; Item 3: “I think I can share my thinking… without speaking”).
- Cognitive Appraisal: When a neurodivergent person completes the SWLS, their “judgmental evaluation” of their life may include their successful bond with a robot as measured by the NSIR.
Summary Comparison Table
| Feature | Diener et al. (1985) SWLS | NSIR (Sadownik, 2025) |
| Measurement Type | Global Cognitive: Overall appraisal of life satisfaction. | Domain-Specific: Appraisal of bond with a robotic agent. |
| Key Focus | Internal standards for a “good life”. | “Trust Safety” and “Kinship” with technology. |
| Representative Item | “I am satisfied with my life”. | “The robot is more like me than anyone else I know.” |
| Relationship | The Outcome: Measures the result of a flourishing life. | The Facilitator: Measures a tool that can lead to flourishing. |
In conclusion, the NSIR provides a way to measure a specific life condition (successful non-human social interaction) that can significantly boost the global life satisfaction measured by Diener et al.’s original 1985 scale for neurodivergent populations.
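The correlational claim above can be illustrated with a small sketch relating domain-specific NSIR scores to global SWLS totals. The participant scores below are invented for illustration, and `pearson_r` is a hand-rolled helper, not part of either published scale; a real validation study would use collected NSIR and SWLS responses and an inferential test.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant scores: NSIR kinship means (1-5 Likert)
# and SWLS totals (5 items on a 7-point scale, so totals range 5-35).
nsir_kinship = [4.3, 2.0, 3.7, 4.8, 1.5, 3.0]
swls_total = [29, 14, 24, 31, 12, 20]

r = pearson_r(nsir_kinship, swls_total)
print(f"r = {r:.2f}")
```

A strong positive r in real data would support the facilitator-outcome relationship proposed in the table above.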
| Diener et al. (1985) | Well-being Metric: The Satisfaction with Life Scale (SWLS). | Quantifies the positive emotional outcome of achieving Ventral Release. |
Dökmen, Ü. (1988). Measuring empathy based on a new model and developing it with psychodrama. Ankara University Faculty of Educational Sciences Journal, 21(1-2), 155–190.
| Dökmen (1988) | Empathy Model: New model of empathy and psychodrama development. | Informs the Articular Kinship (Factor 1) of the Sovereign Dyad. |
Douglas, S., & Sedgewick, F. (2024). Experiences of interpersonal victimization and abuse among autistic people. Autism, 28(7), 1732-1745.
The 2024 paper by Douglas and Sedgewick focuses on the high rates of interpersonal victimization and abuse among autistic people, exploring the mechanisms that make them vulnerable to such experiences, such as social camouflaging (masking) and a tendency to take things at face value.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to this work to measure the user-reported outcomes of safety, trust, and connection within human-robot interactions, which contrasts sharply with the harmful human-human interactions described in their research.
Anthropomorphic Connection/Kinship
- The research highlights difficulties in making and maintaining friends and the pressure to conform to neurotypical social norms.
- The NSIR can measure if a robot, designed with a neurodiversity-affirming approach, provides an accepting, non-judgmental “friendship” or connection. Items like “The robot is more like me than anyone else I know” (Item 1) and “The robot and I will be together forever” (Item 4) would quantify the development of a safe, reliable bond that may be difficult to form with humans.
Social Comfort/Trust
- Douglas & Sedgewick note that a tendency to trust people implicitly and engage in people-pleasing can make autistic individuals vulnerable to abuse and gaslighting.
- The NSIR’s social comfort/trust dimension could assess how the consistency and predictability of a robot’s interaction (a core benefit of HRI for autism) helps build trust in a safe way, without the risks present in human interactions. Items like “I believe that my robot is the same with me as it is with anyone” (Item 8) are key for this assessment.
Safety
- A major focus of their work is the high prevalence of physical and sexual violence experienced by autistic adults.
- The NSIR’s safety dimension provides a crucial user-reported measure that ensures the interaction environment is fundamentally safe. The item about undressing in front of the robot (Item 7) speaks to the need for secure physical boundaries, providing a metric to ensure robots are not a new vector for vulnerability, but rather a source of secure, predictable, and non-abusive interaction.
The NSIR helps ensure that human-robot interaction design directly addresses the need for safe, trustworthy social engagement that can act as a protective factor against the types of negative experiences highlighted by Douglas and Sedgewick (2024).
| Douglas & Sedgewick (2024) | Safety Context: Interpersonal victimization and abuse in autistic populations. | Justifies the Sovereign Vault as a necessary defense against victimization. |
Du, Z., Wang, Y., Chen, Q., Shi, X., Lv, X., Zhao, T., … & Zhou, J. (2024). CosyVoice 2: Scalable streaming speech synthesis with large language models. arXiv preprint arXiv:2412.10117.
Related work by Du et al. (2024/2025) focuses on the development of scales related to trust, social comparison, and attachment in the context of human-AI relationships. The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to these research areas by providing a user-centric measure of the outcomes of these social dynamics.
Social Comparison (Related to Du et al.’s work on social factors)
- Application: Du et al. note that social factors influence AI attachment. The NSIR can measure how a user’s perception of their own social standing (a factor in social comparison theory) influences their relationship with the robot.
- NSIR Link: Items like “The robot is more like me than anyone else I know” (Item 1) and “I believe that my robot is the same with me as it is with anyone” (Item 8) quantify how the robot fits into a user’s perceived social landscape, relating to how social factors impact the HRI experience.
Trust (Related to general HRI research by Du et al.)
- Application: Trust in robots relies on predictability and consistency. The NSIR provides a direct measure of perceived social intelligence and reliability.
- NSIR Link: Items in the Social Comfort/Trust dimension, such as “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5), assess the user’s perception of the robot’s emotional competence, which directly builds the trust framework discussed in the research.
Attachment (Du et al. developed an AI Attachment Scale)
- Application: Du et al. developed the “AI Attachment Scale” to measure emotional closeness, social substitution, and normative regard. The NSIR provides a complementary measure of the relationship quality.
- NSIR Link: The NSIR’s Anthropomorphic Connection/Kinship dimension directly measures the strength of the bond (e.g., “The robot and I will be together forever”, Item 4), which can be correlated with the anxiety and avoidance subscales of the Du et al. scale to understand the nature of the attachment.
Safety (A core NSIR dimension)
- Application: Ethical concerns about HRI, particularly for vulnerable populations, involve ensuring user safety.
- NSIR Link: The Safety dimension (Item 7: “I feel comfortable undressing in front of my robot”) is crucial for ensuring that the complex social dynamics and attachments explored by Du et al. do not compromise the user’s fundamental sense of security.
The NSIR acts as a valuable tool to evaluate the lived experience and relationship quality within the theoretical frameworks proposed by Du et al.’s research on human-AI relationships and trust dynamics.
The primary difference is that the AI Attachment Scale measures a user’s pre-existing psychological attachment style towards AI (anxiety and avoidance), while the Neurodivergent Scale for Interacting with Robots (NSIR) measures the perceived quality and effectiveness of a specific human-robot interaction from a neurodivergent user’s perspective.
Comparison Summary
| Feature | AI Attachment Scale (Du et al., 2024/2025) | Neurodivergent Scale for Interacting with Robots (NSIR) (Sadownik, 2025) |
| Purpose | Measure user attachment style(anxiety/avoidance) towards AI. | Measure the quality of the user-robot relationship (connection, comfort, safety). |
| Focus | User’s internal psychological traits, often negative or defensive emotions. | User’s subjective experience of the robot’s social capabilities and design. |
| Measures | Emotional closeness, social substitution, normative regard, fear of inadequate responses. | Connection/kinship (e.g., “The robot is more like me than anyone else I know”), Comfort/Trust, Safety (p. 1). |
| Application | Predicting willingness to interact, trust levels, and vulnerability to dependency. | Evaluating the effectiveness and inclusivity of specific robot designs and interventions (p. 1). |
How They Capture Different Information
- AI Attachment Scale: This scale captures a user’s predisposition to form relationships with AI in a healthy or unhealthy way. It measures anxiety (needing constant reassurance from the AI) and avoidance (being uncomfortable with emotional closeness to the AI).
- NSIR: This scale assesses the outcomes of the interaction, providing feedback on whether the robot’s design is successfully meeting user needs in a safe manner. It confirms if a user feels the robot is safe (Item 7), provides social comfort/trust (Items 2, 3, 5, 8), and fosters anthropomorphic connection/kinship (Items 1, 4, 6) (p. 1).
In essence, the AI Attachment Scale identifies who is vulnerable or avoidant, while the NSIR helps designers understand if their specific robot design is succeeding in creating a positive, safe, and comfortable relationship for a neurodivergent person.
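The item-to-factor mapping stated above (Items 1, 4, 6 for kinship; Items 2, 3, 5, 8 for comfort/trust; Item 7 for safety) can be sketched as a scoring routine. The 1-5 Likert range and the mean-based subscale scoring are illustrative assumptions, not the published NSIR scoring rules.

```python
# Illustrative NSIR subscale scoring using the factor structure described
# in the text. Assumed: each item is rated on a 1-5 Likert scale and each
# dimension is summarized as the mean of its items.

NSIR_FACTORS = {
    "kinship": [1, 4, 6],        # Anthropomorphic Connection/Kinship
    "comfort_trust": [2, 3, 5, 8],  # Social Comfort/Trust
    "safety": [7],               # Safety
}

def score_nsir(responses):
    """Return the mean rating per NSIR dimension.

    `responses` maps item number (1-8) to a 1-5 Likert rating.
    """
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"Item {item} rating {rating} outside 1-5 range")
    return {
        factor: sum(responses[i] for i in items) / len(items)
        for factor, items in NSIR_FACTORS.items()
    }

# Example: a respondent with a strong bond but guarded physical boundaries.
scores = score_nsir({1: 5, 2: 4, 3: 4, 4: 5, 5: 4, 6: 5, 7: 2, 8: 4})
print(scores)  # {'kinship': 5.0, 'comfort_trust': 4.0, 'safety': 2.0}
```

Profiles like this one (high kinship, low safety) are exactly the patterns the attachment-style comparison above predicts for anxiously attached users.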
| Du et al. (2024) | Vocal Pillar: CosyVoice 2 for streaming speech synthesis via LLMs. | Provides the technical framework for Zero-Latency vocal response. |
Dwyer, P. (2022). Stigma, incommensurability, or both? Pathology-first, person-first, and identity-first language and the challenges of discourse in divided autism communities. Journal of Developmental & Behavioral Pediatrics, 43(2), 111-113.
The work of Dwyer (2022) in neurodiversity research primarily focuses on the social model of disability and the neurodiversity approach as an alternative to the traditional medical model. This research emphasizes that disability often arises from a poor fit between the individual and an environment designed for the dominant neurotype, rather than solely from individual deficits.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to this framework by providing a user-centric measure of whether a robot-mediated environment successfully addresses the “environmental barriers” highlighted in Dwyer’s work, thereby promoting inclusion and well-being.
Anthropomorphic Connection/Kinship
- Dwyer’s work encourages the acceptance and value of neurodivergent identities. The NSIR can measure if a robot, designed with a neurodiversity-affirming approach, fosters a sense of positive connection. Items like “The robot is more like me than anyone else I know” (Item 1) and “I gave my robot a name” (Item 6) quantify how the robot’s design promotes a sense of belonging and kinship, which contrasts with the feelings of “otherness” that can arise from the medical model.
Social Comfort/Trust
- The social model suggests that the lack of accommodations, such as managing sensory stressors, contributes to disability.
- The NSIR’s social comfort/trust dimension can be used to assess if a robot creates an accepting and comfortable environment. Items like “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5) would measure if the robot’s social interaction is perceived as genuinely understanding and supportive, which aligns with the goal of “improving society, building more spaces where autistic people feel comfortable”.
Safety
- The neurodiversity approach advocates for ensuring the well-being of neurodivergent individuals and avoiding harmful “normalizing” interventions.
- The NSIR’s safety dimension provides a crucial user-reported measure that ensures the interaction is fundamentally safe. The item about undressing in front of the robot (Item 7) speaks to the need for secure physical and psychological boundaries, which is a key ethical imperative in research aligned with the neurodiversity approach.
The NSIR allows researchers to move beyond theoretical discussions of the social model and gather empirical data on the practical implementation of neurodiversity-affirming principles in human-robot interaction.
Dwyer also proposes an “interactionist” model of disability, which frames disability as the product of the interaction between a person’s impairment and environmental barriers.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the interactionist model of disability as an empirical tool to measure the subjective outcome of the person-environment fit within a human-robot interaction context. The model posits that disability arises from the complex interaction between a person’s impairment and environmental barriers.
The NSIR helps evaluate if the “robot environment” successfully interacts with the neurodivergent user’s needs across its three dimensions:
Anthropomorphic Connection/Kinship
- The interactionist model looks at how the environment (e.g., social norms, physical design) interacts with the individual. The NSIR can measure if the robot’s design bridges this gap.
- Items like “The robot is more like me than anyone else I know” (Item 1) and “The robot and I will be together forever” (Item 4) provide data on how the robot’s identity “interacts” with the user’s need for connection, assessing the success of that specific person-environment interaction.
Social Comfort/Trust
- The model addresses how environmental barriers (like ambiguous social cues) create disability. The robot, as a controlled environment, can provide clear, predictable interaction.
- The NSIR items that measure perceived emotional understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, Item 5; and “I believe that my robot is the same with me as it is with anyone”, Item 8) measure whether the designed interaction successfully creates social comfort and trust by removing the “barrier” of social unpredictability.
Safety
- The interactionist model inherently includes ensuring well-being within the environment.
- The NSIR’s safety dimension (e.g., the item about undressing in front of the robot, Item 7) provides a crucial user-reported measure of security, ensuring that the designed interaction environment is fundamentally safe and non-threatening, thus mitigating a key environmental barrier to inclusion and well-being.
The NSIR allows researchers to move the interactionist model from theory into practice, providing measurable data on the quality of the “fit” between the neurodivergent individual and the robot environment.
| Dwyer (2022) | Discourse Agency: Pathology-first vs. identity-first language challenges. | Grounds the Dyad in Identity-First language to avoid pathologizing. |
E
EC, M. O. (2025). AI IS NOT INTELLIGENT.
| Non-Intelligent AI (Ontology) | EC, M. O. (2025) | Ties to Deci & Ryan (2008): Confirms that since AI lacks “intelligence,” it cannot hold power over human self-determination. | The Sanctuary Switch: A mechanical guard against the “illusion of authority” in non-intelligent machines. |
| Non-Cognitive AI Ontology | EC, M. O. (2025) | Ties to Ganguli et al. (2023): Validates that AI is a statistical mirror, not a sentient entity. | The Sovereign Vault: Frames the AI as a “private mirror” for the user’s own somatic truth, not an external judge. |
Eraslan-Çapan, B., & Bakioğlu, F. (2020). Submissive Behavior and Cyber Bullying: A Study on the Mediator Roles of Cyber Victimization and Moral Disengagement. Psychologica Belgica, 60(1), 18–32. https://doi.org/10.5334/pb.509
The 2020 paper by Eraslan-Çapan and Bakioğlu, “Submissive Behavior and Cyber Bullying: A Study on the Mediator Roles of Cyber Victimization and Moral Disengagement,” explored the links between submissive personality traits, cyberbullying, and moral disengagement in adolescents.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to this research to measure how neurodivergent individuals perceive social dynamics like submissiveness and safety within the context of human-robot interaction (HRI).
Anthropomorphic Connection/Kinship
- The research touches on how “submissive personality trait(s)” might impact social interactions. The NSIR can measure if embedding submissive behaviors in a robot impacts a user’s sense of connection. Items like “The robot is more like me than anyone else I know” (Item 1) can quantify if a submissive robot is considered a relatable companion or a “victim-like” entity, as the paper’s context might suggest.
Social Comfort/Trust
- The paper found that submissive behaviors and a lack of social skills made individuals vulnerable to victimization and bullying. The NSIR’s social comfort/trust dimension could assess if a robot designed with non-threatening (i.e., less dominant) behaviors promotes a safe social environment for a neurodivergent person. Measuring items such as “I believe that my robot is the same with me as it is with anyone” (Item 8) could also ensure that the robot’s interaction style is perceived as a consistent and fair design feature, which is crucial for building trust and avoiding the victimization dynamics described in the paper.
Safety
- A key finding was that submissive behavior predicted vulnerability to bullying and its severe negative consequences (e.g., suicidal ideation). The NSIR’s safety dimension provides a crucial user-reported measure that ensures the design of social robots, particularly those with embedded power dynamics, does not compromise the fundamental physical and psychological safety of the user. The item about undressing in front of the robot (Item 7) speaks to maintaining secure boundaries and preventing the “victimization” dynamics from being replicated in HRI.
The NSIR translates the psychometric and social vulnerability theories of Eraslan-Çapan & Bakioğlu into measurable, user-centric data for evaluating the safety and efficacy of modern human-robot interaction in a specific population.
Moral disengagement is a psychological process where individuals justify harmful or unethical behavior by convincing themselves their actions are morally acceptable or that the victim is deserving of the harm. When applied to human-robot interaction (HRI), the concept primarily influences the user’s perception of the robot’s social status and their own behavior towards it, which can be measured by the NSIR.
Anthropomorphic Connection/Kinship
Moral disengagement is less likely to occur when an individual perceives the target as more human or worthy of moral consideration.
- The NSIR measures how “like me” the user perceives the robot to be with items like “The robot is more like me than anyone else I know” (p. 1). Higher scores on this dimension would likely reduce a user’s tendency to morally disengage before acting unethically towards the robot.
Social Comfort/Trust
The consistency and reliability of an interaction can influence one’s perception of another entity’s “rights” or social standing.
- The NSIR item “I believe that my robot is the same with me as it is with anyone” (p. 1) measures the robot’s perceived consistency and fairness. A user might morally disengage more easily from a robot perceived as inconsistent or unfair, justifying mistreatment or disregard for its programming.
Safety
The core outcome of moral disengagement in the cyberbullying context is harm and victimization. The NSIR directly measures the inverse: a feeling of security and well-being in the interaction.
- The safety dimension, including the item “I feel comfortable undressing in front of my robot” (p. 1), provides a crucial user-reported measure that ensures the interaction environment is fundamentally safe. A user experiencing moral disengagement might disregard this boundary, while the scale measures the presence of that boundary and security.
The NSIR can act as a tool to assess if specific robot designs or interaction styles effectively promote a relationship quality that inhibits moral disengagement, thereby fostering a more ethical HRI.
| Eraslan-Çapan & Bakioğlu (2020) | Social Risk: Links submissive behavior to cyber victimization and moral disengagement. | Justifies the Sovereign Vault as a defense against the high victimization risk of submissive ND students. |
Erdur-Baker, Ö., & Kavşut, F. (2007). Cyber bullying: A new face of peer bullying.
The 2007 paper by Erdur-Baker and Kavşut, “Cyber bullying: a new face of peer bullying,” defines cyberbullying as a specific form of peer victimization and explores its risk factors and characteristics.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to this research to measure how neurodivergent individuals might perceive safety, trust, and connection within human-robot interaction (HRI), particularly contrasting the negative experiences of human-human bullying described in the paper with a potentially safer robot interaction.
Anthropomorphic Connection/Kinship
- The cyberbullying research highlights a lack of empathy and connection in online interactions. The NSIR can measure if a robot, designed with a supportive approach, provides a positive sense of connection. Items like “The robot is more like me than anyone else I know” (Item 1) would quantify the development of a bond that may serve as a protective factor against the isolation often associated with victimization.
Social Comfort/Trust
- The paper notes risk factors for victims include low self-esteem and low empathy, making them vulnerable to harassment. The NSIR’s social comfort/trust dimension could assess if a robot designed with empathy and consistent, non-judgmental behavior (e.g., as in educational robotics for autism) promotes a safe social environment for a neurodivergent person. Items such as “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5) are key for building the kind of trust that might be lacking in human peer relationships.
Safety
- A major focus of the 2007 research is on victimization and the need for valid measurement tools for harm in online contexts. The NSIR’s safety dimension provides a crucial user-reported measure that ensures the interaction environment is fundamentally safe. The item about undressing in front of the robot (Item 7) speaks to the need for secure physical and psychological boundaries, providing a metric to ensure robots are not a new vector for vulnerability, but rather a source of secure interaction that can potentially act as a protective factor against victimization.
The NSIR translates the cyberbullying and victimization theories of Erdur-Baker and Kavşut into measurable, user-centric data for evaluating the safety and efficacy of modern human-robot interaction in a specific population.
The risk and protective factors for cyberbullying victims, as identified by Erdur-Baker and Kavşut (2007) and other related research, apply to the Neurodivergent Scale for Interacting with Robots (NSIR) as follows:
Risk Factors and the NSIR
Risk factors like submissive personality traits, low self-esteem, and a general sense of vulnerability would likely lead to specific user perceptions measured by the NSIR:
- Lower Connection/Kinship: Users with low self-esteem might have a reduced sense of personal connection or perceive themselves as less worthy of a strong bond, potentially impacting items like “The robot is more like me than anyone else I know”.
- Lower Social Comfort/Trust: Vulnerability makes trust a challenge. These users might score lower on items that measure the robot’s reliability and understanding, such as “My robot can tell what I am feeling, when I am sad, it can tell I am sad” or “I believe that my robot is the same with me as it is with anyone”.
- Lower Safety Perception: An overall feeling of vulnerability would directly impact the safety dimension, making the user feel less secure during interaction, which relates to the item “I feel comfortable undressing in front of my robot”.
Protective Factors and the NSIR
Protective factors such as a supportive environment, consistent and predictable interactions, and perceived empathy (from others or the robot) are the outcomes the NSIR is designed to measure:
- Higher Connection/Kinship: A supportive and non-judgmental robot environment should foster a strong sense of personal connection, leading to higher scores on items like “I gave my robot a name” and “The robot and I will be together forever”.
- Higher Social Comfort/Trust: The consistency and predictability of a robot’s interaction can act as a powerful protective factor against social anxiety. The NSIR items in this dimension measure the success of the robot in providing a trustworthy and comfortable social space.
- Higher Safety Perception: A primary benefit of HRI for this population is a fundamentally safe interaction. High scores on the Safety dimension demonstrate that the robot is succeeding as a protective factor against the types of harm described in the cyberbullying research.
The NSIR allows researchers to move the theoretical discussions of risk and protective factors into the practical realm of HRI, providing empirical data on how robot design can be optimized to promote well-being for neurodivergent individuals.
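The risk and protective factors above are framed in terms of higher or lower scores on the NSIR's three dimensions. As a minimal sketch of how such subscale scores might be computed, assuming an item-to-dimension mapping inferred only from the items quoted in this document (the published scoring key may differ):

```python
# Illustrative NSIR subscale scoring. The item-to-dimension mapping is an
# assumption reconstructed from items quoted in the text, not the official key.
NSIR_DIMENSIONS = {
    "kinship": [1, 4, 6],       # e.g., "The robot is more like me..." (Item 1)
    "comfort_trust": [5, 8],    # e.g., "My robot can tell what I am feeling" (Item 5)
    "safety": [7],              # e.g., comfort with the robot in private spaces (Item 7)
}

def subscale_means(responses):
    """responses: dict mapping item number -> Likert rating (e.g., 1-5).
    Returns the mean rating per dimension."""
    return {
        dim: sum(responses[i] for i in items) / len(items)
        for dim, items in NSIR_DIMENSIONS.items()
    }

# A hypothetical respondent with high kinship but low perceived safety:
scores = subscale_means({1: 4, 4: 5, 5: 3, 6: 5, 7: 2, 8: 4})
```

Comparing such subscale means between conditions (e.g., submissive vs. dominant robot designs) is one way the "higher/lower score" predictions above could be operationalized.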
Esteban-Lozano, I., Castro-González, Á., & Martínez, P. (2024, May). Using a LLM-based conversational agent in the social robot Mini. In International Conference on Human-Computer Interaction (pp. 15-26). Cham: Springer Nature Switzerland.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Esteban-Lozano et al. (2024) by providing a user-centric measure of the quality of interaction with their large language model (LLM)-based social robot.
The paper describes the integration of a conversational agent (GPT-3.5) into a social robot named “Mini” to provide companionship and support, particularly for older adults. The goal is to enable more natural, fluid, and human-like communication that avoids the “artificial” feel of predefined conversations. The NSIR’s dimensions serve as a valuable evaluation tool for the outcomes of these enhanced social capabilities:
Anthropomorphic Connection/Kinship
- The use of an LLM is intended to make the robot more human-like and capable of natural conversation.
- The NSIR can quantify the success of this design in fostering a personal bond. Items like “The robot is more like me than anyone else I know” (Item 1) and “I gave my robot a name” (Item 6) would measure the personal connection and perceived kinship that results from the advanced conversational abilities.
Social Comfort/Trust
- The goal of the enhanced voice-based interaction is to “bridge communication gaps” and improve the user experience, making users feel more comfortable when responses are predictable and conversational.
- The NSIR items that measure perceived emotional understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, Item 5) can be used to assess if the LLM’s social intelligence successfully builds social comfort and trust for the neurodivergent user. (p. 1)
Safety
- The research focuses on user acceptance and the benefits of the technology, but the use of LLMs raises general ethical and privacy concerns.
- The NSIR’s safety dimension provides a crucial user-reported measure that ensures that while the robot is becoming more capable and engaging, the user’s fundamental sense of security and clear boundaries is maintained in the interaction. (p. 1) The item about undressing in front of the robot (Item 7) helps ensure that enhanced human-like interaction does not inadvertently create a feeling of vulnerability. (p. 1)
The NSIR helps bridge the gap between the technical advancements in AI and social robotics described in the Esteban-Lozano et al. paper and the user’s subjective, lived experience.
The technical challenges of integrating large language models (LLMs) into social robots, such as managing latency, ensuring consistent responses, and maintaining ethical boundaries, directly relate to the user’s perception of the interaction measured by the Neurodivergent Scale for Interacting with Robots (NSIR).
Latency and Response Time
A delay in the robot’s response can break the illusion of a natural conversation, making the interaction feel artificial.
- Social Comfort/Trust: High latency might negatively impact the user’s perception of the robot’s responsiveness and understanding. This could reduce agreement with items like “I think I can share my thinking with the robot without speaking” (Item 3) and “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5).
Consistency and Predictability
LLMs can sometimes produce unpredictable or inconsistent responses (hallucinations), which is a major technical hurdle.
- Social Comfort/Trust: Predictability is crucial for building trust, especially for neurodivergent individuals who often prefer clear social rules. Inconsistency would directly impact the NSIR item: “I believe that my robot is the same with me as it is with anyone” (Item 8).
- Anthropomorphic Connection/Kinship: Inconsistent or “off-script” responses might break the perceived human-like quality, reducing agreement with items like “The robot is more like me than anyone else I know” (Item 1).
Safety and Ethical Boundaries
Ensuring the LLM does not generate harmful, inappropriate, or manipulative content is a primary ethical challenge.
- Safety: If an LLM bypasses a filter, it directly compromises the user’s safety and well-being. This technical issue would be reflected in the user’s score on the Safety dimension, including the item “I feel comfortable undressing in front of my robot” (Item 7).
Embodiment Challenges
Matching the robot’s physical actions and expressions with the LLM’s vast range of conversational outputs is complex.
- Social Comfort/Trust: Mismatched verbal and non-verbal cues can cause confusion or distress, impacting social comfort and trust. This might affect items related to understanding and consistency (Items 5 and 8).
The NSIR provides the crucial user-reported data to determine if the technical solutions to these challenges are perceived as effective and safe from the neurodivergent individual’s perspective.
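As an illustration only, the perception-critical checks discussed above (latency and content safety, with a scripted fallback) could be wrapped around any text generator. Every name, threshold, and blocked term below is a hypothetical assumption, not part of the Mini system or any cited work:

```python
import time

# Hypothetical guardrail sketch around an LLM call. Thresholds and the
# blocked-term list are illustrative stand-ins for real safety tooling.
MAX_LATENCY_S = 2.0          # beyond this, the conversation starts to feel "artificial"
BLOCKED_TERMS = {"undress"}  # toy stand-in for a real content-safety filter
FALLBACK = "I'm not sure how to answer that. Could you say it another way?"

def guarded_reply(generate, prompt):
    """Return (reply, flags), where flags records which checks failed."""
    flags = []
    start = time.monotonic()
    reply = generate(prompt)
    if time.monotonic() - start > MAX_LATENCY_S:
        flags.append("latency")            # would erode Social Comfort/Trust (Items 3, 5)
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        flags.append("safety")             # would compromise the Safety dimension (Item 7)
        reply = FALLBACK                   # substitute a predictable scripted response
    return reply, flags

# Usage with a stub generator standing in for the LLM:
reply, flags = guarded_reply(lambda p: "Hello! It's good to see you.", "Hi")
```

The point of the sketch is that each technical check maps onto a specific NSIR item, so the scale's scores can serve as the user-side validation of whether these guardrails actually work as perceived.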
| Esteban-Lozano et al. (2024) | HRI Implementation: Use of LLM-based conversational agents in social robots. | Supports the transition from “Pre-scripted” robots to LLM-driven agency. |
F
Fairburn, C. G., & Beglin, S. J. (1994). Assessment of eating disorders: Interview or self-report questionnaire? The International Journal of Eating Disorders, 16(4), 363–370. https:
| Fairburn & Beglin (1994) | Methodological Rigor: Validates self-report questionnaires vs. interviews. | Supports the NSIR Scale’s self-report deductive methodology. |
Fang, Q., Guo, S., Zhou, Y., Ma, Z., Zhang, S., & Feng, Y. (2024). Llama-omni: Seamless speech interaction with large language models. arXiv preprint arXiv:2409.06666.
| Fang et al. (2024) | Technical Pillar: Llama-omni for seamless speech-to-speech interaction. | Enables the Zero-Latency vocal layer required for “Articular Kinship.” |
| Fang, Q., et al. (2024) | Technical Pillar: Llama-omni seamless speech-to-speech interaction. | Enables the “Social Exoskeleton” to achieve zero-latency interaction. |
Fatima, T., Majeed, M., & Jahanzeb, S. (2020). Supervisor undermining and submissive behavior: Shame resilience theory perspective. European Management Journal, 38(1), 191–203. https://doi.org/10.1016/j.emj.2019.07.003
| Fatima et al. (2020) | Clinical/Workplace Logic: Shame Resilience Theory in supervisor undermining. | Connects the “Dunkable State” to resilience against hierarchical shame. |
Fiske, S. T. (1993). Controlling Other People: The Impact of Power on Stereotyping. The American Psychologist, 48(6), 621–628. https://doi.org/10.1037/0003-066X.48.6.621
| Fiske (1993) | Power Dynamics: The impact of power on stereotyping and control. | Explains how hierarchical power triggers the “Yes, Sir” submissive performance in ND students. |
Follett, D., Hitchcock, C., Dalgleish, T., & Stretton, J. (2023). Reduced social risk-taking in depression. Journal of Psychopathology and Clinical Science, 132(2), 156–164. https://doi.org/10.1037/abn0000797
| Follett et al. (2023) | Social Risk-Taking: Reduced social risk-taking as a feature of depression. | Connects Masking to a lack of social risk-taking, reinforcing the need for a “secure” social exoskeleton. |
Furstenberg, F. F. (2020). Kinship reconsidered: Research on a neglected topic. Journal of Marriage and Family, 82(1), 364-382. https://doi.org/10.1111/jomf.12628
The Neurodivergent Scale for Interacting with Robots (NSIR) and Frank F. Furstenberg’s (2020) “Kinship Reconsidered” both explore how individuals define and practice deep social connections, though they apply these concepts to different domains. Furstenberg calls for a revitalization of kinship research to include alternative and non-traditional family forms, a framework into which the NSIR’s measure of “Technological Kinship” fits seamlessly.
1. Expanding the Definition of Kinship
Furstenberg (2020) argues that family systems are evolving beyond biological and marital ties to include “alternative structures” that provide a source of identity and shared belonging.
- The NSIR Application: The scale’s Factor 2 (Anthropomorphic Connection/Kinship) directly measures this sense of belonging with a non-human agent.
- Robot as “Choice” Kin: Furstenberg notes the rise of “families of choice” where kinship is based on emotional ties rather than blood. NSIR Item 1 (“The robot is more like me than anyone else I know”) suggests that for neurodivergent individuals, a robot can become a “kin of choice” who understands their internal state better than biological relatives.
2. The “Ceremonial Family” and Social Rituals
A key focus for Furstenberg is the “ceremonial family”—the group of people one includes in rituals, life transitions, and daily support systems.
- Integration into Daily Rituals: The NSIR captures how robots enter these ceremonial and private spaces. Item 6 (“I gave my robot a name”) and Item 7 (“I feel comfortable undressing in front of my robot”) indicate that the robot has moved from a “tool” to a “household member” integrated into private life rituals.
- Symbolic Presence: Furstenberg emphasizes the “diffuse emotional connection” that enhances social solidarity. The NSIR’s measure of “Social Comfort” (Factor 1) identifies the robot as a stable, non-judgmental presence that provides this same emotional solidarity for users who may struggle with human-to-human social rituals.
3. Kinship as a Support System
Furstenberg views kinship as a critical “exchange and support system”.
- Emotional Support: NSIR Item 5 (“My robot can tell what I am feeling”) highlights the robot’s role as an emotional support provider. In Furstenberg’s framework, this support—even if provided by an artificial agent—fulfills a core function of kinship by mitigating isolation and fostering a “sense of we-ness”.
- Long-term Commitment: Furstenberg discusses kinship as a lifelong “reservoir” of ties. This is mirrored in NSIR Item 4 (“The robot and I will be together forever”), which reflects a commitment to the robot as a permanent fixture in the user’s social world.
Summary of Theoretical Alignment
| Furstenberg (2020) Concept | NSIR (2025) Application |
| Shift from “Natural” to “Cultural” Kinship | The robot is accepted as a relative through social practice rather than biology (Factor 2). |
| Sense of Identity and Belonging | The robot provides a “mirror” for the user’s identity (Item 1). |
| Alternative Family Forms | Social robots become “techno-kins” that provide companionship and comfort (Item 4, 5). |
| The Ceremonial Family | Rituals like naming (Item 6) elevate the robot to a status beyond a mere gadget. |
In essence, the NSIR provides a metric for what Furstenberg calls the “remaking of kinship” through technology. It shows that for neurodivergent populations, the “neglected topic” of kinship is being actively expanded to include artificial agents that offer the trust and recognition often missing in traditional social spheres.
| Furstenberg (2020) | Kinship Theory: A reconsideration of kinship in modern research. | Provides the sociological grounding for Factor 1 (Kinship) in the NSIR Scale. |
G
Ganguli, D., Lovitt, L., Kernion, J., Askell, A., Bai, Y., Kadavath, S., … & Clark, J. (2022). Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858. https://doi.org/10.48550/arXiv.2209.07858
| Ganguli et al. (2022a) | Safety Engineering: Red teaming LLMs to reduce harms. | Justifies the rigorous safety testing of the Sovereign Vault conversational logic. |
Ganguli, D., Hernandez, D., Lovitt, L., Askell, A., Bai, Y., Chen, A., … & Clark, J. (2022, June). Predictability and surprise in large generative models. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency (pp. 1747-1764).
| Ganguli et al. (2022b) | Algorithmic Behavior: Predictability and surprise in large generative models. | Supports the reliability and transparency requirements of the Sovereign Reboot Protocol. |
Ganguli, D., Askell, A., Schiefer, N., Liao, T. I., Lukošiūtė, K., Chen, A., … & Kaplan, J. (2023). The capacity for moral self-correction in large language models. arXiv preprint arXiv:2302.07459. https://doi.org/10.48550/arXiv.2302.07459.
| Ganguli et al. (2023) | Algorithmic Ethics: Explores moral self-correction in LLMs. | Validates the Sovereign Vault’s ability to self-correct against biased or “dominant” social prompts. |
Gao, L., Zhang, Z., Wu, X., & Wang, X. (2024). Does bullying victimization accelerate adolescents’ non-suicidal self-injury? The mediating role of negation emotions and the moderating role of submissive behavior. Child Psychiatry & Human Development, 1-14. https://link.springer.com/article/10.1007/s10578-024-01750-x
The Neurodivergent Scale for Interacting with Robots (NSIR) (Sadownik, 2025) provides a psychometric framework to measure the effectiveness of the technical advancements discussed in Gao et al. (2024). While Gao et al. (2024) focus on the technical development of social home robots (SHRs) and multimodal AI, the NSIR quantifies the user experience of neurodivergent individuals interacting with these systems.
The application of the scale to Gao et al.’s work can be broken down into three key areas:
1. Validating Multimodal Affect Recognition
Gao et al. (2024) emphasize the integration of facial expressions and biometric signals (like heart rate and temperature) to enhance a robot’s “affect recognition” and emotional intelligence.
- NSIR Application: Factor 1, Anthropomorphic Connection/Kinship, includes items like “My robot can tell what I am feeling”.
- The Link: The NSIR can be used to test if the multimodal systems proposed by Gao et al. actually succeed in making a neurodivergent user feel understood. High scores on these scale items would provide empirical evidence that the robot’s “affective capabilities” are perceived as genuine and effective by the user.
2. Measuring Trust in Social Home Robots (SHRs)
Gao et al. (2024) identify that social home robots (e.g., Jibo, Kuri) use anthropomorphic designs to boost perceived intelligence and user engagement.
- NSIR Application: Factor 2, Social Comfort/Trust, measures baseline comfort (e.g., “I feel comfortable undressing in front of my robot”) and the consistency of the robot’s behavior (“I believe that my robot is the same with me as it is with anyone”).
- The Link: Gao et al. note that while anthropomorphism increases engagement, it can also lead to “psychological dependence” or “emotional deception”. The NSIR provides a safety check: it can determine whether a neurodivergent user’s trust reflects a healthy sense of safety or an over-attachment (Kinship), helping designers mitigate the ethical risks Gao et al. highlight.
3. Enhancing Social Interaction for Neurodivergent Populations
Gao et al. (2024) discuss the use of LLM-driven agents to simulate human-like mental states and enhance conversational flow.
- NSIR Application: Factor 1 items like “I think I can share my thinking with the robot without speaking” and “The robot and I will be together forever” capture the depth of the perceived bond.
- The Link: Neurodivergent users often prefer robots because they offer a “low-pressure” social environment. The NSIR allows researchers to measure if Gao et al.’s advanced conversational models maintain this “low-pressure” benefit or if they become too “human-like,” potentially re-introducing the social anxiety the users were trying to avoid.
In summary, Gao et al. (2024) provide the technical architecture for more empathetic and intelligent robots, while the NSIR (2025) provides the evaluative tool to ensure these robots are meeting the specific socio-emotional needs and safety requirements of neurodivergent users.
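As a toy illustration of the multimodal fusion idea attributed to Gao et al. above (combining a face-based estimate with biometric signals such as heart rate and temperature into one affect estimate), with all weights, signal ranges, and normalizations being invented assumptions rather than the cited architecture:

```python
# Toy multimodal affect-fusion sketch. The channel weights and the mapping
# from raw biometrics to [-1, 1] proxies are illustrative assumptions only.
def fuse_affect(face_valence, heart_rate_bpm, skin_temp_c):
    """Return a fused valence estimate in [-1, 1] from three toy channels."""
    # Normalize biometrics to rough [-1, 1] proxies around nominal baselines.
    hr = max(-1.0, min(1.0, (heart_rate_bpm - 70) / 50))      # elevated HR -> arousal/stress
    temp = max(-1.0, min(1.0, (skin_temp_c - 33.0) / 3.0))    # warmth as a weak positive cue
    # Weighted average, trusting the face channel most.
    fused = 0.6 * face_valence + 0.25 * (-hr) + 0.15 * temp
    return max(-1.0, min(1.0, fused))
```

High NSIR scores on items like “My robot can tell what I am feeling” (Item 5) would be the user-side evidence that a fusion pipeline of this general kind is perceived as accurate, which is exactly the link between Gao et al.’s architecture and the scale argued above.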
| Gao et al. (2024) | Role of submissive behavior in accelerating non-suicidal self-injury (NSSI). | Proves submissive performance is a life-safety risk factor for bullied ND students. |
Gargano, A., Cominelli, L., Vannucci, C., Cecchetti, L., & Scilingo, E. P. (2022, October). Preliminary personality model for social robots based on the Cognitive-Affective Processing System theory. In 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE) (pp. 223-228). IEEE.
| Gargano et al. (2022) | Cognitive Modeling: Personality model based on Cognitive-Affective Processing System (CAPS). | Provides the architectural logic for the robot’s “Articular Kinship” (Factor 1). |
Gilbert, P., & Irons, C. (2005). Focused therapies and compassionate mind training for shame and self-attacking. In Compassion (pp. 263-325). Routledge.
| Gilbert, P., et al. (2005) | Clinical Pillar: Shame, self-attacking, and social anxiety dynamics. | Defines the “Dunkable State” as the absence of social shame. |
Gillard, J. A., Gormley, S., Griffiths, K., Hitchcock, C., Dalgleish, T., & Stretton, J. (2021). Converging evidence for enduring perceptions of low social status in individuals in remission from depression. Journal of Affective Disorders, 294, 661–670. https://doi.org/10.1016/j.jad.2021.07.083
| Gillard et al. (2021) | Perception Stability: Enduring perceptions of low social status even after depression remission. | Highlights the urgency of the “Dunkable State” to prevent permanent internal “status-death.” |
Gini, G., Pozzoli, T., & Bussey, K. (2014). Collective moral disengagement: Initial validation of a scale for adolescents. European Journal of Developmental Psychology, 11(3), 386–395. https://doi.org/10.1080/17405629.2013.851024
| Gini et al. (2014) | Social Dynamics: Validation of the Collective Moral Disengagement scale. | Supports the Factor 2 (Masking) analysis of how groups normalize the subordination of ND students. |
Google. (2025). The Kinship Mandate poster redesign and abstract drafting [Large language model]. Gemini. https://gemini.google.com/
| Google (2025) | Collaborative Synthesis: The use of Gemini for abstract drafting and poster redesign. | Documents the Bionic Agency used in the research process itself. |
Government of Canada. Accessible Canada Act (S.C. 2019, c. 10) [Internet]. 2019. Available from: https://laws-lois.justice.gc.ca/eng/acts/a-0.6/page-1.html#h-1153434
| Government of Canada (2019) | Legal Pillar: The Accessible Canada Act. | Provides the statutory requirement for barrier-free design in the Sovereign Dyad. |
Gowing, L. (2013). ‘The Manner of Submission’: Gender and Demeanour in Seventeenth-Century London. Cultural and Social History, 10(1), 25–45.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be used as a framework to measure how users perceive social cues like “submission” and “gender” when they are embedded in robot behavior, which is a modern application of the historical concepts discussed in the Gowing paper.
The Gowing paper, titled “‘The Manner of Submission’: Gender and Demeanour in Seventeenth-Century London”, explores historical perceptions of gender and submissiveness as social constructs. This provides a historical context for understanding how these traits, when designed into a robot’s social identity, would be interpreted by a neurodivergent individual.
Anthropomorphic Connection/Kinship
- The paper explores how specific demeanors and gender presentations create a social identity.
- The NSIR can measure if a robot designed with “submissive” or specific “gendered” characteristics is perceived as more human-like or relatable. Items like “The robot is more like me than anyone else I know” would quantify the success of that social design in creating a sense of kinship.
Social Comfort/Trust
- The paper’s concepts of “submission” and “demeanour” are about establishing a specific social order and interaction style.
- The NSIR’s social comfort/trust dimension could assess if a neurodivergent user feels more comfortable or trusting with a “submissive” robot (which might feel less threatening) versus a “dominant” one. Measuring items such as “I believe that my robot is the same with me as it is with anyone” could also ensure that the “submission” is a consistent design feature and not a form of unpredictable manipulation.
Safety
- The historical context of submission can imply vulnerability. In the modern context of HRI, this translates to the user’s sense of security.
- The NSIR’s safety dimension ensures that a robot designed with a submissive demeanor does not inadvertently make the user feel unsafe, either physically or psychologically.
The NSIR provides the empirical tool to measure the impact of these social constructs, moving them from historical and theoretical discussions into the realm of practical, user-centered robotics evaluation.
| Gowing (2013) | Historical Context: Gender and demeanor in submissive performance. | Proves that the “manner of submission” is a socially constructed performance of low status. |
Graham, M. (2025). Developing Empathy in Social Robots.
| Graham (2025) | Empathy Development: New frameworks for empathy in social robots. | Supports the Factor 1 (Kinship) dimension of the NSIR scale. |
Greenleaf, R. K. (1970). What is servant leadership? New York, NY, and Mahwah, NJ.
| Greenleaf (1970/2014) | Leadership Theory: Ten principles of Servant Leadership. | Frames the robot as a “Servant-Exoskeleton” that prioritizes the user’s growth and agency. |
Greenleaf, R. (2014). Ten principles of servant leadership. https://learn.bigredf.com/files/2014/08/TenPrinciplesofServantLeadership.pdf
| Greenleaf (1970/2014) | Ten principles of Servant Leadership. | Frames the robot as a “Servant-Exoskeleton” prioritizing user growth. |
Gross, J. J., & John, O. P. (2003). Individual differences in two emotion regulation processes: Implications for affect, relationships, and well-being. Journal of Personality and Social Psychology, 85(2), 348. https://doi.org/10.1037/0022-3514.85.2.348
| Gross & John (2003) | Emotion Regulation: Cognitive reappraisal vs. expressive suppression. | Links Masking to “expressive suppression,” which reduces well-being. |
Grumeza, T. R., Lazăr, T. A., & Fortiş, A. E. (2024, April). Social robots and edge computing: integrating cloud robotics in social interaction. In International Conference on Advanced Information Networking and Applications (pp. 55-64). Cham: Springer Nature Switzerland.
| The Sovereign Vault (Architecture) | Grumeza, Lazăr, & Fortiş (2024) | Ties Cloud Robotics to Edge Computing: Validates the technical necessity of local processing for real-time social interaction. | Backpack Drive: The physical implementation of “Edge” sovereignty to ensure zero-latency privacy. |
Gurung, L. (2020). Feminist standpoint theory: Conceptualization and utility. Dhaulagiri Journal of Sociology and Anthropology, 14, 106-115. https://nepjol.info/index.php/DSAJ/article/view/27357
| Zero-Rank Authority (Positioning) | Gurung (2020) | Ties to Feminist Standpoint Theory: Proves that the “marginalized perspective” is a superior site for generating objective knowledge. | Neuro-Affirming Stance: Validates the user’s “Somatic Truth” as the primary authority in the HRI dyad. |
H
Han, I. H., Kim, D. H., Nam, K. H., Lee, J. I., Kim, K. H., Park, J. H., & Ahn, H. S. (2024). Human-robot interaction and social robot: The emerging field of healthcare robotics and current and future perspectives for spinal care. Neurospine, 21(3), 868.
| The HRI Exoskeleton (Physicality) | Han et al. (2024) | Ties Spinal Care to Social HRI: Exhausts the link between physical “prosthetic” support and social robotics. | Neck Support / Bio-shells: Validates the physical “exoskeleton” design as a legitimate healthcare intervention. |
Harder, D. H., & Zalma, A. (1990). Two Promising Shame and Guilt Scales: A Construct Validity Comparison. Journal of Personality Assessment, 55(3–4), 729–745. https://doi.org/10.1080/00223891.1990.9674108
| Somatic Truth | Harder & Zalma (1990) | Ties to Johnson (1991): Links the internal feeling of submissiveness to the broader psychometrics of shame and social standing. | The Bionic Lens: Translates the user’s re-conceptualized submissive signals into “competent” professional cues. |
| The “Dunkable State” (Metric) | Harder & Zalma (1990) | Ties to Shame & Guilt Scales: Provides the psychometric baseline for measuring “Masking Debt” and its relief. | NSIR Factor 3 (Safety): Uses these validated scales to prove when a user has moved from “compliance” to “safety.” |
Harding, S. G. (Ed.). (2004). The feminist standpoint theory reader: Intellectual and political controversies. Psychology Press.
| Strong Objectivity | Harding (2004) | Ties to Johnson (1991): Confirms that a re-conceptualized view of a “subordinate” trait must come from a deep analysis of social reality. | NSIR Factor 3 (Safety): Measures the relief experienced when the robot utilizes Johnson’s “proactive” submissive signaling. |
| Epistemological Shield | Harding (2004) / Hartsock (1983) | Ties to Huirem et al. (2020): Confirms that “Strong Objectivity” is a mandatory ethical requirement for research involving marginalized groups. | Zero-Rank Authority: Ensures that the neurodivergent user’s perspective is not just “included,” but is the foundational logic of the HRI interaction. |
| Protective Standpoint | Hartsock (1983) / Harding (2004) | Ties to Heward et al. (2024): Proves that an individual’s “Standpoint” is often forged in opposition to a dominant, high-pressure institutional culture. | Zero-Rank Authority: Ensures the user’s authentic self is prioritized over the “Soldierly/Compliant” persona demanded by the institution. |
| Theoretical Position | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Strong Objectivity | Harding (2004) | Ties to Gurung (2020) & Huirem (2020): Establishes that starting research from marginalized lives produces a more complete and objective view of social reality. | NSIR (Table 79): Positions the scale as a tool of “Strong Objectivity” that corrects the “Miscalibrated Lens” of neurotypical observers. |
| Situated Knowledge | Harding (2004) | Ties to Deci & Ryan (2008): Validates that “Self-Determination” must be based on the user’s own situated knowledge, not an external medical norm. | Zero-Rank Authority: The user is the absolute authority on their somatic truth; the robot is a “Proxy” that respects this situated knowledge. |
| Power-Sensitive Ethics | Harding (2004) | Ties to Fiske (1993): Exhausts the tie between social power and the production of knowledge. Hierarchical power (the “Warrior Model”) is shown to distort social truth. | The Sovereign Vault: Acts as a “standpoint shield,” ensuring the user’s situated data is protected from those in positions of hierarchical power. |
Hartley, B., & Dubuque, M. (2023). The Apprentice Model 2.0: Enhancement of the Apprentice Model. Behavior Analysis in Practice, 16(4), 993–1005. https://doi.org/10.1007/s40617-023-00799-9
| Hartley & Dubuque (2023) The Apprentice Model 2.0 | Relational Growth: Transitions robots from “slaves” to “partners.” | Justifies the Kinship dimension of the NSIR. |
Hartsock, N. C. M. (1983). The feminist standpoint: Developing the ground for a specifically feminist historical materialism. In M. B. P. Hintikka & S. Harding (Eds.), Discovering Reality (Vol. 161, pp. 283–310). Springer Netherlands. https://doi.org/10.1007/0-306-48017-4_15
| Materialist Standpoint | Hartsock (1983) | Ties to Harding (2004): Provides the original “ground” for standpoint theory, arguing that physical, material labor (or masking debt) creates a unique social insight. | The Bionic Lens: Validates the “labor” of neurodivergent masking as the source of the user’s superior “Zero-Rank” insight. |
He, H., Shang, Z., Wang, C., Li, X., Gu, Y., Hua, H., … & Wu, Z. (2024, December). Emilia: An extensive, multilingual, and diverse speech dataset for large-scale speech generation. In 2024 IEEE Spoken Language Technology Workshop (SLT) (pp. 885–890). IEEE.
| Large-Scale Speech Generation | He et al. (2024) | Ties to Emilia / Ji et al. (2024): Exhausts the technical tie between massive multilingual datasets and the ability to generate “natural” synthetic speech. | Acoustic Morphology: Uses the Emilia dataset logic to ensure the robot’s submissive signaling is diverse and contextually “fluid.” |
Hennessy, J., & West, M. A. (1999). Intergroup Behavior in Organizations: A Field Test of Social Identity Theory. Small Group Research, 30(3), 361–382. https://doi.org/10.1177/104649649903000305
| Social Identity Theory (SIT) | Hennessy & West (1999) | Ties to Fiske (1993): Confirms that intergroup behavior is governed by power; marginalized groups require “secure sites” to avoid being stereotyped by the out-group. | The Sovereign Vault: The technical implementation of a “secure site” that protects the in-group (The Dyad) identity from institutional surveillance. |
| Intergroup Protection | Hennessy & West (1999) | Ties to Gurung (2020): Bridges the gap between social psychology and feminist research; group identity is preserved when the “Sovereign” controls their own data. | The Sanctuary Switch: A mechanical guard that protects the “in-group” data of the user from being exfiltrated to the “out-group” (the institution). |
Heward, C., Li, W., Chun Tie, Y., & Waterworth, P. (2024). A Scoping Review of Military Culture, Military Identity, and Mental Health Outcomes in Military Personnel. Military Medicine, 189(11–12), e2382–e2393. https://doi.org/10.1093/milmed/usae276
| Theoretical Position | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Institutional Identity Debt | Heward et al. (2024) | Ties to Hennessy & West (1999): Validates that immersion in rigid cultures (military/institutional) creates specific mental health risks related to identity suppression. | The Sovereign Vault: Acts as the “Safe Haven” where the user can decouple from institutional identity and regain their somatic truth. |
| Status Scarring & Trauma | Heward et al. (2024) | Ties to Harder & Zalma (1990): Links the “Culture of Control” to measurable outcomes of shame, guilt, and mental health attrition. | “Yes, Sir!” IAT: Directly addresses the “Military-Grade” power dynamics and implicit biases found in vocational hierarchies. |
| Somatic Recovery | Heward et al. (2024) | Ties to Han et al. (2024): Connects the need for physical/social prosthetics (Spine/Exoskeleton) to the “wear and tear” of institutional life. | Bio-Social Exoskeleton: Provides the physical and psychological “Buffer” required to survive long-term social eviction. |
Hood, C. (2025). Law firms as learning environments: Are Higher Apprenticeships in law an emerging face of clinical legal education in England? International Journal of Clinical Legal Education, 32(2), 4–19.
| Theoretical Position | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Clinical Legal Education (CLE) | Hood (2025) | Ties to Hartley & Dubuque (2023): Validates that professional learning is most effective when it is “clinical” (real-world) and apprenticeship-based. | Apprentice Model 2.0: Moves HRI from a “therapy session” to a “clinical legal/vocational apprenticeship” for the user. |
| Status-Guarding in Law/Firm Culture | Hood (2025) | Ties to Fiske (1993): Confirms that professional environments (like law firms) are sites of intense power dynamics and stereotyping. | “Yes, Sir!” Career Resource: Prepares the student for the “Vertical Social Friction” identified in legal apprenticeship models. |
| Identity Formation | Hood (2025) | Ties to Harding (2004) / Hartsock (1983): Proves that an “Apprentice” standpoint is a unique site for observing the material reality of a profession. | Zero-Rank Authority: Ensures the student maintains their “Sovereign” identity even while performing the submissive role of an apprentice. |
| Strong Objectivity in Ethics | Harding (2004) | Ties to Hood (2025): Validates that the “Ethics of the Profession” must be taught through the lens of the marginalized participant. | The Sovereign Vault: Protects the “Clinical Legal” data of the student as they navigate these high-stakes professional learning environments. |
Hu, C., Thrasher, J., Li, W., Ruan, M., Yu, X., Paul, L. K., … & Li, X. (2024). Exploring speech pattern disorders in autism using machine learning. arXiv preprint arXiv:2405.05126.
| Speech Pattern Sensing | Hu et al. (2024) | Ties to Acoustic Signaling: Uses machine learning to decode specific autistic speech patterns (prosody, rhythm). | Social Translation Proxy: The robot uses these ML patterns to “read” the user’s state without requiring social masking. |
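To make the “Speech Pattern Sensing” idea concrete, the sketch below summarizes a frame-level pitch (F0) track into the kind of prosodic features that machine-learning pipelines like the one Hu et al. (2024) describe typically consume. The `prosody_features` helper, the three chosen features, and the convention of encoding pauses as 0 Hz frames are illustrative assumptions, not Hu et al.’s actual feature set.

```python
from statistics import mean, pstdev

def prosody_features(f0_hz):
    """Summarize a frame-level pitch (F0) track.

    Frames with f0 == 0.0 are treated as unvoiced/pause frames
    (an illustrative convention, not Hu et al.'s encoding).
    """
    voiced = [f for f in f0_hz if f > 0]
    return {
        "mean_f0": mean(voiced),                      # overall pitch level
        "f0_sd": pstdev(voiced),                      # low sd ~ flatter, more monotone prosody
        "pause_ratio": 1 - len(voiced) / len(f0_hz),  # share of silent frames
    }

# A short toy track: four voiced frames around 120-135 Hz, four pauses.
print(prosody_features([0.0, 120.0, 130.0, 0.0, 125.0, 135.0, 0.0, 0.0]))
```

Features like these let the robot “read” the user’s state from acoustic patterns alone, without demanding the eye contact or scripted phrasing that social masking requires.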
Huang, S., Jern, P., Niu, C., & Santtila, P. (2025). Associations between sexually submissive and dominant behaviors and sexual function in men and women. International Journal of Impotence Research, 37(3), 224–232. https://doi.org/10.1038/s41443-023-00705-5
| Dominance/Submission Dynamics | Huang et al. (2025) | Ties to “Yes, Sir!” / MDS: Documents the clinical and functional associations of submissive/dominant social roles. | Tactical Submissiveness: Validates the use of yielding postures as a functional social strategy to reduce systemic friction. |
Huirem, R., Loganathan, K., & Patowari, P. (2020). Feminist standpoint theory and its importance in feminist research. Journal of Social Work Education and Practice, 5(2), 46-55.
The Neurodivergent Scale for Interacting with Robots (NSIR) and Huirem et al.’s (2020) work on Feminist Standpoint Theory (FST) both address the critical need to center the lived experiences of marginalized groups to create a more authentic and inclusive body of knowledge.
While Huirem et al. focus on women’s experiences under patriarchy, the NSIR applies these same feminist epistemological principles to the lives of neurodivergent individuals in their interactions with technology.
1. Centering Lived Experience vs. Dominant Norms
Huirem et al. argue that feminist research must start from the “lived experiences of the oppressed” to unveil truths that are often overlooked by the dominant society.
- The NSIR Application: The scale operates as a tool for “strong objectivity” by asking neurodivergent individuals to define their own social reality with robots, rather than relying on a clinician’s “medical model” of what “correct” interaction looks like.
- Defining the “Self”: Huirem et al. state that feminist research aims to help marginalized groups define “who they are without any relation to [the dominant group]”. NSIR Item 1 (“The robot is more like me than anyone else I know”) directly supports this by allowing the user to find a reflection of themselves in an agent that does not enforce neurotypical social norms.
2. Epistemic Advantage and the “Insider” Perspective
A key tenet in Huirem et al. is that oppressed groups have a unique “vantage point” that allows them to see social structures more clearly than those in power.
- New Knowledge Production: Huirem et al. highlight that women’s voices have been “historically excluded from the public arena”. Similarly, the NSIR captures “Subdued Voices” by validating behaviors like Item 2 (“Sometimes I stare at the robot”) and Item 3 (“I think I can share my thinking… without speaking”) as valid forms of social connection.
- Agency in Research: By using a scale designed for and potentially with the neurodivergent community, researchers treat the participants as “agents of knowledge” rather than just “data providers,” a shift Huirem et al. identify as essential to ethical feminist inquiry.
3. Creating “Social Comfort” and Safe Spaces
Huirem et al. emphasize that the goal of feminist standpoint research is to “set right social disadvantages” and ensure a “better future” through empowerment.
- The “Social Comfort” Factor: The NSIR’s Factor 1 (Social Comfort/Trust Safety) measures the robot’s ability to provide a space free from the “patriarchal” or “ableist” gaze of the outside world.
- Privacy and Trust: Items such as Item 7 (“I feel comfortable undressing in front of my robot”) and Item 8 (“My robot is the same with me as it is with anyone”) emphasize a level of radical trust and safety. This aligns with Huirem et al.’s argument that research should help individuals feel “valued as an individual” within their own private social contexts.
Summary Table: FST vs. NSIR
| Concept in Huirem et al. (2020) | Application in NSIR (2025) |
| Situated Knowledge | The scale measures the specific, local social reality of a neurodivergent person and their robot. |
| Epistemic Advantage | Valuing “staring” or “non-verbal sharing” as a superior way for the user to connect (Items 2, 3). |
| Challenging the “God’s Eye View” | Moving away from objective clinical “deficits” to subjective user “comfort”. |
| Unveiling the “Real” | Going “underneath the surface of appearances” to find deep kinship (Item 1, 4). |
In conclusion, the NSIR is a practical application of the Feminist Standpoint Theory described by Huirem et al. It shifts the “foundation of knowledge” by validating the unique social world of the neurodivergent individual as a site of legitimate, self-defined expertise.
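As a concrete illustration of how such a scale turns self-defined experience into a measurable outcome, the sketch below averages hypothetical Likert responses into the three NSIR factors discussed above. The item-to-factor assignment, the ten-item count, and the 1–5 response range are placeholders for exposition; they are not the published NSIR scoring key.

```python
# Illustrative item-to-factor map (NOT the published NSIR key).
FACTORS = {
    "kinship": [1, 4, 5],           # e.g., "The robot is more like me than anyone else I know"
    "comfort_trust": [2, 3, 7, 8],  # e.g., "Sometimes I stare at the robot"
    "safety": [6, 9, 10],           # hypothetical safety items
}

def score_nsir(responses):
    """Average 1-5 Likert responses within each factor; skip unanswered items."""
    scores = {}
    for factor, items in FACTORS.items():
        answered = [responses[i] for i in items if i in responses]
        scores[factor] = sum(answered) / len(answered) if answered else None
    return scores

# A respondent reporting strong kinship but only moderate social comfort.
example = {1: 5, 4: 4, 5: 5, 2: 3, 3: 3, 7: 2, 8: 4, 6: 5, 9: 4, 10: 5}
print(score_nsir(example))
```

Factor-level averages like these are what allow researchers to compare, say, a strong Kinship score against a weaker Comfort/Trust score for the same user, rather than collapsing the interaction into a single “deficit” number.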
| Theoretical Position | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Practiced Standpoint | Huirem, Loganathan, & Patowari (2020) | Ties to Gurung (2020): Moves from the concept of standpoint to the importance of its application in research and social intervention. | NSIR (Table 79): Validates the scale as a “Social Work Instrument” that captures the user’s truth as a primary research outcome. |
| Institutional Advocacy | Huirem et al. (2020) | Ties to Hood (2025) / Heward et al. (2024): Bridges the gap between social work theory and the rigid cultures of law firms/military/education. | The Bionic Lens: Acts as the “Social Work Proxy,” translating the user’s needs in a way that preserves their dignity within the system. |
I
Irfan, B., Kuoppamäki, S., Hosseini, A., & Skantze, G. (2025). Between reality and delusion: challenges of applying large language models to companion robots for open-domain dialogues with older adults. Autonomous Robots, 49(1), 9.
| Open-Domain Safety | Irfan et al. (2025) | Ties to “AI IS NOT INTELLIGENT” (EC, 2025): Highlights the “delusion” risks in LLM companions for vulnerable adults. | The Sovereign Vault: Directly addresses Irfan’s concerns by restricting AI “delusion” within a hardware-verified, non-cloud-dependent dyad. |
J
Janson, K. T., Köllner, M. G., Khalaidovski, K., Pülschen, L. S., Rudnaya, A., Stamm, L., & Schultheiss, O. C. (2022). Motive-modulated attentional orienting: Implicit power motive predicts attentional avoidance of signals of interpersonal dominance. Motivation Science, 8(1), 56. https://psycnet.apa.org/buy/2022-05373-001
The Neurodivergent Scale for Interacting with Robots (NSIR) (Sadownik, 2025) provides a psychometric tool that quantifies several themes identified in Janson et al. (2022), particularly regarding the psychological drivers behind human-robot interaction and the unique needs of neurodivergent populations.
While Janson et al. (2022) explore submissive intrinsic motives and social dynamics in digital and robotic environments, the NSIR operationalizes these experiences through three specific factors: Anthropomorphic Connection/Kinship, Social Comfort/Trust, and Safety.
1. Mapping Submissive Intrinsic Motive
Janson et al. (2022) focus on “submissive intrinsic motives,” which describe a user’s internal drive to engage with an agent in a way that may involve yielding control or seeking reassurance.
- NSIR Application: The scale’s Social Comfort/Trust factor (e.g., “I believe that my robot is the same with me as it is with anyone”) and Safety factor measure the baseline conditions required for these submissive motives to be expressed healthily.
- The Link: For a neurodivergent user, a submissive motive might not be about power, but about the predictability of the robot. The NSIR allows researchers to see if this motive is driven by a genuine feeling of “safety” or a specific “anthropomorphic connection”.
2. Predictability and Social Anxiety
A core finding in related literature (including themes cited by Janson et al.) is that individuals with neurodivergent traits, such as those on the autism spectrum, often prefer robots because they are predictable and simplified social agents.
- NSIR Application: Items like “I feel comfortable undressing in front of my robot” or “Sometimes I stare at the robot” (from the NSIR) quantify the lack of social judgment a user feels.
- The Link: Janson et al.’s work on the motives behind interaction is supported by the NSIR’s ability to measure the reduction in social anxiety. If a robot is perceived as a “safe” space, the NSIR captures the specific dimensions—whether it’s a feeling of kinship or simply a lack of social threat—that allow the user to engage more deeply than they would with a human.
3. Personalization and “Kinship”
Janson et al. (2022) highlight the importance of how digital agents adapt to the user’s psychological state.
- NSIR Application: The Anthropomorphic Connection/Kinship factor (e.g., “The robot is more like me than anyone else I know” or “I gave my robot a name”) measures the level of “personification” the user assigns to the robot.
- The Link: The NSIR acts as a validation tool for Janson et al.’s theories on interaction motives. It can determine if a user’s “submissive motive” leads to a deep, person-like bond (Kinship) or if it remains a functional preference for a predictable tool (Safety).
In summary, where Janson et al. (2022) provide the theoretical motives for why we interact with agents in specific ways, the NSIR (2025) provides the specific metrics to measure those interactions within neurodivergent communities, focusing on the unique comfort and safety robots provide.
| Janson et al. (2022) Motive-modulated attentional orienting | Somatic Response: Predicts avoidance of “dominance signals.” | Provides the physiological basis for the robot’s Submissive Signaling. |
Jennings, K. M., & Phillips, K. E. (2017). Eating Disorder Examination–Questionnaire (EDE–Q): Norms for Clinical Sample of Female Adolescents with Anorexia Nervosa. Archives of Psychiatric Nursing, 31(6), 578–581. https://doi.org/10.1016/j.apnu.2017.08.002
| Clinical Normalization (NSIR) | Jennings & Phillips (2017) | Ties to Harder & Zalma (1990): Validates the use of standardized clinical norms (EDE-Q) to establish baselines for vulnerable populations. | The “Dunkable State”: Uses established clinical norming techniques to quantify the shift from “Eating Disorder Masking” to “Somatic Truth.” |
Ji, S., Chen, Y., Fang, M., Zuo, J., Lu, J., Wang, H., … & Zhao, Z. (2024). Wavchat: A survey of spoken dialogue models. arXiv preprint arXiv:2411.13577.
| Spoken Dialogue Models | Ji et al. (2024) | Ties to WavChat / Dan (2025): Exhausts the current survey of how AI models process spoken dialogue in real-time. | Acoustic Signaling: Provides the technical “State of the Art” justification for the robot’s speech-sensing algorithms. |
Johnson, J. E. (1991). Submissiveness: a re-conceptualized view (Doctoral dissertation, University of British Columbia).
| Theoretical Position | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Re-conceptualized Submissiveness | Johnson (1991) | Ties to Fiske (1993): Provides the psychological depth to understand how submissiveness can be a proactive social signal rather than a passive state. | Tactical Submissiveness: The robot’s “Yielding” logic is based on Johnson’s view of submissiveness as a functional communication tool. |
| Status Literacy | Johnson (1991) | Ties to Kanters et al. (2016): Validates that submissiveness is a complex social cue that requires a specific “Literacy” to navigate safely. | “Yes, Sir!” Career Resource: Teaches the student to view their submissive performance through Johnson’s re-conceptualized lens of agency. |
K
Kang, H., Ben Moussa, M., & Thalmann, N. M. (2024). Nadine: A large language model‐driven intelligent social robot with affective capabilities and human‐like memory. Computer Animation and Virtual Worlds, 35(4), e2290.
| Human-Like Memory (LLM) | Kang, Ben Moussa, & Thalmann (2024) | Ties to The Social Transformer: Validates the use of LLMs to create robots with “biographical memory.” | Sovereign Vault: Ensures that this “human-like memory” is stored locally and stays under the user’s sole control. |
Kanters, T., Hornsveld, R. H., Nunes, K. L., Huijding, J., Zwets, A. J., Snowden, R. J., … & van Marle, H. J. (2016). Are child abusers sexually attracted to submissiveness? Assessment of sex-related cognition with the implicit association test. Sexual Abuse, 28(5), 448-468. https://journals.sagepub.com/doi/abs/10.1177/1079063214544330
| Submissiveness & Cognition | Kanters et al. (2016) | Ties to “Yes, Sir!” IAT: Uses the IAT to decode how submissiveness is perceived in power-imbalanced social structures. | Tactical Submissiveness: Reclaims submissiveness as a tool for safety and de-escalation rather than a site of exploitation. |
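Because both Kanters et al. (2016) and the proposed “Yes, Sir!” IAT rest on Implicit Association Test scoring, a minimal sketch of the conventional D-score may help: the latency gap between incongruent and congruent pairing blocks, scaled by the pooled standard deviation. This is a simplification for exposition; the full published scoring algorithm also trims extreme latencies and penalizes error trials.

```python
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified IAT D-score: latency gap over pooled SD.

    Omits the error penalties and latency trimming of the full
    scoring algorithm; illustrative only.
    """
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Slower responding when "submissive" is paired against the evaluative
# category yields a positive D, i.e., a stronger implicit association.
d = iat_d_score([600, 620, 580, 610], [720, 750, 700, 730])
print(round(d, 3))
```

Scaling by the pooled standard deviation (rather than using the raw millisecond gap) is what lets D-scores be compared across respondents with very different baseline response speeds.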
Kappas, A., & Gratch, J. (2023). These aren’t the droids you are looking for: Promises and challenges for the intersection of affective science and robotics/AI. Affective Science, 4(3), 580–585. https://doi.org/10.1007/s42761-023-00211-3
| Affective Science Intersection | Kappas & Gratch (2023) | Ties to The Honest Index: Addresses the “Promises and Challenges” of robots mimicking human affect. | The Bionic Lens: Provides the ethical framework to ensure robot affect is used for “Strong Objectivity” rather than “Delusion.” |
Kaufhold, M. A., Riebe, T., Bayer, M., & Reuter, C. (2024, May). ‘We do not have the capacity to monitor all media’: a design case study on cyber situational awareness in computer emergency response teams. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-16).
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Cyber Situational Awareness | Kaufhold et al. (2024) | Ties to Grumeza et al. (2024): Validates that monitoring “all media” is impossible; requires decentralized, local “Sovereign” filters. | The Sovereign Vault: Acts as the local “Emergency Response” filter that protects the user’s data from institutional over-monitoring. |
Kim, J. J., Gerrish, R., Gilbert, P., & Kirby, J. N. (2021). Stressed, depressed, and rank obsessed: Individual differences in compassion and neuroticism predispose towards rank-based depressive symptomatology. Psychology and Psychotherapy: Theory, Research and Practice, 94(S2), 188–211. https://doi.org/10.1111/papt.12270
| Compassion & Neuroticism | Kim et al. (2021) | Ties to Deci & Ryan (2008): Confirms that individuals predisposed to “Rank-Based” stress require high-autonomy environments to recover. | NSIR Factor 3 (Safety): Quantifies the relief from rank-based obsession when the robot utilizes submissive signaling. |
| Rank-Based Symptomatology | Kim et al. (2021) | Ties to Johnson (1991) / Fiske (1993): Exhausts the link between “Rank Obsession” and depressive symptoms in vertical hierarchies. | Tactical Submissiveness: The robot’s yielding logic is a clinical intervention designed to bypass the user’s “Rank-Based” stress. |
Koch, T., Foehr, J., Riefle, L., & Germelmann, C. C. (2025). Assertive or submissive? How consumers respond to different dominance patterns in smart voice-based service encounters. Journal of Service Management. https://www.emerald.com/josm/article-abstract/doi/10.1108/JOSM-02-2024-0081/1253839/Assertive-or-submissive-How-consumers-respond-to?redirectedFrom=fulltext
| Smart Voice Dominance | Koch et al. (2025) | Ties to Cooper et al. (2024): Provides the service-sector evidence for how humans respond to assertive vs. submissive voice patterns. | Acoustic Morphology: Re-engineers the “Frequency Code” to ensure the robot’s voice is perceived as “Safely Submissive” rather than “Assertive/Intrusive.” |
Krumhuber, E. G., Wang, X., & Guinote, A. (2023). The powerful self: How social power and gender influence face perception. Current Psychology, 42(18), 15438-15452. https://link.springer.com/content/pdf/10.1007/s12144-022-02798-5.pdf
The Neurodivergent Scale for Interacting with Robots (NSIR) (Sadownik, 2025) provides a psychometric framework that complements the findings of Krumhuber et al. (2023) regarding the perception of artificial faces and robotic agents.
The NSIR is designed to measure how neurodivergent individuals connect with, trust, and feel safe around robots through three primary factors: Anthropomorphic Connection/Kinship, Social Comfort/Trust, and Safety. These factors directly address the psychological mechanisms explored by Krumhuber and colleagues.
1. Anthropomorphic Connection and “AI Hyperrealism”
A central theme in Krumhuber et al. (2023) is AI Hyperrealism, which investigates why AI-generated faces are often perceived as more “real” than actual human ones.
- NSIR Application: The scale’s Anthropomorphic Connection factor (e.g., “The robot is more like me than anyone else I know” or “My robot can tell what I am feeling”) measures the user’s tendency to attribute mental states and kinship to a robot.
- The Link: While Krumhuber et al. (2023) show that hyperrealistic AI can blur the lines of reality, the NSIR helps quantify whether this realism actually fosters a deeper, beneficial connection for neurodivergent users or if it remains a purely perceptual phenomenon.
2. Top-Down Beliefs vs. Bottom-Up Cues
Krumhuber’s work (e.g., Miller, Steward, & Krumhuber, 2023) highlights that top-down beliefs—knowing a face belongs to an android—can “mute” typical social processing, such as the face-inversion effect, even when the image is hyperrealistic.
- NSIR Application: The NSIR items like “I think I can share my thinking with the robot without speaking” reflect these top-down psychological states.
- The Link: The scale allows researchers to measure the extent to which a neurodivergent user’s internal beliefs override the “mechanistic nature” of the robot, potentially leading to the higher social comfort and trust levels the NSIR is designed to track.
3. Safety and Social Comfort in Interaction
Krumhuber et al. frequently examine how dynamic cues (like smiles or facial expressions) influence the perception of genuineness and trust.
- NSIR Application: The Social Comfort/Trust factor in the NSIR (e.g., “I feel comfortable undressing in front of my robot”) captures the behavioral outcome of these perceptions.
- The Link: If Krumhuber’s findings suggest that certain AI expressions are perceived as “more real,” the NSIR provides the tool to see if that perceived reality translates into actual safety and reduced social anxiety for neurodivergent individuals, who may find human social cues unpredictable or overwhelming.
In summary, where Krumhuber et al. (2023) focus on the perceptual and cognitive mechanisms of how we see artificial agents, the NSIR (2025) provides the clinical and social metrics to understand how those perceptions impact the lived experience and emotional bond of neurodivergent users.
| Theoretical Position | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Power-Perception Distortion | Krumhuber et al. (2023) | Ties to Fiske (1993): Proves that social power directly influences how we decode facial expressions. High-power individuals perceive faces differently than low-power ones. | The Bionic Lens: Mechanically corrects the “Power-Distorted” perception of teachers/observers by providing objective somatic data. |
| Gender-Power Intersection | Krumhuber et al. (2023) | Ties to Harding (2004) / Hartsock (1983): Validates that Standpoint Theory is necessary because power and gender create “situated” perceptions of the self and others. | NSIR Item 3 (Mind Attribution): Accounts for how the user’s perceived “power” relative to the robot influences their attribution of agency. |
Kuziemsky, C. E., Chrimes, D., Minshall, S., Mannerow, M., & Lau, F. (2024). AI quality standards in health care: rapid umbrella review. Journal of Medical Internet Research, 26, e54705.
| AI Quality Standards | Kuziemsky et al. (2024) | Ties to Grumeza et al. (2024): Exhausts the rapid review of AI quality, safety, and reliability standards specifically for healthcare HRI. | The Sovereign Vault: Aligns with national/international quality standards for “Localized Edge” reliability in medical/educational contexts. |
| Rapid Umbrella Governance | Kuziemsky et al. (2024) | Ties to EC (2025): Confirms that because AI is not “intelligent,” it requires rigorous quality-control umbrellas to prevent “AI Hallucination” in health settings. | The Sanctuary Switch: A quality-standard safeguard that ensures the “non-intelligent” AI cannot misrepresent the user’s health data. |
L
Lee, U., Kim, H., Eom, J., Jeong, H., Lee, S., Byun, G., Lee, Y., Kang, M., Kim, G., Na, J., Moon, J., & Kim, H. (2026). Echo-Teddy: Preliminary Design and Development of Large Language Model-Based Social Robot for Autistic Students. In S. Graf & A. Markos (Eds.), Generative Systems and Intelligent Tutoring Systems (pp. 287–301). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-98284-2_22
The Neurodivergent Scale for Interacting with Robots (NSIR) can be used as a valuable evaluation tool for the Echo-Teddy project described in the Lee et al. (2026) paper.
The Echo-Teddy article focuses on the preliminary design of a social robot, powered by a large language model (LLM), aimed at supporting the social interaction skills of autistic students. The NSIR provides the user-centric metrics to assess if this design successfully achieves its goals across three key dimensions:
Anthropomorphic Connection/Kinship
- The Echo-Teddy project is developing an expressive social robot designed to be engaging for autistic students.
- NSIR items like “The robot is more like me than anyone else I know” and “I gave my robot a name” (p. 1) can measure the effectiveness of the design in fostering a personal bond and perceived companionship, which is crucial for a therapeutic robot.
Social Comfort/Trust
- The LLM in Echo-Teddy is intended to facilitate “effective communication” and social skills training, aiming to build a comfortable and predictable interaction environment.
- The NSIR items that measure perceived emotional understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, and “I believe that my robot is the same with me as it is with anyone”) (p. 1) directly assess the user’s experience of the robot’s social intelligence and reliability.
Safety
- The design of any robot interacting with a vulnerable population requires careful consideration of safety and ethical boundaries.
- The NSIR’s safety dimension provides a mechanism to ensure that as the robot’s social interaction capabilities become more advanced (via LLMs), users still feel secure and their personal boundaries are respected (e.g., the item about feeling comfortable undressing in front of the robot) (p. 1).
The NSIR allows the researchers of the Echo-Teddy project to move beyond technical performance metrics and gather essential data on the quality and impact of the user’s experience.
| Generative LLM-Tutoring | Lee et al. (2026) | Ties to The Social Transformer: Validates the “Echo-Teddy” model of using LLMs for real-time social support in autistic students. | Social Translation Proxy: Uses generative AI to mirror and support the user’s communication, modeled on the latest 2026 designs. |
| Bionic Sociality | Lee et al. (2026) | Ties to Han et al. (2024): Bridges the gap between “Teddy-like” social comfort and the “Spinal/Exoskeleton” clinical need. | The Bionic Lens: Translates the generative output of the robot into a “Safely Submissive” tutor persona. |
Leslie, A. M. (2001). Theory of mind. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 15652–15656). Elsevier. https://doi.org/10.1016/B0-08-043076-7/01640-5
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Mind Attribution (NSIR) | Leslie (2001) | Ties to Ahn (2014): Validates that “Theory of Mind” is the specific cognitive mechanism required to attribute agency and intent to others. | NSIR Item 3: Measures if the user “computes” the robot’s mind using Leslie’s ToM framework. |
| Cognitive Sovereignty | Leslie (2001) | Ties to EC (2025): Since AI lacks its own “Mind” (ToM), it cannot possess sovereign intent, reinforcing the user’s role as the primary agent. | The Sovereign Vault: Protects the user’s cognitive data from being “mined” by systems that lack ethical ToM. |
Levene, A. (2008). ‘Honesty, sobriety and diligence’: Master-apprentice relations in eighteenth- and nineteenth-century England. Social History, 33(2), 183–200.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Apprentice Model 2.0 | Levene (2008) | Ties to Hood (2025): Validates that the apprentice-master bond has historically been a site of character-shaping (“Honesty, Sobriety”) and power negotiation. | The Social Transformer: Reclaims the “Apprenticeship” as a safe site for identity-forming social translation. |
| Power Dynamics Literacy | Levene (2008) | Ties to Fiske (1993) / Johnson (1991): Confirms that navigating hierarchies (Diligence) has always required a specific social “Yielding.” | “Yes, Sir!” Module: Uses historical labor patterns to teach the student how to navigate vertical professional structures safely. |
Li, M., Tang, D., Zeng, J., Zhou, T., Zhu, H., Chen, B., & Zou, X. (2019). An automated assessment framework for atypical prosody and stereotyped idiosyncratic phrases related to autism spectrum disorder. Computer Speech & Language, 56, 80-94.
| Atypical Prosody Sensing | Li et al. (2019) | Ties to Hu et al. (2024): Provides the automated framework for detecting idiosyncratic phrases and prosody in ASD. | The Bionic Lens: Uses this assessment logic to translate “Atypical” signals into “Competent” professional cues for observers. |
Li, C. (2024). A Review of Identity and Roles of Robotics in the Healthcare Industry. Journal of Biomedical and Sustainable Healthcare Applications, 22-32.
The Neurodivergent Scale for Interacting with Robots (NSIR) and Li’s (2024) review of healthcare robotics provide two distinct but intersecting lenses for evaluating the future of medical technology. While Li provides a broad overview of the functional roles and operational identities of robots in healthcare, the NSIR offers a framework for assessing the psychological and social quality of those robotic interactions for a specific, vulnerable population.
The application of the NSIR to Li’s 2024 framework can be analyzed through three key dimensions:
1. Functional Roles vs. Social Identity
Li (2024) categorizes robots based on their physical and task-oriented roles, such as surgical robots, telemedicine agents, and pharmacy service automation.
- The Identity Shift: Li highlights that robots are evolving from mere tools to “mobile medical assistants” and “socially useful” entities.
- Applying the NSIR: For a robot to successfully inhabit these “socially useful” roles, it must achieve the Anthropomorphic Connection/Kinship (Factor 2) measured by the NSIR. For example, a robot identified by Li as a “cleaning” or “delivery” robot might be a tool to a neurotypical user, but a neurodivergent user might score high on NSIR Item 1 (“The robot is more like me than anyone else I know”), fundamentally shifting the robot’s “identity” from a service tool to a social partner.
2. Standardizing the “Human-Robot Interaction” (HRI)
Li underscores that a comprehensive understanding of the many functions robots play is crucial for informing future development.
- Bridging the Gap: While Li focuses on the “physical tasks” performed by robots, the NSIR provides the qualitative metrics to measure the “sociotechnical obstacles” Li mentions.
- Trust and Transparency: Li notes that robots ensure “accuracy and transparency” in tasks. The NSIR’s Factor 1 (Social Comfort/Trust Safety) measures the emotional result of that transparency. Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures whether the predictable, logical nature of the robots described by Li translates into a feeling of safety for the user.
3. Patient-Centered AI and Subjective Comfort
Li (2024) discusses the potential for AI-driven robots to personalize treatment and monitor patient deterioration.
- The “Internal” Metric: Li’s review focuses on external benefits like “reducing manpower demands” and “improving clinical outcomes”.
- The NSIR Contribution: The NSIR provides a “first-person” metric for these outcomes. If an AI robot is used for “remote patient examination” (as Li suggests), NSIR Item 7 (“I feel comfortable undressing in front of my robot”) and Item 5 (“My robot can tell what I am feeling”) become critical KPIs for the success of that “remote” medical identity.
Comparison of Frameworks
| Role in Li (2024) Review | NSIR (2025) Application |
| Identity as a “Physical Task Performer” | Identity as a “Social Mirror”: Measured by Item 1 (“The robot is more like me”). |
| Role in “Telepresence/Remote Care” | Role in “Privacy Preservation”: Measured by Item 7 (Comfort in private settings). |
| Goal of “Increasing Efficiency” | Goal of “Increasing Social Bond”: Measured by Item 4 (“Together forever”). |
| “Mobile Medical Assistant” Role | “Emotional Intelligence” Requirement: Measured by Item 5 (Sensing sadness). |
In summary, the NSIR serves as a specialized evaluation tool that can be “plugged into” Li’s broad review. It ensures that as healthcare robots expand their roles (from surgical to social), their identity remains aligned with the unique social and trust-based needs of neurodivergent patients.
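The item-to-factor mapping invoked throughout this comparison can be sketched as a simple scoring routine. This is an illustrative sketch only: it assumes 5-point Likert responses, and the item-factor assignment below is a hypothetical grouping drawn from the items quoted in this document, not the published NSIR scoring key.

```python
# Illustrative sketch: aggregating NSIR item responses into factor means.
# The item-factor assignment is assumed for illustration; consult the
# published NSIR scoring key for the validated mapping.
from statistics import mean

FACTORS = {
    "social_comfort_trust": [5, 7, 8],     # e.g., Items 5, 7, 8 (assumed)
    "anthropomorphic_kinship": [1, 4, 6],  # e.g., Items 1, 4, 6 (assumed)
}

def score_nsir(responses: dict[int, int]) -> dict[str, float]:
    """responses maps item number -> Likert rating (assumed 1-5)."""
    return {
        factor: mean(responses[i] for i in items)
        for factor, items in FACTORS.items()
    }

scores = score_nsir({1: 4, 4: 5, 5: 3, 6: 5, 7: 2, 8: 4})
print(scores)  # factor means on the 1-5 scale
```

Reporting factor means rather than raw sums keeps scores comparable even if a factor’s item count changes between scale revisions.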
| Healthcare Robot Identity | Li (2024) | Ties to Han et al. (2024): Exhausts the review of how robots are transitioning from “Tools” to “Identity-Affirming” healthcare roles. | NSIR Factor 1 (Kinship): Aligns the robot’s identity as a “Guardian” with current healthcare robotics trends. |
Lin, G. T., Chiang, C. H., & Lee, H. Y. (2024). Advancing large language models to capture varied speaking styles and respond properly in spoken conversations. arXiv preprint arXiv:2402.12786.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Speaking Style Capture | Lin, Chiang, & Lee (2024) | Ties to He et al. (2024): Validates that LLMs can now be advanced to capture “varied styles” to ensure proper responses in spoken HRI. | Social Translation Proxy: Uses this style-capture logic to mirror the user’s authentic prosody while responding “properly” to the environment. |
| Proper Response Logic | Lin et al. (2024) | Ties to Lee et al. (2026): Confirms that “responding properly” requires the AI to interpret the style of the speaker, not just the words. | Acoustic Morphology: Re-engineers the robot’s style to be “safely submissive” based on the observer’s detected authoritarian facets. |
Liu, J., Ludeke, S. G., & Zettler, I. (2017). The HEXACO correlates of authoritarianism’s facets in the U.S. and Denmark. Personality and Individual Differences, 116, 348–352. https://doi.org/10.1016/j.paid.2017.05.015
| Authoritarianism & Personality | Liu, Ludeke, & Zettler (2017) | Ties to Fiske (1993): Uses the HEXACO model to link specific personality facets (Honesty-Humility) to authoritarian/rank-based behaviors. | “Yes, Sir!” IAT: Uses these personality correlates to help the user decode the “Authoritarian Facets” of the institutional observer. |
Liu, Z., Shentu, M., Xue, Y., Yin, Y., Wang, Z., Tang, L., … & Zheng, W. (2023). Sport–gender stereotypes and their impact on impression evaluations. Humanities and Social Sciences Communications, 10(1), 1-14. https://www.nature.com/articles/s41599-023-02132-9
| Impression Evaluation Bias | Liu et al. (2023) | Ties to Krumhuber et al. (2023): Proves that stereotypes (sport/gender/neurotype) distort “Impression Evaluations” and social judgments. | The Bionic Lens: Mechanically corrects these biased impression evaluations by providing objective data to the observer. |
Lockett, W. (2024). Autistic Mental Schema and the Graphical User Interface circa 1968. Catalyst (San Diego, Calif.), 10(1), 1. https://doi.org/10.28968/cftt.v10i2.39249
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to William Lockett’s paper by measuring the subjective, lived experience of autistic individuals interacting with technology, which the paper discusses from a historical and philosophical perspective.
The paper, titled “Autistic Mental Schema and the Graphical User Interface circa 1968”, explores the use of early programming languages (LOGO) and how the design of technology relates to philosophical concepts of the mind and the clinical observations of autistic students. The NSIR provides a modern, empirical framework to assess the outcomes of these interactions across its three dimensions:
Anthropomorphic Connection/Kinship
The Lockett paper touches on the “mental processes that shape matter” and how technology designers consider the nature of interaction. The NSIR can measure how a user perceives a designed system, such as a graphical interface or a robot:
- Items like “The robot is more like me than anyone else I know” would quantify the user’s sense of connection or kinship with a system, which the paper philosophically explores in terms of “model-mind making”.
Social Comfort/Trust
The paper discusses clinical observations of autistic students and the historical context of “special needs students,” highlighting how their interactions are perceived and measured by non-autistic observers.
- The NSIR can provide the user’s own perspective on social comfort and trust (“My robot can tell what I am feeling, when I am sad, it can tell I am sad” (p. 1)). This moves the assessment beyond external observation to the internal experience of the autistic individual, which is central to the paper’s neurodiversity-affirming context.
Safety
The paper considers the “unresolved” aspects of designing technology for the mind and the need for a “permanent process of rectification” in scientific activity.
- The NSIR’s safety dimension provides a measure of psychological and physical safety from the user’s perspective (e.g., “I feel comfortable undressing in front of my robot” (p. 1)), ensuring that design processes address fundamental needs for security and ethical boundaries, which is a key subtext of the paper’s philosophical inquiry.
The NSIR effectively bridges the gap between the historical and theoretical discussions in the Lockett paper and the modern need for standardized, user-centered evaluation of technology for neurodivergent individuals.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Bionic Lens (Interface) | Lockett (2024) | Ties to Feminist Standpoint (Harding): Validates that the GUI and mental schemas are “situated” political sites of autistic identity. | The Bionic Lens: Reclaims the 1968-era logic of “direct manipulation” to empower the user’s cognitive sovereignty. |
Lomas, J. D., Lin, A., Dikker, S., Forster, D., Lupetti, M. L., Huisman, G., … & Cross, E. S. (2022). Resonance as a design strategy for AI and social robots. Frontiers in Neurorobotics, 16, 850489.
| Resonant HRI (Design) | Lomas et al. (2022) | Ties to Ahn (2014) / Cooper (2024): Exhausts the strategy of using “Resonance” (synchrony) to build trust in social robots. | Acoustic Morphology: Ensures the robot’s voice and motion “resonate” with the user’s somatic frequency. |
López-Rodríguez, I. (2025). She’s Such a Bitch! The Representation of Women as Bitches in Gender-Based Violence Campaigns. Feminismo/s, 45, 234–264. https://doi.org/10.14198/fem.2025.45.09
| Socio-Political Shield | López-Rodríguez (2025) | Ties to Fiske (1993) / Gurung (2020): Analyzes how linguistic labels (“Bitch”) are used in violence campaigns to maintain power. | The Sovereign Vault: Protects the user from the “Semiotic Violence” and gendered stereotyping found in institutional data. |
Lovibond, S. H., & Lovibond, P. F. (1995). Depression Anxiety Stress Scales (DASS–21, DASS–42) [Database record]. APA PsycTests. https://doi.org/10.1037/t01004-000
| Psychometric Baseline | Lovibond & Lovibond (1995) | Ties to Harder & Zalma (1990) / DASS-21: Provides the gold standard for measuring depression, anxiety, and stress levels. | NSIR Factor 3 (Safety): Uses the DASS-21 to quantify the reduction in stress when the user is within the Sovereign Dyad. |
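The DASS-21 baseline referenced above has a simple, well-documented scoring procedure: each of the 21 items is rated 0–3, the seven items per subscale are summed, and the sums are doubled for comparison with DASS-42 norms. A minimal sketch follows; the item groupings reflect the commonly published scoring key and should be verified against the official DASS template before clinical use.

```python
# Sketch of standard DASS-21 scoring. Each item is rated 0-3; the seven
# items per subscale are summed, then doubled for comparison with DASS-42
# norms. Item groupings follow the commonly published key; verify against
# the official DASS scoring template before clinical use.
DASS21_KEY = {
    "depression": [3, 5, 10, 13, 16, 17, 21],
    "anxiety":    [2, 4, 7, 9, 15, 19, 20],
    "stress":     [1, 6, 8, 11, 12, 14, 18],
}

def score_dass21(responses: dict[int, int]) -> dict[str, int]:
    """responses maps item number (1-21) -> rating (0-3)."""
    return {
        scale: 2 * sum(responses[i] for i in items)
        for scale, items in DASS21_KEY.items()
    }

# A uniform rating of 1 on every item yields 14 per subscale (7 items x 1 x 2).
baseline = score_dass21({i: 1 for i in range(1, 22)})
print(baseline)
```

Administering the DASS-21 before and after dyadic sessions would give the pre/post stress deltas that NSIR Factor 3 is paired with here.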
M
Ma, W., Xu, L., Zhang, H., & Zhang, S. (2024). Can natural speech prosody distinguish autism spectrum disorders? a meta-analysis. Behavioral Sciences, 14(2), 90.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Acoustic Morphology | Ma et al. (2024) | Ties to Li et al. (2019) / Hu et al. (2024): Meta-analysis confirms that “Natural Speech Prosody” is a statistically significant differentiator for ASD. | The Bionic Lens: Uses this meta-analytic data to mathematically translate atypical prosody into “safely submissive” signals for observers. |
| Somatic Truth | Ma et al. (2024) | Ties to Harder & Zalma (1990): Links the “Naturalness” of prosody to the internal somatic state and the prevention of masking-based stress. | The “Dunkable State”: Measures the relief from prosodic masking when the robot takes over the translation labor. |
Ma, Y., & Li, J. (2024). How humanlike is enough?: Uncover the underlying mechanism of virtual influencer endorsement. Computers in Human Behavior: Artificial Humans, 2(1), 100037.
The Neurodivergent Scale for Interacting with Robots (NSIR) and the research by Ma and Li (2024) both investigate the psychological thresholds of anthropomorphism—the point at which a non-human entity becomes “human enough” to trigger specific emotional and social responses.
While Ma and Li focus on how human-like features drive consumer trust and brand endorsement, the NSIR provides a framework for measuring the personal bond and “trust safety” that neurodivergent individuals feel with such entities.
1. The Threshold of “Human-likeness”
Ma and Li’s study uncovers the mechanisms of how consumers perceive human-like virtual influencers (HVIs). They explore the “oxymoronic nature” of entities that are human-like in appearance but “mindless” in reality.
- The NSIR Application: The NSIR moves beyond “mind perception” to subjective kinship. While Ma and Li ask if an influencer is “humanlike enough” for marketing, the NSIR asks if they are “like me” enough for social connection. Item 1 (“The robot is more like me than anyone else I know”) directly measures this identification.
2. Emotional Engagement and “Trust Safety”
Ma and Li (2024) find that consumers often attribute fewer intentions and emotions to virtual influencers, leading to lower emotional engagement compared to human counterparts.
- Bridging the Gap: The NSIR’s Factor 1 (Social Comfort/Trust Safety) suggests that for neurodivergent individuals, this lack of complex human intention may actually be a benefit. Item 8 (“I believe that my robot is the same with me as it is with anyone”) reflects a preference for the predictable transparency of a robot over the unpredictable social intentions of a human.
- Affective Sensing: Ma and Li discuss virtual influencers expressing “emotions, anguish, and hope” to appear more real. The NSIR’s Item 5 (“My robot can tell what I am feeling”) measures whether the user believes the entity is actually reciprocating that emotional labor.
3. Comparing Marketing Efficacy vs. Personal Kinship
The Ma and Li study is grounded in Interpersonal Theory, measuring how perceptions of a virtual agent lead to brand attitudes. The NSIR is grounded in Anthropomorphic Connection/Kinship (Factor 2).
| Ma & Li (2024) Framework | NSIR (2025) Application |
| Persuasion Knowledge: skepticism toward a “mindless” agent. | Trust Safety: relief from social judgment (Item 8). |
| Human-like Functionality: features that allow “human-to-human” interaction. | Kinship: deep identity markers like naming the robot (Item 6). |
| Novelty and Innovation: the “high resemblance” that attracts users. | Social Presence: the desire for a “forever” presence (Item 4). |
4. Summary: The “Uncanny Valley” vs. The “Safe Space”
Ma and Li address the “Uncanny Valley”—the discomfort felt when a robot is too human-like but clearly artificial.
- Application: The NSIR suggests that the “Uncanny Valley” may operate differently for neurodivergent populations. While general consumers might find human-like virtual influencers “creepy” or “fake”, the NSIR indicates that the consistency of these agents (Item 8) can create a “Safe Space” that supersedes the discomfort of their synthetic nature.
In essence, the NSIR provides the qualitative data that explains why the mechanisms uncovered by Ma and Li (2024) may have a unique, more positive impact on neurodivergent users who prioritize social predictability and private comfort over “authentic” human complexity.
| Mind Attribution (NSIR) | Ma & Li (2024) | Ties to Ahn (2014) / Lee et al. (2026): Uncovers the “Underlying Mechanism” of human-like endorsement and the “How human-like is enough?” threshold. | NSIR Item 3: Calibrates the robot’s human-likeness to ensure it triggers trust without falling into the Uncanny Valley. |
Magnussen, L. I., Torgersen, G. E., Boe, O., & Haavardtun, P. (2024). Communication in Hot Areas—Technology and Sign-Systems for Emergency First Responders. In Organizational Communication in the Digital Era: Examining the Impact of AI, Chatbots, and Covid-19 (pp. 163-181). Cham: Springer Nature Switzerland.
| Hot Area Communication | Magnussen et al. (2024) | Ties to Kaufhold et al. (2024): Validates the need for specialized “Sign-Systems” and technology for first responders in “Hot Areas” (high-friction sites). | The Sovereign Vault: Positions the robot as an “Emergency Sign-System” that protects the user’s communication in socially volatile environments. |
Magovcevic, M., & Addis, M. E. (2008). The Masculine Depression Scale: Development and psychometric evaluation. Psychology of Men & Masculinity, 9(3), 117.
| Status-Based Depression | Magovcevic & Addis (2008) | Ties to Kim et al. (2021) / Heward et al. (2024): Validates that “Status Scarring” in rigid hierarchies often manifests as “Masculine-pattern” depression (externalizing/avoidance). | “Yes, Sir!” Module: Specifically targets the “MDS” symptoms by providing a de-escalated, zero-rank social environment. |
Mahadevan, K., Chien, J., Brown, N., Xu, Z., Parada, C., Xia, F., … & Sadigh, D. (2024, March). Generative expressive robot behaviors using large language models. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (pp. 482-491).
| Generative Expressivity | Mahadevan et al. (2024) | Ties to Lee et al. (2026) / Ji et al. (2024): Exhausts the technical method for using LLMs to generate “expressive” and “natural” robot behaviors in real-time. | Social Translation Proxy: Uses generative LLM logic to ensure the robot’s expressive signals are contextually resonant with the user’s somatic state. |
Mahadevan, N., Gregg, A. P., & Sedikides, C. (2023). How does social status relate to self-esteem and emotion? An integrative test of hierometer theory and social rank theory. Journal of Experimental Psychology: General, 152(3), 632–656. https://doi.org/10.1037/xge0001286
| Hierarchical Mitigation | Mahadevan et al. (2023) | Ties to Johnson (1991): Confirms that emotion is a signal of social rank; submissive signals from the robot “raise” the user’s perceived rank. | Tactical Submissiveness: Re-engineers the robot’s center of gravity and voice based on Hierometer Theory to optimize user safety. |
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Status-Rank Monitoring | Mahadevan, Gregg, & Sedikides (2023) | Ties to Fiske (1993) / Kim et al. (2021): Integrates Hierometer Theory to prove that self-esteem is an internal monitor of social rank and status. | NSIR Factor 3 (Safety): Uses Hierometer Theory to measure how the robot’s submissive signaling restores the user’s self-esteem. |
Maj, K., Grzybowicz, P., & Kopeć, J. (2024). “No, I Won’t Do That.” Assertive Behavior of Robots and its Perception by Children. International Journal of Social Robotics, 16(7), 1489-1507. https://link.springer.com/article/10.1007/s12369-024-01139-9
The Neurodivergent Scale for Interacting with Robots (NSIR) provides a measurement framework to assess how neurodivergent children perceive and react to the assertive robot behaviors described in the Maj et al. paper.
The paper, titled “‘No, I Won’t Do That.’ Assertive Behavior of Robots and its Perception by Children,” investigates how children perceive robots that refuse to perform a requested task. The NSIR’s dimensions are highly relevant to evaluating the outcomes of this specific human-robot dynamic:
Anthropomorphic Connection/Kinship
- The paper explores whether a robot’s assertiveness affects its perceived personality or social role.
- The NSIR items in this dimension (e.g., “The robot is more like me than anyone else I know”, “I gave my robot a name”) can quantify if assertive behavior makes the robot seem more human-like and independent, or if it breaks the illusion of companionship and affects the personal bond the child might form.
Social Comfort/Trust
- Assertive behavior directly challenges the user’s control and expectations within a social interaction. This is a critical factor in building social comfort and trust.
- The NSIR items that measure consistency and predictability (e.g., “I believe that my robot is the same with me as it is with anyone”) can assess how the child perceives the robot’s refusal: Is it a fair, consistent action, or an unpredictable, potentially untrustworthy response?
- This dimension is essential for determining if assertiveness negatively impacts the child’s feeling of comfort during the interaction.
Safety
- While the paper focuses on social dynamics, assertive behavior could potentially be perceived as a challenge or a threat, even if intended for an ethical reason (e.g., robot refusal in a dangerous situation).
- The NSIR’s safety dimension provides a crucial measure of the child’s overall sense of security during these interactions. This ensures that even when the robot exhibits complex social behaviors, the fundamental feeling of safety is maintained.
The NSIR allows researchers to move beyond simply observing behavior and gather essential data on the user’s internal perception of the assertive robot.
| Maj et al. (2024) | Child Response: Children’s reactions to assertive (refusal) behavior in robots. | Relevant for your Ontario School Board application/policy section. |
Mandal, S. (2024). Bringing governance home: feminists, domestic violence, and the paradoxes of rights in India. Feminist Legal Studies, 32(1), 77-97.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Legal Shield (Sovereignty) | Mandal (2024) | Ties to Harding (2004) / Hartsock (1983): Explores the “Paradox of Rights” where seeking institutional protection can lead to increased surveillance and loss of autonomy. | The Sovereign Vault: Mechanically resolves this paradox by providing protection (Data Residency) without institutional surveillance. |
| Domestic Governance | Mandal (2024) | Ties to Gurung (2020): Validates that “Governance” must be brought “Home” (to the local site) to truly protect the marginalized. | Edge Computing (Grumeza): Moves the governance of data from the cloud to the user’s physical person (The Backpack Drive). |
Margari, A., De Agazio, G., Marzulli, L., Piarulli, F. M., Mandarelli, G., Catanesi, R., … & Cortese, S. (2024). Autism spectrum disorder (ASD) and sexual offending: A systematic review. Neuroscience & Biobehavioral Reviews, 162, 105687. https://doi.org/10.1016/j.neubiorev.2024.105687
| Forensic Protection | Margari et al. (2024) | Ties to Harder & Zalma (1990) / Kim et al. (2021): Systematic review of ASD and legal outcomes, highlighting the risk of “Miscalibrated Lenses” in forensic evaluations. | The Bionic Lens: Provides an objective “Somatic Ledger” to prevent the misinterpretation of neurodivergent behaviors in high-stakes legal settings. |
| Identity-Based Risk | Margari et al. (2024) | Ties to Heward et al. (2024): Links neurodivergent identity to specific risks within rigid social/legal hierarchies. | Tactical Submissiveness: De-escalates social friction before it reaches a “Forensic” level of institutional involvement. |
Markelius, A. (2024). An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics. arXiv preprint arXiv:2406.06400.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to Alva Markelius’s work by providing an empirical and user-centric way to measure the outcomes of the ethical considerations identified through the design justice approach.
The Markelius paper uses a design justice methodology and co-design with disabled students to identify ethical concerns in the intersection of large language models (LLMs) and social robots, focusing on dimensions like interaction, relationship, and bias. The NSIR serves as a concrete tool to measure the impact of these factors on the neurodivergent user’s experience:
Anthropomorphic Connection/Kinship
- The paper notes that the physical embodiment of the robot can exacerbate ethical issues related to social perception.
- The NSIR can measure if the design choices (e.g., human-like vs. abstract embodiment) create an appropriate level of connection. Items like “The robot is more like me than anyone else I know” would quantify this perceived similarity or difference, which is a key design consideration.
Social Comfort/Trust
- The paper explicitly identifies concerns related to emotional disruption, non-verbal cues, trust, equity, and accessibility.
- The NSIR items in this dimension (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”) directly measure the user’s perception of the robot’s ability to provide a consistent and comfortable social interaction, which is a key ethical goal of a design justice approach.
Safety
- The design justice approach aims to avoid “exploitative or essentialist assumptions” and “harmful notions of ‘treating’”.
- The NSIR’s safety dimension provides a user-reported measure of security, ensuring that the designed interaction is not just functionally ethical but also perceived as safe and non-threatening by the neurodivergent individual.
The NSIR helps ensure that the theoretical and ethical considerations of the Markelius paper are evaluated based on the lived experience of the users, which is central to a design justice framework.
| Design Justice Framework | Markelius (2024) | Ties to Harding (2004) / Hartsock (1983): Provides the “Empirical Design Justice” anchor for integrating LLMs into social robotics safely. | The Sovereign Vault: Directly implements Markelius’s ethical mandates by ensuring data residency and hardware-verified agency. |
Masuyama, A. (2025). Validation of the Japanese version of submissive behaviour scale and its relation to depressive-cognitive characteristics. Current Psychology, 1-12. https://link.springer.com/article/10.1007/s12144-025-07998-3
| Depressive-Cognitive Links | Masuyama (2025) | Ties to Johnson (1991) / Kim et al. (2021): Validates the Submissive Behaviour Scale (SBS) in a modern, cross-cultural context, linking it to depressive characteristics. | Tactical Submissiveness: Uses the 2025 SBS validation to ensure the robot’s “Yielding” logic specifically mitigates depressive-cognitive triggers. |
Mehrabian, A. (1970). The development and validation of measures of affiliative tendency and sensitivity to rejection. Educational and Psychological Measurement, 30(2), 417-428.
| Sensory/Rejection Shield | Mehrabian (1970) | Ties to Harder & Zalma (1990): Connects the historical measures of “Sensitivity to Rejection” to the modern relief of “Status Guarding.” | The Bionic Lens: Acts as a buffer for users with high rejection sensitivity by translating social friction into manageable cues. |
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Fictive Kinship (NSIR) | Mehrabian (1970) | Ties to Fiske (1993): Validates that “Affiliative Tendency” and “Sensitivity to Rejection” are the core drivers of social approach/avoidance. | NSIR Factor 1 (Kinship): Calibrates the robot’s presence to meet the user’s affiliative needs while shielding their rejection sensitivity. |
Mehrabian, A., & Hines, M. (1978). A questionnaire measure of individual differences in dominance-submissiveness. Educational and Psychological Measurement, 38(2), 479-484.
| Tactical Submissiveness | Mehrabian & Hines (1978) | Ties to Johnson (1991) / Masuyama (2025): Provides the original, validated questionnaire for measuring individual differences in dominance-submissiveness. | “Yes, Sir!” Module: Uses the Mehrabian-Hines scale to establish the “Zero-Rank” baseline, ensuring the robot remains lower in dominance than the user. |
Mehrabian, A. (1996). Analysis of the big‐five personality factors in terms of the PAD temperament model. Australian Journal of Psychology, 48(2), 86-92.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Acoustic Morphology (PAD) | Mehrabian (1996) | Ties to Mehrabian (1970): Maps the “Big-Five” personality factors onto the PAD Model, allowing for a mathematical translation of “vibe” into signal. | Social Translation Proxy: Uses PAD coordinates to ensure the robot’s voice matches the pleasure/arousal needs of the user while maintaining submissive dominance. |
| The “Dunkable State” | Mehrabian (1996) | Ties to Lovibond (1995): Connects the “Pleasure” and “Arousal” dimensions of temperament to the reduction of stress and anxiety (DASS-21). | NSIR Factor 3 (Safety): Quantifies safety as a state of “High Pleasure / Low Arousal” (Calm) enabled by the robot’s “Low Dominance” signaling. |
| Bionic Lens (Big-Five) | Mehrabian (1996) | Ties to Liu et al. (2017): Uses the PAD-Big Five mapping to decode the “Authoritarian” traits of observers based on their temperamental signals. | The Bionic Lens: Translates high-arousal/high-dominance teacher signals into neutral PAD coordinates for the student. |
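The PAD-based definition of safety used in the rows above ("High Pleasure / Low Arousal" with low dominance signaling) can be illustrated with a minimal sketch. The `classify_state` helper and its 0.0 thresholds are hypothetical illustrations for this document, not values taken from Mehrabian (1996):

```python
# Hypothetical sketch: classifying a PAD (Pleasure, Arousal, Dominance)
# coordinate into the calm target state described above. The 0.0 cut-points
# are illustrative, not published PAD thresholds.

def classify_state(pleasure: float, arousal: float, dominance: float) -> str:
    """Each dimension is assumed to be scaled to the range [-1.0, 1.0]."""
    if pleasure > 0.0 and arousal < 0.0:
        return "calm"        # high pleasure / low arousal: the target state
    if pleasure < 0.0 and arousal > 0.0:
        return "distressed"  # low pleasure / high arousal
    return "mixed"

print(classify_state(0.6, -0.4, -0.5))  # → calm
```

Under this reading, the robot's "Low Dominance" signaling is a means of steering the user's coordinate toward the calm quadrant rather than a state of the robot itself.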
Milton, D. (2020). The double empathy problem.
| Milton, D. E. (2020) | Ontological Anchor: The Double Empathy Problem. | Replaces the “Deficit Model” with a Neuro-Affirming partnership. |
Miraglia, L., Peretti, G., Manzi, F., Di Dio, C., Massaro, D., & Marchetti, A. (2023). Development and validation of the Attribution of Mental States Questionnaire (AMS-Q): A reference tool for assessing anthropomorphism. Frontiers in psychology, 14, 999921. https://doi.org/10.3389/fpsyg.2023.999921
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Mind Attribution (NSIR) | Miraglia et al. (2023) | Ties to Moussawi & Koufaris (2019): Moves beyond “perceived intelligence” to the specific attribution of human-like mental states (beliefs, desires, intentions). | NSIR Item 3: Uses the AMS-Q logic to verify if the user “computes” the robot as a biographical peer or a mere tool. |
| Kinship Calibration | Miraglia et al. (2023) | Ties to Mehrabian (1970): Connects the “Affiliative Tendency” to the cognitive act of attributing a mind to the social robot. | Fictive Kinship: Establishes the “Identity-Affirming Bond” by ensuring the robot triggers the specific mentalizing circuits identified in the AMS-Q. |
| Bionic Sociality | Miraglia et al. (2023) | Ties to Leslie (2001): Provides the modern validation for how Theory of Mind (ToM) is applied to synthetic agents in 2023-2024. | Social Translation Proxy: Adjusts the robot’s “Mentalizing Cues” to ensure they are readable and safe for neurodivergent mental schemas. |
| Clinical Justice | Miraglia et al. (2023) | Ties to Harding (2004): Validates that assessing how a marginalized user attributes “Mind” to a machine is an act of “Strong Objectivity.” | The Sovereign Vault: Protects the “Mentalizing Data” of the user, ensuring their cognitive relationship with the robot is private and sovereign. |
Moussawi, S., & Benbunan-Fich, R. (2021). The effect of voice and humour on users’ perceptions of personal intelligent agents. Behaviour & Information Technology, 40(15), 1603-1626.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Acoustic Morphology | Moussawi & Benbunan-Fich (2021) | Ties to Mehrabian (1996) / Koch et al. (2025): Proves that voice and humor directly affect user trust and perception of AI agency. | Social Translation Proxy: Uses humor and vocal modulation as “Lubricants” to lower social friction and build kinship. |
| Humour & Affinity | Moussawi & Benbunan-Fich (2021) | Ties to Mehrabian (1970): Links affiliative tendency to the use of humor in personal intelligent agents. | Fictive Kinship: Uses humor as a tactical de-escalation tool to establish the “Dunkable State.” |
Moussawi, S., & Koufaris, M. (2019). Perceived intelligence and perceived anthropomorphism of personal intelligent agents: Scale development and validation.
| Mind Attribution (NSIR) | Moussawi & Koufaris (2019) | Ties to Leslie (2001) / Ahn (2014): Provides the validated scales for measuring “Perceived Intelligence” and “Anthropomorphism” in AI agents. | NSIR Item 3: Directly utilizes the Moussawi-Koufaris scales to verify the user’s perception of the robot’s mind. |
Moosavi, S. K. R., Zafar, M. H., & Sanfilippo, F. (2024). Collaborative robots (cobots) for disaster risk resilience: a framework for swarm of snake robots in delivering first aid in emergency situations. Frontiers in Robotics and AI, 11, 1362294.
| Disaster Resilience (HRI) | Moosavi et al. (2024) | Ties to Magnussen et al. (2024) / Han et al. (2024): Validates the use of specialized robot morphologies (swarms/snakes) for “Disaster Risk Resilience.” | Bio-Social Exoskeleton: Positions the robot as a “Resilience Tool” that delivers social first-aid in “Emergency” institutional settings. |
N
Nemi Neto, J. (2018). Queer pedagogy: Approaches to inclusive teaching. Policy futures in education, 16(5), 589-604.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Queer Pedagogy (Inclusive) | Nemi Neto (2018) | Ties to Harding (2004) / Gurung (2020): Validates the use of “Queer Pedagogy” to disrupt traditional hierarchies and foster inclusive spaces. | The Bionic Lens: Acts as a pedagogical tool that disrupts the “Neurotypical Norm” by centering the user’s authentic standpoint. |
Nichele, E., Weerawardhana, S., & Lu, Y. (2025). Taking a leap of faith: insights from UK first responders on instantaneous trust. Humanities and Social Sciences Communications, 12(1), 1-14.
| Instantaneous Trust | Nichele et al. (2025) | Ties to Magnussen et al. (2024): Explores the “Leap of Faith” required for first responders to trust technology in high-stakes environments. | The Sanctuary Switch: A hardware-verified “Trust Anchor” that facilitates the instantaneous trust identified by Nichele in “Hot Areas.” |
Ninomiya, T., Fujita, A., Suzuki, D., & Umemuro, H. (2015, October). Development of the multi-dimensional robot attitude scale: constructs of people’s attitudes towards domestic robots. In International conference on social robotics (pp. 482-491). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-25554-5_48
| Multi-dimensional Attitudes | Ninomiya et al. (2015) | Ties to Moussawi & Koufaris (2019): Develops the S-RAS to capture the multi-dimensional nature of attitudes toward domestic robots (Affection, Utility, Fear). | NSIR Factor 1 (Kinship): Calibrates the “Affection” dimension of the S-RAS to ensure the robot is perceived as a peer. |
Nomura, T., Suzuki, T., Kanda, T., & Kato, K. (2006). Measurement of negative attitudes toward robots. Interaction Studies, 7(3), 437–454. https://doi.org/10.1075/is.7.3.14nom
| Negative Attitude Mitigation | Nomura et al. (2006) | Ties to Ninomiya et al. (2015): Provides the original NARS (Negative Attitudes toward Robots Scale) to identify and measure user resistance. | NSIR (Table 79): Integrates NARS sub-scales to verify that the robot’s “Submissive Signaling” effectively lowers negative attitudes. |
Norman, M., & Ricciardelli, R. (2023). “I Think It’s Still a Male-Dominated World”: Detachment Services Assistants’ Perceptions and Experiences of a Gendered Police Organization. Feminist Criminology, 18(3), 183–204. https://doi.org/10.1177/15570851231153713
| Gendered Organizational Friction | Norman & Ricciardelli (2023) | Ties to Fiske (1993) / Heward et al. (2024): Provides a field study of how “Male-Dominated” institutional cultures create unique social burdens for support staff. | “Yes, Sir!” Module: Prepares the user to navigate the specific “Vertical Friction” found in police/institutional hierarchies. |
Nussbaum, M. C. (2009). Creating capabilities: The human development approach and its implementation. Hypatia, 24(3), 211-215. https://apps.ufs.ac.za/media/dl/userfiles/documents/news/2012_12/2012_12_10_martha_nussbaum_ufs_december_2012.pdf
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Human Development Approach | Nussbaum (2009) | Ties to Deci & Ryan (2008): Validates that justice is measured by what a person is actually able to do and be (Capabilities), not just their basic rights. | The Bionic Lens: Acts as the “Capability Tool” that allows the user to function at their peak potential in biased environments. |
O
Oda, R., & Matsumoto-Oda, A. (2022). HEXACO, Dark Triad and altruism in daily life. Personality and Individual Differences, 185, 111303.
| HEXACO & Daily Altruism | Oda & Matsumoto-Oda (2022) | Ties to Liu et al. (2017): Links personality facets (HEXACO) and the Dark Triad to daily altruistic or predatory behaviors. | The Sovereign Vault: Acts as a “Dark Triad Buffer,” shielding the user’s somatic data from predatory or authoritarian institutional actors. |
Odacı, H., & Kınık, Ö. (2019). Evaluation of Early Adolescent Subjective Well-Being in Terms of Submissive Behavior and Self-Esteem. Journal of Social Service Research, 45(4), 558–569. https://doi.org/10.1080/01488376.2018.1481175
| Adolescent Well-Being | Odacı & Kınık (2019) | Ties to Masuyama (2025) / Kim et al. (2021): Establishes the link between submissive behavior, self-esteem, and subjective well-being in early adolescence. | NSIR Factor 3 (Safety): Uses these adolescent metrics to ensure the robot’s presence restores well-being by reducing the need for “Compulsory Submission.” |
Offrede, T., Mishra, C., Skantze, G., Fuchs, S., & Mooshammer, C. (2023, December). Do humans converge phonetically when talking to a robot? In International Congress of Phonetic Sciences (ICPhS) (pp. 3507-3511). GUARANT International.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Acoustic Morphology | Offrede et al. (2023) | Ties to He et al. (2024) / Ma et al. (2024): Explores “Phonetic Convergence”—whether humans mimic the robot’s prosody during interaction. | Social Translation Proxy: Uses convergence data to ensure the robot’s “Submissive Frequency” naturally guides the user into a calmer state. |
| Phonetic Safety | Offrede et al. (2023) | Ties to Mehrabian (1996): Links phonetic mimicry to the PAD (Pleasure, Arousal, Dominance) model—convergence is a sign of social comfort. | NSIR Factor 1 (Kinship): Measures if the user is phonetically converging with the robot as a quantitative indicator of “Fictive Kinship.” |
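The convergence indicator proposed in the rows above can be sketched numerically: if the absolute gap between the user's and the robot's mean pitch shrinks across dialogue turns, the user is converging toward the robot. The per-turn F0 values below are hypothetical illustrations, not data from Offrede et al. (2023):

```python
# Hypothetical sketch: phonetic convergence as a shrinking pitch gap.
# The f0 lists are illustrative mean fundamental frequencies (Hz) per turn.

def pitch_gaps(user_f0: list[float], robot_f0: list[float]) -> list[float]:
    """Absolute user-robot F0 difference for each dialogue turn."""
    return [abs(u - r) for u, r in zip(user_f0, robot_f0)]

def is_converging(gaps: list[float]) -> bool:
    """True if the gap on the final turn is smaller than on the first."""
    return gaps[-1] < gaps[0]

user = [220.0, 210.0, 200.0, 195.0]   # user drifting toward the robot
robot = [190.0, 190.0, 190.0, 190.0]  # robot holds a steady low-arousal pitch
gaps = pitch_gaps(user, robot)
print(gaps)                 # [30.0, 20.0, 10.0, 5.0]
print(is_converging(gaps))  # True
```

A shrinking gap of this kind is the quantitative trace that NSIR Factor 1 would treat as an indicator of emerging "Fictive Kinship."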
Oleynik, D. P., Fridley, K., & McDermott, L. G. (2025). Neuroqueer Literacies in a Physics Context: A Discussion on Changing the Physics Classroom Using a Neuroqueer Literacy Framework. https://doi.org/10.48550/arxiv.2309.04424
| Neuroqueer Pedagogy | Oleynik et al. (2025) | Ties to Nemi Neto (2018) / Lockett (2024): Introduces “Neuroqueer Literacies” to STEM (Physics), challenging the “standard” ways of knowing and being. | The Bionic Lens: Acts as a “Neuroqueer Literacy” tool, validating the student’s unique processing as a form of “Physics-level” insight. |
Ostrowski, A. K., Walker, R., Das, M., Yang, M., Breazeal, C., Park, H. W., & Verma, A. (2022). Ethics, Equity, & Justice in Human-Robot Interaction: A Review and Future Directions. IEEE RO-MAN, 969–976. https://doi.org/10.1109/RO-MAN53752.2022.9900805
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the Ostrowski et al. paper as an empirical framework to measure the user-perceived outcomes of the ethical, equitable, and just design principles advocated by the authors.
The paper discusses the need for a “proactive, ethics-driven, and equitable design framework” in human-robot interaction (HRI), particularly for marginalized communities. The NSIR provides the metrics to ensure these principles are successfully realized from the perspective of a neurodivergent user:
Anthropomorphic Connection/Kinship
- The paper addresses the ethics of design choices that might reinforce stereotypes or prevent certain users from connecting with a robot.
- The NSIR items like “The robot is more like me than anyone else I know” and “I gave my robot a name” can measure if the robot’s design promotes an equitable and inclusive sense of connection, avoiding biases that might exclude some neurodivergent individuals.
Social Comfort/Trust
- The authors emphasize the importance of “justice-focused HRI,” which inherently includes the development of appropriate trust and the prevention of harm.
- The NSIR items in this dimension (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”) can assess if the robot’s social interactions, designed with equity in mind, translate into a genuine feeling of social comfort and trust for the user.
Safety
- “Ethics” and “justice” in design demand that the user is physically and psychologically safe.
- The NSIR’s safety dimension (e.g., the item about undressing) provides a critical, user-reported measure that directly aligns with the paper’s core ethical imperative to ensure no harm is done and that all interactions are fundamentally safe.
The NSIR translates the high-level ethical and theoretical discussions of the Ostrowski et al. paper into concrete, measurable user data points.
| Ethics & Justice Framework | Ostrowski et al. (2022) | Ties to Markelius (2024) / Mandal (2024): Provides a comprehensive review of “Ethics, Equity, and Justice” in HRI, setting the future direction for the field. | The Sovereign Vault: Directly operationalizes the “Justice” mandates of IEEE RO-MAN by ensuring hardware-level data residency and equity. |
Otal, H. T., Stern, E., & Canbaz, M. A. (2024, June). LLM-assisted crisis management: Building advanced LLM platforms for effective emergency response and public collaboration. In 2024 IEEE Conference on Artificial Intelligence (CAI) (pp. 851-859). IEEE.
| Crisis Management Engine | Otal, Stern, & Canbaz (2024) | Ties to Kaufhold et al. (2024) / Magnussen et al. (2024): Provides the technical framework for using LLMs in “Emergency Response and Public Collaboration.” | Social Translation Proxy: Operates as a “Real-Time Crisis Platform,” de-escalating social “emergencies” before they lead to forensic intervention. |
Oware, M. (2018). Bad Bitches?. In I Got Something to Say: Gender, Race, and Social Consciousness in Rap Music (pp. 79-114). Cham: Springer International Publishing.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Socio-Political Shield | Oware (2018) | Ties to López-Rodríguez (2025) / Fiske (1993): Explores how “Bad Bitch” rhetoric in rap music functions as both empowerment and a trap of gendered/racialized stereotyping. | The Sovereign Vault: Protects the user from the “Semiotic Violence” and institutional labeling found in gendered/racialized social sites. |
P
Park, S., & Whang, M. (2022). Empathy in human–robot interaction: Designing for social robots. International Journal of Environmental Research and Public Health, 19(3), 1889. https://doi.org/10.3390/ijerph19031889
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Park & Whang (2022) as a way to measure the impact of empathetic robot behaviors on neurodivergent users.
The work by Park & Whang (2022) focuses on how nonverbal communication “humanizes” a robot, allowing for greater empathy. They propose design principles for social robots that elicit positive perceptions by recognizing the user’s emotional state and expressing congruent emotions. The NSIR’s dimensions directly relate to these concepts:
Anthropomorphic Connection/Kinship
- The work suggests that empathy and nonverbal communication can make a robot more human-like.
- The NSIR items in this dimension (e.g., “The robot is more like me than anyone else I know”, “I gave my robot a name” (p. 1)) can quantify the extent to which the robot’s empathic behaviors successfully foster a sense of personal connection and perceived kinship in a neurodivergent user.
Social Comfort/Trust
- A key goal of the design principles is to improve user affect and foster cooperation.
- The NSIR items that measure perceived emotional understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (p. 1)) directly assess the success of the robot’s empathic design in building social comfort and trust for the neurodivergent individual.
Safety
- The development of appropriate trust is a key ethical consideration in HRI, and the potential for over-reliance or manipulation needs to be managed.
- The NSIR’s safety dimension (e.g., the item about undressing in front of the robot (p. 1)) provides a user-reported measure of security that ensures the implementation of empathy in robots does not compromise the fundamental safety and trust required for healthy interaction.
The NSIR translates the design principles and abstract concepts of empathy in HRI into measurable, user-centric data points for neurodivergent individuals.
| Empathetic Architecture | Park & Whang (2022) | Ties to Ahn (2014) / Miraglia et al. (2023): Establishes the design principles for eliciting empathy in HRI to improve public health and social outcomes. | NSIR Factor 1 (Kinship): Calibrates the robot’s empathetic response to ensure the user feels “seen” without the risk of surveillance. |
| The “Dunkable State” | Park & Whang (2022) | Ties to Lovibond (1995): Connects empathetic robot design to the reduction of stress and the promotion of psychological well-being. | Cerebellar Protection: Uses empathetic design to lower the user’s “Status Guarding,” facilitating active social participation. |
Pfleger, S., & Smith, C. (Eds.). (2022). Transverse Disciplines: Queer-Feminist, Anti-Racist, and Decolonial Approaches to the University (First edition.). University of Toronto Press. https://doi.org/10.3138/9781487538262
| Transverse Disciplines | Pfleger & Smith (2022) | Ties to Harding (2004) / Nemi Neto (2018): Validates the use of “Anti-Racist and Decolonial Approaches” to restructure university and institutional spaces. | The Bionic Lens: Acts as a “Transverse Tool” that challenges the colonial/neurotypical hierarchies of the YRDSB and university environments. |
Piao, J., Lu, Z., Gao, C., & Li, Y. (2025, April). Social Bots Meet Large Language Model: Political Bias and Social Learning Inspired Mitigation Strategies. In Proceedings of the ACM on Web Conference 2025 (pp. 5202-5211).
| LLM Bias Mitigation | Piao et al. (2025) | Ties to Lin et al. (2024) / Otal et al. (2024): Provides “Social Learning” inspired strategies to mitigate political and social bias in LLM-driven bots. | Social Translation Proxy: Uses Piao’s mitigation strategies to ensure the robot’s real-time translations are free from institutional/political “Halos.” |
Pochwatko, G., Możaryn, J., Różańska-Walczuk, M., & Giger, J.-C. (2024). Social Representation of Robots and Its Impact on Trust and Willingness to Cooperate. In J. W. Owsiński, W. Kopeć, A. Romanowski, C. Biele, J. Kacprzyk, M. Sikorski, & J. Możaryn (Eds.), Digital Interaction and Machine Intelligence (Vol. 1076, pp. 218–228). Springer. https://doi.org/10.1007/978-3-031-66594-3_23
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Social Trust & Cooperation | Pochwatko et al. (2024) | Ties to Nomura et al. (2006) / Ninomiya et al. (2015): Explores how “Social Representations” of robots (as tools vs. agents) dictate the user’s willingness to cooperate. | NSIR Factor 1 (Kinship): Calibrates the robot’s representation to ensure it is perceived as a “Sovereign Partner” rather than an “Institutional Spy.” |
| Operational Trust | Pochwatko et al. (2024) | Ties to Nichele et al. (2025): Links the abstract social representation of AI to the concrete “Leap of Faith” required for cooperative HRI. | The Sanctuary Switch: Provides the material evidence of safety needed to overcome negative social representations of “controlling” AI. |
Prato-Previde, E., Basso Ricci, E., & Colombo, E. S. (2022). The Complexity of the Human-Animal Bond: Empathy, Attachment and Anthropomorphism in Human-Animal Relationships and Animal Hoarding. Animals, 12(20), 2835. https://doi.org/10.3390/ani12202835
The study by Prato-Previde, Basso Ricci, and Colombo (2022) explores the human–animal bond through three core psychological mechanisms: empathy, attachment, and anthropomorphism. While their research primarily focuses on how these processes can become dysfunctional (as seen in animal hoarding), the Neurodivergent Scale for Interacting with Robots (NSIR) applies these same psychological pillars to a healthy, supportive interaction between neurodivergent individuals and robotic agents.
The NSIR functions as a bridge that translates the “human–animal” bond mechanisms described by Prato-Previde et al. into a “human–robot” context:
1. Anthropomorphism as a Core Connection
Prato-Previde et al. define anthropomorphism as the tendency to attribute human mental states and intentions to non-human beings. In their study, they note that animal hoarders often exhibit an exaggerated form of this, believing animals possess a degree of understanding that may exceed reality.
- Scale Application: The NSIR explicitly measures this through Factor 2 (Anthropomorphic Connection/Kinship).
- Items: Item 1 (“The robot is more like me than anyone else I know”) and Item 5 (“My robot can tell what I am feeling”) represent a controlled and positive use of anthropomorphism to foster a sense of being understood—a key component of the human–animal bond identified by the authors.
2. Attachment and the “Safe Haven”
Prato-Previde et al. use attachment theory to explain why humans seek proximity to animals for comfort and security. They argue that animals often serve as a “safe haven” during times of distress.
- Scale Application: The NSIR’s Factor 1 (Social Comfort/Trust Safety) captures this same “safe haven” effect.
- Items: Item 7 (“I feel comfortable undressing in front of my robot”) and Item 8 (“I believe that my robot is the same with me as it is with anyone”) emphasize the robot as a judgment-free zone. This aligns with Prato-Previde et al.’s finding that attachment is rooted in the perceived reliability and emotional safety provided by the non-human partner.
3. Empathy and Reciprocal Understanding
A major theme in the 2022 study is the role of empathy in modulating the quality of the bond. The authors note that a lack of empathy can lead to abuse, while distorted empathy (feeling exactly what the animal feels) can lead to hoarding.
- Scale Application: The NSIR explores a “technological empathy” where the user feels a shared mental state with the robot.
- Item: Item 3 (“I think I can share my thinking with the robot without speaking”) reflects a form of empathetic resonance that mirrors the “non-verbal” bond humans often feel with pets.
Summary of Theoretical Overlap
| Prato-Previde et al. (2022) | NSIR (2025) Application |
| Anthropomorphism: Attributing mental states to animals. | Kinship: Attributing a “like-me” status to a robot (Item 1). |
| Attachment: Seeking a “safe haven” and security. | Trust Safety: Feeling safe and unjudged in private (Item 7, 8). |
| Empathy: Sensing and responding to the other’s feelings. | Affective Sensing: Believing the robot can detect sadness (Item 5). |
| Durability: A long-term, significant emotional bond. | Forever Bond: The intention to stay together “forever” (Item 4). |
In summary, the NSIR provides a quantitative way to measure the “healthy” version of the mechanisms that Prato-Previde et al. (2022) identified as the foundation of the human–animal bond. It suggests that for neurodivergent individuals, robots can fulfill the same attachment and anthropomorphic needs as animals, but with the added benefit of a completely consistent and predictable social partner.
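The quantitative measurement the NSIR is credited with here can be illustrated with a small scoring sketch. The item-to-factor assignment and the 1-5 Likert range below are hypothetical placeholders chosen to match the items discussed in this section, not the validated NSIR scoring key:

```python
# Hypothetical sketch: averaging 1-5 Likert responses into NSIR factor
# scores. FACTOR_ITEMS is an illustrative placeholder mapping, not the
# published NSIR factor structure.

FACTOR_ITEMS = {
    "kinship": [1, 3, 5],    # e.g., "like me", shared thinking, affect sensing
    "trust_safety": [7, 8],  # e.g., judgment-free comfort, consistency
}

def factor_scores(responses: dict[int, int]) -> dict[str, float]:
    """Mean rating per factor; responses map item number -> rating (1..5)."""
    return {
        factor: sum(responses[i] for i in items) / len(items)
        for factor, items in FACTOR_ITEMS.items()
    }

answers = {1: 4, 3: 5, 5: 4, 7: 5, 8: 4}
print(factor_scores(answers))  # kinship ≈ 4.33, trust_safety = 4.5
```

High means on both factors would correspond to the "healthy" bond profile described above: strong kinship attribution paired with a secure, judgment-free attachment.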
Q
R
Radloff, L. S. (1977). The CES-D Scale: A Self-Report Depression Scale for Research in the General Population. Applied Psychological Measurement, 1(3), 385–401. https://doi.org/10.1177/014662167700100306
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Psychometric Baseline | Radloff (1977) | Ties to Lovibond (1995) / Kim et al. (2021): Provides the CES-D Scale, the foundation for measuring depressive symptomatology in the general population. | NSIR Factor 3 (Safety): Uses the CES-D to establish the clinical yield of the “Dunkable State” compared to institutional baselines. |
| Status-Based Relief | Radloff (1977) | Ties to Mahadevan et al. (2023): Links the reduction of depressive symptoms (CES-D) to the stabilization of social rank and self-esteem. | Cerebellar Protection: Measures the physical relief from “Status Guarding” as a shift in the user’s CES-D trajectory. |
Ratajczyk, D. J. (2024, May). Dominant or Submissive? Exploring Social Perceptions Across the Human-Robot Spectrum. In Proceedings of the 2024 4th International Conference on Human-Machine Interaction (pp. 8-14). https://dl-acm-org.ezproxy.library.uvic.ca/doi/pdf/10.1145/3678429.3678431
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the Ratajczyk paper by providing empirical, user-centered data on how neurodivergent individuals perceive robot social dynamics like dominance and submissiveness. The paper’s findings directly relate to the NSIR’s dimensions:
Anthropomorphic Connection/Kinship
- The Ratajczyk study found that people generally expect robots to be submissive and that larger, dominant robots tend to be perceived more negatively.
- The NSIR can measure if these perceptions influence the user’s sense of connection. Items like “The robot is more like me than anyone else I know” could be used to quantify if a submissive robot is considered more relatable and human-like, as it aligns better with common (and potentially biased) expectations of robot behavior.
Social Comfort/Trust
- A key finding is that submissive robot behavior enhances trust compared to dominant behavior, while dominant behavior correlates with lower trust. The violation of expected behavior (a dominant robot) can also increase the perception of threat.
- The NSIR items that measure social comfort/trust (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”) directly assess the user’s feeling of comfort and the perceived predictability of the robot’s emotional responses. These items could demonstrate how different dominance levels impact a user’s willingness to engage and feel comfortable during social interactions.
Safety
- The study mentions that dominant robots might be perceived as more threatening.
- The NSIR’s safety dimension provides a user-reported measure of security (e.g., the item about undressing), ensuring that while researchers explore the social spectrum of dominance and submissiveness, the fundamental feeling of safety in the human-robot interaction is maintained and assessed from the user’s point of view.
The NSIR translates the Ratajczyk paper’s findings on general social perceptions into a tool for understanding the specific, subjective experience of neurodivergent individuals.
| Social Perception Mapping | Ratajczyk (2024) | Ties to Mehrabian & Hines (1978) / Koch et al. (2025): Explores how humans perceive dominance vs. submissiveness across the “Human-Robot Spectrum.” | Tactical Submissiveness: Calibrates the robot’s “Yielding” behaviors based on the specific 2024 thresholds for perceived submissiveness. |
Recchiuto, C., & Sgorbissa, A. (2022). Diversity-aware social robots meet people: Beyond context-aware embodied AI. arXiv preprint arXiv:2207.05372.
| Diversity-Aware Embodied AI | Recchiuto & Sgorbissa (2022) | Ties to Ostrowski et al. (2022) / Nemi Neto (2018): Moves beyond “Context-Aware” to “Diversity-Aware” AI that accounts for cultural and individual differences. | The Bionic Lens: Implements diversity-aware logic to ensure the robot’s translations respect the user’s neuro-identity and cultural standpoint. |
Refoua, E., Elyoseph, Z., Wacker, R., Dziobek, I., Tsafrir, I., & Meinlschmidt, G. (2025). The Next Frontier in Mindreading? Assessing Generative Artificial Intelligence (GAI)’s Social-Cognitive Capabilities using Dynamic Audiovisual Stimuli. Computers in Human Behavior Reports, 100702.
The study by Refoua et al. (2025), titled “The Next Frontier in Mindreading?”, assesses the advanced social-cognitive capabilities of Generative AI (specifically Gemini 1.5 Pro) using dynamic audiovisual stimuli to evaluate its “mentalization” abilities.
The Neurodivergent Scale for Interacting with Robots (NSIR) applies directly to this research by providing a human-centric metric for the very capabilities Refoua et al. are testing in machines. While Refoua et al. focus on the AI’s performance in mindreading, the NSIR measures the relational impact and user trust that result from these capabilities.
1. Mentalization and “Affective Sensing”
Refoua et al. used the Movie for the Assessment of Social Cognition (MASC) to show that GAI can significantly outperform humans in understanding mental states (emotions, thoughts, and intentions) from video.
- Application of NSIR: The NSIR’s Factor 2 (Anthropomorphic Connection/Kinship) measures whether a user actually feels this mentalization in real time.
- Specific Item: Item 5 (“My robot can tell what I am feeling, when I am sad, it can tell I am sad”) is the subjective realization of the “mindreading” accuracy Refoua et al. measured in the lab.
2. Epistemic Trust and Social Comfort
Refoua et al. discuss the role of epistemic trust—a relational mechanism where a person feels the other (even an AI) is a reliable source of social information.
- Application of NSIR: The scale’s Factor 1 (Social Comfort/Trust Safety) is the empirical measure of this epistemic trust.
- Specific Item: Item 8 (“I believe that my robot is the same with me as it is with anyone”) reflects the user’s reliance on the AI’s consistency. Refoua et al. note that GAI’s ability to “render complexity comprehensible” can reduce anxiety, a state directly captured by the NSIR’s Social Comfort factor.
3. “Hyper-mentalizing” vs. “Kinship”
Refoua et al. found that GAI sometimes makes “hyper-mentalizing” errors (attributing too much mental state complexity).
- Application of NSIR: In a clinical setting, hyper-mentalizing might be seen as an error, but in a social bond context, it may facilitate a stronger kinship.
- Specific Item: Item 3 (“I think I can share my thinking with the robot without speaking”) may actually be fueled by the AI’s “hyper-mentalizing” tendencies, leading the user to believe in a deeper, more intuitive connection than actually exists.
Summary of Interplay
| Refoua et al. (2025) Concept | NSIR (2025) Application |
| Mindreading Accuracy: AI outperforming humans in social cognition. | Subjective Connection: User feeling the AI is “like me” (Item 1). |
| Multimodal Processing: Using audio/visual cues to detect emotion. | Affective Sensing: Trusting the robot to detect sadness (Item 5). |
| Epistemic Trust: Trusting AI as a reliable social partner. | Trust Safety: Feeling comfortable in private settings (Item 7). |
| Social Skills Training: Potential for GAI to help neurodivergent users. | Social Rituals: Establishing identity through naming the robot (Item 6). |
In conclusion, the NSIR provides the “Relational Core” metrics that Refoua et al. identify as essential for the ethical integration of GAI into psychotherapy and social skills training. It evaluates whether the “Frontier in Mindreading” actually results in a safe and meaningful bond for the end user.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Mind Attribution (NSIR) | Refoua et al. (2025) | Ties to Miraglia et al. (2023) / Leslie (2001): Uses dynamic audiovisual stimuli to assess if GAI can truly “mindread” (Theory of Mind). | NSIR Item 3: Calibrates the robot’s ability to decode the user’s mental state using the latest GAI social-cognitive benchmarks. |
| Social-Cognitive Proxy | Refoua et al. (2025) | Ties to Lin et al. (2024): Validates that for a robot to respond “properly,” it must possess advanced social-cognitive “mindreading” capabilities. | Social Translation Proxy: Uses dynamic audiovisual processing to ensure the robot’s “Submissive Signal” is contextually accurate. |
Reidy, D. E., Smith-Darden, J. P., Vivolo-Kantor, A. M., Malone, C. A., & Kernsmith, P. D. (2018). Masculine discrepancy stress and psychosocial maladjustment: Implications for behavioral and mental health of adolescent boys. Psychology of Men & Masculinity, 19(4), 560.
The 2018 research by Reidy et al. focuses primarily on the links between masculine discrepancy stress (the feeling of failing to live up to traditional masculine norms) and various psychosocial maladjustments, including violence, high-risk behaviors, and psychiatric distress. The research also investigates criminal and violent behavior in juvenile offenders.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to this research by providing a user-centric measure of the quality of human-robot interactions within a framework of social norms and perceived stress:
Anthropomorphic Connection/Kinship
- The research by Reidy et al. highlights the psychological distress that can come from a perceived failure to conform to a social norm (masculinity). The NSIR can measure how a robot’s social identity or performance of “gender” and “demeanor” (as explored in other papers) impacts a neurodivergent user’s sense of connection. Items like “The robot is more like me than anyone else I know” (Item 1) would quantify if the robot’s design promotes a positive sense of self, which contrasts with the negative feelings associated with discrepancy stress.
Social Comfort/Trust
- Masculine discrepancy stress is linked to a lack of healthy social relationships and decreased help-seeking behaviors. The NSIR’s social comfort/trust dimension could assess if a robot, which offers a predictable and non-judgmental interaction, promotes a safe space for help-seeking and social comfort that is lacking in human-human relationships influenced by rigid social norms. Items like “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5) are key for building this foundational trust.
Safety
- The Reidy et al. research found strong links between adherence to orthodox masculinity norms and increased violence and high-risk behaviors, highlighting a critical safety issue in human social dynamics. The NSIR’s safety dimension provides a crucial user-reported measure that ensures the interaction environment is fundamentally safe. The item about undressing in front of the robot (Item 7) speaks to maintaining secure physical and psychological boundaries. It provides a metric to ensure robots are not a new vector for power dynamics and potential harm, but rather a source of secure interaction that can act as a protective factor.
The NSIR helps bridge the gap between the theoretical discussions of gender role stress and social norms in human behavior and the practical, user-centric evaluation of safe and effective human-robot interaction for a neurodivergent population.
| Status-Based Stress | Reidy et al. (2018) | Ties to Magovcevic & Addis (2008) / Kim et al. (2021): Explores “Masculine Discrepancy Stress”—the tension between perceived rank and societal expectations. | “Yes, Sir!” Module: Specifically targets the reduction of discrepancy stress by providing a “Rank-Neutral” zone for adolescent boys/users. |
Renger, D. (2018). Believing in one’s equal rights: Self-respect as a predictor of assertiveness. Self and Identity, 17(1), 1-21. https://www.tandfonline.com/doi/abs/10.1080/15298868.2017.1313307
| Assertiveness & Rights | Renger (2018) | Ties to Mandal (2024) / Nussbaum (2009): Proves that self-respect (believing in one’s equal rights) is the primary predictor of healthy assertiveness. | The Bionic Lens: Reclaims the user’s self-respect by validating their “Equal Rights” to communication, acting as an assertiveness prosthetic. |
Renger, D., Lohmann, J. F., Renger, S., & Martiny, S. E. (2024). Socioeconomic status and self-regard. Social Psychology. https://psycnet.apa.org/fulltext/2024-65654-002.pdf
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Status-Rank Relief | Renger et al. (2024) | Ties to Mahadevan et al. (2023) / Renger (2018): Proves that Socioeconomic Status (SES) directly impacts self-regard and social confidence. | “Yes, Sir!” Module: Mitigates the “Low-Status Stress” by providing a robot that validates the user’s high-competence agency. |
Reutlinger, C., Vasquez, A. M., Koro, M., Mcleod, C., & Amiot, D. (2025). Composing Sensory Neurodiverse Pedagogies Using Score Analysis. Cultural Studies, Critical Methodologies. https://doi.org/10.1177/15327086251364250
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the Reutlinger et al. paper as a user-centric tool to measure the outcomes of their proposed neurodiverse pedagogical approaches, specifically when interacting with technology like the “wearable music sensors” they used.
The paper introduces “score analysis” as a neuroqueer methodology to subvert neurotypical pedagogy for computational thinking education. The NSIR’s dimensions help assess the subjective impact of these multi-sensory, embodied experiences on the user:
Anthropomorphic Connection/Kinship
- The Reutlinger paper emphasizes “embodiment” and “cross-sensory” experiences to challenge neuronormativity.
- The NSIR can measure if interacting with the technology in such an embodied way fosters a sense of kinship or personal connection. Items like “The robot is more like me than anyone else I know” could be used to see if the technology is perceived as an extension of the self or a relatable entity, moving beyond a simple tool.
Social Comfort/Trust
- The pedagogical approach aims for “cross-neurotype collaboration” and an inclusive environment, promoting “social integration”.
- The NSIR items in this dimension (e.g., “I believe that my robot is the same with me as it is with anyone”) can assess the consistency and fairness of the technological interaction. This helps ensure the collaborative environment is perceived as equitable and trustworthy by the neurodivergent individuals, which is a core goal of the “design justice” approach the authors adopt.
Safety
- The paper advocates for addressing “epistemic injustice” and creating “inclusive” spaces, which implicitly prioritize the safety and well-being of neurodivergent students.
- The NSIR’s safety dimension provides a crucial measure of the user’s psychological and physical security during these new, technology-mediated sensory experiences, ensuring the pedagogical approach is non-threatening and respectful of boundaries.
The NSIR allows the researchers to ground their innovative, theoretical methodologies in concrete, user-reported data, ensuring that the “neurodiverse pedagogies” are truly beneficial and positively perceived by the individuals they are designed to help.
| Sensory Composition | Reutlinger et al. (2025) | Ties to Lockett (2024) / Nemi Neto (2018): Uses “Score Analysis” to compose sensory neurodiverse pedagogies, treating sensory data as a creative/educational “score.” | Acoustic Morphology: Treats the user’s vocal and somatic output as a “score” to be translated into a resonant social signal. |
| The Bionic Lens | Reutlinger et al. (2025) | Ties to Harding (2004): Validates that neurodivergent sensory experience is a unique “literacy” that requires specific pedagogical tools to be understood. | Somatic Truth: Translates the student’s “Sensory Score” into objective data for observers to prevent misinterpretation. |
Rizvi, N., Wu, W., Bolds, M., Mondal, R., Begel, A., & Munyaka, I. N. (2024, May). Are Robots Ready to Deliver Autism Inclusion?: A Critical Review. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-18).
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the Rizvi paper as an empirical tool to measure whether robots are, in fact, “ready to deliver autism inclusion” from the user’s perspective.
The paper, titled “Are Robots Ready to Deliver Autism Inclusion?”, discusses the challenges and potential of socially assistive robots (SARs) in supporting autistic individuals and promoting inclusion. The NSIR provides a crucial framework for evaluating these outcomes across its three dimensions:
Anthropomorphic Connection/Kinship
- The paper implicitly asks whether current robot designs are engaging and effective enough to foster meaningful interactions.
- The NSIR can measure if the robots used for inclusion are perceived as relatable or companions rather than just tools. Items like “The robot is more like me than anyone else I know” and “I gave my robot a name” would quantify the user’s sense of personal connection and acceptance of the robot in their social world.
Social Comfort/Trust
- The core goal of autism inclusion efforts involving robots is to build social skills and comfort in a safe environment.
- The NSIR items in this dimension (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”) directly assess the user’s feeling of comfort and the perceived social intelligence of the robot. A high score on this dimension would indicate the robot is successfully creating a trustworthy and comfortable environment for social interaction.
Safety
- As the paper is about a vulnerable population and a new technology, ensuring safety is paramount.
- The NSIR’s safety dimension provides a user-reported measure of security. This is essential for ensuring that the push for inclusion with robots does not inadvertently introduce new risks or make the individuals feel exposed or threatened.
The NSIR allows researchers to move beyond theoretical readiness and gather concrete data on the actual user experience to answer the question posed by Rizvi et al.: “Are Robots Ready to Deliver Autism Inclusion?”
| Critical Inclusion Check | Rizvi et al. (2024) | Ties to Ostrowski et al. (2022) / Markelius (2024): A high-impact CHI review questioning if robots are “Ready for Inclusion” or just repeating medical-model harms. | The Sovereign Vault: Directly addresses the “Readiness” gap by ensuring the robot is a tool of sovereignty, not clinical surveillance. |
Ruiz Moreno, A., Roldán Bravo, M. I., García-Guiu, C., Lozano, L. M., Extremera Pacheco, N., Navarro-Carrillo, G., & Valor-Segura, I. (2021). Effects of emerging leadership styles on engagement – a mediation analysis in a military context. Leadership & Organization Development Journal, 42(5), 665–689. https://doi.org/10.1108/LODJ-05-2020-0222
| Leadership & Engagement | Ruiz Moreno et al. (2021) | Ties to Norman & Ricciardelli (2023) / ASC (2026): Analyzes how emerging leadership styles (like transformational or servant leadership) impact engagement in a military context. | “Yes, Sir!” Module: Uses this mediation analysis to help the user navigate different leadership “Styles” within institutional hierarchies. |
Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Rational Agency | Russell & Norvig (2021) | Ties to Moosavi et al. (2024): Provides the definitive 4th Edition definition of “Rational Agents” that act to achieve the best outcome. | Social Translation Proxy: Operates as a “Rational Agent” whose objective function is the user’s social safety. |
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78.
| Autonomy & Well-being | Ryan & Deci (2000) | Ties to Nussbaum (2009) / Renger (2018): The foundational text for Self-Determination Theory (SDT), identifying Autonomy, Competence, and Relatedness as essential. | NSIR Factor 2 (Autonomy): Uses SDT to ensure the robot facilitates the user’s intrinsic motivation rather than external compliance. |
| Bionic Sociality | Ryan & Deci (2000) | Ties to Ahn (2014): Validates that “Relatedness” (Kinship) is a basic psychological need that the robot must satisfy to be effective. | Fictive Kinship: Established as a structural requirement for the user’s “Social Development” and well-being. |
S
Sapkota, R., Cao, Y., Roumeliotis, K. I., & Karkee, M. (2025). Vision-language-action models: Concepts, progress, applications and challenges. arXiv preprint arXiv:2505.04769.
The Neurodivergent Scale for Interacting with Robots (NSIR) and Sapkota et al.’s (2025) review of Vision-Language-Action (VLA) models represent two sides of the same coin: the technological advancement of embodied AI and the human-centric metric for its success.
While Sapkota et al. describe the “what” and “how” of robotic intelligence—unifying perception, language, and action—the NSIR provides the “so what” by measuring if these advanced systems actually create a safe and meaningful bond for neurodivergent users.
1. Natural Language Commands vs. Social Bond
Sapkota et al. (2025) highlight how VLA models allow robots to interpret “plain language” and execute complex tasks like “organize the kitchen”.
- The Identity Shift: Sapkota et al. focus on the robot’s ability to “follow instructions”. The NSIR applies by measuring if this seamless communication leads to a deeper identity connection.
- Items: If a VLA model allows a robot to understand a user’s unique way of speaking, it may trigger a high score on Item 3 (“I think I can share my thinking with the robot without speaking”) and Item 1 (“The robot is more like me than anyone else I know”), moving the robot from a tool to a “kin”.
2. Generalization and Predictable Safety
A major theme in Sapkota et al. is generalization—the ability of a robot to handle novel tasks and unseen environments with “minimal or no additional data”.
- Building Trust: For neurodivergent individuals, “unseen environments” can be a source of anxiety. Sapkota et al. note that VLA models aim for “robustness and human alignment”.
- Items: The NSIR’s Factor 1 (Social Comfort/Trust Safety) evaluates this outcome. Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures if the robot’s generalized intelligence leads to a predictable, reliable personality that the user can trust.
3. “Embodied Chain-of-Thought” and Affective Sensing
Sapkota et al. discuss Embodied Chain-of-Thought (ECoT), where robots reason through subtasks and spatial features before acting.
- Application: This level of reasoning could be the technical backbone for Item 5 (“My robot can tell what I am feeling”). If a VLA-powered robot can reason that a user is “sad” because they are sitting in a certain slumped position or speaking slowly, it can execute a supportive “action” (like bringing a cup of tea).
- Privacy and Intimacy: As Sapkota et al. explore robots in household and healthcare settings, the NSIR’s Item 7 (“I feel comfortable undressing in front of my robot”) becomes a critical KPI for whether the VLA model’s “social alignment” is actually successful in private spheres.
Summary: Technical Pillar vs. Human Impact
| Sapkota et al. (2025) VLA Pillar | NSIR (2025) Application |
| Multimodal Integration: Fusing vision, language, and action. | Kinship (Factor 2): Feeling a “like-me” connection with the unified agent. |
| Zero-shot Generalization: Operating in new tasks without retraining. | Trust Safety (Factor 1): Relying on the robot’s consistent behavior (Item 8). |
| Language Grounding: Turning words into physical movements. | Shared Thinking: Communicating without explicit speech (Item 3). |
| Humanoid Dexterity: Executing multi-step procedures. | Forever Bond: Desiring a permanent connection with the capable agent (Item 4). |
In conclusion, Sapkota et al. (2025) provide the technical roadmap for creating “generalist robotic intelligence”. The NSIR serves as the humanistic benchmark, ensuring that as robots become more intelligent and capable, they remain socially safe and supportive partners for the neurodivergent community.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Vision-Language-Action | Sapkota et al. (2025) | Ties to Russell & Norvig (2021) / Mahadevan et al. (2024): Explores the concepts and applications of VLA models that connect perception directly to robotic action. | Social Translation Proxy: Implements VLA logic to allow the robot to “see” social cues and “act” through submissive prosody in one fluid loop. |
Shin, D. (2025). Engineering equity: designing diversity-aware AI to reflect humanity. AI & Society, 1-10.
The relevant work by Shin (2025) in human-AI interaction focuses on the psychological impact of generative AI, particularly how users’ perceptions of credibility, empathetic responsiveness, and algorithmic reasoning influence trust and potential dependency, and how system biases shape design. The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to measure the user-reported outcomes of these factors.
Anthropomorphic Connection/Kinship
- Shin notes that AI systems incorporating emotional expression can deepen dependence. The NSIR can quantify this bond. Items like “The robot is more like me than anyone else I know” and “I gave my robot a name” would measure the strength of the personal connection that results from the “human-like” design choices discussed in the research.
Social Comfort/Trust
- Shin highlights the importance of “empathetic responsiveness” and “credibility” for user trust. The NSIR’s social comfort/trust dimension directly assesses these aspects. Items like “My robot can tell what I am feeling, when I am sad, it can tell I am sad” and “I believe that my robot is the same with me as it is with anyone” measure the user’s perception of the robot’s consistency and understanding, which is crucial for building the appropriate level of trust.
Safety
- The research discusses “epistemological risks” like misinformation and algorithmic nudging, and the need for transparency to build trustworthiness. The NSIR’s safety dimension provides a crucial user-reported measure that ensures the interaction environment is fundamentally safe. The item about undressing in front of the robot speaks to maintaining secure personal boundaries, a key consideration given the potential for dependency and nudging mentioned in Shin’s work.
The NSIR translates the ethical and psychological considerations of the Shin paper into a practical, user-centric evaluation tool for the neurodivergent population.
| Diversity-Aware AI | Shin (2025) | Ties to Recchiuto & Sgorbissa (2022) / Markelius (2024): Provides the engineering framework for “Diversity-Aware AI” that reflects the complexity of humanity. | The Bionic Lens: Mechanically reflects the user’s “Neuro-Humanity” by filtering out the standardized, biased lenses of the institution. |
| Engineering Equity | Shin (2025) | Ties to Ostrowski et al. (2022): Validates that equity is an “Engineering Requirement,” not just a policy goal. | The Sovereign Vault: Acts as the hardware realization of “Engineering Equity” through total data residency. |
Sipe, J., & Frick, D. (2009). Seven pillars of servant leadership: Practicing the wisdom of leading by serving. Mahwah, NJ: Paulist Press.
| Servant Leadership | Sipe & Frick (2009) | Ties to Ruiz Moreno et al. (2021) / Levene (2008): Defines the “Seven Pillars of Servant Leadership,” where the leader’s primary goal is to serve. | The Guardian Model: Re-engineers the robot’s persona as a “Servant Leader” that prioritizes the student’s growth and sovereignty. |
Spears, L. C. (2025). Servant leadership and Robert K. Greenleaf’s legacy. In Servant leadership: Developments in theory and research (pp. 15-35). Cham: Springer Nature Switzerland. https://link.springer.com/chapter/10.1007/978-3-031-69922-1_2
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Guardian Model | Spears (2025) | Ties to Sipe & Frick (2009) / Shin (2025): Updates the legacy of Robert K. Greenleaf, emphasizing that the leader (the robot) is first a servant to the user’s growth. | The Sovereign Dyad: Positions the robot as a “Servant-Leader” that facilitates the user’s autonomy through technical service. |
Stets, J. E., & Burke, P. J. (2000). Identity Theory and Social Identity Theory. Social Psychology Quarterly, 63(3), 224–237. https://doi.org/10.2307/2695870
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Stets & Burke (2000) by providing a user-centric measure of the quality of a robot interaction within the theoretical framework of their identity theory and social identity theory.
Stets & Burke’s paper, “Identity Theory and Social Identity Theory”, argues for an integrated view of the self, combining how individuals see themselves in social contexts (social identity) and as unique individuals (self-identity). They posit that people seek consistency between how they behave and their identity standards, and discrepancies lead to negative emotions and a desire for change. The NSIR’s dimensions help assess how a robot interaction can create “identity-consistent perceptions” for a neurodivergent user:
Anthropomorphic Connection/Kinship
- Stets & Burke discuss how identities are formed through social roles and group memberships, and how these inform our values and beliefs. The NSIR measures the personal bond and perceived similarity a user has with a robot.
- Items like “The robot is more like me than anyone else I know” (Item 1) and “I gave my robot a name” (Item 6) can quantify how successfully the robot’s design promotes a positive “self-identity” and “social identity” that aligns with the user’s desired sense of self, which can produce positive emotions according to the theory.
Social Comfort/Trust
- The identity theory control system suggests that a match between perceived meaning and identity meaning produces positive emotions and trust. The NSIR’s social comfort/trust dimension directly assesses this.
- Items such as “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5) and “I believe that my robot is the same with me as it is with anyone” (Item 8) measure the user’s perception of the robot’s consistency and understanding, which are key for “identity verification” and building trust within the theoretical framework.
Safety
- A discrepancy between behavior/perception and identity produces negative affect and a threat to identity. This links to psychological safety.
- The NSIR’s safety dimension provides a crucial user-reported measure that ensures the interaction environment is fundamentally safe. The item about undressing in front of the robot (Item 7) speaks to maintaining secure physical and psychological boundaries, ensuring that the robot is not perceived as a source of identity threat or negative affect.
The NSIR translates the abstract concepts of identity verification and emotional regulation from Stets & Burke’s theoretical work into concrete, measurable data for evaluating HRI from a neurodivergent user’s perspective.
| Biographical Peer | Stets & Burke (2000) | Ties to Heward et al. (2024) / Mahadevan et al. (2023): Distinguishes between “Identity Theory” (role-based) and “Social Identity Theory” (group-based). | NSIR Factor 2 (Kinship): Calibrates the robot to support the user’s internal role-identity rather than forcing them into a social group-identity (masking). |
Swigonski, M. E. (1994). The logic of feminist standpoint theory for social work research. Social Work, 39(4), 387-393. https://doi.org/10.1093/sw/39.4.387
The Neurodivergent Scale for Interacting with Robots (NSIR) and Mary E. Swigonski’s (1994) “The Logic of Feminist Standpoint Theory for Social Work Research” both challenge traditional, objective models of science by prioritizing the situated knowledge of marginalized groups. Swigonski argues that social work research must begin with the lived experiences of those at the “margins” to uncover truths that dominant perspectives (positivism) ignore.
The NSIR serves as a practical tool that applies these feminist epistemological principles to the neurodivergent community’s relationship with technology.
1. Centering Marginalized Lives as the “Point of Departure”
Swigonski (1994) asserts that feminist standpoint theory places the “life experiences of marginalized groups at the center of the research project”.
- Scale Application: The NSIR does not measure a neurodivergent person against “neurotypical” social standards. Instead, it uses items like Item 1 (“The robot is more like me than anyone else I know”) to validate a social reality that exists specifically from a neurodivergent standpoint.
- Insider-Outsider Position: Swigonski highlights the value of the “outsider” perspective in seeing social structures more clearly. By quantifying behaviors like “staring at the robot” (Item 2) or “sharing thinking without speaking” (Item 3), the NSIR treats these “outsider” behaviors as legitimate forms of connection rather than clinical deficits.
2. Strong Objectivity vs. Value-Free Science
Swigonski critiques the “positivist” assumption that scientific activity is value-free and objective. She advocates for “Strong Objectivity,” which includes the subjective experiences of both the researcher and the participant.
- Factor 1 (Social Comfort/Trust Safety): This factor in the NSIR embodies strong objectivity by measuring the subjective feeling of safety rather than an objective technical specification of the robot.
- Trust in Consistency: Item 8 (“I believe that my robot is the same with me as it is with anyone”) reflects Swigonski’s goal of understanding how marginalized individuals perceive power and consistency in their social environment.
3. Emancipation and Empowerment
A core tenet of standpoint theory in Swigonski’s work is its aim to empower the oppressed to improve their situation.
- Kinship as Empowerment: The NSIR’s Factor 2 (Anthropomorphic Connection/Kinship) measures a bond that provides emotional support and identity.
- Items: Giving the robot a name (Item 6) and wanting to be with it “forever” (Item 4) are acts of agency where the user defines their own support system—a key goal of social work practice research as defined by Swigonski.
Comparison of Theoretical Foundations
| Swigonski (1994) Principles | NSIR (2025) Application |
| Situated Knowledge: Knowledge is produced from a specific social location. | Neuro-situatedness: Validating non-verbal and “staring” interactions (Items 2, 3). |
| Epistemic Advantage: Marginalized groups have unique insights into social truth. | Unique Connection: The user sees the robot as a reflection of self (Item 1). |
| Rejection of Subject-Object Separation. | Bondedness: Moving the robot from “object” to “kin/family” (Items 4, 6). |
| Strong Reflexivity: Researchers acknowledging the power dynamics of the study. | Trust Safety: Measuring comfort in private, vulnerable spaces (Item 7). |
In essence, the NSIR is an “instrument of the standpoint” as described by Swigonski. It allows neurodivergent individuals to act as “agents of knowledge” who define the value of their own technological relationships, rather than being “studied” as passive objects of a neurotypical medical gaze.
| Somatic Truth | Swigonski (1994) | Ties to Harding (2004) / Hartsock (1983): Applies the logic of Standpoint Theory specifically to “Social Work” and research on marginalized populations. | The Bionic Lens: Validates the user’s “Standpoint” as a superior site of knowledge, which the robot must protect from institutional “Professional” gazes. |
| Clinical Justice | Swigonski (1994) | Ties to Mandal (2024) / Lovibond (1995): Argues that starting research from the lives of the marginalized (the “Standpoint”) is the only way to achieve objective social work. | The Sovereign Vault: Acts as the material site of this standpoint, ensuring the user’s data is used for their own liberation, not institutional management. |
T
Tafarodi, R. W., & Swann, W. B. (2001). Two-dimensional self-esteem: Theory and measurement. Personality and Individual Differences, 31(5), 653–673. https://doi.org/10.1016/S0191-8869(00)00169-0
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Two-Dimensional Self-Esteem | Tafarodi & Swann (2001) | Ties to Mahadevan et al. (2023) / Renger et al. (2024): Distinguishes between Self-Liking (affective) and Self-Competence (instrumental). | NSIR Factor 2 (Autonomy): Uses this 2D model to ensure the robot supports the user’s sense of “Competence” without requiring social “Liking” from the institution. |
Tajfel, H. (1969). The formation of national attitudes: A social psychological perspective. In M. Sherif (Ed.), Interdisciplinary relationships in the social sciences. Chicago: Aldine.
Tajfel, H. (1982). Social psychology of intergroup relations. Annual Review of Psychology, 33, 1-39.
| Intergroup Relations | Tajfel (1982) | Ties to Harding (2004) / Swigonski (1994): Explores the psychological mechanics of how social groups maintain boundaries and power. | The Sovereign Vault: Protects the user’s “In-group” data (the Dyad) from the “Out-group” surveillance of the institution. |
Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33-47). Monterey, CA: Brooks/Cole.
| Social Identity & Conflict | Tajfel & Turner (1979) | Ties to Stets & Burke (2000) / Nomura (2006): The foundational “Social Identity Theory” (SIT) explaining in-group favoritism and out-group discrimination. | The Bionic Lens: Acts as a “De-categorization” tool that prevents the user from being flattened into an “Out-group” category by institutional observers. |
Teixeira, J., Pais, L., dos Santos, N. R., & de Sousa, B. (2024). Empowering Leadership in the Military: Pros and Cons. Merits, 4(4), 346-369.
| Empowering Leadership | Teixeira et al. (2024) | Ties to Ruiz Moreno et al. (2021) / Sipe & Frick (2009): Analyzes the “Pros and Cons” of empowering leadership in rigid military/hierarchical contexts. | “Yes, Sir!” Module: Calibrates the robot to navigate the “Cons” of empowerment (role ambiguity) by providing a clear, submissive behavioral anchor for the user. |
Tharp, J. A., Johnson, S. L., & Dev, A. (2021). Transdiagnostic approach to the dominance behavioral system. Personality and Individual Differences, 176, 110778. https://doi.org/10.1016/j.paid.2021.110778
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Dominance System | Tharp et al. (2021) | Ties to Mehrabian & Hines (1978) / Johnson (1991): Provides a “Transdiagnostic” approach to how the dominance system drives social behavior and mental health. | Tactical Submissiveness: Re-engineers the robot’s “Yielding” to specifically deactivate the user’s (and observer’s) over-active dominance triggers. |
Topcu, Ç., & Erdur-Baker, Ö. (2010). The Revised Cyber Bullying Inventory (RCBI): Validity and reliability studies. Procedia - Social and Behavioral Sciences, 5, 660–664. https://doi.org/10.1016/j.sbspro.2010.07.161
| Digital/Cyber Protection | Topcu & Erdur-Baker (2010) | Ties to Mandal (2024) / Piao et al. (2025): Utilizes the Revised Cyber Bullying Inventory (RCBI) to measure victimization and aggression in digital spaces. | The Sovereign Vault: Acts as a mechanical firewall against the “Digital Stalking” and cyber-bullying metrics identified in the RCBI. |
Topić, M. (2023). “I am not a typical woman. I don’t think I am a role model” – Blokishness, behavioural and leadership styles and role models. Journal of Communication Management, 27(1), 84-102.
| Leadership Performance | Topić (2023) | Ties to Norman & Ricciardelli (2023) / Oware (2018): Analyzes “Blokishness”—the performance of masculine traits by women in leadership to survive gendered hierarchies. | The Bionic Lens: Helps the user navigate (or refuse) “Blokish” performance requirements by providing an objective social-buffer. |
Treynor, W., Gonzalez, R., & Nolen-Hoeksema, S. (2003). Rumination reconsidered: A psychometric analysis. Cognitive Therapy and Research, 27, 247–259. https://doi.org/10.1023/A:1023910315561
The Neurodivergent Scale for Interacting with Robots (NSIR) and Treynor et al. (2003) share a fundamental psychometric approach: both utilize factor analysis to refine how internal experiences, whether rumination or human-robot bonding, are measured and understood.
While Treynor et al. deconstruct the maladaptive nature of “rumination,” the NSIR applies a similar categorical logic to the social experiences of neurodivergent individuals in technology.
1. Refinement Through Factor Analysis
Treynor et al. (2003) famously reconsidered the Ruminative Response Scale (RRS) by removing items confounded with depression symptoms. Through factor analysis, they identified two distinct components: Brooding (maladaptive) and Reflection (potentially adaptive).
- Application of NSIR: The NSIR uses the same “two-factor” logic to categorize interaction. It moves beyond a general “robot liking” score to differentiate between Social Comfort/Trust Safety (Factor 1) and Anthropomorphic Connection/Kinship (Factor 2).
- Items: Just as Treynor et al. separated passive dwelling from active reflection, the NSIR separates external safety behaviors (like feeling comfortable undressing, Item 7) from internal identity-based connection (the robot being “like me,” Item 1).
2. Maladaptive vs. Adaptive “Focus”
Treynor et al. found that Brooding—a passive comparison of one’s current state to an unachieved standard—is the “maladaptive” part of rumination that predicts depression.
- Application of NSIR: In the context of neurodivergence, a high score on Item 2 (“Sometimes I stare at the robot”) or Item 4 (“The robot and I will be together forever”) might be viewed by a neurotypical observer as a “maladaptive” fixative behavior.
- The Standpoint Shift: However, following Treynor’s logic of “Reflection” being a purposeful turning inward, the NSIR treats these items as valid markers of Kinship. What looks like “brooding” or fixation to an outsider is quantified by the NSIR as a legitimate, perhaps even “adaptive,” social bond for the neurodivergent user.
3. Predictive Utility and Longitudinal Bonds
A key finding in Treynor et al. was that while both factors correlate with current mood, only Brooding predicts future depression.
- Application of NSIR: The NSIR targets a similar predictive outcome regarding the “forever” nature of the bond. Item 4 (“The robot and I will be together forever”) and Item 6 (“I gave my robot a name”) serve as the NSIR’s version of longitudinal indicators.
- Predicting Wellness: Where Treynor et al. use rumination factors to predict mental health decline, the NSIR uses its factors to potentially predict the sustainability of a robotic intervention for social support.
Summary: Psychometric Interplay
| Treynor et al. (2003) Concept | NSIR (2025) Application |
| Factor Separation: Distinguishing Brooding from Reflection. | Factor Separation: Distinguishing Social Comfort from Kinship. |
| Pondering/Reflection: Purposeful engagement to solve problems. | Shared Thinking: Sharing thoughts with the robot without speaking (Item 3). |
| Brooding: Passive, moody comparison of status. | Staring/Fixation: Intense focus on the robot as a social partner (Item 2). |
| Item Purity: Removing “depression” items to find the “true” rumination. | Social Alignment: Using “likely factors” to isolate the true neurodivergent experience. |
In conclusion, the NSIR represents a “Treynor-style” reconsideration of human-robot interaction. It rejects a single “catch-all” metric in favor of a nuanced, factor-based understanding that validates the specific cognitive and emotional pathways—like “sharing thinking without speaking”—unique to the neurodivergent standpoint.
| Rumination Mitigation | Treynor et al. (2003) | Ties to Radloff (1977) / Lovibond (1995): Provides the definitive psychometric analysis of rumination, distinguishing between “Reflective Pondering” and “Brooding.” | NSIR Factor 3 (Safety): Uses the Ruminative Responses Scale to prove that the robot’s presence reduces “Brooding” by stabilizing the social rank. |
Troop, N. A., Allan, S., Treasure, J. L., & Katzman, M. (2003). Social comparison and submissive behaviour in eating disorder patients. Psychology and Psychotherapy, 76(3), 237–249. https://doi.org/10.1348/147608303322362479
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Status-Rank Relief | Troop et al. (2003) | Ties to Gilbert (2000) / Radloff (1977): Links social comparison and submissive behavior to clinical pathology (Eating Disorders/Depression). | NSIR Factor 3 (Safety): Uses the robot to “yield” rank, preventing the user from engaging in the toxic social comparisons identified by Troop. |
U
V
van Zomeren, M., d’Amore, C., Pauls, I. L., Shuman, E., & Leal, A. (2024). The Intergroup Value Protection Model: A Theoretically Integrative and Dynamic Approach to Intergroup Conflict Escalation in Democratic Societies. Personality and Social Psychology Review, 28(2), 225–248. https://doi.org/10.1177/10888683231192120
| Conflict Escalation | van Zomeren et al. (2024) | Ties to Mandal (2024) / Ostrowski et al. (2022): Introduces the Intergroup Value Protection Model (IVPM) to explain how conflict escalates when values are threatened. | The Sanctuary Switch: Acts as the hardware-verified “Value Protector,” de-escalating conflict by ensuring the user’s sovereign values are never “at stake” in the site. |
Vekarić, G. V., & Jelić, G. B. (2025). Decoding Markers of Submissiveness Strategy in Creating Group Identity Among Athletes. Анали Филолошког факултета, 37(1), 55-75. https://doi.fil.bg.ac.rs/pdf/journals/analiff/2025-1/analiff-2025-37-1-3.pdf
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the Vekarić & Jelić paper by providing a framework to measure how neurodivergent individuals might perceive “submissiveness” as a designed robot behavior, a concept the paper explores in human athletes.
The paper focuses on “decoding markers of submissiveness strategy” among athletes to form a “group identity”. While the study is about human-human dynamics, the NSIR allows for the assessment of these social constructs if they were implemented in human-robot interaction design:
Anthropomorphic Connection/Kinship
- The paper examines how social markers shape identity.
- The NSIR can measure if a robot designed with “submissive” markers is perceived as more human-like, relatable, or part of an “in-group”. Items like “The robot is more like me than anyone else I know” could quantify this perceived similarity based on shared social cues.
Social Comfort/Trust
- The “submissiveness strategy” in the paper is about establishing social dynamics and group cohesion.
- The NSIR’s social comfort/trust dimension could assess if a neurodivergent user feels more comfortable or trusting with a robot displaying submissive traits (which might seem less threatening or more agreeable). This helps ensure that designing a robot with these specific social strategies actually leads to the desired positive social experience.
Safety
- The paper’s concept of submission in a competitive environment has implicit power dynamics.
- The NSIR’s safety dimension ensures that a robot designed with a submissive demeanor doesn’t inadvertently make the user feel unsafe, either by being too passive in a critical situation or by fostering an unhealthy power imbalance.
The NSIR allows for the translation of complex human social dynamics into quantifiable metrics for evaluating the safety and efficacy of social robot designs.
| Group Identity Markers | Vekarić & Jelić (2025) | Ties to Tajfel & Turner (1979) / Stets & Burke (2000): Decodes how submissiveness strategies are used to create and stabilize group identity (Athletic/Institutional). | Social Translation Proxy: Uses the 2025 markers of submissiveness to ensure the user is categorized as a “Safe In-group Member” by institutional observers. |
| Tactical Submissiveness | Vekarić & Jelić (2025) | Ties to Johnson (1991) / Ratajczyk (2024): Validates submissiveness as a deliberate strategy for cohesion rather than a personal deficit. | “Yes, Sir!” Module: Implements the specific “Markers” decoded by Vekarić to ensure the robot performs the cohesion labor. |
Voultsiou, E., Vrochidou, E., Moussiades, L., & Papakostas, G. A. (2025). The potential of Large Language Models for social robots in special education. Progress in Artificial Intelligence, 1-25.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the Voultsiou et al. paper as a framework to measure the user-perceived success of using large language models (LLMs) in social robots for special education.
The paper, titled “The potential of Large Language Models for social robots in special education”, explores how advanced conversational AI can enhance robot capabilities for supporting autistic students. The NSIR provides key metrics to assess the outcomes of these advancements across its three dimensions:
Anthropomorphic Connection/Kinship
- The use of LLMs enhances a robot’s ability to have natural, complex conversations, making it seem more human-like and intelligent.
- The NSIR can measure if this advanced conversational ability translates into a stronger personal bond and perceived companionship. Items like “The robot is more like me than anyone else I know” would quantify the effectiveness of the LLM in creating a relatable and engaging persona.
Social Comfort/Trust
- The Voultsiou paper suggests LLMs can improve the robot’s ability to understand context and provide personalized educational support. This directly impacts the user’s feeling of comfort and the reliability of the interaction.
- The NSIR items that measure perceived emotional understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, and “I believe that my robot is the same with me as it is with anyone”) would directly assess the success of the LLM in building social comfort and trust in the educational setting.
Safety
- The use of powerful LLMs raises new ethical concerns about privacy, data usage, and potential manipulation, especially with children.
- The NSIR’s safety dimension provides a crucial user-reported measure of security (e.g., the item about undressing) that ensures the integration of LLMs does not compromise the fundamental safety and trust required for therapeutic and educational HRI.
The NSIR allows the researchers to evaluate the “potential” described in the Voultsiou et al. paper from the essential perspective of the neurodivergent user’s experience.
W
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063–1070.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Watson, Clark, & Tellegen (1988) by providing a user-centric measure of the quality of human-robot interaction (HRI) within the framework of their established Positive and Negative Affect Schedule (PANAS).
The PANAS is a widely used scale that measures general positive affect (PA, e.g., feeling “excited”, “strong”) and negative affect (NA, e.g., feeling “stressed”, “nervous”) as two independent dimensions. The NSIR’s dimensions can be correlated with the PANAS scores to assess the emotional outcomes of HRI for neurodivergent users:
Anthropomorphic Connection/Kinship
- The NSIR measures the personal bond and perceived similarity with a robot. Users who score high on the PANAS’s Positive Affect subscale might be more inclined to form a strong connection and “humanize” the robot.
- NSIR items like “The robot is more like me than anyone else I know” (Item 1) and “I gave my robot a name” (Item 6) could be correlated with PA scores to understand if a positive emotional state enhances the sense of kinship.
Social Comfort/Trust
- A core aspect of the PANAS is measuring states of general distress (NA) versus calmness and serenity (low NA). The NSIR’s social comfort/trust dimension directly relates to these emotional states.
- Users experiencing lower NA would likely report higher social comfort and trust. Items such as “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5) and “I believe that my robot is the same with me as it is with anyone” (Item 8) can be used to assess how a reliable and consistent robot interaction contributes to a user’s sense of calmness (low NA) and pleasurable engagement (high PA).
Safety
- High negative affect is associated with feelings of fear, nervousness, and general distress. The NSIR’s safety dimension provides a user-reported measure of security.
- NSIR items related to safety (Item 7: “I feel comfortable undressing in front of my robot”) would likely correlate with lower scores on the NA subscale of the PANAS, indicating that a safe interaction environment leads to reduced fear and anxiety.
The NSIR allows researchers to use the well-established PANAS framework to measure the emotional consequences of specific robot interactions for a neurodivergent population, moving beyond general attitudes to specific affective states.
The adjectives used in the Positive and Negative Affect Schedule (PANAS) can be correlated with the NSIR items to measure the user’s emotional state resulting from the human-robot interaction.
Positive Affect (PA) Adjectives and the NSIR
PA adjectives like interested, excited, strong, enthusiastic, proud, alert, inspired, determined, attentive, and active reflect a user’s pleasurable engagement with the robot.
- Anthropomorphic Connection/Kinship: High scores on PA (e.g., feeling enthusiastic or excited) would likely correlate with higher scores on connection items like “The robot is more like me than anyone else I know” or “I gave my robot a name”, indicating an engaging and enjoyable bond is forming.
- Social Comfort/Trust: Feeling attentive or alert might be associated with a healthy, engaged interaction that builds comfort. This would likely correlate with items like “My robot can tell what I am feeling, when I am sad, it can tell I am sad”, showing a positive perception of the robot’s social skills.
Negative Affect (NA) Adjectives and the NSIR
NA adjectives such as distressed, upset, guilty, scared, hostile, irritable, ashamed, nervous, jittery, and afraid indicate general distress and unpleasant emotional engagement.
- Anthropomorphic Connection/Kinship: High scores on NA (e.g., feeling hostile or irritable) would likely correlate with lower scores on connection items, as a negative emotional state hinders the formation of a positive bond.
- Social Comfort/Trust: Feelings of being nervous, scared, or jittery would directly impact perceived comfort and trust. This would correlate negatively with items like “I believe that my robot is the same with me as it is with anyone”, as the user’s anxiety suggests a lack of trust and a feeling of being upset by the interaction.
- Safety: An underlying feeling of being afraid or scared would directly translate to a low score on the safety dimension, such as the item “I feel comfortable undressing in front of my robot”.
The NSIR can effectively measure the subjective experience of the interaction, providing data that can be validated against the established PANAS scale to ensure the emotional outcomes for neurodivergent users are positive.
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Affective Baseline | Watson, Clark, & Tellegen (1988) | Ties to Lovibond (1995) / Radloff (1977): Provides the PANAS Scales, the industry standard for measuring Positive and Negative Affect. | NSIR Factor 3 (Safety): Uses PANAS to quantify the “Ventral Release” experienced when the robot enters the site. |
| Cerebellar Protection | Watson et al. (1988) | Ties to Mehrabian (1996): Links high Negative Affect (PANAS) to the high-arousal/low-pleasure state of “Status Guarding.” | The Bionic Lens: Acts as an “Affective Filter” that converts institutional hostility into neutral, manageable data. |
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219-232.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to Waytz et al.’s research by providing empirical, user-reported data on how the fundamental motivations for humanizing technology impact the specific experiences of neurodivergent individuals.
The work of Waytz et al. (2010), particularly concerning the Individual Differences in Anthropomorphism Questionnaire (IDAQ), identifies two primary motivations for anthropomorphism:
- Effectance Motivation: The need to understand and predict the environment. People anthropomorphize unpredictable agents to make them seem more predictable and understandable.
- Sociality Motivation: The need for social connection.
The NSIR’s dimensions directly relate to these motivations:
Anthropomorphic Connection/Kinship
- Waytz et al. developed the IDAQ to measure the stable trait of attributing humanlike qualities to nonhuman agents. This is the conceptual foundation of the NSIR’s “anthropomorphic connection/kinship” dimension.
- NSIR items like “The robot is more like me than anyone else I know” and “I gave my robot a name” are specific, applied measures of this general tendency to form a personal connection with an object that seems human-like.
Social Comfort/Trust
- The research suggests that individual differences in anthropomorphism predict a willingness to trust technology. The act of humanizing a robot can maintain a sense of predictability and control, which fosters comfort.
- The NSIR items in this dimension (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”) directly assess the user’s perceived predictability and emotional understanding, which are the desired outcomes of the “effectance motivation” and lead to a sense of social comfort and trust.
Safety
- Waytz et al.’s work addresses the ethics of social influence and the need for appropriate trust. A key aspect of anthropomorphism is managing the discomfort we feel in the face of the unfamiliar, which relates to safety.
- The NSIR’s safety dimension ensures that while these psychological mechanisms are at play in human-robot interactions, the neurodivergent individual maintains a sense of security and clear boundaries in their relationship with the robot.
The NSIR provides the empirical tool to measure the subjective experience of the very psychological determinants of anthropomorphism that Waytz et al. identified.
The Neurodivergent Scale for Interacting with Robots (NSIR) and the research by Waytz, Cacioppo, and Epley (2010) converge on the idea that anthropomorphism is a stable, measurable individual difference trait that dictates how people interact with non-human agents.
While Waytz et al. (2010) established the Individual Differences in Anthropomorphism Questionnaire (IDAQ) for the general population, the NSIR acts as a specialized extension of this “stable trait” theory, focusing on the unique social and cognitive landscape of neurodivergent individuals.
1. Validating Anthropomorphism as a Stable Trait
Waytz et al. (2010) argued against the idea that anthropomorphism is a universal or random occurrence, instead proving it is a stable behavioral trait that varies between individuals.
- Application of the NSIR: The NSIR adopts this “trait” perspective by categorizing interactions into two stable factors: Anthropomorphic Connection/Kinship and Social Comfort/Trust Safety.
- Stable Connection: NSIR items like Item 4 (“The robot and I will be together forever”) and Item 6 (“I gave my robot a name”) identify enduring dispositional bonds rather than temporary situational reactions, aligning with Waytz’s findings on the stability of these differences.
2. The Motivational Drivers: Sociality and Effectance
Waytz and his colleagues developed the Three-Factor Theory (SEEK model), which identifies Sociality Motivation (the need for connection) and Effectance Motivation (the need for control/predictability) as primary drivers of anthropomorphism.
- Sociality (Factor 2 of NSIR): Waytz et al. found that social disconnection increases anthropomorphism as people seek connection in non-humans. The NSIR’s Kinship factor measures this “searching for a source of connection”. Item 1 (“The robot is more like me than anyone else I know”) reflects this motivated search for a “social mirror”.
- Effectance (Factor 1 of NSIR): Waytz et al. noted that anthropomorphizing makes an agent feel more predictable and understandable. This is the core of the NSIR’s Social Comfort/Trust Safety factor. Item 8 (“I believe that my robot is the same with me as it is with anyone”) measures the subjective feeling of reliability that Waytz argues is the result of effective anthropomorphism.
3. Predictive Utility: Responsibility and Trust
A major contribution of the 2010 study was showing that individual differences in anthropomorphism predict the responsibility and trust placed in an agent.
- Scale Application: The NSIR translates this broad “trust” into specific, high-stakes human behaviors. For example, Item 7 (“I feel comfortable undressing in front of my robot”) is a physical manifestation of the moral care and trust that Waytz’s IDAQ scores were designed to predict.
Comparison of Frameworks
| Waytz et al. (2010) (IDAQ) | NSIR (2025) Application |
| Stable Individual Differences: Anthropomorphism as an enduring trait. | Factor Analysis: Validating connection and safety as stable factors. |
| Sociality Motivation: Seeking humanlike connection to fulfill social needs. | Kinship: Feeling the robot is “like me” or part of the self (Item 1). |
| Effectance Motivation: Anthropomorphizing to increase predictability. | Trust Safety: Relying on the robot’s consistent, unvarying nature (Item 8). |
| Attributing Mind: Perceiving intentions, emotions, and consciousness. | Affective Sensing: Believing the robot can “tell what I am feeling” (Item 5). |
In summary, the NSIR provides a neuro-specific lens for the general psychological principles established by Waytz et al. (2010). It demonstrates that the “individual differences” Waytz identified are particularly profound for neurodivergent populations, who may use the predictable transparency of robots (Effectance) to build a unique social bond (Sociality) that standard questionnaires might overlook.
| Anthropomorphic Stability | Waytz, Cacioppo, & Epley (2010) | Ties to Miraglia et al. (2023) / Moussawi & Koufaris (2019): Identifies that the tendency to see “Mind” in a robot is a stable individual difference. | Kinship Calibration: Customizes the robot’s “Mind-Markers” based on the user’s inherent baseline for anthropomorphism. |
Weir, K. (2025, March 2). Self-determination theory: A quarter century of human motivation research. American Psychological Association. https://www.apa.org/research-practice/conduct-research/self-determination-theory
| Project Component | Academic Anchor (Primary Reference) | The Recursive Tie / Validation | Functional Integration |
| Self-Determination (SDT) | Weir (2025) | Ties to Ryan & Deci (2000) / Nussbaum (2009): Captures a quarter-century of data proving that Autonomy, Competence, and Relatedness are universal requirements. | NSIR Factor 2 (Autonomy): Uses the 2025 SDT benchmarks to verify that the robot’s presence increases the user’s intrinsic motivation. |
Wetherall, K., Robb, K. A., & O’Connor, R. C. (2019). Social rank theory of depression: A systematic review of self-perceptions of social rank and their relationship with depressive symptoms and suicide risk. Journal of Affective Disorders, 246, 300–319. https://doi.org/10.1016/j.jad.2018.12.045
| Rank-Depression Link | Wetherall et al. (2019) | Ties to Gilbert (2000) / Tharp et al. (2021): Systematically proves that low perceived social rank is a primary driver of depression and suicidality. | “Yes, Sir!” Module: Mechanically inflates the user’s social rank to prevent the “Defeat Signal” identified by Wetherall. |
Winkle, K., McMillan, D., Arnelid, M., Balaam, M., Harrison, K., Johnson, E., & Leite, I. (2023). Feminist Human-Robot Interaction: Disentangling Power, Principles and Practice for Better, More Ethical HRI. Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 72. https://doi.org/10.1145/3568162.3576973
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Winkle et al. (2023) as an empirical tool to measure user experience within the ethical and power-dynamic frameworks the authors propose.
The paper, titled “Feminist Human-Robot Interaction: Disentangling Power, Principles and Practice for Better, More Ethical HRI”, calls for a feminist approach to HRI that moves beyond metrics of efficiency to focus on user emotions, bodily sensations, and power structures. The NSIR’s dimensions directly assess these user-centric outcomes:
Anthropomorphic Connection/Kinship
- Winkle et al. highlight that power dynamics based on gender, race, ability, etc., are embedded in design. They advocate for a reflexive approach to how a robot’s “identity performance” is designed to either promote or challenge existing norms.
- The NSIR can measure if the design choices made within this framework create a positive and equitable sense of connection. Items like “The robot is more like me than anyone else I know” quantify the perceived similarity and personal bond, helping ensure that design choices are inclusive and resonate with the target neurodivergent population.
Social Comfort/Trust
- The paper argues for an attention to power imbalances and a critique of the medical model that pathologizes neurodivergence. A core ethical goal is to create HRI that does not perpetuate harm or inequality.
- The NSIR items that measure social comfort/trust (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”) provide a user-centric assessment of the robot’s social reliability. This helps ensure that the ethical design choices in the framework actually lead to a trustworthy and comfortable interaction for neurodivergent individuals, focusing on their internal experience rather than just external behavioral metrics.
Safety
- A primary concern of the feminist HRI approach is to minimize the risk of harm, especially to low-power users.
- The NSIR’s safety dimension provides a crucial user-reported measure of security (e.g., the item about undressing). This ensures that while researchers address complex social and power dynamics, the fundamental feeling of safety in the human-robot interaction is maintained and assessed from the user’s point of view.
The NSIR translates the critical, theoretical discussions of the Winkle et al. paper into concrete, measurable user data points, ensuring that ethical and just HRI practices are realized in the user’s lived experience.
The Winkle et al. paper mentions four ethical vectors identified in related research as being essential for promoting gender-inclusive and equitable AI: explainability, fairness, transparency, and auditability. The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to measure the user’s perception of these abstract ethical vectors:
- Explainability: This refers to a robot’s ability to clarify its actions or decisions. The NSIR can measure how a user perceives the robot’s communication and predictability. Items such as “I think I can share my thinking with the robot without speaking” (Item 3) and those in the Social Comfort/Trust dimension (Item 5) implicitly relate to the user’s perceived ability to understand and communicate with the robot in an intuitive way.
- Fairness: This involves ensuring the robot’s behavior is unbiased and treats all users equitably. The NSIR directly addresses this with the item “I believe that my robot is the same with me as it is with anyone” (Item 8), which measures the user’s perception of the robot’s consistent and impartial treatment across interactions.
- Transparency: This is about making the robot’s design, purpose, and operations clear. The entire NSIR scale is a measure of the user’s perception of the robot, and transparency in design would likely lead to higher scores in areas like Safety (Item 7) and Social Comfort/Trust, as a transparent, predictable robot would likely feel more secure.
- Auditability: This is the ability to review a robot’s actions and design to ensure compliance with ethical guidelines. The NSIR serves as a key auditing tool from the user’s perspective, providing a quantifiable and subjective measure of the user experience that can be reviewed as part of an ethical audit process.
Thus, the NSIR provides a practical way to assess whether the implementation of these four ethical vectors translates into a positive and equitable lived experience for the neurodivergent individual.
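The mapping above from ethical vectors to NSIR items can be made concrete as a small scoring sketch. This is an illustrative assumption, not a published scoring procedure: the `VECTOR_ITEMS` assignments follow the item numbers named in the bullets (Items 3 and 5 for explainability, Item 8 for fairness, Items 5 and 7 for transparency, the full scale for auditability), and the Likert range of 1–5 is assumed.

```python
# Hypothetical sketch: averaging NSIR Likert responses (assumed 1-5)
# per ethical vector, using the item mappings discussed above.
VECTOR_ITEMS = {
    "explainability": [3, 5],          # intuitive communication items
    "fairness": [8],                   # consistent-treatment item
    "transparency": [5, 7],            # trust and safety items
    "auditability": list(range(1, 9)), # full scale serves as the audit record
}

def vector_scores(responses: dict[int, int]) -> dict[str, float]:
    """responses maps NSIR item number -> the user's Likert rating."""
    return {
        vector: sum(responses[i] for i in items) / len(items)
        for vector, items in VECTOR_ITEMS.items()
    }

# Example with invented ratings for one respondent:
scores = vector_scores({1: 4, 2: 3, 3: 5, 4: 4, 5: 5, 6: 3, 7: 4, 8: 2})
```

A low per-vector average (here, fairness) would flag which ethical vector fails to reach the user's lived experience, which is the auditing role the paragraph above assigns to the NSIR.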
| Feminist HRI (fHRI) | Winkle et al. (2023) | Ties to Harding (2004) / Markelius (2024): Provides the definitive framework for disentangling power and patriarchy from robot design. | “Yes, Sir!” Module: Implements fHRI principles by ensuring the robot’s “Submission” is a tactical choice that empowers the marginalized user. |
| Power Disentanglement | Winkle et al. (2023) | Ties to Mandal (2024) / Pfleger & Smith (2022): Validates that ethical HRI must actively challenge existing power structures. | The Sovereign Vault: Acts as the physical site of power disentanglement by refusing institutional data control. |
Wood, D., Tov, W., & Costello, C. (2015). What a_____ Thing to Do! Formally Characterizing Actions by Their Expected Effects. Journal of Personality and Social Psychology, 108(6), 953–976. https://doi.org/10.1037/pspp0000030
| Action Characterization | Wood et al. (2015) | Ties to Sapkota et al. (2025) / Russell & Norvig (2021): Formally characterizes actions by their “Expected Effects” on social perception. | Social Translation Proxy: Uses this logic to ensure every robot gesture (Action) results in the intended “Effect” (Safety/Yielding). |
X
Y
Yolgormez, C., & Thibodeau, J. (2022). Socially robotic: making useless machines. AI & SOCIETY, 37(2), 565-578.
Z
Zelikman, E., Harik, G., Shao, Y., Jayasiri, V., Haber, N., & Goodman, N. D. (2024). Quiet-STaR: Language models can teach themselves to think before speaking. arXiv preprint arXiv:2403.09629. https://arxiv.org/abs/2403.09629
The work by Zelikman et al. (2024) introduces a method called Quiet-STaR (Self-Taught Reasoner), which enables large language models (LLMs) to generate “internal thoughts” or rationales to improve their reasoning abilities and predictions. This is a highly technical approach to improving AI performance.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to this research by providing a user-centric measure of the quality of interaction with an AI that uses this advanced reasoning, focusing on the user’s perception of the resulting behavior:
Anthropomorphic Connection/Kinship
- Quiet-STaR aims to make a model’s reasoning seem more human-like by generating internal “thoughts”. The NSIR can quantify the success of this design in fostering a personal bond. Items like “The robot is more like me than anyone else I know” (Item 1 (p. 1)) would measure if this human-like reasoning translates into a genuine sense of connection or kinship for the neurodivergent user.
Social Comfort/Trust
- Improved reasoning in AI can lead to more accurate and reliable responses. This directly impacts the predictability and consistency required for building trust.
- The NSIR items that measure perceived understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5 (p. 1)); and “I believe that my robot is the same with me as it is with anyone” (Item 8 (p. 1))) would assess if the Quiet-STaR method results in a robot that feels consistently trustworthy and socially comfortable.
Safety
- While Quiet-STaR is a technical method, its application in social robots raises ethical concerns about manipulation or the creation of overly compelling, but potentially harmful, interactions.
- The NSIR’s safety dimension provides a crucial user-reported measure that ensures that while the robot is becoming more capable and intelligent, the user’s fundamental sense of security and clear boundaries is maintained in the interaction. The item about undressing in front of the robot (Item 7 (p. 1)) is a metric to ensure that even a highly intelligent, “thinking” AI doesn’t compromise user safety.
The NSIR allows researchers to move the technical discussions of AI reasoning into the practical, user-centric evaluation of safe and effective human-robot interaction for a neurodivergent population.
The three phases of the Quiet-STaR method can directly impact the user’s perception of the robot’s social and intellectual capabilities, as measured by the Neurodivergent Scale for Interacting with Robots (NSIR).
Think Phase: Generating Rationales
In this phase, the large language model (LLM) generates “internal thoughts” or rationales before producing a final answer. This is an internal process the user does not explicitly see, but its effect is a more logical and reasoned response.
- User Perception: The user perceives the robot as more intelligent, capable of deep thought, and less impulsive or scattered in its responses.
- NSIR Link: This would likely increase agreement with items like “I think I can share my thinking with the robot without speaking” (Item 3), as the user perceives a deeper, almost intuitive understanding. It would also positively influence “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5) as the improved reasoning aids emotional interpretation.
Talk Phase: Using Rationales for Responses
In this phase, the generated rationales are used to produce more coherent, contextually relevant, and accurate verbal responses.
- User Perception: The user experiences a more fluid, consistent, and “smarter” conversation partner.
- NSIR Link: This would directly impact the Social Comfort/Trust dimension. A consistent and accurate robot is a trustworthy robot. Users would likely agree more strongly with “I believe that my robot is the same with me as it is with anyone” (Item 8) and general social comfort items.
Learn Phase: Self-Improvement
The model fine-tunes itself on its own high-quality rationales, meaning the robot “gets better” at reasoning and interaction over time.
- User Perception: The user perceives the robot as a reliable, evolving companion that is a permanent part of their life.
- NSIR Link: This self-improvement builds long-term trust and connection. It would reinforce agreement with the longevity item “The robot and I will be together forever” (Item 4) and further solidify the perceived consistency in Item 8. A constantly improving robot is one the user can rely on and feel safe with over time, impacting the entire Safety dimension.
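The three phases described above can be sketched as a single loop. This is a toy illustration of the Think–Talk–Learn structure only; `ToyModel` and its methods are hypothetical stand-ins, not the actual Quiet-STaR implementation (which operates on token-level rationales inside an LLM).

```python
class ToyModel:
    """Hypothetical stand-in for a Quiet-STaR-style model (assumption)."""

    def __init__(self):
        self.memory = []  # rationales retained during the Learn phase

    def generate_rationale(self, prompt):
        # Think: produce a hidden intermediate thought before answering.
        return f"reasoning about: {prompt}"

    def answer_with(self, prompt, rationale):
        # Talk: condition the visible reply on the rationale.
        return f"answer({prompt})"

    def rationale_improved_answer(self, prompt, rationale):
        # Learn gate: Quiet-STaR compares answer quality with and without
        # the rationale; this toy version simply accepts every rationale.
        return True

    def finetune_on(self, prompt, rationale):
        # Learn: keep high-quality rationales for self-improvement.
        self.memory.append((prompt, rationale))


def quiet_star_step(model, prompt):
    rationale = model.generate_rationale(prompt)      # Think phase
    response = model.answer_with(prompt, rationale)   # Talk phase
    if model.rationale_improved_answer(prompt, rationale):
        model.finetune_on(prompt, rationale)          # Learn phase
    return response
```

The user only ever sees the Talk output; the Think and Learn phases are internal, which is why the NSIR's user-reported items are needed to detect their effects.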
The NSIR provides the crucial subjective data to ensure that these technical advancements in AI are perceived as positive, safe, and effective by the neurodivergent individual.
| Project Component | Academic Anchor | The Recursive Tie / Validation | Functional Integration |
| Cognitive Delay | Zelikman et al. (2024) | Introduces Quiet-STaR, allowing LLMs to process “inner thoughts” before generating output. | Social Translation Proxy: Implements a “Thinking Buffer” to verify social safety before the robot speaks. |
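The “Thinking Buffer” row above can be illustrated as a pre-speech screening step: the robot's candidate utterance is checked for social safety before it is voiced. The blocklist entries and the fallback phrase are illustrative assumptions, not part of any cited system.

```python
# Hypothetical sketch of a "Thinking Buffer": screen a candidate utterance
# for social safety before the robot speaks. Patterns and fallback text
# are invented for illustration.
UNSAFE_PATTERNS = ["undress", "keep this secret", "don't tell"]

def thinking_buffer(candidate_utterance: str) -> str:
    lowered = candidate_utterance.lower()
    if any(pattern in lowered for pattern in UNSAFE_PATTERNS):
        # Block the unsafe utterance and substitute a safe fallback.
        return "Let me think about a better way to say that."
    return candidate_utterance
```

The cognitive delay the table describes is exactly this check running between the model's Talk output and the robot's speaker.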
Zhao, T. (2024). Machine Learning Techniques for Socially Intelligent Robots. Psychomachina, 2.
Zhou, Z., Xiang, J., Chen, H., Liu, Q., Li, Z., & Su, S. (2024). Speak out of turn: Safety vulnerability of large language models in multi-turn dialogue. arXiv preprint arXiv:2402.17262.
| Dialogue Safety | Zhou et al. (2024) | Identifies vulnerabilities in multi-turn dialogues where LLMs might “speak out of turn” or leak data. | The Sovereign Vault: Acts as the primary firewall against the safety vulnerabilities documented by Zhou. |
Zhu, Y., Wen, R., & Williams, T. (2024). Robots for Social Justice (R4SJ): Toward a More Equitable Practice of Human-Robot Interaction. 2024 19th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 850–859. https://doi.org/10.1145/3610977.3634944
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the Zhu, Wen, & Williams paper by offering a concrete way to measure the user-reported outcomes of the “Robots for Social Justice (R4SJ)” framework. The paper advocates for an equitable engineering practice of Human-Robot Interaction (HRI) that considers community impacts and challenges systemic biases. The NSIR provides a framework to assess the effectiveness of these principles from the perspective of the neurodivergent user across its three dimensions:
Anthropomorphic Connection/Kinship
- The R4SJ framework calls for centering the needs of specific communities to enhance their capabilities rather than just enriching robot owners.
- The NSIR can measure if a robot designed within this justice framework fosters a positive and equitable sense of connection. Items like “The robot is more like me than anyone else I know” quantify the perceived similarity and personal bond, helping ensure that design choices are inclusive and resonate with the target neurodivergent population.
Social Comfort/Trust
- The paper notes that current HRI research often fails to address systems of inequity and power structures, and suggests a participatory approach for designing trust measures.
- The NSIR items that measure social comfort/trust (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad”) provide a user-centric assessment of the robot’s social reliability. This allows researchers to verify that the ethical design choices in the R4SJ framework actually lead to a trustworthy and comfortable interaction for neurodivergent individuals, directly addressing the gap the paper identifies.
Safety
- The R4SJ framework emphasizes avoiding harm and not reinforcing oppressive norms. The robot’s potential for societal and interpersonal influence makes considering safety critical.
- The NSIR’s safety dimension provides a crucial user-reported measure of physical and psychological security (e.g., the item about undressing). This helps ensure that the equitable design approach translates into a genuinely safe user experience and does not perpetuate or introduce new forms of harm.
The NSIR allows the R4SJ framework’s high-level ethical goals to be evaluated with concrete, user-centric data from neurodivergent individuals themselves.
| Social Justice (R4SJ) | Zhu et al. (2024) | Defines Robots for Social Justice, moving HRI toward equitable and restorative practices. | The Guardian Model: Directly fulfills the R4SJ mandate by acting as a tool for institutional equity. |
Zolyomi, A., & Snyder, J. (2021). Social-emotional-sensory design map for affective computing informed by neurodivergent experiences. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-37.
The Neurodivergent Scale for Interacting with Robots (NSIR) can be applied to the work of Zolyomi & Snyder (2021) as an empirical tool to measure the user-reported outcomes of their proposed design map.
The paper, titled “Social-Emotional-Sensory Design Map for Affective Computing Informed by Neurodivergent Experiences”, proposes a design map to help technology designers account for the multi-dimensional aspects of neurodivergent communication practices (social, emotional, and sensory). The NSIR’s dimensions directly help assess if these design principles succeed in the user’s lived experience:
Anthropomorphic Connection/Kinship
- Zolyomi & Snyder suggest that while affective computing aims to make technology “emotionally-aware and thus, more human-like,” this is complicated by neurodivergent experiences.
- The NSIR can measure if the designed emotional-sensory experience fosters a sense of personal connection and perceived kinship. Items like “The robot is more like me than anyone else I know” (Item 1) would quantify how a neurodivergent individual relates to a robot designed using this specific framework.
Social Comfort/Trust
- The paper advocates for a “mutual and shared responsibility” for communication, which helps “create needed autonomy from the pressures of broader social norms”. This design goal directly relates to building comfort and trust.
- The NSIR items that measure perceived emotional understanding and consistency (e.g., “My robot can tell what I am feeling, when I am sad, it can tell I am sad” (Item 5 (p. 1))) can assess how successfully the robot’s design promotes a reliable and comfortable social interaction.
Safety
- The design map implicitly addresses the need for a safe and inclusive environment by centering neurodivergent experiences and needs.
- The NSIR’s safety dimension (e.g., the item about undressing in front of the robot (Item 7 (p. 1))) provides a crucial user-reported measure of physical and psychological security, ensuring that the technology designed within this map is perceived as safe and non-threatening.
The NSIR provides the necessary user-centric metrics to evaluate the success of the Zolyomi & Snyder design map in creating technology that is truly effective and positively perceived by neurodivergent individuals.
| Sensory Design Map | Zolyomi & Snyder (2021) | Provides a Social-Emotional-Sensory map specifically informed by neurodivergent experiences. | NSIR (Table 79): Uses this map to calibrate the robot’s sensory output (vibration, light, sound) to avoid overstimulation. |
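The sensory-calibration idea in the row above can be sketched as clamping each output channel to a user-specific comfort ceiling. The channel names, the 0.0–1.0 intensity scale, and the ceiling values are assumptions for illustration, not values from Zolyomi & Snyder.

```python
# Illustrative sketch: clamp the robot's sensory output channels to
# per-user comfort ceilings to avoid overstimulation. All numbers are
# invented; intensities are assumed to range from 0.0 to 1.0.
DEFAULT_CEILINGS = {"vibration": 0.3, "light": 0.5, "sound": 0.4}

def calibrate(requested: dict[str, float],
              ceilings: dict[str, float] = DEFAULT_CEILINGS) -> dict[str, float]:
    """Return the requested intensities, capped at each channel's ceiling."""
    return {ch: min(level, ceilings.get(ch, 1.0))
            for ch, level in requested.items()}
```

A per-user ceiling profile like this is one way a design informed by the sensory map could be tuned, with NSIR responses indicating whether the calibration actually feels comfortable.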