Neurodivergent Interaction Scale (NIS): A Heuristic Evaluation Tool
NSIR Item # Academic Reference Matrix
Detailed Project Budget: Biological HRI Social Exoskeleton Pilot
Theoretical Framework for Somatic Subversion
Specific ND Scales for HRI and Their Psychological Logic
Authentic Meaning-Making (Factor 1) and Clinical Masking/Hiding (Factor 2)
The use of a Neurodivergent (ND) scale in Human-Robot Interaction (HRI)
The “Kinship Mandate” Logic Script
“Applying the Neurodivergent Scale for Interacting with Robots (NSIR) to the Biological HRI Social Exoskeleton provides a psychometric framework to measure the success of the Hartley & Dubuque (2023) ‘Apprentice-to-Partner’ evolution. While the ‘Slave’ archetype focuses on the robot’s obedience, the NSIR measures the ‘Queer Kinship’ and ‘Trust Safety’ that emerge when the robot matures into a biographical partner” (Google, 2025).
When discussing how these elements come together in the collaborative work, the synthesis can be described as follows:
“This project adapts the Hartley & Dubuque (2023) ‘Apprentice-to-Partner’ trajectory to Human-Robot Interaction, measuring its success in fostering ‘Queer Kinship’ through the Neurodivergent Scale for Interacting with Robots (NSIR) (Sadownik, 2025).”
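As a rough illustration of how the NSIR could quantify this trajectory, the sketch below aggregates Likert-style item responses into the two factors named above (Authentic Meaning-Making and Clinical Masking/Hiding). The item numbers, factor assignments, and reverse-keyed items are placeholders for illustration only, not the published NSIR item map.

```python
from statistics import mean

# Hypothetical item-to-factor map; the actual NSIR item assignments
# are not reproduced here.
FACTORS = {
    "authentic_meaning_making": [1, 2, 3, 4],
    "clinical_masking_hiding": [5, 6, 7, 8],
}
REVERSE_KEYED = {6, 8}  # assumed reverse-scored items on a 1-5 Likert scale

def score_nsir(responses):
    """Return the mean 1-5 score per factor from {item_number: rating}."""
    scores = {}
    for factor, items in FACTORS.items():
        keyed = [6 - responses[i] if i in REVERSE_KEYED else responses[i]
                 for i in items]
        scores[factor] = mean(keyed)
    return scores

# Example: one participant's post-interaction ratings.
print(score_nsir({1: 5, 2: 4, 3: 5, 4: 4, 5: 2, 6: 4, 7: 1, 8: 5}))
```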
Education
Conceptualization
Socialization is the number one priority for a majority of parents when speaking to teachers about their neurodivergent child. Many teachers spend considerable time pairing neurodivergent (ND) children with neurotypical (NT) peers in the classroom to support social comparison. The success of a Sonic card as a motivator for participating in unpreferred tasks, however, signals the potential of a social robot that the child prefers because of its kinship potential (ages 9-14).
Overview of the De-escalation Sequence
The “defiance” seen in autism is often actually a fight for intellectual autonomy and truth.
“canary in the coal mine” for missed autism in females
Drinking water as a physiological circuit breaker
A Gemini environment that is enabling rather than disabling
Medical Model intersect the “truth” of human meaning-making with the belief that neurotypical counsellors
Shift from “Medical Model” to “Emancipatory Technology”
Appendix: State Machine Diagram for the Sovereign Reboot Protocol
Summary Table of Test Outcomes
professional “lived experience” empathy
“Safety” in Ontario School Boards
The Military Mirror: Monotropism vs. Camouflaging
Further
The Biological HRI Social Exoskeleton
Furstenberg, F. F. (2020). Kinship reconsidered: Research on a neglected topic. Journal of Marriage and Family, 82(1), 364-382. https://doi.org/10.1111/jomf.12628
Ma, Y., & Li, J. (2024). How humanlike is enough?: Uncover the underlying mechanism of virtual influencer endorsement. Computers in Human Behavior: Artificial Humans, 2(1), 100037.
Prato-Previde, E., Basso Ricci, E., & Colombo, E. S. (2022). The complexity of the human–animal bond: Empathy, attachment and anthropomorphism in human–animal relationships and animal hoarding. Animals, 12(20), 2835.
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219-232.
How Vision-Language-Action (VLA) models operate for neuronormative compared to neurodivergent users

Sapkota, R., Cao, Y., Roumeliotis, K. I., & Karkee, M. (2025). Vision-language-action models: Concepts, progress, applications and challenges. arXiv preprint arXiv:2505.04769.
The decision to try to make a neuronormative VLA model more inclusive is typically framed as inclusion. My research and model suggest that, as in autism, a different operating system is needed: one that draws on object relations theory to offer a sliding bar of recognition for how far machines are humanized.
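One way to read the “sliding bar of recognition” idea is as a user-adjustable dial that controls how much humanizing behavior a robot exposes. The sketch below is a hypothetical illustration of that reading; the threshold values and behavior names are assumptions, not part of any existing VLA model or of the NSIR.

```python
from dataclasses import dataclass

@dataclass
class RecognitionProfile:
    use_first_person: bool          # robot refers to itself as "I"
    express_preferences: bool       # robot voices likes and dislikes
    reference_shared_history: bool  # robot recalls past interactions (kinship)

def profile_from_slider(recognition: float) -> RecognitionProfile:
    """Map a 0-1 recognition setting to the humanizing behaviors enabled."""
    recognition = max(0.0, min(1.0, recognition))
    return RecognitionProfile(
        use_first_person=recognition >= 0.25,
        express_preferences=recognition >= 0.50,
        reference_shared_history=recognition >= 0.75,
    )

# A user who wants a partner-like robot might set the bar high:
print(profile_from_slider(0.8))
```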
Initially, Table 63 was used for concept mapping of the articles based on the topics of:
- Cognitive Psych
- Social Robotics
- Pedagogy
- Feminism
However, following the creation of the neurodivergent scale, a new system of concept mapping re-sorted the articles as follows:
Alternative categories and related concepts for the terms provided include:
Anthropomorphic Connection / Kinship
- Mind attribution / Mental state attribution: Attributing human-like mental capacities such as thinking, feeling, perceiving, and desiring to non-human entities.
- Empathy: The human tendency to feel with the robot and to put oneself in the robot’s “shoes,” often triggered by anthropomorphic design.
- Attachment theory: Developing emotional bonds and connections with AI systems or social robots, similar to human-human attachments.
- Social presence: The feeling that an artificial agent is a social entity and that the interaction is a social one.
- Psychological kinship / Fictive kinship: The “familial” or kin-like treatment of unrelated others or non-human entities based on perceived similarity or affinity.
- Humanization: A process related to but distinct from anthropomorphism, involving attributing human qualities or form to other entities.
Social Comfort/Trust
- Social acceptance / User acceptance: The willingness of individuals to use and interact with robots in social contexts.
- Perceived sociability: The extent to which a robot is seen as social, friendly, or a potential companion.
- Interpersonal warmth: A quality attributed to robots that makes them seem more human-like and thus more accepted by users.
- Social integration / Belonging: The feeling of being part of a social connection that meets psychological and interpersonal needs.
- Reliable functioning / Competence: A robot’s ability to perform tasks consistently and effectively, which builds trust and confidence in its capabilities.
- Willingness to cooperate: The extent to which humans are inclined to work with a robot, a consequence of trust and positive interaction.
Safety
- Perceived security: The user’s sense of safety and reduced vulnerability when interacting with an agent.
- Ethical implications / Moral value: Considerations of ethical design and whether a robot is perceived as a moral agent deserving of care or rights.
- Risk-regulation model: The framework used to understand how people manage perceived risks in social connections to feel secure and protected.
- Vulnerability: The inverse of safety; an intention to accept vulnerability is a core component of trust.
- Appropriate trust / Over-reliance: Ensuring a balanced level of trust in a system to avoid both disuse (too little trust) and misuse (too much trust, which can compromise safety); see the sketch after this list.
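As a purely illustrative aid for the appropriate-trust item above, the sketch below labels a user’s reliance as calibrated, over-relied, or under-relied by comparing self-reported trust against measured system reliability. The 0.15 tolerance band is an assumed value, not drawn from the source material.

```python
def trust_calibration(user_trust: float, system_reliability: float,
                      tolerance: float = 0.15) -> str:
    """Both inputs on a 0-1 scale; returns a coarse calibration label."""
    gap = user_trust - system_reliability
    if gap > tolerance:
        return "over-reliance (misuse risk)"
    if gap < -tolerance:
        return "under-reliance (disuse risk)"
    return "appropriate trust"

print(trust_calibration(user_trust=0.9, system_reliability=0.6))  # over-reliance
```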
The following mapping re-aligns the core articles into the new conceptual categories based on their primary research focus and outcomes; a short sketch encoding this mapping follows the lists below.
1. Anthropomorphic Connection / Kinship
These articles explore how human-like design (physical or vocal) influences how we project identity onto robots or form emotional/cognitive bonds with them.
- Dennler et al. (2025): Focuses on how design modalities like voice and clothing directly establish a robot’s perceived gender identity and social role.
- Bjornsdottir et al. (2024): Investigates how stereotypic facial features drive perceptions of social class, a core component of human-to-non-human trait projection [Table 63].
- Ratajczyk (2024): Explores social perceptions of dominance vs. submissiveness across the human-robot spectrum, assessing how human traits are mapped onto machines [Table 63].
- Nomura et al. (2006) / Pochwatko et al. (2015): The Negative Attitude Toward Robots Scale (NARS) measures the psychological resistance or friction people feel when robots mimic human characteristics too closely [Table 63].
2. Social Comfort / Trust
These articles analyze the conditions under which humans feel comfortable cooperating with, relying on, or accepting robots in shared environments.
- Pochwatko et al. (2024): Examines how societal representations of robots determine human trust and their ultimate willingness to cooperate in organizational tasks.
- Koch et al. (2025): Investigates consumer responses to dominance patterns (assertive vs. submissive) in voice-based service encounters and how these affect user comfort and trust [Table 63].
- Maj et al. (2024): Studies how children perceive and respond to assertive behavior in robots, focusing on the social dynamics of the interaction [Table 63].
- Broadbent et al. (2009): Looks at the preferences of retirement home staff and residents for healthcare robots, focusing on the acceptance and comfort level of vulnerable populations [Table 63].
3. Safety (Ethical & Physical)
These articles prioritize the ethical frameworks, justice, and protection mechanisms necessary for safe and equitable Human-Robot Interaction (HRI).
- Winkle et al. (2023): Proposes a Feminist HRI framework to disentangle power structures, ensuring interactions are ethical and do not propagate harmful social hierarchies.
- Zhu et al. (2024): Introduces Robots for Social Justice (R4SJ), which focuses on equity and the protection of marginalized groups within the HRI space [Table 63].
- Ostrowski et al. (2022): Addresses ethics, equity, and justice, examining how robotic systems can be designed to avoid systemic biases and ensure user safety [Table 63].
- Balle (2022): Explores the moral status of robots and empathic responses, discussing the ethical safety of creating “moral” agents [Table 63].
- Bandura et al. (1996) / Gini et al. (2014): While originating in psychology, these works on moral disengagement are critical for understanding how safety boundaries might be ignored in technological or social systems [Table 63].
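For reference, the sketch below encodes the re-sorted mapping above as a simple lookup structure; the helper name and representation are assumptions, chosen only to make the re-sorting concrete.

```python
# The re-sorted concept map above, encoded as a simple lookup.
CONCEPT_MAP = {
    "Anthropomorphic Connection / Kinship": [
        "Dennler et al. (2025)", "Bjornsdottir et al. (2024)",
        "Ratajczyk (2024)", "Nomura et al. (2006)", "Pochwatko et al. (2015)",
    ],
    "Social Comfort / Trust": [
        "Pochwatko et al. (2024)", "Koch et al. (2025)",
        "Maj et al. (2024)", "Broadbent et al. (2009)",
    ],
    "Safety (Ethical & Physical)": [
        "Winkle et al. (2023)", "Zhu et al. (2024)", "Ostrowski et al. (2022)",
        "Balle (2022)", "Bandura et al. (1996)", "Gini et al. (2014)",
    ],
}

def category_of(article: str) -> str | None:
    """Return the category an article was re-sorted into, if any."""
    for category, articles in CONCEPT_MAP.items():
        if article in articles:
            return category
    return None

print(category_of("Winkle et al. (2023)"))  # Safety (Ethical & Physical)
```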
A few Google searches from this thread led to:
Google DeepMind’s AutoRT autism – Google Scholar
This came from asking the question: how is google ai different from a robot with large language models?
and asking
if google doesnt reference you in google ai does it mean you dont fit the parameters or the wordsmith – Google Search
and (related to memories of Biology classification) before that I asked:
what is the difference between anthropormorphic connection / kinship and social comfort/trust safety
Confirmation of receipt (#NAPN-DPMLQ8)
12/23/2025 10:57:07 Eastern Standard Time
Copyright