
Concept Mapping and Gallery

Enter the Gallery

https://share.google/aimode/5943k8OnOiTIOzYHV

https://share.google/aimode/eS6rAaIAqCKbb3kHQ

https://share.google/aimode/nGEMFtJc8i8NTVot0

https://share.google/aimode/V2j2etzPhF0oegSTe

https://share.google/aimode/5RaSRQzCuFL0SXWu8

https://share.google/aimode/vj8bIWQK3Y2RnQNrw

On December 26, 2025, I gave this table to Google AI and dropped my reference list into the search engine; it truncated the list and provided me with the responses linked in the gallery above.

Based on the data uploaded to Google AI, titled Table 63 – Updated Concept Mapping Framework, Google AI performed one synthesis of the Abbo et al. (2025) article:

https://share.google/aimode/c98TOjPu3Nu8wwSCh

https://share.google/aimode/vj8bIWQK3Y2RnQNrw

1. Anthropomorphic Connection / Kinship

Focus: Mental state attribution, empathy, attachment, and humanization.

  • Abbo et al. (2025): “Can you be my mum?”: Manipulating Social Robots in the Large Language Models Era. (Explores attachment theory and fictive kinship via LLMs).
  • Ahn (2014): Designing of a Personality Based Emotional Decision Model… (Directly relates to mind attribution by giving robots “emotions” and “personality”).
  • Bartneck et al. (2009): Measurement instruments for anthropomorphism, animacy… (Godspeed Scale). (The foundational tool for measuring humanization and animacy).
  • Bagheri et al. (2021): A reinforcement learning based cognitive empathy framework… (Aligns with the empathy category/robot-human emotional mirroring).
  • Brandizzi (2024): Focuses on Theory of Mind in language modeling, which is the technical basis for mind attribution.
  • De Carolis et al. (2024): Explores the role of empathy in designing robots for the elderly.
  • Dennler et al. (2025): (As noted in your framework) uses design modalities like clothing and voice to establish social presence and gender.
  • Dökmen (1988): Provides a foundational model for measuring empathy, critical for triggering anthropomorphic responses.
  • Graham (2025): Developing Empathy in Social Robots. (Directly maps to the Empathy category).
  • Kang et al. (2024): Nadine: A large language model-driven intelligent social robot with affective capabilities and human-like memory. (Relates to Mind attribution and the simulation of human-like mental states).
  • Gargano et al. (2022): Preliminary personality model for social robots… (Focuses on creating a “personality” to trigger humanization and trait projection).
  • Kappas & Gratch (2023): Promises and challenges for the intersection of affective science and robotics/AI. (Discusses the emotional mechanics of Anthropomorphism).
  • Leslie (2001): Theory of Mind. (The foundational psychological basis for Mind attribution).
  • Miraglia et al. (2023): Development and validation of the Attribution of Mental States Questionnaire (AMS-Q). (Directly provides a reference tool for the Mind attribution category).
  • Ma & Li (2024): How humanlike is enough? (Investigates the mechanisms of humanization in virtual agents).
  • Mahadevan et al. (2024): Generative expressive robot behaviors using large language models. (Focuses on humanization through expressive, lifelike movement and behavior).
  • Moussawi & Koufaris (2019): Perceived intelligence and perceived anthropomorphism… (Focuses on Mind attribution and the user’s perception of machine agency).
  • Lomas et al. (2022): Resonance as a design strategy for AI and social robots. (Uses resonance to trigger empathy and emotional connection).
  • Park & Whang (2022): Empathy in human–robot interaction: Designing for social robots. (Directly addresses the Empathy category and how design triggers it).
  • Refoua et al. (2025): Assessing Generative AI’s Social-Cognitive Capabilities… (Evaluates the current state of Mind attribution and social cognition in 2025 models).
  • Prato-Previde et al. (2022): The Complexity of the Human-Animal Bond… (Provides the cross-species theoretical basis for Attachment theory and Anthropomorphism).
  • Russell & Norvig (2021): Artificial Intelligence: A Modern Approach. (Provides the technical foundations for agents that simulate human-like mental capacities).
  • Waytz et al. (2010): Who sees human? The stability and importance of individual differences in anthropomorphism. (The core theoretical text for understanding why certain users engage in mind attribution more than others).
  • Watson et al. (1988): The PANAS scales. (A primary tool for measuring the empathy and emotional response/affect triggered by anthropomorphic design).
  • Zelikman et al. (2024): Quiet-star: Language models can teach themselves to think before speaking. (Directly impacts Mind attribution; by simulating an internal “thought process,” these models enhance the perception of a human-like mental state).
  • Zhao (2024): Machine Learning Techniques for Socially Intelligent Robots. (Focuses on the technical methods used to create humanization and social intelligence in agents).
  • Yolgormez & Thibodeau (2022): Socially robotic: making useless machines. (Explores the philosophical and creative side of anthropomorphism, looking at how even “useless” machines can trigger social connection).

2. Social Comfort / Trust

Focus: User acceptance, sociability, warmth, and reliable functioning.

  • Arora et al. (2024): Managing social-educational robotics for students with ASD… (Relates to social integration and the specific user acceptance of vulnerable populations).
  • Andriella et al. (2022): Introducing CARESSER: A framework for in situ learning robot social assistance… (Focuses on competence and reliable functioning in assistive social contexts).
  • Atuhurra (2024): Leveraging large language models in HRI: A critical analysis of potential and pitfalls. (Addresses willingness to cooperate and the boundaries of trust).
  • Anglim & O’Connor (2019): Measurement and research using the Big Five, HEXACO… (Relevant for measuring interpersonal warmth and personality traits in HRI).
  • Broadbent et al. (2009): (In your framework) measures user acceptance and comfort levels among retirement home residents.
  • Büttner et al. (2023): Investigates persuasion techniques (door-in-the-face), which directly affects willingness to cooperate and social dynamics.
  • Čaić et al. (2019): Analyzes the value of social robots from a social cognition perspective, focusing on interpersonal warmth and service success.
  • Dan (2025): Focuses on competence and reliable functioning in music education using speech sensing.
  • Esteban-Lozano et al. (2024): Discusses the use of LLMs in the robot “Mini” to enhance perceived sociability and conversational flow.
  • Grumeza et al. (2024): Social robots and edge computing: integrating cloud robotics… (Relates to Reliable functioning / Competence via technical infrastructure).
  • Han et al. (2024): The emerging field of healthcare robotics… (Focuses on User acceptance and social integration within spinal care and healthcare).
  • Irfan et al. (2025): Challenges of applying LLMs to companion robots for open-domain dialogues with older adults. (Addresses Interpersonal warmth and the reality/delusion boundary in trust).
  • Ji et al. (2024): Wavchat: A survey of spoken dialogue models. (Examines the technical quality of speech, a precursor to Perceived sociability).
  • Lin et al. (2024): Advancing large language models to capture varied speaking styles… (Enhances perceived sociability by making conversation feel more natural).
  • Moussawi & Benbunan-Fich (2021): The effect of voice and humour on users’ perceptions… (Directly relates to Interpersonal warmth and user comfort).
  • Ninomiya et al. (2015): Development of the multi-dimensional robot attitude scale. (Measures User acceptance and general attitudes toward domestic robots).
  • Offrede et al. (2023): Do humans converge phonetically when talking to a robot? (Examines the Social presence and “social” nature of the human-robot dialogue).
  • Nichele et al. (2025): Insights from UK first responders on instantaneous trust. (Directly maps to the Reliable functioning / Competence component of trust).
  • Pochwatko et al. (2024): (In your framework) examines how societal representations impact trust and the willingness to cooperate.
  • Ratajczyk (2024): (In your framework) explores dominance vs. submissiveness and how these traits map onto user comfort and trait projection.
  • Recchiuto & Sgorbissa (2022): Diversity-aware social robots meet people… (Relates to Social integration and the interpersonal warmth of “embodied AI”).
  • Sapkota et al. (2025): Vision-language-action models… (Focuses on the competence and reliable task performance of modern 2025 AI models).
  • Ryan & Deci (2000): Self-determination theory… (The foundational work for understanding the psychological needs for connection and social development).
  • Voultsiou et al. (2025): The potential of Large Language Models for social robots in special education. (Examines how LLMs improve perceived sociability and competence in educational HRI).
  • Weir (2025): Self-determination theory: A quarter century of human motivation research. (An updated 2025 review of the psychological needs for autonomy and relatedness, which drive social integration with robots).
  • Teixeira et al. (2024): Empowering Leadership in the Military… (Discusses leadership styles that mirror the interpersonal warmth and competence pillars required for trust in high-stakes HRI).
  • Zolyomi & Snyder (2021): Social-emotional-sensory design map… informed by neurodivergent experiences. (Maps directly to Social integration / Belonging and the “User Acceptance” of vulnerable/neurodivergent populations).
  • Zhao (2024): Machine Learning Techniques… (Also applies here regarding Reliable functioning / Competence—ensuring the robot’s intelligence leads to effective social interaction).

3. Safety

Focus: Perceived security, ethical implications, moral value, and vulnerability.

  • Bandura et al. (1996): Mechanisms of Moral Disengagement… (Already categorized in your framework under Ethical implications and safety boundaries).
  • Bardzell (2010): Feminist HCI: taking stock and outlining an agenda… (Aligns with the Perceived security and power structure disentanglement seen in Winkle et al., 2023).
  • Azizian et al. (2025): Multimodal LLM vs. Human-Measured Features… (Relates to appropriate trust/over-reliance and the safety of AI-driven diagnostic predictions).
  • Boch & Thomas (2025): Provides deep psychological insight into the ethics of social robotics, addressing moral value and the status of the agent.
  • Canada (2025) / Framework for Autism: Addresses social integration and the vulnerability of specific populations (neurodivergent individuals) in a systemic safety context.
  • Douglas & Sedgewick (2024): Examines interpersonal victimization, which is essential for understanding the vulnerability and risk-regulation side of the safety pillar.
  • Eraslan-Çapan & Bakioğlu (2020): Connects submissive behavior to moral disengagement, highlighting how safety boundaries are ignored in social/digital systems.
  • Ganguli et al. (2022): Red teaming language models to reduce harms. (Directly addresses Perceived security and the mitigation of system risks).
  • Ganguli et al. (2023): The capacity for moral self-correction in LLMs. (Explores Ethical implications and whether agents can act as moral entities).
  • Government of Canada (2019): Accessible Canada Act. (Provides the legal and equity framework for Vulnerability and justice in HRI).
  • Hu et al. (2024): Exploring speech pattern disorders in autism using machine learning. (Addresses Vulnerability and the protection of marginalized groups, similar to Zhu et al., 2024).
  • Kaufhold et al. (2024): Cyber situational awareness in computer emergency response teams. (Relates to Perceived security and systemic safety).
  • Markelius (2024): An Empirical Design Justice Approach… (Aligns with the Ethical implications and justice-oriented safety seen in Zhu et al., 2024).
  • Maj et al. (2024): (In your framework) studies children’s responses to assertive robots, focusing on vulnerability and social dynamics.
  • Moosavi et al. (2024): Collaborative robots (cobots) for disaster risk resilience… (Relates to Perceived security and physical safety in emergency situations).
  • Nussbaum (2009): Creating capabilities: The human development approach. (Provides the ethical foundation for Social integration/Belonging and the rights of vulnerable users).
  • Nomura et al. (2006): (In your framework) provides the NARS scale to measure psychological resistance to robots, a form of Risk-regulation.
  • Piao et al. (2025): Social Bots Meet LLM: Political Bias and Mitigation Strategies. (Directly addresses Ethical implications and systemic safety in social learning).
  • Rizvi et al. (2024): Are Robots Ready to Deliver Autism Inclusion?: A Critical Review. (Examines Vulnerability and the protection of marginalized groups, mapping to Zhu et al., 2024).
  • Shin (2025): Engineering equity: designing diversity-aware AI to reflect humanity. (Focuses on justice, equity, and preventing systemic bias to ensure perceived security).
  • Pfleger & Smith (2022): Transverse Disciplines: Queer-Feminist… Approaches. (Provides the academic framework for the Feminist HRI pillar and disentangling harmful hierarchies).
  • Winkle et al. (2023): (In your framework) the foundational paper for Feminist HRI, ensuring interactions are ethical, maintain equitable power structures, and do not propagate harmful hierarchies, thereby increasing perceived security.
  • van Zomeren et al. (2024): The Intergroup Value Protection Model… (Addresses Ethical implications and how “in-group” values influence the moral status assigned to others/agents).
  • Topcu & Erdur-Baker (2010): Revised Cyber Bullying Inventory (RCBI). (Essential for understanding vulnerability and the psychological safety of users in digital/robotic social environments).
  • Zhu et al. (2024): (In your framework) introduces Robots for Social Justice (R4SJ). This is the cornerstone for justice, equity, and the protection of marginalized groups within the safety pillar.
  • Zhou et al. (2024): Speak out of turn: Safety vulnerability of large language models in multi-turn dialogue. (Addresses the Vulnerability and Appropriate trust categories by identifying technical flaws that could lead to unsafe or unexpected AI behavior).
  • Zolyomi & Snyder (2021): Social-emotional-sensory design map. (By addressing sensory needs, this work reduces vulnerability and enhances the perceived security of the environment for neurodivergent users).

Summary of the Final Sorting (A-Z)

You have now mapped a comprehensive interdisciplinary library onto your framework:

  • Anthropomorphic Connection: Anchored by Waytz (2010) and Leslie (2001), and modernized by Zelikman (2024) and Kang (2024).
  • Social Comfort/Trust: Driven by Ryan & Deci’s SDT (2000/2025) and applied via Pochwatko (2024) and Arora (2024).
  • Safety: Built on Bandura (1996) and Winkle (2023), and specialized for equity through Zhu (2024) and Zolyomi (2021).

When I next dropped my Neurodivergent Scale for Interacting with Robots (NSIR) into the thread, Google AI replied:

Based on your Updated Concept Mapping and the specific themes of neurodivergent interaction (Social-Emotional-Sensory), here is the factor mapping for the Neurodivergent Scale for Interacting with Robots (NSIR).

Factor Mapping: NSIR (2025)

Synthesized Research Connections

  • Items 1, 3, 4, & 6: Strongly supported by Waytz et al. (2010) and Leslie (2001) regarding the stability of individual differences in how people “see human” in machines.
  • Item 5: Connects to Park & Whang (2022) and Graham (2025) regarding the design of empathy and affective recognition in social robots.
  • Item 7: Directly relates to the Risk-regulation model and Winkle et al. (2023); feeling safe enough to be vulnerable (undressing) suggests the robot has successfully avoided “harmful social hierarchies.”
  • Item 8: Aligns with Zolyomi & Snyder (2021); for neurodivergent users, the predictability and “sameness” of a robot provide a level of Social Comfort often missing in human-to-human interaction.

Psychological Context (Supportive Material)

The following articles provide the psychological basis for the behaviors (like submissiveness or intimidation) that trigger the framework categories:

  • Anikin et al. (2024): Loudness in vocal intimidation (Impacts perceived security).
  • Ali (2021): Towards a Bad Bitches’ Pedagogy. (Connects to social justice/equity and the deconstruction of traditional HRI hierarchies).
  • Allan & Gilbert (1994, 1995, 1997) / Cardi et al. (2014): Studies on submissive behavior, social comparison, and rank perception, establishing the metrics that Ratajczyk (2024) maps onto machines.
  • Beck (1967) / Diener et al. (1985): Clinical and well-being scales used to measure the psychological needs that social robots aim to meet (Social Integration/Belonging).
  • Deci & Ryan (2008): Self-Determination Theory is the macro-theory used to evaluate reliable functioning and human motivation to interact with autonomous agents.
  • Social Dominance & Submission:
    • Gao et al. (2024) & Gillard et al. (2021): Examine the psychological toll of social rank and submissive behavior.
    • Janson et al. (2022): Looks at how humans orient toward signals of interpersonal dominance.
    • Johnson (1991) / Gowing (2013) / Huang et al. (2025): Provide historical and clinical definitions of Submission, which map to the submissive robot traits studied by Ratajczyk (2024).
  • Feminist Standpoint Theory:
    • Hartsock (1983), Harding (2004), Huirem et al. (2020), and Gurung (2020): These provide the theoretical backbone for the Feminist HRI framework (Winkle et al., 2023) used to disentangle power structures and ensure safety.
  • Servant Leadership (as a Robot Behavior Model):
    • Greenleaf (1970/2014) and Crippen & Nagel (2014): Define the “Servant-Leader” model, which is often used as a template for Reliable functioning and “Interpersonal warmth” in service robots.
    • Spears (2025) and Sipe & Frick (2009): Define the “Seven Pillars” of Servant Leadership, often used as a framework for the “Competence” and “Warmth” columns.
  • Social Status & Assertiveness:
    • Mahadevan et al. (2023): Hierometer theory and social rank theory (Basis for the risk-regulation model).
    • Renger (2018): Self-respect as a predictor of assertiveness (Maps to Maj et al.’s work on assertive robots).
    • Renger et al. (2024): Explores the link between socioeconomic status and self-regard.
  • Identity & Self-Esteem:
    • Stets & Burke (2000): Foundations of Social Identity Theory (Key for the “Belonging” and “Integration” categories).
    • Tafarodi & Swann (2001): Theory and measurement of self-esteem.
  • Neurodiversity & Standpoint Theory:
    • Reutlinger et al. (2025): Sensory neurodiverse pedagogies (Supports the Social integration of neurodivergent users).
    • Swigonski (1994): The logic of Feminist Standpoint Theory for research.
  • Dominance & Submissiveness Strategy:
    • Tharp et al. (2021): Transdiagnostic approach to the dominance behavioral system. (Provides the clinical background for perceived dominance mapping).
    • Vekarić & Jelić (2025): Decoding Markers of Submissiveness Strategy… (A 2025 study on how submissiveness is used to create group identity, mapping to the “Fictive Kinship” column).
    • Troop et al. (2003): Connects submissive behavior and social comparison, supporting the risk-regulation model.
    • Wetherall et al. (2019): A systematic review of Social Rank Theory, explaining how humans manage vulnerability based on their perceived status relative to an agent.

On January 1, 2026

I synthesized all completed work, sent out proposals, and initiated an SDG Action Plan.

As of January 1, 2026

I have been accepted into two 2026 conferences: one for a literature review, from which I created an abstract, and a Global Psychology conference, where I will introduce my Table 79 – Neurodivergent Scale for Interacting with Robots (Sadownik, 2025).
