<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD with MathML3 v1.2 20190208//EN" "JATS-journalpublishing1-mathml3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en">
<front>
<journal-meta><journal-id journal-id-type="publisher-id">RPIO</journal-id><journal-id journal-id-type="nlm-ta">Res People Organ</journal-id>
<journal-title-group>
<journal-title>Research for People in Organizations</journal-title><abbrev-journal-title abbrev-type="pubmed">Res. People Organ.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2510-991X</issn>
<publisher><publisher-name>PsychOpen</publisher-name></publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">rpio.17157</article-id>
<article-id pub-id-type="doi">10.5964/rpio.17157</article-id>
<article-categories>
<subj-group subj-group-type="heading"><subject>Emerging Scholars' Showcase</subject></subj-group>


<subj-group subj-group-type="badge">
<subject>Data</subject>
<subject>Code</subject>
<subject>Materials</subject>
</subj-group>


</article-categories>
<title-group>
<article-title>Working With AI: How Team Cohesion Depends on Perceiving the AI as a Tool or a Partner</article-title>
<alt-title alt-title-type="right-running">Working with AI</alt-title>
<alt-title specific-use="APA-reference-style" xml:lang="en">Working with AI: How team cohesion depends on perceiving the AI as a tool or a partner</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid" authenticated="false">https://orcid.org/0009-0000-3653-3704</contrib-id><name name-style="western"><surname>Hoffmann</surname><given-names>Maja</given-names></name><xref ref-type="corresp" rid="cor1">*</xref><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib>
<contrib contrib-type="author"><contrib-id contrib-id-type="orcid" authenticated="false">https://orcid.org/0009-0002-3287-1890</contrib-id><name name-style="western"><surname>Wollstein</surname><given-names>Coraly</given-names></name><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib>
<contrib contrib-type="author"><contrib-id contrib-id-type="orcid" authenticated="false">https://orcid.org/0009-0003-5866-933X</contrib-id><name name-style="western"><surname>Zimber</surname><given-names>Fiona</given-names></name><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib>
<contrib contrib-type="author"><contrib-id contrib-id-type="orcid" authenticated="false">https://orcid.org/0000-0002-3190-6185</contrib-id><name name-style="western"><surname>Tausch</surname><given-names>Alina</given-names></name><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib>
<contrib contrib-type="editor">
<name>
	<surname>Hagemann</surname>
	<given-names>Vera</given-names>
</name>
<xref ref-type="aff" rid="aff2"/>
</contrib>
<aff id="aff1"><label>1</label><institution content-type="dept">Faculty of Psychology</institution>, <institution>Ruhr University Bochum</institution>, <addr-line><city>Bochum</city></addr-line>, <country country="DE">Germany</country></aff>
	<aff id="aff2">University of Bremen, Bremen, <country>Germany</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>*</label>Ruhr-University Bochum, Work, Organization and Business Psychology, 44801 Bochum, Germany. <email xlink:href="maja.hoffmann-b3f@ruhr-uni-bochum.de">maja.hoffmann-b3f@ruhr-uni-bochum.de</email></corresp>
</author-notes>
<pub-date date-type="pub" publication-format="electronic"><day>11</day><month>03</month><year>2026</year></pub-date>
<pub-date pub-type="collection" publication-format="electronic"><year>2026</year></pub-date>
<volume>2</volume>
<elocation-id>e17157</elocation-id>
<history>
<date date-type="received">
<day>28</day>
<month>02</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>01</day>
<month>10</month>
<year>2025</year>
</date>
</history>
<permissions><copyright-year>2026</copyright-year><copyright-holder>Hoffmann, Wollstein, Zimber, &amp; Tausch</copyright-holder><license license-type="open-access" specific-use="CC BY 4.0" xlink:href="https://creativecommons.org/licenses/by/4.0/"><ali:license_ref>https://creativecommons.org/licenses/by/4.0/</ali:license_ref><license-p>This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International License, CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p></license></permissions>

<abstract abstract-type="non-technical">
<sec><title>Background</title>
<p>In the future, people will increasingly work alongside artificial intelligence (AI). To make this work humane and acceptable, team cohesion, i.e., the feeling of belonging together as a strong unit within an organization, is essential, as it enables team members to reach common targets and work together as a group.</p></sec>
<sec><title>Why was this study done?</title>
<p>AI is becoming increasingly integrated into work settings. However, it remains unclear how people perceive AI when it is implemented in different roles. Two possible ways of applying AI in working life are as a tool or as a potential teammate, and it is currently unclear how these role perceptions affect important factors such as team cohesion or trust. This study was conducted to gain a better understanding of how different perceptions of AI roles influence these parameters.</p></sec>
<sec><title>What did the researchers do and find?</title>
<p>We conducted an online vignette study in which participants were randomly assigned to different job scenarios. Here, AI was introduced either as a tool or as a teammate. Afterwards, the participants evaluated their experience, answering questions about team cohesion and trust in AI. The study shows that: 1) people’s perception of the role of AI does not necessarily correspond to how we designed it, and 2) their perception of AI as a teammate or a tool is essential for how well they can form a team with it.</p></sec>
<sec><title>What do these findings mean?</title>
<p>When AI is perceived as a tool, it is harder to experience team spirit, while perceiving it as a teammate strongly relates to cohesion. Trust plays an essential role in shaping the effect of partner perception on team cohesion: Trust in a teammate-like AI, in particular, supports higher team cohesion.</p></sec>
</abstract>
<abstract abstract-type="highlights"><title>Highlights</title>
<p>
<list list-type="bullet">
<list-item>
<p>Using an online vignette study, participants interacted with AI, either implemented as a tool or a teammate.</p>
</list-item>
<list-item>
<p>Participants’ perception of team cohesion depended on the different types of AI application.</p>
</list-item>
<list-item>
<p>Perceiving AI as a teammate relates to a higher feeling of team cohesion.</p>
</list-item>
<list-item>
<p>Trust moderates the higher feeling of team cohesion when AI is perceived as a teammate.</p></list-item>
</list>
</p>
</abstract>
<abstract>
<p>With artificial intelligence (AI) becoming more integrated into working life, the question is how to use it in a human-centered way to promote resources such as team cohesion, belonging, and support. While these factors are crucial to reaching common targets, not every AI system and application raises the same expectations of providing those resources. This online vignette study aims to answer the question of how different role conceptualizations of AI can influence team cohesion in a hybrid team. Two hundred and seventeen participants were randomly divided into two experimental conditions, i.e., working with AI either as a teammate or as a tool. They were then asked to evaluate their team perception while working with the AI, as well as their trust in the system. As the vignette manipulation of the different AI framings was unsuccessful, further analyses were carried out by grouping the participants according to their own rating of the AI role. The results show that team cohesion is significantly higher when AI is perceived as a teammate. Moreover, trust in AI has a significant influence on team perception: Perceiving AI as a teammate leads to higher team cohesion compared to seeing it as a tool, moderated by the participant’s trust in the AI.</p>
</abstract>
<kwd-group kwd-group-type="author"><kwd>human-AI interaction</kwd><kwd>artificial intelligence (AI)</kwd><kwd>trust</kwd><kwd>role perception</kwd></kwd-group>

</article-meta>
</front>
<body>
	<sec sec-type="intro" id="intro"><title/>	
<p>The increasing application of artificial intelligence (AI), i.e., “software-based technology that permits automated machines to sense their surroundings and intelligently make decisions based on the available data” (<xref ref-type="bibr" rid="r17">Kaplan et al., 2023</xref>, p. 1), requires proper integration into human workflows. AI systems and humans can work closely together, either in a supportive cooperation (<xref ref-type="bibr" rid="r18">Lai et al., 2022</xref>) or in a synergetic teaming approach (<xref ref-type="bibr" rid="r1">Berretta et al., 2023</xref>). As research on teams with autonomous agents builds on insights from human teams (<xref ref-type="bibr" rid="r23">Morrow &amp; Fiore, 2012</xref>), gaps remain in understanding the transferability and peculiarities of teaming in both contexts. This study compares the utilization of AI as a tool and as a teammate. It should be noted that, although AI is a complex construct, it was operationalized in this study as a software application on a computer. The aim is to test whether working with AI in different roles has an impact on the sense of team cohesion, and how trust influences this relationship.</p>
<sec sec-type="other1"><title>Theoretical Background and Current State of Research</title>
<sec><title>Applying AI as a Teammate or a Tool</title>
<p>While typically used as a working appliance, the dynamic capabilities of AI allow it to be considered a potential teammate. As <xref ref-type="bibr" rid="r28">Rix (2022)</xref> summarizes, factors such as a team setting and collaborative actions are “drivers of the formation of impactful human-machine teams” (p. 398). While there are many forms of collaboration (e.g., <xref ref-type="bibr" rid="r24">Parasuraman et al., 2000</xref>) and concepts of interaction, like “human centered assistance applications” (<xref ref-type="bibr" rid="r32">Schmidtler et al., 2015</xref>, p. 85) and human-AI teaming (<xref ref-type="bibr" rid="r1">Berretta et al., 2023</xref>), there is no agreed-on differentiation of AI roles within a work group.</p>
<p>Researchers acknowledge the existence of “various modes of cooperation between humans and AI” (<xref ref-type="bibr" rid="r21">Li et al., 2022</xref>, p. 1) — here referred to as AI roles — with an ongoing debate on whether technological agents can, should and may be considered team members or mere tools (<xref ref-type="bibr" rid="r28">Rix, 2022</xref>).</p>
<p>Perceiving AI as a teammate goes along with certain behaviors like proactivity (co-creation) and relationship-building (<xref ref-type="bibr" rid="r28">Rix, 2022</xref>), and with an active and distinct role of the AI (<xref ref-type="bibr" rid="r21">Li et al., 2022</xref>). The role of AI as a supportive tool (e.g., <xref ref-type="bibr" rid="r9">Fan et al., 2022</xref>) is described as helping with tasks while being limited in its function and decision spectrum, not going beyond what it is told to do and not reaching into social spheres. However, more nuanced taxonomies are missing, and research on the impact of different forms of AI implementation is urgently needed.</p></sec>
<sec><title>How People Experience Working With AI</title>
<sec><title>Role Perception of AI</title>
<p>The essential theory on how people perceive autonomous partners in teams is by <xref ref-type="bibr" rid="r36">Wynne and Lyons (2018)</xref>. Their concept of autonomous agent teammate-likeness describes the attitude of humans towards autonomous teammates as “the extent to which a human operator perceives and identifies an [...] agent partner as a highly altruistic, benevolent, interdependent, emotive, communicative and synchronized agentic teammate, rather than simply an instrumental tool” (<xref ref-type="bibr" rid="r36">Wynne &amp; Lyons, 2018</xref>, p. 355). Its six facets, affected by the human, the agent, and their characteristics, influence cognitive, affective and emotional outcomes. Research on the model by <xref ref-type="bibr" rid="r35">Tokadlı and Dorneich (2022)</xref> shows that an AI system giving cues for its action within the work environment was rated more teammate-like than one performing separately on a screen. <xref ref-type="bibr" rid="r4">Capiola and colleagues (2023)</xref> demonstrated that manipulating an agent’s behavior has effects on its perception. Therefore, we hypothesize the following:</p>
<list id="L2" list-type="simple">
<list-item>
<p><italic>H1</italic>: The manipulation of the AI role results in different subjective AI role allocations by the participants.</p></list-item>
</list>
<p>To capture perceptions independently and remain open to the possibility that AI can be perceived simultaneously as a teammate and as a tool, three separate measures were used: one scale measuring the perception as a teammate, one measuring the perception as a tool, and a bipolar continuum ranging from “tool” to “teammate”.</p>
<list id="L3" list-type="simple">
<list-item>
<p><italic>H1.1</italic>: Receiving a vignette with AI in the role of a teammate is positively connected to rating it as a teammate.</p></list-item>
<list-item>
<p><italic>H1.2</italic>: Receiving a vignette with AI in the role of a tool is positively connected to rating it as a tool.</p></list-item>
<list-item>
<p><italic>H1.3</italic>: Receiving a vignette with AI in the role of a teammate leads to a higher rating on a continuum between the two roles, reflecting more of a teammate evaluation.</p></list-item>
</list></sec>
<sec><title>Team Cohesion With AI</title>
<p><xref ref-type="bibr" rid="r36">Wynne and Lyons (2018)</xref> explicitly name human-agent team cohesion as one of the outcomes in their model. Cohesion is an important parameter in a team, enabling the members to reach common goals and work together as a group. It is defined as the degree to which an individual believes in the attraction between the members of their work group, their willingness to work together, and their commitment to their tasks and goals (<xref ref-type="bibr" rid="r27">Riordan &amp; Weatherly, 1999</xref>, p. 315). Team cohesion is well researched in human teams but under-explored when including autonomous agents (<xref ref-type="bibr" rid="r19">Lakhmani et al., 2022</xref>). However, given its impact on reaching common targets (<xref ref-type="bibr" rid="r12">Grossman et al., 2022</xref>) and its potential for creating synergistic human-AI teaming (<xref ref-type="bibr" rid="r7">Correia et al., 2018</xref>), it is an important factor to consider. <xref ref-type="bibr" rid="r28">Rix (2022)</xref> describes the way AI is implemented in a company as decisive for how humans work with it: When humans identify the AI as a teammate, cooperation will be characterized by bidirectional interaction. In contrast, when AI is perceived as a tool, there is a unidirectional relationship without team perception (<xref ref-type="bibr" rid="r28">Rix, 2022</xref>), thus not evoking team cohesion. Hence, the following hypotheses are assumed:</p>
<list id="L4" list-type="simple">
<list-item>
<p><italic>H2</italic>: A stronger perception of AI as a teammate relates to higher team cohesion.</p></list-item>
<list-item>
<p><italic>H2.1</italic>: The higher the rating of AI as a teammate, the higher the perceived team cohesion.</p></list-item>
<list-item>
<p><italic>H2.2</italic>: The higher the rating of AI as a tool, the lower the perceived team cohesion.</p></list-item>
<list-item>
<p><italic>H2.3</italic>: The more the role continuum is rated towards the side of a teammate, the higher the perceived team cohesion.</p></list-item>
</list></sec>
<sec><title>Trust as a Moderator Between AI Role and Team Cohesion</title>
<p>Trust is important when considering working in teams and vital for experiencing team cohesion (<xref ref-type="bibr" rid="r10">Fung, 2014</xref>). It can be defined as a “situation-specific [attitude] that [is] relevant only when something is exchanged in a cooperative relationship characterized by uncertainty” (<xref ref-type="bibr" rid="r14">Hoff &amp; Bashir, 2015</xref>, p. 410). Our study focuses on this situational trust in a certain AI system integrated in the team, keeping in mind that this is influenced by dispositional as well as learned trust (<xref ref-type="bibr" rid="r14">Hoff &amp; Bashir, 2015</xref>).</p>
<p>Research has shown that trust in human–AI teams depends on different aspects, e.g., the average performance of the teams (negatively related; <xref ref-type="bibr" rid="r22">McNeese et al., 2021</xref>). <xref ref-type="bibr" rid="r8">Dennis and colleagues (2023)</xref> showed that there are no significant differences in trustworthiness or willingness to collaborate with AI-teammates compared to human members. Additionally, the participants’ trust in AI has an impact on whether people use the technology (<xref ref-type="bibr" rid="r5">Choung et al., 2023</xref>) and is essential for creating a feeling of team cohesiveness (<xref ref-type="bibr" rid="r10">Fung, 2014</xref>). Also, <xref ref-type="bibr" rid="r16">Kao et al. (2019)</xref> found that team trust strengthened the positive link between transformational leadership and team cohesion. This indicates that trust not only contributes directly to cohesion but also moderates the strength of other predictors. Considering the literature on the specific importance of trust for team cohesion, and the evidence for its moderating role in related contexts, we expect it to also influence the effect of AI role perception on team cohesion.</p>
<list id="L5" list-type="simple">
<list-item>
<p><italic>H3</italic>: Trust moderates the effect of different AI role perceptions on team cohesion.</p></list-item>
</list></sec></sec>
<sec><title>Factors Influencing Team Cohesion With AI</title>
<p>Previous research has shown that humans prefer to work with other humans rather than with AI because they perceive the other human as a teammate (<xref ref-type="bibr" rid="r31">Sadeghian &amp; Hassenzahl, 2022</xref>). Following this insight, team cohesion can be influenced by implementing AI as a team partner or a tool, by its role’s perception and by the trust in the AI system (see <xref ref-type="fig" rid="f1">Figure 1</xref>).</p><fig id="f1" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 1</label><caption>
<title>Assigned Hypotheses in the Analyzed Path Model</title></caption><graphic xlink:href="rpio.17157-f1" position="anchor" orientation="portrait"/></fig>
<p>Regarding technology and AI use, gender is a potentially influential variable affecting attitudes and perception (see e.g., <xref ref-type="bibr" rid="r3">Cai et al., 2017</xref>). The findings are ambivalent: <xref ref-type="bibr" rid="r25">Ray and colleagues (1999)</xref> showed that men and women are equally comfortable with technology at work. In fact, women even have a more positive attitude towards it. Nevertheless, gender is related to how much time people spend with technology (<xref ref-type="bibr" rid="r3">Cai et al., 2017</xref>).</p>
<p>Thus, the variables gender and experience might influence how people perceive AI within teams and the collaboration with it. Because research presents an ambiguous picture of what these influences look like, experience and gender are used as control variables within this study.</p></sec></sec></sec>
<sec sec-type="methods"><title>Method</title>
<sec><title>Design of the Online-Vignette Study</title>
<p>The study was developed during a university seminar. As this paper serves as an overview of the research findings, the hypotheses deviate from those specified in the preregistration (see <xref ref-type="bibr" rid="r34">Tausch et al., 2024</xref>), which were individual students’ hypotheses for the seminar. This manuscript is an exploratory investigation of the results with exploratory hypotheses.</p>
<p>The experimental online study tested the influence of different types of AI application on team cohesion, moderated by trust in the AI and controlled for gender and AI experience. The data collection via SoSci Survey took place from March 12 to June 23, 2024. The study was classified as ethically unobjectionable by the ethics committee of Ruhr-University Bochum (Application 919).</p>
<p>Using a vignette study, participants were assigned a “job”, either in recruiting or in the scientific field, and were asked to imagine an exemplary workday scenario with typical tasks, supported by an AI system. They then evaluated the depicted situation. The corresponding vignettes can be viewed in full at <xref ref-type="supplementary-material" rid="r15">Hoffmann et al. (2025)</xref>. They vary in their central characteristics regarding the role of the AI system, which results in different experimental conditions (<xref ref-type="bibr" rid="r30">Rungtusanatham et al., 2011</xref>). The assignment of participants was partially randomized, as the survey software ensured a balanced distribution of participants across the conditions among fully completed questionnaires. Data collection was anonymous and not incentivized.</p></sec>
<sec><title>Measurement Instruments</title>
	<p>For measuring team cohesion, the RoBoCo-scale was selected (<xref ref-type="bibr" rid="r33">Tausch &amp; Kluge, 2026</xref>). This scale captures cohesion between robots and humans but can also be used in human–AI teams according to the authors. Thus, it was deemed more suitable than general cohesion measures for human-only teams. The participants rated statements like “I have a good relationship with the AI” on a scale from 1 (Completely agree) to 7 (Strongly disagree).</p>
<p>Trust in the AI was measured using the Human-Computer Trust Scale (HCTM), which is specifically designed to analyze human trust in different technological systems and comprehensive enough to capture trust in the presented AI in a differentiated way (<xref ref-type="bibr" rid="r13">Gulati et al., 2019</xref>). On a five-point Likert scale, participants rated statements such as “I believe that the AI is acting in my best interests” from strongly disagree to completely agree. The German-language items were provided by the original authors.</p>
<p>Fourteen self-designed items separated into two scales were used to record participants’ subjective rating of whether they perceived AI as a tool or as a teammate (seven items each). On a five-point Likert scale, the participants rated statements such as “The AI and I together form a social unit” (Teammate) or “The AI is a useful tool” (Tool) from 1 (Strongly disagree) to 5 (Completely agree). Additionally, a continuum was included, on which the participants were asked to specify from 1 to 100 if the AI was perceived more as a tool (1) or as a teammate (100), because it remains unclear if the roles are distinct constructs or extremes on a continuum.</p></sec>
<sec><title>Sample</title>
<p>Four hundred and eighty-four participants were recruited via the students’ personal networks and the platform SurveyCircle. The sample was filtered according to the exclusion criteria defined in the preregistration (see <xref ref-type="bibr" rid="r34">Tausch et al., 2024</xref>), decreasing it to 217 participants. A detailed description of the exclusion procedure can be viewed at <xref ref-type="supplementary-material" rid="r15">Hoffmann et al. (2025)</xref>. For the results, a sub-sample of 150 participants was relevant, excluding those who received the vignettes describing collaboration with a human assistant. The following data relate to the sub-sample of people who were presented with an AI vignette. The average age was <italic>M</italic> = 31.40 years (<italic>SD</italic> = 13.30), with a minimum age of 18 and a maximum age of 73. Educational qualifications were also surveyed: 36.87% stated that their highest educational qualification was a university degree, 8.29% a university of applied sciences degree, 46.08% a high school diploma, 2.76% an intermediate school leaving certificate, 0.92% a lower secondary school leaving certificate, and 5.07% a doctorate or habilitation. In total, 81 participants (33.33% male) received the vignette in which AI was presented as a tool, whereas 65 participants (30.77% male) received the vignette in which AI was introduced as a teammate.</p></sec>
<sec><title>Statistical Analysis</title>
<p>To test the connection between the perception of AI and its influence on team cohesion, moderated by trust, the statistical software R 4.3.1 was used (<xref ref-type="bibr" rid="r26">R Core Team, 2023</xref>). Data were merged across the two different jobs presented, as this distinction was not relevant for the analysis. Scale analyses were performed for reliability information and scale improvements. The data were then analyzed descriptively.</p>
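The reliability values reported below (Cronbach’s α in Table 1) follow the standard definition. As a minimal, hypothetical illustration (this is not the authors’ R code; the function and example data are constructed for demonstration only), α can be computed directly from an item-score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the sum scores
    return k / (k - 1) * (1 - item_variances / total_variance)

# Sanity check: seven perfectly consistent items (all columns identical)
# yield the maximum reliability of alpha = 1.
scores = np.tile(np.array([[1.0], [2.0], [3.0], [4.0], [5.0]]), (1, 7))
print(round(cronbach_alpha(scores), 2))  # → 1.0
```

McDonald’s ω, also reported in Table 1, additionally requires factor loadings from a one-factor model and is therefore not sketched here.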
<p>To test the hypotheses for significance, a path analysis (see <xref ref-type="fig" rid="f1">Figure 1</xref>) was performed. A distinction was made between the direct mediation of the perception of AI (scales and continuum, M<sub>1-3</sub>) from the various vignettes (X) to team cohesion (Y) and a moderation effect through trust (W). Three different types of mediation were distinguished: M<sub>1</sub>) via the evaluation of AI as a teammate, M<sub>2</sub>) via participants’ rating of AI as a tool, and M<sub>3</sub>) via rating on a continuum from tool to teammate. This analysis was performed considering the covariates gender and experience with AI using the lavaan package (<xref ref-type="bibr" rid="r29">Rosseel, 2012</xref>).</p></sec></sec>
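The moderation component of the model can be sketched with a single moderated regression equation. This is an illustrative Python analogue on simulated data, not the authors’ lavaan code; every variable name and coefficient below is a hypothetical stand-in for the study’s constructs:

```python
# Hedged sketch: trust (W) moderating the path from teammate perception to
# team cohesion, with gender and AI experience as covariates. Data are fake.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 150  # size of the analyzed AI sub-sample in the study

df = pd.DataFrame({
    "teammate": rng.uniform(1, 5, n),    # teammate-perception scale (1-5)
    "trust": rng.uniform(1, 5, n),       # trust scale (1-5)
    "gender": rng.integers(1, 3, n),     # control variable
    "experience": rng.uniform(1, 9, n),  # control variable
})
# Simulate cohesion with main effects plus a trust x perception interaction,
# so a moderation effect is present in the fake data by construction.
df["cohesion"] = (1.0 + 0.5 * df["teammate"] + 0.3 * df["trust"]
                  + 0.2 * df["teammate"] * df["trust"]
                  + rng.normal(0, 0.5, n))

# "teammate * trust" expands to both main effects and their product term;
# the product-term coefficient is the moderation effect of trust.
model = smf.ols("cohesion ~ teammate * trust + gender + experience",
                data=df).fit()
print(model.params["teammate:trust"])  # recovered interaction coefficient
```

In a structural equation framework such as lavaan, the same moderation is typically specified by adding a product term of predictor and moderator to the model syntax; the sketch above shows the equivalent single-equation form for one of the three perception measures.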
<sec sec-type="results"><title>Results</title>
<sec><title>Descriptives</title>
<p>All descriptive values can be found in <xref ref-type="table" rid="t1">Table 1</xref>.</p>
<table-wrap id="t1" position="anchor" orientation="portrait">
<label>Table 1</label><caption><title>Means, Standard Deviations, and Correlations of All Measured Constructs</title></caption>
	<table frame="hsides" rules="groups" style="striped-#f3f3f3">
<col width="" align="left"/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<thead>
<tr>
<th>Variable</th>
<th>Range</th>
<th>α/ω</th>
<th><italic>M</italic></th>
<th><italic>SD</italic></th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. Gender</td>
<td>1 &amp; 2</td>
<td>—</td>
<td align="char" char=".">1.33</td>
<td align="char" char=".">0.49</td>
<td>—</td>
<td>—</td>
<td>—</td>
<td>—</td>
<td>—</td>
<td>—</td>
</tr>
<tr>
<td>2. Experience with AI</td>
<td>1–9</td>
<td>—</td>
<td align="char" char=".">6.36</td>
<td align="char" char=".">1.71</td>
<td align="char" char=".">.13</td>
<td>—</td>
<td>—</td>
<td>—</td>
<td>—</td>
<td>—</td>
</tr>
<tr>
<td>3. Teammate perception</td>
<td>1–5</td>
<td align="char" char=".">.83/.89</td>
<td align="char" char=".">2.61</td>
<td align="char" char=".">0.81</td>
<td align="char" char=".">-.07</td>
<td align="char" char=".">-.14</td>
<td>—</td>
<td>—</td>
<td>—</td>
<td>—</td>
</tr>
<tr>
<td>4. Tool perception</td>
<td>1–5</td>
<td align="char" char=".">.72/.79</td>
<td align="char" char=".">4.32</td>
<td align="char" char=".">0.47</td>
<td align="char" char=".">.17*</td>
<td align="char" char=".">.21*</td>
<td align="char" char=".">-.21**</td>
<td>—</td>
<td>—</td>
<td>—</td>
</tr>
<tr>
<td>5. Role continuum</td>
<td>1–100</td>
<td>—</td>
<td align="char" char=".">28.98</td>
<td align="char" char=".">25.09</td>
<td align="char" char=".">-.23**</td>
<td align="char" char=".">-.19*</td>
<td align="char" char=".">.43**</td>
<td align="char" char=".">-.30**</td>
<td>—</td>
<td>—</td>
</tr>
<tr>
<td>6. Trust</td>
<td>1–5</td>
<td align="char" char=".">.79/.84</td>
<td align="char" char=".">3.12</td>
<td align="char" char=".">0.56</td>
<td align="char" char=".">-.12</td>
<td align="char" char=".">-.01</td>
<td align="char" char=".">.60**</td>
<td align="char" char=".">-.01</td>
<td align="char" char=".">.34**</td>
<td>—</td>
</tr>
<tr>
<td>7. Team cohesion</td>
<td>1–7</td>
<td align="char" char=".">.92/.94</td>
<td align="char" char=".">3.73</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">-.03</td>
<td align="char" char=".">.01</td>
<td align="char" char=".">.80**</td>
<td align="char" char=".">-.14</td>
<td align="char" char=".">.44**</td>
<td align="char" char=".">.62**</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note. M</italic> and <italic>SD</italic> are used to represent mean and standard deviation, respectively. Cronbach’s α and McDonald’s ω are measures for the scales’ reliability.</p>
<p>*<italic>p</italic> &lt; .05. **<italic>p</italic> &lt; .01.</p>
</table-wrap-foot>
</table-wrap></sec>
<sec><title>Testing the Hypotheses</title>
<p>All hypotheses were tested in a path model depicted in <xref ref-type="fig" rid="f2">Figure 2</xref>.</p><fig id="f2" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 2</label><caption>
<title>Standardized Path Weights Between the Variables in the Analyzed Path Model</title><p><italic>Note</italic>. All these effects are examined controlling for gender and experience with AI.</p><p>*<italic>p</italic> &lt; .05. **<italic>p</italic> &lt; .01.</p></caption><graphic xlink:href="rpio.17157-f2" position="anchor" orientation="portrait"/></fig>
<sec><title>A-paths: Effects of Conditions on Role Perceptions</title>
<p>In Hypotheses <italic>H1.1</italic> and <italic>H1.2</italic>, the paths from the condition to the rating of the AI as a teammate (a<sub>1</sub>) and as a tool (a<sub>2</sub>) are examined. Receiving a vignette with AI as a tool, compared to AI as a teammate, leads to a lower perception of AI as a teammate, <italic>M</italic><sub>tool</sub> = 2.56 (0.81) and <italic>M</italic><sub>partner</sub> = 2.68 (0.82). Path a<sub>1</sub> is not significantly different from zero, β = -.09, <italic>SE</italic> = 0.13, <italic>z</italic>(147) = -1.04, <italic>p</italic> = .298. The rating of AI as a tool was <italic>M</italic><sub>tool</sub> = 4.57 (0.47) and <italic>M</italic><sub>partner</sub> = 4.48 (0.53) in the two conditions, with Path a<sub>2</sub> on the rating of AI as a tool, β = .12, <italic>SE</italic> = 0.08, <italic>z</italic>(147) = 1.40, <italic>p</italic> = .162, being non-significant. <italic>H1.1</italic> and <italic>H1.2</italic> are thus rejected.</p>
<p>The a<sub>3</sub>-path (<italic>H1.3</italic>) relates to the influence of the experimental condition on the role rating of AI on the continuum. Receiving a vignette with AI as a tool (<italic>M</italic> = 27.67, <italic>SD</italic> = 24.16), compared to AI as a teammate (<italic>M</italic> = 30.60, <italic>SD</italic> = 26.30), leads to a lower ranking on the teammate continuum by -3.33 units. This result is not significantly different from zero, β = -.07, <italic>SE</italic> = 4.09, <italic>z</italic>(147) = -0.81, <italic>p</italic> = .416, so the rating on the continuum cannot be predicted by the experimental condition and <italic>H1.3</italic> must be rejected.</p></sec><?figure f2?>
<sec><title>B-paths: Effects of Role Perception on Team Cohesion</title>
<p>In Hypotheses <italic>H2.1</italic> and <italic>H2.2</italic>, the paths from the AI rating as a teammate (b<sub>1</sub>) and as a tool (b<sub>2</sub>) to team cohesion are examined. Rating AI as a tool (β = -.14), compared to rating it as a teammate (β = .74), leads to a lower ranking on the team cohesion scale by -0.28 units. This result is significantly different from zero, β = -.14, <italic>SE</italic> = 0.13, <italic>z</italic>(147) = -2.15, <italic>p</italic> = .031. <italic>H2.1</italic> and <italic>H2.2</italic> can thus be accepted. The b<sub>3</sub>-path, tested in Hypothesis <italic>H2.3</italic>, examines the influence of the continuum-based AI rating: Rating AI more towards the teammate side leads to higher team cohesion, β = .28, <italic>SE</italic> = 0.002, <italic>z</italic>(147) = 4.23, <italic>p</italic> &lt; .001. <italic>H2.3</italic> can thus be accepted.</p>
<p>The c′-path tested the direct effect of the experimental condition on team cohesion, which was non-significant.</p></sec>
<sec><title>Moderation Via Trust</title>
<p>Hypothesis <italic>H3</italic> concerns the moderating effect of trust on the relationship between AI role perception and team cohesion. Trust (rated on a scale from 1 to 7) strengthens the relationship between AI perception and team cohesion. For the link between teammate perception and team cohesion, this effect differed significantly from zero, β = .24, <italic>SE</italic> = 0.08, <italic>z</italic>(147) = 4.60, <italic>p</italic> &lt; .001. For the relation between perceiving AI as a tool and team cohesion, the moderating effect of trust was significant as well, β = .61, <italic>SE</italic> = 0.11, <italic>z</italic>(147) = 9.57, <italic>p</italic> &lt; .001. Finally, the effect of trust on the connection between the rating on the continuum and perceived team cohesion also differed significantly from zero, β = .55, <italic>SE</italic> = 0.10, <italic>z</italic>(147) = 8.46, <italic>p</italic> &lt; .001. <italic>H3</italic> is accepted.</p></sec></sec></sec>
<sec sec-type="discussion"><title>Discussion</title>
<p>We investigated whether people’s sense of team cohesion differs depending on whether they work with AI as a teammate or as a tool and on how they perceive the AI role. Our hypotheses were partially supported: AI role perception correlated with perceived team cohesion, moderated by trust in AI. The influence of role perception on team cohesion varies with a person’s mental model of the AI as either a useful tool or an integral teammate. However, our vignettes did not influence how strongly AI was rated as a teammate or a tool; there was no significant connection between the vignette presented to participants and the strength of their role perception. Despite the failure of the manipulation, the response behavior provides valuable insights.</p>
<p>This raises the question of why the differently presented AIs did not produce different role perceptions. One possible reason is the construction of our vignettes. The teammate vignette reflects several aspects of the definition of a teammate by <xref ref-type="bibr" rid="r36">Wynne and Lyons (2018)</xref> (e.g., interdependence and communication), but does not take emotionality into account. The tool vignette, in contrast, describes a more mechanical collaboration. It is therefore unclear whether the presented differences were sufficiently clear to the participants. Although individual facets of the model were considered, the overall manipulation may have been insufficient to produce clearly distinguishable role perceptions.</p>
<p>The question remains whether the actively designed organizational role of AI in the workplace is irrelevant for shaping people’s perception of AI. It cannot be answered reliably here, for methodological reasons: As <xref ref-type="bibr" rid="r11">Groß and Börensen (2009)</xref> describe, situations can be perceived differently when read in a vignette rather than experienced directly, as vignettes highlight only certain aspects of a situation. We also rely fully on a situational judgment (as addressed in the section on trust) and could not, in this study design, observe interaction or let team dynamics evolve. This might explain the study’s failure to establish differences in the perception of the AI role. In addition, role perception can be expected to be strongly influenced by pre-existing attitudes, such as the conviction that AI cannot be a “real” teammate or is bound to remain a tool. Our vignettes might therefore not have been far-reaching enough to make people question these convictions. Small deviations of a vignette from, e.g., media-coined expectations can fall into the <italic>zone of tolerance</italic> (<xref ref-type="bibr" rid="r2">Berry &amp; Parasuraman, 1991</xref>), so that outcomes, in this case role perceptions, are assimilated towards expectations; this assimilation might have shaped our results more than our manipulation did.</p>
<p>Nevertheless, we found evidence for the second part of the model: The more AI is perceived as a tool, the less cohesion people experience (medium effect), while the more AI is perceived as a teammate, the more cohesion they experience (large effect). For this correlation, it does not matter whether the role perception is experimentally induced or the result of prior expectations. Merging the two roles into one continuum also seems to have worked in our study, even though the effects for the two respective roles appear more differentiated. Using the continuum entails some information loss but still yields a medium effect on team cohesion and can thus help in understanding the prerequisites for a human–AI team. Given that the results are correlational, the direction of causality needs to be considered. Based on <xref ref-type="bibr" rid="r28">Rix (2022)</xref> and the definition of team cohesion by <xref ref-type="bibr" rid="r27">Riordan and Weatherly (1999)</xref>, we suggest that role perception drives team cohesion: To experience a sense of belonging with AI, it is beneficial to perceive it as an agent that can fulfill the role of a partner, while working with something regarded as a mere tool is detrimental to the teaming experience.</p>
<p>Moreover, the study shows that trust is also a crucial factor for cohesion in human–AI teams, especially when working with AI as a teammate: Trust has a greater influence on the relationship between perceiving AI as a teammate and the feeling of team cohesion. While being highly relevant for cohesion, trust in AI also had a small effect on the perception of the human–AI team itself. Higher trust in AI team members could lead to more open communication and interaction with them, which strengthens team cohesion and the working atmosphere. Establishing a basis of trust is thus crucial.</p>
<sec><title>Limitations</title>
<p>It is important to stress that the hypotheses and the path model explored in this study are exploratory, because the preregistration took place at the beginning of the university seminar and included our methods as well as the individual students’ hypotheses. After the seminar, it became clear that the manipulation had little to no direct effect on the dependent variables mentioned in the preregistration (e.g., team cohesion), so this paper set out to explore whether role perceptions play a mediating role.</p>
<p>Additionally, as mentioned above, vignettes as stimuli come with limited external validity (<xref ref-type="bibr" rid="r11">Groß &amp; Börensen, 2009</xref>). Vignettes may simply not produce role perceptions involving cognitive and affective components to the same extent as experiencing the actual situation would (<xref ref-type="bibr" rid="r6">Collett &amp; Childs, 2011</xref>), so that participants base their ratings on prior experience. Statements about the effects of the vignettes should therefore be made carefully. Although all participants who stated that they were unable to imagine the given scenario were excluded, this remains a subjective assessment. In a follow-up study, we will address vignette immersion as well as an even clearer, more nuanced differentiation of AI roles, e.g., by using video snippets of an interaction with AI. This should increase the influence of the experimentally manipulated AI roles on people’s perception and lead to clearer results regarding the factors influencing cohesion in human–AI teams. Furthermore, no substantial difference was found between the two AI perception types, which may indicate that the distinction cannot be regarded as dichotomous: It may be possible to use AI as a tool while at the same time considering it a teammate.</p>
<p>It should also be noted that team cohesion and teammate perception correlate substantially with each other (<italic>r</italic> = .80). This might indicate that the two constructs overlap considerably. Looking into the definitions of both, as stated by <xref ref-type="bibr" rid="r36">Wynne and Lyons (2018)</xref>, it becomes clear that an AI system perceived as a “communicative and synchronized agentic teammate” (p. 355) might as well evoke motivation to work together. The differentiation between the two constructs is thus unclear. Nevertheless, important differences between the scales should not be neglected: While the team cohesion scale asks to what extent participants are willing to correct mistakes in the team, the team partner perception scale asks whether the AI is a helpful team partner.</p></sec>
<sec><title>Theoretical and Practical Implications</title>
<p>In the following, we take a closer look at how the perception of AI might influence felt team cohesion. Our study supports the finding of <xref ref-type="bibr" rid="r28">Rix (2022)</xref> that in order to work together on common goals, one has to perceive the AI as an “other”, whereas working with AI as a tool is a unidirectional interaction. Our results show that the AI here was probably not strongly perceived as a social partner, even when it was presented as such in the scenario through framing and, e.g., proactive behavior. It remains to be investigated whether this is due to an insufficient operationalization of teammate-likeness according to the model of <xref ref-type="bibr" rid="r36">Wynne and Lyons (2018)</xref>, to a lack of real interaction and team development, or simply to the fact that it is hard to imagine current systems being perceived as partners. Further studies should follow the definitions by <xref ref-type="bibr" rid="r36">Wynne and Lyons (2018)</xref> more systematically and realize longitudinal designs with longer-term interactions with existing or simulated AI systems in different roles. At the same time, theory is needed to systematize the roles of AI within a team.</p>
<p>Trust in AI can, under certain circumstances, be a good starting point for strengthening team cohesion. The moderating effect of trust shows potential for companies to support teaming by actively promoting trust in AI teammates. Companies could, for instance, offer targeted training programs or discourse formats to promote adequate use of AI and to exchange best practices. Employees can be actively involved in the process of establishing AI in the company, which also gives them the opportunity to express concerns (<xref ref-type="bibr" rid="r20">Lee &amp; See, 2004</xref>). This can reduce doubts and foster trust, which in turn helps employees experience cohesion with AI as a collaborator, which is then related to positive outcomes (<xref ref-type="bibr" rid="r12">Grossman et al., 2022</xref>). However, it is important to note that the moderating effect also implies that trusting AI tools correlates with lower team cohesion. A possible explanation is that learning that AI is reliable strengthens trust, but also the impression that AI does not act on its own but on behalf of the human worker, which weakens feelings of team cohesion (based on the definitions of trust and team cohesion, see above). When working with AI as a tool, team cohesion might nevertheless not be as favorable an outcome as in situations with AI teammates, and promoting trust might not be beneficial. Future research could aim to identify more suitable outcome variables for companies working with AI tools.</p></sec>
<sec sec-type="conclusions"><title>Conclusion</title>
<p>As team cohesion is a variable with a crucial influence on performance, organizations working with AI as a team member should aim to increase it. In this context, it is beneficial if AI is seen as a teammate rather than a tool. Furthermore, in human–AI teams, trust can help strengthen team cohesion, but not if AI is perceived as a tool. Still, team cohesion is not the only relevant outcome variable for modern companies, and, depending on a company’s needs, AI tools might be more suitable. Further research could address the direct effect of role perceptions on the performance or work satisfaction of workers who use, or work together with, AI. As this study has shown, interpersonal factors normally researched only in human teams, such as team cohesion and trust, are also relevant when working with technological systems and need to be properly addressed in theory, research, and organizations.</p></sec></sec>
</body>
<back>
<sec sec-type="ethics-statement">
<title>Ethics Statement</title>
<p>The local ethics committee of the Faculty of Psychology has approved the study under No. 919.</p></sec>
<fn-group content-type="author-contribution">
<fn fn-type="con">
<p><italic>MH</italic>: Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Visualization, Writing - original draft, Writing - review &amp; editing. <italic>CW</italic>: Investigation, Methodology, Project administration, Writing - original draft (discussion), Writing - review &amp; editing. <italic>FZ</italic>: Data curation, Investigation, Methodology, Project administration, Writing - review &amp; editing. <italic>AT</italic>: Conceptualization, Data curation, Methodology, Project administration, Supervision, Validation, Writing - original draft, Writing - review &amp; editing.</p>
</fn>
</fn-group>
<ref-list><title>References</title>
<ref id="r1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Berretta</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Tausch</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Ontrup</surname>, <given-names>G.</given-names></string-name>, <string-name name-style="western"><surname>Gilles</surname>, <given-names>B.</given-names></string-name>, <string-name name-style="western"><surname>Peifer</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Kluge</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2023</year>). <article-title>Defining human-AI teaming the human-centered way: A scoping review and network analysis.</article-title> <source>Frontiers in Artificial Intelligence</source>, <volume>6</volume>, <elocation-id>1250725</elocation-id>. <pub-id pub-id-type="doi">10.3389/frai.2023.1250725</pub-id><pub-id pub-id-type="pmid">37841234</pub-id></mixed-citation></ref>
<ref id="r2"><mixed-citation publication-type="book">Berry, L. L., &amp; Parasuraman, A. (1991). <italic>Marketing services: Competing through quality.</italic> Free Press.</mixed-citation></ref>
<ref id="r3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Cai</surname>, <given-names>Z.</given-names></string-name>, <string-name name-style="western"><surname>Fan</surname>, <given-names>X.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Du</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Gender and attitudes toward technology use: A meta-analysis.</article-title> <source>Computers &amp; Education</source>, <volume>105</volume>, <fpage>1</fpage>–<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1016/j.compedu.2016.11.003</pub-id></mixed-citation></ref>
<ref id="r4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Capiola</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Lyons</surname>, <given-names>J. B.</given-names></string-name>, <string-name name-style="western"><surname>Harris</surname>, <given-names>K. N.</given-names></string-name>, <string-name name-style="western"><surname>Hamdan</surname>, <given-names>I. A.</given-names></string-name>, <string-name name-style="western"><surname>Kailas</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Sycara</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2023</year>). <article-title>“Do what you say?” The combined effects of framed social intent and autonomous agent behavior on the trust process.</article-title> <source>Computers in Human Behavior</source>, <volume>149</volume>, <elocation-id>107966</elocation-id>. <pub-id pub-id-type="doi">10.1016/j.chb.2023.107966</pub-id></mixed-citation></ref>
<ref id="r5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Choung</surname>, <given-names>H.</given-names></string-name>, <string-name name-style="western"><surname>David</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Ross</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2023</year>). <article-title>Trust in AI and its role in the acceptance of AI technologies.</article-title> <source>International Journal of Human-Computer Interaction</source>, <volume>39</volume>(<issue>9</issue>), <fpage>1727</fpage>–<lpage>1739</lpage>. <pub-id pub-id-type="doi">10.1080/10447318.2022.2050543</pub-id></mixed-citation></ref>
<ref id="r6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Collett</surname>, <given-names>J. L.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Childs</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Minding the gap: Meaning, affect, and the potential shortcomings of vignettes.</article-title> <source>Social Science Research</source>, <volume>40</volume>(<issue>2</issue>), <fpage>513</fpage>–<lpage>522</lpage>. <pub-id pub-id-type="doi">10.1016/j.ssresearch.2010.08.008</pub-id></mixed-citation></ref>
<ref id="r7"><mixed-citation publication-type="confproc">Correia, F., Mascarenhas, S., Prada, R., Melo, F. S., &amp; Paiva, A. (2018). Group-based emotions in teams of humans and robots. <italic>HRI ’18: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction</italic> (pp. 261–269). <pub-id pub-id-type="doi">10.1145/3171221.3171252</pub-id></mixed-citation></ref>
<ref id="r8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Dennis</surname>, <given-names>A. R.</given-names></string-name>, <string-name name-style="western"><surname>Lakhiwal</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Sachdeva</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2023</year>). <article-title>AI agents as team members: Effects on satisfaction, conflict, trustworthiness, and willingness to work with.</article-title> <source>Journal of Management Information Systems</source>, <volume>40</volume>(<issue>2</issue>), <fpage>307</fpage>–<lpage>337</lpage>. <pub-id pub-id-type="doi">10.1080/07421222.2023.2196773</pub-id></mixed-citation></ref>
<ref id="r9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Fan</surname>, <given-names>X.</given-names></string-name>, <string-name name-style="western"><surname>Jiang</surname>, <given-names>X.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Deng</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Immersive technology: A meta-analysis of augmented/virtual reality applications and their impact on tourism experience.</article-title> <source>Tourism Management</source>, <volume>91</volume>, <elocation-id>104534</elocation-id>. <pub-id pub-id-type="doi">10.1016/j.tourman.2022.104534</pub-id></mixed-citation></ref>
<ref id="r10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Fung</surname>, <given-names>H. P.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Relationships among team trust, team cohesion, team satisfaction, team effectiveness and project performance as perceived by project managers in Malaysia.</article-title> <source>Australian Journal of Basic and Applied Sciences</source>, <volume>8</volume>(<issue>8</issue>), <fpage>205</fpage>–<lpage>216</lpage>.</mixed-citation></ref>
<ref id="r11"><mixed-citation publication-type="book">Groß, J., &amp; Börensen, C. (2009). Wie valide sind Verhaltensmessungen mittels Vignetten? [How valid are behavioural measurements using vignettes?] In P. Kriwy &amp; C. Gross (Eds.), <italic>Klein aber fein!</italic> [<italic>Small but nice!</italic>] (pp. 149–178). VS Verlag für Sozialwissenschaften. <pub-id pub-id-type="doi">10.1007/978-3-531-91380-3_7</pub-id></mixed-citation></ref>
<ref id="r12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Grossman</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Nolan</surname>, <given-names>K.</given-names></string-name>, <string-name name-style="western"><surname>Rosch</surname>, <given-names>Z.</given-names></string-name>, <string-name name-style="western"><surname>Mazer</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Salas</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2022</year>). <article-title>The team cohesion-performance relationship: A meta-analysis exploring measurement approaches and the changing team landscape.</article-title> <source>Organizational Psychology Review</source>, <volume>12</volume>(<issue>2</issue>), <fpage>181</fpage>–<lpage>238</lpage>. <pub-id pub-id-type="doi">10.1177/20413866211041157</pub-id></mixed-citation></ref>
<ref id="r13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Gulati</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Sousa</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Lamas</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2019</year>). <article-title>Design, development and evaluation of a human-computer trust scale.</article-title> <source>Behaviour &amp; Information Technology</source>, <volume>38</volume>(<issue>10</issue>), <fpage>1004</fpage>–<lpage>1015</lpage>. <pub-id pub-id-type="doi">10.1080/0144929X.2019.1656779</pub-id></mixed-citation></ref>
<ref id="r14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Hoff</surname>, <given-names>K. A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Bashir</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Trust in automation: Integrating empirical evidence on factors that influence trust.</article-title> <source>Human Factors</source>, <volume>57</volume>(<issue>3</issue>), <fpage>407</fpage>–<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1177/0018720814547570</pub-id><pub-id pub-id-type="pmid">25875432</pub-id></mixed-citation></ref>
<ref id="r15"><mixed-citation publication-type="web">Hoffmann, M., Wollstein, C., Zimber, F., &amp; Tausch, A. (2025). <italic>AI_Tool vs Teammate</italic> [OSF project page containing code, codebook, data, &amp; supplementary materials]. Open Science Framework. <pub-id pub-id-type="doi">10.17605/OSF.IO/269EF</pub-id></mixed-citation></ref>
<ref id="r16"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Kao</surname>, <given-names>S. F.</given-names></string-name>, <string-name name-style="western"><surname>Tsai</surname>, <given-names>C. Y.</given-names></string-name>, <string-name name-style="western"><surname>Schinke</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Watson</surname>, <given-names>J. C.</given-names></string-name></person-group> (<year>2019</year>). <article-title>A cross-level moderating effect of team trust on the relationship between transformational leadership and cohesion.</article-title> <source>Journal of Sports Sciences</source>, <volume>37</volume>(<issue>24</issue>), <fpage>2844</fpage>–<lpage>2852</lpage>. <pub-id pub-id-type="doi">10.1080/02640414.2019.1668186</pub-id><pub-id pub-id-type="pmid">31543005</pub-id></mixed-citation></ref>
<ref id="r17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Kaplan</surname>, <given-names>A. D.</given-names></string-name>, <string-name name-style="western"><surname>Kessler</surname>, <given-names>T. T.</given-names></string-name>, <string-name name-style="western"><surname>Brill</surname>, <given-names>J. C.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Hancock</surname>, <given-names>P. A.</given-names></string-name></person-group> (<year>2023</year>). <article-title>Trust in artificial intelligence: Meta-analytic findings.</article-title> <source>Human Factors</source>, <volume>65</volume>(<issue>2</issue>), <fpage>337</fpage>–<lpage>359</lpage>. <pub-id pub-id-type="doi">10.1177/00187208211013988</pub-id><pub-id pub-id-type="pmid">34048287</pub-id></mixed-citation></ref>
<ref id="r18"><mixed-citation publication-type="confproc">Lai, V., Carton, S., Bhatnagar, R., Liao, Q. V., Zhang, Y., &amp; Tan, C. (2022). Human-AI collaboration via conditional delegation: A case study of content moderation. In S. Barbosa (Ed.), <italic>ACM Digital Library, Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems.</italic> Association for Computing Machinery. <pub-id pub-id-type="doi">10.1145/3491102.3501999</pub-id></mixed-citation></ref>
<ref id="r19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Lakhmani</surname>, <given-names>S. G.</given-names></string-name>, <string-name name-style="western"><surname>Neubauer</surname>, <given-names>C.</given-names></string-name>, <string-name name-style="western"><surname>Krausman</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Fitzhugh</surname>, <given-names>S. M.</given-names></string-name>, <string-name name-style="western"><surname>Berg</surname>, <given-names>S. K.</given-names></string-name>, <string-name name-style="western"><surname>Wright</surname>, <given-names>J. L.</given-names></string-name>, <string-name name-style="western"><surname>Rovira</surname>, <given-names>E.</given-names></string-name>, <string-name name-style="western"><surname>Blackman</surname>, <given-names>J. J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Schaefer</surname>, <given-names>K. E.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Cohesion in human–autonomy teams: An approach for future research.</article-title> <source>Theoretical Issues in Ergonomics Science</source>, <volume>23</volume>(<issue>6</issue>), <fpage>687</fpage>–<lpage>724</lpage>. <pub-id pub-id-type="doi">10.1080/1463922X.2022.2033876</pub-id></mixed-citation></ref>
<ref id="r20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Lee</surname>, <given-names>J. D.</given-names></string-name>, &amp; <string-name name-style="western"><surname>See</surname>, <given-names>K. A.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Trust in automation: Designing for appropriate reliance.</article-title> <source>Human Factors</source>, <volume>46</volume>(<issue>1</issue>), <fpage>50</fpage>–<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1518/hfes.46.1.50.30392</pub-id><pub-id pub-id-type="pmid">15151155</pub-id></mixed-citation></ref>
<ref id="r21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Li</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Huang</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Liu</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Zheng</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Human-AI cooperation: Modes and their effects on attitudes.</article-title> <source>Telematics and Informatics</source>, <volume>73</volume>, <elocation-id>101862</elocation-id>. <pub-id pub-id-type="doi">10.1016/j.tele.2022.101862</pub-id></mixed-citation></ref>
<ref id="r22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>McNeese</surname>, <given-names>N. J.</given-names></string-name>, <string-name name-style="western"><surname>Demir</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Chiou</surname>, <given-names>E. K.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Cooke</surname>, <given-names>N. J.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Trust and team performance in human–autonomy teaming.</article-title> <source>International Journal of Electronic Commerce</source>, <volume>25</volume>(<issue>1</issue>), <fpage>51</fpage>–<lpage>72</lpage>. <pub-id pub-id-type="doi">10.1080/10864415.2021.1846854</pub-id></mixed-citation></ref>
<ref id="r23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Morrow</surname>, <given-names>P. B.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Fiore</surname>, <given-names>S. M.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Supporting human-robot teams in social dynamicism: An overview of the metaphoric inference framework.</article-title> <source>Proceedings of the Human Factors and Ergonomics Society Annual Meeting</source>, <volume>56</volume>(<issue>1</issue>), <fpage>1718</fpage>–<lpage>1722</lpage>. <pub-id pub-id-type="doi">10.1177/1071181312561344</pub-id></mixed-citation></ref>
<ref id="r24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Parasuraman</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Sheridan</surname>, <given-names>T. B.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Wickens</surname>, <given-names>C. D.</given-names></string-name></person-group> (<year>2000</year>). <article-title>A model for types and levels of human interaction with automation.</article-title> <source>IEEE Transactions on Systems, Man, and Cybernetics. Part A, Systems and Humans</source>, <volume>30</volume>(<issue>3</issue>), <fpage>286</fpage>–<lpage>297</lpage>. <pub-id pub-id-type="doi">10.1109/3468.844354</pub-id><pub-id pub-id-type="pmid">11760769</pub-id></mixed-citation></ref>
<ref id="r25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Ray</surname>, <given-names>C. M.</given-names></string-name>, <string-name name-style="western"><surname>Sormunen</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Harris</surname>, <given-names>T. M.</given-names></string-name></person-group> (<year>1999</year>). <article-title>Men’s and women’s attitudes toward computer technology: A comparison.</article-title> <source>Office Systems Research Journal</source>, <volume>17</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>8</lpage>.</mixed-citation></ref>
<ref id="r26"><mixed-citation publication-type="web">R Core Team. (2023). <italic>R</italic> (Version 4.3.1) [Computer software]. R Foundation for Statistical Computing. <ext-link ext-link-type="uri" xlink:href="https://www.R-project.org/">https://www.R-project.org/</ext-link></mixed-citation></ref>
<ref id="r27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Riordan</surname>, <given-names>C. M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Weatherly</surname>, <given-names>E. W.</given-names></string-name></person-group> (<year>1999</year>). <article-title>Defining and measuring employees’ identification with their work groups.</article-title> <source>Educational and Psychological Measurement</source>, <volume>59</volume>(<issue>2</issue>), <fpage>310</fpage>–<lpage>324</lpage>. <pub-id pub-id-type="doi">10.1177/00131649921969866</pub-id></mixed-citation></ref>
<ref id="r28"><mixed-citation publication-type="confproc">Rix, J. (2022). From tools to teammates: Conceptualizing humans’ perception of machines as teammates with a systematic literature review. In T. Bui (Ed.), <italic>Proceedings of the 55th Hawaii International Conference on System Sciences.</italic> Hawaii International Conference on System Sciences. <pub-id pub-id-type="doi">10.24251/HICSS.2022.048</pub-id></mixed-citation></ref>
<ref id="r29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y.</given-names></string-name></person-group> (<year>2012</year>). <article-title>lavaan: An R package for structural equation modeling.</article-title> <source>Journal of Statistical Software</source>, <volume>48</volume>(<issue>2</issue>), <fpage>1</fpage>–<lpage>36</lpage>. <pub-id pub-id-type="doi">10.18637/jss.v048.i02</pub-id></mixed-citation></ref>
<ref id="r30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Rungtusanatham</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Wallin</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Eckerd</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2011</year>). <article-title>The vignette in a scenario-based role-playing experiment.</article-title> <source>Journal of Supply Chain Management</source>, <volume>47</volume>(<issue>3</issue>), <fpage>9</fpage>–<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1111/j.1745-493X.2011.03232.x</pub-id></mixed-citation></ref>
<ref id="r31"><mixed-citation publication-type="confproc">Sadeghian, S., &amp; Hassenzahl, M. (2022). The “artificial” colleague: Evaluation of work satisfaction in collaboration with non-human coworkers. In G. Jacucci, S. Kaski, C. Conati, S. Stumpf, T. Ruotsalo &amp; K. Gajos (Eds.), <italic>IUI ’22: Proceedings of the 27th International Conference on Intelligent User Interfaces</italic> (pp. 27–35). ACM. <pub-id pub-id-type="doi">10.1145/3490099.3511128</pub-id></mixed-citation></ref>
<ref id="r32"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Schmidtler</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Knott</surname>, <given-names>V.</given-names></string-name>, <string-name name-style="western"><surname>Hölzel</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Bengler</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Human centered assistance applications for the working environment of the future.</article-title> <source>Occupational Ergonomics</source>, <volume>12</volume>(<issue>3</issue>), <fpage>83</fpage>–<lpage>95</lpage>. <pub-id pub-id-type="doi">10.3233/OER-150226</pub-id></mixed-citation></ref>
<ref id="r33"><mixed-citation publication-type="preprint">Tausch, A., &amp; Kluge, A. (2026). <italic>RoboCo: A scale to measure team cohesion in human-robot teams</italic> [Manuscript in preparation]. Faculty of Psychology, Ruhr University Bochum.</mixed-citation></ref>
<ref id="r34"><mixed-citation publication-type="web">Tausch, A., Zimber, F., Brüggemann, C., Schulz, M., &amp; Alesin, K. (2024). <italic>AI as a tool vs. a team partner – A vignette study within the FoPra seminar 2024</italic> (AsPredicted #160584) [Preregistration]. AsPredicted. <ext-link ext-link-type="uri" xlink:href="https://aspredicted.org/5ZY_7RM">https://aspredicted.org/5ZY_7RM</ext-link></mixed-citation></ref>
<ref id="r35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Tokadlı</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Dorneich</surname>, <given-names>M. C.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Autonomy as a teammate: Evaluation of teammate-likeness.</article-title> <source>Journal of Cognitive Engineering and Decision Making</source>, <volume>16</volume>(<issue>4</issue>), <fpage>282</fpage>–<lpage>300</lpage>. <pub-id pub-id-type="doi">10.1177/15553434221108002</pub-id></mixed-citation></ref>
<ref id="r36"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Wynne</surname>, <given-names>K. T.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Lyons</surname>, <given-names>J. B.</given-names></string-name></person-group> (<year>2018</year>). <article-title>An integrative model of autonomous agent teammate-likeness.</article-title> <source>Theoretical Issues in Ergonomics Science</source>, <volume>19</volume>(<issue>3</issue>), <fpage>353</fpage>–<lpage>374</lpage>. <pub-id pub-id-type="doi">10.1080/1463922X.2016.1260181</pub-id></mixed-citation></ref>
</ref-list><fn-group><fn fn-type="conflict">
<p content-type="fn-title">The authors declare that they have no conflict of interest in the conduct and reporting of the research and that ethical standards were adhered to throughout the study.</p></fn></fn-group><ack><title>Acknowledgements</title>
<p>The following students, apart from the authors, took part in preparing the study and collecting data (in alphabetical order): Karina Alesin, Nina Böckmann, Charlotte Brüggemann, Nils Conen, Nimue Dort, Carla Evers, Jule Köntje, Antonia Löblein, Melina Priskos, Maren Schulz, Anise Yildiz.</p></ack><fn-group><fn fn-type="financial-disclosure">
<p content-type="fn-title">This work has not received funding.</p></fn></fn-group>
	<sec sec-type="data-availability" id="das"><title>Data Availability</title>
		<p>The data that support the findings of this study are available in the OSF repository (<xref ref-type="supplementary-material" rid="r15">Hoffmann et al., 2025</xref>). The study was preregistered (see <xref ref-type="supplementary-material" rid="r34">Tausch et al., 2024</xref>).</p>
	</sec>	

	<sec sec-type="supplementary-material" id="sp1"><title>Supplementary Materials</title>
		<table-wrap position="anchor">
			<table frame="void" style="background:#f3f3f3">
				<col width="60%" align="left"/>
				<col width="40%" align="left"/>
				<thead>
					<tr>
						<th>Type of supplementary materials</th>
						<th>Availability/Access</th>
					</tr>
				</thead>
				<tbody>
					<tr>
						<th colspan="2">Data</th>						
					</tr>
					<tr>
						<td>Data - raw.</td>
						<td><xref ref-type="supplementary-material" rid="r15">Hoffmann et al. (2025)</xref></td>
					</tr>
					<tr>
						<td>Data after exclusion.</td>
						<td><xref ref-type="supplementary-material" rid="r15">Hoffmann et al. (2025)</xref></td>
					</tr>					
					<tr style="grey-border-top-dashed">
						<th colspan="2">Code</th>
					</tr>
					<tr>
						<td>Data analysis R script.</td>
						<td><xref ref-type="supplementary-material" rid="r15">Hoffmann et al. (2025)</xref></td>
					</tr>		
					<tr style="grey-border-top-dashed">
						<th colspan="2">Material</th>
					</tr>
					<tr>
						<td>Vignettes - German.</td>
						<td><xref ref-type="supplementary-material" rid="r15">Hoffmann et al. (2025)</xref></td>
					</tr>
					<tr>
					<td>Vignettes - English (machine translated).</td>
						<td><xref ref-type="supplementary-material" rid="r15">Hoffmann et al. (2025)</xref></td>
					</tr>
					<tr style="grey-border-top-dashed">
						<th colspan="2">Study/Analysis preregistration</th>
					</tr>	
					<tr>
						<td>Preregistration.</td>
						<td><xref ref-type="supplementary-material" rid="r34">Tausch et al. (2024)</xref></td>
					</tr>
					<tr style="grey-border-top-dashed">
						<th colspan="2">Other</th>
					</tr>	
					<tr>
						<td>Exclusion criteria and excluded cases.</td>
						<td><xref ref-type="supplementary-material" rid="r15">Hoffmann et al. (2025)</xref></td>
					</tr>
					<tr>
						<td>Codebook - German.</td>
						<td><xref ref-type="supplementary-material" rid="r15">Hoffmann et al. (2025)</xref></td>
					</tr>
				</tbody>
			</table>
		</table-wrap>		
	</sec>
	
			

</back>
</article>
