The increasing application of artificial intelligence (AI), i.e., “software-based technology that permits automated machines to sense their surroundings and intelligently make decisions based on the available data” (Kaplan et al., 2023, p. 1), requires proper integration into human workflows. AI systems and humans can work closely together, either in a supportive cooperation (Lai et al., 2022) or in a synergistic teaming approach (Berretta et al., 2023). As research on teams with autonomous agents builds on insights from human teams (Morrow & Fiore, 2012), gaps remain in understanding the transferability and peculiarities of teaming in both contexts. This study compares the use of AI as a tool and as a teammate. Although AI is a complex construct, it was operationalized in this study as a software application on a computer. The aim is to test whether working with AI in different roles affects the sense of team cohesion, and how trust influences this relationship.
Theoretical Background and Current State of Research
Applying AI as a Teammate or a Tool
While typically used as a work tool, the dynamic capabilities of AI allow it to be considered a potential teammate. As Rix (2022) summarizes, factors such as a team setting and collaborative actions are “drivers of the formation of impactful human-machine teams” (p. 398). While there are many forms of collaboration (e.g., Parasuraman et al., 2000) and concepts of interaction, like “human centered assistance applications” (Schmidtler et al., 2015, p. 85) and human-AI teaming (Berretta et al., 2023), there is no agreed-upon differentiation of AI roles within a work group.
Researchers acknowledge the existence of “various modes of cooperation between humans and AI” (Li et al., 2022, p. 1) — here referred to as AI roles — with an ongoing debate on whether technological agents can, should and may be considered team members or mere tools (Rix, 2022).
Perceiving AI as a teammate goes along with certain behaviors like proactivity (co-creation) and relationship-building (Rix, 2022), and the active and distinct role of AI (Li et al., 2022). The role of AI as a supportive tool (e.g., Fan et al., 2022) is described as helping with tasks while being limited in its function and decision spectrum, not going beyond what it is told to do and not reaching into social spheres. However, more nuanced taxonomies are missing, and research on the impact of different forms of AI implementation is urgently needed.
How People Experience Working With AI
Role Perception of AI
A foundational theory of how people perceive autonomous partners in teams is that of Wynne and Lyons (2018). Their concept of autonomous agent teammate-likeness describes the attitude of humans towards autonomous teammates as “the extent to which a human operator perceives and identifies an [...] agent partner as a highly altruistic, benevolent, interdependent, emotive, communicative and synchronized agentic teammate, rather than simply an instrumental tool” (Wynne & Lyons, 2018, p. 355). Its six facets, affected by the human, the agent, and their characteristics, influence cognitive, affective and emotional outcomes. Research on the model by Tokadlı and Dorneich (2022) shows that an AI system giving cues for its actions within the work environment was rated more teammate-like than one performing separately on a screen. Capiola and colleagues (2023) demonstrated that manipulating an agent’s behavior affects how it is perceived. Therefore, we hypothesize the following:
H1: The manipulation of the AI role results in different subjective AI role allocations by the participants.
To capture perceptions independently and remain open to the possibility that AI can be perceived simultaneously as a teammate and as a tool, three separate measures were used: a scale measuring the perception as a teammate, a scale measuring the perception as a tool, and a bipolar continuum ranging from “tool” to “teammate”.
H1.1: Receiving a vignette with AI in the role of a teammate is positively connected to rating it as a teammate.
H1.2: Receiving a vignette with AI in the role of a tool is positively connected to rating it as a tool.
H1.3: Receiving a vignette with AI in the role of a teammate leads to a higher rating on a continuum between the two roles, reflecting more of a teammate evaluation.
Team Cohesion With AI
Wynne and Lyons (2018) explicitly name human-agent team cohesion as one of the outcomes in their model. Cohesion is an important parameter in a team, enabling the members to reach common goals and work together as a group. It is defined as the degree to which an individual believes in the attraction between the members of their work group, their willingness to work together, and their commitment to their tasks and goals (Riordan & Weatherly, 1999, p. 315). Team cohesion is well researched in human teams but under-explored when autonomous agents are included (Lakhmani et al., 2022). However, given its impact on reaching common targets (Grossman et al., 2022) and its potential for creating synergistic human-AI teaming (Correia et al., 2018), it is an important factor to consider. Rix (2022) describes the way AI is implemented in a company as decisive for how humans work with it: When humans identify the AI as a teammate, cooperation will be characterized by bidirectional interaction. Conversely, when AI is perceived as a tool, there is a unidirectional relationship without team perception (Rix, 2022), thus not evoking team cohesion. Hence, the following hypotheses are assumed:
H2: A stronger perception of AI as a teammate relates to higher team cohesion.
H2.1: The higher the rating of AI as a teammate, the higher the perceived team cohesion.
H2.2: The higher the rating of AI as a tool, the lower the perceived team cohesion.
H2.3: The more the role continuum is rated towards the side of a teammate, the higher the perceived team cohesion.
Trust as a Moderator Between AI Role and Team Cohesion
Trust is a key consideration when working in teams and vital for experiencing team cohesion (Fung, 2014). It can be defined as a “situation-specific [attitude] that [is] relevant only when something is exchanged in a cooperative relationship characterized by uncertainty” (Hoff & Bashir, 2015, p. 410). Our study focuses on this situational trust in a specific AI system integrated in the team, keeping in mind that it is influenced by dispositional as well as learned trust (Hoff & Bashir, 2015).
Research has shown that trust in human–AI teams depends on different aspects, e.g., the average performance of the teams (negatively related; McNeese et al., 2021). Dennis and colleagues (2023) showed that there are no significant differences in trustworthiness or willingness to collaborate with AI teammates compared to human team members. Additionally, participants’ trust in AI affects whether people use the technology (Choung et al., 2023) and is essential for creating a feeling of team cohesiveness (Fung, 2014). Also, Kao et al. (2019) found that team trust strengthened the positive link between transformational leadership and team cohesion. This indicates that trust not only contributes directly to cohesion but also moderates the strength of other predictors. Considering the literature on the specific importance of trust for team cohesion, and the evidence for its moderating role in related contexts, we expect it to also influence the effect of AI role perception on team cohesion.
H3: Trust moderates the effect of different AI role perceptions on team cohesion.
Factors Influencing Team Cohesion With AI
Previous research has shown that humans prefer to work with other humans rather than with AI because they perceive the other human as a teammate (Sadeghian & Hassenzahl, 2022). Following this insight, team cohesion can be influenced by implementing AI as a team partner or as a tool, by how its role is perceived, and by trust in the AI system (see Figure 1).
Figure 1
Assigned Hypotheses in the Analyzed Path Model
Regarding technology and AI use, gender is a potentially influential variable affecting attitudes and perception (see, e.g., Cai et al., 2017). The findings are ambivalent: Ray and colleagues (1999) showed that men and women are equally comfortable with technology at work; in fact, women even reported a more positive attitude towards it. Nevertheless, gender is related to how much time people spend with technology (Cai et al., 2017).
Thus, the variables gender and experience might influence how people perceive AI within teams and how they collaborate with it. Because research presents an ambiguous picture of what these influences look like, experience and gender are used as control variables in this study.
Method
Design of the Online-Vignette Study
The study was developed during a university seminar. As this paper provides an overview of the research findings, the hypotheses deviate from those specified in the preregistration (see Tausch et al., 2024), which were the individual students’ hypotheses for the seminar. This manuscript therefore reports an exploratory investigation with exploratory hypotheses.
The experimental online study tested the influence of different types of AI application on team cohesion, moderated by trust in the AI and controlled for gender and experience with AI. Data collection via SoSci Survey took place from March 12 to June 23, 2024. The study was classified as ethically unobjectionable by the ethics committee (Application 919) of Ruhr-University Bochum.
Using a vignette study, participants were assigned a “job”, either in recruiting or in the scientific field, and were asked to imagine an exemplary workday scenario with typical tasks, supported by an AI system. They then evaluated the depicted situation. The corresponding vignettes can be viewed in full length at Hoffmann et al. (2025). They vary in their central characteristics regarding the role of the AI system, which results in different experimental conditions (Rungtusanatham et al., 2011). The assignment of participants was partially randomized, as the survey software ensured a balanced distribution across conditions among fully completed questionnaires. Data collection was anonymous and not incentivized.
Measurement Instruments
For measuring team cohesion, the RoBoCo scale was selected (Tausch & Kluge, 2026). The scale captures cohesion between robots and humans but, according to its authors, can also be used in human–AI teams. It was thus deemed more suitable than general cohesion measures developed for human-only teams. Participants rated statements like “I have a good relationship with the AI” on a scale from 1 (Completely agree) to 7 (Strongly disagree).
Trust in the AI was measured using the Human-Computer Trust Scale (HCTM), which is specifically designed to analyze human trust in different technological systems and comprehensive enough to capture trust in the presented AI in a differentiated way (Gulati et al., 2019). On a five-point Likert scale, participants rated statements such as “I believe that the AI is acting in my best interests” from strongly disagree to completely agree. The German-language items were provided by the original authors.
Fourteen self-designed items, separated into two scales, were used to record participants’ subjective rating of whether they perceived AI as a tool or as a teammate (seven items each). On a five-point Likert scale, the participants rated statements such as “The AI and I together form a social unit” (Teammate) or “The AI is a useful tool” (Tool) from 1 (Strongly disagree) to 5 (Completely agree). Additionally, a continuum was included, on which the participants were asked to specify from 1 to 100 whether the AI was perceived more as a tool (1) or as a teammate (100), because it remains unclear whether the roles are distinct constructs or extremes of a continuum.
Sample
Four hundred and eighty-four participants were recruited via the students’ personal networks and the platform SurveyCircle. The sample was filtered according to the exclusion criteria defined in the preregistration (see Tausch et al., 2024), reducing it to 217 participants. A detailed description of the exclusions can be viewed at Hoffmann et al. (2025). For the results, a sub-sample of 150 participants was relevant, excluding those who received the vignettes describing collaboration with a human assistant. The following data relate to this sub-sample of participants who received an AI vignette. The average age was M = 31.40 years (SD = 13.30), with a minimum age of 18 and a maximum age of 73. Educational qualifications were also surveyed: 36.87% stated that their highest educational qualification was a university degree, 8.29% a university of applied sciences degree, 46.08% a high school diploma, 2.76% an intermediate school leaving certificate, 0.92% a lower secondary school leaving certificate, and 5.07% a doctorate or habilitation. In total, 81 participants (33.33% male) received the vignette in which AI was presented as a tool, whereas 65 participants (30.77% male) received the vignette in which AI was introduced as a teammate.
Statistical Analysis
To test the connection between the perception of AI and its influence on team cohesion, moderated by trust, the statistical software R 4.3.1 was used (R Core Team, 2023). Data from the two presented jobs were merged, as this distinction was not relevant for the analysis. Scale analyses were performed to obtain reliability information and improve the scales. The data were then analyzed descriptively.
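To illustrate the kind of reliability analysis reported in Table 1 (Cronbach’s α and McDonald’s ω), a minimal R sketch is shown below. It assumes a data frame `dat` with hypothetical item columns (e.g., cohesion_1 to cohesion_7) and uses the psych package; it is not the original analysis script.

```r
# Minimal sketch of the scale reliability analyses (assumed item names, not the original script).
library(psych)

# Select all items belonging to one scale, here the hypothetical cohesion items.
cohesion_items <- dat[, grep("^cohesion_", names(dat))]

psych::alpha(cohesion_items)  # Cronbach's alpha, including item-drop statistics for scale improvement
psych::omega(cohesion_items)  # McDonald's omega based on a factor model of the items
```

The same two calls would be repeated for the teammate perception, tool perception, and trust scales.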
To test the hypotheses for significance, a path analysis (see Figure 1) was performed. The model distinguishes the mediation of the effect of the vignette condition (X) on team cohesion (Y) via the perception of AI (scales and continuum, M1–M3) from a moderation effect of trust (W). Three types of mediation were distinguished: M1) via the evaluation of AI as a teammate, M2) via participants’ rating of AI as a tool, and M3) via the rating on a continuum from tool to teammate. The analysis was performed with the lavaan package (Rosseel, 2012), including the covariates gender and experience with AI.
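The following sketch shows how such a moderated mediation path model can be specified in lavaan. Variable names (condition, teammate, tool, continuum, trust, cohesion, gender, experience) and the mean-centered product terms are assumptions for illustration; the sketch is not the authors’ original model code.

```r
# Sketch of the path model (a-, b-, and c'-paths plus trust moderation), assuming hypothetical column names.
library(lavaan)

center <- function(x) x - mean(x, na.rm = TRUE)

# Mean-center the mediators and the moderator, then build product terms for the moderation.
dat$teammate_c  <- center(dat$teammate)
dat$tool_c      <- center(dat$tool)
dat$continuum_c <- center(dat$continuum)
dat$trust_c     <- center(dat$trust)
dat$tm_x_trust  <- dat$teammate_c  * dat$trust_c
dat$tl_x_trust  <- dat$tool_c      * dat$trust_c
dat$ct_x_trust  <- dat$continuum_c * dat$trust_c

model <- '
  # a-paths: vignette condition (X) -> role perceptions (M1-M3), controlling for covariates
  teammate_c  ~ a1 * condition + gender + experience
  tool_c      ~ a2 * condition + gender + experience
  continuum_c ~ a3 * condition + gender + experience

  # b-paths, direct effect (c-prime), trust, and interaction terms -> team cohesion (Y)
  cohesion ~ b1 * teammate_c + b2 * tool_c + b3 * continuum_c +
             cp * condition + trust_c +
             w1 * tm_x_trust + w2 * tl_x_trust + w3 * ct_x_trust +
             gender + experience

  # indirect effects of the condition via each role perception
  ind_teammate  := a1 * b1
  ind_tool      := a2 * b2
  ind_continuum := a3 * b3
'

fit <- sem(model, data = dat)
summary(fit, standardized = TRUE, fit.measures = TRUE)
```

Standardized path weights comparable to those in Figure 2 can be read from the `Std.all` column of the summary output.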
Results
Descriptives
All descriptive values can be found in Table 1.
Table 1
Means, Standard Deviations, and Correlations of All Measured Constructs
| Variable | Range | α/ω | M | SD | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. Gender | 1 & 2 | — | 1.33 | 0.49 | — | — | — | — | — | — |
| 2. Experience with AI | 1–9 | — | 6.36 | 1.71 | .13 | — | — | — | — | — |
| 3. Teammate perception | 1–5 | .83/.89 | 2.61 | 0.81 | -.07 | -.14 | — | — | — | — |
| 4. Tool perception | 1–5 | .72/.79 | 4.32 | 0.47 | .17* | .21* | -.21** | — | — | — |
| 5. Role continuum | 1–100 | — | 28.98 | 25.09 | -.23** | -.19* | .43** | -.30** | — | — |
| 6. Trust | 1–5 | .79/.84 | 3.12 | 0.56 | -.12 | -.01 | .60** | -.01 | .34** | — |
| 7. Team cohesion | 1–7 | .92/.94 | 3.73 | 0.91 | -.03 | .01 | .80** | -.14 | .44** | .62** |
Note. M and SD represent the mean and standard deviation, respectively. Cronbach’s α and McDonald’s ω are measures of the scales’ reliability.
*p < .05. **p < .01.
Testing the Hypotheses
All hypotheses were tested in a path model depicted in Figure 2.
Figure 2
Standardized Path Weights Between the Variables in the Analyzed Path Model
Note. All these effects are examined controlling for gender and experience with AI.
*p < .05. **p < .01.
A-paths: Effects of Conditions on Role Perceptions
In Hypotheses H1.1 and H1.2, the paths from the condition to the rating of the AI as a teammate (a1) and as a tool (a2) are examined. Receiving a vignette with AI as a tool, compared to AI as a teammate, was associated with a lower perception of AI as a teammate, Mtool = 2.56 (SD = 0.81) vs. Mteammate = 2.68 (SD = 0.82). Path a1 is not significantly different from zero, β = -.09, SE = 0.13, z(147) = -1.04, p = .298. The rating of AI as a tool was Mtool = 4.57 (SD = 0.47) and Mteammate = 4.48 (SD = 0.53) in the two conditions, with path a2, β = .12, SE = 0.08, z(147) = 1.40, p = .162, being non-significant. H1.1 and H1.2 are thus rejected.
The a3-path (H1.3) relates to the influence of the experimental condition on the rating of the AI role on the continuum. Receiving a vignette with AI as a tool (M = 27.67, SD = 24.16), compared to AI as a teammate (M = 30.60, SD = 26.30), was associated with a rating 3.33 units lower towards the teammate end of the continuum. This effect is not significantly different from zero, β = -.07, SE = 4.09, z(147) = -0.81, p = .416, so the rating on the continuum cannot be predicted by the experimental condition and H1.3 has to be rejected.
B-paths: Effects of Role Perception on Team Cohesion
In Hypotheses H2.1 and H2.2, the paths from the AI rating as a teammate (b1) and as a tool (b2) to team cohesion are examined. Rating AI as a tool (β = -.14), compared to rating it as a teammate (β = .74), was associated with a score 0.28 units lower on the team cohesion scale. This effect is significantly different from zero, β = -.14, SE = 0.13, z(147) = -2.15, p = .031. H2.1 and H2.2 can thus be accepted. The b3-path, tested in Hypothesis H2.3, examines the influence of the continuum-based AI rating: Rating AI more towards the teammate side leads to higher team cohesion, β = .28, SE = 0.002, z(147) = 4.23, p < .001. H2.3 can thus be accepted.
The c′-path tested the direct effect of the experimental condition on team cohesion, which was non-significant.
Moderation Via Trust
Hypothesis H3 relates to the moderating effect of trust on the relationship between AI role perception and team cohesion (measured on a scale from 1–7). Trust strengthens the relationship between AI perception and team cohesion. Regarding the link between teammate perception and team cohesion, this effect is significantly different from zero, β = .24, SE = 0.08, z(147) = 4.60, p < .001. For the relation between perceiving AI as a tool and team cohesion, the moderating effect of trust is significant as well, β = .61, SE = 0.11, z(147) = 9.57, p < .001. Finally, the effect of trust on the connection between the rating on the continuum and perceived team cohesion is also significantly different from zero, β = .55, SE = 0.10, z(147) = 8.46, p < .001. H3 is accepted.
Discussion
We investigated whether people’s sense of team cohesion differs depending on working with AI as a teammate or as a tool and on how they perceive the AI role. Our hypotheses could be partially accepted, showing a correlation of AI perception with perceived team cohesion, moderated by trust in AI. The influence of role perception on team cohesion varies depending on a person’s mental model of the AI as either a useful tool or an integral teammate. However, our vignettes did not influence how strongly AI was rated as a teammate or a tool; there was no significant connection between the vignette presented to a participant and the strength of their role perception. Despite the failure of the manipulation, the response behavior provides valuable insights.
This raises the question of why the differently presented AI systems did not produce different role perceptions. One possible reason is the construction of our vignettes. The teammate vignette reflects several aspects of the definition of a teammate by Wynne and Lyons (2018) (e.g., interdependence or communication), but emotionality is not taken into account. In contrast, the tool vignette describes more of a mechanical collaboration. Accordingly, it is unclear whether the differences presented were sufficiently clear to the participants. Although individual facets of the model were considered, the overall manipulation may have been insufficient to produce clearly distinguishable role perceptions.
The question remains whether the actively designed organizational role of AI in the workplace is irrelevant for shaping people’s perception of AI. This cannot be answered reliably, for methodological reasons: As Groß and Börensen (2009) describe, situations can be perceived differently when reading a vignette rather than experiencing the situation itself, as vignettes highlight only certain aspects of the situation. Moreover, in this study design we rely entirely on a situational judgment (as addressed in the section on trust) and cannot observe interaction or let team dynamics evolve. This might explain the study’s failure to establish differences in the perception of the AI role. Additionally, role perception can be expected to be strongly influenced by pre-existing attitudes, such as the conviction that AI cannot be a “real” teammate or is bound to be a tool. Hence, our vignettes might not have been far-reaching enough to make people question their convictions. Small deviations of a vignette from expectations, e.g., those coined by the media, can fall into the zone of tolerance (Berry & Parasuraman, 1991) and assimilate outcomes, in this case role perceptions, towards those expectations; such assimilation might have shaped our results more than our manipulation did.
Nevertheless, we found evidence for the second part of the model: The more AI is perceived as a tool, the less cohesion people experience (medium effect), while the more AI is perceived as a teammate, the more cohesion is experienced (large effect). For this correlation, it does not matter whether the role perception is experimentally induced or the result of prior expectations. Merging the two roles into one continuum also seems to have worked in our study, even though the effects for the two respective roles appear more differentiated. Using the continuum entails a loss of information but still shows a medium effect on team cohesion and can thus help understand the prerequisites for a human–AI team. Given that the results are correlational, the direction of causality needs to be considered. Based on Rix (2022) and the definition of team cohesion by Riordan and Weatherly (1999), role perception is suggested to cause higher team cohesion: To experience a sense of belonging with AI, it is beneficial to perceive it as an agent that can fulfill the role of a partner, while it is detrimental to the teaming experience to work with something regarded as a tool.
Moreover, the study shows that trust is also a crucial factor when addressing cohesion in human–AI teams, especially when working together with AI as a teammate. Regarding the perception of AI, trust has a greater influence on the relationship between perceiving AI as a teammate and the feeling of team cohesion. While being highly relevant for cohesion, trust in AI also had a small effect on the perception of the human–AI team. Higher trust in AI team members could lead to more open communication and interaction with them, which strengthens team cohesion and the working atmosphere. Thus, establishing a basis of trust is crucial.
Limitations
It is important to stress that the hypotheses and the path model explored in this study are exploratory, because the preregistration took place at the beginning of the university seminar and covered our methods as well as the individual students’ hypotheses. After the seminar, it became clear that the manipulation had little to no direct effect on the dependent variables mentioned in the preregistration (e.g., team cohesion), so this paper explores whether role perceptions play a mediating role.
Additionally, as mentioned above, vignettes as stimuli come with limited external validity (Groß & Börensen, 2009). Vignettes may simply not produce role perceptions involving cognitive and affective components to the same extent as experiencing the actual situation would (Collett & Childs, 2011), so that participants base their ratings on prior experience. Conclusions about the effects of the vignettes should therefore be drawn carefully. Although all participants who stated that they were unable to imagine the given scenario were excluded, this is only a subjective assessment. In a follow-up study, we will address the issue of vignette immersion as well as an even clearer, more nuanced differentiation of AI roles, e.g., by using video snippets of an interaction with AI. This is intended to increase the influence of the experimentally manipulated AI roles on people’s perception and lead to clearer results regarding the factors influencing cohesion in human–AI teams. Furthermore, no substantial difference was found between the two types of AI perception, which may indicate that they cannot be regarded as dichotomous. It may be possible to use AI as a tool while at the same time considering it a teammate.
It should also be noted that team cohesion and teammate perception correlate substantially with each other (.80). This might indicate a possible redundancy of the two constructs. Looking into the definitions of both as stated by Wynne and Lyons (2018), it becomes clear that an AI system perceived as a “communicative and synchronized agentic teammate” (p. 355) might also evoke motivation to work together. The differentiation between the two is thus unclear. Nevertheless, important differences between the scales should not be neglected: While the team cohesion scale asks to what extent participants are willing to correct mistakes in the team, the team partner perception scale asks whether the AI is a helpful team partner.
Theoretical and Practical Implications
In the following, we look deeper into how the perception of AI might influence perceived team cohesion. Our study supports the findings of Rix (2022) that, in order to work together on common goals, one has to perceive the AI as an “other”, while working with AI as a tool is a unidirectional interaction. Our results show that the AI in this study was probably not strongly perceived as a social partner, even though it was presented as such in the scenario through framing and, e.g., proactive behavior. It remains to be investigated whether this is due to an insufficient operationalization of teammate-likeness according to the model of Wynne and Lyons (2018), to a lack of real interaction and team development, or simply to the fact that it is hard to imagine current systems being perceived as partners. Further studies should follow the definitions by Wynne and Lyons (2018) more systematically and implement longitudinal designs with longer-term interactions with existing or simulated AI systems in different roles. Simultaneously, theory is needed to systematize the roles of AI within a team.
Trust in AI can be a good starting point for strengthening team cohesion under certain circumstances. The moderating effect of trust shows potential for companies to support teaming by actively promoting trust in AI teammates. Companies could therefore offer targeted training programs or discourse formats to promote adequate use of AI and the exchange of best practices. Employees can be actively involved in the process of establishing AI in the company, which also gives them the opportunity to express concerns (Lee & See, 2004). This can reduce doubts and foster trust, which in turn helps people experience cohesion with AI as a collaborator, which is related to positive outcomes (Grossman et al., 2022). However, it is important to note that the moderating effect also means that trusting AI tools correlates with lower team cohesion. A possible explanation might be that learning that AI is reliable strengthens trust but also the impression that AI does not act on its own but on behalf of the human worker, which weakens feelings of team cohesion (based on the definitions of trust and team cohesion, see above). When working with AI as a tool, team cohesion might nevertheless not be as favorable as in situations with AI teammates, and promoting trust might not be beneficial. Future research could aim to identify more favorable outcome variables for companies working with AI tools.
Conclusion
As team cohesion is a variable with a crucial influence on performance, organizations working with AI as a team member should aim to increase it. In this context, it is beneficial if AI is seen as a teammate rather than a tool. Furthermore, in human–AI teams, trust can help strengthen team cohesion, but not if AI is perceived as a tool. Still, team cohesion is not the only relevant outcome variable for modern companies and, depending on a company’s needs, AI tools might be more suitable. Further research could address the direct effect of role perceptions on the performance or work satisfaction of workers who use, or work together with, AI. As this study has shown, interpersonal factors normally researched only in human teams, such as team cohesion and trust, are also relevant when working with technological systems and need to be properly addressed in theory, research, and organizations.