In recent years, dramatic technological advances have seen Artificial Intelligence (AI) expand into ever more areas of our contemporary world. Decisions and actions that were once made by humans alone are increasingly becoming, through AI, the complex outcomes of human-machine collaboration. Alongside this, governments and organisations around the world are coming to frame the roadblocks to widespread positive and productive collaboration with AI in terms of its ‘trustworthiness’ (Choung et al., 2022). In this paper, we elaborate a new account of the intimate relationship between trust and collaboration, drawing on innovative recent philosophical work that reimagines trust as a ‘material force of collective coherence’ (Fisch, 2018). We explore three key implications of this reworked understanding of trust for how we might rethink the potentials and risks of human-AI collaboration today. First, and against subjectivist models that frame trust as a psychological property of an already-constituted individual, we rethink trust as a collective process that emerges from our relations with the material, technological, and nonhuman forces comprising the intensive environments in which we think and act. Second, and as a result of this collective dynamism, trust is not a fixed subjective disposition but a volatile force of coherence with the immanent capacity to be transformed by unexpected events or encounters. Finally, and against the cognitivist models of trust that define much of the writing on AI, we foreground the noncognitive and unconscious dynamics of trust. Trust, we argue, is as much an affective capacity as it is a cognitive attitude. Drawing on findings from qualitative research into people’s everyday practices of using AI-enabled navigation technologies, our paper shows how this recalibrated understanding of trust generates new insights into how collaboration with AI is understood, experienced, and performed today.