Artificial Intelligence (AI) is rapidly transforming how we communicate, challenging the notion that advanced language skills, and by extension translation and interpreting, are an exclusively human faculty. Following Floridi's (2025) theory of AI as agency without intelligence, we argue that, from an outcome-oriented perspective, AI may, in the short to medium term, approach, and in some instances even surpass, human performance in spoken language translation without needing to exhibit human-like intelligence. This development suggests that AI applications will act as interpreter agents in their own right, sharing the stage with human professionals, who may, in turn, increasingly rely on AI as a supportive technology. Despite the rise of such non-intelligent but highly capable agents, compelling reasons may remain to prefer human interpreters, at least in some contexts and conditions. This position paper: (i) proposes a conceptual framework for analysing machine interpreting as an instance of agency without intelligence; (ii) identifies key dimensions that will shape whether AI or human agents are preferred in specific contexts; and (iii) outlines the implications for research, policy, and practice.