The usability of computer-assisted interpreting (CAI) tools has largely been studied under highly controlled conditions. Research in settings that more closely resemble interpreters’ working conditions could provide a deeper understanding of how CAI tools can currently support professional interpreting, particularly in challenging assignments and settings characterised by high complexity. This paper reports on a usability study of the CAI tool SmarTerp conducted during a multilingual meeting at the European Patent Office. Six experienced interpreters worked simultaneously from three remote ‘dummy’ booths (German, English, French). This context involved fast, impromptu speech, a high density of problem triggers, and highly specialised terminology, offering a test of CAI performance under demanding conditions. Data were collected through interpreter observations recorded in digital logs, a post-task usability questionnaire, and focus groups. The findings emphasise the importance of tailoring the automatic speech recognition engine to handle highly technical content, as well as the necessity of providing targeted training to help interpreters integrate CAI tools effectively. The results also point to the value of complementing (quasi-)experimental studies with further research in naturalistic settings, both to evaluate CAI tools’ usability and to guide their ongoing development.