The growing integration of Artificial Intelligence (AI) systems into knowledge production, decision-making, and everyday activities has reinvigorated debates about the nature and scope of trust and reliance. The standard view distinguishes trust from reliance: trust is an agent-directed attitude with ethical and normative dimensions, whereas reliance requires neither and can also be directed at objects and processes. On agent-directed accounts of trust, AI therefore cannot be trusted, because it lacks agency. This article examines three alternative frameworks that challenge agent-directed accounts and support trust in AI. These frameworks better reflect human-AI interactions, align with everyday language, and fit current AI policy discussions. Critics argue, however, that they overlook the core ethical features of trust. Standard views preserve these features but misrepresent how users actually engage with AI. Whether we can truly trust AI systems or merely rely on them thus remains a central open problem in AI epistemology, with important policy implications.