This chapter briefly surveys relevant aspects of current research on the control of interactive
(music) systems, highlighting research issues, results achieved so far, and problems that
remain open for future work. A particular focus is on multimodal and cross-modal techniques
for the expressive control of sound and music processing and synthesis. The chapter discusses a
conceptual framework, methodological aspects, and research perspectives. It also presents
concrete examples and tools, such as the EyesWeb XMI platform and the EyesWeb Expressive
Gesture Processing Library.