The past through the future: a hypermedia model for handling the information stored in the audio documents
CANAZZA Sergio • DATTOLO Antonina
2009
journal article
Journal
JOURNAL OF NEW MUSIC RESEARCH
Abstract
The use of hypertextual structures has become very popular in humanities electronic critical editions. It provides a way of making connections between pieces of information, thus modelling what many humanities scholars actually do. Because hypertext has been popularized by the World Wide Web more than anything else, the linking mechanisms are fairly weak. There are two main open issues: the encoding models used lack separation between structure and content, and in recent years a strong demand to handle various media (text, images, audio and video) has emerged. Thus, it is now time to take one step further: in this paper, we describe PSYCHO-MAD, a Powerful SYstem with Charming Hypermedia Objects for Music Audio Documents, a non-hierarchical hypermedia model for handling the information stored in audio memories, based on an extension of zz-structures. The cooperation activities of different classes of actors allow the user to create new virtual hyperdocuments and dynamic views (useful, for example, in performances of electroacoustic open works or in ethnomusicological events). By adopting Vannevar Bush’s point of view, the model herein elaborated connects, without preconceived limitations, documents stored in different media: annotations made by the author, scores, room programs, critical reviews, setting photos, sound recordings and video shootings. © 2009 Taylor & Francis.
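The abstract describes PSYCHO-MAD as an extension of zz-structures, Ted Nelson's non-hierarchical data model in which cells are linked along named dimensions, with at most one neighbour per side per dimension. The following is a minimal illustrative sketch of that base model only, not of the paper's extension; all class and function names are assumptions introduced here for illustration.

```python
# Illustrative sketch of a basic zz-structure (Nelson's zzstructure),
# the data model the paper's abstract says PSYCHO-MAD extends.
# Names (ZzCell, connect, rank) are hypothetical, not from the paper.

class ZzCell:
    """A cell holding content, linkable along named dimensions."""
    def __init__(self, content):
        self.content = content
        # dimension name -> {"pos": next cell, "neg": previous cell}
        self.links = {}

def connect(a, b, dim):
    """Link a -> b along dimension `dim`.
    Each side of a cell may hold at most one neighbour per dimension,
    which is the defining constraint of a zz-structure."""
    a_side = a.links.setdefault(dim, {})
    b_side = b.links.setdefault(dim, {})
    if "pos" in a_side or "neg" in b_side:
        raise ValueError(f"side already occupied on dimension {dim}")
    a_side["pos"] = b
    b_side["neg"] = a

def rank(cell, dim):
    """Walk forward along `dim` from `cell`, returning the chain of contents."""
    out = [cell.content]
    cur = cell
    while "pos" in cur.links.get(dim, {}):
        cur = cur.links[dim]["pos"]
        out.append(cur.content)
    return out

# Example: heterogeneous documents (as in the abstract) linked along one dimension
score = ZzCell("score")
recording = ZzCell("recording")
review = ZzCell("review")
connect(score, recording, "d.document")
connect(recording, review, "d.document")
```

Because a cell can participate in many dimensions at once, the same recording could also sit in, say, a "d.performance" chain, which is what makes the model non-hierarchical.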
DOI
10.1080/09298210903388947
WOS
WOS:000274674900004
Archive
http://hdl.handle.net/11390/878952
info:eu-repo/semantics/altIdentifier/scopus/2-s2.0-85007587081
https://www.tandfonline.com/doi/abs/10.1080/09298210903388947
Rights
closed access
Scopus© citations
6
Acquisition date
Jun 2, 2022
Web of Science© citations
6
Acquisition date
Mar 26, 2024