James Kippen est un des spécialistes incontournables de la musique hindoustanie. Sa rencontre en 1981 avec Afaq Hussain, alors doyen d’une des grandes lignées de joueurs de tablā, est le point de départ d’importantes recherches sur cet instrument et sur les rythmes indiens. Il a occupé de 1990 à 2019 la chaire d’ethnomusicologie de la Faculty of Music de l’Université de Toronto (Canada). Formé à l’école de John Blacking et de John Baily, il acquiert parallèlement au cours de ses recherches la maîtrise de certaines langues indo-persanes. Cette habileté lui permet l’analyse de première main de nombreuses sources (traités de musique, manuscrits de musiciens, généalogies, iconographies…) et d’appréhender les différents contextes socio-culturels indiens et leurs mutations depuis le XVIIIe siècle (cours indo-persanes, empire colonial britannique, montée du nationalisme, post-colonialisme). Son travail (voir la liste de ses publications en fin d’entretien) s’impose comme une contribution majeure à la compréhension des pratiques relatives au rythme et au mètre en Inde. J’ai commencé à correspondre avec James Kippen lors de mes propres recherches sur le tablā à la fin des années 1990. Toujours prompt à partager ses connaissances et son expérience avec enthousiasme, il me donna de nombreux conseils et encouragements, et ce fut un grand honneur de le compter parmi les membres de mon jury de thèse lors de ma soutenance en 2004. C’est avec la même envie de transmettre qu’il a répondu favorablement à ma proposition d’entretien. Réalisé à distance entre juillet et décembre 2020, cet échange, à l’origine en anglais, relate près de quarante années de recherches ethnomusicologiques.
Traduction : Olivia Levingston et Antoine Bourgeau – Octobre 2021.
Comment en es-tu venu à t’intéresser aux musiques de l’Inde et au tablā en particulier ?
J’ai grandi à Londres, et déjà enfant j'étais fasciné par les différentes langues et cultures qui étaient introduites progressivement en Grande-Bretagne par les immigrants. J’étais particulièrement séduit par les petites épiceries regorgeant de produits exotiques et par les restaurants indiens qui dégageaient des arômes épicés alléchants. Mon père me parlait souvent de ses aventures pendant les sept années qu'il avait passées en Inde en tant que jeune soldat, et j'ai donc développé une image très attrayante, bien qu’orientaliste, du sous-continent indien. Pendant ma licence de musique à l'Université de York (1975-78), mon ami et camarade Francis Silkstone m'a fait connaître le sitār. J'ai également eu la chance de suivre un cours intensif de musique hindoustanie avec l'enseignant Neil Sorrell, qui avait étudié la sāraṅgī avec le célèbre Ram Narayan. La littérature disponible à cette époque était relativement rare, mais deux textes en particulier étaient tout de même très influents : « Tabla in Perspective » de Rebecca Stewart (UCLA, 1974), qui a nourri en moi un intérêt musicologique pour les variétés et les complexités du rythme et du jeu des percussions, et « The Cultural Structure and Social Organization of Musicians in India : the Perspective from Delhi » de Daniel Neuman (Université de l'Illinois, Urbana-Champaign, 1974), un aperçu socio-anthropologique du monde des musiciens traditionnels et héréditaires indiens et de leurs points de vue.
J’ai donc commencé à apprendre le tablā à partir des disques 33 tours et des livrets de Robert Gottlieb intitulés « 42 Lessons for Tabla », et après quelques mois, j'avais appris suffisamment de techniques de base pour accompagner F. Silkstone lors d’un récital. J'ai ensuite été l’élève de Manikrao Popatkar, un excellent joueur de tablā professionnel qui venait d’immigrer en Grande-Bretagne. J'étais « accro » ! De plus, la pensée que je pourrais entrer dans ce monde socio-musical du tablā en Inde en qualité de participant-observateur m'a motivé à chercher des programmes d'études supérieures où je pourrais développer mes connaissances et compétences tout en combinant les approches musicologiques et anthropologiques de R. Stewart et D. Neuman. Sur les conseils de N. Sorrell, j'ai donc écrit à John Blacking au sujet de la possibilité d'étudier à l'Université Queen's de Belfast, et John a été très encourageant, en m'offrant une entrée directe au programme de doctorat. Il a également souligné que son collègue John Baily avait récemment écrit un texte : « Krishna Govinda's Rudiments of Tabla Playing ». J'avais trouvé le programme d'études supérieures idéal et des guides parfaits.
Approches méthodologiques
« How Musical Is Man » de J. Blacking est un texte fondamental paru en 1973, à contre-courant de la pensée de l’époque, refusant les frontières entre musicologie et ethnomusicologie ainsi que les oppositions stériles entre les traditions musicales. J. Blacking avance également l’idée essentielle que la musique, même si ce mot n’existe pas partout, est présente à travers toutes les cultures humaines, en ce qu’elle résulte du « son humainement organisé ». Sais-tu s’il connaissait les propos d’E. Varèse ? Voulant lui aussi se démarquer de la signification occidentale du concept de « musique », bien que pour d’autres raisons, il avait avancé en 1941 l’expression de « son organisé ».
Je ne me souviens pas que J. Blacking ait mentionné Varèse ou ses réflexions sur la nature de la musique. John était par contre un excellent musicien et pianiste qui avait sans doute rencontré et étudié une bonne partie de la musique savante occidentale, et il est donc possible qu'il ait connu la définition de Varèse. Cependant, alors que la philosophie de Varèse est née de la conviction que les machines et les technologies seraient capables d'organiser le son, J. Blacking a voulu porter l’attention sur la musique comme fait social : une activité où la multitude des façons dont les êtres humains produisent et perçoivent les sons, comme interprètes et surtout comme auditeurs, permettrait de révéler beaucoup de choses sur leur structure sociale.
En quoi tes études universitaires ont-elles orienté tes recherches ?
J'ai eu la chance d'avoir non pas un mais deux mentors : J. Blacking et J. Baily, tous deux très différents. J. Blacking regorgeait d’idées, grandes et inspirantes, qui ont défié et révolutionné la façon dont on pense la musique et la société, tandis que J. Baily a mis l'accent sur une approche plus méthodique et empirique fondée sur la performance musicale et sur la gestion scrupuleuse de l'acquisition et de la documentation des données.
Il ne faut pas oublier que j'étais jeune et inexpérimenté lorsque j'ai entrepris ce travail de terrain, et donc l'exemple de J. Baily, axé sur la musique et la collecte de données, m’a servi de guide pratique dans ma vie quotidienne pendant mes années en Inde. Et une fois en possession d'un énorme corpus de données, j'ai pu prendre du recul et, inspiré par J. Blacking, identifier certaines des grandes tendances que ces données mettaient en lumière. J'ai donc été frappé par le récit cohérent du déclin culturel lié à la nostalgie d'un passé glorieux et artistiquement abondant, et la tradition musicale du tablā de Lucknow était l'un des derniers liens vivants avec ce monde perdu. Cela est devenu l'un des thèmes clés de ma thèse de doctorat et de certains des autres travaux qui ont suivi. Quant à ma carrière d'enseignant, j'ai essayé au fil des ans de combiner les meilleures qualités de mes deux maîtres, tout en promouvant toujours l'idée que, dans les recherches portant sur la musique et la vie musicale, la théorie devrait naître à partir de données solides et ne jamais ignorer le dialogue avec la réalité ethnographique afin de préserver ainsi sa valeur heuristique.
Dans « Working with the Masters » (2008), tu décris avec détails et franchise (ce qui est assez rare dans la profession !…) ton expérience de terrain dans les années 1980 avec Afaq Hussain. Cette expérience, et le récit que tu en fais, apparaissent comme un modèle pour toute recherche en ethnologie et en ethnomusicologie avec la particularité de l’apprentissage musical. Tu rends compte ainsi des phases d’approche, de rencontre, de test et, enfin (et heureusement dans ton cas) d’acceptation au sein de l’environnement étudié et de la confiance accordée permettant de déployer pleinement ses intentions de recherche et d’apprentissage musical. Tu abordes aussi les réflexions éthiques et déontologiques indispensables au chercheur : relation aux autres, conflits de loyauté résultant des possibles dissonances entre le rapport à l’informateur et les objectifs ethnographiques, responsabilités vis-à-vis du savoir récolté et place du chercheur-musicien dans la réalité musicale de la tradition étudiée. Au-delà des particularités du contexte musical, y a-t-il des spécificités indiennes que les chercheurs occidentaux doivent avoir en tête pour entreprendre (et espérer réussir) une étude ethnologique en Inde ?
La société sud-asiatique a, c’est une évidence, énormément changé au cours des quarante années qui se sont écoulées depuis que j'ai commencé à mener des recherches ethnographiques. Mais certains principes, ceux qui devraient guider le processus d'enquête, demeurent inébranlables. C’est le cas du profond respect pour l'ancienneté, qu’elle soit sociale ou culturelle. Naturellement, l'accès à une communauté est la clé de voûte, et il n'y a pas de meilleur « gatekeeper » ou « sponsor » (pour utiliser les termes anthropologiques) qu'une figure d'autorité au sein de la sous-culture que l'on étudie, puisque la permission que l'on reçoit se répercute sur la hiérarchie sociale et familiale. Le danger, dans une société fortement patriarcale comme celle de l'Inde, est que l'on se retrouve avec une vision hiérarchique descendante de la vie musicale. Si j'avais l'occasion de reprendre mes recherches dans ce domaine, j'accorderais une plus grande attention à ceux qui se trouvent à différents niveaux de cette hiérarchie, en particulier aux femmes et à la musicalité quotidienne de la vie dans la sphère domestique. En se concentrant uniquement sur les aspects les plus raffinés de la production culturelle, on peut passer à côté de ce qui a de la valeur dans la formation des idées, de l'esthétique et des mécanismes de soutien nécessaires à la survie et à l'épanouissement d'une tradition artistique.
Sur une note plus pragmatique (et qui concerne plus souvent il me semble les aspects relatifs au travail sur le terrain), j'ai trouvé que les entretiens formels enregistrés étaient rarement très fructueux parce qu'ils étaient ressentis comme intimidants et étaient accompagnés d'attentes élevées. En outre, une sensibilité accrue aux ramifications politiques – micro et macro – nous engageant à parler selon nos convictions, représentait souvent un obstacle à la collecte d'informations. En vérité, officieusement et dans des circonstances détendues, moins je demandais et plus j'écoutais, plus l'information que je recevais était utile et intéressante. La mise en garde est que pour fonctionner de cette manière, il faut développer un niveau de patience que la plupart des Occidentaux auraient du mal à accepter.
Tu adoptes dans les années 1980 l’« approche dialectique » enseignée par J. Blacking en y associant l’informatique et un programme d’IA. Le but était d’analyser les fondements du jeu improvisé des joueurs de tablā. Peux-tu revenir sur la genèse et l’évolution de cette approche ?
J. Blacking était particulièrement intéressé par le travail de Noam Chomsky sur les grammaires transformationnelles. Il théorisait que l'on pouvait créer des ensembles de règles pour la musique – une grammaire – avec plusieurs « couches » ; la première décrirait comment les structures sonores de surface sont organisées. Les autres, plus profondes, comprendraient des règles abordant des principes de plus en plus généraux sur l'organisation musicale et, au niveau le plus profond, la grammaire formaliserait les règles régissant les principes de l'organisation sociale. Si le but ultime d'un ethnomusicologue est de relier la structure sociale à la structure sonore, ou vice versa, alors telle était l'idée que J. Blacking défendait pour y parvenir.
L’été 1981, j'ai fui la chaleur intense des plaines du nord de l'Inde et me suis réfugié près de Mussoorie dans les contreforts de l'Himalaya. J'avais convenu de retrouver mon ami F. Silkstone, qui à l'époque étudiait le sitār avec Imrat Khan et le dhrupad avec Fahimuddin Dagar à Calcutta. Francis est arrivé avec Fahimuddin et l'un des étudiants américains de Fahim, Jim Arnold. Jim et Bernard Bel (un informaticien et mathématicien qui vivait à l'époque à New Delhi) travaillaient ensemble sur un projet expérimental sur l'intonation dans le rāga. Bernard est alors arrivé à Mussoorie, également pour échapper à la chaleur, et pendant environ un mois nous avons tous vécu ensemble dans un environnement riche et fertile de musique et d'idées. C'est là que Bernard et moi avons discuté pour la première fois de la notion des grammaires socio-musicales de J. Blacking, ainsi que de ma fascination pour un type de composition des joueurs de tablā à structure de thème et variations, connu sous le nom de qāida. J’ai été très intrigué d'apprendre que Bernard pouvait concevoir un programme informatique capable de modéliser le processus de création de variations à partir d'un thème donné.
L'année suivante, Bernard et moi nous sommes rencontrés à plusieurs reprises : il en a appris beaucoup plus sur le fonctionnement du tablā et j'ai beaucoup appris sur la linguistique mathématique. Ensemble, nous avons créé des ensembles de règles – des grammaires transformationnelles – qui ont généré des variations à partir d'un thème de qāida et traité des variations existantes pour déterminer si nos règles pouvaient en rendre compte. Mais il était évident que les connaissances modélisées étaient les miennes et non celles de musiciens experts. Alors nous avons développé une stratégie pour impliquer ces experts en tant que « collaborateurs et analystes » (une expression souvent utilisée par J. Blacking) dans un échange dialectique. Après tout, un « système expert » était destiné à modéliser les connaissances d'experts, et il n'y avait pas de meilleur expert qu'Afaq Hussain.
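Pour donner une idée du principe (et seulement du principe : les règles réellement mises en œuvre dans le Bol Processor étaient bien plus riches et contraintes), voici une esquisse minimale, en Python, d'une grammaire de réécriture qui engendre des « variations » en recombinant des cellules empruntées à un thème hypothétique. Les bols, les cellules et les règles sont purement illustratifs et ne proviennent d'aucun qāida réel.

```python
import random

# Esquisse illustrative : une grammaire de réécriture jouet.
# Chaque symbole non terminal se réécrit en l'une de ses productions ;
# les symboles absents du dictionnaire sont des terminaux (des bols).
RULES = {
    "S":       [["VAR", "CADENCE"]],            # une variation se termine par la cadence du thème
    "VAR":     [["A", "A", "B"],                # réordonnancements admis des cellules du thème
                ["A", "B", "A"],
                ["A", "A", "A", "B"]],
    "A":       [["dha", "dha", "ti", "ta"],     # cellules (hypothétiques) tirées du thème
                ["dha", "ti", "dha", "ti"]],
    "B":       [["dha", "ge", "ti", "ta"]],
    "CADENCE": [["dha", "dha", "tu", "na"]],
}

def engendrer(symbole="S"):
    """Développe récursivement un symbole en choisissant une production au hasard."""
    if symbole not in RULES:          # terminal : un bol
        return [symbole]
    production = random.choice(RULES[symbole])
    resultat = []
    for s in production:
        resultat.extend(engendrer(s))
    return resultat

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(engendrer()))
```

L'exemple ne manipule que des suites de bols ; il ignore la durée, l'accentuation et les contraintes métriques dont le travail décrit ici devait évidemment tenir compte.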
Avais-tu connaissance d’autres types de démarches interactives comme celle du re-recording développée un peu plus tôt par S. Arom ?
J'étais au courant des méthodes interactives de S. Arom pour obtenir les propres perspectives des musiciens sur ce qui se passait dans leur musique, tout comme j'étais au courant des travaux en anthropologie cognitive visant à déterminer les catégories cognitives significatives pour les personnes que nous étudiions. La thèse de S. Arom selon laquelle les données culturelles devaient être validées par nos interlocuteurs a certainement été très influente. Je ne connaissais pas d'autres approches. Les exigences de notre situation expérimentale particulière nous ont obligés à inventer notre propre méthodologie unique pour ce processus d’interaction homme-machine.
On connaît la crainte des maîtres indiens d’une diffusion de leurs savoirs au-delà de leur gharānā, en particulier de certaines techniques et compositions. Quelles étaient l’attitude et l’implication d’Afaq Hussain dans cette démarche qui mettait au jour les structures des qāida ?
Afaq Hussain n'était pas du tout préoccupé par les révélations concernant le qāida puisque l'art de les jouer dépendait de sa capacité à improviser. En d’autres termes, il s’agissait d’une activité axée sur les processus et donc en constante évolution. A l’inverse, les compositions fixes, en particulier celles transmises de génération en génération au sein de la famille, ne changeaient pas. Celles-ci étaient considérées comme des atouts précieux et étaient soigneusement gardées.
Lorsque je repense aux expériences scientifiques, je m'étonne que Bernard ait pu créer une grammaire générative aussi puissante pour un ordinateur (d'abord un Apple II avec 64k RAM, puis le portable 128k Apple IIc) avec une puissance de traitement et un espace mémoire aussi limités. Afaq Hussain s'est également étonné qu'une machine « puisse penser », pour reprendre son expression. Nous avons commencé par une grammaire de base pour un qāida donné, puis généré quelques variations, et je les ai ensuite lues à voix haute en utilisant le langage syllabique du tablā, les bols. De nombreux résultats étaient prévisibles, certains étaient inhabituels mais néanmoins acceptables, et d'autres ont été jugés erronés – techniquement et esthétiquement. Nous avons ensuite demandé à Afaq Hussain de proposer ses propres variations ; celles-ci ont été introduites dans l'ordinateur (j’ai effectué la saisie en utilisant un système de correspondance de touches pour gagner en rapidité) et « analysées » pour déterminer si les règles de notre grammaire pouvaient en rendre compte. De simples ajustements aux règles étaient possibles in situ, mais lorsqu'une reprogrammation plus complexe était nécessaire, nous passions à un deuxième exemple et revenions à l’exemple d'origine dans une session ultérieure.
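L'autre volet du dispositif, vérifier si une variation proposée par le maître est dérivable par la grammaire courante, peut s'esquisser par un simple test d'appartenance (recherche exhaustive, suffisante pour une grammaire jouet ; cela ne prétend pas refléter l'implémentation réelle du Bol Processor) :

```python
def derivable(bols, regles, depart="S"):
    """Vrai si la suite `bols` peut être dérivée du symbole `depart`
    avec les règles `regles` (grammaire jouet, sans récursivité à gauche)."""
    def developper(symboles, reste):
        if not symboles:                      # plus rien à dériver
            return len(reste) == 0
        tete, queue = symboles[0], symboles[1:]
        if tete not in regles:                # terminal : doit correspondre au bol suivant
            return bool(reste) and reste[0] == tete and developper(queue, reste[1:])
        # non terminal : essayer chacune de ses productions
        return any(developper(tuple(prod) + queue, reste) for prod in regles[tete])
    return developper((depart,), tuple(bols))

# Avec les règles RULES de l'esquisse précédente :
# derivable("dha dha ti ta dha ge ti ta dha dha ti ta dha dha tu na".split(), RULES)  # -> True
```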
Est-ce que ces recherches ont concerné d’autres types de composition comme les gat ou les tukra ?
Non. L'intérêt d’une structure à thème et variations comme celle du qāida tient au fait que chaque composition est un système fermé où les variations (vistār) sont limitées aux éléments présentés dans le thème. Le but est donc de comprendre les règles non écrites qui régissent la création des variations. Les compositions fixes comme les gat, ṭukṛā, paran, etc., comprennent une variété d'éléments beaucoup plus large et plus imprévisible, et seraient ainsi très difficiles à modéliser. Cependant, nous avons pu expérimenter sur le type de composition appelé tihāī : la phrase répétée trois fois qui agit comme une cadence rythmique finale. La tihāī peut être modélisée mathématiquement : on obtient une formule arithmétique dans laquelle on peut insérer des phrases rythmiques, et qui peut ensuite être appliquée soit à un qāida (à un fragment de son thème ou à l'une de ses variations), soit à des compositions fixes comme le ṭukṛā.
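À titre d'illustration, et avec une convention de comptage qui n'engage que nous (les pédagogues en utilisent plusieurs) : si la première frappe de la tihāī tombe sur la mātrā s d'un cycle de n mātrās et que sa dernière frappe doit coïncider avec le sam suivant, une phrase de p mātrās séparée par des silences de g mātrās convient dès que 3p + 2g = n - s + 2. On peut alors énumérer les solutions :

```python
def options_tihai(cycle=16, depart=1, silence_max=4):
    """Énumère les couples (longueur de phrase, silence), en mātrās, pour une
    tihāī commençant sur la mātrā `depart` d'un cycle de `cycle` mātrās et
    dont la dernière frappe tombe sur le sam suivant.
    Convention supposée ici : 3*p + 2*g = cycle - depart + 2."""
    cible = cycle - depart + 2
    solutions = []
    for g in range(silence_max + 1):
        reste = cible - 2 * g
        if reste > 0 and reste % 3 == 0:
            solutions.append((reste // 3, g))
    return solutions

# En tīntāl (16 mātrās), en partant de la mātrā 1 : options_tihai() -> [(5, 1), (3, 4)],
# soit par exemple une phrase de 5 mātrās répétée trois fois avec deux silences d'une mātrā.
```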
Est-ce que certaines phrases rythmiques générées par l’ordinateur et validées par Afaq Hussain ont intégré le répertoire du gharānā de Lucknow ?
C'est une question difficile. Lorsque nous étions au milieu d'une période intensive d'expérimentation avec le « Bol Processor », une sorte de dialogue s'engageait, dans lequel Afaq Hussain alternait des phrases rythmiques générées par ordinateur avec des ensembles de variations qui lui étaient propres. Tant de compositions ont été générées et alternées de cette manière qu'il était souvent difficile de savoir si le répertoire qu'il jouait en concert provenait de l'ordinateur ou non. Pourtant, alors que certains enseignants et interprètes développent un répertoire de variations fixes provenant d’un thème, Afaq Hussain, lui, l'a rarement fait, s'appuyant plutôt sur son imagination « dans l'instant ». C'est aussi l'approche qu'il a encouragée en nous. Par conséquent, je doute que le matériel généré par ordinateur soit devenu une partie permanente du répertoire.
Est-ce que ce type d’approche spécifique utilisant l’IA en ethnomusicologie a été poursuivi par d’autres ?
Le terme « Intelligence Artificielle » a fait l'objet d'un changement radical dans les années 1980-1990 grâce au développement de l'approche « connexionniste » (les neurones artificiels) et de techniques d'apprentissage à partir d'exemples capables de traiter une grande masse de données. Avec le Bol Processor (BP) nous étions au stade de la modélisation symbolique-numérique de décisions humaines représentées par des grammaires formelles, ce qui exigeait une connaissance approfondie, bien qu'intuitive, des mécanismes de décision.
Pour cette raison, les approches symboliques-numériques n'ont pas été reprises par d'autres équipes à ma connaissance. En revanche, nous avions aussi abordé l'apprentissage automatique (de grammaires formelles) à l'aide du logiciel QAVAID écrit en Prolog II. Nous avons ainsi montré que la machine devait collecter des informations en dialoguant avec le musicien pour effectuer une segmentation correcte des phrases musicales et amorcer un travail de généralisation par inférence inductive. Mais ce travail n'a pas été poursuivi car les machines étaient trop lentes et nous ne disposions pas de corpus assez grands pour construire un modèle couvrant une grande variété de schémas d'improvisation.
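Pour fixer les idées, et sans prétendre reproduire QAVAID, voici une généralisation inductive rudimentaire : à partir de quelques variations découpées en cellules de taille fixe (c'est précisément cette segmentation que le vrai système devait négocier en dialoguant avec le musicien), on relève les alternatives observées à chaque position, ce qui donne l'ébauche d'une règle de production.

```python
def generaliser_par_cellules(exemples, taille=4):
    """Généralisation inductive jouet : pour chaque position de cellule,
    recense les cellules observées dans les exemples fournis.
    Le découpage en cellules de taille fixe est une hypothèse simplificatrice."""
    positions = {}
    for exemple in exemples:
        cellules = [tuple(exemple[i:i + taille]) for i in range(0, len(exemple), taille)]
        for i, cellule in enumerate(cellules):
            positions.setdefault(i, set()).add(cellule)
    return positions

exemples = [
    "dha dha ti ta dha ge ti ta dha dha tu na".split(),
    "dha ti dha ti dha ge ti ta dha dha tu na".split(),
]
for position, alternatives in sorted(generaliser_par_cellules(exemples).items()):
    print(position, [" ".join(c) for c in sorted(alternatives)])
```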
Il se peut que des chercheurs indiens fassent appel à de l'apprentissage à partir d'exemples – qu'on appelle aujourd'hui « Intelligence Artificielle » – pour traiter de grandes masses de données produites par des percussionnistes. Cette approche « big data » a le défaut de manquer de précision dans un domaine où la précision est un marqueur d'expertise musicale, et de ne pas produire des algorithmes compréhensibles qui constitueraient une « grammaire générale » de l'improvisation sur un instrument de percussion. Notre ambition initiale était de contribuer à la construction de cette grammaire, mais nous avons seulement prouvé, avec la technologie de l'époque, que ce serait réalisable.
Dans les versions ultérieures, ce logiciel a pu procurer également de la matière et des outils pour le travail de composition en musique et en danse au-delà du contexte indien. On fêtera en 2021 les 40 ans de ce logiciel avec une nouvelle version. Quels sont les artistes qui ont utilisé le logiciel ?
Des compositions rythmiques programmées sur BP2 et interprétées sur un synthétiseur Roland D50 ont été utilisées pour l'œuvre chorégraphique CRONOS dirigée par Andréine Bel et produite en 1994 au NCPA de Bombay. Voir par exemple https://bolprocessor.org/shapes-in-rhythm/.
A la fin des années 1990, le compositeur néerlandais H. Visser a utilisé BP2 pour contribuer au développement d'opérateurs permettant la composition de musique sérielle. Voir par exemple https://bolprocessor.org/harm-vissers-examples/.
Nous avons eu des retours et demandes d'universitaires européens et américains qui utilisent BP2 comme outil pédagogique pour l'enseignement de la composition musicale. Mais nous n'avons jamais fait de campagne « publicitaire » à grande échelle pour agrandir la communauté d'utilisateurs, étant intéressés en priorité par le développement du système et la recherche musicologique qui lui est associée.
La principale limite de BP2 était son fonctionnement exclusif dans l'environnement Mac. C'est pourquoi la version BP3 en cours de développement est multiplateforme. Elle sera vraisemblablement mise en service en version « Cloud », rendue possible par son interaction étroite avec le logiciel Csound. Ce logiciel permet de programmer des algorithmes performants de production sonore et de travailler avec des modèles d'intonation microtonale que nous avons développés, aussi bien pour la musique harmonique que pour le raga indien – voir https://bolprocessor.org/category/related/musicology/.
Études de la notation, du mètre, du rythme et de leurs évolutions
Au fil de ton travail, la question de la notation musicale occupe une place importante autant sur le plan de la méthodologie que sur celui de la réflexion à propos de son usage. Tu as mis en place ton propre système afin de représenter le plus rigoureusement possible tes analyses des compositions de tablā et de pakhāvaj. Peux-tu nous parler de cet aspect de ton travail ?
Toutes les notations écrites sont des approximations incomplètes et leur contribution au processus de transmission est limitée. Les représentations orales, comme les suites de syllabes énoncées (bols) représentant des frappes de percussion, transmettent souvent des informations plus précises sur la musicalité inhérente aux motifs : accentuation, inflexion, phrasé, variabilité micro-rythmique. De même, une fois intériorisées, ces syllabes sont indélébiles. Nous savons que les systèmes oraux favorisent une bonne mémoire musicale, ce qui est particulièrement important dans le contexte de la performance musicale en Inde où les interprètes ne commencent qu'avec une feuille de route très générale, mais prennent ensuite toutes sortes de détours inattendus. Dans cette perspective, on pourrait se demander : pourquoi écrire quoi que ce soit ?
À partir des années 1860, il y a eu un essor des notations musicales en Inde, inspiré, il me semble, par la prise de conscience que la musique occidentale possédait un système de notation efficace, et suscité aussi par l'augmentation constante de l'apprentissage institutionnalisé et par un besoin apparent de textes pédagogiques et de répertoires. Pourtant, il n'y a jamais eu de consensus sur la façon de noter, et chaque nouveau système différait grandement des autres. La notation conçue en 1903 par Gurudev Patwardhan était sans doute la plus détaillée et la plus précise jamais créée pour les percussions comme le tablā et le pakhāvaj, mais elle était sûrement trop compliquée pour que les étudiants la lisent comme une partition. Son objectif premier était donc davantage d'être un ouvrage de référence qui préservait le répertoire et fournissait un programme pour un apprentissage structuré.
Nous vivons dans un monde de l’écrit et les musiciens reconnaissent que leurs élèves ne consacrent plus leurs journées entières à la pratique. Comme d'autres professeurs, Afaq Hussain nous a tous encouragés à écrire le répertoire qu'il enseignait pour qu'il ne soit pas oublié. Pour moi, il était particulièrement important de saisir deux aspects dans mes propres cahiers : la précision rythmique et les doigtés précis. En ce qui concerne ce dernier point, par exemple, face à la phrase keṛenaga tirakiṭa takataka tirakiṭa, je m’assurais de noter correctement le doigté précis parmi la douzaine de techniques possibles pour takataka, sans parler des variétés de keṛenaga, et j’indiquais également que les deux occurrences de tirakiṭa avaient été jouées légèrement différemment.
Afaq Hussain a gardé ses propres cahiers rangés en toute sécurité dans une armoire verrouillée. Il les consultait parfois. Je pense qu'il avait conscience du fait que le répertoire disparaissait effectivement avec la tradition orale. Après tout, il y a des centaines, voire des milliers de morceaux de musique. Son grand-père, Abid Hussain (1867-1936), fut le premier professeur de tablā au Bhatkhande Music College de Lucknow. Lui aussi a noté des compositions de tablā, et j'ai en ma possession des centaines de pages qu'il a écrites sans aucun doute pour être publiées sous forme de texte pédagogique. Cependant, il n'a pas indiqué de rythmes ou de doigtés précis, et l'interprétation de sa musique est donc problématique, même pour le fils d'Afaq Hussain, Ilmas Hussain, avec qui j'ai passé tout ce répertoire au peigne fin. Une notation précise a donc de la valeur, à condition qu'elle soit accompagnée d'une tradition orale qui peut ajouter toutes les informations nécessaires pour donner vie à la musique.
Avec tes recherches récentes sur de nombreux textes indo-persans des XVIIIe et XIXe siècles, tu mets en évidence l’évolution de la représentation de la métrique en Inde. Ces recherches illustrent l’importance de l’approche historique et révèlent pleinement les mécanismes d’évolution des faits culturels. Quels sont les concepts que tu utilises pour décrire ces phénomènes ?
Une facette importante de notre formation anthropologique était d'apprendre à fonctionner dans la langue de ceux avec qui nous nous sommes engagés dans nos recherches, non seulement pour gérer la vie au quotidien, mais aussi pour avoir accès à des concepts qui sont significatifs dans la culture étudiée. Deux termes sont importants à cet égard, l'un dont l'importance est à mon avis exagérée, l'autre sous-estimée. Premièrement, gharānā, qui depuis sa première apparition dans les années 1860 signifiait « famille » mais qui, au fil du temps, en est venu à englober toute personne qui croit partager certains éléments de technique, de style ou de répertoire avec une personne dominante du passé. Deuxièmement, silsila, un terme commun dans le soufisme qui signifie « chaîne, connexion ou succession », et qui a une pertinence spécifique dans le cas de l’enseignement dans une lignée de musiciens. C'est cette silsila plus précise qui détient, selon moi, la clé de la transmission de la culture musicale, et pourtant le paradoxe est que la chaîne porte en elle une directive implicite pour explorer l’individualité créatrice. C'est pourquoi, par exemple, lorsque l'on examine la lignée des joueurs de tablā de Delhi à partir du milieu du XIXe siècle, on constate des différences majeures de technique, de style et de répertoire d'une génération à l'autre. Il en va de même pour mon professeur Afaq Hussain, dont le jeu différait grandement de celui de son père et enseignant Wajid Hussain. Chaque individu hérite d'une certaine essence musicale dans la silsila, bien sûr, mais il doit s'engager et opérer dans un monde en constante évolution où la survie artistique nécessite une adaptation. Il est donc d'une importance vitale lors de l'étude de toute époque musicale de recueillir autant d'informations que possible sur le milieu socioculturel observé.
Comme je viens de le démontrer, il est impératif de travailler avec les concepts de la culture étudiée, de les expliquer et de les utiliser sans recourir à la traduction. Un autre excellent exemple est celui du terme tāla, qui est le plus souvent traduit par mètre ou cycle métrique. Et pourtant, il y a une différence fondamentale entre les deux. Le mètre est implicite : c'est un motif qui est dérivé des rythmes de surface d'une pièce, et se compose d'une impulsion sous-jacente qui est organisée en une séquence hiérarchique récurrente de battements forts et faibles. Mais, par contraste, le tāla est explicite : c'est un motif récurrent de battements non hiérarchiques qui se manifeste par des gestes de la main (frappes dans les mains, gestes silencieux de la main et comptes sur les doigts) ou par une séquence relativement fixe de frappes de percussion. Utiliser le terme « mètre » dans le contexte indien est donc trompeur, et j'encourage l'utilisation du terme tāla, avec une explication mais sans traduction.
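Un exemple concret aide à saisir ce caractère explicite : le tīntāl compte 16 mātrās groupées en quatre vibhāgs de quatre, marquées par des frappes de mains (tālī) sur les mātrās 1, 5 et 13 et par un geste silencieux (khālī) sur la mātrā 9. Représenté sommairement :

```python
# Représentation sommaire du tīntāl : le motif de gestes est donné explicitement,
# il n'est pas déduit des rythmes de surface comme le serait un « mètre ».
TINTAL = {
    "matras": 16,
    "vibhags": [4, 4, 4, 4],   # quatre groupes de quatre mātrās
    "gestes": {1: "tālī (frappe)", 5: "tālī (frappe)",
               9: "khālī (geste silencieux)", 13: "tālī (frappe)"},
}

def geste(matra, tala=TINTAL):
    """Geste de la main marquant une mātrā donnée (sinon, compte sur les doigts)."""
    return tala["gestes"].get(matra, "compte sur les doigts")
```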
Tu travailles actuellement sur un ouvrage concernant les sources des XVIIIe et XIXe siècles : quel est ton objectif ?
Mon objectif est de retracer les origines et l'évolution du système du tāla actuellement utilisé dans la musique hindoustanie en rassemblant autant d'informations que possible à partir de sources contemporaines de la fin du XVIIe siècle jusqu’au début du XXe siècle et de l'ère de l’enregistrement. Le problème est que les informations disponibles sont fragmentaires et souvent rédigées dans un langage obscur : la tâche s'apparente à un puzzle où la plupart des pièces manquent. De plus, les sources que l'on trouve ne sont pas nécessairement directement connectées, et donc j’ai plutôt l’impression de travailler avec deux ou plusieurs puzzles à la fois. En bref, après une analyse minutieuse, des déductions et des hypothèses, je pense qu'il y a eu une convergence des systèmes rythmiques au XVIIIe siècle qui a donné naissance au système du tāla d'aujourd'hui.
Les pratiques musicales et les contextes sociaux des diverses communautés (les Kalāwant qui chantaient le dhrupad, les Qawwāl qui chantaient le khayāl, le tarāna et le qaul, ainsi que la communauté des Ḍhāḍhī qui accompagnaient tous ces genres musicaux) doivent impérativement être pris en compte pour comprendre comment et pourquoi la musique, et le rythme en particulier, ont évolué comme ils l'ont fait. Pourtant, il y a tant d'autres aspects importants dans cette histoire : le rôle des femmes instrumentistes dans les espaces privés de la vie moghole au XVIIIe siècle et leur disparition progressive au XIXe siècle, le colonialisme, le statut et l'influence des textes anciens, les techniques d'impression et la diffusion de nouveaux textes pédagogiques à la fin du XIXe siècle, pour n'en citer que quelques-uns.
Quelles sont les sources intéressantes à considérer pour comprendre l’évolution des pratiques et des représentations rythmiques de la musique hindoustanie ?
Le nord de l'Inde a toujours été ouvert aux échanges culturels, et cela était particulièrement le cas sous les Moghols. Il est impératif de comprendre qui se rendait dans ces cours, d'où ils venaient et ce qu'ils jouaient. Il est tout aussi important de comprendre les documents écrits disponibles ainsi que les discours intellectuels de l'époque, car la connaissance de la musique était cruciale pour l'étiquette moghole. Ainsi, quand on sait que le traité de musique très influent Kitāb al-adwār, du théoricien du XIIIe siècle Safi al-Din al-Urmawi al-Baghdadi, était largement disponible en Inde en arabe et en traduction persane, et que des exemplaires se trouvaient dans la collection des nobles de Delhi à partir du XVIIe siècle, on comprend mieux pourquoi le rythme indien était expliqué en utilisant les principes de la prosodie arabe à la fin du XVIIIe siècle. Mon argument est que la prosodie arabe, appliquée à la musique, était un outil plus puissant que les méthodes traditionnelles de prosodie sanskrite, et qu’elle était donc plus efficace pour décrire les changements qui se produisaient dans la pensée et la pratique rythmique à cette époque.
Ces recherches ethno-historiques bousculent parfois les croyances de certains musiciens et chercheurs, notamment sur les questions d’ancienneté et d’« authenticité » des traditions. Penses-tu que les musiciens d’aujourd’hui sont davantage enclins à accepter les évidences de la nature complexe des traditions musicales, formées de multiples apports et en perpétuelles transformations ?
Certains le sont, d'autres non. Il y a toujours eu un petit nombre de chercheurs en Inde qui menaient des recherches précieuses et factuelles sur la musique. Pourtant, je suis déçu de constater qu'il y en a beaucoup d'autres dont le travail repose sur le rabâchage et la diffusion d'opinions non fondées et sans rigueur scientifique. Ce qui me surprend peut-être le plus, c'est le manque de formation scientifique rigoureuse dans les universités de musique en Inde et la persistance d'idées et d'informations réfutées ou discréditées, en dépit de tant d'excellentes recherches publiées indiquant le contraire.
Depuis les années 1990, on constate le renforcement d’un nationalisme hindou au sein de la société indienne. Notes-tu un impact particulier sur le monde de la musique hindoustanie et sur celui de la recherche ?
Il s’agit là d’un sujet complexe et sensible. Le nationalisme hindou n'est pas nouveau, loin de là, et comme je l'ai démontré dans mon livre sur Gurudev Patwardhan, il a constitué une partie importante de la raison d'être de la vie et de l'œuvre de Vishnu Digambar Paluskar au début du XXe siècle. Comme de nombreux chercheurs l'ont souligné, ce nationalisme avait ses racines dans le colonialisme et s'est développé en tant que mouvement anticolonial axé sur la politique identitaire hindoue. Ce récit, basé sur des notions inventées d'un passé hindou glorieux, a minimisé les contributions de la culture moghole et des grandes lignées de musiciens musulmans (sans parler des femmes). Depuis ce temps, l'identité musulmane indienne dans le domaine de la musique a connu un certain déclin. Les chercheurs ont pris note de cette chute et ont tenté de retracer certains des contre-récits qui ont jusqu'à présent été ignorés, comme l'excellent livre de Max Katz Lineage of Loss (Wesleyan University Press, 2017) sur une grande famille de musiciens-savants musulmans, nommée Shahjahanpur-Lucknow gharānā. Je pense que dans de nombreuses études actuelles qui portent sur la musique en Inde se trouve une forte motivation de ne pas omettre ces récits culturels importants, de les réanimer et de les replacer dans le grand récit de l'histoire de l'Asie du Sud.
À la suite de R. Stewart, tu as mis en évidence l’intrication complexe des approches rythmiques et métriques dans le jeu des joueurs de tablā en montrant qu’il résulte de divers apports culturels qui se sont succédé dans le temps. Avec l’intensification des échanges culturels mondiaux depuis la fin du XXe siècle, as-tu observé une ou des tendances évolutives dans le jeu des joueurs de tablā ?
Depuis l'inclusion du tablā dans la musique pop des années 1960, l’exaltante fusion jazz du groupe Shakti de John McLaughlin dans les années 1970 et l'omniprésence aujourd’hui du tablā dans la musique sous toutes ses formes, il semble tout naturel que les joueurs de tablā du monde entier aient envie d’explorer et d’expérimenter ses sons magiques. Zakir Hussain a montré la voie en démontrant la flexibilité et l'adaptabilité de cet instrument, ainsi que la vélocité viscérale et palpitante de ses motifs rythmiques.
Quant au tablā, dans le contexte de la musique de concert hindoustanie, j'ai remarqué que nombreux sont ceux qui tentent d'injecter ce même sentiment d'exaltation, renforcé de plus en plus, semble-t-il, par une amplification si forte qu'elle déforme le son et heurte les tympans du public jusqu'à la soumission. J'irais jusqu'à dire que c'est malheureusement devenu la norme. À cet égard, je me considère comme une sorte de puriste qui aspire à un retour à une pratique où le joueur de tablā maintient un rôle subtil, discret mais de soutien, et complète la ligne du soliste, en restant modeste et sans dominer la scène lorsqu'il est invité à faire une petite apparition ou un court solo. De la même manière, je désire un retour aux soli de tablā qui regorgent de contenu plutôt que d’« effets sonores ». Par « contenu », j'entends des compositions traditionnelles de caractère, dotées de techniques spécialisées, dont les compositeurs sont nommés et ainsi honorés. Et pourtant, il est douloureusement évident qu'un tel « contenu » n'atteint pas beaucoup de jeunes joueurs de nos jours.
Ethnomusicologie
Comme évoqué, tes recherches mettent en avant l’importance des sources historiques aussi bien que la prise en compte de phénomènes plus larges comme l’orientalisme ou le nationalisme pour comprendre le présent des pratiques musicales indiennes. En même temps, tu es très attentif aux intenses phénomènes transculturels actuels et à la nécessité de les appréhender. Dans la profession, le concept d’« ethnomusicologie » ne fait pas toujours consensus. Quelle est ta position par rapport à cette appellation et à l’objet de cette discipline en ce début du XXIe siècle ?
Je n'ai jamais été particulièrement à l'aise avec l'étiquette d’« ethnomusicologie ». Comme disait J. Blacking, toute musique est de la « musique ethnique », et par conséquent, il ne devrait pas y avoir de distinction entre les études sur le tablā, le gamelan, le hip-hop et celles sur Bach, Beethoven ou Brahms. Nous nous engageons tous dans un « discours sur la musique », une « musicologie ». L'avantage de termes comme « anthropologie » ou « sociologie » de la musique est qu'ils impliquent une gamme plus large d'approches théoriques et méthodologiques qui nous rappellent que la musique est un fait social. Pourtant, nous devons reconnaître que le champ des études ethnomusicologiques a évolué et que, de nos jours, une attention bien plus grande est accordée à des phénomènes comme le bruit ou les sons de la vie quotidienne. Par conséquent (sans vouloir paraître trop cynique), bien que dans certains milieux les « sound studies » soient traitées avec un certain mépris, ce terme très général est peut-être la définition la plus honnête et la plus précise de ce que nous (nous tous) faisons. Je reconnais toutefois qu'il serait dommage de rejeter complètement le terme « musique », et j’aime donc imaginer l'ethnomusicologie, la musicologie et la théorie musicale réunies sous la rubrique « musique et sound studies ».
Enseignement
Après une courte période à Belfast, tu as enseigné à Toronto : peux-tu nous parler de ton expérience d’enseignement ?
Toronto est une ville merveilleuse et, selon la plupart des témoignages, c'est la ville la plus multiculturelle de la planète. Elle offre un environnement musical très riche et stimulant.
Mieczyslaw Kolinski a enseigné à l'Université de Toronto de 1966 à 1978. Ses intérêts ethnomusicologiques ont été façonnés par sa formation auprès de Hornbostel et Sachs, et par une vision du monde partagée par tant de géants de notre discipline. Ses publications portent sur les bases scientifiques de l'harmonie et de la mélodie et il a développé des méthodes d'analyse interculturelle. Son approche a été catégoriquement rejetée dans ma propre formation auprès de John Blacking, qui a toujours défendu avec véhémence le relativisme culturel, tout comme elle était en contradiction avec la formation de Tim Rice à l'Université de Washington. Tim a été embauché en 1974 et est parti pour l'UCLA en 1987. Comme moi à mes débuts, Tim a eu du mal à convaincre ses collègues de l'importance de l'approche ethnomusicologique et de la nécessité de traiter notre discipline avec le respect qu'elle mérite et les ressources qu'elle nécessite. Nous avons tous les deux beaucoup lutté. Tim a créé un programme qui, sous ma direction, est devenu connu sous le nom de « The World Music Ensembles », et pour ma part j'ai acquis un gamelan balinais en 1993, aidé par mon épouse, l'ethnomusicologue Annette Sanger, ancienne collègue de J. Blacking. De plus, Tim et moi avons réussi à intégrer davantage les cours d'ethnomusicologie au cœur du programme pour nous assurer que tous les étudiants en musique, quels que soient leurs intérêts, soient exposés à notre approche et comprennent la valeur et l'importance d'une vision socialement fondée de toute musique. J’ai créé un cours d'introduction d'un an intitulé Music as Culture que j'ai co-enseigné pendant quelques années avec un collègue de musicologie : nous avons alterné nos cours, illustrant et croisant notre corpus et nos observations sur nos canons occidentaux et le vaste monde de la musique au-delà. Mon cours Introduction to Music & Society est devenu emblématique. Mon approche étant essentiellement modulaire, les thèmes choisis ont changé et se sont adaptés au fil du temps pour refléter des préoccupations plus contemporaines, notamment la musique et l'identité, l'expérience religieuse, la migration, le genre, la guérison et les sound studies.
Dans mes fonctions d’enseignant, j'ai conçu et enseigné une variété de cours : Hindustani music, Music & Islam, Theory & Method in Ethnomusicology, The Beatles, Anthropology of Music, Fieldwork, Music, Colonialism & Postcolonialism, Rhythm & Metre in Cross-Cultural Perspective, Transcription, Notation & Analysis, etc. J'ai travaillé avec la communauté sud-asiatique de Toronto pour organiser des concerts du chanteur Pandit Jasraj. Ils ont attiré des sponsors, générant des bourses d'études pérennes pour des étudiants dont les recherches portaient sur la musique hindoustanie. J'ai aidé à mettre en place un programme d'artiste en résidence, invitant des musiciens du monde entier à passer un trimestre avec nous à enseigner et à jouer. J'ai contribué à la refonte de nos programmes d'études supérieures axés sur la musicologie et j'ai introduit dans le programme d’études une maîtrise et un doctorat en ethnomusicologie. Mais les deux réalisations dont je suis sans doute le plus fier sont, premièrement, les nombreux et merveilleux doctorants que j'ai encadrés, dont beaucoup ont eux-mêmes poursuivi une carrière dans le milieu universitaire, et, deuxièmement, le succès de mon initiative d’élargissement de notre représentation : nous sommes passés d'un seul poste de professeur à quatre titulaires à plein temps en ethnomusicologie.
Quelle est ta place au sein du gharānā de Lucknow ?
J'ai beaucoup apprécié apprendre et jouer du tablā dans ma vie et je me considère extrêmement chanceux d'avoir eu un lien aussi étroit et productif avec l'un des joueurs de tablā les plus remarquables de l'histoire : Afaq Hussain. J'ai la chance d'avoir une bonne mémoire et j'ai donc encore dans ma tête un vaste répertoire de compositions merveilleuses remontant aux premiers membres de la lignée de Lucknow qui ont prospéré à la fin du XVIIIe et au début du XIXe siècle. Je suis particulièrement intéressé par la technique et j'ai passé beaucoup de temps à étudier les mécanismes du jeu. Cependant, je suis avant tout un érudit et, en pratique, je ne me fais aucune illusion : je ne suis guère plus qu’un amateur. Pour autant, mon intérêt pour le jeu m'a fourni des aperçus extraordinaires de l'instrument et de son histoire.
Quant à ma place ou mon rôle au sein du gharānā de Lucknow, je dirais deux choses. Tout d'abord, je continue à faire partie de l'échange d'idées et de répertoire avec mes pairs aux côtés desquels j'ai étudié le tablā et qui font maintenant partie, comme moi, des grandes figures de la silsila, la lignée directe de l’enseignement d'Afaq Hussain. Ils me considèrent comme un professionnel avisé, une autorité dans mon domaine. Parfois, on me demande si je me souviens d'une composition rare sur laquelle il y a eu débat, et parfois j'introduis dans notre dialogue des informations et des questions issues de mes recherches qui suscitent un vif intérêt. Par exemple, le fils d'Afaq Hussain, Ilmas Hussain, et moi-même avons travaillé ensemble pour ressusciter les cahiers de son arrière-grand-père Abid Hussain et les placer dans leur contexte, non seulement celui de leur tradition mais aussi celui de la fin des années 1920 et du début des années 1930, années durant lesquelles Abid Hussain était le tout premier professeur de tablā au Bhatkhande College de Lucknow. Ensuite, je pense que mes travaux ont su attirer une plus grande attention sur la lignée de Lucknow. Quand je suis arrivé à la porte d'Afaq Hussain en janvier 1981, il était affaibli – psychologiquement et financièrement – et son avenir était incertain. D'autres étudiants étrangers ont suivi mon exemple et ont rejoint un nombre toujours croissant de disciples indiens venus pour apprendre. Mon livre, The Tabla of Lucknow, ainsi que d'autres facettes de mes recherches ont donc bien contribué à attirer l'attention nationale et internationale sur Afaq Hussain, son fils Ilmas et toute leur tradition.
Liste des publications
Ouvrages
2006 Gurudev’s Drumming Legacy : Music, Theory and Nationalism in the Mrdang aur Tabla Vadanpaddhati of Gurudev Patwardhan. Aldershot : Ashgate (SOAS Musicology Series).
2005 The Tabla of Lucknow : A Cultural Analysis of a Musical Tradition. New Delhi : Manohar (Nouvelle édition avec nouvelle préface).
1988 The Tabla of Lucknow : A Cultural Analysis of a Musical Tradition. Cambridge : Cambridge University Press (Cambridge Studies in Ethnomusicology).
Direction d’ouvrage
2013 avec Frank Kouwenhoven, Music, Dance and the Art of Seduction. Delft : Eburon Academic Publishers.
Direction de revue
1994-1996 Bansuri (A yearly journal devoted to the music and dance of India, published by Raga Mala Performing Arts of Canada). Volume 13, 1996 (60 pp), volume 12, 1995 (60 pp), volume 11, 1994 (64 pp).
Articles, chapitres d’ouvrages
À paraître « Weighing ‘The Assets of Pleasure’: Interpreting the Theory and Practice of Rhythm and Drumming in the Sarmāya-i ‘Ishrat, a Pivotal 19th Century Text. », in Katherine Schofield, dir. : Hindustani Music Between Empires : Alternative Histories, 1748-1887. Éditeur à préciser.
À paraître « An Extremely Nice, Fine and Unique Drum : A Reading of Late Mughal and Early Colonial Texts and Images on Hindustani Rhythm and Drumming. », in Katherine Schofield, Julia Byl et David Lunn, dir. : Paracolonial Soundworlds : Music and Colonial Transitions in South and Southeast Asia. Éditeur à préciser.
2021 « Ethnomusicology at the Faculty of Music, University of Toronto. » MUSICultures (Journal of the Canadian Society for Traditional Music). Vol. 48.
2020 « Rhythmic Thought and Practice in the Indian Subcontinent. » in Russell Hartenberger & Ryan McClelland, dir. : The Cambridge Companion to Rhythm. Cambridge University Press : 241-60.
2019 « Mapping a Rhythmic Revolution Through Eighteenth and Nineteenth Century Sources on Rhythm and Drumming in North India. » In Wolf, Richard K., Stephen Blum, & Christopher Hasty, dir. : Thought and Play in Musical Rhythm: Asian, African, and Euro-American Perspectives. Oxford University Press : 253-72.
2013 « Introduction. » In Frank Kouwenhoven & James Kippen, dir. : Music, Dance and the Art of Seduction. Delft : Eburon Academic Publishers : i-xix.
2012 « On the contributions of Pt. Sudhir V. Mainkar to our understanding of the tabla. » Souvenir Volume in Honour of Sudhir Vishnu Mainkar. Mumbai : Sharda Sangeet Vidyalaya.
2010 « The History of Tabla. » In Joep Bor, Françoise ‘Nalini’ Delvoye, Jane Harvey and Emmie te Nijenhuis, dir. : Hindustani Music, Thirteenth to Twentieth Centuries. New Delhi : Manohar : 459-78.
2008 « Working with the Masters. » In Gregory Barz and Timothy Cooley, dir. : Shadows in the Field : New Perspectives for Fieldwork in Ethnomusicology (2e édition révisée). Oxford University Press : 125–40.
2008 « Hindustani Tala : An Introduction. » Concise Garland Encyclopedia of World Music. New York : Garland [version condensée de la publication de 2000].
2007 « The Tal Paddhati of 1888 : An Early Source for Tabla. » Journal of the Indian Musicological Society, 38 : 151–239.
2005 « Lucknow » Encyclopedia of Popular Music of the World, Part 2, Vol. 5, Locations: Asia & Oceania. London : Continuum : 109–110.
2003 « Le rythme : Vitalité de l'Inde. » In Gloire des princes, louange des dieux : Patrimoine musical de l'Hindoustan du XIVe au XXe siècle. Paris : Cité de la musique et Réunion des Musées Nationaux : 152–73.
2002 « Wajid Revisited : A Reassessment of Robert Gottlieb’s Tabla Study, and a new Transcription of the Solo of Wajid Hussain Khan of Lucknow. » Asian Music, 33, 2 : 111–74.
2001 « Asian Music [in Ontario]. » Garland Encyclopedia of World Music, Volume 3, The United States and Canada. New York : Garland Publishing : 1215–17.
1992 « Tabla Drumming and the Human-Computer Interaction. » The World of Music, 34, 3 : 72–98.
1992 « Music and the Computer : Some Anthropological Considerations. » Interface, 21, 3-4 : 257–62.
1992 « Where Does The End Begin ? Problems in Musico-Cognitive Modelling. » Minds & Machines, 2, 4 : 329–44.
1992 « Identifying Improvisation Schemata with QAVAID. » In Walter B. Hewlett & Eleanor Selfridge-Field, dir. : Computing in Musicology : An International Directory of Applications, Volume 8. Center for Computer Assisted Research in the Humanities : 115–19.
1992 avec Bernard Bel « Modelling Music with Grammars : Formal Language Representation in the Bol Processor. » In A. Marsden & A. Pople, dir. : Computer Representations and Models in Music. London, Academic Press : 207–38. https://halshs.archives-ouvertes.fr/halshs-00004506
1991 avec Bernard Bel « From Word-Processing to Automatic Knowledge Acquisition : A Pragmatic Application for Computers in Experimental Ethnomusicology. » in Ian Lancashire, dir. : Research in Humanities Computing I : Papers from the 1989 ACH-ALLC Conference, Oxford University Press : 238–53.
1991 « Changes in the Social Status of Tabla Players. » Bansuri, 8 : 16–27, 1991. (réédition de la publication de JIMS, 1989)
1990 « Music and the Computer: Some Anthropological Considerations. » In B. Vecchione & B. Bel, dir. : Le Fait Musical — Sciences, Technologies, Pratiques, préfiguration des actes du colloque Musique et Assistance Informatique, CRSM-MIM, Marseille, France, 3-6 Octobre : 41–50.
1989 « Changes in the Social Status of Tabla Players. » Journal of the Indian Musicological Society, 20, 1 & 2 : 37–46.
1989 Avec Bernard Bel « The Identification and Modelling of a Percussion ‘Language’, and the Emergence of Musical Concepts in a Machine-Learning Experimental Set-Up. » Computers and the Humanities, 23, 3 : 199–214. https://halshs.archives-ouvertes.fr/halshs-00004505
1989 « Computers, Fieldwork, and the Analysis of Cultural Systems. » Bulletin of Information on Computing and Anthropology, 7, 1989 : 1–7. En ligne : http://lucy.ukc.ac.uk/bicaweb/b7/kippen.html
1988 « Computers, Fieldwork, and the Problem of Ethnomusicological Analysis. » International Council for Traditional Music (UK Chapter) Bulletin, 20 : 20–35.
1988 Avec Bernard Bel « Un modèle d’inférence grammaticale appliquée à l’apprentissage à partir d’exemples musicaux. » Neurosciences et Sciences de l’Ingénieur, 4e Journées CIRM, Luminy, 3–6 Mai 1988.
1988 « On the Uses of Computers in Anthropological Research. » Current Anthropology, 29, 2 : 317–20.
1987 « An Ethnomusicological Approach to the Analysis of Musical Cognition. » Music Perception 5, 2 : 173–95.
1987 Avec Annette Sanger « Applied Ethnomusicology : the Use of Balinese Gamelan in Recreational and Educational Music Therapy. » British Journal of Music Education 4, 1 : 5–16.
1986 Avec Annette Sanger « Applied Ethnomusicology : the Use of Balinese Gamelan in Music Therapy. » International Council for Traditional Music (UK Chapter) Bulletin, 15 : 25–28.
1986 « Computational Techniques in Musical Analysis. » Bulletin of Information on Computing and Anthropology (University of Kent at Canterbury), 4 : 1–5.
1985 « The Dialectical Approach : a Methodology for the Analysis of Tabla Music. » International Council for Traditional Music (UK Chapter) Bulletin, 12 : 4–12.
1984 « Linguistic Study of Rhythm: Computer Models of Tabla Language. » International Society for Traditional Arts Research Newsletter, 2 : 28–33.
1984 « Listen Out for the Tabla. » International Society for Traditional Arts Research Newsletter, 1 : 13–14.
Comptes rendus
2012 Elliott, Robin and Gordon E. Smith, dir. : Music Traditions, Cultures and Contexts, Wilfrid Laurier University Press, in « Letters in Canada 2010 », University of Toronto Quarterly, 81, 3 : 779–80.
2006 McNeil, Adrian Inventing the Sarod : A Cultural History. Calcutta : Seagull Press, 2004. Yearbook for Traditional Music, 38 : 133–35.
1999 Myers, Helen, Music of Hindu Trinidad : Songs from the India Diaspora. Chicago Studies in Ethnomusicology. Chicago : University of Chicago Press, 1998. Notes : 427–29.
1999 Marshall, Wolf, The Beatles Bass. Hal Leonard Corporation, 1998. Beatlology, 5.
1997 Widdess, Richard, The Ragas of Early Indian Music: Music, Modes, Melodies, and Musical Notations from the Gupta Period to c.1250. Oxford Monographs on Music. Oxford : Clarendon Press, 1995. Journal of the American Oriental Society, 117, 3 : 587.
1994 Rowell, Lewis, Music and Musical Thought in Early India. Chicago Studies in Ethnomusicology, edited by Philip V. Bohlman and Bruno Nettl. Chicago and London : The University of Chicago Press, 1992. Journal of the American Oriental Society, 114, 2 : 313.
1992 Compte rendu CD : « Bengal : chants des ‘fous’ », par Georges Luneau & Bhaskar Bhattacharyya, and « Inde du sud : musiques rituelles et théâtre du Kerala », par Pribislav Pitoëff. Asian Music 23, 2 :181–84.
1992 Witmer, Robert, dir. : « Ethnomusicology in Canada : Proceedings of the First Conference on Ethnomusicology in Canada. » (CanMus Documents, 5) Toronto, Institute for Canadian Music, 1990. Yearbook for Traditional Music, 24 : 170–71.
1992 Neuman, Daniel M. The Life of Music in North India: The Organization of an Artistic Tradition. Chicago, University of Chicago Press, 1990. Journal of the American Oriental Society, 112, 1 : 171.
1988 Qureshi, Regula Burckhardt. Sufi Music of India and Pakistan : Sound, Context and Meaning in the Qawwali. Cambridge Studies in Ethnomusicology. Cambridge : CUP, 1986. International Council for Traditional Music (UK Chapter) Bulletin, 20 : 40–45.
1986 Wade, Bonnie C. Khyal : Creativity within North India’s Classical Music Tradition. Cambridge Studies in Ethnomusicology. Cambridge : CUP. Journal of the Royal Asiatic Society : 144–46.
Enregistrements
1999 Honouring Pandit Jasraj at Convocation Hall, University of Toronto. 2 CD set. Foundation for the Indian Performing Arts, FIPA002.
1995 Pandit Jasraj Live at the University of Toronto. 2 CD set. Foundation for the Indian Performing Arts, FIPA001.
Livrets d’album musical
2009 Liner notes for Mohan Shyam Sharma (pakhavaj): Solos in Chautal and Dhammar. India Archive Music CD, New York.
2007 Liner notes for Anand Badamikar (tabla): Tabla Solo in Tintal. India Archive Music (IAM•CD 1084), New York.
2002 Pandit Shankar Ghosh : Tabla Solos in Nasruk Tal and Tintal. CD, India Archive Recordings (IAM•CD1054), New York.
2001 Shujaat Khan, Sitar : Raga Bilaskhani Todi & Raga Bhairavi. CD, India Archive Recordings (IAM•CD1046), New York.
1998 Pandit Bhai Gaitonde : Tabla Solo in Tintal. CD, India Archive Recordings (IAM•CD1034), New York.
1995 Ustad Amjad Ali Khan : Rag Bhimpalasi & Rag “Tribute to America”. CD, India Archive Recordings (IAM•CD1019), New York.
1994 Ustad Nizamuddin Khan : Tabla Solo in Tintal. CD, India Archive Recordings (IAM•CD1014), New York.
1992 Rag Bageshri & Rag Zila Kafi, played by Tejendra Narayan Majumdar (sarod) and Pandit Kumar Bose (tabla). CD, India Archive Recordings (IAM•CD 1008), New York.
1990 « In Memoriam : John Blacking (1928-1990). » Ethnomusicology 34, 2 : 263–6.
James Kippen is one of the key figures in the study of Hindustani music. His encounter in 1981 with Afaq Hussain, at the time the doyen of one of the great tablā-playing lineages, was the starting point for major research into both the instrument and Indian rhythm. From 1990 to 2019 he was the head of ethnomusicology at the Faculty of Music of the University of Toronto, Canada. Trained under John Blacking and John Baily, he also acquired over the course of his research a mastery of several Indo-Persian languages. This ability has allowed him to analyse first-hand numerous sources (treatises on music, musicians' own writings, genealogies, iconographic materials…) and to understand the changing sociocultural contexts in which they were produced (the Indo-Persian courts, the colonial British Empire, the rise of Indian Nationalism, and the post-colonial state). His work (see the select list of publications at the end of this interview) stands out as a major contribution to the understanding of the theory and practice of rhythm and metre in India.
I began corresponding with James Kippen during my own research on tablā at the end of the 1990s. Always quick to share his knowledge and his experience with enthusiasm, he gave me a lot of advice and encouragement, and it was a great honour to count him among the members of my thesis jury during my defence in 2004. It was with that same willingness to share that he responded favourably to my proposal to interview him. Carried out remotely between July and December 2020, this exchange covers nearly 40 years of ethnomusicological research.
– How did you become interested in the musics of India, and in the tablā in particular?
As a child growing up in London, I was fascinated by the different languages and cultures that were increasingly being introduced by immigrants to Britain. I was particularly enchanted by the little Indian corner shops brimming with exotic goods and the Indian restaurants that emitted alluring, spicy aromas. My father regularly regaled me with stories of his adventures from the seven years he spent in India as a young soldier, and I developed an entirely favourable though admittedly Orientalist impression of the subcontinent. During my music degree at the University of York (1975-78), I was introduced by my friend and fellow student Francis Silkstone to the sitār. I also had the good fortune to take an intensive course in Hindustani music with lecturer Neil Sorrell, who had studied sāraṅgī with the great Ram Narayan. The available literature at that time was relatively sparse, but two texts in particular were highly influential: Rebecca Stewart's Tablā in Perspective (UCLA, 1974), which nurtured in me a musicological interest in the varieties and complexities of rhythm and drumming, and Daniel Neuman's The Cultural Structure and Social Organization of Musicians in India: the Perspective from Delhi (University of Illinois, Urbana-Champaign, 1974), which offered social-anthropological insights into both the worlds and the worldviews of traditional, hereditary musicians.
Thus, I began learning tablā from Robert Gottlieb's LP recordings and booklets called 42 Lessons for Tabla, and after a few months I had learnt enough basic material to accompany Francis Silkstone in a recital. I later studied in person under Manikrao Popatkar, an excellent professional tablā player who had recently immigrated to Britain. I was hooked. Moreover, the thought that I might enter that socio-musical world of tablā in India and become a participant-observer motivated me to look at graduate programs where I would be able to develop the knowledge and skills to combine the musicological and anthropological approaches of Stewart and Neuman. On Neil Sorrell's advice I wrote to John Blacking about the possibility of studying at The Queen's University of Belfast, and John was most encouraging, offering me entry directly to the doctoral program. He also pointed out that his colleague John Baily had recently written a text: Krishna Govinda's Rudiments of Tabla Playing. It seemed I had found the ideal graduate program and the perfect mentors.
Methodological approaches
– The book How Musical Is Man? by John Blacking is a fundamental text, published in 1973, that ran counter to the thinking of the time and refused to recognise the barriers between musicology and ethnomusicology, as well as the fruitless differences between musical traditions. Blacking also put forward the essential idea that music, even if that word does not exist everywhere, is present in all human cultures, resulting in his definition of “humanly organised sound.” Do you know if he knew of Edgar Varèse's expression “organised sound,” which Varèse put forward in 1941 in an attempt to distance himself from the Western concept of “music,” albeit for other reasons?
I have no personal recollection of Blacking ever mentioning Varèse or his thoughts on the nature of music. Nonetheless, Blacking was an excellent musician and pianist who had doubtless encountered and studied a great deal of Western Art Music, and so it is possible he knew of Varèse's definition. However, whereas Varèse's philosophy was born out of a conviction that machines and technologies would be capable of organising sound, Blacking wanted to re-centre music as a social fact: an activity where the myriad ways in which human beings organised sound both as performers and, importantly, as listeners promised to reveal a great deal about their social structure.
– How did your studies at university guide your research?
I was lucky enough to have not one but two mentors in John Blacking and John Baily, and they were very different from one another. Blacking was full of grand and inspiring ideas that challenged and revolutionized the way one thinks about music and society, whereas Baily emphasized a more methodical and empirically-based approach grounded in performance and the careful acquisition and documentation of data. One should remember that I was young and inexperienced when I undertook fieldwork, and so Baily's example, focussed on doing music and on gathering data, served as a practical guide in my daily life during my years in India; yet once I was armed with a huge corpus of information I was able to stand back and, hopefully like Blacking, see some of the grand patterns which that data spelled out. I was struck therefore by the consistent narrative of cultural decline linked to a nostalgia for a glorious and artistically-abundant past, and the tablā music of Lucknow was one of the last living links to that lost world. This became one of the key themes in my doctoral dissertation, and in some of the other work that followed. As for my career as a teacher, I have tried over the years to combine the best qualities of both my mentors, always promoting the idea that theory should grow out of solid data about music and musical lives so that it does not lose its heuristic value by abandoning its dialogue with ethnographic reality.
– In Working with the Masters (2008), you describe in detail and with frankness (something that is fairly rare in the profession!) your fieldwork experience with Afaq Hussain in the 1980s. This experience, and your account of it, appear to be a model for any research in ethnology and ethnomusicology, particularly as it applies to learning music. Thus, you account for the phases of approaching, meeting, being tested and, finally (and fortunately in your case), acceptance within the research context; the trust you were granted allowed you to pursue in full your research and music-learning goals. You also tackle the ethical and deontological considerations essential to any researcher: one's relationship to others, conflicts of loyalty resulting from possible inconsistencies between that relationship and one's ethnographic objectives, responsibility to the gathered knowledge, and the place of the researcher-musician within the musical reality of the tradition studied. Beyond the particularities of the musical context, are there any specific features of Indian culture that Western researchers need to bear in mind in order to undertake (and hopefully succeed with) an ethnological study in India?
It goes without saying that South Asian society has changed enormously in the 40 years since I first began conducting ethnographic research, but certain principles steadfastly remain that should guide the investigative process, such as a deeply ingrained respect for social and cultural seniority. Naturally, access to a community is key, and there is no better “gatekeeper” or “sponsor” (to use the anthropological terms) than an authority figure within the subculture one is studying, since the permission one receives trickles down through the social and familial hierarchy. The danger, in a heavily patriarchal society like India's, is that one ends up with a top-down view of musical life. If I had an opportunity to revisit my field I would pay greater attention to those at different levels within that hierarchy, especially to women and to the everyday musicality of life in the domestic sphere. By focussing only on the most refined aspects of cultural production, one may miss much that is of value in the formation of ideas, of aesthetics, and in the support mechanisms necessary for an artistic tradition to survive and thrive.
On a more practical note – something that applies I think rather more generally in the fieldwork endeavour – I found that formal, recorded interviews were rarely very insightful because they were felt to be intimidating and were accompanied by lofty expectations. Furthermore, a heightened sensitivity to the political ramifications – micro and macro – of speaking one's mind on record was also often an impediment to gathering information. In truth, the less I asked and the more I listened – off the record and in relaxed circumstances – the more useful and insightful the information I received. The caveat is that to operate in that way one must develop a level of patience that would be difficult for most Westerners to accept.
– In the 1980s you adopted the “dialectical approach” taught by John Blacking and combined it with computer science and an Artificial Intelligence program. The aim was to analyse the fundamentals of improvisation by tablā players. Can you go over the genesis and evolution of this approach?
John Blacking was particularly interested in Noam Chomsky's work on transformational grammars. He theorized that one could create sets of rules for music – a grammar – with the topmost layer describing how surface sound structures were organised. At deeper levels the layers of rules would address increasingly general principles of musical organisation, and at the very deepest level the grammar would formalise rules governing principles of social organisation. If an ethnomusicologist's ultimate aim is to relate social structure to sound structure, or vice versa, then this was Blacking's idea of how one might achieve that goal.
In the summer of 1981, I escaped the intense heat of the North Indian plains and headed to Mussoorie in the foothills of the Himalayas. I had agreed to meet up again with my friend Francis Silkstone, who at the time was studying sitār with Imrat Khan and dhrupad vocal music with Fahimuddin Dagar in Calcutta. Francis arrived with Fahimuddin and one of Fahim's American students named Jim Arnold. Jim was collaborating on some experimental work on rāga intonation with Bernard Bel, who at that time was living in New Delhi. Bernard then arrived in Mussoorie, also to escape the heat, and for about a month we all lived together in a rich and fertile environment of music and ideas. It was there that Bernard and I first discussed Blacking's notion of socio-musical grammars as well as my fascination with tablā's theme-and-variations structures known as qāida. I was intrigued when Bernard suggested that he could design a computer program capable of modelling the process of creating variations from a given theme.
Over the following year, Bernard and I met several times: he learnt much more about how tablā works and I learnt much more about mathematical linguistics. Together we created sets of rules – transformational grammars – that generated variations from a qāida theme and processed existing variations to determine if our rules could account for them. Yet it was also clear that the knowledge being modelled was my own and not that of expert musicians. Therefore, we developed a strategy to involve those experts as “co-workers and analysts” (a phrase Blacking often used) in a dialectical exchange. After all, an “expert system” was intended to model expert knowledge, and there was no better expert than Afaq Hussain.
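To make the idea more concrete, here is a toy sketch, in Python, of what such a generative grammar does: it generates variations from a theme and tests whether a given variation can be accounted for by the rules. The bols and substitution rules below are invented for illustration only; they are neither Afaq Hussain's repertoire nor the actual Bol Processor grammars.

    from itertools import product

    # Toy illustration only: an invented theme (two-stroke cells) and invented rules.
    THEME = ("dha", "ge", "ti", "ge", "dha", "ge", "ti", "ge")

    # Each two-stroke cell of the theme may be rewritten as one of these cells.
    PAIR_RULES = {
        ("dha", "ge"): [("dha", "ge"), ("ge", "dha"), ("dha", "dha")],
        ("ti", "ge"):  [("ti", "ge"), ("ti", "ti")],
    }

    def derivable_variations(theme):
        """Generate every variation obtained by rewriting each cell of the theme."""
        cells = [theme[i:i + 2] for i in range(0, len(theme), 2)]
        options = [PAIR_RULES.get(tuple(c), [tuple(c)]) for c in cells]
        for choice in product(*options):
            yield tuple(bol for cell in choice for bol in cell)

    def accounts_for(variation, theme=THEME):
        """Can the rules derive this variation from the theme?"""
        return tuple(variation) in set(derivable_variations(theme))

    print(len(set(derivable_variations(THEME))))                              # number of derivable variations
    print(accounts_for(("dha", "ge", "ti", "ti", "ge", "dha", "ti", "ge")))   # True: the rules account for it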
– Were you aware of other types of interactive approaches, such as Simha Arom's “re-recording” developed a few years earlier?
I was aware of Simha Arom's interactive methods of eliciting musicians' own perspectives on what was happening in their music, much as I was aware of work in cognitive anthropology aimed at determining cognitive categories meaningful to the people we studied. Arom's insistence that cultural data had to be validated by our interlocutors was certainly very influential. I did not know of other approaches. The exigencies of our particular experimental situation forced us to invent our own unique methodology for this human-computer interaction.
– We know of the fear Indian masters have of their knowledge – certain techniques and compositions in particular – being spread beyond their own gharānā. What was Afaq Hussain's attitude regarding this, and what was his involvement in this method, in which the software for examining qāida structures was progressively updated?
Afaq Hussain was not remotely concerned about revelations regarding qāida since the art of playing them depended on one's ability to improvise. In other words, this was a process-oriented and therefore ever-changing endeavour. Fixed compositions, by contrast, especially those handed down over generations within the family, were product-oriented, and the pieces did not change. Those were considered precious assets, and were carefully guarded.
When I reflect on the experiments, I marvel that Bernard Bel was able to create such a powerful generative grammar for a computer (firstly an Apple II with 64k RAM, then the portable 128k Apple IIc) with such limited processing power and space. Afaq Hussain also marvelled that a machine could “think,” as he put it. We began with a basic grammar for a given qāida, generated some variations, and I then read those out loud using the syllabic language, the bols, for tablā. Many results were predictable, some were unusual but nonetheless acceptable, and others were deemed to be wrong – technically, aesthetically. We then asked Afaq Hussain to offer a few variations of his own; these were fed into the computer (I typed using a key-correlation system for rapid entry) and “analysed” to determine if the rules of our grammar could account for them. Simple adjustments to the rules were possible in situ, but when more complex reprogramming was required we would move on to a second example and return to the original example in a later session.
– Did this research ever involve other types of composition such as gat or ṭukṛā?
No. The advantage of looking at a theme-and-variations structure like qāida is that each composition is a closed system where variations (vistār) are restricted to the material presented in the theme. Relā (rapidly-articulated strings of strokes) is another structure that follows similar principles. The aim is therefore to understand the unwritten rules for creating variations. Fixed compositions such as gat, ṭukṛā, paran, etc., comprise a far wider and more unpredictable variety of elements, and would be very hard to model. However, one thing we did experiment with was the tihāī, the thrice-repeated phrase that acts as a final rhythmic cadence. These can be modelled mathematically and applied to a qāida (based on fragments of its theme or one of its variations) or to fixed compositions like, say, ṭukṛā as an arithmetic formula into which one can pour rhythmic phrases.
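As a purely illustrative aside, the basic tihāī arithmetic can be sketched in a few lines. This is a common simplified textbook formulation, not Kippen and Bel's actual model: a phrase of p beats repeated three times with two equal gaps of g beats spans 3p + 2g beats, so one solves for g to make the figure fill a chosen span exactly.

    from fractions import Fraction

    # Sketch only: simplified tihai arithmetic, 3*p + 2*g = span (all values in beats).
    def tihai_gap(phrase_beats, span_beats):
        """Gap needed so that three repetitions of the phrase fill the span exactly."""
        return Fraction(span_beats - 3 * phrase_beats, 2)

    print(tihai_gap(4, 16))   # 2: a 4-beat phrase filling 16 beats needs 2-beat gaps
    print(tihai_gap(5, 16))   # 1/2: a 5-beat phrase needs half-beat gaps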
– Did any of the rhythmic phrases generated by the computer and validated by Afaq Hussain Khan make it into the repertoire of the Lucknow gharānā?
That is a hard question to answer. When we were in the middle of an intensive period of experimentation with the Bol Processor, there would develop a kind of dialogue where Afaq Hussain would play material generated by the computer and then respond with sets of variations of his own. So many were generated and exchanged in this way that it was often hard to tell whether something he played in concert originated in the computer. Yet, whereas some teachers and performers develop a repertoire of fixed variations for a theme, Afaq Hussain rarely did, relying instead on his imagination “in the moment.” This is also the approach he encouraged in us. Therefore, I doubt computer-generated material became a permanent part of the repertoire.
– Has this specific type of approach using Artificial Intelligence in ethnomusicology been pursued by others?
The term “Artificial Intelligence” underwent a radical change during the 1980s and 1990s thanks to the development of the “connectionist” approach (artificial neurons) and of techniques for learning from examples capable of processing large amounts of data. With the Bol Processor (BP) we were at the stage of symbolic-numerical modelling of human decisions represented by formal grammars, which required in-depth, albeit intuitive, knowledge of decision mechanisms.
For this reason, symbolic-numerical approaches have not to my knowledge been taken up by other teams. On the other hand, we had also tackled machine learning (of formal grammars) using the QAVAID software written in Prolog II. We also showed that the machine had to collect information by dialoguing with the musician in order to carry out a correct segmentation of musical phrases and to begin generalising by inductive inference. But this work was discontinued because the machines were too slow and we did not have a large enough body of data to build a model capable of covering a wide variety of improvisation models.
It is possible that Indian researchers will use learning from examples – now called Artificial Intelligence – to process large amounts of data produced by percussionists. This “big data” approach has the drawback of lacking precision in a field where precision is a marker of musical expertise, and it does not produce understandable algorithms which would constitute a “general grammar” of improvisation on a percussion instrument. Our initial ambition was to contribute to the construction of this grammar, but we only proved, using the technology available at the time, that it would be feasible.
– In later versions, this software was also able to provide material and tools for music and dance composition beyond the Indian context. We will be celebrating 40 years of this software next year with a new version. Who are the artists that have used this software?
Rhythmic compositions programmed on BP2 and performed on a Roland D50 synthesiser were used for the choreographic work CRONOS directed by Andréine Bel and produced in 1994 at the NCPA in Bombay. See, for example, https://bolprocessor.org/shapes-in-rhythm/.
At the end of the 1990s, the Dutch composer Harm Visser used BP2 to help develop operators for serial music composition. See, for example, https://bolprocessor.org/harm-vissers-examples/.
We have had feedback (and requests) from European and American academics who use BP2 as an educational tool for teaching musical composition. However, we have never carried out a large-scale advertising campaign to enlarge the user community because we are primarily interested in the development of the system itself and in the musicological research associated with it.
The main limitation of BP2 was its exclusive operation within the Macintosh environment. This is why the BP3 version under development is cross-platform. It will probably be implemented in a Cloud version made possible by its close interaction with Csound software. This software makes it possible to program high-performance sound production algorithms and to work with microtonal intonation models that we have developed, both for harmonic music and for Indian rāga. See, for example, https://bolprocessor.org/category/related/musicology/.
Studies of notation, metre, rhythm, and their evolution
– Over the course of your work, the question of musical notation has occupied an important place both in terms of methodology and also in considerations of how it is used. Can you speak to this aspect of your work?
All written notations are incomplete approximations, and their contribution to the transmission process is limited. Oral representations, like the spoken strings of syllables representing drum strokes, often convey more accurate information about the musicality inherent in patterns, such as stress, inflection, phrasing, and micro-rhythmic variability. By the same token, once internalised, those spoken strings are indelible. We know that oral systems promote a healthy musical memory, which is particularly important in the context of the performance of music in India where performers begin with only a very general road map but then take all manner of unexpected twists and turns along the way. That being the case, one might ask why write anything down at all?
From the 1860s onwards, there was a burgeoning of musical notations in India inspired, I believe, by an awareness that Western music possessed an efficient notation system, and prompted too by the steady increase in institutionalised learning and the perceived need for pedagogical texts and associated repertoire. Yet there was never any consensus on how to notate, and each new system differed greatly from the others. The notation devised in 1903 by Gurudev Patwardhan was arguably the most detailed and precise ever created for drumming, yet it was surely too complicated for students to read as a score. Therefore, its purpose was more as a reference work that preserved repertoire and provided a syllabus for structured learning.
We live in a literate age, and musicians recognise that their students no longer devote their waking hours to practising. Like other teachers, Afaq Hussain encouraged us all to write down the repertoire he taught so that it would not be forgotten. For me, it was especially important to capture two aspects in my own notebooks: rhythmic accuracy and precise fingering. Regarding the latter, for example, when faced with the phrase – keṛenaga tirakiṭa takataka tirakiṭa – I wanted to ensure that I notated the correct intended fingering from the dozen or so possible techniques for takataka, not to mention the varieties of keṛenaga, and I would also indicate that the two instances of tirakiṭa were played slightly differently.
Afaq Hussain kept his own notebooks safely stored in a locked cupboard. He sometimes consulted them. I think he recognised that repertoire does indeed disappear in the oral tradition – after all, there are many hundreds, if not thousands of pieces of music. His grandfather, Abid Hussain (1867-1936) was the first professor of tablā at the Bhatkhande Music College in Lucknow. He too notated tablā compositions, and I have hundreds of pages he wrote that were almost certainly intended to be published as a pedagogical text. However, he did not indicate precise rhythms or fingerings, and so interpreting his music is problematic, even for Afaq Hussain's son Ilmas Hussain with whom I combed through the material. A precise notation, then, does have value, but only alongside an oral tradition that can add the necessary layers of information that can bring the music to life.
– In your recent research on numerous Indo-Persian texts from the 18th and 19th centuries, you highlight the evolution of the representation of musical metre in India. This research illustrates the importance of the historical approach and fully demonstrates the mechanisms of the evolution of cultural facts. What concepts do you use to describe these phenomena?
An important facet of our anthropological training was learning to function in the language of those we engaged with in our research, not merely to manage life on a day-to-day basis but rather to have access to concepts that are meaningful within the culture studied. Two terms are significant in this regard, one whose importance is, I think, overstated, the other understated. Firstly, gharānā, which from its first appearance in the 1860s originally meant “family” but which over time has come to encompass anyone who believes they share some elements of technique, style, or repertoire with an apical figure of the past. Secondly, silsila, a term common in Sufism which means chain, connection, or succession, has specific relevance to a direct teaching lineage. It is this more precise silsila that I believe holds the key to the transmission of musical culture, and yet the paradox is that the chain carries within it an implicit directive to explore one's creative individuality. That is why, for example, when one examines, say, the lineage of Delhi tablā players from the mid 19th century onwards, one finds major differences in technique, style, and repertoire from generation to generation. The same is true for my teacher Afaq Hussain, whose playing differed greatly from that of his father and teacher Wajid Hussain. Each individual inherits some musical essence in the silsila, for sure, but they must engage with and operate in an ever-changing world where artistic survival requires adaptation. It is therefore vitally important when studying any musical era to gather as much information about the socio-cultural milieu as possible.
As I have shown above, it is imperative to engage with native concepts, and to explain and use them without recourse to translation. Another prime example is tāla, which most commonly gets translated as metre or metric cycle. And yet there is a fundamental difference. Metre is implicit: it is a pattern that is abstracted from the surface rhythms of a piece, and consists of an underlying pulse that is organized into a recurring hierarchical sequence of strong and weak beats. On the other hand, tāla is explicit: it is a recurring pattern of non-hierarchical beats manifested as hand gestures consisting of claps, silent waves, and finger counts, or as a relatively fixed sequence of drum strokes. To use metre in the Indian context is therefore misleading, and I therefore encourage the use of tāla with an accompanying explanation but without translation.
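By way of illustration (a standard textbook description rather than anything drawn from the interview itself), the common 16-beat tāla tīntāl can be written down directly as such an explicit pattern of gestures:

    # Tintal as an explicit, recurring pattern of gestures (textbook description).
    TINTAL = {
        "beats": 16,
        "vibhag": [4, 4, 4, 4],                        # the sections of the cycle
        "gestures": ["clap", "clap", "wave", "clap"],  # gesture opening each section
    }

    def section_gesture(tala, beat):
        """Gesture marking the section that contains a given beat (1-indexed)."""
        start = 1
        for size, gesture in zip(tala["vibhag"], tala["gestures"]):
            if start <= beat < start + size:
                return gesture
            start += size
        raise ValueError("beat outside the cycle")

    print(section_gesture(TINTAL, 1))   # 'clap' on beat 1 (sam)
    print(section_gesture(TINTAL, 9))   # 'wave' on beat 9 (khali)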
– You are currently working on a book about 18th and 19th century sources. What is your goal?
My goal is to trace the origins and evolution of the tāla system currently in use in Hindustani music by gathering as much information as possible from contemporary sources beginning in the late 17th century through to the early 20th century and the era of recorded sound. The problem is that the available information is fragmentary and often couched in obscure language: the task is akin to doing a jigsaw puzzle where most of the pieces are missing. Moreover, the pieces one does find are not necessarily directly connected, and so the task might be better described as working with two or more puzzles. In brief, through careful analysis, inference, and some guesswork, I believe that there was a convergence of rhythmic systems in the 18th century that gave rise to the tāla system of today.
The musical practices and social contexts of the communities of Kalāwants who sang dhrupad and Qawwāls who sang khayāl, tarāna, and qaul, along with the Ḍhāḍhī community that accompanied all these genres, are crucial to understanding how and why music – and rhythm in particular – evolved the way it did. Yet there are so many other important aspects to this story: the role of women instrumentalists in the private spaces of Mughal life in the 18th century, and their gradual disappearance in the 19th century; colonialism; the status and influence of ancient texts; printing technology and the dissemination of new pedagogical texts in the late 19th century – to name but a few.
– What are some of the interesting sources to consider in order to understand the evolution of practices and rhythmic representations of Hindustani music?
Northern India has always been open to cultural exchange, and this was especially true under the Mughals. It is imperative that we understand who travelled to the courts, from where, and what they played. It is equally important to understand the written materials available as well as the intellectual discourses of the time, for knowledge of music was crucial to Mughal etiquette. Thus, to know that the highly influential music treatise Kitāb al-adwār, by the 13th century theorist Safi al-Din al-Urmawi al-Baghdadi, was widely available in India both in Arabic and in Persian translation, and that copies were in the collections of Delhi nobles from the 17th century onwards, helps us to understand why Indian rhythm was explained using the principles of Arabic prosody in the late 18th century. I have argued that, as applied to music, Arabic prosody was a more powerful tool than the traditional methods of Sanskrit prosody, and thus it was more effective in describing the changes that were occurring in rhythmic thought and practice in that period.
– This ethno-historical research sometimes clashes with the beliefs of certain musicians and researchers, especially on questions of the age and “authenticity” of traditions. Do you think the younger generations are more inclined to accept the obvious facts of the complex nature of musical traditions made up of multiple contributions and in perpetual transformation?
Some are, but some are not. There has always been a small number of scholars in India who conduct valuable, evidence-based research on music. Yet it disappoints me to note there are many more that rely on the regurgitation and propagation of unfounded, unscholarly opinion. What perhaps surprises me most is the lack of rigorous scholarly training in Indian music colleges and the persistence of disproven or discredited ideas and information in spite of so much excellent published research to the contrary.
– Since the 1990s, one notices the strengthening of a Hindu nationalism within Indian society. Have you noted a particular impact on the world of Hindustani music and on research?
This is a complex and sensitive topic. Hindu nationalism is not new, far from it, and as I demonstrated in my book on Gurudev Patwardhan, it formed a significant part of the rationale for the life and work of Vishnu Digambar Paluskar in the early 20th century. As many scholars have pointed out, it had roots in colonialism, and developed as an anti-colonial movement focussed on Hindu identity politics. That narrative, based on invented notions of a glorious Hindu past, downplayed the contributions of Mughal culture and the great lineages of Muslim musicians (not to mention women), and Indian Muslim identity within the sphere of music has suffered a decline ever since. Scholars have taken note of this dynamic and have attempted to trace some of the counternarratives that have hitherto been ignored, such as Max Katz's excellent book Lineage of Loss (Wesleyan University Press, 2017) about an important family of Muslim scholar-musicians, the so-called Shāhjahānpūr-Lucknow gharānā. I suspect that a motivational force in much modern scholarship on music in India is the desire not to omit important cultural narratives but to animate them and frame them within the grand sweep of South Asia's history.
– Following on from Rebecca Stewart's work, you too have highlighted the complex interweaving of rhythmic and metric approaches in tablā playing by showing that it results from various cultural contributions which have followed one another over time. With the intensification of global cultural exchanges since the end of the 20th century, have you observed one or more evolving trends in tablā playing?
Since the inclusion of tablā in pop music in the 1960s, the exciting jazz fusion of John McLaughlin's group Shakti in the 1970s, and the ubiquity of tablā ever since in music of every kind, it seems only natural that tablā players the world over should explore and experiment with its magical sounds. Zakir Hussain has led the way in demonstrating the flexibility and adaptability of these drums, and the thrilling, visceral velocity of its rhythmic patterns. As for tablā within the context of Hindustani concert music, I have noticed that there are many who attempt to inject that same sense of excitement, enhanced increasingly, it seems, by amplification so loud that it distorts the sound and beats the audience's eardrums into submission. I would go so far as to say that this has unfortunately become the norm.
In this regard, I count myself as something of a purist who longs for a return to a practice where the tablā player maintains a subtle, understated yet supportive role, complements the material presented by the soloist, and is modest and not overpowering when invited to contribute a short flourish or cameo solo. By the same token, I crave a return to tablā solos that are packed with content rather than “sound effects.” By “content,” I mean traditional, characterful compositions featuring specialised techniques, whose composers are named and thus honoured. And yet it is painfully obvious that such “content” is not reaching many younger players these days.
Ethnomusicology
– As mentioned, your research highlights the importance of historical sources as well as the consideration of broader phenomena such as Orientalism or Nationalism in order to understand Indian musical practices in the present. At the same time, you are very attentive to the intense current transcultural phenomena and to the need to comprehend them. In the profession, the concept of “ethnomusicology” does not always achieve consensus. What is your position with regard to this name and the subject of this discipline at the start of the 21st century?
I have never been particularly comfortable with the label “ethnomusicology.” As John Blacking used to say, all music is “ethnic music,” and therefore there should be no distinction between studies of tablā, gamelan, or hip-hop and those of Bach, Beethoven, or Brahms. We all engage in a “discourse on music”: in other words, “musicology.” The advantage of terms like the “anthropology” or “sociology” of music is that they imply a broader slate of theoretical and methodological approaches that remind us that music is a social fact. Yet we must recognise that the purview of ethnomusicological studies has evolved, and nowadays far greater attention is paid to phenomena like noise or the mundane sounds of everyday life. Therefore – without wishing to sound too cynical – although in some quarters the term “sound studies” is treated with a degree of contempt, perhaps that very general term is the most honest and accurate definition of what we (all of us) do. However, I acknowledge that it would be a shame to reject the term “music” altogether, and so I could imagine ethnomusicology, musicology, and music theory coming together under the rubric “music and sound studies.”
Teaching
– After a short period in Belfast, you taught in Toronto. Can you tell us about your teaching experience?
Yes, Toronto is a wonderful city, and by most accounts it is the most multi-cultural city on this planet. It offers a very rich and stimulating musical environment.
Mieczyslaw Kolinski taught at the University of Toronto from 1966 until 1978. His ethnomusicological interests were shaped by his training under Hornbostel and Sachs, and by the worldview shared by so many of the early giants of our discipline. He published on the scientific basis of harmony and melody, and developed methods for cross-cultural analysis – an approach emphatically rejected in my own training with John Blacking, who argued vehemently for cultural relativism, much as it was at odds with Tim Rice's training at the University of Washington. Tim was hired in 1974 and left for UCLA in 1987. Like me during my early days, Tim struggled to persuade colleagues of the importance of the ethnomusicological approach and the need to treat our discipline with the respect it deserved and the resources it required. We both fought hard. Tim introduced a program that came to be known under my watch as the World Music Ensembles, and I acquired a Balinese gamelan in 1993, which was taught by my wife, ethnomusicologist Dr Annette Sanger, formerly a colleague of John Blacking. Moreover, both Tim and I succeeded in drawing ethnomusicology classes further into the core of the curriculum to ensure that all music students, whatever their interests, were exposed to our approach and understood the value and importance of a socially-grounded view of all music. One initiative I created was a year-long introductory course called Music as Culture which for a few years I co-taught with a musicology colleague: we alternated our presentations, illustrating and cross-referencing our material and observations from the Western canon and the vast world of music beyond. Later incarnations of this course included our flagship Introduction to Music & Society. Essentially modular in approach, the chosen themes shifted and adapted over time to reflect more contemporary concerns, including music and identity, religious experience, migration, gender, healing, and sound studies.
I devised and taught a variety of courses during my time: Hindustani music; Music & Islam; Theory & Method in Ethnomusicology; The Beatles; Anthropology of Music; Fieldwork; Music, Colonialism & Postcolonialism; Rhythm & Metre in Cross-Cultural Perspective; Transcription, Notation & Analysis, etc. I worked with the South Asian community in Toronto to put on concerts by vocalist Pandit Jasraj that drew sponsorship that generated healthy scholarships for students studying Hindustani music. I helped institute an Artist-in-Residence program, inviting musicians from all over the world to spend a term with us teaching and performing. I helped to overhaul our musicology-oriented graduate programmes and introduced an MA and PhD in ethnomusicology. But perhaps the two achievements of which I am most proud are firstly the many wonderful doctoral students I mentored, many of whom have themselves gone on to pursue careers in academia, and secondly my success in expanding our representation from a single faculty position to four full-time positions in ethnomusicology.
– What is your position within the Lucknow gharānā?
I have greatly enjoyed learning and playing tablā in my life, and I consider myself extremely fortunate to have had such a close and productive association with one of the most remarkable tablā players in history: Afaq Hussain. I am blessed with a good memory and therefore still have in my head a vast repertoire of wonderful compositions dating all the way back to the early members of the Lucknow lineage who flourished in the late 18th and early 19th centuries. I am particularly interested in technique, and have spent a good deal of time studying the mechanics of playing. However, I am first and foremost a scholar, and in practical matters I have no illusions about being anything more than a tablā hobbyist. Indeed, my interest in playing has provided me with extraordinary insights into the instrument and its history.
As for my place or role within the Lucknow gharānā, I would say two things. Firstly, I continue to be part of the exchange of ideas and repertoire with my peers alongside whom I studied tablā and who now are, like me, senior figures within the silsila, the direct teaching lineage of Afaq Hussain. I am considered by them to be knowledgeable: an authority, if you will. On occasions I am asked if I remember a rare composition over which there has been some debate, and sometimes I introduce into our dialogue information and questions arising from my research that spark a lively interest. For example, Afaq Hussain's son Ilmas Hussain and I have been working together to resurrect the notebooks of his great-grandfather Abid Hussain, and place them in the context not only of his tradition but also of the early years of Lucknow's Bhatkhande College where Abid Hussain served as the first professor of tablā in the late 1920s and early 1930s. Secondly, I believe that my work has brought greater attention to the Lucknow lineage. When I arrived at Afaq Hussain's doorstep in January 1981 he was frankly at a low ebb in his life – psychologically and financially – and much about the future was uncertain. Other foreign students followed my lead and joined an ever-growing number of Indian disciples who came to learn. My book, The Tabla of Lucknow, as well as other facets of my research helped to bring national and international attention to Afaq Hussain, his son Ilmas, and their entire tradition.
When I came to Toronto I made a decision not to teach tablā outside of my duties at the University of Toronto, since I did not wish to risk depriving local tablā players (of whom there were several very good ones) of the opportunity to earn income. Within the university itself, I did run occasional workshops and courses for students, plus individual lessons, and some of them (particularly percussionists) became quite competent players.
List of publications
Books
2006 Gurudev’s Drumming Legacy: Music, Theory and Nationalism in the Mrdang aur Tabla Vadanpaddhati of Gurudev Patwardhan. Aldershot: Ashgate (SOAS Musicology Series).
2005 The Tabla of Lucknow: A Cultural Analysis of a Musical Tradition. New Delhi: Manohar (New edition with new preface).
1988 The Tabla of Lucknow: A Cultural Analysis of a Musical Tradition. Cambridge: Cambridge University Press (Cambridge Studies in Ethnomusicology).
Edited books
2013 with Frank Kouwenhoven, Music, Dance and the Art of Seduction. Delft: Eburon Academic Publishers.
Edited journals
1994-1996 Bansuri, volumes 11-13 (a yearly journal devoted to the music and dance of India, published by Raga Mala Performing Arts of Canada).
Articles, chapters in books
Forthcoming “Weighing ‘The Assets of Pleasure’: Interpreting the Theory and Practice of Rhythm and Drumming in the Sarmāya-i ‘Ishrat, a Pivotal 19th Century Text” in Katherine Schofield, ed.: Hindustani Music Between Empires: Alternative Histories, 1748-1887. Publisher TBA.
Forthcoming “An Extremely Nice, Fine and Unique Drum: A Reading of Late Mughal and Early Colonial Texts and Images on Hindustani Rhythm and Drumming” in Katherine Schofield, Julia Byl et David Lunn, eds: Paracolonial Soundworlds: Music and Colonial Transitions in South and Southeast Asia. Publisher TBA.
2021 “Ethnomusicology at the Faculty of Music, University of Toronto.” MUSICultures (Journal of the Canadian Society for Traditional Music): Vol.48.
2020 “Rhythmic Thought and Practice in the Indian Subcontinent” in Russell Hartenberger & Ryan McClelland, eds: The Cambridge Companion to Rhythm. Cambridge University Press: 241-60.
2019 “Mapping a Rhythmic Revolution Through Eighteenth and Nineteenth Century Sources on Rhythm and Drumming in North India” in Wolf, Richard K., Stephen Blum, & Christopher Hasty, eds: Thought and Play in Musical Rhythm: Asian, African, and Euro-American Perspectives. Oxford University Press: 253-72.
2013 “Introduction” in Frank Kouwenhoven & James Kippen, eds: Music, Dance and the Art of Seduction. Delft: Eburon Academic Publishers: i-xix.
2010 “The History of Tabla” in Joep Bor, Françoise ‘Nalini’ Delvoye, Jane Harvey and Emmie te Nijenhuis, eds: Hindustani Music, Thirteenth to Twentieth Centuries. New Delhi: Manohar: 459-78.
2008 “Working with the Masters” in Gregory Barz and Timothy Cooley, eds: Shadows in the Field: New Perspectives for Fieldwork in Ethnomusicology (2nd revised edition). Oxford University Press: 125–40.
2007 “The Tal Paddhati of 1888: An Early Source for Tabla.” Journal of The Indian Musicological Society, 38: 151–239.
2003 “Le rythme: Vitalité de l'Inde.” Gloire des princes, louange des dieux: Patrimoine musical de l'Hindoustan du XIVe au XXe siècle. Paris: Cité de la musique et Réunion des Musées Nationaux, 2003: 152–73.
2002 “Wajid Revisited: A Reassessment of Robert Gottlieb’s Tabla Study, and a new Transcription of the Solo of Wajid Hussain Khan of Lucknow.” Asian Music, 33, 2: 111–74.
1992 “Tabla Drumming and the Human-Computer Interaction.” The World of Music, 34, 3: 72–98.
1992 “Music and the Computer: Some Anthropological Considerations.” Interface, 21, 3-4: 257–62.
1992 “Where Does The End Begin? Problems in Musico-Cognitive Modelling.” Minds & Machines, 2, 4: 329–44.
1992 “Identifying Improvisation Schemata with QAVAID” in Walter B. Hewlett & Eleanor Selfridge-Field, eds: Computing in Musicology: An International Directory of Applications, Volume 8. Center for Computer Assisted Research in the Humanities: 115–19.
1992 with Bernard Bel “Modelling Music with Grammars: Formal Language Representation in the Bol Processor” in A. Marsden & A. Pople, eds: Computer Representations and Models in Music. London, Academic Press: 207–38. https://halshs.archives-ouvertes.fr/halshs-00004506
1991 with Bernard Bel “From Word-Processing to Automatic Knowledge Acquisition: A Pragmatic Application for Computers in Experimental Ethnomusicology” in Ian Lancashire, ed.: Research in Humanities Computing I: Papers from the 1989 ACH-ALLC Conference, Oxford University Press: 238–53.
1990 “Music and the Computer: Some Anthropological Considerations” in B. Vecchione & B. Bel, eds: Le Fait Musical – Sciences, Technologies, Pratiques, préfiguration des actes du colloque Musique et Assistance Informatique, CRSM-MIM, Marseille, France, 3-6 Octobre: 41–50.
1989 with Bernard Bel “The Identification and Modelling of a Percussion ‘Language’, and the Emergence of Musical Concepts in a Machine-Learning Experimental Set-Up.” Computers and the Humanities, 23, 3: 199–214. https://halshs.archives-ouvertes.fr/halshs-00004505
1988 with Bernard Bel “Un modèle d’inférence grammaticale appliquée à l’apprentissage à partir d’exemples musicaux.” Neurosciences et Sciences de l’Ingénieur, 4e Journées CIRM, Luminy, 3–6 Mai 1988.
1987 “An Ethnomusicological Approach to the Analysis of Musical Cognition.” Music Perception 5, 2: 173–95.
1987 with Annette Sanger “Applied Ethnomusicology: the Use of Balinese Gamelan in Recreational and Educational Music Therapy.” British Journal of Music Education 4, 1: 5–16.
1986 with Annette Sanger “Applied Ethnomusicology: the Use of Balinese Gamelan in Music Therapy.” International Council for Traditional Music (UK Chapter) Bulletin, 15: 25–28.
1986 “Computational Techniques in Musical Analysis.” Bulletin of Information on Computing and Anthropology (University of Kent at Canterbury), 4: 1–5.
1985 “The Dialectical Approach: a Methodology for the Analysis of Tabla Music.” International Council for Traditional Music (UK Chapter) Bulletin, 12: 4–12.
1984 “Linguistic Study of Rhythm: Computer Models of Tabla Language.” International Society for Traditional Arts Research Newsletter, 2: 28–33.
1984 “Listen Out for the Tabla.” International Society for Traditional Arts Research Newsletter, 1: 13–14.
Reviews
2012 Elliott, Robin and Gordon E. Smith, eds: Music Traditions, Cultures and Contexts, Wilfrid Laurier University Press, in “Letters in Canada 2010”, University of Toronto Quarterly, 81, 3: 779–80.
2006 McNeil, Adrian Inventing the Sarod: A Cultural History. Calcutta: Seagull Press, 2004. Yearbook for Traditional Music, 38: 133–35.
1999 Myers, Helen, Music of Hindu Trinidad: Songs from the India Diaspora. Chicago Studies in Ethnomusicology. Chicago: University of Chicago Press, 1998. Notes: 427–29.
1999 Marshall, Wolf, The Beatles Bass. Hal Leonard Corporation, 1998. Beatlology, 5.
1997 Widdess, Richard, The Ragas of Early Indian Music: Music, Modes, Melodies, and Musical Notations from the Gupta Period to c.1250. Oxford Monographs on Music. Oxford: Clarendon Press, 1995. Journal of the American Oriental Society, 117, 3: 587.
1994 Rowell, Lewis, Music and Musical Thought in Early India. Chicago Studies in Ethnomusicology, edited by Philip V. Bohlman and Bruno Nettl. Chicago and London: The University of Chicago Press, 1992. Journal of the American Oriental Society, 114, 2: 313.
1992 CD review: “Bengal: chants des ‘fous’”, by Georges Luneau & Bhaskar Bhattacharyya, and “Inde du sud: musiques rituelles et théâtre du Kerala”, by Pribislav Pitoëff. Asian Music 23, 2: 181–84.
1992 Witmer, Robert, ed.: “Ethnomusicology in Canada: Proceedings of the First Conference on Ethnomusicology in Canada.” (CanMus Documents, 5) Toronto, Institute for Canadian Music, 1990. Yearbook for Traditional Music, 24: 170–71.
1992 Neuman, Daniel M. The Life of Music in North India: The Organization of an Artistic Tradition. Chicago, University of Chicago Press, 1990. Journal of the American Oriental Society, 112, 1: 171.
1988 Qureshi, Regula Burckhardt. Sufi Music of India and Pakistan: Sound, Context and Meaning in the Qawwali. Cambridge Studies in Ethnomusicology. Cambridge: CUP, 1986. International Council for Traditional Music (UK Chapter) Bulletin, 20: 40–45.
1986 Wade, Bonnie C. Khyal: Creativity within North India’s Classical Music Tradition. Cambridge Studies in Ethnomusicology. Cambridge: CUP. Journal of the Royal Asiatic Society: 144–46.
Recordings
1999 Honouring Pandit Jasraj at Convocation Hall, University of Toronto. 2 CD set. Foundation for the Indian Performing Arts, FIPA002.
1995 Pandit Jasraj Live at the University of Toronto. 2 CD set. Foundation for the Indian Performing Arts, FIPA001.
Liner notes
2009 Mohan Shyam Sharma (pakhavaj): Solos in Chautal and Dhammar. India Archive Music CD, New York.
2007 Anand Badamikar (tabla): Tabla Solo in Tintal. India Archive Music (IAM•CD 1084), New York.
2002 Pandit Shankar Ghosh: Tabla Solos in Nasruk Tal and Tintal. CD, India Archive Recordings (IAM•CD1054), New York.
2001 Shujaat Khan, Sitar: Raga Bilaskhani Todi & Raga Bhairavi. CD, India Archive Recordings (IAM•CD1046), New York.
1998 Pandit Bhai Gaitonde: Tabla Solo in Tintal. CD, India Archive Recordings (IAM•CD1034), New York.
1995 Ustad Amjad Ali Khan: Rag Bhimpalasi & Rag “Tribute to America”. CD, India Archive Recordings (IAM•CD1019), New York.
1994 Ustad Nizamuddin Khan: Tabla Solo in Tintal. CD, India Archive Recordings (IAM•CD1014), New York.
1992 Rag Bageshri & Rag Zila Kafi, played by Tejendra Narayan Majumdar (sarod) and Pandit Kumar Bose (tabla). CD, India Archive Recordings (IAM•CD 1008), New York.
1990 “In Memoriam: John Blacking (1928-1990).” Ethnomusicology 34, 2: 263–6.
➡ A new version of Bol Processor compliant with various systems (MacOS, Windows, Linux…) is under development. We invite software designers to join the team and contribute to the development of the core application and its client applications. Please join the BP open discussion forum and/or the BP developers list to stay in touch with work progress and discussions of related theoretical issues.
The Bol Processor BP3 currently comprises a console (written in C) and a set of PHP/HTML/CSS/Javascript files that act as its interface. A console version of Csound can also be attached. For detailed installation instructions, please refer to the Bol Processor ‘BP3’ and its PHP interface page.
It all works beautifully in a design that is compatible with multiple 64-bit systems: MacOS, Linux and Windows. However, it does require the installation of an Apache+PHP package to run the interface. (We are currently using the free version of MAMP on Mac computers to develop the BP3 interface.)
The next phase of the project will involve the creation of a standalone application that will replace the web browser and its associated PHP/HTML/CSS files. The application will also be available in three versions, for Linux, MacOS and Windows.
This step is within our reach using the PHP Desktop platform. Versions for MacOS and Windows are already up and running, but a few issues still need to be resolved before they are ready for distribution.
A contribution to The Ratio Symposium, 14-16 Dec. 1992, Den Haag (The Netherlands). Published in Barlow, Clarence (ed.) The Ratio Book. Den Haag: Royal Conservatory - Institute of Sonology. 2001: 86-101. This paper is referenced on HAL ⟨hal-00134179⟩ and quoted in Polymetric structures.
Abstract
This paper deals with various problems of quantifying musical time that arise both in the analysis of traditional drumming and in computer-generated musical pieces based on "sound-objects", i.e. sequences of code that control a real-time sound processor.
Section 1 suggests that syntactic approaches may be closer to the intuitions of musicians and musicologists than commonly advocated numerical approaches. Furthermore, symbolic-numerical approaches lead to efficient and elegant solutions of constraint satisfaction problems with respect to symbolic and physical durations, as illustrated in Sections 2 and 3, respectively.
Polymetric expressions are the basic representation model for the timing of musical data in the Bol Processor. The term is a blend of polyphony and polyrhythm, the former evoking superimposed streams of musical events, and the latter a metric adjustment of their durations.
This page illustrates the syntax of simple expressions and their interpretation by the polymetric expansion algorithm described in Two algorithms for the instantiation of structures of musical objects (Bel 1992). This process can be extremely complex, since an entire musical work — e.g. Beethoven's Fugue in B flat major — is treated by the Bol Processor as a single polymetric structure: see example.
In this tutorial, simple notes ("C4", "D4" etc.) are used following the "English" convention. All time-setting processes could be illustrated using sound-objects or simple notes in other conventions: "Italian/Spanish/French" or "Indian".
Symbolic versus physical duration
Music notation systems (for humans) make use of symbolic rather than physical durations. Their units are beats rather than (milli)seconds.
For example, if the time signature is 3/4, we will have 3 quarter notes (crotchets) in a bar. A half note (minim) lasts twice as long as a quarter note in the same context. Other relative durations are expressed in the same way.
To get the physical duration of a note we need an additional piece of information: the metronome value, for example "mm = 100", which means 100 beats (quarter notes) per minute.
A metronome value (60 bpm by default) is declared in the settings file of a Grammar or Data page. With this setting, note "E4" on a Bol Processor score represents an "E" of the 4th octave played in 1 beat with a physical duration of 1 second.
This convention extends to arbitrarily named sound-objects whose default durations are set by the streams of MIDI events or sequences of Csound instructions from which they are composed. The mapping of symbolic to physical time for the performance of sound-object structures (with their metric and topological properties) is a sophisticated process performed by a time-setting algorithm. A practical example is discussed on the page Interactive improvisation with sound-objects.
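For simple notes, the arithmetic is straightforward. The following minimal sketch (written for this page and independent of the actual Bol Processor code) converts a symbolic duration in beats into a physical duration in seconds for a given metronome value:

    # Sketch only: symbolic durations (beats) to physical durations (seconds).
    def physical_duration(symbolic_beats, mm=60):
        """mm is the metronome value in beats per minute (60 bpm by default)."""
        return symbolic_beats * 60.0 / mm

    print(physical_duration(1))           # 1.0 s: one beat at mm = 60
    print(physical_duration(1, mm=100))   # 0.6 s: one quarter note at mm = 100
    print(physical_duration(2, mm=100))   # 1.2 s: a half note lasts twice as long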
Polymetric expression
Typical forms of polymetric expressions are:
field1, field2 or {field1, field2} indicates that field1 and field2 should be superimposed and the total symbolic duration should be adjusted to that of field1;
field1 • field2 or {field1 • field2} indicates that field1 and field2 should be consecutive and the symbolic duration of each field should be adjusted to that of field1;
{expression} is equivalent to expression.
Curly braces '{' and '}' are used to create multi-level expressions.
➡ Periods written as bullets '•' in the Data and Grammar windows are converted to plain periods before being sent to the console, as the console rejects some Unicode characters.
For example, {C4 D4, E4 F4 G4, E5} produces the following time structure with a metronome set to 60 beats per minute:
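The onsets and durations implied by this rule can be computed directly. The sketch below is only an illustration of the rule stated above (every field is scaled to the symbolic duration of the first field), not the Bol Processor's own time-setting algorithm:

    # Sketch of the superposition rule: each field is stretched or compressed so
    # that its total symbolic duration equals that of the first field.
    def superimpose(fields, mm=60):
        """fields: lists of note names; returns (note, onset, duration) in seconds."""
        beat = 60.0 / mm
        total = len(fields[0]) * beat            # total duration set by the first field
        events = []
        for field in fields:
            step = total / len(field)            # scale this field to 'total'
            events += [(note, i * step, step) for i, note in enumerate(field)]
        return events

    # {C4 D4, E4 F4 G4, E5} at mm = 60: C4 and D4 last 1 s each,
    # E4, F4 and G4 last 2/3 s each, and E5 lasts the full 2 s.
    for event in superimpose([["C4", "D4"], ["E4", "F4", "G4"], ["E5"]]):
        print(event)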
The use of the first field to set the total duration is highlighted by the following examples where the fields appear in a reverse order:
Rests (silences) can be notated with "-" for single unit rests, or with integer numbers and ratios. The following shows a single unit rest and a more complex rest of 2.5 beats:
Polymetric structures can be multi-level, for example:
The same time-setting rules apply to sequences where commas are replaced by periods. For example:
Superpositions and sequences can be combined (even in multi-level expressions), such as:
Undetermined rests
Undetermined rests are a powerful feature of polymetric expressions used to avoid inconvenient computations. The polymetric expansion algorithm calculates (symbolic) durations that produce the least complex expression.
They may be notated as "…" or "_rest" in Data or Grammars.
➡ Since the console does not recognise the "…" Unicode symbol, the PHP interface rewrites it as "_rest".
Let us start with a trivial example. In {C4 D4 E4, … F4 G4}, the undetermined rest "…" is replaced by a single unit rest:
This solution yields the simplest polymetric expression. The same applies to {… C4 D4 E4, F4 G4}:
If a field of the polymetric expression contains several undetermined rests, these are assigned equal durations — in such a way that the complexity of the structure remains minimal. For example, consider {… C4 D4 … E4, A4 F4 G4}:
An undetermined rest may even be assigned a duration of 0 if this yields a simpler expression. For example, in {… C4 D4 … E4, F4 G4}, duration 0 gives a "three in two" polyrhythm whereas duration 1 would give "five in two". The criterion for evaluating complexity is the lowest common multiple (LCM) of the number of units in each field: here 6 versus 10. Therefore the solution is:
Each field of a polymetric expression can contain undetermined rests. Consider for example {… C4 D4 E4, A4 B4 F4 … G4}. Again, assigning a duration of zero to each undetermined rest gives the simplest structure, since "four in three" (LCM = 12) is a better trade-off than "five in four" (LCM = 20).
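The complexity criterion can be sketched in a few lines of C; this is only an illustration of the choice described above, not the console's actual procedure:

/* Illustrative sketch: the preferred assignment of undetermined rests is the one
   that minimises the LCM of the number of units in each field. */
unsigned long gcd(unsigned long a, unsigned long b) {
    while (b != 0) { unsigned long t = b; b = a % b; a = t; }
    return a;
}
unsigned long lcm(unsigned long a, unsigned long b) { return (a / gcd(a, b)) * b; }
/* {… C4 D4 … E4, F4 G4}: rests of 0 give fields of 3 and 2 units, lcm(3, 2) = 6;
   rests of 1 give 5 and 2 units, lcm(5, 2) = 10, so the first assignment wins.
   {… C4 D4 E4, A4 B4 F4 … G4}: lcm(3, 4) = 12 beats lcm(4, 5) = 20 likewise. */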
A more complex structure is assigned to {C4 D4 E4, A4 B4 F4 … G4 A4, C5 … D5} with rests of 1 unit in the second and third fields. The LCM of 3 and 6 is 6, which is the lowest value possible for this structure.
Note that there is an equivalent solution in terms of complexity: assigning duration 0 to the rest in the third field. If more than one solution is valid, the algorithm chooses the one with the fewest zero-duration rests.
A similar case is {C4 D4 E4, A4 B4 F4 … G4 A4, C5 … D5 E5}:
Here, the first rest has been assigned 1 unit and the second one 3 units. This gives the LCM of 3 and 6 = 6. Another optimal (equivalent) solution would be to assign 0 to the second rest, but this was discarded due to the heuristic of avoiding zero duration rests.
Replacing commas with periods gives the same structure in a sequential form:
Tied notes, tied sound-objects
Sound-objects or simple notes can be concatenated ("tied"). Consider, for example:
and its variation with ties notated "&":
The time interval of a tied note/sound-object may cross the limits of (tree-shaped) polymetric structures. For example:
The challenge of dealing with tied events is discussed on the Tied notes page.
Real music is “polymetric”
The rules and heuristics associated with polymetric expressions make sense when dealing with real musical items. In particular, they made it possible to import MusicXML scores and interpret them as Bol Processor data (read page).
An Indian conception of time can be seen most clearly in the venerable Bol Processor system for algorithmic music, created by computer scientist Bernard Bel from work on notating tabla rhythms and developed over forty years. Drawing from Indian classical music, it includes an expressive approach to time setting that seems unique to the algorithmic music field, in which sound events are organized in terms of interrelationships before being mapped to physical time. Although not a live coding system itself, it has been heavily influential on the design of the TidalCycles system, particularly its embedded “mininotation” language for describing rhythm in the Bol Processor and more generally its representation of music based not on the duration of events (as in staff notation) but on the duration of cycles.
Alan Blackwell, Emma Cocker, Geoff Cox, Thor Magnusson, Alex McLean, Live Coding: A User's Manual, MIT Press, 2022, page 195.
This page is intended for developers of the Bol Processor BP3 (read installation). It is not a formal description of the algorithms carried by the console's C code, but rather an illustration of their management of musical processes, which may be useful for checking or extending algorithms.
All examples are included in the "ctests" folder of the distribution.
Example of tied notes
Consider measures #18 to #22 of Liszt's 14th Hungarian Rhapsody imported from a MusicXML file — see Importing MusicXML scores. The printed score of measures #19 to #21 is as follows:
Tied notes are visible on this score. Slurs connecting notes at different pitches are ignored by the Bol Processor. These could be interpreted using the _legato(x) performance control, but setting an appropriate value for 'x' would require a careful analysis of the context. Ties link the same pitch, such as "Ab1" (A flat in octave 1) at the bottom of the score lines. There are 3 occurrences of tied "Ab1" in this section, the first one starting at the end of measure #18 and the third one ending at measure #21.
In Bol Processor notation, this fragment results in a sequence of polymetric structures. For the sake of clarity, each measure has been placed on its own paragraph:
The 3 occurrences of tied "Ab1" and "B1" are shown in colour. "Ab1&" is the beginning of a tie and "&Ab1" its end (in the same colour). Longer ties would occasionally require sequences such as "Ab1&" + "&Ab1&" + "&Ab1".
These ties merge the (symbolic) time intervals of the beginning and ending occurrence. For example, the score "C4& &C4" could be replaced by "C4 _ " or equivalently "{2, C4}". The merging of time intervals is done in the FillPhaseDiagram() procedure of file "FillPhaseDiagram.c".
While parsing the compact polymetric structure to build the phase table — see Complex ratios in polymetric expressions — the algorithm calls GetSymbolicDuration() (in "SetObjectFeatures.c") to calculate the symbolic duration of a sound-object or simple note. By default, this is easy to calculate. The ignoreconcat flag is set to true if the sound-object or note is not followed by a '&'. The duration is set by prodtempo = Prod / tempo.
If ignoreconcat is false, GetSymbolicDuration() looks for the next acceptable occurrence of the note or sound object preceded by a '&'. Acceptability implies the following conditions:
The note or sound-object should be on the same MIDI channel or the same Csound instrument;
The date of the next occurrence should be later than the on-set date.
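These conditions can be sketched as a forward search; the structure and names below are illustrative, not those of "SetObjectFeatures.c":

/* Illustrative sketch: find the next occurrence "&note" that can close a tie "note&". */
struct Event { int key; int channel; double onset; int opens_tie; int closes_tie; };

int find_tie_end(const struct Event *ev, int n, int start) {
    for (int k = start + 1; k < n; k++) {
        if (!ev[k].closes_tie) continue;
        if (ev[k].key != ev[start].key) continue;         /* same note or sound-object */
        if (ev[k].channel != ev[start].channel) continue; /* same MIDI channel (or Csound instrument) */
        if (ev[k].onset <= ev[start].onset) continue;     /* must occur later than the on-set date */
        return k;                                         /* first acceptable occurrence */
    }
    return -1;                                            /* unmatched tie, reported as an error */
}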
These conditions are readily apparent in the example. For instance, an "Ab1&" cannot be paired with an "&Ab1" that occurs at an earlier date; the next occurrence of "&Ab1", at a later date, is the valid match. The same applies to the pairs shown in the other colours: each colour indicates a matching pair.
Once the duration has been set, the algorithm calls PutZeros() to fill as many columns as necessary to set the total duration of the pair of tied notes — as detailed in the page on Complex ratios in polymetric expressions.
In addition, when "&A1b" is parsed later, it should be ignored because the duration of the note has already been set by calling GetSymbolicDuration() and PutZeros() at the time of parsing "A1b&". Skipping these procedures is ensured by the foundendconcatenation flag.
Graphic display
The following is a sound-object graph of measures #20 to #21, on which the boundaries of intervals in tied notes are marked with dashed lines. The bounds of "Ab1& … &Ab1" (red colour) are self-explanatory when compared with the musical score shown above: each occurrence is an instance of "{1/8, Ab1&} &Ab1".
Symbolic durations can be checked against physical time: since the tempo is 13/10, the first part has a physical duration of 1/8 x 10 / 13 = 0.09 seconds and the second part 10/13 = 0.77 seconds.
At the end of measure #21 the beginning of the note "B1&" is linked to its occurrence "&B1" in measure #22. The boundary is marked by a dashed blue line. The piano roll of measures #21-22 shows this connection of "B1":
Arpeggios
The green dashed lines belong to the polymetric expression {1/4,Db4& Gb4& A4& Eb5&}{7/2,&Db4,&Gb4,&A4,&Eb5}, the interpretation of an arpeggio on the chord at the beginning of measure #20. This interpretation is constructed when importing MusicXML files — see the PHP code in the file "_musicxml.php".
A short sequence "Db4 Gb4 A4 Eb5" (duration 1/4 beat) is played before the chord "{Db4, Gb4, A4, Eb5}" whose duration is set to 7/2 beats. Each note in the sequence is tied to its occurrence in the chord.
A clear illustration of the use of tied notes and undetermined rests is a short musical phrase borrowed from a tutorial by Charles Ames, a pioneering designer of composition algorithms. The phrase is supplied as a musical score but its interpretation requires a careful analysis of the musical structure, resulting in the following Bol Processor score:
To make things clear we need to look at the score in common music notation, divide it into blocks associated with variables, and finally write a grammar "-gr.Ames" to build the structure. Below are details of the analysis process and the resulting graphs of sound-objects and piano roll:
In this grammar, undetermined rests have been written as "…". In its current version, the Bol Processor console no longer recognises the Unicode ellipsis "…", so it is automatically converted to "_rest" by the PHP interface.
Undetermined rests are a powerful feature of polymetric expressions used to avoid tedious calculations. The polymetric expansion algorithm computes (symbolic) durations that produce the least complex expression. Read more in the Polymetric Structures tutorial.
Tied notes are exactly the ones indicated by links on the musical score. The sound rendering is:
Error tracing
The construction of suitable time intervals for tied notes depends on the matching of pairs — e.g. "A4&" followed by "&A4" — in the Bol Processor score. Some pairs may remain incomplete for one reason or another:
The musical item has been split into chunks, using the PLAY safe (instead of PLAY) option to speed up computation, and the two parts belong to separate chunks;
An error in the imported MusicXML score;
An error in the algorithm — increasingly rare.
Case (1) is limited by the method used for chunking items: each chunk is designed to contain matching numbers of tie starts and tie ends. However, this cannot be guaranteed because chunks are limited in size.
Failure to balance ties is indicated below the PLAY safe button (see image).
Errors are shown in colour on the track being played. They may not cause any noticeable change in the performance. However, we recommend that you report any incorrect data to the designers.
Below is an example of an error in a MusicXML score of Beethoven's Fugue in B flat major. A tie starts at note "Db5" (MIDI key #73) at the beginning of measure #573 (part 2), but it ends nowhere:
Read the MusicXML score fragment to check for this inconsistency. There are other inconsistencies in this score, such as a slur starting on note 'C4' of measure #646 (part 3) that does not end. This makes it difficult to interpret the slurs as legato.
This page is intended for developers of the Bol Processor BP3 (read installation). It is not a formal description of the algorithms carried by the console's C code, but rather an illustration of their management of musical processes, which may be useful for checking or extending algorithms.
All examples are contained in the file "-da.checkPoly" in the "ctests" folder in the distribution.
Syntax of silences
In the Bol Processor's data/grammar syntax, silences (rests in conventional musical terminology) are represented either by a hyphen '-' for a single unit duration, or by integer ratios to specify a more complex duration:
"4" is a rest of 4 units duration
"5/3" is a rest of (approximately) 1.666-unit duration
"3 1/2" is a rest of 3.5 units duration
For example, "C4 C5 3/2 D5 E5" results in the following piano roll with a rest of 3/2 (1.5) units starting on beat 2 and ending on beat 3.5:
In this tutorial we will use the default metronome value = 60 beats per minute.
Another simple example is {3 1/2, C3 D3 B2}, which is the sequence of notes "C3 D3 B2" constrained to a total duration of 3 1/2 (3.5) beats. This silence is the first field of the polymetric expression (explained below). This results in the following piano roll:
or equivalently the sound-object graph:
Syntax of tempo
Any sequence of symbols conforming to the syntax of Bol Processor is processed as a polymetric expression. Typical forms are:
field1, field2 indicates that field1 and field2 should be superimposed and the total duration should be that of field1;
field1.field2 indicates that field1 and field2 should be sequential, with the duration of each field being that of field1;
{expression} is equivalent to expression.
Curly braces '{' and '}' are used to create multi-level expressions.
A number of examples of polymetric expressions can be found in the Polymetric structures tutorial.
For example, {{C4 D4, E4 F4 G4}, E5} gives the following structure:
In order to interpret this structure, the Bol Processor inserts explicit tempo values into the expression. In this case, the most compact representation with explicit tempo values is:
*1/1 {{C4 D4,*2/3 E4 F4 G4} ,*2/1 E5}
Expressions such as "*2/3" indicate that the duration of each note (or sound-object) should be multiplied by 2/3, regardless of the preceding statements. This means that the durations of notes "E4", "F4" and "G4" should be 2/3 seconds as shown in the diagram.
Creating the compact representation with its explicit tempo markers may require recursive calls of a sophisticated procedure called PolyExpand() in the "Polymetric.c" file.
At this stage it is important not to confuse the notations:
"2/3" is a silence of duration 2/3 beats;
"_tempo(2/3)" multiplies the current tempo by 2/3. This is a relative tempo marker;
"*2/3" sets the current duration of the units to 2/3 of the metronome period. This is an absolute tempo marker. Similarly, "*4" multiplies durations by 4, and "*1/5" or "/5" divides them by 5 — whereas "1/5" is a 1/5 beat silence.
The third syntax is the one used by the Bol Processor's time-setting algorithms. Despite its syntactic validity, we do not recommend using it in grammars and data, as it can produce conflicting durations in polymetric structures. For example, {*2/1 A4 B4, *3/1 A5 B5} makes no sense because it tries to force the first field to have a duration of 2 x 2 = 4 beats and the second field to have a duration of 3 x 2 = 6 beats. The correct (never contradictory) way to change a tempo in data or grammars is to use the "_tempo(x)" performance tool.
Expanding a polymetric expression
In the previous paragraph we saw that {{C4 D4, E4 F4 G4}, E5} is internally represented as *1/1 {{C4 D4,*2/3 E4 F4 G4} ,*2/1 E5}. This internal representation is the most compact with explicit tempo markings. Therefore, it is the one that is maintained through all the steps of time setting.
Humans may prefer to see a more comprehensive representation called the expanded polymetric expression:
/3 {{C4_ _ D4_ _, E4_ F4_ G4_} , E5_ _ _ _ _}
This is done by clicking the EXPAND button on a data page. Underscores '_' represent extensions of the duration of the previous unit. These should not be confused with '-' (silence). To make things clearer, let us replace a '_' with a '-':
/3 {{C4_ _ D4_ _, E4_ F4 - G4_}, E5_ _ _ _ _}
This results in the following structure, where "F4" is not extended:
The expanded polymetric expression may become too large for a human observer. In this case only the compact version will be returned.
In the code of the Bol Processor console, sound-objects (of all kinds) are identified by numbers. The variable used to identify them in algorithms is always 'k' or 'kobj'. There is an option (in the code) to display object identifiers on a graph, set by the SHOWEVERYTHING constant. If it is set to true, the previous sound-object graph is displayed as follows:
Notes "C4", "D4", etc. have identifiers kobj = 2, 3, etc. The identifier "0" is reserved for extensions '_' and "1" for silences "-", none of which are shown in the graph. An exception is object #8, labelled <<->>, which is an out-time (zero duration) "silence" marking the end of the structure to facilitate its synchronisation with the next item.
The phase diagram
Given a compact polymetric structure, time-setting algorithms require a table in which each column is assigned a date (in physical time). The cells of this phase diagram contain the identifiers of the sound-objects, including "0" and "1". It is created by the procedure FillPhaseDiagram() in the file "FillPhaseDiagram.c".
It is easy to imagine that the table would become very large if no compression techniques were used. For example, Liszt's 14th Rhapsody would require no less than 9 × 10²¹ cells! The reason is that the Bol Processor calculates symbolic durations as integer ratios. A symbolic duration of 5/3 will never be replaced by "1.666" for two reasons: (1) roundings would accumulate as noticeable errors, and (2) we don't know in advance how many decimals we need to keep. The physical duration of 5/3 beats depends on the metronome and the sequence of "_tempo(x)" controls that change the tempo.
Let us first consider an unproblematic case. The polymetric expression /3 {{C4_ _ D4_ _, E4_ F4 - G4_}, E5_ _ _ _ _} creates the following phase diagram:
In this example, if the metronome is set to 60 beats per minute, the physical duration assigned to each column is 1/3 second = 333 ms. As the graph becomes larger, this physical duration may fall below a practical limit. This is where quantization comes in. It is set to 10 milliseconds by default, which means that two events occurring within 10 ms of each other can be written into the same column. To do this, the compact polymetric structure is rewritten with a compression rate (Kpress) that makes it fit into a phase diagram of suitable size.
If the piece of music lasts 10 minutes, we'll still get 10 × 60000 / 10 = 60000 columns in the table. Filling the phase diagram requires a very high compression rate, for example more than 5 × 10¹² for Beethoven's Fugue in B-flat major.
To make matters worse, the algorithm has to deal with sequences of events that fall into the same column. This situation is signalled by the variable toofast, which is obtained by comparing the current tempo with the maximum tempo accepted in the structure. In the case of toofast, each event is written to a new row of the table in such a way as to respect the sequential order of the stream.
So we end up with 12132 lines for the phase table of Beethoven's fugue, in which the longest toofast stream contains 625 events — notes or sound-objects. These 625 events, which occur within a single frame of 10 ms, actually include '_' events which are extensions of notes belonging to the stream.
Dealing with complex ratios
In Bol Processor terminology, an integer ratio p/q is "complex" if either 'p' or 'q' exceeds a limit set in the source code. That limit is ULONG_MAX, the maximum value of the unsigned long type, i.e. 18446744073709551615 on 64-bit platforms.
In the code of the Bol Processor console, 'p' and 'q' are actually coded as double floating point numbers whose mantissa can contain as many digits as unsigned long integers. Arithmetic operations are performed on the fractions. Each resulting fraction is checked for complexity by a procedure called Simplify() in the "Arithmetic.c" file:
While 'p' or 'q' is greater than ULONG_MAX, divide 'p' and 'q' by 10;
Divide 'p' and 'q' by their greatest common divisor (GCD).
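A minimal sketch of these two steps, for illustration only; the actual Simplify() in "Arithmetic.c" may differ in detail:

#include <limits.h>

/* Minimal sketch of Simplify(). */
void simplify(double *p, double *q)
{
    /* (1) while p or q exceeds ULONG_MAX, divide both by 10 (small rounding error) */
    while (*p > (double) ULONG_MAX || *q > (double) ULONG_MAX) {
        *p /= 10.;
        *q /= 10.;
    }
    /* (2) divide p and q by their greatest common divisor */
    unsigned long a = (unsigned long) *p, b = (unsigned long) *q, t;
    while (b != 0) { t = b; b = a % b; a = t; }
    if (a > 1) {
        *p = (double)((unsigned long) *p / a);
        *q = (double)((unsigned long) *q / a);
    }
}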
Part (1) of the Simplify() procedure generates rounding errors, but these represent a few units of very large numbers. In this way, the accuracy of symbolic durations is maintained throughout the computation of complicated polymetric structures.
Complex ratios in silences
Let us check the effect of quantization by playing:
C4 C5 36001/24000 D5 E5
The ratio 36001/24000 cannot be simplified. However, 1/24000 beat would take 0.04 ms which is much less than the 10 ms quantization. So, the ratio can be approximated to 36000/24000 and simplified to 3/2. The final result is therefore "C4 C5 3/2 D5 E5":
Let us now consider "C4 C5 35542/24783 D5 E5" which looks similar, as 35542/24783 (1.43) is close to 1.5. However, the calculation is more complex… Using the 10 ms quantization, the ratio is reduced to 143/100 and the compact polymetric expression is:
The 143/100 silence is now represented as a single '-' (kobj = 1) followed by 142 '_' (kobj = 0). This sequence is toofast because tempomax, the maximum tempo accepted here, would be '50' instead of '100'. The compression rate is Kpress = 2. A full explanation requires the polymetric algorithm explained here.
The process of filling the phase table can be found in "FillPhaseDiagram.c". We call 'ip' the index of the column into which the next event is to be plotted. In the most common situation, e.g. writing "C4 _ _" (object #2), two procedures are called:
Plot() writes '2' (kobj) into column ip
PutZeros() writes two zeros into the columns ip + 1 and ip + 2.
So, "C4 _ _" will have a symbolic duration of 3 units, as expected.
The case is different with a silence of 143/100, because the toofast situation requires that less than 142 '_' should be inserted after '-'. To this end, a (floating-point) variable part_of_ip is initialised to 0 and gets incremented by a certain value until it exceeds Kpress. Then Plot() and PutZeros() are called, part_of_ip is reset and a new cycle starts… until all 142 '_' of the compact polymetric expression have been read.
The increment of part_of_ip in each cycle is:
part_of_ip += Kpress * tempomax / tempo;
In this simple example, tempo = 100, tempomax = 50 and Kpress = 2. So the increment is 1 and part_of_ip will reach the value of Kpress after 2 cycles. This means that every other '_' will be skipped.
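A sketch of one such cycle (the comparison and the reset are assumptions; the real loop in "FillPhaseDiagram.c" is interleaved with many other tasks):

#include <stdio.h>

/* Illustrative sketch of the toofast cycle described above. */
int main(void) {
    double Kpress = 2., tempomax = 50., tempo = 100.;  /* values of the example */
    double part_of_ip = 0.;
    int plotted = 0, skipped = 0;
    for (int n = 0; n < 142; n++) {                    /* the 142 '_' of the 143/100 silence */
        part_of_ip += Kpress * tempomax / tempo;       /* increment = 1 here */
        if (part_of_ip >= Kpress) {                    /* assumed test */
            plotted++;                                 /* Plot() and PutZeros() would be called here */
            part_of_ip = 0.;                           /* reset, a new cycle starts */
        } else skipped++;                              /* this '_' is skipped */
    }
    printf("plotted %d, skipped %d\n", plotted, skipped);  /* plotted 71, skipped 71 */
    return 0;
}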
Incrementing ip requires a more complicated process. The algorithm keeps track of the column numbers in the table as it would be created with Kpress = 1. These numbers are usually much larger than those of the actual phase diagram. The large number i is mapped to ip using the Class() function:
unsigned long Class(double i)
{
    unsigned long result;
    if (Kpress < 2.) return((unsigned long) i);
    result = 1L + ((unsigned long)(floor(i) / Kpress));
    return(result);
}
So, each cycle of reading '_' in the toofast situation ends up incrementing i and then updating ip via the Class(i) function. The increment of i is:
prodtempo - 1
in which:
prodtempo = Prod / tempo
The variables Prod and Kpress are calculated after the compact polymetric expression has been created. Prod is the lowest common multiple (LCM) of all tempo values, i.e. '100' in this example.
Let us use the integers Pclock and Qclock to define the metronome value as Qclock * 60 / Pclock. If the metronome is set to its default value of 60 bpm, then Pclock = Qclock = 1.
The code that calculates Kpress and updates Prod accordingly is not reproduced here; a rough sketch of the idea follows.
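The figures quoted on this page (Kpress = 24961 when Prod = 2496100 and the quantization is 10 ms) suggest the following logic; this is only a guess, in which quantization_ms, the rounding and the handling of Prod are assumptions that may differ from the console code:

#include <math.h>

/* Hypothetical sketch, not the actual console code. Pclock, Qclock, Prod,
   Kpress and tempomax are the variables named in the text above; quantization_ms
   stands for the quantization value (10 ms by default). */
extern int Pclock, Qclock;
extern double Prod, Kpress, tempomax, quantization_ms;

void set_compression(void)
{
    double period_ms = 1000. * (double) Pclock / (double) Qclock; /* one metronome beat in ms */
    double column_ms = period_ms / Prod;   /* physical duration of one column when Kpress = 1 */
    Kpress = 1.;
    if (column_ms < quantization_ms)
        Kpress = ceil(quantization_ms / column_ms); /* compress so that a column lasts at least the quantization */
    tempomax = Prod / Kpress;              /* e.g. 2496100 / 24961 = 100 in the example below */
}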
Let us calculate the duration of the silence between "F2" and "G1" in two ways:
In the source polymetric expression, this silence is notated as 667/480. Since the tempo is 80/39, its duration should be 667/480 * 39/80 = 0.67 beats (confirmed by the graph).
In the compact polymetric expression, we find one '-' object followed by 666 '_' prolongations at a speed of *13/12800. The duration is therefore 667 * 13/12800 = 0.67 beats.
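As a quick numerical check (illustrative only), the two computations give the same value:

#include <stdio.h>

int main(void) {
    double d1 = (667. / 480.) * (39. / 80.);  /* source expression at tempo 80/39 */
    double d2 = 667. * 13. / 12800.;          /* compact expression at speed *13/12800 */
    printf("%.6f %.6f\n", d1, d2);            /* both print 0.677422 */
    return 0;
}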
It would be difficult to follow the algorithm step by step because Prod = 2496100, Kpress = 24961 and tempomax = Prod / Kpress = 100. Within the silence, tempo = 985 and the increment of part_of_ip is 24961 * 100 / 985 = 2534.11167… The number of cycles before part_of_ip reaches the value of Kpress is ceil(24961 / 2534.11) = ceil(9.85) = 10. This means that 9 out of 10 '_' objects have been skipped.
Conclusion
These examples and explanations provide insight into the code in the "FillPhaseDiagram.c" file of the console code. We hope that it will be useful for future development or migration of algorithms.
This is also a demonstration of the complexity of time calculations when dealing with polymetric structures capable of carrying all the details of real musical works — see Importing MusicXML scores for "real life" examples.
The following are Bol Processor + Csound interpretations of J.-S. Bach's Prelude 1 in C major (1722) and François Couperin's Les Ombres Errantes (1730) — both near the end of the Baroque period — using temperament scales (Asselin 2000). The names and tuning procedures follow Asselin's instructions (p. 67-126). Images of the scales have been created using the Bol Processor.
The construction of these scales with the Bol Processor is explained in detail on the Microtonality page. The complete set of scale images is available on this page.
➡ We hope to be able to release better sound demos upon receipt of a set of well-designed Csound instruments ("orc" files). My apologies to harpsichord players, tuners and designers!
Let us begin by listening to the piece in equal temperament, the tuning that has become standard in the electronic age. It is often assumed, wrongly, that "well-tempered" is equivalent to "equal-tempered".
➡ Don't hesitate to click on the "Image" links to see circular graphical representations of scale intervals highlighting consonance and dissonance.
The following are traditional temperaments, each of which was designed at a particular time to meet the specific requirements of the musical repertoire en vogue (Asselin 2000 p. 139-180).
The previous example was Zarlino's temperament, not to be confused with the popular "natural scale" of Zarlino, an example of just intonation:
J.S. Bach's Well-Tempered Clavier (BWV 846–893) is a collection of preludes and fugues in all 24 major and minor keys, published in two books (1722 and 1742). To judge the validity of a tuning scheme it would be necessary to listen to all the pieces. Readers impatient to know more may be interested in a "computational" approach to the subject: read Bach well-tempered tonal analysis and listen to the results on the page The Well-tempered Clavier.
Fortunately, there are historical clues as to the optimal choice: Friedrich Wilhelm Marpurg received information from Bach's sons and pupils, and Johann Kirnberger, one of those pupils, designed tunings that he claimed represented his master's idea of "well-tempered".
On the page Tonal analysis of musical items we show that the analysis of tonal intervals tends to suggest the choice of Kirnberger III rather than Kirnberger II. However, the temperament devised by the French physicist Joseph Sauveur in 1701 also seemed to fit better in terms of melodic intervals — and indeed it sounds beautiful… This, in turn, can be challenged by the systematic matching of all the works in books I and II against the tuning schemes implemented on the Bol Processor — see the page Bach well-tempered tonal analysis.
François Couperin's Les Ombres Errantes (1730)
Again, my apologies to harpsichord players, tuners and manufacturers!
This piece is from François Couperin's Quatrième livre published in 1730 ➡ read the full score (Creative Commons licence CC0 1.0 Universal). We used it to illustrate the interpretation of mordents when importing MusicXML files.
First, listen to an (excellent) interpretation of this work by the harpsichord player Iddo Bar-Shaï (source: https://youtu.be/DCwkMSTFV_E).
Despite its artistic quality, this performance has some dissonant effects, which are partly masked by the abundance of melodic ornamentation: mordents, trills, etc. Such a departure from the theme of the 'Ombres errantes' cannot be attributed to either the composer or the performer. It is therefore legitimate to question the tuning of the instrument. To do this, we must focus our attention on tonality, even if the sound synthesis seems artificial to listeners whose attention is focused on temporality, ornamentation and sound quality.
As some of the following temperaments were invented (or documented?) after 1730, it is unlikely that the composer used them. Let's try them all anyway, and find the winner!
The best temperament for this piece might be Rameau en sib, which was devised by Couperin's contemporary Jean-Philippe Rameau for musical works with flats in the key signature (Asselin, 2000 p. 149) — such as the present one. See the Tonal analysis of musical items page for a description of a systematic (automated) analysis that confirms this choice.
We might end by listening to François Couperin's Le Petit Rien (Ordre 14e de clavecin in D major, 1722), which has two sharps in the key signature, suggesting the use of a Rameau en do temperament.
Chapter VIII of Pierre-Yves Asselin's book (2000 p. 139-180) contains examples of musical works that illustrate the relevance of specific temperaments. As the scores of many baroque and classical masterpieces are available in the digital format MusicXML, we hope to use Bol Processor's Importing MusicXML scores to transcode them and play these fragments with the suggested temperaments.
Despite the limitations of comparing temperaments on only two musical examples, the aim of this page is to illustrate the notion of "perfection" in sets of tonal intervals — and in music in general. Read the discussion: Just intonation: a general framework. If nothing else, we hope to convince the reader that "equal temperament" is not the "perfect" solution!
Musicians interested in continuing this research and related development can use the beta version of the Bol Processor BP3 to process musical works and create new tuning procedures. Follow the instructions on the Bol Processor ‘BP3’ and its PHP interface page to install BP3 and learn its basic operation. Download and install Csound from its distribution page.
References
Asselin, P.-Y. Musique et tempérament. Paris, 1985; republished Paris: Jobert, 2000. English translation forthcoming.
MusicXML is a very popular XML-based file format for representing western musical notation. It is designed for the exchange of scores between music notation software and other musical devices.
A MusicXML file contains all the information needed to represent a musical score in western music notation. It also contains data that can be processed by a sound device to "play" the score. The basic representation may sound mechanical, lacking control over volume, velocity, tempo, etc., which are not accurately represented on printed scores. As such, it can be used as a tool for checking the representation of a musical work, or as a teaching aid for deciphering scores.
In addition to its use as an exchange format between score editors, many MusicXML files are edited by groups of musicians — such as the MuseScore community — to embed intensity and tempo information. Sound examples are given below.
Importing scores from music archives into the Bol Processor makes it possible to use them (or fragments of them) in grammars that produce variations, for example Mozart's musical dice game. Thanks to the Csound interface, these musical works can even be played with specific tunings, as explained on the Microtonality page. The latter was an incentive to implement the MusicXML conversion, which makes it possible to compare works from the Baroque and Classical repertoires with the variety of meantone temperaments documented by historians.
The MusicXML to Bol Processor converter is fully functional on the PHP interface of BP3. Follow the instructions on the Bol Processor ‘BP3’ and its PHP interface page to install BP3 and learn its basic operation.
Bol Processor's data format
The Bol Processor has its own data format for representing musical items that are intended to produce sound via its MIDI or Csound interface. This format is displayed and stored as plain text.
The syntax of Bol Processor data is based on polymetric structures — read the tutorial on Polymetric structures. A few elementary examples will illustrate this concept:
{A4 B4 C5} is a sequence of three notes "A4", "B4", "C5" played at the metronomic tempo
{A4, C5, E5, A5} is an A minor chord
{la3, do4, mi4, la4} is the same chord in Italian/Spanish/French notation
{dha4, sa5, ga5, dha5} is the same chord in Indian notation
{C4 G4 E4, F3 C4} is a two-level structure calling for the juxtaposition and time alignment of sequences "C4 G4 E4" and "F3 C4", which yields a polyrhythmic structure that may be expanded to {C4_ G4_ E4_, F3__ C4__} in which ‘_’ are prolongations of the preceding notes.
{5, A4 Bb4 C5} is the sequence "A4 Bb4 C5", 3 notes played over 5 beats. Their durations are therefore multiplied by 5/3.
{7/16, F#3 G3} is sequence "F#3 G3" played over 7/16 beats. The duration of each note is multiplied by (7/16) / 2 = 7/32.
Unlike conventional western musical scores, polymetric structures can be recursively embedded with no limit to their complexity (other than the machine's). Some complex structures are discussed on the page Harm Visser's examples. All timing calculations are performed on integer ratios to achieve the best accuracy compatible with the system.
Why do we need to import scores?
The Bol Processor's data format is compact, computable and human-readable. However, its compactness makes it difficult to edit complex polymetric structures by hand. In practice, these are created by generative grammars…
A grammar that produces pieces of tonal music may require "building blocks" extracted from existing musical works. So far (in Bol Processor BP1 and BP2) it has been possible to map the computer keyboard to arbitrary characters representing drum strokes (see the initial project), or to capture notes using common music notation — three different conventions: Italian/Spanish/French, English and Indian. Sound-objects can also contain Csound scores and/or sequences of instructions imported from MIDI files.
Things get complex when dealing with polyphonic tonal music. Work is in progress on a method of capturing MIDI events in real time. Since musical material exists on scores in Western notation, and these scores have been digitised in interchange formats such as MusicXML, an import procedure that captures the full complexity of the score is a great asset. Mozart's Musical dice game is a good example of this need.
In practice you can pick up and rework fragments of the very large musical repertoire available in MusicXML format, or create your own building blocks with a score editor such as Werner Schweer's MuseScore — a free, open-source program that works on Linux, Mac and Windows. MuseScore recognises many input/output formats and can capture music via MIDI or Open Sound Control.
👉 Exporting music produced by the Bol Processor to MusicXML scores is not yet on our agenda. The reason for this is that the model for timing musical events in the Bol Processor (polymetric structures) is more sophisticated (and compact) than that used by score representations derived from Western frameworks. Charles Ames wrote (Exporting to External Formats, 2013):
Of the two formats, MIDI operates at (or below) the level of performance gestures while MusicXML operates note-by-note. Accelerations and ritards, ramped dynamics, pitch bend, and other continuous controls are musical features that MIDI handles well. MusicXML handles these same features clumsily or not at all. Such limitations make it difficult to consider MusicXML as a viable intermediary for MIDI, at least for the foreseeable future.
Importing and converting a MusicXML score
A few public domain MusicXML scores can be found in the "xmlsamples" folder of the bp3-ctests-main.zip sample set shared on GitHub. Most of them are fragments used to illustrate the format. We start with a very short fragment of "MozartPianoSonata.musicxml", which also has a graphical score:
First create a data file, for example "-da.musicXML". The default settings will suffice for this example, but a "-se.musicXML" file may be declared in the data window and you will be prompted to create it. Leave the default settings as they include the graphic display.
To import the MusicXML file, click the Choose File button at the top of the editing form, select the file and click IMPORT.
The machine displays the list of "parts" contained in the score. Each part can be assigned to an instrument, including human voices. This score contains a single part to be played on an Acoustic Grand Piano, which would be played on channel 1 of a MIDI device. This MIDI channel information appears in the Bol Processor score and can later be mapped to a Csound instrument.
Clicking on CONVERT THEM (or IT) is all that remains to be done!
This will create the following Bol Processor data:
// MusicXML file ‘MozartPianoSonata.musicxml’ converted
// Score part ‘P1’: instrument = Acoustic Grand Piano — MIDI channel 1
This may look difficult to read, but remember that a layman would not even be able to make sense of scores in Western music notation! Fortunately, there is now a PLAY button to listen to the piece. By default, it is also saved as a MIDI file, which can be interpreted by a MIDI soft synthesiser such as PianoTeq:
The same process can be invoked in the Csound environment. If Csound is installed and responsive, selecting the Csound output format will produce a Csound score immediately converted to an AIFF sound file displayed on the process window:
Understanding the conversion process
Let us compare the score in common Western notation with its conversion to Bol Processor data. This may be helpful in understanding the features and limitations of MusicXML files. Remember that this format is a complete description of a graphical representation of the musical work. It is up to the musician to add implicit information necessary for a correct (and artistic) rendering of the piece…
Scores of classical works are divided into bars (measures) marked by vertical lines. This score contains 5 measures of equal duration. The MusicXML file contains data indicating that the duration of each measure is 2 beats, i.e. 2 seconds if the metronome beats at 60 beats per minute. However, the _tempo(2) instruction doubles the speed, resulting in measures that last 1 second. The third measure contains a chord {2, F#5, A5, D6} of half notes (minims) lasting 2 beats.
The Bol Processor score also shows the five measures, each of which is interpreted as a polymetric structure. At the beginning of each measure, a MIDI channel instruction has been automatically inserted to indicate which part it belongs to.
Let us read the first measure and compare it with its conversion on the score:
The ‘2’ (green colour) is the total duration of the polymetric expression (i.e. the measure). The first two lines are the upper staff (with the G clef in the picture) while the third line (with the F clef in the picture) is the lower staff. At the top of the upper staff is a half note C#6 interpreted as {2, C#6}. A comma (in red colour) indicates a new field of the polymetric structure that must be superimposed on the first field. It contains a chord {C#5, E5, A5} of quarter notes (crotchets) lasting 1 beat, followed by a rest of 1 beat notated "-".
➡ In the printed score there is an arpeggio on the chord which is ignored for the moment to make the explanations easier. Arpeggios will be considered below.
To complete the field, we need a rest of 1 beat, which is not indicated in the graphical score, although the gap is mentioned in the MusicXML file. In Bol Processor notation, rests can be written as '-' or as integer numbers/ratios. For example, a rest of 3 beats could be notated “---” or {3, -} or {3}, while a rest of 3/4 beats should be notated {3/4, -} or {3/4}.
The lower staff contains a sequence that is difficult for a machine to process: three grace notes "A2 C#3 E3". Grace notes have no explicit duration in MusicXML files, so we follow the practice of giving this sequence a duration half that of the following main note, here the first occurrence of "A3", which is declared as an eighth note of 1/2 beat. Consequently, the stream of grace notes has a total duration of 1/4 beat and is notated {1/4, A2 C#3 E3}. The remaining duration of the first "A3" is reduced accordingly to 1/2 - 1/4 = 1/4 beat, hence {1/4, A3}. The following 3 occurrences of A3 have a total duration of 3/2 beats, hence {3/2, A3 A3 A3}.
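A sketch of this rule with the values of the present measure (the variable names are ours; the actual conversion is carried out by the PHP importer "_musicxml.php"):

#include <stdio.h>

/* Illustrative sketch of the grace-note rule described above. */
int main(void) {
    double main_beats  = 1. / 2.;                   /* the following main note: an eighth note */
    int    n_grace     = 3;                         /* "A2 C#3 E3" */
    double grace_total = main_beats / 2.;           /* 1/4 beat for the whole grace-note stream */
    double each_grace  = grace_total / n_grace;     /* 1/12 beat per grace note */
    double main_left   = main_beats - grace_total;  /* 1/4 beat left for the first "A3" */
    printf("%g %g %g\n", grace_total, each_grace, main_left);  /* 0.25 0.0833333 0.25 */
    return 0;
}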
The structure of this first measure is made clear in the graphic display. Note that, unlike the piano roll display, this object display does not position sound-objects vertically according to pitch values:
The rest of the score can be deciphered and explained in the same way. Bol Processor notation is based on very simple (and multicultural) principles, but it is difficult to create by hand… So it is best created by grammars or extracted from MusicXML scores.
Note that it is easy to change the tempo of this piece. For example, to slow it down, insert the instruction _tempo(1/2) at the beginning:
Exploding scores
Clicking the EXPLODE button segments the musical work into individual measures, making it easier to analyse the conversion or reuse fragments:
Each measure can be played (or expanded) separately. Segments are labelled [item 1], [item 2] etc. for easy identification.
The IMPLODE button reconstructs the original work from its fragments.
A more complex example
Let us try Dichterliebe (op. 48), Im wunderschönen Monat Mai by Robert Schumann. The MusicXML score is in the "xmlsamples" folder distributed in the sample set "bp3-ctests-main.zip", which is available on GitHub, together with its graphic score (read the PDF file).
The Bol Processor score is more complex:
"Im wunderschönen Monat Mai" (Robert Schumann)
The correct rendering of this piece on the Bol Processor is obtained with its (default) quantization set to 10 milliseconds. Quantization is a process of merging the timing of events when they are less than a certain value apart: a human would not notice a 10 millisecond timing error, but merging "time streaks" is an efficient way of saving memory when building a phase diagram of events. In this particular piece, setting the quantization to 30 ms would already produce a noticeable error in synchronisation. This gives an idea of the accuracy expected from human performers, which their trained auditory and motor systems can easily handle.
Note that this MusicXML score has 2 parts, one for the voice and the other for the piano. These are sent on MIDI channels 1 and 2 respectively, which should in turn activate different Csound instruments. If several instruments are not available, the parts can be heard separately by importing selected parts of the score.
As the first measure is incomplete (1/4 beat), the piano roll is not aligned with the background streaks (numbered 0, 1, 2…):
This problem can be solved by inserting a silence of 3/4 beats duration before the score:
3/4 {_chan(1){1/4,{{1/4,-}}},_chan(2){1/4,{{1/4,C#5},{1/4,-}}}} … etc.
which yields:
The musical work can be interpreted at different speeds after inserting a "_tempo()" instruction in the beginning. For example, given that the metronome is set to 60 beats per minute, inserting _tempo(3/4) would set the tempo to 60 * 3 / 4 = 45 beats per minute. To produce a sound rendering of this particular piece we inserted a performance control _legato(25), which extends the duration of all notes by 25% without modifying the score. We also added some reverberation on the PianoTeq vibrophone. The resulting piano roll was:
Time-reversed Bach?
The _retro tool also produces bizarre transformations, most of which would sound "unmusical". Yet some of them are quite interesting. Consider, for example, Bach's Goldberg Variation No. 5 played on Bol Processor + Csound with (Bach's presumably favourite) Kirnberger II temperament — see the page Comparing temperaments:
Listen to the same piece after applying the _retro tool:
In Bol Processor scores created by importing MusicXML files, many (musically meaningful) modifications can be made, such as inserting variables and sending the data to a grammar that will produce completely different pieces. To achieve this, the grammar — for example "-gr.myTransformations" — must be declared above the data window.
The claim for "well-tempered tuning" for the interpretation of Baroque music can be further assessed by comparing the following versions of J.-S. Bach's Brandenburg Concerto Nr 2 in F major (BWV1047) part 3:
Complex structures
At the time of writing, BP3 was able to import and convert all the MusicXML files in the "xmlsamples" folder. However, it may not be possible to play or expand pieces classified as "too complex" due to overflow. Since it is possible to isolate measures after clicking the EXPLODE button, a PLAY safe button has been created to pick up chunks and play them in a reconstructed sequence. The only drawback is that the graphics are disabled, but this is less important given the complexity of the work.
For example, listen to Lee Actor's Prelude to a Tragedy (2003), a musical work consisting of 22 parts assigned to various instruments via the 16 MIDI channels — read the graphic score.
Instrument mapping is incorrect, with most channels being played as piano instead of flute, oboe, English horn, trumpet, viola, etc. Parts mapped to channels 10 and 16 are fed with drum sounds. All these instruments were synthesised by the Javascript MIDIjs player installed on the BP3's interface. A better solution would be to feed the "prelude-to-a-tragedy.mid" MIDI file into a synthesiser capable of imitating the full set of instruments, such as MuseScore.
Score part ‘P1’: instrument = Picc. (V2k) — MIDI channel 1
Score part ‘P2’: instrument = Fl. (V2k) — MIDI channel 2
Score part ‘P3’: instrument = Ob. (V2k) — MIDI channel 3
Score part ‘P4’: instrument = E.H. (V2k) — MIDI channel 4
Score part ‘P5’: instrument = Clar. (V2k) — MIDI channel 5
Score part ‘P6’: instrument = B. Cl. (V2k) — MIDI channel 5
Score part ‘P7’: instrument = Bsn. (V2k) — MIDI channel 7
Score part ‘P8’: instrument = Hn. (V2k) — MIDI channel 8
Score part ‘P9’: instrument = Hn. 2 (V2k) — MIDI channel 8
Score part ‘P10’: instrument = Tpt. (V2k) — MIDI channel 9
Score part ‘P11’: instrument = Trb. (V2k) — MIDI channel 11
Score part ‘P12’: instrument = B Trb. (V2k) — MIDI channel 11
Score part ‘P13’: instrument = Tuba (V2k) — MIDI channel 12
Score part ‘P14’: instrument = Timp. (V2k) — MIDI channel 13
Score part ‘P15’: instrument = Splash Cymbal — MIDI channel 10
Score part ‘P16’: instrument = Bass Drum — MIDI channel 10
Score part ‘P17’: instrument = Harp (V2k) — MIDI channel 6
Score part ‘P18’: instrument = Vln. (V2k) — MIDI channel 14
Score part ‘P19’: instrument = Vln. 2 (V2k) — MIDI channel 15
Score part ‘P20’: instrument = Va. (V2k) — MIDI channel 16
Score part ‘P21’: instrument = Vc. (V2k) — MIDI channel 16
Score part ‘P22’: instrument = Cb. (V2k) — MIDI channel 16
Remember, however, that these are raw interpretations of musical scores based on a few quantified parameters. For a better rendering, you should add performance parameters to the Bol Processor score to control volume, panning, etc. on a MIDI device, or an unlimited number of parameters with Csound.
Stylistic limitations are evident in transcriptions of jazz music, as opposed to musical works originally composed in writing. A transcription of improvised material is only a fixed image of one of its myriad variations. As a result, its score may convey a pedagogical rather than an artistic vision of the piece. The following is a transcription of Oscar Peterson's Watch What Happens from a MusicXML score:
The Bol Processor score of this transcription is as follows. The metronome has been increased to 136 beats per minute — notated _tempo(136/60) — to match an estimated performance speed. This is easy with a machine! Below is an excerpt from the piano roll display and the full Bol Processor score:
In a very different style, Tchaikovsky's famous June Barcarole in G minor (1875):
Another complex example is Beethoven's Fugue in B flat major (opus 133). As we could not obtain the piano four hands transcription, we used the string quartet version.
Again, the Javascript MIDIjs player could not synthesise the two violins, viola and cello tracks (MIDI channels 1 to 4). So the MIDI file was sent to PianoTeq to get a fair piano rendering of the mixed channels.
Played as a single chunk (on MacOS), this piece takes no less than 372 seconds to calculate, whereas PLAY safe delivers the same in 33 seconds. In addition, single chunk playback requires 30 ms quantization on a machine with 16 GB of memory.
Another emblematic example of complex structure is La Campanella, originally composed by Paganini for the violin and transcribed for piano by Franz Liszt:
The Bol Processor score of this piece (a single polymetric expression) consists of only 37268 bytes. Dynamics are interpreted as velocities:
In this Bol Processor score, the pedal start/end commands are translated to _switchon(64,1) and _switchoff(64,1), and a 20 millisecond randomisation of dates is applied as per the instruction _rndtime(20) — see Pedals and Randomisation below.
According to Wikipedia: "La Campanella" (Italian for "The little bell") is the nickname given to the third of Franz Liszt's six Grandes études de Paganini, S. 141 (1851). Its melody comes from the final movement of Niccolò Paganini's Violin Concerto No. 2 in B minor, where the melody is metaphorically amplified by a 'little handbell'. After listening to Liszt's piano version interpreted by the Bol Processor — and its human performance by Romuald Greiss on the Wikipedia page — I recommend watching the outstanding violin performance of Paganini's original work by maestro Salvatore Accardo in 2008 (video).
In measure #96 (image above), the locations of vertical blue lines are irrelevant because of the varying tempi listed below (green arrows). Note that these are the metronome values given for the performance (sound tempo tags), which are slightly different from those given in the printed score (per-minute tags). However, if the MusicXML score is well designed, there is no significant difference between importing only performance metronome values and including printed score values; this point is discussed below, see Tempo interpretation: prescriptive versus descriptive.
Ahead with grammars
Before we look in more detail at material imported from MusicXML files, let us consider the issue of using fragments of this material to create music in the Bol Processor task environment.
After importing/converting a MusicXML score, clicking EXPLODE will split it into separate items, one per measure, according to the MusicXML structure:
The data has been chunked into units item 1, item 2 etc. Note that it is possible to play each measure separately and display its sound-objects or its piano roll.
The CREATE GRAMMAR button will now start converting this data into a grammar:
The new grammar is displayed in a pop-up window and can be copied to a Grammar page:
This is a basic transformation. Playing this grammar would simply reconstruct the musical work as it was imported. However, as each measure is now labelled as a variable M001, M002 etc., these variables can be used as the "building bricks" of a new compositional work.
Performance controls
MusicXML files contain descriptive information for use by mechanical players that is not displayed on the graphic score. For example, where the score says "Allegretto" the file contains a quantitative instruction such as "tempo = 132".
Another notable case is the representation of trills (see image above). In some (but not all) MusicXML scores, they appear explicitly as sequences of fast notes. Consequently, they are rendered correctly by the interpreter of the MusicXML file. In other cases they have to be constructed — see Ornamentation below.
In the same measure #10, a fermata appears on top of the crotchet rest. Its duration is not specified as it is at the discretion of the performer or conductor, but the Bol Processor follows a common practice of making it twice the duration of the marked rest.
MusicXML files contain information about sound dynamics which the Bol Processor can interpret as either _volume(x) or _vel(x) commands. The latter (velocity) is appropriate for instruments such as piano, harpsichord etc.
In the absence of a numerical value, a graphical representation of the dynamics (ffff to pppp) will be used. This value is estimated according to the MakeMusic Finale dynamics convention.
Some prescriptive information that appears on the graphical score is not (currently) interpreted. The first reason is that it would be difficult to translate such performance controls to the Bol Processor: for example, stepwise/continuous volume control, acceleration, etc. The second reason is that the aim of this exercise is not to produce the "best interpretation" of a score. Score editing programs can do that better! Our only intention is to capture musical fragments and rework them with grammars or scripts.
It would be difficult to reuse a musical fragment packed with strings of performance controls relevant to its particular context in the musical work. To this end, the user is offered options to ignore volume, tempo and channel assignments in any imported MusicXML score. These can later be deleted or remapped with a single click (see below).
Remapping channels and instruments
MusicXML digital scores contain specifications for individual parts/instruments. These parts are visible in the Bol Processor score after conversion and can be mapped to the sound output device(s) — read below.
Each part can also be assigned a MIDI channel. These channels can be used to match instruments available on a MIDI synthesiser, and _ins() instructions are needed to call instruments available in the Csound orchestra.
The remapping of MIDI channels is easily done at the bottom of the Data or Grammar pages:
The default note convention when importing MusicXML scores is English ("C", "D", "E"…). The same form allows the notes to be converted with a single click to the Italian/Spanish/French ("do", "re", "mi"…) or Indian ("sa", "re", "ga"…) conventions.
Clicking on the MANAGE _chan() AND _ins() button displays a form listing all occurrences of MIDI channels and Csound instruments found in the score. Here, for example, we want to keep the MIDI channels and at the same time insert _ins() commands to call Csound instruments described in a "-cs" Csound resource file:
Error corrections
MuseScore reported an error in measure 142 of the MusicXML score for Beethoven's Fugue: the total timing of the notes in part 1 (the uppermost score) is 3754 units, which is 3.66 beats (instead of 4) based on a division of 1024 units per quarter note. MuseScore has corrected this error by stretching this sequence to 4 beats with an erroneous silence marker at the end.
The Bol Processor behaves differently. Its notion of "measure" as a polymetric structure is not based on counting beats. It takes the top line of the structure as the timing reference, so "measures" can be of variable duration. Its interpretation of this measure is as follows: the ratio 3755/1024 denotes exactly the (presumably incorrect) duration of this measure according to the MusicXML score:
The graphic rendering of this measure shows that the four sequences are perfectly synchronised.
To correct the error, simply replace "3755/1024" with "4".
At the time of writing, the Bol Processor has been able to import and play most MusicXML scores correctly. Errors can still occur with very complicated files, particularly due to inconsistencies (or rounding errors) in the MusicXML code. For example, the measure numbering in Liszt's 14th Hungarian Rhapsody looks confusing (due to implicit measures) and some values are incorrect. These details are detected and the errors are corrected when the file is converted.
Tempo interpretation: prescriptive versus descriptive
MusicXML scores contain tempo markings of two kinds: (1) metronome prescriptive markings available on conventional printed scores and (2) their descriptive modifications for proper mechanical interpretation.
In the prescriptive setting, tempo controls (sound tempo tags) within measures are discarded; only per-minute tags are interpreted. This results in a "robotic" rendering: accelerations and decelerations lack the passion and subtlety of a human interpretation. However, since the transcription reflects the plain printed score, its fragments are more suitable for reuse. Assuming that this is exactly the version published by the composer (which is indeed debatable), we can take the following interpretation as reflecting the music that Liszt "had in mind", regardless of the performer's interpretation.
In a detailed interpretation, all tempo indications are converted, including the "non-printing" ones (sound tempo tags), which we call descriptive. Global rendering is more pleasant when these tags make musical sense. For example, this is Liszt's 14th Hungarian Rhapsody with all tempo markings. Note that the total duration has increased by 15 seconds:
The options of relying on exclusively prescriptive, or exclusively descriptive, tempo markings should be considered when there is an inconsistency between the printed score (per-minute tags) and the performance details (sound tempo tags). The former are intended for use by a human performer, whereas the latter are intended for use by machines…
Multiple versions of the same piece of music can be found in shared repositories. Below is an interpretation of the same 14th Hungarian Rhapsody based on the MusicXML score customised by OguzSirin:
The entire work is contained in a single polymetric expression (see code below) which must be "expanded" to fill a "phase diagram" of sound-objects. Its full expansion would produce no less than 7 × 10²³ symbols… more than the estimated 400 billion (4 × 10¹¹) stars in the Milky Way! Fortunately, polymetric representations can be compressed into a comprehensive format (see code below) and processed to produce the expected sequence of sound objects. The compression rate for this item is greater than 5 × 10²², so a Bol Processor score can be obtained without any loss of data.
Despite the limitations (and potential errors), the detailed virtuosity engraved in Liszt's score supports Alfred Brendel's idea of interpreting a musical work:
If I belong to a tradition, it is a tradition that makes the masterpiece tell the performer what to do, and not the performer telling the piece what it should be like, or the composer what he ought to have composed.
Focus on tempo and fermatas
This section is intended for readers familiar with standard western music notation. We illustrate the interpretation of (non-printed) metronome markings within measures and fermatas (unmeasured prolongations) using a typical example: measure #6 of Liszt's 14th Hungarian Rhapsody. The source material is the MusicXML code of this measure, in which tempo annotations are highlighted in red and fermatas in green.
This measure is displayed in the printed score as follows. Invisible tempo markings have been added in red at the exact locations specified by the MusicXML score. Three fermatas are printed above/below the note or silence to which they apply.
The symbolic duration of this measure is 6 beats. Due to rounding errors, the Bol Processor displays it as 1441/240 = 6.004 beats. This tiny discrepancy is caused by rounding off the durations of the 14 notes Ab2 C3 F3 Ab3 C4 F4 Ab4 C5 F5 Ab5 C6 F6 Ab6 C7, a sequence that should last exactly 3/8 of a beat. Each beat is divided into 480 parts — the division given at the beginning of the score. So the sequence should last 480 x 3/8 = 180 units, and each note should last 180/14 = 12.85 units. Since durations are represented as integers in a MusicXML score, this value has been rounded to 13. This explains the small difference visible in the Bol Processor score, but unnoticeable to the human ear.
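The same rounding can be verified with a few lines of Python (a sketch, not the converter's code):

from fractions import Fraction

DIVISIONS = 480                          # division given at the beginning of the score
notes = 14                               # Ab2 C3 F3 ... C7
target = Fraction(3, 8)                  # intended duration of the run, in beats

exact_units = target * DIVISIONS         # 180 units
per_note = round(exact_units / notes)    # 180/14 = 12.857... rounded to 13
excess = per_note * notes - exact_units  # 2 extra units
print(Fraction(6) + excess / DIVISIONS)  # 1441/240, i.e. 6.004 beats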
Below is the complete Bol Processor transcription of this measure. First, the graphic representation of sound-objects labeled as simple notes:
Note that all sound-objects in the first 2.5 seconds are duplicated. The MusicXML score is redundant, fortunately with no inconsistencies between duplicate occurrences, which explains why they are not visible in the printed score.
The same polymetric expression is available in piano roll format:
We will further explain how this transcription has been obtained.
On the Data window the 6th measure is displayed as a polymetric structure: {duration, field 1, field 2, field 3, field 4}. After importing the MusicXML score, click the EXPLODE button on the right side to display measures as separate items. Since measure numbering in this score starts with 0, measure #6 will be displayed as item #7.
To facilitate reading, each field is on a separate line:
Integers and integer ratios represent rests. For example, 667/480 in the third field is a rest of 667/480 = 1.389 beats. Dates and durations are treated by the Bol Processor as integer ratios, thereby allowing perfect time accuracy. The ratio 1/2 in the first field can be interpreted as a 1/2 beat rest or the symbolic duration of the expression {1/2,F7}.
Redundancy in the MusicXML score is visible as expressions such as {Ab2,C3}{2,F3} and {F1,C2}{2,F2} appear in two fields (at the same date and speed).
Tempo markings in red reflect MusicXML score annotations. Each field starts with a metronome of 80 bpm (beats per minute). The _tempo(13/20) instruction before the polymetric structure sets the metronome to 60 x 13/20 = 39 bpm. At the beginning of each field it is multiplied by 80/39, so 60 x 13/20 x 80/39 = 80 bpm, as expected. The following instructions produce 16 bpm and 52 bpm in their respective places.
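The metronome values can be checked by composing the _tempo() ratios, which — as the figures above suggest — multiply with the enclosing one. A minimal Python check:

from fractions import Fraction

BASE = 60    # reference metronome implied by the calculations above

def metronome(*ratios):
    # Effective bpm after a chain of _tempo() ratios
    value = Fraction(BASE)
    for r in ratios:
        value *= Fraction(r)
    return value

print(metronome(Fraction(13, 20)))                    # 39 bpm before the measure
print(metronome(Fraction(13, 20), Fraction(80, 39)))  # 80 bpm at the start of each field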
This interpretation of a MusicXML score as a polymetric structure is not easy to work out with respect to metronome annotations. The main problem is that these annotations only appear on the top line of the graphic score (i.e. the first field of the structure) and should be inserted at the same date in other fields. For example, _tempo(4/3) is on the 4.5th beat, before {1/2,F7} in the first field, and therefore before {1/2,Ab6} in the second field. This is easy to calculate.
The rest 481/240 (about 2 beats), which appears in green on the Bol Processor score, has been added after the second field to calibrate its duration to that of the measure. This calibration is not mandatory on printed scores or in MusicXML files: where no note is shown, musicians understand that there is an implicit rest, which they insert spontaneously to anticipate the synchronisation of upcoming notes in the next measure. However, a machine should be instructed to do so.
However, _tempo(16/39), which precedes the Ab2 C3 F3… sequence in the first field, falls within a 1/2 beat rest in the second field. This pause is actually coded as a forward instruction, as it does not appear on the printed score. To synchronise tempo changes, the _tempo(16/39) instruction must be placed one quarter of the way through this rest. The result is:
1/8 _tempo(16/39) 3/8
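The splitting step can be sketched as follows in Python (the helper name is ours, not the converter's):

from fractions import Fraction

def split_rest(rest, offset, statement):
    # Break a rest so that a performance control falls `offset` beats inside it
    rest, offset = Fraction(rest), Fraction(offset)
    assert 0 < offset < rest, "the statement must fall strictly inside the rest"
    return f"{offset} {statement} {rest - offset}"

# The 1/2 beat rest of the second field, with _tempo(16/39) placed 1/8 beat into it:
print(split_rest(Fraction(1, 2), Fraction(1, 8), "_tempo(16/39)"))  # 1/8 _tempo(16/39) 3/8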
Similarly, a forward of 2.5 beats in the fourth field must be broken in order to insert the _tempo(8/27) and _tempo(26/27) statements, which would yield the following:
1/8 _tempo(16/39) 91/240 _tempo(4/3) 480/240
However, the calibration of the duration of this fourth field requires an additional pause of 1/2 beat, suggesting that 480/240 be replaced by 600/240. An additional 1/240 gap is required to compensate for rounding errors. This gives:
Another problem with measure 6 of Liszt's 14th Hungarian Rhapsody is the appearance of three fermatas (see printed score). Like metronome markings, fermatas are not repeated on every line of the score, as they apply to all parts (voices) simultaneously. The durations must therefore be adjusted accordingly in order to maintain synchronisation in a machine performance.
The first fermata (coloured green in the MusicXML score) is on note "F3" of the first field. Its duration is therefore 2 beats instead of 1. This extension is propagated to subsequent fields at the same date, namely "F3", "F2", "F2".
The second fermata is placed on an eighth (quaver) rest that appears in the printed score, and its duration is extended by 1/2 beat. This ends up extending the rests that occur at the same date in subsequent fields by 1/2 beat as well.
To facilitate similar analyses, an option is provided to track transformations when importing/converting MusicXML scores. The part relevant to measure #6 (item #7) reads as follows:
• Measure #6 part [P1] starts with current period = 0.75s, current tempo = 4/3, default tempo = 4/3 (metronome = 80)
mm Measure #6 part P1 field #1 metronome set to 80 at date 0 beat(s)
f+ Measure #6 part P1 field #1 note ‘F3’ at date 1 increased by 1 beat(s) as per fermata #1
mm Measure #6 part P1 field #1 metronome set to 16 at date 3 beat(s)
mm Measure #6 part P1 field #1 metronome set to 16 at date 25/8 beat(s)
mm Measure #6 part P1 field #1 metronome set to 52 at date 1513/480 beat(s)
mm Measure #6 part P1 field #1 metronome set to 52 at date 841/240 beat(s)
f+ Measure #6 part P1 field #1 note ‘-’ at date 961/240 increased by 1/2 beat(s) as per fermata #2
+ measure #6 field #1 : physical time = 7.98s
• Rounding part P1 measure 6 field #2, neglecting ‘backup’ rest = 1/240
mm Measure #6 part P1 field #2 metronome set to 80 at date 0 beat(s)
f+ Measure #6 part P1 field #2 note ‘F3’ at date 1 increased by 1 beat(s) to insert fermata #1
mm Measure #6 part P1 field #2 metronome set to 80 at date 1 beat(s)
mm Measure #6 part P1 field #2 metronome set to 16 during rest starting date 3 beat(s)
mm Measure #6 part P1 field #2 metronome set to 52 at date 7/2 beat(s)
+ measure #6 field #2 : physical time = 5.08s
➡ Error in measure 6 part P1 field #3, ‘backup’ rest = -1/2 beat(s) (fixed)
mm Measure #6 part P1 field #3 metronome set to 80 at date 0 beat(s)
f+ Measure #6 part P1 field #3 note ‘F2’ at date 1 increased by 1 beat(s) to insert fermata #1
mm Measure #6 part P1 field #3 metronome set to 80 at date 1 beat(s)
mm Measure #6 part P1 field #3 metronome set to 16 at date 25/8 beat(s)
mm Measure #6 part P1 field #3 metronome set to 16 at date 1513/480 beat(s)
f+ Measure #6 part P1 field #3 silence at date 961/240 increased by 1/2 to insert fermata #2
mm Measure #6 part P1 field #3 metronome set to 52 during rest starting date 841/240 beat(s)
+ measure #6 field #3 : physical time = 9.28s
• Rounding part P1 measure 6 field #4, neglecting ‘backup’ rest = 1/240
mm Measure #6 part P1 field #4 metronome set to 80 at date 0 beat(s)
f+ Measure #6 part P1 field #4 note ‘F2’ at date 1 increased by 1 beat(s) to insert fermata #1
f+ Measure #6 part P1 field #4 silence at date 961/240 increased by 1/2 to insert fermata #2
mm Measure #6 part P1 field #4 metronome set to 16 during rest starting date 3 beat(s)
mm Measure #6 part P1 field #4 metronome set to 52 during rest starting date 3 beat(s)
+rest Measure #6 part P1 field #2 added rest = 481/240 beat(s)
+rest Measure #6 part P1 field #4 added rest = 1/240 beat(s)
+ measure #6 field #4 : physical time = 7.77s
➡ Measure #6 part [P1] physical time = 9.28s, average metronome = 49, final metronome = 39
Changing tempo
There are several methods for changing the tempo of imported MusicXML scores. After the conversion it is obviously possible to edit the _tempo(x) statements individually. Clicking on the EXPLODE button allows each measure to be modified and checked visually/audibly.
Inserting a _tempo(x) instruction in front of the musical work changes the average metronome value. The effect is identical to changing the metronome in the settings file (which we did for Oscar Peterson's work). For example, the following Bol Processor score would play Liszt's 14th Hungarian Rhapsody at half speed:
// MusicXML file ‘Hungarian_Rhapsody_No._14.musicxml’ converted
// Score part ‘P1’: instrument = Piano — MIDI channel 1
-se.Hungarian_Rhapsody
Despite the Bol Processor's systematic treatment of symbolic time as integer ratios, a floating-point argument x is acceptable in a _tempo(x) instruction. For example, _tempo(1.68) is automatically converted to _tempo(168/100) and simplified to _tempo(42/25).
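Python's Fraction class illustrates the same conversion:

from fractions import Fraction

print(Fraction(168, 100))   # 42/25 — _tempo(1.68) read as 168/100, then simplified
print(Fraction("1.68"))     # 42/25 — parsing the decimal string gives the same reduced ratio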
Advanced tempo adjustment is possible when importing the MusicXML score.
The current average, minimum and maximum metronome values are displayed. The yellow boxes contain the target values, set by default to an average of 60 bpm, a minimum of 10 bpm and a maximum of 180 bpm.
All metronome values are modified using a quadratic regression that maps current values to target ones. A linear regression replaces the polynomial form if the latter is not monotonic. For this example (the 14th Hungarian Rhapsody), the new average would be 63 bpm instead of the expected 60 bpm; the discrepancy depends on the statistical distribution of the values.
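One plausible reading of this "quadratic regression" is a parabola fitted through the three control points (current minimum, average and maximum mapped to their targets), with a linear fallback when the parabola is not monotonic over the range. The Python sketch below follows that reading; it is illustrative only, not the Bol Processor's actual code:

import numpy as np

def remap(values, new_min, new_avg, new_max):
    # Map values so that their min/average/max move towards the targets
    v = np.asarray(values, dtype=float)
    x = np.array([v.min(), v.mean(), v.max()])
    y = np.array([new_min, new_avg, new_max])
    a, b, c = np.polyfit(x, y, 2)            # quadratic through the three control points
    slope_at_ends = 2 * a * x[[0, 2]] + b
    if np.any(slope_at_ends <= 0):           # not monotonic: fall back to a linear map
        b, c = np.polyfit(x[[0, 2]], y[[0, 2]], 1)
        return b * v + c
    return a * v**2 + b * v + c

bpm = [39, 52, 80, 120, 152, 180]
print(remap(bpm, 10, 60, 180).round(1))      # the new average is close to, but not exactly, 60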
Changing volume or velocity
When converting a MusicXML score, there is an option to interpret sound dynamics as volume or velocity controls. The latter may be preferable for sound synthesis that imitates plucked or struck string instruments.
Whatever you choose, you can later adjust the volume and velocity controls for the entire musical work. For example, click Modify _vel() at the bottom of the Data page.
This will display a form showing the current average, minimum and maximum values of _vel(x) statements in the score. Enter the desired values in the yellow cells and click APPLY.
The mapping uses a quadratic regression (if monotonic), as explained in relation to tempo (see above). For the same reason, the averages obtained are generally not exactly the desired ones.
Ornamentation
Western musical scores can contain many types of ornamentation with names and styles of interpretation depending on the historical period. The MusicXML format includes some of these, which can produce sound effects similar to those produced by human performers.
The following ornaments are transcribed into Bol Processor scores when MusicXML files are imported. The accuracy of these interpretations is not critical, since the main objective is to import musical fragments that will be transformed and reused in different contexts. Nevertheless, it is fun to design a good interpretation… Before importing a MusicXML file, options are given to discard any of the following types of ornaments; each option is only displayed if at least one occurrence is found in the file.
Mordents
There is a wide variety of mordents with meanings that have changed over the years. The interpretation in Bol Processor is close to the musical practice of the 19th century, yet acceptable for the interpretation of Baroque works.
The MusicXML tags for mordents are mordent and inverted-mordent which correspond to the more comprehensible terms of lower mordent and upper mordent respectively. We will illustrate their use in François Couperin's work Les Ombres Errantes (1730), using a MusicXML file arranged by Vinckenbosch from the MuseScore community. Let us look at and listen to the first three measures:
There are eight mordents in the first three measures of Les Ombres Errantes. Those numbered 1, 2, 3, 7 and 8 are of the upper type. Mordents #4, #5 and #6 are of the lower type. In addition, the marks of all the upper mordents are longer than the standard ones, which makes them long mordents. Their MusicXML tag is therefore <inverted-mordent long="yes"/>.
Each mordent is interpreted as a series of notes on a rhythmic pattern, which may be short or long. For example, note B4 (the first long upper mordent) is interpreted as
{1/4,C5 B4 C5}{3/4,B4}
which indicates that it has been embellished by a short step down from the next higher note C5. The fourth mordent is of the short lower type on note C5, which yields:
{1/8,C5 B4}{7/8,C5}
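The two shapes shown above can be generated with a few lines of Python (only the combinations illustrated here are covered; the function name is ours):

def mordent(main, aux, long=False, upper=True):
    # Build the polymetric pattern for the two mordent shapes illustrated above
    if upper and long:
        return f"{{1/4,{aux} {main} {aux}}}{{3/4,{main}}}"
    if not upper and not long:
        return f"{{1/8,{main} {aux}}}{{7/8,{main}}}"
    raise NotImplementedError("shape not illustrated in the text")

print(mordent("B4", "C5", long=True, upper=True))     # {1/4,C5 B4 C5}{3/4,B4}
print(mordent("C5", "B4", long=False, upper=False))   # {1/8,C5 B4}{7/8,C5}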
The full list of mordents in these three measures is:
{1/4,C5 B4 C5}{3/4,B4}
{1/4,Eb5 D5 Eb5}{3/4,D5}
{1/4,C5 B4 C5}{3/4,B4}
{1/8,C5 B4}{7/8,C5}
{1/8,Eb4 D4}{7/8,Eb4}
{1/8,Eb4 D4}{7/8,Eb4}
{1/4,C4 B3 C4}{3/4,B3}
{1/4,C4 B3 C4}{3/4,B3}
While creating rhythmic patterns of mordents is fairly straightforward, a difficulty lies in choosing the note above or below the final note, at a tonal distance of 1 or 2 semitones. The default choice is a note that belongs to the diatonic scale, which can be modified by accidentals occurring earlier in the measure. An option to interpret mordents, turns and trills as "chromatic" is offered — see below.
With two flats in the key signature, i.e. Bb and Eb, the global diatonic scale of this piece reads as the B flat major (or G minor) scale = "C D Eb F G A Bb". However, in the second measure, Bb is altered to B by a natural sign. Therefore, in the following mordent #4, the note B4 must be used instead of Bb4 as the lower note leading to C5.
Mordents sound acceptable in this interpretation, as can be heard in the full recording:
Turns
A turn is similar to a mordent, except that it picks up both the next higher and lower notes in the scale. If it is linked to a mordent, it can borrow its attributes (see above): upper/lower and long/short. If the turn is not associated with a mordent, it will use the long + upper attributes. This is a design option that can be revised or made optional.
A specific attribute of turns is beats, similar to trill-beats (see below). These are defined (read the source) as "The number of distinct notes during playback, counting the starting note but not the two-note turn. It is 4 if not specified."
Examples of turns can be found in François Couperin's Les Ombres Errantes. They are all four beats long and embedded in long/upper mordents, for example, the note Ab3 in measure #12:
{1, Ab3 {2, A3 Ab3 G3} Ab3}
The complete measure #12 (with the turn highlighted) is:
Note that the result would be unchanged if the turns were interpreted as "chromatic": this option picks up the next higher and lower notes in the chromatic scale underlying the tuning of the piece — see the image of the Rameau en sib meantone temperament.
Turns not associated with mordents are found in François Couperin’s Le Petit Rien:
Trills
Trills are marked with the trill-mark tag. There is an option to ignore this if the detailed note sequences are already encoded in the MusicXML file. (This is not easy to guess!) Let us see the construction of trills when the option is not checked.
The treatment of trills is similar to that of mordents (see above). There are many ways to interpret a trill, depending on the style and personal preference of the performer. By default, trills in the Bol Processor start with the reference note and end with the altered note, which is one step higher in the scale. However, if the starting note has a tie, the order of the notes is reversed, so that the stream ends with the tied note.
Among the available options of the trill-mark tag, we pick up trill-beats (read the documentation), whose value is "3" by default. Its definition is a little obscure: "The number of distinct notes during playback, counting the starting note but not the two-note turn. It is 3 if not specified." Our provisional interpretation is that the total number of jumps is trill-beats + 1.
Examples of the two types in Liszt's 14th Hungarian Rhapsody:
Arpeggios
Arpeggios are also converted into polymetric structures. Below is a chord {F1,C2,F3} of 1/2 beat duration, followed by its interpretation as an arpeggio:
The piano roll of this sequence makes this clear. The chord is divided into two parts. The duration of the first (staggered) part is determined by a minimum delay between each arpeggio note and the following one, here set to 1/20th of a beat; it must not exceed half of the duration of the chord.
Notes are tied (symbol '&') so that their durations are merged, as expected, between the arpeggio part and the pure chord part: for instance, "F1&" is tied to "&F1" — read page Tied notes for details.
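The timing rule can be summarised numerically as follows — a Python sketch assuming (n − 1) delays between n notes, which is our reading of the description above:

from fractions import Fraction

def arpeggio_timing(n_notes, chord_duration, min_delay=Fraction(1, 20)):
    # Durations (in beats) of the staggered part and of the plain chord that follows it
    chord_duration = Fraction(chord_duration)
    staggered = min(min_delay * (n_notes - 1), chord_duration / 2)
    return staggered, chord_duration - staggered

print(*arpeggio_timing(3, Fraction(1, 2)))   # 1/10 2/5 for the {F1,C2,F3} chord above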
Slurs
Slurs are translated into the Bol Processor score as _legato(x) statements, where "x" is the percentage by which note durations are increased. This option is set by default to x = 20% and can be deselected before importing the MusicXML file.
The notes whose durations are stretched are those associated with slurs in the score: C5, Eb5 and C5. Other notes, even those of the sequence {1/2,Eb4}{5/2,G4 D4 F4 C4 Eb4,Eb4 D4 C4}, are not modified because they do not appear at the same level of the polymetric structure.
With staccato or spiccato, the duration is halved. For example, C4 is replaced by {1, C4 -}.
Staccatissimo reduces the duration by three quarters. For example, C4 is replaced by {1, C4 ---}.
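Both articulations amount to padding the note with rests inside its symbolic duration, as in this small Python sketch:

def articulate(note, style):
    # Rewrite a one-beat note according to the rules quoted above
    rests = {"staccato": 1, "staccatissimo": 3}[style]   # silent part: 1/2 or 3/4 of the beat
    return f"{{1, {note} {'-' * rests}}}"

print(articulate("C4", "staccato"))          # {1, C4 -}
print(articulate("C4", "staccatissimo"))     # {1, C4 ---}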
Pedals
Pedal commands are captured from the MusicXML file and can be interpreted as MIDI controller commands. (These are ignored when generating a Csound output.)
A controller setting is suggested for each part of the score where a pedal command has been found. By default, controller #64 is used along with the MIDI channel assigned to the part. The performance controls _switchon() and _switchoff() are used according to the Bol Processor syntax.
Below are the settings for pedals, trills, etc., and the extra duration of the last measure for a piece with pedals in a single part. Numbers in yellow cells can be modified:
For example, the first three measures of Liszt's La Campanella are interpreted as follows:
Breath marks
Breath marks are "grace rests", analogous to commas in written language. In the Bol Processor, they are optionally interpreted as short silences lasting a fraction of a quarter note.
Look at measure #3 of François Couperin's Les Ombres Errantes (see conventional score above). The image shows the effect of breaths set to 1/4 of a quarter note — that is, a sixteenth note.
In the Bol Processor score, breaths can be tagged with any sequence of symbols. For example, in measure #3 of Les Ombres Errantes, the breaths are tagged with [🌱], which actually contains a Unicode character 🌱 compatible with BP syntax. Note that the breaths after D4 and B3 last 1/4 beat because those beats are quarter notes, whereas the breath after F4 lasts 1/2 beat because that beat is an eighth note.
When importing this piece, we used a 1/6 quarter-note silence, as it sounded more acceptable. In addition, we randomised the timing by 20 ms (see below).
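The conversion from a fraction of a quarter note to beats only depends on the value of one beat, as this small Python check shows:

from fractions import Fraction

def breath_in_beats(breath_in_quarters, beat_in_quarters):
    # Duration of a breath mark expressed in beats
    return Fraction(breath_in_quarters) / Fraction(beat_in_quarters)

print(breath_in_beats(Fraction(1, 4), 1))                # 1/4 beat when the beat is a quarter note
print(breath_in_beats(Fraction(1, 4), Fraction(1, 2)))   # 1/2 beat when the beat is an eighth note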
Previewing ornamentation and setting options
Before importing a MusicXML file, options are displayed for selecting or ignoring any ornaments detected in the file — here, for example, mordents and turns. Selecting an option implies that the ornament is described only as a graphic mark in the printed score, so the algorithm is expected to construct the note sequences according to the rules shown above. If the ornament has already been embedded in the file as a sequence of notes, its mark must be ignored to avoid playing it twice.
This decision can be difficult to make, as it requires analysis of the MusicXML code. To do this, the buttons open windows showing only the bars in which the selected ornament occurs. The ornament code is displayed in green and preceded by a red arrow.
Mordents, turns and trills can also be interpreted as chromatic. See their checkboxes on the picture.
Another button (at the top of the window) displays the complete MusicXML code in a pop-up window with coloured lines for measures and notes.
There are several "trace" options available. With a long file, it may not be easy to trace the entire process, so it is possible to restrict the trace to a range of measures, or to the management of tempo and ornamentation only.
Measure and part numbers
An option (see image above) allows the measure numbers to be displayed on the imported score, as shown below:
These become more visible after clicking the EXPLODE button, which fragments the score into one item per measure. Measure numbers (which appear on the printed score) do not always correspond to item numbers in the exploded view.
If the score contains several parts, their labels are also optionally displayed as "_part()" commands in the resulting score. This makes it easier to match the BP score with the printed one. For example:
In real-time MIDI, each part can be mapped to a specific MIDI output and fed to a specific digital instrument as indicated on the score — see the method.
Randomisation
Many performance controls can be applied to the imported score to change its global tempo, dynamics etc. These include the "random" operators "_rndvel(x)" and "_rndtime(x)".
The first changes the velocities by a random value between 0 and x, where x < 64. It can be placed at the beginning of a sequence of notes and followed by "_rndvel(0)" when the randomisation is no longer desired. If it is placed before a polymetric structure, it will apply to all notes in the structure.
The performance control "_rndtime(x)" follows the same syntax. Its effect is to randomly shift each note by ± x milliseconds.
Randomisation is not intended to "humanise" digital music, but rather to compensate for unwanted effects when multiple digitally synthesised sounds are superimposed. This is the case, for example, when a synthesiser attacks notes sharply to imitate plucked instruments: attacking several notes (in a chord) at exactly the same time can sound very harsh. In general, placing a "_rndtime(20)" instruction at the beginning of the piece solves the problem. However, the musical score may consist of several parts with instruments that benefit from different randomisations; in that case, instructions must be placed in front of each part (one per bar). To avoid this editorial work, an option is given to insert "_rndtime(x)" with the correct value of x on each part/instrument.
Compare the beginning of Les Ombres Errantes without, then with, a time randomisation of 20 milliseconds — i.e. much less than what would be perceived as "wrong timing". To get the right effect, the time resolution of the Bol Processor must be much lower than 20 ms. Here it is set to 1 ms, which means that the timing offsets can randomly pick up 40 different values within the ± 20 ms interval.
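The effect of "_rndtime(x)" can be emulated in a few lines of Python (a sketch of the behaviour described above, not the Bol Processor's implementation):

import random

def rndtime(onsets_ms, window_ms=20, resolution_ms=1):
    # Shift each onset by a random amount within ±window_ms, quantised to the time resolution
    steps = window_ms // resolution_ms
    return [t + resolution_ms * random.randint(-steps, steps) for t in onsets_ms]

print(rndtime([0, 500, 1000, 1500]))   # e.g. [13, 492, 1003, 1481]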
However, be careful not to reduce the time quantization to less than 10 milliseconds, as this could increase memory usage to the point where the MAMP or XAMPP driver hangs without warning. For example, on a Mac with 16 GB memory, Beethoven's Fugue in B flat major will only play in a single chunk at 30 ms quantization.
Let us compare the sizes of the files able to deliver the same interpretation of the 14th Hungarian Rhapsody:
Sound file in AIFF 16-bit 48 kHz produced by PianoTeq = 200 MB
MusicXML file = 3.9 MB
Graphic + audio score produced by MuseScore = 141 KB
Graphic score exported as PDF by MuseScore = 895 KB
Csound score produced by Bol Processor = 582 KB
MIDI file produced by Bol Processor = 75 KB
Bol Processor data = 64 KB
This comparison supports the idea that Bol Processor data is arguably the most compact and altogether comprehensive (text) format for representing digital music. Below is the complete data of this musical work (with measure numbers):
// MusicXML file ‘Hungarian_Rhapsody_No._14.musicxml’ converted
// Reading metronome markers
// Including slurs = _legato(20)
// Including trills
// Including fermatas
// Including mordents
// Including arpeggios
Velocities have been remapped to average 78 and maximum 110.
Take-away
The interpretation of complex musical works packaged in digitised musical scores highlights important features of the Bol Processor model:
Any musical work that can be scored digitally can be encapsulated in a single polymetric expression;
The timings of polymetric expressions are symbolic, here meaning human-understandable integers (or integer ratios) that count beats rather than milliseconds;
The simple syntax of polymetric expressions makes them amenable to reuse and transformation by human editors (or formal grammars);
The limitations of this modelling are only "physical": memory size and computation time;
The temporal accuracy (typically 10 ms) is not affected by the size of the data.
Return to humanity
The examples will hopefully convince the reader that the Bol Processor format is capable of emulating scores in common Western notation, and even correcting some irregularities in their timing… Let us admit that it has come a long way from its original dedication to the beautiful poetry created by drummers in India!
These are indeed interpretations of musical scores. To be reminded of the added value of human artists playing real instruments, let us end by listening to the same Beethoven Fugue played by the Alban Berg Quartet:
For more than twenty centuries, musicians, instrument makers and musicologists have devised scale models and tuning procedures to create tonal music that embodies the concept of "consonance".
This does not mean that every style of tonal music aims to achieve consonance. This concept is most explicit in the design and performance of North Indian raga and Western harmonic music.
There was a common idea that the octave and the perfect fifth (interval 'C' to 'G') were the building blocks of these models, while the harmonic major third (interval 'C' to 'E') has more recently played an important role in European music.
Computer-controlled electronic instruments are opening up new avenues for the implementation of microtonality, including just intonation frameworks that divide the octave into more than 12 degrees - see the Microtonality page. For centuries, Indian art music claimed to adhere to a division of 22 intervals (the ṣruti-swara system) theorised in the Nāṭyaśāstra, a Sanskrit treatise dating from between 400 BCE and 200 CE. Since consonance (saṃvādī) is the basis of both ancient Indian and European tonal systems, we felt the need for a theoretical framework that encompassed both models.
Unfortunately, the subject of "just intonation" is presented in a wholly confusing and reductive manner (read Wikipedia), with musicologists focusing on integer ratios that reflect the distribution of higher partials in periodic sounds. While these speculative models of intonation may support beliefs in the magical properties of natural numbers — as claimed by Pythagoreanists — they have rarely been tested against undirected musical practice. Instrument tuners rely on their own auditory perception of intervals rather than on numbers, despite the availability of "electronic tuners"…
Interestingly, the ancient Indian theory of natural scales did not rely on arithmetic. This may be surprising given that in Vedic times mathematicians/philosophers had laid out the foundations of calculus and infinitesimals which were much later exported from Kerala to Europe and borrowed/appropriated by European scholars — read C.K. Raju's Cultural Foundations of Mathematics: the nature of mathematical proof and the transmission of the calculus from India to Europe in the 16th c. CE. This epistemological paradox was an incentive to decipher the model presented by the author(s) of the Nāṭyaśāstra by means of a thought experiment: the two-vina experiment.
Earlier interpretations of this model, mimicking the Western habit of treating intervals as frequency ratios, failed to explain the intervallic structure of ragas in Hindustani classical music. In reality, the implicit model of the Nāṭyaśāstra is a 'flexible' one because the size of the major third (or equivalently the pramāņa ṣruti) is not predetermined. Read the page on Raga intonation and listen to the examples to understand the connection between the theory and practice of intonation in this context.
In Europe, the harmonic major third was finally accepted as a "heavenly interval" after the Council of Trent (1545-1563), ending the ban on polyphonic singing in religious gatherings. Major chords — such as {C, E, G} — are vital elements of Western harmony, and playing a major chord without unwanted beats requires the simplest frequency ratio (5/4) for the harmonic major third {C, E}.
With the development of fixed-pitch keyboard instruments, the search for consonant intervals gave way to the elaboration of theoretical models (and tuning procedures) that attempted to perform this interval in "pure intonation". Theoretically, this is not possible on a chromatic scale (12 degrees), but it can be worked out and applied to Western harmony if more degrees (29 to 41) are allowed. Nevertheless, the choice of enharmonic positions suitable for a harmonic context remains an uncertain proposition.
Once again, the Indian model comes to the rescue, because it can be extended to produce a consistent series of twelve "optimally consonant" chromatic scales, corresponding to chord intervals in western harmony. Each scale contains 12 degrees, which is more than the notes of the chords to which it applies. Sound examples are provided to illustrate this process — see the Just intonation: a general framework page.
The tuning of mechanical keyboard instruments (church organ, harpsichord, pianoforte) to 12-degree scales made it necessary to distribute unwanted dissonances (the syntonic comma) over series of fifths and fourths in an acceptable manner. From the 16th to the 19th centuries, many tempered tuning systems were developed in response to the constraints of particular musical repertoires, with an emphasis on either "perfect fifths" or "pure major thirds".
These techniques have been extensively documented by the organist and instrument builder Pierre-Yves Asselin, along with methods for achieving intonation on a mechanical instrument such as the harpsichord. His book Musique et tempérament (Paris: Jobert, 2000, to be published in English) served as a guide for implementing a similar approach in the Bol Processor — see the pages Microtonality, Creation of just-intonation scales and Comparing temperaments. This framework should make it possible to listen to Baroque and classical works on Csound instruments in the tunings intended by their composers, according to historical sources.
➡ Sadly, Pierre-Yves Asselin left this world on 4 December 2023. We hope that the English translation of his groundbreaking work will be completed soon.