It’s been almost three months since the last instalment in this series, so it’s high time for another update on our progress. This time, as promised, I will share some information about our music font, Bravura, and the project we have been leading to standardise the layout of music fonts, along with some insights into how we are going to build the user interface for the new application.
We released the first pre-release version of our music font, Bravura, in May of last year. One year later, we are on the cusp of releasing version 1.0 of the font. That first release, version 0.1, contained around 800 unique glyphs. The current version, version 0.99, contains nearly 2400 unique glyphs (plus almost 400 stylistic alternates and ligated forms for some of the glyphs). Hundreds of hours of work have gone into the font, and overall I am really pleased with both the wide coverage of different kinds of music notation – from early chant right up to pictograms for including electronic elements in performance – and the consistency of its visual appearance, which has retained the blackness and boldness that I set out to capture at the start of the project.
Because we have released it under the very permissive SIL Open Font License, it is free for anybody to use, modify, redistribute or otherwise muck about with. And it has been a thrill to see it being used in lots of different ways, even before its primary intended home, our new scoring application, is ready to go.
One playful example is Breve, a 2048-style game built with Bravura’s glyphs, which you can play directly in your web browser. The aim of the game is to use the arrow keys to combine pairs of notes of the same value into longer notes, working your way all the way up to a breve (double whole note) before you run out of space on the board. I’ve managed to get up to a minim (half note). Can you do better? (Incidentally, if you enjoy Breve, check out Threes, the game on which 2048 and Breve are based, on iOS and Android. It’s a much more subtle and engaging game than 2048.)
Bravura is also being used in the in-development MuseScore 2.0, and it is the main music font used in Rising Software’s Auralia and Musition iOS apps, which help students to develop their ear-training and music-theory skills.
Another great use of Bravura is in Soundslice, a browser-based application that makes it easy to produce detailed, multi-track transcriptions of musical performances from YouTube, and display the results both as tablature and as a score.
Bravura has also already been put to use in LilyPond, and a project is underway to make it easy to use its glyphs in TeX documents via LuaTeX.
Development of Bravura has gone hand in hand with development of the Standard Music Font Layout (SMuFL, pronounced “smoofle”), which is an effort to define a set of guidelines for how music fonts should be constructed, both in terms of which musical symbols should be assigned to which Unicode code points (for example, “A” has the code point U+0041 in all Unicode-compliant fonts), and how glyphs should be scaled relative to each other and positioned relative to the origin point. This is all very technical, but the upshot of the project is that font designers and music software developers have the option of using this standard for their fonts and applications, which should in time make it easier for users to use the same fonts across different applications, and increase the overall availability of compatible fonts.
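To make the code-point idea concrete: SMuFL assigns musical symbols to Unicode’s Private Use Area, starting at U+E000. Here is a minimal sketch in Python — the glyph names and code points below follow the published SMuFL mapping, but the `glyph_char` helper is my own invention, purely for illustration:

```python
# A few canonical SMuFL glyph names and their code points in the
# Unicode Private Use Area (U+E000 upwards), per the SMuFL mapping.
SMUFL_GLYPHS = {
    "gClef": 0xE050,
    "fClef": 0xE062,
    "noteheadBlack": 0xE0A4,
    "accidentalFlat": 0xE260,
    "accidentalSharp": 0xE262,
}

def glyph_char(name: str) -> str:
    """Return the character to draw with any SMuFL-compliant font."""
    return chr(SMUFL_GLYPHS[name])

print(f"gClef -> U+{SMUFL_GLYPHS['gClef']:04X}")
```

Because the mapping is shared, an application can render a treble clef by drawing the character at U+E050 in whichever SMuFL-compliant font the user has chosen.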
You might be surprised at just how many different musical symbols are used in conventional music notation (or CMN, a term coined by Donald Byrd to refer broadly to Western music notation developed over the past three or four centuries). You can now browse through the repertoire of glyphs on the SMuFL web site.
Exercising the musical brain
If you’ve been following this diary, by now you will be familiar with the high-level concept of how our application’s musical brain is constructed: from a large number of small engines, each designed to perform a specific task, chained together in series and in parallel, for maximum efficiency in multi-CPU computers. The main thrust of the development work over the past few months has been to connect together all of the engines that have been built to date, so that we can establish the basic editing loop of the program.
This means, at a high level, taking the basic form in which the music is represented (time-based streams of notes, each with a duration stored as a number of beats, rather than as a specific notated duration), and running the music through each of the discrete engines (some in parallel, some in sequence) to build up a display of music that shows:

- notes with notated durations appropriate to their rhythmic length, the prevailing time signature and their position in the bar;
- notes positioned in the correct place on the staff according to clef, transposition and octave lines;
- accidentals appropriate to the prevailing key signature, stacked correctly against chords with multiple noteheads;
- stems pointing in the correct direction and of an appropriate length;
- rhythm (augmentation) dots shown where appropriate and positioned correctly;
- rests of the appropriate durations (again, divided relative to their position in the bar and the prevailing time signature), shown at the correct vertical position so as not to collide with the notes…

and all of this drawn using appropriate spacing, based on the complex relationship between rhythmic duration and horizontal space, aggregated from the rhythms of all of the streams of notes that make up the music.
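As a deliberately simplified sketch of the first of those steps — my own toy code, not the application’s actual engine — here is how a duration stored as a number of beats might be decomposed into tied notated values:

```python
from fractions import Fraction

# Notated note values as fractions of a whole note, longest first.
NOTE_VALUES = [
    (Fraction(1, 1), "whole"),
    (Fraction(1, 2), "half"),
    (Fraction(1, 4), "quarter"),
    (Fraction(1, 8), "eighth"),
    (Fraction(1, 16), "sixteenth"),
]

def notate(beats: Fraction) -> list[str]:
    """Greedily decompose a duration (in quarter-note beats) into tied
    notated values, e.g. 3 beats -> half tied to quarter.  A real engine
    would also consider dots, the time signature and the note's position
    in the bar before choosing how to split the duration."""
    remaining = beats * Fraction(1, 4)  # beats -> fraction of a whole note
    result = []
    for value, name in NOTE_VALUES:
        while remaining >= value:
            result.append(name)
            remaining -= value
    return result

print(notate(Fraction(3)))  # -> ['half', 'quarter']
```

The exact-fraction arithmetic matters: floating-point durations would accumulate rounding errors as streams of notes are split and recombined across barlines.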
Once the music data has flowed upwards from its most basic representation all the way to its correctly notated appearance, the user must then be able to interact with the drawn music. The simplest way to close the loop and allow the user to make a change that the application has to process is to allow the user to select a note and delete it (leaving a rest in its place, at least for now), change its pitch, change its duration, or shift it forwards or backwards by a set rhythmic amount.
The edit is then made at the most basic level of the representation, i.e. the time-based streams of notes with durations specified in numbers of beats, rather than directly on the displayed notation, and the whole process runs again, transforming the basic representation into fully notated music. This time, the engines need to process only the range of music that has been affected, typically only a bar or two, and perhaps only for one instrument, so the amount of computation needed to update the notation after a typical edit is normally considerably reduced.
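As a toy illustration of that incremental loop — every name and structure here is my own sketch under simplifying assumptions (integer beat positions, a trivial “notation pass”), not the application’s real design — consider a score that re-renders only the bars an edit touches:

```python
# Toy model: the score holds beat-based events, and a "notation pass"
# (here just a summary string) is cached per bar.  An edit re-runs the
# pass only for the bars it touches, not for the whole score.
class Score:
    def __init__(self, events, beats_per_bar=4):
        self.events = list(events)      # (start_beat, duration, pitch)
        self.beats_per_bar = beats_per_bar
        self.notated_bars = {}
        self.render(range(self.bar_count()))

    def bar_count(self):
        end = max((s + d for s, d, _ in self.events), default=0)
        return -(-int(end) // self.beats_per_bar)  # ceiling division

    def bars_touching(self, start, duration):
        first = start // self.beats_per_bar
        last = (start + duration - 1) // self.beats_per_bar
        return range(first, last + 1)

    def render(self, bars):
        for b in bars:
            lo, hi = b * self.beats_per_bar, (b + 1) * self.beats_per_bar
            in_bar = [e for e in self.events if lo <= e[0] < hi]
            self.notated_bars[b] = f"bar {b}: {len(in_bar)} event(s)"

    def change_pitch(self, index, new_pitch):
        start, dur, _ = self.events[index]
        self.events[index] = (start, dur, new_pitch)
        self.render(self.bars_touching(start, dur))  # only affected bars

score = Score([(0, 1, 60), (4, 1, 62), (6, 2, 64)])
score.change_pitch(0, 61)
```

The point of the design is visible even at this scale: the edit mutates the beat-based representation, and the notation layer is derived from it on demand, bar by bar, rather than being edited directly.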
Even though the displayed notation is still crude at the moment, now that we have this basic loop in place, we have confidence that the architecture our crack team of programmers have designed is going to deliver the kind of beautiful automatic engraving and powerful editing features that are central to the vision for our product.
Building the application
At this point, our editing loop and the beginnings of our properly engraved music are being exercised in the same test harness we have been using for more than a year. Next, we will start to hook the musical brain up to the real application that you will eventually be able to use.
The musical brain itself is built in a platform-independent way: it runs on Windows and OS X today, but it can be ported to other operating systems if we want to build applications for other platforms in the future.
To build the real application, we have decided to use the cross-platform Qt application framework. Qt has really sophisticated tools for building user interfaces that can run on multiple platforms, while still retaining the native capabilities of the host operating system, for example, accessibility support via MSAA or VoiceOver, or full screen mode on OS X. Qt also has mature and efficient 2D drawing capabilities (essential for producing beautiful display of music notation on the screen, on paper, and in graphics files like PDFs), robust typography support, and all sorts of other goodies.
Most importantly, Qt will allow us to build an application with an attractive, functional user interface that is consistent across both Windows and OS X, and build it only once. If we had to develop the application on Windows and OS X separately, it would take at least twice as long, and you would have to wait even longer to get your hands on it!
More to come
As always, there is so much more we could talk about: the complexity of building an engine that can figure out how to position notes and chords in multiple voices at the same rhythmic position such that they don’t collide and are laid out according to established engraving convention, with the appropriate number of rhythm dots shown in the right places, and with stems drawn in the right place, spanning the correct staves; the subtleties of when an accidental should cause extra rhythmic space to be allocated, and when it should not; how the audio and playback engine will be integrated into our application; and many other things besides.
But having given you a tantalising glimpse into what we’ve been working on recently, I will try to follow the advice of P.T. Barnum (or was it Walt Disney?), who said, “Always leave them wanting more.” Hopefully your appetite for information about our application has not been completely satisfied, and you will check back again for another update soon.