
It's just an event loop watching for status changes in a memory location. Talking to the hardware only happens when the keyboard changes the values in memory.
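
To picture it, here's a toy model of that loop in Python: a shared byte buffer stands in for the memory-mapped status and data locations, and a background thread plays the part of the keyboard hardware (all names here are illustrative, not from any real driver):

    import threading, time

    memory = bytearray(2)   # memory[0] = status byte, memory[1] = key code
    KEY_READY = 0x01

    def fake_keyboard():
        # The "hardware": writes a key code, then flips the status bit.
        time.sleep(0.1)
        memory[1] = ord("a")
        memory[0] |= KEY_READY

    threading.Thread(target=fake_keyboard, daemon=True).start()

    # The event loop: just watch the status byte until it changes.
    while not (memory[0] & KEY_READY):
        pass
    memory[0] &= ~KEY_READY          # acknowledge by clearing the flag
    print("got key:", chr(memory[1]))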


The iMuse system really is remarkable. Games like X-Wing took great advantage of its features: when a Star Destroyer jumped into the game, the music would seamlessly transition to the Imperial March, and it felt just like being in the movies. I don't think any modern system even tries to do those seamless transitions from one music piece to another.

One thing I wonder about: he mentions CD audio (Red Book?) as one capability of the system. But the CD-audio games like X-Wing vs Tie Fighter were much more limited in that sense - you'd literally just hear the music switch to the new track. And The Force Unleashed, the last game that used iMuse, wasn't particularly remarkable in that regard, if memory serves. I wonder if that was a limitation they just couldn't quite make as seamless?

I figure today you could do it, with a "virtual MIDI" system using MP3 audio of individual instrument sounds ..

Edited to add: that last sentence is essentially what a DAW provides.


This was also done very well in Monkey Island 2 with iMuse, where a lot of care was taken in transitioning music with custom bridges. It was quite subtle and lovely, and is considered a high point in video game music.

1. A video demo: https://www.youtube.com/watch?v=7N41TEcjcvM

2. Some details: https://mixnmojo.com/features/sitefeatures/LucasArts-Secret-...


Games today feature dynamic music with loops, transitions, and individual stems that can be remixed at runtime. One prominent example (to me, at least) is "Take Control" playing over the Ashtray Maze in Control. While you play, it sounds like an absolutely seamless prog metal song, but it is actually highly reactive to the gameplay - the rapid-fire sequence of battle arenas and fast-paced corridors. The player stays in absolute control of the pacing the whole time.


Similarly with "Herald of Darkness" in Alan Wake 2's "We Sing" level: the song loops through bridges based on how long you take to play through it.

And those are only the most obvious examples - games like Deus Ex featured dynamic music transitions decades ago.


Tetris Effect is also a great example of this. Each movement and rotation of the pieces plays into the score, and each level features a different genre. One of my favorites is the New York City jazz level.


Hi-Fi Rush did some of the opposite: the gameplay in certain parts shrinks or stretches so it takes the right amount of time to hit the next musical cue.


The Ashtray Maze is a masterpiece, and music is indeed core to its experience.


Nier Automata comes to mind as an example: it has many versions (musically and lyrically) of the same pieces for each area and transitions between them.


Much like the Need for Speed series (I believe it was introduced in 1998, in the third installment called Hot Pursuit).


Final Fantasy XIV does this a lot. Boss fights' music will often change depending on what phase of the fight you're in, and in some the music will gradually transition to the heroic themes at the right moment.


Take Control is amazing.


> I wonder if that was a limitation they just couldn't quite make as seamless?

It's a fundamental limitation of CD audio. There isn't enough buffering to keep playing sound while the laser seeks to the next track, so there must be a gap. The gap isn't even predictable: the seek time varies from drive to drive, and even varies on the same drive.

With CD audio, your CD-ROM drive switches mode to become a regular CD player. The digital samples don't get sent to your sound card; the drive itself has all the electronics required to decode the digital audio and convert it to analog. All your sound card does is mix the analog output from your CD-ROM drive with everything else.

The game can only really send "skip to track" style commands to the drive, more or less the same set of commands you could send with a proper CD player's remote.
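
To give a sense of how thin that command set is, here is a sketch of issuing a "play track" command through the Linux CD-ROM ioctl interface (the ioctl numbers come from linux/cdrom.h; the /dev/cdrom path is an assumption):

    import fcntl, os, struct

    # ioctl numbers from linux/cdrom.h
    CDROMPLAYTRKIND = 0x5304   # play a range of audio tracks
    CDROMSTOP       = 0x5307   # stop audio playback

    def play_track(device, track):
        """Tell the drive itself to play one audio track; the host only
        issues the command, the drive decodes the audio on its own."""
        fd = os.open(device, os.O_RDONLY | os.O_NONBLOCK)
        try:
            # struct cdrom_ti: start track/index, end track/index (4 x u8)
            ti = struct.pack("BBBB", track, 1, track, 1)
            fcntl.ioctl(fd, CDROMPLAYTRKIND, ti)
        finally:
            os.close(fd)

    play_track("/dev/cdrom", 3)   # "skip to track 3" - the drive does the rest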


CD and other prerecorded formats create trade-offs versus MIDI event sequences. Playback is simple and offers a lot of fidelity, but in exchange you're tied to one of: a single track at a time with the CD spinning up in between (Red Book CD), cueing uncompressed sampled tracks (feasible but memory intensive), or cueing one or more lossy-compressed streams (which added performance or hardware-specific considerations at the time, and in many formats also limits your ability to seek to a particular point during playback or do fine-grained alterations with DSP). So as a dynamic music system it tends to lend itself to brief "stings" like the Half-Life 1 soundtrack, or to simple explore/combat loops that crossfade or overlay on each other. Tempo and key changes have been off the table, at least until recently (and even then, they really impact sound quality). DJ software offers the best examples of what can be done when combining prerecorded material live, and there are some characteristic things about how DJs perform transitions and mashups that are musically compelling but won't work everywhere for all material.
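
For what it's worth, here is a minimal sketch of that explore/combat crossfade pattern in Python, assuming two equal-length stems as numpy arrays at the same sample rate (the equal-power cos/sin envelopes keep the combined loudness roughly constant):

    import numpy as np

    def equal_power_crossfade(a, b, fade_samples):
        """Fade out stem a while fading in stem b over fade_samples."""
        t = np.linspace(0.0, np.pi / 2, fade_samples)
        out_env = np.concatenate([np.cos(t), np.zeros(len(a) - fade_samples)])
        in_env  = np.concatenate([np.sin(t), np.ones(len(b) - fade_samples)])
        return a * out_env + b * in_env

    sr = 44100
    n = np.arange(sr * 4)
    explore = np.sin(2 * np.pi * 220 * n / sr)   # stand-ins for real stems
    combat  = np.sin(2 * np.pi * 440 * n / sr)
    mixed = equal_power_crossfade(explore, combat, 2 * sr)  # 2-second fade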

MIDI isn't really that much better, though - it's a compatibility-centric protocol, so it doesn't get at the heart of the dynamic-audio question of "how do I coordinate this?". All it is responsible for is an abstract "channel, patch number, event" system, leaving the details of coordinating multiple MIDI sequences and triggering appropriate sounds to be worked out in the implementation. An implementation that does everything a DAW does with MIDI sequences also has to implement all the DSP effects and configuration surfaces, which is out of scope for most projects, although FMOD does enable something close to that.
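
To illustrate just how little the protocol itself carries, here is roughly what those "channel, patch number, event" messages look like as raw bytes - a sketch, not a full MIDI implementation:

    def note_on(channel, note, velocity):
        # status byte 0x90 | channel, then note and velocity (7 bits each)
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def program_change(channel, patch):
        # status byte 0xC0 | channel, then the patch ("instrument") number
        return bytes([0xC0 | (channel & 0x0F), patch & 0x7F])

    msg = program_change(0, 48) + note_on(0, 60, 100)  # pick a patch, play middle C
    print(msg.hex())  # c030903c64

Everything above the byte level - which sequence plays, when to branch, how to transition - is the implementation's problem, which is exactly the coordination gap being described.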

I think the best approach for exploring dynamic and interactive music right now is really to make use of systems that allow for live coding - Pure Data, SuperCollider, etc. These untangle the principal assumption of "either audio tracks or event sequences" and allow choice, making it more straightforward to coordinate everything centrally, do some synthesis or processing, do some sequencing, and adopt novel methods of notation. The downside is that these are big runtimes with a lot of deployment footprint, so they aren't something that people just drop into game engines.


> I figure today you could do it, with a "virtual MIDI" system using MP3 audio of individual instrument sounds ..

Reinventing tracker music, in other words? =D


X-Wing vs Tie Fighter: available memory and CD seek performance really restricted what you could do then.

The Force Unleashed: this is one of those "succeeds if it's invisible" things. The music is procedural, based on mixing rhythmic and arrhythmic stems. That allowed continuous crossfades without needing to precisely match beats - again a workaround for not being able to precisely line up stems. The other fun thing introduced was physics-driven synthesis: the DMM system fed information about strain, impacts, and other events into a granular synthesizer. The bussing and ducking architecture was derived from this paper by Walter Murch: https://transom.org/wp-content/uploads/2005/04/200504.review... Fun anecdote: I was at a party with some audio nerds, raving about the paper to a new acquaintance, who interrupted me and said, "Oh, I wrote that!"
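
I don't know the actual implementation, but the general shape of "a physics value drives a granular synthesizer" looks something like this toy numpy sketch, where a strain value in 0..1 maps to grain pitch and density (all names hypothetical):

    import numpy as np

    SR = 44100

    def grain(freq, dur=0.05):
        # One short windowed sine burst - the basic granular building block.
        t = np.arange(int(SR * dur)) / SR
        return np.sin(2 * np.pi * freq * t) * np.hanning(int(SR * dur))

    def render(strain_at, seconds=2.0):
        out = np.zeros(int(SR * seconds))
        pos = 0
        while pos < len(out) - int(SR * 0.05):
            s = strain_at(pos / len(out))        # physics input in 0..1
            g = grain(200 + 800 * s)             # more strain -> higher pitch
            out[pos:pos + len(g)] += g
            pos += int(SR * (0.08 - 0.06 * s))   # more strain -> denser grains
        return out

    audio = render(lambda x: x)   # strain ramping up over two seconds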


https://www.youtube.com/watch?v=_MguVQ1Fja8&t=8 - an example of the X-Wing iMUSE soundtrack! You can hear it swap in memorable tags, literally without missing a beat.

From a musical theater composition perspective, it's almost like building around vamp sections: https://romanbenedict.com/vamp-safety-repeat/ - you build a neutral, repeatable motif that you can easily lay under unpredictably timed segments (e.g. spoken dialogue), that's primed to "explode" into a memorable melody whenever the on-stage timing calls for it!


X-Wing just had great music. Even the original stuff was great. The music for the training run was perfect.

Modern games have similar reactive music systems but I've never heard one I felt was better than X-Wing's. They got it right on the first try.


What made these games different was that the musical themes were significant and well known long before you installed your Sound Blaster. The music was mixed at high intensity out of the box, allowing it to influence you, with each track tailored to the moment.

This gave the series a leg up in that the music could actually communicate information effectively -- a tense moment, the shifting tide of the battle, the calm after a victory -- whereas other games simply had to put up waveforms that sounded pleasing.

To be fair many games experimented with sound design in this era, but few had such legendary IP to build with. An unfair advantage to say the least. The folks wielding iMUSE clearly knew what they had.


Again, the original stuff was great too; I don't think it was just the familiar themes.


> I don't think any modern system even tries to do those seamless transitions from one music piece to another.

You will be pleased to hear that plenty of games since then have continued to use that same technique, and there are in fact entire realms of game dev systems dedicated to enabling that experience!


Dynamic music systems are standard in modern game development: https://www.fmod.com/studio

(Whether or not the game actually does anything interesting with them is its own question.)


> I don't think any modern system even tries to do those seamless transitions from one music piece to another.

Games definitely do this.


I might be deceiving myself, but as I recall, I was very satisfied with how World of Warcraft handled environmental music changes. iMUSE may be a whole other level, though.


You have awakened some incredible memories. I know exactly what you are talking about.


I really enjoyed having the kids introduce it


This is the Logic Theorist: https://en.wikipedia.org/wiki/Logic_Theorist

I love Simon's anecdote of using his wife and children as the stack as he developed IPS.

The Logic Theorist was eventually succeeded by GPS, the General Problem Solver, which has a good description in Norvig's Paradigms of Artificial Intelligence Programming, here: https://github.com/norvig/paip-lisp
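
For flavor, here is a toy means-ends analysis loop in the spirit of GPS - a Python paraphrase of the idea Norvig develops in Lisp, using the classic "drive to school" example. The operators are illustrative, and the real GPS does considerably more bookkeeping:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Op:
        action: str
        preconds: frozenset
        adds: frozenset

    def achieve(state, goal, ops, plan):
        """Return a state in which goal holds, appending actions to plan."""
        if goal in state:
            return state
        for op in ops:
            if goal not in op.adds:
                continue                 # means-ends: only ops that add the goal
            s = state
            for p in op.preconds:        # recursively achieve preconditions
                s = achieve(s, p, ops, plan)
                if s is None:
                    break
            if s is not None:
                plan.append(op.action)
                return s | op.adds
        return None

    ops = [
        Op("drive to school", frozenset({"car works"}), frozenset({"at school"})),
        Op("fix the car",     frozenset({"have money"}), frozenset({"car works"})),
    ]
    plan = []
    achieve(frozenset({"have money"}), "at school", ops, plan)
    print(plan)   # ['fix the car', 'drive to school']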


This really is a beautiful book, applicable to breaking down problems into manageable parts, whatever language you are using. Highly recommend.


Actually that's an interesting point, since the images are already in use in the article this one is responding to. It seems likely.


> but how much actual understanding does it have?

That's always the question, isn't it? The article does a pretty convincing job of showing that, at least in the given examples, it has a pretty good "understanding" of what's taking place in the scenes and what makes them remarkable to people. And a 7-year comparison is going back a long way; just the last 2 or 3 years is where much of the most interesting progress has revealed itself.

Image segmentation, object detection, and tracking are all already on display here.


Just speculating on how the "understanding" may come about:

In the given images above, it may be clear from the context (text? tags? EXIF info, etc.) of images in its training data that it's unusual for people to be dragged on a rope behind a horse, very unusual and dangerous for 747-sized airplanes to fly on their side, or for houses to be lying on their side on a beach. And hence it describes such a view with "unusual", "dramatic", etc. Would it even need to understand the conceptual meaning of those words? Apply label, done.

Don't people work the same, in a way? Over the years we'll rarely see a house burning in person. We see news reports of such events mentioning people dead or severely burned. So after a while that 'training set' is enough to say: "person stumbling out of a burning house = something bad happened".

Yes, humans may then reflect on how they would feel if placed in unlucky person's shoes, and rush to alleviate that person's pain. Or cringe by the thought of it.

But in the end: maybe, just maybe, what human brains do isn't so special after all? Just training data, pattern matching against external input, and using the results to self-reflect.

(that last step not -yet- covered by GPT & co)


It sounds to me like they are saying this is a significant reduction in the Neanderthal genes that survive today: down to 4303 that can't be attributed to other human variants. That would make more sense to me, as the usual statement of up to 4% of a person's genome would be enough to create a unique species, wouldn't it?


That depends entirely on what the genes in question do. Some pairs of species can produce viable offspring even though the parent species have different numbers of chromosomes.


I'm editing this because I'm wrong but leaving original comment below.

On second thought, this research wouldn't be looking at the genes that are highly variable among human populations - the ones DNA tests profile to identify individuals - but at genes highly conserved across the human genome, where 4303 changed genes would be a significant amount. I was ranting about the wrong thing.

Original comment: Point taken. Speciation is certainly messy.

4303 statistically significant coding genes is essentially just "a good start" when it comes to identifying what Neanderthal inheritance, if any, modern humans might have - compared to a potential "4% of the genes DNA tests bother profiling", which could still have come from some undetermined modern human population that just happened to have the same alleles as Neanderthals, including non-coding genes whose effects we couldn't pinpoint anyway.

I think that would explain the ambiguity in the articles about the actual effects of these identified genes; that's for the follow-up research.


Neanderthals themselves were barely a unique species compared to sapiens, and they had 100% Neanderthal variants. 4% of the genome consisting of their variants makes a variety, at best.


This write-up highlights a depressing fact we all know: the mobile security situation is bleak, to the point of being untenable.

I checked my own device, and despite owning it for just about a year, security updates have likely already stopped (although the manufacturer's website hasn't exactly confirmed that just yet). And even while the security updates were still coming, the gap between when they are released and when they reach devices is measured in months, not days, making these exploits worse than zero-days. I have seen no movement from any of the manufacturers toward correcting these issues.

You too can check for yourself at source.android.com/security/bulletin


Android is not the only mobile ecosystem around, and not every ecosystem is a train wreck.


In theory I agree, but in practice...

The exploit given here works on any device with the given driver, regardless of OS. Android is just the primary example since it is the 800lb gorilla.

And as the article mentions, just 2 hardware stacks make up nearly the whole ecosystem.


I believe your parent comment is referring to iOS


Chaotic systems, at least, were not known to Einstein. They weren't described until after his death, starting with Lorenz's computer simulations. Earlier hints of the theory were there, but no one took them seriously. Ian Malcolm, though, is a solid authority.

