Okay, Well, Just Tell Me What Happens at 2:14 a.m. EST, August 29th

January 29th, 2010 | 0 Comments | Uncategorized, ai, music

Artificial Intelligence has always fascinated me. I think it’s probably fascinated everyone… even people who didn’t grow up wanting to play poker with Soong-type androids or reading about paranoid androids and robopsychologists… or, you know, growing up a little and wondering exactly how the Enterprise’s main computer does its natural language processing, and whether the Universal Translator actually assists with or simply serves to complicate the syntactic ambiguity problem.

That said, I’m not a linguist (I wish I were, by the way, but that’s a story for another time) and I certainly don’t have the intellectual chops to be an actual AI researcher. Case in point: I know enough about basic probability theory to understand the underlying principles of Bayesian learning and to think to myself, “Hey, that’s pretty neat. I could use this here and here and here.” But that thought is quickly replaced with something along the lines of “OH DEAR GOD, MY EYES ARE BLEEDING”, augmented with a lot of teeth-gnashing and garment-rending, as soon as the equations start popping up.

So long story short, am I going to be the one who solves the natural language problem? No. I’m much more likely to just spend my time bitching about comma-splicing in Facebook statuses. Am I going to be the one who develops a self-learning defense grid? No. I’m much more likely to join Cyberdyne for their awesome dental. Am I going to be the one who builds a protocol droid that can speak the binary language of moisture vaporators and can grumble humorously about how some nerf-herding smuggler’s harebrained scheme is going to make him violate all three Asimovian laws? No. I’m much more likely to write a short story about said droid learning said binary language after being stranded in the unforgiving deserts of Tatooine and finding himself nursed back to health by a beautiful moisture vaporator with dreams of getting off her backwards world and seeing the galaxy beyond. (She dies at the end.)

There are much, much smarter people out there who are going to do these things. They’re working on it now. And even if I can never solve the problems they’re solving, I want to start understanding the challenges they’re facing. And I don’t mean that I want to understand it in a mathematical sense. (Well, that’s a lie. I do want to understand it in the mathematical sense. I just think I should understand the actual manifestations of it first. That will also give me time to get over the whole eye-hemorrhaging issue.) I mean that I should understand what it means to try to get an artificial system to make decisions, to do something human, and be able to see firsthand what is stopping us.

So can a robot write music? Yes, yes, a thousand times yes. There are even efforts to generate music that is capable of provoking an emotional response in a human listener by capturing the principles of music psychology in knowledge bases that can be used as decision weights by the system’s inference mechanisms.1

How cool is that? Seriously. Next thing you know, we’ll have androids shredding.

So anyway. I’m building a rule-based system that’ll generate music. I call him BachBot. He probably gets beat up at the Young Robots Finishing School for that name but we hang out sometimes. We’re starting with species counterpoint. One of his first epics: Ode to Beefy the Musical Wondercow2.

1. Hmm. Who owns the rights to music generated entirely by a rule-based system?
2. Title mine. BachBot doesn’t create his own titles yet, unless you count auto-incrementing his test outputs. He does that just fine. We’ll tackle the whole natural language thing, you know. Later. Right now, I’m more interested in him not shredding my gorram ears apart by insisting that a minor second is perfectly okay in first species counterpoint.
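For the curious, here’s roughly what that minor-second rule looks like when you spell it out as code. This is a minimal sketch, not BachBot’s actual implementation — all the names are mine, and pitches are assumed to be MIDI note numbers. In strict first species, each note of the counterpoint must form a consonance with the cantus firmus: unison/octave, third, fifth, or sixth. A minor second (one semitone) fails, which is exactly the complaint in footnote 2.

```python
# Hypothetical sketch of one rule in a first-species counterpoint checker.
# Pitches are MIDI note numbers; none of these names come from BachBot itself.

# Vertical intervals (in semitones, mod 12) that strict first species treats
# as consonant: unison/octave (0), minor/major third (3, 4), perfect fifth (7),
# minor/major sixth (8, 9). The perfect fourth (5) is dissonant in two voices.
CONSONANCES = {0, 3, 4, 7, 8, 9}

def is_consonant(cantus_note: int, counterpoint_note: int) -> bool:
    """True if the vertical interval between the two voices is consonant."""
    return abs(cantus_note - counterpoint_note) % 12 in CONSONANCES

def check_first_species(cantus, counterpoint):
    """Return indices of beats that break the note-against-note consonance rule."""
    return [i for i, (c, p) in enumerate(zip(cantus, counterpoint))
            if not is_consonant(c, p)]

# A major third (C4 against E4) passes; a minor second (C4 against C#4) does not.
print(is_consonant(60, 64))   # major third
print(is_consonant(60, 61))   # minor second
```

A full checker would layer on rules about parallel fifths and octaves, voice leading, and so on, but each one reduces to the same shape: a predicate over adjacent or simultaneous notes, and a system that rejects or penalizes outputs violating it.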
