Music Evolves Podcast

How Technology is Changing the Way We Make Music: A Look Inside Columbia University's Groundbreaking Computer Music Center | A Conversation with Seth Cluett | Music Evolves with Sean Martin

Episode Summary

Explore the intersection of music and technology with Seth Cluett, Director of Columbia University’s Computer Music Center, as he shares the center’s rich history, groundbreaking innovations, and how it continues to shape the future of sound. From pioneering electronic music to integrating AI and immersive audio, this episode reveals how technology enhances creativity and redefines what’s possible in music.

Episode Notes

Guest and Host

Guest: Seth Cluett, Director of Columbia University’s Computer Music Center | On LinkedIn: https://www.linkedin.com/in/seth-cluett-7631065/ | Columbia University Computer Music Center Bio: https://cmc.music.columbia.edu/bios/seth-cluett

Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast & Music Evolves Podcast | Website: https://www.seanmartin.com/

Show Notes

Music and technology have always shaped each other, and few places embody that relationship as deeply as the Computer Music Center (CMC) at Columbia University. In this episode of Music Evolves, Sean Martin sits down with Seth Cluett, Director of the Computer Music Center and Assistant Director of the Sound Art MFA program at Columbia, to explore the center’s rich history, its role in advancing music technology, and how it continues to shape the future of sound.

The Legacy and Mission of the Computer Music Center

The CMC is housed in the same 6,000-square-foot space as the original Columbia-Princeton Electronic Music Center, which dates back to 1951 and is one of the world’s oldest university-based electronic music research facilities. This was the birthplace of early electronic music, where pioneers learned to use cutting-edge technology to create new sounds. Many of those musicians went on to establish their own studios around the world, from Egypt to Japan.

The center has played a role in major milestones in music history, including the work of Wendy Carlos, a former student known for Switched-On Bach, the score for Tron, and The Shining. The first piece of electronic music to win a Pulitzer Prize was also composed here. Today, under Cluett’s leadership, the focus remains on creativity-driven technological innovation—allowing composers and artists to explore technology freely and push the boundaries of what’s possible in sound and music.

One of the center’s guiding principles is accessibility. Cluett emphasizes the importance of lowering barriers to entry for students who may not have had prior access to music technology. The goal is to make sure that anyone, regardless of background, can walk into the studio and begin working with 80% of its capabilities within the first 20 minutes.

Exploring the Labs and Studios

The episode also includes a tour of the labs and studios, showcasing some of the center’s groundbreaking equipment. One highlight is the RCA Mark II Synthesizer, the world’s first programmable music synthesizer. Built in the late 1950s, this massive machine—seven feet tall and weighing over a ton—was instrumental in shaping the sound of early electronic music. The system worked by punching holes into paper to control sound generation, similar to a player piano. While no longer in use, the CMC has collaborated with iZotope to model some of its effects digitally.

The tour also features Columbia’s electronic music studio, which houses synthesizers from Buchla, Serge, and Moog—the latter being developed by Bob Moog, who was once an undergraduate at Columbia. The center’s modern design emphasizes a seamless workflow between analog and digital technologies, allowing students to quickly create, process, and experiment with sound.

Another key space is the immersive media and spatial audio research facility, which features a 12.1-channel loudspeaker sphere for ambisonic sound, along with a 32-capsule microphone that captures highly detailed audio environments. This technology is not only shaping music but also fields like virtual reality, data sonification, and interactive media.
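For readers curious how ambisonic systems like the one described represent direction, here is a minimal sketch of first-order ambisonic (B-format) encoding. The function name and the simplified gain formulas are illustrative conventions, not the center's actual signal chain:

```python
import math

def encode_b_format(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample into first-order ambisonic B-format (W, X, Y, Z).

    Azimuth is measured counterclockwise from straight ahead; elevation is
    up from the horizontal plane. Classic B-format weights W by 1/sqrt(2)."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    w = sample / math.sqrt(2)                  # omnidirectional pressure
    x = sample * math.cos(az) * math.cos(el)   # front/back component
    y = sample * math.sin(az) * math.cos(el)   # left/right component
    z = sample * math.sin(el)                  # up/down component
    return w, x, y, z

# A source straight ahead on the horizon excites only W and X:
w, x, y, z = encode_b_format(1.0, azimuth_deg=0, elevation_deg=0)
```

Decoding then mixes these four channels to however many loudspeakers surround the listener, which is why the same recording can feed spheres of different sizes.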

The Future of Music Technology

Looking ahead, Cluett highlights the increasing interplay between AI, machine learning, and music composition. While some companies promote AI-generated melodies, he believes that truly expressive, human-driven composition remains essential. The role of technology, he argues, is not to replace human creativity but to enable new forms of expression. The CMC is at the forefront of this shift, experimenting with real-time audio processing, interactive performance systems, and embedded sensors that enhance live music experiences.

As music and technology continue to merge, Columbia’s Computer Music Center remains a key player in shaping the future of sound. Whether through pioneering hardware, software innovation, or fostering the next generation of creative minds, the center proves that music technology is not just about engineering—it’s about expression, accessibility, and the pursuit of artistic joy.

🎧 To hear the full conversation and get an inside look at the labs and studios, listen to the episode now and catch it on the YouTube playlist: https://www.youtube.com/playlist?list=PLnYu0psdcllTRJ5du7hFDXjiugu-uNPtW.

Sponsors

Are you interested in sponsoring this show or placing an ad in the podcast?

Sponsorship 👉 https://itspm.ag/annual-sponsorship

Ad Placement 👉 https://itspm.ag/podadplc

Resources

Columbia University Computer Music Center: https://cmc.music.columbia.edu/

The last piece of functional music created on the RCA Mark II Synthesizer (1998) | DJ Spooky vs. The Freight Elevator Quartet – File Under Futurism: https://www.youtube.com/watch?v=KViFoylo2hQ

Podcast | The Future Of Music And Its Impact On Society And Humanity | With Seth Cluett and Scott Scheferman: https://itspmagazine.simplecast.com/episodes/the-future-of-music-and-its-impact-on-society-and-humanity-with-seth-cluett-and-shagghie

Episode Transcription

How Technology is Changing the Way We Make Music: A Look Inside Columbia University's Groundbreaking Computer Music Center | A Conversation with Seth Cluett | Music Evolves with Sean Martin
 

[00:00:00] Sean Martin: Here you are. Very welcome to a new episode of Music Evolves. I'm Sean Martin, your host. I get to actually enjoy, enjoy what I love, which is exploring the world of music and seeing how it has evolved over time and where it might be heading. And I'm super thrilled to be here with Seth Cluett, who I met, I don't know, it's probably been four or five, six years, maybe something like that. 
 

I met you in Ireland. You were at Inspirefest demonstrating some technology. Mm hmm. And I was like, that's really cool. I'd like to keep in touch with Seth, which we did. You've been on the show a couple times. Yeah. Yeah. Talking about different things. And I actually got to see the lab. We're gonna see the lab.
 

Yeah. And a number of labs here and studios. And I'm excited to show you those. But I wanted to talk more about kind of the program here. Mm hmm. The history of the program. Uh, your role in the program, and obviously we're here at Columbia, Columbia University. Um, a few words, Seth, first for Folks, who you are, how you, how you arrived where you are today. 
 

[00:01:08] Seth Cluett: Yeah, um, I'm Seth Cluett. I'm, uh, director of the Computer Music Center and assistant director of the Sound Art MFA program here at Columbia. Um, maybe of interest to your audiences, I'm affiliate faculty in the Data Science Institute and, uh, affiliate faculty in the Center for Comparative Media. Um, I, as you can tell, I'm an interdisciplinarian.
 

I work, uh, somewhere in the middle of audio media technologies. creative sound practices and music, uh, and that can take a number of different shapes, from writing book chapters or peer reviewed journal articles, or creating sound installations, either didactic exhibition installations for museums, or art exhibitions for galleries, and then, uh, within the music space, mostly, um, uh, composed music for ensembles, and live performance with custom built, uh, electronic musical instruments of my own design. 
 

[00:02:04] Sean Martin: I saw one of those, I  
 

[00:02:05] Seth Cluett: think.  
 

[00:02:06] Sean Martin: And I forget who was, who joined you, uh, on stage. Um, the electronic, the drummer. Levy Lorenzo. Yes, that's right. Yeah, Levy Lorenzo. But, but your memory doesn't test there. Yes, it did. Mine didn't work. Yeah, yeah.
 

[00:02:17] Seth Cluett: Uh, it's a snare drum with a transducer and a pickup inside of the drum head that listens to the air pressure inside of the drum and changes the musical processing based on the air pressure inside the instrument. 
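The idea Cluett describes, a sensor reading inside the instrument steering the amount of processing, can be sketched in a few lines. The function name, the kPa units, and the pressure range below are hypothetical, purely to illustrate sensor-driven effect control:

```python
def pressure_to_wet_mix(pressure, p_rest=101.3, p_max=103.0):
    """Map an air-pressure reading (kPa; the range is hypothetical) to a
    0..1 wet/dry mix for an audio effect: harder hits raise the pressure
    inside the drum, so they get more processing. Out-of-range readings
    are clamped."""
    norm = (pressure - p_rest) / (p_max - p_rest)
    return max(0.0, min(1.0, norm))
```

A reading halfway between the resting and maximum pressure yields a mix near 0.5; anything at or below rest leaves the signal dry.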
 

[00:02:31] Sean Martin: And who knew there's a whole, whole conference and competition revolving around, maybe there's more than one. I saw one pretty significant one where creation of new, new music instruments, um, which is pretty cool. I'm open to, open to explore that a little more. So let's talk about the program. I mean, there's a lot of history here. 
 

Um, can you kind of give us an overview of when it started? Sure. What things came from here. I think we'll touch on some of that when we go through the lab. Sure. Maybe an overview for folks up here.
 

[00:03:00] Seth Cluett: Yeah, of course. Uh, so, um, so this is the Computer Music Center at Columbia University. Um, we occupy the same 6,000-square-foot footprint in the same building, uh, as the original Columbia-Princeton Electronic Music Center, um, which is, uh, um, you know, going back to 1951, uh, one of the oldest, if not the oldest, uh, electronic music research facilities at a university in the world.
 

Um, so what that meant for the history of this place was that at the very beginning when no one knew how to do electronic music, this was the place where they trained all of the first generation of people who would go out into other universities and found studios, go back to their home countries and found studios at their universities. 
 

So famous examples would be Halim El-Dabh, who wrote the first piece of electronic music ever in Cairo in, in the late 1940s, uh, um, came here, worked in the studios, and then went back to Cairo and, and founded, uh, electronic music resources, uh, at the university there. Same thing happened in Turkey, and Iran, and Israel, and Venezuela, and Argentina, and, you know, the list goes on and on and on.
 

In Japan, um, uh, hundreds of people from all over the world coming here to figure out what this new thing, electronic music, is and how they might learn to use those, um, those facilities. The original facility had, uh, five studios, just the same way the current facility does. Um, some of the rooms are a little bit different, the difference primarily being that at that time they were all basically the same setup, so they were doing multiples to ensure more access, rather than diversity to, uh, ensure more exposure, which is what we're trying to do in the current iteration of the facility.
 

Um, uh, over the years, dozens of, uh, people of interest have worked at the center. Um, um, famously Wendy Carlos, who did Switched-On Bach, was a student here. Uh, also the music for The Shining and Tron. Um, uh, very important person in the history of electronic music. Um, did their foundational work at the Center.
 

Um, uh, you know, the, I mentioned, uh, I mentioned when we toured the space that, uh, That the first piece of music to win, uh, the Pulitzer, first piece of electronic music to win the Pulitzer Prize was composed here. Um, and so, uh, so it's a long history of kind of, uh, creative inquiry into sound technology practice. 
 

Um, and so the current center, um, you know, we have a lot of, uh, peers. The Center for Computer Research in Music and Acoustics at Stanford, or the Center for New Music and Audio Technologies at UC Berkeley, or the Georgia Tech music technology doctoral program. Uh, all of those have different ways of interfacing with intellectual property generation and patent development and, uh, publishing in scientific journals and feeding industry.
 

Um, I've decided that, uh, I will continue the tradition here since I came on as director two years ago of creativity-first technological innovation, which I'm basically framing as, um, when composers and artists have, uh, free rein to explore technology, they come up with ideas that surpass most, uh, uh, engineering research and development processes.
 

And so when we come up with something interesting, we publish on it. And when we have people who are suited to industry, they go there. Most of them are making their lives as composers, uh, you know, lucratively employed as, uh, themselves rather than, uh, working in the service of others. Um, but, uh, but this place is still focused very much on creativity-first engineering innovation.
 

[00:07:02] Sean Martin: Yeah. So many things in my mind to, uh, to talk about here. So let's go here. We were talking earlier, you mentioned, mentioned maybe not exactly in these words, but essentially removing barriers and, and friction from the creative process.
 

[00:07:21] Seth Cluett: Sure.  
 

[00:07:21] Sean Martin: Um, maybe talk a little bit about that. 
 

[00:07:23] Seth Cluett: Sure. Um, I mean, I know it's a complicated topic to address now, but, um, there for a long time have been, uh, equity issues and access to technical facilities, particularly around electronic music, but not exclusively that way. Um, particularly people who grew up in regions or in high schools that didn't have access to technology. 
 

Particularly hardware technology come to a center like this that's loaded with historical hardware and new hardware innovation, new software designs and new software interfaces, human computer, uh, HCI like human computer interaction design. Um, that can all be very daunting. Um, and what we want to do is retain people who, um, otherwise might see themselves as not belonging here. 
 

Um, uh, so this is not about prioritizing anyone, uh, You know, uh, person. But I'm a first gen college student. I come from very, very rural upstate New York in a very, very poor high school. Um, I was lucky to have been exposed to electronic music at a very young age. Um, but in every fancy place I've been a student, New England Conservatory, Rensselaer Polytechnic Institute, and Princeton, I have always felt as though everyone around me knows much, much more than me. 
 

And so with that spirit of design, we've tried to build our studios with the idea that the average person should be able to come in to the studio and do 80 percent of a studio's function within the first 20 minutes of walking in the room. Which if you've used technical facilities, you know that that is a very high bar, uh, and sometimes unachievable. 
 

[00:09:01] Sean Martin: Probably took me that just to do this. 
 

[00:09:03] Seth Cluett: Right. And, um, uh, and so we have gone a long way to make it so that we're reducing friction to creativity, um, and we're reducing frustration or the sense, early senses of failure that can, um, kind of creep in as a way for a little voice to pop up in someone's head that says, ah, I can't figure this out.
 

I don't belong in this field. And I want to capture the people for whom, um, you know, the first pass might not be intuitive. But what they're doing is questioning the nature of the system. And it's that questioning of the nature of the system that gets us to think of new solutions to problems that we don't know need solving yet. 
 

And so I want to find people who are willing to push just enough through the friction, uh, but not provide so much friction that they're, uh, deterred from engaging.  
 

[00:09:58] Sean Martin: The other thing I wanted to touch on 
 

Take a step back, um, the creation of electronic music, um, so when we look in the, in the different studios and different labs, if you will, lots of wires, lots of computers, lots of dials, lots of knobs, um, I was talking to, I think you actually met, uh, Scott Scheferman on one of the episodes, I was on with him the other day.
 

I'm starting to play around with synthesizers and things. It didn't trigger for me that there's actually sound passing through and being manipulated, but also pulses. Kind of like the dots on the paper that we're going to have a look at. So maybe describe what electronic music is in terms of hardware and analog and digital. 
 

[00:10:59] Seth Cluett: Yeah, so we are in a world right now that, um, seamlessly moves back and forth between analog and digital technologies. There was a gap between, I would say, 1988 and 2005, where it seemed as though we were going to bypass analog and that it would be rarefied and collector's items and vintage. Um, but what we've discovered is that, kind of, the rise of Internet of Things technology, the, um, the ability for your refrigerator to alert you as to the door being open because the temperature is lowered, has meant the, um, the nanoscale development of hardware technologies seamlessly interacting with software technologies.
 

And that's put us in a really interesting space for electronic music because, you know, we started with test oscillators from electronic workbenches in the 30s and 40s as the way that we could generate a tone. So it's the function generator that tells you, I'm now, like, my circuit is operational, I, you know, you can count the number of cycles of the period going through the circuit and you understand, like, your RC time circuits, your, your, your, your, um, your electric, your electrical engineering principles. 
 

Um, like we've gone from that, which is kind of like MacGyver cut-and-paste from things that are found around the, the world, to the 1960s with, um, the development of transistorized technologies and solid-state electronics to be able to have synthesizers in a suitcase. And that meant that people could generate tones with circuitry, oscillators, and, um, uh, and the like, um, filter designs.
 

Um, some of these things trickling down from military, industrial, and corporate contexts, telephony in particular. Um, uh, work at Bell Labs, you know, filter designs coming, coming back to audio from places where they were designed to, like, make a telephone line more efficient. Um, what we have now is, um, a kind of embarrassment of riches, which means a person just starting in electronic music right now has more easily accessible, affordable end-user technologies that require very little technical expertise at their disposal.
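As a toy illustration of the tone generation Cluett mentions, here is a naive digital sawtooth oscillator. Real synthesizer oscillators are analog circuits or band-limited algorithms (a naive ramp like this one aliases at audio rates), so treat it only as a sketch of the principle:

```python
def sawtooth(freq, sample_rate, num_samples):
    """Naive sawtooth oscillator: a phase value ramps from 0 to 1 and
    wraps around, and each sample is that ramp rescaled to -1..1."""
    phase, out = 0.0, []
    for _ in range(num_samples):
        out.append(2.0 * phase - 1.0)   # map phase 0..1 to amplitude -1..1
        phase = (phase + freq / sample_rate) % 1.0  # advance and wrap
    return out
```

At four samples per cycle, one period ramps through -1.0, -0.5, 0.0, 0.5 before wrapping back.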
 

[00:13:29] Sean Martin: So, I could talk to you for hours, we don't have hours. Um, I want to close with this, and of course if there's anything you want to add to anything, by all means, but When, I was fortunate enough to go to NAMM in Anaheim, the NAMM show. And there's certainly a lot of technology there. Um, a lot of, what I found is a lot of technology embedded in other technologies. 
 

Yeah. And, so, we were looking in one room where you're doing, there's a screen over the, over the, visualizing the sound. Yeah. Um, that, I don't know what, what, are you? How is that helping students learn music? Where is that leading us to? What do you kind of see the future of this technology? Where is it taking us? 
 

[00:14:28] Seth Cluett: Yeah, so, you know, sometimes, uh, especially in the current climate, people will ask, like, why are you going to school for music? Why are you studying computer music? And, um, a long time ago when I was at another university, I had a colleague ask that question quite snarkily, uh, from an engineering department, and I said, how many people do you know who listen to music?
 

And the answer is, everyone. And then what do they listen to it on? Technology. Who develops that technology? Engineers. Who needs to understand how audio works? Musicians who are trained as engineers. And then how is all that music made? And it's recorded and produced and developed on microphones and played on loudspeakers and written in software and engaged with keyboards that are mechanical engineering that, uh, has a tight relationship with, uh, uh, embedded systems that are.
 

Electrical engineering that has a tight relationship with software for the sound synthesis routines. All of it needing to be playable and expressive. And it's that expressivity that is the domain of musicians natively trained. And so I can have a software engineering student from the School of Engineering and Applied Sciences, or an electrical engineering student, or a CS major, or a music major with a CS minor, who becomes an incredibly employable person because they understand the interconnectedness of a bunch of systems that enable people to have, um, some joy in their day.
 

And, um, that pursuit of joy. I think we should stop shaming people for the pursuit of joy in their lives. But domain-specific computing, uh, is a really healthy work-life balance for a lot of people. And so, um, I think we're headed to a domain where,
 

uh, the kind of slow demise of through-hole components in electrical engineering towards surface-mount, uh, chips. The super-miniaturization of, uh, things that used to cost a lot of money to miniaturize and are now standardly available is allowing us to put musical computing in places that look like normal music-making.
 

Meaning, a sensor on a violinist's wrist, uh, that can dial in the amount of reverb in an overly dry hall for a section that needs to feel more full. Right? These kinds of, like, real-time, sometimes machine learning or deep learning or AI-inflected engagements with performance data, with musical data, um, which is to say nothing of the whole world of audio prediction engines for playlist development in the audio industry.
 

So, there's, you know, we're a, we're a rich and vibrant field that once you explain it to people, they go, oh yeah, it's totally obvious you guys are here because there's an incredible demand for what you do. Um, but the first instinct is to say, yeah, get a real job.
 

[00:18:02] Sean Martin: And as you say that, I either, I said it on camera or not, but uh. 
 

You got asked the question, why would you? And I asked myself, why didn't I? Right, right, exactly. To your very point, that it would be, it would be a dream, dream way to spend my time. Um, and it's not too late. Um, It's never too late. I always have one more question, so I'm gonna, I'm gonna leave you with, leave folks with this question. 
 

Um, the, the role of the human as technology comes together, and, and with that, How much new stuff can be built? Because you mentioned AI that can model a lot of things. And the computers can do a lot of it. Even just, I don't have to actually pull the, pull the bow across the string. I can just pretend to and it may be right in the position of my hand. 
 

Could create the sound that I want. Um, where, where does technology end? Where does the human end and technology pick up? Is there a role for human, humans in the creative process still, and all that kind of stuff? Yeah.
 

[00:19:07] Seth Cluett: Um, so we met in Ireland and, and I'm sure you've experienced the, um, uh, a session as they'd call it, of musicians over a couple pints of Guinness, um, communing around music making. 
 

It's precisely things like that, that are, that convince me that the human will never leave. We need to be together. They're doing it. It is part of feeling embodied in the act of making music, to make music in whatever form. What's happening now is that technology is enabling kinds of expression with computing and circuitry that before were stilted or not expressive or not embodied or didn't feel like they were, um, you know, of the past. 
 

Human playing and that has meant that we just get more embodied the further technology develops instead of less. Now there's a dozen, hundreds probably by now, AI companies trying to, you know, convince us that they can do AI melody generation and they're fine. But the most advanced projects in that space, um, when you get past the vaporware of an intelligently edited video of what their project, what their product can do, and you hand it to a musician who wants to engage with it for an end result that is expressive or emotional, the, the, the, um, the it isn't there yet. 
 

Um, now I'm not saying it won't get there and that we might, uh, lose the opportunity to create a 30 second intro clip for a YouTube video, which monetarily it's cheaper to have AI generate, you know, a little bit of boilerplate. It's been a little easier for MS Word to generate a boilerplate for a letter for a long time. 
 

But effective letter writing is an art. And so I think, uh, uh, music's not a threat. But it's actually, um, the other way around. Music provides examples, uh, to the technology world that, uh, are problems that are much harder to solve.  
 

[00:21:41] Sean Martin: Beautifully put. Thank you. Beautifully put. Well, Seth, I won't keep any longer. 
 

Um, folks are going to get a view of the lab, or the studios, and I hope everybody enjoys that.  
 

[00:21:52] Seth Cluett: Where are we at here? Sure. So, this is the RCA Mark II synthesizer. Um, the 6,000-square-foot facility of the current Computer Music Center is the same as the original Columbia-Princeton Electronic Music Center that was started in the late 50s here.
 

This synthesizer is the world's first programmable music synthesizer. The first piece of electronic music to win the Pulitzer Prize was made on it. 
 

basically a two-voice synthesizer with tone generation and composing on the left-hand side and processing on the right-hand side, including manual binary for the, uh, ordering of the effects processes. Um, which for its time was, uh, essentially like a, uh, Turing computer. Um, it was, you know, one and a half tons, seven feet tall, 14 feet wide, it's quite a behemoth, but now it's a very large paperweight.
 

Um, but we've been working with, um, uh, the software company iZotope to do, um, models of some of the effects, and we have published a paper on the sound effects filter where we did a wave digital filter model of the, of the circuit topology of the, uh, low- and high-pass filters with their modifications.
 

[00:23:22] Sean Martin: So, incredible. 
 

Little RCA logo up there.  
 

[00:23:25] Seth Cluett: Mm hmm.  
 

[00:23:27] Sean Martin: Walk me through how this works. Is it, because you mentioned left to right. Sure. Is it, is it um, do you start dialing in and move across?  
 

[00:23:38] Seth Cluett: Yeah, so tone generation happens here. This is, these are what would be effectively sinusoids or sine tones, pure tones. Uh, but because of this being before the transistor. 
 

It's all, uh, vacuum tube technology. Um, it was very hard to make a sine tone at that time, so they put, uh, uh, tuning forks inside canisters and electromagnetically actuated the tuning forks. Um, so you get those. And those are fixed pitches, so they go to a transposer and you can, uh, change the pitches there. 
 

There's a reference oscillator and a frequency counter so that you can know what frequency you have if you're doing it by ear or dial in the frequency you want. Uh, as a kind of pure test tone. Then there's 24 sawtooth oscillators, uh, which provide all of the sort of tone generation. Um, the two voices are two type interfaces that punch holes through paper. 
 

Um, I think we can see paper with holes punched here. Um, the, essentially it would go, um, Through a reading device that had a little brush that when it made an electrical contact through the hole, it would send a control message to the routing mechanism and a set of relays that could switch the, uh, different processes on and off. 
 

So you would enter in your pitches, in your rhythms, in your volume envelopes, and then the, uh, control processing, uh, score, so to speak, and then, uh, you'd press play and it would roll and it would, uh, send the sound over to these last four banks for processing. So you have an envelope and a tremolo and a, what they're calling a sound effects filter, but it's a low-pass and a high-pass filter that makes a band-pass filter, uh, peak and notch filters for cutting out one frequency or emphasizing one frequency.
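The "sound effects filter" idea, a low-pass and a high-pass combining into a band-pass, can be sketched with one-pole digital filters. This is only an illustrative model, not the RCA circuit topology (the published wave digital filter model is far more faithful):

```python
def one_pole_lowpass(signal, alpha):
    """One-pole low-pass smoother: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def one_pole_highpass(signal, alpha):
    """High-pass as the residual left after removing the low-pass content."""
    return [x - l for x, l in zip(signal, one_pole_lowpass(signal, alpha))]

def band_pass(signal, alpha_low, alpha_high):
    """Cascade: the low-pass removes highs above one corner, then the
    high-pass removes lows below another, leaving a band in between."""
    return one_pole_highpass(one_pole_lowpass(signal, alpha_low), alpha_high)
```

Fed a constant (DC) input, the cascade's output decays toward zero, since DC falls below the pass band.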
 

Um, um, tube tree, uh, for distortion artifacts. And then at the very end, the empty spot that you see at the, the far side would have been a, um, a record lathe in the late 50s, early 60s, and then a reel-to-reel machine, and then eventually a DAT machine in the early 90s. The last piece of functional music that was created on it was in 1998 with DJ Spooky meets the Freight Elevator Quartet with one of our former alums, Luke DuBois, who's now the director of the Media and Games Network at NYU's Tandon School of Engineering.
 

[00:26:06] Sean Martin: When was that? 1998. So it's been up and running since, at least then.  
 

[00:26:12] Seth Cluett: Yep, yep.  
 

[00:26:13] Sean Martin: Or until then, anyway.  
 

[00:26:15] Seth Cluett: So do you power it up at all? No. No. Each of the modules passes audio and can be powered up individually. And so we're, uh, using that ability, uh, to, uh, do circuit based or software based modeling of all of the effects and processes. 
 

Um, but to compose, I don't know, a minute's worth of music on this interface would be two or three weeks of work. And we have much more efficient ways of doing things now. So, um. Can you describe what, how you composed on it? Well, I mean, so it's very much like, um, uh, piano rolls for 19th-century player pianos, which were the, um, functional predecessor to the punch card.
 

Uh, so Jacquard loom weaving and the player piano provided the template by which people would program computers in the 20th century. And so the idea of a hole plus time equals an instruction that, you know, tells a system to initiate a process. Um, that's been in place since the 19th century. And this system is essentially a 19th-century player piano roll with all the same things like time, rhythm, frequency, duration.
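The hole-plus-time-equals-instruction idea can be sketched directly. The column-to-message mapping below is hypothetical; it simply shows how a punched row, read at a given time step, becomes timed control events, the way a brush contact through a hole closed a relay and switched a process:

```python
# Each punched row is (time_step, set_of_hole_columns). The column map is a
# made-up assignment of columns to control messages, for illustration only.
COLUMN_MAP = {0: "pitch:C4", 1: "pitch:E4", 2: "envelope:on", 3: "filter:on"}

def decode_roll(rows):
    """Turn hole-plus-time rows into a timed list of control instructions."""
    events = []
    for time_step, holes in rows:
        for col in sorted(holes):          # read each hole in the row
            events.append((time_step, COLUMN_MAP[col]))
    return events

roll = [(0, {0, 2}), (4, {1}), (8, {3})]   # a tiny three-row "paper roll"
events = decode_roll(roll)
```

Each event pairs a time with an instruction, exactly the structure a MIDI piano roll still uses today.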
 

Uh, but it has the added features of some of the flexibility of electronic music, which is like all the frequencies. Um, and very complicated things that people can't, humans can't do. And processes that, uh, you know, need to happen one right after the other with a sequence, for example. So, um, so this is very similar also to the MIDI piano roll in most, like, GarageBand or Logic or Ableton. 
 

Um, uh, with... We're viewing it vertically? Yeah, we're viewing it vertically, but otherwise we'd be looking at it left to right. Yeah, yeah, yeah. This is our, uh, electronics workbench and instrument-building facility. So we have all the basic electronic components. Um, we pair on projects with the electrical engineering department.
 

Um, and do everything from synthesizer circuits and microphone building to the lutherie of standard electronic, uh, musical instruments. So, all the same tools you'd expect to find on a circuit building bench, um, just sort of mobilized towards, um, kinetic robotic things for sculptures or, um, or instruments with moving parts or Arduinos and microcontroller programming and that sort of thing. 
 

Cool, so this is our electronic music studio. As I mentioned, we're the same footprint as the original Columbia Princeton Electronic Music Studio, and we have many of the other synthesizers. So, um, this is one of the earliest Buchla synthesizers, um, famous for its red module, which in the 60s was covered in LSD. 
 

Um, no, no longer any LSD on, on that module. Students have worn that off. Yes, indeed. Uh, then a 1979 Serge synthesizer, uh, meant to be the, quote, people's synthesizer, a more affordable option to Moog and Buchla. Um, Moog was an undergraduate, uh, in the electrical engineering program here when the center was founded.
 

His advisor was the technical director of the center, who installed the RCA synthesizer in the other room. In this room we've concentrated on accessible technical studio design, which is the idea that we want to get people creating faster. We reduce the friction to fun, basically. So in this room, all of the synthesizers come into a mixer.
 

The output of that goes into a splitter. One half of the splitter goes to the computer so we can process things digitally. The other half goes to a rack-mounted hard disk recorder that records to a thumb drive. So a student can just come in, raise the faders, press record, capture everything they're doing in the room, then put the thumb drive in their laptop and walk away with sound files in no time.
 

But we've tried to make a seamless continuum between digital and analog. We can send control voltage signals from software to hardware, or send electrical signals from hardware into software, and drive them in a variety of different ways. We have students who are even writing their own software that becomes modules: a module like this one has an SD card, you write the software in C and put the SD card in, and the jacks and knobs become whatever you tell the program they are.
 

So it becomes a platform for teaching applied CS, which in the current educational landscape is a really great thing for them, to go through a single project and learn how to do that. One of the cool things about this center is that it was where a lot of firsts happened. One of the first musically usable ring modulators was designed here.
 

The frequency shifter was designed here. The attack-decay-sustain-release (ADSR) envelope generator for synthesizers was invented here. And a lot of that work happened between my predecessor three predecessors ago, Vladimir Ussachevsky, working with Harald Bode, the engineer who developed the Melochord in Germany
 

for the Südwestdeutscher Rundfunk studio, and who later developed circuits for the Estey Organ Company and the Wurlitzer Company before coming to New York and building a relationship with the Electronic Music Center. So we have the first and last production-model frequency shifters, as well as the prototype, in one of the back rooms, and we still have the first ring modulator.
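Since the ADSR envelope generator is one of the firsts mentioned here, a minimal sketch in C of the shape that circuit produces may help: amplitude ramps up over the attack, falls to the sustain level over the decay, holds, then ramps to zero over the release. This is just the textbook linear envelope, not Bode's circuit, and the parameter names are illustrative.

```c
/* Linear ADSR envelope: returns amplitude in [0, 1] at time t seconds
 * after note-on. Assumes the note is released at t = note_off, and
 * that note_off falls after the attack and decay phases complete. */
double adsr(double t, double attack, double decay, double sustain,
            double release, double note_off)
{
    if (t < 0.0)                return 0.0;
    if (t < attack)             return t / attack;          /* ramp up to 1.0  */
    if (t < attack + decay)     /* fall from 1.0 down to the sustain level */
        return 1.0 - (1.0 - sustain) * (t - attack) / decay;
    if (t < note_off)           return sustain;             /* hold            */
    if (t < note_off + release) /* ramp from sustain down to zero */
        return sustain * (1.0 - (t - note_off) / release);
    return 0.0;
}
```

Multiplying an oscillator's output by this value sample by sample is what gives a synthesized note its articulation.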
 

That first ring modulator is now being recapped and set up to go back into a studio. The only other thing worth mentioning here is that all of the studios are connected with four channels of shielded Cat6 and four channels of BNC, so we can do bidirectional multichannel digital audio between the rooms.
 

It's something like 256 channels between all of the rooms, plus four channels of video send and receive. And all of the studios have a common rack with an internet-facing microphone preamp, an internet-facing headphone monitoring system, and an internet-facing MIDI controller, which lets us send and receive anything you do in a recording session to and from any of the rooms.
 

So, despite being in a 1909 dairy factory, the same building where the Manhattan Project was based, we've managed to do the most we can with the little space we have. For example, all of the rooms have a pan-tilt-zoom camera and a ceiling-mounted auto-mixing mic, both of which are on the internet, so we can send video and audio back and forth, just talking like normal people between rooms, as a way of communicating during a recording session.
 

In this room, all of the devices were chosen for their pedagogical value, meaning we choose things for one of two reasons. First, the interface is dead simple to read. The silver Doepfer synthesizer, for example, has really clear fonts, unified design, and standard principles of analog sound synthesis.
 

That allows us to teach things in a kind of archetypal way. The second criterion is that things should be recognizable. So in this case, the Roland TR-08, which is a model of the TR-808 drum machine, and the TB-03, their modernized version of the TB-303, as well as an MPC, let people see how the history of software-based and hardware-based instrument design can play a role in their potential job futures.
 

So we're trying to encourage computer scientists with an interest in musical application development, or electrical engineers interested in hardware development, to look at domain-specific computing as an option for postgraduate study or the job market, because you might as well be doing something you love with your CS degree.
 

Really, we attract a lot of people for that reason. This is the entryway to our recording studio, which serves largely as storage for the drum set. But we also have two EMT plate reverbs, which are four-foot-by-eight-foot sheet metal plates suspended by springs, with a transducer on one side and a pickup on the other.
 

You send sound through it and it creates artificial reverb. This was one of the first ways, in early recording, that people could make artificial reverberation. And we have two very low-serial-number versions of these.
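The plate itself is electromechanical, but the effect it produces, a dense wash of decaying echoes, is commonly emulated digitally with recirculating delays. Here is a single comb filter in C, the simplest building block of a Schroeder-style digital reverb; this is not how the EMT plate works internally, the buffer length is arbitrary, and a real plate emulation layers many such delays.

```c
#define COMB_LEN 1499  /* delay in samples; a prime keeps echoes from aligning */

/* One recirculating comb filter: every input sample comes back
 * COMB_LEN samples later, scaled by `feedback` on each pass,
 * producing an exponentially decaying echo train. */
typedef struct {
    float buf[COMB_LEN];
    int   pos;
} Comb;

float comb_process(Comb *c, float in, float feedback)
{
    float out = c->buf[c->pos];            /* the echo arriving now    */
    c->buf[c->pos] = in + feedback * out;  /* recirculate input + echo */
    c->pos = (c->pos + 1) % COMB_LEN;
    return out;
}
```

Feeding a signal through several of these in parallel, followed by a couple of allpass filters, yields the classic dense artificial reverberation the plates pioneered.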
 

So this is our humble recording studio. 
 

You know, a basic kind of golden signal flow: professional levels for everything, end to end. The idea here is that people should be able to walk in, plug eight microphones into eight preamplifiers, into eight channels of audio in the software, without having to patch anything.
 

Reducing the friction to creative work. So headphone monitoring is always set up, and so are the ISO booths. We did a proper acoustic study of the room when I got here, and the mix position is flat within 10 dB across the entire audible spectrum.
 

So, pretty good for one seat in the room; everywhere else it's a little shaky. But it allows us to do the creative work we need to do. Now you're in our immersive media and spatial audio research facility. It has a 12.1-channel loudspeaker sphere, essentially, that can do standard per-channel multichannel audio, or a very fancy version of this called ambisonic audio, which captures three-dimensional sound fields and reproduces them.
 

We'll either do that with a spherical microphone; this is a 32-capsule microphone that can capture really highly resolved auditory environments. And then on the wall we've got a 24-channel planar array that we're in the process of redesigning. That lets us do data sonification and visualization, with a screen that comes down in front of the speakers so we can animate graphs of data with audio, as a way to emphasize what's going on.
 

This workstation is dual-purpose, with both high-end Mac and high-end PC builds for VR, AR, and immersive media, as well as scoring for the composers, and video editing and streaming for the students. And then the back of the room is just graduate coworking space.
 

[00:38:22] Sean Martin: Cool. 
 

And thank you so much for your time today and for coming in specially to do this. I hope everybody enjoys this conversation, the history, and what's possible. We've seen a lot of evolution in this short episode, and we're just getting started, I think.
 

[00:38:44] Seth Cluett: Sounds good.  
 

[00:38:45] Sean Martin: All right. Thank you. Take care. Thanks, everybody.
 

[00:38:46] Seth Cluett: Thank you.