Tuesday, February 26, 2013

Neuro | The De-Objectifier

Last semester Boyd Branch offered a class called the Theatre of Science that was aimed at exploring how we represent science in various modes of expression. Boyd especially wanted to call attention to the complexity of addressing how today's research science might be applied in future consumable products. As a part of this process his class helped to craft two potential performance scenarios based on our discussions, readings, and findings. One of these was Neuro, the bar of the future. Taking a cue from today's obsession with mixology (also called bartending), we aimed to imagine a future where the drinks you ordered weren't just booze-filled fun-times, but something a little more insidiously inspiring. What if you could order a drink that made you a better person? What if you could order a drink that helped you erase your human frailties? Too greedy? Have a specialty cocktail of neuro-chemicals and vitamins to help make you generous. Too loving or giving? Have something to toughen you up a little so you're not so easily taken advantage of.


With this imagined bar of the future in mind, we also wanted to consider what kind of diagnostic systems might need to be in place to help customers decide which drink might be right for them. Out of my conversations with Boyd we came up with a station called the De-Objectifier. The goal of the De-Objectifier is to help patrons see what kinds of involuntary systems are at play at any given moment in their bodies. The focus of this station is heart rate and its relationship to arousal states in the subject. While it's easy to claim that one is impartial and objective at all times, monitoring one's physiology might suggest otherwise. Here the purpose of the station is to show patrons how their own internal systems make being objective harder than it may initially seem. A subject is asked to wear a heart monitor. The data from the heart monitor is used to calibrate a program that establishes a resting heart rate and an arousal threshold for the individual. The subject is then asked to view photographs of various models. As the subject's heart rate increases beyond the set threshold, the clothing on the model becomes increasingly transparent. At the same time an admonishing message is displayed in front of the subject. The goal is to maintain a low level of arousal and, by extension, to master one physiological aspect linked to objectivity.
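The threshold-to-transparency mapping at the heart of the station can be sketched in a few lines. This is my own illustration in Python, not the actual Isadora logic; the function name, the linear falloff, and the 180 bpm ceiling are all assumptions:

```python
def clothing_opacity(bpm, threshold, max_bpm=180):
    """Return opacity 0.0-1.0 for the clothing layer.

    Below the calibrated threshold the layer stays fully opaque;
    above it, opacity falls off linearly toward fully transparent.
    """
    if bpm <= threshold:
        return 1.0
    span = max_bpm - threshold
    excess = min(bpm - threshold, span)  # clamp so opacity never goes negative
    return round(1.0 - excess / span, 3)
```

With a threshold of 90 bpm, a calm 70 bpm reading keeps the model fully clothed (`1.0`), while 135 bpm drops the clothing layer to half opacity (`0.5`).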


So how does the De-Objectifier work? The De-Objectifier is built on a combination of tools and code that work together to create the experience for the user. The heart monitor itself is built from a pulse sensor and an Arduino Uno. (If you're interested in making your own heart rate monitor, look here.) The original developers of this product made a very simple Processing sketch that allows you to visualize the heart rate data passed out of the Uno. While I am slowly learning how to program in Processing, it is certainly not an environment where I'm at my best. In order to work in a programming space that allowed me to code faster, I decided that I needed a way to pass the data out of the Processing sketch to another program. Open Sound Control is a messaging protocol that's being used more and more often in theatrical contexts, and this project seemed like a perfect time to learn a little bit more about OSC. To pass data over OSC I amended the heart rate Processing sketch and used the Processing OSC library written by Andreas Schlegel to broadcast the data to another application.
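For the curious, the wire format OSC uses is simple enough to sketch by hand. The snippet below is a minimal Python illustration of the OSC 1.0 encoding (a null-padded address, a `,i` type-tag string, then a big-endian int32), not the oscP5 code the project actually uses, and the `/pulse` address is my own hypothetical:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, value: int) -> bytes:
    """Encode a single-integer OSC message like the one the sketch broadcasts."""
    return osc_pad(address.encode()) + osc_pad(b",i") + struct.pack(">i", value)

# osc_message("/pulse", 72) yields 16 bytes ready to send over UDP.
```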


Ultimately, I settled on using Isadora. While I started in MaxMSP, I realized that for the deadlines I needed to meet I was going to be able to program faster in Isadora than in Max. This was a hard choice, especially as MaxMSP is quickly growing on me as a visual programming language. I also liked the idea of using Max because I'd like the De-Objectifier to be able to stand on its own without any other software, and I think Max would be the right choice for developing a standalone app. That said, the realities of my deadlines for deliverables meant that Isadora was the right choice.


My Isadora patch includes three scenes. The first scene runs as a pre-show state. Here a motion-graphics-filled movie plays on a loop as an advertisement to potential customers. The second scene is for tool calibration. Here the operator can monitor the pulse sensor input from the Arduino and set the baseline and threshold levels for playback. Finally, there's a scene that includes the various models. The model scene has an on-off toggle that allows the operator to enter this mode without the heart rate data changing the opacity levels of any images. Once the switch is set to the on position, the data from the heart rate sensor is allowed to have a real-time effect on the opacity of the topmost layer in the scene.


Each installation also has an accompanying infomercial-like trailer and video vignettes that provide individuals with feedback about their performance. Boyd described the aesthetic style for these videos as that of a start-up with almost too much money: it's paying your brother-in-law who wanted to learn Premiere Pro to make the videos. It's a look that's infomercial snake-oil slick.





Reactions from Participants - General Comments / Observations


  • Couples at the De-Objectifier were some of the best participants to observe. Frequently one would begin the process and at some point become embarrassed during the experience. Interestingly, the person wearing the heart rate monitor often exhibited few visible signs of anxiety. The direct user was often fixated on the screen, wearing a gaze of concentration and disconnection. The non-sensored partner would often attempt to goad the participant with phrases like "oh, that's what you like, huh?" or "you better not be looking at him / her." The direct user would often not visibly respond to these cues, instead focusing on changing their heart rate. Couples nearly always convinced their partner to also engage in the experience, almost in a "you try it, I dare you" kind of way.


  • Groups of friends were equally interesting. In these situations one person would start the experience and a friend would approach and ask about what was happening. A response that I frequently heard to the question "what are you doing?" was "Finding out I'm a bad person." It didn't surprise users that their heart rate was changed by the images presented to them, but it did surprise many of them to see how long it took to return to a resting heart rate as the experience went on.


  • By and large, participants had the fastest return-to-resting-rate times for the images with admonishing messages about sex. Participants took the longest to recover to resting rates when exposed to admonishing messages about race. Here participants were likely to offer excuses for their inability to return to resting rate by saying things like "I think I just like this guy's picture better."


  • Families were also very interesting to watch. Mothers were the most likely family member to go first with the experience, and were the most patient when being goaded by family members. Fathers were the least likely to participate in the actual experience. 


  • Generally, participants were surprised to see that actual heart rate data was being reported. Many thought the data was being manipulated by the operator.


  • Tools Used
    Heart Rate - Pulse Sensor and Arduino Uno
    Programming for Arduino - Arduino
    Program to Read Serial Data - Processing
    Message Protocol - Open Sound Control
    Programming Initial Tests - MaxMSP
    Programming and Playback - Isadora
    Video Editing - Adobe After Effects
    Image Editing - Adobe Photoshop
    Documentation - iPhone 4S, Canon 7D, Zoom H4n
    Editing Documentation - Adobe Premiere, Adobe After Effects

    Wednesday, February 13, 2013

    Delicious Max 6 Tutorial 25: Cell Pump

    I can't get enough of these Delicious Max Tutorials. In today's episode Sam walks us through a little bit of exploratory play with matrices, mesh blending, and geometry.




    Tuesday, February 12, 2013

    GIFs Galore

Today I spent a chunk of the afternoon making GIFs at the ASU School of Art Festival. I made about 31 GIFs, ranging in size from 5 frames to the early 20s. All in all it was a great event, made even more fun by the act of making something fun and silly in the process. It's amazing to me how fun it is to make really simple and silly animation.


In the past year GIFs have become the it thing on the web. This is made even more surprising by the growing number of GIF artists whose work is showing up in galleries and curated shows. The Graphics Interchange Format was first introduced by CompuServe in 1987. Limited to the expression of 256 distinct colors, the GIF was never a format destined for the replication and display of photos. It did, however, prove to be a strong format for line and logo art with a low color demand. The GIF's real claim to fame, however, was its ability to display animation. In the early days of the popular web, GIF animation was often used to create motion on a page. A single GIF could be used to create a border of moving and twinkling lights, or some stellar animation of a spinning globe. In the early days of the HTML coding boom, countless tweens and teens were obsessed with GIF decoration on their Angelfire websites. The limelight ultimately faded on the GIF as its limitations in the display of color were trumped by the internet's new darling, Flash animation. While Flash provided a much richer animation environment, it began to lose its footing when Apple refused to support Flash on its mobile iOS devices.

In a world of mobile browsing, bandwidth and compatibility became increasingly important for the spread of memes and mediated ideas. With that in mind, it's no surprise that the GIF, a 1980s standard now past its licensing quarrels and patent warfare, has taken center stage again.

    Here's to you GIF... your hypnotic looping never ceases to entertain.









    Wednesday, February 6, 2013

    Delicious Max/MSP Tutorial 4: Vocoder

This week I was gutsy: I did two MaxMSP tutorials. I know, brave. Sam's tutorials on YouTube continue to be a fascinating way to learn Max, as well as yielding some interesting projects. The second installment this week is about building a vocoder. The audio effect, now commonplace, is still incredibly rewarding to build, especially when running a live mic through it rather than using a recorded sample. There is a strange pleasure in getting to hear the immediate effects on your voice, which is further compounded by the ability to add multiple ksliders (keyboards) to the mix. Below is the tutorial I followed along with yesterday, and a resulting bit of fun that I had as a byproduct.





    A silly patch made for a dancer's birthday using the technique outlined by Sam in his tutorial above.

    Help yourself to the patch if you'd like to play

    Tuesday, February 5, 2013

    Delicious Max/MSP Tutorial 2: Step Sequencer

Another MaxMSP tutorial from dude837 this afternoon. Today I built the step sequencer in the video below. This seems like slow going, and maybe a little strange since I keep jumping around in this set of tutorials. It's a rough road. Maybe it's not rough so much as it's slow at times. I guess that's the challenge of learning anything new: there are times when it's agonizingly slow, and times when ideas and concepts come fast and furious. The patience that learning requires never ceases to amaze me. Perhaps that's what feels so agonizing about school when we're young - it's a constant battle to master concepts, a slow road that never ends. Learning to enjoy the difficult parts of a journey is tough business. Anyway, enough of that tripe. On to another tutorial.



    Monday, February 4, 2013

    Personal Essay


Twenty-six of my thirty-one years have, in some way, involved performance: from community musicals where I performed alongside my mother, to gravity-defying circus performance for the Christopher Reeve Foundation. I have also worked purposefully to provide educational access for populations that have not traditionally been able to engage with the arts. In this respect it was my work for an educational outreach program in rural New Hampshire and Vermont that had a deeply resonant impact on my view of the power of the arts in education. Over the course of a five-year period working for Keene State College's Upward Bound Program I was a residential director, teacher, advisor, counselor, college coach, and facilitator. As I transitioned to another position at Keene State, my role changed from supporting potential students to supporting college faculty and staff. In my role as Rich Media Specialist for Keene State's Center for Engagement, Learning, and Teaching I worked as an instructional designer, Blackboard administrator, media maker, researcher, and faculty collaborator. While working full time in higher education, I also continued to develop as a performer through an ongoing circus training regimen. In thinking about graduate school I saw that I had been shaped by three distinct forces: performance-based art, technology, and a passion for teaching. I came to ASU to create a life where those three forces might co-exist in a meaningful and transformative program of study. In fact, that's what I've found at ASU. In my first year I will have participated in, or contributed to (as performer, media creator, or system designer), eleven Phoenix-area productions while also having served as instructor or TA to over 350 students. My introduction to ASU has been, to put it mildly, a whirlwind of exposure to new ideas, methods, and opportunities to collaborate.
Especially interesting to me has been the opportunity to engage other artists in a critical dialogue about the impact, consequences, and outcomes of including digital media in live performance.

I sometimes find it difficult to know what I will be doing in the next ten days, let alone ten years. That said, my vision for a professional life after graduating from ASU does include some specific goals. Without a doubt, my work will include some element of physical computing to address the question of how to integrate real-time data from performers into the experience of seeing a theatrical production. Specifically, I plan to start a circus company with a heavy emphasis on the incorporation of traditional and generative media as elements of the performance. This involves the development of both physical apparatuses capable of capturing and transmitting meaningful data, and applications to parse and interpret that data for playback-system integration. Further, I think this kind of work is potentially most meaningful when partnered with an educational institution where performers, media makers, and technicians can collaborate on the process. Finally, my hope is that ten years after graduation I will be in a position to spearhead the implementation of an integrated technology and circus program for the development of artists looking to transcend the traditional ideas of physical and mediated performance.

    Saturday, February 2, 2013

    Sound Trigger | MaxMSP


Programming is often about solving problems, sometimes problems that you didn't know you actually had to deal with. This past week the Media Installations course that I'm taking spent some time discussing issues of synchronization between computers for installations, especially in situations where the latency of wired or wireless connections creates a problem. When 88 computers all need to be "listening" in order to know when to start playback, how can you solve that problem?

Part of the discussion in the class centered around using the built-in microphones on modern laptops as a possible solution. Here the idea was that if every computer had its microphone turned on, the detection of a sound (say, a clap) would act as a trigger for all the machines. Unless you are dealing with distances where the speed of sound becomes a hurdle for accuracy, this seemed like a great solution. So I built a patch to do just that.
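To put a number on when the speed of sound actually matters, here's a quick back-of-the-envelope calculation (my own, not part of the class discussion or the patch):

```python
SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature

def trigger_offset_ms(distance_m):
    """Milliseconds between the nearest and farthest machine hearing a clap."""
    return round(distance_m / SPEED_OF_SOUND * 1000, 1)
```

Across a 10 m gallery, `trigger_offset_ms(10)` gives about 29 ms of spread between machines, which is comparable to or better than typical network jitter, so the clap trick holds up at room scale.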

This Max patch uses the internal microphone to listen to the environment and send out a trigger message (a "bang" in Max-speak) when a set sonic threshold is crossed. As an added bonus, the patch also turns off the systems involved with detection once it's set in motion. Generally, it seems to me that a fine way to keep things from going wrong is to streamline your program so that it's running as efficiently as possible.
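The listen-then-disarm behavior described above can be sketched outside of Max like this. This is an illustrative Python sketch of the same one-shot logic, not the patch itself, and the class and method names are my own:

```python
class ClapTrigger:
    """One-shot amplitude trigger: fires a single 'bang' when the input
    level first crosses the threshold, then disarms itself, mirroring the
    patch's trick of shutting off detection once it's set in motion."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.armed = True

    def process(self, level: float) -> bool:
        """Return True (the bang) only on the first threshold crossing."""
        if self.armed and level >= self.threshold:
            self.armed = False  # stop listening once triggered
            return True
        return False
```

Feeding it a stream of mic levels, everything below the threshold is ignored, the first loud sample fires once, and all later sounds are ignored because the trigger has disarmed itself.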



    Here's a link to the simple trigger patch
    Here's a link to the Max Project that includes the triggered video

    Tools Used
    Programming - MaxMSP
    Countdown Video - Adobe After Effects
    Screen Cast - ScreenFlow
    Video Editing - Adobe Premiere