I’ve decided to participate in NaNoDrawMo this year (its first year). The idea is to create 50 drawings during the month of November. Very simple.
So, for the rest of this month, consider this blog a drawing blog.
Oh, and if you want to keep up with my progress, you can either follow along here or subscribe to my NaNoDrawMo 2009 Flickr Set. I’ll probably be averaging two or three drawings a day.
Last year, I started a pet project called “Flow”. The point of the project was to find some intersection between more traditional art-making techniques and my interests in the digital realm.
Well, that itch has resurfaced, and I realized I was making things way too complicated. This time, I’ve turned to using some open source tools for the data capture portion of the project. Components include:
It’s actually really simple to get set up. Here’s a little demo of how it all comes together.
Next steps include building a client to listen for the TUIO data generated by CCV. I’ll probably opt for an AS3 AIR client, since that will be fastest for me to build. Then I’ll create a web application that captures and stores this data; I’ll build that in CodeIgniter, again because I can move fastest in that environment.
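To make the capture idea concrete: regardless of which language the client ends up in, the data it has to shuttle to the web app is the same. TUIO’s 2Dcur profile reports, per cursor, a session id, a normalized position, velocity components, and motion acceleration. Here’s a sketch in C++ of that record plus a serializer of the sort the storage endpoint might accept; the struct and field names are my own invention, not part of any of the libraries mentioned.

```cpp
#include <sstream>
#include <string>

// Hypothetical record for one TUIO 2Dcur cursor update. TUIO delivers these
// fields per cursor; everything else here (struct name, JSON field names) is
// a made-up sketch of what the capture payload could look like.
struct TuioCursor {
    int sessionId;
    float x, y;    // normalized position, 0..1
    float vx, vy;  // velocity components
    float accel;   // motion acceleration

    // Serialize to a small JSON object the web app could store as-is.
    std::string serialize() const {
        std::ostringstream out;
        out << "{\"id\":" << sessionId
            << ",\"x\":" << x << ",\"y\":" << y
            << ",\"vx\":" << vx << ",\"vy\":" << vy
            << ",\"m\":" << accel << "}";
        return out.str();
    }
};
```

The point is just that the client stays dumb: it forwards cursor states, and all interpretation happens later, server-side or at render time.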
I’ve got a lot of stuff on my plate this month, and I want to participate in this month’s NaNoDrawMo on top of it. So, we’ll see where things net out.
No, guys, seriously. This is pretty bad. So bad, I don’t even want your comments. But I really needed to do something that was not programming.
I think once a week, I might try to make an iPhone wallpaper. Here’s the first. Looks like crap on your iPhone. No, seriously. Download it and then delete it.
Some progress has occurred. C++ is becoming clearer. Here’s what changed in the first couple of images:
- I created a Tween class for animating values. It’s modeled after CASA Lib’s Tween class. I’ll probably be releasing it to the community soon.
- I hooked up my MIDI controllers to the piece, so I could have more control over tweaking values. I’m using a great add-on for openFrameworks called ofxMIDI.
- Along the way, I learned how to implement openFrameworks event dispatchers and listeners. (Hint: search for “poco events” in the openFrameworks Wiki if you want to learn how.)
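To give a flavor of the first item above, here’s what a stripped-down tween might look like. This is a sketch only, not the class I’ll be releasing: the real one, modeled after CASA Lib’s Tween, adds things like events, delays, and playback control.

```cpp
#include <algorithm>
#include <cmath>
#include <functional>

// Minimal tween: interpolates a value from `begin` to `end` over `duration`
// seconds, shaped by an easing function that maps normalized time 0..1 -> 0..1.
class Tween {
public:
    using Ease = std::function<float(float)>;

    Tween(float begin, float end, float duration, Ease ease)
        : begin_(begin), end_(end), duration_(duration),
          elapsed_(0.0f), ease_(std::move(ease)) {}

    // Advance by dt seconds and return the current eased value.
    float update(float dt) {
        elapsed_ = std::min(elapsed_ + dt, duration_);
        float t = duration_ > 0.0f ? elapsed_ / duration_ : 1.0f;
        return begin_ + (end_ - begin_) * ease_(t);
    }

    bool finished() const { return elapsed_ >= duration_; }

private:
    float begin_, end_, duration_, elapsed_;
    Ease ease_;
};

// A couple of common easing curves.
inline float easeLinear(float t) { return t; }
inline float easeInQuad(float t) { return t * t; }
```

Each animated value in the piece gets its own Tween, and the draw loop just calls `update()` with the frame’s delta time.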
In doing all this, I was able to trigger light movements with a MIDI controller. I used my M-Audio Trigger Finger’s velocity-sensitive pads to animate two lights. Hard tap = fast movement.
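The velocity mapping itself is simple. This is a hypothetical version of it, not the exact code from the piece: clamp the incoming MIDI velocity (0 to 127), normalize it, and use it to pick a speed between two bounds.

```cpp
// Map a MIDI pad velocity (0-127) to an animation speed.
// A hypothetical mapping: harder tap -> higher velocity -> faster movement.
float velocityToSpeed(int velocity, float minSpeed, float maxSpeed) {
    if (velocity < 0)   velocity = 0;    // clamp to the valid MIDI range
    if (velocity > 127) velocity = 127;
    float t = velocity / 127.0f;         // normalize to 0..1
    return minSpeed + (maxSpeed - minSpeed) * t;
}
```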
Here’s a video of the animated lights in action. Please excuse the stutters; while this does run at 30fps in realtime, SnapzPro isn’t able to keep up because of the CPU usage of the piece.
Now, after all this, I came to realize that I really wanted to be able to loop the Perlin noise that was creating the height map of the tube, because (though you can’t see it) there’s an awful seam on the side facing away from the camera. By looping the noise, I could reliably look at it from all angles.
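For anyone curious, the standard trick for a seamless loop is to sample 2D noise along a circle instead of sampling 1D noise along the angle: the path returns exactly to its starting point, so the values at 0 and 2π match by construction. Here’s a sketch of the idea; the noise function below is a simple hash-based value noise standing in for real Perlin noise, but the circular-sampling trick works the same either way.

```cpp
#include <cmath>

// Cheap hash-based 2D value noise (a stand-in for Perlin noise).
static float hash2(int x, int y) {
    int n = x + y * 57;
    n = (n << 13) ^ n;
    return 1.0f - ((n * (n * n * 15731 + 789221) + 1376312589)
                   & 0x7fffffff) / 1073741824.0f;
}

static float fade(float t) { return t * t * (3.0f - 2.0f * t); }

static float valueNoise2D(float x, float y) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    float xf = x - xi, yf = y - yi;
    float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
    float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
    float u = fade(xf), v = fade(yf);
    float top = a + (b - a) * u;
    float bot = c + (d - c) * u;
    return top + (bot - top) * v;
}

// Seamless noise around the tube: walk a circle through 2D noise space.
// `angle` is in radians; `radius` controls how fast the noise varies per loop.
float loopingNoise(float angle, float radius) {
    return valueNoise2D(std::cos(angle) * radius + 100.0f,   // offset keeps the
                        std::sin(angle) * radius + 100.0f);  // samples positive
}
```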
After some research, I came across a great library called libnoise. Took me a little while to get it to compile, only to realize that while it did the job, it dropped my framerate from a respectable 45fps to 9fps. Definitely not suitable for realtime noise generation.
So I went back to the fastest Perlin noise generator I could find (this class from John Ratcliff), and I decided to live with the seam in the back.
Amidst all this, I also stumbled upon this awesome blog post about Twisted Architecture. I thought it would be cool to implement something like that in my piece. So I did. But then I forgot about it.
It was only when I started twisting the “twist-associated” knob on my MIDI controller that the damn seam actually came to life. So, you can see how it starts to play out in the rest of the images here:
Also, in those last couple renders, I added a convex mesh to the background so that I could start incorporating background colors properly and get some light-generated gradients happening. More to come… more to come.
For those of you following me on Twitter today, you witnessed the pain I endured under the cruel, icy grip of OpenGL.
That pain: learning how to compute vertex normals.
But eight hours later, I managed to pull it off with the help of the OpenGL SuperBible, this thread, and this article. Oh, and all this while porting it to C++ (openFrameworks) for maximum control and speed.
Along the way, I learned how OpenGL lighting works. I was having such a hard time with it before because my form didn’t have its surface normals or its vertex normals correctly calculated, so light wasn’t bouncing off it the way I expected. Now it’s all making good sense.
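The computation that cost me those eight hours boils down to this: take the cross product of two edges of each triangle to get a face normal, accumulate that onto the triangle’s three vertices, and normalize the sums at the end. Vertices shared by several faces end up with an averaged normal, which is what lets light fall off smoothly across the surface instead of faceting. Here’s a plain C++ sketch of the idea (my own minimal Vec3, not the openFrameworks types):

```cpp
#include <cmath>
#include <initializer_list>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// `indices` holds triangles as consecutive triples of vertex indices.
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<int>& indices) {
    std::vector<Vec3> normals(verts.size(), Vec3{0, 0, 0});
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        int a = indices[i], b = indices[i + 1], c = indices[i + 2];
        // Face normal: cross product of two triangle edges.
        Vec3 face = cross(sub(verts[b], verts[a]), sub(verts[c], verts[a]));
        // Accumulate onto each vertex of the triangle.
        for (int idx : {a, b, c}) {
            normals[idx].x += face.x;
            normals[idx].y += face.y;
            normals[idx].z += face.z;
        }
    }
    for (Vec3& n : normals) n = normalize(n);  // average by normalizing the sum
    return normals;
}
```

One subtlety that bit me: the cross product’s direction depends on the triangle’s winding order, so every triangle has to wind consistently or neighboring normals fight each other.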
My next steps will be adding interactivity. I’m going to try to use my HP TouchSmart for touch input along with its webcam for vision input. Separately, I’ll try to make some sort of music visualizer out of this — or at least make it audio-responsive.
Check out a video from today:
This time I wanted to add some actual volume — not just perceived volume — to the form. These are just some initial tests that you see above. I’ve got a ton to learn about lighting and optimization and matrix transformations. Robert says I need to use “per-vertex normals”. I say “OKAY!” Hopefully I can start adding some of these things over the next couple weeks and really use this project as a general learning tool.
For now, I’m developing all this in Processing using the PGraphicsOpenGL renderer. I’m not using any actual OpenGL calls directly; I’m only using the Processing API. One little discovery I made yesterday: PeasyCam turns out to be a really sweet little camera library for Processing. Adds mouse control over the camera with a single line of code.
The ultimate goal of this is to create two different pieces. One will be a rendered motion piece. It probably won’t have any audio and will function as an ambient looping video. It will be submitted to this call for entries: Flow Interrupted.
The second piece would actually be interactive. I’m going to toy around with camera, audio, and touch as discrete inputs (all separately). This will give me a chance to play with the new HP TouchSmart PC I got. So you could consider the second execution to be more of an interactive installation piece. And that’s going to be submitted to this call for entries: FLUID Interactive Digital Art.
Check out some of the stills:
I recently discovered a huge disconnect between text-based 3D graphics programming and my brain. It was quite an unpleasant discovery.
Out of frustration, I decided to turn to a more graphical, procedural approach. For some reason when it comes to 3D, I really need a UI and “objects”. It’s all too abstract when I’m dealing with lines of code.
So I’m trying out visual procedural programming. And I’m starting with Derivative’s TouchDesigner. I was introduced to the application long ago, back in 2005 at FITC. But it’s gone through a complete rewrite and UI overhaul.
And tonight, I made a spherical 3D particle cloud. Nothing revolutionary, but there is some serious power and possibility embedded in this application. The UI is absolutely next level and intriguing. I highly recommend you download it and have a run through the video tutorial screencasts. It’s a Windows-only application (sadly). And it’s still buggy since it’s in very active development. I’d recommend you download the latest experimental release. I had more luck with it than the main release on their Downloads page from May of this year.