Spent some time at Aalto Lounge last night, winding down from an 11-hour programming session. It’s so dark in that place, I honestly didn’t have any idea what these sketches even looked like until this morning.
And tonight, got those vertex normals calculated. Much easier this time around — probably just because I understand it all conceptually now.
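For anyone curious what that conceptual understanding boils down to: each triangle's face normal gets accumulated onto its three vertices, then each sum gets normalized. This is a stripped-down sketch using a stand-alone Vec3 struct (not Cinder's actual Vec3f class), not the exact code in my app:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(const Vec3 &v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0 ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// Accumulate each face's (area-weighted) normal onto its three vertices,
// then normalize the sums to get smooth per-vertex normals.
std::vector<Vec3> vertexNormals(const std::vector<Vec3> &verts,
                                const std::vector<std::array<int, 3>> &tris) {
    std::vector<Vec3> normals(verts.size(), {0, 0, 0});
    for (const auto &t : tris) {
        Vec3 n = cross(sub(verts[t[1]], verts[t[0]]), sub(verts[t[2]], verts[t[0]]));
        for (int i : t) {
            normals[i].x += n.x;
            normals[i].y += n.y;
            normals[i].z += n.z;
        }
    }
    for (auto &n : normals) n = normalize(n);
    return normals;
}
```

Because neighboring faces share vertices, the averaging is what smooths the shading across the tube surface instead of leaving it faceted.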
I’m really enjoying how this stuff is looking. Finally, this digital generative work feels as organic and alive as the source of the data itself. I can’t wait to add the UI controls for “sculpting” the composition and colors… and can’t wait to add more structural elements.
Now that I’m unpacked and settled in my new apartment — and since I have a 50-minute presentation coming up at FITC in just over a month — it was time to gain some momentum and get back on top of this stuff for Flow.
I’ll be the first to say that these definitely bear a strong resemblance to Evan Roth’s latest Graffiti Analysis video called “Graffiti Analysis: 3D” — because they do. And because I was definitely inspired by it (just like I was inspired by the original project when I first saw it back in 2005 at FITC).
But this is just one layer of what I’ve finally settled on for my first real “pieces” to come out of this Flow project. I’m focusing on isolation of a few strokes, creating a strong composition, and building something structural — almost architectural — out of these energetic moments of frozen time.
The hurdle today was learning to draw a tube in space… this took the better part of the day. I would’ve used the GLE Tubing and Extrusion library, but unfortunately that has dependencies on GLUT, which Cinder does not use. I also didn’t think I’d be able to vary the thickness of the tube over its length, so I decided to try it from scratch. And we’ve got [decent] success. I do want to implement proper per-vertex normals to smooth the sucker out some more. I think I’ll be able to rip the per-vertex normals code from the old Noise Tube experiments, since it’s ultimately the same mesh structure.
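The core of the from-scratch approach is generating a ring of vertices around each point of the stroke’s centerline, then stitching consecutive rings into triangles. Here’s a hedged sketch of just the ring part — plain structs rather than Cinder’s own vector types, and a simple reference-vector frame rather than anything fancy like parallel transport. Passing a different radius for each ring is exactly what lets the thickness vary over the tube’s length:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 add(const Vec3 &a, const Vec3 &b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(const Vec3 &v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(const Vec3 &v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0 ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// One cross-section ring of the tube: 'segments' points lying in the plane
// perpendicular to 'tangent', at distance 'radius' from 'center'.
std::vector<Vec3> tubeRing(const Vec3 &center, Vec3 tangent, float radius, int segments) {
    const float kTwoPi = 6.28318530718f;
    tangent = normalize(tangent);
    // Pick any vector not parallel to the tangent to seed the frame.
    Vec3 ref = std::fabs(tangent.z) < 0.9f ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
    Vec3 u = normalize(cross(tangent, ref));
    Vec3 v = cross(tangent, u);
    std::vector<Vec3> ring;
    for (int i = 0; i < segments; ++i) {
        float theta = kTwoPi * i / segments;
        ring.push_back(add(center, add(scale(u, radius * std::cos(theta)),
                                       scale(v, radius * std::sin(theta)))));
    }
    return ring;
}
```

Walk the centerline calling this once per sample, connect ring *i* to ring *i+1* with quads split into triangles, and you’ve got the tube mesh.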
And yeah, this is all OpenGL still. Once I get the other structural elements in place, I’ll be writing some sort of Cinder Block for creating a Sunflow scene / scene file and rendering this stuff out in Sunflow. The final aim is to create some prints from this thing. I’m giving myself till the end of the week to wrap up this visual direction, so I can focus on refactoring and documenting all the behind-the-scenes Flow applications for open source release around mid-August.
You can see the full set of this first bunch of renders in this Flickr set.
I’m both humbled and delighted to announce that I’ll be speaking at this year’s first ever FITC San Francisco event: FITC San Francisco 2010! The event runs from August 17th – 19th at the Mission Bay Conference Center at UCSF. FITC has gathered over 70 internationally renowned presenters from around the globe for this epic three day Flash, Design & Technology event.
I attended FITC in Toronto in 2005 and 2006, where I saw some of the most inspiring presentations, met some amazing people, and generally had an incredible time. This year I’m particularly excited to see presentations from the likes of Robert Hodgin, Yugo Nakamura, Ben Fry, Erik Natzke, Mario Klingemann, and Theo Watson.
My presentation is titled “Harnessing the Abundance”. For those of you who have been following along here, I’ll be telling the story of and process involved in my Flow project. I hope to demonstrate that you don’t need to know all these different technologies in order to start using them — that in today’s world, a lot of the hard work has already been done. You just have to generate an idea and put the pieces together.
Along with the presentation, I’ll be posting all the source code that’s gone into this project in an effort to “give back” — because I couldn’t have done any of it without the numerous open source projects I’ve built upon.
I’m scheduled to speak on Wednesday, August 18th at 10AM — the first presentation of the day. My presentation is listed as being technical, but I hope that it also possesses a fair amount of creativity.
Early Bird pricing ends on July 2nd, so you should definitely hurry and grab your tickets if you haven’t already. Also, if you use the code “mikecreighton”, you’ll get 10% off your ticket price! GO GO GO GO!
I feel like there have been some omissions here as of late. While I haven’t been drawing as much as I could, I have been doing some sketching. Work has sort of ground things to a near halt.
We tried posting it all to Flickr at first. But I wanted to get away from that community and just make it something for us.
We’re at the end of our second month, and it’s been an admittedly crazy busy couple of weeks, so we both got behind. Out of sheer exhaustion from being at the computer tonight, I turned back to my sketchbook to round out my seventh drawing for the month (above).
That one and the one below are my favorites from the month. I like what I’ve omitted in each of them… and the tension between the fruit. I guess it’s just nice not to draw everything I see all the time… creates a more compelling image sometimes.
Cheers to the omissions.
Spent a little time today on another tangent. One thing I didn’t like about those faux-cityscape renders was the fact that you completely lost all sense of the originating data. This isn’t necessarily a bad thing, but part of the reason I developed this whole system was to bring forth the energy and motion that actually went into creating the “seed” drawing.
So I took a step back and created a new renderer that let me isolate the individual “strokes” that were captured during the drawing session. In these renders, I’ve isolated three strokes; each has a unique color. The peaks in the height are derived from the relative velocity of a stroke at a given snapshot in time.
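The velocity-to-height mapping is simple at heart: the faster the pen was moving between two captured samples, the taller the peak at that point. This is a hypothetical sketch of that idea (the Sample struct, the `gain` parameter, and the edge-case handling are my illustration here, not the actual Flow code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One captured snapshot of a stroke: position plus timestamp.
struct Sample { float x, y, t; };

// Height for each sample: the speed between consecutive samples, times a gain.
std::vector<float> strokeHeights(const std::vector<Sample> &samples, float gain) {
    std::vector<float> heights(samples.size(), 0.0f);
    for (size_t i = 1; i < samples.size(); ++i) {
        float dx = samples[i].x - samples[i - 1].x;
        float dy = samples[i].y - samples[i - 1].y;
        float dt = samples[i].t - samples[i - 1].t;
        // Guard against duplicate timestamps; carry the previous height forward.
        heights[i] = dt > 0 ? gain * std::sqrt(dx * dx + dy * dy) / dt : heights[i - 1];
    }
    // The first sample has no predecessor, so borrow its neighbor's height.
    if (heights.size() > 1) heights[0] = heights[1];
    return heights;
}
```

Feeding these heights into the tube radius (or the extrusion height) is what makes the peaks read as frozen bursts of energy.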
In Cinder, I created a GL scene that has a positionable camera and lets me page through each stroke. Then I can tap a key and dump out that stroke’s Sunflow scene data to the console. Along the way, I learned how to use the TriMesh class in Cinder. It made dumping the Sunflow scene data easier, since it really parallels Sunflow’s “generic-mesh” type.
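Since a TriMesh is essentially flat lists of vertices and triangle indices, the dump really is just walking those lists and printing them in Sunflow’s text format. Here’s a rough sketch of what I mean — the field layout follows the generic-mesh examples in Sunflow’s .sc documentation, the shader name “def” is a placeholder, and I’m using a bare Vec3 struct instead of Cinder’s types:

```cpp
#include <array>
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Serialize a triangle mesh as a Sunflow generic-mesh object block.
std::string toSunflowMesh(const std::string &name,
                          const std::vector<Vec3> &verts,
                          const std::vector<std::array<int, 3>> &tris) {
    std::ostringstream sc;
    sc << "object {\n";
    sc << "  shader def\n";            // placeholder shader name
    sc << "  type generic-mesh\n";
    sc << "  name " << name << "\n";
    sc << "  points " << verts.size() << "\n";
    for (const auto &v : verts)
        sc << "    " << v.x << " " << v.y << " " << v.z << "\n";
    sc << "  triangles " << tris.size() << "\n";
    for (const auto &t : tris)
        sc << "    " << t[0] << " " << t[1] << " " << t[2] << "\n";
    sc << "  normals none\n";
    sc << "  uvs none\n";
    sc << "}\n";
    return sc.str();
}
```

In the real app this string goes to the console on a key press; pipe it into a .sc file alongside camera and light blocks and Sunflow will render it.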
The render at the top uses three sphere lights, a basic diffuse shader for each object, and Sunflow’s path-tracing global illumination system. The render below uses the Sunflow sunsky light, a foggy Phong shader on the three strokes, and the “fake” global illumination setting in order to get some vague ambient light. I started discovering that there are a lot of possible combinations for lighting in Sunflow — almost an overwhelming number.
Next I’m gonna try a different drawing technique for rendering these strokes.
So, you guys saw a still like this from the last time I posted. Basic OpenGL stuff with a couple lights and some simple cube-like structures. Immediately after getting that far, I wanted to get this stuff rendered in Sunflow. Now, Sunflow is an open-source global illumination renderer built with Java. But at its core it’s a ray tracer, which basically means you can make stuff that looks real because it simulates real light rays. Moreover, it has different kinds of cameras… and one of those cameras can render a pretty realistic depth of field and even bokeh.
Here’s the thing: I’m working in Cinder, which means I needed to write something that took my geometry and dumped out a Sunflow Scene File (.sc). But first, I had to figure out how the hell to get the OpenGL viewport projection matrix mapped correctly to the Sunflow camera — and I also had to figure out how to take all these OpenGL transformations (rotate, translate, etc.) and turn them into something Sunflow understood.
Today, I finally wrapped my head around the two main OpenGL matrices. I’ll go into the specifics for folks that are interested in a later post, but the point is this: I can write code and render stuff using OpenGL in Cinder, press a key, and dump out all my Sunflow Scene File data (to the console for now).
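For the curious, here’s the gist of the camera half of that untangling — a hedged sketch (plain structs, no Cinder types) of pulling a Sunflow-style eye/target/up out of the matrix you’d get from `glGetFloatv(GL_MODELVIEW_MATRIX)`. The key realization: OpenGL stores the matrix column-major, its upper-left 3×3 is the view rotation R, and the camera actually sits at -Rᵀt:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Camera { Vec3 eye, target, up; };

// m is a column-major 4x4 OpenGL modelview (view) matrix, laid out as
// returned by glGetFloatv(GL_MODELVIEW_MATRIX): m[12..14] holds the
// translation t, and the upper-left 3x3 holds the rotation R.
Camera cameraFromModelview(const float m[16]) {
    Vec3 t = {m[12], m[13], m[14]};
    Camera cam;
    // eye = -R^T * t  (with column-major storage, R^T's rows are m[0..2],
    // m[4..6], and m[8..10])
    cam.eye = {-(m[0] * t.x + m[1] * t.y + m[2] * t.z),
               -(m[4] * t.x + m[5] * t.y + m[6] * t.z),
               -(m[8] * t.x + m[9] * t.y + m[10] * t.z)};
    // The GL camera looks down its negative z axis: forward = -(third row of R).
    Vec3 fwd = {-m[2], -m[6], -m[10]};
    // Any point along the forward ray works as the target.
    cam.target = {cam.eye.x + fwd.x, cam.eye.y + fwd.y, cam.eye.z + fwd.z};
    // Up is the second row of R.
    cam.up = {m[1], m[5], m[9]};
    return cam;
}
```

The projection side is a separate (smaller) problem: the GL field-of-view and aspect ratio map onto the corresponding parameters of Sunflow’s camera block.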
Here’s what the above image looks like when it’s rendered with a couple point lights and a pinhole camera:
And here’s what it looks like with ambient occlusion (and no point lights)!
And here’s what happens when you play with the Phong shader and too many lights and don’t want to tweak things until they actually look good!
All of this has me excited because now I can move quickly with framing and general colors and textures and such in OpenGL, then get super-nice renders from Sunflow. Yay open-source!
Spent most of today trying to pull off an idea that I had in my head since the last time I wrote. The idea came to me as soon as I saw a piece called Ephemicropolis by contemporary artist Peter Root. Most of his work is pretty compelling, visually dense, and labor-intensive.
But I thought, hey, I love doing stuff that creates visual rhythm with similar shapes and iteration (see here and here and here and here). And I’m really into this idea that my captured motion data can be turned into something structural, becoming an input stream for a different set of dimensions. I’m almost literally painting an entire world here.
Anyway, now I need to actually start playing with how this thing looks. With lighting and the camera and texture and distribution of elements. Oh, and all of this was built using Cinder.
Here’s a time-lapse render, so you can see it build up:
Here are some of the in-progress renders that happened along the way as I was stumbling my way through all this:
Lots of new stuff learned over the weekend. Here’s a summary:
- Really understand basic OpenGL lighting now.
- Understand how to do basic material coloring now.
- Learned how to texture a quad!
- Starting to learn some things about the Boost C++ libraries.
- Finally remembered that OpenGL is a state machine.
I think I actually (finally) have enough know-how to execute what I’ve got going on in my head. It’s going to require some refactoring of code, but I don’t feel like I’m up against an unclimbable wall anymore, which is precisely how I felt by the end of the day on Sunday.
I still think this looks boring / uninteresting, but it’s just another step closer and a lot of lessons learned.
Here’s a video render of what it looks like in realtime:
Update on 04-29-2010: The C++ framework that I’m using has just been released to the public. It’s called Cinder. Big ups to Andrew Bell, Robert Hodgin, and The Barbarian Group for all the time and effort put into releasing such a powerful and fun open source framework for creative coding!
Yesterday at the BACAA open studio sessions, I started a new drawing. Same model. Same pose. This time I wanted to try to do a portrait. At this point, I’m getting in the first layer of the darker values. I think I concentrated too much on getting the rendering of the form “right” in the first pass, which is a big no-no.
The proportion of her lips and their distance to her nose is off right now, which is why the lips are half-erased — a reminder for myself next week to start by fixing that area.
I’m probably going to end up doing what I’ve done in the past with some of my other charcoal drawings, which is to get things looking technically decent, then sort of smudge over everything… lighten things up… get things “unfocused”… and finally go back through with a more gestural quality to the mark-making.
These past few weeks at the studio sessions have forced me to focus on measuring and really observing: two things I don’t generally spend enough time on.