Tuesday, February 26, 2008

DayBRIGHT: Image to Gradient Translation

The first translator I decided to fully flesh out for DayBRIGHT is the static image translator. The underlying system I have developed has only one principal rule - anything can be translated and used, as long as that translation returns a gradient object. So, how can a gradient be created from an image?

I took the route of writing an algorithm that finds and stores the most dominant colors in an image, then uses each sample as a control point within a variable-length, linearly smoothed gradient, with the most dominant colors ordered from left to right. Below are some examples of the results, where the top image is the source that was sampled and the gradient is the visualized output (a code sketch follows the figures):


Fig 1.1: Sky converted to a gradient based on dominant colors


Fig 1.2: Composition VI, W. Kandinsky - image to gradient conversion


Fig 1.3: Composition VII, W. Kandinsky - image to gradient conversion
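
For the curious, here is a minimal sketch of this kind of dominant-color sampler. It is not the actual DayBRIGHT code, just an illustration of the idea: quantize pixels into coarse RGB buckets (4 bits per channel), count bucket populations, and return the top k buckets as gradient control points. The class and method names are illustrative:

```java
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Map;

public class DominantColors {

    // Quantize each pixel into a coarse RGB bucket, count the bucket
    // populations, and return the top-k buckets as 24-bit RGB values,
    // ordered from most to least dominant.
    public static int[] extract(BufferedImage img, int k) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int bucket = ((rgb >> 20) & 0xF) << 8   // top 4 bits of red
                           | ((rgb >> 12) & 0xF) << 4   // top 4 bits of green
                           | ((rgb >> 4)  & 0xF);       // top 4 bits of blue
                counts.merge(bucket, 1, Integer::sum);
            }
        }
        return counts.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())
                .limit(k)
                // Expand each 12-bit bucket back to a representative 24-bit RGB.
                .mapToInt(e -> ((e.getKey() >> 8) & 0xF) << 20
                             | ((e.getKey() >> 4) & 0xF) << 12
                             | (e.getKey() & 0xF) << 4)
                .toArray();
    }
}
```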

Now that a method exists to translate static images into gradients, it becomes a trivial task to write a translator for a video feed, as video can easily be treated as a series of images. Simple alterations to the sampling subsystem allow a calculated sample of each frame to become a control point in a gradient. So, if a video feed were sampled once every hour for twenty-four hours, a gradient of n control points (here, twenty-four) could be constructed, linearly smoothed and with the control points spaced evenly across its length.
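
To make the "linearly smoothed" part concrete, here is a sketch of a gradient with n evenly spaced control points, sampled by linear interpolation between the surrounding pair. The Gradient class and its float-RGB representation are illustrative assumptions, not the project's actual types:

```java
public class Gradient {
    private final float[][] points; // n control points, each {r, g, b}

    public Gradient(float[][] points) { this.points = points; }

    // Sample the gradient at t in [0, 1]. With n control points there are
    // n - 1 equal intervals; find the surrounding pair and lerp between them.
    public float[] sample(float t) {
        int n = points.length;
        if (n == 1) return points[0];
        float scaled = Math.min(Math.max(t, 0f), 1f) * (n - 1);
        int i = Math.min((int) scaled, n - 2);
        float f = scaled - i; // fractional position within the interval
        float[] a = points[i], b = points[i + 1];
        return new float[] {
            a[0] + (b[0] - a[0]) * f,
            a[1] + (b[1] - a[1]) * f,
            a[2] + (b[2] - a[2]) * f
        };
    }
}
```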

Tuesday, February 19, 2008

DayBRIGHT: Generic Structures

The underlying system DayBRIGHT uses to translate various data structures has become less about constraining data to a set format and more about developing a generic structure (a gradient) that fully represents the data in an elegant way. I find this an interesting topic to explore because it has really opened a lot of doors as to how far I can take this project; in particular, in the sheer variety of data I can now use.

RSS feeds, video streams, static images, audio tracks, physical sensor data - all of them can be represented by a single generic data structure. Each kind of data simply needs its own translator to convert it to the structure; once that is done, there is no further special-casing needed.
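
As a sketch of that single rule, here is what the translator contract could look like, reusing the hypothetical Gradient and DominantColors classes from the February 26 entry above. The interface and names are illustrative, not the actual DayBRIGHT API:

```java
// Every translator, whatever its input type, must produce a Gradient.
public interface Translator<T> {
    Gradient translate(T input);
}

// Example: a static-image translator built on the dominant-color sampler.
class ImageTranslator implements Translator<java.awt.image.BufferedImage> {
    public Gradient translate(java.awt.image.BufferedImage img) {
        int[] colors = DominantColors.extract(img, 8); // hypothetical helper
        float[][] points = new float[colors.length][3];
        for (int i = 0; i < colors.length; i++) {
            points[i][0] = ((colors[i] >> 16) & 0xFF) / 255f;
            points[i][1] = ((colors[i] >> 8)  & 0xFF) / 255f;
            points[i][2] = ( colors[i]        & 0xFF) / 255f;
        }
        return new Gradient(points);
    }
}
```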

My goal for this project is to have, by the end of the semester, a demonstration of how the various types of data listed above can be used to achieve results for an identical operation. In the case of DayBRIGHT, I will be using the gradient structures to light 3D geometry. How this will be done merits its own detailed entry - which is forthcoming.

I am really excited about this idea; it is a blend of engineering/mathematics and visual aesthetics that I have always been drawn to. I am dedicating all of my classes to it in the hope that my time commitment will really push this idea as far as it can go.

Friday, February 15, 2008

DayBRIGHT: Proposal + Test Video

The idea behind the DayBRIGHT project is to visualize data through HDR light probes that are influenced by various types of datasets. Each dataset - be it an RSS weather forecast, a webcam frame sequence, an arbitrary color selection, or any other value - will be translated into a common format to be applied to a core HDR image.

The DayBRIGHT project will be encapsulated within a core software application which will communicate with various data feeds. As mentioned, each type of data will be converted to a common format: a color gradient. A color gradient allows for analysis of RGBA values, as well as of time, by sampling across its length. Depth can also be calculated by converting RGBA to luminance values.
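
As an aside, one common way to derive luminance from RGB is the Rec. 709 weighting shown below. This is an assumption about the conversion - the exact formula DayBRIGHT will use is not pinned down here:

```java
public final class Luminance {
    private Luminance() {}

    // Standard Rec. 709 weights for linear RGB components in [0, 1].
    public static float fromRGB(float r, float g, float b) {
        return 0.2126f * r + 0.7152f * g + 0.0722f * b;
    }
}
```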

To render the results, the initial solution will be to use 3Delight, an implementation of the RenderMan standard first introduced by Pixar. Using RenderMan will allow for programming flexibility and high-quality renders at HD resolutions, if needed. An example of the early results is shown below, where an orange -> cyan color gradient has been applied:

Wednesday, February 13, 2008

DayBRIGHT: Gradient Sampling

One of the core concepts of the DayBRIGHT project that I am developing over this semester is to drive HDR lighting over time using a variety of methods - the first of which is a simple color gradient.

For every frame to be rendered, a copy of the core HDR image is created, and a color overlay operation is then applied to that copy - with the color applied being the gradient's current sample value. This relationship is graphically represented below:


Fig 1.1: Graphical example of gradient sampling applied to image
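
In code, that per-frame loop could look roughly like the sketch below, reusing the hypothetical Gradient class from the February 26 entry above. A plain per-channel multiply stands in for the color overlay operation, and BufferedImage stands in for a true HDR format - both are simplifying assumptions:

```java
import java.awt.image.BufferedImage;

public class FrameTinter {

    // Copy the core image, sample the gradient at the frame's normalized
    // time, and tint the copy. Assumes totalFrames > 1.
    public static BufferedImage tintFrame(BufferedImage core, Gradient g,
                                          int frame, int totalFrames) {
        float t = frame / (float) (totalFrames - 1); // normalized time in [0, 1]
        float[] c = g.sample(t);
        BufferedImage copy = new BufferedImage(core.getWidth(),
                core.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < core.getHeight(); y++) {
            for (int x = 0; x < core.getWidth(); x++) {
                int rgb = core.getRGB(x, y);
                int r = (int) (((rgb >> 16) & 0xFF) * c[0]);
                int gr = (int) (((rgb >> 8) & 0xFF) * c[1]);
                int b = (int) ((rgb & 0xFF) * c[2]);
                copy.setRGB(x, y, (r << 16) | (gr << 8) | b);
            }
        }
        return copy;
    }
}
```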

Sunday, February 10, 2008

Mind.Scribble.Form

My mind blurred, sketched, articulated, and scribbled on paper - most of which makes little to no sense to me currently. I normally never write things down to remember for the future - I generally only do so to calculate or brainstorm on the spot. Apparently, in parallel with my messy drawings, I also take photos without much care either. I think I'll write that off as part of my "creative process" and hope that you believe me (which you shouldn't).

Wednesday, February 6, 2008

Blast Radius

Several layers of particles revolving around a central sphere - each layer being rotated based on assigned attraction forces. The particles are symbolic of the movement of a populace, combined with the effect of unseen influence over collective entities - physically, socially, and otherwise.


Fig 1.1: Straight-on shot


Fig 1.2: Corner angle


Source:
BlastRadiusMain.java
GLSL.java

Friday, February 1, 2008

Interactive and Time-Based HDR

I have been thinking a lot about pixels lately - most likely due to having a class with the given root word in it... twice. So, we have all of this information nicely packed within a frame - now what to do with it?

What I find interesting is how an image can be mined for data that is unseen at first glance - and one of my favorite ideas in this way of thinking is the creation and use of HDR images. High-dynamic-range images are useful for an assortment of reasons - in the field of computer graphics, they are often used as a light source. Paul Debevec is known as one of the groundbreaking researchers on the topic - and what initially began as a technique for expensive high-end rendering has been implemented in real time over the past couple of years. Check out his site to see some of the amazing things HDR images can be used for.


Fig 1.1: Example of a light-probe HDR image
Credit: http://www.debevec.org

So, what else can be done with HDR - and how can it be used to create something interesting? For me, I want to explore moving away from leaving HDR lighting as a static image - and instead having it evolve over time while lighting a static scene. A few ideas are: time-lapse HDR lighting, real-time interaction with the HDR image itself, tying HDR colorization to weather forecasting, integration with existing datasets, and so forth.