Wednesday, April 23, 2008

GLSL Fruit Shaders


Fig 1.1: Screen captures of orange and apple shaders

For an OpenGL project I am helping fellow NYU-ITP graduate student Matt Parker with, I developed two real-time shaders written in GLSL, applied to poly-spheres representing two types of fruit: apples and oranges. Due to the sheer number of spheres being displayed, the objective on my end was to get a high amount of surface detail out of a limited number of polygons. To accomplish this, I used a combination of displacement and normal-mapping calculations in the shader, meaning that all surface detail is handled on the GPU.

I have already written about how displacement is done through shaders - the very same technique I used in the past was also used in this set of shaders:
Geometry Displacement through GLSL

What has changed from my past shader work is how color varies across the geometric surface with respect to light: for these shaders I wrote the diffuse, ambient, and specular calculations directly, accessing the light source (L) and the given surface material (M), where:

Diffuse:

diffuse = L_diffuse * M_diffuse * max(dot(N, L), 0.0)

Ambient:

ambient = L_ambient * M_ambient

Specular (where H is the given half-vector):

specular = L_specular * M_specular * pow(max(dot(N, H), 0.0), shininess)
To simulate the surface-shape variance needed to pull off the appearance of an orange, I used the popular technique of "bump-mapping". Bump-mapping is generally accomplished with a normal map whose texels are sampled as XYZ vectors that perturb the geometric normals of the surface the shader is applied to. The Lambert term is then calculated, which is simply the dot product of the perturbed normal and the light direction. If the Lambert term is above 0.0, the specular term is calculated and added to the final color for the fragment. Here is some sample code to illustrate:

// Sample the normal map and expand from [0, 1] into a [-1, 1] vector
vec3 N = normalize(texture2D(normalMap, gl_TexCoord[0].st).rgb * 2.0 - 1.0);

// Lambert term: cosine between the perturbed normal and the light direction L
float lambert_term = max(dot(N, L), 0.0);

if (lambert_term > 0.0) {
    // Compute the specular contribution for this fragment
    vec3 E = normalize(eyeVec);
    vec3 R = reflect(-L, N);
    float specular = pow(max(dot(R, E), 0.0),
                         gl_FrontMaterial.shininess);
    // ...the diffuse and specular terms are then added to the final fragment color
}
The finished shader encompasses texturing, displacement, and normal mapping, resulting in a nice aesthetic that simulates a complex surface, with the light calculations matching the surface variance. All of this occurs on a low-poly (16 x 16) quadric sphere object. A real-time rotation animation of the orange shader is shown below:


Fig 1.2: Orange shader rotation animation


Sources for general information/reference and equation images:
ozone3D.net
Lighthouse3D.com
OpenGL.org

Wednesday, April 16, 2008

Maya: Fluid Gradient Programming (Plug-In)


Fig 1.1: Still of fluid-based fire simulation, with colors generated through plug-in gradient mapping system

I am currently developing a plug-in for Maya, a popular 3D modeling/animation package. The goal of the plug-in is to reinvent how fluid dynamic systems/effects are developed in Maya by constructing a node-based GUI that can be used within Maya, with each node encapsulating an existing feature of Maya fluids - essentially adding a layer of abstraction so the user can generate simulations more easily. The plug-in also aims to extend the current feature set of fluids within Maya.

I decided to start development of the plug-in with a quick experiment: integrating the generic gradient object class I developed earlier this semester with the native Maya gradient data driving color and incandescence. The aim is that images can be sampled and have their most dominant colors applied to the color/incandescence of the fluid voxel system within Maya - thus automating the construction of the color palette for a given effect. The results and rendered video are posted below:


Fig 1.2: Default Maya Fluid Effects values



Fig 1.3: Early development of Maya plug-in as a drop-menu



Fig 1.4: Image chosen to have dominant colors sampled



Fig 1.5: Image sampled gradient constructed and mapped into Maya's attribute data



Fig 1.6: Fluid fire simulation animation, colors derived from gradient sampling without modification
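
Independent of the Maya API, the mapping step itself - spreading an ordered list of sampled colors evenly across a 0.0-1.0 ramp - is a small piece of code. The sketch below is my own illustration, not the plug-in source:

# A sketch (independent of the Maya API, not the plug-in source) of spreading
# an ordered list of sampled colors evenly across a 0.0-1.0 color ramp.
def colors_to_ramp(colors):
    """Map an ordered list of (r, g, b) colors to evenly spaced ramp positions."""
    if len(colors) == 1:
        return [(0.0, colors[0])]
    step = 1.0 / (len(colors) - 1)
    return [(i * step, color) for i, color in enumerate(colors)]

# e.g. three dominant colors become control points at 0.0, 0.5, and 1.0

Each (position, color) pair can then be written into one entry of the fluid's color/incandescence ramp.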

Wednesday, April 2, 2008

Animation: Luma-Displacement / Glow (OpenGL / GLSL)

A sphere with an animated displacement GLSL shader that calculates a glow / bloom effect along a defined distance from the midpoint.

Tuesday, April 1, 2008

Luma-Based Displacement


Fig 1.1: In-progress glacier shader with geometric displacement

Monday, March 24, 2008

Python vs. Java vs. Python/Weave

In case you haven't noticed, I am a rather big proponent of using Python whenever possible. Between Python's flexibility and the sheer enjoyment I get out of writing in the language, I generally look to use Python before turning to a compiled language such as C/C++ or Java. The times I have had to turn to a compiled language are usually when I need computational speed.

There are several ways to get Python moving faster - some a bit uglier than others; code speedups in Python are usually about trading readability for performance, but that doesn't mean the implementation can't be straightforward. In the past, I have used the usual bag of Python tricks: list comprehensions, method/variable assignment outside loops, generators, and countless other pure-Python solutions.

Recently, I stumbled upon Weave, a module inside the popular SciPy library which allows C code to be mixed with Python code. Using Weave lets the coder stay inside the bounds of Python, yet use the power of C to compute the heavier algorithms. Not only does it provide a speed increase... it beats Java. Consider this (nonsensical) algorithm comparison between pure Python, Java, and Weave + Python:


Python Example:

Time: 1.681 sec.


Java Example:

Time: 0.037 sec.


Python + Weave Example:

Time: 0.017 sec.


Using identical algorithms in all three tests, Weave + Python was nearly 100 times faster (98.9x) than pure Python alone, and a shade over 2 times faster (2.1x) than Java. Although the Python/Weave code itself is bigger than the pure Python and Java examples, the speed it provides is absolutely phenomenal - without ever leaving your .py file.
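
To give a feel for what mixing C with Python through Weave looks like, here is a minimal sketch (not the benchmark code timed above) that runs a simple summation loop in inlined C:

# A minimal sketch of scipy.weave usage - not the benchmark code from the timings above.
from scipy import weave

def c_sum(n):
    """Sum the integers 0..n-1 using an inlined C loop."""
    code = """
    double total = 0.0;
    for (long i = 0; i < n; ++i) {
        total += i;
    }
    return_val = total;
    """
    # weave.inline compiles the C snippet on first call and caches the build,
    # so later calls only pay the cost of the C loop itself.
    return weave.inline(code, ['n'])

The first call incurs a one-time compile, which is why Weave pays off most on code that is called repeatedly or loops heavily.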

Hopefully, I'll be able to use the power of mixing C with Python for more elaborate purposes - but this test alone is enough to get me excited about future projects with Python.

Wednesday, March 12, 2008

DayBRIGHT: Gradient / Render Integration

I have now managed to get DayBRIGHT to render out a full animation using a gradient as the light source. While the animation scene itself is rather basic, with three spheres inside a box, it demonstrates some of the primary goals of the project.

Now that the gradient data structure integrates with the rendering process, it becomes a matter of writing the translators for each type of media I'd like to use - as well as developing a more interesting scene to light.




Fig 1.1 + 1.2: Animation using displayed gradient as lighting source over time


Indirect surface shader code:

Monday, March 3, 2008

DayBRIGHT: Post-Processing

Changing gears a bit, I decided to begin writing some post-image-processing functionality into DayBRIGHT to handle the final image output. The first post-process was a color noise filter that blends a 1:1 ratio of random colored pixels with each rendered image. This is done to counter the fact that the illusion of computer graphics being reality is often blown by a discrepancy in noise artifacts across the image, most commonly noticed when graphics are composited with film or video footage. Even without footage, the human mind does not normally associate perfectly smooth color and shape with reality. Reality rarely has instances of complete visual fidelity, at least from an everyday human perspective.


Fig 1.1: Simple color noise composite process
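
As an illustration of the idea (a sketch of my own using PIL, not the DayBRIGHT post-processing code), blending a frame with an equally sized field of random colored pixels can be as simple as:

# A rough sketch of the color-noise composite using PIL and random; an
# illustration only, not the DayBRIGHT post-processing code.
import random
from PIL import Image

def add_color_noise(frame, amount=0.05):
    """Blend an RGB frame with an equally sized image of random colored pixels."""
    width, height = frame.size
    noise = Image.new("RGB", frame.size)
    noise.putdata([(random.randint(0, 255),
                    random.randint(0, 255),
                    random.randint(0, 255)) for _ in range(width * height)])
    # 'amount' controls how visible the noise is in the final composite
    return Image.blend(frame, noise, amount)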

Another necessary post-process was gamma correction. The algorithm that falls out of the equation is straightforward: a gamma-corrected lookup table of values from 0-255 is created, then each RGB value from the given image is changed to its gamma-corrected equivalent. Below is the simple math equation for gamma correction and the corresponding algorithm in (non-optimized) Python code:


Fig 1.2: Gamma correction equation
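
The original snippet isn't reproduced here, but the lookup-table approach described above might look roughly like this, assuming the standard form corrected = 255 * (value / 255)^(1 / gamma):

# A non-optimized sketch of the lookup-table approach described above; not
# necessarily the original DayBRIGHT code.
def gamma_correct(pixels, gamma=2.2):
    """Apply gamma correction to a list of (r, g, b) tuples."""
    # Precompute the corrected value for every possible channel value 0-255
    table = [int(255.0 * ((v / 255.0) ** (1.0 / gamma)) + 0.5) for v in range(256)]
    # Swap each RGB value for its gamma-corrected equivalent
    return [(table[r], table[g], table[b]) for (r, g, b) in pixels]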


Saturday, March 1, 2008

DayBRIGHT: Renderman Rendering

Below are some test renders coded through Renderman and woven into the DayBRIGHT graphics and data handling subsystems. Using the classic Cornell box, I was able to successfully simulate global illumination in combination with image based lighting. The photon calculation isn't as smooth (or as fast) as I'd like it to be - but optimization will be taken care of later once I can generate reliable image sequences.


Fig 1.1: Global illumination / color bleeding


Fig 1.2: Image-based lighting with accurate shadows. Shadows are no longer a gradient of grays, but are mixed with color from the casting objects.


Fig 1.3: Classic Cornell box setup with pronounced color bleed

Tuesday, February 26, 2008

DayBRIGHT: Image to Gradient Translation

The first translator I decided to fully flesh out for DayBRIGHT is the static image translator. The underlying system I have developed has only one principal rule - anything can be translated and used, as long as that translation returns a gradient object. So, how can a gradient be created from an image?

I took the route of writing an algorithm that finds and stores the most dominant colors in an image, then uses each sample as a control point within a varying-length, linearly-smoothed gradient, with the most dominant colors ordered from left to right. Below are some examples of the results, where the top is the source that was sampled and the bottom is the resulting gradient visualized:


Fig 1.1: Sky converted to a gradient based on dominant colors


Fig 1.2: Composition VI, W. Kandinsky - image to gradient conversion


Fig 1.3: Composition VII, W. Kandinsky - image to gradient conversion
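
As a rough sketch of the dominant-color sampling described above (my own illustration, not the actual DayBRIGHT translator), the census step can be done with a simple dictionary count:

# A rough sketch of sampling an image for its most dominant colors; an
# illustration only, not the actual DayBRIGHT translator.
from PIL import Image

def dominant_colors(path, count=5):
    """Return the 'count' most frequent colors in an image, most dominant first."""
    # Downsample so the color census stays cheap
    img = Image.open(path).convert("RGB").resize((64, 64))
    census = {}
    for color in img.getdata():
        census[color] = census.get(color, 0) + 1
    ordered = sorted(census.items(), key=lambda item: item[1], reverse=True)
    # These colors become the left-to-right control points of the gradient
    return [color for color, _ in ordered[:count]]

In practice the colors would also be quantized or clustered so that near-identical shades count toward the same control point.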

Now that a method exists to translate static images into gradients, it becomes a trivial task to write a translator for a video feed, since video can easily be treated as a series of images. Simple alterations to the subsystem allow a calculated sample of each frame to become part of a gradient. So, if a video feed were sampled once every hour for twenty-four hours, a gradient of n control points could be constructed, linearly smoothed evenly between adjacent control points.

Tuesday, February 19, 2008

DayBRIGHT: Generic Structures

The underlying system DayBRIGHT uses to translate various data structures has become less about constraining data to a set format, and more about developing a generic structure (a gradient) to fully represent the data in an elegant way. I find this an interesting topic to explore because it has really opened a lot of doors as to how far I can take this project; in particular, in the sheer variance of data I can now use.

RSS feeds, video streams, static images, audio tracks, physical sensor data - all of these can be harbored by a single generic data structure. Each type of data simply needs its own translator for converting it to the structure; once that is done, there is no further special-casing needed.
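
A hypothetical sketch of that contract (the class names below are my own, not DayBRIGHT's actual code):

# A hypothetical sketch of the translator contract described above; the class
# names are my own, not DayBRIGHT's actual code.
class Gradient(object):
    """Generic structure: an ordered list of (position, (r, g, b)) control points."""
    def __init__(self, control_points):
        self.control_points = control_points

class ImageTranslator(object):
    def translate(self, image_path):
        # ...sample the image's dominant colors and turn them into control points...
        return Gradient(control_points=[])

class SensorTranslator(object):
    def translate(self, readings):
        # ...map raw sensor values onto colors...
        return Gradient(control_points=[])

Once every input type produces a Gradient, the rest of the system only needs to know how to work with a gradient.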

My goal for this project is to have, by the end of semester, a demonstration of how the various types of data listed above can be used to achieve results for an identical operation. In the case of DayBRIGHT, I will be using the gradient structures to light 3D geometry. How this will be done merits its own detailed entry - which is forthcoming.

I am really excited about this idea; it is the blend of engineering/mathematics and visual aesthetics that I have always been drawn to. I am dedicating all of my classes to it in the hope that my time commitment will really push this idea as far as it can go.

Friday, February 15, 2008

DayBRIGHT: Proposal + Test Video

The idea behind the DayBRIGHT project is to be able to visualize data through HDR light probes being influenced by various types of datasets. Each dataset, be it an RSS weather forecast, webcam frame sequence, arbitrary color selection, or any other value - will be translated into a common format to be applied to a core HDR image.

The DayBRIGHT project will be encapsulated within a core software application which will communicate with various data feeds. As mentioned, each type of data will be converted to a common format: a color gradient. A color gradient will allow for analysis of RGBA, as well as time by sampling across its length. Depth can also be calculated by converting RGBA to luminance values.

To render the results, the initial solution will be to use 3Delight, an implementation of the Renderman standard, first introduced by Pixar. Using Renderman will allow for programming flexibility and high-quality renders at HD resolutions, if needed. An example of the early results is shown below, where an orange -> cyan color gradient has been applied:

Wednesday, February 13, 2008

DayBRIGHT: Gradient Sampling

One of the core concepts of the DayBRIGHT project that I am developing over this semester is to enable the ability to drive HDR lighting over time using a variety of methods - the first of which is using a simple color gradient.

For every frame to be rendered, a copy of the core HDR image is created, then a color overlay operation is applied to that copy - with the color applied being the current sample value of the gradient. This relationship is graphically represented below:


Fig 1.1: Graphical example of gradient sampling applied to image
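
A small sketch of that per-frame step (using PIL on an 8-bit stand-in for the HDR probe, with a hypothetical gradient.sample() method - not the actual DayBRIGHT code):

# A small sketch of the per-frame overlay described above; PIL is used on an
# 8-bit stand-in for the HDR probe, and gradient.sample() is a hypothetical API.
from PIL import Image

def probe_for_frame(core_probe, gradient, frame, total_frames, strength=0.5):
    """Return a tinted copy of the core light probe for a given frame."""
    t = frame / float(total_frames - 1)   # position along the gradient, 0.0 - 1.0
    color = gradient.sample(t)            # current (r, g, b) sample of the gradient
    overlay = Image.new("RGB", core_probe.size, color)
    # Blend the flat color overlay over a copy of the core image
    return Image.blend(core_probe, overlay, strength)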

Sunday, February 10, 2008

Mind.Scribble.Form

My mind blurred, sketched, articulated, and scribbled on paper - most of which makes little to no sense to me currently. I normally never write things down to remember for the future - I generally only do so to calculate or brainstorm on the spot. Apparently, in parallel with my messy drawings, I also take photos without much care either. I think I'll write that off as part of my "creative process" and hope that you believe me (which you shouldn't).









Wednesday, February 6, 2008

Blast Radius

Several layers of particles revolving around a central sphere - each layer rotated based on assigned attraction forces. The particles are symbolic of populace movement combined with the effect of unseen influence over collective entities - physically, socially, and otherwise.


Fig. 1.1: Straight-on shot


Fig 1.2: Corner angle


Source:
BlastRadiusMain.java
GLSL.java

Friday, February 1, 2008

Interactive and Time-Based HDR

I have been thinking a lot about pixels lately - most likely due to having a class with the given root word in it... twice. So, we have all of this information nicely packed within a frame - now what to do with it?

What I find interesting is how an image can be mined for data that is unseen upon first glance - and one of my favorite ideas in this regard is the creation and use of HDR images. High-Dynamic Range images are useful for an assortment of reasons - in the field of computer graphics, they are often used as a light source. Paul Debevec is known as one of the groundbreaking researchers on the topic - and what initially began as a technique for expensive high-end rendering has, in the past couple of years, been implemented in real-time. Check out his site to see some of the amazing things HDR images can be used for.


Fig 1.1: Example of a light-probe HDR image
Credit: http://www.debevec.org



So, what else can be done with HDR - and how can it be used to create something interesting? For me, I want to explore moving away from leaving HDR lighting as a static image, and instead have it evolve over time while lighting a static scene. A few ideas: time-lapse HDR lighting, real-time interaction with the HDR image itself, tying HDR colorization to weather forecasting, integration with existing datasets, and so forth.

Thursday, January 17, 2008

Programming, Ubiquity, Devices, and You

After telling fellow students at ITP about the classes I decided to take for the coming Spring semester, I have often been asked the following two questions:

1. "Why are you taking programming centric classes if that is already one of your strengths?"

2. "Why aren't you taking any physical computer related classes?"

The answer to the first one is rather concise: I feel that staying constantly involved in software and mathematics builds me into a better overall problem-solver, both analytically and creatively. Writing code to solve a problem is concurrently a step-by-step breakdown of the problem itself; patterns emerge and reveal themselves, woven with systems of varying elegance and complexity. Understanding concepts which reveal such harmony will endure far past simply designing the next iPhone - which leads me to the answer to the second question:

My position on physical computing, as it pertains to projects at ITP (and in general), is that devices we physically interact with will inevitably become obsolete. This is the century of biology - and consequently, biological micro-interfacing will be the future of device technology. Increased ubiquity leading to result/reward has nearly always been the trend in tool advancement - computing should exist behind a curtain whenever possible. While it is certainly a valid venture to develop large-scale projects involving common electronics and so forth, I personally have little interest in pursuing similar implementation methods.