Monday, December 10, 2007

Software for Physical Display Project

Group: William McDonald, Sunghun Kim, Matt Parker

Further documentation:
Sunghun with videos / images
Matt on construction / mechanics

For our group project, we wanted to develop a physical system to represent graphical depth data. We decided to use a collection of sine and noise waves to start off with - as using depth data really only makes (practical) sense with a 3D grid.

Using Processing, a straightforward GUI was created featuring four buttons, with a graphical representation of the waves dynamically drawn to the right of the buttons. The fourth button allows the user to paint his/her own wave (or any shape, for that matter), and the software will generate a visualization and physical output for the painted image.



Fig 1.1: Drawing canvas allows user to draw images to physically visualize



Fig. 1.2: Visualization of user input wave drawing


To sample each wave, an algorithm was written which samples across the width of a given image at a rate proportional to the resolution of the physical system - in our case, 20 samples. At each sample, the program scans down the image (in Y), counting pixels until a color other than white is found. Upon finding a color, the value of that sample is derived from the simple equation:

sample_value = y_coord / (image_height / max_value)




Fig 1.3: Graphical representation of sampling method on wave images
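
The Processing implementation isn't reproduced here, but a rough sketch of the sampling loop looks something like the following (shown in Python for brevity; the function name, the grayscale pixel access, and the default max_value of 180 are illustrative assumptions):

def sample_wave(pixels, width, height, samples=20, max_value=180):
    # pixels[y][x] holds a grayscale value; 255 is treated as white
    values = []
    for i in range(samples):
        x = int(i * width / samples)              # step across the image width
        y = 0
        while y < height and pixels[y][x] == 255:
            y += 1                                # walk down until a non-white pixel
        values.append(y / (height / max_value))   # sample_value = y_coord / (image_height / max_value)
    return values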


In regard to feeding this data into the servo motor array, we decided to use four Arduino microcontrollers connected via USB to the computer running the software application, with each Arduino effectively controlling five servos:



Fig 1.4: Computer / Arduino / Servo array configuration
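
As a rough illustration of how the 20 sample values might be split across the four boards - five per Arduino - here is a hedged Python sketch using pyserial (the actual host application is written in Processing; the port names and the one-byte-per-servo framing are assumptions):

import serial

ports = ["/dev/ttyUSB0", "/dev/ttyUSB1", "/dev/ttyUSB2", "/dev/ttyUSB3"]  # hypothetical ports
boards = [serial.Serial(p, 9600) for p in ports]

def send_samples(values):
    # values: 20 servo angles in the 0-180 range, one per rod
    for i, board in enumerate(boards):
        chunk = values[i * 5:(i + 1) * 5]           # five servos per Arduino
        board.write(bytes(int(v) for v in chunk))   # one byte per servo angle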


Saturday, December 8, 2007

Shiver - Final


Fig 1.1: Splash screen for Shiver

The last week of work on Shiver has primarily been focused on adding features on the GUI side and tying up some loose ends. The largest new feature is the ability to 'simulate', which allows a user to simulate a full rotation (or however many rotations are wanted) of the globe, outputting the frames as an image sequence. This process allows the user to see the rotation as a movie clip running at full speed. The video in this blog entry is derived from that process.



Fig 1.2: Simulate options window


Shiver is still very much a work in progress - however, I am pleased with the result of my work over the past 6-7 weeks, particularly with knowing that I was able to achieve high-quality imagery in real-time using Python and OpenGL together.



shiver_simulation.mov - 10.6mb


Images from Shiver:




Tuesday, December 4, 2007

Shiver Development: Event Density + Time Charting

For the past few days, I have been developing a type of visualization which attempts to better communicate the areas that have a high density of earthquakes over time. I decided to do this by having the program create a texture on the fly which is fed into the primary globe fragment shader, so that areas of high seismic activity receive higher amounts of red. Since the texture being created is simply a collection of red blobs on a black background, it is a simple process to add the red areas on top of the globe through the shader. Additionally, the seismic events themselves are visualized through simple lines aligned to the normals of the sphere, with the length of each line determined by the event's Richter scale value.



Fig 1.1: Globe with event density visualization
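
As a rough sketch of how such a density texture could be built - red blobs accumulated on a black image at each event's texture coordinate - something like the following works (Python/numpy; the texture size, blob radius, and the simple lat/long-to-pixel mapping are illustrative, not Shiver's exact code):

import numpy as np

def density_texture(events, size=512, radius=8):
    tex = np.zeros((size, size, 3), dtype=np.float32)   # black RGB texture
    yy, xx = np.mgrid[0:size, 0:size]
    for lat, lon in events:                              # degrees
        u = int((lon + 180.0) / 360.0 * size)            # longitude -> x
        v = int((90.0 - lat) / 180.0 * size)             # latitude  -> y
        blob = np.exp(-((xx - u) ** 2 + (yy - v) ** 2) / (2.0 * radius ** 2))
        tex[..., 0] += blob                              # accumulate into the red channel
    tex[..., 0] = np.clip(tex[..., 0], 0.0, 1.0)
    return tex                                           # upload as the shader's density texture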

After wrapping up event density mapping, I wanted to start trying to visualize events as they occurred over time. The visualization itself doesn't communicate time as much as it displays the erratic nature of seismic activity happening around the world over an extended period (currently, over seven days). To create lines that seemingly wrap themselves around the curvature of the globe, I decided to write a small bezier curve generator whose end points sit at the 3D coordinates of selected events. The distance between those events is calculated using the Euclidean distance formula:

d = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)

Where the distance value is used to apply a weight (w) to the midpoint control points of a rational bezier curve, such that:

C(t) = sum_i( B_i(t) * w_i * P_i ) / sum_i( B_i(t) * w_i )

with B_i(t) the Bernstein basis polynomials, P_i the control points, and w_i the weight applied to each control point.

Fig 1.2: Bezier curves mapped across globe based on time of occurrence
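
As an illustrative sketch (not Shiver's exact implementation), a quadratic rational bezier arc between two event positions might be evaluated like this, with the midpoint pushed away from the globe's center and its weight derived from the distance above; the lift factor and the exact weighting choice are assumptions:

import numpy as np

def arc_points(p0, p2, steps=32, lift=1.3):
    p0, p2 = np.asarray(p0, float), np.asarray(p2, float)
    w = np.linalg.norm(p2 - p0)                      # event distance -> midpoint weight
    mid = (p0 + p2) / 2.0
    p1 = mid / np.linalg.norm(mid) * np.linalg.norm(p0) * lift   # midpoint raised above the globe
    pts = []
    for t in np.linspace(0.0, 1.0, steps):
        b0, b1, b2 = (1 - t) ** 2, 2 * t * (1 - t), t ** 2       # Bernstein basis, degree 2
        num = b0 * p0 + b1 * w * p1 + b2 * p2
        den = b0 + b1 * w + b2
        pts.append(num / den)                        # rational bezier point
    return pts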

Wednesday, November 28, 2007

Shiver Development: Displacement

As I continue to polish the overall look of the globe, I decided to explore how to display the topography of the earth, which will become more important upon the incorporation of zooming in and out. Another plus of having a working shader pipeline is the ability to write vertex shaders in a way which changes the geometry the shader is attached to - better known as displacement.


Fig 1.1: Altitude map provided by the NASA Blue Marble project

Having an altitude map which precisely matches the texture maps I am already using is quite convenient - so there was no need to alter the image or the coordinate mappings in my shaders. The concept of displacing geometry per vertex is a rather simple one:


Fig 1.2: Formula for displacement per vertex:

P1 = P0 + N * (df * uf)

where:
P0 = original vertex position
P1 = new vertex position
N = vertex normal
df = normalized displacement factor
uf = user-defined scaling value


Fig 1.3: Diagram of per vertex displacement (Image credit: oZone3d.net)



Fig 1.4: No displacement / displacement comparison in Shiver (wireframe)
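
For reference, the per-vertex operation is small enough to sketch on the CPU (in Shiver it runs inside a GLSL vertex shader; the altitude sampling is simplified away here):

import numpy as np

def displace(p0, normal, altitude, user_scale):
    # p0: vertex position, normal: unit vertex normal,
    # altitude: displacement factor normalized to 0..1 from the altitude map,
    # user_scale: user-defined scaling value
    return np.asarray(p0, float) + np.asarray(normal, float) * altitude * user_scale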

My next goal for Shiver is to implement a sunlight calculation system which will auto-generate a texture map accurately representing how the sun is lighting the earth based on the time of day. Additionally, I plan on (finally) starting the preliminary visualization of the actual seismic events being geomapped.

Friday, November 23, 2007

Woven

I developed a small piece of software for a group presentation I had in Applications class that has to do with visualizing collaboration among 1st year students at ITP. Someone during the Q+A asked if the software was open-source and if so, where could the source be downloaded? The answers were "yes" and "soon, on my blog". Unfortunately, I don't really have the time as it stands to package the software up nicely for download - so I am just going to post the raw source code for now, with the hope that I can package it later on...



Source:
ProcessingMain.java
StudentDataParser.java
Student.java
Button.java
student_data.txt

Shiver Development: Event Mapping

A couple of new updates: the first is that I have a new fog shader working on the globe, which adds to the effect of the atmospheric layering. The ability to program shaders and apply them to geometry is really what separates imagery that is noticeably fake from imagery that is nearing photorealism. Real-time photorealism is the next big step in computer graphics - although you could make a very valid claim that photorealism hasn't been reached in pre-rendered form either.


Fig 1.1: Comparison between shader enhanced and non-shaded geometry

The next update in progress is actually a rather large one: I can now plot events on the earth sphere, needing only an event's latitude and longitude values. This was a larger challenge than originally anticipated, as I hadn't taken into consideration how spherical (polar) coordinates relate to texture mapping on a sphere in OpenGL. I also ran into an issue pushing and popping the transformation matrix - but that turned out to be an instance of not having certain draw calls happen between the correct push/pop pair.


Fig 1.2: Seismic events mapped on globe
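
A minimal sketch of the latitude/longitude-to-sphere mapping, assuming a unit sphere with the y-axis running through the poles (the axis convention here may differ from Shiver's actual texture mapping):

import math

def lat_lon_to_xyz(lat_deg, lon_deg, r=1.0):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.sin(lat)              # poles along the y-axis
    z = r * math.cos(lat) * math.sin(lon)
    return (x, y, z)                   # position to draw the event marker at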


Sunday, November 11, 2007

Shiver Development: Shaders in PyOpenGL

After a turbulent weekend trying to get shaders working with PyOpenGL - I finally got my first vertex and fragment files compiled and working inside Shiver. Without any documentation to bail me out of my usual jams with new code - I hacked away through the weekend until I stumbled upon a way of changing C-type function arguments, then converted variables into C-compliant data to be fed into the altered function in question. In English: I needed to convert data so it could be read and processed correctly.


Fig 1.1: Globe with atmospheric shader applied
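
For reference, current versions of PyOpenGL ship convenience wrappers that take care of the C-type juggling described above; a minimal sketch (the file names are illustrative):

from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER, glUseProgram
from OpenGL.GL.shaders import compileShader, compileProgram

def load_program(vert_path="atmosphere.vert", frag_path="atmosphere.frag"):
    vert = compileShader(open(vert_path).read(), GL_VERTEX_SHADER)
    frag = compileShader(open(frag_path).read(), GL_FRAGMENT_SHADER)
    return compileProgram(vert, frag)   # link into a usable program object

# program = load_program()
# glUseProgram(program)                 # bind before drawing the globe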

So, in the lab I am often asked: "why are shaders important?" Shaders can provide surface appearance variance in regard to color, opacity, reflection, refraction, etc... even physical alterations to the geometry itself. In relation to what I am attempting to achieve, it is vital to be able to replicate the light and physical phenomena that exist when viewing the earth from space.

My first goal was to get a representation of the atmospheric layer that wraps around the earth - a thin haze of a light blue hue. The trick is that the haze should not rotate along with the earth as the camera moves around the globe. To achieve this, a shader was created that calculates how light is hitting the surface at each given frame, then finds where the light drop-off occurs on the surface (the edges) and colors those areas a varying hue of blue based on how much light exists on a given fragment of the geometry. Since I applied the shader to a sphere and the lighting is both hitting the sphere straight on and static in all attributes over time, we can assume perfect symmetry in the light distribution across the sphere at any given time, which allows for the Lambertian reflection calculation of:



I0 = Ld * Md * max(N · L, 0)

Where:
I0 = reflected intensity
Ld = light diffuse color
Md = material diffuse coefficient
N · L = dot product of the surface normal and the light direction

Once we have the reflected intensity of each fragment, simple logic can be added within the shader that only applies color at certain ranges of intensity - and, since we are dealing with a sphere with light shooting straight at it (symmetrical light reflection!), it becomes simple to create a halo-like light effect.


Fig 1.2: Closer look at the glow-like effect of the atmospheric shader
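
A rough Python sketch of the per-fragment logic (the real version is a GLSL fragment shader, and the intensity cut-offs chosen here are illustrative):

def halo_color(normal, light_dir, Ld=(0.4, 0.6, 1.0), Md=1.0):
    # Lambertian term: I0 = Ld * Md * max(N . L, 0)
    ndotl = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    i0 = Md * ndotl
    if 0.05 < i0 < 0.35:                          # only color the light drop-off (the limb)
        strength = 1.0 - abs(i0 - 0.2) / 0.15     # fade toward the edges of the band
        return tuple(c * strength for c in Ld)
    return (0.0, 0.0, 0.0)                        # leave the rest of the sphere untouched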

The next steps with shaders are to create a fog shader for the globe, potentially a very subtle bump-map, and other effects as time permits.

Wednesday, October 31, 2007

Shiver - Final Proposal

A few posts back - I wrote about developing a seismic data visualizer for my ICM final. This is still the case - however, the details involving the execution of the project are now very different. The most notable change involves adding several physical components to the project, meaning; this project will now be both my ICM and my Physical Computing project for this semester. Excited? Splendid - now on to the details...


Technical
On the ICM front - I made a decision a week ago that I would in fact use Python as my primary programming language, as opposed to my original choice of Java. Why? Well, as I mentioned in the past, Python is slow in regard to intensive computation in comparison to compiled languages such as C/C++ or Java. However, where Python shines is in its fast development time and in its flexibility to glue various components together effectively. Without a doubt, hitting the visual high-bar I am aiming for in regard to the graphics in this project will put an enormous strain on Python - however, much of that strain can ideally be handled through C code, if necessary. I will be using an OpenGL binding called PyOpenGL - which is a mature library with strong documentation, so I should have information/learning support when I need it. For the GUI components, I will be using the wxPython toolkit - which allows for native OS widget use, so the program will automatically reflect the standard appearance of whatever OS the program is being run on. Very slick.



Fig 1.1: Shiver in early development.


Representation
As for how the data will be represented, it will be done so on a 3D globe - much like applications such as Google Earth and NASA's World Wind. When the application gathers seismic data from the USGS, each event will be mapped to the globe according to its latitude and longitude values. Based on an event's Richter scale value, the event will be visualized along the sphere normal corresponding to its lat/long mapping. Events can be selected through an easily organized directory structure tree, or on the globe itself. When selected, various attributes are displayed, as well as a simulation option - which leads us to the physical computing side of the project...


Physical
After sitting down with fellow ITP'er Sunghun Kim, we decided that we would like to team up and pursue a physical representation of seismic data that can be partnered with the software I am developing for my ICM final. This idea will involve creating a 2D representation of the seismic events, dictated by the Richter scale value of each event. This will ideally result in a mesh being created by rods pushing and pulling at a rubber-like surface. Each rod will be driven by an average of pixel color values, dependent on the ratio between the rod count and the image resolution. Once a 2D simulation is established, it will ideally branch out to a 3D grid - but that is likely for another semester.

Thursday, October 25, 2007

Elevator Previsualizer

This is a midterm project I worked on with a talented group of people in Physical Computing. Follow the link for more:

Elevator Previsualizer Main Site

Wednesday, October 17, 2007

Phys. Comp Midterm: Implementation / User Observations

For our midterm in Phys. Comp, my group decided to take on the task of breaking up the boredom which happens when waiting for an elevator. Petra, Sunghun, and I set up what we had completed for the project thus far for the first time - with mixed results. It is clear that we have a lot of implementation issues to resolve before getting the type of user interaction data we need for the later stages of polishing usability.


Fig 1.1: Will (me) interacting with camera.

Implementation Issues/Observations

1. The projector, if angled from the ceiling, will display a distorted shape due to the light being shot at an angle.

2. The image being shot through the projector does not immediately fit in the space of the door. The video can be clipped, but the light (although black) still shows up.

3. Even if the projector is raised up on the ceiling; the light from the projector will be blocked by people standing directly in front of the door.

4. There is no (reasonable) way of preventing significant light pollution at any given time, because we are unable to strictly control the public environment, and we are bound by the collective lumen value the given projector can output.

5. We do not yet have a good solution for binding the light-blocking module to the projector – although this likely has a simple solution.

6. Due to the central and elevated nature of both the camera and the projector, light/motion recursion through the camera is very likely to occur unless steps are taken to filter an area - which has consequences that would result in trade-offs likely too severe to accept.

7. The location of the crank has to be outside the field of the camera and projector. It is possible to have the crank located right in the middle, within the camera range but not obstructing the projector light - but this will mean that the user of the crank will be the focus of the image, which may create static results.

8. Due to the environment we are contained in, the projector will have to be turned sideways to establish the amount of vertical space needed to cover the door.


User Issues/Observations

1. Users are drawn to the computer if it is out in the open. They seem to glance at the projected image, then want to look “behind the curtain” at the computer screen. The computer should be hidden from view.

2. Users seem to want to view the projected image by being in front of it, just as people watch television.

3. People stepping out of the elevator (or stepping in) do not particularly enjoy being blasted in the face by the projector light. (this issue has been solved, however).

4. The moment seems fleeting: people interact, a result is presented, then it is instantly replaced by a new result. This does promote a sense of real-time, but doesn't reward...


Fig 1.2: Software cutting out background, leaving only the person displayed.


Fig 1.3: Petra using a crank which drives the projected image up and down.

Friday, October 12, 2007

ICM Final Project Proposal: Global Data Visualizer

Proposal:
When developing software, I feel it is important to develop around a system that is flexible, yet powerful when focus is needed on a particular task. Many times, a generic solution to a problem requiring precision results in software that has the ability to handle many tasks, yet does so in a mediocre way. Software should be quiet in interface, elegant in result - and predictable in terms of being intuitive to use.

My commentary on this aspect of software leads me to my proposal - that being: I want to develop a system which can visualize various types of data as it pertains to our planet, and I'd like to do so in a way which promotes ease of use and flexibility in how it handles different data. Ideally, the data will be presented in a way that will be understood, yet a good deal more 'artsy' than most data visualizers I have seen. Whether 'artsy' means abstraction or simply a clean/slick way of visualizing the data I cannot really say; I very rarely know what I will create on the art side until I am up to my neck in code architecture. That isn't to say that the art is an afterthought - the evidence against such an idea lies in much of my past code, where I practically destroy my nicely planned-out system for the sake of making the output look 'cool'. Of course, I'd prefer to maintain both... but the engineer inside me brings a knife to a gunfight if I have to choose only one due to time constraints.

Technical:
Initially, my task will be to visualize seismic activity happening around the world. This will ideally expand to harbor other data as time permits. I have actually already done this to an extent - but I'd like to have the events projected on a 3D globe with correct coordinates and so forth. I have made the decision that I will be using OpenGL through Java for this project. I toyed with the idea of using C++ or Python in combination with OpenGL - but C++ coding is still a very slow process for me, and Python simply isn't going to provide the optimal speed I need (which is too bad, as coding in Python is really enjoyable). I was planning on using JOGL - but the ever-helpful Daniel Shiffman pointed me in the direction of LWJGL (Lightweight Java Game Library), which looks really promising, so I will be venturing in that direction.

Here are a few links that are helping me think things through on this project:
Lightweight Java Game Library
NASA Blue Marble
USGS.gov

Tuesday, October 9, 2007

Elevator Project: Motion Tracking

For my group project in Physical Computing, we have decided to address the issue of people waiting outside of elevators and the boredom that inevitably occurs during the wait. Petra, Sunghun, and I all agreed that a good way to break the static nature of waiting is to allow people to interact with something that has immediate output in correlation with their movements. This allows those waiting to interact quickly with limited effort, making the amount of time one actually has to wait for the elevator inconsequential - which matters a great deal, since the elevator could be very close to arriving, or not close at all.

My part in the project has been developing the software - using Java/Processing for the graphics and motion tracking system, as well as C code for the Arduino microcontroller we are using. The motion tracking system does a combination of background subtraction and motion detection, which basically filters out all imagery that is not both moving and within a certain brightness range. The pixels that meet these prerequisites have their coordinates tracked and drawn over using a combination of small rectangles and lines drawn between each tracked pixel in sequential order. The result is an abstraction of imagery that is vastly different from real life, yet familiar enough to be predictably interacted with - at least in regard to positioning oneself to make an intended impact on a defined space.
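
A rough sketch of the filtering step only (the actual implementation is in the Processing source linked at the end of this post; frames here are grayscale numpy arrays and the threshold values are illustrative):

import numpy as np

def active_pixels(frame, background, prev_frame,
                  motion_thresh=30, bright_lo=40, bright_hi=220):
    f = frame.astype(int)                                        # avoid uint8 wraparound
    not_background = np.abs(f - background.astype(int)) > motion_thresh   # background subtraction
    moving = np.abs(f - prev_frame.astype(int)) > motion_thresh           # frame-to-frame motion
    in_range = (f > bright_lo) & (f < bright_hi)                          # brightness window
    ys, xs = np.nonzero(not_background & moving & in_range)
    return list(zip(xs, ys))   # coordinates to draw rectangles/lines between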


Fig 1.1: Although heavily abstracted, a face emerges through motion.


Fig 1.2: Rapid motion completely destroys all recognizable form, as intended.


Fig 1.3: Slight motion will in turn bring about only slight abstraction, to
the point of nearing a keyed cutout of the form.


Processing:
ProcessingMain
MotionCapture

Arduino:
cb_software

Wednesday, September 26, 2007

Shiver - Early Development

I have decided to go ahead and start dedicating a good amount of time toward one of the two major projects I hope to accomplish this semester at ITP. The project is tentatively called "Shiver" and it involves parsing seismic event data and using that content as a way of provoking both visual and physical simulations. Right now, I am concentrating on the visual side, as it is a nice place to begin prototyping the "how" as it relates to progress I will need to make later in development. However, one of the major tasks I wanted to immediately address was the ability to interface hardware I create with the software I am programming. I was able to get an analog signal to influence variables inside Processing using its serial library - with the pictures and code below:



Fig 1.1: Simple circuit hooked up to an Arduino and variable resistor
to test analog/serial communication.




Fig 1.2: Still image of Shiver at current development point


Link:
Main Demonstration Page

Processing - Source code:
parse_blocks
xml_parser

Arduino - Source code:
serial_communication

Saturday, September 22, 2007

Motion Tracking Glove Prototype

In my first attempt at pairing concepts from my Physical Computing and Computational Media classes, I decided that I would like to try my hand (pun fully intended) at developing a type of glove-based motion tracking system. Since I have a larger project in mind for Computational Media which involves tracking movement through a camera, this was a perfect opportunity to start roughing out a few of the pixel tracking / blob detection algorithms that would be easily reusable when I move on to the larger project. So, after an afternoon of traversing through pixels, I had the color tracker working on high densities of near-pure red:


Fig 1.1: Ellipse following over pixels being tracked
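
A rough sketch of the "near-pure red" test (the project code is in Processing; the channel thresholds here are illustrative):

import numpy as np

def red_centroid(rgb, r_min=180, gb_max=80):
    # rgb: height x width x 3 image array
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    mask = (r > r_min) & (g < gb_max) & (b < gb_max)   # high red, low green/blue
    if not mask.any():
        return None                                    # nothing to track this frame
    ys, xs = np.nonzero(mask)
    return (int(xs.mean()), int(ys.mean()))            # tracked pointer position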


So, with the tracking algorithms working well enough to run some practical tests, it was time to move on to the physical computing side of my endeavor. The wiring was a rather simple affair, as I really only intended to have two LEDs: one which was to be consistently on, and the other requiring a button press to light up. After a successful test with my Arduino code, I cut up some cardboard, wrapped two pieces of equal length in electrical tape, then sandwiched my board into an enclosure, as seen below:


Fig 1.2: An ugly yet effective circuit board enclosure



Fig 1.3: The enclosure opened - revealing the breadboard and
microcontroller.


I had bought some gloves from K-Mart of the variety where, if you flip down a flap, the gloves become mittens. The flap turned out to be the (near) perfect holder for my enclosure. The fit isn't exact - but just fine for the prototypical nature of this project. A few holes through the fabric and some snaking of the wire through the palm and fingers of the glove, and it was ready for action.


Fig 1.4: Motion tracking glove - ready for your local fashion show.



Fig 1.5: Glove plugged into a USB port and working

After putting on the glove and using it in combination with the Processing program I wrote, the test results indicate that better-lit rooms give the color tracking algorithm more high values than it can currently handle, which results in flickering of the tracking point. However, in moderately to dimly lit rooms, the glove works rather well. Below is a screen shot of the glove in action with the Processing program:


Fig 1.6: Motion/Color tracking algorithms paired with Processing graphics, pointer
being directed by glove use.



Processing - Source code:
motion_tracker
blinds
gui_components

Arduino - Source code:
motion_glove

Physical Computing - Lab #2

Moving forward in these adventures of an electrical nature, it was already time to dive into the programming side of physical computing. Lab #2 consisted of wiring up a simple circuit with a switch and a couple of LEDs, and being able to program an Arduino board to process simple logic through authored code. After working through the previous lab, I found the circuit-boarding side of this lab to be far easier, even with the addition of the Arduino microcontroller, which at the time was completely foreign to me. Below are a couple of pictures of the process, plus a link to the source code for my first Arduino program.



Fig 1.1: First success using Arduino


Fig 1.2: Using a simple switch logic gate


Fig 1.3: Six LEDs blinking one after another


Source Code: six_led_lights


Note: I have noticed that one of the (many) limitations of the programming language initially available for use with the Arduino microcontroller is the inability to directly get the size of an array through a method or any sort of standalone function. This becomes problematic when needing to accurately loop through an array of LED pins, or anything else stored in an array for that matter. A way around this limitation, without resorting to counting the elements manually and hard-coding that number to a variable, is to instead use the following code, as additionally outlined in my source code.

int led_pins[] = {3,4,5,6,7,8};             // digital pins driving each LED
int led_pin_count = sizeof(led_pins) / 2;   // total bytes / 2 bytes per int = element count


The 'sizeof' operator returns the number of bytes currently occupied by a given variable. Since an int on the Arduino is 2 bytes, dividing the value sizeof returns by 2 yields the number of elements in our array. (A more general form is sizeof(led_pins) / sizeof(led_pins[0]), which works regardless of the element type.)

Wednesday, September 19, 2007

Physical Interface: Cash Register

In all likelihood, I have walked more in these past couple of weeks than I had in any given week during my time as an undergrad. I am fortunate enough to live within walking distance (roughly 1.8 miles) of the Interactive Telecommunications Program, where I am undertaking my graduate research at NYU. More often than not, I drop into a local deli on my way to ITP to grab a vitamin water or perhaps one of those energy bars of varying brand. Upon selecting what is likely going to turn out to be lunch, I walk over to the cashier to pay my usual $4.00 in cash - a much more 'pleasant' affair now, after being glared at several times for sticking out a credit card. Besides the usual stiff attitude of the deli cashiers, I have always been taken by the speed at which the cashiers operate. Usually, the cash register itself is of the older variety - a 'do it yourself' style machine where prices are punched in rather than scanned. What is most interesting is that in my experience, the old-style cash registers are substantially faster, and their users seem far less likely to have a "someone get the manager over here" type of moment compared to a scan-based register with a touch screen. So, is this an instance of technology getting in the way rather than helping? Here is my completely research-free and high-level look:

First, there needs to be an acceptance of the fact that technology isn't always built for speed. Be it for all the right or wrong reasons, technology can often substitute what was traditionally a simple and fast process with more functionality, resulting in a steeper learning curve for the end-user(s). I have chosen the cash register as an example of a user interface to analyze because its advancement in features is an interesting mix of positives and negatives, and because it is a device that has taken on a new relationship with its users. The cash register has 'evolved' from an interface requiring one employee to operate, to needing an employee and a customer to interact with it, to most recently only needing a customer to interact with it - coming full circle. I won't focus on the third advancement, but I want to write a quick comparison of the first two user interfaces - the singular and dual-user cash registers.


Fig 1.1: Older model cash register: simple and fast
(Stoic NYC cashier attendant not pictured)


In my example of the dual-user register: I recently went to a Best Buy in lower Manhattan to buy a DVD (The Office: Season 3... a must have) and approached the cashier in front of a rather new-looking cash register system, complete with a customer input pad and stylus pen for a signature. In the end, this transaction took several times longer than an average transaction at the deli - with the very same number of items, using the exact same type of payment: a credit card (to the chagrin of the deli cashier). Generally with technology, if a user is being asked to allocate more time, then the end result is expected to be enhanced in return. My end result as a customer yielded no instant enhancements - I still received a receipt, my item, and a debit from my bank account in both transactions. As a user interface, the technologically enabled cash register at Best Buy required effort on my part in signing with the stylus (awkwardly, as always) and paying attention to which buttons to press to proceed. The cashier seemingly did less work in terms of button presses and analysis than the deli cashier - but the overall workload wasn't necessarily shortened, only distributed across multiple users (such as myself) and varying interfaces. Sure, there are a few hidden benefits of the high-end register, such as receipt caching for faster returns and cash-back options using a debit card - but what is the ratio between transactions and the collective number of returns and debit-card cash-back requests on a given day? I'd say it is very likely a substantially lopsided ratio leaning toward the quantity of transactions - yet the technological advances cater to the lower frequency of occurrences in the case of the cash register.


Fig 1.2: "Sign here and press button to continue, now
press this combination of buttons to confirm...hey,
pay attention or this transaction isn't happening."

My concluding thought is that you could make the case that technological advancement in the cash register earns its keep because it lessens the workload of the cashier - but then again, the cashier is getting paid to work less at the expense of my time, so as a customer I wouldn't exactly call that a positive step unless I were on the other side of the register.

Monday, September 17, 2007

ICM - Assignment 2: Mouse Tracker

This is just a quick demonstration of how shapes can change states based on mouse interaction - and continue animating after the interaction has taken place. This example (which I cannot embed in this blog because of its large size) is just the simple mouse-tracking animation part of a larger project I am undertaking, which involves using a connected camera to track movement of a certain color, then using that concentration of color as a pointer. More on this project later...



Mouse Tracker using Processing / Java

Tuesday, September 11, 2007

Physical Computing - Lab #1

So, this lab was my first venture into the depths of physical computing. I feel that overall, things went rather smoothly - escaping without any serious injuries or damage to private property. For my efforts, I was able to make multiple LEDs of various colors light up, as well as fade them in and out and use a switch to turn them on and off. Perhaps not too exciting to read about - but certainly exciting to accomplish on my end.



My first circuit.


Switch Activated LED



Fading LED using a potentiometer.


This lab served as a great introduction to some of the core basics/principles of electrical engineering. I am looking forward to starting to program microcontrollers - especially when it comes to pairing code logic with sensors. More to come.

Friday, September 7, 2007

ICM - Assignment 1: Circle Creator

I wrote up a little applet in Processing which simply creates circles of varying sizes at pseudo-random coordinates within a bounding box.

For this assignment, I wanted to use the time I had to write a couple of class objects I knew I could reuse later in this class (and in general) - in particular, a general-use button and a border frame. Clicking the button signals a callback of sorts to the main circle-drawing function - pretty straightforward, although not being able to pass a function as an argument to another function makes it less flexible than I'd like.



Handle Events:
Mouse click (within viewport box): erase viewport
Key 'r': Red scale
Key 'g': Green scale
Key 'b': Blue scale

Source code: circle_creator

Built with Processing