
Monday, December 10, 2007

Software for Physical Display Project

Group: William McDonald, Sunghun Kim, Matt Parker

Further documentation:
Sunghun with videos / images
Matt on construction / mechanics

For our group project, we wanted to develop a physical system to represent graphical depth data. We decided to start with a collection of sine and noise waves, since using real depth data only makes practical sense with a 3D grid.

Using Processing, a straightforward GUI was created featuring four buttons, with a graphical representation of the waves dynamically drawn to the right of the buttons. The fourth button allows the user to paint his/her own wave (or any shape, for that matter), and the software will generate a visualization and physical output for the painted image (a minimal sketch of the canvas follows the figures below).



Fig 1.1: Drawing canvas allows user to draw images to physically visualize



Fig. 1.2: Visualization of user input wave drawing
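
For a sense of how little code the canvas itself requires, here is a minimal Processing sketch of the paint-canvas idea - illustrative only, not the project's actual source:

// Minimal paint-canvas sketch: dragging the mouse draws in black on white.
void setup() {
  size(400, 200);
  background(255);
}

void draw() {
  // All drawing happens in mouseDragged(); draw() keeps the sketch alive.
}

void mouseDragged() {
  stroke(0);
  strokeWeight(4);
  line(pmouseX, pmouseY, mouseX, mouseY);
}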


To sample each wave, an algorithm was written that samples across the width of a given image at a rate proportional to the resolution of the physical system; in our case, 20 samples. At each sample, the program checks how far down the image (in pixels along Y) it travels before a color other than white is found. Once a color is found, the value of that sample is computed with the simple equation:

sample_value = y_coord / (image_height / max_value)
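
In Processing, the full sampling routine might look roughly like this - a sketch with assumed names and constants, not the project's actual source:

int NUM_SAMPLES = 20;   // resolution of the physical system
int MAX_VALUE = 180;    // e.g. a servo's range in degrees

int[] sampleWave(PImage img) {
  int[] samples = new int[NUM_SAMPLES];
  int step = img.width / NUM_SAMPLES;
  img.loadPixels();
  for (int i = 0; i < NUM_SAMPLES; i++) {
    int x = i * step;
    int y = 0;
    // Walk down column x until a non-white pixel is found.
    while (y < img.height - 1 && brightness(img.pixels[y * img.width + x]) > 250) {
      y++;
    }
    // sample_value = y_coord / (image_height / max_value)
    samples[i] = int(y / (float(img.height) / MAX_VALUE));
  }
  return samples;
}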




Fig 1.3: Graphical representation of sampling method on wave images


To feed this data into the servo motor array, we decided to use four Arduino microcontrollers connected via USB to the computer running the software, with each Arduino effectively controlling five servos:



Fig 1.4: Computer / Arduino / Servo array configuration
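
On the host side, the fan-out could look roughly like the following Processing sketch; the port ordering and the one-byte-per-servo protocol are assumptions for illustration:

import processing.serial.*;

Serial[] arduinos = new Serial[4];

void setup() {
  // Assume the four Arduinos enumerate as the first four serial ports.
  for (int i = 0; i < 4; i++) {
    arduinos[i] = new Serial(this, Serial.list()[i], 9600);
  }
}

// Send 20 sample values (0-255), five to each Arduino, one byte per servo.
void sendSamples(int[] samples) {
  for (int i = 0; i < 20; i++) {
    arduinos[i / 5].write(samples[i]);
  }
}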


Wednesday, October 31, 2007

Shiver - Final Proposal

A few posts back, I wrote about developing a seismic data visualizer for my ICM final. This is still the case; however, the details of the project's execution are now very different. The most notable change involves adding several physical components, meaning this project will now serve as both my ICM and my Physical Computing project for this semester. Excited? Splendid - now on to the details...


Technical
On the ICM front, I decided a week ago that I would in fact use Python as my primary programming language, as opposed to my original choice of Java. Why? Well, as I mentioned in the past, Python is slow for intensive computation in comparison to compiled languages such as C/C++ or Java. However, where Python shines is in its fast development time and its flexibility in gluing various components together effectively. Without a doubt, hitting the visual high bar I am aiming for with this project's graphics will put an enormous strain on Python; however, much of that strain can ideally be offloaded to C code if necessary. I will be using an OpenGL binding called PyOpenGL, a mature library with strong documentation, so I should have learning support when I need it. For the GUI components, I will be using the wxPython toolkit, which allows for native OS widget use, so the program will automatically reflect the standard appearance of whatever OS it is run on. Very slick.



Fig 1.1: Shiver in early development, click to enlarge.


Representation
As for how the data will be represented, it will be done on a 3D globe, much like applications such as Google Earth and NASA's World Wind. When the application gathers seismic event data from the USGS, each event will be mapped to the globe according to its latitude and longitude values. Based on an event's Richter scale value, the event will be visualized along the surface normal at its lat/long position. Events can be selected through an easily organized directory tree, or on the globe itself. When selected, various attributes are displayed, as well as a simulation option - which leads us to the physical computing side of the project...
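
The lat/long mapping itself is a standard spherical-to-Cartesian conversion. The project code is in Python, but the math is identical in any language; here it is as a small Processing-style function, with the globe radius and axis convention as assumptions:

// Convert latitude/longitude (in degrees) to a point on a sphere of radius r.
// Assumes Y is the up axis; adjust to match the actual globe's orientation.
PVector latLongToXYZ(float lat, float lon, float r) {
  float phi = radians(90 - lat);   // polar angle measured from the north pole
  float theta = radians(lon);
  float x = r * sin(phi) * cos(theta);
  float y = r * cos(phi);
  float z = r * sin(phi) * sin(theta);
  return new PVector(x, y, z);
}

Conveniently, the surface normal along which an event would be extruded is just this position vector normalized.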


Physical
After sitting down with fellow ITP'er Sunghun Kim, we decided to team up and pursue a physical representation of seismic data that can be partnered with the software I am developing for my ICM final. The idea involves creating a 2D representation of the seismic events, dictated by the Richter scale value of each event. This will ideally result in a mesh formed by rods pushing and pulling at a rubber-like surface. Each rod will be driven by an average of pixel color values, determined by the ratio between the rod count and the image resolution (a sketch of that averaging step follows). Once a 2D simulation is established, it will ideally branch out to a 3D grid - but that is likely for another semester.
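
As a rough sketch of that averaging step (assuming a single row of rods and a hypothetical rodCount - not the final implementation):

// Average image brightness per rod by splitting the image into column strips.
float[] rodValues(PImage img, int rodCount) {
  float[] rods = new float[rodCount];
  int blockW = img.width / rodCount;
  img.loadPixels();
  for (int r = 0; r < rodCount; r++) {
    float sum = 0;
    int count = 0;
    for (int x = r * blockW; x < (r + 1) * blockW; x++) {
      for (int y = 0; y < img.height; y++) {
        sum += brightness(img.pixels[y * img.width + x]);
        count++;
      }
    }
    rods[r] = sum / count;   // 0-255; map to rod travel as needed
  }
  return rods;
}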

Thursday, October 25, 2007

Elevator Previsualizer

This is a midterm project I worked on with a talented group of people in Physical Computing. Follow the link for more:

Elevator Previsualizer Main Site

Wednesday, October 17, 2007

Phys. Comp Midterm: Implementation / User Observations

For our midterm in Phys. Comp, my group decided to take on the task of breaking up the boredom of waiting for an elevator. Petra, Sunghun, and myself set up what we had completed of the project thus far for the first time - with mixed results. It is clear that we have a lot of implementation issues to resolve before we can get the type of user interaction data we need for the later stages of polishing usability.


Fig 1.1: Will (me) interacting with camera.

Implementation Issues/Observations

1. The projector, if angled from the ceiling, will display a distorted shape due to the light being shot at an angle.

2. The image being shot through the projector does not immediately fit in the space of the door. The video can be clipped, but the light (although black) still shows up.

3. Even if the projector is raised up to the ceiling, the light from the projector will be blocked by people standing directly in front of the door.

4. There is no (reasonable) way of preventing significant light pollution at any given time, because we are unable to strictly control the public environment, and we are bound by the collective lumen value the given projector can output.

5. We do not yet have a good solution for binding the light-blocking module to the projector – although this likely has a simple solution.

6. Due to the central and elevated position of both the camera and the projector, light/motion recursion through the camera is very likely to occur unless steps are taken to filter out an area – which has consequences likely to result in trade-offs too severe to accept.

7. The location of the crank has to be outside the field of the camera and projector. It is possible to have the crank located right in the middle, within the camera range but not obstructing the projector light - but this will mean that the user of the crank will be the focus of the image, which may create static results.

8. Due to the environment we are contained in, the projector will have to be turned sideways to establish the amount of vertical space needed to cover the door.


User Issues/Observations

1. Users are drawn to the computer if it is out in the open. They seem to glance at the projected image, then want to look “behind the curtain” at the computer screen. The computer should be hidden from view.

2. Users seem to want to view the projected image by being in front of it, just as people watch television.

3. People stepping out of the elevator (or stepping in) do not particularly enjoy being blasted in the face by the projector light. (This issue has since been solved, however.)

4. The moment seems fleeting: people interact, a result is presented, and then it is instantly replaced by a new result. This does promote a sense of real-time, but doesn't reward any single interaction.


Fig 1.2: Software cutting out background, leaving only the person displayed.


Fig 1.3: Petra using a crank which drives the projected image up and down.

Tuesday, October 9, 2007

Elevator Project: Motion Tracking

For my group project in Physical Computing, we have decided to address the boredom that inevitably occurs while people wait outside of elevators. Petra, Sunghun, and myself all agreed that a good way to break the static nature of waiting is to let people interact with something that produces immediate output in correlation with their movements. This allows those waiting to interact quickly with limited effort, making the actual wait time inconsequential - which matters a great deal, since a person arriving at the elevator has no way of knowing whether it is moments away or not close at all.

My part in the project has been developing the software: Java/Processing for the graphics and motion tracking system, as well as C code for the Arduino microcontroller we are using. The motion tracking system does a combination of background subtraction and motion detection, which filters out all imagery that is not both moving and within a certain brightness range (a simplified sketch follows). The pixels that meet these prerequisites have their coordinates tracked and drawn using a combination of small rectangles and lines drawn between each tracked pixel in sequential order. The result is an abstraction of imagery that is vastly different from real life, yet familiar enough to be predictably interacted with - at least as far as positioning oneself to make an intended impact on a defined space.
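
A simplified Processing sketch of that filtering step - the thresholds are placeholder assumptions, and the drawing of rectangles/lines is omitted:

import processing.video.*;

Capture cam;
int[] bg;                    // stored background frame
float MOTION_THRESH = 40;    // minimum brightness change to count as motion
float MIN_BRIGHT = 60;       // accepted brightness range
float MAX_BRIGHT = 220;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
  bg = new int[width * height];
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  loadPixels();
  for (int i = 0; i < cam.pixels.length; i++) {
    float b = brightness(cam.pixels[i]);
    float diff = abs(b - brightness(bg[i]));
    // Keep only pixels that are both moving and within the brightness range.
    pixels[i] = (diff > MOTION_THRESH && b > MIN_BRIGHT && b < MAX_BRIGHT)
              ? color(255) : color(0);
  }
  updatePixels();
}

void keyPressed() {
  arrayCopy(cam.pixels, bg);   // press any key to re-capture the background
}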


Fig 1.1: Although heavily abstracted, a face emerges through motion.


Fig 1.2: Rapid motion completely destroys all recognizable form, as intended.


Fig 1.3: Slight motion will in turn bring about only slight abstraction, to
the point of nearing a keyed cutout of the form.


Processing:
ProcessingMain
MotionCapture

Arduino:
cb_software

Wednesday, September 26, 2007

Shiver - Early Development

I have decided to go ahead and start dedicating a good amount of time toward one of the two major projects I hope to accomplish this semester at ITP. The project, tentatively called "Shiver", involves parsing seismic event data and using that content to drive both visual and physical simulations. Right now, I am concentrating on the visual side, as it is a nice place to begin prototyping the "how" for work I will need to do later in development. However, one of the major tasks I wanted to address immediately was the ability to interface hardware I create with the software I am programming. I was able to get an analog signal to influence variables inside Processing using its serial library - pictures and code below:
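
The Processing side of this serial test can be as small as the following sketch, assuming the Arduino writes one byte per analog reading:

import processing.serial.*;

Serial port;
int sensorValue = 0;   // variable influenced by the incoming analog signal

void setup() {
  size(200, 200);
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  while (port.available() > 0) {
    sensorValue = port.read();   // one byte, 0-255
  }
  background(sensorValue);       // make the value visibly drive something
}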



Fig 1.1: Simple circuit hooked up to an Arduino and a variable resistor
to test analog/serial communication.




Fig 1.2: Still image of Shiver at current development point


Link:
Main Demonstration Page

Processing - Source code:
parse_blocks
xml_parser

Arduino - Source code:
serial_communication

Saturday, September 22, 2007

Motion Tracking Glove Prototype

In my first attempt at pairing concepts from my Physical Computing and Computational Media classes, I decided to try my hand (pun fully intended) at developing a type of glove-based motion tracking system. Since I have a larger project in mind for Computational Media which involves tracking movement through a camera, this was a perfect opportunity to start roughing out a few pixel tracking / blob detection algorithms that would be easily reusable when I move on to the larger project. So, after an afternoon of traversing through pixels, I had the color tracker working on high densities of near-pure red:


Fig 1.1: Ellipse following over pixels being tracked
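
A stripped-down illustration of the red-tracking idea - the threshold values are assumptions, not those used in the project:

// Average the coordinates of strongly red pixels and circle the centroid.
void trackRed(PImage img) {
  img.loadPixels();
  float sumX = 0, sumY = 0;
  int hits = 0;
  for (int y = 0; y < img.height; y++) {
    for (int x = 0; x < img.width; x++) {
      color c = img.pixels[y * img.width + x];
      // "Near-pure red": strong red channel, weak green and blue channels.
      if (red(c) > 200 && green(c) < 80 && blue(c) < 80) {
        sumX += x;
        sumY += y;
        hits++;
      }
    }
  }
  if (hits > 0) {
    noFill();
    stroke(255, 0, 0);
    ellipse(sumX / hits, sumY / hits, 20, 20);
  }
}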


So, with the tracking algorithms working well enough to run some practical tests, it was time to move on to the physical computing side of my endeavor. The wiring was a rather simple affair, as I really only intended to have two LEDs: one consistently on, the other requiring a button press to light up. After a successful test of my Arduino code, I cut up some cardboard, wrapped two pieces of equal length in electrical tape, and sandwiched my board into an enclosure, as seen below:


Fig 1.2: An ugly yet effective circuit board enclosure



Fig 1.3: The enclosure opened - revealing the breadboard and
microprocessor.


I had bought some gloves from K-Mart of the variety where, if you flip down a flap, the gloves become mittens. The flap turned out to be the (near) perfect holder for my enclosure. The fit isn't exact, but it's just fine for the prototypical nature of this project. A few holes through the fabric and some snaking of the wire through the palm and fingers of the glove, and it was ready for action.


Fig 1.4: Motion tracking glove - ready for your local fashion show.



Fig 1.5: Glove plugged into a USB port and working

After putting on the glove and using it with the Processing program I wrote, testing indicates that brightly lit rooms give the color tracking algorithm more high values than it can currently handle, which results in flickering of the tracking point. In moderately to dimly lit rooms, however, the glove works rather well. Below is a screenshot of the glove in action with the Processing program:


Fig 1.6: Motion/Color tracking algorithms paired with Processing graphics, pointer
being directed by glove use.



Processing - Source code:
motion_tracker
blinds
gui_components

Arduino - Source code:
motion_glove

Physical Computing - Lab #2

Moving forward in these adventures of an electrical nature, it was already time to dive into the programming side of physical computing. Lab #2 consisted of wiring up a simple circuit with a switch and a couple of LEDs, and programming an Arduino board to process simple logic through authored code. After working through the previous lab, I found the circuit-boarding side of this lab far easier, even with the addition of the Arduino microcontroller, which at the time was completely foreign to me. Below are a couple of pictures of the process, plus a link to the source code for my first Arduino program.



Fig 1.1: First success using Arduino


Fig 1.2: Using a simple switch logic gate


Fig 1.3: Six LEDs blinking one after another


Source Code: six_led_lights


Note: I have noticed that one of the (many) limitations of the programming language initially available for the Arduino microcontroller is the inability to directly get the size of an array through a method or any sort of standalone function. This becomes problematic when needing to accurately loop through an array of LED pins, or anything else stored in an array for that matter. A way around this limitation, without counting the elements manually and hard-coding that number into a variable, is to use the following code, as additionally outlined in my source code.

int led_pins[] = {3, 4, 5, 6, 7, 8};        // digital pins driving the six LEDs
int led_pin_count = sizeof(led_pins) / 2;   // total bytes / 2 bytes per int = element count


The built-in operator 'sizeof' returns the number of bytes a given variable occupies. Since an int is 2 bytes on the Arduino's chip, dividing the value sizeof returns by 2 yields the number of elements in our array. (A more portable form is sizeof(led_pins) / sizeof(led_pins[0]), which works regardless of the integer size.)

Wednesday, September 19, 2007

Physical Interface: Cash Register

In all likelihood, I have walked more in these past couple of weeks than I had in any given week during my time as an undergrad. I am fortunate enough to live within walking distance (roughly 1.8 miles) of the Interactive Telecommunications Program, where I am undertaking my graduate research at NYU. More often than not, I drop into a local deli on my way to ITP to grab a vitamin water or perhaps one of those energy bars of varying brand. Upon selecting what is likely going to turn out to be lunch, I walk over to the cashier to pay my usual $4.00 in cash - a much more 'pleasant' affair now, after being glared at several times for sticking out a credit card. Besides the usual stiff attitude of the deli cashiers, I have always been taken by the speed at which they operate. Usually, the cash register itself is of the older variety - a 'do it yourself' style machine where prices are punched in rather than scanned. What is most interesting is that in my experience, the old-style cash registers are substantially faster, and their users seem far less likely to have a "someone get the manager over here" moment compared to a scan-based register with a touch screen. So, is this an instance of technology getting in the way rather than helping? Here is my completely research-free and high-level look:

First, there needs to be an acceptance of the fact that technology isn't always built for speed. Be it for all the right or wrong reasons, technology can often substitute what was traditionally a simple and fast process with more functionality, resulting in a steeper learning curve for the end user(s). I have chosen the cash register as a user interface to analyze because its advancing feature set is an interesting mix of positives and negatives, and because it is a device that has taken on a new relationship with its users. The cash register has 'evolved' from an interface requiring one employee to operate, to needing both an employee and a customer to interact with it, to most recently needing only a customer - now coming full circle. I won't focus on the third stage, but I do want to write a quick comparison of the first two: the single-user and dual-user cash registers.


Fig 1.1: Older model cash register: simple and fast
(Stoic NYC cashier attendant not pictured)


As my example of the dual-user register: I recently went to a Best Buy in lower Manhattan to buy a DVD (The Office: Season 3... a must-have) and approached the cashier in front of a rather new-looking register system, complete with a customer input pad and stylus for a signature. In the end, this transaction took several times longer than an average transaction at the deli - with the very same number of items and the exact same type of payment: a credit card (to the chagrin of the deli cashier). Generally with technology, if a user is asked to allocate more time, then the end result is expected to be enhanced in return. My end result as a customer yielded no instant enhancements - I still received a receipt, my item, and a debit from my bank account in both transactions. As a user interface, the technologically enabled register at Best Buy required effort on my part in signing with the stylus (awkwardly, as always) and paying attention to which buttons to press to proceed. The cashier seemingly did less work in terms of button presses and analysis than the deli cashier - but the overall workload wasn't necessarily shortened, only distributed across multiple users (such as myself) and varying interfaces. Sure, there are a few hidden benefits to the high-end register, such as receipt caching for faster returns and cash-back options with a debit card - but what is the ratio between total transactions and the collective number of returns and cash-back requests on a given day? I'd say it is very likely a substantially lopsided ratio leaning toward the quantity of transactions - yet in the case of the cash register, the technological advances cater to the lower-frequency occurrences.


Fig 1.2: "Sign here and press button to continue, now
press this combination of buttons to confirm...hey,
pay attention or this transaction isn't happening."

My concluding thought: you could make the case that technological advancement in the cash register earns its keep because it lessens the workload of the cashier - but then again, the cashier is being paid to work less at the expense of my time, so as a customer I wouldn't exactly call that a positive step unless I were on the other side of the register.

Tuesday, September 11, 2007

Physical Computing - Lab #1

So, this lab was my first venture into the depths of physical computing. I feel that overall, things went rather smoothly - I escaped without any serious injuries or damage to private property. For my efforts, I was able to make multiple LEDs of various colors light up, fade them in and out, and turn them on and off with a switch. Perhaps not too exciting to read about - but certainly exciting to accomplish on my end.



My first circuit.


Switch Activated LED



Fading LED using a potentiometer.


This lab served as a great introduction to some of the core basics and principles of electrical engineering. I am looking forward to starting to program microcontrollers - especially when it comes to pairing code logic with sensors. More to come.