Thursday 10 December 2009

Mark Rothko

Mark Rothko's paintings have been described as total immersion. I became dissatisfied with making a simple drawing application; once mastered, it seemed rather shallow in its meaning. I have spent some time looking at ways of adding more dimensions to this project and trying some free-form thinking. One conclusion was that something unexpected should happen during the interaction between the person playing with the installation and the installation itself. In the first instance, I revisited work by the artist Camille Utterback. The way she subverts the process of drawing by doing counter-intuitive things, such as having lines move away from the point of drawing and older lines gently migrating to a predetermined position, was to some extent an inspiration for me to explore the possibilities further.
I also thought it might be interesting to have the canvas be the focus of the interaction, so that what the person does changes what happens on the screen before them. To this end I am developing an application based on the paintings of Mark Rothko. The idea is that a person's movements in front of the 'canvas' will trigger the images to change and morph into new configurations, which is something that Rothko's paintings do on prolonged viewing. I want to end up with an immersive and evocative environment which is controlled and changed by the users. This is not to imply that it won't surprise and hopefully delight. I also want to include a kind of soundscape that can envelop the space, and may use a mixture of ambient sounds, perhaps alongside some words from Rothko himself.
The demo below shows the first steps. There is no cursor that you can see, but if you make movements some things will begin to happen.

Tuesday 24 November 2009

filters

Here is what I have been working on. This is the result of a lot of experimentation with filters applied through BitmapData objects. The drawing object itself is a movie clip which has filters applied to it.
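For anyone curious about the general shape of this technique, here is a minimal, hedged sketch: draw the clip into a BitmapData and run a filter over the pixels. The clip name myDrawing matches the one used elsewhere in this blog, and the blur values are arbitrary.

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.filters.BlurFilter;
import flash.geom.Matrix;
import flash.geom.Point;

// Draw the movie clip into a BitmapData object...
var canvasData:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight, true, 0x00000000);
canvasData.draw(myDrawing, new Matrix());

// ...then run a filter over the pixels in place.
canvasData.applyFilter(canvasData, canvasData.rect, new Point(0, 0), new BlurFilter(8, 8, 3));

// Display the filtered result.
addChild(new Bitmap(canvasData));
```

The nice thing about going through BitmapData rather than setting filters on the clip itself is that the filtered pixels become part of the canvas, so effects can accumulate over time.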

Tuesday 10 November 2009

Sixth Sense at MIT labs

'The SixthSense prototype is comprised of a pocket projector, a mirror and a camera. The hardware components are coupled in a pendant like mobile wearable device. Both the projector and the camera are connected to the mobile computing device in the user’s pocket. The projector projects visual information enabling surfaces, walls and physical objects around us to be used as interfaces; while the camera recognizes and tracks user's hand gestures and physical objects using computer-vision based techniques. The software program processes the video stream data captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tip of the user’s fingers using simple computer-vision techniques. The movements and arrangements of these fiducials are interpreted into gestures that act as interaction instructions for the projected application interfaces. The maximum number of tracked fingers is only constrained by the number of unique fiducials, thus SixthSense also supports multi-touch and multi-user interaction.'
http://www.pranavmistry.com/projects/sixthsense/index.htm



'But the current prototype promises to be a bit more consumer friendly. It consists of a small camera and projector combination (about the size of a cigarette pack) worn around the neck of the user. An accompanying smartphone runs the SixthSense software, and handles the connection to the internet.'
http://news.bbc.co.uk/1/hi/technology/7997961.stm
The miniaturization of the camera/projector means that any surface can be used as the display. Presumably the projector adjusts itself depending on its distance from the surface being projected upon, and presumably it is also able to focus on a given surface, again depending on the distance. The cell phone runs the software that motion-tracks the colored fingers. Using GPS technology and internet search, the software becomes exponentially extended.

Thursday 5 November 2009

Is it painting or an action?

What was to go on the canvas was not a picture but an event.
The painter no longer approached his easel with an image in his mind; he went up to it
with material in his hand to do something to that other piece of material in front of him.
The image would be the result of this encounter.

Many of the painters were ‘Marxists’ (WPA unions, artists’ congresses); they had been
trying to paint Society. Others had been trying to paint Art (Cubism, Post-Impressionism)
—it amounts to the same thing.
The big moment came when it was decided to paint . . . just to PAINT. The gesture on the
canvas was a gesture of liberation, from Value—political, aesthetic, moral.
The American Action Painters
Harold Rosenberg
Following on from this approach, see the video below. It was written in Processing, using ultra-violet light in the paintbrush and touch pads in the color-change pots. Interestingly, some of the brush strokes originated in Flash and were adapted. The link to 'splatter' is here:
http://stamen.com/projects/splatter

Tuesday 3 November 2009

James Alliban


Virtual Ribbons from James Alliban on Vimeo.

I will be attending a talk on Nov 11th by James on Augmented Reality, which includes motion detection techniques. His blog and work can be found at http://jamesalliban.wordpress.com
Information on how to register for the talk is also on his site.

Wednesday 28 October 2009

changing stroke of drawing

I kind of gave up on the color picker component for the time being and decided to work on a simpler change: changing the line style and filter, activated by a 'hot' movie clip.
I went back to the work done by Dan Zen, because he has an example of a static button that is changed by touching it. See the link below for a demonstration; I have not included the video here because I already posted it in an earlier entry.
http://ostrichflash.wordpress.com/video/
Through Dan Zen's work I gained 'proof of concept', but I was fairly sure that instead of using his classes, which replace the usual mouse events such as mouseOver, mouseOut etc., I could use hitTest, since I no longer have a 'mouse'. My reasoning here is that since I am only starting to learn AS3, it is easier to use a function that I am already familiar with, and the code is basically the same in AS2 and AS3.
As I mentioned before, I have attached the drawing api (graphics.lineTo and moveTo) to a movie clip called myDrawing.
I therefore built another movie clip called myButton and did a hitTest between the two. Since I want the hitTest to remain 'true' I did not include an 'else' statement. I then made another movie clip called redDrawing and assigned a different filter and line style to it. The function that calls the hitTest then attaches redDrawing as a child if the hitTest is true.

Also, just a quick note to say that I put a round white circle on the cursor to make it easier to see where it is; it will be removed in the final application.

The code follows:

import flash.display.MovieClip;
import flash.events.Event;
import flash.filters.DropShadowFilter;

myDrawing.addEventListener(Event.ENTER_FRAME, changeStroke);
function changeStroke(e:Event):void {
    if (myDrawing.hitTestObject(myButton)) {
        trace("hit");
        var redDrawing:MovieClip = new MovieClip();
        // note: in AS3 the lineStyle alpha argument runs 0-1, not 0-100 as in AS2
        redDrawing.graphics.lineStyle(30, 0xff6347, 0.5);
        redDrawing.graphics.moveTo(myFairy.x + 2, myFairy.y + 2);
        redDrawing.graphics.lineTo(myFairy.x, myFairy.y);
        var dropShadow:DropShadowFilter = new DropShadowFilter();
        dropShadow.color = 0x000000;
        dropShadow.blurX = 10;
        dropShadow.blurY = 10;
        dropShadow.angle = 0;
        dropShadow.alpha = 0.5;
        dropShadow.distance = 10;
        var filtersArray:Array = new Array(dropShadow);
        redDrawing.filters = filtersArray;
        addChild(redDrawing);
    }
}

And here is the result:



As you can see the hitTest works, but there are a few problems still to resolve. Firstly, it seems like the myDrawing movie clip is still there, even though it is barely visible. Secondly, and more frighteningly, it is now clear that the movie clip that holds the API is creating a new drawing every time it enters a frame, so that the drawing is contained only within the movie clip. Since I need the drawing to be a continuous line, this will not suffice. I am going to look back through my previous API files and see what I can come up with. I am, however, happy with the strange effects that are produced with this current method, and am wondering if there is a way of increasing and decreasing the size of the blobs as they go away from or get nearer to the center of the drawing environment, so that they will become like 3D objects. Given that there is now a z co-ordinate available in Flash, it might be worth a little time experimenting....
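If I do try the pseudo-3D idea, one rough sketch would be to map the cursor's distance from the stage centre to the stroke thickness. All the names here are mine (myCursor, myDrawing) and the distance-to-thickness mapping is arbitrary:

```actionscript
import flash.geom.Point;

// Map the cursor's distance from the centre of the stage to a stroke
// thickness, so marks shrink towards the centre - a rough depth cue.
var center:Point = new Point(stage.stageWidth / 2, stage.stageHeight / 2);
var maxDist:Number = Math.sqrt(center.x * center.x + center.y * center.y);

var dx:Number = myCursor.x - center.x;
var dy:Number = myCursor.y - center.y;
var dist:Number = Math.sqrt(dx * dx + dy * dy);

// Thickness runs from 5 at the centre up to 40 at the corners.
var thickness:Number = 5 + 35 * (dist / maxDist);
myDrawing.graphics.lineStyle(thickness, 0xff6347, 0.5);
```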

Sunday 25 October 2009

color picker

Today I have been trying to integrate a color picker into the application so that the line can change color. I am now committed to using AS3, which offers a color picker component. I managed to figure out how to include it in the application and to target the drawing api movie clip (called myDrawing) as the beneficiary of the color changes.
Here is the code:
// remember to import the classes!!
import fl.events.ColorPickerEvent;
import flash.geom.ColorTransform;

cpColor.addEventListener(ColorPickerEvent.CHANGE, changeColor);
function changeColor(evt:ColorPickerEvent):void {
    var newColorTransform:ColorTransform = myDrawing.transform.colorTransform;
    newColorTransform.color = evt.color;
    myDrawing.transform.colorTransform = newColorTransform;
}
Unfortunately it doesn't quite work, because in order to activate the color picker it has to be 'clicked', which is not possible for a cursor that is being controlled by motion detection. So although I know that in theory it will work, I need to figure out how to make it activate using a mouse-over event instead. Progress hopefully will follow soon.
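One untested idea: if I remember rightly, the ColorPicker component has an open() method, so the palette could be opened when the tracked cursor overlaps the component instead of waiting for a click. The cursor clip name myCursor is mine:

```actionscript
import fl.controls.ColorPicker;
import flash.events.Event;

// Untested sketch: open the picker's palette when the motion-tracked
// cursor clip (myCursor) overlaps the component, rather than on a click.
addEventListener(Event.ENTER_FRAME, checkPicker);
function checkPicker(e:Event):void {
    if (myCursor.hitTestObject(cpColor)) {
        cpColor.open();   // show the swatch palette
    }
}
```

Picking an actual swatch without a click would still need its own hitTest logic, but this would at least get past the activation problem.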

Friday 23 October 2009

Myron Krueger

One of the earliest examples of motion detection is the work of Myron Krueger. He graduated in the sixties and, remarkably, was working with 'responsive environments' (his term) that responded to people's movements without the use of any special gadgetry (no gloves or headgear). He used a computer, video camera and projector in exactly the way that artists and game designers are doing now. However, what he didn't have was a programming language, so he worked in machine-level language without an operating system 'in the way', as he puts it. In 'Video Place' (1988) he is using techniques and conceptual frameworks that artists today are mirroring.
Krueger's work dates back almost forty years and is credited as the pioneering start of virtual reality. It is not surprising that responsive environments would lead to complete virtual environments, but perhaps it is the play between the so-called 'real' and 'virtual', that of natural computing, that has seen a renewed interest in Krueger's work. The idea that we will one day live in virtual reality has lost some of its charm, partly as a result of the actualisation of virtuality being assimilated into our 'real' world.
Here is Krueger's description of 'Video Place':

'In the installation, the visitor faces a video-projection screen. A screen behind him is backlit in order to produce a high contrast image for the camera and allow the computer to distinguish the visitor from the background.
Interesting that he backlights the screen behind the visitor... I will experiment with this!! In the other project I did, I back-projected onto a touch screen. It was all right, but the image was desaturated. I would like to find a way of front-projecting where the image does not get interrupted by a person moving in front of it. Ideas, anyone??
The visitor’s image is then digitized to create a silhouette and processors can then analyze its posture and movement, and its relationship to other graphic elements in the system. The processors can then react to the movement of the visitor and create a series of reactions, either visual or auditory. Two or more environments can also be linked to the system'. (Andrew Hieronymi, ND)
http://classes.design.ucla.edu/Winter04/256/projects/andrew/report.html









Tuesday 20 October 2009

Drawing API update

I did some experiments with attaching a movie clip to use as a drawing mechanism. Below is a link to an inspiring tutorial, which is in AS2. My idea after seeing this was to grab a paint or charcoal mark from Corel Painter and use it instead of the flower. The problem with this approach is that each of the marks is a blob.
http://www.gotoandlearn.com/play?id=16
Here is the swf that I made. CLICK ANYWHERE BELOW and you will see that you can draw with the mouse. The problem with this experiment is that I want the line to be fluid and not blobs, although they are a very interesting and unusual drawing medium. Also, this file is AS2, and I am unhappy with the motion detection results that I could achieve in AS2, so I have decided to go with AS3 (for now). This gives me the opportunity of learning some AS3, which will probably render AS2 obsolete sooner or later.


The way forward is to make use of the filters that are available for Flash Player 10. Adobe has extended the classes so that it is possible to add blur effects, drop shadows and the like to movie clip objects. I have taken the original motion detection files, which have a simple movie clip that follows the motion, and then added a drawing API to the movie clip. This was problematic insofar as I had to find resources that explained how to make the drawing API work in AS3 (which I did). In fact the code is similar to AS2, except that the drawing calls go through the display object's graphics property, with the lineTo and moveTo calls referencing it as follows:

import flash.display.MovieClip;

// on the main timeline; note that in AS3 the alpha argument runs 0-1
graphics.lineStyle(50, 0xbb00ff, 0.5);
graphics.moveTo(myCursor.x + 2, myCursor.y + 2);
graphics.lineTo(myCursor.x, myCursor.y);


Once I had this code working, the next step was to figure out how to change the properties of the line styles, the whole point of this drawing app is to make different marks.
It turned out that filters can only be attached to a movie clip, not directly to the lineStyle, so I 'grew' a new movie clip and attached the drawing API to it,
as follows:
var myDrawing:MovieClip = new MovieClip();
addChild(myDrawing);
myDrawing.graphics.lineStyle(50, 0xbb00ff, 0.5);
myDrawing.graphics.moveTo(myCursor.x + 2, myCursor.y + 2);
myDrawing.graphics.lineTo(myCursor.x, myCursor.y);

Once the API is attached to myDrawing, it is then possible to add properties to myDrawing, and therefore to the API,
as follows:
var filterList:Array = myDrawing.filters;
filterList.push(new BlurFilter(30,30,20));
myDrawing.filters = filterList;

which adds blur. The next step is to see how this works once it is projected onto a big screen, because I have the feeling that the movement will be affected once the whole thing is scaled up.
For the time being here is the swf of where I am at.

Thursday 15 October 2009

thinking out loud

I am trying to think about how to make the drawing API work so that it can create lines of my own design. Somehow I am thinking that I should be able to make a movie clip that moves to a new location but leaves a trace of itself on each frame it takes to get there. One problem with this approach is that it might be extremely heavy and slow everything down.
Another approach is to try to figure out how to mess around with the lineStyle, but my guess is that it is far too advanced from a programming point of view to attempt. I will do some research to see if anyone else has done so.
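On the weight problem with the first approach: one cheap way to sketch the 'leave a trace' idea without piling up movie clips is to stamp the clip into a single BitmapData each frame. The names brush and trail here are mine:

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.events.Event;
import flash.geom.Matrix;

// One bitmap holds every stamp, so the display list stays light
// no matter how long the trail gets.
var trail:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight, true, 0x00000000);
addChild(new Bitmap(trail));

addEventListener(Event.ENTER_FRAME, stampTrail);
function stampTrail(e:Event):void {
    var m:Matrix = new Matrix();
    m.translate(brush.x, brush.y);
    trail.draw(brush, m);  // stamp the brush clip at its current position
}
```

Since the stamps are flattened into pixels, thousands of frames of trace cost no more than one, which might answer the performance worry.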

Tuesday 13 October 2009

big shout out to

Dave Stewart at metanurb.
http://www.metanurb.com/
You can see his work below. I haven't seen anything that comes close to the precision with which he has the motion detection working. He was very kind and sent me his .fla files for one of the games, and told me to investigate infra-red cameras, which relates to the work of Johnny Chung Lee at Carnegie Mellon.
http://johnnylee.net/projects/wii/
Here is the game that Dave made.



Josh
http://hotarubug.wordpress.com/
is being very helpful. He sent me material on back projection and a great friends of ED book on AS3, Foundation ActionScript 3.0 Animation: Making Things Move!, which has a section on drawing with the mouse that I used to get the motion detector to display in the example I posted yesterday.
Here is the final outcome of Josh's work. 

Jason Bruges studio

Jason Bruges Studio is a hybrid architectural/interactive art/lighting/environment design practice. I found an article in Wired magazine:

The Art of Surveillance

By Hugh Hart
11.30.07

London Bridge

It's not falling down. Instead, the storied span has been rigged with motion sensors. "We monitored all the movement of people going across London Bridge and played it back as a matrix of colors on the top truss of Tower Bridge a few blocks away," says designer Jason Bruges. The installation also detects every phone with a visible Bluetooth connection and projects a color unique to the Bluetooth address.



Jason Bruges does a lot of work with motion detection; he is speaking here about his contribution to the Pandamonia project. This studio proves the potential for marketable products, and given that clients are now the stakeholders of technology, it is not so surprising that the Tower Bridge project was supported. (I do not know who the client was, but it could easily have been a mobile phone company or a Bluetooth provider.) The demand for complex interactive work is demonstrated by the success of this company. In an interview Jason Bruges stated that he didn't have a job description. This area is so new that old forms simply do not fit any longer.



EyeCon

EyeCon is a software application developed by Frieder Weiss, working with Chunky Move, an Australian dance company that uses interactive techniques in their performances. The software uses motion tracking and is designed around creating 'hotspots' within the stage area which then trigger events such as sounds, film, or images (and Flash). I downloaded the software and played around with it. As a rapid development tool for another project I finished, which I want to make into a projected installation, it would be ideal, and I could probably get it to work in a day or two. Theatre and dance companies such as Chunky Move and TPO are producing interactive experiences that are at the forefront of motion detection.

who's doing what and why

What is going on out there with motion detection applications? In researching how to do this project, I found there are loads and loads of people experimenting with it and putting it to all kinds of uses. First, some of the technologies and approaches used: there is a C# environment called AForge.NET, which is open source.
see
http://www.aforgenet.com/articles/hands_gesture_recognition/ 
Since this is a C# programming approach, it was not going to work for my project,
but it might be useful for anyone interested in C# to look at
http://www.codeproject.com/KB/audio-video/Motion_Detection.aspx

as the code is posted.

Another approach is Processing, which looks totally cool and which I have already heard about; it is completely open source and non-proprietary (unlike Flash). However, since I am working in Flash, this had to be rejected.
See Myron, which is at
http://webcamxtra.sourceforge.net/index.shtml
Here is some work that really blends the real with the virtual, using the age-old shadow game, hugely embellished. Built using Processing, it shows an exceptional understanding of the potential of natural movement enhanced by digital media. The trend toward using digital media to extend the capacities of human movement is what interests me. Arthur C. Clarke points out that any sufficiently advanced technology is indistinguishable from magic.

Monday 12 October 2009

motion tracking and drawing together

I am in the midst of trying to get to grips with AS3 in order to use the ostrich files by Dan Zen.
I have managed to get a cursor to follow the motion, but can't yet get it to draw. I have sent out some cries for help, and hopefully that, alongside bashing away at the code, will produce results. I have completed yet more tutorials on the drawing API in AS3, but still can't understand how to integrate it. A most frustrating couple of days have passed.
....Isn't it amazing that once you give up and admit you failed, you somehow find the answer? It's very primitive and the lines are not ideal, but it does work.

All I had to do was add the graphics.lineTo action to the cursor. Now I will have to figure out how to make the line drawing less jagged.
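One common trick for the jaggedness, sketched here with my own names (myCursor is the tracked cursor clip), is to draw quadratic curves through the midpoints of successive cursor positions instead of straight segments:

```actionscript
import flash.display.MovieClip;
import flash.events.Event;

// Curve to the midpoint of the last two cursor positions, using the
// previous point as the control point - this rounds off the corners.
var myDrawing:MovieClip = new MovieClip();
addChild(myDrawing);

var lastX:Number = myCursor.x;
var lastY:Number = myCursor.y;
myDrawing.graphics.moveTo(lastX, lastY);

addEventListener(Event.ENTER_FRAME, drawSmooth);
function drawSmooth(e:Event):void {
    var midX:Number = (lastX + myCursor.x) / 2;
    var midY:Number = (lastY + myCursor.y) / 2;
    myDrawing.graphics.lineStyle(4, 0xbb00ff, 1);
    myDrawing.graphics.curveTo(lastX, lastY, midX, midY);
    lastX = myCursor.x;
    lastY = myCursor.y;
}
```

This is only a sketch; averaging the cursor position over a few frames first would probably help too, since motion detection data is noisy.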

Wednesday 7 October 2009

Motion detection Project 7/10/09

Today was the first proper day of researching motion detection using Flash. I will start with the best first.
Dan Zen: the name says it all. He is currently a professor of Interactive Multimedia at Sheridan in Oakville, Canada, running a post-grad course in interactive media.
His site is
www.danzen.com

Dan Zen's blog site, which holds the crown jewels (i.e. source code and video guidance about what he is doing and how), is at
Then navigate to the ostrich section where you will find this video amongst other things.



This is the only helpful code source that I found that attaches a cursor to the x and y co-ordinates of the motion detection. More than that, he also uses customized cursors, which are pretty cool. Anyway, there is a drawback, which is that it is all in AS3, which I know nothing about, since all my work thus far is in AS2.
great.
I did discover though that AS3 is way faster at processing so if I can nail it the results should be better.
I also spent $1.98 on a video tutorial which included the source files, which was not that helpful because it didn't take the application far enough. It was a repeat of what I had already found at such places as soulwire
see
http://blog.soulwire.co.uk/flash/actionscript-3/webcam-motion-detection-tracking/
Justin Windle has kindly made the Flash source code available for download in both AS2 and AS3.
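For my own reference, the core of the webcam motion detection technique in examples like these boils down to frame differencing. Here is a rough, untested sketch in AS3; all the names are mine, the threshold value is arbitrary, and myCursor is the clip that follows the motion:

```actionscript
import flash.display.BitmapData;
import flash.display.BlendMode;
import flash.events.Event;
import flash.geom.Point;
import flash.geom.Rectangle;
import flash.media.Camera;
import flash.media.Video;

// Attach the webcam (getCamera returns null if none is present).
var cam:Camera = Camera.getCamera();
var video:Video = new Video(320, 240);
video.attachCamera(cam);

var now:BitmapData = new BitmapData(320, 240);
var before:BitmapData = new BitmapData(320, 240);

addEventListener(Event.ENTER_FRAME, detectMotion);
function detectMotion(e:Event):void {
    now.draw(video);
    // Difference against the previous frame: unchanged pixels go black.
    now.draw(before, null, null, BlendMode.DIFFERENCE);
    // Keep only pixels that changed by more than the threshold.
    now.threshold(now, now.rect, new Point(0, 0), ">", 0xFF202020, 0xFFFFFFFF, 0x00FFFFFF, true);
    // The bounding box of the remaining pixels is where the motion is.
    var area:Rectangle = now.getColorBoundsRect(0xFFFFFFFF, 0xFFFFFFFF);
    if (area.width > 0) {
        myCursor.x = area.x + area.width / 2;
        myCursor.y = area.y + area.height / 2;
    }
    before.draw(video);  // remember this frame for next time
}
```

The real implementations are more refined than this, but every one I have looked at is some variation on comparing the current camera frame with the previous one.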

I am beginning to think about how all this is going to work with a large projected image and a camera. I took another look today at Johnny Lee's work on the Wii remote (wiimote) and whiteboard, where he uses only the infra-red from the wiimote and built his own LED pen, which all sounds simple enough. As you will see from the video, it is really impressive stuff; however, the hard bit is the C# code working behind the scenes. Since my project is meant to be about extending my Flash skills, it is a bit of a tangent. Also, he is using really definite tools (the LED pen), and I want to produce something that does not depend on having a special pen, unless the pen is not obviously special, i.e. it looks and behaves like a normal pen.



Also, I have found only a very few examples of people using back-projection, and I am wondering why that is, and whether I could get away with a projector mounted really high above the head of the user, so that they would not interfere with the projection.

  • I am using Scrum PM to manage this project with a team of one (laughable, but there you go). I allocated half an hour a day to update this blog, so I might have to steal time from somewhere else, i.e. the research paper. I used a bullet here to reflect corporate mentality.


Monday 5 October 2009

Masters project: Interactive media: University of Hertfordshire: Tina Burton
Motion tracking and Flash.
Is it possible to track a person's hand movements, feed them into Flash, track the x and y co-ordinates, attach a drawing API to the tracker, and produce a drawing based on those movements?