
Tuesday, 9 June 2015

Research write-up

Over the course of my photomosaic project I have needed to do a bunch of research into the art form and the available software out there.

Most of the software/services that I found either didn't match images very well (see app. i. Accurate matching), only matched to a grid (see app. ii. IPA vs API), scaled everything down (see app. iii. Scaling up), or repeated images (see app. iv. How to avoid repeats).

I found someone who claims to be the "inventor" of the photomosaic here, and while, yes, he has photomosaics, they are all matched only to a grid.

I found a photomosaic stitching program called Mazaika, and while the feature list is quite impressive, it has less accurate matching than my software and only matches to a grid. It also uses a (terrible, IMO) feature to mask bad matches, where it colour-corrects the tile images towards the reference image.

I ran across an interesting method of grid mosaic tile matching using a "nearest neighbour" algorithm. It leverages machine learning to create a satisfactory result, though it matches to an approximate accuracy rather than an exact one. The creator also has a good workflow for generating tiles.

I've also read up on some of the other methods that people have used to create mosaics, both online and offline; most of them use only one technique and/or modify the source images rather than arranging them.

Here is a quick list of some of the options that I investigated:

  • http://www.theartoftylerjordan.com/make-an-easy-photo-mosaic-in-photoshop-2/
  • http://www.mazaika.com/
  • http://tvldz.tumblr.com/
  • http://www.photomosaic.com/portfolio.html
  • http://mosaically.com/
  • http://www.canvaspop.com/photo-mosaic
  • http://www.mosaizer.com/

Saturday, 2 May 2015

New AV work in progress

Now with 100% more screenshots!!!

Part of the work that I've been doing with John Mackey has involved programmatic music analysis, and one of the things that I started doing was experimenting with some older code to see some of the results I could get out of it.

One of the projects that I settled on first was a failed experiment to transpose my Squiddy-type things into 3D. Unfortunately, at the time I wasn't as experienced as I am now at manipulating VFX and using sound to control things, and so the experiment never left the "basic shapes" phase.

Now, however, I have the resources and skills to turn it into an actual work, and I've been able to leverage shaders and physics systems to produce a work that I'm really proud of.

The flecks in the "background" that you can see in the screenshots are illuminated by a flickering light at the top right (the light flickers at a random frequency); they all flow down and to the left, and are destroyed once offscreen. They can also occlude the main subjects.

The subjects themselves are actually a recursion tree, with each step linked to a physics object and mapped to the bins of a Fast Fourier Transform (linear, not logarithmic, unfortunately). The result of that mapping is that the 2 spheres representing each physics object "pop" with the music, i.e. they grow larger when their particular frequency is hit. All the subjects are allowed to go offscreen (to some degree) except for the primary subject. All the subjects have an emissive material bound to their wireframe to give them detail up close and a bit of a glow.
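
The mapping, roughly (a minimal openFrameworks-style sketch, not the project code; the bin count, gain, and easing constant here are placeholder numbers):

#include "ofMain.h"

// One sphere pair per FFT bin: each frame, read the spectrum and ease the
// sphere's scale towards a base size plus that bin's energy.
const int kNumBins = 64;  // linear bins, one per step of the recursion tree
std::vector<float> sphereScale(kNumBins, 1.0f);

void update() {
    float* spectrum = ofSoundGetSpectrum(kNumBins);  // current FFT magnitudes
    for (int i = 0; i < kNumBins; ++i) {
        float target = 1.0f + spectrum[i] * 8.0f;            // the "pop"
        sphereScale[i] += (target - sphereScale[i]) * 0.2f;  // ease back down
    }
}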

Apart from that, the depth-of-field shader's focus is linked to the head of the primary subject, and the aperture is set (small? large? it's just a number to me) so that there is only a narrow focal area.







Friday, 1 May 2015

Changed major video work

The work that I was previously doing for "Creative Possibilities in Video" was inspired by the book Flatland and Don Hertzfeldt's "Rejected" animations, however... I'm not doing that anymore.

The reasoning behind this is that, in my spare time, I've been making music videos for a jazz musician called John Mackey, using software that I've written. I now feel like these are becoming some of my stronger works for the year, so rather than continue an uphill battle pushing a time-consuming animation project, I decided to pivot and base the project on these works.

A sample of one of these works:


Unfortunately, Vimeo doesn't allow me to upload any more of these yet, so I am going to upload them at a rate of 1-2 per week.

Soon I will also publish a short demonstration of how I have been using this software, along with some of the common tweaks to the source that I've made during the production of these kinds of works.

An interview with John Mackey about the other half of these works will also follow.

Tuesday, 17 March 2015

Finally got the colours right

The image processing algorithm I was using was actually sorting everything with dissimilarity valued over similarity, which was not so good (duh). This was all due to my stupidity in reverse-sorting lists in the processing phase rather than the placement phase (essentially doing the right thing in the wrong place).

Either way, it's fixed now.
And I have another photo of Johnny Depp. Again, the full-size image breaks JPEG size limits, so this is a very scaled-down version:



EDIT:
I also now have a Nic Cage image (again scaled horribly):

Monday, 16 March 2015

New Nicolas Cage

Massive-scale flickr integration into get_images.py.
This image at full size breaks JPEG size limits, so this is a scaled-down version.


Wednesday, 11 March 2015

Depp

I pointed my program at flickr with the tag and search query of "Johnny Depp":


Tuesday, 24 February 2015

More images of Nicolas Cage

I've refined my method a little bit and created a couple of good images that I like enough.

It now fills in the background with unique images (as of the 25th of Feb this doesn't work correctly, due to upgrades to the algorithm that places the background).




 

These two images have been good tests so far.



This image is nearing the level that I want

Tuesday, 17 February 2015

Main work so far

I've been putting together a project using some image processing techniques to achieve a static image result.

I've also been using (whisper) image search engine scraping to download unique images from the web in bulk.

So far I have been able to come up with an example image of where I'm up to with this project:


As you can most likely see, I'm trying to recreate an image out of other unique images. 

My next step is to fill the background with images first before anything else.

Technical updates:
 - I want unique images, so I'm using perceptual hashing to check incoming images for uniqueness.
 - I was using a flat list of images until I realised that checking the hamming distance for uniqueness would be better suited to a hashmap, so I've built a new data structure based on the assumption that I'll be doing a hamming distance on the hashes in the hashmap (there's a sketch of the hash-and-compare step after this list).
 - I've got some ideas in mind for trying to fill in the remaining space on the image:
     - Grid the reference image into sections that each have the same number of pixels as the average number of pixels per image, place one image per area, and then do passes of area-per-image (it's currently doing multiple passes of area-per-image only).
     - Store all the placements that were performed during processing, sort them by score, then bump images that are close to or under higher-scored images down in score until they are sufficiently separated. This would be great with some way to balance separation against high score.
 - I should probably try using PyPy rather than the standard CPython interpreter I'm using currently, because while it's faster than doing this by hand, I am doing quite a large amount of processing (specifically image processing and list comprehensions) in a language that is notoriously slow.
 - I need to eventually get this program onto a computer with more than 10 GB of RAM, because I cannot scale images up as much as I want to: as soon as images get decompressed into memory (~50 MB per JPEG), things die.
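
The hash-and-compare step of the uniqueness check looks roughly like this. My implementation is in Python; this is a minimal C++ sketch of the same idea, and the 8x8 average hash and the distance threshold are illustrative choices, not my tuned values.

#include <bitset>
#include <cstdint>
#include <vector>

// 64-bit average hash: one bit per pixel of an 8x8 grayscale thumbnail.
uint64_t averageHash(const std::vector<uint8_t>& pixels) {  // 64 values
    unsigned sum = 0;
    for (uint8_t p : pixels) sum += p;
    uint8_t mean = sum / 64;
    uint64_t hash = 0;
    for (int i = 0; i < 64; ++i)
        if (pixels[i] > mean) hash |= 1ULL << i;
    return hash;
}

// Hamming distance: XOR the hashes and count the differing bits.
int hammingDistance(uint64_t a, uint64_t b) {
    return std::bitset<64>(a ^ b).count();
}

// An incoming image is unique if no stored hash is within the threshold.
bool isUnique(uint64_t hash, const std::vector<uint64_t>& seen, int threshold = 5) {
    for (uint64_t h : seen)
        if (hammingDistance(h, hash) <= threshold) return false;
    return true;
}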

Tuesday, 18 November 2014

Final Work Reflections

This project (now named Ansa++) has been a long and far-reaching one, which has required a bunch of skills: some that I already had, some that I had to learn.

I'll list the skills I had to learn first, and then the skills I brought over from other projects.

  • Objective-C
    • This language, while not fundamentally different from C++, has some complex interactions with UI elements that take some time to master (or at least to get functional).
    • I had a great many problems getting the UI to work, as the UI-language interface is confusing to someone who has not been brought up programming Macs.
    • Objective-C does memory allocation in a very different way to C++. This brought about many problems during my attempted use.
    • I have no mentor, most of the Obj-C that I know is learned from the documentation, which couldn't be more confusing (what the hell is a predicate?).
    • Next on the list for Obj-C: write an entire application in Obj-C.
  • Cocoa
    • A lot of my problems with learning this came from having no-one to teach me how to properly structure an app like this, and how to make sure that it is "production ready".
    • I had some issues trying to implement things like drag-and-drop and lists of videos, due to my lack of proper understanding of the interaction between Cocoa and Obj-C. I eventually settled on an NSArrayController (an inbuilt helper class designed to maintain the order and workings of NSArray objects).
    • I did learn how to feed information back to a UI element from the rest of the program, and to pass values from, through, and to different programming languages.
    • I learned a lot about creating the inbuilt UI elements for the OSX system (file dialogues, question popups, sliders, buttons, switches etc) and how to implement their inputs in Objective-C.
    • Next on the list for Cocoa: multiple control windows, that can be opened and closed from a master window.
  • OpenGL/GLUT & Objective-C & OSX Windowing system.
    • I started using a version of ofxNSWindower, which quickly became heavily modified, to let me automatically select the last display and use the whole display as a borderless window.
    • Unfortunately all the other modifications I wanted to make made the program horribly unstable.
  • Design
    • UI
      • I think my UI design for the project was satisfactory; some things still need to be built (the ability to return to windowed mode) but for the most part it is functional.
      • The buttons all have feedback.
      • Every element that does not cause a momentary effect displays the status of the system by nature.
      • My UI is consistent in terms of the functional elements, and values are only displayed where necessary, to keep clutter down.
    • Functional Design
      • In some places I have limited the interaction to what is necessary and not confusing. For example, it is not necessary for the user to control the individual length or order of the videos, so they are not given the opportunity to do so; controlling individual lengths would be impractical for more than 2 videos (the use case this was designed for).
      • In one case I neglected to account for user/programmer error (returning to windowed mode), because I believe I can fix this (it is a known problem; however, it doesn't affect general operation).
      • I implemented a number of low-level features which make this program responsive and not as resource-hungry as it could be. For instance, I noticed that the video player objects each took up a couple of extra processor threads due to their asynchronous threaded loading (I used a wrapper for the AVFoundation framework). So, instead of appending a new video player object to the end of the array I was storing them in (and having a good 3-20 videos load at once), I created one array of video players, and within each polygon object an array of pointers to those specific video players; I bound the texture of the current player to the polygon, and from the parent class I paused, rewound, and (if in use) played the videos required. This let videos change fast, load fast, and be added to the loop with no noticeable lag (it doesn't actually add a new video, just a new pointer). There's a sketch of this structure after this list.
      • I enabled the saving/loading of polygons! This is a feature that I wanted to have done ages ago but didn't really have time for. I really should have saved them under a name other than the memory address of the polygon, but I figure it's random enough (maybe later I'll use a timestamp, or a hash). Either way, it works, and it works well. I'm hoping to do a test where I build the whole polygon map from home and then reload the objects on site.
      • I have still enabled the use of masking; however, I have found it less useful since finding out how to triangulate a polygon based on a line (the method I am now using).
  • Computer Vision
    • For this project I created a (plugin?) thing for automatic mapping of features!
    • Unfortunately it is still super alpha; however, there are a great many things that I have learned from it, such as:
      • Polygon triangulation
      • Proper use of an OpenCV wrapper
      • A little bit more about how OpenCV works and how its selection algorithms work things out
    • I made it so that you can import an image that you just took (even one taken during the performance!) and make a homography of it (place "in" points on the image and drag the "out" points to where they should be projected), so that features can be selected from that image without needing to warp/crop in an external program, and so that the homography can be done live.
    • I enabled the selection of different selection algorithms for OpenCV (so that you can select by RGB, HSV, or just hue, etc.)
  • After Effects
    • All of the clips for the video themes that I produced were done in After Effects.
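
The pointer trick from the functional design section above, as a toy sketch (this is a cut-down example, not the Ansa++ source; all the names here are made up):

#include <cstddef>
#include <cstdio>
#include <vector>

// Stand-in for the real player (mine wraps the AVFoundation-backed player).
struct VideoPlayer {
    const char* file;
    void play()   { std::printf("play %s\n", file); }
    void pause()  { std::printf("pause %s\n", file); }
    void rewind() { std::printf("rewind %s\n", file); }
};

struct Polygon {
    std::vector<VideoPlayer*> loop;  // pointers into the shared pool
    std::size_t current = 0;

    // Adding a video to a polygon's loop is just adding a pointer;
    // nothing new gets loaded, so there is no hitch.
    void addVideo(VideoPlayer* p) { loop.push_back(p); }

    void advance() {
        loop[current]->pause();
        loop[current]->rewind();
        current = (current + 1) % loop.size();
        loop[current]->play();
    }
};

int main() {
    // Each file loads exactly once, into one shared array (sized up front
    // so the pointers stay valid).
    std::vector<VideoPlayer> pool = { {"a.mov"}, {"b.mov"}, {"c.mov"} };

    Polygon poly;
    poly.addVideo(&pool[0]);
    poly.addVideo(&pool[2]);  // reusing a loaded video costs nothing
    poly.advance();
}
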
Those are most of the new skills that I had to learn to do this project. On top of those I had to leverage my knowledge in the following fields:
  • C++
  • openFrameworks
  • Video editing
  • File IO
  • Object management
  • Classing and subclassing
I endeavoured to create some artistically interesting video clips to be used and to implement different ways to sequence and manage the loops.

Lastly, I wanted to give FPX a go, but I ended up not having the time. I'm also going to continue making themes for it (I wanna do a bramped sunrise/sunset theme).

The video below is of the final work being projected on the Peter Karmel building.


Thursday, 20 February 2014

Spherical

I've been playing around with meshes in oF, and I found a cool algorithm to create a 3D sphere of points using segments and a radius here. I also found a way to scramble the points from the set segment line by changing a variable.
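
The algorithm boils down to spherical coordinates; a best-guess reconstruction, openFrameworks-style (not the linked code):

#include "ofMain.h"

// Build a point sphere from a segment count and a radius using
// spherical coordinates.
ofMesh makePointSphere(int segments, float radius) {
    ofMesh mesh;
    mesh.setMode(OF_PRIMITIVE_POINTS);
    for (int i = 0; i <= segments; ++i) {
        float phi = ofMap(i, 0, segments, 0, PI);            // latitude
        for (int j = 0; j < segments; ++j) {
            float theta = ofMap(j, 0, segments, 0, TWO_PI);  // longitude
            mesh.addVertex(ofVec3f(radius * sin(phi) * cos(theta),
                                   radius * sin(phi) * sin(theta),
                                   radius * cos(phi)));
        }
    }
    return mesh;
}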

I used the product of each point's position relative to the centre of the sphere and an array (see: C++ vector) of scaled noise values to change the positions of the points.
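
The displacement step, roughly (a sketch; centre, noiseValues, and amount are my placeholder names):

#include "ofMain.h"

// Displace each vertex radially by its noise value, scaled by 'amount'.
void displaceSphere(ofMesh& mesh, const ofVec3f& centre,
                    const std::vector<float>& noiseValues, float amount) {
    int n = (int)mesh.getNumVertices();
    for (int i = 0; i < n; ++i) {
        ofVec3f dir = (mesh.getVertex(i) - centre).getNormalized();
        mesh.setVertex(i, mesh.getVertex(i) + dir * (noiseValues[i] * amount));
    }
}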

Once I had the positions moving and changing to noise I added the other drawing modes like applying a subsection of the noise array to entire rows of the sphere.

Then I applied a mesh to the points in the sphere and drew the mesh to screen.

I also added a simple UI for control of the object, along with lighting, and colour between the vertices.

At this point it looked a bit like this:



Also I used this method of getting normals for the faces of the object:
for all the vertices in the mesh except the first and last, add a normal at the normalized vector of the perpendicular between the previous and the next vertices.
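
One reading of that recipe, sketched in openFrameworks (ofVec3f::getPerpendicular returns a normalised cross product):

#include "ofMain.h"

// For each vertex except the first and last, use the perpendicular of
// the vectors to its neighbours as its normal (the first and last
// vertices are left without normals, as in the description above).
void addApproxNormals(ofMesh& mesh) {
    int n = (int)mesh.getNumVertices();
    for (int i = 1; i + 1 < n; ++i) {
        ofVec3f prev = mesh.getVertex(i - 1);
        ofVec3f next = mesh.getVertex(i + 1);
        mesh.addNormal(prev.getPerpendicular(next));
    }
}
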
Then I tweaked the lights a bit and considered the project finished. However, I kept having problems with the addon I used for the UI (ofxSimpleGuiToo): sometimes it wouldn't save the settings, and it wouldn't load them from the XML file correctly (for some unknown reason; I think it's not compatible with oF 0.8.0).

I also wanted to implement some post processing fx. I hunted around for a bit and found ofxPostProcessing, and then implemented it.

By this point ofxSimpleGuiToo was having a conniption and wouldn't do anything that I asked of it, so I decided to upgrade to the elegant but slightly more complex ofxUI, which is by far the superior technology.

Here are some screenshots of the finished product, along with a demo video:












Spherical Demo from Gareth Dunstone on Vimeo.

Friday, 11 October 2013

The ArrayCube

This project started as a quick sketch with Nick Noble, a COMP1720 student, to learn about arrays.

We created a cube object, gave it some values, a vector location, and stuff, whacked it into a 3D array, and drew them all. Before you know it, we had some weird stuff like this:



And that's all great. But I liked this. I wasn't happy with all the crap that Processing was giving me.
"Only 8 lights" they said.
"You're drawing too many cubes" they said.
So I started porting it to openFrameworks:







Now look at me, creating classes in C++ for glory. 
Problem is, it's even slower.
WHAT'S THAT, OPENFRAMEWORKS? YOU'RE EVEN SLOWER THAN PROCESSING?
So I did a few more iterations:



Made some changes:



But still, why is this crap running so slow?
C++ doesn't treat variables like Processing does. Assigning a value (using =) makes a copy of it. And I have a problem with that. I'm drawing many, many cubes. Like, a lot of them. All of the cubes. And I want them to just grab some value from somewhere else to use as their opacity or brightness or whatever the hell I want it to be.

I needed to start using pointers.

I started using pointers. Things got faster.
I started using more pointers. Things got much faster.
I used pointers in a class, had them coming out my arse.
I set *my = &day and filled it with win.
I pointed at 0x4632b2b and fucking killed it.
And so now I'm like:
"HELL YEA! I CAN DO THE POINTER DANCE! EAT MY SHORTS, PROCESSING!"
But I was still not fully pleased. It was missing something. Some kind of other input and interaction with its surroundings that would make it whole. I looked high and low. I googled this and I googled that. 
And then it struck me.
"WE NEED SOME DSP ALL UP IN THIS BITCH!"
And after learning very quickly how to use openFrameworks' ofSoundStream, I finalised the code to have it not only play sound (End Theme by Zero 7), but react dynamically to sound input.
And this is the result:

Saturday, 24 August 2013

Simple Nodes with openFrameworks

I made a small, simple node system in openFrameworks today.
It's essentially just an array of ofNode objects, with lines in between them, randomly placed on screen.
Next I'll try creating my own node class, and add some animation.

EDIT:
I finished animating it. Unfortunately, the openFrameworks ofNode class doesn't have inbuilt velocity or acceleration. So I built my own by making 3 arrays the same size as the number of nodes, filling the arrays with random values between -0.1 and 0.1, and then ofNode.move()-ing the nodes using those values.
Not the most elegant solution, but hey... It works.
Next I think I will Perlin the shit out of their movement.
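
The hack, spelled out (a sketch; the node count and the setup/update split are illustrative):

#include "ofMain.h"

// Three parallel arrays of per-node deltas, applied with ofNode::move().
const int kNumNodes = 50;
std::vector<ofNode> nodes(kNumNodes);
std::vector<float> vx(kNumNodes), vy(kNumNodes), vz(kNumNodes);

void setup() {
    for (int i = 0; i < kNumNodes; ++i) {
        vx[i] = ofRandom(-0.1, 0.1);
        vy[i] = ofRandom(-0.1, 0.1);
        vz[i] = ofRandom(-0.1, 0.1);
    }
}

void update() {
    for (int i = 0; i < kNumNodes; ++i)
        nodes[i].move(vx[i], vy[i], vz[i]);
}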


First video of animation

Second video of animation

Updates

I've been working with Processing for uni, mainly creating my hybrid work; however, I have also been practicing creating really, really short programs really quickly. For storytelling I am still constructing ideas and working out how/what to do. For my personal work, I've decided to ditch Cinder and begin working in openFrameworks, because it has a gigantic community and seems to be better supported. Either way, they are both C++, so it should be fine.

Hybrid Work
I'm working with the twitter4j library in Processing, along with a simple 3D gravity system, to produce a data visualisation of hashtags. It can talk to Arduinos as well, through Serial.write(). It's still in progress and not polished; however, it's moving along quite rapidly. Here are some of the preliminary photos of it.











Really Really Short Programs
I got bored and wrote a program in 5 minutes.
I call it 256 Shades of Grey.













openFrameworks
I have been playing around with computer vision, and I've been trying (unsuccessfully) to get the addon for face and expression detection working.
I did manage to get the openCv addon working (with the help of a couple of the examples).
Next step is controlling an arduino, or outputting sound.

Tuesday, 21 May 2013

Autonomous Agents, Genetic Algorithms and balancing Natural Selection

For the past week I've been working almost exclusively on a branch of the flocking algorithm used in FishBirds, to create a Predator-vs-Prey program; however, it has proved to be much more fruitful than that.
Since starting last Friday, I have learned how to implement effective natural selection and semi-random evolution, like this:

 - Predators die constantly.

 - More prey is introduced when under a certain number.

 - Predators' and prey's values (speed, agility, weight, sight) are decided randomly, with a bell curve around set values.

 - Some Predators are born better and hence do much better, and stay alive much longer.

 - Some prey are born better at escaping predators and hence also stay alive much longer.

I did, however, find this system lacking in its complexity and method of "evolution", so I decided to try my hand at a genetic algorithm. I wanted something where the offspring is given values similar to the parent's, but that could also drastically deviate from them. So I put together a moving mean, and implemented 2 different types of reproduction: timed, and "well fed". In more detail:

 - Predators die constantly, and die faster the smaller the flock is.

 - Predators reproduce when a certain amount of prey is eaten within a certain timespan, and only when they are of reproducing age (this is to stop a predator slamming into a group and then all of its children reproducing, causing a huge increase in the number of predators).

 - Predators spawn one offspring, and stay alive (this is to allow for competition between generations).

 - Prey reproduce by staying alive for a certain amount of time. A prey then has 2 offspring, and dies.

 - The moving mean: the gaussian distribution of random values for each offspring is centred on the current values of its parent. This ensures a good development of genetics and keeps the gene pool relatively diverse (see the sketch below).
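
The moving mean in code (my sketch is in Processing; this is a minimal C++ equivalent, and the 10% spread is a placeholder):

#include <cmath>
#include <random>

// Offspring trait = a sample from a gaussian centred on the parent's
// current trait. Because the mean follows the parent, traits can drift
// over generations instead of snapping back to the original set values.
float inheritTrait(float parentValue, std::mt19937& rng) {
    float sigma = std::fabs(parentValue) * 0.1f + 0.001f;  // never zero
    std::normal_distribution<float> dist(parentValue, sigma);
    return dist(rng);
}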


Problems that I have encountered:

 - If the flock is under a certain number and there are a certain number of predators, the predators will continue to live on the small amount of food that the prey provide.

 - Performance. In a perfect world I would be able to have a flock of any size. Unfortunately, the components of a computer, and Processing, have limits. I have been pushing these, and I think it's time to move on to something more powerful, at least language-wise (discussion in my next post).

 - The flock is limited at the moment: it stops multiplying once it reaches a certain size. This was implemented to reduce performance issues, and hence has stopped me from producing a more realistic simulation.

 - I couldn't work out how to implement a method of generation counting. I'm sure with more time I could work out how to do it, but I don't want to spend any longer on this project.

I've published a final video to Vimeo.

In conclusion, I think I am now significantly more adept at coding than I was last week.
I had a more in-depth exploration of OOP and became much more experienced in balancing systems. I learned how to implement some more advanced random statistical tools to give more useful data, and I learned how to implement genetic algorithms.

Tuesday, 14 May 2013

FishBirds updates

I've been working on FishBirds continuously since my last post. In that time I have moved from the local branch, to a networked version which no longer uses a ControlFrame and which supports MIDI, to a networked version which has no MIDI control and no control sketch; it is controlled using TouchOSC.

Currently the TouchOSC branch has the most features, just because it's the branch I'm using the most.

I think that they are as complete as I will have them for the foreseeable future, so for the time being I'm going to focus on developing my skills at coding other methods of simulating behaviour, and at manipulating those algorithms. I'm working my way through Shiffman's "The Nature of Code" to help me get the skills that I need. It's a fascinating read, and my coding has already improved.

I will return to the FishBirds when I can write genetic algorithms.


Wednesday, 24 April 2013

Github, and finished project.

I got set up on Git, and started some repositories.

https://github.com/stormaes/singing-fishbirds/tree/local

Everything is on there. I'm not going to say much about it, because most of it is in the readme and I don't want to repeat myself, because I hate repeating myself.

But I will say this: that project was HUGE.

I used 3 new programs and 2 new languages, and learned to use 2 new libraries. I then compiled all that into a single project, learned how to use Git, and wrote a 1300-word readme.

Goodnight internets.

p.arcc.cc

I've been working on the site p.arcc.cc for a while now as an exhibition space for processing sketches.

So far I've implemented a number of PHP scripts which control the content that goes on the page. There are a couple of small PHP scripts that read files out of a directory and draw them to canvas, and there is also an upload script, so that anyone can submit a sketch. The inputs of the upload script are sanitised so that no-one can upload malicious code; it only accepts .pde files.

I've also implemented a navigation bar that uses jQuery smooth scrolling to navigate to different points on the page that are written by the PHP scripts. The navbar itself is populated by a PHP script, so that it stays up to date.

Skrollr was also implemented, to give a parallax scrolling effect to the background.
I encountered a strange error with this while I was building the site. At a certain point (not the end of the page) the parallax effect would stop: the background would stop scrolling and just stay fixed.
So I did what any other web developer would do and checked the dev tools; however, when I opened them, the background started behaving as it should. I could even close the inspector and it would still work, but if I refreshed the page it would have the same problem as when I started.
I eventually found a solution to this, though.

Tuesday, 5 March 2013

More improvements to Arcc.cc, particularly the content loading on square and hexagon.
I used jQuery to hide all the content first:

$(document).ready(function(){
    $('#video0, #video1, #video2').fadeTo(0, 0);
    $('#video0, #video1, #video2').hide();


Then, once the selection on the navbar is clicked (#intro0), hide the other content that may be active (#video1, #video2) and fade in the content.

    $('#intro0').click(function(){
        $('#video1, #video2').hide();
        $('#video0').fadeTo("slow", 1);


Once the navigation has been clicked, bind to the 'ended' event; when the event triggers, fade out and return to frame 0. (This is not present in hexagon, as there is no end. The hover functions work all the time after the navigation has been clicked.)

$("#video0").bind("ended", function() {
$('#video0').fadeTo("slow", 0);
this.currentTime = 0;


Hover functions: when the ended event has triggered enable hovering over the content so that it can be played again.

            $('#video0').hover(function(){
                $('#video0').fadeTo("slow", 1);
            }, function(){
                $('#video0').fadeTo("slow", 0);
            });  // closes hover
        });      // closes the 'ended' handler
    });          // closes the click handler
});              // closes document.ready


This code is then just copied over with different entries, to hide the other content and show the correct content.

The only problem I have found with this code is that when the video has ended and the hover function is active, if the video is restarted and ended more than twice, the hover response times get increasingly longer. I think I can fix this by unbinding the hover function on 'ended' before re-adding it. I don't know how to do this yet, and the bug is low priority.

I had help with this code: direction from Christopher Fulham, and actual jQuery help from <insert classmates name who I have shamefully forgotten here>.


Wednesday, 27 February 2013

Processing experiments with flocking

For the past 2 days I have been intensively into Processing. I've spent about 20 of the past 48 hours coding.
The result has been that I've created a few Processing sketches, all on the subject of "flocking". Each boid detects the proximity of its peers and steers towards the average direction of the "flock". The boids also repel each other, and attract towards each other to form the "flocks". When two boids collide, they both take a new vector: the average of their previous ones (the steering rules look roughly like the sketch below).
For some of these sketches, colour is controlled by the weighted number of nearby boids; for some, alpha is the attribute that's modified; and for all of them, diameter/size is also controlled by the size of the flock.
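
The per-boid rules, sketched out (my sketches are in Processing; this C++-style version uses placeholder radii and weights):

#include "ofMain.h"

struct Boid { ofVec2f pos, vel; };

// Steering: average the neighbours' directions (alignment), pull towards
// their centre (attraction), and push away when too close (repulsion).
ofVec2f steer(const Boid& b, const std::vector<Boid>& flock) {
    ofVec2f align, cohere, separate;
    int n = 0;
    for (const Boid& other : flock) {
        float d = b.pos.distance(other.pos);
        if (&other == &b || d > 100.0f || d == 0) continue;  // neighbour radius
        align    += other.vel;
        cohere   += other.pos;
        separate += (b.pos - other.pos) / (d * d);
        ++n;
    }
    if (n == 0) return ofVec2f(0, 0);
    align  = align / n - b.vel;   // steer towards the average direction
    cohere = cohere / n - b.pos;  // steer towards the flock's centre
    return align * 1.0f + cohere * 0.01f + separate * 50.0f;
}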

For colourising the main 3D sketches, I mapped the boids' locations in 3D space to the values of RGB. There is also a basic user interface for the 3D sketches, allowing the user to change colour modes between B&W, Blue & Black, and Rainbow, and to change the camera mode between spiral rotation, level rotation, and mouse rotation. There is also a facility to pause the camera so the user may examine the flocking inside the cube.
The 2D sketches respond to mouse gestures and the boids repel away from the mouse.

Also, most of the 2D sketches are js-compatible, and you can view one of them here; it's #04.

I used heavily modified code from these locations:
http://www.openprocessing.org/sketch/47022
http://processing.org/learning/topics/flocking.html


3D sketches


Video of 3D Flocking Demo


2D sketches

Monday, 25 February 2013

More work done on hexagon page

So...
I'm back at uni and working hard again.
Today a bit more work was done on Arcc.cc, specifically the hexagon page (link for the lazy). It still has a couple of small pieces of code that need to be written (or rewritten) but overall its functionality is acceptable.

Breakdown of changes:
1. Switched from iframes to javascript.
  • Removes the reliance on openprocessing.com.
  • Faster load times.
  • No java applet permission requests as it draws straight to canvas.

2. Changed from a straight-up layout to a request- and interaction-based one; also changed the size of sketches.
  • Gives a clearer place for viewing.
  • No more confusion and scrolling.
  • Borders, so you know what happens when you click a navigation button.
  • Allows for more organisation and layout, and sets a fixed size for sketches.
  • Con - changing the size of one sketch causes quite an intensive load (this is due to the sketch itself, as well as the fact that it is being drawn in a browser).

3. Implemented JQuery.
  • Can modify content on the page much, much more easily.
  • Quickly implemented and can be quickly added to.


What I learned from today (in chronological order):
  • My Web Development class is not nearly as fast paced and awesome as my Internet Art class.
  • Hackertyper.net is good for trolling other students who aren't as computer literate as I am.
  • JQuery is AWESOME! and can do all the things.
  • Revere Occam's Razor. It is your god.