Wednesday 10 June 2015

Research Post Appendices

Appendix i. Accurate Matching

An accurate, or at least acceptably inaccurate, matching method needs to be found; in almost all situations accuracy is traded for speed. Some methods of comparing images that I investigated (and used):
  • Perceptual HASHing
  • Mean Squared Error
  • Structural SIMilarity

Perceptual HASHing

Perceptual hashing is useful for a number of things in this project, however not for creating the actual mosaic. Its primary use is locating duplicates: phashes can be compared very quickly, giving a measure of difference (from 0-36) between two hashes. The main reason this algorithm is not good for creating mosaics is the way the hash is computed; the order of operations goes as such:
  • Scale the image down to a small, fixed size
  • Convert it to greyscale
  • Reduce the result to a short binary hash
  • Compare two hashes by counting the bits that differ (the Hamming distance)

This is great because you can tell the bit difference and it's fast; unfortunately, some of its strengths are also weaknesses for mosaic creation: it doesn't care about scaling, rotation or colour.
Due to the way the hashes are compared, rotation doesn't matter (the same image rotated will still have a Hamming distance of 0); because every image is reduced to the same size (so the hashes have the same length and can be compared), scaling is irrelevant; and because the image is converted to greyscale, colour is irrelevant. This was useful for matching and deleting duplicates in the first round of image collection, however it was wildly inaccurate for matching against the reference image.
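
A minimal sketch of this kind of duplicate check, assuming the Python imagehash library (the library, file names and threshold here are illustrative; the project's own phash implementation, with its 0-36 range, may differ):

    # Assumes Pillow and imagehash are installed.
    from PIL import Image
    import imagehash

    hash_a = imagehash.phash(Image.open("photo_a.jpg"))
    hash_b = imagehash.phash(Image.open("photo_b.jpg"))

    # Subtracting two hashes gives the Hamming distance (number of differing bits).
    distance = hash_a - hash_b
    if distance < 5:  # the threshold is an assumption, not the project's actual value
        print("probably duplicates")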

Mean Squared Error

Mean Squared Error was the best algorithm for comparing against the reference image. This is because it is so basic.
Unfortunately the primitiveness of this algorithm means that you can only compare images of the same size.
The algorithm goes like this (assume that the images are 3x3):
  • Subtract one image from the other, element by element, to get the error:
[ 3 2 5 ]   [ 1 0 3 ]   [ 2 2 2 ]
[ 2 1 4 ] - [ 1 1 2 ] = [ 1 0 2 ]
[ 5 1 2 ]   [ 3 1 2 ]   [ 2 0 0 ]

  • Square that, element by element:
[ 2 2 2 ]      [ 4 4 4 ]
[ 1 0 2 ] ^2 = [ 1 0 4 ]
[ 2 0 0 ]      [ 4 0 0 ]

  • Take the mean of the squared errors:
(4+4+4+1+0+4+4+0+0)/9 = 21/9 ≈ 2.33

Because this can be applied to arrays of arbitrary dimensions, it can operate on colour (even alpha channels!), and it gives a measure of dissimilarity that is consistent across comparisons.
This was the algorithm that was primarily used for the mosaic creation.
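
As a sketch, the same calculation in NumPy, using the 3x3 example above (NumPy here is just for illustration; for real tiles the arrays would hold pixel values and must have identical shapes):

    import numpy as np

    a = np.array([[3, 2, 5], [2, 1, 4], [5, 1, 2]], dtype=float)
    b = np.array([[1, 0, 3], [1, 1, 2], [3, 1, 2]], dtype=float)

    # Mean of the squared element-wise differences.
    mse = np.mean((a - b) ** 2)
    print(mse)  # 2.333...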

Structural SIMilarity

I still do not fully understand how structural similarity works (a lot of really complex mathematics), but I did find that the version that I used didn't fully take into account colour and had issues with incorrect detection.
Maybe later I will go back and look into it further.
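
For reference, a greyscale-only SSIM comparison might look something like the sketch below, assuming scikit-image (the library actually used in the project is not stated); the greyscale conversion reflects the colour limitation mentioned above:

    from skimage import color, io
    from skimage.metrics import structural_similarity

    a = color.rgb2gray(io.imread("tile.jpg"))
    b = color.rgb2gray(io.imread("reference_section.jpg"))

    # structural_similarity returns 1.0 for identical images.
    score = structural_similarity(a, b, data_range=1.0)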


Appendix ii. IpA vs ApI

The distinction between these two high-level algorithms, and why they are different, is very important to understanding the methods behind the mosaics that I have been creating.

IpA

Image per Area (IpA) is where you divide the reference image into equally sized sections whose count roughly matches the number of images you have available, then compare every image against every reference section and rank it by similarity; the rankings are stored in a list for each section.
When the time comes to perform placements on the reference image, a random reference section is selected from the list of sections still to be processed, and the highest-ranked image that is not in the list of used images is placed there;
the image is added to the list of used images, the reference section is removed from the list of sections to be processed, and the process repeats.
This places all images into a grid and makes sure there are no repeats (it does end up with a few leftover images, due to the way the reference image is divided up).
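
A minimal sketch of this placement loop, assuming the per-section rankings have already been computed (all names here are illustrative, not the project's actual code):

    import random

    def place_ipa(rankings):
        # rankings maps section -> list of candidate images, best match first.
        # Returns section -> chosen image, with no image used twice.
        used = set()
        placements = {}
        sections = list(rankings)
        while sections:
            # Pick a random section that still needs an image.
            section = sections.pop(random.randrange(len(sections)))
            # Place the highest-ranked image that has not been used yet.
            for image in rankings[section]:
                if image not in used:
                    placements[section] = image
                    used.add(image)
                    break
        return placements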


ApI

The IpA algorithm set is in stark contrast to the other high-level algorithm, which I have termed Area per Image (ApI).
ApI is implemented in a less structured but more scalable way than IpA, because it places fewer restrictions on how accurately images can be placed and on where they can be placed. It is also an "embarrassingly parallel" process.
In ApI placement, the placement images are grouped into a list, and the number of different scaling values, the number of scatters, and the number of recursion passes are set. Each image is then rescaled on a Gaussian distribution with the mean centred on the average size of all the images, once per scaling value.
For each rescaling, a number of points are scattered across the reference image, a comparison is made for the image at each of these points, and the point with the highest similarity is selected. The image is then scattered again around that point, with the scatter region being 6x the size of the scaled image itself. This is repeated for the set number of recursion passes, with the number of scatters and the size of the region multiplied by 1/n on each pass n. At each step of this recursion, if a better comparison score is found, that point becomes the origin for the next pass. When n reaches the number of recursion passes, the last comparison point (the highest similarity score) is returned, compared against the best scores for the other scaling values (acquired using the same process), and the result is stored.
This data is then sorted into reverse order (best scores at the end) and the images are placed onto a blank, black canvas from first to last, so that the best-scored images end up on top.
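
A rough sketch of the per-image scatter search described above, at one scaling value; the 6x starting region and the 1/n shrink per pass come from the description, while the function and parameter names, the uniform scatter, and the default counts are assumptions:

    import random

    def best_position(image_size, canvas_size, compare, n_scatters=20, n_passes=3):
        # compare(x, y) should return a similarity score (e.g. negative MSE)
        # for the image placed with its top-left corner at (x, y).
        w, h = image_size
        cw, ch = canvas_size

        # Pass 1: scatter points across the whole reference image.
        best_point, best_score = None, float("-inf")
        for _ in range(n_scatters):
            x, y = random.uniform(0, cw - w), random.uniform(0, ch - h)
            score = compare(x, y)
            if score > best_score:
                best_score, best_point = score, (x, y)

        # Later passes: scatter around the current best point; the region starts
        # at 6x the scaled image size, and both the region and the scatter count
        # shrink by a factor of 1/n on pass n.
        region_w, region_h = 6 * w, 6 * h
        scatters = n_scatters
        for n in range(2, n_passes + 1):
            scatters = max(1, scatters // n)
            region_w, region_h = region_w / n, region_h / n
            bx, by = best_point
            for _ in range(scatters):
                x = bx + random.uniform(-region_w / 2, region_w / 2)
                y = by + random.uniform(-region_h / 2, region_h / 2)
                score = compare(x, y)
                if score > best_score:
                    best_score, best_point = score, (x, y)
        return best_point, best_score

Each image would run this search once per scaling value, the best score across scales would be kept, and the results for all images would then be sorted and drawn onto the black canvas from worst to best.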

In my works I used both these methods, first IpA (for the background) and then ApI (for the detail).

Appendix iii. How to avoid duplicates

After the previous appendices this one seems almost like an afterthought. It is almost trivial in comparison to the depth of the previous ones, however it is actually one of the most fundamental parts of how I created these mosaics: what makes them unique is that the database contains only unique images.
The way in which it was done is this:

  • For each image: generate its phash and store it in the database.
  • For each phash in the database:
      • Compare it against every other phash in the database.
      • If the difference is under a specific threshold: delete the lower-resolution image and remove it from the database.
This is just basic duplicate removal, however it is key to the kind of images that I have created.
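
A minimal sketch of that loop in Python, again assuming the imagehash library; the threshold and the in-memory "database" are illustrative stand-ins for whatever the project actually used:

    from PIL import Image
    import imagehash

    def remove_duplicates(paths, threshold=5):
        # Returns the paths that survive de-duplication, keeping the
        # higher-resolution copy of any near-duplicate pair.
        entries = []
        for path in paths:
            with Image.open(path) as im:
                entries.append({"path": path,
                                "hash": imagehash.phash(im),
                                "pixels": im.width * im.height})
        kept = []
        for entry in entries:
            match = next((k for k in kept if entry["hash"] - k["hash"] < threshold), None)
            if match is None:
                kept.append(entry)
            elif entry["pixels"] > match["pixels"]:
                kept.remove(match)  # drop the lower-resolution duplicate
                kept.append(entry)
        return [e["path"] for e in kept]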

Tuesday 9 June 2015

Research write-up

Over the course of my photomosaic project I have needed to do a bunch of research into the art form and the available software out there.

Most of the software/services that I found either didn't match images very well (see app i. Accurate matching), only matched to a grid (see app ii. IpA vs ApI), scaled everything down (see app. iii. Scaling up) or repeated images (app iv. How to avoid repeats).

I found someone who claims to be the "inventor" of the photomosaic here, and while, yes, he does have photomosaics, they are all only matched to a grid.

I found a photomosaic stitching program called Mazaika, and while the feature list is quite impressive, it has less accurate matching than my software and only matches to a grid. It also uses a (terrible, IMO) feature to mask bad matching, where it colour-corrects the tile images towards the reference image.

I ran across an interesting method of grid mosaic tile matching using a "nearest neighbour" algorithm. This leverages machine learning to create a satisfactory result, however it finds approximate matches rather than exact ones. The creator also has a good workflow for generating tiles.

I've also read up on some of the other methods that people have used to create mosaics, both online and offline; most of them use only one technique and/or modify the source images rather than arranging them.

Here is a quick list of some of the options that I investigated:

  • http://www.theartoftylerjordan.com/make-an-easy-photo-mosaic-in-photoshop-2/
  • http://www.mazaika.com/
  • http://tvldz.tumblr.com/
  • http://www.photomosaic.com/portfolio.html
  • http://mosaically.com/
  • http://www.canvaspop.com/photo-mosaic
  • http://www.mosaizer.com/