
Wow, that’s a mouthful, so how about a video to make it a bit clearer?

Basically, it’s a type of optical illusion that makes a static image appear to animate: you take a card with slits in it and drag it over the image. Of course, the image itself isn’t normal looking by any means – at best, it looks like a mess of blobs and lines – which is where the whole “optical illusion” part comes in.

For example, here’s a colorful one that I made.

No, this isn’t a colorful inkblot test.

So, what’s the mess about? Well, it might be a little easier to understand by looking at what the slits in the card do. Each slit shows a portion of a single animation frame, and the space between slits is equal to the width of the slit times the number of frames in the animation, minus one slit width – in other words, gap = slitWidth × (frameCount − 1).
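To make that geometry concrete, here’s a quick sketch – slitWidth and frameCount are my own names for illustration, not anything from an actual tool:

// The card needs one slit-width strip of opaque card for every frame
// other than the one currently showing through the slit.
function gapBetweenSlits(slitWidth: number, frameCount: number): number {
  return slitWidth * (frameCount - 1);
}

console.log(gapBetweenSlits(2, 6)); // a 6-frame animation with 2px slits → 10px gaps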

Since that space between slits blocks out large portions of any given frame, it’s essentially “dead” space that can be used for the other frames in the animation. Each frame has its dead space removed and is then shifted one unit to the right of the frame before it. Finally, all the frames are merged together to form one image.
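In code, that interlacing step might look something like the sketch below, assuming one-unit-wide slits and frames stored as simple 2D pixel arrays (the names here are mine):

// merged[y][x] takes its pixel from frame (x mod frameCount): frame 0 keeps
// columns 0, N, 2N…, frame 1 keeps columns 1, N+1…, and so on – the same as
// stripping each frame’s dead space and shifting it one unit right.
function interlace(frames: number[][][]): number[][] {
  const frameCount = frames.length;
  const height = frames[0].length;
  const width = frames[0][0].length;
  const merged: number[][] = [];
  for (let y = 0; y < height; y++) {
    const row: number[] = [];
    for (let x = 0; x < width; x++) {
      row.push(frames[x % frameCount][y][x]);
    }
    merged.push(row);
  }
  return merged;
}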

This is why moving the card from side to side makes the image appear to animate – with each unit the card moves over, the slits display a different frame. Below is that same image as above, this time with an appropriate “card” over it. Drag the card right or left to see the animation.

As an avid World of Warcraft player, I’ve always wanted to get a good capture of my characters for use in other media. Unfortunately, the only options were third-party model viewers, which never quite got things looking exactly how the characters look in game. While the Addon API gave access to a model viewer element, there was really no way to capture just the model without getting the background as well.

So, I had to come up with some way to separate the model from said background. The good news is that I had full control over what that background would be, and “green screening” isn’t just limited to film. The bad news is that this method would only work for models that were completely solid. Any semi-transparency, such as glow effects, would blend in with that background, producing a new color that was not only visually wrong, but also couldn’t be filtered out.
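That blending is just ordinary alpha compositing – each captured pixel is a mix of the model’s color and the background, weighted by the model’s opacity. Roughly, per channel (the names here are mine):

observedPixel = alpha*modelPixel + (1-alpha)*backgroundPixel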

Unlike film with its live actors, a digital “actor” can be copied, and both models can have their position, rotation, and even animation synced. So I elected to double up. This time, the first model would only take up half the width of the capture area. In the remaining space, a copy was added, this time with a blue background.

[Image: transtest – the model rendered twice, side by side, on a green background and a blue one]

This method allowed me to have a green-screened version, and a second version whose green channel contained the model’s original green. So, if I were to replace the green channel of the green side with the green channel of the blue side, I’d end up with the proper colors. However, simply doing that just ends up with a black background, not a transparent one. Yet the secret to transparency is also in those two green channels.

See, everything from the green side’s background, to the influence said background has on the glow effect, would be in that green channel, along with any other “proper” contributions to the color of the model. But the only thing in the green channel of the blue half is those “proper” contributions. So, if you take the green side’s green channel, invert it, and add the blue side’s green channel to it, you end up with an alpha mask. Using the compositing formula above: the green side’s green is alpha×modelGreen + (1−alpha)×255, so inverting it leaves alpha×(255−modelGreen), and adding back the blue side’s alpha×modelGreen gives exactly alpha×255 – the alpha value itself. To make it a bit more clear, here’s some basic pseudo code:

alphaPixel = Math.min((255 - greenSideGreenPixel) + blueSideGreenPixel, 255) // inverted green-side green, plus the model’s own green from the blue side

Which means that each pixel in the final image would look something like this:

pixel = Color(greenSideRedPixel, blueSideGreenPixel, greenSideBluePixel, alphaPixel) // R and B from the green side, G from the blue side
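Putting it all together, here’s a minimal sketch of the whole recombination pass, assuming the capture ends up as one image whose left half is the green-background render and whose right half is the blue one, with pixels in a flat RGBA array (like a canvas ImageData) – the function and variable names are just for illustration:

function recombine(data: Uint8ClampedArray, width: number, height: number): Uint8ClampedArray {
  const half = width / 2;
  const out = new Uint8ClampedArray(half * height * 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < half; x++) {
      const g = (y * width + x) * 4;        // pixel on the green half (RGBA)
      const b = (y * width + x + half) * 4; // matching pixel on the blue half
      const o = (y * half + x) * 4;
      out[o] = data[g];                     // red from the green side
      out[o + 1] = data[b + 1];             // green from the blue side
      out[o + 2] = data[g + 2];             // blue from the green side
      // inverted green-side green + blue-side green = alpha
      out[o + 3] = Math.min(255 - data[g + 1] + data[b + 1], 255);
    }
  }
  return out;
}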

And the result:

[Image: transtest – the final recombined capture on a transparent background]