By the look of the “Date Modified” attribute of my first Fallout map, I’ve been at this now for about three years. On the surface, it seems pretty straightforward. The GECK (or Creation Kit) allows you to view game worldspaces from the top down, on a cell by cell basis, with an orthographic camera. Basically all the requirements for creating a seamless map. Capture an image of each cell, paste it into Photoshop, and assemble the world’s most boring jigsaw puzzle.

Free Maps! Just add tedium!

Of course, just because it seems straightforward doesn’t mean there weren’t bumps in the road. First among them was capturing a specific area of the screen. At first I handled this with SnagIt’s region capture, but every time I went to make a map, I had to manually re-adjust the window and the region to fit the game cell. It took a bit of trial and error, and there was little consistency. To make matters worse, I was greatly limited by the performance and drive space of my computer at the time – dealing with massive images has a tendency to eat up both. Finally, there was the whole tedium part of it – even a smaller (DLC) map could have about 900 cells, with main game maps being even larger!

Step One: The Upgrade

Shortly before the system requirements were posted for Fallout 4, I had finished my upgrade. Nothing that would make hardware fetishists excited, but it did (luckily) exceed said requirements. The upgrade included two new hard drives: a monster 2TB platter drive, and a 500GB SSD. So not only did I have significantly more RAM, more GPU RAM, and a faster CPU, I also conveniently had more room for Photoshop scratch disk usage. While cell captures on the Mojave Wasteland map were limited to around 300px across, the newly created Commonwealth map boasted 865px cell captures. By this point, I had gone and created a tiny tool that would do the job of SnagIt – something I could build on, to help me solve future problems.


Isn’t it cute?

Step Two: Consistency and Pushing the Limits

I was rapidly approaching the limits of what a 1080p screen could show at once, but I discovered that if I adjusted the window just right, I could expand that capture to 1021px across. Achieving that took a lot of trial and error, so it was time to add another feature to my little tool. Using Win32 functions, I could now change the position and size of the Render Window, letting me consistently get those large captures without having to fiddle with position/bounds.

While I could now consistently get large cell captures with the absolute minimum of effort, the tedium of building the map itself suddenly became the largest issue. I could capture a cell with a key, but I found myself constantly moving between the Cell View window to manually navigate the map, and back to Photoshop to place the capture – not to mention all the key combinations required. It was time to add some more bells and whistles.

Step Three: Moar Hotkeys

To alleviate at least some of the back and forth, the next step was to add a batch of new hotkeys, and bind them to the mouse buttons on my Logitech G600. Two buttons would now do the capturing and pasting, another would reset the view, and four more would handle map navigation, using Win32 to directly alter the cell coordinates on the Cell View window and hit the “Go” button.
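As a rough sketch of that navigation logic (my own illustration with hypothetical names – the actual tool drives the Cell View window via Win32, not Python), each navigation button just nudges the current cell coordinate by one before the new coordinates get typed in and “Go” gets pressed:

```python
# Hypothetical sketch: map each navigation button to a one-cell nudge.
# Assumes X grows eastward and Y grows northward, which may not match
# the editor's actual convention.
NUDGE = {
    "west":  (-1, 0),
    "east":  (1, 0),
    "north": (0, 1),
    "south": (0, -1),
}

def navigate(cell, direction):
    """Return the cell coordinate the tool would enter into Cell View."""
    x, y = cell
    dx, dy = NUDGE[direction]
    return (x + dx, y + dy)
```

From there, the real tool would write the resulting X/Y into the Cell View fields and simulate the “Go” click.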

All your control belong to us.

At this point, I could do everything just using the mouse – and it was a big improvement – but there were still some issues.

First, Photoshop had problems not only with getting focus, but also with registering that new data was on the clipboard. This meant a couple of clicks before I could paste, and – learned through trial and error – clicking the capture button a couple of times, just so I wouldn’t paste the previously captured cell again. Due to that clipboard unreliability, I had to keep Photoshop zoomed in enough to verify I was getting the right cell, which meant more panning/scrolling while working on the map. Then, of course, there was user error. Even with guides, the occasional capture wouldn’t be placed correctly, and I fat-fingered the wrong button more than I’d like to admit – which required additional time to get the map back to the correct cell.
So, for my next trick, I was going to try to eliminate both Photoshop and…well…me, as much as possible.

Step Four: When a Little Tool Grows Up

Now, I couldn’t remove both from the picture entirely. I’m not going to implement my own scratch disk system to allow my app to do the entire map by itself. But what I can do is work with large chunks of the map, and assemble them in Photoshop when I’m done. Instead of a thousand cells, I’d only have to deal with 15-20 larger images – a lot less room for error. I also can’t remove myself from the process – a human eye is necessary to catch map symbols that, while useful when editing a map, aren’t actually visible in game. Judgement is also required to verify that the cell has finished loading. But again, what I can do is limit my interaction to simply capturing, while the map navigation and placement are handled by the app.

Mapmaking at a click of a button.

What you see above is a proof of concept – a 16k-pixel-square map chunk created with a simple button press for each cell and, other than the occasional brief loading time, nearly instant – massively cutting down the time it takes to build. The next step is to make it more project focused: setting the upper left and lower right bounds for the whole map, making the app figure out how many chunks are necessary to complete it, and allowing the ability to save/switch between chunks.

The idea is to be able to start in the upper left corner, and simply hit the capture button repeatedly, with the app saving and switching chunks as needed – while automatically navigating to the proper cell. By my estimate, this could turn a multi-day project for a full game map down to something that takes only a handful of hours.
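A sketch of that planned project logic (my own Python illustration with hypothetical names, not the tool’s actual code): given the corner bounds and a chunk size in cells, you can derive both the number of chunks needed and the capture order, starting from the upper left. This assumes Y decreases as you move south across the map.

```python
def plan_cells(upper_left, lower_right, chunk_size):
    """Return (chunk_count, cells-in-capture-order) for a map region.

    upper_left/lower_right are (x, y) cell coordinates; chunk_size is
    the width/height of one chunk, in cells.
    """
    x0, y0 = upper_left
    x1, y1 = lower_right
    cols = x1 - x0 + 1
    rows = y0 - y1 + 1
    # Ceiling division in each axis gives how many chunks cover the map.
    chunks = -(-cols // chunk_size) * -(-rows // chunk_size)
    # Row-major traversal: upper-left corner first, row by row southward.
    cells = [(x0 + c, y0 - r) for r in range(rows) for c in range(cols)]
    return chunks, cells
```

With a setup like this, hitting the capture button repeatedly just walks the `cells` list, and the app saves/switches chunks whenever the next cell falls outside the current chunk.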

Epilogue: To be Continued…

While this should work nicely for Fallout 3/New Vegas maps, Fallout 4/Skyrim and newer maps have an additional twist that needs to be handled. See, the GECK requires manually changing cell coordinates to continue loading new cells, while the Creation Kit will keep loading them just by navigating with the arrow keys. However, navigating by jumping directly to the cell in question resets the zoom on the perspective view – and the perspective view is zoomed in so far by default that it clips terrain/objects in the orthographic view. So, when I get to Skyrim maps, I’ll have to figure out how to get around this.

Wow, that’s a mouthful, so how about a video to make it a bit clearer?

Basically, it’s a type of optical illusion that makes a static image appear to animate, by taking a card with slits in it and dragging it over the image. Of course, the image itself isn’t normal looking by any means – at best, it looks like a mess of blobs and lines – which is where the whole “optical illusion” comes in.

For example, here’s a colorful one that I made.

No, this isn’t a colorful inkblot test.

So, what’s the mess about? Well, it might be a little easier to understand by looking at what the slits in the card do. Each slit shows a portion of a single animation frame, and the space between slits is equal to the slit width times the number of frames in the animation, minus one slit width – in other words, slit width × (frames − 1).

Since that space between slits blocks out large portions of any given frame, it’s essentially “dead” space that can be used for the other frames in the animation. Each frame has its dead space removed, and is then shifted one unit to the right of the frame before it. Finally, all the frames are merged together to form one image.
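That interleaving can be sketched in a few lines – here with rows of characters standing in for pixel rows, and a slit width of one unit (my own illustration, not the code behind the actual image):

```python
def interleave(frames):
    """Merge animation frames into one scanimation image.

    frames is a list of equally-sized frames, each a list of row strings.
    Output column x is taken from frame (x % n) – so each frame keeps one
    column out of every n, shifted over by its own index, exactly as the
    dead-space description above lays out.
    """
    n = len(frames)
    height = len(frames[0])
    width = len(frames[0][0])
    return [
        "".join(frames[x % n][row][x] for x in range(width))
        for row in range(height)
    ]
```

For example, two one-row "frames" `["abcd"]` and `["ABCD"]` interleave into `["aBcD"]` – slide a card with one-unit slits across it and you see `a_c_`/`_B_D` alternately.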

This is why moving the card from side to side, makes it appear that it’s animating – as each unit the card moves over, the slits display a different frame. Below is that same image as above, this time with an appropriate “card” over it. Drag the card right or left, to see the animation.

As an avid World of Warcraft player, I’ve always wanted to get a good capture of my characters, for use in other media. Unfortunately, the only options were third-party model viewers, which didn’t quite get things looking exactly how the characters look in game. While the Addon API gave access to a model viewer element, there was really no way of just capturing the model, without getting the background as well.

So, I had to come up with some way to separate the model from said background. The good news is that I had full control over what that background would be, and “green screening” isn’t just limited to film. The bad news is that this method would only work for models that were completely solid. Any semi-transparency, such as glow effects, would blend with that background, producing a new color that was not only visually wrong, but also couldn’t be filtered out.

Unlike film, with its live actors, a digital “actor” can be copied, and both models can have their position, rotation, and even animation synced. So I elected to double up. This time, the first model would only take up half the width of the capture area. In the remaining space, a copy was added, this time with a blue background.



This method allowed me to have a green-screened version, and a second version that would contain the original green channel of that image. So, if I were to replace the green channel of the green side with the green channel of the blue side, I’d end up with the proper colors. However, simply doing that just ends up with a black background, not a transparent one. Yet, the secret to transparency is also in those two green channels.

See, everything from the green side’s background, to the influence said background has on the glow effect, would be in that green channel, along with any other “proper” contributions to the color of the model. The only thing in the green channel of the blue half, however, are those “proper” contributions. So, if you take the green side’s green channel, invert it, and add the blue side’s green channel to it, you end up with an alpha mask. To make it a bit more clear, here’s some basic pseudo code:

alphaPixel = Math.min((255-greenSideGreenPixel)+blueSideGreenPixel,255)

Which means that each pixel in the final image, would look something like this:

pixel = Color (greenSideRedPixel,blueSideGreenPixel,greenSideBluePixel,alphaPixel)
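Putting the two snippets together, the whole per-pixel operation can be written as a small runnable function (my own Python illustration of the method described above; pixel values are 0–255 ints):

```python
def composite(green_side, blue_side):
    """Combine matching pixels from the two capture halves.

    green_side / blue_side are (r, g, b) tuples for the same pixel on the
    green- and blue-background halves. Returns an (r, g, b, a) tuple.
    """
    gr, gg, gb = green_side
    br, bg, bb = blue_side
    # Inverted green-side green channel + blue-side green channel = alpha.
    alpha = min((255 - gg) + bg, 255)
    # Red and blue come from the green side; the true green channel
    # comes from the blue side, which the green background never touched.
    return (gr, bg, gb, alpha)
```

A pure-background pixel – (0, 255, 0) on one half, (0, 0, 255) on the other – comes out fully transparent, while a pixel the background never influenced keeps its original color at full opacity.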

And the result:




Note: Maybe this will be a series, maybe not – it’s mostly to just put down my thoughts about games that I’ve played, that I feel need to be talked about – beyond just, “This game is awesome! go play it!”

First off, I try to stay away from console-oriented games as much as possible – mostly because the design focus differs quite a bit from the more open-ended focus of the games I tend to enjoy. However, when such a game is noteworthy enough to get positive reviews from those few game reviewers I trust, I’m willing to bite the proverbial bullet and give it a shot. Puns may or may not be intended.

Spec Ops: The Line was lauded for its narrative, and its portrayal of the psychological effects of violence. Reviewers went so far as to proclaim it a true example of games as an art form. I’ll address these points; but first, I want to talk about gameplay.

The gameplay is…well, exactly what you should expect from a game with a console focus. Linear gameplay, checkpoint saves, auto-regenerating health, rail-shooter elements, and the general impression that the developer finds the story and production so important, that they can’t trust players to discover it on their own. Then toss in a wonky context-based control system, and a third person camera that ceases to be useful at all when firing a mounted machine gun, and said gameplay becomes pretty obnoxious – even by console standards.

But, hey – I’ll even “suffer” through what amounts to an interactive movie, if the story is good enough. Unfortunately, it stumbles there as well. See, the general concept of the story is to portray the main character as someone who is driven insane by a chain of events starting from a couple of “no right answer” scenarios. But the heavy-handed approach to development, and the insistence on a pre-created main character, completely voiced and scripted, don’t work well with this goal.

See, in a movie or a book, you can watch the main character lose control, and feel for them. You can sit on the outside and watch the descent into madness step by step. You can do the same thing with a character in a game, but only if the perspective is the same. You can also achieve the same goal with a player character, but you have to make sure the player feels like they’re completely in control, while slowly distorting their perspective. Due to the nature of the player character, the developers had to attempt both methods at once. While that could theoretically be done, its success depends heavily on the individual player – and all it takes is a couple of “Wait, why did he act that way?” moments for the entire thing to fall apart.

The second issue was with the “no right answer” scenarios, and this ties heavily into the trend of binary “morality” mechanics, where the game forces you into one of two decisions – or even a single decision, by proxy. The first scenario is a situation where you come across a large group of soldiers wearing the same uniforms as the ones that have attacked you through the previous levels. At this point, one of your squadmates points out a convenient mortar with white phosphorus shells. The other squadmate then voices his opposition, as it’s an especially cruel and inhumane way of killing people.

But here’s the thing – you literally have to use it. The game gives you no other choice. There’s no attempt to talk, no option to merely walk away – walking out there will have them open fire on you, and if you attempt to take them out using more conventional methods, the game will deliver a literally endless flow of enemies until you’re dead. So, effectively, there is no decision at play here. You go ahead and play the game the way the developer forces you to, and lo and behold, these soldiers were escorting civilians out of the area. So congrats, you just horrifically murdered a ton of civilians – and the game is all too happy to force you to see the gruesome results. At that point, the main character gets that numb, gut-wrenching expression on his face, and voices his intent to kill whoever is behind this – presumably the first step towards his eventual insanity. Meanwhile, as the player, I’m just sitting here going “What the hell, devs?!” That’s not a “no right answer” scenario – that’s a “you’re picking the wrong answer, because we’re forcing you to” scenario.

And then a bit later in the game, you’re forced into a situation where two people are suspended from a girder, both with enemy snipers aiming at them. You’re told you have to choose which one dies – the civilian who stole water, or the soldier who killed the civilian’s family in the process of apprehending him. And of course, the game won’t progress until you do – but it immediately cuts off all player control once the decision is made, while you watch (another) cutscene of the survivor running away.

So, in the end, it’s a victim of its own focus. The player is merely an actor following the script, held in check by the director, in what amounts to an interactive movie in a shooter’s disguise. Roger Ebert once claimed that a game can never be considered art, because the player element comes between the creator and the work. Personally, I’ve always disagreed with this, despite understanding where he was coming from. But it’s games like this that make me wonder if he might’ve been right after all. Because Spec Ops: The Line might very well be a work of art; but it’s certainly not much of a game.

The industry needs more developers like VALVe and Bethesda – developers who understand that the player’s involvement in a story is what gives the story a purpose in the first place, instead of just giving players something to do until the next cutscene.

If you’re wondering why your project is desperately seeking “freeglut.lib” instead of “freeglut_static.lib”, resulting in a linker error – or why you get a host of LNK2019 (unresolved external symbol) errors if you try to rename freeglut_static to freeglut – your “problem” lies in the freeglut_std.h header file that’s included in your project.

If you look at that file, you’ll see the following preprocessor conditional:

/* Windows static library */
# ifdef FREEGLUT_STATIC

#  define FGAPI
#  define FGAPIENTRY

   /* Link with Win32 static freeglut lib */
#  pragma comment (lib, "freeglut_static.lib")

/* Windows shared library (DLL) */
# else

#  define FGAPIENTRY __stdcall
#  if defined(FREEGLUT_EXPORTS)
#   define FGAPI __declspec(dllexport)
#  else
#   define FGAPI __declspec(dllimport)

   /* Link with Win32 shared freeglut lib */
#   pragma comment (lib, "freeglut.lib")
#  endif

# endif

So, to use the static library, you’ll need to add FREEGLUT_STATIC to your preprocessor definitions (or #define it before including the header).

If you happen to be using GLEW, that also requires a preprocessor command before it will use the static library. In that case, it’s GLEW_STATIC.

I figured I might as well put a “front end” up, instead of just using my hosting for what amounts to online storage.

I’ll be honest – web design drives me nuts. Not that I don’t know how, or that it’s technically difficult. It’s more of an internal conflict, really. Part of me wants a nice-looking site that avoids the numerous usability failures you see on a lot of other sites. The other part just thinks it’s all useless crap that takes time away from coding something “cool”. So as a compromise, I just use WordPress, find a theme that has potential, and then spend the better part of a day tweaking the appearance, and modifying the theme itself to exorcise the stupid.

I’m more than likely not done yet. I haven’t tested it on anything but Chrome (both the Android and Windows versions), and I’m sure there will be more features as I need them, but at least visitors won’t be greeted by just the SiteGround “Coming Soon” page anymore.