One important thing I want to achieve in building Hikari as a data-driven game is reusability — being able to define an object once and use it many times without the overhead of duplication.
Think about this: you have 10 enemies on the screen all of the same type. All 10 of the enemies will be playing the same animation, the only difference being that each enemy may be playing a different part of the animation at any point in time. It doesn’t make sense to have 10 copies of the animation when really only a single animation is being played.
Solving this problem is easy: separate the animation data from the playback mechanism. Then you can:
Share instances of Animation objects
Play different parts of the same animation without duplicating it
We’ve already defined our Animation class which solves the animation data part of the problem. So let’s create an Animator:
#include <memory>
#include <functional>

class Animation;

class Animator {
private:
    bool paused;
    float timeElapsed;
    unsigned int currentFrameIndex;
    std::shared_ptr<Animation> animation;

    void play(float delta);
    void handleEndOfAnimation();
    float getCurrentFrameDuration() const;

protected:
    float getTimeElapsed() const;
    unsigned int getCurrentFrameIndex() const;
    std::shared_ptr<Animation> getAnimation() const;

public:
    Animator();
    virtual ~Animator();

    /**
     * Starts playback from the beginning.
     */
    void rewind();

    /**
     * Pauses any currently playing animation. Calls to Animator::update will
     * still work but playback will not advance.
     *
     * @see Animator::isPaused
     * @see Animator::unpause
     */
    void pause();

    const bool isPaused() const;

    /**
     * Resumes animation playback.
     *
     * @see Animator::pause
     * @see Animator::isPaused
     */
    void unpause();

    /**
     * Updates animation playback by a specified amount of time.
     *
     * @param delta time to advance playback, in seconds
     */
    virtual void update(float delta);

    /**
     * Sets the animation to play.
     */
    void setAnimation(std::shared_ptr<Animation> animation);
};
Notice that Animator has virtual functions; it’s meant to be a base class. The reason? We may want to “apply” an animation to different kinds of things in different ways, while the playback mechanism stays the same. By subclassing Animator we can adapt the playback output for each of those things.
Also notice that there is a private “playing” method: void play(float delta). This is where the core playback functionality lives, and that logic should work the same way for all types of Animators. Keep it secret, keep it safe.
Let’s say that we want to be able to reuse Animation objects between game objects and elements of our GUI. It’s easy to do this by subclassing Animator into more specialized classes: SpriteAnimator and GUIIconAnimator.
For sprites, all we need to do is change which frame is displayed as the animation plays. We do this by changing the sprite’s “source rectangle”, that is, the area of its source image it displays when rendered.
SpriteAnimator::SpriteAnimator(sf::Sprite & sprite)
    : Animator()
    , sprite(sprite)
    , sourceRectangle(sprite.getTextureRect())
{
}

void SpriteAnimator::update(float delta) {
    Animator::update(delta);

    if(const auto & animation = getAnimation()) {
        const auto & currentFrame = animation->getFrameAt(getCurrentFrameIndex());
        const auto & currentFrameRectangle = currentFrame.getSourceRectangle();

        sourceRectangle.top    = currentFrameRectangle.getTop();
        sourceRectangle.width  = currentFrameRectangle.getWidth();
        sourceRectangle.height = currentFrameRectangle.getHeight();
        sourceRectangle.left   = currentFrameRectangle.getLeft();

        sprite.setTextureRect(sourceRectangle);

        // Respect the animation frame's "hot spot"
        sprite.setOrigin(
            static_cast<float>(currentFrame.getHotspot().getX()),
            static_cast<float>(currentFrame.getHotspot().getY())
        );
    }
}
As you can see, this kind of separation makes using Animation objects in different ways easy. You can imagine how GUIIconAnimator might work, assuming a gui::Icon can have its “source rectangle” changed. We use this design for animation playback in Hikari, and animations are shared by game objects and GUI elements.
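To make that concrete, a GUIIconAnimator could be as small as this. It’s only a sketch: gui::Icon’s setSourceRectangle and the rectangle accessors are assumed names here, not necessarily the real API.

GUIIconAnimator::GUIIconAnimator(gui::Icon & icon)
    : Animator()
    , icon(icon)
{
}

void GUIIconAnimator::update(float delta) {
    // Advance playback exactly like any other Animator...
    Animator::update(delta);

    // ...then apply the current frame to the icon instead of a sprite.
    if(const auto & animation = getAnimation()) {
        const auto & currentFrame = animation->getFrameAt(getCurrentFrameIndex());
        const auto & rectangle = currentFrame.getSourceRectangle();

        icon.setSourceRectangle(
            rectangle.getLeft(),
            rectangle.getTop(),
            rectangle.getWidth(),
            rectangle.getHeight()
        );
    }
}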
If you’re ever building an animation system, consider decoupling the animation data from the playback mechanism.
Now that the player can shoot and destroy enemies, it has become necessary to spawn bonus items for the player to collect. In Mega Man games there aren’t very many bonus items to choose from, so this is a pretty simple task. The original games used a simple “bonus probability table” and a random number to pick which item to spawn. I basically wanted to do the same thing, so I wrote a small program to test it out. The idea is that you store a “range” paired with an item identifier, pick a random number, determine which range it falls into, and spawn whatever item that range identifies.
So, let’s say we have the following chances:
Extra Life (1%)
Large Health Energy (2%)
Large Weapon Energy (2%)
Small Health Energy (15%)
Small Weapon Energy (15%)
Nothing (65%)
Given those chances, a small test program lets us “pick” a single item randomly with the correct percentage chance. Here’s how the rolls map to items:
0 = Extra Life
1 = Large Health Energy
2 = Large Health Energy
3 = Large Weapon Energy
4 = Large Weapon Energy
5 = Small Health Energy
6 = Small Health Energy
7 = Small Health Energy
8 = Small Health Energy
9 = Small Health Energy
10 = Small Health Energy
11 = Small Health Energy
12 = Small Health Energy
13 = Small Health Energy
14 = Small Health Energy
15 = Small Health Energy
16 = Small Health Energy
17 = Small Health Energy
18 = Small Health Energy
19 = Small Health Energy
20 = Small Weapon Energy
21 = Small Weapon Energy
22 = Small Weapon Energy
23 = Small Weapon Energy
24 = Small Weapon Energy
25 = Small Weapon Energy
26 = Small Weapon Energy
27 = Small Weapon Energy
28 = Small Weapon Energy
29 = Small Weapon Energy
30 = Small Weapon Energy
31 = Small Weapon Energy
32 = Small Weapon Energy
33 = Small Weapon Energy
34 = Small Weapon Energy
35 = Nothing
36 = Nothing
37 = Nothing
38 = Nothing
39 = Nothing
40 = Nothing
41 = Nothing
42 = Nothing
43 = Nothing
44 = Nothing
45 = Nothing
46 = Nothing
47 = Nothing
48 = Nothing
49 = Nothing
It’s simple. Each std::pair<int, std::string> stores a drop percentage and an item identifier. Storing these in a vector creates ranges between the entries; then you just check each range in order until you find a match. If you don’t find a match, nothing is spawned. I’ve already implemented this in the game and it’s working as intended.
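If you want to see the idea in isolation, here’s a minimal sketch of it (not the exact test program, but the same technique; the table values mirror the chances listed above):

#include <cstddef>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Each pair stores a drop percentage and an item identifier; any roll that
    // doesn't fall into one of these ranges means "spawn nothing".
    std::vector<std::pair<int, std::string> > bonusTable;
    bonusTable.push_back(std::make_pair(1,  "Extra Life"));
    bonusTable.push_back(std::make_pair(2,  "Large Health Energy"));
    bonusTable.push_back(std::make_pair(2,  "Large Weapon Energy"));
    bonusTable.push_back(std::make_pair(15, "Small Health Energy"));
    bonusTable.push_back(std::make_pair(15, "Small Weapon Energy"));

    std::srand(static_cast<unsigned int>(std::time(0)));

    const int roll = std::rand() % 100;   // 0..99
    std::string result = "Nothing";

    // Walk the table, accumulating the percentages into ranges, and take the
    // first range the roll falls into.
    int upperBound = 0;

    for(std::size_t i = 0; i < bonusTable.size(); ++i) {
        upperBound += bonusTable[i].first;

        if(roll < upperBound) {
            result = bonusTable[i].second;
            break;
        }
    }

    std::cout << roll << " = " << result << std::endl;

    return 0;
}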
As in most cases, the simple approach is the best.
In this post I’m going to talk about how I grouped animations together in the game. If you haven’t read my first post about animations I recommend doing so before continuing.
Typically a character will have many animations, so it would be nice to package related animations together in a set. Besides organization, there’s another advantage to grouping animations: if, for example, several characters have similar actions (e.g. idle, walking, jumping, taking damage, etc.), then it may even be possible to “swap” animation sets and the “correct” animation will automagically play, provided your logic is sufficiently decoupled from your animation names.
I won’t go too much more into that as it could probably be a post of its own.
Animation Sets
An animation set is not much more than a map of Animation instances and a string that can be used to identify the image/texture to use when drawing them.
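In code, that’s about all there is to it. Here’s a sketch (the member and method names are illustrative, not necessarily what Hikari actually uses):

#include <map>
#include <memory>
#include <string>

class Animation;

class AnimationSet {
private:
    std::string imageFileName;                                    // texture shared by every animation in the set
    std::map<std::string, std::shared_ptr<Animation>> animations; // animations keyed by name

public:
    explicit AnimationSet(const std::string & imageFileName)
        : imageFileName(imageFileName)
    {
    }

    const std::string & getImageFileName() const {
        return imageFileName;
    }

    void add(const std::string & name, const std::shared_ptr<Animation> & animation) {
        animations[name] = animation;
    }

    std::shared_ptr<Animation> get(const std::string & name) const {
        auto found = animations.find(name);
        return found != animations.end() ? found->second : std::shared_ptr<Animation>();
    }
};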
Now, defining a set of animations in JSON is dead easy: just create each animation as a key/value pair in an object. That object is your animation set. Here’s an example of an animation set for the “Octopus Battery” enemy found in several Mega Man titles. It has three animations: Idle, Walking, and Stopping.
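It looks something along these lines (the field names and values here are illustrative, not the exact file):

{
    "image": "img/enemy-octopus-battery.png",
    "animations": {
        "idle": {
            "repeat": true,
            "keyframe": 0,
            "frames": [
                { "x": 0, "y": 0, "width": 16, "height": 16, "hotspot": [8, 8], "length": 1.0 }
            ]
        },
        "walking": {
            "repeat": true,
            "keyframe": 0,
            "frames": [
                { "x": 16, "y": 0, "width": 16, "height": 16, "hotspot": [8, 8], "length": 0.25 },
                { "x": 32, "y": 0, "width": 16, "height": 16, "hotspot": [8, 8], "length": 0.25 }
            ]
        },
        "stopping": {
            "repeat": false,
            "keyframe": 0,
            "frames": [
                { "x": 48, "y": 0, "width": 16, "height": 16, "hotspot": [8, 8], "length": 0.25 }
            ]
        }
    }
}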
At the heart of any great platformer is good quality animation, and Mega Man games deliver on that front. Due to the technical limitations of the NES, the programmers at the time had to get pretty creative with how they represented sprites and animations.
Luckily for us today computer technology has come a long way and we have the luxury of focusing on solving programming tasks without worrying as much about our hardware limitations.
So let’s talk about animations.
Control
Since I wanted fine-grained control over many aspects of the animation I had a few requirements for how I wanted them to be stored and used within the game. Some things I thought about were:
Frame dimensions may vary
Frame durations may vary
Some animations may repeat
Some animations may have a “beginning” and a “looping” part
Frames
An animation is really a sequence of images (referred to as “frames” from here on) and some data describing how they are displayed. So let’s define a frame:
A frame is simply a rectangle, an origin (or “hotspot”), and a display time.
The “source rectangle” is the region of an image or texture to use when displaying this frame.
The “hot spot” is an offset from the top-left of the source rectangle that can be used to “move” a frame when displaying it. This is useful if the frames of an animation are not all the same size or the subject is not always in the same relative location within a given frame.
The “display time” is how long the frame should be displayed. In Hikari’s case display time is expressed in seconds where 1.0f is one second.
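That’s really all a frame is. Here’s a sketch of it as a class (the geometry types and accessor names are placeholders; Hikari’s real ones may differ):

// Placeholder geometry types for the sake of the sketch.
struct Point     { int x, y; };
struct Rectangle { int left, top, width, height; };

// A single animation frame: where to sample the source image, the frame's
// "hot spot", and how long to display it (in seconds).
class Frame {
private:
    Rectangle sourceRectangle;
    Point hotspot;
    float displayTime;

public:
    Frame(const Rectangle & sourceRectangle, const Point & hotspot, float displayTime)
        : sourceRectangle(sourceRectangle)
        , hotspot(hotspot)
        , displayTime(displayTime)
    {
    }

    const Rectangle & getSourceRectangle() const { return sourceRectangle; }
    const Point & getHotspot() const { return hotspot; }
    float getDisplayTime() const { return displayTime; }
};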
Animations
Now that frames are covered we can focus on the remaining aspects of animations: whether they repeat or not, and if they do, do they start from the beginning or somewhere else?
If an animation repeats there is a chance that you may want to play some sort of “introductory” part and then loop over another part indefinitely. This could be handled by using two different animations and playing them in sequence, but then you run into the problem of defining which animations play in sequence and when to play them, etc. It’s a common enough case that I felt baking it into Animation was the right thing to do…
…and that’s what keyframe is for.
An animation’s keyframe is the frame to start playback from when it repeats. If the keyframe is set to 0 then the entire animation will play from the beginning when it repeats. Let’s say, though, that you have a 9-frame animation of Mega Man running. The first 3 frames show him just starting to move, while the remaining 6 frames show him fully running. You can use a single Animation with a keyframe of 2 (since frame indices are 0-based) and you’re all set. The animation will play frames 0-8 the first time through, and then 2-8 on every subsequent loop until playback is restarted. Cool.
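Inside the Animator this boils down to a few lines when the last frame finishes. Here’s a sketch, assuming Animation exposes accessors like repeats(), getKeyframeIndex(), and getFrameCount() (the real names may differ):

// Sketch of what the end-of-animation handling might look like; the Animation
// accessors used here are assumed names.
void Animator::handleEndOfAnimation() {
    if(animation && animation->repeats()) {
        // Loop back to the keyframe so the "intro" frames only play once.
        currentFrameIndex = animation->getKeyframeIndex();
    } else if(animation) {
        // Non-repeating animations simply hold on their last frame.
        currentFrameIndex = animation->getFrameCount() - 1;
    }
}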
There’s one other thing that’s important to think about: how does one handle transitioning between two animations? Should they always restart from the beginning when you change from one to the other? What about the case where you have a running animation and a running + shooting animation? The transition between those two should not cause playback to restart, so how do we handle that? That’s where the syncGroup comes into play.
Sync Groups
The concept of sync groups is simple: animations in the same sync group have identical playback characteristics, animations in different sync groups do not. The sync group is a marker to indicate that there is no need to reset playback when transitioning between animations in the same group. Any time you transition, check if the previous and next animations are in the same sync group. If they are, don’t reset playback. If they aren’t, then start playing from the beginning of the new animation.
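In practice the check is tiny. Here’s a sketch, assuming a game object that owns an Animator, remembers its current Animation, and that Animation has a getSyncGroup() accessor (all assumed names, not Hikari’s actual API):

// Illustrative transition logic: only rewind when the sync groups differ.
void Actor::changeAnimation(const std::shared_ptr<Animation> & next) {
    const bool sameSyncGroup = currentAnimation && next
        && currentAnimation->getSyncGroup() == next->getSyncGroup();

    currentAnimation = next;
    animator.setAnimation(next);

    if(!sameSyncGroup) {
        // Different group (or no previous animation): restart playback.
        animator.rewind();
    }
    // Same group: leave the elapsed time and frame index alone so the two
    // animations stay in sync across the transition.
}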
Of course, sync groups have their drawbacks. The biggest is that it’s still up to someone to make sure that animations in the same sync group really are identical, frame-wise.
Defining animations
Since our data is JSON it’s easy to represent animations in a structured way. Here’s a simple 2-frame, repeating animation:
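Something like this (again, the exact field names here are illustrative):

{
    "repeat": true,
    "keyframe": 0,
    "syncGroup": 0,
    "frames": [
        { "x": 0,  "y": 0, "width": 24, "height": 24, "hotspot": [12, 23], "length": 0.15 },
        { "x": 24, "y": 0, "width": 24, "height": 24, "hotspot": [12, 23], "length": 0.15 }
    ]
}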
An animation with many frames written this way can get a little bit lengthy but the definitions are still very readable and I would prefer this over XML.
Next installment we’ll talk more about grouping animations together as well as how we go about actually playing them.
Project Hikari is a hobby project of mine. The goal is to create a
Mega Man game
in true NES style. I started breaking ground on the project back in 2010 and
worked on it off-and-on for a few months. Eventually I had a working program but
unfortunately it was so terribly written that building an actual game was out of
the question.
So I did what any respectable person would do: I started over. More than
once.
It’s coming along nicely, but it’s still got a long way to go. I hope to document
the progress here. For now, we’ll start at the beginning with some boring stuff.
Technical Details and Dependencies
Here’s a brief overview of some of the libraries being used by Hikari. This list
is not exhaustive but the main points are covered.
Language
The game is written in C++, a language
with which I do not have professional experience, but a language that I have a
strange love for. For audio, event, graphics, and window management it uses the
wonderful Simple and Fast Multimedia Library.
Files
PhysicsFS is used as an abstraction for the file
system. It can be used to treat files, directories, and packaged formats like .ZIP
as one transparent file system. This is a nice thing to have when you want to
package game assets for distribution but may need to modify them during
development.
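Roughly, the idea looks like this (a sketch, not Hikari's actual setup code;
the paths are made up):

#include <physfs.h>

void setUpFileSystem(const char * argv0) {
    PHYSFS_init(argv0);

    // During development, read assets straight from a loose directory...
    PHYSFS_mount("content", "/", 1);

    // ...and for distribution, the same assets can come from an archive.
    PHYSFS_mount("content.zip", "/", 1);

    // Either way, lookups go through one transparent file system.
    if(PHYSFS_exists("conf/game.json")) {
        // load it...
    }
}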
Data
The game is designed to be data-driven so that content can be easily created for
it without the need to recompile the application. Things like animations, stage
layouts, and configuration files can be added or modified outside the game itself.
I chose JSON as the data format because:
It’s easy for humans (like me) to read
It’s compact
It seemed like a good idea at the time
The JsonCpp library handles parsing the JSON
files for the game.
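The kind of parsing involved is straightforward. A rough sketch (not Hikari's
actual loading code; the field names are illustrative):

#include <json/json.h>
#include <fstream>
#include <string>

float readFrameLength(const std::string & fileName) {
    std::ifstream file(fileName.c_str());

    Json::Value root;
    Json::Reader reader;

    if(reader.parse(file, root)) {
        // e.g. pull the display time of the first frame of an animation
        return root["frames"][0u]["length"].asFloat();
    }

    return 0.0f;
}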
Audio
To get the right sound in the game there seemed to be no better alternative than
to emulate the NES hardware. Luckily Blargg’s Audio Libraries
make doing that dead simple. Hikari can actually play the sound and music from
the original NES games. Word.
So, is it done?
Not yet. When will it be? That’s a good question. The project is under active
development and all of the code is available on GitHub.
Check it out, critique it, and send feedback.