Sunday 4 December 2011

Flash PID Controlled Steering



I was introduced to the concept of using a PID controller during one of my university modules on games, and I had always thought it was an easy and useful algorithm for game AI, especially for missile-like behaviour, but I had not attempted to implement it until today. Fortunately, I'm equipped with the original source, i.e. AI Game Programming Wisdom 2, Chapter 2.8, Intelligent Steering Using PID Controllers by Euan Forrester (EA Black Box).

I started by reading the text and trying to implement the algorithm based on my understanding of the text alone. However, I got stuck at several parts and had to reference the source code to solve them. Overall, I think that helped me learn more, as I could compare my original understanding against the source code to see where I went wrong. Here is what I learned:

  1. The PID Controller should be used to influence the direction or angle of the missile, not the position. Originally, I rushed into the code and got "bitten" because I instinctively tried to calculate the difference between the positions of the target and the missile, rather than the angle.
  2. The output of the PID Controller for missile behaviour should be an angular acceleration. This means that the missile needs to track an angular velocity as well. This does not seem to be mentioned in the text, so it may be hard to figure out by yourself unless you have an engineering background where PID controllers are used.
  3. The output of the PID Controller should be a negative feedback to the system. This was only obvious to me when I saw the diagram on Wikipedia where it is clearly shown that the feedback should be negative. If the feedback is positive, what you would see is that the missile would start turning away from the target rather than towards it. Perhaps this behaviour can also be exploited for other purposes.
  4. The movement of the missile can and should be independent of the PID Controller. I.e. the missile should move based on its own velocity or acceleration and should be agnostic to the presence of the PID Controller. The PID Controller should only be used to influence the angular acceleration of the missile, somewhat like the wind.
  5. Applying a drag force to the angular velocity and clamping the angular acceleration are both good ideas. At first, when the algorithm was not properly implemented, I found the missile spinning like crazy, so it is a good idea to have a drag force that always reduces the angular velocity, as well as a cap on the angular acceleration, to avoid uncontrolled spinning.
  6. The rest of the algorithm is actually pretty straightforward! Aside from the above, it seems my original implementation was not too far off, but I definitely would not have been able to complete this demo in time if not for the source code (a minimal sketch combining these points follows this list).
  7. P stands for the Proportional gain and it is useful to think of it as the "now" variable. The higher it is, the more the result reacts to its current situation. Too little might seem lazy; too much might make the result unstable because it is always overreacting.
  8. I stands for the Integral term and it is useful to think of it as the "past" variable. The higher it is, the more inertia the result will have, as it puts more emphasis on what its past errors have been. Therefore, it tends to overshoot the target.
  9. D stands for Derivative term and it is useful to think of it as the "future" variable. The higher it is, the more it will try to predict the future and will try to slow the rate of change of the result, especially if it is going to overshoot.
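
To tie the points above together, here is a minimal sketch of what a per-frame steering update might look like in ActionScript 3. The variable names and gain values are my own placeholders, not the ones from the book or my demo:

var kP:Number = 2.0;    // proportional gain ("now")
var kI:Number = 0.1;    // integral gain ("past")
var kD:Number = 1.0;    // derivative gain ("future")

var integral:Number = 0;
var previousError:Number = 0;

var heading:Number = 0;                   // missile's current angle in radians
var angularVelocity:Number = 0;
var maxAngularAcceleration:Number = 5.0;  // cap to avoid crazy spinning (point 5)
var angularDrag:Number = 0.95;            // damps the angular velocity every update (point 5)

function steer(targetAngle:Number, dt:Number):void
{
    // negative feedback (point 3): error = desired angle - current angle, wrapped to [-PI, PI]
    var error:Number = Math.atan2(Math.sin(targetAngle - heading),
                                  Math.cos(targetAngle - heading));

    integral += error * dt;
    var derivative:Number = (error - previousError) / dt;
    previousError = error;

    // the controller's output is an angular acceleration (point 2), not a position
    var angularAcceleration:Number = kP * error + kI * integral + kD * derivative;
    angularAcceleration = Math.max(-maxAngularAcceleration,
                          Math.min(maxAngularAcceleration, angularAcceleration));

    angularVelocity = (angularVelocity + angularAcceleration * dt) * angularDrag;
    heading += angularVelocity * dt;
}

The missile then moves along its heading at its own speed each frame, independently of the controller (point 4).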

Saturday 14 May 2011

Flash Dynamic Sound Generation 2: Musical Notes




Today I learnt how to produce musical notes in equal temperament tuning using Flash's dynamic sound generation. If you would like to see how I dynamically generated the sounds, you can check my previous post here.


Essentially, musical notes are just particular frequencies, so I could reuse the different wave-forms I made previously to produce them. All I had to do was find out how to determine the frequencies of the notes. The formula to produce the notes is:


Frequency = 2^(n/12) * 440
where n is the number of semitones relative to the note A4, which is 440 Hz.


For example, for the note A#4, n is 1, therefore the frequency is 2^(1/12) * 440 ≈ 466.16 Hz. If you're not familiar with the note names, it is basically the note (A#) followed by the octave number (4).


Actually, the formula is quite intuitive if we think about it. There are 12 semitones in an octave and each octave is double the frequency of the previous octave. From this, we can tell that to double the frequency we need to multiply by a factor of 2, but since we only double every 12 semitones, we divide the semitone number by 12. Finally, we need a reference frequency, and the standard reference is the note A4 at 440 Hz.
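
In ActionScript this is a one-liner. A small sketch (the function name is my own):

// n is the number of semitones relative to A4 (440 Hz)
function noteNumberToFrequency(n:int):Number
{
    return Math.pow(2, n / 12) * 440;
}
// e.g. noteNumberToFrequency(1) returns ~466.16 (A#4)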


As shown, calculating musical note frequencies is pretty easy, so now I'll show how to convert the semitone number into a note name. I.e. given n = 2, let's find out how to get the note name "B4". Here's my code, with the explanation below:


function NoteNumberToName(noteNum:int):String
{
    // noteNum ranges from -69 (C-1) to 58 (G9)
    // assumes noteNum 0 is A4

    // octave numbers are counted from C, so we have to add an offset of 9;
    // noteNum 0 is octave 4
    var octaveNum:int = Math.floor((noteNum + 9) / 12) + 4;
    var octaveNumStr:String = octaveNum.toString();

    // semitone within the octave, counted from C (0) to B (11)
    var octaveNote:int = (noteNum + 69) % 12;

    var returnString:String;
    switch (octaveNote)
    {
        case 0:  returnString = "C"  + octaveNumStr; break;
        case 1:  returnString = "C#" + octaveNumStr; break;
        case 2:  returnString = "D"  + octaveNumStr; break;
        case 3:  returnString = "D#" + octaveNumStr; break;
        case 4:  returnString = "E"  + octaveNumStr; break;
        case 5:  returnString = "F"  + octaveNumStr; break;
        case 6:  returnString = "F#" + octaveNumStr; break;
        case 7:  returnString = "G"  + octaveNumStr; break;
        case 8:  returnString = "G#" + octaveNumStr; break;
        case 9:  returnString = "A"  + octaveNumStr; break;
        case 10: returnString = "A#" + octaveNumStr; break;
        case 11: returnString = "B"  + octaveNumStr; break;
        default: returnString = "invalid"; break;
    }
    return returnString;
}


First of all, the range of notes I'm using is the same as the range of notes available in the MIDI system, i.e. 128 notes from C-1 to G9, see here. I'm also taking the input as a number relative to A4, so note A4 is 0, C-1 is -69 and G9 is 58. However, you may wish to change this to a zero-based index if you like.


The quirky thing about note names is that they start counting from C while the reference frequency is A4, so we have to add an offset of 9, since C is 9 semitones below A. So to get the octave number, i.e. the "4" in "A4", we add the offset, divide the note number by 12 semitones and add 4, since note number 0 translates to A4.

Finally, all that is left is to get the note itself, and to do this, we perform a modulus operation and a switch case over the 12 different semitones. I figured there was no simple formula to determine the note names due to the uneven layout of having no semitone between E and F, and between B and C, so a switch case is ideal. If there is a formula, please do post it in the comments!
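
As a quick sanity check, here is how the function behaves at the edges of the range, alongside the noteNumberToFrequency sketch from earlier:

trace(NoteNumberToName(2));       // "B4"
trace(NoteNumberToName(-69));     // "C-1"
trace(NoteNumberToName(58));      // "G9"
trace(noteNumberToFrequency(2));  // ~493.88, the frequency of B4 in Hz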

Flash Dynamic Sound Generation 1


I messed around with dynamic sound generation in Flash today, inspired by some interesting websites like Tone Matrix and Otomata. I created a Flash swf for testing frequencies and wave-forms.

At first I didn't know where to start, so I did some searching and turned up this useful link from Adobe. I also found this site, which gave really simple sample code that I decided to try out. However, the code did not produce any sound, so I looked up the docs on SampleDataEvent and worked from there.

Things I learnt:
  1. The number of samples should not be lower than 2048, as recommended by the docs. If there is not enough data, the sound may stop playing, which is why my computer did not produce any sound. The samples are just used as a buffer and are not played immediately on the next update.
  2. The callback is only called when the buffer is running out of data to stream. I felt this was not clearly explained in the docs. Essentially, setting the sample count to 2048 or 8192 has no effect on the sound produced; writing 8192 samples in one update just means it will take longer before the callback is called again to refill the buffer. The drawback of a higher sample count is that it also takes up more processing time during that update.
  3. The data is always played at 44100 Hz. This is stated in the docs for the Sound class's extract function, but it is not reiterated in the SampleDataEvent docs, which confused me. I'll explain why this detail is important later in (7).
  4. Wikipedia taught me that the sine wave has a rather complex general form which we seldom see. I tried playing around with the DC offset in Flash but I could not tell the difference between the sounds. In essence, we only need the simple form, i.e. y = A * sin(f*t), where A is the amplitude and f is the frequency we want.
  5. Originally I was thinking that square, triangle and sawtooth waves had to be generated by a Fourier series, adding sine waves together, which is what I remember from physics class. However, through Wikipedia I found that there are simpler ways to generate these waves. For example, a square wave can be generated simply by taking the sign of the sine wave, i.e.:
    y = -A if sin(f*t) < 0
    y = A if sin(f*t) > 0
    In code, this is simply: y = (sin(f*t) > 0) ? A : -A;
  6. Originally I also thought that the speed of sound was required to generate real sound frequencies, because we would need to map the wavelength into the generation process using the frequency equation f = v/w. However, this is not the case: we only generate the waveform at the desired frequency and the hardware takes care of playing it into the air, so we do not need to account for velocity. In other words, frequency = 1/period, which means we can use the frequency directly in our equations.
  7. The units are important! As with all mathematical calculations, making sure the units are consistent is very important. At first I was experimenting with f = 220 Hz and A = 0.25, i.e. using the equation y = A * sin(f*t), I got y = 0.25 * sin(220*t). Although this seems correct, it gave me the wrong sound. The problem is that I made the wrong assumption about the units in the equation. Instead of t being time in seconds, we are actually stepping through samples, and the samples are played at 44.1 kHz as mentioned in (3). Therefore, the correct equation is y = A * sin(f * (s/44100)), where s is the sample number. This took me a while to figure out, so I hope it will save you some time.
  8. Keep in mind the periods of the waves. While figuring out how to generate the sawtooth and triangle wave forms, I got a different tone from the sine and square wave forms. I realised I needed to multiply the argument by 2π for the sine and square wave forms, because the sine function's period is 2π, but not for the other wave forms (a short sketch follows this list).
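
Here is a minimal sketch of the whole thing for a sine wave, writing 2048 samples per callback. The variable names are my own, but the Sound/SampleDataEvent usage follows the Adobe docs linked above:

import flash.events.SampleDataEvent;
import flash.media.Sound;

var frequency:Number = 220;   // Hz
var amplitude:Number = 0.25;
var sampleNumber:int = 0;     // running sample count; samples play back at 44100 Hz (point 3)

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(e:SampleDataEvent):void
{
    // write at least 2048 samples per callback so the buffer never runs dry (point 1)
    for (var i:int = 0; i < 2048; i++, sampleNumber++)
    {
        // y = A * sin(2*PI * f * s / 44100), with the 2*PI factor from point 8
        var y:Number = amplitude * Math.sin(2 * Math.PI * frequency * sampleNumber / 44100);
        e.data.writeFloat(y);   // left channel
        e.data.writeFloat(y);   // right channel
    }
}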

Tuesday 19 April 2011

Gamasutra Programming Jobs Word Clouds

I compiled all 122 of the Gamasutra Jobs related to programming (excluding scripting) that are listed today and used http://www.wordle.net/ to create these word clouds:


Based on the Job Titles of the 122 jobs (click for larger view):


Based on the Job Descriptions of the 122 jobs (click for larger view):

Based on the Platforms listed in 85 of the 122 jobs (click for larger view):
Data used


Some interesting observations I found about the current state of the games industry:

  • Senior developers are highly sought after.
  • Gameplay programmers are in demand perhaps due to the fact that many companies have switched to engines and tools.
  • Console platforms seem to be hiring more than mobile ones. This could be due to the fact that mobile developers are mostly startups.
Disclaimer: this data is based on Gamasutra alone and only for today, so it could easily become outdated or be biased towards the Gamasutra community.
