OpenGL uses normalized device coordinates by default. That means it defines the top of the viewport as 1.0 and the bottom as -1.0 on the y-axis, and 1.0 on the right and -1.0 on the left on the x-axis.
Cartesian coordinates
I suppose the idea is that OpenGL doesn't try to assume what the end user wants to do, and instead provides something standard that will work for any viewport. In our case, since we're working in 2D, we'd simply like to use pixel coordinates (via an orthographic projection), so we can address positions relative to the 640x480 window we have defined.
Orthographic projection
To do this, we're going to edit our vertex shader to look like this:
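The shader might look roughly like the following sketch. The identifier names (`a_position`, `u_projection`) and the GLSL version are assumptions, not necessarily the tutorial's exact code:

```glsl
#version 120

// Vertex position, supplied per vertex in pixel coordinates.
attribute vec2 a_position;

// Orthographic projection matrix, set once from the main program.
uniform mat4 u_projection;

void main() {
    // Convert pixel coordinates into normalized device coordinates.
    gl_Position = u_projection * vec4(a_position, 0.0, 1.0);
}
```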
We add a new kind of variable, a uniform, which is set once and keeps its value until we redefine it, as opposed to an attribute, which is a variable fed into the shader per vertex. In this case we define a 4x4 matrix that converts our pixel coordinates back into normalized device coordinates.
So next we need to edit our main file to define our orthographic matrix and pass the value into our shader program.