Entity Models
Entity models are assets loaded into a stage to populate a specific scene. These can include enemies, enemy bullets, garbage cans, and NPCs.
These files start at offset 0x124800
in memory, and all pointers in the section are relative to the start of the section. This means that you either need to truncate the bytes before this location, or add this offset to each of the pointers to align them to the models.
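A minimal sketch of both approaches is shown below (fileBuffer here is a hypothetical ArrayBuffer holding the loaded file; the DataView snippets later in this document assume section-relative offsets, as in the first option).
const ENTITY_MODEL_OFFSET = 0x124800

// Option 1: slice off everything before the section, so section-relative
// pointers can be used directly as offsets into the view
const sectionView = (fileBuffer: ArrayBuffer) =>
  new DataView(fileBuffer.slice(ENTITY_MODEL_OFFSET))

// Option 2: keep the whole file and add the base to every pointer before reading
const toFileOffset = (pointer: number) => ENTITY_MODEL_OFFSET + pointer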
Adjustments
import { Matrix4 } from 'three'

const SCALE = 0.00125
const ROT = new Matrix4()
ROT.makeRotationX(Math.PI) // 180 degrees around the X axis
There are two notes to make about models on the PlayStation. The first is that there are no floating point values; all of the models are defined with whole integers. This makes the numbers used absolutely massive, so in order to scale them down to a more reasonable size in meters, we use a scale of 0.00125.
The second note is that -y
is up, and +y
is down. Trying to fix this by setting y = -y
causes the models to render inside-out, while rendering them as-is and then rotating the result has the issue that the models come out upside-down on export. So the best approach seems to be to rotate all of the coordinates by 180 degrees around the X axis (the ROT matrix above) to put them upright. This has some effects on the animations, but once accounted for, it produces models that are right-side up.
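As a quick sanity check on those two adjustments, here is what they do to a hypothetical raw position of (800, -1200, 0) (made-up numbers, not taken from a real asset):
import { Vector3 } from 'three'

// SCALE and ROT are the constants defined above
const raw = new Vector3(800, -1200, 0)
raw.multiplyScalar(SCALE) // (1, -1.5, 0) in meters
raw.applyMatrix4(ROT)     // (1, 1.5, 0) after the 180 degree flip around X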
List Header
The models in this section are indexed by a list. The list starts with a count, followed by the type and offsets for each model. The definition for the first model begins right after the end of the list. Pseudo-code for parsing the list header is shown below.
typedef struct {
    uint32_t type;
    uint32_t meshOfs;
    uint32_t trackOfs;
    uint32_t controlOfs;
} MML2_EntityHeader;

#define ENTITY_MODEL_OFFSET 0x124800

uint32_t count;
MML2_EntityHeader* modelList;

// Seek to the start of the section and read the model count (fp is an open FILE*)
fseek(fp, ENTITY_MODEL_OFFSET, SEEK_SET);
fread(&count, sizeof(uint32_t), 1, fp);

// Read one header entry per model
modelList = malloc(count * sizeof(MML2_EntityHeader));
fread(modelList, sizeof(MML2_EntityHeader), count, fp);
Note: type
here is defined as a uint32
value, but it's really more of a uint8_t[4]
value. The first byte provides flags for how the engine should handle the behavior of the mesh (enemy, NPC, etc.). The second byte provides the specific asset type, and the third byte provides a modifier for the type.
For instance, the Green Horokko enemy is type 0x000520
and the Red Horokko is type 0x010520
, with 0x00 or 0x01 defining the color, 0x05 defining a Horokko, and 0x20 defining an enemy type. The whole 4 bytes effectively acts as a unique identifier, and I haven't spent time looking into how the specific flags are handled.
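As an illustration of that byte layout (not code from the game), the type value can be split apart like this, assuming it was read as a little-endian uint32 as in the pseudo-code above:
// Split the 4-byte type value into the parts described above.
// e.g. 0x000520 -> behavior 0x20 (enemy), asset 0x05 (Horokko), modifier 0x00 (green)
const decodeEntityType = (type: number) => ({
  behavior: type & 0xff,         // first byte in the file: engine behavior flags
  asset: (type >> 8) & 0xff,     // second byte: specific asset type
  modifier: (type >> 16) & 0xff, // third byte: modifier (e.g. color)
})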
Entity Model Header
The top of each model provides a header with offsets to each of the properties defined for the model. The first byte is the number of submeshes in the model.
Following that is a list of offsets for the mesh, skeleton (if it exists), hierarchy (if it exists), texture (if it exists), collision (if it exists), and shadow. The properties annotated with (if it exists) may not be defined, for example when the model does not have a skeleton, or uses a flat color instead of a texture.
Note that all of the offsets are relative to the start of the section at 0x124800, not to the start of the model.
typedef struct {
    uint8_t subMeshCount;
    uint8_t unknown[3];
    uint32_t meshListOffset;
    uint32_t skeletonOffset;
    uint32_t hierarchyOffset;
    uint32_t textureOffset;
    uint32_t collisionOffset;
    uint32_t shadowOffset;
} MML2_MeshHeader;
Going through each of the properties: the submesh count provides the number of submeshes in the asset. For the purposes of this document, a "submesh" is a weighted mesh or limb. For instance, Data the Monkey has six submeshes: head, body, left hand, right hand, left leg, and right leg. Note that some of the submeshes can be hidden by default.
The mesh list offset points to the data that defines the geometry of the model.
The skeleton offset points to the bones. Bones are a list of x, y, z coordinates for the position of each bone, and the bone count does not necessarily match the submesh count.
The hierarchy offset points to a list of parent-child relationships for each of the submeshes. This effectively defines the order of the skeleton bones. There are other flags and considerations that will need to be taken into account when reading this section.
The texture offset points to a list of texture coordinates in the framebuffer. As far as I can tell, the number of textures is not provided.
The collision section is labeled as such because when you comment it out (set it to zero), you can walk through things. I've never analyzed this section, as it has no effect on the rendered asset.
Lastly, the shadow section is a pointer to a list of bytes where 0x00 represents black and 0xff represents an edge on a face; these are used as vertex colors to give the model shading. This section is also not covered in this document.
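The later snippets reference variables like skeletonOfs, heirarchyOfs, and textureOfs without showing how they are read, so here is a minimal sketch of parsing this header with a DataView, assuming view covers the data starting at 0x124800 and headerOfs points at the top of the model; the names are chosen to line up with the variables used below.
// Read the MML2_MeshHeader fields; all offsets are relative to 0x124800
const readMeshHeader = (view: DataView, headerOfs: number) => ({
  submeshCount: view.getUint8(headerOfs + 0x00),
  // bytes 0x01 - 0x03 are unknown
  meshOfs: view.getUint32(headerOfs + 0x04, true),
  skeletonOfs: view.getUint32(headerOfs + 0x08, true),
  heirarchyOfs: view.getUint32(headerOfs + 0x0c, true),
  textureOfs: view.getUint32(headerOfs + 0x10, true),
  collisionOfs: view.getUint32(headerOfs + 0x14, true),
  shadowOfs: view.getUint32(headerOfs + 0x18, true),
})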
Parsing the Mesh
Reading the mesh has a HARD dependency on skeleton, hierarchy and texture, so we will cover these first.
Skeleton
The skeleton is one of the most straightforward structs in Megaman Legends 2 to read. If the skeleton is declared, each bone is an x, y, z vector where each axis is a signed 2-byte value.
typedef struct {
    int16_t x;
    int16_t y;
    int16_t z;
} MML2_MeshBone;
In most cases the number of bones will equal the number of submeshes, but there are often extra meshes for objects whose visibility can be toggled. As such, the number of bones needs to be calculated: take the difference between the hierarchy offset (which comes after the skeleton data) and the skeleton offset, divide by six, and ignore the remainder, as anything left over is padding bytes.
let ofs = skeletonOfs
const bones: Bone[] = []
const nbBones = Math.floor((heirarchyOfs - skeletonOfs) / 6)
for (let i = 0; i < nbBones; i++) {
// Read Bone Position
const x = view.getInt16(ofs + 0x00, true)
const y = view.getInt16(ofs + 0x02, true)
const z = view.getInt16(ofs + 0x04, true)
ofs += 6
// Create Threejs Bone
const bone = new Bone()
bone.name = `bone_${i.toString().padStart(3, '0')}`
const vec3 = new Vector3(x, y, z)
vec3.multiplyScalar(SCALE)
// Rotate 180 Degrees
vec3.applyMatrix4(ROT)
bone.position.x = vec3.x
bone.position.y = vec3.y
bone.position.z = vec3.z
bones.push(bone)
}
After that, the only thing left to do is read each of the bones. For each bone, we scale to get the size in meters, and then rotate by 180 degrees to account for -y
being up.
Hierarchy
The hierarchy describes how to weight the submeshes, along with other flags. The hierarchy section defines a 4 byte value for each of the submeshes. The structure is shown in the following code.
typedef struct {
    uint8_t index;
    uint8_t parentBoneIndex;
    uint8_t weightedBoneIndex;
    uint8_t flags;
} MML2_MeshFlags;
The first byte is an index: a count that starts at zero and increments for each struct describing a submesh. Following that is the parent index for the bone; this is what actually defines the parent-child bone structure and can be used to assemble the bones into a skeleton. After that is the weighted bone index for each submesh. The final byte is flags, which describes behavior a mesh can have, such as being hidden or sharing vertices. More information on this can be found in the Mesh Processing section.
ofs = heirarchyOfs
const hierarchy = []
const nbSegments = (textureOfs - heirarchyOfs) / 4
for (let i = 0; i < nbSegments; i++) {
const polygonIndex = view.getInt8(ofs + 0x00)
const boneParent = view.getInt8(ofs + 0x01)
const boneIndex = view.getUint8(ofs + 0x02)
const flags = view.getUint8(ofs + 0x03)
const hidePolygon = Boolean(flags & 0x80)
const shareVertices = Boolean(flags & 0x40)
if (bones[boneIndex] && bones[boneParent] && !bones[boneIndex].parent) {
bones[boneParent].add(bones[boneIndex])
}
if (flags & 0x3f) {
console.error(`Unknown Flag: 0x${(flags & 0x3f).toString(16)}`)
}
hierarchy.push({
polygonIndex,
boneIndex,
boneParent,
hidePolygon,
shareVertices,
})
ofs += 4
}
bones.forEach((bone) => {
bone.updateMatrix()
bone.updateMatrixWorld()
})
Texture
Each texture entry is a 4 byte value, and like the other sections, no count is provided. This means that we need to calculate the length of the section based on the offsets and then parse from there.
Either the collision or the shadow offset should follow the texture offset, and the number of textures is this length divided by four.
const mats = []
if(textureOfs) {
ofs = textureOfs
const textureCount = ((collisionOfs || shadowOfs) - textureOfs) / 4;
for (let i = 0; i < textureCount; i++) {
const imageCoords = view.getUint16(ofs + 0x00, true);
const paletteCoords = view.getUint16(ofs + 0x02, true);
ofs += 4;
const canvas = renderTexture(imageCoords, paletteCoords);
const texture = new Texture(canvas);
texture.flipY = false;
texture.needsUpdate = true;
mats[i] = new MeshBasicMaterial({
map : texture,
transparent: true,
alphaTest: 0.1
});
}
}
The first two bytes of each entry are the image coordinates, and the second two bytes are the palette coordinates.
The image above shows an example of the framebuffer in Megaman Legends 2. The area outlined in red on the right holds the images. These are 4-bit indexed images: there are 22 "pages" of indexed textures, each 64 x 256 pixels in size in the framebuffer. When the 4 bits per pixel are expanded, each page becomes a 256x256 true color texture. So the image coordinates describe which page to use, and the palette area, outlined in blue, describes which 16 colors the 4-bit indexes reference.
The x and y coordinates in the framebuffer for the image and palette, respectively, are shown in the code snippet below.
const image_x = (imageCoords & 0x0f) << 6;
const image_y = imageCoords & 0x10 ? 0x100 : 0;
const palette_x = (paletteCoords & 0x3f) << 4;
const palette_y = paletteCoords >> 6;
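The renderTexture helper used in the material loop above is not defined in this document. The sketch below shows what it might look like, using the same coordinate decoding, under the assumption that the framebuffer is available as a Uint16Array of 1024x512 16-bit VRAM words (here it is passed in explicitly, whereas the call above takes only the two coordinates); the color conversion and transparency handling are simplified.
// Rough sketch: expand one 4-bit page plus its 16-color palette into a canvas.
// vram is assumed to be a Uint16Array of the full 1024x512 framebuffer.
const VRAM_WIDTH = 1024

const renderTexture = (vram: Uint16Array, imageCoords: number, paletteCoords: number) => {
  const imageX = (imageCoords & 0x0f) << 6
  const imageY = imageCoords & 0x10 ? 0x100 : 0
  const paletteX = (paletteCoords & 0x3f) << 4
  const paletteY = paletteCoords >> 6

  // Read the 16 palette entries (15-bit BGR, 5 bits per channel)
  const palette: number[][] = []
  for (let i = 0; i < 16; i++) {
    const word = vram[paletteY * VRAM_WIDTH + paletteX + i]
    const r = (word & 0x1f) << 3
    const g = ((word >> 5) & 0x1f) << 3
    const b = ((word >> 10) & 0x1f) << 3
    const a = word === 0 ? 0 : 255 // a zero entry is commonly treated as transparent
    palette.push([r, g, b, a])
  }

  // Expand the 64-word-wide page into a 256x256 RGBA image (4 pixels per word)
  const canvas = document.createElement('canvas')
  canvas.width = 256
  canvas.height = 256
  const ctx = canvas.getContext('2d')!
  const img = ctx.createImageData(256, 256)
  for (let y = 0; y < 256; y++) {
    for (let x = 0; x < 256; x++) {
      const word = vram[(imageY + y) * VRAM_WIDTH + imageX + (x >> 2)]
      const index = (word >> ((x & 3) * 4)) & 0x0f
      const [r, g, b, a] = palette[index]
      const i = (y * 256 + x) * 4
      img.data[i + 0] = r
      img.data[i + 1] = g
      img.data[i + 2] = b
      img.data[i + 3] = a
    }
  }
  ctx.putImageData(img, 0, 0)
  return canvas
}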
In the geometry, uv values are defined as single byte (0 - 255) values which represent a pixel index in the 256x256 texture.
Geometry
Once we have the bones, the hierarchy, and the textures, we can finally start reading the mesh, or geometry. The geometry is made of two parts: a header which describes the primitives for each of the submeshes, and then the primitives themselves. The primitives are defined as a list of vertices followed by either triangle or quad faces.
typedef struct {
    uint8_t triCount;
    uint8_t quadCount;
    uint8_t vertexCount;
    int8_t scale;
    uint32_t triangleOffset;
    uint32_t quadOffset;
    uint32_t vertexOffset;
} MML2_Submesh_Header;
Each of the submeshes has the header shown above, with a length of 0x10. The first three bytes are the counts for the triangles, quads, and vertices respectively. The fourth byte is used for scale: since the vertices use a 10-bit format, the scale parameter defines how far the 10-bit encoded values should be shifted to get the true size of the submesh.
const appliedScale = scale === -1 ? 0.5 : 1 << scale;
Following the byte values are the offsets to each of the primitive lists. Remember that there is a hierarchy entry associated with each submesh.
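The code that reads these headers is not shown in this document; below is a minimal sketch, assuming the 0x10-byte headers are packed back-to-back starting at the mesh list offset from the model header.
// Read the 0x10-byte header for submesh i
const readSubmeshHeader = (view: DataView, meshOfs: number, i: number) => {
  const ofs = meshOfs + i * 0x10
  const scale = view.getInt8(ofs + 0x03)
  return {
    triCount: view.getUint8(ofs + 0x00),
    quadCount: view.getUint8(ofs + 0x01),
    vertexCount: view.getUint8(ofs + 0x02),
    scale: scale === -1 ? 0.5 : 1 << scale, // applied scale, as above
    triOfs: view.getUint32(ofs + 0x04, true),
    quadOfs: view.getUint32(ofs + 0x08, true),
    vertexOfs: view.getUint32(ofs + 0x0c, true),
  }
}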
Vertices
The vertices need to be parsed before reading the faces. The reason is the 0x40 bit in the hierarchy flags: when this bit is set and a vertex shares the same position as a vertex in the parent bone, the index should map to the vertex of the parent bone. This means that the index of each vertex needs to be mapped between local and global indices in order to be matched with the indices provided in the face list.
// 10-bit two's complement masks for the packed vertex values (see the notes after
// this function); these constants are not defined elsewhere in this document
const VERTEX_MASK = 0b1111111111 // 0x3ff
const VERTEX_MSB = 0b1000000000  // 0x200
const VERTEX_LOW = 0b0111111111  // 0x1ff

const readVertex = (
view: DataView,
vertexOffset: number,
vertexCount: number,
scale: number,
bone: Bone,
shareVertices: boolean,
vertices: WeightedVertex[],
boneIndex: number,
boneParent: number
) => {
const localIndices: WeightedVertex[] = []
let ofs = vertexOffset
const haystack = bone.parent
? vertices
.filter((v) => {
return v.boneIndex === boneParent
})
.map((v) => {
const { x, y, z } = v.pos
return [x.toFixed(2), y.toFixed(2), z.toFixed(2)].join(',')
})
: []
for (let i = 0; i < vertexCount; i++) {
const dword = view.getUint32(ofs, true)
ofs += 4
const xBytes = (dword >> 0x00) & VERTEX_MASK
const yBytes = (dword >> 0x0a) & VERTEX_MASK
const zBytes = (dword >> 0x14) & VERTEX_MASK
const xHigh = (xBytes & VERTEX_MSB) * -1
const xLow = xBytes & VERTEX_LOW
const yHigh = (yBytes & VERTEX_MSB) * -1
const yLow = yBytes & VERTEX_LOW
const zHigh = (zBytes & VERTEX_MSB) * -1
const zLow = zBytes & VERTEX_LOW
const vec3 = new Vector3(
(xHigh + xLow) * scale,
(yHigh + yLow) * scale,
(zHigh + zLow) * scale
)
vec3.multiplyScalar(SCALE)
vec3.applyMatrix4(ROT)
vec3.applyMatrix4(bone.matrixWorld)
const vertex: WeightedVertex = {
pos: vec3,
boneIndex,
}
// If the flag is set for weighted vertices, we check to see
// if a vertex with the same position has already been declared
if (shareVertices) {
const { x, y, z } = vec3
const needle = [x.toFixed(2), y.toFixed(2), z.toFixed(2)].join(',')
if (haystack.indexOf(needle) !== -1) {
vertex.boneIndex = boneParent
}
}
localIndices.push(vertex)
vertices.push(vertex)
}
return localIndices
}
Each vertex is an x, y, z coordinate packed into a 4 byte value, which means that all 4 bytes (32 bits) need to be read. The values for x, y, and z then need to be masked out as two's complement 10-bit values. This means that the bottom 9 bits of each value are positive and the highest bit is a negative value. These need to be calculated and then multiplied by the scale to get the vertex value as a float.
From there each value needs to be scaled to meters, rotated by 180 degrees around the X axis to account for y = -y, and then multiplied by the weighted bone's world matrix to move the vertices into world coordinates. Lastly, the indices need to be mapped to a global index which takes vertex sharing into account, and the list of mapped vertices needs to be returned by the function for use in the face mapping.
Faces
Last are the actual faces. Both triangles and quads use the same format.
const readFace = (
view: DataView,
faceOffset: number,
faceCount: number,
isQuad: boolean,
localIndices: WeightedVertex[],
faces: FaceIndex[]
) => {
const FACE_MASK = 0b1111111
const PIXEL_TO_FLOAT_RATIO = 0.00390625 // 1 / 256 (one texel in a 256px texture)
const PIXEL_ADJUSTMENT = 0.001953125 // 1 / 512 (half a texel)
let ofs = faceOffset
for (let i = 0; i < faceCount; i++) {
const dword = view.getUint32(ofs + 0x08, true)
const materialIndex = (dword >> 28) & 0x3
const au = view.getUint8(ofs + 0x00)
const av = view.getUint8(ofs + 0x01)
const bu = view.getUint8(ofs + 0x02)
const bv = view.getUint8(ofs + 0x03)
const cu = view.getUint8(ofs + 0x04)
const cv = view.getUint8(ofs + 0x05)
const du = view.getUint8(ofs + 0x06)
const dv = view.getUint8(ofs + 0x07)
ofs += 0x0c
const indexA = (dword >> 0x00) & FACE_MASK
const indexB = (dword >> 0x07) & FACE_MASK
const indexC = (dword >> 0x0e) & FACE_MASK
const indexD = (dword >> 0x15) & FACE_MASK
const a: FaceIndex = {
materialIndex,
boneIndex: localIndices[indexA].boneIndex,
x: localIndices[indexA].pos.x,
y: localIndices[indexA].pos.y,
z: localIndices[indexA].pos.z,
u: au * PIXEL_TO_FLOAT_RATIO + PIXEL_ADJUSTMENT,
v: av * PIXEL_TO_FLOAT_RATIO + PIXEL_ADJUSTMENT,
}
const b: FaceIndex = {
materialIndex,
boneIndex: localIndices[indexB].boneIndex,
x: localIndices[indexB].pos.x,
y: localIndices[indexB].pos.y,
z: localIndices[indexB].pos.z,
u: bu * PIXEL_TO_FLOAT_RATIO + PIXEL_ADJUSTMENT,
v: bv * PIXEL_TO_FLOAT_RATIO + PIXEL_ADJUSTMENT,
}
const c: FaceIndex = {
materialIndex,
boneIndex: localIndices[indexC].boneIndex,
x: localIndices[indexC].pos.x,
y: localIndices[indexC].pos.y,
z: localIndices[indexC].pos.z,
u: cu * PIXEL_TO_FLOAT_RATIO + PIXEL_ADJUSTMENT,
v: cv * PIXEL_TO_FLOAT_RATIO + PIXEL_ADJUSTMENT,
}
const d: FaceIndex = {
materialIndex,
boneIndex: localIndices[indexD].boneIndex,
x: localIndices[indexD].pos.x,
y: localIndices[indexD].pos.y,
z: localIndices[indexD].pos.z,
u: du * PIXEL_TO_FLOAT_RATIO + PIXEL_ADJUSTMENT,
v: dv * PIXEL_TO_FLOAT_RATIO + PIXEL_ADJUSTMENT,
}
faces.push(a, c, b)
if (!isQuad) {
continue
}
faces.push(b, c, d)
}
}
For the faces, the first 8 bytes are reserved for uv values for each of the indices. In the case of quads, uvs for all four indices are provided; in the case of triangles, the last two bytes are ignored. After that is a 4 byte (32 bit) value which contains each of the indices and the material index.
Each of the indices is a 7-bit value that will need to be masked and shifted. Indices are local numbers that map into the submesh's vertex list; for instance, if there are 30 vertices in the submesh, the indices will be labeled 0 - 29. This means that you will need to map each local vertex index to its position in the global vertex list to be able to parse the T-pose.
Two of the remaining high bits (bits 28 and 29 in the code above) select which texture the face uses, out of the texture coordinates parsed from the texture header. From there the faces and uv values will need to be added to the model.
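To tie everything together, a per-model driver loop might look something like the sketch below. This is not the original tool's code; it assumes the readMeshHeader and readSubmeshHeader sketches from earlier, the bones, hierarchy, and mats built above, and the readVertex and readFace functions.
// Sketch: read every submesh of one model into flat vertex / face lists
const vertices: WeightedVertex[] = []
const faces: FaceIndex[] = []

hierarchy.forEach(({ polygonIndex, boneIndex, boneParent, hidePolygon, shareVertices }) => {
  const submesh = readSubmeshHeader(view, meshOfs, polygonIndex)
  if (hidePolygon) {
    return // skipping hidden submeshes here for simplicity
  }
  // Vertices first, so the faces can look up their local indices
  const localIndices = readVertex(
    view,
    submesh.vertexOfs,
    submesh.vertexCount,
    submesh.scale,
    bones[boneIndex],
    shareVertices,
    vertices,
    boneIndex,
    boneParent
  )
  readFace(view, submesh.triOfs, submesh.triCount, false, localIndices, faces)
  readFace(view, submesh.quadOfs, submesh.quadCount, true, localIndices, faces)
})
// faces now holds one FaceIndex per triangle corner (position, uv, bone and material
// index), ready to be packed into a BufferGeometry using the mats created in the
// texture section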