
Commit 30eb98a

Fix typos
1 parent df291b9 commit 30eb98a

1 file changed: +5 −5 lines changed


bookcontents/chapter-14/chapter-14.md

+5 −5
@@ -10,7 +10,7 @@ You can find the complete source code for this chapter [here](../../booksamples/

In skeletal animation the way a model is transformed to play an animation is defined by its underlying skeleton. A skeleton is nothing more than a hierarchy of special points called joints. In addition to that, the final position of each joint is affected by the position of its parents. For instance, think of a wrist: the position of a wrist is modified if a character moves the elbow and also if it moves the shoulder.

-Joints do not need to represent a physical bone or articulation: they are artifacts that allow the creatives to model an animation (we may use sometimes the terms bone and joint to refer to the same ting). The models still have vertices that define the different positions, but, in skeletal animation, vertices are drawn based on the position of the joints they are related to and modulated by a set of weights. If we draw a model using just the vertices, without taking into consideration the joints, we would get a 3D model in what is called the bind pose. Each animation is divided into key frames which basically describes the transformations that should be applied to each joint. By changing those transformations, changing those key frames, along time, we are able to animate the model. Those transformations are based on 4x4 matrices which model the displacement and rotation of each joint according to the hierarchy (basically each joint must accumulate the transformations defined by its parents).
+Joints do not need to represent a physical bone or articulation: they are artifacts that allow the creatives to model an animation (we may sometimes use the terms bone and joint to refer to the same thing). The models still have vertices that define the different positions, but, in skeletal animation, vertices are drawn based on the position of the joints they are related to, modulated by a set of weights. If we draw a model using just the vertices, without taking the joints into consideration, we would get a 3D model in what is called the bind pose. Each animation is divided into key frames, which basically describe the transformations that should be applied to each joint. By changing those transformations across the key frames over time, we are able to animate the model. Those transformations are based on 4x4 matrices which model the displacement and rotation of each joint according to the hierarchy (basically, each joint must accumulate the transformations defined by its parents).

If you are reading this, you probably already know the fundamentals of skeletal animation. The purpose of this chapter is not to explain it in detail but to show an example of how it can be implemented using Vulkan with compute shaders. If you need all the details of skeletal animation you can check this [excellent tutorial](http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html).
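
To make the accumulation of transformations concrete, here is a minimal sketch. It is not part of the chapter's code: the `Joint` node is hypothetical and JOML's `Matrix4f` is assumed for the matrices; it only illustrates how a joint's final matrix combines its parents' transformations with its own.

```java
import org.joml.Matrix4f;
import java.util.ArrayList;
import java.util.List;

// Hypothetical joint node, for illustration only.
class Joint {
    final String name;
    final Joint parent;
    final List<Joint> children = new ArrayList<>();
    // Transformation of this joint relative to its parent for the current key frame
    Matrix4f localTransform = new Matrix4f();

    Joint(String name, Joint parent) {
        this.name = name;
        this.parent = parent;
        if (parent != null) {
            parent.children.add(this);
        }
    }

    // The final (model space) matrix of a joint is its parent's accumulated
    // transform multiplied by its own local transform.
    Matrix4f globalTransform() {
        return parent == null
                ? new Matrix4f(localTransform)
                : parent.globalTransform().mul(localTransform, new Matrix4f());
    }
}
```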

@@ -55,7 +55,7 @@ public class ModelData {
```
The new `animMeshDataList` attribute is the equivalent of the `meshDataList` one. That list will contain an entry for each mesh, storing the relevant data for animated models. In this case, that data is grouped under `AnimMeshData` and consists of two arrays which hold the weights that modulate the transformations applied to the joints associated with each vertex (referenced by their identifiers in the hierarchy). That data is common to all the animations supported by the model, since it is related to the model structure itself, its skeleton. The `animationsList` attribute holds the list of animations defined for a model. An animation is described by the `Animation` record and consists of a name, the duration of the animation (in milliseconds) and the data of the key frames that compose the animation. Key frame data is defined by the `AnimatedFrame` record, which contains the transformation matrices for each of the model joints for that specific frame. Therefore, in order to load animated models we just need to get the additional structural data for each mesh (weights and the joints they apply to) and the transformation matrices for each of those joints per animation key frame.

-After that we need to modify the `Entity` class to add new attributes to control its animation state to pause / resume the animation, to select the proper animation and to select a specific key frame):
+After that we need to modify the `Entity` class to add new attributes to control its animation state: to pause / resume the animation, to select the proper animation and to select a specific key frame:
```java
public class Entity {
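
As a rough mental model of the data described above, the new records could be sketched as follows. This is a hedged sketch only: the field names are assumptions for illustration, and the actual records are defined as inner classes of `ModelData`.

```java
import org.joml.Matrix4f;
import java.util.List;

// Per-mesh structural data: for each vertex, the weights that modulate the
// joint transformations and the identifiers of the joints they refer to.
record AnimMeshData(float[] weights, int[] boneIds) { }

// One key frame: a transformation matrix for each joint of the model.
record AnimatedFrame(Matrix4f[] jointMatrices) { }

// An animation: its name, its duration (in milliseconds) and its key frames.
record Animation(String name, double duration, List<AnimatedFrame> frames) { }
```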

@@ -161,7 +161,7 @@ public class ModelLoader {
...
}
```
-As you can see we are using a new flag: `aiProcess_LimitBoneWeights` that limits the number of bones simultaneously affecting a single vertex to a maximum value (the default maximum values is `4`). The `loadModel` method version that automatically sets the flags receives an extra parameter which indicates if this is an animated model or not. We use that parameter to avoid setting the `aiProcess_PreTransformVertices` for animated models. That flag performs some transformation over the data loaded so the model is placed in the origin and the coordinates are corrected. We cannot use this flag if the model uses animations because it will remove that information. In the `loadModel` method version that actually performs the loading tasks, we have added, at the end, code to load animation data. We first load the skeleton structure (the bones hierarchy) and the weights associated to each vertex. With tha information, we construct `ModelData.AnimMeshData` instances (one per Mesh). After that, we retrieve the different animations and construct the transformation data per key frame.
+As you can see, we are using a new flag, `aiProcess_LimitBoneWeights`, which limits the number of bones simultaneously affecting a single vertex to a maximum value (the default maximum value is `4`). The `loadModel` method version that automatically sets the flags receives an extra parameter which indicates whether this is an animated model or not. We use that parameter to avoid setting the `aiProcess_PreTransformVertices` flag for animated models. That flag performs some transformations over the loaded data so the model is placed at the origin and the coordinates are corrected. We cannot use this flag if the model uses animations because it would remove that information. In the `loadModel` method version that actually performs the loading tasks, we have added, at the end, code to load animation data. We first load the skeleton structure (the bone hierarchy) and the weights associated with each vertex. With that information, we construct `ModelData.AnimMeshData` instances (one per mesh). After that, we retrieve the different animations and construct the transformation data per key frame.

The `processBones` method is defined like this:
```java
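
For reference, the flag handling described above could look roughly like the sketch below. The exact flag combination is an assumption (only `aiProcess_LimitBoneWeights` and `aiProcess_PreTransformVertices` are named in the text, the rest are typical Assimp post-processing flags); the point is simply that the pre-transform flag is only applied to non-animated models.

```java
import org.lwjgl.assimp.AIScene;

import static org.lwjgl.assimp.Assimp.*;

public class LoaderFlagsSketch {

    // Sketch only: assemble the Assimp import flags, capping the number of
    // bones per vertex and skipping aiProcess_PreTransformVertices for
    // animated models, since it would drop the animation data.
    public static AIScene load(String modelPath, boolean animation) {
        int flags = aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices |
                aiProcess_Triangulate | aiProcess_FixInfacingNormals |
                aiProcess_CalcTangentSpace | aiProcess_LimitBoneWeights;
        if (!animation) {
            flags |= aiProcess_PreTransformVertices;
        }
        return aiImportFile(modelPath, flags);
    }
}
```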
@@ -639,7 +639,7 @@ Prior to jumping to the code, it is necessary to briefly describe compute shader

As mentioned above, a key topic of compute shaders is how many times they should be invoked and how the work load is distributed. Compute shaders define the concept of work groups, which are a collection of shader invocations that can be executed, potentially, in parallel. Work groups are three dimensional, so they will be defined by the triplet `(Wx, Wy, Wz)`, where each of those components must be equal to or greater than `1`. A compute shader will execute in total `Wx*Wy*Wz` work groups. Work groups also have a size, named the local size. Therefore, we can define the local size as another triplet `(Lx, Ly, Lz)`. The total number of times a compute shader will be invoked will be the product `Wx*Lx*Wy*Ly*Wz*Lz`. The reason for specifying these using three dimensional parameters is that some data is handled more conveniently in 2D or 3D. Think, for example, of an image transformation computation: we would probably be using the data of an image pixel and its neighboring pixels, so we could organize the work using 2D computation parameters. In addition to that, work done inside a work group can share the same variables and resources, which may be required when processing 2D or 3D data. Inside the compute shader we will have access to built-in variables that identify the invocation we are in, so we can properly access the data slice that we want to work with according to our needs.

-In order to support the execution of commands that will go through the compute pipeline, we need first to define a new class named `ComputePipeline` to support the creation of that type of pipelines. Compute pipelines are much simpler than graphics pipelines. Graphics pipelines have a set of fixed and programable stages while the compute pipeline has a single programmable compute shader stage. So let's go with it:
+In order to support the execution of commands that will go through the compute pipeline, we first need to define a new class named `ComputePipeline` to support the creation of that type of pipeline. Compute pipelines are much simpler than graphics pipelines. Graphics pipelines have a set of fixed and programmable stages, while the compute pipeline has a single programmable compute shader stage. So let's go with it:
```java
public class ComputePipeline {
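
The relation between group counts and local size can be illustrated with a short sketch. This is not the book's code: it assumes a local size of 32 on the X axis (the value used later in this chapter), a one dimensional work load, and uses Vulkan's `vkCmdDispatch` to submit `(Wx, 1, 1)` work groups.

```java
import org.lwjgl.vulkan.VkCommandBuffer;

import static org.lwjgl.vulkan.VK10.vkCmdDispatch;

public class DispatchSketch {

    // Lx: local size on the X axis, hard-coded in the compute shader.
    private static final int LOCAL_SIZE_X = 32;

    // The shader runs Wx * Lx times in total (Wy, Wz, Ly and Lz are 1 here),
    // so Wx is the number of items divided by the local size, rounded up so
    // that no item is left unprocessed.
    public static void dispatch(VkCommandBuffer cmdHandle, int numItems) {
        int groupCountX = (numItems + LOCAL_SIZE_X - 1) / LOCAL_SIZE_X;
        vkCmdDispatch(cmdHandle, groupCountX, 1, 1);
    }
}
```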

@@ -996,7 +996,7 @@ public class AnimationComputeActivity {
...
}
```
-In this methods, we first discard the models that do not contain animations. For each of the models that contain animations, we create a descriptor set that will hold an array of matrices with the transformation matrices associated to the joints of the model. Those matrices change for each animation frame, so for a model, we will have as many arrays (ans therefore as many descriptors) as animation frames the model has. We will pass that data to the compute shader as uniforms so we use a `UniformDescriptorSet` per frame that will contain that array of matrices. For each mesh of the model we will need at least, two storage buffers, the first one will hold the data for the bind position (position, texture coordinates, normal, tangent and bitangent). That data is composed by 14 floats (4 bytes each) and will be transformed according to the weights and joint matrices to generate the animation. The second storage buffer will contain the weights associated to each vertex (a vertex will have 4 weights that will modulate the bind position using the joint transformation matrices. Each opf those weights will be associated to a joint index). Therefore we need to create two storage descriptor sets per mesh. We combine that information in the `MeshDescriptorSets` record. That record also defines a paramater named `groupSize`, let's explain now what is this parameter for. As mentioned previously, compute shaders invocations are organized in work groups (`Wx`, `Wy` and `Wz`) which have a local size (`Lx`, `Ly` and `Lz`). In our specific case, we will be organizing the work using just one dimension, so the `Wy`, `Wz`, `Ly` and `Lz` values will be set to `1`. The local size is defined in the shader code, and, as we will see later on, we will use a value of `32` for `Lx`. Therefore, the number of times the compute shader will be executed will be equal to `Wx*Lx`. Because of that, we need to divide the total number of vertices, for a mesh, per the local size value (`32`) in order to properly set up the `Wx` value, which is what defines the `groupSize` parameter. Finally, we store the joint matrices descriptor sets and the storage descriptor sets in a map using the model identifier as the key. This will be used later on when rendering. To summarize, this method creates the required descriptor sets that are common to all the entities which use this animated model.
+In this method, we first discard the models that do not contain animations. For each of the models that contain animations, we create a descriptor set that will hold an array with the transformation matrices associated with the joints of the model. Those matrices change for each animation frame, so for a model we will have as many arrays (and therefore as many descriptors) as animation frames the model has. We will pass that data to the compute shader as uniforms, so we use a `UniformDescriptorSet` per frame that will contain that array of matrices. For each mesh of the model we will need at least two storage buffers: the first one will hold the data for the bind position (position, texture coordinates, normal, tangent and bitangent). That data is composed of 14 floats (4 bytes each) and will be transformed according to the weights and joint matrices to generate the animation. The second storage buffer will contain the weights associated with each vertex (a vertex will have 4 weights that will modulate the bind position using the joint transformation matrices; each of those weights will be associated with a joint index). Therefore, we need to create two storage descriptor sets per mesh. We combine that information in the `MeshDescriptorSets` record. That record also defines a parameter named `groupSize`; let's now explain what this parameter is for. As mentioned previously, compute shader invocations are organized in work groups (`Wx`, `Wy` and `Wz`) which have a local size (`Lx`, `Ly` and `Lz`). In our specific case, we will be organizing the work using just one dimension, so the `Wy`, `Wz`, `Ly` and `Lz` values will be set to `1`. The local size is defined in the shader code and, as we will see later on, we will use a value of `32` for `Lx`. Therefore, the number of times the compute shader will be executed will be equal to `Wx*Lx`. Because of that, we need to divide the total number of vertices of a mesh by the local size value (`32`) in order to properly set up the `Wx` value, which is what the `groupSize` parameter defines. Finally, we store the joint matrices descriptor sets and the storage descriptor sets in a map using the model identifier as the key. This will be used later on when rendering. To summarize, this method creates the required descriptor sets that are common to all the entities that use this animated model.

The records mentioned before are defined as inner classes:
```java
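
To tie the numbers above together, the buffer sizing and group count computation could be sketched like this. The class and method names are illustrative only; the constants come straight from the text (14 floats of 4 bytes each per vertex for the bind pose data, a local size of 32).

```java
public class AnimBufferSizesSketch {

    private static final int LOCAL_SIZE_X = 32;      // Lx, defined in the compute shader
    private static final int FLOATS_PER_VERTEX = 14; // position, texture coords, normal, tangent, bitangent
    private static final int FLOAT_SIZE_BYTES = 4;

    // Size in bytes of the storage buffer that holds the bind pose data of a mesh.
    public static long bindPoseBufferSize(int numVertices) {
        return (long) numVertices * FLOATS_PER_VERTEX * FLOAT_SIZE_BYTES;
    }

    // groupSize (Wx): enough work groups so that Wx * Lx covers every vertex.
    public static int groupSize(int numVertices) {
        return (int) Math.ceil((double) numVertices / LOCAL_SIZE_X);
    }
}
```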
