Animation Compression: Unreal 4

Unreal 4 2017

Unreal 4 offers a wide range of documented character animation compression settings.

It offers the following compression algorithms:

Note: The following review was done with Unreal 4.15

Least Destructive

Reverts any animation compression, restoring the animation to the raw data.

This setting ensures we use raw data. It sets all tracks to use no compression, which means they retain full floating point precision: None is used for translation and Float 96 for rotation, both explained below. Interestingly, it does not appear to revert the scale to a sensible default raw compression setting, which is likely a bug.

Remove Every Second Key

Keyframe reduction algorithm that simply removes every second key.

This setting does exactly what it claims; it removes every other key. It is a variation of sub-sampling. Each remaining key is further bitwise compressed.

Note that this compression setting also removes the trivial keys.
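For illustration, the removal step can be sketched in a couple of lines (a hypothetical helper, not the engine's actual code):

```python
def remove_every_second_key(keys):
    # Keep the keys at even indices: a 2x sub-sampling pass.
    # Hypothetical sketch, not Unreal's implementation.
    return keys[::2]

# A 5-key track keeps keys 0, 2, and 4.
print(remove_every_second_key([0.0, 1.0, 2.0, 3.0, 4.0]))
```

Because the first and last keys at even indices survive, sampling between the remaining keys falls back on linear interpolation.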

Remove Trivial Keys

Removes trivial frames (frames of tracks where position or orientation is constant over the entire animation) from the raw animation data.

This takes advantage of the fact that tracks are very often constant; when that happens, a single key is kept and the rest are discarded.
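A constant-track check might look like the following sketch (the helper name and tolerance are assumptions, not engine code):

```python
def collapse_constant_track(keys, tolerance=1e-8):
    # If every key matches the first within tolerance, the track is
    # constant: keep a single key and discard the rest.
    if all(abs(k - keys[0]) <= tolerance for k in keys):
        return keys[:1]
    return keys
```

A real implementation would compare vector and quaternion keys component-wise, but the principle is the same.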

Bitwise Compress Only

Bitwise animation compression only; performs no key reduction.

This compression setting aims to retain every single key and encodes them with one of several formats. It is a custom flavor of simple quantization that uses the same format for every track of a given type in the clip: you select one format each for rotation, translation, and scale.

These are the possible formats:

  • None: Full precision.
  • Float 96: Full precision is used for translation and scale. For rotations, a quaternion component is dropped and the rest are stored with full precision.
  • Fixed 48: For rotations, a quaternion component is dropped and the rest are stored with 16 bits per component.
  • Interval Fixed 32: Stores the value with 11-11-10 bits per component after range reduction. For rotations, a quaternion component is dropped.
  • Fixed 32: For rotations, a quaternion component is dropped and the rest are stored with 11-11-10 bits per component.
  • Float 32: This appears to be a semi-deprecated format where a quaternion component is dropped and the rest are encoded into a custom float format with 6 or 7 bits for the mantissa and 3 bits for the exponent.
  • Identity: The identity quaternion is always returned for the rotation track.

Only three formats are supported for translation and scale: None, Interval Fixed 32, and Float 96. All formats are supported for rotations.
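Range reduction followed by fixed point quantization, as used by Interval Fixed 32, can be sketched for a single component like this (a simplified illustration; the helper names are mine):

```python
def quantize(value, lo, hi, num_bits):
    # Remap [lo, hi] onto the integer range [0, 2^num_bits - 1].
    max_q = (1 << num_bits) - 1
    normalized = (value - lo) / (hi - lo) if hi != lo else 0.0
    return round(normalized * max_q)

def dequantize(q, lo, hi, num_bits):
    # Inverse mapping from the integer range back into [lo, hi].
    max_q = (1 << num_bits) - 1
    return lo + (q / max_q) * (hi - lo)

# With 11-11-10 bits, X and Y would use 11 bits each and Z 10 bits,
# packing all three components into a single 32 bit value.
```

The narrower the `[lo, hi]` interval of a track, the smaller the quantization error, which is the whole point of range reduction.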

If a track has a single key it is always stored with full precision.

When a component is dropped from a rotation quaternion, W is always selected. This is safe and simple since it can be reconstructed with a square root as long as our quaternion is normalized. The sign is forced to be positive during compression by negating the quaternion if W was originally negative. A quaternion and its negated opposite represent the same rotation on the hypersphere.
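The drop-and-reconstruct trick can be sketched as follows (hypothetical helpers; this assumes the input quaternion is normalized):

```python
import math

def drop_w(x, y, z, w):
    # Negate the whole quaternion if needed so the dropped W is
    # non-negative; q and -q represent the same rotation.
    if w < 0.0:
        x, y, z = -x, -y, -z
    return x, y, z

def reconstruct_w(x, y, z):
    # For a normalized quaternion, w = sqrt(1 - x^2 - y^2 - z^2).
    # The max() guards against a tiny negative value from rounding.
    return math.sqrt(max(1.0 - (x * x + y * y + z * z), 0.0))
```

Storing three components instead of four is what makes the 48 and 32 bit rotation formats possible.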

Sadly, with this setting the choice of format is entirely manual; no heuristic performs any automatic selection.

Note that this compression setting also removes the trivial keys.

Remove Linear Keys

Keyframe reduction algorithm that simply removes keys which are linear interpolations of surrounding keys.

This is a classic variation on linear key reduction with a few interesting twists.

They use a local space error metric coupled with a partial virtual vertex error metric. The local space metric is used classically to reject key removal beyond some error threshold. On the other hand, they check the impacted end effector (i.e. leaf) bones that are children of a given track. To do so, a virtual vertex is used, but no attempt is made to ensure it isn't collinear with the rotation axis. Bones between the two (the one being modified and the leaf) are not checked at all; should they end up with unacceptable error, it will be invisible to and unaccounted for by the metric.
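The core removal test of linear key reduction can be sketched on a single scalar track (the real metric operates on full transforms and virtual vertices, so this is only a simplified illustration):

```python
def can_remove_key(t0, v0, t, v, t1, v1, threshold):
    # A key is removable if linearly interpolating its two neighbors
    # reproduces its value within the error threshold.
    alpha = (t - t0) / (t1 - t0)
    interpolated = v0 + (v1 - v0) * alpha
    return abs(interpolated - v) <= threshold
```

Keys that pass this test contribute nothing the interpolator cannot recreate, so dropping them is lossless up to the threshold.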

There is also a bias applied on tracks to attempt to remove similar keys on children and their parent bone tracks. I’m not entirely certain why they would opt to do this but I doubt it can harm the results.

On the decompression side, no cursor or acceleration structure is used, meaning that each time we sample the clip we must search for the keys to interpolate between, and we do so per track.
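Without a cursor, each sample of each track pays for a search like the one below (a sketch of the cost involved, not engine code):

```python
import bisect

def sample_track(times, values, t):
    # Every call searches for the bracketing keys from scratch; a
    # cursor would instead remember where the previous sample landed.
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    i = bisect.bisect_right(times, t)
    alpha = (t - times[i - 1]) / (times[i] - times[i - 1])
    return values[i - 1] + (values[i] - values[i - 1]) * alpha
```

Since animation is typically sampled forward in small time increments, a cursor would turn this repeated binary search into a cheap incremental lookup.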

Note that this compression setting also removes the trivial keys.

Compress each track independently

Keyframe reduction algorithm that removes keys which are linear interpolations of surrounding keys, as well as choosing the best bitwise compression for each track independently.

This compression setting combines the linear key reduction algorithm along with the simple quantization bitwise compression algorithm.

It uses various custom error metrics which appear local in nature with some adaptive bias. Each encoding format will be tested for each track and the best result will be used based on the desired error threshold.


Automatic

Animation compression algorithm that is just a shell for trying the range of other compression schemes and picking the smallest result within a configurable error threshold.

A large mix of the previously mentioned algorithms are tested internally. In particular, various mixes of sub-sampling are tested along with linear key reduction and simple quantization. The best result is selected given a specific error threshold. This is likely to be somewhat slow due to the large number of variations tested (35 at the time of writing!).
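The selection logic amounts to something like this (the codec interface here is an assumption for illustration: each candidate returns its compressed size and measured error):

```python
def pick_best_codec(clip, codecs, error_threshold):
    # Try every candidate and keep the smallest result that stays
    # within the error threshold.
    best, best_size = None, None
    for codec in codecs:
        size, error = codec(clip)
        if error <= error_threshold and (best_size is None or size < best_size):
            best, best_size = codec, size
    return best
```

With 35 candidates, compression time scales with the full list even though only one result is kept, which explains why this setting is slow.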


Conclusion

In memory, every compression setting uses a naive layout: the data is sorted by track, then by key. This means that during decompression each track will incur a cache miss to read its compressed value.
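The difference can be illustrated with the index math of the two layouts (hypothetical helpers; Unreal's actual storage is variable-sized per track, so this only shows the access pattern):

```python
def track_major_offset(track, key, num_keys):
    # Data sorted by track, then key: one track's keys are contiguous,
    # but sampling every track at one time touches widely spaced offsets.
    return track * num_keys + key

def key_major_offset(track, key, num_tracks):
    # Data sorted by key, then track: everything a single sample needs
    # is contiguous, which is far more cache friendly.
    return key * num_tracks + track

# Sampling key 3 of tracks 0..2 with 100 keys per track touches
# offsets 3, 103, 203 in track-major order, but 9, 10, 11 key-major.
```

Decompression reads one key (or key pair) from every track at once, which is exactly the pattern the track-major layout penalizes.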

Performance wise, decompression times are likely to be high in Unreal 4, in large part because the memory layout is not cache friendly and because no acceleration structure such as a cursor is used with linear key reduction, which the Automatic setting likely selects often. This is an obvious area where it could improve.

Memory wise, the linear key reduction algorithm should be fairly conservative but it could be minimally improved by ditching the local space error metric. Coupled with the bitwise compression it should yield a reasonable memory footprint; although it could be further improved by using more advanced quantization techniques.

Up next: Unity 5

Back to table of contents