Six Degrees of Freedom

From Software Product Documentation

Pose Estimation with 6 Degree of Freedom Segments

A set of 3 or more markers attached to a rigid segment is used to track the movement of the segment and, at each frame of data, specify the pose (position and orientation) of the segment. This method is referred to as a 6 degree-of-freedom method because each segment (or each joint) is considered to have 6 variables that describe its pose: 3 variables describe the position of the origin, and 3 variables describe the rotation about each of the principal axes of the segment.

The idea is that although the motion-tracking apparatus reports marker positions by their laboratory or LCS coordinates, and in general all markers are moving, it can safely be assumed that the target markers move with the body segments to which they are attached, i.e., each target’s coordinates in the appropriate segment coordinate system (SCS) do not change throughout the movement. Provided at least three target markers, not positioned in a line, are tracked for each body segment, Visual3D will have enough information to determine the model pose.

The pose (position and orientation) of each segment is calculated using an optimal method. This contrasts with many software packages that compute segment coordinate systems on a frame-by-frame basis, resulting in inconsistencies throughout the data. The use of optimal strategies is perhaps the most important attribute of the Visual3D 6 DOF model. Optimal strategies are described in more detail in the literature. (Terminology from Cappello, A., Cappozzo, A., La Palombara, P.F., Lucchetti, L., Leardini, A. Multiple anatomical landmark calibration for optimal bone pose estimation. Human Movement Science, 16: 259-274, 1997.)

The process of building a segment defines the transformation from the recorded markers to the pose of the biomechanical model, to the pose of a transducer (force plate or force transducer) or to the pose of an assistive device.

Is the segment endpoint the joint center?

This is one of the occasions where the common use of the term "joint center" confuses people. For most segments of the body there really isn't a center about which the proximal and distal segments rotate (an exception is that the hip joint is often assumed to be a ball and socket joint). Visual3D tracks segment pose (position and orientation) using 6 degree-of-freedom methods, which allow the endpoints of the proximal and distal segments to move relative to each other. Note that this movement may be real (e.g., the knee "joint" is not a fixed axis) or may be caused by errors due to marker movement on the skin relative to the bones. Excessive movement of the endpoints between two segments may be an indication of a serious problem in the data collection that should be addressed.

We prefer to use the term "segment endpoint" because the segment coordinate systems are based on an axis between the segment endpoints, which aren't necessarily joint centers. Even we "fall back" on the common jargon, however, and you will find the term "joint center" in Visual3D's model builder mode when defining a segment.

What then is a JOINT?

In 6 DOF methods there is no explicit linkage (or joint) connecting the segments. Visual3D explores the collection of segments and considers any two segments in proximity (the distal end of one segment and the proximal end of another within the radius of the segment ends) to be "linked", and references a Joint between them. The Joint does not constrain the segments; it is rather a bookkeeping tool that keeps track of which segments are assumed to have an equal and opposite Joint Reaction Force acting between their endpoints and equal and opposite Joint Moments acting on the adjacent segments.

6 DOF Tracking

The motion of rigid segments in space can be fully described by measuring three independent translational degrees-of-freedom (position) and three independent rotational degrees-of-freedom (orientation).

Photogrammetric procedures for obtaining measurements of six degree-of-freedom (DOF) segmental motion require that a system of three or more noncollinear points be fixed to each segment. These noncollinear points are used to define orthogonal segment coordinate systems (SCSs) located independently within each of the segments. In addition, an orthogonal laboratory coordinate system (LCS), which is assumed to be stationary, is defined during system calibration.

Measurement of the position and orientation of a local segment coordinate system (SCS) with respect to the laboratory coordinate system (LCS) can be used to completely describe the segment's motion.
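As a concrete illustration of those six numbers, a pose given as a rotation matrix T and origin O can be unpacked into three translations and three rotation angles. The sketch below assumes an x-y-z Cardan sequence; the function name and the choice of sequence are illustrative, not part of Visual3D:

```python
import numpy as np

def pose_to_6dof(T, O):
    """Unpack a segment pose into six scalars: the three translations in O
    plus the Cardan angles (a, b, c) of T = Rx(a) @ Ry(b) @ Rz(c).
    The x-y-z sequence is an assumption; other sequences give different
    angles for the same pose."""
    b = np.arcsin(T[0, 2])              # rotation about the y axis
    a = np.arctan2(-T[1, 2], T[2, 2])   # rotation about the x axis
    c = np.arctan2(-T[0, 1], T[0, 0])   # rotation about the z axis
    return np.concatenate([O, [a, b, c]])
```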

A least squares procedure is used by Visual3D to determine the position and orientation. To understand how this procedure works, consider a point located on a segment at position A in the SCS. The location of the point in the LCS (P) is given by:

P = T A + O

where T is the rotation matrix from the SCS to the LCS and O is the translation between coordinate systems.

If the position O and orientation T are defined for some reference position, then the fixed SCS coordinates of a target A can be determined from the measurement of P at this position:

A = T^T (P - O)

If the segment undergoes motion, the new orientation matrix T and translation vector O may be computed at any instant, provided that for at least three noncollinear points A is predetermined and P is measured. The matrix T and origin vector O are found by minimizing the sum of squares error expression:

f = Σ_{i=1}^{m} |P_i - (T A_i + O)|^2

under the orthonormality constraint

T^T T = I

where m is equal to the number of targets on the segment (m > 2).

Since the above system of equations represents a constrained maximum-minimum problem, the method of Lagrangian multipliers can be used to obtain the solution. A function

G = Σ_{i=1}^{m} |P_i - (T A_i + O)|^2 + trace[Λ (T^T T - I)]

in which Λ is a symmetric matrix of Lagrangian multipliers, is used to supply the boundary conditions. (This solution is adapted from the solution outlined by Spoor & Veldpaus in the Journal of Biomechanics, 13: 391-393, 1980.)
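This constrained minimization has a well-known closed-form solution via the singular value decomposition, which is equivalent to the Lagrange-multiplier derivation. The sketch below is a minimal NumPy implementation of that standard solution, not Visual3D's internal code; the function name is illustrative:

```python
import numpy as np

def fit_pose(A, P):
    """Least-squares rigid-body fit: find the rotation T and origin O
    minimizing sum_i |P_i - (T A_i + O)|^2 subject to T^T T = I.

    A : (m, 3) marker positions in the segment coordinate system (SCS)
    P : (m, 3) measured marker positions in the lab coordinate system (LCS)
    """
    A_bar, P_bar = A.mean(axis=0), P.mean(axis=0)
    # Cross-dispersion matrix of the centred coordinates
    H = (A - A_bar).T @ (P - P_bar)
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1), guarding against reflections
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    T = Vt.T @ D @ U.T
    O = P_bar - T @ A_bar
    return T, O
```

Because every row of A and P enters the fit symmetrically, a dropped marker is handled by calling the same routine with the remaining rows, provided at least three noncollinear markers survive.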

The fact that a least squares fit can be done on an overdetermined system (m > 3) allows the user to employ up to 8 targets to track each segment. Overdetermination also allows Visual3D to calculate segment positions and orientations if data from one or more targets are lost (however, coordinates must be determined for at least three points on the segment).

In Practice

In principle, tracking markers can be placed anywhere on a rigid segment. In practice, the markers should be distributed over the maximum area possible for a segment, they should be placed in areas that exhibit the least soft tissue artifact, and they should be visible from as many cameras as possible throughout the movement.

Depending on the laboratory setup, the subject population, and the movement being recorded, most labs need to make custom compromises.

For example, consider tracking the thigh segment for a walking trial relative to the above rules. The markers would cover the maximum volume by being placed on the greater trochanter, the medial mid thigh, and the lateral knee. But the greater trochanter and lateral knee exhibit terrible soft tissue artifact, and the medial thigh marker would probably get knocked off during walking, so none of these 3 locations should probably be used. The compromise is to place markers distributed along the anterior and lateral side of the mid thigh. These markers could be on a rigid cluster (of 3 or 4 markers) or distributed around the leg (e.g., 8 markers).

6 DOF Segment Residual

Visual3D computes the 6 Degree of Freedom pose of a segment using a Least Squares fit of the tracking marker locations in the standing trial to the tracking marker locations at each frame of the movement trial. The Goodness of Fit is described by the residual.

The residuals are stored in meters as the segment residual.
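A minimal sketch of that computation, assuming the residual is the RMS distance between the measured markers and the calibration markers mapped through the fitted pose (whether Visual3D reports an RMS or a mean distance is an assumption here):

```python
import numpy as np

def segment_residual(A, P, T, O):
    """RMS fit error in metres between measured marker positions P (m, 3)
    and calibration positions A (m, 3) mapped through the pose (T, O).
    Assumed definition; Visual3D's exact formula may differ."""
    errors = P - (A @ T.T + O)                        # per-marker fit error
    return np.sqrt(np.mean(np.sum(errors**2, axis=1)))
```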

The amount of soft tissue artifact can be explored by computing the residual of the segment pose, but it is difficult to claim a meaningful difference between two residuals from different marker sets (e.g., trying to determine a "best" marker placement) because the magnitude of the residual depends on the distribution of the targets.

Even ignoring soft-tissue artifact, the residuals alone often do not supply enough information about how well the targets are tracking the segment:

  • For example, if you have three targets that are fairly close together, then small residuals can often equate to large changes in a segment's orientation. Likewise, if the targets are fairly spread out, then the same small residuals will not equate to as much change in orientation.
  • In addition to the distance the targets are spread out, how they are arranged often affects how sensitive the kinematics are to the residuals. For an extreme example, consider a user who arranges their tracking targets in nearly a straight line (a bad practice which I doubt you are employing, but I have seen it done). In this case a small residual can hide a large rotation about the axis along the line of targets, and the fact that there is a small residual is misleading.
  • Let me give another illustration of how target residuals are not simply correlated with accuracy: if a user tracks a segment with a cluster of three or more targets which all lie in the same plane, then this arrangement will tend to be more accurate tracking rotations about the axis perpendicular to the plane of the targets than about the other two axes. What I am trying to say is that how the targets are arranged will have a large effect on how well the residuals indicate the accuracy of the tracking. Thus, when you jump between different sets of tracking targets, it is not surprising to see different kinematics but similar residuals. (Of course, if your measurement system were perfect and the targets were fixed to perfectly rigid bodies, all combinations would produce zero residuals and the same kinematics.)

This long-winded explanation does not even consider the issue of soft-tissue artifacts. Many labs attach their targets to shells, which tend to keep the spatial relationship between the targets fairly consistent and thus produce low residuals; however, in theory the shells may actually be doing a terrible job of tracking the bone. In this case, since the residuals are low, the user has no indication of how bad the soft-tissue movement is. I have witnessed a lot of debate on the use of shells (which offer some advantages) without any real conclusions being reached.

So my bottom line is that I am not surprised that different target combinations produce different kinematics.

Blips caused by Marker Dropout

If you have four markers during calibration, Visual3D will store the fixed (expected) location of these markers in local space.

Now let's assume the calibration was perfect, the markers remain fixed to the rigid skeleton during the motion, and the motion data is also perfect (which we know will never happen). In this hypothetical case it will not matter if a target drops out; no blips will occur. However, as you have seen, blips do occur when a target drops out because:

  • The spatial relationship between the targets during calibration has some error.
  • The data during the motion has some error.
  • Soft-tissue movement causes the spatial relationship between the targets to change.
  • Filters do not work very well at the end points of signals. Gaps that occur during the trial exacerbate this problem because the frame(s) just before and after the gap usually aren't very reliable (they were likely just barely reliable enough), which makes the filtering even worse. It is possible that the blip would be smaller for the unprocessed signals, but unfortunately, these signals aren't particularly useful for kinematics and kinetics.

Thus, as you noted, when you change the combination of targets used to track the segment, you get the resulting blips.

Now if you have three targets and try to create a fourth virtual target from the other three, this new virtual target is totally dependent on the information in the other three targets and actually contains no additional information! Thus the new virtual target based on the three remaining targets will not eliminate the blip. (It will appear to decrease the weight of the targets to 25% each among the three real and one virtual target, but since the virtual target is not an independent measure, its 25% is composed entirely of information from the other targets.)

So if your question is whether you can eliminate the blip by creating a landmark (virtual target) from the three remaining tracking targets, the answer unfortunately is no.
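This can be checked numerically: appending the centroid of the three real targets as a "virtual" fourth target leaves the least-squares pose exactly unchanged, because the centroid contributes nothing to the centred cross-dispersion matrix. A sketch using the standard SVD least-squares rigid fit (illustrative, not Visual3D's code):

```python
import numpy as np

def fit_pose(A, P):
    """Standard SVD least-squares rigid fit of calibration points A (m, 3)
    to measured points P (m, 3); illustrative, not Visual3D's code."""
    Ac, Pc = A - A.mean(axis=0), P - P.mean(axis=0)
    U, _, Vt = np.linalg.svd(Ac.T @ Pc)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    T = Vt.T @ D @ U.T
    return T, P.mean(axis=0) - T @ A.mean(axis=0)

rng = np.random.default_rng(1)
A3 = np.array([[0.0, 0.0, 0.0], [0.10, 0.0, 0.0], [0.0, 0.15, 0.0]])  # real targets (SCS)
P3 = A3 + rng.normal(0.0, 0.002, A3.shape)      # noisy measured positions (LCS)

# Build a "virtual" fourth target as the centroid of the three real ones
A4 = np.vstack([A3, A3.mean(axis=0)])
P4 = np.vstack([P3, P3.mean(axis=0)])

T3, O3 = fit_pose(A3, P3)
T4, O4 = fit_pose(A4, P4)
assert np.allclose(T3, T4) and np.allclose(O3, O4)   # same pose: no new information
```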

The practical solution is probably to throw away tracking markers that show these blips during the range of frames that you are interested in.

Another option is to require that all tracking markers be present, or the pose won't be computed.
