====== Six Degrees of Freedom ======

=== Pose Estimation with 6 Degrees of Freedom ===

A set of 3 or more markers attached to a rigid segment is used to track the movement of the segment and, at each frame of data, to specify the pose (position and orientation) of the segment. This method is referred to as a 6 degree of freedom (6 DOF) method because each segment (or each joint) is considered to have 6 variables that describe its pose: 3 variables describe the position of the origin and 3 variables describe the rotation about the principal axes of the segment.

The idea is that although the motion-tracking apparatus reports marker positions by their laboratory coordinates, each marker occupies an (ideally) fixed location in the coordinate system of the segment it is attached to.

The pose (position and orientation) of each segment is therefore determined at every frame from the tracking markers assigned to that segment.
The process of building a segment and computing its pose from the measured marker positions is described under 6 DOF Tracking below.

== Is the segment endpoint a Joint Center? ==

This is one of the occasions where the common use of the term "joint center" confuses people. For most segments of the body there really isn't a center about which the proximal and distal segments rotate (an exception is the hip joint, which is often assumed to be a ball and socket joint). Visual3D tracks segment pose (position and orientation) using 6 degree of freedom methods, which allow the endpoints of the proximal and distal segments to move relative to each other. Note that this movement may be real (e.g. the knee "joint" translates as well as rotates during flexion) or may be an artifact of soft-tissue movement.

We prefer to use the term "segment endpoint" rather than "joint center" to avoid this confusion.
== What then is a JOINT? ==

In 6 DOF methods there is no explicit linkage (or joint) connecting the segments. Visual3D explores the collection of segments and considers any two segments in proximity (the distal end of one segment and the proximal end of another segment within the radius of the segment ends) to be "joined".
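The proximity rule described above amounts to a simple distance test. The sketch below only illustrates that description and is not Visual3D's actual implementation; the function name, coordinates, and the 5 cm radius are made up.

<code python>
import numpy as np

def endpoints_joined(distal_end_a, proximal_end_b, end_radius):
    """Treat two segments as 'joined' when the distal end of one segment lies
    within the end radius of the proximal end of the other (illustrative rule
    only; the actual Visual3D rule and radius are not reproduced here)."""
    distance = np.linalg.norm(np.asarray(distal_end_a) - np.asarray(proximal_end_b))
    return distance <= end_radius

# Hypothetical thigh distal end and shank proximal end, in meters
print(endpoints_joined([0.10, 0.05, 0.45], [0.11, 0.05, 0.44], end_radius=0.05))  # True
</code>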
==== 6 DOF Tracking ====

The motion of rigid segments in space can be fully described by measuring three independent translational degrees-of-freedom (position) and three independent rotational degrees-of-freedom (orientation).

Photogrammetric procedures for obtaining measurements of the three-dimensional target (marker) positions provide the laboratory coordinates of every target at each frame.

Measurement of the position and orientation of a segment requires at least three non-collinear targets attached to that segment.
A least squares procedure is used by Visual3D to determine the position and orientation. To understand how this procedure works, consider a point located on a segment at position A in the SCS (segment coordinate system). The location of the point in the LCS (laboratory coordinate system), P, is given by:

{{:ConstructEquation1.gif}}

where T is the rotation matrix from the SCS to the LCS and O is the translation between coordinate systems.

If the position O and orientation T are defined for some reference position, then the fixed SCS coordinates of a target A can be determined from measurement of P at this position:

{{:ConstructEquation2.gif}}

If the segment undergoes motion, the new orientation matrix T and translation vector O may be computed at any instant, provided that for at least three non-collinear points A is predetermined and P is measured. The matrix T and origin vector O are found by minimizing the sum of squares error expression:

{{:ConstructEquation3.gif}}
under the orthonormal constraint
{{:ConstructEquation4.gif}}
where m is equal to the number of targets on the segment (m > 2).
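In case the equation images above do not display, the relationships they describe can be written out as follows. This is a reconstruction from the surrounding text, so the notation may differ slightly from the original figures.

<code latex>
% Eq. 1: a segment-fixed point A_i (SCS) expressed in the laboratory system (LCS)
P_i = T A_i + O
% Eq. 2: recovering the fixed SCS coordinates from a reference measurement
A_i = T^{\mathsf{T}} (P_i - O)
% Eq. 3: sum of squares error minimized over T and O for the m targets
E(T, O) = \sum_{i=1}^{m} \left\| P_i - (T A_i + O) \right\|^2
% Eq. 4: orthonormality constraint on the rotation matrix
T^{\mathsf{T}} T = I
</code>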
Since the above system of equations represents a constrained maximum-minimum problem, the method of Lagrangian multipliers

{{:ConstructEquation5.gif}}

is used to supply the boundary conditions. (This solution is adapted from the solution outlined by Spoor & Veldpaus in the Journal of Biomechanics, 1980.)

The fact that a least squares fit can be done on an over-determined system (m > 3) allows the user to employ up to 8 targets to track each segment. Over-determination also allows Visual3D to calculate segment positions and orientations if data from one or more targets is lost (however, coordinates must be determined for at least three points on the segment).
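As an illustration of the least squares fit described above, here is a short numerical sketch. It uses the well-known SVD (Kabsch/Söderkvist–Wedin) solution, which solves the same constrained minimization as the Lagrange-multiplier formulation referenced above; it is not Visual3D's actual implementation, and the target coordinates in the example are made up.

<code python>
import numpy as np

def fit_pose(A, P):
    """Least squares rigid-body fit: find rotation T and translation O that
    minimize sum_i ||P_i - (T @ A_i + O)||^2 subject to T being orthonormal.

    A : (m, 3) fixed target coordinates in the segment coordinate system (SCS)
    P : (m, 3) measured target coordinates in the laboratory system (LCS)
    """
    A = np.asarray(A, dtype=float)
    P = np.asarray(P, dtype=float)
    A_mean, P_mean = A.mean(axis=0), P.mean(axis=0)

    # Cross-dispersion matrix of the centered coordinates
    H = (A - A_mean).T @ (P - P_mean)
    U, _, Vt = np.linalg.svd(H)

    # Closest proper rotation (det = +1) in the least squares sense
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    T = Vt.T @ D @ U.T
    O = P_mean - T @ A_mean

    # RMS distance (meters) between measured and reconstructed target positions
    residual = np.sqrt(np.mean(np.sum((P - (A @ T.T + O)) ** 2, axis=1)))
    return T, O, residual

# Minimal usage example with 4 made-up targets and a known pose
rng = np.random.default_rng(0)
A = rng.uniform(-0.1, 0.1, size=(4, 3))                    # SCS coordinates (m)
true_T, _ = np.linalg.qr(rng.normal(size=(3, 3)))          # random orthogonal matrix
true_T *= np.sign(np.linalg.det(true_T))                   # make it a proper rotation
true_O = np.array([0.5, 1.0, 0.9])
P = A @ true_T.T + true_O + rng.normal(0, 0.001, A.shape)  # measurements with 1 mm noise

T, O, res = fit_pose(A, P)
print(np.allclose(T, true_T, atol=0.05), np.round(res, 4))  # expect True and a residual near the noise level
</code>

The residual returned here is the RMS distance between the measured targets and the targets reconstructed from the fitted pose, which is similar in spirit to the segment residual discussed below.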
=== In Practice ===

In principle, tracking markers can be placed anywhere on a rigid segment. In practice, the markers should be distributed over the maximum area possible for a segment, they should be placed in areas that exhibit the least soft tissue artifact, and they should be visible from as many cameras as possible throughout the movement.

Depending on the segment, these rules often conflict with one another, so a compromise is usually required.

For example, consider tracking the thigh segment for a walking trial relative to the above rules. The markers would cover the maximum volume by being placed on the greater trochanter, medial mid thigh, and lateral knee. The greater trochanter and lateral knee exhibit terrible soft tissue artifact, and the medial thigh marker would probably get knocked off the segment during walking, so none of these 3 locations should probably be used. The compromise is to place markers distributed along the anterior and lateral side of the mid thigh. These markers could be on a rigid cluster (of 3 or 4 markers) or distributed around the leg (e.g. 8 markers).
==== 6 DOF Segment Residual ====

Visual3D computes the residual of the least squares fit described above for every tracked segment.

The residuals are stored in meters as the segment residual signal for each segment.

The amount of soft tissue artifact can be explored by computing the segment residual throughout the movement trial.

Even ignoring soft-tissue artifact, the size of the residual is not by itself a reliable measure of how accurately the segment is tracked (see the sketch after this list):
  * For example, if you have three targets which are fairly close together, then small residuals can often equate to large changes in a segment's orientation. Likewise, if the targets are fairly spread out, then the same small residuals equate to much smaller changes in orientation.
  * In addition to the distance over which the targets are spread out, how the targets are arranged (their geometry) affects how a given residual translates into orientation error.
  * Let me give another illustration of how target residuals are not simply correlated with accuracy: if a user tracks a segment with a cluster of three or more targets which all lie in the same plane, this arrangement will tend to be more accurate tracking rotations about some axes than about others, even though the residuals look the same.
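A rough numerical illustration of the first two bullets (the 5 mm error and the cluster radii are made-up numbers): a marker-level error of a given size can be explained by a much larger change in segment orientation when the markers sit close to the cluster centroid than when they are widely spaced.

<code python>
import numpy as np

def orientation_change_deg(marker_error_m, cluster_radius_m):
    """Angle (degrees) of a segment rotation about the cluster centroid that
    displaces a marker at the given radius by 'marker_error_m'.
    Chord geometry: displacement = 2 * r * sin(theta / 2)."""
    return np.degrees(2.0 * np.arcsin(marker_error_m / (2.0 * cluster_radius_m)))

error = 0.005                 # 5 mm residual-sized error at a marker (assumed)
for radius in (0.03, 0.10):   # tight cluster (3 cm) vs spread-out markers (10 cm)
    print(f"radius {radius*100:.0f} cm -> {orientation_change_deg(error, radius):.1f} deg")
# A 5 mm error can hide roughly 9.6 deg of rotation for the tight cluster,
# but only roughly 2.9 deg when the markers are spread out.
</code>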
This long-winded explanation does not even consider the issue of soft-tissue artifacts. Many labs attach their targets to shells, which tend to keep the spatial relationship between the targets fairly consistent and thus produce small residuals even when the shell itself moves relative to the underlying bone.
So my bottom line is that I am not surprised that different target combinations produce different kinematics.

==== Blips caused by Marker Dropout ====

If you have four markers during calibration, Visual3D will store the fixed (expected) location of these markers in local space.

Now let's assume the calibration is as good as it can be. During a movement trial, several things still work against you:
  * The spatial relationship between the targets during the movement is never exactly the same as it was during the calibration.
  * The data during the motion has some measurement error.
  * Soft-tissue movement causes the spatial relationship between the targets to change.
  * Filters do not work very well at the end points of signals. Gaps that occur during the trial exacerbate this problem because the frame(s) just before and after the gap usually aren't very reliable (they were likely just reliable enough), which makes the filtering even worse. It is possible that the blip would be smaller for the unprocessed signals, but unfortunately the signals usually need to be filtered before they are used.

Thus, as you noted, when you change combinations of targets you change how these errors are distributed, so the computed pose shifts slightly and a blip appears at the frames where the set of visible targets changes (the sketch below makes this concrete).
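The following sketch shows the effect with made-up numbers: four targets track a segment, one target carries a 1 cm calibration/soft-tissue error, and the computed origin jumps when that target drops out. It uses SciPy's Rotation.align_vectors, which solves the same least squares problem as the fit_pose() sketch in the 6 DOF Tracking section; none of this is Visual3D's actual code.

<code python>
import numpy as np
from scipy.spatial.transform import Rotation

def fit_pose(A, P):
    """Least squares pose (rotation R, translation O) with P_i ~ R @ A_i + O."""
    A, P = np.asarray(A, float), np.asarray(P, float)
    A_mean, P_mean = A.mean(axis=0), P.mean(axis=0)
    R, _ = Rotation.align_vectors(P - P_mean, A - A_mean)
    return R, P_mean - R.apply(A_mean)

# Four calibrated target positions in the segment coordinate system (made up), meters
A = np.array([[0.05, 0.00, 0.10], [-0.05, 0.02, 0.12],
              [0.00, -0.04, 0.30], [0.02, 0.05, 0.28]])

# "Measured" lab positions: the segment sits at the origin, but target 4 carries
# a 1 cm soft-tissue / calibration error (the kind of error discussed above)
P = A.copy()
P[3] += [0.01, 0.0, 0.0]

_, O_all = fit_pose(A, P)            # all four targets visible
_, O_drop = fit_pose(A[:3], P[:3])   # target 4 drops out for a few frames

# The computed segment origin jumps by a few millimetres when the target set
# changes -- this is the "blip" seen at the frames where dropout starts and ends.
print(np.round((O_all - O_drop) * 1000, 2), "mm")
</code>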
Now if you have three targets and try to create a fourth virtual target from the other three, this new virtual target is totally dependent on the information in the other three targets and actually contains no additional information! Thus the new virtual target based on the three remaining targets will not eliminate the blip. (It may appear to change the weighting of the three real targets in the least squares fit, but it adds nothing new.)

So if your question is whether you can eliminate the blip by creating a landmark (virtual target) from the three remaining targets, the answer is no.
The practical solution is to address the target data itself, for example by filling small gaps in the target trajectories, so that the set of targets used to track the segment does not change during the trial.

Another option is to require that all tracking markers must exist or the pose won't be computed (see the related page in the Visual3D documentation).