ERGONOMICS IN DIGITAL ENVIRONMENTS: IMMERSIVE VIRTUAL REALITY

IMMERSIVE VIRTUAL REALITY

Many of the predictive technologies available for human modeling currently do not adequately answer the questions designers pose to their human modeling solutions. This limitation is especially pronounced in the areas of natural complex motion simulation and cognitive perception. As mentioned in previous sections, a designer might ask how a person will move to perform an operation, or ask if there is sufficient clearance for a person to grasp a part within the confines of surrounding parts. Situations that require nontypical, complex motions cannot currently be analyzed adequately with the movement-prediction algorithms available. Only through the use of immersive virtual reality technology, which allows the mapping of a designer's movements to an avatar in the virtual scene, can these complex movement situations be analyzed adequately and efficiently. Similarly, cognitive models providing subjective perceptions of an environment, such as feelings of spaciousness, control, and safety, are not currently available, yet a designer looking through the eyes of a digital human can assess these subjective responses to the virtual environment under design. For these reasons, immersive virtual reality (VR) is increasingly being used in both design and manufacturing applications. Early in a design cycle, when only CAD models are available, VR allows the design to be experienced by designers, managers, or even potential users. Such applications allow problems to be identified earlier in the design cycle and can reduce the need for physical prototypes. Immersive VR usually combines motion tracking and stereo display to give the user the impression of being immersed in a 3D computer environment. Auditory and, increasingly, haptic technologies are also available to add realism to the user's perspective.

Virtual reality does not necessarily require a digital human model. Simply tracking a subject's head motion is sufficient to allow the stereo view to reflect the subject's view accurately and thus provide the user with a sense of being present in the computerized world. However, the addition of a full human model, tracking the subject's body and limb movements in real time, allows for additional realism because users can see a representation of themselves in the scene. Additional analysis is also possible with full-body motion tracking. For example, collisions between limbs and the objects in the scene can be detected so that reach and fit can be better assessed. This capability is especially useful in design-for-maintainability or design-for-assembly applications. Full-body tracking is also useful when a designer wants to experience interacting with the design from the perspective of a very short or very tall person. By scaling the environment in proportion to the increase or decrease in anthropometric size he or she wishes to experience, the designer can evaluate issues such as clearance, visibility, and reachability for extreme-sized individuals without actually having to recruit a subject pool of such people, as sketched below. Real-time application of the tracked motions to the virtual human also gives observers a realistic third-person view of the human motion in relation to the design geometry.
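The anthropometric scaling just described reduces to a simple geometric operation. The Python sketch below shows one way it might be implemented; the function names, the plain vertex-list scene representation, and the choice of scaling about the floor origin are illustrative assumptions, not any particular VR system's API.

# Minimal sketch (hypothetical geometry format): scale the virtual environment so a
# designer of one stature experiences the design as a person of another stature would.
# Scaling every scene vertex about the floor origin by user_stature / target_stature
# makes the geometry appear proportionally larger or smaller to the tracked viewpoint.

def anthropometric_scale_factor(user_stature_m, target_stature_m):
    """Factor by which to scale the scene so the user experiences it at target stature."""
    return user_stature_m / target_stature_m

def scale_scene(vertices, factor, origin=(0.0, 0.0, 0.0)):
    """Scale a list of (x, y, z) vertices uniformly about a chosen origin point."""
    ox, oy, oz = origin
    return [((x - ox) * factor + ox,
             (y - oy) * factor + oy,
             (z - oz) * factor + oz) for (x, y, z) in vertices]

if __name__ == "__main__":
    # A 1.75 m designer evaluating reach and clearance as a 1.50 m user would:
    factor = anthropometric_scale_factor(1.75, 1.50)   # ~1.17, scene appears larger
    workstation = [(0.0, 0.0, 0.0), (0.6, 0.0, 0.9), (0.6, 0.4, 1.4)]
    print(scale_scene(workstation, factor))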

In addition to the qualitative assessments provided by the engineer's subjective perceptions of interacting with the design, quantitative assessments are possible. Analysis tools, such as those described in Section 3, can often be run in real time while immersed. For example, an engineer can perform a virtual operation using virtual tools while, in real time, the performance tools evaluate the postures, forces, and motions to derive performance metrics such as population strength capability, low-back strain, fatigue, or postural effects. The designer gets immediate feedback on the specific actions that are likely to put the worker at an elevated risk of injury, without exposing a test subject to unsafe loading conditions. The design surrounding these actions can then be assessed and modified to reduce the injury risk, all while in the digital design space. Alternatively, motions can be captured and then played back for human performance analysis or presentation purposes.
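As a concrete illustration of such real-time feedback, the sketch below streams tracked frames through a placeholder risk metric and flags frames that exceed a limit. The metric (hand load multiplied by horizontal distance from the lower back) and the 60 N·m threshold are stand-ins chosen only to show the loop structure; a real system would call a validated strength or low-back-load model at that point.

# Illustrative sketch only: a real-time analysis loop that evaluates each tracked
# frame and flags elevated-risk actions. The metric below is a crude moment proxy,
# not a validated biomechanical model.

from dataclasses import dataclass

@dataclass
class Frame:
    hand_xyz: tuple          # tracked hand position, metres
    l5s1_xyz: tuple          # approximate lower-back reference point, metres
    hand_load_n: float       # magnitude of the load carried in the hand, newtons

def lowback_moment_proxy(frame):
    """Crude proxy: hand load times horizontal distance from the lower back."""
    dx = frame.hand_xyz[0] - frame.l5s1_xyz[0]
    dy = frame.hand_xyz[1] - frame.l5s1_xyz[1]
    horizontal_dist = (dx * dx + dy * dy) ** 0.5
    return frame.hand_load_n * horizontal_dist

def monitor(frames, moment_limit_nm=60.0):
    """Yield a per-frame flag so the designer gets immediate feedback while immersed."""
    for i, frame in enumerate(frames):
        moment = lowback_moment_proxy(frame)
        yield i, moment, moment > moment_limit_nm

# Example: one frame with a 100 N load held 0.5 m in front of the lower back.
frames = [Frame(hand_xyz=(0.5, 0.0, 1.0), l5s1_xyz=(0.0, 0.0, 1.0), hand_load_n=100.0)]
for index, moment, flagged in monitor(frames):
    print(index, round(moment, 1), "elevated risk" if flagged else "ok")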

Such quantitative analyses may be performed in the context of a full immersive VR application or may simply make use of the same human motion-tracking and capture technology to assist in the generation of accurate human postures. For example, a data glove with posture-sensing electronics incorporated can be a valuable tool for obtaining accurate hand postures while avoiding the tedium of trying to manipulate each individual finger joint irrespective of the actual application.
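A minimal sketch of how glove readings might drive a hand model appears below. The sensor names and the per-sensor linear calibration are hypothetical; commercial gloves provide their own calibration routines and SDKs, but the idea of mapping raw sensor values to joint angles, rather than posing each joint manually, is the same.

# Hedged sketch: applying glove sensor readings to a hand model so the analyst
# does not have to pose each finger joint by hand. Calibration values are
# illustrative assumptions, expressed as joint_angle_deg = gain * raw + offset.

CALIBRATION = {
    "index_mcp": (0.9, -5.0),
    "index_pip": (1.1,  2.0),
    "thumb_mcp": (0.8,  0.0),
}

def glove_to_joint_angles(raw_readings):
    """Convert raw glove sensor values to joint angles (degrees) for the hand model."""
    angles = {}
    for sensor, raw in raw_readings.items():
        gain, offset = CALIBRATION.get(sensor, (1.0, 0.0))
        angles[sensor] = gain * raw + offset
    return angles

# Example frame of raw sensor values (hypothetical units from the glove):
print(glove_to_joint_angles({"index_mcp": 40.0, "index_pip": 25.0, "thumb_mcp": 10.0}))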

Motion-Tracking Technologies

A number of technologies are available for tracking human motions, including piezoelectric strain gages and magnetic and optical tracking systems. Such human motion-tracking technologies have long been used for scientific and clinical applications (e.g., Chao 1978; Davis et al. 1991). In recent years, real-time forms of these technologies have become feasible and have made VR possible. In addition to VR applications, such real-time technologies have found application in the entertainment industry, enabling quick generation of realistic human movements for computer games, 3D animation, and movie special effects.

Data gloves, such as Virtual Technology, Inc.'s Cyberglove (www.virtex.com), and other such devices measure relative motion between two body segments using either fiberoptic or strain gage-based technologies. The location of the segment in space is not reported. This limitation has ramifications for how these devices are used in human models. For example, a data glove can measure the amount of finger flexion and splay, yet it does not provide information about where the hand is located relative to the body or in the scene. For this reason, gloves cannot be used in isolation in applications such as maintenance part extraction, where the orientation and position of the hand are as important as the hand posture. This global positioning information can, however, be captured using whole-body tracking technologies, as illustrated in the sketch below.
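The sketch below illustrates the combination just described, under the assumption that the glove supplies the hand's internal posture (reduced here to points expressed in a hand-local frame) while a separate 6-DOF tracker on the wrist supplies global position and orientation; composing the two places the posed hand in the scene for reach and part-extraction checks.

# Sketch under assumptions: compose glove-derived local hand geometry with the
# 6-DOF wrist pose reported by a whole-body tracker to obtain scene coordinates.

import numpy as np

def pose_hand_in_scene(local_points, wrist_rotation, wrist_position):
    """Transform hand-local points (N x 3) into scene coordinates using the
    tracked 6-DOF wrist pose (3 x 3 rotation matrix, 3-vector position)."""
    local = np.asarray(local_points)            # points expressed in the hand frame
    R = np.asarray(wrist_rotation)              # orientation from the 6-DOF tracker
    t = np.asarray(wrist_position)              # position from the 6-DOF tracker
    return local @ R.T + t

# Example: identity orientation, hand located 0.5 m forward and 1.2 m above the floor.
fingertips_local = [[0.08, 0.02, 0.0], [0.09, 0.0, 0.0]]
print(pose_hand_in_scene(fingertips_local, np.eye(3), [0.5, 0.0, 1.2]))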

Both magnetic and optical motion-tracking devices are used to capture the global spatial position of the body in space. Magnetic systems are composed of a transmitter that emits a magnetic field and sensors that detect their position and orientation (six DOF) within this field. The magnetic sensors are attached to body segments (e.g., the hand, forearm, upper arm, torso) to determine the relative positions of adjacent body segments. These data are then used to animate a digital human figure. Until recently, magnetic systems were the only systems that could track multiple segments in real time, and thus they are very popular for immersive applications. However, metallic objects in the environment can distort the magnetic fields emitted by these systems; field distortion caused by metal in the surroundings, including structural metal in the floor, walls, and ceiling, can cause measurement inaccuracies. In contrast, video-based (optical) methods use retroreflective or LED markers placed on the subject and cameras in the environment to triangulate the position of the markers. Multiple markers can be arranged on a segment to derive both the position and the orientation of that segment, as sketched below. Although multiple markers are required to obtain the same position and orientation information as one magnetic sensor, these markers are typically passive (simply balls covered with reflective material) and so do not encumber the motion of the subject as dramatically as the wires of magnetic systems. The downside of optical motion tracking is that every marker must be seen by at least two (and preferably more) cameras. Placing cameras to meet this requirement can be a challenge, especially in enclosed spaces such as a vehicle cab.
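The following sketch shows one conventional way to recover a segment's position and orientation from three non-collinear optical markers, which is the optical counterpart of the 6-DOF report from a single magnetic sensor. The marker layout and frame conventions are illustrative assumptions.

# Hedged sketch: derive a body segment's pose from three non-collinear markers.
# Three markers define a local coordinate frame; their centroid serves as the
# segment position. One magnetic sensor would report equivalent 6-DOF data directly.

import numpy as np

def segment_pose_from_markers(m1, m2, m3):
    """Return (origin, 3 x 3 rotation matrix) for a segment given three
    non-collinear marker positions in scene coordinates."""
    m1, m2, m3 = map(np.asarray, (m1, m2, m3))
    x = m2 - m1
    x = x / np.linalg.norm(x)                   # axis along the first marker pair
    z = np.cross(x, m3 - m1)
    z = z / np.linalg.norm(z)                   # normal to the marker plane
    y = np.cross(z, x)                          # completes a right-handed frame
    rotation = np.column_stack((x, y, z))
    origin = (m1 + m2 + m3) / 3.0               # segment position: marker centroid
    return origin, rotation

# Example: three forearm markers (metres) measured by the camera system.
origin, R = segment_pose_from_markers([0.0, 0.0, 1.0], [0.25, 0.0, 1.0], [0.1, 0.05, 1.0])
print(origin, R)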

Examples of commercially available magnetic systems include the Ascension MotionStar (www.ascension-tech.com) and Polhemus FastTrak (www.polhemus.com). Examples of optical systems include those sold by Vicon Motion Systems (www.vicon.com), Qualysis AB (www.qualysis.com), and Motion Analysis Corp. (www.motionanalysis.com).
