The present study demonstrated that the magnitude of the after-effect due to wedge prisms depends on the form of the visual feedback used to represent hand and target position in fast, targeted, transverse reaches. Trained human subjects made reaches with and without prisms in three visuomotor representations (VRs): (1) the subject’s actual hand and targets (Direct), (2) a real-time video broadcast of hand and targets (Video), or (3) abstract, computer-generated targets and a cursor representing hand position (Cursor). A significant after-effect occurred in each VR. However, the magnitude of the after-effect differed significantly among VRs: it was greatest in Direct, smaller in Video, and smallest in Cursor. A significant after-effect (carryover) also occurred when a subject prism-adapted reaches in one VR and then removed the prisms and made initial reaches in another VR. Our data showed that when reaches were prism-adapted in Direct and the prisms were then removed, there was a large carryover to initial reaches in Video or Cursor (D→V and D→C). In contrast, when prisms were worn in Video and removed for reaches in Direct (V→D), the carryover was significantly smaller than in both D→V and D→C. Finally, when prisms were worn in Cursor and removed for reaches in Direct (C→D), there was very little detectable carryover. Our results suggest that adaptation is context-dependent and that the magnitude of carryover depends on the VR in which adaptation occurred. This finding may substantially affect the interpretation of adaptation studies conducted under abstract training and experimental conditions.
Scheidt, R.A., Conditt, M.A., Secco, E.L., & Mussa-Ivaldi, F.A. (2005). Interaction of visual and proprioceptive feedback during adaptation of human reaching movements. Journal of Neurophysiology, 93, 3200-3213.
People tend to make straight and smooth hand movements when reaching for an object. These trajectory features are resistant to perturbation, and both proprioceptive and visual feedback may guide the adaptive updating of motor commands enforcing this regularity. How is information from the two senses combined to generate a coherent internal representation of how the arm moves? Here we show that eliminating visual feedback of hand-path deviations from the straight-line reach (constraining visual feedback of motion within a virtual "visual channel") prevents compensation of initial direction errors induced by perturbations. Because adaptive reduction in direction errors occurred with proprioception alone, proprioceptive and visual information are not combined in this reaching task using a fixed, linear weighting scheme, as has been reported for static tasks not requiring arm motion. A computer model can explain these findings, assuming that proprioceptive estimates of initial limb posture are used to select motor commands for a desired reach and that visual feedback of hand-path errors brings proprioceptive estimates into registration with a visuocentric representation of limb position relative to its target. Simulations demonstrate that initial configuration estimation errors lead to movement direction errors, as observed experimentally. Registration improves movement accuracy when veridical visual feedback is provided but is not invoked when hand-path errors are eliminated. However, the visual channel did not exclude adjustment of terminal movement features maximizing hand-path smoothness. Thus visual and proprioceptive feedback may be combined in fundamentally different ways during trajectory control and final position regulation of reaching movements.
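The "fixed, linear weighting scheme" that the abstract argues against for reaching is the classic static cue-combination account, in which two noisy sensory estimates are fused with weights proportional to their reliabilities (inverse variances). A minimal sketch of that baseline scheme, not the authors' model, with illustrative function names and example numbers chosen here for clarity:

```python
# Sketch of fixed, linear (inverse-variance weighted) cue combination,
# the static-task baseline the reaching results argue against.
# Function name and example values are illustrative, not from the paper.

def combine_estimates(x_vis, var_vis, x_prop, var_prop):
    """Fuse visual and proprioceptive position estimates.

    Each cue is weighted by its inverse variance; the fused
    estimate has lower variance than either cue alone.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    w_prop = 1.0 - w_vis
    x_hat = w_vis * x_vis + w_prop * x_prop
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat

# Example: vision (estimate 10.0 cm, variance 1.0) is more reliable
# than proprioception (estimate 14.0 cm, variance 4.0), so the fused
# estimate lies closer to the visual one.
x_hat, var_hat = combine_estimates(10.0, 1.0, 14.0, 4.0)
# x_hat = 10.8, var_hat = 0.8
```

Because the weights are fixed by cue reliability alone, this scheme predicts the same fusion rule regardless of task; the paper's finding that removing visual hand-path error feedback blocks direction-error compensation is what rules it out for trajectory control.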