Abstract: Understanding how signal properties are optimized for the reliable transmission of information requires an accurate description of the signal in time and space. For movement-based signals in which movement is restricted to a single plane, measurements from a single viewpoint can be used to consider a range of viewing positions based on simple geometric calculations. However, considering signal properties from a range of viewing positions is more problematic for movements that extend into three dimensions (3D). We present here a new framework that overcomes this limitation and enables us to quantify the extent to which movement-based signals are view-specific. To illustrate its application, a Jacky lizard tail-flick signal was filmed with synchronized cameras and the position of the tail tip was digitized in both recordings. Camera alignment enabled the construction of a 3D display action pattern profile. We analyzed the profile directly and used it to create a detailed 3D animation. In the virtual environment, we were able to film the same signal from multiple viewing positions and to use a computational motion analysis algorithm (gradient detector model) to measure local image velocity, in order to predict view-dependent differences in signal properties. This approach will enable consideration of a range of questions concerning movement-based signal design and evolution that were previously out of reach [Current Zoology 56 (3): 327-336, 2010].
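The gradient detector model mentioned above estimates local image velocity from spatial and temporal intensity derivatives. A minimal one-dimensional sketch of this idea, based on the standard brightness-constancy relation (Ix·v + It = 0) rather than the authors' exact implementation, might look like:

```python
# Illustrative gradient-based motion detector (Lucas-Kanade-style);
# an assumed simplification, not the paper's actual algorithm.
import numpy as np

def local_velocity(frame0, frame1, eps=1e-6):
    """Estimate per-pixel velocity (pixels/frame) along one spatial axis
    from two consecutive 1D intensity profiles, via brightness constancy:
    Ix * v + It = 0  =>  v = -It / Ix (regularized)."""
    Ix = np.gradient((frame0 + frame1) / 2.0)  # spatial derivative
    It = frame1 - frame0                       # temporal derivative
    return -It * Ix / (Ix ** 2 + eps)          # regularized least squares

# Usage: a Gaussian intensity blob shifted by 0.5 px between frames
x = np.arange(64, dtype=float)
f0 = np.exp(-0.5 * ((x - 32.0) / 4.0) ** 2)
f1 = np.exp(-0.5 * ((x - 32.5) / 4.0) ** 2)
v = local_velocity(f0, f1)
# where the spatial gradient is strong, v is close to 0.5 px/frame
```

The regularizing `eps` suppresses unreliable estimates where the spatial gradient is near zero (e.g. at the blob's peak), a common practical choice for gradient-based flow estimators.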