PERCEPTION OF PURSUIT AND EVASION BY PEDESTRIANS

BY JONATHAN ANDREW COHEN
B.S., TUFTS UNIVERSITY, 2004

A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in the Program of Cognitive and Linguistic Sciences: Cognitive Science at Brown University

Providence, RI
May 2010

© Copyright 2010 Jonathan Andrew Cohen

This dissertation by Jonathan Andrew Cohen is accepted in its present form by the Department of Cognitive and Linguistic Sciences as satisfying the dissertation requirement for the degree of Doctor of Philosophy

Date________________ _______________________ William H. Warren, Advisor

Recommended to the Graduate Council

Date________________ _______________________ David M. Sobel, Reader
Date________________ _______________________ Odest C. Jenkins, Reader

Approved by the Graduate Council

Date________________ _______________________ Sheila Bonde, Dean of the Graduate School

JONATHAN ANDREW COHEN
CURRICULUM VITAE

HOME ADDRESS: 365 Smithfield Avenue #2, Pawtucket, Rhode Island 02860; Tel: (401)-726-1826; Mobile: (401)-480-4272; Jonathan_Cohen@brown.edu

UNIVERSITY ADDRESS: Department of Cognitive and Linguistic Sciences, Brown University, Box 1978, 190 Thayer Street, Providence, Rhode Island 02912

EDUCATION

2004-2010 Doctor of Philosophy (Ph.D.), Cognitive Science, Department of Cognitive and Linguistic Sciences, Brown University, Providence, Rhode Island 02912. Dissertation titled “Perception of pursuit and evasion by pedestrians.” Primary Advisor: William H. Warren, Ph.D.

2000-2004 Bachelor of Science (B.S.), Psychology, Department of Psychology, Tufts University, Medford, Massachusetts 02155. Minor in History. Summa cum Laude, Highest Thesis Honors, Phi Beta Kappa (2003). Honors thesis titled “Learning manual tasks by imitation from video demonstrations.” Primary Advisor: Emily W. Bushnell, Ph.D.
RESEARCH INTERESTS

Applying cognitive and ecological principles to the design of traditional and digital serious games; understanding crowd interactions and intentions at the individual agent level; and investigating the development of theory of mind and problem-solving as they relate to group dynamics.

ACADEMIC POSITIONS

2004-present Research Assistant, Virtual Environment Navigation Laboratory (VENLab), Department of Cognitive and Linguistic Sciences, Brown University. Supervisor: William H. Warren, Ph.D.
2002-2004 Research Assistant, Perception Laboratories, Department of Psychology, Tufts University. Supervisors: Emily W. Bushnell, Ph.D., Donne L. Mumme, Ph.D.

HONORS & AWARDS

2004 Graduate Student Fellowship, Brown University
2004 Benjamin Brown Scholarship, Tufts University
2004 Priscilla N. Dunne Prize in Psychology, Tufts University, Dept. of Psychology
2003 Class of 1921 Leonard Carmichael Prize, Tufts University, Dept. of Psychology
2003 Phi Beta Kappa National Honor Society, Tufts University Chapter
2001 Phi Beta Kappa Book Award, Tufts University Chapter

MANUSCRIPTS UNDER REVIEW

Cohen, J.A., Bruggeman, H., & Warren, W.H. Behavioral dynamics of moving obstacle avoidance. Journal of Experimental Psychology: Human Perception and Performance.

MANUSCRIPTS IN PREPARATION

Bruggeman, H., Cohen, J.A., Fajen, B.R., & Warren, W.H. Testing a general steering dynamics model.

PRESENTATIONS AT MEETINGS

Cohen, J.A. & Warren, W.H. (2009). Perceiving the intent to pursue or evade in a moving avatar. Poster presentation at the 9th Annual Meeting of the Vision Sciences Society, Naples, Florida.

Rhea, C.K., Cohen, J.A., & Warren, W.H. (2009). Follow the leader: Extending the locomotor dynamics model to crowd behavior. Oral presentation at the 15th International Conference on Perception and Action, Minneapolis, Minnesota.

Cohen, J.A., Cinelli, M.E., & Warren, W.H. (2008). A dynamical model of pursuit and evasion in humans.
Oral presentation at the 8th Annual Meeting of the Vision Sciences Society, Naples, Florida.

Cohen, J.A., Fink, P.W., & Warren, W.H. (2007). Choosing between competing goals during walking in a virtual environment. Poster presentation at the 7th Annual Meeting of the Vision Sciences Society, Sarasota, Florida.

Cohen, J.A., Bruggeman, H., & Warren, W.H. (2006). Combining moving targets and moving obstacles in a locomotion model. Poster presentation at the 6th Annual Meeting of the Vision Sciences Society, Sarasota, Florida.

Cohen, J.A., Bruggeman, H., & Warren, W.H. (2005). Switching behavior in moving obstacle avoidance. Poster presentation at the 5th Annual Meeting of the Vision Sciences Society, Sarasota, Florida.

Bushnell, E.W., Ballesteros, S., Reales, J.M., Cohen, J., & Chiang, N.C. (2003). Haptic priming for attended and unattended objects interacts with “viewing” conditions. Paper presented at the 44th Annual Meeting of the Psychonomic Society, Vancouver, Canada.

INVITED PRESENTATIONS

Cohen, J.A. (2009, April 7). Developing dynamic avatars: Using a model of human steering to investigate the perception of intentions in virtual avatars. Department of Computer Science, Robotics, Learning, and Autonomy Group (Principal Investigator: Chad Jenkins).

Cohen, J.A. (2008, November 17). From obstacles to avatars: Dynamics of pursuit and evasion in humans. Department of Cognitive and Linguistic Sciences Colloquium Series, Brown University.

TEACHING EXPERIENCE, BROWN UNIVERSITY

2007, spring Teaching Assistant, Children’s Thinking: The Nature of Cognitive Development (CG 63 / COGS 0630), Instructor: David M. Sobel, Ph.D.
2006, fall Teaching Assistant, Approaches to the Mind: Introduction to Cognitive Science (CG 1 / COGS 0100), Instructor: Sheila Blumstein, Ph.D.
2006, spring Teaching Assistant, Human Cognition (CG 42 / COGS 0420), Instructor: David M. Sobel, Ph.D.
2005, fall Teaching Assistant, Children’s Thinking: The Nature of Cognitive Development (CG 63 / COGS 0630), Instructor: James Morgan, Ph.D.

OTHER WORK EXPERIENCE

2002 Summer Research Intern, Austen Riggs Center and Erikson Institute for Education and Research, Stockbridge, MA.

STUDENT MENTORING

Graduate Students
2007-2008 Jonathan Ericson, First-Year Project, navigating through non-Euclidean space.

Undergraduate Students
2010 Chaz Firestone, Honors Thesis, countermeasures to the constant bearing strategy.
2009 Henry Harrison, Independent Study, using attention cues to avoid multiple moving obstacles.
2007-2008 Lea Ventura, Honors Thesis, four-year-olds’ causal inference.

PROFESSIONAL SERVICE

2009-present Ad hoc reviewer for Ecological Psychology.
2007-2008 Colloquium Series Chair, Department of Cognitive and Linguistic Sciences, Brown University.

PROFESSIONAL MEMBERSHIPS

2004-present Vision Sciences Society

BIOGRAPHICAL INFORMATION

Date of Birth: July 2, 1982
Place of Birth: Silver Spring, MD
Marital Status: Married
Citizenship: US Citizen

ACKNOWLEDGMENTS

The journey that is this dissertation has been a long, challenging, and rewarding part of my life. Thankfully, I did not travel alone. For six years, my advisor, Bill Warren, has supported, mentored, and most importantly, believed in me. Bill provided me with the means to conduct my research, through grant funding (NIH EY-10923) and through all of the wonderful toys made available to me in the VENLab. I have had the additional benefit of counsel from my dissertation committee members, David Sobel and Chad Jenkins, who have provided me with insights, comments, and encouragement throughout this process. During my time in the VENLab, I have had the pleasure of working with a fantastic group of people, all of whom have contributed in some way to the completion of this work.
In particular, I would like to thank Justin Owens, my long-time friend and colleague, Hugo Bruggeman, with whom I have collaborated closely, and Michael Cinelli, who co-conducted Experiment 1 in this dissertation. In addition, my work would have never been realized if not for the efforts of our VENLab programmers, Joost DeNijs and Kurt Spindler, and research assistants, especially Nick Varone and Tara Noble, who co-conducted Experiment 3. And to all the others not mentioned here: Thank you, I will miss you all. My family has played no small role in my graduate career, and I thank them – Mom, Dad, Seth, Evan, Karen, Randy, Rachel, and Nicki, my constant writing companion – for all of their patience, guidance, support, and love during these long years. And finally, my greatest thanks to my loving, supportive, and inspiring wife, Kate Hesse, to whom this work is dedicated.

For Kate,
Well, I’m back.

TABLE OF CONTENTS

Chapter One Introduction…………………………………………………….……………… 1
Chapter Two Concerning Agents…………………………………………………………….. 8
Chapter Three Of Models and Steering Dynamics……………………………………………. 20
Chapter Four General Method………………………………………………………………… 45
Chapter Five Experiment 1: Expanding the Steering Dynamics Model to Pursuit-Evasion Interactions……………………………………………………………………… 52
Chapter Six Information for Perceiving Pursuit and Evasion…………………………….. 88
Chapter Seven Experiment 2: Perceiving Pursuit or Evasion in a Virtual Avatar…………… 97
Chapter Eight Experiment 3: Identifying a Pursuer in a Crowd……………………………... 141
Chapter Nine General Discussion………………………………….…………………………… 155
References………………………………………………………………………...
166

LIST OF FIGURES

Chapter Three
Figure 3.1 Plan view of space and variables for an environment containing a goal and stationary obstacle…………………………………………………………… 23
Figure 3.2 Obstacle repulsion function; the agent’s heading is repelled from the unstable fixed point at (0, 0)……………………………………………………… 29
Figure 3.3 Plan view of an environment containing a moving target or obstacle and a generalized illustration of the constant bearing strategy: a = agent, o = object (target, obstacle)………………………………………………………. 31
Figure 3.4 Plan view of space and bearing angle used to intercept a moving target…………………………………………………………………………….... 33
Figure 3.5 Plan view of space and bearing angle used for avoiding a moving obstacle…………………………………………………………………………… 35

Chapter Four
Figure 4.1 (A) The SR-80A head-mounted display with IS-900 MicroTrax unit attached and (B) the backpack unit worn by participants........................................ 47
Figure 4.2 The Intersense IS-900 ultrasonic/inertial Sonistrip tracking grid……. 48
Figure 4.3 Participant measuring his inter-ocular distance (IOD)………………... 50
Figure 4.4 Experimenter placing head-mounted display on an eager participant.. 51

Chapter Five
Figure 5.1 Participant in Experiment 1A wearing a bicycle helmet, MicroTrax receiver unit, and cloth blindfold………………………………………………….. 57
Figure 5.2 Diagram of five conditions in Experiment 1A: (A) Target Interception, (B) Obstacle Avoidance, (C) Mutual Evasion, (D) Mutual Pursuit, and (E) Pursuit-Evasion. Instructions provided to each participant are shown in quotations……... 58
Figure 5.3 (A) Participant starting locations in Experiment 1A. (B) Photograph of the VENLab with three of the starting locations shown………………………. 59
Figure 5.4 Mean paths generated by participants in the Obstacle Avoidance condition: (A) Preferred route, 70% of trials, and (B) Non-preferred route, 30% of trials……………………………………………………………………….
63
Figure 5.5 Mean paths generated by participants in the Mutual Evasion condition: (A) Preferred route, 68% of trials, and (B) Non-preferred route, 32% of trials….. 63
Figure 5.6 Moving target model fits to mean data in the Target Interception condition: (A) Mean human and model paths and (B) Mean human and model heading time-series……………………………………………………………… 66
Figure 5.7 Moving obstacle model fits to mean data in the Obstacle Avoidance condition (preferred route): (A) Mean human and model paths and (B) Mean human and model heading time-series……………………………………………. 67
Figure 5.8 Moving obstacle model predictions of mean data in the Mutual Evasion condition (preferred route): (A) Mean human and model paths for both agents and (B-C) Mean human and model heading time-series for each agent…… 69
Figure 5.9 Moving target model predictions of mean data in the Mutual Pursuit condition: (A) Mean human and model paths for both agents and (B-C) Mean human and model heading time-series for each agent……………………………. 71
Figure 5.10 Moving target and moving obstacle model predictions of mean data in the Pursuit-Evasion condition: (A) Mean human and model paths for both agents and (B-C) Mean human and model heading time-series for each agent………….. 74
Figure 5.11 Diagram of Confederate Action conditions in Experiment 1B: (A) Same Direction condition, (B) Opposite Direction condition, (C) Oscillate condition………………………………………………………........ 79
Figure 5.12 Passing distances between participant and confederate…………….. 83
Figure 5.13 Mean paths and moving obstacle predictions for Experiment 1B. The black participant paths move in a positive z direction from a starting point of 0.5 m (2 m Distance conditions) and -0.5 m (4 m Distance conditions). The red confederate paths move in the negative z direction from a starting point of 2.5 m (2 m Distance conditions) and 3.5 m (4 m Distance conditions)………. 85
Figure 5.14 Mean and simulated heading time-series for Experiment 1B………..
86

Chapter Seven
Figure 7.1 Diagram of design and conditions in Experiment 2A………………… 102
Figure 7.2 Avatars used in Experiment 2A: (A) Gliding / Rotating Pole, (B) “Nose Pole”, (C) Male Walking Human / Human with Head Fixation, (D) Female Walking Human / Human with Head Fixation……………………… 103
Figure 7.3 Percent Perceived Avatar Evasion data for the Pursuit Condition and 0-degree approach angle Evasion Condition…………..…………………………. 111
Figure 7.4 Percent Perceived Avatar Evasion data for the three approach angles (0-, 1-, 2-degrees) in the Evasion Condition………………………...…………… 113
Figure 7.5 Mean d’ scores for each avatar in Experiment 2A. Sensitivity to the Gliding Pole was significantly poorer than the other four avatars……………….. 114
Figure 7.6 Mean bias (c) scores for each avatar in Experiment 2A. Negative values indicate a bias toward perceived evasion, while positive values indicate a bias toward perceived pursuit……..……………………………………………. 116
Figure 7.7 Diagram of design and conditions in Experiment 2B…….…………. 125
Figure 7.8 Mean d’ scores in Experiment 2B..………………………………….. 130
Figure 7.9 Mean bias (c) scores for each Initial Avatar Distance in Experiment 2B. Negative values indicate a bias toward perceived evasion, while positive values indicate a bias toward perceived pursuit ………………………………… 131
Figure 7.10 Mean reaction times to correctly answered trials (Pursuit Condition)……………………………………………………………………...… 133
Figure 7.11 Mean reaction times to correctly answered trials (Evasion Condition)……………………………………………………………………...… 134
Figure 7.12 Mean distance to avatar at response (Pursuit Condition)…………… 136
Figure 7.13 Mean distance to avatar at response (Evasion Condition)………… 137
Figure 7.14 Mean d’ scores for the Looking Group…………………………….
138

Chapter Eight
Figure 8.1 Example of a display in Experiment 3 containing a Target Avatar and three Distracter Avatars……………………………………………………… 147
Figure 8.2 Diagram of design and conditions in Experiment 3 depicting the avatar “spawn boxes”………………………………………………………… 148
Figure 8.3 Mean percentages of correctly answered trials in Experiment 3……... 151
Figure 8.4 Mean reaction times of correctly answered trials in Experiment 3..…. 152

CHAPTER ONE
INTRODUCTION

The ability to accurately perceive the behaviors and actions of other people while we move around the world is crucial to adaptive locomotion. In order to avoid collisions with other pedestrians, for instance, we must perceive how others move to avoid obstacles themselves. We then can adjust our own motion in response. Likewise, if an animal hopes to evade a predator on a pursuit course, it must be able to perceive the specific information in the predator’s movement and make an appropriate response. Classic and contemporary research on the perception of animacy and causality has shown that when people view animations of moving objects in a top-down, third-person perspective, they attribute intentional behaviors to the objects’ motions (Heider & Simmel, 1944; Michotte, 1963; Scholl & Tremoulet, 2000). When viewing animations based on Heider and Simmel’s (1944) moving geometric figures, observers often state that one object appears to pursue a second, while the second attempts to evade the first. The present work proposes that the attribution of intentional states to moving objects is not necessary if the patterns of motion of the objects specify particular behaviors. The information in the objects’ movement can be perceived directly by observers. Pursuit and evasion are two examples of intentional behavior that have been studied using animations, but they are also behaviors that pedestrians engage in regularly.
Therefore, this work focuses on how people use the information present in a moving pedestrian to determine whether the pedestrian is pursuing or evading. In order to provide an information-based account of how people perceive pursuit and evasion in pedestrians, one must first model how people perform pursuit and evasion. A model of human pursuit and evasion provides a description of the informational variables present in human movement that specifies each behavior. Moreover, by using a model of pursuit and evasion one can generate stimuli of moving objects in a principled fashion, as opposed to using ad-hoc animations. In the present work I extend the steering dynamics model developed by Warren and colleagues (Fajen & Warren, 2003; 2007; Cohen, Bruggeman, & Warren, under review) to human pursuit and evasion scenarios. I then use the model to generate patterns of motion in objects to test which informational variables are used by human observers to perceive pursuit and evasion. Once these variables are identified by using a single model-driven stimulus, I then present multiple moving objects to test whether the information for pursuit and evasion is perceived sequentially or whether a lone pursuing object ‘pops out’ of a crowd. The stimuli in the present work are presented to observers in a first-person perspective using immersive virtual environments. While Heider and Simmel (1944) and Scholl and Tremoulet (2000) presented their animations to observers using a third-person perspective, I argue that a first-person viewpoint is the natural (i.e. observer) perspective to use when investigating the perception of intentional behavior. Three core questions motivated this research: (1) Can a dynamical model of steering and obstacle avoidance be used to describe the interactions of two pedestrians in pursuit-evasion scenarios?
Specifically, can human pursuit and evasion be described by a model of target interception and moving obstacle avoidance originally derived for inanimate objects? (2) If so, do human participants perceive pursuit and evasion based on these modeled movements? What variables in another pedestrian’s movement provide the information used to perceive these behaviors? (3) Do people perceive the behaviors of multiple pedestrians present in a scene in a sequential fashion? In addition to answering these questions with original research, my aim for this thesis is to provide a new method and experimental paradigm for studying agent locomotor behavior and interactions that uses model-driven, realistic stimuli. In the first of three experiments I extended the steering dynamics model originally developed by Fajen and Warren (2003) to locomotor interactions between two people. This study demonstrated that the constant bearing strategy shown to be used to intercept moving targets (Fajen & Warren, 2007) and avoid moving obstacles (Cohen, Bruggeman, & Warren, under review) is also applied to human pursuit and evasion, respectively. Experiment 2 investigated how people use the contingent movement, trajectory, features, and fixation of a virtual avatar to judge whether an avatar is pursuing or evading. The avatars used in Experiment 2 were driven by the steering dynamics model. The results indicate that pursuit is specified by contingent movement that preserves a constant bearing angle, while evasion is specified by movement that avoids a constant bearing angle. Moreover, perceptions of evasion are increased when an avatar moves on a trajectory that causes it to drift laterally. In addition, people are more accurate in their judgments if the avatar contains a feature that specifies how it rotates. The direction of a humanoid avatar’s head was found to increase sensitivity to pursuit and evasion only when the avatar is presented at a close range.
Experiment 3 presented multiple moving avatars to observers in a visual search task: They had to identify a target pursuing avatar from among a small crowd of evading distracters. The results indicate that people perceive the movements of multiple pedestrians sequentially, and require more time to locate a pursuer as the number of evaders increases. The results of this thesis provide an information-based account of the perception of pedestrian intentional behavior. This dissertation created a new methodological paradigm through the use of model-driven virtual avatars and laid the groundwork for future investigations into the dynamics and perception of crowd behavior.

The Shape of Things to Come

The following is a brief overview of the remaining eight chapters in this thesis: Chapter Two, Concerning Agents, places the problem of modeling pursuit and evasion behaviors in an agent-based context. Literature concerning the modeling of agent behavior is reviewed and critiqued, and an alternative method based on Fajen and Warren’s (2003) steering dynamics model is proposed. Chapter Three, Of Models and Steering Dynamics, presents a full description of the steering dynamics model developed by Warren and colleagues. Each of the model’s four components is described: (1) steering to a stationary goal, (2) avoiding a stationary obstacle, (3) intercepting a moving target, and (4) avoiding a moving obstacle. The model’s extensions, strengths, and limitations are discussed. Chapter Four, General Method, presents the Virtual Environment Navigation Laboratory (VENLab), a large facility used to study human locomotor behavior in immersive, ambulatory virtual environments. The hardware, software, and protocols used throughout the research in this thesis are described.
Chapter Five, Experiment 1: Expanding the Steering Dynamics Model to Pursuit-Evasion Interactions, introduces the first study in the present work, designed to answer whether the steering dynamics model and constant bearing strategy can be used to describe human pursuit and evasion. Pairs of individuals interacted in a variety of pursuit and evasion conditions designed to test the model. Comprised of two experiments (Experiment 1A and 1B), the methods, results, and simulations of this study are presented and discussed. Chapter Six, Information for Perceiving Pursuit and Evasion, reviews literature related to the perception of pursuit and evasion behavior. Three informational variables used to perceive pursuit and evasion are identified. Chapter Seven, Experiment 2: Perceiving Pursuit or Evasion in a Virtual Avatar, consists of two experiments. The model developed in Chapter Five is used to generate moving stimuli to test how observers use the informational variables identified in Chapter Six to perceive pursuit and evasion. Experiment 2A investigated how participants used an avatar’s contingent movement to preserve or avoid a constant bearing angle, approach trajectory, and features to determine whether it was pursuing or evading. Experiment 2B focused on the contribution of head fixation information in humanoid avatars presented at different distances. Chapter Eight, Experiment 3: Identifying a Pursuer in a Crowd, extends the results of Experiment 2 to scenarios containing a small crowd of avatars. I investigate whether the information present in multiple moving objects is perceived sequentially by observers, or if pursuit behavior appears to ‘pop out’ of a crowd. Participants in Experiment 3 determined which avatar within a group of two, three, or four avatars was pursuing them. Chapter Nine, General Discussion, synthesizes the main findings of Experiments 1-3. Larger issues related to the perception of behavior and future directions are discussed.
CHAPTER TWO
CONCERNING AGENTS

The research presented in this thesis focuses on the perception of intentional behavior in other pedestrians. In order to characterize the information in patterns of movement that specifies behavior such as pursuit and evasion, a model of the behavior is required. This chapter reviews agent-based models of locomotor behavior from the fields of computer graphics, robotics, and architectural and urban planning, with the aim of evaluating relevant pedestrian models. The goal of this chapter is to highlight approaches to modeling locomotor behavior that do not necessarily specify the information an observer would use to perceive pedestrian behavior. A critique of these approaches demonstrates that many agent-based models do not rely on empirical data, nor are they parsimonious descriptions of behavior. The present work investigates the outstanding issue of whether one can develop such an account of human pursuit-evasion behaviors.

Agent-Based Models

Agents are autonomous, special-purpose entities that perform goal-directed tasks in a defined environment (Brustoloni, 1991; Franklin & Graesser, 1996; Wooldridge & Jennings, 1995). One can use agents to represent or simulate human or animal behavior on the individual level, and agents can be clustered or grouped to simulate crowds. Most agent-based models are autonomous, meaning that while the user can set parameters representing attributes of an agent (e.g. velocity, mass, end-point goals), the user does not directly instruct the agent regarding its path of locomotion from moment to moment. Some approaches, conversely, do allow the user some level of interactivity with the agent, either giving the agent an example or template path to follow (Metoyer & Hodgins, 2004) or allowing the user to control the agent in real-time through a variety of inputs (Lee, Chai, Reitsma, Hodgins, & Pollard, 2002). However, these approaches do not constitute models of behavior.
As the user can intervene at any point, the agent does not need to rely upon any principled constraints or rules derived from theory and experimentation. Agent-based models typically fall into one of two camps, defined primarily by the methods used to compute the agent’s path through space (Pirjanian, 1999). The first is characterized by a priori path planning, or deliberation. These models make use of algorithms that search and select the optimal path for an agent to follow towards a defined goal in the environment. For example, the path planning algorithm A* (e.g. Dechter & Pearl, 1985; Felner, Stern, Ben-Yair, Kraus, & Netanyahu, 2004) computes the shortest path between an agent’s location and its desired goal in a known environment. A* uses a “best-first” heuristic: When the algorithm searches for all possible routes from the agent to the goal, it selects the first path it deems to be short and optimal. The advantage of using such an algorithm is clear: It will always return an optimal path for the agent to follow. However, if one considers the computation necessary for determining the optimal paths for multiple interacting agents, it becomes clear that this approach is unwieldy and reliant upon the available computing power: The number of possible routes to be evaluated increases exponentially with the number of objects in the environment. In addition, this number increases even more if each object is an agent that interacts with the other agents. If an agent’s path is determined solely by deliberation, one must question how it reacts to new or unforeseen obstacles that might move to impede its route. The answer lies in the other class of agent path models: reactive or priority-based models. Reactive models have layered architectures that define how to handle obstacles should they arise while also governing unimpeded path formation. An excellent case study for reactive agent models is Rodney Brooks’ (1990, 1991) subsumption architecture for mobile robots.
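Before turning to the reactive camp in detail, the best-first search behind deliberative planning can be sketched minimally. The following is my own toy grid version, not the planners cited above; the 4-connected grid representation and Manhattan-distance heuristic are simplifying assumptions made for brevity:

```python
import heapq

def astar(grid, start, goal):
    """Best-first (A*) search over a 4-connected grid of 0 (free) / 1
    (blocked) cells, using Manhattan distance as an admissible heuristic."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start, [start])]  # entries: (f = g + h, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path  # with an admissible heuristic, the first goal pop is optimal
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable

# A wall on the middle row forces a detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The cost of deliberation is visible even here: the frontier enumerates alternative routes, and with many interacting agents each agent's plan invalidates the others', forcing replanning.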
The subsumption technique uses a set of nested commands that dictate a robot’s actions; commands with higher priority can override, or subsume, lower priority commands. For example, a robot can have a low priority algorithm for moving towards a distant goal. However, should an obstacle appear, a more important command concerning obstacle avoidance overrides the current behavior. After the obstacle has been passed and is no longer a consideration, the robot resumes the behavior dictated by the lower priority command. The advantage of the subsumption architecture is its ability to consider unknown aspects of the environment as they become available. Khatib (1986) proposed the artificial potential field model for collision avoidance in robots. This approach characterizes a stationary obstacle as a point surrounded by a repulsion field in two-dimensional space, which repels an agent via an inverse square law. However, this approach is not appropriate for studying human evasion, as the informational variables and locomotor capacities available to humans differ from those of the robots used in Khatib’s (1986) work. Moreover, when the artificial potential field approach was compared to a steering dynamics model of human obstacle avoidance (Fajen, Warren, Temizer, & Kaelbling, 2003; see Chapter 3), the latter produced smoother paths through environments when used in a mobile robot. Another example of a reactive model is Reynolds’ (1987) work on bird flocking animation. Using a layered command scheme similar to that of Brooks, Reynolds assigned a series of hierarchical commands to agents in order to simulate a flock of birds moving through space. The most important command in Reynolds’ simulations, and arguably for all agent-based models, was to avoid obstacles. To successfully avoid collisions, a bird-agent (‘Boid’) measured the silhouette edge of obstacles in its path and steered to a minimum of one body-length beyond the edge of the obstacle.
As Reynolds’ simulations involved a collection of Boids, these obstacles included any given Boid’s adjacent neighbors. By maintaining the correct minimum distance away from all obstacles, each Boid effectively remained one body-length away from each other Boid in the flock. One can imagine that while this algorithm for collision avoidance would keep the Boids from flying into one another, it alone would hardly ensure a stable flock. To produce a stable flock of Boids, Reynolds implemented two lower-level commands. The first simply instructed a Boid to match the velocity (a combination of current heading direction and flight speed) of its near neighbors; the second instructed a Boid to attempt to stay toward the center of its near neighbors. This second command lies in direct opposition to the collision avoidance rule. All three nested instructions produced stability within the flock: Each Boid moved to remain close to its neighbors, but maintained a safe distance to avoid collisions, and matched the heading and speed of its neighbors. The result was a flock of agents that moved together in concert. The reactive models seem to be less unwieldy than deliberative approaches, and the principles of layered rules and competition between attractive and repulsive forces provide a good description of how an agent travels through an environment. However, these approaches are not models of actual human or animal locomotion. The rules are not derived from empirical data, nor are they necessarily applicable to novel situations. Moreover, as the environment or task complexity increases, it becomes necessary to ‘invent’ new rules and incorporate them into the reactive model. By not basing a model on locomotor information used by agents, one runs the risk of developing an unprincipled algorithm.

Models of Pedestrian Movement

One class of agent-based models focuses on the simulation of pedestrian patterns of movement.
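The three nested Boid rules reviewed in the preceding section (separation, velocity matching, cohesion) can be sketched in a few lines. This is a hedged illustration in the spirit of Reynolds (1987), not his implementation; the weights, neighbor radius, and Euler update are arbitrary choices made here:

```python
import math

def step_boids(boids, radius=5.0, min_dist=1.0, dt=0.1,
               w_sep=1.5, w_align=0.5, w_coh=0.3):
    """One update of a toy 2D flock. Each boid is [x, y, vx, vy].
    All rule weights and radii are illustrative, not fitted values."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        neighbors = [b for j, b in enumerate(boids) if j != i
                     and math.hypot(b[0] - x, b[1] - y) < radius]
        ax = ay = 0.0
        if neighbors:
            # Separation: steer away from neighbors that are too close.
            for nx_, ny_, _, _ in neighbors:
                d = math.hypot(nx_ - x, ny_ - y)
                if 0 < d < min_dist:
                    ax += w_sep * (x - nx_) / d
                    ay += w_sep * (y - ny_) / d
            # Velocity matching: adjust toward the mean neighbor velocity.
            mvx = sum(b[2] for b in neighbors) / len(neighbors)
            mvy = sum(b[3] for b in neighbors) / len(neighbors)
            ax += w_align * (mvx - vx)
            ay += w_align * (mvy - vy)
            # Cohesion: steer toward the neighbors' center of mass
            # (in direct opposition to separation, as noted above).
            cx = sum(b[0] for b in neighbors) / len(neighbors)
            cy = sum(b[1] for b in neighbors) / len(neighbors)
            ax += w_coh * (cx - x)
            ay += w_coh * (cy - y)
        vx, vy = vx + ax * dt, vy + ay * dt
        new.append([x + vx * dt, y + vy * dt, vx, vy])
    return new

flock = [[0.0, 0.0, 1.0, 0.0], [2.0, 0.5, 0.8, 0.2], [1.0, 2.0, 1.2, -0.1]]
flock = step_boids(flock)
```

Note how the "flocking" is nowhere stated explicitly; it emerges from the competition between the attraction (cohesion, alignment) and repulsion (separation) terms, which is precisely the property the reactive camp exploits.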
An understanding of pedestrian traffic patterns is a critical aspect of designing urban environments, and helps to ensure the safety of people in emergencies (Helbing, Farkas, & Vicsek, 2000) and at large-scale public events (Batty, Desyllas, & Duxbury, 2003). A number of researchers have developed agent-based models that describe patterns of crowd movement, often using observed traffic patterns as a basis. It should be noted at the outset that these pedestrian models are not necessarily concerned with predicting the movement of an individual agent, but rather with the traffic pattern as a collective. A prime example of a pedestrian model is the Social Force Model, developed by Helbing and colleagues (e.g., Helbing & Molnar, 1995). The Social Force Model conceptualizes pedestrian locomotion as a set of influences, or forces, acting on an agent's path, which the authors describe as "motivation[s] to act" (Helbing & Molnar, 1995, p. 4283). Each force on the agent stems from goals and other objects in the environment, such as other pedestrians, borders, and obstacles. Forces are characterized by monotonically increasing (for goals) or decreasing (for obstacles) functions that attract or repel an agent's direction of motion, respectively. Goals are defined as regions that the agent moves toward at a constant speed. At each time-step of a simulation, the agent moves towards the closest point within a goal's region. An advantage of using a goal region is that a number of agents can move towards the same area in space without running the risk of colliding with one another, whereas a singular goal point may produce such a result. The disadvantage is that the model cannot accurately predict how an individual agent will move, as the specific point within the goal region may change over time.
Again, this individualized prediction is not the goal of the Social Force Model, but it should be considered when evaluating the model's overall strength as an approach for modeling human pursuit and evasion. If an agent's specific path through an environment cannot be predicted, then there is no way to specify the information that the agent used to achieve that path. Obstacles or other pedestrian agents act as repelling forces on an agent's direction of motion, forcing it off of the shortest path to a goal region. Each agent is provided with an ellipse-shaped minimal distance that it must keep from other pedestrians and walls; this effectively ensures that collisions between agents do not occur. Agents decelerate in the presence of an obstacle and re-accelerate to a preferred speed after clearing it. The agent's total path is thus defined by the sum of all the forces exerted on its direction of motion and velocity by the goal region, obstacles, and other agents. To account for route selection in scenarios where two or more routes are identical, Helbing and Molnar (1995) introduced random variations in an agent's path (i.e., noise), producing different behavior across agents. Simulations with multiple agents displayed self-organizing behavior, as agents created lanes of traffic. Helbing and colleagues later extended their model and simulated emergency situations in which multiple panicked agents all attempt to leave an area (Helbing, Farkas, & Vicsek, 2000). The authors parameterized panic using an additional term in their model, which could be changed for any given agent. The value of an agent's panic parameter determined how likely it was either to take a path of its own towards the goal region or simply to follow the path of a neighboring agent. By varying this parameter, then, the model could produce herds of moving agents.
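The force summation at the heart of this family of models can be sketched as follows. This is an illustrative toy, not Helbing and Molnar's published equations: the functional forms and constants (amplitude 2.0, decay length 0.5) are placeholders:

```python
import math

def social_force(pos, goal, obstacles, v_pref=1.3):
    """Sum an attractive drive toward the goal with a monotonically
    decreasing repulsion from each obstacle (Social Force-style sketch;
    constants are illustrative, not fitted values)."""
    gx, gz = goal[0] - pos[0], goal[1] - pos[1]
    d_goal = math.hypot(gx, gz)
    # drive toward the goal at the preferred speed
    force = [v_pref * gx / d_goal, v_pref * gz / d_goal]
    for ox, oz in obstacles:
        dx, dz = pos[0] - ox, pos[1] - oz
        d = math.hypot(dx, dz)
        repulsion = 2.0 * math.exp(-d / 0.5)  # decays with distance
        force[0] += repulsion * dx / d
        force[1] += repulsion * dz / d
    return force
```

With no obstacles the force points straight at the goal; adding an obstacle deflects the agent off the shortest path, exactly as described above.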
However, a herd moving towards a single exit in a room containing many possible exits could result in a bottleneck that, in real life, would cost lives. Helbing et al. found that simulations combining individualistic agents, which move directly towards one of many exits, with agents that exhibit herding behavior resulted in optimal crowd movement out of an area. While the results from their simulations depicted patterns that indicate ways to evacuate dangerous environments, the authors admit that it is difficult to quantitatively test the real-world validity of their model against human data. A modified version of the Social Force Model has been used in a later paper to simulate the paths taken by hikers (Gloor, Cavens, Lange, Nagel, & Schmid, 2003). Like the original Social Force Model, goal regions were used to simulate the movement of agents from one part of a hiking path to another. However, to ensure that agents did not prematurely leave the hiking path in favor of a nearby region, this modified model contained a component that kept the agents within the defined path. This model produced following behavior among the agents on the path. However, a lack of reliable data with which to validate the model remains a lingering weakness of this approach. Overall, the Social Force Model stands as a powerful method for estimating the patterns of agent crowds. However, it is not a model of individual pedestrian behavior, and the claims it makes have not been put to the test with empirical studies involving actual human beings. Therefore, it is not an appropriate approach to studying how humans accomplish and perceive pursuit and evasion. STREETS, developed by Haklay, O'Sullivan, and Thurstain-Goodwin (2001), models agents as individual pedestrians. An agent's behavior is governed by a subsumption architecture consisting of a hierarchy of five interacting modules, each responsible for one component of movement.
The module called the Mover, for example, seeks out a goal, while the Helmsman module steers the agent towards a goal once it has been located by the Mover. If the Mover does not immediately find a target, it will move the agent around randomly until a goal is located and the Helmsman takes over. A similar approach has been taken by Batty (2003) to simulate the movement of pedestrian traffic in the Tate Britain Museum. Beginning with random walker sequences (a low-priority command), Batty constrained an agent's movement first with the architectural geometry of the environment. The agents avoid bumping into walls, move toward desired goals, and avoid hitting obstacles. When dealing with other agents, each agent has both an attraction and a repulsion parameter to simulate small groups gathering at exhibits while avoiding larger crowds. In order to ensure that collisions do not occur, the repulsion function generated by other agents at close proximity outweighs their attraction function. The resulting clustering resembles Boid flocking (Reynolds, 1987), with agents maintaining a distance from each other while simultaneously moving towards small groups. Once a group becomes too large, it no longer carries any attractive force for other agents. Batty's model was also used to simulate crowd densities at the Notting Hill Carnival in London, where observational data from closed-circuit television could be gathered on actual human pedestrian patterns. In sections of the park where good video data were available, Batty's model could predict nearly 80% of the clusters that emerged. However, because the clustering predictions were compared to video recordings and not to experimental results, the model does not directly specify the information each agent uses to judge and react to the behaviors of other agents.
Because this approach was able to simulate both the general trajectories taken by crowds and the clustering of agents, Batty remarked that it could be used to predict regions of crowding and potential danger before an actual large-scale event took place. In addition, the predictions of this model were put to the test with video data with fair success. Batty thus accomplished what most pedestrian models, including the Social Force Model, do not. Two groups of researchers (Goldstone, Jones, & Roberts, 2006; Goldstone & Roberts, 2006; Helbing, Keltsch, & Molnar, 1997) have taken the study of pedestrian movement to a longitudinal level by examining how agents' paths in an environment affect subsequent agents' route selection through the same environment via stigmergic changes. Stigmergy is a concept often used to describe the emergent paths that ants follow due to pheromone deposits left by previous ants; generally speaking, stigmergy is a method of effecting change in another agent's behavior by subtly altering the environment (cf. Camazine et al., 2001). In Goldstone's work, agents responded to the path layout of the environment and altered their own paths to follow those laid down by their predecessors. A modified version of a model developed by Helbing et al. (1997) was used to simulate agents self-organizing their paths to follow trails. The model created competition between the desired path of an agent and the convenience of using a path already displayed in the environment; each of these components acted as a force determining how the agent moved. As one path was traveled by more and more agents, that path became more attractive to follow, thereby 'enticing' future agents to follow it. Goldstone and colleagues proposed that this approach is a means of analyzing crowd control in large environments.
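The reinforcement dynamic described above can be sketched as follows. This is an illustrative toy, not the actual Helbing et al. (1997) trail model; the deposit and decay constants are arbitrary:

```python
# Stigmergy sketch: each traversal strengthens a trail's attractiveness,
# and attractiveness decays over time, so a well-worn trail "entices"
# later agents. Constants are illustrative placeholders.

def choose_trail(attractiveness):
    """Pick the index of the currently most attractive trail."""
    return max(range(len(attractiveness)), key=lambda i: attractiveness[i])

def update_trails(attractiveness, chosen, deposit=1.0, decay=0.95):
    """Decay all trails, then reinforce the one just traveled."""
    new = [a * decay for a in attractiveness]
    new[chosen] += deposit
    return new

# A sequence of agents converges on the initially slightly-preferred trail:
trails = [1.0, 1.1, 1.0]
for _ in range(10):
    trails = update_trails(trails, choose_trail(trails))
```

A small initial advantage is amplified by successive travelers, which is the self-reinforcing competition the model above formalizes as opposing forces.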
Rather than explicitly mandating where crowds can and cannot move, this approach involves laying down well-worn paths or obstacles to indirectly influence the flow of pedestrian movements, providing an elegant framework for multi-agent simulations. A subset of pedestrian approaches attempts to identify potential threats or suspicious behavior by analyzing surveillance footage (e.g., Bui, Venkatesh, & West, 2001; Remagnino, Tan, & Baker, 1998). These specialized algorithms tag individual agents and track their movement, employing Bayesian estimation to predict their future movement. For example, Stauffer and Grimson's (2000) method builds up a library of movement patterns from observed data, and then uses the probability of a given pattern of movement as a means of detecting rare or unusual behavior. While at first glance this may seem like an intuitive approach to classifying behaviors into 'normal / typical' or 'dangerous / atypical' categories, this method relies entirely upon a large sample of exemplars, rather than formally characterizing how agents move and interact with one another and the environment. Thus, it cannot be considered a true model of agent locomotor behavior, despite its use of observational data as a basis for classification. In general, pedestrian models have the capacity to simulate large crowds in varying environments, from museums and carnivals to hiking trails. In the few instances in which model predictions have been tested against observational data, one finds that the simulated crowds perform qualitatively similarly to human crowd patterns (e.g., Batty, 2003). What is not clear, however, is whether the behavior of individual agents is at all similar to that of the individual humans they each represent. To be fair, this level of analysis is typically not the focus of pedestrian models.
In addition, a lack of controlled experimental testing makes it difficult to perform the level of analysis one would need in order to analyze individual agent behavior at a quantitative level. In the present work I propose an approach that generalizes to pedestrian interactions by first developing a model of individual agent locomotion. Moreover, the research presented in this thesis is driven by data collected in controlled experiments. In this fashion the individual agent trajectories can be compared to specific human counterparts, and one removes the assumptions made in pedestrian models regarding clustering and crowd movement.

CHAPTER THREE

OF MODELS AND STEERING DYNAMICS

The steering dynamics model developed by Warren and colleagues (Fajen & Warren, 2003; 2004; 2007; Cohen, Bruggeman, & Warren, under review) addresses the concerns about the lack of parsimony and of an empirical basis in the models described in Chapter Two. The goals of the steering dynamics model are to: (1) Describe and predict the route selection and path of an individual agent in a simple environment using a set of dynamical models for steering and collision avoidance, (2) Combine the models to predict the routes taken through more complex environments, and (3) Describe and predict the interactions of multiple agents. The present work begins a line of research focused on the third goal as a step toward characterizing the movement patterns that specify intentional behavior. This chapter is devoted to a review of the current state of the steering dynamics model, beginning first with its origins and core assumptions about the interactions between an agent and its environment.
Inspirations and Assumptions

The motivation behind the steering dynamics model is to understand whether the coordinated behavior between an agent and its environment emerges from the dynamics of coupled systems (Warren, 2006), consistent with principles of self-organization and coordination dynamics (Kugler & Turvey, 1986; Kelso, 1995). Specifically, the model seeks to provide a parsimonious account of human route selection and locomotor trajectory formation without resorting to an explicit path planner. Rather, the route and shape of the path result from the interactions between the agent and the objects in its environment. The agent's locomotor actions through the environment are generated online using only the currently available information, consistent with the ecological perception framework proposed by Gibson (1958/1998; 1979). As such, another assumption behind the steering dynamics model is that the agent possesses a perceptual system that recovers information about its heading from optic flow, along with the visual direction and distance of objects in the environment. The steering dynamics model is more recently inspired by work on robotic steering control (Schöner, Dose, & Engels, 1995). Schöner et al. (1995) developed a series of first-order dynamical systems models for robot locomotion, in which a robot's turning rate is a function of its instantaneous heading direction with respect to objects in its environment. The model described goals and obstacles as attractors and repellers of heading, respectively. As the goal was a fixed, stable point, the robot would turn to face the goal and travel toward it; obstacles were avoided as unstable points. In developing a model for human steering toward a goal, Fajen and Warren (2003) adapted this approach, but used a second-order system to account for the inertial forces on a human body in motion that prevent instantaneous changes in the direction of travel.
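The contrast between a first-order and a second-order steering system can be sketched numerically. The functional forms and parameter values below are illustrative stand-ins, not Schöner et al.'s or Fajen and Warren's fitted equations:

```python
def first_order_step(phi, psi_g, k=2.0, dt=0.01):
    """First-order sketch (cf. Schoner et al., 1995): the turning rate is
    prescribed directly by the heading error, so heading can change
    abruptly from one instant to the next."""
    return phi + (-k * (phi - psi_g)) * dt

def second_order_step(phi, phi_dot, psi_g, b=3.25, k=7.5, dt=0.01):
    """Second-order sketch (cf. Fajen & Warren, 2003): the heading error
    drives angular acceleration while a damping term b resists turning,
    so heading changes smoothly, as an inertial body must."""
    phi_ddot = -b * phi_dot - k * (phi - psi_g)
    phi_dot = phi_dot + phi_ddot * dt
    return phi + phi_dot * dt, phi_dot
```

Both systems relax toward the goal direction psi_g; only the second-order version must accelerate and decelerate its turning rate on the way there, which is the property that motivated the adaptation for human walkers.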
Following from this approach, Fajen and Warren developed a second model to describe collision avoidance with a stationary obstacle. Later, they implemented the constant bearing strategy in a third component that described moving target interception (Fajen & Warren, 2007). Lastly, Cohen, Bruggeman, and Warren (under review) developed a moving obstacle avoidance component, completing the set of four models describing basic locomotor behavior initially proposed by Fajen and Warren (2003).

Defining Space and Variables

Before discussing the components of the steering dynamics model, it is necessary to identify and define the key behavioral variables that govern the control laws of the model. Figure 3.1 depicts a plan view of an environment containing a stationary goal and an obstacle.

Figure 3.1 Plan view of space and variables for an environment containing a goal and stationary obstacle.

An agent's heading angle φ is defined as its current direction of travel with respect to an arbitrary reference axis in the world. Heading can also be expressed as a function of the change in an agent's position (x, z) over time (t):

φ = tan⁻¹(ẋ / ż)   (3.1)

The bearing angle ψ of an object is defined by the direction of the object with respect to the same reference axis, at a distance d. The two types of objects considered in Figure 3.1 are a goal and a stationary obstacle. The distance between either of these objects and the agent can be expressed as a function of the object's position (x_obj, z_obj) and the agent's position (x, z):

d = √[(x_obj − x)² + (z_obj − z)²]   (3.2)

and an object's bearing angle can be expressed as:

ψ = tan⁻¹[(x_obj − x) / (z_obj − z)]   (3.3)

The difference between an agent's heading angle and a goal or obstacle's bearing angle is referred to as the target-heading or obstacle-heading angle, respectively (the subscripts "g" and "o" refer to goal and obstacle), and is expressed as:

(φ − ψ_g) and (φ − ψ_o)   (3.4)

General Form of Steering Dynamics Model

The steering dynamics model takes the form of an angular mass-spring, a second-order system in which angular acceleration φ̈ is a function of heading φ and turning rate φ̇.
When the equation is integrated, the solution is a time series of heading directions φ(t):

φ̈ = −b φ̇ − k_g f_g(φ − ψ_g, d_g) + k_o f_o(φ − ψ_o, d_o)   (3.5)

The agent's angular acceleration is influenced by a series of angular "spring forces." A damping term −b φ̇, reflecting the body's resistance to turning while in motion, reduces oscillations in heading; the parameter b modulates the amount of damping. A goal component acts as a stiffness that determines the rate of change in heading and stabilizes the agent's heading toward an attractor; the parameter k_g is the strength of the spring stiffness that acts on the agent's turning rate. In scenarios with a stationary goal, the agent's heading direction is pulled toward the direction of the goal. If the environment contains an obstacle, it is represented by a stiffness component that repels the agent's heading away from the direction of the obstacle. This general model is elaborated upon below, where I review each of the four components of the steering dynamics model.

Steering to a Stationary Goal

General Strategy: Null the goal-heading error

In order to steer toward a goal, an agent must null the target-heading angle (φ − ψ_g), where the subscript "g" refers to the stationary goal. This is accomplished by turning in the direction of the goal, which brings (φ − ψ_g) to zero and stabilizes the agent's heading in the goal's direction (see Figure 3.1). Accomplishing this also nulls the agent's turning rate and reduces equation 3.7 to zero (Fajen & Warren, 2003).

Goal Component

The formal equation for steering to a stationary goal is:

φ̈ = −b φ̇ − k_g (φ − ψ_g)(e^(−c1·d_g) + c2)   (3.6)

The damping term −b φ̇ acts as a frictional force and prevents oscillation in heading as the agent steers to a goal (modulated by the parameter b, in units s⁻¹). The stiffness term acts to null the target-heading angle at a rate defined by k_g (in units s⁻²). The stiffness is modulated by a distance term (e^(−c1·d_g) + c2), reflecting the observation that the turning rate decreased exponentially with the distance of the goal d_g.
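To make this component concrete, a minimal Euler-integration sketch of the goal system follows. The parameter values are illustrative stand-ins, not the fitted values:

```python
import math

# Illustrative parameter values, not the fitted ones from Fajen & Warren (2003)
B, K_G, C1, C2 = 3.25, 7.5, 0.4, 0.4
DT, V = 0.01, 1.0  # time step (s) and walking speed (m/s)

def simulate_to_goal(x, z, phi, goal, steps=3000):
    """Integrate the goal system: heading phi, turning rate, position (x, z).
    Heading is measured from the z-axis, so x grows with sin(phi)."""
    phi_dot = 0.0
    for _ in range(steps):
        d_g = math.hypot(goal[0] - x, goal[1] - z)
        psi_g = math.atan2(goal[0] - x, goal[1] - z)  # bearing of the goal
        # goal stiffness nulls the target-heading angle, scaled by distance
        phi_ddot = -B * phi_dot - K_G * (phi - psi_g) * (math.exp(-C1 * d_g) + C2)
        phi_dot += phi_ddot * DT
        phi += phi_dot * DT
        x += V * math.sin(phi) * DT
        z += V * math.cos(phi) * DT
        if d_g < 0.1:  # stop once the agent is essentially at the goal
            break
    return x, z
```

Starting the agent at the origin facing straight ahead and placing the goal off to the side produces the characteristic smooth turn onto the goal direction followed by a straight approach.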
The parameter c1 defines the decay rate of the goal's influence as distance increases, and the parameter c2 prevents the goal's influence from dropping to zero at far distances by elevating the asymptote of the slope defined by c1. The agent's overall steering through space is expressed as a 4D system, in which the agent's current heading φ, turning rate φ̇, and position (x, z) are known; the agent's velocity v is provided as input to the model. Written as a system of first-order differential equations, the full model for steering to a stationary goal is as follows:

φ̇ = δ   (3.7A)
δ̇ = −b δ − k_g (φ − ψ_g)(e^(−c1·d_g) + c2)   (3.7B)
ẋ = v sin(φ)   (3.7C)
ż = v cos(φ)   (3.7D)

Parameter Fits

The goal model's parameters were fit to data collected in an immersive virtual environment (Fajen & Warren, 2003). Participants walked to goals that lay at varying initial positions and directions, and the model was fit to the mean goal-heading error in each condition. When the parameters were optimized, the model accurately fit the data (r² (23) = .98). It is important to note that these parameter values were fixed for future studies and simulations, including those presented in the present work.

Avoiding a Stationary Obstacle

General Strategy: Avoid nulling the obstacle-heading error

To steer around a stationary obstacle, an agent must avoid nulling the obstacle-heading angle (φ − ψ_o) by turning away from the direction of the obstacle (Fajen & Warren, 2003); the subscript "o" refers to the stationary obstacle.

Stationary Obstacle Component

The formal equation for steering around a stationary obstacle is:

φ̈ = −b φ̇ + k_o (φ − ψ_o) e^(−c3·|φ − ψ_o|) e^(−c4·d_o)   (3.8)

The positive obstacle stiffness term repels the agent away from the obstacle's direction. To reflect the fact that there are two routes an agent can take around an obstacle, the stiffness term is multiplied by an exponential function e^(−c3·|φ − ψ_o|).
The resultant repulsion function (Figure 3.2) repels the agent to the left (negative obstacle-heading angle) or right (positive obstacle-heading angle) of the obstacle at a rate that decreases with the absolute magnitude of the obstacle-heading angle, asymptoting at zero as the obstacle-heading angle approaches ±60°; this corresponds to the point at which the agent passes the obstacle. The amplitude of this function is modulated by the parameter k_o (in units s⁻²) and its spread by the parameter c3 (in units rad⁻¹). In addition, to reflect the observation that the influence of the obstacle approaches zero at a distance of 3–4 m, the negative exponential distance term e^(−c4·d_o) is multiplied into the obstacle component; the decay rate of this function is determined by the parameter c4 (in units m⁻¹).

Figure 3.2 Obstacle repulsion function; the agent's heading is repelled from the unstable fixed point at (0, 0).

The experimental conditions under which the model was developed contained both a goal and an obstacle, giving:

φ̈ = −b φ̇ − k_g (φ − ψ_g)(e^(−c1·d_g) + c2) + k_o (φ − ψ_o) e^(−c3·|φ − ψ_o|) e^(−c4·d_o)   (3.9)

Here the goal and obstacle components each contribute to the agent's angular acceleration: The goal component stabilizes the heading in the direction of the goal, while the obstacle component ensures that the agent's heading will not become stable in the direction of the obstacle. The competition between these "spring forces" produces paths that resemble those taken by human participants (Fajen & Warren, 2003). As with the goal component above, the agent's steering around an obstacle to a goal is expressed as a 4D system:

φ̇ = δ   (3.7A, rep)
δ̇ = −b δ − k_g (φ − ψ_g)(e^(−c1·d_g) + c2) + k_o (φ − ψ_o) e^(−c3·|φ − ψ_o|) e^(−c4·d_o)   (3.10)
ẋ = v sin(φ)   (3.7C, rep)
ż = v cos(φ)   (3.7D, rep)

Parameter Fits

The obstacle model's parameters were fit to data collected in another study in virtual reality (Fajen & Warren, 2003). Participants avoided obstacles positioned at a variety of eccentricities and distances while en route to a stationary goal.
The obstacle model was fit to the portion of the obstacle-heading error time series up to the point at which participants passed the obstacle. Again, strong fits (r² (23) = .975) were obtained with optimal parameter values, which were fixed for later studies.

Intercepting a Moving Target

Bearing Angle

Figure 3.3 illustrates a general plan view of an environment containing an agent (a) and a moving object, either a target or an obstacle (o).

Figure 3.3 Plan view of an environment containing a moving target or obstacle and a generalized illustration of the constant bearing strategy: a = agent, o = object (target, obstacle).

The change in the object's bearing angle over time, ψ̇, is the result of both the agent's movement and the movement of the object. As such, each source of motion contributes to the computation of ψ̇:

ψ̇ = ψ̇_a + ψ̇_o   (3.11A)
ψ̇_a = −v_a sin(φ_a − ψ) / d   (3.11B)
ψ̇_o = v_o sin(φ_o − ψ) / d   (3.11C)

General Strategy: Maintain a constant bearing angle

Unlike steering to a stationary goal, intercepting a moving target involves maintaining the target at a constant bearing angle while approaching it (Fajen & Warren, 2004; 2007; Chardenon et al., 2005; Lenoir et al., 2002; Olberg, Worthington, & Venator, 2000). The agent accomplishes this by nulling the change in the target's bearing angle ψ̇_m; the subscript "m" refers to the moving target. The agent steers not toward the current direction of the target, but rather in the direction of the interception point, which acts as the attractor of the agent's heading. Figure 3.4 illustrates an agent using the constant bearing strategy.

Figure 3.4 Plan view of space and bearing angle used to intercept a moving target.

Moving Target Component

The constant bearing model for intercepting a moving target is expressed as:

φ̈ = −b_m φ̇ − k_m ψ̇_m (d_m + 1)   (3.12)

The form of equation 3.12 closely resembles the model for steering to a stationary goal (equation 3.7). The damping coefficient b_m has a different value than the parameter b (see below).
The target's stiffness component is multiplied by a linear distance term (d_m + 1) to reflect the fact that the target's optical velocity decreases as its distance increases due to motion parallax; the value '1' in the distance term is in fact one meter. Intercepting a moving target is expressed as a 6D system, in which the agent's current heading φ, turning rate φ̇, and position (x, z), and the position of the target (x_m, z_m), are known. The velocities of the agent v and target v_m and the target's direction of motion φ_m are provided as input to the model. Written as a system of first-order differential equations, the full model for intercepting a moving target is:

φ̇ = δ   (3.7A, rep)
δ̇ = −b_m δ − k_m ψ̇_m (d_m + 1)   (3.13A)
ẋ = v sin(φ)   (3.7C, rep)
ż = v cos(φ)   (3.7D, rep)
ẋ_m = v_m sin(φ_m)   (3.13B)
ż_m = v_m cos(φ_m)   (3.13C)

Parameter Fits

Fajen and Warren (2007) collected data on people intercepting a series of moving targets in virtual reality, where the initial position, speed, and trajectory of the target were manipulated. The parameters were fit to the target-heading error time series, and the model reproduced the empirical results (r² (23) = .88). As with the stationary goal and obstacle models, these parameters were then fixed for future simulations of interception behavior. The question of whether the parameter values for intercepting inanimate targets are the same as those for pursuing pedestrians is addressed in Experiment 1 (Chapter Five).

Avoiding a Moving Obstacle

General Strategy: Avoid a constant bearing angle

In order to avoid a collision with a moving obstacle, an agent uses the inverse of the constant bearing strategy used for intercepting a moving target: The agent avoids nulling the change in the obstacle's bearing angle ψ̇_mo; the subscript "mo" refers to the moving obstacle. The agent therefore steers away from the direction of the collision point, which acts as a repeller of the agent's heading (Cohen, Bruggeman, & Warren, 2005; under review). Figure 3.5 depicts an agent using the constant bearing strategy to avoid a collision.
Figure 3.5 Plan view of space and bearing angle used for avoiding a moving obstacle.

Moving Obstacle Component

The constant bearing model for avoiding a moving obstacle is expressed as:

φ̈ = −b φ̇ + k_mo ψ̇_mo e^(−c5·|ψ̇_mo|) e^(−c6·d_mo)   (3.14)

When this component was developed, a goal was also present in the environment. The combined models are given as:

φ̈ = −b φ̇ − k_g (φ − ψ_g)(e^(−c1·d_g) + c2) + k_mo ψ̇_mo e^(−c5·|ψ̇_mo|) e^(−c6·d_mo)   (3.15)

The positive obstacle stiffness term repels the agent away from the direction of the collision point. Again, this term is multiplied by an exponential function e^(−c5·|ψ̇_mo|). The amplitude of this function is modulated by the parameter k_mo (in units s⁻¹) and its spread by the parameter c5 (in units s/rad). The decay of the exponential distance term is determined by the parameter c6 (in units m⁻¹). In order to remove the influence of the moving obstacle once the agent has passed it, a "shut-off" step-function term is multiplied into the model. After the agent has passed the obstacle (an obstacle-heading angle beyond ±90°), this term and the influence of the obstacle are reduced to zero. As with intercepting a moving target, avoiding a moving obstacle (while en route to a stationary goal) is expressed as a 6D system, in which the agent's current heading φ, turning rate φ̇, and position (x, z), and the position of the obstacle (x_mo, z_mo), are known. The velocities of the agent v and obstacle v_mo and the obstacle's direction of motion φ_mo are provided as input to the model. Written as a system of first-order differential equations, the full model for avoiding a moving obstacle is:

φ̇ = δ   (3.7A, rep)
δ̇ = −b δ − k_g (φ − ψ_g)(e^(−c1·d_g) + c2) + k_mo ψ̇_mo e^(−c5·|ψ̇_mo|) e^(−c6·d_mo)   (3.16A)
ẋ = v sin(φ)   (3.7C, rep)
ż = v cos(φ)   (3.7D, rep)
ẋ_mo = v_mo sin(φ_mo)   (3.16B)
ż_mo = v_mo cos(φ_mo)   (3.16C)

Parameter Fits

Cohen, Bruggeman, and Warren (under review) fit the moving obstacle model's parameters to the obstacle-heading error data of participants avoiding a moving obstacle while walking to a goal in virtual reality. The trajectory, initial position, and speed of the obstacle were varied across experimental conditions. They found that the constant bearing strategy both fit their data (r² (23) = .99) and predicted a new set of test data (r² (23) = .98) with optimized parameter values.
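The bearing-rate geometry that the two constant bearing components share can be sketched directly; this is an illustration of the computation, not the fitted model:

```python
def bearing_rate(agent_pos, agent_vel, obj_pos, obj_vel):
    """Time derivative of the object's bearing angle psi = atan2(dx, dz),
    combining the agent's and the object's contributions to the relative
    motion. A rate near zero signals an interception (or collision) course."""
    dx = obj_pos[0] - agent_pos[0]
    dz = obj_pos[1] - agent_pos[1]
    rvx = obj_vel[0] - agent_vel[0]  # object velocity relative to the agent
    rvz = obj_vel[1] - agent_vel[1]
    # derivative of atan2(dx, dz) with respect to time
    return (rvx * dz - rvz * dx) / (dx ** 2 + dz ** 2)
```

To intercept a moving target, the agent steers so as to null this rate; to avoid a moving obstacle, it steers so as to drive the rate away from zero.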
In simulating more complex environments involving one or more moving obstacles in virtual reality (see below), these parameter values were fixed. As with the moving target model, Experiment 1 addresses whether the parameter values for avoiding a moving obstacle and for evading a pedestrian are similar.

Robustness and Stability

Each of the four model components has been tested for robustness and stability in the presence of noise. Noise has been introduced into simulations either through the model's parameter values or through the behavioral variables used in the model. In each case, the simulations produced stable paths that resembled the data to which they were compared. However, when noise was introduced into the agent's initial heading or lateral position (x) when simulating obstacle avoidance (moving or stationary), the agent often took a route that differed from the corresponding human route. For example, if the data showed participants taking a route ahead of a moving obstacle, the simulated agent might take a route behind the obstacle if its initial conditions were perturbed. These simulations indicated that the initial conditions of a simulation or experimental trial play a large role in determining the route selected around an obstacle, reflecting the bifurcations in state space that exist in the presence of obstacles.

Comparisons to Other Models

Fajen, Warren, Temizer, and Kaelbling (2003) compared the steering dynamics model components for steering to a stationary goal and avoiding a stationary obstacle against the artificial potential field method (Khatib, 1986). Fajen et al. (2003) simulated both the artificial potential field model and the steering dynamics model, and implemented each in a mobile robot for validation. The results indicated that the stationary obstacle component in the steering dynamics model produced locomotor paths that more closely matched human paths (which is unsurprising, given that it was based on human data).
The more interesting result was that the steering dynamics model produced smoother paths around an obstacle, in both simulation and implementation, when compared to the artificial potential field approach. The distance decay function in the obstacle model creates a more gradual reduction in the obstacle's repulsion force than does the inverse square law used in the artificial potential field method, which in turn produces smoother paths around an obstacle. These pieces of evidence support the validity of the model and the decision to derive the agent's angular acceleration from heading and turning rate.

External Validity

All of the steering dynamics model components were developed using data collected in immersive virtual environments. In order to extend the validity of the model to scenarios outside of virtual reality, Fink, Foo, and Warren (2007) conducted a study of stationary obstacle avoidance using matched real and virtual environments. Roughly half of their participants avoided the stationary obstacle in the virtual environment with a larger clearance and lateral deviation than in the real-world environment. The authors interpreted these results as indicating that, while in virtual reality, these participants had some uncertainty regarding the obstacle's direction and location relative to their own location in the environment. The cause of this uncertainty might be the constraints of the head-mounted display (i.e., its limited field of view) and the participants' inability to see their own appendages in the virtual environment. Despite these differences in avoidance behavior, both sets of data were simulated with a single set of obstacle model parameter values, providing strong evidence that the steering dynamics model can be used to describe behavior both in and out of virtual environments. Moreover, these results validated the use of immersive virtual environments in studying human locomotor behavior.
The current thesis expands upon these findings by further testing the model in real, physical environments. The use of avatars as stimuli in Experiments 2 and 3 is justified in that their behaviors are governed by the steering dynamics model.

Infinite Diversity in Infinite Combinations

One of the core assumptions of the steering dynamics model is that by linearly combining the four terms above, each of which describes a basic locomotor behavior, one can describe route selection and path formation through more complex environments. As each of the model components formalizes the influence of a single goal or obstacle on an agent's angular acceleration, the act of combining components mirrors the literal addition of new objects to the environment. For example, if one wanted to simulate an agent avoiding two stationary obstacles while walking to a goal, one would simply add a second obstacle component to equation 3.9. The sum of the three stiffness terms (goal, first obstacle, second obstacle) would then determine the course of the agent's heading over time. Herein lies one of the greatest strengths of this modeling approach: The complexity of the model scales linearly with the complexity of the environment, removing concerns about combinatorial explosions in the number of possible route solutions an agent can take. To date, a number of studies have been conducted by various researchers examining the extent to which the linear combination of model terms can continue to describe human behavior. Bruggeman, Cohen, Fajen, and Warren (in prep.; also Cohen, Bruggeman, & Warren, 2006) have extended the steering dynamics model to environments containing: (1) Two stationary obstacles and a goal, (2) A moving target and a stationary obstacle, and (3) A moving target and a moving obstacle.
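Because the combination is purely additive, the scheme is easy to express in code. The following Python sketch (the thesis's simulations were written in MATLAB) mirrors the general form of the goal and stationary-obstacle components from Chapter 3; the function names and default parameter values here are illustrative placeholders, not the fitted values reported there:

```python
import math

def goal_component(phi, agent, goal, kg=7.5, c1=0.40, c2=0.40):
    """Goal attractor: stiffness on the heading error, scaled by distance decay."""
    psi = math.atan2(goal[0] - agent[0], goal[1] - agent[1])   # goal bearing
    d = math.hypot(goal[0] - agent[0], goal[1] - agent[1])     # goal distance
    return -kg * (phi - psi) * (math.exp(-c1 * d) + c2)

def obstacle_component(phi, agent, obst, ko=198.0, c3=6.5, c4=0.8):
    """Obstacle repeller: decays with both angular and metric distance."""
    psi = math.atan2(obst[0] - agent[0], obst[1] - agent[1])
    d = math.hypot(obst[0] - agent[0], obst[1] - agent[1])
    return ko * (phi - psi) * math.exp(-c3 * abs(phi - psi)) * math.exp(-c4 * d)

def angular_accel(phi, dphi, agent, goals, obstacles, b=3.25):
    """Linear combination: damping plus one additive term per object in the scene."""
    return (-b * dphi
            + sum(goal_component(phi, agent, g) for g in goals)
            + sum(obstacle_component(phi, agent, o) for o in obstacles))
```

Adding a second stationary obstacle amounts to appending one coordinate to `obstacles`: the model grows by one additive term per object, rather than by enumerating possible routes.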
In each of these scenarios the simulations, with parameters fixed at the values described above, predicted both the route selected by human participants around obstacles and the shapes of the resulting paths. Cohen, Fink, and Warren (2007) also demonstrated the model's capacity to describe a person's selection between two potential goals in an environment. By placing two stationary goals at different relative distances and eccentricities from participants' starting locations, the authors investigated the contributions of perceived walking distance and turning rate to participants' decisions to select one goal over the other. The results provided evidence that perceived walking distance to a goal played the larger role in this decision: participants walked to the closer goal even when doing so required a significantly greater turn than traveling to a less eccentric but more distant opposing goal. By adding a "suppression" component to the goal models, which reduced the influence of one goal as the other grew in strength over time, the authors successfully simulated participants' goal selection. Interestingly, these results mirrored those obtained in other dynamic selection tasks between two competitors (e.g., Spivey, Grosjean, & Knoblich, 2005; Spivey & Dale, 2006). Work on extending the model's description of behavior has not been limited to combining components. For example, Owens and Warren (2004) demonstrated that the model could predict the paths traveled by participants to a target moving on a circular trajectory. The present work represents a large step forward in expanding the steering dynamics model, as I use the model to describe the behaviors of two interacting participants using the existing components.

Future Challenges

The steering dynamics model has been shown to describe a large variety of behavior. However, there is a good deal of behavior against which it has yet to be tested.
The linear combination of components described above may at some point fail to describe complex human behavior, at which point either additional terms must be added or the model must be re-conceptualized to deal with such complexity. Adding new components in a principled manner that maintains a parsimonious description of behavior is a challenge for future researchers. The model has also not yet been tested at fast walking or running speeds, as participants are not allowed to run during studies. Moreover, no control law for velocity exists to date; instead, the walking speed of the agent is provided as input to simulations. Another currently unanswered question is how best to describe how people cross, avoid, or "hug" barriers and walls. The last behavior presents the most difficult aspect of modeling barriers: As an obstacle, a barrier should act as a repeller of heading, yet in many scenarios one can imagine an agent walking closely parallel to a barrier (i.e., "hugging" the wall). While some research has been conducted to resolve this issue (Gérin-Lajoie & Warren, 2008), no conclusive model or strategy has been developed. Due to the deterministic nature of the model, which can be made quasi-stochastic through the perturbation of initial conditions, simulated agents are incapable of learning from previous runs. Thus, the model does not reflect the learning of specific target or obstacle movements. Owens (2004; 2008) demonstrated that people are fully capable of learning from stimuli presented in succession and of adapting their movements in turn: Participants have been shown to take shortcuts to targets and to modify their deviations around obstacles. However, as one would expect, the model predicts that people would take a "naïve" route in all cases. Describing this learning process and implementing it in the model is yet another challenge for future research.
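The quasi-stochastic use of the deterministic model mentioned above can be illustrated with a small Python simulation (a sketch with illustrative parameter values, not the thesis's MATLAB code). An agent steers toward a goal with an obstacle sitting directly on the straight path; mirroring a 5 cm perturbation of the initial lateral position flips the route taken around the obstacle, the initial-condition sensitivity described earlier:

```python
import math

def simulate(x0, speed=1.2, dt=1/60, steps=420):
    """Integrate a goal + stationary-obstacle steering model from lateral
    position x0. Parameter values are illustrative placeholders."""
    b, kg, c1, c2 = 3.25, 7.5, 0.40, 0.40       # damping, goal stiffness, decay
    ko, c3, c4 = 198.0, 6.5, 0.8                # obstacle stiffness, decays
    goal, obst = (0.0, 10.0), (0.0, 5.0)        # obstacle directly on the path
    x, z, phi, dphi = x0, 0.0, 0.0, 0.0
    path = [(x, z)]
    for _ in range(steps):
        psi_g = math.atan2(goal[0] - x, goal[1] - z)
        d_g = math.hypot(goal[0] - x, goal[1] - z)
        psi_o = math.atan2(obst[0] - x, obst[1] - z)
        d_o = math.hypot(obst[0] - x, obst[1] - z)
        ddphi = (-b * dphi
                 - kg * (phi - psi_g) * (math.exp(-c1 * d_g) + c2)
                 + ko * (phi - psi_o) * math.exp(-c3 * abs(phi - psi_o))
                      * math.exp(-c4 * d_o))
        dphi += ddphi * dt                      # semi-implicit Euler step
        phi += dphi * dt
        x += speed * math.sin(phi) * dt         # heading measured from +z axis
        z += speed * math.cos(phi) * dt
        path.append((x, z))
    return path
```

`simulate(0.05)` and `simulate(-0.05)` produce mirror-image paths that pass on opposite sides of the obstacle, so drawing initial conditions from a noise distribution yields a distribution of routes from a model that is itself deterministic.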
Summary

The steering dynamics model developed by Warren and colleagues comprises a series of components, each of which describes an elementary locomotor behavior: (1) Steering to a stationary goal, (2) Avoiding a stationary obstacle, (3) Intercepting a moving target, and (4) Avoiding a moving obstacle. Each component has been shown to describe data through parameter fits and to predict new data with fixed parameters. While there are some behaviors not covered or tested by the model, a great deal of research has investigated the extent to which the model's components can be combined to explain more complex behavior. The present work seeks to explain the interactions of two people using the current moving target and moving obstacle models. Having described the steering dynamics model, I present in the next chapter the Virtual Environment Navigation Laboratory and the equipment used to generate immersive virtual worlds.

CHAPTER FOUR
GENERAL METHOD

The studies conducted in the development and expansion of the steering dynamics model, including those in the present work, all took place in the Virtual Environment Navigation Laboratory (VENLab) at Brown University. In this chapter I discuss the equipment used in these experiments, along with the general procedures and protocols that are common to the studies in the present thesis.

Apparatus

The VENLab is a 12 m x 15 m (approx. 40' x 50') laboratory space used to study human locomotor behavior in immersive virtual environments. Researchers in the VENLab rely on three critical pieces of hardware to conduct experiments: (1) the head-mounted display, (2) the IS-900 tracking system, and (3) the graphics PC that generates the virtual environments. The head-mounted display (HMD) used in Experiments 2 and 3 was the SR80-A model developed by Rockwell-Collins / Kaiser Electro-Optics (Carlsbad, CA).
Participants wore the unit while walking in immersive environments that were presented on small LCD screens in the HMD at 60 frames per second. The SR80-A is a stereoscopic HMD weighing approximately 1.75 pounds, with a 1280 x 1024 screen resolution and a 63° (horizontal) x 53° (vertical) field of view with 100% binocular overlap. Figure 4.1 (A) displays the SR80-A seated on a head-shaped dock in the VENLab. The eye relief and inter-ocular distance (IOD) of the displays can be adjusted for each participant's comfort and needs. The computed binocular disparity in the graphics was calibrated by measuring each participant's inter-ocular distance with calipers, and the HMD's lenses were adjusted to match the IOD. A MicroTrax receiving unit (details below) is attached to the top of the HMD. Participants wore a backpack unit (Figure 4.1, B) weighing approximately 16 pounds that housed the HMD control box and power supply.

Figure 4.1 (A) The SR80-A head-mounted display with IS-900 MicroTrax unit attached and (B) the backpack unit worn by participants.

As a participant walks around a virtual environment while wearing the HMD, the Intersense IS-900 tracking system (Bedford, MA) measures and records the participant's head position and orientation. The IS-900 is a hybrid ultrasonic/inertial tracking system, comprising the MicroTrax unit (Figure 4.1, A) mounted on top of the HMD and a grid of SoniStrip ultrasonic emitters positioned on the ceiling of the VENLab (Figure 4.2). The MicroTrax unit houses accelerometers and gyroscopes that measure the rotational movements of the head, while the SoniStrip grid tracks the translational movements of a participant at 120 Hz; data are recorded at 60 Hz. The position (x, y, z) and orientation (pitch, roll, yaw) information is sent to the graphics computer (detailed below), which updates the images presented in the HMD based on the movements of the participant.
The total time delay from head movement to image updating is referred to as latency, and has been measured at approximately 70 ms, or about four image frames.

Figure 4.2 The Intersense IS-900 ultrasonic/inertial SoniStrip tracking grid.

The virtual environments in Experiments 2 and 3 were generated, displayed, and updated using one of two graphics PCs in the VENLab. These computers were also used for data collection in Experiments 1, 2, and 3. An Alienware PC was used in Experiments 1A, 1B, and 2A. The Alienware PC ran Windows XP Professional and contained a 3.0 GHz Intel Xeon processor, 2 GB RAM, and a 256 MB nVidia Quadro FX graphics card. The display generation and data collection for Experiments 2B and 3 were done with a Dell XPS 730 H2C PC that ran Windows Vista Ultimate Edition (64-bit) and contained a 3.7 GHz Intel i7-965 processor, 6 GB tri-channel RAM, and a 1 GB nVidia GeForce GTX 280 graphics card. Participants used a wireless radio mouse in Experiments 2B and 3 to make responses. The mouse was also used to record participants' reaction times in these studies.

Displays

The VENLab uses the virtual reality toolkit Vizard (WorldViz, Santa Barbara, CA) to program and generate virtual environments. Objects in the environments (e.g., poles, floor texture, etc.) are created in 3D Studio MAX (Autodesk 3ds Max, Autodesk Media and Entertainment, Montreal, Quebec) and imported into the Vizard platform. Vizard is designed to work in conjunction with technology common to immersive virtual reality laboratories, including the IS-900 tracking system. The data collected in the present work were analyzed using MATLAB, SPSS, and Microsoft Excel. MATLAB was also used to simulate the data collected in Experiment 1.

General Protocols

All of the participants who took part in the present experiments were adults drawn from the Brown University and Providence, RI communities with no reported visual, motor, or neurological impairments.
Each participant read a consent form detailing the possible risks of taking part in VENLab studies (mild nausea or dizziness due to the HMD). Participants were informed that they could request a break at any point in the experiment and could stop the study entirely with no penalty. Participants were paid for their time at a rate of $8 per hour regardless of whether or not they completed the experiment. The consent forms and research protocols used in the present work were approved by the Brown University Institutional Review Board (IRB). At the end of the experimental session, participants were debriefed as to the nature of the study in which they had participated. The participant was paid, thanked, and escorted from the VENLab. Participants measured their own inter-ocular distance (IOD) as depicted in Figure 4.3 under the experimenter's guidance. This measurement was then used to calibrate the head-mounted display's IOD setting and compute the disparity in the display.

Figure 4.3 Participant measuring his inter-ocular distance (IOD).

Following IOD calibration, the experimenter placed the HMD and backpack detailed above on the participant (Figure 4.4). A second calibration protocol was conducted once the participant was wearing the HMD: A random-dot stereogram was shown to the participant to test the IOD calibration. If the images in the left and right eyes were properly fused, a rectangular box would appear at a different depth than the background image.

Figure 4.4 Experimenter placing the head-mounted display on an eager participant.

Before engaging in an experiment, participants were first introduced to an introductory immersive virtual world designed to help them adjust to walking with the HMD and backpack in virtual reality. This introductory virtual world contained a ground-texture plane as described above and a series of poles.
Participants were instructed to walk to and position themselves inside each pole; a pole disappeared once the participant had entered it. After the final pole had disappeared, participants exited the introductory virtual world and began the experiment. In each study, research assistants called "wranglers" carried the cable that tethers the HMD to the graphics PC (via a ceiling-mounted cable tray) in a fashion that prevented the participant from tripping over or becoming trapped by cable slack.

CHAPTER FIVE
EXPERIMENT 1: EXPANDING THE STEERING DYNAMICS MODEL TO PURSUIT-EVASION INTERACTIONS

The motivating question of Experiment 1 was whether the steering dynamics model, and in particular the constant bearing strategy, could extend to locomotor scenarios involving two people. Given the parallels between intercepting a moving target and pursuit behavior, and between avoiding a moving obstacle and evasion behavior, dyadic pursuit-evasion interactions appeared to be a natural extension of the steering dynamics work conducted in the VENLab. Therefore, in Experiment 1 I specifically investigated the extent to which: (1) the target interception model developed by Fajen and Warren (2007) describes human pursuit, and (2) the moving obstacle avoidance model (Cohen, Bruggeman, & Warren, under review) describes human evasion. I hypothesized that if the constant bearing strategy is effective with inanimate objects, it could be used to characterize how a pedestrian pursues or evades another pedestrian. Experiment 1A was designed to test the steering dynamics model in five scenarios involving pursuit and evasion with two participants; I fit the target interception and obstacle avoidance models' parameters to two simple conditions: (1) pursuing a person moving on a linear path and (2) evading a person moving on a linear path.
I then tested the model with fixed parameters in the remaining three, more complex, conditions: (3) mutual evasion, where two people evade one another while walking to stationary goals, (4) mutual pursuit, where two people walk to meet each other, and finally (5) pursuit-evasion, where a pursuer walks to intercept an evader, who in turn walks to avoid the pursuer while attempting to reach a goal. In addition to the five behaviors tested in Experiment 1A, I wanted to test whether the moving obstacle avoidance model could describe a fairly common locomotor behavior seen in the world every day: passing someone on the sidewalk. After completing Experiment 1A, I designed Experiment 1B and collected data on how one pedestrian avoided a second pedestrian in a tightly constrained, sidewalk-like environment. In Experiment 1B a participant was required to walk to one of two goals located a short distance away while avoiding a person s/he believed was a second participant, who in turn walked toward the first. The second individual was in fact a confederate who was covertly instructed to walk toward the participant in a direction that would either avert or cause a collision. In the latter case, the participant was forced to avoid the confederate. I also included a condition in which the confederate walked to collide with the participant but oscillated laterally as s/he approached the participant. The hope was that this behavior might initiate a coupled oscillation between the two pedestrians, the "sidewalk shuffle" often observed when two pedestrians attempting to pass one another become locked in a symmetrical, lateral pattern of movement. Using the parameter values derived in Experiment 1A, I simulated the mean data of Experiment 1B with the obstacle avoidance model. Together, Experiments 1A and 1B demonstrated that the constant bearing strategy and steering dynamics model could describe pursuit and evasion behaviors with two pedestrians across a range of conditions.
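The constant bearing strategy invoked throughout these experiments has a simple operational reading: if the bearing from one pedestrian to the other remains constant while the range between them closes, the two are on a collision course. A minimal Python sketch of that test (my illustration, not code used in the experiments; the tolerance `tol` is an arbitrary threshold):

```python
import math

def bearing(p, q):
    """Bearing from point p to point q, measured from the +z axis (x lateral)."""
    return math.atan2(q[0] - p[0], q[1] - p[1])

def on_collision_course(pa, va, pb, vb, dt=1/60, tol=1e-3):
    """Constant-bearing test: advance both agents one time step along their
    velocities; if the bearing from A to B barely changes while the range
    closes, the two are headed for a collision."""
    pa2 = (pa[0] + va[0] * dt, pa[1] + va[1] * dt)
    pb2 = (pb[0] + vb[0] * dt, pb[1] + vb[1] * dt)
    db = bearing(pa2, pb2) - bearing(pa, pb)
    closing = math.dist(pa2, pb2) < math.dist(pa, pb)
    return abs(db) < tol and closing
```

A head-on approach leaves the bearing fixed and the range shrinking, so the test fires; a crossing path that will pass clear rotates the bearing, so it does not. Nulling the change in bearing (pursuit) or amplifying it (evasion) is exactly the control signal the interception and avoidance models exploit.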
Experiment 1A: Pursuit-Evasion Scenarios

In the first experiment I collected data on pairs of adult participants interacting in a series of pursuit and evasion scenarios. I used these data to fit the parameters of the moving target and obstacle models and to evaluate whether they described human pursuit and evasion behavior. Both the route taken by participants and the shape of the path should be described by the model. Capturing the dominant route in the data becomes an issue when an agent has more than one way to reach a goal. For example, when avoiding a moving obstacle while walking to a goal, a person can travel in front of or behind the obstacle. The target interception model was fit to data collected in the Target Interception condition, while the obstacle avoidance model was fit to data from the Obstacle Avoidance condition. The models were then used with fixed parameters to simulate the mean paths generated by participants in the Mutual Evasion, Mutual Pursuit, and Pursuit-Evasion conditions. I simulated the agents' locomotor behaviors in the latter conditions by coupling two interacting models, a large step forward and a first for this line of research. The obstacle avoidance model predicted both participants' paths in the Mutual Evasion condition with a high degree of accuracy, and the combined models predicted the Pursuit-Evasion condition mean paths. However, the target interception model failed to accurately simulate the Mutual Pursuit condition. These data indicate that while the constant bearing strategy that underlies each model is used by humans in four of the five tested conditions, pedestrians may use other information when walking to meet another person, as depicted in the Mutual Pursuit condition.

Method

Participants

Ten gender-matched pairs (six female pairs, four male pairs, ages 19-30) of adult participants took part in this experiment; none reported any visual or motor impairments. They were paid a nominal amount for their participation.
Each participant in a pair was assigned a number (Participant 1, Participant 2) based on who arrived at the VENLab first. These numbers were used to determine where each participant began each trial (see below).

Apparatus

Experiment 1A utilized the IS-900 tracking system to measure the head position and orientation of each participant. Each participant wore a bicycle helmet with an IS-900 MicroTrax unit attached to it, as well as a cloth blindfold between trials (see Figure 5.1). Yellow cardboard poles were used in Experiment 1A as starting locations and goal locations.

Figure 5.1 Participant in Experiment 1A wearing a bicycle helmet, MicroTrax receiver unit, and cloth blindfold.

Design

Figure 5.2 illustrates the conditions in Experiment 1A. Participants were instructed to perform one of three actions: (1) "Walk Straight" to the pole 10 m directly across from the starting location (e.g., Participant 1 would walk from location #1 to #2, etc.; see Figure 5.3), (2) "Avoid" the other participant while walking to the pole directly across from the starting location, or (3) "Pursue" the other participant. These three instructions were combined to create the five conditions in Figure 5.2 in a simple, within-subjects design. Conditions were arranged such that the "Avoiding" participant was always caught by the "Pursuing" participant.

Figure 5.2 Diagram of the five conditions in Experiment 1A: (A) Target Interception, (B) Obstacle Avoidance, (C) Mutual Evasion, (D) Mutual Pursuit, and (E) Pursuit-Evasion.

Figure 5.3 (A) Participant starting locations in Experiment 1A. (B) Photograph of the VENLab with three of the starting locations shown.

Procedure

Participants began each trial at one of two starting locations (yellow poles, see Figure 5.3). The participant labeled Participant 1 started at either location #1 or #2; Participant 2 started at either location #3 or #4. At the beginning of each trial, each participant placed the blindfold over his/her eyes.
The experimenters led the participants to their respective starting locations and oriented each toward the opposite target pole. The experimenters then whispered one of the three actions to each participant. Thus, at the start of each trial each participant was unaware of his/her partner's starting location or action. Trials started when the experimenters clicked a button on the radio mouse to begin data collection and instructed the participants to remove their blindfolds and "begin." Each participant then performed the action given to him/her by the experimenter: Participants told to "Walk Straight" walked at a comfortable speed to the target pole directly in front of them; participants instructed to "Avoid" walked to the target pole in front of them while moving to avoid a collision with the other participant. Finally, participants instructed to "Pursue" walked so as to intercept and catch the other participant. Trials ended either when each participant was standing at his/her goal pole (in the Avoidance or Mutual Evasion conditions) or when one participant touched the other on the shoulder (in the Interception, Mutual Pursuit, and Pursuit-Evasion conditions). After a trial was completed, the participants placed their blindfolds over their eyes and awaited an experimenter to lead them to the starting locations for the next trial. Participants were instructed to perform their actions explicitly and not to attempt to feign movements or "fake out" their partner. While participants were free to walk at comfortable paces, they were instructed not to run. Participants were informed that they could take a break at any time or discontinue the experiment if need be; all of the participants completed the experiment without incident. Each pair of participants completed ten practice trials followed by eight trials in each condition, for a total of 40 experimental trials; trials were presented in a randomized order.
The data for different starting locations were collapsed prior to analysis.

Data Analysis

The two-dimensional coordinates (x, z) of each participant's head position were recorded to disk at a sampling rate of 60 Hz. These data were then filtered using a forward and backward 4th-order Butterworth filter with a cutoff frequency of 0.6 Hz to reduce noise due to head and gait oscillations. To spatially average paths within each condition, the time-series data were binned into 50 equal time segments, and the mean (x, z) position in each bin was computed. This procedure produced a mean path for each participant in each condition. In the Avoidance and Mutual Evasion conditions, data were separated into routes based on how each participant passed the other prior to computing mean paths. For example, in the Avoidance condition the Avoider participant could take a route ahead of or behind the Obstacle participant (see Figure 5.2). In these conditions, the route that occurred significantly more than 50% of the time, as determined by a sign test, was identified as the preferred route. Only the preferred routes were used in fitting and testing the moving obstacle avoidance model (see below for details). Following an analysis of route selection in the Avoidance and Mutual Evasion conditions, the observed path shapes in each condition were evaluated using the steering dynamics model. The target interception model was fit to the path in the Target Interception condition (Figure 5.2A), and the moving obstacle model was fit to the Obstacle Avoidance condition (Figure 5.2B). The remaining three conditions were simulated using the model with fixed parameters derived from fitting the Target Interception and Obstacle Avoidance conditions. The simulation details are described below.

Results

Route Selection

The Target Interception, Mutual Pursuit, and Pursuit-Evasion conditions each exhibited a single route taken by each participant.
The single routes observed in the Target Interception, Mutual Pursuit, and Pursuit-Evasion conditions are shown in Figures 5.6A, 5.9A, and 5.10A below, respectively. Two routes were observed in the Obstacle Avoidance and Mutual Evasion conditions. The preferred route in the Obstacle Avoidance condition was ahead of the moving obstacle-pedestrian: The Avoider participant walked ahead of the Obstacle participant in 70% of trials (sign test, p < 0.05). In the Mutual Evasion condition, the preferred route was one in which the Evader 1 participants walked behind the Evader 2 participants, and Evader 2 participants walked in front of Evader 1 participants. This route, which I will call "Behind-Ahead," occurred in 68% of all trials (sign test, p < 0.05). This route was preferred over the other observed route, "Ahead-Behind," in which the behavior of each Evader participant was reversed. Figure 5.4 displays the mean path generated by participants in the Avoidance condition for the (A) preferred and (B) non-preferred routes. The mean paths for each observed route in the Mutual Evasion condition are shown in Figure 5.5. Interestingly, the remaining two possible routes, in which each participant would attempt to cross ahead of or behind the other, did not occur in any trial. This is expected under the constant bearing strategy for moving obstacles: If each participant attempted to take the same route around their partner, both bearing angles would be approximately constant, indicating a future collision. Therefore, when one participant begins to take a particular route, the second participant responds in kind and adopts the alternative route to avoid a collision.

Figure 5.4 Mean paths generated by participants in the Obstacle Avoidance condition: (A) Preferred route, 70% of trials, and (B) Non-preferred route, 30% of trials.
Figure 5.5 Mean paths generated by participants in the Mutual Evasion condition: (A) Preferred route, 68% of trials, and (B) Non-preferred route, 32% of trials.

Path Shape - Simulations

To evaluate the steering dynamics model's description of the data collected in Experiment 1A, I fit the target interception and obstacle avoidance models to the mean time-series of heading in the Target Interception condition and for the preferred route in the Obstacle Avoidance condition. Model parameters were fit by iteratively searching a parameter space, computing a simulated time-series for each combination of parameter values, and comparing it to the mean human data using a least-squares approach. In fitting the moving target model, I compared the entire heading time-series generated by the model with the mean data. However, when fitting the parameters of the moving obstacle model, I compared only the portion of the time-series prior to the point at which the participant passed the z coordinate of the obstacle, which is indicated by a filled circle on the obstacle trajectory in the figures. The remainder of the model's heading time-series was determined solely by the goal component, with the parameter values found in Chapter 3. Simulations were performed in MATLAB at 60 Hz using a 4th-order Runge-Kutta manual-step integration routine (Polking, 2002). I computed the best-fitting values for the two free parameters of the moving target model and the three free parameters of the moving obstacle model using an iterative least-squares procedure. I fit each model by providing as input the mean initial lateral position (x), mean initial heading, and mean velocity of the Pursuer and Avoider participants in the Interception and Avoidance conditions, respectively. Because the Target and Obstacle participants walked in a straight path, I provided their mean x and z coordinates at each time-step as input to the simulations.
Cohen, Bruggeman, and Warren (under review) found that the moving obstacle model generated equally good predictions regardless of whether the mean velocity or the mean velocity time-series was given as input. To account for data indicating that moving pedestrians require up to 250 ms of processing time to respond to changes in target direction (Cinelli & Warren, 2007), I implemented a 250 ms delay in the simulations of the Experiment 1A data. For the first 250 ms of each simulation, the Pursuer or Avoider is unaware of the target or obstacle (angular acceleration is set to zero). After 250 ms, the Pursuer or Avoider responds to the target or obstacle bearing direction based on its position at (t - 250 ms). For both conditions the coefficient of determination (r2) and root-mean-squared error (rmse) were computed between the simulated and human time-series of heading.

Target Interception fit

Figure 5.6A displays a bird's-eye view of the mean preferred human and model paths. Figure 5.6B shows the corresponding time-series of heading for the mean data and model: heading values of 180° indicate a direction of travel directly in front of the agent's starting location, with headings greater than 180° indicating a rightward heading and headings less than 180° a leftward heading. The mean data (red solid line) were fit with the moving target model (red dashed line). Like the mean data for the Target participant (solid black line), the model was provided with a moving target (dashed black line) that traveled on a straight path to a goal (black diamond). The optimal parameter fit for the moving target model was quite good, resulting in rmse = 3.5° and r2 (48) = .87. The human heading time-series makes an initial increase before beginning to decrease monotonically. The model does not capture this pattern, which is likely due to the fact that participants turned briefly toward the initial position of their partner before turning onto a constant bearing angle.
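The 250 ms visual delay described above is conveniently implemented as a fixed-lag sample buffer: at 60 Hz, 250 ms corresponds to 15 samples. A Python sketch of the scheme (my illustration, not the original MATLAB implementation):

```python
from collections import deque

def make_delayed(delay_s=0.25, rate_hz=60, fill=None):
    """Return a function that maps the current sample to the sample from
    delay_s ago. It returns `fill` until the buffer is primed, mirroring
    the initial period in which the simulated agent ignores its partner."""
    buf = deque(maxlen=int(delay_s * rate_hz))   # 15 samples at 60 Hz
    def delayed(sample):
        out = buf[0] if len(buf) == buf.maxlen else fill
        buf.append(sample)
        return out
    return delayed
```

Feeding the partner's position through `make_delayed()` at each simulation step yields the position at (t - 250 ms); while the function still returns `fill`, the target or obstacle term can be set to zero, giving the "unaware" first 250 ms.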
Figure 5.6 Moving target model fits to mean data in the Target Interception condition: (A) Mean human and model paths and (B) Mean human and model heading time-series.

Obstacle Avoidance fit

In the Obstacle Avoidance condition, the fit of the moving obstacle model to the mean human data was also quite strong, resulting in rmse = 7.8° and r2 (48) = .91 for the optimal parameter values. The mean paths and heading time-series for the human data and simulations are shown in Figure 5.7.

Figure 5.7 Moving obstacle model fits to mean data in the Obstacle Avoidance condition (preferred route): (A) Mean human and model paths and (B) Mean human and model heading time-series.

These strong fits indicate that participants use the constant bearing strategy when pursuing or avoiding another pedestrian, supporting the hypothesis that the constant bearing strategy generalizes from inanimate virtual objects to real animate pedestrians. The optimal parameters found in the above fits are, however, different from those found in previous work (see Chapter 3). These differences highlight possible variations in how the constant bearing strategy is implemented across different classes or types of objects. Before discussing this issue in further depth, I evaluate how the models predict the remaining conditions with fixed parameters. Part of my hypothesis regarding the steering dynamics model's application to pursuit and evasion behaviors requires that it be able to describe the interactions of two agents. For the predictions made below, each interacting agent was simulated in MATLAB using the 250 ms delay function.

Mutual Evasion

The moving obstacle model predicted the preferred route for each participant and closely reproduced the shape of each mean path: rmse = 5.2° and r2 (48) = .77 for Evader 1, and rmse = 4.5° and r2 (48) = .89 for Evader 2. The mean paths and heading time-series for the human data and model predictions are shown in Figure 5.8.
Figure 5.8 Moving obstacle model predictions of mean data in the Mutual Evasion condition (preferred route): (A) Mean human and model paths for both agents and (B-C) Mean human and model heading time-series for each agent.

Mutual Pursuit

While the target interception model was able to produce two paths that resulted in each agent intercepting its partner, it failed to predict the shape of these paths when compared against the mean human data (Figure 5.9, A). Moreover, the simulations did not capture the pattern in the heading time-series of each agent (Figure 5.9, B-C). This yielded larger error: rmse = 9.8° and r2 (48) = .31 for Pursuer 1 and rmse = 12.8° and r2 (48) = .71 for Pursuer 2.

Figure 5.9 Moving target model predictions of mean data in the Mutual Pursuit condition: (A) Mean human and model paths for both agents and (B-C) Mean human and model heading time-series for each agent.

There are a number of possible explanations for the poor predictions in this condition. It is possible that when walking to catch or meet a pedestrian who is also moving on a pursuit path, the constant bearing strategy no longer yields the optimal solution. Mutual Pursuit, as I have instantiated it, differs from the other four behaviors in Experiment 1A in that it is a cooperative behavior that does not involve an additional goal; Mutual Evasion, while cooperative, simply involves passing by another person without a collision in order to reach a goal further down one's path. Mutual Pursuit requires an additional level of locomotor coupling that is not achieved by nulling the change in the bearing angle alone. Another possibility is that the constant bearing strategy is in fact used when walking to meet another pedestrian, but requires an attenuation of the control parameters.
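For reference, the damping and stiffness structure at issue has the form of the Fajen and Warren (2003) stationary goal control law; the equation below is reproduced from memory of the published model, and Chapter 3 gives the exact form and parameter values used in this work:

$$\ddot{\phi} = -b\,\dot{\phi} - k_g\,(\phi - \psi_g)\left(e^{-c_1 d_g} + c_2\right)$$

where $\phi$ is the agent's heading, $\psi_g$ and $d_g$ are the direction and distance of the goal, $b$ is the damping term, and $k_g$ the stiffness. A similar damping term appears in the moving target component, and relaxing it allows the simulated path to curve more before settling onto the target direction.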
Additional simulations with the target interception model reveal that when the damping parameter is relaxed to a value of 1.00, the simulated paths have more curvature and the model's error decreases (rmse = 6.5° and 5.6°). Mutual Pursuit may involve a more sensitive response to a partner's motion, which is represented in the model by a smaller damping parameter. A third possible explanation stems from the observation that the curved paths in the Mutual Pursuit condition resemble paths taken by participants toward a stationary goal (Fajen & Warren, 2003). I fit two parameters of the stationary goal model (while keeping the remaining parameters fixed) to the Mutual Pursuit data and obtained a reasonable fit: rmse = 7.8° and 4.1°, r2 (48) = .48 and .80. This fit implies that the stationary goal control law may be used when two people steer toward one another to meet.

Pursuit-Evasion

In predicting the Pursuit-Evasion condition, the target interception and moving obstacle avoidance models were used to simulate the Pursuer's and Evader's mean paths, respectively. Each model predicted its human counterpart remarkably well: rmse = 4.5° and r2 (48) = .96 for the Pursuer and rmse = 2.5° and r2 (48) = .78 for the Evader. Figure 5.10 displays both the mean paths and heading time-series.

Figure 5.10 Moving target and moving obstacle model predictions of mean data in the Pursuit-Evasion condition: (A) Mean human and model paths for both agents and (B-C) Mean human and model heading time-series for each agent.

This condition indicated that: (1) the constant bearing strategy is used to pursue a target that actively seeks to avoid being caught, supporting previous results (e.g.
Olberg, Worthington, & Venator, 2000), (2) the steering dynamics model can be used to simulate two interacting, competing human agents, and (3) pursuit-evasion behavior in humans can be generated through the coupling of the target interception and moving obstacle avoidance models without additional modifications.

Discussion

The results of Experiment 1A provide evidence that the constant bearing strategy underlies pursuit and evasion behaviors in humans. In four of five conditions the steering dynamics model extended to dyadic pedestrian interactions. The fifth condition, Mutual Pursuit, was not simulated by the model with fixed parameters. It was, however, accurately simulated both when the damping parameter of the moving target model was relaxed and when the stationary goal model was used. Each of these descriptions implies something about the nature of mutual pursuit. The moving target model with decreased damping indicates that the same control strategy is used as in other pursuit tasks (the constant bearing strategy) but is modulated for a specific scenario. The stationary goal model implies that a completely different control law governs mutual pursuit (nulling the goal angle, or goal error). Both options require that pedestrians first know that they are performing mutual pursuit, or quickly pick up on available information that specifies mutual pursuit, in order to make these accommodations to their control strategy. In Experiment 1A neither participant knew that the other would be pursuing them until they had initiated their own pursuit. This indicates that at some point early in the course of a mutual pursuit trial, the participants shifted into a new control strategy. Future work could investigate the time-course of this shift to better understand whether participants continue to use the constant bearing strategy or switch to a different strategy.
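The two-agent coupling used for these predictions, in which each simulated agent takes the other's delayed position as its input, can be sketched schematically in Python. The simple bearing-based turning rule below stands in for the full steering dynamics model and is illustrative only; function names and parameter values are my own.

```python
import math

def turn_by_bearing(heading, bearing, k, dt, sign):
    """Turn toward (sign = +1) or away from (sign = -1) a bearing."""
    error = math.atan2(math.sin(bearing - heading),
                       math.cos(bearing - heading))
    return heading + sign * k * error * dt

def coupled_simulation(p1, p2, h1, h2, pursue=True,
                       steps=120, speed=0.02, k=3.0, dt=1/60, delay=15):
    """Two agents, each steering by the bearing of the other's position
    from `delay` frames earlier (the 250 ms delay). pursue=True couples
    two interceptors; pursue=False couples two avoiders."""
    hist1, hist2 = [p1], [p2]
    sign = 1.0 if pursue else -1.0
    for t in range(steps):
        q2 = hist2[max(0, t - delay)]   # delayed view of agent 2
        q1 = hist1[max(0, t - delay)]   # delayed view of agent 1
        b1 = math.atan2(q2[1] - p1[1], q2[0] - p1[0])
        b2 = math.atan2(q1[1] - p2[1], q1[0] - p2[0])
        h1 = turn_by_bearing(h1, b1, k, dt, sign)
        h2 = turn_by_bearing(h2, b2, k, dt, sign)
        p1 = (p1[0] + speed * math.cos(h1), p1[1] + speed * math.sin(h1))
        p2 = (p2[0] + speed * math.cos(h2), p2[1] + speed * math.sin(h2))
        hist1.append(p1)
        hist2.append(p2)
    return hist1, hist2
```

The key design point is that neither agent's path is scripted: each trajectory emerges from the two control laws responding to one another, exactly as in the Mutual Evasion and Pursuit-Evasion predictions.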
The notion that the constant bearing strategy might be "refit" for mutual pursuit points to a general finding from Experiment 1A: the different parameter values between the fits obtained in the present work and those found in Chapter 3. One way to interpret these differences is that people perceive and regard the movement of other pedestrians differently than that of virtual objects. While the control laws remain the same between the two types of stimuli, adjustments are made to adapt to the specific objects encountered in the environment. To take an example from the present work: one parameter of the moving obstacle model is considered to be a "risk" parameter (Fajen & Warren, 2003; Cohen, Bruggeman, & Warren, under review) governing the distance at which an agent begins to deviate to avoid a moving obstacle. As the value of this parameter decreases, the distance at which a deviation begins increases. Cohen, Bruggeman, and Warren (under review) found an optimal value of 1.3 m-1, resulting in paths that begin to deviate around a moving obstacle at a distance of around 3.5 m. Conversely, the optimal value in Experiment 1A was 0.3 m-1, which resulted in paths that deviated at a distance of around 5 m. These results suggest that when pedestrians avoid one another, they do so earlier than when avoiding poles in virtual reality. This may be because inanimate virtual objects move on highly regular trajectories, whereas real pedestrians are more likely to alter their movements in response to their environment. This in turn may lead participants to deviate earlier in order to allow more time to compensate for unexpected movements. While participants initiated deviations earlier when avoiding one another as compared to virtual objects, there was less distance between participants when they passed one another than when they avoided moving virtual obstacles.
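The qualitative relation between this decay parameter and deviation onset can be illustrated with a toy exponential distance term of the kind used in the model's obstacle component. This is a simplified Python illustration only: the repulsion form exp(-c * d) and the threshold value are placeholders, not the fitted model, and the resulting distances are not meant to reproduce the 3.5 m and 5 m values above.

```python
import math

def deviation_onset_distance(c, threshold=0.01):
    """Distance at which an exponential repulsion term exp(-c * d)
    first exceeds `threshold`, i.e. where a path deviation would begin.
    Solving exp(-c * d) = threshold gives d = -ln(threshold) / c."""
    return -math.log(threshold) / c

# A smaller decay constant extends the obstacle's influence farther,
# so the deviation begins at a greater distance:
d_steep = deviation_onset_distance(1.3)    # steeper decay
d_shallow = deviation_onset_distance(0.3)  # shallower decay
```

In the full model the actual onset distance also depends on the agent's speed and the other dynamics terms, but the direction of the effect is the same: a smaller decay value produces earlier deviations.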
This decreased "passing distance" is reflected in the lower value of the relevant model parameter found in this study compared to previous work (Cohen, Bruggeman, & Warren, under review), and converges with results indicating that participants have some level of uncertainty about the locations of virtual objects (Fink, Foo, & Warren, 2007). Future work investigating and modeling pedestrian behavior when interacting with different stimuli may shed light on how people categorize or assign valence to objects in the world as they perform pursuit or evasion.

In sum, the constant bearing strategy and steering dynamics model provide an accurate description of pedestrian steering in pursuit and evasion scenarios. The model generalized from inanimate objects to interacting pedestrians. Maintaining a constant bearing angle was a successful strategy for pursuit, and avoiding a constant bearing angle was used for evasion. The patterns of motion generated by a pedestrian that either maintains or avoids a constant bearing angle with respect to an observer should be perceived as pursuit and evasion, respectively. This hypothesis is tested in Experiment 2 (Chapter Seven), where moving stimuli are generated using the steering dynamics model. While the range of scenarios tested in Experiment 1A is not exhaustive, the successful predictions of the Mutual Evasion and Pursuit-Evasion conditions indicate that the steering dynamics model may be useful in simulating other types of pedestrian interactions. A common, specific interaction is passing another person on a sidewalk. This form of mutual evasion is investigated in Experiment 1B.

Experiment 1B: Sidewalk Passing

This follow-up experiment was conducted to further explore the application of the steering dynamics model to pursuit-evasion behaviors with two pedestrians. It focuses on the task of passing a pedestrian on the sidewalk, an activity common in everyday life.
Based on the predictions made in the Mutual Evasion condition of Experiment 1A, I hypothesized that the moving obstacle avoidance model would predict the mean data collected on sidewalk passing. I tested six conditions in Experiment 1B. A confederate acting as the "second participant" performed one of three instructed actions on each trial (see Figure 5.11): (1) walk toward the participant in a direction that prevents a collision (Opposite Direction), (2) walk toward the participant in a direction that would cause a collision (Same Direction), or (3) walk toward the participant so as to cause a collision and oscillate laterally during the approach (Oscillate). The third action was intended to provoke symmetrical oscillations in the participant, in the hope of spontaneously coupling the lateral movements of the participant and confederate. Regardless of whether this behavior was achieved, I hypothesized that the oscillation exhibited by the confederate might cause the participant to take a wider deviation around the confederate. The three action conditions were crossed with two starting distances between the two pedestrians.

The moving obstacle model fit the mean data with reasonable accuracy, providing further evidence for the constant bearing strategy. While Experiment 1B failed to elicit the "sidewalk shuffle" behavior with any regularity, the data did indicate greater passing distances in the oscillatory conditions.

Figure 5.11 Diagram of Confederate Action conditions in Experiment 1B: (A) Same Direction condition, (B) Opposite Direction condition, (C) Oscillate condition.

Method

Participants

Eleven adults took part in this experiment, but one was dropped from data analysis due to technical failures involving the IS-900 MicroTrax unit. Of the remaining ten participants (five female, five male, ages 19-35), none reported any visual or motor impairments, nor had any participated in Experiment 1A. They were paid a nominal amount for their participation.
Confederate

Two VENLab research assistants (one female, one male) served as confederates for Experiment 1B; the gender of the confederate always matched the gender of the participant. Participants were introduced to the confederate at the start of the experiment and were told that the confederate was another participant. The confederate's identity and association with the VENLab were revealed during debriefing at the conclusion of the study.

Design

Figure 5.11 illustrates the conditions in Experiment 1B. The participant and confederate began each trial blindfolded, standing either 2 m or 4 m apart and facing one another. Each was positioned between a pair of yellow poles spaced 1 m apart. The placement of the yellow poles was intended to create a constrained environment similar to a sidewalk. At each starting distance the confederate performed one of three actions: (1) walk toward the participant in a direction that would cause a collision, (2) walk toward the participant in a direction that would prevent a collision, or (3) walk toward the participant so as to cause a collision and initiate oscillatory movements. Experiment 1B therefore had a 2 (Starting Distance: 2 m, 4 m) x 3 (Confederate Action: Opposite Direction, Same Direction, Oscillate) within-subjects design.

Procedure

At the beginning of each trial, the participant and the confederate placed their blindfolds over their eyes. The experimenters led the participant and confederate to their respective starting locations (either 2 m or 4 m apart) and oriented each so that he or she faced directly forward.
The participant was covertly instructed to walk to one of the two poles opposite him or her while avoiding the confederate; the confederate was covertly told which direction the participant would be walking (left or right) and was instructed to: (1) walk in the opposite direction as the participant, (2) walk in the same direction as the participant, or (3) walk in the same direction as the participant and initiate oscillatory lateral movements when close to the participant. Trials started when the experimenters clicked a button on the radio mouse to begin data collection and instructed the participants to remove their blindfolds and "begin." The participant and confederate then performed the actions given to them by the experimenters; the confederate walked to a yellow pole after completing his or her task. Trials ended when both the participant and confederate were standing at one of the yellow poles opposite their respective starting locations. After each trial was completed, the participant and confederate each replaced the blindfold and awaited an experimenter to lead him or her to the starting location of the next trial. Participants were free to walk at a comfortable pace but were instructed not to run. Participants were informed that they could take a break at any time or discontinue the experiment if need be; all of the participants completed the experiment without incident. Each participant-confederate pair completed eight trials in each condition for a total of 48 experimental trials; trials were presented in a randomized order. The data were collapsed across the participants' two potential goal locations prior to analysis.

Data Analysis

The analysis of head position and route selection performed in Experiment 1A was again used in Experiment 1B. To preserve aspects of the lateral movements of the participant and confederate, I used a 2nd-order Butterworth filter instead of a 4th-order filter when processing the raw position data.
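The filtering step can be sketched with SciPy. This is a sketch only: the sampling rate and cutoff frequency below are hypothetical placeholders, since this passage does not specify the values used, and the dissertation's own processing was done in MATLAB.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_position(raw, fs=60.0, cutoff=1.0, order=2):
    """Zero-phase low-pass filtering of a raw 1-D position series with
    an order-N Butterworth filter. A lower order (2 vs. 4) rolls off
    more gently and so preserves more of the lateral oscillations."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, raw)
```

Using filtfilt (forward-backward filtering) avoids introducing a phase lag into the position data, which matters when the heading time-series is later differentiated from it.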
In each condition the preferred route taken by participants around the confederate was computed; in conditions where no preferred route existed, both routes were simulated. The mean distance between the participant and confederate when the two passed one another (Passing Distance) was analyzed with a 2 (Starting Distance) x 3 (Confederate Action) repeated-measures analysis of variance.

Results

Figure 5.12 displays the mean Passing Distance, the distance between the participant and confederate when the former passed the latter, in each condition. Unfortunately, oscillatory movements were not reciprocated by participants with any regularity or size; at most a single lateral step was taken in response to the confederate's movements. A repeated-measures analysis of variance (ANOVA) found a marginal effect of Confederate Action on Passing Distance, F (2, 18) = 3.2, p = .065, effect size = .14. There was no effect of Starting Distance, F (1, 9) = 1.09, ns, effect size = .02, nor was there an interaction, F (2, 18) = 2.0, ns, effect size = .05. I hypothesized above that the Oscillate condition would produce a wider deviation and Passing Distance than the Same Direction condition. I conducted a post-hoc, one-tailed, paired-sample t-test between the Same Direction and Oscillate conditions at the 4 m Starting Distance, which revealed a significant difference, t (9) = 2.14, p < .05, effect size = .337.

Figure 5.12 Passing distances (cm) between participant and confederate in each Confederate Action condition (Opposite Direction, Same Direction, Oscillate) at the 2 m and 4 m Starting Distances.

The significant Passing Distance result indicates that the oscillatory movements initiated by the confederate elicited an additional avoidance response in the participants' movements. This difference only occurred at the 4 m Starting Distance. At 2 m, the participants had very little time or space to make adjustments to their path around the confederate.
However, at 4 m participants could perceive a difference in the confederate's movements and incorporated it into their own avoidance response.

Simulations

I simulated the mean preferred path in each condition with the moving obstacle model; the Oscillate condition at both the 2 m and 4 m Starting Distances had no preferred route (sign test, ns), and I therefore simulated both mean paths. In a fashion similar to the Avoidance condition fits in Experiment 1A, I provided the confederate's mean position at each time-step as input to the model, as well as the participant's mean initial heading, lateral position, and mean velocity. The 250 ms delay function was again used in all simulations. The mean human and simulated paths are shown in Figure 5.13, and Figure 5.14 displays the corresponding heading time-series data for each condition. The model matched the preferred route in each condition. The heading time-series predictions were reasonably good, with a mean rmse = 5° and mean r2 (48) = .88 across the eight simulations.

Figure 5.13 Mean paths and moving obstacle model predictions for Experiment 1B (panels: Opposite Direction, Same Direction, Oscillate Route 1, Oscillate Route 2). The black participant paths move in a positive z direction from a starting point of 0.5 m (2 m Distance conditions) and -0.5 m (4 m Distance conditions). The red confederate paths move in the negative z direction from a starting point of 2.5 m (2 m Distance conditions) and 3.5 m (4 m Distance conditions).

Figure 5.14 Mean and simulated heading time-series for Experiment 1B (panels: Opposite Direction, Same Direction, Oscillate Route 1, Oscillate Route 2).

Discussion

The model matched the general shape of the human preferred path and heading time-series in nearly every simulation. Simulated deviations around the confederate were only slightly greater or smaller than the deviations in the human paths.
The simulated heading time-series in the 4 m Oscillate (Route 1) condition did not match the human data: the simulation in this condition approached a heading of 180° more quickly than the human data, suggesting that the humans were slower to react and deviate from their original heading direction in this condition. By increasing the time delay in the simulations from 250 ms to 750 ms and resetting the risk parameter to 1.3 m-1 (so that the agent moves closer to the confederate before deviating; parameter value from Cohen, Bruggeman, & Warren, under review), the model fit to this condition improved: rmse = 3.2°, r2 (48) = .92. The results of Experiment 1B provide further support that the steering dynamics model can describe human (mutual) evasion behavior. Experiments 1A and 1B together provide strong evidence that the constant bearing strategy underlies human pursuit and evasion in a variety of interactive conditions.

CHAPTER SIX

INFORMATION FOR PERCEIVING PURSUIT AND EVASION

By demonstrating the robustness of the constant bearing strategy in Experiment 1, I identified it as a source of information one can use to test how people perceive pursuit and evasion. In this brief chapter, I review selected works that focus on the perception of intentional behavior in adults and children in order to highlight the informational variables contained within an agent that pursues or evades using the constant bearing strategy. I identify three pieces of information that have been shown to inform observers' perceptions of intentional behavior: (1) the approach trajectory of an agent, (2) how an agent moves contingently with another agent, and (3) the direction of an agent's head or eye fixations. These variables are then manipulated in Experiments 2 and 3 by driving virtual avatars in immersive environments with the steering dynamics model.
Classic research on the perception of causal events (Michotte, 1963) and dynamic agent behavior (Heider & Simmel, 1944) has given rise to a large body of recent work investigating how adults and children perceive actions performed by actors or in dynamic displays (Scholl & Tremoulet, 2000). Much of this work has focused on how people parse behavior into goal-directed action segments (e.g. Baird & Baldwin, 2001; Zacks, 2004) or the extent to which infants identify the intentional behavior of an agent (e.g. Woodward, 1998; Bloom & Veres, 1999; Rochat, Striano, & Morgan, 2004; Buresh & Woodward, 2007). While the present thesis is not focused on how observers determine whether an observed object is an intentional agent, it is important to note that the hypotheses and tasks in Experiments 2 and 3 are grounded in empirical findings stretching back to infancy. Developmental research by Biro and colleagues (Biro, Csibra, & Gergely, 2007; Biro & Leslie, 2007) in particular has noted that infants as young as six months are sensitive to information that specifies goal-directed action (e.g. non-linear trajectories, self-propelled motion) when observing live demonstrations, despite having no experience with the stimuli or the coordination to complete the observed actions themselves.

Empirical studies have demonstrated that observers use the information provided by the trajectory, contingent motion, and head/eye fixation of moving agents to perceive pursuit and evasion behavior. An agent that pursues using a constant bearing angle should provide different contingency and trajectory information than an agent that is evading by avoiding a constant bearing angle. Moreover, the direction of an agent's gaze or head fixation should make its intended direction of travel more salient. Experiments 2 and 3 are motivated by the question of how observers use the information provided by the constant bearing strategy to determine whether a pedestrian is pursuing or evading.
The following review demonstrates how each informational variable is used by observers to identify pursuit and evasion across a variety of tasks, displays, and stimuli.

Informational Variables for Pursuit and Evasion

Trajectory

The trajectory along which one object travels toward another can inform an observer about its behavior. Andersen and Kim (2001) demonstrated that people are quite sensitive to trajectories that lead to collisions when viewed from a first-person perspective. They presented participants with displays containing multiple objects that moved on linear trajectories toward the participant's position (i.e. approaching from a far distance in the display). Participants were asked to judge whether an approaching object would collide with their position. Overall, participants were successful at distinguishing those trajectories that would result in collisions from those that would not. Ni and Andersen (2008) later extended these findings to objects traveling on curved trajectories. Andersen and Kim (2001) found that objects on collision trajectories visually expand, decrease in distance, and maintain a constant angular position over time. This last piece of information is consistent with the constant bearing strategy implemented in the steering dynamics model. Thus, Andersen and Kim (2001) have shown that observers are sensitive to the interception strategy known to be used in pursuit, as demonstrated in Experiment 1. Perceiving an object that maintains a constant bearing and visually expands should likewise be useful to participants in determining whether an agent is pursuing them: an agent on a pursuit trajectory should be perceived as being on a collision course. This hypothesis was tested in Experiment 2. Andersen and Kim (2001) also found that participants were more accurate and faster to identify collision trajectories as the number of moving objects in the display decreased.
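The collision signature just described, a constant angular position combined with decreasing distance (and hence visual expansion), can be checked computationally, as in this Python sketch; the tolerance value is an arbitrary placeholder.

```python
import math

def on_collision_course(rel_positions, tol=1e-3):
    """Given an object's positions relative to a stationary observer,
    report whether its bearing stays constant while its distance
    shrinks -- the optical signature of a collision trajectory."""
    bearings = [math.atan2(y, x) for x, y in rel_positions]
    dists = [math.hypot(x, y) for x, y in rel_positions]
    constant_bearing = all(abs(b - bearings[0]) < tol for b in bearings)
    approaching = all(d2 < d1 for d1, d2 in zip(dists, dists[1:]))
    return constant_bearing and approaching
```

An object passing to the side drifts in bearing even while its distance shrinks, so only genuine collision (or interception) trajectories satisfy both tests.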
They interpreted this result as evidence for limited-capacity, sequential processing of multiple moving objects. In Experiment 3 the number of moving avatars in a small crowd was manipulated to test whether a similar effect would occur. Gao, Newman, and Scholl (in press) have conducted experiments in which adults identified pursuit behavior in a top-down (i.e. third-person perspective) display containing moving triangular figures. The display consisted of a black featureless environment containing white figures. The triangular figures are described as agents, with the majority labeled as "sheep," or evaders. One agent is labeled as the "wolf," or pursuer. Gao, et al. (in press) manipulated the extent to which the pursuer approached a nearby evader in order to identify which range of behaviors was easy or difficult to identify: at each frame in the display, the pursuer moved in a direction drawn randomly from within a range of possible headings defined by what the authors call a "subtlety" parameter. This parameter centered the pursuer's range of headings on a nearby evader. While the pursuer always moved closer to an evader, it did not necessarily move directly toward it, nor was the small-angle vertex (the "nose") of the pursuer always facing toward an evader. In a series of studies participants were asked to identify which agent in an observed display was the pursuer. Gao, et al. (in press) also had participants actively control an evader and attempt to avoid capture by the pursuer. With each task the authors found that participants easily identified the pursuer and avoided being caught when its behavior was constrained and consistent, that is, when it moved directly toward and rotated to face evaders. By constantly rotating to keep its "nose" pointed at the evader, the pursuer provided the participant with an additional piece of information that specifies pursuit behavior.
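The "subtlety" manipulation just described can be sketched as follows. This is a schematic reconstruction in Python, not Gao et al.'s code; the function name and the exact sampling scheme are my own illustrative choices.

```python
import math
import random

def wolf_heading(wolf_pos, sheep_pos, subtlety_deg, rng=random):
    """Sample the pursuer's next heading uniformly from a window of
    +/- subtlety_deg centered on the direction to the nearby evader.
    Subtlety 0 gives direct pursuit; larger values approach
    'stalking', where the wolf rarely heads straight at the sheep."""
    direct = math.atan2(sheep_pos[1] - wolf_pos[1],
                        sheep_pos[0] - wolf_pos[0])
    offset = math.radians(rng.uniform(-subtlety_deg, subtlety_deg))
    return direct + offset
```

With subtlety 0 the wolf's heading is fully determined by the sheep's position, so its contingent motion is maximally informative; widening the window degrades that information, consistent with the drop in detection performance described below.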
However, if the pursuer "stalked" the evader by pointing away from it or moving on a circuitous route toward it, participant performance decreased. Gao, et al.'s (in press) results illustrate that people are sensitive to the approach taken by a pursuer toward an evader. Moreover, they demonstrate that the pursuer's contingent movement, constantly approaching the evader in response to the evader's movement, specifies pursuit behavior. When this information becomes less reliable in a dynamic display (i.e. "stalking" as opposed to direct pursuit), people have a more difficult time identifying the pursuer. In a first-person display, "stalking" would be specified by a pursuer maintaining a constant bearing angle with its target but not expanding visually at a constant or symmetrical rate. In the present work the avatars performed either pursuit or evasion behavior in an explicit fashion (i.e. they did not "stalk" participants). In Experiment 2, using a first-person viewpoint in an immersive virtual environment, I tested the contribution of trajectory to the perception of pursuit and evasion.

Contingent Movement

How one object moves in response to, or contingent upon, the movements of other objects can specify its underlying behavior. In pursuit and evasion, particularly, the approach of a pursuer causes an avoidance response in the evader, which in turn evokes a new movement in the pursuer, and so on. Using third-person, top-down displays inspired by Heider and Simmel's (1944) interacting geometric figures, Bassili (1976) showed participants two circles (one leading, one trailing) that moved in either contingent or random patterns; he manipulated both the temporal and spatial contingency of the trailing circle's movement with respect to the leading circle.
He found that all participants in his study perceived the trailing agent to be pursuing the leading agent when (1) the trailing agent's movements were contingent upon the leading agent's, and (2) the trailing agent also converged on the location of the leading agent. When Bassili (1976) changed the spatial contingency of the pursuer from a converging to a parallel trajectory while retaining its temporal contingency with its counterpart, fewer participants perceived pursuit. The latter motion pattern resembles Gao, et al.'s (in press) stalking behavior described above. When Bassili (1976) removed any spatial or temporal contingency between the two agents, participants not only failed to perceive pursuit-evasion behavior, but actually described the circles as inanimate.

Experiment 2 was designed to test whether participants could use an avatar's contingent movement to preserve or avoid a constant bearing angle to judge whether it was pursuing or evading. Participants were able to "probe" the avatar's contingent movement by moving around an immersive virtual environment. By probing the avatar, participants could perceive how the avatar rotated either toward them (pursuit) to maintain a constant bearing or away from them (evasion) to avoid a collision. If participants perceived the contingencies in the avatar's movement that specified pursuit or evasion, they would accurately identify the avatar's behavior.

Head & Eye Fixation

Researchers have found that in addition to the trajectory and contingent nature of an agent's movement, the orientation of its head and/or eyes may be used by observers to determine its behavior (e.g. Pelphrey, Singerman, Allison, & McCarthy, 2003; Pelphrey, Viola, & McCarthy, 2004; Morris, Pelphrey, & McCarthy, 2005; Loomis, Kelly, Pusch, Bailenson, & Beall, 2008; Nummenmaa, Hyona, & Hietanen, 2009). Loomis, et al. (2008) tested an observer's ability to judge the orientation of an experimenter's head.
An experimenter was positioned at different retinal eccentricities at a distance of 2 m. On each trial the experimenter turned his/her head to fixate one of nine locations, and the participant was instructed to judge the orientation of the fixation. The authors found that people could accurately perceive the orientation of the experimenter's head even if the experimenter was positioned at a 90° retinal eccentricity from the observer. Participants were more accurate when they were able to see the movement of the head as opposed to only the static initial and final fixation directions. When asked to judge the direction of a person's eye fixation, participants were only accurate when they were able to clearly see the fixating individual's eyes (retinal eccentricity ≤ 4°).

Pelphrey, et al. (2004) found that observers could clearly perceive whether the eyes of an approaching virtual avatar shifted to fixate them or were averted. The experimenters conducted this task while participants were scanned with fMRI. They found that when the avatar's gaze shifted to meet the participant's, there was increased activity in the superior temporal sulcus (STS), a region known to be involved in the perception of social events, compared to the STS activity elicited by avatars with an averted gaze. An extension of Pelphrey, et al.'s (2004) behavioral finding is seen in recent work by Nummenmaa, et al. (2009), in which participants were asked to select a movement (left or right) to avoid a collision with an approaching avatar. The avatars gazed either to the left or the right of the participant. Participants' eye movements were tracked in real time during each trial. Nummenmaa, et al. (2009) found that participants not only moved in the direction opposite the avatar's fixation, but looked predominantly in the opposite direction as well.
The authors interpreted these results as evidence that participants were using the avatar's gaze to plan or initiate their own collision avoidance behavior, and more generally that fixation provided information for perceiving the intended path of a pedestrian.

Overall, these results indicate that the direction of an agent's head and eyes can be perceived and used by observers to determine how it will move. Like the stimuli in Loomis, et al. (2008), the avatars in Pelphrey, et al. (2004) and Nummenmaa, et al. (2009) were observed at close range, and participants could not interact with the avatars. Experiments 2 and 3 tested the role of head fixation in judging pursuit and evasion in combination with trajectory and contingent motion information. In addition, they also varied the initial distance of the avatar. Only head fixation was manipulated, because Loomis, et al. (2008) found that head fixation is more useful than eye fixation over a range of retinal eccentricities. Specifically, pursuers fixated the participant, while evaders fixated a stationary goal located behind the participant's initial position. I hypothesized that the addition of head fixation information would contribute to the identification of pursuers and evaders. The following chapter presents two studies (Experiments 2A and 2B) that investigate how participants use the trajectory, contingent movement, and head fixation information present in the constant bearing strategy to judge whether a moving agent is pursuing or evading them.

CHAPTER SEVEN

EXPERIMENT 2: PERCEIVING PURSUIT OR EVASION IN A VIRTUAL AVATAR

Experiment 1 demonstrated that the constant bearing strategy and steering dynamics model generalize to interactions between two humans in a variety of pursuit and evasion scenarios. The focus of the present work now shifts to investigate whether people perceive pursuit and evasion on the basis of information generated by the constant bearing strategy.
In Chapter Six I highlighted three pieces of information that could be used to perceive pursuit and evasion: (1) contingent movement, (2) trajectory, and (3) static features / head fixation. Experiment 2 tests whether these informational variables are used to identify pursuit and evasion in another pedestrian.

A Shortcut to Avatars

In order to conduct controlled studies investigating the perception of pursuit and evasion, I required consistent and reliable pedestrians. To achieve that end, I turned to virtual avatars whose movements could be programmed in accordance with the steering dynamics model. In this regard, Experiment 2 represents the start of a new paradigm that uses the steering dynamics model to generate human-like displays designed to investigate pedestrian interactions. The avatars use the steering dynamics model as described in Chapter Three. The stationary goal and obstacle components use the original parameters from Fajen and Warren (2003). The moving target and obstacle components take advantage of the new parameter values for pursuit and evasion from Experiment 1A. Thus, the avatars used in Experiment 2 interact with human participants as a person would, from the deviations around obstacles to the constant bearing trajectories toward a moving target. As some participants in Experiment 2 commented, the experience of interacting with the virtual avatars was uncanny due to the realistic ways in which they reacted to a participant's movements. While using avatars presents potential issues related to external validity and the realism of stimuli, the experimental control gained through using them as opposed to confederates (as in Experiment 1B) outweighs the challenges and limitations. One particular challenge relates to the animation of the avatar: The eyes of the avatars could not be manipulated as an independent feature. Therefore, in Experiments 2 and 3 the avatar's head direction was manipulated instead.
Regardless of how the avatars appeared, the nature of their locomotion was empirically derived and driven by the steering dynamics model, which increases the internal validity of Experiments 2 and 3 when compared to previous work (e.g. Gao, et al., in press).

Experiment 2A: Contingent Movement, Trajectory, and Features

How pedestrians move contingently with one another can specify the particular behavior each is exhibiting. In Experiment 2A I investigated the extent to which participants used the contingent movement of a virtual avatar to judge whether the avatar was pursuing or evading them. When pursuing the participant, the avatar moved contingently so as to preserve a constant bearing angle with the participant. When evading the participant, however, the avatar constantly and contingently avoided a constant bearing angle. Unlike the work described in Chapter Six, Experiment 2A was conducted in an immersive virtual environment, with participants viewing the avatars from a first-person perspective. Therefore, the avatars' movements in Experiment 2A provided additional optical information that specified whether they were pursuing or evading: Pursuing avatars expanded symmetrically as they maintained a constant bearing angle and did not drift laterally. Evading avatars, in comparison, drifted laterally as they avoided a constant bearing angle, did not expand at the same rate as pursuing avatars, and, depending on the goal of the evading avatar, may have expanded asymmetrically. In addition to the contingent movement and expansion information presented by the avatar, the trajectory on which it moves specifies its behavior. Following Andersen and Kim (2001), a trajectory that indicates pursuit should be highly specified in the visual field, and should co-occur with a constant bearing angle and symmetrical expansion. An evasion trajectory, however, is more variable.
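The two contingent-movement regimes can be illustrated with a minimal numerical sketch. This is not the dissertation's actual steering dynamics implementation (the `gain` parameter and function names are hypothetical); it only shows the core contrast: a pursuer turns so as to null any change in its bearing to the target (the constant bearing strategy), while an evader turns so as to amplify that change.

```python
import math

def bearing(agent_pos, agent_heading, target_pos):
    """Direction of the target relative to the agent's heading (radians)."""
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    return math.atan2(dy, dx) - agent_heading

def steer(agent_heading, bearing_now, bearing_prev, gain=2.0, pursue=True):
    """One heading update. Pursuit cancels bearing drift, which yields an
    interception (constant bearing) course; evasion exaggerates the drift,
    steering the agent off a collision course. `gain` is a made-up value."""
    d_bearing = bearing_now - bearing_prev
    if pursue:
        return agent_heading + gain * d_bearing  # cancel the drift
    return agent_heading - gain * d_bearing      # exaggerate the drift
```

Under this sketch, a pursuer whose bearing to the target is not changing makes no heading correction, which is exactly the constant bearing condition described above; an evader in the same situation actively turns away from it.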
The evading avatars in Experiment 2A actively evaded the participants while moving toward a stationary goal. The placement of this goal combined with the motion of the participant defined the avatar's trajectory. If this goal was located directly behind the participant's initial position, then the evading avatar had less lateral drift, expanded at a nearly symmetrical rate, and sometimes appeared to be pursuing the participant. If participants accurately perceived the avatar's contingent movement (through "probing" it with their own movements), then the evading behavior would be revealed. Conversely, if the evading avatar's goal was placed to the side of the participant's initial position, then the avatar exhibited lateral drift and asymmetrical expansion, as it never faced the participant "head-on." Participants were instructed to walk toward a pole in virtual reality while they judged whether an avatar was pursuing or evading them. They were instructed to respond to the avatar in a complementary manner, that is, to evade a pursuer or to pursue an evader. If the participants could reliably distinguish these two behaviors from the available information, they should evade pursuing avatars more than they would pursue them, and vice versa. There were four movement conditions in Experiment 2A: a Pursuit Condition and three Evasion Conditions (see Figure 7.1). The Pursuit Condition contained an avatar that pursued the participant. The Evasion Conditions used an avatar that evaded the participant while walking to a goal either directly behind the participant's initial position (0-degree approach angle) or behind and slightly off to the side of the participant's initial position (1-degree and 2-degree approach angles). The 0-degree approach angle was designed to provide an ambiguous Evasion Condition that would test whether participants used the contingent movement of the avatar.
I hypothesized that if participants did use this information, they would accurately judge pursuing avatars as pursuers and evading (0-degree approach) avatars as evaders. Regarding the 1- and 2-degree approach angles in the Evasion Condition, I hypothesized that participants would perceive these avatars as evaders more accurately due to the increased asymmetry in the avatar's expansion and lateral drift when compared to the 0-degree approach angle. In a pilot study I found that as the approach angle increased, participants more often perceived the avatars as exhibiting evasion behavior.

Figure 7.1 Diagram of design and conditions in Experiment 2A.

In addition to the avatar's contingent movement and trajectory, I manipulated the avatar's appearance and features in Experiment 2A to test whether having features that specified heading direction contributed to more accurate perceptions of pursuit and evasion. There were three types of green-textured pole avatars and two human avatars in Experiment 2A (see Figure 7.2). The Gliding Pole, which moved along the ground plane, provided no feature-based information about its heading direction and served as a baseline for comparing the remaining avatars. The Rotating Pole rotated as it moved to provide texture-based heading information. The "Nose" Pole featured a nose- or beak-like appendage on the front of the avatar that always faced where it was traveling. The Walking Human was an animated human avatar that faced its direction of travel. Finally, the Human with Head Fixation avatar always fixated its head on its goal. If this goal was the participant (i.e. pursuit), it always fixated on the participant regardless of its body's direction of travel. If the avatar was evading the participant, it always fixated its stationary goal.
Figure 7.2 Avatar models used in Experiment 2A: (A) Gliding / Rotating Pole, (B) "Nose" Pole, (C) Male Walking Human / Human with Head Fixation, (D) Female Walking Human / Human with Head Fixation.

I hypothesized that participants would be more accurate at judging pursuit and evasion when interacting with an avatar with features, as the features increase the optical differences between pursuit and evasion. When pursuing, the "Nose" Pole's feature and the Walking Human's body expanded symmetrically and rotated as the avatar moved to always face the participant. The Human with Head Fixation constantly tracked the movement of the participant with its head and body, providing an additional piece of information beyond that provided by the Walking Human. In the 0-degree approach Evasion Condition, these features did not rotate toward or track the participant's movements. In the 1-degree and 2-degree approach conditions, these features pointed at an angle, further specifying the avatar's trajectory. The Rotating Pole specified all of this information as well, but on a less overt level. Participants may perceive the texture gradient as it rotates and use this information to judge pursuit and evasion. A further open question was whether participants judged the behaviors of the human avatars differently than the pole avatars.

Method

Participants

Twenty-three adults took part in Experiment 2A, but one was dropped from data analysis due to technical difficulties, and two were dropped due to nausea. Of the remaining 20 participants (13 female, seven male, ages 18-26), none reported any visual or motor impairments, nor had any participated in any previous experiment in the present work. They were paid a nominal amount for their participation.

Apparatus

Experiment 2A was conducted in an immersive virtual environment. Participants wore the SR80-A head-mounted display and backpack.
The virtual environment displays were generated using the Alienware PC and presented at 60 frames per second.

Displays

The virtual displays in Experiment 2A consisted of a textured ground plane (50 m²) and a black sky. The ground texture map was a randomly generated grayscale Julesz pattern. A gray textured virtual pole was used as a starting location for each trial in these studies, and a red textured orientation pole was positioned 10 m away from the starting pole. The start and orientation poles had a radius of 0.1 m and stood 2.5 m tall. Once participants were standing in the starting pole and facing the orientation pole, the orientation pole became a blue textured goal pole (same dimensions). This event signaled the beginning of a trial. After participants walked 1 m, a virtual avatar appeared 8 m away, directly in front of the participant. The avatar appeared moving along an initial heading direction and immediately began to turn onto an approach trajectory (see details below). The avatar's initial heading was randomly selected from eight possible values on each trial: ±30°, ±45°, ±60°, or ±90°. The number of trials containing each of the eight possible positive (right-moving) and negative (left-moving) initial headings was counterbalanced. The speed of the avatar was fixed at 1 m/s. The 250 ms time delay used in simulating the data in Experiment 1 was implemented in the virtual avatars. The appearance, contingent movement, and trajectory of the avatar were manipulated. Figure 7.1 illustrates the conditions in Experiment 2A.
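The heading counterbalancing just described can be sketched as follows. This is a hypothetical illustration, not the experiment's actual control code; the function name and the use of Python's random module are my own.

```python
import random

def counterbalanced_headings(n_trials, seed=0):
    """Equal numbers of each of the eight initial headings
    (+/-30, +/-45, +/-60, +/-90 degrees), shuffled into a random order."""
    values = [sign * h for h in (30, 45, 60, 90) for sign in (1, -1)]
    assert n_trials % len(values) == 0, "trial count must divide evenly"
    trials = values * (n_trials // len(values))
    random.Random(seed).shuffle(trials)
    return trials
```

For a 160-trial session structure, this yields 20 trials at each of the eight initial headings.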
The avatar had one of five appearances: (1) Gliding Pole, a green textured pole that moved along the ground plane (Figure 7.2 A); (2) Rotating Pole, a green textured pole that could rotate about its central (y) axis; (3) "Nose" Pole, which added a nose- or beak-like feature to the Rotating Pole (Figure 7.2 B); (4) Walking Human, an animated human avatar ("Virtual Complete Characters," WorldViz, Santa Barbara, CA; Figure 7.2, C and D); and (5) Human with Head Fixation, in which the human avatar's head rotated independently of its body and always fixated its goal from the moment it appeared. All five avatars had a radius of 0.3 m and were 1.625 m tall. The gender of the human avatars was matched to the gender of the participant. The contingent movement and trajectory of the avatar were manipulated by driving the avatar's motion either with the moving target model (Pursuit Condition) or the moving obstacle model (Evasion Condition). In the Pursuit Condition, the avatar treated the participant as a moving target, while in the Evasion Condition the avatar avoided the participant as a moving obstacle while moving to a stationary goal located at one of three approach angles: 0-, 1-, or 2-degrees offset from the participant's initial position. In the Evasion Condition the avatar's stationary goal was invisible, so the participant could not see its position. Although the Pursuit and Evasion Conditions differ categorically, I did not balance the number of Pursuit Condition trials against the total number of Evasion Condition trials across all three approach angles (e.g. 60 Pursuit trials vs. 20 0-deg, 20 1-deg, and 20 2-deg Evasion trials). Rather, for design purposes I treated the Pursuit Condition and all three approach angles in the Evasion Condition as a single factor called Avatar Movement. Each Avatar Movement condition had the same number of trials.
This allowed me to simultaneously investigate whether contingent movement was sufficient to distinguish between the Pursuit and Evasion (0-deg approach) Conditions and whether the three different Evasion approach trajectories contributed to the perception of evasion. The four Avatar Movement conditions were crossed with the five Avatar Appearances to create a 4 x 5 factorial within-subjects design. In analyzing the data (see below), the Pursuit and Evasion Conditions were treated as categorically different.

Procedure

At the start of each trial, with only the gray starting and red orientation poles visible, participants were instructed to position themselves inside the start pole and face the orientation pole. Trials began when the red orientation pole turned blue (and hence became the goal pole). This event signaled to participants that they should begin walking directly toward the now-blue pole. After walking 1 m, the virtual avatar appeared. Participants were given prior instructions that the avatar would either look like a pole or a person, and would either be pursuing or evading them. In each trial participants were required to judge as quickly and accurately as possible whether the avatar was pursuing or evading them, and then immediately take the complementary action: If the avatar appeared to be a pursuer, then participants were to avoid it and walk to the blue pole before being caught by the avatar; if the avatar appeared to be an evader, participants were to pursue the avatar and attempt to catch it before it reached its invisible goal and disappeared. Trials ended when one of the following occurred: (1) the participant reached and stepped into the blue goal pole, (2) the avatar reached its goal, or (3) the participant and avatar touched one another. Participants were free to walk at comfortable paces but were instructed not to run, walk backward, or stop walking.
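The 4 x 5 within-subjects design described above can be sketched as a trial-list generator. This is a hypothetical illustration, not the experiment's actual code; the condition labels are mine.

```python
from itertools import product
import random

AVATAR_MOVEMENT = ["pursuit", "evade-0deg", "evade-1deg", "evade-2deg"]
AVATAR_APPEARANCE = ["gliding pole", "rotating pole", "nose pole",
                     "walking human", "human with head fixation"]

def build_trial_list(trials_per_condition=8, seed=0):
    """Cross the two factors (4 x 5 = 20 conditions), repeat each cell
    equally, and shuffle into a randomized presentation order."""
    cells = list(product(AVATAR_MOVEMENT, AVATAR_APPEARANCE))
    trials = cells * trials_per_condition
    random.Random(seed).shuffle(trials)
    return trials
```

With eight trials per condition this produces 160 trials, which could then be split across two sessions of 80 trials each.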
Participants were informed that they could take a break or discontinue the experiment at any time. Participants completed eight trials in each of the 20 conditions for a total of 160 trials, which were presented in a randomized order. Experiment 2A was conducted over two sessions for each participant due to time and fatigue constraints: Four blocks (80 trials total, with four trials per condition) were completed in each session.

Data Analysis

Participants' actions with respect to the avatar were measured using their head position time-series data. On each trial the participant's action was coded as either Perceived Avatar Evasion or Perceived Avatar Pursuit. An action was coded as Perceived Avatar Evasion if the participant turned off of a path toward the blue goal pole and pursued the avatar. An action was coded as Perceived Avatar Pursuit if the participant walked toward the blue goal pole and did not deviate onto a path toward the avatar. The proportion of Perceived Avatar Evasion trials in each condition was computed (i.e. the proportion of trials on which participants pursued the avatar), and then arcsine-transformed. Participants' responses in each condition were checked against chance levels by comparing the mean of each condition against a proportion value of 0.5 (t test). To examine the contribution of contingent movement to participants' perceptions of pursuit and evasion, I analyzed the arcsine-transformed Perceived Avatar Evasion scores in the Pursuit and Evasion (0-degree approach) Conditions with a 2 (Contingent Movement) x 5 (Avatar Appearance) two-way repeated-measures ANOVA. A separate 3 (Trajectory) x 5 (Avatar Appearance) two-way ANOVA was conducted on the three approach angles in the Evasion Condition to determine whether larger approach angles contributed to the perception of evasion. The contribution of features (i.e.
Avatar Appearance factor) to the perception of pursuit and evasion was analyzed by computing the sensitivity score d' and bias score c (Macmillan & Creelman, 1991) for each avatar. In computing each score, trials in which participants correctly perceived the avatar as evading were coded as Hits, and trials in which participants incorrectly perceived the avatar as evading (i.e. when it was pursuing the participant) were coded as False Alarms. Hits and False Alarms correspond to trials on which participants pursued the avatar (i.e. Perceived Avatar Evasion) in the Evasion Condition and Pursuit Condition, respectively. A one-way repeated measures ANOVA was conducted on the d' scores to determine if and how features influenced participants' ability to perceive pursuit and evasion. To determine if a bias existed in participants' responses, I compared the c scores against a score of zero with a t test. If a score was significantly above zero, it indicated a bias toward perceiving pursuit, while a significant negative score indicated a bias toward perceiving evasion.

Results

Chance Analysis

The mean percentages of Perceived Avatar Evasion responses are displayed in Figures 7.3 and 7.4. The data in each condition were significantly different from chance as determined by a series of one-sample t tests (p < .01).

Contingent Movement Analysis: Pursuit vs. Evasion (0-degree Approach Angle)

These data are displayed in Figure 7.3. A 2 (Contingent Movement) x 5 (Avatar Appearance) two-way repeated measures ANOVA revealed a significant main effect of Contingent Movement, F (1, 19) = 79.9, p < .001, η² = .73. This result indicates that participants perceived the difference between avatars that moved contingently with them to preserve a constant bearing angle (pursuit) and avatars that moved contingently but avoided a constant bearing angle (evasion).
In addition there was a significant main effect of Avatar Appearance, F (4, 76) = 6.3, p < .001, η² = .014, and a significant interaction, F (4, 76) = 5.05, p < .005, η² = .008. These effects are further analyzed and discussed below using a sensitivity and bias analysis.

Figure 7.3 Percent Perceived Avatar Evasion data for the Pursuit Condition and 0-degree approach angle Evasion Condition.

Trajectory Analysis: Evasion 0-, 1-, 2-degree Approach Angles

These results are shown in Figure 7.4. A 3 (Trajectory) x 5 (Avatar Appearance) two-way repeated measures ANOVA revealed a significant main effect of Trajectory, F (2, 38) = 4.34, p < .05, η² = .035. Post-hoc tests revealed that the 0-degree approach angle differed significantly (with Bonferroni correction) from the 2-degree approach angle, F (1, 19) = 7.6, p = .01, η² = .28. The 0-degree approach angle was not different from the 1-degree approach angle, F (1, 19) = 3.33, ns, η² = .15, nor was the 1-degree angle different from the 2-degree angle, F (1, 19) = 1.55, ns, η² = .08. These results provide evidence that participants were sensitive to the differences in asymmetrical expansion and lateral drift of the avatar between the 0-degree and 2-degree approach angles. This suggests that when the type of the avatar's contingent movement is the same (i.e. avoiding a constant bearing angle), a difference in lateral approach as small as two degrees contributes to a stronger perception of evasion. There was also a significant main effect of Avatar Appearance, F (4, 76) = 7.22, p < .001, η² = .12. There was no significant interaction, F (8, 152) = 1.14, ns, η² = .02. The Avatar Appearance effect is further analyzed and discussed below.
Figure 7.4 Percent Perceived Avatar Evasion data for the three approach angles (0-, 1-, 2-degrees) in the Evasion Condition.

Avatar Appearance Analysis: Sensitivity and Bias

The sensitivity (d') data for each avatar are shown in Figure 7.5. An effect of the avatar's appearance on participants' sensitivity to pursuit and evasion was found with a one-way repeated measures ANOVA, F (4, 76) = 5.86, p < .001, η² = .24. When I removed the Gliding Pole from the analysis, this effect disappeared, F (3, 57) = .514, ns, η² = .026. In addition, sensitivity to pursuit and evasion with the Gliding Pole was significantly lower than with the Rotating Pole, t (19) = 3.88, p < .01, η² = .44. These results demonstrate that participants were more sensitive to the motion information of the avatar that distinguished pursuit from evasion when the avatar contained at least one feature that indicated the avatar's heading direction. That the four avatars containing features did not differ from one another indicates that the rotation provided by the Rotating Pole avatar was sufficient information to allow for more accurate judgments by participants. The "Nose" Pole, Walking Human, and Human with Head Fixation avatars contained similar rotation information and additional features that pointed to their direction of heading. However, these additional features did not increase participants' sensitivity to the avatar's behavior.

Figure 7.5 Mean d' scores for each avatar in Experiment 2A. Sensitivity to the Gliding Pole was significantly poorer than to the other four avatars.
An analysis of participants' response bias (Figure 7.6) revealed a significant bias toward perceived evasion in the Walking Human avatar, t (19) = 2.17, p < .05, η² = .2. The bias scores for the remaining four avatars were at chance levels. This bias explains the greater Perceived Avatar Evasion data for the Walking Human in Figures 7.3 and 7.4. In the Pursuit Condition, this bias led to more error (i.e. False Alarms), while in the Evasion Condition it increased participants' judgments of perceived evasion (i.e. Hits). Because the Walking Human was seen as evading more in both the Pursuit and Evasion Conditions, the sensitivity of participants to this avatar's behavior was not any greater than to the other avatars that contained features. In addition, the sensitivity and bias analyses together explain the main effects of Avatar Appearance found above, along with the interaction between Avatar Appearance and Contingent Movement: The main effects were driven both by the bias toward perceiving evasion in the Walking Human avatar and by the decreased sensitivity to the Gliding Pole's behavior. The interaction was driven by the fact that the avatars with features increased the perception of evasion in the Evasion Condition and decreased it in the Pursuit Condition (with the exception of the Walking Human).

Figure 7.6 Mean bias (c) scores for each avatar in Experiment 2A. Negative values indicate a bias toward perceived evasion, while positive values indicate a bias toward perceived pursuit.
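The chance and signal-detection analyses reported above can be sketched in a few lines. This is a hypothetical reconstruction, not the dissertation's actual analysis code: the 2·arcsin(√p) form of the arcsine transform and the loglinear correction in the d'/c computation are my assumptions, since the text does not specify them.

```python
import math
from statistics import NormalDist, mean, stdev

def arcsine_transform(p):
    """Variance-stabilizing transform for a proportion p in [0, 1]."""
    return 2 * math.asin(math.sqrt(p))

def t_vs_chance(proportions, chance=0.5):
    """One-sample t statistic comparing transformed proportions to chance."""
    x = [arcsine_transform(p) for p in proportions]
    mu0 = arcsine_transform(chance)
    return (mean(x) - mu0) / (stdev(x) / math.sqrt(len(x)))

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and bias c from 'evade' judgments. Negative c means
    a bias toward perceiving evasion, positive c toward perceiving pursuit,
    matching the sign convention used in the text."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)                  # loglinear
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))
```

Under this convention, a participant who responds "evade" equally often on Evasion (Hits) and Pursuit (False Alarms) trials has d' near zero, while one who responds "evade" more often overall has a negative c, the pattern reported for the Walking Human avatar.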
Discussion

Experiment 2A was conducted to investigate the contributions of three informational variables to the perception of pursuit and evasion in a virtual avatar: (1) the contingent motion generated by an avatar that either maintains or avoids a constant bearing angle, (2) the trajectory of an evading avatar, and (3) the features (or lack of features) that specify heading direction. The results provide substantial evidence that participants were able to perceive all three pieces of information and use them to accurately judge the behavior of the avatar. Contingent motion was manipulated through the implementation of the constant bearing strategy and steering dynamics model in the virtual avatar. By maintaining a constant bearing angle with respect to participants, the pursuing avatar moved in a manner that caused it to always rotate toward the participants and expand symmetrically in the visual field. These optical variables were perceived as pursuit, as indicated by participants' responses: Participants perceived these avatars as evading significantly less often than those avatars that avoided a constant bearing angle, moved laterally toward other goals, and did not rotate toward participants as participants moved (i.e. were driven by the moving obstacle model). These results support previous findings that the type of contingent motion exhibited by an agent specifies its behavior (e.g. Bassili, 1976; Gao, et al. 2009). The present results also demonstrate that participants were sensitive to the lateral drift of the avatars, as evidenced by data showing an increase in perceived evasion as the approach angle of an evading avatar's trajectory increased. Moreover, the results of the Pursuit Condition show that to be perceived as a pursuer, the avatar not only had to move contingently with participants, but it also had to travel along a pursuit trajectory (i.e. expand and face the participant). These results extend similar findings by Andersen and Kim (2001).
Overall, these results provide evidence that the perception of goal-directed agent behavior can be specified by information present in the agent's movements in a first-person perspective scene. The results of the d' analysis revealed that the inclusion of rotation information in the avatar's movement improved participants' ability to distinguish pursuit and evasion. Surprisingly, neither the addition of the "nose" feature to the pole avatar nor the human avatars improved sensitivity to the avatar's behavior. This result suggests that participants were attuned to the direction that the avatar was turning and facing, adding more support to the contribution of trajectory and contingency information. Any additional information was supplemental to the rotation of the avatar. A further surprising result was the bias toward perceived evasion in the Walking Human avatar, while no significant biases were observed in the other four avatars. Participants may have assumed by default that the Walking Human was evading more often because it never looked directly at them, unlike the Human with Head Fixation, which fixated them when it pursued and looked away when it evaded. In natural pedestrian interactions, people are more often trying to avoid one another while walking to another location. Moreover, pedestrians tend to look where they are walking, making saccades and occasional head movements to avoid obstacles. Rarely, however, does one pedestrian pursue another without fixating on them. This heuristic may have been at work during Experiment 2A: Participants worked under the assumption that unless the human avatar was looking directly at them, the avatar was more likely evading. An alternative explanation for the bias toward perceiving evasion is that the actual number of evading avatar trials outnumbered the pursuit trials three-to-one. Participants may have picked up on this statistical information and used it to make their judgments.
This explanation seems unlikely, however, as participants did not make the same assumption with the other avatars. In fact, as Figure 7.6 illustrates, three of the five avatars had non-significant biases toward perceiving pursuit as opposed to evasion. Moreover, participants were informed at the start of the experiment that the avatars would either pursue or evade them, which should have elicited a base-rate expectation of half pursuit trials and half evasion trials. The bias result is even more puzzling when one considers that the "Nose" Pole and Walking Human moved in identical ways: Both had features that indicated the current direction of travel. However, the "Nose" Pole condition showed no observed bias. The "nose" was an exaggerated feature that protruded far enough in front of the pole that it may have provided more information about the pole's motion than the human's face and body. Participants may have perceived the "Nose" Pole's heading changes earlier or more clearly than the Walking Human's due to occlusion information (i.e. the nose occluded the texture of the pole as it began to turn, whereas the Walking Human's face and body were all on the same plane). This difference may have prevented participants from developing a bias. Alternatively, the expectations participants may have had about the humanoid avatars (i.e. that people tend to avoid collisions with one another) may have influenced their judgments. Further work is required to identify the underlying nature of the bias result. While participants were not biased in their judgments of the Human with Head Fixation avatar, they were no more sensitive to its behavior than they were to the Rotating Pole, "Nose" Pole, or Walking Human avatars. Given previous findings on the use of fixation and gaze information to judge an avatar's direction of movement (e.g. Loomis, Kelly, Pusch, Bailenson, & Beall, 2008; Nummenmaa, Hyona, & Hietanen, 2009), the present result was puzzling.
The head fixation information was presented to participants earlier than the other informational variables that specified the avatar's behavior (i.e. from the moment the avatar appeared). The avatar's direction of fixation was less apparent when it was evading the participant, but when it was pursuing it looked directly toward the participant as soon as it appeared (and continued to track the participant). One explanation for why the head fixation did not increase sensitivity is that participants were relying more on the rotation, contingent motion, and trajectory information that every avatar except the Gliding Pole shared, rather than paying attention to a piece of information that only one avatar contained. This would reduce the overall task demands of the experiment by allowing participants to capitalize on the common ground shared by the avatars and to devote attention only to those variables. By reducing the number of avatar conditions, one might find that participants are more willing to attend to and use the head fixation information. Alternatively, the avatars in Experiment 2A may have been presented at too far a distance for participants to accurately use fixation to specify pursuit or evasion. Pelphrey, et al. (2004), Loomis, et al. (2008), and Nummenmaa, et al. (2009) all used displays and designs that presented the human or avatar stimuli at close distances to the observer. Perhaps at closer distances the head fixation information might influence participants' sensitivity to the avatar's behavior. To address both possibilities I conducted a follow-up study using only human avatars presented at different distances. Using two response tasks in a mixed design, I tested whether participants could judge (1) head fixation direction and (2) pursuit and evasion behavior.
Experiment 2B: Sensitivity to Avatar Head Direction at Different Distances

In Experiment 2B, one group of participants (Pursuit Group) judged whether the Walking Human and Human with Head Fixation avatars were pursuing or evading. A second group of participants (Looking Group) judged whether the avatars were looking at them. The Looking Group acted as a control to determine if the head direction of the Human with Head Fixation avatar was perceived at each distance. Both groups used a wireless radio mouse to make their responses, as opposed to performing a complementary action as in Experiment 2A. The use of the radio mouse enabled me to record participants' reaction times. The avatars were presented at four distances (9, 8, 7, and 6 m away from the participant), and the evading avatars traveled on the 0-degree approach angle from Experiment 2A. Just as in Experiment 2A, the Human with Head Fixation avatars always fixated their goal (the participant when pursuing, a stationary goal located behind the participant's initial position when evading) as soon as they appeared. In contrast, the Walking Human avatars always looked in their current heading direction (the head was locked with the body and direction of motion). The avatars appeared moving on an initial heading direction between ±15° and ±45° and immediately turned toward either the participant (pursuit) or a goal (evasion). Therefore, the Human with Head Fixation avatars always looked in the direction of the participant from the beginning of the trial, whereas the Walking Human avatars did not look in the direction of the participant until after they had turned onto their approach vector, by which point participants had moved and "probed" the avatar for contingent motion. Sensitivity to the avatar's behavior and response bias were analyzed for the Pursuit Group.
If participants relied on the contingent motion of the avatar and did not use the information provided by the Human with Head Fixation avatar, then the results of Experiment 2B should follow the pattern established in Experiment 2A: there should be no sensitivity differences between the two avatars. However, if reducing the number of avatars from five to two and presenting them at closer distances allowed participants to take advantage of head direction information, I predicted that participants would have higher d' scores with the Human with Head Fixation avatar than with the Walking Human avatar. To rule out the possibility that fixation information was simply not perceived by participants at each distance, the Looking Group's sensitivity to head fixation information was analyzed. Regardless of the sensitivity results, an analysis of response bias might again reveal a bias toward perceived evasion in the Walking Human avatar. If this result were obtained a second time, it would provide additional evidence that participants held a default assumption that unless the avatar was fixating them, it was less likely to be pursuing. Should a response bias exist, I also hypothesized that participants would be faster to respond to the Walking Human avatar as an evader than to the Human with Head Fixation avatar. An open question was whether participants would respond faster to the Human with Head Fixation avatar as a pursuer than to the Walking Human avatar. If they did, it would indicate that while head fixation may not contribute to increased sensitivity, it may allow people to distinguish a pursuer more quickly. Again, this finding would support the heuristic that pursuers tend to fixate, while the default assumption is that pedestrians evade. A final hypothesis related to the distance at which participants respond to the avatar's behavior.
I predicted that if participants required the avatar to be at close range to perceive its behavior and/or fixation, then they would respond to the avatar at a constant distance in each condition. However, if participants required only the time and space necessary to probe the avatar for contingent movement, then I predicted that their response distances would differ across the four initial avatar distance conditions. The latter case would provide further evidence that contingent movement directly specifies pursuit and evasion behavior.

Method

Participants

Twenty-nine adults in total took part in Experiment 2B. Seventeen adults participated in the Pursuit Group, but three were dropped from data analysis due to technical failures and two were dropped due to nausea. Of the remaining 12 participants (five female, seven male, ages 18-53), none reported any visual or motor impairments, nor had any participated in any previous experiment in the present work. This was also true of the 12 adults who participated in the Looking Group (five female, seven male, ages 18-28). All participants were paid a nominal amount for their participation.

Apparatus

Participants used a radio mouse to make their responses and end each trial. The displays were generated using the Dell XPS 730 H2C PC and presented at 60 frames per second.

Displays

The virtual displays in Experiment 2B consisted of the textured ground plane, sky, gray starting pole, and red orientation pole from Experiment 2A. As in Experiment 2A, the red orientation pole turned blue to signal to the participant that a trial had begun. However, it disappeared after the participant walked 1 m. Figure 7.7 illustrates the conditions in Experiment 2B. The avatar that appeared in each trial was either the Walking Human avatar or the Human with Head Fixation avatar. The avatar's gender was again matched to the participant's gender, and its dimensions were the same as in Experiment 2A.
In each trial the avatar either pursued or evaded the participant. When evading, the avatar moved to a goal located behind the participant's starting location (i.e., the 0-degree approach angle from Experiment 2A). On each trial the avatar appeared, moved briefly along an initial heading direction sampled randomly from a range of ±15° to ±45°, and then immediately turned onto its approach trajectory. The speed of the avatar was fixed at 1 m/s. The 250 ms time delay used in simulating the data in Experiment 1 was again implemented in the virtual avatars. The third within-subjects variable was the initial distance of the avatar from the participant. The avatar appeared at a distance of 9 m, 8 m, 7 m, or 6 m in front of the participant. Initial distance was crossed with the two avatars (Walking Human, Human with Head Fixation) and two behaviors (Pursuit, Evasion) within subjects. The between-subjects variable was the task participants completed: the Pursuit Group was instructed to judge as quickly and accurately as possible whether or not the avatar was pursuing them, while the Looking Group was instructed to report whether the avatar was looking at them. Each group was presented with the same 2 (Avatar) x 2 (Behavior) x 4 (Initial Distance) design.

Figure 7.7 Diagram of design and conditions in Experiment 2B.

Procedure, Pursuit Group

At the start of each trial participants were instructed to position themselves inside the gray start pole and face the red orientation pole. Trials began when the red orientation pole turned blue. This event signaled to the participants that they should begin walking directly toward the now-blue pole. After walking 1 m, the blue pole disappeared and the virtual avatar appeared. Participants were told that the avatar would either be pursuing or avoiding them. In each trial participants were instructed to judge as quickly and accurately as possible whether or not the avatar was pursuing them.
As soon as this judgment was made, participants clicked a button on the radio mouse they were holding. This event caused the avatar to disappear, ending the trial and recording reaction time. Participants then verbally reported their response, saying "yes" if the avatar was pursuing them and "no" if it was not. Participants had up to seven seconds from the onset of the display to press the button; after this time had elapsed the avatar disappeared automatically and participants were instructed to report their judgment. At the end of each trial participants returned to the gray start pole. Participants were free to walk at a comfortable pace but were instructed not to run, walk backward, or stop walking. Participants were informed that they could take a break at any time or discontinue the experiment if need be. Participants completed ten trials in each of the 16 conditions for a total of 160 trials, which were presented in a randomized order.

Procedure, Looking Group

The procedure for the Looking Group was nearly identical to that of the Pursuit Group, with the following exceptions. Participants were not informed that the avatar would either be pursuing or avoiding them. Participants were instructed to judge as quickly and accurately as possible whether the avatar was looking at them. The radio mouse was used to end each trial, and the participant verbally reported "yes" if the avatar was looking at them and "no" if it was not.

Data Analysis

Sensitivity (d') and bias (c) were computed for each participant in each condition. In the Pursuit Group, trials in which participants correctly responded "yes" to a pursuing avatar were coded as Hits. Trials in which participants incorrectly responded "yes" to an evading avatar were coded as False Alarms.
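For concreteness, the standard signal detection formulas behind these two measures can be sketched in a few lines. This is a generic illustration of how d' and c follow from Hit and False Alarm rates, not the dissertation's actual analysis code, and the counts below are invented for the example:

```python
from statistics import NormalDist

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and bias (c) from raw response counts."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # negative c = bias toward "yes"
    return d_prime, c

# Illustrative counts: 67 Hits on 80 pursuit trials, 13 False Alarms on 80 evasion trials
d, c = dprime_and_c(67, 13, 13, 67)
```

In practice, perfect hit or false alarm rates of 0 or 1 must be adjusted before applying the inverse normal (e.g., a 1/(2N) correction), which is why d' has a finite ceiling for a given number of trials.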
In the Looking Group, Hits were trials in which participants correctly responded "yes" to a Human with Head Fixation avatar, and False Alarms were trials in which participants incorrectly responded "yes" to a Walking Human avatar. No distinction was made between pursuing and evading avatars. The Human with Head Fixation avatar looks in the direction of the participant or of the goal directly behind the participant as soon as it appears, while the Walking Human avatar looks in the direction of its initial heading (i.e., follows its body's direction of motion). The Walking Human avatar only begins to look in the direction of the participant or goal once it has turned onto its approach trajectory. In the Pursuit Group, d' scores were analyzed with a 2 (Avatar) x 4 (Initial Distance) repeated-measures ANOVA (computing d' necessitated collapsing across the avatar's behavior). Bias scores in each condition were tested for significance with a t test against a no-bias score of zero. I analyzed mean reaction times with a three-way, 2 (Avatar) x 2 (Behavior) x 4 (Initial Distance) repeated-measures ANOVA. In addition, the final distance between the participant and avatar when the participant made a response (i.e., pressed the button on the mouse) was analyzed with a three-way, 2 (Avatar) x 2 (Behavior) x 4 (Initial Distance) repeated-measures ANOVA. The Looking Group's d' scores were analyzed first with a chance analysis to check for significant differences from scores of zero, which would indicate a complete lack of sensitivity to head direction or fixation, and then with a one-way (Initial Distance) repeated-measures ANOVA to test for differences between initial avatar distances.

Results

Pursuit Group, Sensitivity and Bias

The mean sensitivity (d') data are shown in Figure 7.8. A 2 (Avatar) x 4 (Initial Distance) repeated-measures ANOVA revealed a marginally significant effect of Avatar, F (1, 11) = 4.42, p = .06, ηp² = .1.
There was no main effect of Initial Distance, F (3, 33) = 1.16, ns, ηp² = .04. The interaction between Avatar and Initial Distance approached significance, F (3, 33) = 2.49, p = .08, ηp² = .05. A paired-sample t test on the 6 m Initial Distance condition revealed that d' was significantly higher with the Human with Head Fixation avatar than with the Walking Human avatar, t (19) = 2.77, p < .05, ηp² = .4. This indicates that at the closest initial distance participants may perceive and use head fixation to improve their judgments of pursuit and evasion. The mean bias data are displayed in Figure 7.9. One-tailed t tests against a bias score of zero were performed on each Initial Distance and Avatar condition. The tests revealed that participants had a significant bias toward perceiving evasion in the Walking Human avatar in all four Initial Distance conditions: 9 m, t (11) = 2.72, p < .05, ηp² = .4; 8 m, t (11) = 2.43, p < .05, ηp² = .35; 7 m, t (11) = 2.63, p < .05, ηp² = .38; 6 m, t (11) = 2.98, p < .05, ηp² = .45. These results replicate the bias with the Walking Human avatar shown in Experiment 2A, supporting the interpretation that avatars that did not immediately fixate the participants were assumed to be evading. Participants demonstrated a trend approaching significance toward perceiving pursuit in the Human with Head Fixation avatar in the 9 m, t (11) = 2.16, p < .1, ηp² = .3, and 7 m Initial Distance conditions, t (11) = 1.89, p < .1, ηp² = .24. Participants had no significant response biases in the other two conditions: 8 m, t (11) = .75, ns, ηp² = .05; 6 m, t (11) = .97, ns, ηp² = .08. That participants exhibited a perceived pursuit bias in two of the four Initial Distance conditions indicates that head fixation at the onset of a trial may serve as a cue to pursuit behavior. However, because this bias was not shown in the 8 m or 6 m conditions, I hesitate to interpret these results too strongly.
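A bias test of this kind reduces to a one-sample t test of the per-participant c scores against zero. The sketch below shows the mechanics with fabricated stand-in values (these are not the experiment's data, and the critical value is the standard one-tailed t-table entry for df = 11 at alpha = .05):

```python
import math
from statistics import mean, stdev

def one_sample_t(scores, mu0=0.0):
    """t statistic for a one-sample test of the mean against mu0."""
    n = len(scores)
    return (mean(scores) - mu0) / (stdev(scores) / math.sqrt(n))

# Illustrative bias (c) scores for 12 participants -- invented for the example
c_scores = [-0.41, -0.22, -0.35, -0.10, -0.55, -0.28,
            -0.02, -0.47, -0.19, -0.33, -0.25, -0.38]
t_stat = one_sample_t(c_scores)
# One-tailed critical value for df = 11 at alpha = .05 is 1.796;
# a t statistic below -1.796 indicates a reliable negative (evasion) bias.
biased_toward_evasion = t_stat < -1.796
```

A mean c reliably below zero corresponds to the perceived-evasion bias reported above; a mean reliably above zero would correspond to a perceived-pursuit bias.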
Figure 7.8 Mean d' scores in Experiment 2B.

Figure 7.9 Mean bias (c) scores for each Initial Avatar Distance in Experiment 2B. Negative values indicate a bias toward perceived evasion, while positive values indicate a bias toward perceived pursuit.

Pursuit Group, Reaction Time

Mean reaction times for correct trials are shown in Figure 7.10 (Pursuit Condition) and Figure 7.11 (Evasion Condition). A three-way 2 (Avatar) x 2 (Behavior) x 4 (Initial Distance) repeated-measures ANOVA revealed a main effect of Initial Distance, F (3, 33) = 18.73, p < .001, ηp² = .25, but no effect of Avatar, F (1, 11) = .4, ns, ηp² = .005, nor an effect of Behavior, F (1, 11) = .24, ns, ηp² = .003. There were no significant two-way interactions: Avatar x Initial Distance, F (3, 33) = 1.97, ns, ηp² = .01; Avatar x Behavior, F (1, 11) = .08, ns, ηp² = 0; Initial Distance x Behavior, F (3, 33) = 1.69, ns, ηp² = .01. There was no significant three-way interaction, F (3, 33) = .674, ns, ηp² = .006. To follow up on the main effect of Initial Distance, post-hoc paired-sample t tests were conducted between the 9 m and 6 m Distance conditions. Judgments were significantly faster at 6 m than at 9 m in the Pursuit Condition with the Human with Head Fixation avatar, t (11) = 4.95, p < .01, ηp² = .69, and with the Walking Human, t (10) = 4.42, p < .01, ηp² = .66, as well as in the Evasion Condition with the Human with Head Fixation avatar, t (9) = 4.62, p < .01, ηp² = .7, and with the Walking Human, t (11) = 5.07, p < .01, ηp² = .69 (degrees of freedom differ due to missing data in some conditions).
These results indicate that participants were overall faster to respond when the avatar appeared at a closer initial distance. In addition, post-hoc tests revealed a significantly higher reaction time for the Human with Head Fixation avatar than for the Walking Human avatar in the 8 m Evasion Condition, t (9) = 3.96, p < .01, ηp² = .64, and a similar effect that approached significance in the 7 m Evasion Condition, t (9) = 2.2, p = .06, ηp² = .35. These results are consistent with the bias toward perceived evasion for the Walking Human: participants used the lack of fixation as a heuristic for perceived evasion and responded faster.

Figure 7.10 Mean reaction times to correctly answered trials (Pursuit Condition).

Figure 7.11 Mean reaction times to correctly answered trials (Evasion Condition).

Pursuit Group, Response Distance

The mean distances between the participant and avatar in each condition are shown in Figure 7.12 (Pursuit Condition) and Figure 7.13 (Evasion Condition). A three-way (Initial Distance x Behavior x Avatar) repeated-measures ANOVA revealed a significant main effect of Initial Distance, F (3, 33) = 33.94, p < .05, ηp² = .2. There were no significant main effects of Behavior, F (1, 11) = .65, ns, ηp² = .003, or Avatar, F (1, 11) = .87, ns, ηp² = .002. There were no two-way interactions: Distance x Avatar, F (3, 33) = .19, ns, ηp² = 0; Avatar x Behavior, F (1, 11) = 3.48, ns, ηp² = .12; Distance x Behavior, F (3, 33) = .17, ns, ηp² = 0. There was, however, a significant three-way interaction, F (3, 33) = 6.88, p < .05, ηp² = .012.
Post-hoc analyses (with Bonferroni correction for multiple comparisons) revealed that, as with the reaction time results, responses were made significantly closer to the avatar in the 6 m Initial Distance condition than in the 9 m condition in the Pursuit Condition with the Head Fixation avatar, t (11) = 7.7, p < .005, ηp² = .84, and with the Walking Human, t (10) = 7.07, p < .005, ηp² = .83, as well as in the Evasion Condition with the Head Fixation avatar, t (9) = 6.3, p < .005, ηp² = .82, and with the Walking Human avatar, t (11) = 6.9, p < .005, ηp² = .81 (degrees of freedom differ due to missing data in some conditions). These results demonstrate that participants did not respond to the avatar at a constant distance across Initial Distance conditions. Furthermore, the pattern of results indicates that in each Initial Distance condition, participants covered about 50-60% of the total distance between their initial position and the initial position of the avatar prior to responding. These data support the hypothesis that participants required space and time to "probe" the avatar for contingent motion, which logically would decrease as the avatar appeared closer to the participants. Participants did not need to walk to a specific distance from the avatar in order to perceive its motion and behavior, which a constant response distance across conditions would have indicated. In addition, participants made their responses significantly closer to the avatar at 8 m with the Head Fixation avatar than with the Walking Human, t (9) = 4.53, p < .005, ηp² = .69. This result converges with the reaction time result above and indicates that participants required less time and space to judge the behavior of the Walking Human avatar in this condition. These data are consistent with the observed bias toward perceiving evasion in the Walking Human avatar.
Figure 7.12 Mean distance to avatar at response (Pursuit Condition).

Figure 7.13 Mean distance to avatar at response (Evasion Condition).

Looking Group, Sensitivity Analysis

The Looking Group's sensitivity data are shown in Figure 7.14. Each condition's mean d' was significantly above zero (p < .01) and was equal to or greater than 3.0; 4.65 is considered ceiling (Macmillan & Creelman, 1991). A d' of zero would indicate no sensitivity to the direction of the avatar's head, or in other words, no ability to distinguish between the two types of avatars present in Experiment 2B. Moreover, a one-way ANOVA found no main effect of Initial Distance, F (3, 33) = .21, ns, ηp² = .02. This analysis shows that there was no effect of Initial Distance on participants' ability to perceive the head fixation exhibited by the Human with Head Fixation avatar. The absence of a head fixation effect on participants' sensitivity to the avatar's behavior in the 7-9 m Initial Distance conditions (see Figure 7.8 above) was therefore not due to participants being unable to perceive the head fixation.

Figure 7.14 Mean d' scores for the Looking Group.

Discussion

Experiment 2B was designed to more closely examine the potential contribution of head fixation to the perception of pursuit and evasion behavior in human avatars. The results replicated the bias toward perceiving evasion in the Walking Human avatar found in Experiment 2A.
Interestingly, the results of Experiment 2B indicated that while contingent motion information was used to distinguish between pursuing and evading avatars, the fixation information provided an increase in sensitivity only when the avatar appeared at a distance of 6 m. The lack of fixation effects at other distances was not due to an inability to perceive the Human with Head Fixation avatar's head direction, as indicated by the Looking Group's d' results. Moreover, participants did not exhibit a bias toward perceiving pursuit in the Head Fixation avatar at 6 m, so they were not simply relying upon the avatar's head direction and ignoring contingent motion when making their judgments. In fact, at an initial distance of 6 m, evading avatars would begin to deviate almost immediately, providing the contingent information necessary to correctly judge evasion with both avatars. The sensitivity and bias results observed at 6 m can be interpreted as participants using contingent motion to specify evasion and head fixation information to specify pursuit. At 9 m, participants did exhibit a bias toward perceiving pursuit in the Head Fixation avatars, which could be interpreted as an overgeneralization, from a close distance to a far one, of head fixation information that was shown to specify pursuit accurately (i.e., without bias) at close range. The reaction time and response distance results were consistent with the bias observed in both Experiments 2A and 2B toward perceiving evasion in the Walking Human avatar. In addition, they supported the idea that participants required time and space to observe and "probe" the avatar's contingent motion. In sum, the data from Experiment 2B indicate that the asymmetry in the avatar provided by head fixation may only improve sensitivity to pursuit and evasion at close distances.
Contingent motion that preserves a constant bearing angle was again shown to strongly specify pursuit, and movement that avoids a constant bearing angle was shown to specify evasion. The combination of stimulus asymmetry (head fixation), contingent motion, and symmetrical expansion (i.e., pursuit movement) at the 6 m distance improved sensitivity to pursuit without introducing a response bias. Given these data, future experiments could seek to determine how these pieces of information are combined by observers and under what conditions one might supersede the other. Experiments 2A and 2B demonstrate that participants use the visual information generated by a pedestrian that either maintains or avoids a constant bearing angle to accurately perceive pursuit and evasion. Both experiments were conducted using a single avatar. A question that arises from these results is whether observers perceive the behaviors of multiple agents at the same time or perceive each sequentially. I investigated this issue in Experiment 3, where observers were asked to identify a pursuer within a crowd of evaders. Each evader avoided a constant bearing angle with respect to the other avatars and the participant, and the pursuer maintained a constant bearing angle with respect to the participant but avoided the other avatars. Using this paradigm I sought to further develop a working account of how observers use the information generated by the constant bearing strategy to identify pursuit and evasion.

CHAPTER EIGHT

EXPERIMENT 3: IDENTIFYING A PURSUER IN A CROWD

Experiment 1 demonstrated that the constant bearing strategy accurately described pursuit-evasion interactions and provided a source of informational variables observers might use to perceive pursuit and evasion.
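The constant bearing strategy has a simple geometric signature: a pursuer on an interception course holds the bearing angle to its target roughly constant while the distance between them shrinks. A minimal simulation sketch of that signature, with illustrative positions and speeds rather than the model or parameters used in Experiment 1:

```python
import math

def bearing(p, t):
    """Direction from pursuer p to target t, in radians."""
    return math.atan2(t[1] - p[1], t[0] - p[0])

def intercept_step(p, t, v_target, speed, dt):
    """Advance the pursuer one step on a constant-bearing interception course:
    match the target's velocity component perpendicular to the line of sight,
    and spend the remaining speed closing along the line of sight."""
    dx, dy = t[0] - p[0], t[1] - p[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist          # line-of-sight unit vector
    qx, qy = -uy, ux                       # perpendicular unit vector
    v_perp = v_target[0] * qx + v_target[1] * qy
    v_along = math.sqrt(max(speed**2 - v_perp**2, 0.0))
    return (p[0] + (v_along * ux + v_perp * qx) * dt,
            p[1] + (v_along * uy + v_perp * qy) * dt)

# Target walks east at 1 m/s; pursuer starts about 7 m away at 1.5 m/s.
p, t, vt = (0.0, 0.0), (5.0, 5.0), (1.0, 0.0)
b0 = bearing(p, t)
for _ in range(100):                       # simulate 1 s in 10 ms steps
    t = (t[0] + vt[0] * 0.01, t[1] + vt[1] * 0.01)
    p = intercept_step(p, t, vt, 1.5, 0.01)
# The bearing angle stays (nearly) constant while the distance shrinks.
```

An evader, by contrast, would steer so as to drive the bearing angle away from a constant value, which is the asymmetry the experiments ask observers to detect.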
In Experiment 2 I manipulated these variables using a single virtual avatar and showed that the information for perceiving pursuit and evasion in another pedestrian is specified by how an agent employs the constant bearing strategy for pursuit (maintaining a constant bearing) or evasion (avoiding a constant bearing). Experiment 3 extends these findings by asking whether the motion information present in multiple moving agents is perceived by an observer sequentially or whether pursuit behavior appears to "pop out" of a crowd of evaders. Furthermore, two questions related to visual search (e.g. Treisman & Gelade, 1980; Treisman, 1982; Nakayama & Silverman, 1986) motivated this study: (1) Do pedestrians process the movements of other pedestrians sequentially or in parallel? (2) Do pursuing and evading pedestrians differ visually from one another along a single dimension or along multiple dimensions? If pursuit and evasion are visually distinct along a single dimension, then observers should be able to identify a target pursuing pedestrian from among any number of evading pedestrian distracters without an increase in reaction time. In other words, the pursuer should "pop out" from within the crowd. This would provide evidence that observers can attend to and process the behaviors of multiple pedestrians in parallel. However, if pursuit and evasion differ visually along two or more dimensions (i.e., a conjunction of multiple dimensions), then observers should take longer and have more difficulty identifying a target pursuer as the number of distracter evaders increases. This result would provide evidence that multiple pursuing and evading pedestrians are processed sequentially. The results of Experiment 2 indicated that pursuit was perceived when a pedestrian expanded symmetrically, rotated constantly to face the participant, and moved contingently to maintain a constant bearing angle.
Evasion was perceived when the pedestrian expanded asymmetrically due to an approach trajectory toward a goal, did not turn to constantly face the participant, and avoided a constant bearing angle. If multiple pedestrians are present in a display, however, this information becomes more complex, as each pedestrian must avoid colliding with the others. One might observe a pursuer who does not always expand symmetrically, for instance, as it must turn away from the observer in order to avoid another pedestrian. Likewise, an evading pedestrian acting as a distracter may move on a trajectory that looks like pursuit due to the combined repelling forces of the other pedestrians (i.e., moving obstacles) that it is also trying to avoid. The pursuer will thus be surrounded by distracters that usually look like evaders but on occasion look like pursuers. I hypothesized that when searching for a pursuer, participants would perform a conjunctive search in a sequential fashion: as the number of distracters increased, participants would become slower and less accurate at identifying the target pursuer. To test this hypothesis I presented participants with between two and four moving avatars in an immersive virtual environment. One avatar was pursuing the participant, and the remaining one to three avatars were evading distracters. Participants were instructed to judge which avatar was pursuing them. The results from Experiment 2B indicated that head fixation increased sensitivity to pursuit and evasion only when avatars were presented at close range. The task in Experiment 2B required participants to make an absolute judgment about a single avatar's behavior. In Experiment 3, however, the behavior and motion of each avatar could be judged relative to the motion of the other avatars. In such a relative judgment task, I hypothesized that head fixation might improve participants' accuracy and/or reaction time when searching for a pursuer.
The pursuer's head would always fixate the participant, regardless of how it turned to avoid the other avatars, and the evaders would fixate their goals while they moved around one another. Therefore, in Experiment 3 I included both the Walking Human and Human with Head Fixation avatars as stimuli. Rather than intermixing trials with Walking Human avatars and trials with Head Fixation avatars, I manipulated the avatar type across two blocks. Participants were presented with a series of trials that contained only one avatar type. After a short break, a second block was presented that contained only the other avatar type. In this way participants were given the greatest opportunity to perceive and use the head fixation information.

Method

Participants

Twelve adults took part in Experiment 3, but two were dropped from data analysis due to technical failures and two due to nausea. Of the remaining eight participants (three female, five male, ages 18-22), none reported any visual or motor impairments, nor had any participated in any previous experiment in the present work. Participants were paid a nominal amount for their participation.

Apparatus

Participants used the wireless radio mouse to make their responses and end each trial. The displays were generated using the Dell XPS 730 H2C PC and presented at 60 frames per second.

Displays

The virtual displays in Experiment 3 consisted of the textured ground plane, sky, gray starting pole, and red orientation pole from Experiment 2. The red orientation pole turned blue to signal to the participant that a trial had begun. As in Experiment 2B, it disappeared after the participant walked 1 m. On each trial between two and four avatars appeared. The avatars that appeared in each trial were either Walking Human avatars or Human with Head Fixation avatars. The Avatar variable was blocked in Experiment 3, and block order was counterbalanced across participants. All avatars matched the participant's gender.
The speed of each avatar was fixed at 1 m/s, and the 250 ms time delay used in the previous studies was again implemented in the virtual avatars. One of the avatars present in each trial pursued the participant (Target Avatar). The remaining one, two, or three avatars were Distracter Avatars, which evaded the participant while walking to a goal. In addition to interacting with the participant, the avatars treated one another as moving obstacles. Each avatar had a colored orb (red, yellow, blue, or green) above its head. Participants used these colors to identify the Target Avatar (i.e., the pursuer) in each trial when making a verbal report. In trials with fewer than four avatars, the colors of the orbs were randomly determined. Figure 8.1 shows a sample image from Experiment 3 with a target and three distracters. The Distracter Number was varied within each Avatar block. Experiment 3 therefore had a 2 (Avatar, blocked) x 3 (Distracter Number) within-subjects design. Figure 8.2 illustrates the conditions in Experiment 3. Each avatar's initial position was determined by a "spawn box": a 1.5 m x 0.5 m zone within which an avatar could appear. The four spawn boxes were placed at a distance of 9 m (z) from the participant's start location. Thus each avatar could be at a distance of 9-10.5 m from the participant upon appearing. Spawn boxes were arranged laterally so that each was 1 m apart. The outer edges of the two outside boxes had lateral (x) deviations of ±4.5 m from the participant's start location. On each trial the spawn box containing the Target Avatar was randomly determined. The avatars' initial headings were restricted to a range of ±15-30° to prevent the avatars from moving out of the head-mounted display's horizontal field of view (63°). The avatars appeared moving on their initial headings and then immediately turned onto their approach trajectories toward the participant (Target Avatar) or a stationary goal (Distracter Avatars).
The stationary goals of the Distracter Avatars were always different from one another. The goals were placed randomly within 1 m x 1 m spawn boxes located 1 m (z) behind the participant (not pictured in Figure 8.2). A central spawn box was positioned directly behind the participant's start location with no lateral deviation, while the other two were placed ±1 m (x) from the central box's edge. Each Distracter Avatar present in a trial was assigned a goal at random. The variations in initial conditions produced by the avatar and goal spawn boxes were designed to allow for a broader range of avatar movements.

Figure 8.1 Example of a display in Experiment 3 containing a Target Avatar and three Distracter Avatars.

Figure 8.2 Diagram of design and conditions in Experiment 3 depicting the avatar "spawn boxes."

Procedure

At the start of each trial, participants stood inside the gray starting pole and faced the red orientation pole. Trials began when the red orientation pole turned blue. This event signaled to participants that they should begin walking directly toward the now-blue pole. After walking 1 m, the blue pole disappeared and between two and four virtual avatars appeared. The colored orbs above the avatars' heads were used to distinguish between the otherwise identical avatars. Participants were instructed that one avatar would be pursuing them in every trial, and they were to judge as quickly and accurately as possible which avatar was pursuing them. As soon as this judgment was made, participants clicked a button on the radio mouse they were holding. This event caused the avatars to disappear and recorded the participants' reaction time. The colored orbs, however, remained visible so that participants could verbally report the color of the avatar that was pursuing them. After the verbal report the experimenter recorded the data and made the colored orbs disappear, ending the trial.
Participants had up to seven seconds from the onset of the display to press the button. After this time had elapsed the avatars disappeared automatically and participants were instructed to report their judgment. At the end of each trial participants returned to the gray starting pole. Participants completed 12 trials of each of the three Distracter Number conditions within each block (Avatar conditions), for a total of 36 trials per block and 72 trials total.

Data Analysis

Trials were coded as either correct or incorrect based on whether participants correctly identified the Target Avatar that was pursuing them. Percent correct (accuracy) scores were computed, and the arcsine-transformed percent correct scores were tested first against chance levels (50% for one Distracter, 33% for two Distracters, 25% for three Distracters) and then analyzed with a 2 (Avatar) x 3 (Distracter Number) two-way repeated-measures ANOVA. Reaction times for correct trials were also analyzed with a 2 (Avatar) x 3 (Distracter Number) two-way repeated-measures ANOVA. Post-hoc tests were conducted with Bonferroni corrections for multiple comparisons.

Results

Accuracy

The mean percentages of correctly answered trials in each condition are shown in Figure 8.3. Each condition's results were significantly above chance (50% for one Distracter, 33% for two, and 25% for three; all p < .05). A 2 (Avatar) x 3 (Distracter Number) repeated-measures ANOVA revealed a significant main effect of Distracter Number, F (2, 14) = 22.4, p < .001, ηp² = .56, but no effect of Avatar, F (1, 7) = .13, ns, ηp² = .003, nor an interaction, F (2, 14) = .3, ns, ηp² = .006. Post-hoc analyses (collapsed across the two avatars) revealed that accuracy in the 1-Distracter condition was not significantly different from accuracy in the 2-Distracter condition, F (1, 7) = 5.55, ns, ηp² = .4, but was significantly greater than in the 3-Distracter condition, F (1, 7) = 51.4, p < .001, ηp² = .8.
In addition, accuracy in the 2-Distracter condition was significantly greater than in the 3-Distracter condition, F (1, 7) = 15.44, p < .01, ηp² = .68. These results indicate that participants became less accurate as the number of Distracter Avatars increased.

Figure 8.3 Mean percentages of correctly answered trials in Experiment 3 (percent correct vs. number of Distracter Avatars).

Reaction Time

The mean reaction times for correctly answered trials in each condition are shown in Figure 8.4. A 2 (Avatar) x 3 (Distracter Number) repeated-measures ANOVA revealed a significant main effect of Distracter Number, F (2, 14) = 15.29, p < .001, ηp² = .41. However, there was no main effect of Avatar, F (1, 7) = .71, ns, ηp² = .03, nor was there an interaction, F (2, 14) = .2, ns, ηp² = .004. It is worth noting that the mean reaction times were not at ceiling (seven seconds), indicating that participants made responses on their own rather than waiting for the full seven seconds to pass. Post-hoc analyses (collapsed across the two avatars) revealed that participants were faster in the 1-Distracter condition than in both the 2-Distracter condition, F (1, 7) = 21.5, p < .01, ηp² = .7, and the 3-Distracter condition, F (1, 7) = 28.6, p < .01, ηp² = .8. Reaction times did not differ between the 2- and 3-Distracter conditions, F (1, 7) = 4.05, ns, ηp² = .37. Overall, participants reacted more slowly as the number of Distracter Avatars increased. These results converge with the accuracy data and indicate that participants processed the avatars' movements in a sequential fashion.

Figure 8.4 Mean reaction times of correctly answered trials in Experiment 3 (reaction time in seconds vs. number of Distracter Avatars).
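As a concrete note on the analysis above: the chance level with n distracters plus one target is 1/(n + 1), and the arcsine transform conventionally applied to proportions before ANOVA is 2·arcsin(√p). A minimal sketch follows; the exact transform variant used in the dissertation is an assumption.

```python
import math

def chance_level(n_distracters):
    """With n distracters plus one target, guessing picks the
    pursuer correctly 1/(n + 1) of the time."""
    return 1.0 / (n_distracters + 1)

def arcsine_transform(p):
    """Variance-stabilizing transform commonly applied to proportions
    before ANOVA; the variant used in this analysis is assumed."""
    return 2.0 * math.asin(math.sqrt(p))

# Chance levels: 50% (1 distracter), 33% (2), 25% (3), as reported above.
```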
Discussion

Together, the accuracy and reaction time results provide evidence that participants processed the avatars sequentially in Experiment 3. While participants were significantly more accurate than chance in each condition, their accuracy decreased and their reaction time increased as the number of distracters increased from one to three. These results support the hypothesis that participants were performing a conjunctive search when identifying the pursuer. The data from Experiment 3 also indicate that head fixation did not improve accuracy or reaction time, consistent with most of the results from Experiment 2. Two lines of research can follow from these findings. The first could investigate the upper limits of participants' attention and processing of multiple pedestrians. Experiment 3 presented participants with at most four avatars in the scene. The pattern of results suggests that as more distracters are added, performance would continue to decrease until participants were at chance levels. An open question is how large the crowd must be to reduce performance to chance. A second set of future studies could further investigate the combination of informational variables (i.e., expansion, rotation, constant bearing, etc.) that prevents pursuit behavior from popping out from within a crowd. Perhaps if the goals of the distracters were located in the participants' periphery, as opposed to behind them, the pursuer's behavior would be more easily perceived in a larger crowd. This would suggest that it is the conjunction of approach trajectory/expansion and rotation to avoid collisions that produced the pattern of results in Experiment 3. Future work could manipulate these variables through goal placement, avatar speed and initial conditions, and environmental constraints (i.e., stationary obstacles) to tease apart the underlying process of perceiving crowd motion information.
CHAPTER NINE

GENERAL DISCUSSION

The research presented in this dissertation was motivated by three central questions: (1) Does the steering dynamics model generalize to the interactions of two pedestrians? Specifically, can human pursuit and evasion be described by a model of target interception and moving obstacle avoidance? (2) Can human participants perceive pursuit and evasion based on locomotor behavior alone, and what information in another pedestrian's movement specifies pursuit and evasion? Specifically, do the trajectory, contingent movement, static features, and/or head fixation of an avatar contribute to the perception of pursuit and evasion? (3) Do people perceive the behaviors of multiple pedestrians present in a scene in a sequential or parallel fashion? In conducting three empirical studies I have answered these questions. This chapter summarizes my findings and speaks to the contribution this dissertation makes to the existing literature on agent models, the perception of behavior, and the behavioral dynamics of locomotion.

Summary

I conducted Experiment 1A to investigate whether the existing components of the steering dynamics model for target interception (Fajen & Warren, 2007) and moving obstacle avoidance (Cohen, Bruggeman, & Warren, under review) could describe human pursuit and evasion, respectively. I fit each model to the mean data from one participant either pursuing or evading a second participant. The models fit each condition very closely. The models were then used to predict, with fixed parameters, the mean behavior in three more complex conditions: (1) Mutual Evasion, (2) Mutual Pursuit, and (3) Pursuit-Evasion. The predictions for the Mutual Evasion and Pursuit-Evasion conditions were quite good, with low error and high coefficients of determination (r²). However, the predictions for the Mutual Pursuit condition did not match the mean data and resulted in larger error.
Adjustments to the model's parameters improved the fit. In addition, the stationary goal model fit the data, indicating that participants may have adopted a different control strategy for mutual pursuit. The moving obstacle avoidance model was then used to predict the mean participant behavior in a constrained, sidewalk passing environment (Experiment 1B). The model predicted the preferred route participants took around the confederate in each condition. While the model predicted the data reasonably well overall, in some conditions it avoided the confederate to either a greater or lesser degree than participants did. Overall, the results of Experiments 1A and 1B demonstrate the robust use of the constant bearing strategy that underlies target interception and moving obstacle avoidance in dynamic pursuit-evasion scenarios. Turning to the second core question of whether humans can perceive pursuit and evasion based on the locomotor behavior modeled in Experiment 1, I conducted Experiment 2 using virtual avatars that were driven by the steering dynamics model. The trajectory, contingent movement to maintain or avoid a constant bearing, static features, and head fixation of an avatar were manipulated to investigate the contribution of each to a participant's perception of pursuit and evasion. Experiment 2A indicated that pursuit and evasion could be perceived on the basis of differences in contingent motion. Perception of evasion increased as the trajectory angle of an evading avatar increased, causing the avatar to drift laterally and face at an angle. Participants were more sensitive to avatars whose motion contained a rotational component. Static features did not contribute to sensitivity, but a bias toward perceiving evasion was found with the Walking Human avatar. Head fixation was found not to contribute in Experiment 2A, but was shown to increase sensitivity in Experiment 2B when an avatar was presented at a distance of 6 m.
In sum, the results of Experiment 2 provide evidence that people can use the information present in locomotor behavior to perceive pursuit and evasion. Finally, Experiment 3 investigated whether participants could identify a pursuer in a small crowd of evaders, and whether the behaviors of multiple avatars were processed sequentially or in parallel. The results indicate that participants performed a conjunctive search when identifying a pursuer in a crowd. Reaction times increased and accuracy decreased (but remained significantly above chance) as the number of distracter avatars present in the display increased from one to three. This is evidence of sequential processing of the avatars' movements and behavior. Head fixation was found not to contribute to making relative judgments among the avatars. The findings in this thesis contribute to the existing literature both empirically and methodologically. First, these results extend the work with humans used in developing the steering dynamics model (Fajen & Warren, 2003; 2007; Cohen, Bruggeman, & Warren, under review) by showing that the components used for intercepting and avoiding objects traveling on linear trajectories can also be used to simulate the interactions of two pedestrians. Second, by demonstrating that the constant bearing strategy is used in human pursuit and evasion behaviors, the results of Experiment 1 support and converge with work showing that bats (Ghose, Horiuchi, Krishnaprasad, & Moss, 2006) and dragonflies (Olberg, Worthington, & Venator, 2000) also employ the constant bearing strategy in pursuit of prey. Overall these findings demonstrate the adaptive and flexible use of the constant bearing strategy across a variety of scenarios.
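To make the constant bearing strategy concrete, here is a minimal sketch of interception by nulling the change in target bearing. This is a simplified, hypothetical illustration, not the full Fajen & Warren steering dynamics model: the damping and distance terms are omitted, and the gain value is arbitrary.

```python
import math

def constant_bearing_pursuit(px, py, heading, target_path,
                             speed=1.0, gain=8.0, dt=0.02, max_time=60.0):
    """Steer a pursuer by turning in proportion to the per-step change
    in target bearing, which nulls bearing drift and yields an
    interception course. Returns (caught, elapsed_time)."""
    prev_bearing = None
    t = 0.0
    while t < max_time:
        tx, ty = target_path(t)
        if math.hypot(tx - px, ty - py) < 0.3:  # capture radius (arbitrary)
            return True, t
        bearing = math.atan2(ty - py, tx - px)
        if prev_bearing is not None:
            # wrapped change in bearing since the previous step
            d_bearing = math.atan2(math.sin(bearing - prev_bearing),
                                   math.cos(bearing - prev_bearing))
            heading += gain * d_bearing  # steer to keep the bearing constant
        prev_bearing = bearing
        px += speed * math.cos(heading) * dt
        py += speed * math.sin(heading) * dt
        t += dt
    return False, t
```

For example, a 1 m/s pursuer initially aimed at a slower target crossing its path intercepts it; an evader would instead steer so as to maximize the change in bearing.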
The differences in parameter values between the original fits shown in Chapter Three and those obtained in Experiment 1 indicate that the constant bearing strategy can be retuned or adapted to specific circumstances and to the objects or agents encountered in the environment, while remaining the same essential strategy. Methodologically, Experiment 1 provides an approach to modeling the interactions among agents by empirically deriving control laws that govern pursuit and evasion behaviors. In addition to adding to the agent modeling and behavioral dynamics literature, this dissertation also contributes to work on the perception of behavior. Experiments 2 and 3 demonstrate the use of contingent movement and trajectory in judging whether an avatar is pursuing or evading. These results support findings that stretch back to Heider and Simmel's (1944) contingently moving geometric displays, which provoked clear descriptions of specific behaviors. As work with adults (e.g., Bassili, 1976; Gao, Newman, & Scholl, in press) and children (e.g., Bloom & Veres, 1999; Biro & Leslie, 2007) has demonstrated, spatial and temporal contingencies provide rich information that specifies intentional behaviors in agents. The findings of Experiments 2 and 3 support these conclusions with further evidence that contingent movement is perceived and used by participants to distinguish pursuit from evasion: moving contingently so as to preserve a constant bearing angle, combined with symmetrical and constant expansion, specifies pursuit, while moving contingently to avoid a constant bearing angle specifies evasion. In addition, the results of Experiment 2A also support work demonstrating that an agent need not appear human for people to perceive its behavior (Biro, Csibra, & Gergely, 2007). Finally, the results of Experiment 3 support Andersen and Kim's (2001) conclusion that the trajectories of multiple objects are processed in a sequential fashion.
This finding points to a limited-capacity system for attending to pedestrians in a crowd. In terms of methodological contribution, the model-driven avatars used throughout Experiments 2 and 3 are both a proof of concept that the steering dynamics model can be implemented as a stimulus and a qualitative step forward in investigations of crowd interactions and agent behavior. Model-driven avatars provide a powerful tool for the study of perceiving intentional behavior in cognitive, social, and developmental psychology. They can be experimentally manipulated to test information or to present realistic, empirically based behavior.

Future Directions

Probing Patterns

The data collected in Experiments 2B and 3 were analyzed for participants' sensitivity, bias, accuracy, and reaction times. However, a wealth of participants' locomotor data has been left untouched, as it was not the focus of the present work. By delving into these data one could analyze the strategies participants used to probe the avatar(s) for contingent behavior. If preferred or prevalent patterns of motion were identified, they could in turn be used to drive an avatar that was faced with another avatar or participant that was pursuing or evading. The avatar would be required to probe the other agent to determine the nature of its behavior and then make an appropriate response; in this regard one could conduct the inverse of Experiments 2 and 3, either in immersive virtual reality or in simulation.

Scaling

The present work models the interactions between two agents. The next step is to increase this number and attempt to predict the locomotor interactions within a small crowd. As space and equipment constraints and safety concerns prevent one from conducting a pursuit-evasion study with a larger crowd in the VENLab, future work in this vein can instead take advantage of the virtual avatars used in Experiments 2 and 3.
Because the avatars are driven by the steering dynamics model, their trajectories can be simulated along with that of a participant. Thus one can model how a human walks through a crowd of avatars, how one person evades multiple pursuers, or any other potential interaction. Observational data on crowd movements could be used to further validate the steering strategies observed in controlled studies and could serve as inspiration for environmental manipulations (e.g., obstacle placement) in other experiments.

Avatar Displays

Model-driven avatars can be used to generate realistic displays of agent motion. These displays could then be used to further investigate how people perceive agent behavior, either in immersive virtual environments or while viewing a desktop display. The focus of Experiments 2 and 3 was to determine whether people could perceive pursuit and evasion based on information in the avatars' movements. The movements themselves were based directly on the data in Experiment 1A (i.e., the parameter values in the model were fixed). In future work, these parameter values could be manipulated as independent variables to test participants' perception of behavior under less ideal or extreme conditions. For example, "stalking" behaviors similar to those found in Gao, Newman, and Scholl (in press) could be simulated by using the constant bearing strategy but matching the "stalker's" speed to the participant's speed, thereby eliminating approach. From the participant's viewpoint, this creates a constant bearing with no optical expansion of the avatar.

Object Classes and Valence

The differences between previous and present model parameter values imply that different classes of targets and obstacles (with other humans being a particular class) may elicit a retuning of one's locomotor control parameters. Moreover, the Mutual Pursuit condition in Experiment 1A extends this implication from different object classes to different behaviors or scenarios.
The present work found parameter differences between the virtual poles used in previous studies (e.g., Fajen & Warren, 2007) and human pedestrians. Investigating the range of potential object classes and their respective parameter values is a project unto itself. It may be the case that similar object classes result in similar parameter values. If this were true, one would be able to map out a parameter space for the constant bearing strategy, where different local minima correspond to similarity clusters in object categorization. In addition, the valence of objects used in pursuit-evasion scenarios could be manipulated. One might predict that objects with a negative valence would elicit wider or earlier deviations. Again, a parameter space for positive and negative attributes could be created. A parsimonious approach to modeling object valence might involve mapping the values of the "risk" parameters onto the valence of object categories. In investigating object classes and valences with the steering dynamics model, one could also create a wide array of avatar 'types' to be used in other research. Each avatar would have slightly different parameter values controlling its behavior, and experimenters could directly manipulate the valence or class of avatar s/he presented to participants.

Prior Knowledge and Pragmatics

In Experiments 2 and 3, participants were not informed that the avatars they would encounter were driven by the steering dynamics model, itself based on human locomotor data. They were also not specifically told that the avatars were controlled by a computer program. Unless participants knew beforehand about the steering dynamics model, there was no way for them to know it was driving the avatars. However, participants may have implicitly assumed that the avatars were driven by an algorithm, and brought pragmatic expectations and knowledge about computer-controlled objects to bear when interacting with the avatars.
These expectations could have colored the results in Experiments 2 and 3. In order to test how the pragmatics of an experimental scenario can affect participants' responses and performance, imagine a Turing (1950) Test for avatars: participants are greeted by two experimenters and are told that one experimenter will drive a virtual avatar's behavior in real time through the use of a second head-mounted display; the second head-mounted display would in fact not be connected to the virtual environment in any way. Once the participant was outfitted with his/her own head-mounted display, the second 'experimenter' could simply leave the room or passively observe. The ruse in this situation is designed to convince the participant that the avatar with whom s/he interacts is another person, despite its being controlled by computer in a manner identical to the present work. One may find that participants employ different strategies when probing the avatar for spatial contingencies, and may have entirely different response patterns regarding its pursuit-evasion behaviors, if they believe they are interacting with another person. Should this be evident, the validity of model-driven avatars as substitutes for human pedestrians in crowd studies might be called into question.

Conclusion

The results of this dissertation provide insight into how pedestrians perceive, understand, and react to the behaviors of other pedestrians. By identifying the variables present in pedestrian locomotion that specify pursuit and evasion, I have presented an information-based account of the perception of intentional behavior. Accurately perceiving the behavior of another pedestrian may not require attributing intentions to a "neutral" agent, but rather may be specified directly by the information present in the pedestrian's motion.
The contingent movement shown to specify pursuit and evasion was perceived by participants through their active "probing" movements, further embedding this information in the context of coupled perception and action among agents. These findings establish the groundwork for future lines of research focused on how we directly perceive the locomotor behaviors of others.

REFERENCES

Andersen, G.J., & Kim, R.D. (2001). Perceptual information and attentional constraints in visual search of collision events. Journal of Experimental Psychology: Human Perception and Performance, 27(5), 1039-1056.

Baldwin, D.A., & Baird, J.A. (2001). Discerning intentions in dynamic human action. Trends in Cognitive Sciences, 5(4), 171-178.

Bassili, J. (1976). Temporal and spatial contingencies in the perception of social events. Journal of Personality and Social Psychology, 33(6), 680-685.

Batty, M. (2003). Agent-based pedestrian modeling. Environment and Planning B: Planning and Design, 28(3), 321-326.

Batty, M., Desyllas, J., & Duxbury, E. (2003). The discrete dynamics of small-scale spatial events: Agent-based models of mobility in carnivals and street parades. International Journal of Geographical Information Science, 17(7), 673-697.

Biro, S., & Leslie, A.M. (2007). Infants' perception of goal-directed actions: Development through cue-based bootstrapping. Developmental Science, 10(3), 379-398.

Biro, S., Csibra, G., & Gergely, G. (2007). The role of behavioral cues in understanding goal-directed actions in infancy. In von Hofsten, C., & Rosander, K., eds., Progress in Brain Research, 164, 303-322.

Bloom, P., & Veres, C. (1999). The perceived intentionality of groups. Cognition, 71, B1-B9.

Brooks, R. (1990). Elephants don't play chess. Robotics and Autonomous Systems, 6, 3-15.

Brooks, R. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-159.

Bruggeman, H., Cohen, J.A., Fajen, B.R., & Warren, W.H. (in preparation).
Testing a general steering dynamics model.

Brustoloni, J.C. (1991). Autonomous agents: Characterization and requirements. Carnegie Mellon Technical Report CMU-CS-91-204. Pittsburgh: Carnegie Mellon University.

Bui, H., Venkatesh, S., & West, G. (2001). Tracking and surveillance in wide-area spatial environments using the Abstract Hidden Markov Model. International Journal of Pattern Recognition and Artificial Intelligence, 15(1), 177-196.

Buresh, J.S., & Woodward, A.L. (2007). Infants track action goals within and across agents. Cognition, 104, 287-314.

Camazine, S., Deneubourg, J.-L., Franks, N.R., Sneyd, J., Theraulaz, G., & Bonabeau, E. (2001). Self-organization in biological systems. Princeton, NJ: Princeton University Press.

Chardenon, A., Montagne, G., Laurent, M., & Bootsma, R.J. (2005). A robust solution for dealing with environmental changes in intercepting moving balls. Journal of Motor Behavior, 37, 52-64.

Cinelli, M., & Warren, W. (2007). Do walkers follow their heads? A test of the gaze-angle strategy for locomotor control [Abstract]. Journal of Vision, 7(9):1022, 1022a, http://journalofvision.org/7/9/1022/, doi:10.1167/7.9.1022.

Cohen, J.A., Bruggeman, H., & Warren, W.H. (2005). Switching behavior in moving obstacle avoidance [Abstract]. Journal of Vision, 5(8):312, 312a, http://journalofvision.org/5/8/312/, doi:10.1167/5.8.312.

Cohen, J.A., Bruggeman, H., & Warren, W.H. (under review). Behavioral dynamics of moving obstacle avoidance. Journal of Experimental Psychology: Human Perception and Performance.

Cohen, J.A., Bruggeman, H., & Warren, W.H. (2006). Combining moving targets and moving obstacles in a locomotion model [Abstract]. Journal of Vision, 6(6):135, 135a, http://journalofvision.org/6/6/135/, doi:10.1167/6.6.135.

Cohen, J.A., Cinelli, M.E., & Warren, W.H. (2008). A dynamical model of pursuit and evasion in humans [Abstract]. Journal of Vision, 8(6):835, 835a, http://journalofvision.org/8/6/835/, doi:10.1167/8.6.835.

Cohen, J.
A., Fink, P.W., & Warren, W.H. (2007). Choosing between competing goals during walking in a virtual environment [Abstract]. Journal of Vision, 7(9):1021, 1021a, http://journalofvision.org/7/9/1021/, doi:10.1167/7.9.1021.

Dechter, R., & Pearl, J. (1985). Generalized best-first search strategies and the optimality of A*. Journal of the Association for Computing Machinery, 32(3), 505-536.

Fajen, B.R., & Warren, W.H. (2003). Behavioral dynamics of steering, obstacle avoidance, and route selection. Journal of Experimental Psychology: Human Perception and Performance, 29(2), 343-362.

Fajen, B.R., & Warren, W.H. (2004). Visual guidance of intercepting a moving target on foot. Perception, 33, 689-715.

Fajen, B.R., & Warren, W.H. (2007). Behavioral dynamics of intercepting a moving target. Experimental Brain Research, 180(2), 303-319.

Fajen, B.R., Warren, W.H., Temizer, S., & Kaelbling, L.P. (2003). A dynamical model of visually-guided steering, obstacle avoidance, and route selection. International Journal of Computer Vision, 54(1/2/3), 13-34.

Felner, A., Stern, R., Ben-Yair, A., Kraus, S., & Netanyahu, N. (2004). PHA*: Finding the shortest path with A* in an unknown physical environment. Journal of Artificial Intelligence Research, 21, 631-670.

Fink, P.W., Foo, P.S., & Warren, W.H. (2007). Obstacle avoidance during walking in real and virtual environments. ACM Transactions on Applied Perception, 4(1), Article 2, 1-18.

Franklin, S., & Graesser, A. (1996). Is it an agent, or just a program? A taxonomy for autonomous agents. Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages. Springer-Verlag.

Gao, T., Newman, G.E., & Scholl, B.J. (in press). The psychophysics of chasing: A case study in the perception of animacy. Cognitive Psychology.

Gérin-Lajoie, M., & Warren, W. (2008). The circumvention of barriers: Extending the steering dynamics model [Abstract]. Journal of Vision, 8(6):1185, doi:10.1167/8.6.1158.
Ghose, K., Horiuchi, T.K., Krishnaprasad, P.S., & Moss, C.F. (2006). Echolocating bats use a nearly time-optimal strategy to intercept prey. PLoS Biology, 4(5), 865-873.

Gibson, J.J. (1958/1998). Visually controlled locomotion and visual orientation in animals. British Journal of Psychology, 49, 182-194 (Reprinted in Ecological Psychology, 10, 161-176).

Gibson, J.J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gloor, C., Cavens, D., Lange, E., Nagel, K., & Schmid, W. (2003). A pedestrian simulation for very large scale applications. In Koch, A., & Mandl, P., eds., Multi-Agenten-Systeme in der Geographie. Number 23 in Klagenfurter Geographische Schriften, 167-188.

Goldstone, R., Jones, A., & Roberts, M. (2006). Group path formation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 36(3), 611-620.

Goldstone, R., & Roberts, M. (2006). Self-organized trail systems in groups of humans. Complexity, 11(6), 43-50.

Haklay, M., O'Sullivan, D., & Thurstain-Goodwin, M. (2001). So go downtown: Simulating pedestrian movement in town centres. Environment and Planning B: Planning and Design, 28, 343-359.

Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57, 243-259.

Helbing, D., & Molnar, P. (1995). Social force model for pedestrian dynamics. Physical Review E, 51(5), 4282-4286.

Helbing, D., Farkas, I., & Vicsek, T. (2000). Simulating dynamical features of escape panic. Nature, 407, 487-490.

Helbing, D., Keltsch, J., & Molnar, P. (1997). Modelling the evolution of human trail systems. Nature, 388, 47-50.

Kelso, J.A.S. (1995). Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: MIT Press.

Khatib, O. (1986). Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research, 5, 90-98.

Kugler, P.N., & Turvey, M.T. (1987). Information, natural law, and the self-assembly of rhythmic movement.
Hillsdale, NJ: Erlbaum.

Lee, J., Chai, J., Reitsma, P., Hodgins, J.K., & Pollard, N.S. (2002). Interactive control of avatars animated with human motion data. ACM Transactions on Graphics (SIGGRAPH).

Lenoir, M., Musch, E., Thiery, E., & Savelsbergh, G.J. (2002). Rate of change of angular bearing as the relevant property in a horizontal interception task during locomotion. Journal of Motor Behavior, 34, 385-401.

Loomis, J.M., Kelly, J.W., Pusch, M., Bailenson, J.N., & Beall, A.C. (2008). Psychophysics of perceiving eye-gaze and head direction with peripheral vision: Implications for the dynamics of eye-gaze behavior. Perception, 37, 1443-1457.

Metoyer, R.A., & Hodgins, J.K. (2004). Reactive pedestrian path following from examples. The Visual Computer, 20, 635-649.

Michotte, A. (1963). The perception of causality (trans. T.R. Miles & E. Miles). New York: Basic Books.

Morris, J.P., Pelphrey, K.A., & McCarthy, G. (2005). Regional brain activation evoked when approaching a virtual human on a virtual walk. Journal of Cognitive Neuroscience, 17(11), 1744-1752.

Nakayama, K., & Silverman, G.H. (1986). Serial and parallel processing of visual feature conjunctions. Nature, 320, 264-265.

Ni, R., & Andersen, G.J. (2008). Detection of collision events on curved trajectories: Optical information from invariant rate-of-bearing change. Perception & Psychophysics, 70(7), 1314-1324.

Nummenmaa, L., Hyona, J., & Hietanen, J.K. (2009). I'll walk this way: Eyes reveal the direction of locomotion and make passersby look and go the other way. Psychological Science, 20(12), 1454-1458.

Olberg, R.M., Worthington, A.H., & Venator, K.R. (2000). Prey pursuit and interception in dragonflies. Journal of Comparative Physiology A, Sensory, Neural, and Behavioral Physiology, 186, 155-162.

Owens, J.M. (2008). Anticipatory control of human locomotion requires visuo-spatial attentional resources. Unpublished dissertation, Brown University, Providence, RI.

Owens, J.M., & Warren, W.H. (2004).
Intercepting moving targets on foot: Target acceleration and direction change [Abstract]. Journal of Vision, 4(8), 801, http://journalofvision.org/4/8/801/.

Pelphrey, K.A., Singerman, J.D., Allison, T., & McCarthy, G. (2003). Brain activation evoked by perception of gaze shifts: The influence of context. Neuropsychologia, 41(2), 156-170.

Pelphrey, K.A., Viola, R.J., & McCarthy, G. (2004). When strangers pass: Processing of mutual and averted social gaze in the superior temporal sulcus. Psychological Science, 15(9), 598-603.

Pirjanian, P. (1999). Behavior coordination mechanisms: State of the art. Technical Report IRIS-99-375, Institute for Robotics and Intelligent Systems, School of Engineering, University of Southern California.

Polking, J. (2002). ODE software for MATLAB. http://math.rice.edu/~dfield.

Remagnino, P., Tan, T., & Baker, K. (1998). Multi-agent visual surveillance of dynamic scenes. Image and Vision Computing, 16(6), 529-532.

Reynolds, C.W. (1987). Flocks, herds, and schools: A distributed behavioral model. Computer Graphics, 21(4), 25-34.

Rochat, P., Striano, T., & Morgan, R. (2004). Who is doing what to whom? Young infants' developing sense of social causality in animated displays. Perception, 33(3), 355-369.

Scholl, B.J., & Tremoulet, P.D. (2000). Perceptual causality and animacy. Trends in Cognitive Sciences, 4(8), 299-309.

Schöner, G., Dose, M., & Engels, C. (1995). Dynamics of behavior: Theory and applications for autonomous robot architectures. Robotics and Autonomous Systems, 16, 213-245.

Spivey, M.J., & Dale, R. (2006). Continuous dynamics in real-time cognition. Current Directions in Psychological Science, 15(5), 207-211.

Spivey, M.J., Grosjean, M., & Knoblich, G. (2005). Continuous attraction toward phonological competitors. Proceedings of the National Academy of Sciences, 102(29), 10393-10398.

Stauffer, C., & Grimson, W.E.L. (2000). Learning patterns of activity using real-time tracking. IEEE Trans.
Pattern Analysis and Machine Intelligence, 22(8), 747-757.

Treisman, A. (1982). Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance, 8(2), 194-214.

Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136.

Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

Warren, W.H. (2006). The dynamics of perception and action. Psychological Review, 113(2), 358-389.

Woodward, A.L. (1998). Infants selectively encode the goal object of an actor's reach. Cognition, 69, 1-34.

Wooldridge, M., & Jennings, N.R. (1995). Agent theories, architectures, and languages: A survey. In Wooldridge, M., & Jennings, N.R., eds., Intelligent Agents, Berlin: Springer-Verlag, 1-22.

Zacks, J.M. (2004). Using movement and intentions to understand simple events. Cognitive Science, 28, 979-1008.