We describe how we implemented the avatars that represent users in LivingSpace, our implementation of the Living Worlds standard for multi-user distributed VRML worlds. LivingSpace allows multiple users to interact in a shared VRML world and to communicate using spatialised audio. Avatars execute walking animations when the user moves and are capable of sitting, waving, and nodding, as well as tracking a surface with a hand. Avatar position and orientation are predicted using dead reckoning based on velocity, curvature, and angular velocity. We describe the problems we encountered and draw some conclusions, including the need for an Inline node that preserves the fields of an imported node, and for VRML worlds to have more control over user interaction. We also discuss the use of avatars in multi-user interfaces, such as our VRML conferencing world.
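To illustrate the kind of extrapolation the abstract describes, the sketch below shows one plausible form of dead reckoning from velocity, curvature, and angular velocity. It is a minimal sketch, not the paper's implementation: it assumes motion in a 2D ground plane, and the class and field names (DeadReckoner, heading, angularVel, and so on) are hypothetical.

    /**
     * Minimal dead-reckoning sketch for avatar prediction. Assumes a 2D
     * ground plane; all names are illustrative, not the paper's API.
     */
    public final class DeadReckoner {
        // Last reported avatar state from the remote user.
        double x, y;        // position in the ground plane
        double heading;     // direction of travel, radians
        double speed;       // linear velocity magnitude
        double curvature;   // 1 / turn radius; zero means straight motion
        double angularVel;  // rate of change of body orientation, rad/s

        /** Extrapolate position and orientation t seconds past the last update. */
        double[] predict(double t) {
            double px, py;
            if (Math.abs(curvature) < 1e-9) {
                // Straight-line extrapolation when curvature is negligible.
                px = x + speed * t * Math.cos(heading);
                py = y + speed * t * Math.sin(heading);
            } else {
                // Integrate motion along a circular arc of radius 1/curvature.
                double dTheta = speed * curvature * t;  // angle swept on the arc
                double r = 1.0 / curvature;
                px = x + r * (Math.sin(heading + dTheta) - Math.sin(heading));
                py = y + r * (Math.cos(heading) - Math.cos(heading + dTheta));
            }
            // Body orientation is extrapolated separately from the path,
            // using the reported angular velocity.
            double orientation = heading + angularVel * t;
            return new double[] { px, py, orientation };
        }
    }

Between network updates, each client would run predict() with the elapsed time and place the remote avatar at the result; including curvature lets a turning avatar follow an arc rather than overshooting along a straight line.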