What is AnimatLab?
AnimatLab is a software tool that combines biomechanical simulation with biologically realistic neural networks. You can build the body of an animal, robot, or other machine and place it in a virtual world where the physics of its interaction with the environment are accurate and realistic. You can then design a nervous system that controls the behavior of the body in that environment. The software currently supports simple firing-rate and leaky integrate-and-fire spiking neural models, along with a number of different synapse types that can be used to connect the various neural models into a nervous system. On the biomechanics side there is support for a variety of rigid body types, including custom meshes that can be made to match skeletal structures exactly. The biomechanics system also includes Hill-based muscle and muscle spindle models, which allow the nervous system to produce movements around joints. In addition, there are motorized joints for those interested in controlling robots or other biomimetic machines. Together, these features let you build highly complex artificial lifeforms, otherwise known as animats, that are based on real biological systems. Best of all, standard AnimatLab is completely free, and it includes free source code! You can download the application from the Download page, and you can get the source code from the SDK page. An AnimatLab Handbook is also now available; it provides details on how to use the various features of AnimatLab and comes as either a PDF or a Word document.
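To give a concrete feel for one of these neural models, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. This is not AnimatLab code; the function, parameter names, and values are illustrative assumptions rather than the program's actual model.

```python
import numpy as np

def simulate_lif(i_inj, dt=1e-4, c_m=1e-9, g_leak=50e-9,
                 v_rest=-0.060, v_thresh=-0.050):
    """Leaky integrate-and-fire neuron: C dV/dt = -g*(V - V_rest) + I.

    i_inj is an array of injected current (amps), one sample per dt.
    All parameter values are illustrative, not AnimatLab defaults.
    """
    v = np.full(len(i_inj), v_rest)
    spike_times = []
    for t in range(1, len(i_inj)):
        dv = (-g_leak * (v[t - 1] - v_rest) + i_inj[t - 1]) / c_m
        v[t] = v[t - 1] + dv * dt
        if v[t] >= v_thresh:       # threshold crossed: record a spike
            spike_times.append(t * dt)
            v[t] = v_rest          # reset the membrane potential
    return v, spike_times

# Example: a 1 nA current step applied for 100 ms.
voltage, spikes = simulate_lif(np.full(1000, 1e-9))
print(f"{len(spikes)} spikes in 100 ms")
```

The membrane integrates input current until it crosses threshold, fires, and resets; AnimatLab's spiking model builds on this basic scheme with conductance-based synapses and other biological detail.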
Why do we need AnimatLab?
A central goal of neuroscience and computational neurobiology is to understand how the nervous system is organized to control behavior. Behavior is controlled by neural circuits that link sensory inputs to decision networks, and link decision elements to motor networks and muscles. The dynamics of this interaction are central to the functional control of behavior. Each movement generates its own sensory input and changes the animal's position and perspective in the world. To govern behavior correctly, the nervous system must both predict and respond to the consequences of the animal's own movements and behavior, and do so on a millisecond-to-second time scale. Despite the importance of this dynamic relationship between nervous system function and behavior, it is poorly understood because of technical limitations in our ability to record neural activity in freely behaving animals. The kinematics and dynamics of many behaviors are well understood, and the neural circuitry for behavior patterns in a variety of animals has been mapped and described in anesthetized or restrained animals, or in preparations where the nervous system has been isolated from the periphery. Investigators have then been left to imagine how the operation of these neural circuits might produce the behavior patterns observed in the intact animal, but without any way to test those imaginings.
How does AnimatLab help?
AnimatLab was written to address this problem. It provides a software environment in which models of the body biomechanics and nervous system interact dynamically in a virtual physical world where all the relevant neural and biomechanical parameters can be observed and manipulated. The program contains a ‘body editor’ that is used to assemble a model of the body of an animal (or part thereof) in Lego™-like fashion, by attaching different sorts of parts to each other through a variety of joint mechanisms. Muscle attachments, muscles, stretch receptors, touch sensors, and chemical sensors can then be added to provide sensory and motor capabilities. A ‘neural editor’ is used to assemble virtual neural circuits using a variety of model neuron and synapse types. Model sensory neurons can then be linked to the body sensors, and motor neurons can be linked to the Hill model muscles to complete the loop.
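To illustrate what a Hill-type muscle model computes, here is a minimal Python sketch: active tension scaled by neural activation and a force-length curve, plus a passive elastic term. It is a simplification under assumed parameters, not AnimatLab's actual implementation.

```python
import math

def hill_muscle_tension(activation, length, velocity,
                        f_max=10.0, l_rest=0.1, k_pe=2000.0, b=20.0):
    """Simplified Hill-type muscle tension (illustrative SI values).

    Active force: maximum tension scaled by activation (0 to 1) and a
    Gaussian force-length curve that peaks at resting length. Passive
    force: a parallel spring plus damping on the lengthening velocity.
    """
    stretch = length - l_rest
    force_length = math.exp(-((stretch / (0.5 * l_rest)) ** 2))
    active = f_max * min(1.0, max(0.0, activation)) * force_length
    passive = k_pe * max(0.0, stretch) + b * velocity
    return max(0.0, active + passive)

# Half activation, muscle stretched 1 cm beyond rest, not moving.
print(hill_muscle_tension(activation=0.5, length=0.11, velocity=0.0))
```

In a simulation, a motor neuron's output would set the activation, and the returned tension would be applied between the muscle's attachment points on the body.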
Figure 1. Screenshots of AnimatLab.
The body is situated in a virtual physical world governed by Vortex™, a physics simulator licensed from CM-Labs, Inc. Simulations can then be run in which the animat's movements in the virtual environment are under neural control as it responds to simulated physical and experimental stimuli. The autonomous behavior of the animat is displayed graphically in 3-D alongside the time-series responses of any designated set of neural or physical parameters. This allows you to complete the sensory-motor feedback loop and test various hypotheses about the neural control of behavior.
Figure 2. Sensory-motor feedback loop. Sensory information is transduced from the virtual world into the neural network, where it is processed, decisions are made, and motor patterns are generated. These patterns then drive the contraction of muscles or torques in motors, causing movement that in turn generates new sensory information.
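This loop maps naturally onto a simulation skeleton. The Python sketch below shows one illustrative way to structure it; every name here (read_sensors, step_physics, and so on) is a hypothetical placeholder, not part of the AnimatLab API.

```python
def run_simulation(world, network, body, duration=5.0, dt=5e-4):
    """One illustrative pass through the sensory-motor feedback loop.

    Each step: sensors are transduced into neural input, the network
    processes it and produces motor commands, the commands drive
    muscles or motors, and the physics engine advances the world.
    All methods here are hypothetical placeholders.
    """
    t = 0.0
    while t < duration:
        sensory_input = body.read_sensors(world)          # transduction
        motor_output = network.update(sensory_input, dt)  # processing
        body.apply_motor_commands(motor_output)           # muscles/motors
        world.step_physics(dt)                            # new movement
        t += dt
```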
What can you do with AnimatLab?
AnimatLab currently offers two different neural models. One is an abstract firing-rate neuron model, and the other is a more realistic conductance-based integrate-and-fire spiking neural model. It is also possible for you to add new neural and biomechanical models as plug-in modules. There are several different joint types and a host of different body types that can be used. And if none of the standard body parts is exactly what you want, you can create that part as a mesh and use it directly in your simulations. This gives you complete control over specifying the body and neural control system for your organism. Below is a list of a few of the organisms that you can create using AnimatLab. Several of these examples also have online video tutorials that show you how to build those systems for yourself in a simple, step-by-step process. Follow along as you watch us build those systems. Also, see our published papers describing some of the results we have obtained using AnimatLab.
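For contrast with the spiking sketch shown earlier, here is a minimal firing-rate unit in Python, where the output is a continuous rate rather than discrete spikes. The sigmoid gain function and all constants are illustrative assumptions, not AnimatLab's parameter values.

```python
import math

def firing_rate_step(rate, drive, dt=1e-3, tau=0.02,
                     max_rate=100.0, gain=1.0, threshold=0.5):
    """One Euler step of a firing-rate unit: tau * dr/dt = -r + f(drive).

    The steady-state rate is a sigmoid of the summed synaptic drive.
    All constants are illustrative, not AnimatLab's values.
    """
    target = max_rate / (1.0 + math.exp(-gain * (drive - threshold)))
    return rate + dt * (target - rate) / tau

# Relax from silence toward the steady-state rate for a constant drive.
r = 0.0
for _ in range(100):
    r = firing_rate_step(r, drive=1.0)
print(f"rate after 100 ms: {r:.1f} Hz")
```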
Locust Jumping
This project is a detailed model of jumping in the locust. Properties for the model were taken directly from the literature. Building a virtual locust allows us to perform experiments that would be difficult, or impossible, in the real animal.
Hexapod Walking
Build an ant-like walking robot and see how reciprocal inhibition between central pattern generators can produce coordination of limbs, allowing animals to walk.
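The principle behind this tutorial can be sketched as two mutually inhibiting ‘half-center’ units with slow adaptation, which settle into alternating bursts. The toy model below (a Matsuoka-style oscillator in Python) illustrates the idea; it is not the tutorial's actual network.

```python
def half_center_cpg(steps=5000, dt=1e-3, tau=0.05, tau_a=0.5,
                    w_inhib=2.0, beta=2.5, drive=1.0):
    """Two units with mutual inhibition and slow adaptation.

    Adaptation fatigues the active unit, letting the inhibited unit
    take over, so the pair bursts in alternation. All constants are
    illustrative toy values.
    """
    x1, x2, a1, a2 = 0.1, 0.0, 0.0, 0.0      # asymmetry breaks the tie
    outputs = []
    for _ in range(steps):
        y1, y2 = max(0.0, x1), max(0.0, x2)  # rectified firing rates
        x1 += dt * (-x1 - w_inhib * y2 - beta * a1 + drive) / tau
        x2 += dt * (-x2 - w_inhib * y1 - beta * a2 + drive) / tau
        a1 += dt * (y1 - a1) / tau_a         # slow fatigue variables
        a2 += dt * (y2 - a2) / tau_a
        outputs.append((y1, y2))
    return outputs   # alternating bursts could drive opposing legs

bursts = half_center_cpg()
print("unit 1 active fraction:",
      sum(y1 > y2 for y1, y2 in bursts) / len(bursts))
```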
Crayfish Tailflip
We are building a virtual crayfish. The neural circuits responsible for the crayfish escape responses have been thoroughly mapped out, but there is no simple way to test the entire circuit to see if it produces the predicted behavior. A virtual crayfish will let us perform those tests.
Belly Flopper
This video tutorial shows you how to build a simple tetrapod that can move around an environment. It covers building the body and the neural control systems from start to finish.
Stretch Reflex
These two video tutorials show how to use limb stiffness to stabilize a limb at a desired joint angle, and how to use equilibrium points to move the limb to new locations. They then go on to demonstrate the classic stretch reflex and show how stretch receptors can be used to detect errors between predicted and actual movements of an arm.
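The reflex can be sketched as a muscle spindle comparing intended and actual muscle length, with the stretch error driving extra motor-neuron activation. The gains and names below are illustrative assumptions, not values from the tutorial.

```python
def stretch_reflex_drive(intended_length, actual_length,
                         base_drive=0.2, spindle_gain=5.0):
    """Spindle-like error signal (illustrative gains).

    Stretch beyond the intended length increases motor drive, which
    contracts the muscle and pulls the limb back toward its target.
    """
    stretch_error = actual_length - intended_length
    # Spindles fire when the muscle is stretched, not when it goes slack.
    reflex = spindle_gain * max(0.0, stretch_error)
    return min(1.0, base_drive + reflex)   # clamp to full activation

# A 5% overstretch raises the drive above its baseline level.
print(stretch_reflex_drive(intended_length=0.10, actual_length=0.105))
```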
Predator-Prey
Another video tutorial, this one showing how to build a predator that tracks an evasive prey item through scent. When the predator's energy level is low enough and the prey is close enough, it strikes out with its tongue to immobilize the prey and feed on it. If the predator does not catch the prey before its energy runs out, it dies.
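The tracking behavior can be sketched as gradient following on a scent field: compare the scent intensity at left and right sensors and turn toward the stronger side. The falloff model, sensor geometry, and gains below are illustrative assumptions.

```python
import math

def scent_intensity(source, point):
    """Scent falls off with squared distance from the prey (toy model)."""
    dx, dy = point[0] - source[0], point[1] - source[1]
    return 1.0 / (1.0 + dx * dx + dy * dy)

def steering_turn(prey_pos, pred_pos, heading, sensor_offset=0.1,
                  sensor_angle=0.5, turn_gain=2.0):
    """Sample scent at two 'antennae' and turn toward the stronger one."""
    left = (pred_pos[0] + sensor_offset * math.cos(heading + sensor_angle),
            pred_pos[1] + sensor_offset * math.sin(heading + sensor_angle))
    right = (pred_pos[0] + sensor_offset * math.cos(heading - sensor_angle),
             pred_pos[1] + sensor_offset * math.sin(heading - sensor_angle))
    diff = scent_intensity(prey_pos, left) - scent_intensity(prey_pos, right)
    return turn_gain * diff   # positive turns left, negative turns right

# Prey up and to the left of a predator facing along +x: turn is positive.
print(steering_turn(prey_pos=(2.0, 1.0), pred_pos=(0.0, 0.0), heading=0.0))
```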
Skin and Touch Receptive Fields
This video tutorial shows you how to build a piece of virtual skin that has an array of sensory neurons over its surface. Each neuron has a receptive field for detecting pressure. When something touches the skin, the response of each sensory neuron depends on both the magnitude of the force and how far the contact is from the center of that neuron's receptive field.
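The response described here can be sketched as a Gaussian falloff: full sensitivity at the field center, decreasing with distance. The field width and current scaling below are illustrative assumptions.

```python
import math

def receptive_field_response(force, contact_pos, field_center,
                             field_width=0.02, max_current=5e-9):
    """Sensory current scaled by force and by a Gaussian of the
    contact's distance from the field center (illustrative values).
    """
    dx = contact_pos[0] - field_center[0]
    dy = contact_pos[1] - field_center[1]
    dist_sq = dx * dx + dy * dy
    weight = math.exp(-dist_sq / (2.0 * field_width ** 2))
    return max_current * force * weight

# A 0.5 N touch 1 cm from the field center gives a reduced response.
print(receptive_field_response(0.5, contact_pos=(0.01, 0.0),
                               field_center=(0.0, 0.0)))
```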
Crayfish Leg
A model of a single crayfish leg and thorax, constructed according to the dimensions, segmental organization, muscle placements, and motor neuron properties described by D. Cattaert and others. The model will be used to study the dynamic control of posture and locomotion in crayfish. The video shows the response of the leg to two neuromuscular commands: first to levate and flex the leg, and second to depress the leg so that the crayfish model can stand and hold an upright posture.
The AnimatLab Community
AnimatLab was designed from the ground up so that other developers from across the world could contribute. The software is composed entirely of plug-in modules, which means that anyone can add new features and models to AnimatLab and then upload the binary files so others can use them as well. We are currently working on an AnimatLab SDK (Software Development Kit) that will give developers the tools they need to add new features to AnimatLab. We also plan to host a central database of models, so there will always be one place to come and search for what you need.
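To illustrate the plug-in idea conceptually, here is a minimal Python sketch of what a neuron-model plug-in interface could look like. This is purely illustrative; the actual AnimatLab SDK defines its own interfaces, and none of the names below come from it.

```python
from abc import ABC, abstractmethod

class NeuronModelPlugin(ABC):
    """Conceptual plug-in contract: any neuron model implementing
    these two methods could be dropped into a hypothetical simulator.
    Not an actual AnimatLab SDK interface.
    """

    @abstractmethod
    def reset(self) -> None:
        """Return the model to its initial state."""

    @abstractmethod
    def step(self, synaptic_current: float, dt: float) -> float:
        """Advance one time step and return the neuron's output."""

class SimpleLeakyNeuron(NeuronModelPlugin):
    """A trivial model satisfying the contract, for illustration."""

    def __init__(self, tau=0.02):
        self.tau, self.v = tau, 0.0

    def reset(self):
        self.v = 0.0

    def step(self, synaptic_current, dt):
        self.v += dt * (synaptic_current - self.v) / self.tau
        return self.v
```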
How Do I Get AnimatLab?
This link will take you directly to the download page. AnimatLab is free for anyone to use, but at the moment it only works on Microsoft Windows. We have plans to port it to other operating systems in the future, but that is still some way off. AnimatLab also has an extensive help system, with over 150 pages of text and 45 video tutorials that demonstrate every major feature of the program. The Getting Started page will help you decide which tutorials to look at first so you can learn your way around the system and start building some really great projects!
AnimatLab development was supported by National Science Foundation grant #0641326 to Donald H. Edwards.
The text of this page is available for modification and reuse under the terms of the Creative Commons Attribution-Sharealike 3.0 Unported License and the GNU Free Documentation License (unversioned, with no invariant sections, front-cover texts, or back-cover texts).