Honey bees in a virtual environment

Honey bees are well known for their remarkable visual learning skills. They localize and recognize flowers and hive surroundings using the multiple visual cues offered by these sites of interest. To forage efficiently, they learn and memorize the colors, shapes and positions of rewarding flowers, which allows them to keep track of profitable food sources. For more than a century, studies have probed the mechanisms of bee visual learning by training free-flying bees to land on a visual target and rewarding them with a drop of sucrose solution. Yet the use of freely moving insects has precluded the study of the neural underpinnings of visual learning. Two recent studies, published in Scientific Reports and in the Journal of Experimental Biology, have achieved a technological breakthrough by implementing a virtual reality system in which a tethered bee, walking stationary on a treadmill, learns and memorizes visual discriminations. These studies open the door to future work combining virtual reality with recordings of neural activity in the visual centers of the honey bee brain.

HFSP Program Grant holders Lars Chittka, Martin Giurfa and Jeffrey Riffell and colleagues
authored on Mon, 05 March 2018

In their natural environment, honey bees show remarkable visual learning skills. They learn and memorize the visual features of the flowers they exploit and the visual landmarks that guide them back to the hive. These visual performances have attracted the attention of researchers for more than a century. Bees learn colors, shapes, symmetry and many other parameters in association with food reward, and can also be trained to solve visual problems of high complexity, such as categorization tasks or non-elemental discriminations based on relational concepts such as identity (“sameness” and “difference” relationships) or numerosity, or on spatial concepts such as “above” or “below” with respect to a constant reference. Yet the exploration of the neural bases of visual cognitive processing is not possible in freely flying insects. To overcome this limitation, we aimed to develop a virtual reality system in which a tethered bee walks stationary on a spherical treadmill floating on air jets, while visual stimuli, rewarded with sucrose or penalized with a quinine solution, are displayed on a semi-circular screen in front of the bee by a high-resolution video projector.

Figure 1: Global view of the virtual reality system. A polystyrene ball floats on a constant airflow provided at the base of a ball support. A tethered bee is placed on the ball with the aid of a holding support. The apparatus is placed behind a semi-spherical opaque screen onto which visual stimuli are projected. Two optic-mouse sensors placed on the ball support at 90° to each other record the ball's movements. The setup translates the movements of the walking bee into rotations of the ball. Photo credit: Cyril Fresillon, CNRS.
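To illustrate the principle behind the two optic-mouse sensors, the sketch below integrates a fictive 2-D walking path from their readings. This is a minimal illustration under assumed conventions, not the authors' actual implementation: the ball radius, the sensor axis conventions (rear sensor's vertical axis = forward rotation, side sensor's vertical axis = sideways rotation, both horizontal axes = yaw) and all names are hypothetical, since the article does not specify the geometry.

```python
import math

BALL_RADIUS_MM = 25.0  # assumed ball radius, for illustration only


def integrate_path(samples, ball_radius=BALL_RADIUS_MM):
    """Reconstruct a fictive 2-D path from two orthogonal optic-mouse sensors.

    Each sample is ((x1, y1), (x2, y2)): surface displacements in mm reported
    by sensor 1 (behind the ball) and sensor 2 (90 degrees to the side).
    Assumed convention: y1 = forward ball rotation, y2 = sideways rotation,
    and the averaged horizontal components (x1, x2) = rotation about the
    vertical axis, i.e. the bee's turning (yaw).
    """
    x = y = heading = 0.0
    path = [(x, y, heading)]
    for (x1, y1), (x2, y2) in samples:
        forward = y1
        sideways = y2
        # Arc length / radius gives the yaw angle increment in radians.
        heading += (x1 + x2) / (2.0 * ball_radius)
        # Rotate the body-frame step into the world frame and accumulate.
        x += forward * math.cos(heading) - sideways * math.sin(heading)
        y += forward * math.sin(heading) + sideways * math.cos(heading)
        path.append((x, y, heading))
    return path


# Example: ten samples of pure forward walking trace a straight line.
straight = integrate_path([((0.0, 1.0), (0.0, 0.0))] * 10)
```

In a closed-loop version of such a setup, the yaw increment computed per sample could be fed back to rotate the projected stimuli around the bee, so that turning on the ball turns the virtual scene.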

Two recent studies, one published in Scientific Reports by the Toulouse team of Martin Giurfa and one published in the Journal of Experimental Biology by the Seattle team of Jeff Riffell, have achieved a major technological breakthrough by implementing a virtual reality system in which bees learn and memorize different visual discriminations. Both studies showed that bees learned to discriminate the trained stimuli and modified their original preferences as a result of the reinforcement received during training.

As the stimuli varied in both shape and color (e.g. a green disc versus a blue square), one study (J Exp Biol) asked which of these parameters dominates when they are set in competition; it showed that visual stimuli are learned in a non-additive manner, with the interaction of specific color and shape combinations being critical for learned responses. The other study (Sci Rep) analyzed acquisition and showed that presenting a single stimulus during training masks learning, because bees tend to fixate and/or respond in a positive phototactic manner to any single illuminated stimulus displayed in front of them, even though learning was evident in subsequent tests with two stimuli. Moreover, it showed that different negative reinforcements associated with the stimulus to be avoided had different impacts on discrimination learning, with quinine solution and distilled water being more effective at improving discrimination.

Figure 2: Global view of the virtual reality system from above when the setup is illuminated by blue light. A square and a disc were projected onto the screen placed in front of the bee. The bee (antennae visible), positioned between the two shapes, had to learn to discriminate between them. Photo credit: Cyril Fresillon, CNRS.

Both studies were performed with the support of HFSP by two PhD students funded by the HFSP grant, Claire Rusch (Seattle) and Alexis Buatois (Toulouse). These promising results open new doors for studies combining these virtual reality protocols with invasive techniques for recording brain activity in tethered bees. Because the insect is partially immobilized, recording from the neurons underlying visual learning and retention becomes possible. This work is already underway.


Associative visual learning by tethered bees in a controlled visual environment. Buatois A, Pichot C, Schultheiss P, Sandoz JC, Lazzari CR, Chittka L, Avarguès-Weber A, Giurfa M. Sci Rep. 2017 Oct 10;7(1):12903. doi: 10.1038/s41598-017-12631-w.


Honeybees in a virtual reality environment learn unique combinations of colour and shape. Rusch C, Roth E, Vinauger C, Riffell JA. J Exp Biol. 2017 Oct 1;220(Pt 19):3478-3487. doi: 10.1242/jeb.164731. Epub 2017 Jul 27. Erratum in: J Exp Biol. 2017 Dec 15;220(Pt 24):4746.
