Dolphins are a cognitively sophisticated family of species, often living in complex societies and possessing highly developed auditory abilities well suited to the primarily acoustic environment in which they live. In such an environment, multifaceted communication between group members could confer a significant advantage. It is therefore reasonable to anticipate that dolphins possess as yet undiscovered vocal mechanisms through which certain types of information are encoded for communication with other dolphins. The goal of Dolphin Communication Analytics is to uncover and investigate these potential mechanisms using an approach that combines theoretical models, software tools, carefully designed dolphin behavioral experiments, and playback of artificial dolphin vocalizations.
1. Models – Theoretical models of mechanisms for information encoding in dolphin vocalizations represent an important facet of our research approach. Although conjectural, they provide a guide for the development of software tools used in analyzing recorded vocalizations and, more significantly, are central to the design of behavioral experiments in which dolphins produce vocalizations of potentially known information content.
2. Software – We have developed software tools that extract specific acoustic features from sets of recorded dolphin vocalizations. These features can then be investigated as candidate information encodings; a sketch of this kind of feature extraction appears after this list.
3. Experiments – Although the theoretical models and software tools can identify features of dolphin vocalizations that could potentially encode the relevant information, they cannot determine whether those features are actually used for that purpose or are merely artifacts of the vocalization process. To distinguish between these two possibilities, we have designed an experimental framework in which one dolphin must vocalize to “tell” another dolphin to perform one of several specific tasks. Because we know what information must be conveyed for the task to be completed successfully, we can analyze the recorded vocalizations with our software tools to identify and isolate the acoustic features that may represent the information encodings.
4. Playback – After identifying the set of candidate features from the dolphin vocalizations recorded in the experiments, we will use additional, custom software tools to generate synthetic, dolphin-like vocalizations that incorporate those features; a sketch of this synthesis step also appears after this list. If dolphins then respond to our artificial vocalizations as they did to the original, dolphin-produced ones, that would increase the likelihood that those features are actually involved in the information transfer. More significantly, it would allow us to develop systems for bidirectional communication between humans and dolphins. At first, these would involve only the simple communications of the first sets of experiments. Nevertheless, even such elementary interspecies communication tools could accelerate our understanding of the limits of dolphin acoustic communication.
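As a concrete illustration of the feature extraction mentioned in item 2, the sketch below traces the frequency contour of a single whistle from a spectrogram and summarizes it with a few candidate features. It assumes a single-channel WAV recording of one whistle; the file name "shiloh_whistle.wav", the whistle frequency band, and the simple peak-tracking approach are illustrative assumptions, not the project's actual analysis tools.

# Minimal sketch: trace a whistle's frequency contour and compute a few
# candidate features. The file name and frequency band are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def whistle_contour(path, fmin=2000.0, fmax=25000.0):
    """Return (times, peak frequencies) of the dominant tonal component."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:                                # mix down to mono if needed
        audio = audio.mean(axis=1)
    freqs, times, sxx = spectrogram(audio, fs=rate, nperseg=1024, noverlap=768)
    band = (freqs >= fmin) & (freqs <= fmax)          # restrict to the whistle band
    contour = freqs[band][np.argmax(sxx[band, :], axis=0)]
    return times, contour

# Candidate features: simple summary statistics of the traced contour.
times, contour = whistle_contour("shiloh_whistle.wav")
features = {
    "duration_s": times[-1] - times[0],
    "f_min_hz": contour.min(),
    "f_max_hz": contour.max(),
    "mean_slope_hz_per_s": np.polyfit(times, contour, 1)[0],
}
print(features)

In practice, richer descriptions of the contour shape would be considered, but even these simple statistics illustrate what is meant by a candidate information encoding.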
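The playback step in item 4 amounts to rendering an artificial whistle that follows a chosen frequency contour. The sketch below is a minimal illustration under assumed parameters (contour values, sample rate, amplitude, and output file name); it is not the custom playback software itself.

# Minimal sketch: synthesize a frequency-modulated tone that follows a
# target contour and write it to a WAV file. All parameters are illustrative.
import numpy as np
from scipy.io import wavfile

def synthesize_whistle(times, contour_hz, rate=96000, out_path="synthetic_whistle.wav"):
    """Render a tone whose instantaneous frequency tracks the given contour."""
    t = np.arange(0.0, times[-1], 1.0 / rate)
    inst_freq = np.interp(t, times, contour_hz)            # resample the contour
    phase = 2.0 * np.pi * np.cumsum(inst_freq) / rate      # integrate frequency into phase
    tone = np.sin(phase)
    fade = np.minimum(1.0, np.minimum(t, t[-1] - t) / 0.01)  # 10 ms onset/offset ramps
    signal = (0.8 * tone * fade * np.iinfo(np.int16).max).astype(np.int16)
    wavfile.write(out_path, rate, signal)
    return out_path

# Example: a one-second upsweep from 5 kHz to 15 kHz.
synthesize_whistle(np.array([0.0, 1.0]), np.array([5000.0, 15000.0]))

A real playback stimulus would incorporate the full set of candidate features identified in the experiments, but the principle is the same: the synthetic call is built from measured parameters rather than copied from a recording.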
Image: Signature whistle of Shiloh, an Atlantic bottlenose dolphin, shown in a Gabor “chirplet” spectrogram.