Exploring the Potential of Deep Learning for Precision Livestock Farming of Pigs
Development of two Analysis Frameworks for Behavioral Monitoring
Doctoral thesis
Date of Examination: 2023-03-23
Date of issue: 2023-06-16
Advisor: Prof. Dr. Armin Schmitt
Referee: Prof. Dr. Armin Schmitt
Referee: Prof. Dr. Imke Traulsen
Referee: Prof. Dr. Mehmet Gültas
Files in this item
MWutke_PhD_Thesis.pdf (PDF, 36.4 MB)
Abstract
With increasing stocking densities, larger group sizes, and intensive farming conditions for pigs, the monitoring of these animals takes on an important and proactive role in farm management. At the same time, ethical aspects of livestock husbandry and animal welfare are gaining importance in both practice and science. Nevertheless, the human-to-animal ratio in commercial livestock production is decreasing as a result of intensified farming conditions, and the use of camera technology remains limited because the generated video data usually has to be analyzed manually. In this context, recent advances in machine learning (ML), deep learning (DL), and computer vision (CV) demonstrate the potential to extract useful information from video data that can assist researchers and practitioners in situational awareness and decision making regarding the status and welfare of their animals. The chapters of this thesis address this issue and investigate the suitability of state-of-the-art DL algorithms for automating the video analysis process in order to enhance monitoring systems. In particular, I developed two analysis frameworks that incorporate techniques from the fields of unsupervised and supervised learning to gain deeper insights into the potential of these algorithms to improve animal monitoring systems and to determine important behavioral traits of pigs.
In my first analysis framework, I used methods from the area of semi-supervised anomaly detection to investigate and monitor the group-specific activity levels of different pig compartments and group sizes. A DL model is first trained on video sequences with a low level of activity, defined as normal behavior; anomalous episodes, ranging from single-animal movements to high-level group activity, are then identified by generating a group-specific activity score for each time step of the video. In a second step, I scaled the individual activity scores with a threshold-based activity score classifier to increase the comparability between different pig compartments and to address the problem of heterogeneous camera environments and group sizes.
Whereas the first framework takes a group-specific perspective, the second framework focuses on an animal-specific one. Here, I apply a bottom-up object detection approach based on keypoint annotations to first determine the location and orientation of individual pigs, and then track each detection over a given period of time by combining the spatial and temporal information with a Kalman filter (KF) algorithm. Based on the detection and tracking information, I then determine potential contacts between animals, in the form of head-to-head and head-to-tail contacts, for each video frame and generate a social network that captures the intensity of each relationship.
Overall, both frameworks demonstrate the applicability of DL-based analysis methods for automatically monitoring groups of pigs in a commercial livestock setting, address the problem of big data in livestock farming, and can be used to derive behavior-based animal welfare indicators for further analysis.
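To make the activity-scoring idea of the first framework more concrete, the minimal sketch below trains a convolutional autoencoder only on low-activity ("normal") frames and uses its reconstruction error as a per-frame activity score, followed by a threshold-based classifier. The architecture, threshold values, and names (ConvAutoencoder, activity_score, classify_activity) are illustrative assumptions and do not reproduce the exact model or parameters used in the thesis.

    # Sketch: reconstruction error of an autoencoder trained on "normal"
    # (low-activity) frames serves as a group-specific activity score.
    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train_on_normal(model, normal_loader, epochs=10):
        # Semi-supervised step: the model only ever sees low-activity frames.
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for batch in normal_loader:          # batches of low-activity frames
                loss = loss_fn(model(batch), batch)
                opt.zero_grad()
                loss.backward()
                opt.step()

    def activity_score(model, frame):
        # frame: 1 x H x W grayscale tensor, H and W divisible by 4.
        model.eval()
        with torch.no_grad():
            recon = model(frame.unsqueeze(0))
            return torch.mean((recon - frame.unsqueeze(0)) ** 2).item()

    def classify_activity(score, low_thr=0.01, high_thr=0.05):
        # Threshold-based classifier scaling the raw score to a coarse level.
        if score < low_thr:
            return "low"
        elif score < high_thr:
            return "medium"
        return "high"

Because the autoencoder is fitted only to low-activity sequences, frames with strong single-animal or group movement are reconstructed poorly, which is what pushes the score, and hence the classified activity level, upward.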
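The detection-and-tracking step of the second framework can be illustrated in a similar way: a constant-velocity Kalman filter smooths the position of each detected pig over time, while head and tail keypoints are compared across animals to register head-to-head and head-to-tail contacts and accumulate them into a weighted contact network. The state layout, noise settings, contact radius, and helper names below are assumptions for illustration, not the parameters or code of the thesis.

    # Sketch: constant-velocity Kalman filter per animal plus keypoint-based
    # contact detection and a simple weighted contact network.
    import numpy as np
    from collections import Counter

    class ConstantVelocityKF:
        # State [x, y, vx, vy]; measurements are positions only.
        def __init__(self, x, y, dt=1.0):
            self.state = np.array([x, y, 0.0, 0.0])
            self.P = np.eye(4) * 10.0
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], dtype=float)
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)
            self.Q = np.eye(4) * 0.01   # process noise
            self.R = np.eye(2) * 1.0    # measurement noise

        def predict(self):
            self.state = self.F @ self.state
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.state[:2]

        def update(self, z):
            y = np.asarray(z, dtype=float) - self.H @ self.state
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.state = self.state + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    def detect_contacts(keypoints, radius=40.0):
        # keypoints: {pig_id: {'head': (x, y), 'tail': (x, y)}} for one frame.
        contacts = []
        ids = sorted(keypoints)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                for part_a, part_b, label in [('head', 'head', 'head-to-head'),
                                              ('head', 'tail', 'head-to-tail'),
                                              ('tail', 'head', 'head-to-tail')]:
                    d = np.linalg.norm(np.subtract(keypoints[a][part_a],
                                                   keypoints[b][part_b]))
                    if d < radius:
                        contacts.append((a, b, label))
        return contacts

    # Accumulating contacts over all frames yields edge weights:
    # contact_network = Counter()
    # contact_network.update(detect_contacts(frame_keypoints))

Aggregating the contact tuples over all frames in this way gives edge weights that reflect how often two animals interacted, i.e., the intensity of each relationship in the resulting social network.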
Keywords: convolutional neural networks; precision livestock farming; animal tracking; deep learning; animal detection