Probably the coolest research paper name ever! The idea here was to model the thoughts and actions of a dog. The authors attached a set of sensors to the dog’s limbs to collect movement data, and mounted a camera on the dog’s head to capture the same first-person view of the world that the dog sees. CNN feature extractors pull image features from the video frames, which are then passed, along with the sensor data, to a set of LSTMs that learn to predict the dog’s actions. The novel, creative application, together with the unique way the task was framed and carried out, makes this paper an awesome read! Hopefully it inspires future research creativity in how we collect data and apply deep learning techniques.
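The pipeline described above (per-frame CNN features fused with limb-sensor readings, fed through an LSTM that predicts the next action) can be sketched roughly as follows. This is an illustrative NumPy sketch under assumed dimensions, not the authors' implementation: the CNN backbone is stubbed out as precomputed `frame_feats`, and all layer sizes, the `num_actions` head, and the early-fusion choice are assumptions for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal NumPy LSTM cell with fused gate weights (sketch only)."""
    def __init__(self, input_dim, hidden_dim, rng):
        self.hidden_dim = hidden_dim
        # Single weight matrix for the input, forget, cell, and output gates.
        self.W = rng.standard_normal((input_dim + hidden_dim, 4 * hidden_dim)) * 0.1
        self.b = np.zeros(4 * hidden_dim)

    def step(self, x, h, c):
        z = np.concatenate([x, h]) @ self.W + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # update cell state
        h = o * np.tanh(c)           # emit new hidden state
        return h, c

def predict_actions(frame_feats, sensor_data, num_actions=8, hidden_dim=32, seed=0):
    """Fuse per-frame CNN features with limb-sensor readings, run an LSTM
    over the sequence, and emit action logits at every timestep."""
    rng = np.random.default_rng(seed)
    input_dim = frame_feats.shape[1] + sensor_data.shape[1]
    cell = LSTMCell(input_dim, hidden_dim, rng)
    W_out = rng.standard_normal((hidden_dim, num_actions)) * 0.1  # linear action head
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    logits = []
    for t in range(frame_feats.shape[0]):
        x = np.concatenate([frame_feats[t], sensor_data[t]])  # early fusion
        h, c = cell.step(x, h, c)
        logits.append(h @ W_out)
    return np.stack(logits)  # shape: (timesteps, num_actions)

# Toy inputs: 10 timesteps, 64-dim image features, 6 limb-sensor channels.
rng = np.random.default_rng(1)
frame_feats = rng.standard_normal((10, 64))
sensor_data = rng.standard_normal((10, 6))
logits = predict_actions(frame_feats, sensor_data)
print(logits.shape)  # (10, 8)
```

In a real version the LSTM would be trained with backpropagation and the image features would come from a pretrained CNN; the sketch only shows how the two data streams meet in the recurrent model.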
Read the paper here: Ehsani, K., Bagherinezhad, H., Redmon, J., Mottaghi, R. and Farhadi, A., 2018. Who Let the Dogs Out? Modeling Dog Behavior From Visual Data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4051-4060).
Taken from: The 10 Coolest Papers from CVPR 2018 (George Seif, Towards Data Science, June 28, 2018)