Deep learning algorithms are widely used in fields such as computer vision, speech recognition, and natural language processing, and are becoming ubiquitous across particle physics. As these models grow ever larger, their computational complexity places severe demands on the resources required for high-throughput computing. Beyond CPUs and GPUs, dedicated hardware accelerators are emerging as an essential complement, offering advantages over pure software solutions. In particular, in the high-performance computing sector, Field Programmable Gate Array (FPGA) compute accelerators are increasingly used to improve computing performance and reduce power consumption (e.g., in the Microsoft Catapult project, the Bing search engine, and Amazon EC2 F1 instances). This talk will present an overview of ongoing R&D to accelerate deep learning inference on FPGAs to address the growing computing challenges in particle physics.