Data-driven techniques help boost agricultural productivity by increasing yields, reducing losses, and cutting input costs. However, these techniques have seen sparse adoption owing to the high cost of manual data collection and limited connectivity solutions. Our system, called FarmBeats, combines Cloud, IoT, and AI innovations for agriculture that enable seamless collection and analysis of data across various sensors, cameras, drones, and satellites. In this talk, we will describe the system and outline some of the AI challenges we are currently addressing for agriculture.
Mao V. Ngo (Singapore University of Technology and Design)*; Hakima Chaouchi (CNRS, SAMOVAR, Telecom Sud Paris, Institut Mines Telecom, Paris-Saclay University); Tie Luo (Department of Computer Science, Missouri University of Science and Technology); Tony Quek (Singapore University of Technology and Design)
Jonathan Fürst (NEC Laboratories Europe)*; Mauricio Fadel Argerich (NEC Laboratories Europe); K. Shankari (UC Berkeley); Gürkan Solmaz (NEC Laboratories Europe); Bin Cheng (NEC Laboratories Europe)
Chia-Yu Lin, Tzu-Ting Chen, Li-Chun Wang and Hong-Han Shuai (Department of Electrical and Computer Engineering, National Chiao Tung University, Taiwan)
Mostafa Elhoushi (Huawei Technologies)*; Ye Tian (Huawei Technologies); Zihao Chen (Huawei Technologies); Farhan Shafiq (Huawei Technologies); Joey Li (Huawei Technologies)
Ming-Chih Lin (Microsoft)*; Sio-Sun Wong (Microsoft); Yu-Kwen Hsu (Microsoft); Ti-Hua Yang (Microsoft); Pi-Sui Hsu (Microsoft); He-Wei Lee (Microsoft); Wei-Chen Tsai (Microsoft); Hsiu-Tzu Wu (Microsoft); Sung-Lin Yeh (Microsoft)
Jaeseok Kim (HCL America, Inc.)*; Hank Tsou (HCL America); Ernst Henle (HCL America); Ashish Guttendar (HCL America)
Looking back from this point in time, just as a new decade has begun, there has been a quantum leap in our ability to know the world through the data we collect from IoT devices and share. Can we push the "sensing as a service" concept to the next stage? And will those efforts align with Facebook's mission to "give people the power to build community and bring the world closer together"? We will cover these questions in our talk.
Machine learning (ML) applications have entered and impacted our lives unlike any other technology advance of the recent past. Indeed, almost every aspect of how we live or interact with others relies on or uses ML, for applications ranging from image classification and object detection to processing multi-modal and heterogeneous datasets. While the holy grail for judging the quality of an ML model has largely been serving accuracy, and only recently its resource usage, neither of these metrics translates directly to energy efficiency, runtime, or mobile device battery lifetime. This talk will uncover the need for accurate, platform-specific power and latency models for convolutional neural networks (CNNs) and for efficient hardware-aware CNN design methodologies, allowing machine learners and hardware designers to identify not just the most accurate NN configuration, but also those that satisfy given hardware constraints.
Our proposed modeling framework is applicable to both high-end and mobile platforms and achieves 88.24% accuracy for latency, 88.34% for power, and 97.21% for energy prediction. Using similar predictive models, we demonstrate a novel differentiable neural architecture search (NAS) framework, dubbed Single-Path NAS, that uses one single-path over-parameterized CNN to encode all architectural decisions through shared convolutional kernel parameters. Single-Path NAS achieves state-of-the-art top-1 ImageNet accuracy (75.62%), outperforming existing mobile NAS methods under similar latency constraints (∼80 ms), and finds the final configuration up to 5,000× faster than prior work. Combined with our quantized CNNs (Flexible Lightweight CNNs, or FLightNNs), which customize the precision level in a layer-wise fashion and achieve near iso-accuracy at 5-10× energy reduction, such a modeling, analysis, and optimization framework is poised to enable true co-design of hardware and ML models, orders of magnitude faster than the state of the art, while satisfying both accuracy and latency or energy constraints.
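As a loose illustration of the shared-kernel encoding behind Single-Path NAS, the sketch below shows how a single over-parameterized 5x5 depthwise kernel can express the 3x3-vs-5x5 architectural choice through a differentiable gate. This is a minimal sketch under our own assumptions, not the authors' released implementation; the module name SuperKernel2d and the threshold parameter t_5x5 are placeholders chosen for illustration.

# Minimal sketch (hypothetical, PyTorch): one shared 5x5 depthwise "superkernel"
# whose inner 3x3 region is always used, while a learnable threshold gates the
# outer ring, so the kernel-size decision stays differentiable.
import torch
import torch.nn as nn

class SuperKernel2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Shared over-parameterized 5x5 depthwise kernel (one filter per channel).
        self.weight = nn.Parameter(torch.randn(channels, 1, 5, 5) * 0.1)
        # Learnable threshold deciding whether the outer 5x5 ring is kept.
        self.t_5x5 = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        mask = torch.zeros_like(self.weight)
        mask[:, :, 1:4, 1:4] = 1.0            # 1s over the inner 3x3 region
        core = self.weight * mask             # weights shared with the 3x3 choice
        ring = self.weight * (1.0 - mask)     # weights used only by the 5x5 choice
        # Soft, differentiable gate: keep the ring only if its norm exceeds t_5x5.
        gate = torch.sigmoid(ring.norm() - self.t_5x5)
        kernel = core + gate * ring
        return nn.functional.conv2d(x, kernel, padding=2, groups=x.shape[1])

In the full method, as we understand it, similar threshold gates also cover other decisions (such as expansion ratios), and a latency term from the predictive models enters the training objective; those parts are omitted from this sketch.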
Madono Koki (Waseda University)*; Masayuki Tanaka (Tokyo Institute of Technology); Masaki Onishi (National Institute of Advanced Industrial Science and Technology); Tetsuji Ogawa (Waseda University)
Dai-Ying Hsieh (National Chiao Tung University); Hsiao-Han Lu (National Chiao Tung University); Ru-Fen Jheng (National Chiao Tung University); Hung-Chen Chen (National Chiao Tung University); Hong-Han Shuai (National Chiao Tung University)*; Wen-Huang Cheng (Department of Electrical Engineering, National Chiao Tung University)
Nimish Awalgaonkar (School of Mechanical Engineering, Purdue University); Haining Zheng (ExxonMobil Research and Engineering Company)*; Christopher Gurciullo (ExxonMobil Research and Engineering Company)
Zhengcong Fei (Chinese Academy of Sciences, Institute of Computing Technology)*
Anthony Chen (Dept. of Electrical Engineering, National Taiwan University); Sheng-De Wang (National Taiwan University)*
Luna Zhang (BigBear, Inc.)*
Luna Zhang (BigBear, Inc.)*
Tinghao Zhang (Harbin Institute of Technology)*; Jingxu Li (Harbin Institute of Technology); Jingfeng Li (Harbin Institute of Technology); Ling Wang (Harbin Institute of Technology); Feng Li (Harbin Institute of Technology); Jie Liu (Harbin Institute of Technology)
The field of AIoT (or AI of Things) is about making real-world impact through learning and decision making from sensor data. Although deep-learning-based AI has made amazing breakthroughs in computer vision, natural language processing, machine translation, and gaming, when applied to real-world problems it faces a number of challenges spanning data, models, and systems. In this talk, we use ambient intelligent environments to motivate these challenges and present a few ideas to address them. We will discuss how multi-modality sensor fusion can overcome the limitations of single sensor types, how simulation can be used to generate useful training data, and how neural architecture search can optimize models for embedded platforms. This is initial work that only scratches the surface of a complex field, and we will discuss a few future research directions towards the end.
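As a rough, hypothetical sketch of the multi-modality sensor fusion idea mentioned above (the class name, modalities, and feature dimensions below are illustrative assumptions, not details from the talk), one simple approach is late fusion: encode each sensor stream separately and classify from the concatenated embeddings, so a noisy or missing modality can be partially compensated by the others.

# Minimal sketch (hypothetical, PyTorch): late fusion of two sensor modalities.
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    def __init__(self, audio_dim=64, radar_dim=32, hidden=128, num_classes=5):
        super().__init__()
        # One encoder per modality; dimensions are placeholder values.
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.radar_enc = nn.Sequential(nn.Linear(radar_dim, hidden), nn.ReLU())
        # Shared classification head over the fused embedding.
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, audio_feat, radar_feat):
        # Encode each modality independently, then fuse by concatenation.
        a = self.audio_enc(audio_feat)
        r = self.radar_enc(radar_feat)
        return self.head(torch.cat([a, r], dim=-1))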