Building Smart and Accessible Transportation Hubs with Internet of Things, Big Data Analytics, and Affective Computing

Large transportation hubs are difficult to navigate, especially for people with special needs such as those with visual impairment or autism spectrum disorder (ASD), or for those who simply face navigation challenges. The primary objective of this research is to design and develop a novel cyber-physical infrastructure that can effectively and efficiently transform existing transportation hubs into smart facilities capable of providing better location-aware services. We investigated the integration of a number of Internet of Things (IoT) elements, including video analytics, Bluetooth beacons, mobile computing, and facility semantic models, to provide reliable indoor navigation services to people with special needs while requiring minimal infrastructure changes. Our pilot tests with people with special needs in a multi-floor building in New York City have demonstrated the effectiveness of the proposed framework.
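To make the Bluetooth-beacon component concrete, the following is a minimal sketch of a common indoor-positioning scheme: convert each beacon's received signal strength (RSSI) to an estimated range with a log-distance path-loss model, then trilaterate a 2-D position from three known beacon locations. The calibrated 1-meter transmit power, path-loss exponent, and beacon layout below are illustrative assumptions, not values from the pilot deployment described in the abstract.

```python
import math

def rssi_to_distance(rssi, tx_power=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimated range (m) from an RSSI reading.
    tx_power is the calibrated RSSI at 1 m (hypothetical value here)."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

def trilaterate(beacons, distances):
    """Estimate a 2-D position from three beacon positions and ranges by
    subtracting the circle equations (which cancels the quadratic terms)
    and solving the resulting 2x2 linear system with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    b2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Simulated noise-free readings: beacons at known positions, user at (1, 1).
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
rssi = [-59.0 - 20 * math.log10(math.hypot(bx - 1.0, by - 1.0))
        for bx, by in beacons]
dists = [rssi_to_distance(r) for r in rssi]
x, y = trilaterate(beacons, dists)
```

In practice RSSI is noisy, so deployed systems typically smooth readings over time or use fingerprinting against a site survey rather than pure lateration; this sketch only shows the geometric core of the idea.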
