Computational Science – ICCS 2018

In the developing world, the majority of people rely on para-transit services for their everyday commutes. However, the informal, demand-driven operation of these services, such as making arbitrary stops to pick up and drop off passengers, is inefficient and complicates efforts to integrate them with more organized train and bus networks. In this study, we devised a methodology to design and optimize a road-based para-transit network using a genetic algorithm that maximizes efficiency, robustness, and invulnerability. We first generated stops following certain geospatial distributions and connected them to build networks of routes. From these, we selected an initial population and applied the genetic algorithm. Overall, our modified genetic algorithm, run for 20 evolutions, improved the 20% worst-performing networks by 84% on average. For one network, it increased the fitness score by 223%. The highest fitness score the algorithm produced through optimization was 0.532.
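The pipeline the abstract describes (generate stops, connect them into candidate route networks, then evolve the population) follows the standard genetic-algorithm loop. The sketch below illustrates that loop under stated assumptions: the network encoding (a list of routes over randomly placed stops), the toy fitness function (a stand-in for the study's efficiency, robustness, and invulnerability metrics), and the crossover/mutation operators are all illustrative, not the paper's actual implementation.

```python
import random

# Minimal genetic-algorithm sketch for route-network optimization.
# A candidate network is a list of routes; each route is a sequence of
# stop indices into a shared set of geospatial stop coordinates.

random.seed(42)
N_STOPS = 50
STOPS = [(random.random(), random.random()) for _ in range(N_STOPS)]  # stop locations

def random_network(n_routes=5, route_len=8):
    """Build a candidate network by connecting randomly sampled stops."""
    return [random.sample(range(N_STOPS), route_len) for _ in range(n_routes)]

def route_length(route):
    """Total Euclidean length of one route."""
    return sum(((STOPS[a][0] - STOPS[b][0]) ** 2 +
                (STOPS[a][1] - STOPS[b][1]) ** 2) ** 0.5
               for a, b in zip(route, route[1:]))

def fitness(network):
    """Toy stand-in for the study's combined score: reward covering many
    distinct stops per unit of total route length."""
    covered = len({s for route in network for s in route})
    total_len = sum(route_length(r) for r in network) or 1e-9
    return covered / (N_STOPS * total_len)

def crossover(a, b):
    """Exchange whole routes between two parent networks at a random cut."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(network, rate=0.1):
    """With probability `rate`, reassign one stop on each route."""
    for route in network:
        if random.random() < rate:
            route[random.randrange(len(route))] = random.randrange(N_STOPS)
    return network

# Evolve: keep the fitter half as parents, refill with mutated offspring.
population = [random_network() for _ in range(30)]
for generation in range(20):  # the study reports 20 evolutions
    population.sort(key=fitness, reverse=True)
    parents = population[: len(population) // 2]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```

In practice the fitness function would score efficiency, robustness, and invulnerability on the actual road graph; the loop structure above stays the same regardless of how those terms are defined.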
