Ruimin KE (柯锐岷)

I am a Ph.D. candidate in Intelligent Transportation and Infrastructure Systems at the University of Washington (UW), where I am advised by Prof. Yinhai Wang. I have been working as a research assistant at the Smart Transportation Applications and Research Lab (STAR Lab) at UW. My research interests lie in intelligent transportation systems, connected and automated vehicles, cyber-physical systems, transportation data science, traffic modeling, and computer vision. I am concurrently pursuing a Master's degree in Computer Science at the University of Illinois at Urbana-Champaign. Earlier, I received a Master's degree from the Department of Civil and Environmental Engineering at the University of Washington and my Bachelor's degree from the Department of Automation at Tsinghua University, where I was advised by Prof. Danya Yao.

In my spare time, I am also a badminton player and have won more than 70 medals and trophies since 2001. As a junior, I won the Sichuan Province championship 11 times and the Chengdu city championship 15 times. Later, I played on the Tsinghua Badminton Team, winning nine Beijing titles and a national runner-up finish for Tsinghua University; I was the men's singles champion of Beijing (university students) for three consecutive years (2011, 2012, 2013). Since moving to Seattle, I have won another nine trophies, including the men's singles title at the 2015 Northwest Husky Badminton Open, the men's singles runner-up at the 2016 WA State Badminton Open, and the men's doubles runner-up at the 2018 WA State Badminton Closed.

E-Mail / Resume / Google Scholar / ResearchGate / Master Thesis

News
  • 2020/06: My paper "High-Resolution Vehicle Trajectory Extraction and Denoising From Aerial Videos" is accepted by IEEE Transactions on Intelligent Transportation Systems.
  • 2020/05: My paper "Stacked Bidirectional and Unidirectional LSTM Recurrent Neural Network for Forecasting Network-wide Traffic State with Missing Values" is accepted by Transportation Research Part C: Emerging Technologies.
  • 2020/05: My paper "Safe, efficient, and comfortable velocity control based on reinforcement learning for autonomous driving" is accepted by Transportation Research Part C: Emerging Technologies.
  • 2020/05: I was invited to give an online talk at MIT.
  • 2020/04: My paper "Edge-Based Traffic Flow Data Collection Method Using Onboard Monocular Camera" is accepted by Journal of Transportation Engineering, Part A: Systems.
  • 2020/03: My paper (first author) "A Smart, Efficient, and Reliable Parking Surveillance System with Edge Artificial Intelligence on IoT Devices" is accepted by IEEE Transactions on Intelligent Transportation Systems.
  • 2020/03: My paper "Learning Traffic as a Graph: A Gated Graph Wavelet Recurrent Neural Network for Network-scale Traffic Prediction" is accepted by Transportation Research Part C: Emerging Technologies.
  • 2020/03: I was invited to visit Villanova University and give a talk.
  • 2020/03: My paper "Testing an Automated Collision Avoidance and Emergency Braking System for Buses" is accepted by Transportation Research Record.
  • 2020/02: My paper (first author) "Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact" is accepted by Transportation Research Record.
  • 2020/02: My paper (first author) "Advanced Framework for Microscopic and Lane-level Macroscopic Traffic Parameters Estimation from UAV Video" is accepted by IET Intelligent Transport Systems.
  • 2020/01: I was invited to visit Purdue University and give a talk.
  • 2020/01: I received the Michael Kyte Outstanding Student of the Year Award presented by the Pacific Northwest Transportation Consortium (PacTrans), USDOT Region 10 University Transportation Center.
  • 2019/12: I will demonstrate the technology of Multi-Camera Multi-Target Tracking and Re-identification on behalf of PacTrans as part of the USDOT Exhibit at CES 2020 in Las Vegas.
  • 2019/10: All four papers I submitted to TRB 2020 (two as first author) were accepted for presentation. See you in Washington, D.C.!
  • 2019/10: I will serve as the organizing committee co-chair, with Prof. David Fan (UNC Charlotte), for the 23rd COTA TRB Winter Symposium (webpage coming soon). I will be in charge of the Lightning Talk Session for Young Professionals. We are CALLING FOR PRESENTERS until Nov. 25, 2019!
  • 2019/09: My paper "Traffic graph convolutional recurrent neural network: a deep learning framework for network-scale traffic learning and forecasting" is accepted by IEEE Transactions on Intelligent Transportation Systems.
  • 2019/09: I will co-chair a session on Connected and Autonomous Vehicles at CICTP 2020 with Prof. Quan Yuan (Tsinghua University). You are welcome to submit your abstract by Sept. 20, 2019!
  • 2019/06: One paper abstract was accepted by INFORMS 2019 in Seattle; I will give an oral presentation.
  • 2019/05: I will give a guest talk on "Video-based data collection for intelligent transportation systems" at the 2019 ITE Student Night event.
  • 2019/02: I was invited to serve as an area editor (area: Intelligent and Connected Transportation Systems) for the 5th International Conference on Transportation Information and Safety (ICTIS 2019).
Selected Publications

A Smart, Efficient, and Reliable Parking Surveillance System With Edge Artificial Intelligence on IoT Devices
Ruimin Ke, Yifan Zhuang, Ziyuan Pu, Yinhai Wang*
IEEE Transactions on Intelligent Transportation Systems, 2020
abstract / bibtex / link

Cloud computing has been a mainstream computing service for years. Recently, with the rapid development in urbanization, massive video surveillance data are produced at an unprecedented speed. A traditional solution to deal with the big data would require a large amount of computing and storage resources. With the advances in Internet of things (IoT), artificial intelligence, and communication technologies, edge computing offers a new solution to the problem by processing all or part of the data locally at the edge of a surveillance system. In this study, we investigate the feasibility of using edge computing for smart parking surveillance tasks, specifically, parking occupancy detection using the real-time video feed. The system processing pipeline is carefully designed with the consideration of flexibility, online surveillance, data transmission, detection accuracy, and system reliability. It enables artificial intelligence at the edge by implementing an enhanced single shot multibox detector (SSD). A few more algorithms are developed either locally at the edge of the system or on the centralized data server targeting optimal system efficiency and accuracy. Thorough field tests were conducted in the Angle Lake parking garage for three months. The experimental results are promising: the final detection method achieves over 95% accuracy in real-world scenarios with high efficiency and reliability. The proposed smart parking surveillance system is a critical component of smart cities and can be a solid foundation for future applications in intelligent transportation systems.

@article{9061155,
title={A Smart, Efficient, and Reliable Parking Surveillance System With Edge Artificial Intelligence on IoT Devices},
author={Ke, Ruimin and Zhuang, Yifan and Pu, Ziyuan and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2020},
volume={},
number={},
pages={1-13}
}
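
To make the occupancy-detection step above concrete, the snippet below is a minimal sketch of the decision logic only: given vehicle bounding boxes from any detector (the paper uses an enhanced SSD), a stall is marked occupied when a detection overlaps it sufficiently. The stall coordinates and the IoU threshold are illustrative assumptions, not values from the paper.

# Minimal sketch of the occupancy-decision step; stall coordinates and the
# 0.3 IoU threshold are illustrative assumptions, not values from the paper.
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def stall_occupancy(stalls, detections, thresh=0.3):
    """Return {stall_id: occupied?} given stall boxes and detected vehicle boxes."""
    return {sid: any(iou(stall, det) >= thresh for det in detections)
            for sid, stall in stalls.items()}

# Example with hypothetical pixel coordinates:
stalls = {"A1": (10, 50, 110, 150), "A2": (120, 50, 220, 150)}
detections = [(15, 60, 105, 145)]           # one car detected
print(stall_occupancy(stalls, detections))  # {'A1': True, 'A2': False}
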
IoT System for Real-Time Near-Crash Detection for Automated Vehicle Testing
Ruimin Ke, Zhiyong Cui, Yanlong Chen, Meixin Zhu, Yinhai Wang*
ArXiv Preprint, 2020
abstract / bibtex / link / demo

Our world is moving towards the goal of fully autonomous driving at a fast pace. While the latest automated vehicles (AVs) can handle most real-world scenarios they encounter, a major bottleneck for turning fully autonomous driving into reality is the lack of sufficient corner case data for training and testing AVs. Near-crash data, as a widely used surrogate data for traffic safety research, can also serve the purpose of AV testing if properly collected. To this end, this paper proposes an Internet-of-Things (IoT) system for real-time near-crash data collection. The system has several notable features. First, it is a low-cost and standalone system that is backward-compatible with any existing vehicles. Users can mount the system on their dashboards for near-crash data collection and collision warning without the approval or help of vehicle manufacturers. Second, we propose a new near-crash detection method that models the target's size changes and relative motions with the bounding boxes generated by deep-learning-based object detection and tracking. This near-crash detection method is fast, accurate, and reliable; particularly, it is insensitive to camera parameters, thereby having an excellent transferability to different dashboard cameras. We have conducted comprehensive experiments with 100 videos processed locally on Jetson, as well as real-world tests on cars and buses. Besides collecting corner cases, the system can also serve as a white-box platform for testing innovative algorithms and evaluating other AV products. The system contributes to the real-world testing of AVs and has great potential to be brought into large-scale deployment.

@misc{ke2020iot,
title={IoT System for Real-Time Near-Crash Detection for Automated Vehicle Testing},
author={Ruimin Ke and Zhiyong Cui and Yanlong Chen and Meixin Zhu and Yinhai Wang},
year={2020},
eprint={2008.00549},
archivePrefix={arXiv},
primaryClass={cs.RO}
}
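
As a rough illustration of the bounding-box cue described in the abstract, the sketch below flags a tracked target whose box area expands rapidly between frames, which indicates a fast relative approach. The frame rate and expansion-rate threshold are illustrative assumptions rather than the paper's calibrated values.

# Minimal sketch of the bounding-box cue: a target whose box area grows rapidly
# between frames is approaching quickly relative to the ego vehicle. The
# threshold and frame rate below are illustrative assumptions.
def box_area(box):
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def near_crash_flag(prev_box, curr_box, fps=30.0, expansion_thresh=0.5):
    """Flag a potential near-crash when the tracked box grows faster than
    `expansion_thresh` (fractional area growth per second)."""
    a_prev, a_curr = box_area(prev_box), box_area(curr_box)
    if a_prev == 0:
        return False
    growth_per_sec = (a_curr - a_prev) / a_prev * fps
    return growth_per_sec > expansion_thresh

# A tracked vehicle whose box grows noticeably within one frame at 30 fps:
print(near_crash_flag((100, 100, 200, 180), (96, 97, 204, 184)))  # True
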
Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact
Ruimin Ke, Wan Li, Zhiyong Cui, Yinhai Wang*
Transportation Research Record, 2020
abstract / bibtex / link / data (wsdot --> loopgroup data download)

Traffic speed prediction is a critically important component of intelligent transportation systems. Recently, with the rapid development of deep learning and transportation data science, a growing body of new traffic speed prediction models has been designed, achieving high accuracy and large-scale prediction. However, existing studies have two major limitations. First, they predict aggregated traffic speed rather than lane-level traffic speed; second, most studies ignore the impact of other traffic flow parameters in speed prediction. To address these issues, the authors propose a two-stream multi-channel convolutional neural network (TM-CNN) model for multi-lane traffic speed prediction considering traffic volume impact. In this model, the authors first introduce a new data conversion method that converts raw traffic speed data and volume data into spatial-temporal multi-channel matrices. Then the authors carefully design a two-stream deep neural network to effectively learn the features and correlations between individual lanes, in the spatial-temporal dimensions, and between speed and volume. Accordingly, a new loss function that considers the volume impact in speed prediction is developed. A case study using 1-year data validates the TM-CNN model and demonstrates its superiority. This paper contributes to two research areas: (1) traffic speed prediction, and (2) multi-lane traffic flow study.

@article{ke2020TWO,
title={Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact},
author={Ke, Ruimin and Li, Wan and Cui, Zhiyong and Wang, Yinhai},
journal={Transportation Research Record},
pages={0361198120911052},
year={2020},
publisher={SAGE Publications Sage CA: Los Angeles, CA}
}
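
The data-conversion idea can be sketched as follows: per-lane speed and volume series are sliced into spatial-temporal matrices with one channel per lane, and the two resulting tensors feed the speed and volume streams of the network. The array shapes and window length below are illustrative assumptions.

# Sketch of the data-conversion idea: stack per-lane time series into
# spatial-temporal tensors with one channel per lane. Shapes are illustrative.
import numpy as np

def to_multichannel(series, window=12):
    """series: array of shape (T, n_lanes) -> tensor (n_samples, window, n_lanes),
    where each channel (last axis) is one traffic lane."""
    T, n_lanes = series.shape
    samples = [series[t:t + window] for t in range(T - window)]
    return np.stack(samples)             # (n_samples, window, n_lanes)

T, n_lanes = 288, 4                      # e.g., one day of 5-min data, 4 lanes
speed  = np.random.uniform(20, 65, size=(T, n_lanes))
volume = np.random.randint(0, 40, size=(T, n_lanes)).astype(float)

speed_stream  = to_multichannel(speed)   # input to the speed stream
volume_stream = to_multichannel(volume)  # input to the volume stream
print(speed_stream.shape, volume_stream.shape)  # (276, 12, 4) (276, 12, 4)
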
Advanced Framework for Microscopic and Lane-level Macroscopic Traffic Parameters Estimation from UAV Video
Ruimin Ke, Shuo Feng, Zhiyong Cui, Yinhai Wang*
IET Intelligent Transport Systems, 2020
abstract / bibtex / link / video

The unmanned aerial vehicle (UAV) is at the heart of modern traffic sensing research due to its advantages of low cost, high flexibility, and wide view range over traditional traffic sensors. Recently, increasing efforts in UAV-based traffic sensing have been made, and great progress has been achieved on the estimation of aggregated macroscopic traffic parameters. Compared with aggregated macroscopic traffic data, higher-resolution traffic data such as microscopic traffic parameters and lane-level macroscopic traffic parameters have received extensive attention since they can help deeply understand traffic patterns and individual vehicle behaviours. However, little existing research can automatically estimate microscopic traffic parameters and lane-level macroscopic traffic parameters using UAV videos with a moving background. In this study, an advanced framework is proposed to bridge the gap. Specifically, three functional modules consisting of multiple processing streams and the interconnections among them are carefully designed with the consideration of UAV video features and traffic flow characteristics. Experimental results on real-world UAV video data demonstrate promising performances of the framework in microscopic and lane-level macroscopic traffic parameters estimation. This research pushes the boundaries of UAV applicability and has enormous potential to support advanced traffic sensing and management.

@article{ke2020advanced,
title={Advanced framework for microscopic and lane-level macroscopic traffic parameters estimation from UAV video},
author={Ke, Ruimin and Feng, Shuo and Cui, Zhiyong and Wang, Yinhai},
journal={IET Intelligent Transport Systems},
year={2020},
publisher={IET}
}
Real-Time Traffic Flow Parameter Estimation From UAV Video Based on Ensemble Classifier and Optical Flow
Ruimin Ke, Zhibin Li, Jinjun Tang, Zewen Pan, Yinhai Wang*
IEEE Transactions on Intelligent Transportation Systems, 2019
abstract / bibtex / link / data

Recently, the availability of unmanned aerial vehicles (UAVs) opens up new opportunities for smart transportation applications, such as automatic traffic data collection. Given this trend, detecting vehicles and extracting traffic parameters from UAV video in a fast and accurate manner is becoming crucial in many prospective applications. However, from the methodological perspective, several limitations have to be addressed before the actual implementation of UAVs. This paper proposes a new and complete analysis framework for traffic flow parameter estimation from UAV video. This framework addresses the well-known issues of UAVs' irregular ego-motion, low estimation accuracy in dense traffic situations, and high computational complexity by designing and integrating four stages. In the first two stages, an ensemble classifier (Haar cascade + convolutional neural network) is developed for vehicle detection, and in the last two stages, a robust traffic flow parameter estimation method is developed based on optical flow and traffic flow theory. The proposed ensemble classifier is demonstrated to outperform state-of-the-art vehicle detectors designed for UAV-based vehicle detection. Traffic flow parameter estimation in both free-flow and congested traffic conditions is evaluated, and the results are very encouraging. The dataset with 20,000 image samples used in this study is publicly accessible for benchmarking at http://www.uwstarlab.org/research.html.

@article{ke2018real,
title={Real-time traffic flow parameter estimation from UAV video based on ensemble classifier and optical flow},
author={Ke, Ruimin and Li, Zhibin and Tang, Jinjun and Pan, Zewen and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2019},
volume={20},
number={1},
pages={54-64},
publisher={IEEE}}
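
The two-stage ensemble idea can be sketched as below: a fast Haar cascade proposes candidate windows, and a CNN verifies each candidate. The cascade file name and the cnn_verify callable are placeholders; the paper's trained detector is not reproduced here.

# Sketch of the two-stage ensemble: cheap Haar-cascade proposals, then CNN
# verification. "cars.xml" and `cnn_verify` are placeholders, not the paper's
# trained models.
import cv2

def detect_vehicles(frame_bgr, cascade_path="cars.xml", cnn_verify=None):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cascade_path)
    # Stage 1: high-recall candidate windows from the Haar cascade.
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    detections = []
    for (x, y, w, h) in candidates:
        patch = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        # Stage 2: CNN verification to suppress false positives.
        if cnn_verify is None or cnn_verify(patch) > 0.5:
            detections.append((x, y, w, h))
    return detections
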
New Framework for Automatic Identification and Quantification of Freeway Bottlenecks Based on Wavelet Analysis
Ruimin Ke, Ziqiang Zeng, Ziyuan Pu, Yinhai Wang*
Journal of Transportation Engineering, Part A: Systems, 2018
(Featured in the Editor's Choice Section of the journal)
abstract / bibtex / link

As the amount of traffic congestion continues to grow, pinpointing freeway bottleneck locations and quantifying their impacts are crucial activities for traffic management and control. Among the previous bottleneck identification methods, limitations still exist. The first key limitation is that they cannot determine precise breakdown durations at a bottleneck in an objective manner. Second, the input data often needs to be aggregated in an effort to ensure better robustness to noise, which will significantly reduce the time resolution. Wavelet transform, as a powerful and efficient data-processing tool, has already been implemented in some transportation application scenarios to much benefit. However, there is still a wide gap between existing preliminary explorations of wavelet analysis in transportation research and a completely automatic bottleneck identification framework. This paper addresses several key issues in existing bottleneck identification approaches and also fills a gap in transportation-related wavelet applications. The experimental results demonstrate that the proposed method is able to locate the most severe bottlenecks and comprehensively quantify their impacts.

@article{doi:10.1061/JTEPBS.0000168,
author = {Ruimin Ke and Ziqiang Zeng and Ziyuan Pu and Yinhai Wang },
title = {New Framework for Automatic Identification and Quantification of Freeway Bottlenecks Based on Wavelet Analysis},
journal = {Journal of Transportation Engineering, Part A: Systems},
volume = {144},
number = {9},
pages = {04018044},
year = {2018},
doi = {10.1061/JTEPBS.0000168}
}
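
In the spirit of the paper, the sketch below applies a continuous wavelet transform to a loop-detector speed series so that abrupt speed drops (breakdown onsets) show up as large-magnitude coefficients. The wavelet, scales, and threshold are illustrative assumptions, not the paper's settings.

# Sketch: a CWT of a speed series highlights abrupt drops as large coefficients.
# Wavelet, scales, and threshold below are illustrative assumptions.
import numpy as np
import pywt

# Synthetic speed profile: free flow, a breakdown, then recovery (mph).
speed = np.concatenate([np.full(100, 62.0), np.full(60, 28.0), np.full(100, 60.0)])
speed += np.random.normal(0, 1.5, speed.size)

scales = np.arange(1, 32)
coeffs, _ = pywt.cwt(speed, scales, "mexh")     # Mexican-hat wavelet

energy = np.abs(coeffs).sum(axis=0)             # aggregate across scales
threshold = energy.mean() + 3 * energy.std()
change_points = np.where(energy > threshold)[0]
print("candidate breakdown boundaries near samples:", change_points[:5], "...")
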
Multi-Lane Traffic Pattern Learning and Forecasting Using Convolutional Neural Network
Ruimin Ke, Wan Li, Zhiyong Cui, Yinhai Wang*
COTA International Symposium on Emerging Trends in Transportation (ISETT), 2018
abstract / link

Recently, the emergence of deep learning has facilitated many research fields including transportation, especially traffic pattern recognition and traffic forecasting. While many efforts have been made in the exploration of new models for higher accuracy and larger scale, few existing studies focus on learning higher-resolution traffic patterns. The most representative example is the lack of research in multi-lane pattern mining and forecasting. To this end, this paper proposes a deep learning framework that can learn multi-lane traffic patterns and forecast lane-level short-term traffic conditions with high accuracy. Multi-lane traffic dynamics are modeled as a multi-channel spatial-temporal image in which each channel corresponds to a traffic lane. The constructed multi-channel image is then learned by a convolutional neural network, which can capture key traffic patterns and forecast multi-lane traffic flow parameters. One-year loop detector data for a freeway segment in Seattle are used for model validation. The results and analyses demonstrate the promising performance of the proposed method.

A Cost-effective Framework for Automated Vehicle-pedestrian Near-miss Detection through Onboard Monocular Vision
Ruimin Ke, Jerome Lutin, Jerry Spears, Yinhai Wang*
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017
abstract / bibtex / link / video / media coverage

Onboard monocular cameras have been widely deployed in both public transit and personal vehicles. Obtaining vehicle-pedestrian near-miss event data from onboard monocular vision systems may be cost-effective compared with onboard multiple-sensor systems or traffic surveillance videos. However, extracting near-misses from onboard monocular vision is challenging, and little work has been published. This paper fills the gap by developing a framework to automatically detect vehicle-pedestrian near-misses through onboard monocular vision. The proposed framework can estimate depth and real-world motion information through monocular vision with a moving video background. The experimental results, based on processing over 30 hours of video data, demonstrate the ability of the system to capture near-misses by comparison with the events logged by the Rosco/MobilEye Shield+ system, which includes four cameras working cooperatively. The detection overlap rate reaches over 90% with the thresholds properly set.

@INPROCEEDINGS{8014858,
author={R. Ke and J. Lutin and J. Spears and Y. Wang},
booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
title={A Cost-Effective Framework for Automated Vehicle-Pedestrian Near-Miss Detection Through Onboard Monocular Vision},
year={2017},
pages={898-905},
doi={10.1109/CVPRW.2017.124},
ISSN={2160-7516},
month={July},}
Real-Time Bidirectional Traffic Flow Parameter Estimation From Aerial Videos
Ruimin Ke, Zhibin Li, Sung Kim, John Ash, Zhiyong Cui, Yinhai Wang*
IEEE Transactions on Intelligent Transportation Systems, 2017
abstract / bibtex / link

Unmanned aerial vehicles (UAVs) are gaining popularity in traffic monitoring due to their low cost, high flexibility, and wide view range. Traffic flow parameters such as speed, density, and volume extracted from UAV-based traffic videos are critical for traffic state estimation and traffic control and have recently received much attention from researchers. However, different from stationary surveillance videos, the camera platforms move with UAVs, and the background motion in aerial videos makes it very challenging to process for data extraction. To address this problem, a novel framework for real-time traffic flow parameter estimation from aerial videos is proposed. The proposed system identifies the directions of traffic streams and extracts traffic flow parameters of each traffic stream separately. Our method incorporates four steps that make use of the Kanade-Lucas-Tomasi (KLT) tracker, k-means clustering, connected graphs, and traffic flow theory. The KLT tracker and k-means clustering are used for interest-point-based motion analysis; then, four constraints are proposed to further determine the connectivity of interest points belonging to one traffic stream cluster. Finally, the average speed of a traffic stream as well as density and volume can be estimated using outputs from previous steps and reference markings. Our method was tested on five videos taken in very different scenarios. The experimental results show that in our case studies, the proposed method achieves about 96% and 87% accuracy in estimating average traffic stream speed and vehicle count, respectively. The method also achieves a fast processing speed that enables real-time traffic information estimation.

@ARTICLE{7546916,
author={R. Ke and Z. Li and S. Kim and J. Ash and Z. Cui and Y. Wang},
journal={IEEE Transactions on Intelligent Transportation Systems},
title={Real-Time Bidirectional Traffic Flow Parameter Estimation From Aerial Videos},
year={2017},
volume={18},
number={4},
pages={890-901},
doi={10.1109/TITS.2016.2595526},
ISSN={1524-9050},
month={April},}
Roadway surveillance video camera calibration using standard shipping container
Ruimin Ke, Zewen Pan, Ziyuan Pu, Yinhai Wang*
IEEE International Smart Cities Conference, 2017
abstract / bibtex / link

Surveillance video cameras have been increasingly deployed on roadway networks, providing important support for roadway management. While the information-rich video images are a valuable source of traffic data, these surveillance video cameras are typically designed for manual observation of roadway conditions and not for automatic traffic data collection. The benefits of turning these surveillance cameras into data collection cameras are obvious, but collecting traffic data would normally require the development of a cost-effective method to efficiently and accurately calibrate surveillance video cameras. This paper proposes such a robust and efficient method that calibrates surveillance video cameras using a standard shipping container as the reference object. The traditional camera calibration model can be simplified, and the camera parameters can be recovered with precise mathematical derivation. After solving for all the camera parameters, 3D world coordinates can be reconstructed from 2D image coordinates, thus enabling the collection of a variety of traffic data from surveillance video cameras.

@INPROCEEDINGS{8090811,
author={R. Ke and Z. Pan and Z. Pu and Y. Wang},
booktitle={2017 International Smart Cities Conference (ISC2)},
title={Roadway surveillance video camera calibration using standard shipping container},
year={2017},
volume={},
number={},
pages={1-6},
doi={10.1109/ISC2.2017.8090811},
ISSN={},
month={Sept},}
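
The paper recovers the camera parameters with a simplified closed-form derivation; the sketch below illustrates the same general idea with OpenCV's generic PnP solver and the known dimensions of a standard 40 ft container. The image points and intrinsic guess are placeholders.

# Sketch of calibration from a known-size reference object using a generic PnP
# solver (not the paper's closed-form derivation). Image points and the focal
# length guess are placeholders.
import cv2
import numpy as np

# Corners of a standard 40 ft container (length x width x height, metres).
L, W, H = 12.192, 2.438, 2.591
object_pts = np.array([[0, 0, 0], [L, 0, 0], [L, W, 0], [0, W, 0],
                       [0, 0, H], [L, 0, H], [L, W, H], [0, W, H]], dtype=np.float32)

# Corresponding pixel coordinates clicked in one surveillance frame (placeholders).
image_pts = np.array([[420, 610], [980, 640], [1010, 700], [430, 665],
                      [425, 500], [985, 520], [1015, 585], [435, 555]], dtype=np.float32)

fx = fy = 1200.0                       # rough focal-length guess (pixels)
K = np.array([[fx, 0, 960], [0, fy, 540], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print("pose recovered:", ok, "camera position (m):",
      (-cv2.Rodrigues(rvec)[0].T @ tvec).ravel())
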
Motion-vector clustering for traffic speed detection from UAV video
Ruimin Ke, Sung Kim, Zhibin Li, Yinhai Wang*
IEEE First International Smart Cities Conference, 2015
abstract / bibtex / link

A novel method for detecting the average speed of traffic from non-stationary aerial video is presented. The method first extracts interest points from a pair of frames and performs interest point tracking with an optical flow algorithm. The output of the optical flow is a set of motion vectors which are k-means clustered in velocity space. The centers of the clusters correspond to the average velocities of traffic and the background, and are used to determine the speed of traffic relative to the background. The proposed method is tested on a 70-frame test sequence of UAV aerial video, and achieves an average error for speed estimates of less than 12%.

@INPROCEEDINGS{7366230,
author={R. Ke and S. Kim and Z. Li and Y. Wang},
booktitle={2015 IEEE First International Smart Cities Conference (ISC2)},
title={Motion-vector clustering for traffic speed detection from UAV video},
year={2015},
volume={},
number={},
pages={1-5},
doi={10.1109/ISC2.2015.7366230},
ISSN={},
month={Oct},}
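
A minimal sketch of this pipeline: track interest points with the pyramidal Lucas-Kanade (KLT) tracker, k-means cluster the resulting motion vectors in velocity space, and take the offset between the traffic cluster and the background cluster as the traffic motion relative to the ground. The number of clusters and the larger-cluster-is-background rule are simplifications for illustration.

# Sketch: KLT tracking + k-means clustering of motion vectors in velocity space.
# Cluster count and the background heuristic are simplifying assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def relative_traffic_motion(prev_gray, curr_gray, n_clusters=2):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    vectors = (nxt[good] - pts[good]).reshape(-1, 2)     # pixel displacements
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(vectors)
    centers = km.cluster_centers_
    # Assume the larger cluster is the background (ground motion); the other
    # cluster's offset from it approximates traffic motion relative to ground.
    sizes = np.bincount(km.labels_)
    bg, fg = np.argmax(sizes), np.argmin(sizes)
    return centers[fg] - centers[bg]                     # pixels per frame
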
Lane-changes prediction based on adaptive fuzzy neural network
Jinjun Tang, Fang Liu, Wenhui Zhang, Ruimin Ke, Yajie Zou*
Expert Systems with Applications, 2018
abstract / bibtex / link

The lane changing maneuver is one of the most important driving behaviors. Unreasonable lane changes can cause serious collisions and consequent traffic delays. High-precision prediction of lane changing intent is helpful for improving driving safety. In this study, by fusing information from vehicle sensors, a lane changing predictor based on an Adaptive Fuzzy Neural Network (AFNN) is proposed to predict steering angles. The prediction model includes two parts: a fuzzy neural network based on Takagi–Sugeno fuzzy inference, in which an improved Least Squares Estimator (LSE) is adopted to optimize parameters, and an adaptive learning algorithm to update membership functions and the rule base. Experiments are conducted in a driving simulator under scenarios with different speed levels of the lead vehicle: 60 km/h, 80 km/h, and 100 km/h. Prediction results show that the proposed method is able to accurately follow steering angle patterns. Furthermore, comparison of prediction performance with several machine learning methods further verifies the learning ability of the AFNN. Finally, a sensitivity analysis indicates that heading angle and vehicle acceleration are also important factors for predicting lane changing behavior.

@article{TANG2018452,
title = "Lane-changes prediction based on adaptive fuzzy neural network",
journal = "Expert Systems with Applications",
volume = "91",
pages = "452 - 463",
year = "2018",
issn = "0957-4174",
doi = "https://doi.org/10.1016/j.eswa.2017.09.025",
author = "Jinjun Tang and Fang Liu and Wenhui Zhang and Ruimin Ke and Yajie Zou",
}
Traffic flow data compression considering burst components
Shuo Feng, Ruimin Ke, Xingmin Wang, Yi Zhang, Li Li*
IET Intelligent Transport Systems, 2017
abstract / bibtex / link

Many recent applications of intelligent transportation systems require both real-time and network-wide traffic flow data as input. However, as the detection time and network size increase, the data volume may become very large in terms of both dimension and scale. To address this concern, various traffic flow data compression methods have been proposed, which archive the low-dimensional subspace rather than the original data. Many studies have shown that traffic flow data consist of different components, i.e. a low-dimensional intra-day trend, Gaussian-type fluctuations, and burst components. Existing compression methods cannot compress the burst components well and provide very limited choices of compression ratio (CR). A better compression method should have the ability to archive all the dominant information in the different components of traffic flow data. In this study, the authors compare the influence of different data reformatting approaches, archive the bursts in descending order of the absolute values of the burst points, and propose a flexible compression framework to balance between the burst components and the low-dimensional intra-day trend. Experimental results show that the proposed framework improves the reconstruction accuracy significantly. Moreover, the proposed framework provides more flexible choices with respect to CR, which can benefit a variety of applications.

@ARTICLE{8076727,
author={S. Feng and R. Ke and X. Wang and Y. Zhang and L. Li},
journal={IET Intelligent Transport Systems},
title={Traffic flow data compression considering burst components},
year={2017},
volume={11},
number={9},
pages={572-580},
doi={10.1049/iet-its.2016.0328},
ISSN={1751-956X},
month={},}
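
A simplified sketch of the balance described above: keep a low-rank approximation of the intra-day trend and separately archive the largest residual burst points in descending order of magnitude. The rank and the burst budget are illustrative, and SVD stands in for whatever subspace method an implementation might use.

# Simplified sketch: low-rank trend + separately archived top-k bursts.
# Rank and burst budget are illustrative assumptions.
import numpy as np

def compress(flow, rank=3, n_bursts=50):
    """flow: (n_days, n_intervals) traffic volumes. Returns the compressed parts."""
    mean = flow.mean(axis=0)
    U, S, Vt = np.linalg.svd(flow - mean, full_matrices=False)
    low_rank = (U[:, :rank] * S[:rank]) @ Vt[:rank]          # intra-day trend
    residual = flow - mean - low_rank
    idx = np.argsort(np.abs(residual), axis=None)[::-1][:n_bursts]
    bursts = np.zeros_like(flow)
    bursts.flat[idx] = residual.flat[idx]                    # largest bursts only
    return mean, U[:, :rank] * S[:rank], Vt[:rank], bursts

def reconstruct(mean, scores, components, bursts):
    return mean + scores @ components + bursts

flow = np.abs(np.random.normal(300, 60, size=(30, 288)))     # synthetic volumes
parts = compress(flow)
err = np.abs(reconstruct(*parts) - flow).mean()
print("mean absolute reconstruction error:", round(err, 2))
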
Active Safety-Collision Warning Pilot in Washington State
Jerry Spears, Jerome Lutin, Yinhai Wang, Ruimin Ke, Steven Clancy
TRANSIT-IDEA Program Project Final Report, 2017
abstract / bibtex / link / media coverage

The Rosco/Mobileye Shield+ system is a collision avoidance warning system (CAWS) specifically designed for transit buses. This project involved field testing and evaluation of the CAWS in revenue service over a three-month period. The system provides alerts and warnings to the bus driver for the following conditions that could lead to a collision: 1) changing lanes without activating a turn signal, 2) exceeding posted speed limit, 3) monitoring headway with the vehicle leading the bus, 4) forward vehicle collision warning, and 5) pedestrian or cyclist collision warning in front of, or alongside the bus. Alerts and warnings are displayed to the driver by visual indicators located on the windshield and front pillars. Audible warnings are issued when collisions are imminent. Research objectives included: create a robust Rosco/Mobileye demonstration pilot for active/collision avoidance within the State of Washington on a minimum of 35 transit buses; determine the ease of retrofit of the existing fleet; develop a methodology for estimating the full costs savings of avoided collisions for each agency; develop a methodology and evaluation process for transit driver feedback and acceptance as well as bus passenger feedback; and provide detailed data and understanding on entrance barriers to this technology. The pilot test showed that although driver acceptance was mixed, there were large reductions in near-miss events for CAWS-equipped buses. Consequently, achieving driver acceptance will be a key factor in continued development and deployment of CAWS. As a result of comments received from the drivers, the vendor has begun a program to incorporate desired modifications to the system including reducing false positives. A second major factor in achieving industry acceptance is to demonstrate the business case for CAWS to both transit agencies and system developers. Although the pilot project produced encouraging results, collisions, injuries and fatalities can be considered rare events. A much larger in-service test will be needed to demonstrate actual cost-savings.

@techreport{01643748,
author={Jerry Spears and Jerome Lutin and Yinhai Wang and Ruimin Ke and Steven Clancy},
title={Active Safety-Collision Warning Pilot in Washington State},
institution={Transportation Research Board},
type={Transit IDEA Project},
number={82},
pages={1-33},
year={2017},
month={May},}
A collision avoidance model for two-pedestrian groups: Considering random avoidance patterns
Zhuping Zhou*, Yifei Cai, Ruimin Ke, Jiwei Yang
Physica A: Statistical Mechanics and its Applications, 2017
abstract / bibtex / link

Grouping is a common phenomenon in pedestrian crowds, and group modeling is still an open and challenging problem. When pedestrian groups avoid each other, different patterns can be observed. Pedestrians can keep close to group members and avoid other groups as a cluster, or they can avoid other groups separately. Considering this randomness in avoidance patterns, we propose a collision avoidance model for two-pedestrian groups. In our model, the avoidance model is first built on the velocity obstacle method. Then a grouping model is established using a distance constrained line (DCL); by transforming the DCL into the velocity obstacle framework, the avoidance model and the grouping model are put into one unified calculation structure. Within this structure, an algorithm is developed to resolve conflicts between the solutions of the two models. Two groups of bidirectional pedestrian experiments are designed to verify the model. The accuracy of avoidance behavior and grouping behavior is validated at the microscopic level, while the lane formation phenomenon and fundamental diagrams are validated at the macroscopic level. The experimental results show that our model is convincing and extends well to describing three or more pedestrian groups.

@article{ZHOU2017142,
title = "A collision avoidance model for two-pedestrian groups: Considering random avoidance patterns",
journal = "Physica A: Statistical Mechanics and its Applications",
volume = "475",
pages = "142 - 154",
year = "2017",
issn = "0378-4371",
doi = "https://doi.org/10.1016/j.physa.2016.12.041",
author = "Zhuping Zhou and Yifei Cai and Ruimin Ke and Jiwei Yang",
}
A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis
Ziqiang Zeng, Wenbo Zhu, Ruimin Ke, John Ash, Yinhai Wang*, Jiuping Xu, Xinxin Xu
Accident Analysis & Prevention, 2017
abstract / bibtex / link

The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed.

@article{ZENG201751,
title = "A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis",
journal = "Accident Analysis & Prevention",
volume = "99",
pages = "51 - 65",
year = "2017",
issn = "0001-4575",
doi = "https://doi.org/10.1016/j.aap.2016.11.008",
author = "Ziqiang Zeng and Wenbo Zhu and Ruimin Ke and John Ash and Yinhai Wang and Jiuping Xu and Xinxin Xu",
}
Digital Roadway Interactive Visualization and Evaluation Network Applications to WSDOT Operational Data Usage
Yinhai Wang, Ruimin Ke, Weibin Zhang, Zhiyong Cui, Kristian Henrickson
Washington State Department of Transportation (WSDOT) Research Report, 2016
abstract / link / video / website

DRIVE Net is a region-wide, Web-based transportation decision support system that adopts digital roadway maps as the base, and provides data layers for integrating and analyzing a variety of data sources (e.g., traffic sensors, incident records). Moreover, DRIVE Net offers a platform for streamlining transportation analysis and decision making, and it serves as a practical tool for visualizing historical observations spatially and temporally. In its current implementation, DRIVE Net demonstrates the potential to be used as a standard tool for incorporating multiple data sets from different fields and as a platform for real-time decision making. In comparison with the previous version, the new DRIVE Net system is now able to handle more complex computational tasks, perform large-scale spatial processing, and support data sharing services to provide a stable and interoperable platform to process, analyze, visualize, and share transportation data. DRIVE Net’s capabilities include generating statistics for WSDOT’s Gray Notebook (GNB), including travel times, throughput productivity, and traffic delay calculations for both general purpose and HOV lanes, each of which are important performance indicators in the WSDOT congestion report. The DRIVE Net system includes robust loop detector data processing and quality control methods to address the data quality issues impacting loop detectors throughout the state. The capabilities of the DRIVE Net system have been expanded to include safety modeling, hotspot identification, and incident induced delay estimation. Specifically, the Safety Performance module includes functions that can be used to obtain traffic incident frequency, apply predictive models to estimate the safety performance of road segments, and visualize and compare observed incident counts and different predictive models. Additionally, a module providing multi-modal data analysis and visualization capabilities was developed as a pilot experiment for integration of heterogeneous data. This module includes pedestrian and bicycle, public transit, park and ride, Car2Go, and ferry data downloading and visualization. DRIVE Net now offers role-based access control, such that access privileges to different functions and data resources can be assigned on a group or individual basis. The new system is able to support more complex analytics and decision support features on a large-scale transportation network, and is expected to be of great practical use for both traffic engineers and researchers. With a modular structure and mature data integration and management framework, DRIVE Net can be expanded in the future to include a variety of additional data resources and analytical capabilities.

Deep Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction
Zhiyong Cui, Ruimin Ke, Yinhai Wang*
arXiv, 2018
abstract / bibtex / link

Short-term traffic forecasting based on deep learning methods, especially long short-term memory (LSTM) neural networks, has received much attention in recent years. However, the potential of deep learning methods in traffic forecasting has not yet fully been exploited in terms of the depth of the model architecture, the spatial scale of the prediction area, and the predictive power of spatial-temporal data. In this paper, a deep stacked bidirectional and unidirectional LSTM (SBU-LSTM) neural network architecture is proposed, which considers both forward and backward dependencies in time series data, to predict network-wide traffic speed. A bidirectional LSTM (BDLSTM) layer is exploited to capture spatial features and bidirectional temporal dependencies from historical data. To the best of our knowledge, this is the first time that BDLSTMs have been applied as building blocks for a deep architecture model to measure the backward dependency of traffic data for prediction. The proposed model can handle missing values in input data by using a masking mechanism. Further, this scalable model can predict traffic speed for both freeway and complex urban traffic networks. Comparisons with other classical and state-of-the-art models indicate that the proposed SBU-LSTM neural network achieves superior prediction performance for the whole traffic network in both accuracy and robustness.

@article{DBLP:journals/corr/abs-1801-02143,
author = "Zhiyong Cui and Ruimin Ke and Yinhai Wang",
title = "Deep Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction",
journal = "arXiv preprint arXiv:1801.02143",
year = "2018",
}
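
A minimal sketch of the stacked architecture described above, assuming a Keras implementation: a masking layer for missing values, a bidirectional LSTM layer, a unidirectional LSTM layer, and a dense output over all sensor locations. Layer sizes, the mask value, and the input shape are illustrative assumptions.

# Sketch of a stacked bidirectional + unidirectional LSTM with masking for
# missing values. Layer sizes and input shape are illustrative assumptions.
import tensorflow as tf

n_steps, n_sensors = 12, 100          # e.g., 12 past intervals, 100 locations

model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(n_steps, n_sensors)),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(128, return_sequences=True)),   # forward + backward
    tf.keras.layers.LSTM(128),                                # unidirectional
    tf.keras.layers.Dense(n_sensors),                         # next-step speeds
])
model.compile(optimizer="adam", loss="mse")
model.summary()
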

Teaching
CET 590: Traffic Simulation and System Operations
As Graduate Instructor
Course Evaluation Score: 4.8 / 5
Fall 2019
CET 590: Traffic Simulation and System Operations
As Teaching Assistant with Prof. Yinhai Wang
Fall 2018
CET 412: Transportation Data Management and Analytics
As Guest Lecturer / Topic: Advances in Sensor Technology for Robust Traffic Data Collection
Winter 2019
EE(P) 502: Analytical Methods for Electrical Engineering
As Guest Lecturer / Topic: Computer Vision Applications in Transportation Engineering
Fall 2018
Engineering Discovery Days
As Guest Lecturer / Topic: Drone-Based Traffic Detection and Management
Spring 2016, 2017, 2018, 2019