A method to process eye-tracker data with image processing algorithms for driver behavior analysis
Sürücü davranış analizi için göz izleme verilerini görüntü işleme algoritmalarıyla işlemek için bir yöntem
- Thesis No: 916101
- Advisors: Prof. Dr. Lorenzo Mussone, Prof. Dr. Giandomenico Caruso
- Thesis Type: Master's
- Subjects: Civil Engineering
- Keywords: driving simulator, driver attentional data, driver physiological data, driver behavior, level of detail (LOD), road safety, gaze position, driver reaction time, image processing
- Year: 2022
- Language: English
- University: Politecnico di Milano
- Institute: Institute abroad
- Department: Not specified.
- Discipline: Not specified.
- Page Count: 99
Abstract
One of the most important factors identified as a cause of road crashes is driver behavior. Under controlled traffic conditions, a driving simulator has become the primary instrument for simulating and verifying the interaction between vehicle and driver. This research also seeks to determine the optimal level of detail (LOD) of simulator scenarios, i.e., the LOD that yields the most reliable results. The data obtained from the driving simulator and eye-tracker devices of Politecnico di Milano's i.Drive laboratory are the basis for the research presented in this thesis. The driving simulator consists of a fixed seat, monitors, a steering wheel, and pedals for operating the car. Driving scenarios provide routes with a variety of environmental details. The simulator makes it possible to observe the driver's behavior as the scenario's LOD varies while the road infrastructure remains unchanged. The experiment employs four levels of detail, ranging from simple scenarios (with carriageway and road signs) to a scenario quite close to reality (with realistic buildings and other elements such as trees, lamps, and pedestrians). In this study, a novel method for properly interpreting the experiment's Pupil Labs eye-tracking data was proposed. The drivers' gaze positions were captured by the Pupil Labs eye-tracking equipment and were processed in a prior work on this experiment (Rawal, 2021). Because the subjects wore the Pupil Labs eye-tracking apparatus on their heads during the experiment, the world camera on the equipment recorded and archived the driver's perspective as a world video. The world video, together with the timestamps of each of its frames, was exported using the Pupil Labs Player application. In MATLAB, all frames of the world video were extracted for the 19th driver, who was chosen for evaluation, with LODs ranging from 0 to 3.
The gaze positions from the previous elaborations (Rawal, 2021) were matched with the frames of the world video whose timestamps were closest to the reference array (Ahmed, 2020). The new image processing approach was thus integrated with the experiment's existing data elaborations. First, an edge detection method was applied to each matched frame: the RGB frame was converted to a greyscale image, and edge pixels were marked with value 1. A selection window was constructed in MATLAB to choose the road borders from the frame's edges, defined as all pixels in the image with value 1. Many factors, including head movement and the vehicle's exact position and orientation along the circuit's track, affect the driver's perspective and hence the world camera frame images. Because the driver's perspective changes constantly along the simulation path, the selection window had to be updated so that the road border coordinates were selected correctly and the frames in which the gaze lies inside the road limits could be detected. For a better understanding of driver behavior along the circuit, one rectilinear segment and one curvilinear segment were chosen for analysis, with LODs ranging from 0 to 3. The primary purpose of the research is to offer an efficient methodology for processing eye-tracker data with image processing algorithms for driver behavior analysis, using various LODs of road scenarios in geometrically dissimilar curvilinear and rectilinear segments, for a better interpretation of complex driver behavior.
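The matching and edge-based road-limit test described above can be sketched as follows. The thesis implements these steps in MATLAB, and its exact edge detector and selection-window geometry are not given here, so the snippet below is a simplified Python/NumPy illustration: a gradient-threshold edge map stands in for the edge detector, and the selection window is reduced to a horizontal pixel range per frame (all function and variable names are hypothetical):

```python
import numpy as np

def nearest_frame_indices(gaze_ts, frame_ts):
    """For each gaze timestamp, return the index of the world-video
    frame with the closest timestamp (frame_ts sorted ascending)."""
    idx = np.searchsorted(frame_ts, gaze_ts)
    idx = np.clip(idx, 1, len(frame_ts) - 1)
    left, right = frame_ts[idx - 1], frame_ts[idx]
    # step back one index where the earlier frame is closer in time
    idx -= gaze_ts - left < right - gaze_ts
    return idx

def edge_map(gray, threshold=50):
    """Binary edge image: 1 where the horizontal intensity gradient is
    strong, 0 elsewhere (a stand-in for the thesis's edge detector)."""
    grad = np.abs(np.diff(gray.astype(int), axis=1))
    edges = np.zeros_like(gray, dtype=np.uint8)
    edges[:, 1:] = (grad > threshold).astype(np.uint8)
    return edges

def gaze_on_road(edges, gaze_x, gaze_y, window):
    """True if the gaze point lies between the leftmost and rightmost
    edge pixels on its image row, restricted to a selection window
    (x_min, x_max) that must track the road borders frame by frame."""
    x_min, x_max = window
    row = edges[int(gaze_y), x_min:x_max]
    cols = np.flatnonzero(row)
    if cols.size < 2:  # fewer than two borders found in the window
        return False
    left_border, right_border = cols[0] + x_min, cols[-1] + x_min
    return left_border <= gaze_x <= right_border
```

Updating `window` per frame mirrors the abstract's point that the selection window must follow the changing perspective; with a fixed window, spurious edges from buildings or signs would be mistaken for road borders.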
Similar Theses
- Emotion recognition process analysis by using eye tracker, sensor and application log data
Göz izleme cihazı, sensör ve uygulama verileri ile insanlarda duygu tanıma analizi
MAHİYE ÖZTÜRK
PhD
English
2019
Computer Engineering Sciences (Computer and Control); İstanbul Teknik Üniversitesi; Department of Computer Engineering
PROF. DR. ZEHRA ÇATALTEPE
- A system implementation for analyzing and tracking motile objects in biomedical images
Biyomedikal görüntülerde hareketli nesnelerin analizi ve takibi için bir sistem gerçeklemesi
HAMZA OSMAN İLHAN
PhD
English
2017
Computer Engineering Sciences (Computer and Control); Yıldız Teknik Üniversitesi; Department of Computer Engineering
PROF. DR. NİZAMETTİN AYDIN
- Alignment of eye tracker and camera data by using different methods in human computer interaction experiments
İnsan bilgisayar etkileşim deneylerinde göz izleme cihazı ve kamera verisinin farklı yöntemler ile hizalanması
LEYLA GARAYLI
Master's
English
2017
Computer Engineering Sciences (Computer and Control); İstanbul Teknik Üniversitesi; Department of Computer Engineering
PROF. DR. ZEHRA ÇATALTEPE
- Estimation of position and orientation with visual odometry for ground vehicles
Kara araçları için görsel odometri yöntemi ile pozisyon ve duruş tahmini
BURAK ALİ ARSLAN
Master's
English
2025
Computer Engineering Sciences (Computer and Control); İstanbul Teknik Üniversitesi; Department of Control and Automation Engineering
PROF. DR. FİKRET ÇALIŞKAN
- Sanal ortamda ortaokul öğrencilerinin güneş sistemindeki gözlem ve meraklarının göz takip cihazıyla incelenmesi
Investigation of secondary school students' observations and curiosities about the solar system with eye tracking device in virtual environment
BİRCAN KASIMOĞLU
Master's
Turkish
2025
Education and Training; Dokuz Eylül Üniversitesi; Department of Mathematics and Science Education
PROF. DR. SUAT TÜRKOGUZ