
A method to process eye-tracker data with image processing algorithms for driver behavior analysis

Sürücü davranış analizi için göz izleme verilerini görüntü işleme algoritmalarıyla işlemek için bir yöntem

  1. Thesis No: 916101
  2. Author: FURKAN AYDIN
  3. Advisors: PROF. DR. LORENZO MUSSONE, PROF. DR. GIANDOMENICO CARUSO
  4. Thesis Type: Master's
  5. Subjects: Civil Engineering
  6. Keywords: driving simulator, driver attentional data, driver physiological data, driver behavior, level of details (LOD), road safety, gaze position, driver reaction time, image processing
  7. Year: 2022
  8. Language: English
  9. University: Politecnico di Milano
  10. Institute: Institute abroad
  11. Department: Not specified.
  12. Discipline: Not specified.
  13. Number of Pages: 99

Abstract

Driver behavior is one of the most important factors identified as a cause of road crashes. In a well-conditioned traffic situation, a driving simulator has become the primary instrument for simulating and verifying the interaction between vehicle and driver. The research also seeks to determine the optimal Level of Detail (LOD) of the simulator scenarios that results in increased reliability. The data obtained with the driving simulator and eye-tracker devices of Politecnico di Milano's i.Drive laboratory form the basis of the research presented in this thesis. The driving simulator consists of a fixed seat, monitors, a steering wheel, and pedals for operating the car. The driving scenarios provide routes with varying environmental detail. Using the simulator, it is feasible to observe the driver's behavior while the LOD of the scenario varies and the road infrastructure remains unchanged. The experiment employs four levels of detail, ranging from a simple scenario (carriageway and road signs only) to a scenario quite close to reality (realistic buildings and all other elements such as trees, lamps, and pedestrians). This study proposes a novel method for properly interpreting the experiment based on the Pupil Labs eye-tracking equipment. The drivers' gaze positions were captured by the Pupil Labs eye tracker and elaborated in a prior work on this experiment (Rawal, 2021). Because the subjects wore the eye-tracking apparatus on their heads during the experiment, its world camera captured and archived the driver's perspective as a world video. The world video, together with the timestamp of each individual frame, was exported with the Pupil Labs Player application. In MATLAB, all frames of the world video were extracted for the 19th driver, who was selected for evaluation, with LODs ranging from 0 to 3.
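The pipeline pairs each recorded gaze sample with the world-video frame whose timestamp is closest. As a rough illustration of that nearest-timestamp matching (a sketch in Python rather than the MATLAB used in the thesis; the function and variable names are invented here):

```python
import bisect

def match_nearest_frames(gaze_ts, frame_ts):
    """For each gaze timestamp, return the index of the world-video
    frame whose timestamp is closest. frame_ts must be sorted."""
    matched = []
    for t in gaze_ts:
        i = bisect.bisect_left(frame_ts, t)
        if i == 0:
            matched.append(0)                      # before the first frame
        elif i == len(frame_ts):
            matched.append(len(frame_ts) - 1)      # after the last frame
        else:
            # Pick whichever neighbour of the insertion point is nearer.
            before, after = frame_ts[i - 1], frame_ts[i]
            matched.append(i if after - t < t - before else i - 1)
    return matched

# Example: three gaze samples matched against five frame timestamps.
frames = [0.00, 0.04, 0.08, 0.12, 0.16]
gaze = [0.01, 0.09, 0.15]
print(match_nearest_frames(gaze, frames))  # -> [0, 2, 4]
```

The same idea extends to matching against a shared reference timestamp array so that gaze data and video frames stay aligned with earlier elaborations of the experiment.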
The gaze positions from previous elaborations (Rawal, 2021) were matched with the frames of the world video whose timestamps were closest to the reference array (Ahmed, 2020). The integration of the new image-processing approach with the existing data elaborations of the experiment was thus maintained. First, an edge-detection method was applied to each matched frame to find the pixels with value 1 in the greyscale image converted from the RGB image. A selection window was constructed in MATLAB to choose the road borders from the image edges, defined as all pixels with value 1. A multitude of factors, including head movement and the vehicle's exact position and orientation along the circuit, affect the driver's perspective and hence the world-camera frames. Because the driver's perspective changes constantly along the simulation path, this selection window had to be updated so that the road-border coordinates were selected correctly and the frames in which the gaze lies inside the road limits could be detected, giving a better picture of driver behavior along the circuit. Specifically, one rectilinear and one curvilinear segment were chosen to be analyzed with LODs ranging from 0 to 3. The primary purpose of the research is to offer an efficient methodology for processing eye-tracker data with image-processing algorithms for driver behavior analysis, utilizing various LODs of road scenarios in geometrically dissimilar curvilinear and rectilinear segments, for a better interpretation of complex driver behavior.
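The edge-detection and selection-window step can be sketched as follows. This is a minimal illustration, not the thesis's MATLAB code: the gradient threshold, the window format, and the rule of taking the outermost edge pixels on the gaze's image row as the road borders are all simplifying assumptions made here for clarity.

```python
def edges_in_window(gray, win):
    """Edge pixels (value 1) inside a rectangular selection window.
    gray: 2-D list of 0..255 intensities; win: (row0, row1, col0, col1).
    A pixel counts as an edge when the horizontal intensity jump to its
    left neighbour exceeds a (hypothetical) threshold."""
    r0, r1, c0, c1 = win
    edges = set()
    for r in range(r0, r1):
        for c in range(max(c0, 1), c1):
            if abs(gray[r][c] - gray[r][c - 1]) > 128:
                edges.add((r, c))
    return edges

def gaze_inside_road(gray, win, gaze):
    """True when the gaze point lies between the leftmost and rightmost
    edge pixels (taken here as the road borders) on its image row."""
    gr, gc = gaze
    cols = sorted(c for (r, c) in edges_in_window(gray, win) if r == gr)
    return len(cols) >= 2 and cols[0] <= gc <= cols[-1]

# Toy frame: dark road (40) between bright shoulders (200), 4 rows.
frame = [[200, 200, 40, 40, 40, 200, 200] for _ in range(4)]
win = (0, 4, 0, 7)
print(gaze_inside_road(frame, win, (1, 3)))  # gaze between the borders
print(gaze_inside_road(frame, win, (1, 6)))  # gaze outside the road
```

Updating `win` frame by frame, as the abstract describes, keeps the border search locked onto the road region while the driver's perspective shifts along the circuit.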


Similar Theses

  1. Emotion recognition process analysis by using eye tracker, sensor and application log data

    Göz izleme cihazı, sensör ve uygulama verileri ile insanlarda duygu tanıma analizi

    MAHİYE ÖZTÜRK

    PhD

    English

    2019

    Computer Engineering Sciences - Computer and Control, İstanbul Teknik Üniversitesi

    Department of Computer Engineering

    PROF. DR. ZEHRA ÇATALTEPE

  2. A system implementation for analyzing and tracking motile objects in biomedical images

    Biyomedikal görüntülerde hareketli nesnelerin analizi ve takibi için bir sistem gerçeklemesi

    HAMZA OSMAN İLHAN

    PhD

    English

    2017

    Computer Engineering Sciences - Computer and Control, Yıldız Teknik Üniversitesi

    Department of Computer Engineering

    PROF. DR. NİZAMETTİN AYDIN

  3. Alignment of eye tracker and camera data by using different methods in human computer interaction experiments

    İnsan bilgisayar etkileşim deneylerinde göz izleme cihazı ve kamera verisinin farklı yöntemler ile hizalanması

    LEYLA GARAYLI

    Master's

    English

    2017

    Computer Engineering Sciences - Computer and Control, İstanbul Teknik Üniversitesi

    Department of Computer Engineering

    PROF. DR. ZEHRA ÇATALTEPE

  4. Estimation of position and orientation with visual odometry for ground vehicles

    Kara araçları için görsel odometri yöntemi ile pozisyon ve duruş tahmini

    BURAK ALİ ARSLAN

    Master's

    English

    2025

    Computer Engineering Sciences - Computer and Control, İstanbul Teknik Üniversitesi

    Department of Control and Automation Engineering

    PROF. DR. FİKRET ÇALIŞKAN

  5. Sanal ortamda ortaokul öğrencilerinin güneş sistemindeki gözlem ve meraklarının göz takip cihazıyla incelenmesi

    Investigation of secondary school students' observations and curiosities about the solar system with eye tracking device in virtual environment

    BİRCAN KASIMOĞLU

    Master's

    Turkish

    2025

    Education and Training, Dokuz Eylül Üniversitesi

    Department of Mathematics and Science Education

    PROF. DR. SUAT TÜRKOGUZ