# Eye Tracking Data

## 1. Overview

SightLab exports eye tracking data including:

- The x, y, z coordinates of the gaze intersection point (combined and per eye)
- Rotational values for each eye (left and right)

## 2. Coordinate System

### 2.1 Intersect Point Coordinates

- **System:** 3D Cartesian coordinate system (X, Y, Z)
- **Origin:** Center of the virtual environment
- **Axes:**
  - Positive X extends to the right
  - Positive Y extends upward
  - Positive Z extends outward from the screen
- **Units:** Meters
- **Reference:** Coordinates are relative to the parent object's coordinate system
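As a quick illustration of these conventions (not SightLab's API — the point and viewer position below are made-up values), a gaze intersection point in this coordinate system can be combined with the main view position to get the gaze distance in meters:

```python
import math

# Hypothetical gaze intersection point in world coordinates (meters):
# +X right, +Y up, +Z outward from the screen.
gaze_point = (0.5, 1.6, 2.0)

# Hypothetical main view (viewer) position, also in meters.
view_position = (0.0, 1.7, 0.0)

# Straight-line distance from the viewer to the gaze intersection point.
gaze_distance = math.dist(view_position, gaze_point)
print(round(gaze_distance, 3))  # ~2.064 m
```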

### 2.2 Eye Rotation

- Represents the rotational orientation of each eye
- Measured in degrees
- Represented as Euler angles
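To show what these angles encode, the sketch below converts yaw/pitch in degrees to a forward gaze vector. The axis convention here (yaw about +Y, positive pitch looking down, rest pose along +Z) is a common one but an assumption on our part — verify it against your runtime's Euler documentation before relying on it:

```python
import math

def euler_to_forward(yaw_deg, pitch_deg):
    """Convert yaw/pitch in degrees to a forward unit vector.

    Assumed convention: yaw rotates about +Y (positive = turn right),
    pitch rotates about +X (positive = look down), rest pose looks
    along +Z. Check your runtime's Euler conventions.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

print(euler_to_forward(0, 0))   # looking straight ahead: ~(0, 0, 1)
print(euler_to_forward(90, 0))  # looking right: ~(1, 0, 0)
```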

## 3. Data Types

### 3.1 Gaze Intersection Data

- **Combined gaze:** x, y, z coordinates of where the combined gaze intersects with objects in the scene
- **Individual eye gaze:** x, y, z coordinates for each eye's gaze intersection

### 3.2 Eye Rotation Data

- **Left eye rotation:** Rotational values for the left eye in Euler angles
- **Right eye rotation:** Rotational values for the right eye in Euler angles
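One convenient way to hold these per-sample values is a small record type. The field names below are hypothetical, not SightLab's actual export column names:

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class GazeSample:
    """Illustrative container for one exported sample; field names
    are hypothetical, not SightLab's actual column names."""
    time: float            # seconds since trial start
    combined_point: Vec3   # combined gaze intersection (x, y, z), meters
    left_point: Vec3       # left eye gaze intersection
    right_point: Vec3      # right eye gaze intersection
    left_euler: Vec3       # left eye rotation (Euler angles, degrees)
    right_euler: Vec3      # right eye rotation (Euler angles, degrees)

sample = GazeSample(
    time=0.016,
    combined_point=(0.5, 1.6, 2.0),
    left_point=(0.49, 1.6, 2.0),
    right_point=(0.51, 1.6, 2.0),
    left_euler=(12.0, -3.0, 0.0),
    right_euler=(11.0, -3.0, 0.0),
)
print(sample.combined_point)  # (0.5, 1.6, 2.0)
```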

## 4. Data Collection Process

The eye tracking data is collected and processed in the `updateGazeObject` function within the "sightlab" module. Here's a breakdown of the process:

1. Retrieve the eye tracker's transformation matrix.
2. Combine it with the main view matrix to get the gaze direction in the virtual environment.
3. Create a line representing the gaze direction.
4. Check for intersection with objects in the scene.
5. Update the gaze point object's position if there's a valid intersection.
6. Record the time, intersection point, main view position, and other relevant flags.
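The core of steps 3–4 can be sketched outside of Vizard with plain ray math. The function below is a hypothetical stand-in for a scene intersection test like `viz.intersect()`, reduced to a single plane for illustration:

```python
def intersect_ray_plane(origin, direction, plane_z):
    """Intersect a ray with the plane z = plane_z.

    Simplified stand-in for a scene intersection test; returns the
    hit point or None. All coordinates are in meters.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        return None  # ray is parallel to the plane
    t = (plane_z - oz) / dz
    if t < 0:
        return None  # plane is behind the ray origin
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# Gaze ray from eye height, looking straight ahead at a wall 2 m away.
hit = intersect_ray_plane((0.0, 1.7, 0.0), (0.0, 0.0, 1.0), 2.0)
print(hit)  # (0.0, 1.7, 2.0)
```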

## 5. Code Explanation

```python
def updateGazeObject(self):
    # Get the raw eye tracker and its current transformation matrix
    self.eyeTracker = vizconnect.getTracker('eye_tracker').getRaw()
    gazeMat = self.eyeTracker.getMatrix()
    # Transform into the main view's coordinate system
    gazeMat.postMult(viz.MainView.getMatrix())
    # Cast a gaze ray and test it against the scene
    line = gazeMat.getLineForward(1000)
    info = viz.intersect(line.begin, line.end)
    if info.valid and self.gazePointObject is not None:
        self.gazePointObject.setPosition(info.point)
```

1. The function retrieves the eye tracker's matrix, which represents its position and orientation.
2. It combines this with the main view matrix to get the gaze direction in the virtual environment.
3. A line is created to represent the gaze direction.
4. The `viz.intersect()` function checks if this line intersects with any objects in the scene.
5. If there's a valid intersection, the gaze point object's position is updated to the intersection point.

### Individual Eye Data

```python
def updateTrialData(self, currentTime, point, eyeTracker, maxDistance, label):
    # Left eye: get its gaze matrix and Euler angles, then transform
    # into the main view's coordinate system and cast a gaze ray
    leftGazeMat = eyeTracker.getMatrix(viz.LEFT_EYE)
    leftEuler = leftGazeMat.getEuler()
    leftGazeMat.postMult(viz.MainView.getMatrix())
    line = leftGazeMat.getLineForward(maxDistance)
    leftInfo = viz.intersect(line.begin, line.end)
    #leftEuler = vizmat.AngleBetweenVector(viz.MainView.getMatrix().getForward(), leftGazeMat.getForward())
    if leftInfo.valid:
        left_eye = leftInfo.point
    else:
        # No intersection: record the point at maxDistance along the gaze line
        left_eye = vizmat.MoveAlongVector(line.begin, line, maxDistance)

    # Right eye: same procedure
    rightGazeMat = eyeTracker.getMatrix(viz.RIGHT_EYE)
    rightEuler = rightGazeMat.getEuler()
    rightGazeMat.postMult(viz.MainView.getMatrix())
    line = rightGazeMat.getLineForward(maxDistance)
    #rightEuler = vizmat.AngleBetweenVector(viz.MainView.getMatrix().getForward(), rightGazeMat.getForward())
    rightInfo = viz.intersect(line.begin, line.end)
    if rightInfo.valid:  # originally checked leftInfo.valid, a copy-paste bug
        right_eye = rightInfo.point
    else:
        right_eye = vizmat.MoveAlongVector(line.begin, line, maxDistance)
```

1. Retrieves gaze matrices for the left and right eyes from the eye tracker.
2. Extracts the Euler angles (orientation) from these gaze matrices.
3. Transforms the gaze matrices to the main view's coordinate system.
4. Creates forward gaze lines from the eye positions to a specified maximum distance.
5. Performs intersection tests to check if these gaze lines intersect with any objects in the virtual environment.
6. Records the intersection points if valid; otherwise, records the points at the maximum distance along the gaze lines.
7. Updates the trial data with the calculated gaze intersection points and Euler angles for both eyes.
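The fallback in step 6 (recording a point at the maximum distance when no intersection is found) is just a move along the gaze direction. Below is a minimal stand-in for `vizmat.MoveAlongVector`, assuming it normalizes the direction vector — verify that against the Vizard documentation:

```python
import math

def move_along_vector(begin, vector, distance):
    """Return the point `distance` meters from `begin` along `vector`.

    Stand-in for vizmat.MoveAlongVector, assuming the direction is
    normalized first; verify against the Vizard docs.
    """
    length = math.sqrt(sum(c * c for c in vector))
    return tuple(b + c / length * distance for b, c in zip(begin, vector))

# No valid intersection: record the point 100 m out along the gaze line.
fallback = move_along_vector((0.0, 1.7, 0.0), (0.0, 0.0, 1.0), 100.0)
print(fallback)  # (0.0, 1.7, 100.0)
```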

## 6. Data Usage and Interpretation

- The x, y, z coordinates of the intersection point represent where the user's gaze meets objects in the virtual environment
- Combined gaze data can be used for general gaze tracking, while individual eye data allows for more detailed analysis
- Eye rotation data can be used to analyze eye movements and potentially detect specific eye behaviors
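As a small example of working with individual eye data (the helper names and sample points below are our own, not SightLab's), the left and right intersection points can be averaged into a rough combined point, and their separation can flag unreliable samples:

```python
def midpoint(a, b):
    """Average of two gaze intersection points; a rough stand-in for
    a combined gaze point when only per-eye intersections are recorded."""
    return tuple((x + y) / 2 for x, y in zip(a, b))

def disparity(a, b):
    """Euclidean distance between left- and right-eye intersection points.
    Large values can flag unreliable samples (e.g., during blinks)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

left = (0.49, 1.6, 2.0)
right = (0.51, 1.6, 2.0)
print(midpoint(left, right))              # ~(0.5, 1.6, 2.0)
print(round(disparity(left, right), 3))   # 0.02
```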

## 7. Additional Data

- Additional data based on the VR system in use (e.g., pupil diameter, eye openness)
- **Fixation State:** Indicates whether the gaze is in a fixation or saccade state
- **Saccade Angle:** The angle of eye movement during a saccade
- **Saccade Velocity:** Average and peak velocity during a saccade
- **View Counts:** The count of views or gaze events for each object in a scene
- **Gaze Duration:** Total gaze duration and average gaze duration per object, based on total gaze time divided by the number of gaze events
- **Time to First Fixation:** The time it takes for a participant to first fixate on a specific area of interest after a stimulus is presented
- **Fixation Sequence Analysis:** The order in which different areas of interest are fixated upon, which can indicate the cognitive process or strategy employed by the viewer
- Heatmaps
- Scan paths
- Walk paths
- Interactive playback
- **Area of Interest (AOI) Analysis:** Defining specific regions within the visual scene to examine how much and how long subjects look at these areas
- **Gaze Contingent Display:** Changing what is shown on the screen based on where the user is looking, often used in dynamic experiments
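Several of the metrics above (view counts, total and average gaze duration, time to first fixation) reduce to simple aggregation over per-sample records of which object the gaze hit. A sketch with entirely hypothetical data — object labels, timestamps, and the 10 Hz sample rate are made up for illustration:

```python
from collections import defaultdict

# Hypothetical (timestamp_seconds, gazed_object) samples at ~10 Hz;
# None means the gaze hit no tracked object.
samples = [
    (0.0, None), (0.1, None), (0.2, "lamp"), (0.3, "lamp"),
    (0.4, "chair"), (0.5, "chair"), (0.6, "chair"), (0.7, "lamp"),
]
dt = 0.1  # sample interval in seconds

dwell = defaultdict(float)  # total gaze duration per object
first_fixation = {}         # time of first fixation per object
events = defaultdict(int)   # number of distinct gaze events per object
previous = None
for t, obj in samples:
    if obj is not None:
        dwell[obj] += dt
        first_fixation.setdefault(obj, t)
        if obj != previous:  # new gaze event begins
            events[obj] += 1
    previous = obj

print({o: round(v, 2) for o, v in dwell.items()})  # {'lamp': 0.3, 'chair': 0.3}
print(first_fixation)                              # {'lamp': 0.2, 'chair': 0.4}
print(dict(events))                                # {'lamp': 2, 'chair': 1}
# Average gaze duration per event:
print({o: round(dwell[o] / events[o], 2) for o in dwell})  # {'lamp': 0.15, 'chair': 0.3}
```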

For more information, see the eye tracking metrics page.

Sample Trial Data