
Analysis of reading behavior based on Part-of-Speech (POS) through sentence completion tasks

Abstract

In this paper, we evaluate how people react to the information in an incomplete sentence, based on eye-movement metrics recorded while they try to fill in the missing word. The experiment also has potential relevance for psycholinguistics. However, this paper only examines the relationship between part of speech (POS) and the missing word within a single sentence, i.e. how syntactic processing copes with an incomplete grammatical dependency for a given POS. We found that fixation metrics behave differently in the missing-word scenario than in ordinary reading tasks. In an eye-tracking experiment, eye measurements were obtained from 19 subjects on two groups of missing-word sentences with different POS: the first group contained 30 sentences with a missing Noun, and the other group 30 sentences with a missing Verb. Gaze metrics for each AOI (the tokens of a sentence) were collected across the 19 subjects, and a qualitative analysis was then conducted based on the mean total fixation duration per AOI.

Figure: Fixation heatmap from 19 subjects on one missing-word sentence.
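
The aggregation step described above (mean total fixation duration per AOI, averaged over all subjects) can be sketched as follows. This is only an illustrative example: the file name fixations.csv and the column names subject, sentence_id, aoi, and total_fixation_ms are assumptions, not the actual export format of the eye tracker used in the experiment.

```python
import pandas as pd

# Hypothetical input: one row per (subject, sentence, AOI) with the total
# fixation duration (in ms) that the subject spent on that AOI.
# Column names are assumptions; the eye tracker's exporter may differ.
fixations = pd.read_csv("fixations.csv")  # subject, sentence_id, aoi, total_fixation_ms

# Mean total fixation duration per AOI, averaged over the 19 subjects,
# computed separately for every sentence.
mean_per_aoi = (
    fixations
    .groupby(["sentence_id", "aoi"])["total_fixation_ms"]
    .mean()
    .reset_index(name="mean_total_fixation_ms")
)

print(mean_per_aoi.head())
```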

Stimuli and experiment setup

For the stimuli content, we constructed 60 different sentences, each with one missing word. Half of the 60 sentences had a Noun as the missing word, the other half a Verb. We were careful not to include too many prepositional phrases (PPs), since ambiguity was not something we wanted to deal with. Sentence length varies: the shortest sentences are 5 words long and the longest is 15 words long. Each stimulus consists of a single sentence, except for a few cases where the context required two closely related sentences. Some of the sentences contained emotion words, mainly because we wanted to see how subjectivity or personality changes gaze duration on those specific words.
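
To make the design concrete, a stimulus list following this description could be organised as in the sketch below. The records shown are invented placeholders, not the actual stimuli (the real sentence list is given in the appendix), and the field names are assumptions as well.

```python
# Illustrative only: invented placeholder records, not the actual stimuli.
stimuli = [
    {"id": 1,  "section": "Noun", "missing_pos": "NOUN", "length_words": 8,
     "sentence": "The chef chopped the ___ on the wooden board."},
    {"id": 31, "section": "Verb", "missing_pos": "VERB", "length_words": 8,
     "sentence": "The children ___ in the garden all afternoon."},
]

# Per-item checks mirroring the design constraints described above
# (the full design has 60 sentences: 30 Noun-missing and 30 Verb-missing,
# with lengths between 5 and 15 words).
assert all(5 <= s["length_words"] <= 15 for s in stimuli)
assert all(s["missing_pos"] in {"NOUN", "VERB"} for s in stimuli)
```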

For the construction of the experiment, we placed a notification page at the beginning, briefing participants on the upcoming sections; participants could press any key to start. For the stimuli sections, we put each sentence on its own page and split the pages into two parts, Noun and Verb, so each part has 30 pages. On each page, an enlarged sentence is placed in the center of the screen. We also manually increased the spacing between the words of each sentence to simplify the later AOI region selection. Between the two sections we placed a notification screen whose colors contrast strongly with the stimuli, indicating the break between sections and the start of the next one. Participants could press any key to advance to the next sentence or section. While they read a sentence, a voice recorder captured their speech: participants were required to say aloud the word they would fill into the missing place, as explained by the conductor and in the consent form. After the last stimulus sentence, a black notification page was shown to mark the end of the experiment: "Click to finish and please notify the conductor before your leave...".
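
A minimal sketch of this presentation flow is given below. It assumes PsychoPy as the presentation software, which is not stated in the original description; the window settings, text sizes, and the stimuli.csv file are likewise assumptions used purely for illustration.

```python
from psychopy import visual, event, core
import csv

# Assumed stimulus file: one row per sentence, with a "section" column
# (Noun or Verb) and the sentence text with extra spacing between words.
with open("stimuli.csv", newline="", encoding="utf-8") as f:
    stimuli = list(csv.DictReader(f))

win = visual.Window(fullscr=True, color="white", units="height")

def show_and_wait(text, color="black"):
    """Draw one centered text page and wait for any key press."""
    visual.TextStim(win, text=text, color=color, height=0.05, wrapWidth=1.4).draw()
    win.flip()
    event.waitKeys()

# Briefing page shown before the experiment starts.
show_and_wait("In each trial, read the sentence and say aloud the word you "
              "would fill into the gap.\n\nPress any key to start.")

for section in ("Noun", "Verb"):
    # Break screen with strongly contrasting color between the two sections.
    show_and_wait(f"Short break.\nNext section: {section} sentences.\n"
                  "Press any key to continue.", color="red")
    for row in stimuli:
        if row["section"] != section:
            continue
        # One enlarged sentence centered on its own page; any key advances.
        show_and_wait(row["sentence"])

# Black end-of-experiment page.
win.color = "black"
show_and_wait("Click to finish and please notify the conductor before your leave...",
              color="white")

win.close()
core.quit()
```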

The voice recording serves to verify that the participant is focusing on the task; it is used only as a check when later selecting gaze data and plays no role in data collection or analysis itself. The list of our experiment stimulus sentences can be seen in the appendix (App).

For more

This experiment was conducted as the exam project for the eye-tracking experiment module of the Cognitive Science 3 course (2017) at the University of Copenhagen. Jintao Ren and Peter Pribil designed the experiment. Future research is delayed indefinitely due to lack of funds and time.

