Can the Human Association Norm Evaluate Machine-Made Association Lists?

This chapter presents a comparison of a word association norm created in a psycholinguistic experiment with association lists generated by algorithms operating on text corpora. We compare lists generated by the Church-Hanks algorithm with lists generated by the LSA algorithm. We discuss how far these automatically generated lists reflect the semantic dependencies present in the human association norm, and argue that future comparisons should take into account a deeper analysis of the human association mechanisms observed in the association lists.

Introduction

For more than three decades, there has been a commonly shared belief that word co-occurrences retrieved from a large text collection may define the lexical meaning of a word. Although there are suggestions that co-occurrences retrieved from texts [RAP 02, WET 05] reflect the texts' contiguities, there also exist suggestions that algorithms such as LSA are unable to distinguish between co-occurrences which are corpus-independent semantic dependencies (elements of a semantic prototype) and co-occurrences which are corpus-dependent factual dependencies [WAN 05, WAN 08]. We shall adopt the second view to show that existing statistical algorithms use mechanisms which improperly filter word co-occurrences retrieved from texts. To prove this supposition, we shall compare the human association list to the association lists retrieved from text by three different algorithms: the Church-Hanks algorithm [CHU 90], the Latent Semantic Analysis (LSA) algorithm [DEE 90] and the Latent Dirichlet Allocation (LDA) algorithm [BLE 03].

Chapter written by Michal Korzycki, Izabela Gatkowska and Wieslaw Lubaszewski.
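The Church-Hanks measure scores a word pair by pointwise mutual information: how much more often the two words co-occur within a small text window than their independent frequencies would predict. A minimal sketch of this idea (the toy corpus and window size are illustrative, not from the chapter's experiments):

```python
import math
from collections import Counter

def association_ratio(tokens, w1, w2, window=5):
    """PMI-style association score between w1 and w2, counting
    occurrences of w2 within `window` tokens after w1
    (in the spirit of Church & Hanks [CHU 90])."""
    n = len(tokens)
    unigrams = Counter(tokens)
    pair = 0
    for i, tok in enumerate(tokens):
        if tok == w1 and w2 in tokens[i + 1:i + 1 + window]:
            pair += 1
    if pair == 0:
        return float("-inf")
    p_xy = pair / n
    p_x = unigrams[w1] / n
    p_y = unigrams[w2] / n
    return math.log2(p_xy / (p_x * p_y))

# toy corpus, for illustration only
tokens = "bread and butter bread with butter jam on bread".split()
score = association_ratio(tokens, "bread", "butter")
```

A positive score indicates that the pair co-occurs more often than chance; in this toy corpus, bread-butter scores log2(3).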

LSA is a word/document matrix rank reduction algorithm which extracts word co-occurrences from within a text. As a result, each word in the corpus is related to all co-occurring words and to all texts in which it occurs. This forms the basis for an associative text comparison. The applicability of the LSA algorithm is the subject of various lines of research, ranging from text content comparison [DEE 90] to the analysis of human association norms [ORT 12]. However, there is still little interest in studying the linguistic significance of machine-made associations.
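The rank reduction at the core of LSA can be sketched as a truncated singular value decomposition of the word/document count matrix; words that co-occur in similar documents end up close in the reduced space. The matrix below is an invented toy example, not data from the chapter's corpora:

```python
import numpy as np

# toy word-by-document count matrix (rows: words, cols: documents);
# the counts are invented for illustration
X = np.array([
    [2.0, 0.0, 1.0, 0.0],   # "bread"
    [1.0, 0.0, 2.0, 0.0],   # "butter"
    [0.0, 3.0, 0.0, 1.0],   # "roof"
    [0.0, 1.0, 0.0, 2.0],   # "wall"
])

# LSA: keep only the top-k singular dimensions of the SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]          # word vectors in the latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# words sharing documents become similar; words that never
# co-occur stay nearly orthogonal
sim_bread_butter = cosine(word_vecs[0], word_vecs[1])
sim_bread_roof = cosine(word_vecs[0], word_vecs[2])
```

Note that similarity here reflects only distributional overlap in the corpus, which is exactly why, as argued above, LSA cannot by itself separate corpus-independent semantic dependencies from corpus-dependent factual ones.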

It seems obvious that a comparison of the human association norm and a machine-created association list should be the basis of such a study, and we can find some preliminary studies based on such a comparison [WAN 05, WET 05, WAN 08], the results of which show that the problem needs further investigation. It is worth noting that all of the studies referred to used stimulus-response association strength to make the comparison. The point is that, if we compare the association strength computed for a particular stimulus-response pair in association norms for different languages, we find that it differs, e.g. butter is the strongest response (0.54) to the stimulus bread in the Edinburgh Associative Thesaurus (EAT), but in the Polish association norm described below the association chleb "bread" - masło "butter" is not the strongest (0.075). In addition, we can observe that association strength may not distinguish semantic from non-semantic associations, e.g. roof 0.04, Jack 0.02 and wall 0.01, which are all responses to the stimulus house in EAT.

Therefore, we decided to test machine-made association lists against the human association norm while excluding association strength. For the comparison, we use the norm produced by Polish speakers during a free word association experiment [GAT 14], hereinafter referred to as the author's experiment. Because both LSA and LDA use the whole text to generate word associations, we also tested human associations against the association list generated by the Church-Hanks algorithm [CHU 90], which operates on a sentence-like text window. We also used three different text corpora.
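In a free-association norm, association strength is conventionally the proportion of participants who gave a particular response to a stimulus. A minimal sketch with invented response counts (the 0.54 and 0.075 figures quoted above come from EAT and the Polish norm; the raw counts below are hypothetical):

```python
from collections import Counter

def association_strengths(responses):
    """Stimulus-response strength = share of participants giving
    that response to the stimulus (hypothetical counts)."""
    total = len(responses)
    return {r: c / total for r, c in Counter(responses).items()}

# hypothetical responses of 50 participants to the stimulus "bread"
responses = ["butter"] * 27 + ["jam"] * 10 + ["flour"] * 8 + ["knife"] * 5
strengths = association_strengths(responses)
```

Because such proportions depend on the participant pool and language, comparing raw strengths across norms is unreliable, which motivates the decision above to exclude association strength from the comparison.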
