Reverse associations by a machine

We introduced the product-of-ranks algorithm and showed that it can be successfully applied to the problem of computing associations when several words are given. To evaluate the algorithm, we used the Edinburgh Associative Thesaurus (EAT) as our gold standard, but assumed that it also makes sense to look at this data in the reverse direction, i.e. to predict the EAT stimuli from the EAT responses.
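
As a reminder of how the algorithm works, here is a minimal sketch: for each given word, we assume a precomputed list of candidate words ranked by association strength, and the predicted stimulus is the candidate whose ranks, multiplied across all given words, yield the smallest product. All names and the toy rankings below are illustrative; the real system derives its rankings from corpus co-occurrence statistics.

```python
from math import prod

def product_of_ranks(rank_tables, vocabulary):
    """Order candidates by the product of their ranks across all
    given words' association rankings; the smallest product wins."""
    worst = len(vocabulary)  # candidates missing from a table get the worst rank
    score = lambda c: prod(t.get(c, worst) for t in rank_tables)
    return sorted(vocabulary, key=score)

# Toy example: three given words (EAT responses), each with its own
# association ranking (1 = most strongly associated candidate).
rank_tables = [
    {"table": 1, "chair": 2, "wood": 3},
    {"chair": 1, "table": 2, "sit": 3},
    {"table": 1, "sit": 2, "chair": 3},
]
vocab = {"table", "chair", "wood", "sit"}
print(product_of_ranks(rank_tables, vocab)[:2])  # ['table', 'chair']
```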

Although this is a task that is difficult even for humans, and although we applied a conservative evaluation measure that insists on exact string matches between a predicted and a gold standard association, our algorithm was able to do so with a success rate of approximately 30% (54% for the Kent-Rosanoff vocabulary). We also showed that, up to a certain limit, the performance of the algorithm improves with increasing numbers of given words, and only thereafter degrades. The degradation is in line with our expectations because associative responses produced by only one or very few persons are often of an almost arbitrary nature and therefore not helpful for predicting the stimulus word[1].
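
The evaluation criterion itself is easy to state in code. The following sketch, with hypothetical helper and variable names, counts a prediction as correct only if the top-ranked candidate is string-identical to the gold-standard stimulus:

```python
def exact_match_accuracy(predictions, gold_stimuli):
    """predictions: the top-ranked candidate per test item;
    gold_stimuli: the corresponding EAT stimulus words."""
    hits = sum(1 for pred, gold in zip(predictions, gold_stimuli) if pred == gold)
    return hits / len(gold_stimuli)

print(exact_match_accuracy(["table", "dog"], ["table", "cat"]))  # 0.5
```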

Given the notorious difficulty of predicting experimental human data, we think that the performance of approximately 30% is quite good, especially in comparison to the human results shown in Table 4.6, but also in comparison to the related work mentioned in the introduction (11.54%) and to the results on single stimuli (17%). However, there is of course still room for improvement, even without moving to more sophisticated (but also more controversial) evaluation methods that allow alternative solutions. We intend to advance from the product-of-ranks algorithm to a product-of-weights algorithm. However, this requires a high-quality association measure with an appropriate value characteristic. One idea is to replace the log-likelihood scores by their significance levels. Another is to abandon conventional association measures and move on to empirical association measures as described in Tamir and Rapp [TAM 03]. These do not make any presuppositions about the distribution of words, but determine this distribution from the corpus. In any case, the current framework is well suited for measuring and comparing the suitability of any association measure. Further improvements might be possible by using neural vector space models (word embeddings), as investigated by some of the participants of the CogALex-IV shared task [RAP 14].
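
A minimal sketch of the envisaged product-of-weights variant might look as follows, assuming each given word contributes a table of association weights rather than ranks; the flooring of missing weights is our own assumption, since the text leaves the exact measure open:

```python
from math import prod

def product_of_weights(weight_tables, vocabulary, floor=1e-9):
    """weight_tables: one dict per given word mapping candidate -> weight
    (higher = more strongly associated). Candidates absent from a table
    get a small floor weight so one gap does not zero out the product."""
    score = lambda c: prod(t.get(c, floor) for t in weight_tables)
    return sorted(vocabulary, key=score, reverse=True)
```

Unlike the rank product, this variant is sensitive to how much stronger one association is than another, which is precisely why it needs a measure with a well-behaved value characteristic.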

Concerning applications, we see a number of possibilities: one is the tip-of-the-tongue problem, where a person cannot recall a particular word but can nevertheless think of some of its properties and associations. In this case, descriptors for the properties and associations could be fed into the system in the hope that the target word comes up as one of the top associations, from which the person can choose.
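
For illustration, such a lookup could reuse the product-of-ranks scheme directly; the descriptor words and toy rankings below are invented for the example:

```python
from math import prod

def rank_candidates(rank_tables, vocabulary):
    worst = len(vocabulary)
    return sorted(vocabulary, key=lambda c: prod(t.get(c, worst) for t in rank_tables))

# Properties the person can recall: "yellow", "fruit", "curved".
descriptor_rankings = [
    {"banana": 1, "lemon": 2, "sun": 3},
    {"banana": 1, "apple": 2, "lemon": 3},
    {"banana": 1, "bow": 2, "moon": 3},
]
vocab = {"banana", "lemon", "sun", "apple", "bow", "moon"}
print(rank_candidates(descriptor_rankings, vocab)[:3])  # 'banana' comes out on top
```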

Another application is in information retrieval, where the system can help to sensibly expand a given list of search words, which is in turn used to conduct a search. A more ambitious (but computationally expensive) approach would be to consider the (salient words in the) documents to be retrieved as our lists of given words, and to predict the search words from these using the product-of-ranks algorithm.
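
As a rough illustration of the first, less expensive variant, the query could be extended with candidates that are strongly associated with several of the given search words; the simple inverse-rank voting below is our own stand-in for the actual scoring:

```python
def expand_query(query, association_lists, k=2):
    """Append the k candidates with the best summed inverse rank."""
    scores = {}
    for ranking in association_lists:
        for word, rank in ranking.items():
            if word not in query:
                scores[word] = scores.get(word, 0.0) + 1.0 / rank
    best = sorted(scores, key=scores.get, reverse=True)[:k]
    return list(query) + best

query = ["jaguar", "speed"]
assoc = [
    {"car": 1, "cat": 2, "fast": 3},    # associations of "jaguar"
    {"fast": 1, "car": 2, "limit": 3},  # associations of "speed"
]
print(expand_query(query, assoc))  # ['jaguar', 'speed', 'car', 'fast']
```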

A further application is in multiword semantics. Here, a fundamental question is whether a particular multiword expression is of compositional or of contextual nature. The current system could possibly help to provide a number of quantitative measures relevant for answering the following questions:

1) Can the components of a multiword unit predict each other?

2) Can each component of a multiword unit be predicted from its surrounding content words?

3) Can the full multiword unit be predicted from its surrounding content words?

The answers to these questions might help us to decide whether a given multiword unit is of compositional or of contextual nature, and to classify various types of multiword units. A sketch of how such measures could be computed follows below.
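
The following sketch expresses the three measures as the rank at which the target is predicted (lower = more predictable); the predict function stands in for the reverse-association system, and the toy predictor is purely illustrative:

```python
def rank_of(target, given_words, predict):
    """1-based position of `target` in the prediction for `given_words`."""
    predictions = predict(given_words)
    return predictions.index(target) + 1 if target in predictions else None

def multiword_measures(components, context_words, unit, predict):
    return {
        # 1) Can the components predict each other?
        "component_from_components": [
            rank_of(c, [o for o in components if o != c], predict) for c in components
        ],
        # 2) Can each component be predicted from the surrounding content words?
        "component_from_context": [rank_of(c, context_words, predict) for c in components],
        # 3) Can the full unit be predicted from the surrounding content words?
        "unit_from_context": rank_of(unit, context_words, predict),
    }

# Toy predictor standing in for the real system.
toy_predict = lambda given: ["hot", "dog", "hot dog", "mustard"]
print(multiword_measures(["hot", "dog"], ["mustard", "bun"], "hot dog", toy_predict))
```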

The last application we would like to propose here is natural language generation (or any application that requires it, e.g. machine translation or speech recognition). If one word in a sentence is missing or uncertain, we can try to predict this word by considering all other content words in the sentence (or a somewhat wider context) as our input to the product-of-ranks algorithm.
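
A minimal sketch of this use, again with invented rankings and a toy vocabulary:

```python
from math import prod

def predict_missing(content_words, rankings, vocabulary):
    """rankings: given word -> {candidate: rank}; best candidates first."""
    worst = len(vocabulary)
    tables = [rankings[w] for w in content_words if w in rankings]
    return sorted(vocabulary, key=lambda c: prod(t.get(c, worst) for t in tables))

rankings = {
    "doctor": {"hospital": 1, "nurse": 2, "patient": 3},
    "nurse": {"hospital": 1, "doctor": 2, "patient": 3},
}
vocab = {"hospital", "patient", "party"}
# "The doctor and the nurse work at the ___."
print(predict_missing(["doctor", "nurse"], rankings, vocab)[0])  # 'hospital'
```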

From a cognitive perspective, the hope is that such experiments might lead to some progress in answering a fundamental question: is human language generation governed by associations, i.e. can the next content word of an utterance be considered an association with the representations of the content words already activated in the speaker’s memory?

[1] Such associations might reflect very specific experiences of a test person.
 