Many models have been proposed of how we process words on the page in order to understand them. Many of these models are flawed and do not incorporate what actually happens when we read: they build a theory that fits their own understanding and then check whether it matches the data, and a lot of them don't.
What happens when we read:
Do we read letters all at once or one by one?
- We tend to read letters all at once, as shown in studies of the word length effect. If we read letters one by one, longer words should take longer to read. This is not the case: longer words, regardless of how frequently they are encountered, are recognised in about the same time.
- Non-words do show a length effect, but this is because they flout the rules of our language, so it takes longer to recognise letters arranged in an unfamiliar pattern.
- Another effect seen when we read words is the neighbourhood effect. It can lengthen reaction time (RT), because a word may have many similar-looking words (neighbours) that interfere with recognition. This is seen especially for low-frequency words, which may be because more of their neighbours are high-frequency words that interfere with recognition.
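The neighbourhood idea can be made concrete with a toy measure along the lines of Coltheart's N: a word's neighbours are same-length words differing by exactly one letter. A minimal sketch, assuming a tiny invented lexicon (the function names and word list are mine, not from any study):

```python
# Toy sketch of orthographic neighbourhood size (Coltheart's N style):
# neighbours are same-length words that differ in exactly one position.
# The mini-lexicon is illustrative, not a real frequency-normed corpus.

def is_neighbour(a: str, b: str) -> bool:
    """True if a and b are the same length and differ in exactly one position."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

def neighbourhood(word: str, lexicon: list[str]) -> list[str]:
    return [w for w in lexicon if is_neighbour(word, w)]

lexicon = ["cat", "cot", "cut", "bat", "hat", "car", "dog"]
print(neighbourhood("cat", lexicon))  # ['cot', 'cut', 'bat', 'hat', 'car']
```

A word like "cat", with many neighbours, would be predicted to suffer more interference than a word with few.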
Is there feedback between letters and words to help recognition? If so which way does the feedback flow?
- Words help us recognise letters
- This is seen in studies that presented stimuli to participants in different conditions, for example:
- K/d (single-letter condition)
- Participants were asked to report when they saw the letter D
- They were quickest to detect the letter when it was presented within a word
- and slowest when it was presented on its own
- This suggests a context effect that enables us to recognise letters faster within words.
Do we get better at recognition the more we are exposed to a word?
- To measure this, a lexical decision task lets us see how a word is processed and therefore the effect of exposure on recognition.
- More errors are made when words are ordered by absolute frequency, and fewer when they are ordered by rank
- Rank ordering better matches our recognition performance, suggesting this is how words are ranked in the mind for recognition
- Repetition priming also shows that high-frequency words are recognised more easily than words we see less often
- Somewhere in the mind we seem to keep a store of frequently seen words and draw on it to detect them later (which backs up the finding that words help recognition of letters)
What do we look out for in recognition?
- Syllables? – tested using colour boundaries, comparing an/vil with anv/il; more mistakes were made with the second split
- Morphemes? – different primes were used (t / tea / teasp); identification was better with "tea" as the prime, but not with "teasp", which has more letters but less morphological meaning
- Associations? – if a prime is identical to the target word, we detect the target easily. If the words are associated, such as salt/pepper, recognition is also made quicker. This suggests a semantic, contextual component to reading
- Letters? – as shown above, more letters does not mean quicker recognition. However, if a target shares letters with its prime, it is recognised more quickly than if they share none. There may be a bank of letters to which we become more and more sensitive through exposure.
What do the models say?
There are three main models:
1. Forster's model
- Good at explaining that words are ranked in the mind into categories (called bins in this model)
- Also explains that we recognise words via shared letters
- Claims that we use letters alone to recognise words, which is wrong
- Main problem: it is too powerful – it can explain everything and allows no exceptions, whereas human readers show them
2. Logogen model
- Explains that we look at all letters together, supporting the finding of no word length effect
- Explains that words have thresholds and that repetition lowers them, speeding up recognition
- Main problem: hard to test, as it appeals to a cognitive/semantic element of context influencing words that cannot be measured
- Frequency and context are said to affect recognition in the same way, but the results are very mixed. Morton (the Logogen model's author) concluded that frequency does not affect the logogen itself but affects cognitive factors instead, which doesn't really explain what happens in real life
- The model also suggests that all words, regardless of modality (reading, speaking, listening, writing), affect one another, which is not seen in real life
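The threshold idea can be sketched in a few lines of code. The numbers here (starting threshold, decrement, floor) are invented for illustration and are not taken from Morton's work:

```python
# Minimal logogen-style sketch (names and numbers are illustrative).
# Each word unit accumulates evidence and "fires" (is recognised) once the
# count crosses its threshold; each recognition lowers the threshold,
# modelling repetition priming.

class Logogen:
    def __init__(self, word: str, threshold: float = 10.0):
        self.word = word
        self.threshold = threshold

    def time_to_recognise(self, evidence_per_step: float = 1.0) -> int:
        """Steps of evidence needed to cross the threshold (a stand-in for RT)."""
        steps = 0
        count = 0.0
        while count < self.threshold:
            count += evidence_per_step
            steps += 1
        # Recognition lowers the threshold, so the next encounter is faster.
        self.threshold = max(2.0, self.threshold - 3.0)
        return steps

unit = Logogen("table")
first = unit.time_to_recognise()   # 10 steps on first exposure
second = unit.time_to_recognise()  # 7 steps: primed by the first exposure
```

High-frequency words would simply be logogens whose thresholds have been lowered many times, which is why they are recognised faster.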
3. Interactive activation and competition model
- Again, like the Logogen model, the IAC model says we read letters in parallel, supporting the finding of no word length effect.
- Unlike the other models, the IAC model includes feedback from words that helps us recognise letters, as seen in the study where participants detected a letter most quickly when it appeared in a word.
- Main problem: it assumes words are ranked by absolute frequency, which is not what is seen in real life. Furthermore, the IAC model gives all words the same threshold, which cannot be right, given that high-frequency words are recognised more easily than low-frequency ones.
- The model also does not explain why certain words have higher resting activation levels or how this arises; these levels can be adjusted by the researcher from study to study, which makes the model hard to accept, as there is little consistency.
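The word-to-letter feedback at the heart of the IAC model can be sketched as a toy simulation. Everything here (the word list, weights, and step count) is invented for illustration, and real IAC networks also include inhibition between units, which this sketch omits:

```python
# Toy sketch of word-to-letter feedback in an IAC-style network.
# Weights, word list and cycle count are illustrative only; inhibition
# between competing units is omitted for brevity.

WORDS = ["word", "work", "cord"]

def letter_activation(stimulus: str, pos: int, steps: int = 5) -> float:
    """Activation of the letter at `pos` after a few cycles; '_' marks a blank slot."""
    letters = {i: 0.0 for i in range(len(stimulus))}
    words = {w: 0.0 for w in WORDS if len(w) == len(stimulus)}
    for _ in range(steps):
        # Bottom-up: each visible letter that matches a word excites that word.
        for w in words:
            words[w] += 0.1 * sum(stimulus[i] != "_" and w[i] == stimulus[i]
                                  for i in range(len(w)))
        # Top-down: active words feed activation back to their own letters.
        for i in letters:
            if stimulus[i] != "_":
                letters[i] += 0.2 + sum(0.05 * words[w] for w in words
                                        if w[i] == stimulus[i])
    return letters[pos]

# The same letter 'd' ends up more active inside a word than on its own:
print(letter_activation("word", 3) > letter_activation("___d", 3))  # True
```

This reproduces the context effect from the letter-detection study: word-level activation feeds back and boosts its constituent letters.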
- These models conflict with one another, and no model seems to win outright.
- The interactive activation model seems to explain the most but still has flaws.
- A model of how we read is needed if problems such as dyslexia are to be treated efficiently.
- A lot of work has looked into hybrid models
- These models propose a serial perceptual process followed by a parallel process that checks against context and has the final say – Becker's verification model
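A verification-style hybrid can be sketched as two stages in which perception proposes candidates and context has the final say. The context scores and words below are invented purely for illustration:

```python
# Illustrative two-stage sketch of a verification-style hybrid model.
# Stage 1: a serial perceptual pass yields an ordered candidate list.
# Stage 2: candidates are checked against sentence context, which decides.
# The context-fit scores below are made up for the example.

CONTEXT_FIT = {"nurse": 0.9, "purse": 0.2}  # fit to "the ___ took my pulse"

def recognise(visual_candidates: list[str]) -> str:
    # Verify each perceptually proposed candidate against context
    # and accept the one that fits best.
    return max(visual_candidates, key=lambda w: CONTEXT_FIT.get(w, 0.0))

print(recognise(["purse", "nurse"]))  # 'nurse': context overrides visual order
```

Even when the perceptual stage proposes "purse" first, the contextual check settles on "nurse", which is the sense in which context has the final say.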
I feel these hybrid models need much more exploration to understand how words on a page and our previous experiences interact to influence our perception of a word. This would better inform reading problems such as dyslexia, showing what goes wrong between reading a word and understanding its meaning.