Recall and Mental Models:

Designing A User Interface To Affect Memory

 

Carrie Heeter

West Portal Comm Tech Lab

Michigan State University

heeter@pilot.msu.edu

 

Lynn Rampoldi Hnilo

Department of Telecommunication

Michigan State University

rampoldi@pilot.msu.edu

 

Brian M. Winn

Comm Tech Lab

Michigan State University

winnb@pilot.msu.edu

 

Submitted for presentation consideration to CHI '98 (Computer-Human Interaction) on September 12, 1997.


ABSTRACT

A simple interface intended to have specific memory and learning impacts was tested against alternatives under three experimental conditions. Free Recall and Mental Model Diagrams were assessed prior to use, immediately after use, and one week later. Recall of the 178 facts in the software was limited and not homogeneous. However, learners did generate Mental Models that conformed reasonably well to the intended expert organization of the content. The attributes of SIZE, COMPLEXITY, and CONGRUENCE with expert mental models are defined and applied. Implications for learning, design and research are discussed. Close scrutiny of what users recalled and did not recall will affect our future designs.

 

KEYWORDS

hypermedia, user interface, learning, memory, presentation style, mental models, knowledge construction

 

INTRODUCTION

Developers make hundreds of educational software design decisions based on hunches, not research. Creativity and intuition dominate the design process in our laboratory. But sometimes we wonder whether a key design feature actually impacts learners as intended.

We believed the menu navigation technique in our Virgin Islands Environmental Science project gave learners an explicit mental model of unfamiliar topics. The interface seemed as if it should enhance recall and provide conceptual scaffolding that helps novices think about the topic the way experts do.

We examined research literature on memory and designed a formal experiment to assess how (and whether) the interface affected learning. Some of the research findings shocked us. The methodology and analysis led us to think about learning in new ways. None of the outcomes we expected occurred. Other designers may find our journey informative.

 

OVERT CONCEPTUAL OUTLINES AS USER INTERFACE

Learners follow guided paths through 121 multimedia modules about environmental science in the Virgin Islands. Inside each module, a travel brochure menu serves as both a conceptual outline of the topic area and a navigation tool.

Learners can jump to the content under any heading in the outline, but they always bounce back to the full outline before making another navigation choice. Within the brochure's small chunks of content, the outline heading that was clicked to reach the chunk appears as a label. The conceptual outline is omnipresent and inescapable. The menu also serves as a progress indicator: once a subsection has been visited, the corresponding blue dot turns into a check. The upper right box shrinks the brochure to show the entire background image.

Most multimedia menu interfaces share these characteristics. Ours is distinguished only because we consciously hope to influence learners' thought patterns through our choice of outline structure and function.

 

SCHEMAS AND MENTAL MODELS

Schemas "are cognitive structures that represent knowledge about a concept or type of stimulus, including its attributes and the relation among those attributes" ( Fiske & Taylor, 1991 p. 98). Humans may organize knowledge as nodes and links or semantic networks similar to hypermedia nodes and links (Collins & Loftus, 1975). "The central proposition of schema theory as it applies to text processing is that the prior knowledge of the reader and the context of the situation (titles, heading, and other immediately preceding material) interact to influence the interpretation and subsequent recall of new information" (Rezabek, 1995). Schemas are top-down processes, robust and inflexible due to their genesis in repetitive experiences. Schemas codify general knowledge (e.g. a script about eating in a restaurant) (Schank & Abelson, 1977 cited in Eysenck & Keane, 1995), but do not prepare an individual for interaction in new environments (Norman, 1988 cited in Preece, 1994).

The mental model approach proposes that knowledge organization can be dynamically constructed by activating stored schemata. Learning involves encoding information from working memory (memory that contains currently active information) into long-term memory (LTM). Retrieving information from LTM into consciousness is called activation.

Mental models evolve with experience. Experts have better and more elaborated representations than novices, due to extensive practice and more efficient "chunking," the categorization of information into a single unit (Eysenck & Keane, 1995). Over time, an expert's knowledge evolves into a schematic model.

When people encounter new information, they construct a mental model of the unfamiliar topic area. If a topic is wholly unfamiliar to learners, they lack a structure for thinking about it and are unsure how to prioritize, store, or interpret the information. They may borrow from a familiar schema related to the new topic to process the unconnected information. Or they may ignore it. McKeague and Stevens (1997) found that readers who were asked to construct a concept map by actively organizing key concepts in the text recalled the text content better. The researchers suggest that individuals with little prior knowledge need and rely on structural and sequencing cues in linear text during reading.

Educators try to provide learners with expert mental models to facilitate their learning and recall of new topic areas. Learning how to think about and organize a topic area differs from learning unconnected facts from that domain. We assessed the impacts of our educational software on both factual recall and learners' mental models.

 

EXPERIMENTAL CONDITIONS

We created 3 experimental conditions to test whether the organizational layout of an educational hypermedia interface may affect memory processes that promote learning. Condition 1, MENUS and LABELS, used the multimedia Coral Reefs module taken directly from the Virgin Islands project. The conceptual outline, used as the navigation menu, included the following topic headings and subheadings: The Coral Animal; Structure of Coral; Ocean Habitats and Species Diversity; Deep Water Coral; Feeding Habits (Nocturnal Carnivores and Symbiotic Algae); Reproduction (Sexual and Asexual); Coral Larvae; Benefits of Coral to Humans. The headings appeared as labels on each brochure page.

Condition 2, NO MENU, LABELS, used identical multimedia content, but without the menu. Navigation was linear (forward or backward) through the 29 pages of graphics, sound, animation and short text. Clicking the bottom page fold moves forward through the content; clicking the upper bent page corner moves backward in the linear sequence. The information structure of Condition 2 resembles a college textbook.

Condition 3, NO MENU, NO LABELS, used identical multimedia content in a linear navigation, but every page had the same uninformative label (Coral Reefs). The information structure of Condition 3 resembles a novel.

We planned to assess memory impacts of the software immediately after use and also one week later, to look for differential impacts of the software on long-term and short-term memory. We expected the mental models to survive more completely than the free recall memory. And we expected the MENU and LABELS condition to outperform the other two, particularly NO MENU, NO LABELS.

Do the labels or organizational structure used in a program influence the way mental representations are linked in learners' minds? What type of organization stimulates learning and better memory? We expected learners who used the MENUS and LABELS condition to achieve the greatest recall and describe the most sophisticated mental models. We expected learners who had NO MENU and NO LABELS to recall the fewest facts and to construct the weakest mental models.

 

METHODS

Measurement occurred at 3 points in time. Before research subjects used the software ("Time 1"), we used free recall methods to assess their prior knowledge about coral reefs:

"We are going to ask you to use some educational software to learn about coral reefs. Before you use the software, we would like to find out how much and what you already know about coral reefs. Please write down as many facts and interesting things about Coral Reefs as you can."

Next, subjects completed a brief online questionnaire asking their age, gender, GPA, interest in marine biology, time spent close to oceans, and familiarity with computers. They interacted with the program for 25 minutes (+/- 3 minutes). Immediately after using the software, subjects completed the same free recall memory task ("Time 2"). They turned in the free recall responses and picked up the final component for Time 2 -- a Mental Model Assessment. Like free recall, the method is simple for test designers and requires extensive effort from the respondent. Our Mental Model Assessment was a blank piece of paper with the words CORAL REEFS in medium-sized type in the middle of the page and the instruction "Draw your diagram of the coral reef here." along the top. On the back of the page was a sample diagram on a different topic. Respondents created nodes and links to construct a mental model by writing down and circling words related to the central concept, drawing links from the center to the related words, and then expanding the model to concepts related to the related concepts, and so on.

One week later, subjects returned to the computer laboratory ("Time 3"). For the third time, they completed the free recall assessment, writing down as many facts and interesting things about Coral Reefs as they could. They turned in the recall responses and then completed the second and final Mental Model Diagram.

Seventy-five students from a large mid-western university participated in the experiment, with approximately 25 individuals randomly assigned to each of the three conditions. Subjects were all between 18 and 25 years old. Thirty-nine percent were female. They were volunteers from a telecommunication class participating in the research for extra credit. Their interest in and knowledge about marine biology was minimal. Six percent said they were quite knowledgeable; 54% knew nothing at all, and 40% had some knowledge about the topic.

To examine the effect of direct experience on learning and retention, we asked how much time they had spent near an ocean during their lifetime. Seventeen percent had spent no time near an ocean; 55% had been near an ocean for less than a week; 28% for some number of weeks; 10% for months; and 7% for years.

At Time 3, subjects were also asked about exposure to coral reef content from other sources. No subject reported exposure to additional coral reef content between the Time 2 and Time 3 data collections.

To code the free recall data, we decomposed the factual content of the Coral Reefs module into 178 small, distinct pieces of information. (For example, "CORAL ARE CARNIVOROUS," "CORAL NEED SUNLIGHT FOR PHOTOSYNTHESIS," "CORAL LARVAE CAN TRAVEL FOR HUNDREDS OF MILES BEFORE SETTLING ON THE OCEAN FLOOR.") Two coders were trained to check off all matches from within the list of 178 facts for each of the three free recall questionnaires. Each fact was coded as present, missing, or wrong. The number of facts recalled from the software, plus the number of facts that appeared in the recall but were not part of the program, were summed to obtain total facts recalled at Times 1, 2 and 3.
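
As a rough illustration of this tallying step (not the coders' actual procedure), the short sketch below assumes each coded checklist is stored as a mapping from fact IDs to the codes present, missing, or wrong; the fact IDs and counts shown are hypothetical.

# Minimal sketch of the free recall tallying step, under the assumptions above.

def total_facts_recalled(checklist, facts_not_in_program=0):
    """Facts coded 'present' plus recalled facts that were not in the program."""
    from_program = sum(1 for status in checklist.values() if status == "present")
    return from_program + facts_not_in_program

# Hypothetical subject: 10 of the 178 facts coded present, plus 2 recalled
# facts that were not part of the program.
checklist = {fact_id: "missing" for fact_id in range(1, 179)}
for fact_id in (3, 7, 12, 40, 41, 55, 90, 101, 150, 177):
    checklist[fact_id] = "present"
print(total_facts_recalled(checklist, facts_not_in_program=2))  # -> 12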

To code the Mental Model Diagrams, we defined three attributes of mental models: SIZE, COMPLEXITY, CONGRUENCE. SIZE refers to the total number of different nodes each user placed on their diagram. COMPLEXITY describes the depth of the Mental Model -- how many levels of nodes and links the subject constructed. (A simple Mental Model might have Coral Reefs at the center, and then any number of primary nodes but no further links off of any of the primary nodes. A complex Mental Model would include links to nodes and then more links off those nodes, and so on.)
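
The sketch below (our simplification, not the coders' actual worksheet) shows how SIZE and COMPLEXITY could be computed once a diagram has been transcribed as a tree mapping each node to the nodes linked off of it, starting from the pre-printed central concept.

# SIZE and COMPLEXITY computed from a transcribed Mental Model Diagram.

def model_size(diagram):
    """SIZE: number of distinct nodes, excluding the pre-printed central concept."""
    nodes = set(diagram) | {child for children in diagram.values() for child in children}
    return len(nodes) - 1

def model_complexity(diagram, node="Coral Reefs"):
    """COMPLEXITY: how many node-link levels deep the diagram goes."""
    children = diagram.get(node, [])
    if not children:
        return 0
    return 1 + max(model_complexity(diagram, child) for child in children)

# Hypothetical transcription of a small diagram.
diagram = {
    "Coral Reefs": ["Reproduction", "Feeding Habits"],
    "Reproduction": ["Sexual", "Asexual"],
    "Feeding Habits": [],
}
print(model_size(diagram))        # -> 4 nodes
print(model_complexity(diagram))  # -> 2 levels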

CONGRUENCE measured how completely the learner represented the organizing structure of the labels from the software within their nodes -- a reflection of internalizing our content expert's structure. Twelve concepts appeared as both Labels on the pages and MENU items in the outline. Two coders were trained to content-code the post-use graphical representations, recording the presence or absence of each of the 12 concepts as nodes in the subject's Mental Model.

A final measure of the Mental Models was a count of the number of PROGRAM NODES in the Mental Model which related directly to content in the software (not just the 12 organizing concepts, but all 178 facts).
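
Continuing the same illustration, CONGRUENCE and PROGRAM NODES reduce to checking each diagram node against the 12 organizing concepts (the outline headings and subheadings listed earlier) and against terms drawn from the 178 facts. The crude string matching below stands in for the coders' hand judgments.

# CONGRUENCE and PROGRAM NODES as membership checks (illustrative only).

ORGANIZING_CONCEPTS = [
    "the coral animal", "structure of coral",
    "ocean habitats and species diversity", "deep water coral",
    "feeding habits", "nocturnal carnivores", "symbiotic algae",
    "reproduction", "sexual", "asexual",
    "coral larvae", "benefits of coral to humans",
]

def congruence(diagram_nodes):
    """How many of the 12 organizing concepts appear as nodes in the diagram."""
    nodes = {n.strip().lower() for n in diagram_nodes}
    return sum(1 for concept in ORGANIZING_CONCEPTS if concept in nodes)

def program_nodes(diagram_nodes, program_fact_terms):
    """How many diagram nodes match terms drawn from the program's 178 facts."""
    terms = {t.strip().lower() for t in program_fact_terms}
    return sum(1 for n in diagram_nodes if n.strip().lower() in terms)

nodes = ["Reproduction", "Symbiotic Algae", "Sunlight", "Fish"]
print(congruence(nodes))                                   # -> 2
print(program_nodes(nodes, ["sunlight", "reproduction"]))  # -> 2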

A 15% overlap of content sheets for both Free Recall coding and Mental Model coding was used to check intercoder reliability. The coders were in agreement 88% of the time.
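
Percent agreement is the simplest reliability statistic; a minimal sketch, with invented codes, follows.

# Percent agreement between two coders over the overlapping sheets.

def percent_agreement(coder_a, coder_b):
    """Share of coding decisions on which the two coders agree."""
    assert len(coder_a) == len(coder_b)
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

coder_a = ["present", "missing", "wrong", "present"]
coder_b = ["present", "missing", "missing", "present"]
print(percent_agreement(coder_a, coder_b))  # -> 0.75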

 

RESULTS

Figure 1 compares the five measures of recall and mental model for Time 1, Time 2 and Time 3 (as applicable). The average number of facts recalled jumped from 0.97 prior to using the software to 11.3 of the 178 facts immediately after use. Average free recall dropped to 7.33 one week later. On average, all but one or two of the nodes generated related directly to content from the software.

Figure 1: Means and Size of Recall and Mental Model Results

As long-time developers of educational software, we were unprepared to discover that immediately after using the program for 25 minutes, 83% of the 178 facts were recalled by fewer than 10% of the users. Only 4% of the facts were recalled by at least 20% of the users. Only the single fact that coral reproduce sexually was recalled by as many as 75% of users. (Notice our description of these people changed from "learners" to "users" after we calculated these statistics!)

Figure 2: Number of Facts Recalled by Percent of Users

Figure 2 shows the breakdown by time period. Few facts were recalled, and different students recalled different facts, with almost no homogeneity. One week later, by Time 3, 58% of the facts were recalled by fewer than 5% of the users.

Coral Reefs is one of 121 modules. The entire Virgin Islands project contains an estimated 3700 facts, more than 2000 of which will pass completely unnoticed by students using the software for learning.

Continuing with Figure 1, the mental model measurements yielded somewhat more encouraging evidence of memory effects. The average size of a mental model was 18 nodes at Time 2, dropping to 12 nodes at Time 3. The models averaged 3 node-link levels deep at Time 2 and 2.8 levels deep at Time 3. Most encouraging, the congruence rating showed learners identifying an average of 7.4 of the intended 12 central concept nodes at Time 2, dropping to 5.7 at Time 3. The free recall measures expose the software as a learning disaster, while the congruence scale suggests sound success in conveying key themes about Coral Reefs. Had we used only one or the other method, our perspective on learning from the software would be skewed. Mental models may be easier to learn, to recall, and/or to express than piecemeal facts.

As a deeper glimpse at outcomes of this method, Figure 3 shows details about the complexity measures and congruence ratings for each of the 12 topics. Roughly one-fourth of learners constructed simple models with one or two levels at Time 2. Forty-four percent included 3 levels and nearly one third used four or more levels at Time 2. Highly complex mental maps appeared less often at Time 3 than at Time 2, but those with moderately complex models did not reduce their complexity.

Working closely with natural scientists for more than a year, we realized only after collecting data how inured we had become to the sex life of animals. We had planned to conduct analyses comparing content that was offered only as text, as audio and text, and as animations.

Animation of Pangea (Menu and Label Condition)

Full Screen Coral CloseUp (No Menu, Label Condition)

Audio about Coral Spawning in the Great Barrier Reef (No Menu, No Label Condition)

We forgot how undergraduates in nonscience majors might react to scientists describing sperm and eggs spewing into the ocean during the full moon. Unfortunately, 3 of the 12 categories relate to reproduction. Fortunately, the encouraging results exceed that number of congruencies. This measure of congruence by topic outline provides useful feedback to content designers about vividness and lasting impact of different chunks of content.

Figure 3: Complexity Measures and Congruence Ratings

Hypothesis testing using one-way ANOVAs in Figure 4 found no significant differences across the three experimental conditions for any of the measures of learning. Our intentional design did not yield the intended results. We had secretly worried that perhaps the Menu with Labels condition interfered with rather than facilitated learning. At least our preferred interface was not significantly worse for learning.

Figure 4: One-Way ANOVAs (Time by Conditions)
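
Each cell in Figure 4 reduces to a one-way ANOVA of one memory measure across the three conditions. The sketch below illustrates one such test; the recall scores are invented for illustration.

# One-way ANOVA comparing (hypothetical) Time 2 free recall scores
# across the three experimental conditions.
from scipy import stats

menus_labels      = [12, 9, 14, 10, 11]
no_menu_labels    = [11, 13, 8, 10, 12]
no_menu_no_labels = [10, 9, 12, 11, 13]

f_stat, p_value = stats.f_oneway(menus_labels, no_menu_labels, no_menu_no_labels)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")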

As a final examination of learning with the software, we ran multiple regressions to consider the impacts of age, gender, and exposure to oceans interacting with experimental condition. Two dummy variables were created: LABELS, which combined the Menu with Labels and No Menu with Labels conditions, and MENU, which pitted Menu with Labels against the other two conditions combined.

Figure 5: Multiple Regressions Predicting Memory Measures

Figure 5 shows that nine of the 11 multiple regressions yielded significant R2 results. Being male was a significant predictor of recall and mental model measures for 8 dependent variables. Direct experience with oceans was a significant predictor for 7 dependent learning variables. Experience with oceans logically would enhance recalling and thinking about coral reefs. Why being male is such a strong predictor surprises us, and we offer no explanation other than a stereotypic bias against science on the part of female students. Still, the content of the Coral Reefs module slanted more to ecology than to hard science.
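
A sketch of the dummy coding and one of these regressions appears below; the data are synthetic stand-ins for the actual measurements, and the variable names are ours.

# Dummy coding of condition (LABELS, MENU) and one OLS regression,
# with synthetic data standing in for the real measures.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
condition = rng.choice(["menu_labels", "no_menu_labels", "no_menu_no_labels"], size=75)

df = pd.DataFrame({
    "labels": np.isin(condition, ["menu_labels", "no_menu_labels"]).astype(int),
    "menu": (condition == "menu_labels").astype(int),
    "male": rng.integers(0, 2, 75),
    "age": rng.integers(18, 26, 75),
    "ocean_exposure": rng.integers(0, 5, 75),
    "recall_time2": rng.poisson(11, 75),  # stand-in dependent variable
})

X = sm.add_constant(df[["labels", "menu", "male", "age", "ocean_exposure"]])
model = sm.OLS(df["recall_time2"], X).fit()
print(model.rsquared)
print(model.pvalues)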

 

CONCLUSIONS

When we collected the data, we had not yet decided precisely how to analyze it. Coding the free recall at a degree of detail specific to each individual fact and counting nodes in the mental models many different ways turned out to be laborious and complex. The coding can be simplified without losing useful results.

In retrospect we realize that not all facts are created equal. Some of the 178 facts are key concepts while others are specific details with little relevance to most students' lives or mental models of how ecosystems work. This realization extends not only to data collection but also to educational software design. More facts do not automatically produce more learning. After conducting this research, as developers we are refocusing on the larger framework and overarching mental models we want learners to carry away from our software.

In future learning research, rather than coding for every piece of content, we would tag central concepts and code for their presence or absence for free recall data and for mental model data. We would continue to count overall number of knowledge bits recalled as a measure of mental effort and storage space devoted to the topic and task. Counting link depth and overall number of links is also fast and easy. Searching for links that directly represent key program concepts yields interesting results. Determining whether every node appeared somewhere within the software is a waste of time.

Researchers have tried to increase recall by providing participants with cues. "Retrieval cues are any words, images, emotions, and so on that activate or direct the memory search process" (Sudman, Bradburn, & Schwarz, 1996, p. 167). Experimental research in memory has shown that providing retrieval cues does not necessarily increase an individual's accuracy of reporting. Instead "cues can enhance or impair recall depending on whether or not they are appropriate to the memory that is the target of retrieval" (Tulving, 1983, cited in Jobe, Tourangeau, & Smith, 1993, p. 145). These cues may be provided within the software as part of the learning experience, not just during memory research.

We are no more or less certain of our Menus with Labels interface than we were at the beginning of the research. But we have a greater appreciation for what users do NOT learn. (Almost everything!) We feel called to do a better job of bringing concepts to life, treating them more carefully and addressing learners more insistently, though through subtle and elegant means.

 

REFERENCES

Collins, A.M., & Loftus, E.F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82, 407-28.

Eysenck, M.W., & Keane, M.T. (1995). Cognitive Psychology: A student's handbook. Hove, UK: Lawrence Erlbaum Associates.

Fiske, S.T., & Taylor, S.E. (1991). Social Cognition (2nd ed.). New York: McGraw-Hill.

Jobe, J.B., Tourangeau, R., & Smith, A.F. (1993). Contributions of survey research to the understanding of memory. Applied Cognitive Psychology, 7, 465-584.

Jorg, S., & Hormann, H. (1978). The influence of general and specific verbal labels on the recognition of labeled and unlabeled parts of pictures. Journal of Verbal Learning & Verbal Behavior, 17, 4, 445-454.

Kealy, W.A., & Webb, J.M. (1995). Contextual influences of maps and diagrams on learning. Contemporary Educational Psychology, 20, 3, 340-358.

May, M.D., Sundar, S.S., & Williams, R.B. (1997). The effects of hyperlinks and site maps on the memorability and enjoyability of web content. International Communication Association Conference: Communication & Technology Division, Montreal, Canada.

McKeague, C., & Stevens, R.J. (1997). Effects of hypertext structure and reading activity on novices' learning from text. Paper presented at the American Educational Research Association, Toronto, Canada.

Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1994). Human-Computer Interaction. Harlow, England: Addison-Wesley.

Rezabek, R. (1995). The relationships among measures of intrinsic motivation, instructional design, and learning in computer-based instruction. Paper presented at the Annual National Convention of the Association for Educational Communications and Technology, Anaheim, CA.

Rowe, D.W., & Rayford, L. (1987). Activating background knowledge in reading comprehension assessment. Reading Research Quarterly, 22, 2, 160-176.

Sudman, S., Bradburn, N.M., & Schwarz, N. (1996). Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass Publishers.