
Author ORCID Identifier

https://orcid.org/0000-0001-5213-7234

Document Type

Campus-Only Access for One (1) Year

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Psychology

Year Degree Awarded

2020

Month Degree Awarded

September

First Advisor

Rosemary A. Cowell

Second Advisor

Jeffrey Starns

Third Advisor

Rebecca Spencer

Fourth Advisor

Ethan Meyers

Subject Categories

Cognition and Perception | Cognitive Neuroscience | Cognitive Psychology

Abstract

Proponents of the representational-hierarchical (R-H) account claim that memory and perception rely on shared neural representations. In the ventral visual stream, posterior brain areas are assumed to represent simple information (e.g., low-level image properties), while the complexity of representations increases toward more anterior areas, such as inferior temporal cortex (e.g., object parts, objects), extending into the medial temporal lobe (MTL; e.g., scenes). This view predicts that brain structures along this continuum serve both memory and perception; a structure's engagement is determined by the representational demands of a task, rather than by the cognitive process putatively involved.

In a neuroimaging study, I searched for the transition from feature-based to conjunction-based representations along this pathway. In the first scan session, participants viewed two stimulus sets differing in complexity: fribbles (novel 3D objects) and scenes (novel, computer-generated rooms). According to the R-H account, a neural feature code for both fribbles and scenes should reside in the posterior ventral visual stream. I predicted a transition to conjunction coding toward the MTL, with the transition for the simpler stimulus set (fribbles) occurring earlier.

Next, I measured memory signals while varying (1) stimulus complexity and (2) the type of retrieved information (features or conjunctions). In a second scan session, participants completed a recognition memory task for fribbles and scenes with three mnemonic classes of item: Novel items comprised novel features combined in a novel conjunction; Recombination items possessed features that had been seen in the first session, but never within the same item (i.e., familiar features, but novel conjunctions); and Familiar items comprised familiar features and familiar conjunctions. Under the R-H account, a memory task that requires retrieving only feature-based information should recruit visual cortex rather than the MTL. Further, these "feature memory" signals should map onto the feature-coding regions identified in the first session.

Analyses revealed that visual regions outside the MTL contained (1) more information about individual features than about conjunctions of features (first-session data), and (2) the greatest signal for feature memory (second-session data). Thus, the cortical regions that best represented feature information during perception also best signaled feature information in memory, and they were located outside the MTL.
