Friday, May 31, 2013

Louder Than Words: The New Science of How the Mind Makes Meaning
by Benjamin K. Bergen
-This is a book by a student of George Lakoff about the experimental evidence for the embodied simulation theory of meaning. The idea is that when the brain tries to decipher the meaning of language, whether heard or read, it makes use of brain systems whose original functions pre-date language: systems for motor control and motor memory, for locating things in space, for sensory information, and so on. These systems run a sort of "virtual reality" simulation to "picture" or "feel" what it would be like to experience the world the language describes. Bergen talks about the brain taking systems built for motor control and "bootstrapping" them into use for language. I'm not sure what to think about all this. On the one hand the experiments seem convincing, I guess, though it got pretty tedious reading about experiment after experiment (these guys love their experiments, and bless them for it). And it "feels" sort of right: an organism develops a brain to organize bodily systems and respond to its environment, then the brain develops the power to turn on itself and include itself in its models, and this recursion and modeling kicks into overdrive with the development of language as a socially shared modeling system. But all the while I was reading the book, a little voice in my head kept saying that something wasn't adding up. I don't have the energy right now to think it through, but it seems like some intro-to-philosophy-type questions are being side-stepped? Or maybe not? Out of my depth.

There's a bit in the book about abstraction and metaphor that was interesting to me. Bergen says they don't yet understand how embodied mental simulation works when we talk and think in abstract language. We seem to understand abstract concepts easily in metaphorical terms, and abstractions seem to be built out of a lot of dead metaphors, but the mental simulations of metaphors and abstractions seem somehow different from simulations of more concrete scenarios. But it seemed to me: aren't we able to make simulations of simulations? Models of models, maps of maps? And go from there? We think in mental simulations, but we can also take those simulations and treat them as objects to be simulated. We seem to be able to nest them within each other? I don't know, I'm out of my depth again.