Hrm… so should I be worried about Paul’s fondness for crocs? Nah, that would be affirming the consequent. (Affirming the consequent means inferring that because “if A then B” and B is true, A must be true.)
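For readers who like their fallacies schematic, the contrast with valid modus ponens can be sketched as follows (a standard presentation, not anything specific to this example):

```latex
% Valid: modus ponens
\frac{A \rightarrow B \qquad A}{\therefore\ B}
\qquad\qquad
% Invalid: affirming the consequent
\frac{A \rightarrow B \qquad B}{\therefore\ A}
```

The second form is invalid because B may hold for reasons other than A: even if "if A then B" is true, observing B tells you nothing definite about A.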
I received this most fabulous message a few days ago… and I couldn’t resist sharing it with you. Plus, it raises some serious points about my approach that I discuss below too.
SO! I quit listening to P/A a few months ago because (hear me out) – I started noticing that I agreed with *everything* that was coming out of your lovely face. I started growing a little worried that I was getting super lazy and losing (or worse, ignoring) the whole critical thinking thing.
Naturally, being a mad social scientist, I decided to test the theory. I followed the questions for a few weeks and jotted down what I thought the correct responses should be, and for the past few days I’ve been listening to the podcasts at work (productivity soared). So guess what! My worrywarting was totally unjustified. I got the gist of plenty of the questions spot on, although with a miserable fraction of the detail you provided. I’m so damn proud.
Anyhoo, the shamefully unreciprocated consumption of your podcasts on my part is over. As soon as Dwolla verifies my bank account the donations should be coming in biweekly. I adore your work, and not just because it gives me some vain sense of self-righteousness. That’s just a perk.
If you’re ever back home in Maryland and would like me to donate some steak and bacon, just drop me a line, chica!
The style of this email put me into fabulous fits of giggles, but I very much enjoyed its serious point too.
If you’re a fellow Objectivist, the basic answers to the questions I answer on Sunday’s Philosophy in Action Radio aren’t terribly difficult. In most cases, I know that basic answer when I choose the questions, and I bet many of you do too. If my goal were just to inform listeners — Objectivist or not — of the right answer, I’d answer six questions in fifteen minutes… and then shoot myself in the head.
Instead, the goal of the show is to work through the actual thinking required to answer such questions — meaning, to develop and apply relevant principles, to test those principles via real-world examples, to consider objections, and so on. That’s why the show consistently runs over an hour each week. It’s also why preparation for each show usually requires thinking through the issues involved, then some reading and research, then discussion with Greg, Tammy, and Paul, then more in-depth thinking, then hours of writing and organizing those thoughts.
By taking that approach, I’m able to explain my reasons for my answer in sufficient depth that people can (and do) change their minds — rationally, not rationalistically or dogmatically. Moreover, I’m teaching them — implicitly and explicitly — the principles and tools they need to think through new issues on their own in a rational way.
I’m very pleased — and proud — to be doing that. I’m also so grateful that so many others see the value in my approach, particularly when they help spread the word about the show and support it financially. That means the world to me.
It’s often difficult to challenge your own entrenched beliefs. Habits of thought die hard, particularly when your values or way of life seems to depend on those beliefs. (“But but but… XYZ must be true!”)
When confronted with challenging new ideas, I try to approach them carefully, so as to avoid any knee-jerk emotional reaction in favor of my existing beliefs.
Ideally, here’s what I do: I remind myself that I don’t need to agree or disagree right away. Instead, I focus on understanding the ideas and arguments fully. Then, once that’s done, I take some time to mull over those ideas — perhaps days, weeks, or months. I gather empirical evidence for and against the idea. I consider new angles, arguments, and implications. I discuss those ideas with smart people, as they often have fresh insights. Finally, I come to a judgment about the truth of those new ideas.
If I take that time, I’m far less likely to err in my evaluation — meaning, to dismiss right ideas or embrace wrong ideas. That’s a win!
But… uh… of course, that’s not always what happens. Yet even when I have that dreaded knee-jerk reaction against some new idea, I can exert my better judgment: I can choose to evaluate it objectively. If I have to eat crow at the end of that process, that’s better than persisting in dogmatic commitment to falsehoods.
Note: I published a version of the above commentary in Philosophy in Action’s Newsletter a while back. Subscribe today!
Mental Floss posted a list of “contronyms” or “self-antonyms,” namely words that mean their own opposite. Here are a few of my favorites:
1. Sanction (via French, from Latin sanctio(n-), from sancire ‘ratify’) can mean ‘give official permission or approval for (an action)’ or, conversely, ‘impose a penalty on.’
3. Left can mean either remaining or departed. If the gentlemen have withdrawn to the drawing room for after-dinner cigars, who’s left? (The gentlemen have left and the ladies are left.)
4. Dust, along with the next two words, is a noun turned into a verb meaning either to add or to remove the thing in question. Only the context will tell you which it is. When you dust are you applying dust or removing it? It depends whether you’re dusting the crops or the furniture.
10. Fast can mean “moving rapidly,” as in “running fast,” or ‘fixed, unmoving,’ as in “holding fast.” If colors are fast they will not run. The meaning ‘firm, steadfast’ came first. The adverb took on the sense ‘strongly, vigorously,’ which evolved into ‘quickly,’ a meaning that spread to the adjective.
Another strange category of words that I love seems to be called “autoholonyms.” Basically, these are words that refer to both a species and its genus. Here are some examples:
- “Cow” can mean just female bovines or all bovines.
- “Day” can mean a 24-hour period or just the light portion thereof.
- “Man” can refer to all humans (contrast: animals) or just male humans (contrast: woman) or just adult male humans (contrast: boy).
Can you think of other common words that fit that pattern? I want to know more!
I appreciate that it’s often useful for people to say “I agree” or “I disagree” in contexts where wrong assumptions might be made.
That’s rarely the case, however. Most of the time, when people just indicate their mere agreement or disagreement with some remark, my reaction is pretty much: Why the heck did you bother to say that? How about offering some kind of reason or argument or example? Do you think that anyone cares about your mere opinion? Be witty or be substantive or be silent! Then, despite my major disagreements with Plato’s epistemology, I feel the urge to beat them over the head with his distinction between opinion and knowledge.
Am I alone? Is this a crazy pet-peeve? The inevitable result of reading too many undergraduate philosophy papers? Or just a hope for higher standards of rationality?
I just ran across this passage in an otherwise merely annoying sports column on athletes and steroid use:
This past Christmas Eve, my son and daughter made Santa cookies, wrote him a letter, even left four carrots for his reindeer. As we were putting them to bed, I remember thinking, Man, I wish they could always stay like this. And by “this,” I really meant, I wish they could always just blindly believe in things being true despite mounting evidence against them.
Oy vey! The “blind belief” of faith is not a virtue — neither in adults nor in children. It’s the rejection of reason’s requirements of empirical evidence and logical argument. To the extent that a person lives by faith rather than reason, he imperils his life and his happiness. (For more on what’s wrong with faith — including why faith and reason cannot be reconciled — I strongly recommend George H. Smith’s Atheism: The Case Against God.)
Interestingly, the sports writer indulges in fairly arbitrary doubts about athletes and steroid use in the rest of the column. Given that kind of irrationality, it’s hardly surprising that he longs to enjoy the comforts of the opposite kind of error.
The following comments on the validity of an evolutionary approach to nutrition are from an email that I wrote to an Objectivist philosopher skeptical of the paleo diet. (The email was sent many moons ago, and I only just found it again.) My comments stand pretty well on their own, I think, and I hope that they’ll be of interest to folks interested in thinking about paleo in a philosophical way.
I cannot point you to a single study that definitively proves the superiority of a paleo diet. For a hundred different reasons — most of which probably aren’t on your radar — such a study is not possible. (Gary Taubes and Mike Eades have written on that problem.) Nonetheless, a whole lot of smaller, more delimited studies (as well as well-established biology) support the claims made by advocates of a paleo diet. Plus, people report looking, feeling, and performing better — with improved lab values — on a paleo-type diet. Each of us has our own experiences and experiments to draw on too.
Hence, as I said in a thread on Facebook: “I think I’ve got very good grounds for saying that a paleo diet is (1) healthy for most people, (2) far superior to the diet of most Americans, (3) exceedingly delicious and satisfying, and (4) worth trying to see if you do better on it, particularly if you have certain kinds of health problems.”
I’m not claiming certainty, nor do I assume that my current diet is optimal. We have tons to learn about nutrition and health. Yet that’s hardly a reason to ignore what we do know — or to suppose that we can just keep eating however we please without experiencing pernicious consequences down the road.
Moreover, people are doing themselves harm by eating the standard American diet. In my own case, I was on my way to type 2 diabetes (based on my doctor’s blood glucose tests) and liver disease (based on a CT scan showing non-alcoholic fatty liver disease). We can’t assume that the standard American diet is a safe default just because it’s all around us — just as people shouldn’t assume that the standard American religion is a safe philosophical default.
To address your skepticism about an evolutionary approach to nutrition, let me ask you the following… Imagine that you were given a dog to care for, but you’d never seen or heard of a dog before. Would you say that the fact that dogs are very close relatives of wolves is irrelevant to the question of what you ought to feed this dog? Wouldn’t that evolutionary fact suggest that the dog needs meat, meat, and more meat — not tofu or corn or alfalfa?
That evolutionary inference certainly wouldn’t be the last word on proper diet for the dog by any stretch of the imagination. Yet that inference would help guide your inquiry into the optimal diet for the dog — and guide your feeding of him in the meantime. That evolutionary perspective would be particularly helpful if the government and its lackeys were busy promoting a slew of false views about optimal canine diet. Ultimately, it would help integrate and explain your various findings about canine nutrition, since the nature of the canine was shaped by its evolutionary history.
On this point, your comparison to evolutionary psychology is not apt. Evolutionary psychology is a cesspool. But that’s not because inferences from our evolutionary history are difficult, although that’s true. Evolutionary psychology is a cesspool because it depends heavily on some false philosophical assumptions — particularly determinism and innate ideas.
The same charges cannot be made against an evolutionary approach to nutrition. We know that every organism is adapted to eat certain kinds of foods rather than others. We know that human biology was shaped over the course of millions of years, during which time we ate certain kinds of foods but not others. That suggests the kinds of foods that we’re best adapted to eat. Moreover, we can see in skeletal remains that when people switched to other kinds of foods, particularly grains, they declined remarkably in basic measures of health. Then consider what we know about the nature of wheat, including its effects on the gut. Top that off with the positive effects people experience — improved well-being, fat loss, better lab values, less autoimmunity — when they stop eating wheat. Then you’ve got a compelling case against eating wheat.
The evolutionary perspective is not merely a useful starting point in such inquiries, to be discarded with advancements in modern science. It’s relevant causal history: it explains why we respond as we do to wheat. That enables us to integrate disparate findings about wheat (and other foods) into a unified theory of nutrition. That’s hugely important to developing nutrition as a science.
When I developed my list of Modern Paleo Principles in early 2010, I’d hoped to be able to sort out the essential principles from the optional tweaks. So forgoing grains would be essential to eating paleo whereas intermittent fasting would be just an optional tweak that a person might never even try. Sounds reasonable, right? Perhaps so, but the attempt was a total non-starter.
Almost as soon as I sat down to write out my list of principles, I realized that I couldn’t possibly separate them into “essential” and “optional,” except in a few clear cases. Similarly, I couldn’t rank those principles by priority except in a very rough way. Despite the core features of the diet captured in my definition — avoiding grains, sugars, and modern vegetable oils in favor of high-quality meat, fish, eggs, and vegetables — that just wasn’t possible.
But… why not? Why can’t we identify the essential versus optional principles of a paleo diet or rank its principles by priority? The answer is more interesting than I supposed at first. I see three major obstacles — (1) the value of health, (2) individual differences, and (3) the science of nutrition. Let’s examine each in turn.
Health Is Not Your Ultimate Value
Health is a major value, but it’s not a person’s proper ultimate value. Health is not all that matters in life.
A person’s ultimate value is (or rather, ought to be) his own life. Consequently, people can make legitimate trade-offs with respect to health, in order to serve other, higher values. For example, a paleo-eater might choose to eat restaurant salads with canola oil dressing at business lunches because that’s what best serves her career, even if that risks some harm to her health. Or a paleo-eater might enjoy the occasional “Mo’s Bacon Bar,” because the taste is just so worth the sugar hit. Such choices would be totally legitimate: optimizing health shouldn’t be treated as an out-of-context duty.
What does that mean? It means that no principle of paleo can be treated as “essential” — in the sense that if you violate it, then you’re doing wrong, you’ve fallen off the wagon, you’re no longer paleo. Paleo is not a religious dogma: it has no Ten Commandments — nor even a “thou shalt.” (That’s for the vegans!)
Instead, paleo involves a set of principles to help guide the actions that impact our health, particularly diet. However, if a person is willing to pay the price for deviating occasionally from those principles — if that’s not a sacrifice for him but an enhancement of his life — then he ought to deviate. That’s the rational approach.
Your Health Depends on Individual Context
People are not merely fodder for the aggregate statistics of epidemiologists. They are individuals — and each person’s particular background, constitution, and circumstances matter to his choices about diet.
For example, one paleo-eater might be diabetic, another hypothyroid, and another in perfect health. One person might be disposed to heart disease, whereas another would be more likely to suffer from cancer or stroke. One person might suffer terrible effects from eating wheat, whereas sugar might be the downfall of another. A paleo-eater might be able to find a source of grass-fed beef that matches his budget — or not. A person might have 200 pounds of fat to lose — or 20 pounds of muscle to gain. One person might look, feel, and perform better eating starchy tubers while another does better avoiding them. One person might need to work hard to eliminate the soy from his diet, whereas another has none to remove. One person might live with a supportive spouse, while another lives with a hostile vegan roommate. One person might prepare all his meals at home, while another must eat in restaurants, while another must eat in the college dorm.
In short, people’s backgrounds, constitutions, and circumstances are often hugely different in ways that will affect what they can and should eat. People will implement a paleo diet in very different ways, based on those differences. To claim, as a universal generalization, that certain paleo principles are essential while others are merely optional would be to run roughshod over those individual differences. Instead, each person needs to discover what’s more essential versus more optional for him. Each person needs to focus on his own life and values. The experiences of others are often useful guides or hints, but they don’t determine what’s essential versus optional for you.
The Science of Nutrition Is in Its Infancy
Ideally, with further development of science, we might be able to identify certain universal mid-level principles, such as “avoid foods that irritate your gut” or “avoid foods that promote the formation of small LDL.” Then people could focus on those principles, rather than adapting the particular recommendations of paleo to their own circumstances. Those kinds of integrations would be useful, undoubtedly, but I see at least three problems with aiming for that.
First, the science of nutrition is not as advanced or definitive as we might like, except on a few issues. I’m routinely amazed by how much we still have left to learn — on the value of tubers, on the different kinds of fats, on carbohydrate sources, and so on. So right now, we’re not in a position to clearly define and defend such mid-level principles. The science needs to be more settled for that.
Second, such mid-level principles wouldn’t be particularly helpful for guiding a person’s everyday choices about what to eat — unless he already knew, for example, what irritates guts in general and his gut in particular. So even if armed with a slew of solid mid-level principles, a person would still need to discover how to implement those principles well in his choices of what to eat for breakfast, lunch, and dinner.
Third, even if all that were known, individuals would still vary in their responses to foods, and they’d have to determine much of their own optimal diet by their own n=1 experiments. For example, people respond very differently to gluten. Personally, even small quantities of gluten give me migraines, but no digestive upset. Others have a different response — or no response at all.
One important conclusion from these reflections on the value of health, individual differences, and the science of nutrition is that even though the various paleo diets have a common core, the principles of paleo cannot be designated “essential” versus “optional” nor ranked in order of importance.
Of course, we can define a paleo diet, because it means something definite. We can also identify the general principles of a paleo approach to health; that’s what I hope that I’ve done with the Modern Paleo Principles. That’s crucial for doing paleo well, I think.
Yet to think of some of these principles as universally “essential” versus universally “optional” would be a mistake. Instead, they should stand in our minds as “more or less important for me.”
Of course, as an advocate of paleo, I’m interested to know what’s more or less important for most people or for people with certain medical conditions. Still, the individual’s mileage will always vary.
Also, a person often requires a few weeks or even months to learn how to implement the basic principles of paleo well in his own life, then even longer to tweak and optimize. For people really concerned to eat well — and to be fully healthy — that can be well worth the trouble!
Even with the broad range of paleo, we cannot hope to find a “one-size-fits-all” diet, except in a very broad way.
In Sunday’s Philosophy in Action Webcast, I discussed giving the benefit of the doubt. The question was:
When should we give another person the benefit of the doubt? Often, people say that public figures facing some scandal should be given the benefit of the doubt. What does that mean in theory and in practice? When ought people give the benefit of the doubt? Is doing so a matter of generosity or justice?
My answer, in brief:
To give someone the benefit of the doubt means that you’re not leaping to conclusions about wrongdoing, but considering the person’s past actions and character, and hence condemning only when the proof of wrongdoing is definitive. It’s proper to give someone the benefit of the doubt when it’s likely that the person didn’t act wrongly, when you’re waiting for definitive evidence, or when your judgments are based on knowledge of character.
Here’s the video of my full answer:
If you enjoy the video, please “like” it on YouTube and share it with friends via social media, forums, and e-mail! You can also throw a bit of extra love in our tip jar.
Join the next Philosophy in Action Webcast on Sunday at 8 am PT / 9 am MT / 10 am CT / 11 am ET at www.PhilosophyInAction.com/live.
In the meantime, Connect with Us via social media, e-mail, RSS feeds, and more. Check out the Webcast Archives, where you can listen to the full webcast or just selected questions from any past episode, and my YouTube channel. And go to the Question Queue to submit and vote on questions for upcoming webcast episodes.