My prejudices


(Part 1 of 2)

Introduction

To begin as most people might: I’m not perfect; I have undesirable traits and characteristics just like everyone else. But I think of this as having room for improvement rather than as a thought-terminating excuse for having flaws.

If one were ignorant about 99% of subjects, they would not be perfect, but the same could be said for someone who’s ignorant about only 1% of subjects, which leaves the distinction of not being “perfect” rather pointless: which is it? Most readers, I believe, will agree that there is a stark difference between these two numbers, but perhaps reality isn’t as clearly delineated as this. Perhaps, more realistically, rounded to the nearest whole number, everyone is ignorant about 99% of the subjects in this world, given the expanse of human knowledge, our rate of learning, and our limited lifespans.

Perhaps, then, there really is no practical difference, as long as we aren’t differentiating between something like being ignorant of 99.98% versus 99.99% of knowledge, a gap that is, again, trivial compared to the amount of knowledge in the world.

But the phrase “I’m not perfect” implies something much closer to perfection than something like “I’m mostly flawed”, much as one might describe a mild-tasting drink with “It’s not sweet” more often than “It’s mostly bitter” if the drink doesn’t taste primarily of bitterness. Does this mean, then, that people describe themselves on this spectrum as closer to perfection than not? If the scale were so easily traversed, we’d have a lot more people in the average range (hypothetically, assuming equal opportunities, fair education, etc.), and, more importantly, people might also have to be interested in, well, virtually everything.

Anyone who’s tried to learn something (most people) can tell you that a mere interest in something, for whatever reason, is a necessary yet insufficient condition for acquiring the knowledge related to it. There are countless academics who have spent their entire lives intently studying certain niches, and yet never quite reached “the end”. As we study knowledge, we inevitably create more. To compensate for this problem, the average person, in order to have an average amount of knowledge, might need to be interested in everything, gathering some knowledge from everything.

I don’t believe that any human being knows more than a fraction of a percentage of the world’s knowledge, though I also don’t believe it matters, as I maintain that a similarly large amount of knowledge is irrelevant to most human lives: a gardener remains unaffected by not knowing the word for “thunder” in Cree (piyesiwak, according to this dictionary), a professional driver has no business knowing the structure of a single-wall carbon nanotube, and perhaps even a salesman at a computer store doesn’t need to know the specifications of the capacitors in the computers’ power supplies, let alone those on the motherboard: that’s someone else’s problem.

Most knowledge in this world is irrelevant to us, even within the “same domain”—an executive manager of a robotic vacuum company may not know exactly what parts go into their vacuums. When asked, they might quite literally retort: “it’s not my job”, which is true! It isn’t their job; it’s the job of the company’s engineers and researchers.

Still not my job
Even then, it’s still not the job of all engineers and researchers; those responsible for the navigation algorithm need not know what material the wheels are made of or how they are shaped, even if someone on another team has made a presentation detailing and discussing the different wheel options after much research.

There’s a lot that goes into a robot vacuum: the material of its shell, the shape of its shell, the motherboard, the CPU, the power unit, the different sensors it uses, the electric motors it uses. I suppose there may also be a team responsible for safety and durability testing, which isn’t a physical component but a very important characteristic nonetheless: how it works as the sum of all its parts.

Even within this already-specific niche of robot vacuums, we can’t expect everyone involved to know absolutely everything about it, barring the rare and exceptionally curious individuals who, even then, still have their own limits.
The matter of “not being perfect”, I believe, leans a lot more heavily towards being mostly ignorant than mostly knowledgeable.

Regardless, rather than surrendering to the impossibility of ever acquiring all the knowledge in the world (the argument that having virtually no knowledge is, relative to all the knowledge in the world, only a short step back from being well-educated, which the cognitively slothful layperson uses to justify their lack of interest in knowledge), it would still be better to have that knowledge anyway. So far, I’ve been measuring knowledge the same way one might measure a game character’s “health bar”, where everything remains the same as long as it stays above zero.

But this doesn’t make sense in real life: people don’t just suddenly die when they run out of health points despite being fully healthy a moment before; the world is far more complicated and intricate than that. Knowledge, then, is similarly nuanced. Unlike in video games, where a health bar can be completely depleted by being stepped on the foot repeatedly, something like that is unlikely to kill someone in real life, at least not instantaneously. Injure the heart, however, and everything about this story changes.

Similarly, not all knowledge is equally important or useful. None of the people in the examples above are any worse at their jobs just because they don’t know something outside of their scope. If they needed that information, they could simply ask and be given only the most relevant details of whatever it is they need to know: “I don’t need to know what the battery is made of or how heavy it is, just tell me how long it takes to charge!”
In light of my somewhat confusing wording:
We can have high-quality knowledge—knowledge that’s highly relevant and useful to us, even though it constitutes only a fraction of the world’s collective knowledge.
Now we get to the main issue: just because one does not possess all the knowledge in the world (most of which is arguably useless, depending on whom you ask) does not mean that one needs no knowledge of the world at all, or that one should adopt an extreme position and decry the necessity of learning: “If we can’t learn everything, then there’s no point in learning!” It’s an absurd conclusion that many jump to, usually in the form of the thought-terminating cliché we started with (“I’m not perfect!”), and it results from a lack of granularity in their worldview.

To emphasise: that “useless knowledge” is not the same for everybody—knowledge can be relatively useless, not absolutely useless; it always depends on who’s involved and whom you ask, all of whom are equally worth considering. In essence, it is like a constantly shifting pool entirely dependent on the observer—everyone sees “useful” and “useless” differently. (My belief is that groups of people and communities are formed by those who draw similar lines between “useful” and “useless” in this pool of knowledge, those who have similar-looking pools, though there’s more to it, including having similar standards for knowledge, which is not being discussed here.)

Of course, there’s also the argument that people don’t know what they’re looking at, leading them to erroneously overvalue what should be useless and undervalue what should be useful, but I’ve simplified things for the sake of this argument: the fact that we can’t know everything isn’t justification for not acquiring accurate knowledge, let alone for rejecting the very act of acquiring knowledge itself, choosing to take a laissez-faire approach and letting one’s intuition, social circumstances, and passing information take over the learning process, like an abandoned garden overgrown with weeds—an uncontrolled free-for-all. (It’s a lot easier to maintain and transform a garden that’s well-maintained to begin with, as opposed to a corrupted botanical wasteland; perhaps this is why so many people become bitter and prejudiced, irreversibly obstinate in their beliefs?)

Not all knowledge may be equal, but we can all agree that one should not pretend to be an expert outside of their domain, as much as this happens all the time in reality. Expert or not, one should at least have some rudimentary knowledge of a topic before engaging in discussion of it—discussions that can change almost linearly in complexity once the collective knowledge in the group reaches a certain threshold. By “linearly”, I mean that the topic remains the same, just discussed in ways that might actually lead to something, as opposed to echo-chamber-style discussions where everyone just parrots the most emotionally persuasive proposition, regardless of whether it may even be true.

The topics people like to discuss include politics, social issues, psychology, philosophy, morals, economics, and just about everything they find themselves involved in. I cannot understand why people always try to take the difficult option of maintaining “an opinion” even if they don’t really understand the topic, when they can just take the easier (and more responsible) position of, well… not having an opinion: you can’t write a wrong answer if you don’t write anything, and real life isn’t some timed exam where you lose the chance to answer—you can always come back to it another time. Nobody is expected to know everything in the world, let alone have all the answers on hand at any given moment. There’s nothing wrong with admitting that one doesn’t know something—the converse would be more absurd, unbelievable, and thus far less worthy of respect.

While I know that these are topics central to human existence, I don’t maintain strong opinions on the majority of them, as I just don’t know. That said, because these topics are so central to human existence, they can have profound emotional impacts on us—we can hold opinions that mean something to us emotionally rather than rationally, myself not exempted, which is the purpose of this (did I say non-exhaustive?) list of prejudices I find worth mentioning about myself.
But how does one function without concrete beliefs?
Why would it disable me?
I can make educated guesses, I can make mistakes, I can rely on certain authorities, hoping that they are right on this occasion while remembering that they are in no way prophets and are always fallible (the trick is to be right more often than wrong)—there are many things I can do even if I don’t have absolute certainty in things, because one doesn’t need absolute certainty: this world isn’t even built on absolute certainty to begin with; everything comes with some level of risk.

I bring an umbrella if the weather forecast says it will rain, even though there is never absolute certainty due to the probabilistic nature of the prediction; one does not need that much in order to function. Even if it ends up not raining, it doesn’t affect my life in any way other than the fact that I had to carry an umbrella the whole day (as opposed to throwing it away).

For more important matters, of course, it would be prudent for us to contemplate for a while longer, but this goes beyond what one might describe as reasonable daily functioning. Even then, the conclusions we may arrive at will still be probabilistic in nature.

Another way of looking at it would be that it is far more important to be right when it counts than when it doesn’t, unlike how most people end up approaching things, in what I can only describe as an emotionally-compensated, pseudo-random fashion: indifferently wrong when it counts and proudly correct when it doesn’t.
Going back to our garden analogy: if you’re not sure whether a plant will survive, don’t plant a whole garden full of it; just plant a few. There’s no need to invest so much in something you can’t be sure about (even if you may believe you are absolutely sure).

Philosophy, if I may argue, covers everything we are or may be interested in as human beings, as it is itself the study of existence, reasoning, knowledge, and many other “building blocks” that lead up to our forming conclusions about things—the reality and nature of the world we live in. Without existence, we can’t reason; without reason, we can’t do much with our knowledge, let alone acquire it; without knowledge, we can’t reason as well as we otherwise could, nor have as much to reason with, creating a positive feedback loop.

When people discuss issues central to philosophy, it should follow that doing so without philosophical knowledge—even at an elementary level—counts as discussing a topic outside of their domain. This is not to say that one should never discuss topics outside of their domain, but rather that one should not be more confident about their conclusions than their knowledge and reasoning would warrant (and yet people often are). Most casual discussions about philosophical topics are riddled with blatant reasoning errors, which affect the veracity of their conclusions. While it is true that even highly-educated people are susceptible to fallacious reasoning, the difference is mostly found in the conclusions: their conclusions typically do not stray far from what the reasoning and evidence justify, and are often open to future modification—a tentative “best guess” that acknowledges the potential flaws in reasoning and current evidence.

Most laypeople, on the other hand, have their conclusions somewhere on Mars—astronomically far away from what the strength of their reasoning and veridical knowledge would allow. How that is achieved I do not know. It is as if they firmly believe it’s so true it can’t (and shouldn’t) be reached by normal means. Being closed to future reasoning and persuasion doesn’t help: they can’t exactly go to Mars to retrieve it, can they? They shouldn’t have thrown something they couldn’t be absolutely, absolutely certain about so far away in the first place.

In discussions of philosophy, then, it would be best for these people to try to acquire at least a fundamental awareness that their reasoning isn’t necessarily perfect (and of psychology as well, since human perception is imperfect). It is then up to them whether they want to refine their reasoning, educate themselves on critical thinking skills, adjust their conclusions to match their actual ability while assigning less certainty to them, or at the very least—and really even at the very least—have a sense of doubt the next time they throw something into space. Even the slightest hint of regret, “I wonder if I should have done that?”, will go a long way.

This regret mixed with doubt will hopefully lead to some hesitation the next time they jump to a conclusion (or, consistent with our analogy, throw a conclusion into space), slowing them down, giving them time to think, or at least inspiring some honesty with their conversational partners, admitting that they don’t know whether their conclusion is absolutely true before making it regardless: a significant improvement in self-awareness despite only a minor change in behaviour.

For the rest of this post, I will be detailing some of my prejudices: those which I believe can eventually be improved upon, or at least those I have no proper justification for but hold anyway for emotional reasons.

You can read part 2, the rest of this post, here.