If everyone in our family / group / society / country were both trusting and trustworthy, we would never learn to recognise the first untrustworthy person to come along. Conversely, if too many people were untrustworthy, very little would get done, and the world would quickly run out of steel-reinforced doors. But how much is the right level of trust?
To map out this middle ground, security guru Bruce Schneier, in his new book "Liars and Outliers: Enabling the Trust that Society Needs to Thrive," outlines four "societal pressures" that guide the lives of most of us and define what causes us to both trust and be trustworthy.
Moral pressure represents our internal "moral compass." We don't steal because we believe stealing to be wrong, and we prefer to be rule-followers rather than rule-breakers.
Reputational pressure is the external counterpart of this: we follow the rules because we want to be known as rule-followers.
Institutional pressure lays out the general and specific rules of society. Mostly cast as negatives - don't do this, or you will pay a penalty - these pressures (rules and laws) are the "written down" versions of the previous two.
Security systems serve two purposes, both aimed at inducing cooperation. First, they make it difficult to break the rules - locks, razor wire, anti-theft tags on store goods, and so on. Second, they help identify rule-breakers - forensic science, surveillance cameras and the like.
As Schneier notes, "It's everyone's responsibility to keep everyone else in check." But not too much in check - Schneier makes a compelling case that society actually needs an appropriate level of rule-breaking (the Liars and Outliers of the title).
But of course, it's not that simple (it never is!).
From a different perspective, the book delves into the trade-off between self-interest and the group interest. As an illustration, Schneier offers the "No Child Left Behind" program, under which schools' funding depended on pupils passing government-mandated tests. In the District of Columbia example used in the book, teachers were doubly incentivised - bonuses for success; job loss for failure.
Pass rates went up enormously, and the district was held up as a model of how well incentives worked. It was just a pity that so many of the tests were completed by the teachers, not the students. "People became teachers to teach, not to cheat... until their jobs depended on it." So normally trustworthy people can become untrustworthy in the right circumstances - this trust thing is getting tougher to understand all the time.
Or to paraphrase Joel Spolsky, if programmers are rewarded for fixing bugs, they'll write buggy software so there are more bugs to find.
Of course, there is far more in this book than I can cover in a short review, but be aware: Schneier doesn't reach many conclusions in this book - that's your job as the reader. What he does give you is all the background and information you need, drawn from more scientific and related disciplines than you would ever expect.
The broad sweep Schneier takes across the trust landscape will leave you wondering, "Where on earth did he find that piece of information?" while being very thankful that he did.