Tuesday, May 20, 2025

On the Edge - Book Review

Good book from Nate Silver!



  • The Introduction reminds us of instances when a provision gets sneakily inserted into an unrelated bill. The example the author gives is the UIGEA, which established regulations for payment processors handling online gambling but was passed as part of a Homeland Security bill. Another famous example is the minimum-wage increase that was sneakily inserted into an Iraq War bill.
  • Lots of discussion around Expected Value: EV = P(win) x profit + P(loss) x loss, where the loss is entered as a negative number (see the sketch after this list).
  • Book contains lots of references to Game Theory.
  • Good examples of the Prisoner’s Dilemma, which arises only in situations that are not zero-sum. It is what happens when people are unable to cooperate, even though they would be better off if they could (see the payoff sketch after this list).
  • A good way to play poker is to randomize. That means not just that you should play different hands in different ways, but that you should play the same hand in different ways.
  • Deception is very important in poker.
  • Game Theory Optimal (GTO): a balanced strategy that cannot be exploited.
  • In poker, a player’s checks can be highly predictable, as though they’ve turned their hand face up; such a range is said to be “capped”.
  • The defense against this is to occasionally check with your best hands.
  • Sportsbook odds come in different formats, such as the point spread and fractional odds.
  • Annie Duke’s book Quit has well-researched evidence that when people volunteered to put major life decisions, such as whether to stay at a job or in a relationship, up to the outcome of a coin flip, they were happier on average when they made a change. The takeaway is that most people do NOT take enough risk.
  • System 1 Thinking : Little or no conscious effort.
  • System 2 Thinking : Deliberate, structured thought process.
  • Book mentioned: The Luck Factor.
  • Nevada regulation declares that the success of gaming depends on public confidence and trust and on staying free of criminal and corrupt elements. This means it is very unlikely that a casino would, say, remove a single ace from a six-deck blackjack shoe, even though it would be hard for anyone to detect. The problem is that if other casino operators also started doing this to maximize profits, it would become hard for consumers to find trustworthy operators. Also, many casino resorts are among the most expensive building projects anywhere, and developers count on many decades’ worth of profits to recoup their investments.
  • As a result, Nevada’s gaming revenues exceed those of the next three states combined.
  • Never play slots in a casino: they have a heavy house edge, and they are addictive.
  • Slots prey on people who are looking to escape from the world.
  • Sports Betting -
  • Middling and arbitrage: bet both sides of the same game across two betting sites when their lines favor opposing teams.
  • It’s easy to find edges but not easy to find people willing to bet real money for an extended period of time.
  • There is something called the moneyline: a bet on which team will win outright. The team with a minus sign in front of it is the favorite. Example: if Team A is listed at -225, you have to bet $225 to win $100 (see the odds sketch after this list).
  • The point spread is about the difference between the two teams’ scores.
  • The total is about the combined number of points scored in a match.
  • In the chapter Acceleration, the author very convincingly points out that Silicon Valley might not be as contrarian as it pretends to be.
  • Peter Thiel seems to rely on the belief that the world is deterministic.
  • Non-fungible token = one of a kind.
  • Thomas Schelling’s concept of the focal point: a solution that people tend to choose by default, in the absence of communication, in order to avoid coordination failure.
  • John Rawls talks about maximizing the well-being of the least well-off person.
  • SBF’s philosophy that “0 is not the correct number of times to miss a flight.”
  • Is holding investments with a low probability of success, say a 1% chance to return 1000x the money, worth it?
  • SBF was cunning enough to cultivate the image of an eccentric founder who believes in taking risks, much like some politicians.
  • The chapter Quantification gives a very fascinating example of a small dog, Dakota, who got loose from her owner and ran onto the tracks along the route of an F train.
  • It was close to rush hour, and the trains were shut down for an hour. This raises the trolley problem, i.e., a moral dilemma.
  • Do we save 5 people or just 1 person?
  • In this case, hundreds of thousands of people ride this route daily, so shutting down the trains for an hour means tens of thousands are delayed. If the average hourly wage in NY is $40/hour and we delayed 50,000 people, the cost is ~$2 million (see the back-of-the-envelope calculation after this list).
  • Is Dakota’s life worth more or less than $2 million?
  • It’s a rough simplification, because in this case we erased an hour from the commuters’ lives that day. Sure, they can keep scrolling TikTok, and some will find an alternate route. However, delays for some of these commuters might endanger other people: a hospital worker might not be able to get to the hospital to attend to a victim, or a father might not be able to pick up his special-needs kid.
  • How many of us would support shutting down the subway for an hour if a squirrel got onto the tracks? For a human toddler, hopefully everyone agrees. For a poodle, reasonable people will argue the case back and forth.
  • William MacAskill argues that we need to go by the number of neurons in the animal.
  • Could this be a speech? Hard choices are often unavoidable.
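
A minimal sketch of the expected-value formula noted above. The 55%/even-money numbers are made up for illustration, not from the book.

```python
# Expected value of a bet: EV = P(win) * profit + P(loss) * loss,
# where the loss is entered as a negative number.

def expected_value(p_win: float, profit: float, loss: float) -> float:
    """Return the EV of a simple win/lose bet."""
    p_loss = 1.0 - p_win
    return p_win * profit + p_loss * loss

# Hypothetical example: 55% chance to win $100, otherwise lose $100.
print(expected_value(0.55, 100, -100))  # 0.55*100 + 0.45*(-100) = 10.0
```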
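A small payoff sketch for the Prisoner's Dilemma mentioned above. The payoff numbers are the usual textbook values (not taken from the book); the point is only that defection is each player's best response, even though mutual cooperation is better for both.

```python
# Payoffs as (row player's payoff, column player's payoff).
# Numbers are standard textbook values, not from the book.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defecting is the best response to either opponent move...
print(best_response("cooperate"), best_response("defect"))  # defect defect
# ...yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).
```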
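A sketch of how American moneyline odds translate into payouts and break-even probabilities, using the -225 favorite from the notes above; the +185 underdog line is a made-up counterpart for illustration.

```python
def moneyline_profit(odds: int, stake: float) -> float:
    """Profit on a winning bet at American moneyline odds."""
    if odds < 0:                       # favorite: bet |odds| to win 100
        return stake * 100 / abs(odds)
    return stake * odds / 100          # underdog: bet 100 to win `odds`

def implied_probability(odds: int) -> float:
    """Break-even win probability implied by the moneyline."""
    if odds < 0:
        return abs(odds) / (abs(odds) + 100)
    return 100 / (odds + 100)

print(moneyline_profit(-225, 225))          # 100.0 -> bet $225 to win $100
print(round(implied_probability(-225), 3))  # 0.692 -> favorite needs ~69% to break even
print(round(implied_probability(185), 3))   # 0.351 -> hypothetical underdog at ~35%
```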
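The back-of-the-envelope shutdown cost referenced above, spelled out. The $40/hour wage and the 50,000 delayed riders are the rough figures from the notes, not precise data.

```python
# Rough cost of a one-hour subway shutdown, using the figures from the notes.
delayed_riders = 50_000     # riders delayed by the shutdown (rough figure)
hours_lost_each = 1         # one hour erased from each rider's day
avg_hourly_wage = 40        # rough average NY wage, $/hour

cost = delayed_riders * hours_lost_each * avg_hourly_wage
print(f"${cost:,}")         # $2,000,000 -> the ~$2 million estimate in the notes
```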

[include the covid instances as well]

  • The author gives us a primer on Effective Altruism in the chapter Quantification.
  • Eliezer Yudkowsky is a leading proponent of rationalism and the author of the LessWrong blog.
  • Table: Effective Altruists vs. Rationalists
    Effective Altruists: influenced by Peter Singer; they do things from the heart, i.e., they want to do good.
    Rationalists: influenced by Yudkowsky; they do things from the head, i.e., they want to do what is logical.

  • EAs prioritize morals and how to do good, in contrast to rationalists, who prioritize logic and think about long-term problems.
  • Utilitarianism states that the best action is the one that maximizes overall well-being (often referred to as "utility") and reduces suffering for the greatest number of people.
  • “Common sense morality” has some flaws in it. Example: why do we eat pigs but not dogs? If you raise too much fuss about it, then you are either a radical activist or a puppy killer.
  • Can fully denying ourselves lead to a better world?
  • I like how the author made an astute observation: people of different stripes of political and other identity all believe that the world as we know it will end soon.
    Climate Change -> Democrats ;
    Social Decay -> Republicans ;
    AI Risk -> Tech Bro
  • Peter Singer’s EA philosophy poses the following scenario, a variation on the trolley problem: you’re walking past a child who is drowning, and you can easily pull her out without any risk to yourself. There is one catch, though: you’re wearing nice clothes and they would get dirty. What should you do? The answer is that you should rescue her; everyone will agree that you’d be some sort of sociopath if you didn’t.
  • To up the ante: how about saving a kid you are not close to, or whose name you will never know? If your answer is no, then aren’t you placing a different value on the life of a kid in the U.S. versus the rest of the world?
  • If someone indulges in a luxury, they could potentially have saved someone somewhere with that amount; in that sense they are like the man who refused to save the drowning kid.
  • Kelly Criterion: describes what fraction of your current wealth you should bet on an opportunity where you believe the odds are in your favor; in other words, it is about bet sizing (see the sketch after these notes).
  • St. Petersburg paradox: imagine a game where you flip a fair coin repeatedly until it lands on heads; the game ends as soon as the first head appears. The payout depends on when that happens: if the first head appears on the 1st flip, you win $2^1 = $2; on the 2nd flip (Tails, Heads), $2^2 = $4; on the 3rd flip (Tails, Tails, Heads), $2^3 = $8; and in general, if the first head appears on the nth flip (n-1 tails followed by a head), you win $2^n. Even though the expected value of this gamble is infinite, most people would be willing to pay only a small, finite amount to play it (see the simulation after these notes).
  • Is it wrong to take the chance of pressing a button when there is a 50% probability the world will end and a 50% probability the world will become twice as good? The Manhattan Project had its doubts too.
  • Yudkowsky’s thesis is that AI is orthogonal, i.e., an AI’s intelligence and its goals are uncorrelated.
  • Interesting tidbit : The UN was formed 6 weeks before the Hiroshima bombing.
  • Prospect theory states that people are especially risk-averse when it comes to losing what they already have.
  • Another interesting story: how the decision to bomb Hiroshima and spare Kyoto was made. The Secretary of War at the time, Henry Stimson, didn’t want to destroy Kyoto, a city of cultural significance where he and his wife had vacationed, so the planners zeroed in on Hiroshima.
  • System 1 thinking is Fast, Effortless, Associative, Emotional.
  • System 2 thinking is Slow, Analytical, Complex, Conscious
  • AI can make the world more gamified, commodified, quantified, monitored, and manipulated.
  • Food for thought: how do we avoid getting stuck in a “button-clicking” loop, using AI as a weapon against AI?
  • SBF made a strange comment: “If you never miss a flight, you’re spending too much time at airports.”
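
A minimal sketch of the Kelly bet-sizing rule referenced above, in its standard form f* = p - q/b; the 55%/even-money example is hypothetical.

```python
def kelly_fraction(p_win: float, net_odds: float) -> float:
    """
    Kelly criterion: fraction of bankroll to stake on a favorable bet.
    p_win    -- your estimated probability of winning
    net_odds -- net payout per $1 staked (even money = 1.0)
    Formula: f* = p - q / b, where q = 1 - p and b = net_odds.
    """
    q = 1.0 - p_win
    f = p_win - q / net_odds
    return max(f, 0.0)  # never bet when you have no edge

# Hypothetical example: 55% to win an even-money bet -> stake 10% of bankroll.
print(kelly_fraction(0.55, 1.0))  # 0.1
```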
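A quick simulation of the St. Petersburg game described above: the theoretical expected value is infinite, yet the average payout over many simulated plays stays modest, which is roughly why people will only pay a small amount to play. The trial count is an arbitrary choice.

```python
import random

def st_petersburg_payout() -> int:
    """Flip a fair coin until heads; pay $2^n where n is the flip of the first head."""
    n = 1
    while random.random() < 0.5:  # tails with probability 1/2, keep flipping
        n += 1
    return 2 ** n

# The theoretical EV is 1 + 1 + 1 + ... (each flip level contributes $1), i.e. infinite,
# yet the empirical average over many plays stays small.
trials = 1_000_000
average = sum(st_petersburg_payout() for _ in range(trials)) / trials
print(round(average, 2))  # typically in the low tens of dollars
```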
