
The War Over Obamacare May Never End

Friday, 22 September 2017 09:27 pm
[syndicated profile] 538_feed

Posted by Perry Bacon Jr.

The latest version of Obamacare repeal seems dead or on life support, with Arizona Sen. John McCain declaring on Friday that he will not support the Graham-Cassidy legislation. Maine’s Susan Collins, Alaska’s Lisa Murkowski and Kentucky’s Rand Paul are also leaning against voting for the bill, which would put the Republicans two votes shy of passage, so it’s not clear whether Graham-Cassidy will even be taken up for a vote next week.

But I don’t think the Obamacare wars are over — or even close to over. We tend to think there are only two possible futures for the Affordable Care Act: It remains in place or Republicans in Congress repeal it. But there are really four paths:

  1. Repeal and replace succeeds. Republicans in Congress find a way to repeal or partially repeal Obamacare. Sen. Orrin Hatch of Utah has already floated the idea of adopting a reconciliation bill for 2018 that includes tax reform and an Obamacare repeal. This would allow Republicans to attempt to accomplish both major GOP goals using a single bill that would require only 50 votes to pass. It would be hard to pull off, but the fact that Hatch is floating the idea suggests that Republicans may not give up trying to pass an Obamacare repeal with 50 votes, even if they can’t use the 2017 reconciliation process, which expires after Sept. 30.
  2. Executive branch undermines Obamacare. President Trump’s administration could do a number of things — such as cutting the advertising budget for the Obamacare marketplaces (which has already happened) or refusing to pay insurers for cost-sharing subsidies (which Trump has not yet done) — that don’t outright repeal the law, but that severely weaken its effect and over four years might add up to a partial repeal.
  3. Bipartisanship. A bipartisan congressional effort to fix the law — along the lines of the bill Republican Sen. Lamar Alexander of Tennessee and Democratic Sen. Patty Murray of Washington were working on until GOP leaders decided this week to focus on Graham-Cassidy — could still gain traction.
  4. Trump implements Obamacare. In this scenario, the Trump administration and Republican governors decide to give up on repeal and instead implement Obamacare. This would mean Trump’s team would need to encourage insurers to remain in the program and some of the 19 states that have not yet expanded Medicaid under Obamacare, nearly all of which are led by Republicans, would likely opt to do so.

Congressional Republicans and the Trump administration, in trying to pass an Obamacare repeal for much of the year, have essentially been vacillating between these four approaches. The White House has taken steps, like paying out the cost-sharing subsidies owed to insurers under the law, that move in the direction of approach No. 4. But cutting the Obamacare ad funding is more like No. 2, a kind of Obamacare sabotage. And the Trump administration didn’t formally oppose Alexander’s bipartisan approach until this week, when it seemed like Graham-Cassidy could pass.

It’s hard to say which path Republicans will take, even next week. And this is not because party leaders are stupid or confused about their goals, but because each of these paths is fraught for Republicans.

Repeal and replace succeeds: It’s not an accident that Republicans keep coming up a handful of votes short. The party’s ideology keeps pushing it toward approaches that would cut Medicaid and leave millions more people uninsured. But those ideas are unpopular with the public, and the Obamacare provision that ensures people with pre-existing conditions can get affordable coverage has become a kind of political red line that can’t be crossed.

So Republicans keep trying to advance bills that cut Obamacare’s regulations, rules and costs without leaving more people uninsured or pricing out people with pre-existing conditions. These bills annoy more moderate GOP members (Collins) and also more conservative members (the House Freedom Caucus and Paul) by trying to split the difference between divergent views of health care. The goals (keep the good parts of Obamacare while repealing Obamacare) also fall apart under scrutiny from policy analysts like the Congressional Budget Office, creating some incentive for rushed, opaque processes that annoy more institutionalist Republicans (McCain, Murkowski).

Obviously, this option goes away if Democrats win control of the House or Senate in 2018 or if Republicans no longer occupy the Oval Office.

Executive branch undermines Obamacare: Millions of Americans are already buying insurance through Obamacare exchanges or living in states that have expanded Medicaid. Cutting off the advertising budget for Obamacare is most likely to affect potential beneficiaries of the law, that is, people who have not yet signed up. So I would argue Trump is on safer political ground there.

But any steps that take health care away from people who already have it are more politically complicated. Arkansas, West Virginia and Kentucky all expanded Medicaid under Democratic governors but now have GOP chief executives. All those states could withdraw themselves from the Medicaid expansion. None of them have, because those governors know such a move would be politically perilous.

If Trump’s team used the executive branch to take steps that would gut protections for people already getting coverage through Obamacare, they would face similar political challenges.

Bipartisanship: Even if Senate Republicans drop their Obamacare repeal effort and never come back to it again, I’m skeptical that a bipartisan Obamacare “fix” can pass Congress. Republican members of the House and Senate have spent almost a decade attacking this law. They have told party activists it is terrible. Key groups within the party, like Americans for Prosperity, are deeply committed to ending Obamacare. Would House Speaker Paul Ryan want to bring some kind of “Obamacare stabilization” legislation to the floor and watch it pass even as the majority of Republican House members vote against it? I doubt it.

Trump implements Obamacare: To me, this is the easiest path — or at least the one with the fewest land mines. Team Trump would ratchet down the Obamacare wars, stop criticizing the law and take some steps to implement the ACA, but in a conservative way. (This might include measures like having the 19 states that have not currently expanded Medicaid opt in to the expanded program, but require recipients to have jobs or be in college or a training program, plus pay some small premiums.) Some of Trump’s remarks about Obamacare (he seems open to signing a bill that doesn’t really repeal the law as long as he can claim he fixed it) suggest that the president would not be opposed to such a path. And if he can flip-flop on DACA even though it involves one of his signature issues (tough immigration policies), surely he can flip on health care. Trump could then say that he fixed American health care.

But Trump’s Health and Human Services Department is run by Tom Price, who has been a consistent and fervent opponent of Obamacare. I doubt Price would favor such an approach. In fact, I think Trump’s HHS staff would slow-walk a pro-Obamacare strategy (think of how Trump’s national security staff seems to be trying to stop him from withdrawing the U.S. from the Iran nuclear deal) if the president called for one.

The news of this week suggests that Republicans won’t pass an Obamacare repeal by Sept. 30, although that could change if Collins, Murkowski or Paul suddenly switch positions. But Republicans, I would argue, only have hard choices on Obamacare. And that’s why the last seven months in Washington have felt like Groundhog Day.

Posted by FiveThirtyEight

Sen. John McCain announced on Friday that he would vote no on the Graham-Cassidy bill, a renewed GOP effort to repeal the Affordable Care Act. FiveThirtyEight’s Politics podcast team weighs in on whether Republicans can still muster a repeal with only a week before their options narrow significantly.

You can listen to the episode by clicking the “play” button above or by downloading it in iTunes, the ESPN App or your favorite podcast platform. If you are new to podcasts, learn how to listen.

The FiveThirtyEight Politics podcast publishes Monday evenings, with occasional special episodes throughout the week. Help new listeners discover the show by leaving us a rating and review on iTunes. Have a comment, question or suggestion for “good polling vs. bad polling”? Get in touch by email, on Twitter or in the comments.

Posted by Michael Salfino

The Bengals entered this year as playoff contenders with a retooled offense that was considered one of the fastest units in the NFL. But two games into the season, they’ve kicked three field goals. And that’s it, that’s all the points the team has scored. Cincinnati’s inability to score a touchdown in its first two games (both losses) has led to the quick dismissal of offensive coordinator Ken Zampese in his 15th season with the team.

It may not sound like that big a deal to be held without a touchdown for the first two games of the season, but going back to 1970, it had happened only 15 times prior to 2017. Another eight teams registered only a return touchdown, failing to score with their offense.

The 23 teams that got left at the starting gate should not give Bengals fans much confidence in this year’s unit. These offenses would go on to average 17 points per game for the remainder of the season. If you include the two clunkers each team had in Weeks 1 and 2, the group finished the season with a paltry 15.6 points per game. When compared to their previous season’s scoring output, teams — not counting the 1976 Tampa Bay Buccaneers, who were an expansion team and so did not have a previous season — declined by an average of three points.

Teams that started like the Bengals didn’t rebound well

How the 22 past teams that didn’t score an offensive touchdown in Weeks 1 and 2 fared over the rest of the season, compared to the season prior

TEAM       FULL-SEASON PPG   WEEKS 1-2 PPG   REST-OF-SEASON PPG   PRIOR-SEASON PPG   CHANGE
2016 LAR   14.0              4.5             15.4                 17.5               -2.1
2006 OAK   10.5              3.0             11.6                 18.1               -6.5
2006 TAM   13.2              1.5             14.9                 18.8               -3.9
2004 TAM   18.8              8.0             20.3                 18.8               +1.5
2001 SEA   20.0              6.0             22.0                 21.1               +0.9
2001 WAS   16.0              1.5             18.1                 17.6               +0.5
2000 DET   19.2              14.5            19.9                 20.1               -0.2
1997 IND   19.6              8.0             21.3                 19.8               +1.5
1996 TAM   13.8              4.5             15.1                 14.9               +0.2
1990 NOR   17.1              7.5             18.5                 18.8               -0.3
1990 PIT   18.3              11.5            19.3                 16.6               +2.7
1988 CLE   19.0              4.5             21.1                 26.0               -4.9
1985 BUF   12.5              6.0             13.4                 15.6               -2.2
1985 PHI   17.9              3.0             20.0                 17.4               +2.6
1982 KAN   19.6              14.0            21.2                 21.4               -0.2
1978 BAL   14.9              0.0             17.0                 21.1               -4.1
1977 TAM   7.4               3.0             8.1                  8.9                -0.8
1977 BUF   11.4              3.0             12.8                 17.5               -4.7
1975 NOR   11.8              1.5             13.5                 11.9               +1.6
1974 PHI   17.3              8.0             18.9                 22.1               -3.3
1973 OAK   22.9              14.0            24.4                 26.1               -1.7
1970 NOR   12.3              1.5             14.1                 22.2               -8.1

CHANGE is rest-of-season PPG minus prior-season PPG.

Excluding the 1976 Tampa Bay team, which was in its first year as a franchise
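The group averages cited in the text can be recomputed directly from the table. Here is a minimal sketch in Python; the rows are transcribed from the table above, and the column meanings are inferred from the surrounding text:

```python
# Per-team numbers from the table above (1976 Tampa Bay excluded, as noted):
# (team, full-season PPG, rest-of-season PPG, prior-season PPG)
rows = [
    ("2016 LAR", 14.0, 15.4, 17.5), ("2006 OAK", 10.5, 11.6, 18.1),
    ("2006 TAM", 13.2, 14.9, 18.8), ("2004 TAM", 18.8, 20.3, 18.8),
    ("2001 SEA", 20.0, 22.0, 21.1), ("2001 WAS", 16.0, 18.1, 17.6),
    ("2000 DET", 19.2, 19.9, 20.1), ("1997 IND", 19.6, 21.3, 19.8),
    ("1996 TAM", 13.8, 15.1, 14.9), ("1990 NOR", 17.1, 18.5, 18.8),
    ("1990 PIT", 18.3, 19.3, 16.6), ("1988 CLE", 19.0, 21.1, 26.0),
    ("1985 BUF", 12.5, 13.4, 15.6), ("1985 PHI", 17.9, 20.0, 17.4),
    ("1982 KAN", 19.6, 21.2, 21.4), ("1978 BAL", 14.9, 17.0, 21.1),
    ("1977 TAM", 7.4, 8.1, 8.9),    ("1977 BUF", 11.4, 12.8, 17.5),
    ("1975 NOR", 11.8, 13.5, 11.9), ("1974 PHI", 17.3, 18.9, 22.1),
    ("1973 OAK", 22.9, 24.4, 26.1), ("1970 NOR", 12.3, 14.1, 22.2),
]

# Average points per game over the remainder of the season.
avg_rest = sum(r[2] for r in rows) / len(rows)
# Average full-season scoring change versus the previous season.
avg_decline = sum(r[1] - r[3] for r in rows) / len(rows)

print(f"rest-of-season average: {avg_rest:.1f} PPG")
print(f"full-season change vs. prior year: {avg_decline:.1f} PPG")
```

The rest-of-season average comes out to about 17.3 points per game, and the full-season decline versus the prior year to about 2.9 points, matching the "17 points per game" and "declined by an average of three points" figures in the text.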


All is not lost here for the Bengals. Eight of the teams that didn’t score an offensive touchdown in their first two games actually went on to score more on average in their remaining games than they did in the previous year. But all of these gains were modest, in many cases less than a point. The biggest rebounders were the 1990 Pittsburgh Steelers, who averaged 16.6 points in 1989, failed to score an offensive TD in Weeks 1 and 2, and then averaged 19.3 points for the rest of the year. The news is less rosy when you look at the three most recent examples: the 2016 L.A. Rams, the 2006 Oakland Raiders and the 2006 Tampa Bay Bucs. The inauspicious starts for these three were a dark omen for what was to come. The trio combined to go 10-38.

The hope of modest gains isn’t much for Bengals fans to cling to. This team was expecting its offense, which ranked 24th in the NFL last year, to get much better — not to plateau or fall off a cliff. Since history tells us to expect that teams in the Bengals’ position will score an average of three fewer points per game than they did in the previous season, and Cincinnati scored 20.3 points per game last year, we’d expect the team to post about 17 points per game in 2017. In the 16-game era, teams that average between 16 and 18 points per game are 871-1,635-6 for a .348 winning percentage that translates to between five and six projected wins this year for the Bengals.
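That win projection is simple arithmetic; here is a small sketch using the record cited above (counting ties as half a win, a standard NFL convention):

```python
# Record of 16-game-era teams averaging 16-18 points per game,
# as cited in the text: 871-1,635-6.
wins, losses, ties = 871, 1635, 6
games = wins + losses + ties

win_pct = (wins + 0.5 * ties) / games   # ties count as half a win
projected_wins = win_pct * 16           # scale to a 16-game season

print(f"winning percentage: {win_pct:.3f}")                     # 0.348
print(f"projected wins in a 16-game season: {projected_wins:.1f}")  # 5.6
```

The percentage works out to .348, or roughly five and a half wins over 16 games, squarely between five and six.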

Of course, the Bengals could have just run into hot defenses in their first two games, against the Baltimore Ravens and Houston Texans. But even the great 1985 Bears gave up 22 offensive touchdowns that season, or 1.4 per game. Getting shut out from paydirt in two straight games is epic futility no matter who you’re facing. Cincinnati might pin Week 2’s offensive fiasco on the fact that it was playing a Thursday night game on short rest, but that likely had no effect given that in games through Week 2 since 2014, teams have actually averaged more points per game on Thursdays (23.3) than in the season as a whole (22.6).

Some expressed worries that the Bengals’ attack would suffer after the team let 35-year-old Pro Bowl tackle Andrew Whitworth leave for the Rams in free agency, but the Bengals attempted to compensate for the loss by picking up even more skill players in the draft. Owner Mike Brown and head coach Marvin Lewis selected world-class sprinter John Ross to be a game-changing deep threat with the ninth overall pick. And in the second round, the club added 226-pound running back Joe Mixon, who ran a 4.5 40-yard-dash at his Pro Day.

And those players were added to a mix that already included three-time Pro Bowl quarterback Andy Dalton, perennial Pro Bowl wideout A.J. Green and one of the league’s most efficient scorers in tight end Tyler Eifert, who since 2000 has the third most touchdowns per catch (minimum 20 touchdowns) among tight ends. The team has 11 offensive players who are home-grown first- or second-round draft picks.

All of which makes the offense’s ineptitude even more perplexing. Which explains why the team took drastic measures: This is the first time in the Bengals’ 50-year history, all of which has been spent under the guidance of the Brown family, that an offensive coordinator has been fired during the season.

But the bigger issue may be Dalton, who currently ranks last in the NFL in QBR with a rating of just 10. The league average QBR through Week 2 is 49; last year, Dalton’s was 52.3. There have even been rumblings about benching Dalton, including from a former NFL Executive of the Year.

Either way, Cincy has no excuses this week — at least, that is, no excuses for not scoring a touchdown. The Bengals are in Green Bay facing a Packer defense that ranks 25th in yards allowed per play through Week 2 after finishing 28th in 2016.

But perhaps the Bengals can look to one of their NFC counterparts for offensive inspiration. The Bengals were the 24th team to go through their first two games without scoring an offensive TD, but the 25th team, this year’s San Francisco 49ers, joined the club just a few days later. After a fortnight of grim, incompetent offense, Brian Hoyer and the Niners exploded for five touchdowns and 39 points in Thursday night’s loss to the Rams.

Then again, when it’s only Week 3 and you are already trying to emulate the feats of the Niners, something has gone terribly wrong.

Posted by Ritchie King

Every new presidency brings its own language to America, a set of names, acronyms and slang that work their way into the zeitgeist. The Bush era had 9/11, WMDs, freedom fries and misunderestimated, while Obama’s tenure included the 1 percent, Occupy Wall Street, drone strikes and Obamacare. Though President Trump is only eight months into his term, we have already seen some words and phrases making a bid to help define his time in office.

How The Internet* Talks

We’ve analyzed every comment posted to Reddit from October 2007 to December 2016 to track how people use language on the site. Search billions of Reddit comments »

At FiveThirtyEight, we have a tool that tracks the popularity of terms used on Reddit, the massive internet message board that is the fourth-most-visited site in the U.S. and the gathering place of some of Trump’s more rabid followers. We’ve now updated that tool with data through the end of July, which means you can use it to search through Reddit comments posted during the first six months of Trump’s presidency. Here we share some of the more intriguing trends we’ve found — changes in how often words, expressions and the names of popular figures crop up — and we encourage you to check out the tool yourself. Tweet @fivethirtyeight if you find something interesting.

Hop over to the interactive and see what you can find.

Significant Digits For Friday, Sept. 22, 2017

Friday, 22 September 2017 12:04 pm
[syndicated profile] 538_feed

Posted by Walt Hickey

You’re reading Significant Digits, a daily digest of the numbers tucked inside the news.

Stage 3

Aaron Hernandez, a former NFL tight end who killed himself in prison while serving a life sentence for murder, suffered a severe form of chronic traumatic encephalopathy, or C.T.E., researchers say. He died at 27 — and had not played football for several years — but nonetheless had stage 3 C.T.E. (there are four stages). [ESPN]


Looks like tickets for last night’s Los Angeles Rams-San Francisco 49ers game weren’t exactly highly sought-after commodities: Resale sites had them going for as cheap as $14, or a little less expensive than two pretzels at Levi’s Stadium. Rams won, 41-39. [SF Gate]

Check out Besides the Points, my new sports newsletter.

31 percent

Percentage of actors cast in British films produced in 1913 who were women, according to a British Film Institute study. That figure in 2017: 30 percent. [The Guardian]

58 percent

The full consequences of the Flint water crisis are still coming to light: Fertility rates dropped 12 percent while women in the city were exposed to increased lead in their drinking water, according to a new study. Fetal death rates rose by 58 percent. [Detroit Free Press]

3,000 ads

Facebook agreed to give Congress more than 3,000 advertisements linked to a Russian group that spent at least $100,000 on divisive ads during the 2016 election. [The New York Times]


Amount owed by North Korea in parking tickets to the city of New York, dating all the way back to the 1990s. Of course, unpaid parking tickets are just one of several downsides of hosting the United Nations, especially when the general assembly is in session. Others include: abysmal traffic, dinner reservations becoming impossible, motorcades being less cool than normal and midtown hotel lobby bathrooms no longer being a reliable emergency option for the week. [NBC News]

Like Significant Digits? Like sports? You’ll love Besides the Points, our new sports newsletter.

If you see a significant digit in the wild, send it to @WaltHickey.

How Do You Like Them Rectangles?

Friday, 22 September 2017 12:00 pm
[syndicated profile] 538_feed

Posted by Edited by Oliver Roeder

Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-size and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either, and you may get a shoutout in next week’s column. If you need a hint, or if you have a favorite puzzle collecting dust in your attic, find me on Twitter.

This week, we’ve got two puzzles from the forthcoming puzzle book “The Original Area Mazes,” by Alex Bellos, Naoki Inaba and Ryoichi Murakami. The goal of these puzzles, which are also known by the Japanese term “menseki meiro,” is to figure out what the “?” equals. The only math you’ll need to know is that length times width equals area. Keep in mind that the diagrams aren’t necessarily to scale — this is about logic, not measuring.

Riddler Express

Submit your answer

Riddler Classic

Submit your answer

Solution to last week’s Riddler Express

Congratulations to 👏 Kristian Hougaard 👏 of Copenhagen, Denmark, winner of last week’s Express puzzle!

Twenty ghostbusters are on their annual camping retreat. Two of them, Abe and Betty, have discovered that another pair, Candace and Dan, are in fact ghosts posing as ghostbusters. Abe and Betty hatch a plan: When all 20 campers are sitting in a circle around the campfire, Abe will fire his proton pack at Candace, and Betty will simultaneously fire her proton pack at Dan, annihilating the ghosts. However, if two proton streams cross, it means the end of all life on Earth. If the ghostbusters are arranged randomly around the fire, what are the chances that Abe and Betty will cross the streams?

The chances are 1/3.

There are 20 ghostbusters, but we only really care about four of them — Abe, Betty, Candace and Dan. The position of the other 16 won’t affect the possible crossing of the streams, so let’s ignore them. (Sorry, you 16 irrelevant ghostbusters.)

Fix Abe’s spot at the campfire — say he’s on the north side. There are then three places his co-ghostbuster Betty can sit — east, west or south. The ghosts, Candace and Dan, will sit in the other two seats. There are 3 × 2 × 1, or six, possibilities for the seating in the east, west and south seats. In exactly two of these arrangements — the two where Candace occupies the southern seat, forcing Abe to fire across the circle — the ghostbusters will cross their proton streams. Each of these six arrangements is equally likely, so the chances of stream-crossing disaster are 2/6 = 1/3.
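The counting argument is easy to spot-check with a quick Monte Carlo simulation. This is a minimal sketch (the function name and seat encoding are my own, not part of the original solution):

```python
import random

def crossing_probability(trials=100_000, seed=7):
    """Estimate the chance that the Abe->Candace and Betty->Dan
    proton streams cross."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(trials):
        # Fix Abe in one seat; only the relative order of the other three
        # key campers matters, so shuffle them into the remaining key
        # positions (clockwise positions 1, 2 and 3 from Abe).
        seats = ["Betty", "Candace", "Dan"]
        rng.shuffle(seats)
        # The chords cross exactly when Candace sits directly across from
        # Abe (position 2), with Betty and Dan on opposite sides.
        if seats[1] == "Candace":
            crossings += 1
    return crossings / trials

print(crossing_probability())
```

The estimate lands very close to 1/3, matching the seat-counting answer.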

Solution to last week’s Riddler Classic

Congratulations to 👏 Carl Ober 👏 of New Britain, Connecticut, winner of last week’s Classic puzzle!

Last week, you faced four sticky questions:

  1. If you break a stick in two places at random, forming three pieces, what is the probability of being able to form a triangle with the pieces?
  2. If you select three sticks, each of random length (between 0 and 1), what is the probability of being able to form a triangle with them?
  3. If you break a stick in two places at random, what is the probability of being able to form an acute triangle — where each angle is less than 90 degrees — with the pieces?
  4. If you select three sticks, each of random length (between 0 and 1), what is the probability of being able to form an acute triangle with the sticks?

The probabilities are, respectively, 1/4, 1/2, \(\ln(8)-2\) and \(1-\pi/4\).

To solve these geometry problems, let’s draw some pictures! We’ll take the questions one by one.

For No. 1: Call the lengths of the three pieces \(x\), \(y\) and \(z\). They form a triangle if the sum of any two sides is larger than the third side (\(x+y>z\), \(y+z>x\) and \(x+z>y\)). This is called the triangle inequality!

Now to our stick, which we’ll assume is one unit long. Call the points where we broke it \(a\) and \(b\), both chosen at random; suppose for now that \(a<b\). That gives us pieces of length \(a\), \(b-a\) and \(1-b\). Substituting those lengths in for \(x\), \(y\) and \(z\) above simplifies those triangle inequalities:

  • \(x+y>z \Rightarrow a+(b-a)>1-b \Rightarrow b>1-b \Rightarrow 2b>1 \Rightarrow b>1/2\)
  • \(y+z>x \Rightarrow (b-a)+(1-b)>a \Rightarrow 1-a>a \Rightarrow a<1/2\)
  • \(x+z>y \Rightarrow a+(1-b)>b-a \Rightarrow 2a+1 > 2b \Rightarrow b-a<1/2\)

So, armed with the lengths of those three pieces, we can visualize the answer to the problem. As long as our randomly selected points on the stick (\(a\) and \(b\)) satisfy those inequalities we just made, we know we can make a triangle. We can plot values of \((a,b)\) as coordinates in a plane, as Laurent Lessard did:

The shaded areas are where the inequalities are satisfied. (There are two triangles because one corresponds to when \(a>b\) and the other when \(b>a\).) Those areas take up ¼ of the total square, which gives us our answer: 25 percent.
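A quick simulation confirms the 25 percent figure. This is a sketch, assuming a unit-length stick as in the solution above:

```python
import random

def broken_stick_triangle_prob(trials=200_000, seed=7):
    """Break a unit stick at two uniform points; estimate how often
    the three pieces satisfy the triangle inequality."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        a, b = sorted((rng.random(), rng.random()))
        x, y, z = a, b - a, 1 - b   # the three piece lengths
        if x + y > z and y + z > x and x + z > y:
            successes += 1
    return successes / trials

print(broken_stick_triangle_prob())
```

The estimate comes out near 0.25, as expected.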

For No. 2: What if, instead of pieces from one stick, we pick up three sticks of random length somewhere between zero and one? This is a little trickier. When we plotted the first problem, it could be collapsed to two dimensions, because we were only really worried about two sticks — the length of our third piece was automatically determined by the length of our first two pieces. But this problem is in three dimensions, so the solution needs to be plotted not in a one-by-one square but rather in a one-by-one-by-one cube.

To help solve the problem, consider what wouldn’t solve it: a violation of our triangle inequalities. Suppose, for example, that \(x>y+z\), which makes it impossible to build a triangle. In that formulation, those points are contained in a pyramid bounded by the planes \(y=0\), \(z=0\), \(x=1\) and \(x=y+z\). There are three such pyramids in this cube, one for each of the ways the triangle inequality can be violated.

Each of those pyramids has a volume of ⅙. (A pyramid has a volume equal to the area of its base times its height, all divided by three; each pyramid here has height 1 and a triangular base with area ½.) The three pyramids don’t overlap, since at most one of the triangle inequalities can fail at a time. Therefore, there is a ½ chance we can’t make a triangle, and a ½ chance we can. And so we have the answer to the second problem.
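The same kind of simulation works for three independent sticks; a triangle exists exactly when the longest stick is shorter than the other two combined. Again, this sketch is mine, not part of the original solution:

```python
import random

def three_sticks_triangle_prob(trials=200_000, seed=7):
    """Draw three stick lengths uniformly from (0, 1); estimate how
    often they can form a triangle."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        sticks = sorted(rng.random() for _ in range(3))
        # Only the longest stick can violate the triangle inequality.
        if sticks[0] + sticks[1] > sticks[2]:
            successes += 1
    return successes / trials

print(three_sticks_triangle_prob())
```

The estimate lands near 0.5, matching the pyramid-volume argument.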

No. 3: Getting tougher still (as though that were possible)! Now it’s time for calculus. Guy D. Moore explains this one for us:

The problem asks us to ensure that three pieces form an acute triangle. Consider three pieces with lengths \(x>y>z\).

First, think about a right triangle. The formula for that, as our middle school teachers drilled into our heads, is \(x^2=y^2+z^2\). (Otherwise known as the Pythagorean Theorem.) To have an acute triangle, all angles must be less than 90 degrees, so we tweak that formula: \(x^2<y^2+z^2\).

From this we get that \(y<x\) and \((1-x)^2<y^2+(x-y)^2\) (in this notation, \(x\) and \(y\) are the two cut points, with \(y<x\), so the pieces have lengths \(y\), \(x-y\) and \(1-x\)), which is the same as


Since we’re dealing with pieces of the same stick and not three separate sticks, we can return to plotting in two dimensions, not three. And our mirrored-triangle plot is useful again since our answer lies within those two original triangles. This time, though, we need to draw two new three-pointed shapes within those two triangles. The area of those shapes will be our answer — the probability of an acute triangle.

So to calculate our new shapes, we need to cut pieces out of our original triangles. The area of one of those pieces is expressed in an integral (which is the calculus part of the solution). That integral is:


There are six shapes, each with the same area, cut out of our one-by-one square, leaving:

\(3\ln 2 - 2 = \ln(8) - 2 \approx 0.0794\)
In that expression, “ln” is the natural log, and the result works out to an implied probability of acute-triangle formation of about 7.9 percent. (Who knew that natural logs are a great way to solve stick problems?) Guy also provided this illustration of the curvy areas we calculated:
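As a sanity check on this result, we can simulate the broken stick directly, applying the acute condition (longest side squared less than the sum of the squares of the other two) from above. The expected answer is \(\ln(8)-2\approx 0.079\); the sketch below is mine:

```python
import random

def broken_stick_acute_prob(trials=400_000, seed=7):
    """Break a unit stick at two uniform points; estimate how often
    the three pieces form an acute triangle."""
    rng = random.Random(seed)
    acute = 0
    for _ in range(trials):
        a, b = sorted((rng.random(), rng.random()))
        x, y, z = sorted((a, b - a, 1 - b))   # piece lengths, z largest
        # Acute iff the longest side squared is less than the sum of the
        # squares of the other two (this also implies z < x + y).
        if z * z < x * x + y * y:
            acute += 1
    return acute / trials

print(broken_stick_acute_prob())
```

The estimate comes out near 0.079, in line with the calculus answer.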

No. 4: We’re back to three dimensions again for the final question. This solution furthers the solution from problem No. 2, the way that solution No. 3 furthers solution No. 1. Laurent explained his solution this way:

We’ll solve this problem the same way we solved No. 2, but we’ll replace the triangle inequalities with the acute triangle inequalities. As in No. 2, we end up with a three-dimensional volume rather than a two-dimensional area. For simplicity again, call the three lengths \(a\), \(b\) and \(c\), and assume that \(c\) is the largest, which accounts for one-third of all possibilities.

Laurent provided a lovely illustration of this volume:

Our answer will ultimately be three times the volume of this shape (this shape only accounts for stick c being longest, and two identical shapes will be generated for stick b being longest and stick a being longest).

Our solution lies in the filled-in parts of that shape. While this looks complicated, the curved surface inside that area has the equation \(c^2 = a^2 + b^2\), which is, conveniently, the equation of a right circular cone! So we can calculate the volume of the region of interest by subtraction. It’s ⅓ of the volume of the cube minus ¼ of the volume of the cone. (One-third because we’re considering only one out of three scenarios, the one where c is longest. And ¼ because the cone’s base is ¼ of a circle.) The total probability is three times this volume, because we must account for the remaining identical pieces. The final answer is \(3(1/3 − 1/4 ( \pi/3 ) ) = 1 − \pi/4\) or about 0.2146. So the probability of forming an acute triangle with three randomly chosen lengths is about 21.5 percent.
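The cone-subtraction answer can also be checked by simulation: draw three uniform lengths and apply the same acute test as before. The expected probability is \(1-\pi/4\approx 0.2146\); this sketch is mine, not Laurent's:

```python
import random

def three_sticks_acute_prob(trials=400_000, seed=7):
    """Draw three stick lengths uniformly from (0, 1); estimate how
    often they form an acute triangle."""
    rng = random.Random(seed)
    acute = 0
    for _ in range(trials):
        x, y, z = sorted(rng.random() for _ in range(3))   # z largest
        # Acute iff the longest side squared is less than the sum of
        # the squares of the other two.
        if z * z < x * x + y * y:
            acute += 1
    return acute / trials

print(three_sticks_acute_prob())
```

The estimate lands near 0.215, matching the geometric answer of about 21.5 percent.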

Want to submit a riddle?

Email me at

LBCF, No. 152: ‘In these shoes?’

Friday, 22 September 2017 11:07 am
[syndicated profile] slacktivist_feed

Posted by Fred Clark

The misogyny is palpable, but we’ve had plenty of opportunity to explore that before now, so let’s set aside for the moment L&J’s warped understanding of gender and consider instead their warped understanding of footwear.

Posted by Maggie Koerth-Baker

President Trump’s voter fraud commission has the stated goal of ensuring the integrity of the vote as “the foundation of our democracy.” But, like the buried foundations of a building, who votes and how they vote aren’t easy things to examine.

In alleging that there’s widespread voter fraud, commission Vice Chair Kris Kobach has relied on proxies, such as the indirect measure of matching up names in voter registries to identify people registered in more than one state. In the lead-up to the commission’s second meeting last week, he also railed against thousands of New Hampshire voters who registered using out-of-state licenses — which he claimed proved that people were hopping state borders to illegally swing elections.

The experts I spoke with said those metrics don’t really measure the existence or risk of illegal voting. In fact, they said, it’s probably impossible to conclusively prove or disprove allegations of widespread illegal voting — though they pointed out that very few cases have ever been found and prosecuted, even as Kobach is aggressively seeking them out to prove his hypothesis of rampant voter fraud.

When Kobach employs these proxies as proof of voter fraud, though, he is implicitly suggesting that changes need to be made to the voting system to protect its integrity, such as ensuring that the same name never turns up on multiple registries and voters never use out-of-state licenses at the polls. But those irregularities exist because of the fundamental American values the commission is dedicated to protecting: You can’t easily and swiftly clean up registry errors without disenfranchising millions of voters. And you can’t set up a uniform, nationalized voter registry in a country whose founding values are based on limited federal control.

The problem with proxies is that they do more to demonstrate the complex nature of American values than they do to prove our elections are rigged.

If Kobach were simply claiming that voter registries are messy — full of errors and inaccuracies — he’d be correct. Research published by the Pew Center on the States in 2012 estimated that 24 million registration records (13 percent of all the registrations in the country) contained information that was likely inaccurate — names that had changed, addresses that were no longer up-to-date, people who had died, simple typos. And double-registered voters — a favorite target of Kobach’s — reached nearly 3 million. Likewise, he’s also right that people do sometimes vote in states where they aren’t officially residents. That’s particularly true of college students, who might spend most of their time in a place they don’t technically live. Depending on local laws, those students can use out-of-state licenses to prove their identities at the ballot box.

But experts say that neither of these proxies is particularly good evidence of illegal voting. Primarily, that’s because both things are 100 percent legal and exist for reasons that have nothing to do with fraud. Take double registration, for instance: When Americans are double registered, it’s usually because they’ve moved and their names were never cleared out of the system in their previous state of residence.

We did a quick survey of FiveThirtyEight staffers by checking voter registration rolls in the states they’ve lived in over the past 15 years. Out of 15 people who participated, five were double-registered. I’m one of them, with active voter registrations in Minnesota, where I live, and Alabama, a state I last lived in in 2006. Three staffers were only registered in states they no longer live in. One person wasn’t registered anywhere, much to his surprise. Bottom line: Americans don’t stay in one place forever, and bureaucracy doesn’t always keep up with us.

Then there’s the specter of out-of-state voters. Kobach claimed that more than 5,000 people had come to New Hampshire from other states to vote in (and try to change the outcome of) the November election. His proof was a list of people who had taken advantage of New Hampshire’s same-day registration laws, had used out-of-state driver’s licenses to verify their identities and had not later applied for New Hampshire licenses or vehicle registrations. Kobach has received plenty of pushback on the idea that this meant they weren’t legitimate Granite State voters, including from other members of the commission during last week’s meeting. That’s because it’s likely that many of those people whom he called fraudulent voters were actually college students voting in New Hampshire because that’s where they spent most of their time and where they were living when Election Day rolled around. The Washington Post found several individuals who attested to having done just that, and the cities with the highest number of out-of-state-license voters were college towns.

Just because these practices don’t prove voter fraud, though, doesn’t mean they aren’t confusing and even at times problematic. It’s certainly not ideal to have voter registries loaded with the “dead wood” of misspelled names and people who’ve left the state, said Charles Stewart, professor of political science at MIT. Those errors can prevent people from voting if, say, their current address and registry address don’t match. People in that situation could be turned away or forced to file provisional ballots.

And Stewart said he believes they suggest deeper administrative problems — especially when the state doesn’t know exactly how many errors its voter rolls contain. “What if a school said, ‘We don’t know how many people graduated’? We’d be really suspicious of public officials that had sloppy reporting,” Stewart said. “It’s generally good public policy to have good records.”

That’s why states go through the process of cleaning up voter registration rolls — removing the dead and the people who have left the state to try to maintain an accurate count of voters. But here’s where American values conflict with clean database management: You can’t just unceremoniously purge people from the records because they haven’t voted in a while or because they appear to be registered in another state, said Walter Mebane, professor of political science and statistics at the University of Michigan.

The National Voter Registration Act prevents states from doing just that because it’s likely to end up illegally stripping people of their right to vote.5 States have to go through a process of trying to match voter registry records to other kinds of data and alerting voters if it looks like they should be removed. There’s no uniform procedure for this, and the quality of registry maintenance (and election administration in general) varies widely from state to state. The courts are still hashing out what is and isn’t appropriate. For instance, the Supreme Court will hear arguments in November in a case on Ohio’s registry maintenance methodology, which purged voters from the rolls if they hadn’t voted in six years.

You could fix the problem — and probably make it easier to see if people have truly double-voted, not just double-registered — by having a single national voter registry, Mebane told me. “But there’s no reason to worry about that because it would never happen,” he said, explaining that it would be anathema to our national values.

Those values strongly favor local control of elections, even when it’s not the most efficient choice. That tradition dates back to the beginnings of the country, when county officials tallied in-person voice votes from citizens who didn’t need to be registered at all. As things like the secret ballot and voter registration were added into the mix, cities, counties and states came up with different ways to handle the new complications, collect the records and administer the elections. Today, elections are governed by states, but a lot of the nuts-and-bolts management still happens at the city or county level — often in ways that vary from one town to another. And shifting away from that diverse local control probably wouldn’t be terribly popular, given that Americans’ confidence in election results and fair handling of votes decreases as the level of administration moves further from where they live.

The same is true with out-of-state voting: You can simplify the system, but that would conflict with other values. Courts have repeatedly said students can vote where they study. “Nobody can lose their right to vote because of issues with residency as a student,” said Marc Meredith, professor of political science at the University of Pennsylvania — something that would be likely to happen if students were forced to travel back to their home states on Election Day in the middle of their fall semesters.

But Americans are generally less supportive of students voting outside their home states than they are of other 20th-century voting reforms, Stewart said. “There’s a sizeable number of people in the public who just believe that college students should vote where their parents live.”

He based that on the unpublished results of questions he asked in the Cooperative Congressional Elections Study in 2013. Although most Americans — 65 percent — said expanding where students could vote improved elections, respondents were less supportive of it than they were of other kinds of reforms — like extending the vote to women.

In other words, Americans are both suspicious of thousands of people from “someplace else” tipping an election and have also set up the legal system to support expansion and protection of the right to vote, even for people who are, technically, from someplace else. The result is a jumble of laws that make the ability of college students to vote — and what forms of ID and documentation they have to bring with them to the polls — vary unpredictably from state to state, even county to county. Even someone like Kobach — a state election official who has made his national career on issues surrounding election transparency — can’t be expected to know what is legal and what isn’t nationwide, experts told me. There’s just too much diversity.

But the data mess explains why it’s difficult to make a case around voter fraud from either side. Just because a situation isn’t ideal doesn’t mean it’s proof of illegal voting. Instead, Meredith said, he wishes Kobach and the commission would focus on finding better ways to systematically study voting — ways that line up with both the needs of researchers and American values. “Your hope would be that’s what a voter integrity commission would be,” he said. “Rather than jumping to conclusions on the basis of proxies that may or may not have validity.”

The GOP’s Catch-22 On Obamacare

Friday, 22 September 2017 09:49 am
[syndicated profile] 538_feed

Posted by Harry Enten

Welcome to Pollapalooza, our weekly polling roundup. Today’s theme song: “Everything’s Relative.”

Poll of the week

Republicans in the U.S. Senate have just over a week, until Sept. 30, to pass an Obamacare repeal bill with a bare majority (instead of 60 votes). But in the rush of whip counts and CBO scores, don’t forget: This is an incredibly dangerous debate for Republicans. The public, through a variety of poll results, has made plain that it doesn’t like what the GOP is doing.

The latest YouGov poll, for example, found that 38 percent of respondents picked Democrats as the party that would do “a better job handling the problem of health care”; 24 percent picked Republicans. The Affordable Care Act, meanwhile, has a positive net favorable rating, and the various GOP repeal-and-replace bills have generally polled terribly.

President Trump should also be worried about an unpopular health care bill passing. His overall job approval rating has climbed in recent weeks as news networks have been focused on hurricanes, but his approval rating has tended to decline when Americans are more focused on the health care debate. Trump himself has an approval rating of just 27 percent on the issue of health care, according to the latest NBC/Wall Street Journal survey.

So why are Trump and congressional Republicans barreling on anyway? Republican voters want them to. According to a Politico/Harvard T.H. Chan School of Public Health poll, 53 percent of Republicans said repealing and replacing Obamacare was an “extremely important priority” for them. That 53 percent was higher than it was for any other issue polled.6 Lowering taxes, which Republicans are also gearing up to do, was rated as extremely important by just 34 percent of Republicans.

The question therefore for Republicans is whether they want to pass a bill and upset the electorate at large or leave a seven-year promise to repeal Obamacare unfulfilled and upset their base. Neither option is all that appealing politically.

Other polling nuggets

  • It’s close in Virginia — Democrats were perhaps hoping that Trump’s unpopularity would allow Ralph Northam to run away with the Virginia governor’s race. It hasn’t happened. In an average of five surveys conducted this month, Northam is nursing a 45 percent to 41 percent lead over Republican Ed Gillespie. Northam may have more room to grow because African-Americans, who overwhelmingly vote Democratic, tend to make up a disproportionate share of undecideds in these polls. But also remember that the link between how voters feel about a president and how they vote for governor isn’t as strong as you might think.
  • How students understand free speech — UCLA Professor John Villasenor published a poll this week in which college students offered their opinions on free speech. Among the findings: A plurality of students said the First Amendment does not protect hate speech (44 percent to 39 percent). A slim majority said it is OK for students to shout down a guest speaker (51 percent to 49 percent). And finally, 19 percent of all students (and 30 percent of male students) said it was OK for students to use violence to prevent someone from speaking. I highly suggest reading the entire poll.
  • Moore remains ahead in Alabama — The Alabama Republican primary runoff is Tuesday, and the GOP establishment should be worried. Firebrand conservative Roy Moore led Sen. Luther Strange in two polls released this week — 53 percent to 47 percent in a Strategy Research poll and 50 percent to 42 percent in a JMC Analytics poll. Still, Moore’s 8-point margin in the latter poll is down from 19 points the last time JMC Analytics surveyed the race. Put another way: Moore is the favorite, but don’t be shocked if Strange pulls it out.
  • Bill de Blasio is cruising to re-election — After New York Mayor Bill de Blasio captured nearly 75 percent of the Democratic primary vote last week, a new Marist College poll suggests that he may come close to that percentage in November’s general election. De Blasio was ahead 65 percent to 18 percent over Republican Nicole Malliotakis. Perhaps that shouldn’t be too surprising given the heavy Democratic registration edge in New York City. Remember, though, that New York didn’t elect a Democratic mayor in any of the five elections before de Blasio won in 2013.

Trump’s job approval ratings

Trump’s job approval rating is 39.5 percent. His disapproval rating is 53.6 percent. Both of those are improvements for Trump over last week’s 38.5 percent to 55.6 percent spread, and they continue a longer-term positive trend for the president. Just last month, his approval rating was below 37 percent, and his disapproval rating was above 57 percent. The timing of Trump’s improved numbers lines up pretty well with Hurricane Harvey making landfall in the U.S.

The generic ballot

Democrats are ahead of Republicans 46.4 percent to 38.6 percent on the generic congressional ballot. That’s a slight improvement for Republicans from last week when they were down 45.5 percent to 36.0 percent.

Posted by Benjamin Morris

Before the Super Bowl in February, we published a fairly comprehensive guide for when to go for 2, simplified into one slightly complicated (but very easy to use once you get the hang of it!) chart. In addition to hopefully demystifying how to judge a lot of borderline situations, we identified some fairly clear-cut cases in which NFL coaches should choose to go for 2 but don’t. Ever.

My hope, of course, was that teams would read this (or figure it out on their own) and that we’d see an immediate and cataclysmic shift in 2-point strategy — like going for it when down 4, 8, or 11 after scoring a touchdown late (which are not only real cases, but ones that are usually clear-cut and significant). But, alas, no such luck.

The logic is pretty simple: If you can estimate your team’s chances of winning with an X point lead/deficit (X points being how many points you are up or down following a touchdown) and your chances of winning with X+1 and X+2, the decision follows from simple arithmetic. In fact, given that 2-point attempts and extra-point attempts taken from the 15-yard line (under the new rules implemented in 2015) now have roughly the same expected point value (both around 0.95 points), the choice is easier than ever. Simply calculate (or estimate):

  • The improvement in win percentage if your point margin changed from X to X+1.
  • The improvement in win percentage if your point margin changed from X+1 to X+2.

If the first number is greater, kick the extra point. If the second is, go for 2.

Now, you can estimate or intuit these differences on your own on the fly, or you can use a fancy win probability model like we have,7 but the logic is the same.
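That two-branch comparison can be sketched in a few lines of Python. The win probabilities below are made up for illustration — in practice they would come from a win probability model — and the function and label names are hypothetical, not FiveThirtyEight’s actual code:

```python
def choose_conversion(win_prob):
    """Pick 'kick' or 'go for 2' after a touchdown.

    win_prob maps post-conversion point margins to estimated win
    probabilities: 'X' (no conversion), 'X+1' (kick is good),
    'X+2' (2-point try is good). Values are illustrative only.
    """
    gain_from_kick = win_prob["X+1"] - win_prob["X"]   # X -> X+1
    gain_from_two = win_prob["X+2"] - win_prob["X+1"]  # X+1 -> X+2
    return "kick" if gain_from_kick > gain_from_two else "go for 2"

# Trailing by 2 after the touchdown (X = -2): tying the game is
# worth far more than merely pulling within 1.
probs = {"X": 0.18, "X+1": 0.24, "X+2": 0.38}
print(choose_conversion(probs))  # go for 2
```

Comparing the raw win-probability gains, rather than weighting each by its success rate, leans on the fact noted above: kicks and 2-point tries now carry roughly the same expected point value, so the success probabilities approximately cancel out of the comparison.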

Of course, we’ve taken it a bit further — our chart uses multiple sets of assumptions to create a range for each scenario covering teams that are relatively better or worse at 2-point conversions than our baseline. In case you missed it, here’s the chart:8

A quick note on reading this chart: It may look a little “loud,” but that’s a feature for looking up scenarios lightning-fast. For a quick approximation, you first look at the minichart corresponding to the point spread (after the touchdown). If the quarter you’re in is shaded bright purple, you probably want to kick; if it’s bright orange, you should probably go for it. If you’re in a rush, you could stop there and be in pretty decent shape.

Through the first two weeks of this NFL season, teams have gone for 2 (from the 2-yard line) eight times overall. More importantly, of the 30 times that the numbers say they should have gone for 2, they did so just four times, for a rate of 13 percent. Since 2015, in the regular season and playoffs, teams that should have gone for 2 have done so around 15 percent of the time.

Now, of course it’s possible that some teams are better or worse at going for 2 than average, but it isn’t possible that 85 percent of teams are worse than average. I’ve also calculated how often teams should “clearly” go for 2 — meaning situations in which they should go for it even if they are relatively quite bad at 2-point attempts9 — and there have been 16 such cases through Week 2:10

Times when teams clearly should have gone for 2

2017 NFL season through Week 2

WEEK  TEAM           OPPONENT      QTR  TIME LEFT  MARGIN  MAGNITUDE  WENT FOR 2
1     Cleveland      Pittsburgh    4    3:36       -5      2.23       ✓
1     L.A. Chargers  Denver        4    7:00       -4      1.62
1     Chicago        Atlanta       4    7:26       -4      1.33
1     Detroit        Arizona       3    3:07       -2      1.28       ✓
2     Arizona        Indianapolis  4    7:38       -4      1.28
1     N.Y. Jets      Buffalo       3    2:00       -2      1.24       ✓
1     Detroit        Arizona       4    9:27       +4      0.43       ✓
1     L.A. Chargers  Denver        4    8:10       -11     0.43
1     Jacksonville   Houston       2    0:49       +18     0.29
1     Baltimore      Cincinnati    2    1:28       +16     0.29
1     Houston        Jacksonville  3    9:09       -13     0.24
2     Cleveland      Baltimore     2    4:56       -8      0.24
2     New Orleans    New England   4    5:04       -17     0.10
2     Tennessee      Jacksonville  3    2:49       +19     0.05
2     Dallas         Denver        4    14:24      -19     0.05
2     Philadelphia   Kansas City   4    0:08       -8      0.05

Magnitude is the amount that a team’s expected win percentage is improved by making the right decision.

Source: ESPN Stats & Information Group.

Teams made the correct decision in four of those 16 cases, for a 25 percent rate. (For comparison: Since 2015, regular season and playoffs combined, teams have gone for 2 points 27 percent of the time in “clear go” scenarios.)

Of course, a decision being clear-cut doesn’t mean that it matters a whole lot, but note that even among the decisions with the most significant consequences, teams are still making the wrong choices regularly (most likely because of adherence to Dick Vermeil’s rigid and outdated system that leads them to repeat the same mistakes over and over). In particular, the aforementioned scenarios of being down 4, 8, or 11 points late are both quite clear and quite important.

Another significant case is when a team scores to pull within 2: Go for 2! This may seem like an obvious one, but since 2015, teams in this situation have chosen to kick the extra point as late as the fourth quarter (once, which is way too many times), and they’ve done so half the time in the third quarter (6 of 12, and still very bad) and 77 percent of the time in the second quarter (10 of 13, and still pretty bad, especially for such an early decision).

This season, teams down 4, 8 or 11 late are holding steady at a 0 percent correct rate, having attempted extra points five out of five times when they “clearly” should have gone for it. That means that over the past three seasons, they’ve gotten these right exactly zero times in 105 chances.

On a slightly brighter note, teams have been down 2 points after a touchdown twice this season — both in the third quarter — and they’ve correctly tried to tie the game both times! It’s not quite the revolution — it isn’t really even shots fired. But maybe, just maybe …

Beside The Points For Thursday, Sept. 21, 2017

Thursday, 21 September 2017 07:16 pm
[syndicated profile] 538_feed

Posted by Walt Hickey

Things That Caught My Eye

For whom the AL wild-card slot tolls

It sure looks like the Minnesota Twins are going to snag the American League’s second wild-card slot in the playoffs, and needless to say it’s going to be difficult to get past the recently streaking Indians, the top-notch offense of Houston, or the Yankee-Red Sox industrial complex. They’ve got a two-in-three chance of nabbing the potentially doomed playoff spot. [FiveThirtyEight]

More like AFC Best you know?

The AFC West — the Kansas City Chiefs, Oakland Raiders, Denver Broncos, and some itinerant caravan of rootless football professionals describing themselves as Chargers — is stacked this year, with the Chiefs, Raiders and Broncos each having a better-than-50-percent chance to make the playoffs, according to ESPN’s football power index. [ESPN]

Technically undefeated!

The Las Vegas Golden Knights are 2-0 so far through the NHL preseason, which is their first as a franchise. Technically speaking, that makes them the only entirely undefeated team playing at the moment. Hockey starts up again October 4. [Knights on Ice]

NFL games getting shorter?

Not including Monday Night Football, the average Week 2 NFL game lasted 3 hours, 4 minutes — down slightly from Week 2 of 2015 and 2016. Obviously we’re going to need a few more weeks of data before making a definitive declaration about the speed of play, but the early numbers appear promising. [ESPN]

Baseballs approach their platonic ideal

All baseballs go through the air a little differently — a lower seam here or a smoother ball there marginally affect how they travel — but those slight differences have been getting slighter. Judging by a measure of air resistance, the baseballs used in MLB play since 2008 have been getting more and more internally consistent when it comes to how they fly, which ends up affecting how far they go, which might explain… [FiveThirtyEight]

Big Number


The number of home runs hit league-wide in the 2017 season when Kansas City’s Alex Gordon connected for one in the eighth inning on Tuesday night, topping the major-league record set in the 2000 season. The league is currently on pace for 6,140 homers. [ESPN]

Leaks from Slack


Sox going to extra innings again. 2nd day in a row.


dammit, @neil, you caused this


I only caused it if they end up losing


by reminding them how lucky 14-3 is in extras

[The Red Sox won and were subsequently 15-3 in extra innings]


Oh, and don’t forget
This could be it for Bautista, enjoy him while you can.

How Graham-Cassidy Caught The Democrats Napping

Thursday, 21 September 2017 05:09 pm
[syndicated profile] 538_feed

Posted by Perry Bacon Jr.

Everyone should have seen the Graham-Cassidy Obamacare repeal bill coming. But we didn’t.

Democrats had spent months defending the Affordable Care Act — and they appeared to have succeeded. So just over a week ago, a group of liberal members of the U.S. Senate rolled out their proposal to create a Medicare-for-all program. The group, led by Bernie Sanders, didn’t directly say, “We saved Obamacare, so now it’s time to move on to something even more liberal,” but that was the gist.

How did Democrats end up getting caught so flat-footed, putting out a single-payer proposal that essentially has no chance of becoming law until the White House changes hands while an effort to repeal one of the party’s signature achievements of the last decade gained strength? Because aside from Sens. Bill Cassidy of Louisiana and Lindsey Graham of South Carolina, basically everyone in Washington — Republicans, Democrats, the media — assumed the Obamacare repeal effort was dead. Two weeks ago, President Trump was suggesting that Republicans needed to give up on Obamacare repeal and focus on tax reform, Sen. Lamar Alexander of Tennessee was writing a bipartisan bill to fix Obamacare and Senate Republican leaders were downplaying the possibility that the Obamacare repeal effort would be revived.

So what happened?

Most importantly: Dean Heller of Nevada moved from a weak no to a firm yes — but no one really noticed.

The rise of Graham-Cassidy began on the afternoon of July 27 — hours before the Obamacare repeal effort seemed to die in the Senate. (GOP Sens. Susan Collins, John McCain and Lisa Murkowski formally voted down the “skinny” repeal after 1 a.m. on July 28.)

On that summer Thursday, Heller — who had been one of the Republican holdouts on a bunch of other Obamacare repeal proposals, arguing they cut Medicaid too deeply — became a co-sponsor of the Graham-Cassidy bill. (Estimates suggest Graham-Cassidy will cut federal dollars going to states for health care by up to $400 billion from 2020-2026, much less than the more than $700 billion in estimated Medicaid cuts that were included in some of the proposals Heller opposed.)

It’s not totally clear why Heller signed on to Graham-Cassidy. He may have assumed it would never actually come up for a vote. He may have been worried about re-election: Republican donors in Nevada were reportedly warning Heller that they wouldn’t give him money for his 2018 re-election effort unless he backed Obamacare repeal, and Trump suggested he would oppose Heller in a GOP primary if the senator didn’t join the cause. Or perhaps Heller simply believes in the Graham-Cassidy model of health care policy reform, which would send most Obamacare funds back to states.

Either way, co-sponsoring the bill was an odd move for Heller, largely because he had previously suggested he would back only legislation that both preserved the expanded Medicaid funding Nevada had received through Obamacare and had the support of the state’s GOP governor, Brian Sandoval. Even in July, it was clear that Graham-Cassidy would likely reduce the federal dollars going to Nevada for Medicaid, a conclusion recent estimates support. Sandoval didn’t endorse the legislation back then, and this week he joined a bipartisan group of governors opposing it.

Whatever his reasons, Heller’s support was key, making the Senate math much easier for Cassidy and Graham. Back in July, only three GOP senators (Collins, Heller and Murkowski) had been strong opponents of the Obamacare repeal bills, voting down both the full repeal of Obamacare and a partial repeal largely written by Senate Republican Leader Mitch McConnell. (Of the 52 GOP senators, the other 49 voted for at least one of those two provisions.)

The last-ditch “skinny” repeal bill (which did not include Medicaid cuts) was widely expected to pass because Heller supported it, providing what was thought to be the crucial 50th vote. But at the last minute, his “no” vote was replaced by McCain’s.

In other words, at the end of July, Republicans still had two months left to repeal Obamacare and only two real, solid opponents of their repeal ideas: Collins and Murkowski. They were the only ones to vote against all versions of the repeal, though a number of their GOP colleagues had also said they were reluctant to support various bills. Despite expressing concerns about protecting Medicaid, Sens. Shelley Moore Capito of West Virginia, Jerry Moran of Kansas and Rob Portman of Ohio all eventually voted for a version of Obamacare repeal that would have cut Medicaid spending. So did McCain, who said some of his objections to the “skinny” repeal bill were about the process by which it had been written (without any Democratic input and without going through the traditional committees and hearings). Mike Lee of Utah and Rand Paul of Kentucky, two of the most conservative GOP senators, had voted for “skinny” repeal, despite complaining that the Obamacare repeal proposals left much of the ACA in place.

So assuming Murkowski and Collins were the only real holdouts, Heller’s support gave the Obamacare repeal 50 votes — at least in theory.

Meanwhile, Cassidy and Graham spent much of August and early September touting their bill. Senate Republican leaders were not enthusiastic about coming back from their summer recess to face another attempt at an Obamacare repeal. Neither were rank-and-file senators. But no senator was actually saying, “I will vote against this bill if it comes to the floor.”

Fast forward to this week and it’s easy to see why Senate Republicans want to give Obamacare repeal a final try. Yes, McCain is a problem, because this bill is, like the July legislation, a GOP-only proposal written outside of the traditional committee process. And he demonstrated in July that he is not afraid to be the deciding vote against an Obamacare repeal.

But McCain has not really given any policy-driven reasons for voting this bill down. And Graham is a very close friend of his. He may still vote yes.

Paul ultimately backed the skinny repeal bill in July despite his early objections, so Republican leaders are probably betting that his threats to vote against this bill are also empty. That’s not an unreasonable assumption.

Collins and Murkowski still sound like “no” votes, and they consistently voted “no” before. But if Collins and Murkowski are the only noes, the Republicans can pass Graham-Cassidy. So look for Paul and McCain to get plenty of calls from the White House and fellow Republicans imploring them to back this legislation, and for the Democrats to back off talking about Medicare-for-all for a bit. In short, the GOP is exactly where it was at the end of July, but with much less time left to get a deal done.

London Brings Out The Best In The NFL’s Dregs

Thursday, 21 September 2017 03:52 pm
[syndicated profile] 538_feed

Posted by Daniel Levitt

The NFL will take over London for the 18th time — and the 11th consecutive year — this weekend when the Baltimore Ravens take on veteran overseas travelers the Jacksonville Jaguars at Wembley Stadium. The game will be the first of four set in England this season, the most that have been played in a calendar year.

For the NFL, the additional game — there have been three in London each of the past three seasons — represents a concerted effort to expand the popularity and global reach of its brand.11 For the British, it’s another chance to watch lousy football.

It’s no secret that the teams that NFL commissioner Roger Goodell has sent have been overwhelmingly bad — and we aren’t just talking about the Jaguars. According to FiveThirtyEight’s pre-game Elo ratings, the harmonic mean of both teams’ ratings — a balanced measure of matchup quality that can better detect when both teams in a game are either good or bad — has been below average in 13 of the 17 games played in London.12 On top of that, all four games to be played in London this year will be below average, according to the teams’ current Elo ratings.
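As a rough illustration, the matchup-quality number in the table below can be reproduced in a couple of lines of Python (the function name here is hypothetical, not part of any FiveThirtyEight code):

```python
def matchup_quality(elo_a, elo_b):
    """Harmonic mean of two Elo ratings.

    Unlike the arithmetic mean, the harmonic mean is dragged
    toward the lower rating, so one strong team can't hide a
    weak opponent.
    """
    return 2 * elo_a * elo_b / (elo_a + elo_b)

# The 2014 Miami (1449) vs. Oakland (1327) game, the worst
# London matchup so far by this measure:
print(round(matchup_quality(1449, 1327)))  # 1385, or 115 points below average
```

For comparison, the arithmetic mean of those two ratings is 1388; the harmonic mean sits slightly lower, and the gap between the two grows as the ratings diverge.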

London NFL games have been consistently below average

The harmonic mean of the Elo ratings of the teams in each matchup compared with 1500, roughly the rating of an average NFL team

YEAR  TEAM           ELO   OPPONENT       ELO   HARMONIC MEAN  VS. AVG.
2014  Miami          1449  Oakland        1327  1385           -115
2015  Buffalo        1512  Jacksonville   1310  1404           -96
2017  Cleveland      1321  Minnesota      1501  1405           -95
2016  Indianapolis   1469  Jacksonville   1350  1407           -93
2010  Denver         1401  San Francisco  1418  1409           -91
2014  Dallas         1557  Jacksonville   1298  1416           -84
2013  San Francisco  1642  Jacksonville   1246  1417           -83
2007  N.Y. Giants    1553  Miami          1358  1449           -51
2013  Pittsburgh     1448  Minnesota      1477  1462           -38
2015  New York Jets  1478  Miami          1449  1463           -37
2017  Baltimore      1539  Jacksonville   1396  1464           -36
2014  Detroit        1541  Atlanta        1405  1470           -30
2017  Arizona        1529  L.A. Rams      1418  1471           -29
2015  Detroit        1432  Kansas City    1514  1472           -28
2016  N.Y. Giants    1466  L.A. Rams      1481  1473           -27
2017  New Orleans    1460  Miami          1519  1489           -11
2009  New England    1630  Tampa Bay      1375  1492           -8
2016  Washington     1509  Cincinnati     1525  1517           +17
2012  New England    1678  St. Louis      1393  1522           +22
2008  San Diego      1600  New Orleans    1470  1532           +32
2011  Chicago        1543  Tampa Bay      1527  1535           +35

All 2017 games are based on Elo ratings before Week 3.

The Jaguars are a big part of this, of course. Jacksonville has played in London four times, and the Elo rating of each of those four Jaguar teams ranks in the bottom five (among all 34 teams). Joining them in that bottom five are the 2014 Oakland Raiders. And it turns out that the Raiders’ game against the Miami Dolphins that year was the worst London matchup so far based on our Elo ratings. That game was so dreary that those Raiders, who fell to 0-4 after losing to Miami, fired their coach, Dennis Allen, not long after their plane touched down in the U.S. Perhaps by no coincidence, the Dolphins coach that year, Joe Philbin, would be fired the next season after starting 1-3. Philbin’s last game would be a loss to the Jets … in London.

But not every game played in London has been between NFL bottom feeders — sometimes a good team makes the trip (and, sure, plays a bottom feeder). The Brits have experienced Tom Brady and the New England Patriots twice, as well as the San Francisco 49ers the season after their latest Super Bowl appearance. But if you remove those three teams, the average London team,13 including this year’s Ravens and Jags, has an Elo rating of 1444. That’s roughly on par with this year’s 0-2 Cincinnati Bengals.

NFL fans will generally tune in regardless of who is playing. So perhaps the NFL’s intention was that the consistently poor quality of the matchups would be offset by competitive, exciting contests. If that’s the case, the plan is generally working.

Blowout or bust

The point differential for regular-season NFL games played in London

YEAR  TEAM             PTS  OPPONENT        PTS  MARGIN
2016  Washington       27   Cincinnati      27   0
2014  Detroit          22   Atlanta         21   1
2016  Indianapolis     27   Jacksonville    30   3
2007  New York Giants  13   Miami           10   3
2015  Buffalo          31   Jacksonville    34   3
2008  San Diego        32   New Orleans     37   5
2011  Chicago          24   Tampa Bay       18   6
2016  New York Giants  17   L.A. Rams       10   7
2013  Pittsburgh       27   Minnesota       34   7
2010  Denver           16   San Francisco   24   8
2015  New York Jets    27   Miami           14   13
2014  Dallas           31   Jacksonville    17   14
2014  Miami            38   Oakland         14   24
2009  New England      35   Tampa Bay       7    28
2013  San Francisco    42   Jacksonville    10   32
2015  Detroit          10   Kansas City     45   35
2012  New England      45   St. Louis Rams  7    38

Source: ESPN Stats & Information Group

Ten of the 17 games — or 59 percent — have been decided by one score. That might not sound so thrilling, but just 35 percent of all NFL games played since 2007 have been decided by 8 points or fewer. One of last year’s London games was so tightly matched, no one won it. (Fortunately for Cincinnati and Washington, they were playing in the one NFL location where fans are content with a tie.)
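
As a rough sanity check on those figures (using only the numbers cited above — 10 one-score games out of 17, against a league-wide rate of about 35 percent), a binomial calculation suggests that run of close games would be fairly unlikely if London matchups behaved like ordinary NFL games:

```python
from math import comb

# Figures from the article: 10 of the 17 London games were decided by
# 8 points or fewer, versus a league-wide rate of about 35 percent.
n, k, p = 17, 10, 0.35

share = k / n  # observed share of one-score games in London

# Probability of seeing 10 or more one-score games in 17 tries
# if London games were no different from the league as a whole.
p_tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"London one-score share: {share:.0%}")
print(f"P(10+ of 17 at a 35% rate): {p_tail:.3f}")
```

With only 17 games the sample is small, so this is suggestive rather than conclusive — but the tail probability comes out under 5 percent.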

Low-quality games usually lead to drops in attendance toward the end of the season. Not in London, though. All but two games have attracted a crowd of more than 80,000, with the highest NFL London crowd at 84,488 — for last year’s tie at Wembley. To put that in context, the average London draw would have been the second-highest home attendance of any team in the league last season (behind only the Dallas Cowboys).

As Goodell continues to push some of his most mediocre teams onto the international scene, it turns out that they’re rewarding fans with some of the league’s most competitive play.

Significant Digits For Thursday, Sept. 21, 2017

Thursday, 21 September 2017 11:47 am
[syndicated profile] 538_feed

Posted by Walt Hickey

You’re reading Significant Digits, a daily digest of the numbers tucked inside the news.

2 holdouts

With Nicaragua reportedly set to join the Paris climate accord — it held out in 2015 because the nation believed the deal didn’t go far enough — there are now only two holdouts from the landmark deal: Syria and the United States, which President Trump has said will pull out of the agreement. [Bloomberg]

AB 485

A California bill awaiting the signature of Gov. Jerry Brown would outlaw puppy mills, banning pet stores from selling cats, dogs and bunnies that did not come from a shelter or rescue. [The New York Times]

1,772 episodes

This is easily the most staggering statistic I have come across while writing this column: There have been 1,772 individual episodes of HGTV’s “House Hunters” since it debuted in 1999. I could watch an episode of “House Hunters” every day for nearly five years without seeing a single repeat. When we’re just a radioactive cinder in the gaze of an expanding sun, whoever or whatever succeeds us will be able to say, “damn … they were good at finding and obtaining houses.” [Vulture]

80,000 311 calls

Hurricane Sandy left an indelible mark on New York City, and the effects of the storm can still be seen and felt years later. More than 36 million calls were placed to NYC’s 311 service from just before Sandy hit in late 2012 through earlier this week. Nearly 80,000 of them were related to the storm. And the tail is super long — 142 such calls were made in 2017 (as of Monday). [FiveThirtyEight]

3.5 million people

Hurricane Maria has left the entire island of Puerto Rico and its 3.5 million residents without power. That’s to say nothing of flooding and other destruction. Maria, now a Category 3 storm, is currently hitting the Dominican Republic. [BBC]

$31.4 million

Russian trade with North Korea more than doubled to $31.4 million in the first quarter of 2017. Reuters found eight North Korean fuel ships that left Russia ostensibly en route to China or South Korea only to change their final destination to North Korea. [Reuters]

Like Significant Digits? Like sports? You’ll love Besides the Points, our new sports newsletter.

If you see a significant digit in the wild, send it to @WaltHickey.

The Media Has A Probability Problem

Thursday, 21 September 2017 09:47 am
[syndicated profile] 538_feed

Posted by Nate Silver

This is the 11th and final article in a series that reviews news coverage of the 2016 general election, explores how Donald Trump won, and considers why his chances were underrated by most of the American media.

Two Saturday nights ago, just as Hurricane Irma had begun its turn toward Florida, the Associated Press sent out a tweet proclaiming that the storm was headed toward St. Petersburg and not its sister city Tampa, just 17 miles to the northeast across Tampa Bay.

Hurricane forecasts have improved greatly over the past few decades, becoming about three times more accurate at predicting landfall locations. But this was a ridiculous, even dangerous tweet: The forecast was nowhere near precise enough to distinguish Tampa from St. Pete. For most of Irma’s existence, the entire Florida peninsula had been included in the National Hurricane Center’s “cone of uncertainty,” which covers two-thirds of possible landfall locations. The slightest change in conditions could have had the storm hitting Florida’s East Coast, its West Coast, or going right up the state’s spine. Moreover, Irma measured hundreds of miles across, so even areas that weren’t directly hit by the eye of the storm could have suffered substantial damage. By Saturday night, the cone of uncertainty had narrowed, but trying to distinguish between St. Petersburg and Tampa was like trying to predict whether 31st Street or 32nd Street would suffer more damage if a nuclear bomb went off in Manhattan.

To its credit, the AP deleted the tweet the next morning. But the episode was emblematic of some of the media’s worst habits when covering hurricanes — and other events that involve interpreting probabilistic forecasts. Before a storm hits, the media demands impossible precision from forecasters, ignoring the uncertainties in the forecast and overhyping certain scenarios (e.g., the storm hitting Miami) at the expense of other, almost-as-likely ones (e.g., the storm hitting Marco Island). Afterward, it casts aspersions on the forecasts unless they happened to exactly match the scenario the media hyped up the most.

Indeed, there’s a fairly widespread perception that meteorologists performed poorly with Irma, having overestimated the threat to some places and underestimated it elsewhere. Even President Trump chimed in to say the storm hadn’t been predicted well, tweeting that the devastation from Irma had been “far greater, at least in certain locations, than anyone thought.” In fact, the Irma forecasts were pretty darn good: Meteorologists correctly anticipated days in advance that the storm would take a sharp right turn at some point while passing by Cuba. The places where Irma made landfall — in the Caribbean and then in Florida — were consistently within the cone of uncertainty. The forecasts weren’t perfect: Irma’s eye wound up passing closer to Tampa than to St. Petersburg after all, for example. But they were about as good as advertised. And they undoubtedly saved a lot of lives by giving people time to evacuate in places like the Florida Keys.

The media keeps misinterpreting data — and then blaming the data

You won’t be surprised to learn that I see a lot of similarities between hurricane forecasting and election forecasting — and between the media’s coverage of Irma and its coverage of the 2016 campaign. In recent elections, the media has often overestimated the precision of polling, cherry-picked data and portrayed elections as sure things when that conclusion very much wasn’t supported by polls or other empirical evidence.

As I’ve documented throughout this series, polls and other data did not support the exceptionally high degree of confidence that news organizations such as The New York Times regularly expressed about Hillary Clinton’s chances. (We’ve been using the Times as our case study throughout this series, both because they’re such an important journalistic institution and because their 2016 coverage had so many problems.) On the contrary, the more carefully one looked at the polling, the more reason there was to think that Clinton might not close the deal. In contrast to President Obama, who overperformed in the Electoral College relative to the popular vote in 2012, Clinton’s coalition (which relied heavily on urban, college-educated voters) was poorly configured for the Electoral College. In contrast to 2012, when hardly any voters were undecided between Obama and Mitt Romney, about 14 percent of voters went into the final week of the 2016 campaign undecided about their vote or saying they planned to vote for a third-party candidate. And in contrast to 2012, when polls were exceptionally stable, they were fairly volatile in 2016, with several swings back and forth between Clinton and Trump — including the final major swing of the campaign (after former FBI Director James Comey’s letter to Congress), which favored Trump.

By Election Day, Clinton simply wasn’t all that much of a favorite; she had about a 70 percent chance of winning according to FiveThirtyEight’s forecast, as compared to 30 percent for Trump. Even a 2- or 3-point polling error in Trump’s favor — about as much as polls had missed on average, historically — would likely be enough to tip the Electoral College to him. While many things about the 2016 election were surprising, the fact that Trump narrowly won14 when polls had him narrowly trailing was an utterly routine and unremarkable occurrence. The outcome was well within the “cone of uncertainty,” so to speak.
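
The margin-to-probability translation in that paragraph can be sketched with a simple Monte Carlo simulation. This is purely illustrative, not FiveThirtyEight’s actual model: assume the final polling margin misses the true result by a normally distributed error with a standard deviation of about 4 points (on the high side historically, reflecting 2016’s many undecided voters):

```python
import random

random.seed(0)

# Illustrative inputs, not FiveThirtyEight's model: a 2-point polling
# lead, with polling error ~ Normal(0, 4 points).
lead, error_sd, trials = 2.0, 4.0, 100_000

# The leader wins whenever the realized margin stays above zero.
wins = sum(random.gauss(lead, error_sd) > 0 for _ in range(trials))
print(f"Win probability for a 2-point leader: {wins / trials:.0%}")
```

Under these assumptions, a 2-point leader wins only about 70 percent of the time — which is why a routine 2- or 3-point polling miss was enough to put Trump over the top.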

So if the polls called for caution rather than confidence, why was the media so sure that Clinton would win? I’ve tried to address that question throughout this series of essays — which we’re finally concluding, much to my editor’s delight.15

Probably the most important problem with 2016 coverage was confirmation bias — coupled with what you might call good old-fashioned liberal media bias. Journalists just didn’t believe that someone like Trump could become president, running a populist and at times also nationalist, racist and misogynistic campaign in a country that had twice elected Obama and whose demographics supposedly favored Democrats. So they cherry-picked their way through the data to support their belief, ignoring evidence — such as Clinton’s poor standing in the Midwest — that didn’t fit the narrative.

But the media’s relatively poor grasp of probability and statistics also played a part: It led them to misinterpret polls and polling-based forecasts that could have served as a reality check against their overconfidence in Clinton.

How a probabilistic election forecast works — and how it can be easy to misinterpret

The idea behind an election forecast like FiveThirtyEight’s is to take polls (“Clinton is ahead by 3 points”) and transform them into probabilities (“She has a 70 percent chance of winning”). I’ve been designing and publishing forecasts like these for 15 years16 in two areas (politics and sports) that receive widespread public attention. And I’ve found there are basically two ways that things can go wrong.

First, there are errors of analysis. As an example, if you had a model of last year’s election that concluded that Clinton had a 95 or 99 percent chance of winning, you committed an analytical error.17 Models that expressed that much confidence in her chances had a host of technical flaws, such as ignoring the correlations in outcomes between states.18
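
Why does ignoring correlation between states inflate confidence? A toy simulation (my own illustration, not any published model) makes the point: if one candidate leads narrowly in three must-win states, treating the three outcomes as independent overstates her chances, because a single national polling miss tends to move all three states at once:

```python
import random

random.seed(1)

# Toy setup: the Democrat leads by 2 points in each of three swing
# states and needs to win at least two of them.
LEAD, NATIONAL_SD, STATE_SD, TRIALS = 2.0, 3.0, 2.0, 100_000

def win_prob(correlated: bool) -> float:
    """Chance of winning >= 2 of 3 states, with or without a shared
    national polling error. Per-state error size is the same either way."""
    wins = 0
    for _ in range(TRIALS):
        shared = random.gauss(0, NATIONAL_SD) if correlated else 0.0
        sd = STATE_SD if correlated else (NATIONAL_SD**2 + STATE_SD**2) ** 0.5
        margins = [LEAD + shared + random.gauss(0, sd) for _ in range(3)]
        wins += sum(m > 0 for m in margins) >= 2
    return wins / TRIALS

print(f"States treated as independent: {win_prob(False):.0%}")
print(f"States sharing national error: {win_prob(True):.0%}")
```

In this toy setup, the independence assumption yields roughly an 80 percent chance versus roughly 73 percent with correlated errors; with 50 states and a bigger apparent lead, the same mistake can turn 70 percent into 95 or 99.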

But while statistical modeling may not always hit the mark, people’s subjective estimates of how polls translate into probabilities are usually even worse. Given a complex set of polling data — say, the Democrat is ahead by 3 points in Pennsylvania and Michigan, tied in Florida and North Carolina, and down by 2 points in Ohio — it’s far from obvious how to figure out the candidate’s chances of winning the Electoral College. Ad hoc attempts to do so can lead to problematic coverage like this article that appeared in The New York Times last Oct. 31, three days after Comey had sent his letter to Congress:

Mrs. Clinton’s lead over Mr. Trump appears to have contracted modestly, but not enough to threaten her advantage over all or to make the electoral math less forbidding for Mr. Trump, Republicans and Democrats said. […]

The loss of a few percentage points from Mrs. Clinton’s lead, and perhaps a state or two from the battleground column, would deny Democrats a possible landslide and likely give her a decisive but not overpowering victory, much like the one President Obama earned in 2012. […]

You’ll read lots of clips like this during an election campaign, full of claims about the “electoral math,” and they often don’t hold up to scrutiny. In this case, the article’s assertion that the loss of “a few percentage points” wouldn’t hurt Clinton’s chances of victory was wrong, and not just in hindsight; instead, the Comey letter made Clinton much more vulnerable, roughly doubling Trump’s probability of winning.

But even if you get the modeling right, there’s another whole set of problems to think about: errors of interpretation and communication. These can run in several different directions. Consumers can misunderstand the forecasts, since probabilities are famously open to misinterpretation. But people making the forecasts can also do a poor job of communicating the uncertainties involved. For example, although weather forecasters are generally quite good at describing uncertainty, the cone of uncertainty is potentially problematic because viewers might not realize it represents only two-thirds of possible landfall locations.

Intermediaries — other people describing a forecast on your behalf — can also be a problem. Over the years, we’ve had many fights with well-meaning TV producers about how to represent FiveThirtyEight’s probabilistic forecasts on air. (We don’t want a state where the Democrat has only a 51 percent chance to win to be colored in solid blue on their map, for instance.) And critics of statistical forecasts can make communication harder by passing along their own misunderstandings to their readers. After the election, for instance, The New York Times’ media columnist bashed the newspaper’s Upshot model (which had estimated Clinton’s chances at 85 percent) and others like it for projecting “a relatively easy victory for Hillary Clinton with all the certainty of a calculus solution.” That’s pretty much exactly the wrong way to describe such a forecast, since a probabilistic forecast is an expression of uncertainty. If a model gives a candidate a 15 percent chance, you’d expect that candidate to win about one election in every six or seven tries. You wouldn’t expect the fundamental theorem of calculus to be wrong … ever.

I don’t think we should be forgiving of innumeracy like this when it comes from prominent, experienced journalists. But when it comes to the general public, that’s a different story — and there are plenty of things for FiveThirtyEight and other forecasters to think about in terms of our communication strategies. There are many potential avenues for confusion. People associate numbers with precision, so using numbers to express uncertainty in the form of probabilities might not be intuitive. (Listing a decimal place in our forecast, as FiveThirtyEight historically has done — e.g., 28.6 percent chance rather than 29 percent or 30 — probably doesn’t help in this regard.) Also, both probabilities and polls are usually listed as percentages, so people can confuse one for the other — they might mistake a forecast showing Clinton with a 70 percent chance of winning as meaning she has a 70-30 polling lead over Trump, which would put her on her way to a historic, 40-point blowout.19

What can also get lost is that election forecasts — like hurricane forecasts — represent a continuous range of outcomes, none of which is likely to be exactly right. The following diagram is an illustration that we’ve used before to show uncertainty in the FiveThirtyEight forecast. It’s a simplification — showing a distribution for the national popular vote only, along with which candidate wins the Electoral College.20 Still, the diagram demonstrates several important concepts for interpreting polls and forecasts:

  • First, as I mentioned, no exact outcome is all that likely. If you rounded the popular vote to the nearest whole number, the most likely outcome was Clinton winning by 4 percentage points. Nonetheless, the chance that she’d win by exactly 4 points21 was only about 10 percent. “Calling” every state correctly in the Electoral College is even harder. FiveThirtyEight’s model did it in 2012 — in a lucky break22 that may have given people a false impression about how easy it is to forecast elections — but we estimated that the chances of having a perfect forecast again in 2016 were only about 2 percent. Thus, properly measuring the uncertainty is at least as important a part of the forecast as plotting the single most likely course. You’re almost always going to get something “wrong” — so the question is whether you can distinguish the relatively more likely upsets from the relatively less likely ones.
  • Second, the distribution of possible outcomes was fairly wide last year. The distribution is based on how accurate polls of U.S. presidential elections have been since 1972, accounting for the number of undecideds and the number of days until the election. The distribution was wider than usual because there were a lot of undecided voters — and more undecided voters mean more uncertainty. Even in a normal year, however, the polls aren’t quite as precise as most people assume.
  • Third, the forecast is continuous, rather than binary. When evaluating a poll or a polling-based forecast, you should look at the margin between the poll and the actual result and not just who won and lost. If a poll showed the Democrat winning by 1 point and the Republican won by 1 point instead, the poll did a better job than if the Democrat had won by 9 points (even though the poll would have “called” the outcome correctly in the latter case). By this measure, polls in this year’s French presidential election — which Emmanuel Macron was predicted to win by 22 points but actually won by 32 points — were much worse than polls of the 2016 U.S. election.
  • Finally, the actual outcome in last year’s election was right in the thick of the probability distribution, not out toward the tails. The popular vote was obviously pretty close to what the polls estimated it would be. It also wasn’t that much of a surprise that Trump won the Electoral College, given where the popular vote wound up. (Our forecast gave Trump a better than a 25 percent chance of winning the Electoral College conditional on losing the popular vote by 2 points,23 an indication of his demographic advantages in the swing states.) One might dare even say that the result last year was relatively predictable, given the range of possible outcomes.
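
The first bullet’s arithmetic is easy to verify with a stylized version of the forecast distribution. The numbers here are illustrative stand-ins (a normal distribution centered on Clinton +4 with a 4-point standard deviation), not FiveThirtyEight’s actual distribution:

```python
from math import erf, sqrt

def norm_cdf(x: float, mu: float, sigma: float) -> float:
    """Cumulative probability of a Normal(mu, sigma) at x."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Stand-in forecast: popular-vote margin ~ Normal(+4, 4), in points.
MU, SIGMA = 4.0, 4.0

# Chance the margin rounds to exactly Clinton +4 (i.e., falls in
# [3.5, 4.5)) -- the single most likely whole-number outcome...
p_exact = norm_cdf(4.5, MU, SIGMA) - norm_cdf(3.5, MU, SIGMA)
# ...versus the chance she wins the popular vote at all.
p_win = 1 - norm_cdf(0, MU, SIGMA)

print(f"P(margin rounds to +4): {p_exact:.0%}")
print(f"P(wins popular vote):   {p_win:.0%}")
```

Even though +4 is the single most likely rounded margin, it carries only about a 10 percent chance under these assumptions — which is the sense in which no exact outcome is ever likely.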

The press presumed that Clinton would win, but the public saw a close race

I’ve often heard it asserted that the widespread presumption of an inevitable Clinton victory was itself a problem for her campaign24 — Clinton has even made a version of this claim herself. So we have to ask: Could this misreading of the polls — and polling-based forecasts — actually have affected the election’s outcome?

It depends on whether you’re talking about how the media and other political elites read the polls — and how that influenced their behavior — or how the general public did. Regular voters, it turns out, were not especially confident about Clinton’s chances last year. For instance, in the final edition of the USC Dornsife/Los Angeles Times tracking poll, which asked voters to guess the probability of Trump and Clinton winning the election, the average voter gave Clinton only a 53 percent chance of winning and gave Trump a 43 percent chance — so while respondents slightly favored Clinton, it wasn’t with much confidence at all.

The American National Election Studies also asked voters to predict the most likely winner of the race, as it’s been doing since 1952. It found that 61 percent of voters expected Clinton to win, as compared to 33 percent for Trump.25 This proportion is about the same as other years — such as 2004 — in which polls showed a fairly close race, although one candidate (in that case, George W. Bush) was usually ahead. While, unlike the LA Times poll, the ANES did not ask voters to estimate the probability of Clinton winning, it did ask voters a follow-up question about whether they expected the election to be close or thought one of the candidates would “win by quite a bit.” Only 20 percent of respondents predicted a Clinton landslide, and only 7 percent expected a Trump landslide. Instead, almost three-quarters of voters correctly predicted a close outcome.

Voters weren’t overly bullish on Clinton’s chances

Confidence in each party’s presidential candidate in the months before elections

YEAR   DEM.     REP.
2016   61%      33% ✓
2012   64% ✓
2008   59% ✓
2004   29%      62% ✓
2000   47%      44% ✓
1996   86% ✓
1992   56% ✓
1988   23%      63% ✓
1984   12%      81% ✓
1980   46%      38% ✓
1976   43% ✓
1972   7%       83% ✓
1968   22%      57% ✓
1964   81% ✓
1960   33% ✓
1956   19%      68% ✓
1952   35%      43% ✓

✓ = eventual winner

Source: American National Election Studies

So be wary if you hear people within the media bubble26 assert that “everyone” presumed Clinton was sure to win. Instead, that presumption reflected elite groupthink — and it came despite the polls as much as because of the polls. There was a bewilderingly large array of polling data during last year’s campaign, and it didn’t always tell an obvious story. During the final week of the campaign, Clinton was ahead in most polls of most swing states, but with quite a few exceptions27 — and many of Clinton’s leads were within the margin of error and had been fading during the final 10 days of the campaign. The public took in this information and saw Clinton as the favorite, but they didn’t expect a blowout and viewed the outcome as highly uncertain. Our model read it the same way. The media looked at the same ambiguous data and saw what they wanted in it, using it to confirm their presumption that Trump couldn’t win.

News organizations learned the wrong lessons from 2012

During the 2012 election, FiveThirtyEight’s forecast consistently gave Obama better odds of winning re-election than the conventional wisdom did. Somehow in the midst of it, I became an avatar for projecting certainty in the face of doubt. But this role was always miscast — it’s almost the opposite of what I hope readers take away from FiveThirtyEight’s work. In addition to making my own forecasts, I’ve spent a lot of my life studying probability and uncertainty. Cover these topics for long enough and you’ll come to a fairly clear conclusion: When it comes to making predictions, the world usually needs less certainty, not more.

A major takeaway from my book and from other people’s research on prediction is that most experts — including most journalists — make overconfident forecasts. (Weather forecasters are an important exception.) Events that experts claim to be nearly certain (say, a 95 percent probability) are often merely probable instead (the real probability is, say, 70 percent). And events they deem to be nearly impossible occur with some frequency. Another, related type of bias is that experts don’t change their minds quickly enough in the face of new information,28 sticking stubbornly to their previous beliefs even after the evidence has begun to mount against them.
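
Expert overconfidence isn’t just an aesthetic complaint; it’s measurable. One standard tool is the Brier score (the mean squared error between stated probabilities and what actually happened), and a toy simulation shows that an expert who says “95 percent” about events that really occur 70 percent of the time scores strictly worse than one who honestly says “70 percent”:

```python
import random

random.seed(2)

# Toy calibration check: simulate events that occur 70 percent of the
# time, then score two forecasters against those outcomes.
TRIALS = 100_000
outcomes = [random.random() < 0.70 for _ in range(TRIALS)]

def brier(forecast: float) -> float:
    # Mean squared error between the stated probability and the outcome
    # (lower is better).
    return sum((forecast - o) ** 2 for o in outcomes) / TRIALS

print(f"Brier score, overconfident (95%): {brier(0.95):.3f}")
print(f"Brier score, honest (70%):        {brier(0.70):.3f}")
```

Lower is better: the honest forecaster scores about 0.21 here versus roughly 0.27 for the overconfident one.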

Media coverage of major elections had long been an exception to this rule of expert overconfidence. For a variety of reasons — no doubt including the desire to inject drama into boring races — news coverage tended to overplay the underdog’s chances in presidential elections and to exaggerate swings in the polls. Even in 1984, when Ronald Reagan led Walter Mondale by 15 to 20 percentage points in the stretch run of the campaign, The New York Times somewhat credulously reported on Mondale’s enthusiastic crowds and talked up the possibility of a Dewey-defeats-Truman upset. The 2012 election — although it was a much closer race than 1984 — was another such example: Reporting focused too much on national polls and not enough on Obama’s Electoral College advantage, and thus portrayed the race as a “toss-up” when in reality Obama was a reasonably clear favorite. (FiveThirtyEight’s forecast gave Obama about a 90 percent chance of winning re-election on election morning.)

Since then, the pendulum has swung too far in the other direction, with the media often expressing more certainty about the outcome than is justified based on the polls. In addition to lowballing the chances for Trump, the media also badly underestimated the probability that the U.K. would leave the European Union in 2016, and that this year’s U.K. general election would result in a hung parliament, for instance. There are still some exceptions — the conventional wisdom probably overestimated Marine Le Pen’s chances in France. Nonetheless, there’s been a noticeable shift from the way elections used to be covered, and it’s worth pausing to consider why that is.

One explanation is that news organizations learned the wrong lessons from 2012. The “moral parable” of 2012, as Scott Alexander wrote, is that Romney was “the arrogant fool who said that all the evidence against him was wrong, but got his comeuppance.” Put another way, the lesson of 2012 was to “trust the data,” especially the polls.

FiveThirtyEight and I became emblems of that narrative, even though we sometimes tried to resist it. What I think people forget is that the confidence our model expressed in Obama’s chances in 2012 was contingent upon circumstances peculiar to 2012 — namely that Obama had a much more robust position in the Electoral College than national polls implied, and that there were very few undecided voters, reducing uncertainty. The 2012 election may have superficially looked like a toss-up, but Obama was actually a reasonably clear favorite. Pretty much the opposite was true in 2016 — the more carefully one evaluated the polls, the more uncertain the outcome of the Electoral College appeared. The real lesson of 2012 wasn’t “always trust the polls” so much as “be rigorous in your evaluation of the polls, because your superficial impression of them can be misleading.”

Another issue is that uncertainty is a tough sell in a competitive news environment. “The favorite is indeed favored, just not by as much as everyone thinks once you look at the data more carefully, so bet on the favorite at even money but the underdog against the point spread” isn’t that complicated a story, but it can be a difficult message to get across on TV in the midst of an election campaign when everyone has the attention span of a sugar-high 4-year-old. It can be even harder on social media, where platforms like Facebook reward simplistic coverage that confirms people’s biases.

Journalists should be wary of ‘the narrative’ and more transparent about their provisional understanding of developing stories

But every news organization faced competitive pressure in covering last year’s election — and only some of them screwed up the story. Editorial culture mattered a lot. In general, the problems were worse at The New York Times and other organizations that (as Michael Cieply, a former Times editor, put it) heavily emphasized “the narrative” of the campaign and encouraged reporters to “generate stories that fit the pre-designated line.”

If you re-read the Times’ general election coverage from the conventions onward,29 you’ll be struck by how consistent it was from start to finish. Although the polls were fairly volatile in 2016, you can’t really distinguish the periods when Clinton had a clear advantage from those when things were pretty tight. Instead, the narrative was consistent: Clinton was a deeply flawed politician, the “worst candidate Democrats could have run,” cast in “shadows” and “doubts” because of her ethical lapses. However, she was almost certain to win because Trump appealed to too narrow a range of demographic groups and ran an unsophisticated campaign, whereas Clinton’s diverse coalition and precise voter-targeting efforts gave her an inherent advantage in the Electoral College.

It was a consistent story, but it was consistently wrong.

One can understand why news organizations find “the narrative” so tempting. The world is a complicated place, and journalists are expected to write authoritatively about it under deadline pressure. There’s a management consulting adage that says when creating a product, you can pick any two of these three objectives: 1. fast, 2. good and 3. cheap. You can never have all three at once. The equivalent in journalism is that a story can be 1. fast, 2. interesting and/or 3. true — two out of the three — but it’s hard for it to be all three at the same time.

Deciding on the narrative ahead of time seems to provide a way out of the dilemma. Pre-writing substantial portions of the story — or at least, having a pretty good idea of what you’re going to say — allows it to be turned around more quickly. And narratives are all about wrapping the story up in a neat-looking package and telling readers “what it all means,” so the story is usually engaging and has the appearance of veracity.

The problem is that you’re potentially sacrificing No. 3, “true.” By bending the facts to fit your template, you run the risk of getting the story completely wrong. To make matters worse, most people — including most reporters and editors (also: including me) — have a strong tendency toward confirmation bias. Presented with a complicated set of facts, it takes a lot of work for most of us not to connect the dots in a way that confirms our prejudices. An editorial culture that emphasizes “the narrative” indulges these bad habits rather than resists them.

Instead, news organizations reporting under deadline pressure need to be more comfortable with a world in which our understanding of developing stories is provisional and probabilistic — and will frequently turn out to be wrong. FiveThirtyEight’s philosophy is basically that the scientific method, with its emphasis on verifying hypotheses through rigorous analysis of data, can serve as a model for journalism. The reason is not because the world is highly predictable or because data can solve every problem, but because human judgment is more fallible than most people realize — and being more disciplined and rigorous in your approach can give you a fighting chance of getting the story right. The world isn’t one where things always turn out exactly as we want them to or expect them to. But it’s the world we live in.

CORRECTION (Sept. 21, 2:40 p.m.): A previous version of footnote No. 10 mistakenly referred to the Electoral College in place of the national popular vote when discussing Trump’s chances of winning the election. The article has been updated.
