Last week, Central Track published a list of the 50 Best Burgers in Dallas. The post went viral via Twitter and Facebook, and everyone was touting its sincerity. Sweetness. I bookmarked it for later.
It’s a genius idea to make a list from everyone else’s list… as long as you do it right. When I got a good look at the list on Friday, something seemed off about it, but I couldn’t put my finger on what was what. (Could’ve been that my brain was still lost in July 4 or something.) I’ll admit I was a lazy sonovagun on Friday. I couldn’t deduce how Central Track assigned points to each burger joint, so I gave up after thirty minutes. I managed to send one email out to Central Track before I called it quits. No word yet on how the team did it. So, smart SideDishers, let’s see if we can solve this mystery together. How did Central Track come up with a list of the 50 Best Burgers in Dallas? Was it arbitrary? Or was it scientific? I lean toward the former.
Exhibit A: A vague explanation of the vague methodology at the top of the article.
“Over the past few weeks, we’ve sought out as many of these top burger lists and “Best Of” honors in Dallas as we could find and started compiling them into a weighted list, assigning more value to the burger joints at the top of each list than the ones in the middle of the pack. And more to the middle-of-the-pack ones than the bottom-dwellers. And so on, and so on.
Our opinion on the matter isn’t included at all in this equation. Sure, we love a delectable, juicy burger as much as everyone else. But, considering the sheer number of lists already put out by the likes of D Magazine, the Fort Worth Star-Telegram, the Dallas Observer and others (14 lists in total made it into our formula, and you can find links to each of them at the bottom of this post), we just didn’t feel it necessary to try to get our own voice to rise above the chorus. Instead, we wanted to come up with a consensus — or as close to one as we could.”
Burning Question #1: How were points assigned to burger joints in lists that are unranked? Six out of the 14 source lists don’t decrescendo from “best to worst” burgers. This includes the Dallas Observer’s, D Magazine’s, CultureMap’s, CraveDFW’s, Eater’s, and Metro’s.
Burning Question #2: How did Central Track account for old lists that left out new(er) joints, like Hopdoddy? The Texas Monthly list from April 2009 doesn’t include Maple & Motor and Hopdoddy because they didn’t exist in Dallas back then. This means new burger joints are at a disadvantage and naturally assigned fewer points. (Other lists that predate Hopdoddy include the Dallas Observer’s, D Magazine’s, and the Fort Worth Star-Telegram’s.)
MOST Burning Question #3: HOW DID CENTRAL TRACK COME UP WITH THIS LIST?? What was the experimental setup??
I need an explanation, please.
[Update 3:46 p.m.] Pete Freedman has sent me all the info I needed. If you want to see the actual spreadsheet, you can email him at [email protected]. I’ve been emailing him back and forth for the past couple of hours to get to the bottom of this list, and here’s the exchange:
Rules for Unordered Lists:
- For lists that mention five or fewer places (the most selective): each burger place gets 30 points.
- For lists that mention 6-10 places (somewhat selective): each burger place gets 25 points.
- For lists that mention 11-15 places (less selective): each burger place gets 20 points.
Rules for Ordered Lists:
- The number-one pick got 32 points (sized to our longest list), number two got 31, and so on down to the last entry on the list.
- Any honorable mentions left after the numbered list all got the same amount, capped at 20 points. If the numbered portion of the list ran longer than 20 entries, the honorable mentions instead got the next score down from the last numbered entry.
- Exceptions and clarifications:
- D Magazine’s list: The selections are treated like an unordered list. The Best D pick got 30 points and the Reader’s Choice pick also got 30 points.
- Texas Monthly: Since this list covers all of Texas and we only looked at DFW locations, it was treated as an ordered list of 7 (the number of DFW locations).
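For concreteness, here is how I read those rules, sketched in Python. This is my own illustration, not Central Track’s actual spreadsheet, and the honorable-mention function is my best reading of an ambiguous rule:

```python
# My own sketch of the scoring rules above -- not Central Track's actual
# spreadsheet. The honorable-mention logic is my best reading of the rule.

def unordered_points(list_size):
    """Unranked lists: every place gets the same score, and smaller
    (more selective) lists are worth more per mention."""
    if list_size <= 5:
        return 30
    if list_size <= 10:
        return 25
    return 20  # lists of 11-15 places

def ordered_points(rank):
    """Ranked lists: #1 gets 32 points (sized to the longest source
    list), #2 gets 31, and so on down the list."""
    return 32 - (rank - 1)

def honorable_mention_points(numbered_length):
    """Honorable mentions all share one score, capped at 20; for long
    numbered lists it's the next score down from the last numbered entry."""
    return min(20, ordered_points(numbered_length) - 1)

print(unordered_points(5))           # 30
print(ordered_points(1))             # 32
print(honorable_mention_points(10))  # 20 (the cap kicks in)
```

Even written out this plainly, the constants 30/25/20 and the 32-point ceiling have to be picked by hand, which is the heart of my complaint below.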
My response to his system:
Thanks for giving me access to the spreadsheet.
I’ve looked it over and I still have two main issues with the methodology, even though I do think it is close.
1. The arbitrary point system. Why 30 points for the most exclusive lists, then 25 and 20? The ordered scoring is just as arbitrary. Why should the #1 on a 32-item list be worth the same as the #1 on a 10-item list?
2. The point system is not normalized. Normalizing the points across the ordered lists would make the aggregate list more meaningful, instead of tying fixed point values to absolute positions. You would also normalize the unordered lists by list size, so every source list carries the same weight.
- For example, take two separate lists of 10 items each, one ranked and one unranked. The ranked list would give a different score to each item, but the sum of its scores would still equal the sum of the unranked list’s scores.
Does that make sense? It’d be interesting to see how the list would change if you normalized the point system.
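To make that suggestion concrete, here is a minimal sketch of the normalization I mean. The function names and the 100-point total per list are my own illustrative choices:

```python
# A minimal sketch of the normalization I'm suggesting. The 100-point
# total per list is an arbitrary choice of mine for illustration.

def ranked_scores(n, total=100.0):
    """Ranked list of n items: descending raw scores n, n-1, ..., 1,
    rescaled so the whole list sums to `total`."""
    raw = list(range(n, 0, -1))
    scale = total / sum(raw)
    return [r * scale for r in raw]

def unranked_scores(n, total=100.0):
    """Unranked list of n items: equal shares of the same total."""
    return [total / n] * n

# Two 10-item lists, one ranked and one unranked: items get different
# individual scores, but each list contributes the same total weight.
ranked = ranked_scores(10)
unranked = unranked_scores(10)
print(abs(sum(ranked) - sum(unranked)) < 1e-9)  # True
```

The design point is that a source list’s influence on the aggregate depends only on being a list, not on how long it happens to be or whether its editors bothered to rank it.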
Pete responded and said that his numbers are, in fact, normalized:
A publication that chooses not to rank its burger selections is, in effect, saying they’re all “the best.” Given that a top place in a list of Top 30 would get 32 points, the 30 points for an unranked Top 5 actually IS a pretty spot-on average. Same with unranked Top 10 lists getting 25 points each, etc.
I guess my point is that we very much did the math here. It’s not some arbitrary system. Which answers your main question, no? Is the rest not semantics? This is just how we figured was as fair as possible to do it, given the varied source material. And the math checks out with the formula used, does it not?
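Pete’s averaging argument is easy to check with the numbers from the exchange. This arithmetic is mine, not his spreadsheet’s:

```python
# My own arithmetic check of Pete's averaging argument: an ordered list
# starts at 32 points and counts down, so what do its top n spots
# average, and how does that compare to the flat unranked scores?

def ordered_average(n, top_score=32):
    """Mean score an ordered list assigns to its top n spots."""
    scores = [top_score - i for i in range(n)]
    return sum(scores) / n

print(ordered_average(5))   # 30.0 -- exactly the 30 given to unranked Top 5s
print(ordered_average(10))  # 27.5 -- vs. the 25 given to unranked Top 10s
```

The Top 5 figure lines up exactly, which supports his “spot-on” claim; the Top 10 figure is off by 2.5 points per burger, which is roughly where our disagreement lives.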
It’s fair to say that we have agreed to disagree. And now my brain hurts, because I’ve done more math today than I have in the last four years. Burgers, man. Gotta count ’em all.