Marcin Sawicki
Poland Katowice

I have encountered something that's strange enough to warrant flagging it for attention here - or at least I think so.
People rate games. There is an average of these ratings available (Avg Rating), as well as a rating weighted a bit towards the middle, so that new entries (without many votes) don't get unreasonably high (Geek Rating).
Now, that weighting is supposed to have a smaller effect when there are more votes, right? At least that's how I understand it... Which is why the following numbers seem strange:
Automobile: votes: 4375, average: 7.40, Geek Rating: 7.215
Neuroshima Hex: votes: 7795, average: 7.44, Geek Rating: 7.211
Why did MORE votes with a HIGHER average give a LOWER Geek Rating?! I'd expect that if both the number of votes and their average are higher, the Geek Rating would also be higher (or at least not lower, for small differences). (I chose these games because there's a large difference in the number of votes, but the same behaviour can be seen in other pairs of games too.)
link (actual positions and ratings may have changed of course): https://www.boardgamegeek.com/strategygames/browse/boardgame...

Chip Crawford
United States Aubrey Texas

When you stare into the geek rating abyss...the geek rating abyss stares back at you.
It's best not to invest much of your precious sanity into contemplation of that subject. One can go mad rather quickly.

Maarten D. de Jong
Netherlands Zaandam

There's also a shill filter active whose workings remain a closely guarded secret; and the last time I ran some calculations, I concluded that the number of Bayesian votes seemed to differ depending on how many normal votes a game had attracted. This seems to be the case with these two titles. Assuming that the number of shills is small for either game (meaning that the raw average isn't distorted much by them), you can calculate the number of Bayesian votes b by solving for b in
4375 * 7.40 + b * 5.5 = (4375 + b) * 7.215
which gives b ≈ 470, sufficiently close to the well-rounded number 500. Plugging in the numbers for the other game results in the equation
7795 * 7.44 + b * 5.5 = (7795 + b) * 7.211
with b ≈ 1040, close to 1000. So either the number of Bayesian votes has been turned into something with multiple steps (as I've suspected for a while now), or the number of shills is huge (which raises questions all its own). It's fairly simple to run the top-something-thousand through a spreadsheet or matrix manipulation program to see if there's a pattern, but frankly I am still hoping that the administration just tells us what is done to those data. Even if in the end it still means little.
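The algebra above can be checked with a short script. This is only a sketch of the implied model: it assumes the Geek Rating is a weighted average of real votes and b dummy votes pinned at a prior mean of 5.5, as in the equations above; the function name is my own.

```python
def implied_dummy_votes(n_votes, avg, geek, prior=5.5):
    """Solve  n_votes*avg + b*prior = (n_votes + b)*geek  for b,
    the implied number of Bayesian dummy votes pulled toward the prior."""
    # Rearranged: b = n_votes * (avg - geek) / (geek - prior)
    return n_votes * (avg - geek) / (geek - prior)

print(round(implied_dummy_votes(4375, 7.40, 7.215)))  # Automobile: 472
print(round(implied_dummy_votes(7795, 7.44, 7.211)))  # Neuroshima Hex: 1043
```

The two results differ by more than a factor of two, which is what suggests the dummy-vote count is not a single fixed constant.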

Tomello Visello
United States Reston Virginia

There is a very straightforward mathematical reason for this. It is known. It just is not permitted for us to know it.
And trying to learn more than you are permitted to know will only lead to time wasted that could instead have been profitably spent playing games.

Osprey
United States West Virginia
Errg...Argh..

ChipChuck wrote: When you stare into the geek rating abyss...the geek rating abyss stares back at you. It's best not to invest much of your precious sanity into contemplation of that subject. One can go mad rather quickly.
Yes, very similar to looking into a Palantir.

Marcin Sawicki
Poland Katowice

Wow, those are quick answers!
Thank you all for sharing your insights. Also, some say that Palantirs have their uses, too...

Marcin Sawicki
Poland Katowice

cymric wrote: I am still hoping that the administration just tells us what is done to those data. Even if in the end it still means little. Yes, that would be best. Of course there would be LOTS of discussion about the algorithm and how it should be modified, but at least it would be based on facts and not hypotheses.

Maarten D. de Jong
Netherlands Zaandam

Zero_1627 wrote: yes, that would be best. Of course there would be LOTS of discussion about the algorithm and how it should be modified, but at least they would be based on facts and not hypothesis. The necessary modifications are already agreed upon (at least by those who know about these things) - it's called pairwise comparison - and some years ago there were even some preliminary calculations of how the ranks would look if that method were instated. The advantage is that there is solid maths underneath it all, with popularity a more or less controllable parameter instead of the shadowy fudge that it is now; the disadvantage is that it still won't do a bloody thing for the question that matters: given a game with rank x, will it be interesting for me? For that, you still need Geekbuddies; or rather, Geekbuddies are a decent first-order and low-cost solution to that problem. At this moment we're really waiting for Aldie and Dan to get a move on with the massive job of rewriting the code so that such novel features can be implemented in the first place.
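For the curious, the pairwise-comparison idea can be sketched with the standard Bradley-Terry model, fitted by the usual iterative (MM) update. The win counts below are hypothetical; this is not BGG's actual algorithm, just an illustration of the "solid maths" involved.

```python
def bradley_terry(wins, iters=200):
    """Estimate a strength for each item from pairwise win counts.
    wins[i][j] = number of raters who preferred item i over item j."""
    n = len(wins)
    strength = [1.0] * n
    for _ in range(iters):
        for i in range(n):
            total_wins = sum(wins[i][j] for j in range(n) if j != i)
            # Standard MM denominator: comparisons weighted by current strengths.
            denom = sum((wins[i][j] + wins[j][i]) / (strength[i] + strength[j])
                        for j in range(n) if j != i)
            if denom:
                strength[i] = total_wins / denom
        s = sum(strength)
        strength = [x / s for x in strength]  # normalise to sum to 1
    return strength

# Three games; game 0 is preferred most often in head-to-head comparisons.
wins = [[0, 8, 6],
        [2, 0, 5],
        [4, 5, 0]]
print(bradley_terry(wins))  # strengths ranked: game 0 > game 2 > game 1
```

The appeal is exactly what's said above: the strengths have a clear probabilistic meaning (the modelled chance that one game beats another in a head-to-head preference), rather than a fudge factor.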

Marcin Sawicki
Poland Katowice

cymric wrote: the disadvantage is that it still won't do a bloody thing for the question that matters: Given a game with rank x, will it be interesting for me? True, especially as a game must have a few years under its belt for the votes to really mature (and not just reflect first impressions). But if you know what kind of games you like, a high standing plus a quick reading of the description can be quite a good initial filter. If your interest is attracted, there's no option but to read (or watch) a serious review, or actually play/watch the game yourself.
cymric wrote: For that, you still need Geekbuddies; or rather, Geekbuddies are a decent firstorder and lowcost solution to that problem. Geekbuddies, or just the local gaming community; in my experience, the chance that _someone_ already has the game you're interested in and is willing to bring it to a meeting (perhaps even explain it) is high.
EDIT: perhaps showing standing among comparable games - not just by broad 'category', but also by things like mechanics, theme and playing time (as optional filters) - would be more informative. But it would not be perfect; for that, one needs to actually play (thankfully, I think).
[EDIT] (another one...) This is a general problem of ratings, not just this one. An answer I thought about (perhaps similar to the feature you commented on above?) is: find someone with similar preferences (i.e. whose ratings of games are similar to yours, at least in a given category... which is tricky) who has already rated a game you're interested in. Their rating should be quite informative to you - way more so than the average. Still, I find well-done reviews quite informative; even if I disagree with the reviewer's opinion, a good review indicates why the opinion is what it is, and that gives me information. Of course, reading/watching a review is a much more focused commitment than just looking at a table of ratings... (end of edits, I promise!)
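The "find someone with similar preferences" idea above can be sketched in a few lines: pick the rater whose scores on the games you have both rated are closest to yours, then read their rating of the game you haven't played. All names and ratings below are made up for illustration.

```python
def closest_rater(my_ratings, others):
    """Return the name of the rater whose overlapping ratings are
    closest to mine (mean squared difference on shared games)."""
    def distance(theirs):
        shared = set(my_ratings) & set(theirs)
        if not shared:
            return float("inf")  # no overlap: can't compare
        return sum((my_ratings[g] - theirs[g]) ** 2 for g in shared) / len(shared)
    return min(others, key=lambda name: distance(others[name]))

me = {"Agricola": 9, "Catan": 6, "Dominion": 8}
others = {
    "anna": {"Agricola": 9, "Catan": 5, "Dominion": 8, "Automobile": 8},
    "bert": {"Agricola": 4, "Catan": 9, "Dominion": 3, "Automobile": 5},
}
buddy = closest_rater(me, others)
print(buddy, others[buddy]["Automobile"])  # anna 8
```

The tricky part mentioned above shows up immediately: the result depends entirely on which games overlap, and restricting the comparison to a single category shrinks that overlap further.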

Marcin Sawicki
Poland Katowice

I promised not to edit any more, so here's another post.
I see that many web-based game shops include additional information about games. These typically include game length, influence of luck, rules complexity level and interaction level, and I find them quite informative.
[EDIT] Of course BGG provides additional ratings too (language dependence, suggested player ages, game mechanics), and I actually use them quite often. But still, the above may be more important if one is fishing for a potentially interesting game - e.g. someone looking for a family (or light evening) game would probably discard complex games, while a player who enjoys calculation and far-reaching decisions would probably discard games where luck plays an important role.
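As a toy illustration of that kind of filtering: with such attributes attached to each game, the "family game" shortlist is a one-line filter. The attribute names, scales and values here are entirely made up.

```python
# Hypothetical catalogue: luck and complexity on a 1-5 scale, length in minutes.
games = [
    {"name": "Automobile",     "luck": 2, "complexity": 4, "minutes": 120},
    {"name": "Neuroshima Hex", "luck": 3, "complexity": 2, "minutes": 30},
    {"name": "Some Heavy Euro","luck": 1, "complexity": 5, "minutes": 180},
]

# A family-evening filter: low complexity, short playing time.
family_picks = [g["name"] for g in games
                if g["complexity"] <= 2 and g["minutes"] <= 45]
print(family_picks)  # ['Neuroshima Hex']

# A calculation-lover's filter: low luck, long decisions welcome.
brain_burners = [g["name"] for g in games if g["luck"] <= 2 and g["complexity"] >= 4]
print(brain_burners)  # ['Automobile', 'Some Heavy Euro']
```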

Maarten D. de Jong
Netherlands Zaandam

Yes, that's been a long-standing request as well. The point is that there is considerable disagreement on what constitutes levels of 'influence of luck', 'rules complexity', and the like. Players tend to progress throughout their 'career' too: what is over your head at one time is quite acceptable fare a few years later. You can ask people to rate these quantities, of course - you can ask them to rate anything, including the attractiveness of the designer's behind, so to speak. You could average the data, or do a pairwise-comparison thing again; the maths don't really care. But then you again run into the problem of reversing the average into something you yourself understand; and that's ignoring the fact that some games might attract different kinds of gamers, so the rates get skewed. (A fellow geek did a cursory analysis a few weeks ago of the quantity 'game weight', and there indeed seemed to be a step change in weight perception once the game time exceeded ≈ 90 minutes. Whether or not the effect is statistically significant remains to be analysed, but given that it seems to occur at a highly significant length of play time...) The outcome wouldn't be an improvement over classifications like 'gamer's game', 'family game', 'worker placement', 'set collection', '30-minute game time', '45-minutes-per-player game time', and the like; it would just amalgamate these in some hard-to-track way.
Put another way: if you already know a dozen or two games, then you have a working knowledge of what the above classifications mean, obviating the need for ratings of luck, interaction, and rules complexity. If you don't know any games and there is no one to guide you, you should simply pick a few from the various subdomain top-100s (strategy, family, thematic, ...) to see what these things are about. This is as good a place to start as any, again obviating the need for these special ratings. Given how people tend to behave once they are into exploring this little hobby - meaning that they venture well beyond said top-100s - the numbers you request seem to be an answer looking for a problem.


