Moderator: Hrqls , coan.net , rod03801 
 BrainKing.com

Board for everybody who is interested in BrainKing itself, its structure, features and future.





27. September 2005, 03:34:38
playBunny 
Subject: Re: BKR
alanback: Yes, that's correct. The examples should have said something like "Taking 16 matches" rather than "After 16 matches".

If I had a BKR simulator I could do the numbers properly, but... yes again, the conclusion is correct, because the bias exists no matter what the two players' ratings are. The point being that the higher-rated player cannot maintain their level against the opponent unless they win a highly unrealistic number of games - way beyond chance.

The ELO Bg formula, once understood, is actually very elegant. (Though, to a non-mathematician like myself, that elegance has to be studied to get it into the brain, lol.) One of the key points is that it maintains the rating difference between two players who are playing consistently at their respective skill levels. The idea is that a player rated 2000 is going to win, for example, 56% of games against an 1800er; so is the 1800er against a 1600er; and so too the 1600er against a 1400er. It's the difference of 200 that matters, not the ratings themselves.
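To make that concrete, here is a small sketch of the expected-score idea using the FIBS-style backgammon Elo formula, which does produce roughly the 56%/44% split for a 200-point gap. (The exact formula and constants BrainKing uses for BKR are an assumption here, not taken from the site.)

```python
import math

def expected_win_prob(rating_diff, match_length=1):
    """Probability that the higher-rated player wins a match.

    FIBS-style backgammon formula (assumed, not confirmed as BrainKing's):
        P(underdog wins) = 1 / (10 ** (D * sqrt(n) / 2000) + 1)
    where D is the rating difference and n the match length.
    """
    p_underdog = 1.0 / (10 ** (rating_diff * math.sqrt(match_length) / 2000) + 1)
    return 1.0 - p_underdog

# A 200-point gap gives roughly 56% / 44%, regardless of absolute ratings:
print(round(expected_win_prob(200), 3))  # ≈ 0.557
```

Note that only the difference appears in the formula, which is exactly why 2000 vs 1800 and 1600 vs 1400 predict the same win rate.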

The formula is a feedback loop that awards points according to this rating difference, such that over time the resulting rating difference reflects the actual performance difference. The winner's gain and the loser's loss are the same amount, but that amount is greater when the lower-rated player wins. This keeps two players at the same difference when they play consistently, yet makes them converge when the lower-rated one consistently plays beyond their rated ability. But only to a given point - the point where their win rate against each other matches what the new rating difference predicts.
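A sketch of that symmetric update, again assuming the FIBS-style formula and a constant of 4 per point (both are assumptions; BrainKing's actual BKR constants may differ):

```python
import math

def update_ratings(r_winner, r_loser, match_length=1, k=4.0):
    """One rating update: both players move by the same delta,
    and the delta is larger for an upset (lower-rated player wins)."""
    diff = r_winner - r_loser  # positive when the favourite won
    p_winner = 1.0 - 1.0 / (10 ** (diff * math.sqrt(match_length) / 2000) + 1)
    # The less expected the win, the bigger the swing:
    delta = k * math.sqrt(match_length) * (1.0 - p_winner)
    return r_winner + delta, r_loser - delta

# Favourite wins: small swing. Upset: large swing.
print(update_ratings(2000, 1600))  # 2000er gains a little
print(update_ratings(1600, 2000))  # 1600er gains a lot
```

Because winner and loser move by the same delta, the total number of rating points in the system is conserved by each game.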

As an example, say the 2000er were to play only the 1600er over a lot of matches, but the 1600er was winning 44% (i.e. the rate expected of an 1800er). The two ratings would converge until they were about 200 points apart (1900 and 1700) and then stay that way - the rating difference now accurately reflecting the performance difference.
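The convergence can be sketched with a deterministic simulation: each step applies the *average* rating drift when the lower-rated player truly wins 44% of single-point matches. (Formula and constants are the same assumptions as above, not BrainKing's actual code; with these constants a 44% win rate corresponds to an equilibrium gap just over 200 points.)

```python
import math

def expected_win_prob(diff):
    """FIBS-style (assumed) probability that the higher-rated player wins."""
    return 1.0 - 1.0 / (10 ** (diff / 2000) + 1)

def simulate(r_high=2000.0, r_low=1600.0, low_true_rate=0.44,
             k=4.0, games=20000):
    """Apply the expected per-game drift repeatedly until the gap settles."""
    for _ in range(games):
        p_high = expected_win_prob(r_high - r_low)
        # Average change for the higher-rated player per game:
        # actual score (1 - low_true_rate) minus expected score.
        drift = k * ((1.0 - low_true_rate) - p_high)
        r_high += drift
        r_low -= drift
    return r_high, r_low

r_high, r_low = simulate()
print(round(r_high - r_low, 1))  # settles near 200 (≈ 209 with these constants)
```

The gap stops moving exactly when the predicted win rate equals the actual one, which is the "only to a given point" behaviour described above.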

I'm not enough of a maths-head to picture how your proposal would work [I'd have to write a program to show me how it works - or you can.] but I don't think it would create the negative feedback effect. It also wouldn't have the same comparability (e.g. a difference of 200 = 56%:44% wins), though that may or may not be a disadvantage.
