Subject: Facebook (???) tackling Go with deep learning
Nick Bentley (United States)
I'd guess Facebook is getting into deep learning to serve more targeted ads and the like, but they're also tackling Go, in what seems to be a fairly novel way.
Guy Adshead (United Kingdom)
Nice article. I doubt they'll get anywhere with Go, as it comes down to more than recognising patterns on the board. The other areas they're pushing with their deep learning algorithms are really good, though, especially the ability to describe pictures to blind people.
Dr Caligari (United States)
Very interesting. It reminds me of Blondie24, in which neural nets were trained and selected by evolution to play Checkers. Today's neural nets have improved since those days: they now use recurrent or convolutional architectures, as opposed to the simple feedforward networks evolved for Blondie24.

https://en.wikipedia.org/wiki/Blondie24
 
Rex Moore (United States)
"Go—the Eastern version of chess..."

<sigh>
John (United Kingdom)
"Facebook has created an artificial intelligence system that is "getting close" to beating the best human players at the Chinese board game Go, Mark Zuckerberg has revealed."

http://www.bbc.co.uk/news/technology-35419141

More here:

http://arxiv.org/abs/1511.06410

I suspect the "getting close" may be an exaggeration.
 
christian freeling (Netherlands)
zabdiel wrote:
I suspect the "getting close" may be an exaggeration.
I used to be sceptical about machine learning but I wouldn't be amazed if in a decade or so you could have a grandmaster on your smartphone.
So it goes. (United States)
Google's AlphaGo is already beating 2-dan level players.
https://www.youtube.com/watch?v=g-dKXOlsf98
Richard Moxham (United Kingdom)

I mislaid the link, but according to what I read the engine MellowFruitfulness is now producing odes which experts assess as being in the range 71-76% as good as those of Keats. It's clearly only a matter of time - and maybe a lot less time than we imagined - before the hitherto unthinkable actually comes to pass.

Whilst this is obviously on one level a tremendously exciting prospect, the Eng Lit specialist in me can't help feeling a bit sad. One wonders whether, once the fateful day arrives, there will any longer be much point in continuing to read human poets.

Brian Wittman (United States)
fogus wrote:
Google's AlphaGo is already beating 2-dan level players.
https://www.youtube.com/watch?v=g-dKXOlsf98
Just to specify, that's a professional 2-dan (2p). Apparently it will play Lee Sedol (a 9p) in March: http://mashable.com/2016/01/27/google-ai-beats-go-champ/#yDG...

The important thing is that the program didn't really start out strong; it "learned". It had long been predicted that success would come from machine learning rather than from a set of static algorithms. Considering that it beat the 2p five games out of five, and it should still be improving, I would expect Lee to fall as well. (Don't forget that professional dan ranks are worth about 1/3 of a handicap stone each, as opposed to amateur dans, where each rank equals one handicap stone.)
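To put that rank arithmetic in rough numbers, here is a quick sketch using the ratios cited in the post above; the 1/3-stone-per-professional-dan figure is an approximation, not an official conversion.

```python
# Back-of-the-envelope conversion of rank gaps into handicap stones, using
# the ratios cited above: roughly 1/3 stone per professional dan and one
# stone per amateur dan. These are approximations, not official values.

def handicap_stones(rank_gap, professional=True):
    """Approximate handicap implied by a rank gap."""
    stones_per_rank = 1 / 3 if professional else 1.0
    return rank_gap * stones_per_rank

# The 2p that AlphaGo beat versus Lee Sedol (9p): a 7-rank professional gap.
print(round(handicap_stones(9 - 2), 1))               # about 2.3 stones
# The same 7-rank gap between amateur dan players would be a 7-stone handicap.
print(round(handicap_stones(7, professional=False)))  # 7
```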
John (United Kingdom)
Weird that it was announced the same day as Facebook gave an update. From the BBC news article:

"Tested against rival Go-playing AIs, Google's system won 499 out of 500 matches"

"Many of the best programmers in the world were asked last year how long it would take for a program to beat a top professional, and most of them were predicting 10-plus years"

Very impressive.
 
Luis Bolaños Mures (Spain)
zabdiel wrote:
"Tested against rival Go-playing AIs, Google's system won 499 out of 500 matches"

Very impressive.
To be fair, though, Google's system was running on stronger hardware. They should have set the opposition on an equal footing.
Padoru padoru (Sweden)
luigi87 wrote:
To be fair, though, Google's system was running on stronger hardware. They should have set the opposition on an equal footing.
"Oh come on," Many Faces of Go 11.0 was heard shouting from a ZX Spectrum.
Maurizio De Leo (Singapore)
luigi87 wrote:
zabdiel wrote:
"Tested against rival Go-playing AIs, Google's system won 499 out of 500 matches"

Very impressive.
To be fair, though, Google's system was running on stronger hardware. They should have set the opposition on an equal footing.
The main issue is that the other programs did not run on distributed systems (at least not in their commercially available versions). Google was interested in real competition and gave the opponents the best computers they could handle. After the 99% beating, they gave the opponents a four-stone advantage (which resulted in "only" a 77% win rate).
Luis Bolaños Mures (Spain)
To me, about as surprising as the main news is the claim that

Quote:
Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte-Carlo tree search programs that simulate thousands of random games of self-play.
As I understand it, this would mean that such a version plays virtually instantly, even on off-the-shelf hardware, which is in line with Facebook's statement about their own program:

Quote:
in the past six months we've built an AI that can make moves in as fast as 0.1 seconds and still be as good as previous systems that took years to build.
I don't really understand how this is possible. By definition, I would assume tactics are more about calculation than intuition. How can the AI be presented with, say, a complex tsumego and spot the correct play with no lookahead at all? Now that's some intuition power on the part of the machine...

I also wonder if this would be replicable in more tactically volatile games (like chess) or if it's only made possible by the remarkable logical coherence and uniformity of Go.
Carlos Luna (Spain)
luigi87 wrote:
I don't really understand how this is possible. By definition, I would assume tactics are more about calculation than intuition. How can the AI be presented with, say, a complex tsumego and spot the correct play with no lookahead at all? Now that's some intuition power on the part of the machine...
Well, it is difficult to imagine what happens inside a neural network, and I'm pretty sure that nobody knows what is happening in these highly trained and complex networks (a situation that will become increasingly common, and frustrating, in the future).

That being said, if they aren't really using any lookahead, then you can think of them as encyclopedias of "Go proverbs on steroids" (or "crystallized intuition" if you prefer). Basically, they look at a tsumego and say "play in the middle" for a very complicated definition of "middle" that nobody understands but that is trivially easy to compute once the network has been trained.
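As a toy illustration of that "no lookahead" idea, the sketch below picks a move from a single forward pass over the position. The one-layer "network" and its random weights are stand-ins, nothing like what Facebook or DeepMind actually use; the point is only the shape of the computation: position in, per-point scores out, play the best-scoring legal point.

```python
# Minimal sketch of "crystallized intuition": a (hypothetical) trained policy
# network scores every point on the board in one forward pass and the highest
# scoring legal point is played -- no search tree, no playouts.
import numpy as np

BOARD = 19
rng = np.random.default_rng(0)

# Toy "network": a single linear layer over a flattened 3-plane encoding of
# the position (own stones, opponent stones, empty points). Real systems use
# deep convolutional networks trained on strong games and/or self-play, but
# the interface is the same: position in, per-point move scores out.
weights = rng.normal(scale=0.01, size=(3 * BOARD * BOARD, BOARD * BOARD))

def policy_move(own, opp):
    """Pick a move from a single forward pass, masking occupied points."""
    empty = 1.0 - own - opp
    features = np.concatenate([own, opp, empty], axis=None)   # shape (3*361,)
    scores = features @ weights                               # shape (361,)
    scores[empty.ravel() == 0] = -np.inf   # only empty points (ignoring ko/suicide)
    best = int(np.argmax(scores))
    return divmod(best, BOARD)             # (row, col)

own = np.zeros((BOARD, BOARD))
opp = np.zeros((BOARD, BOARD))
opp[3, 3] = 1.0                            # an opponent stone on a 4-4 point
print(policy_move(own, opp))
```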

luigi87 wrote:
I also wonder if this would be replicable in more tactically volatile games (like chess) or if it's only made possible by the remarkable logical coherence and uniformity of Go.
The stone placement nature of Go will surely help, but they will take over any game in the near future no matter what.
christian freeling (Netherlands)
CarlosLuna wrote:
The stone placement nature of Go will surely help, but they will take over any game in the near future no matter what.
That's what the optimist hopes and the pessimist fears.
Virginia Milne (New Zealand)
Oh bugger, it looks as though we may know whether the machines have won in the near future, before I shuffle off this mortal coil.

If the worst comes to the worst, I will have the consolation of knowing that games can play out to completion without the necessity of my agency, or the agency of any other human being.

Brian Wittman (United States)

For anyone interested in how AlphaGo played here's a commentary you can watch: https://www.youtube.com/watch?v=NHRHUHW6HQE

And here's one you can read: https://gogameguru.com/go-commentary-deepmind-alphago-vs-fan...
Stephen Tavener (United Kingdom)
CarlosLuna wrote:
The stone placement nature of Go will surely help, but they will take over any game in the near future no matter what.
I very much doubt it - in order to build a strong evaluation function, whether through a neural network or simple heuristics, you need a large collection of positions with known values. That makes games like chess and go easy targets, compared with most of the games on these forums.
christian freeling (Netherlands)
mrraow wrote:
CarlosLuna wrote:
The stone placement nature of Go will surely help, but they will take over any game in the near future no matter what.
I very much doubt it - in order to build a strong evaluation function, whether through a neural network or simple heuristics, you need a large collection of positions with known values. That makes games like chess and go easy targets, compared with most of the games on these forums.
Don't neural networks learn from playing against themselves? Then there's the significant progress in MC-based approaches, and for some types of games making a deterministic evaluation function is actually easier than playing the game. So it seems to me that a broad approach is possible that might crack almost any new game, once it appears interesting enough to do so.
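To make the self-play idea concrete, without claiming anything about how AlphaGo was actually trained, here is a deliberately tiny sketch in which a value table (standing in for a neural network) improves purely by playing a toy counting game against itself; no expert examples are involved.

```python
# Self-play learning on a trivial game: players alternately add 1 or 2 to a
# running total, and whoever reaches exactly 10 wins. The "model" is just a
# table of position values, updated from the outcomes of its own games.
import random
from collections import defaultdict

TARGET = 10
value = defaultdict(float)   # estimated value of a total for the player to move
EPSILON, ALPHA = 0.2, 0.1    # exploration rate and learning rate

def moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

def choose(total):
    if random.random() < EPSILON:
        return random.choice(moves(total))
    # Otherwise pick the move that leaves the opponent in the worst position.
    return min(moves(total), key=lambda m: value[total + m])

for _ in range(20000):
    total, history = 0, []
    while total < TARGET:
        history.append(total)
        total += choose(total)
    # The player who moved last reached TARGET and wins. Walk backwards,
    # nudging positions towards +1 for the winner and -1 for the loser.
    reward = 1.0
    for state in reversed(history):
        value[state] += ALPHA * (reward - value[state])
        reward = -reward

# Totals 1, 4 and 7 are theoretically lost for the player to move; after
# self-play their learned values should come out clearly negative.
print({s: round(value[s], 2) for s in range(TARGET)})
```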
Carlos Luna (Spain)
mrraow wrote:
in order to build a strong evaluation function, whether through a neural network or simple heuristics, you need a large collection of positions with known values. That makes games like chess and go easy targets, compared with most of the games on these forums.
I don't expect "non-lookahead" algorithms to be the best possible approach to game-playing AI, but neural networks will play a relevant role as static evaluation functions in the future.

As Christian has already said, any game "interesting enough" can be learned by a neural network through self-play, with little or no expert knowledge as a starting point.
Russ Williams (Poland)
christianF wrote:
Don't neural networks learn from playing against themselves? Then there's the significant progress in MC-based approaches, and for some types of games making a deterministic evaluation function is actually easier than playing the game. So it seems to me that a broad approach is possible that might crack almost any new game, once it appears interesting enough to do so.
But then aren't there some games which are resistant to simple MC without some hard-coded, game-specific strategic/tactical knowledge? (E.g. I remember reading that Arimaa was resistant to it: something about there being so many different ways to win in many endgame positions, given random responses by the opponent, that MC would be tricked into thinking a given position was more favorable than it really was, or some such explanation.)
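That failure mode is easy to reproduce on a contrived toy tree; the sketch below has nothing to do with Arimaa's actual rules, it just shows the mechanism. Every one of "our" moves wins against nine of the opponent's ten random replies but loses to the one precise refutation, so uniform random playouts rate a theoretically lost position at roughly 90%.

```python
# A deliberately contrived two-ply game, encoded as nested dicts: "we" have
# ten moves at the root; after each one the opponent has ten replies, nine
# of which let us win and exactly one of which refutes us.
import random

replies = {f"reply{i}": "win" for i in range(9)}
replies["refutation"] = "loss"
root = {f"move{i}": dict(replies) for i in range(10)}

def mc_estimate(node, playouts=10000):
    """Win rate for 'us' when both sides pick moves uniformly at random."""
    wins = 0
    for _ in range(playouts):
        our_move = random.choice(list(node))
        reply = random.choice(list(node[our_move]))
        wins += node[our_move][reply] == "win"
    return wins / playouts

def best_play_value(node):
    """True minimax value: the opponent always finds the refutation."""
    return max(
        min(1 if outcome == "win" else 0 for outcome in reply_dict.values())
        for reply_dict in node.values()
    )

print("random-playout estimate:", mc_estimate(root))      # about 0.9
print("value under best play:  ", best_play_value(root))  # 0, i.e. a loss
```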
 
christian freeling (Netherlands)
russ wrote:
But then aren't there some games which are resistant to simple MC without some hard-coded, game-specific strategic/tactical knowledge? (E.g. I remember reading that Arimaa was resistant to it: something about there being so many different ways to win in many endgame positions, given random responses by the opponent, that MC would be tricked into thinking a given position was more favorable than it really was, or some such explanation.)
I'm no expert, but the nature of a mechanism must play a role. In Havannah MC works well, supposedly because it isn't too difficult to make a million random play-outs (and presumably including some sort of finitude if rings pop up early on). Emergo would be far more difficult to approach in this way, but a simple deterministic evaluation function and alpha-beta pruning may already play a very strong game: forced sequences trim the branching density and strategy has a simple aim based on piece strength. Neural networks learn, and who knows what comes out of it. But the approach doesn't seem at odds with the other two.

The point being: the mechanism would matter in the choice of approach, and neural networks may upset everything.
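For reference, the alpha-beta approach mentioned above looks roughly like the sketch below, with the static evaluation function left as a pluggable parameter. The game-specific helpers are hypothetical placeholders and nothing here is specific to Emergo; it is just the standard pruning skeleton, shown with a trivial game tree as a demo.

```python
# Standard alpha-beta search with a pluggable static evaluation function.
# legal_moves, apply_move and evaluate are placeholders for whatever the
# game in question provides.
import math

def alphabeta(pos, depth, alpha, beta, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(pos)
    if depth == 0 or not moves:
        return evaluate(pos)
    if maximizing:
        best = -math.inf
        for m in moves:
            best = max(best, alphabeta(apply_move(pos, m), depth - 1, alpha, beta,
                                       False, legal_moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break   # the opponent will avoid this line; prune it
        return best
    best = math.inf
    for m in moves:
        best = min(best, alphabeta(apply_move(pos, m), depth - 1, alpha, beta,
                                   True, legal_moves, apply_move, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Tiny demo: positions are nested lists, leaves are static scores.
tree = [[3, 5], [2, [9, 1]]]
print(alphabeta(tree, 4, -math.inf, math.inf, True,
                legal_moves=lambda p: list(range(len(p))) if isinstance(p, list) else [],
                apply_move=lambda p, m: p[m],
                evaluate=lambda p: p))   # prints 3
```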
Luis Bolaños Mures (Spain)
If a neural network is not the right approach for a certain game, a clever meta neural network will just choose a different approach.
Stephen Tavener (United Kingdom)
CarlosLuna wrote:
As Christian has already said, any game "interesting enough" can be learned by a neural network through self-play, with little or no expert knowledge as a starting point.
I'm not aware of any cases where a neural network has been trained to a high level of play purely through self-play. In the more successful attempts, the neural networks were trained on a large number of high-quality games/positions. If you're aware of any published papers contradicting me, I'd love to read them.
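The supervised route described above can be sketched as fitting a static evaluation function to a collection of positions with known values. The data below is a random stand-in (a real effort would use positions from strong games, as noted), and the plain logistic-regression model is only illustrative.

```python
# Fit a static evaluation function to labelled positions. The "positions"
# here are random feature vectors with synthetic win/loss labels; a real
# effort would extract features from positions in high-quality games.
import numpy as np

rng = np.random.default_rng(1)
n_positions, n_features = 5000, 128                  # hypothetical encoding size
X = rng.normal(size=(n_positions, n_features))
true_w = rng.normal(size=n_features)
y = (X @ true_w > 0).astype(float)                   # stand-in "known values"

w = np.zeros(n_features)                             # weights of the evaluator
lr = 1.0
for _ in range(500):                                 # plain logistic regression
    p = 1.0 / (1.0 + np.exp(-(X @ w)))               # predicted win probability
    w -= lr * X.T @ (p - y) / n_positions            # gradient step, cross-entropy

def evaluate(position_features):
    """Learned static evaluation: estimated win probability for the side to move."""
    return 1.0 / (1.0 + np.exp(-(position_features @ w)))

print(float(evaluate(X[0])), y[0])   # after training these should roughly agree
```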