
A while. This just isn't like Go or Chess. The gap from perfect information to imperfect information is quite a chasm, and the gap from turn-based to real-time is even wider.

I play Age of Empires 2 semi-competitively, and I just can't imagine the research progress that would have to be made for a pro to lose to an APM-limited AI agent. So much of the game comes down to intuiting what your opponent is planning without being able to see what they're doing, and more importantly intuiting what your opponent isn't ready for.

The biggest difference, though, is the "RT" in "RTS"-- real time. This isn't turn-based anymore, where at a given moment you have a single choice to make, a single piece to move as in Chess and Go, and can then wait for the singular and visible reaction your opponent makes before making your next choice.

My understanding is that the moves a program like AlphaGo makes are not interconnected -- it picks each move individually as the ideal move for that board state. It could take over halfway through someone else's game and would make the same move it would have made at that point if it had been in control the whole time and arrived at that board state on its own.

But that doesn't work in a real-time game, since you and your opponent are now moving simultaneously and the "board" is never static. Your moves must be cohesive and planned and flow continuously without time to ponder, each connected to the last. There is no "one" move for a given state.

Another facet of real-time play is the idea of distraction. It's very important in RTS's to keep your opponent distracted, to disrupt their plans and their focus, by coming from unexpected directions at unexpected times, sometimes concurrently with other operations against them. This can't happen in Chess or Go, where the demands on your focus are far less urgent and two things can't happen at once in a literal sense. Can an AI agent learn to appreciate the power of distraction? Can it learn to intuit what will be most disruptive to a human, and what won't be disruptive at all? How can you teach a computer to learn to be annoying?

I will say, of course, that nobody saw AlphaGo coming. And I hope it's the same with RTS's. That would be so exciting. I would love to see an AI blow us away with previously unthought-of strategies. That would be the coolest thing ever. So I hope it happens. But I'd be astonished. RTS is just such a whole new level of thinking for AIs.



At the pro level, much of the game is about what information you can gain, about choosing what to show and, more importantly, what to hide, and about acting on non-triggers.

An example of a non-trigger: if I haven't seen a certain unit by time X, I know I'm safe to do Y. It is acting on the information that something didn't happen.

To expand: I saw my opponent starting two gases at my 21 supply scout. When I scouted again at 47 supply, I saw no gas-heavy units, so I can deduce the gas was used for better technology. This gives me the opportunity to increase my worker count by Z before building an army, or I could try to kill my opponent right there for his technological greed.
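
A bot could encode that kind of non-trigger rule as something like the sketch below. The unit names, timings, and threshold are all made up for illustration; this isn't from any real bot framework.

    # Toy sketch: "if I have NOT seen unit X by time T, it's safe to do Y".
    # All names and numbers are hypothetical.

    def safe_to_do(observations, unit, deadline, now):
        """True once the deadline has passed without the unit being seen by then."""
        seen_at = observations.get(unit)        # None if never observed
        if seen_at is not None and seen_at <= deadline:
            return False                        # the trigger fired; not safe
        return now >= deadline                  # non-trigger: absence is information

    # Example: no gas-heavy unit spotted by the second scout -> greedy worker boost.
    observations = {"assimilator": 120}         # seconds into the game (made up)
    if safe_to_do(observations, "dragoon", deadline=300, now=310):
        print("No early gas units spotted; adding extra workers before army.")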


> intuiting what your opponent isn't ready for

I haven't played AOE2 so I don't know if the mechanics are similar enough to translate, but my goal for my Starcraft bot is to do precisely this. If you can enumerate the possible builds (what's available when) and assess the matchups between builds, you can make this happen using some intuitive expansions on adversarial search.
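
To sketch what I mean (build names and payoff numbers are invented, not from any actual bot): treat the opening as a small matrix game between the enumerated builds and pick against your belief about the opponent's options.

    # Toy sketch: opening selection as a matrix game over enumerated builds.
    my_builds = ["fast_expand", "two_gate_rush", "tech_opening"]

    # payoff[mine][theirs] = estimated win probability of my build vs. theirs (made up)
    payoff = {
        "fast_expand":   {"fast_expand": 0.55, "proxy_rush": 0.25, "tech_opening": 0.60},
        "two_gate_rush": {"fast_expand": 0.70, "proxy_rush": 0.50, "tech_opening": 0.40},
        "tech_opening":  {"fast_expand": 0.50, "proxy_rush": 0.20, "tech_opening": 0.55},
    }

    def best_build(belief):
        """Pick the build with the best expected payoff under a belief over opponent builds."""
        def expected(mine):
            return sum(p * payoff[mine][theirs] for theirs, p in belief.items())
        return max(my_builds, key=expected)

    # Scouting shifts the belief, and the chosen build shifts with it.
    print(best_build({"fast_expand": 0.6, "proxy_rush": 0.1, "tech_opening": 0.3}))
    print(best_build({"fast_expand": 0.1, "proxy_rush": 0.8, "tech_opening": 0.1}))

The "what isn't my opponent ready for" part falls out of the same table: you're looking for the build whose expected payoff is highest against everything the scouting information still allows.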

> Your moves must be cohesive and planned and flow continuously without time to ponder, each connected to the last.

Recomputing the entire plan from the current state works in RTS too, but only if your decision-making takes everything already in motion into account and has no internal discrepancies. That's a pretty big if; this sort of inconsistency accounts for a lot of bot weakness today. Units spinning around due to slight changes in perceived state waste a lot of resources.
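
One way to keep full replanning from causing that spinning is simple hysteresis: only switch a unit's order when the new plan is clearly better. The margin and scoring function below are assumptions for illustration, not from any particular bot.

    # Toy sketch: replan every frame, but keep the current order unless a new
    # one wins by a margin, so tiny state fluctuations don't flip units around.
    SWITCH_MARGIN = 0.15  # hypothetical hysteresis threshold

    def choose_order(unit, candidate_orders, score):
        best = max(candidate_orders, key=score)
        current = unit.get("current_order")
        if current is not None and score(best) < score(current) + SWITCH_MARGIN:
            return current      # stick with what's already in motion
        return best             # improvement is big enough to justify switching

    unit = {"current_order": "attack_north"}
    scores = {"attack_north": 0.50, "attack_south": 0.55}
    print(choose_order(unit, list(scores), scores.get))   # stays on attack_north
    scores["attack_south"] = 0.80
    print(choose_order(unit, list(scores), scores.get))   # now switches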

> Can an AI agent learn to appreciate the power of distraction?

Despite multitasking theoretically being one of the strengths of an AI, a lot of the current field can't handle more than one military situation at a time. In this year's SSCAIT a lot of bots completely fell apart when confronted with one of the top bots (Bereaver) doing reaver drops.

I'm not sure a bot can meaningfully learn distraction, but I'm not sure it's necessary - attacking on simultaneous fronts is optimal anyway. The army can only be in so many places at once.


On the other hand, bots are starting to beat professionals at (thousands of hands of repeated) Poker, so I think we can't say that imperfect information is something that's especially intractable for machine learning algorithms.

http://spectrum.ieee.org/automaton/robotics/artificial-intel...


Yeah but compare the search space of Poker vs Starcraft.


Exactly. And poker is, again, turn-based with a single move to be made. And it may not be "perfect information," but it is again a game with a static board state, where there is probably a statistically optimal move for a given state, provided you remember the other players' previous moves so far in the round.


This wasn't an issue for Bridge either.

As soon as you can build a probability distribution over possible states, you can use Monte Carlo-like methods.
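
A minimal sketch of that idea (often called determinization): sample concrete hidden states from the belief distribution, evaluate each candidate move against the samples, and pick the move with the best average outcome. The payoff table and state names below are toy assumptions.

    # Toy sketch of Monte Carlo over sampled hidden states.
    import random

    def monte_carlo_move(moves, hidden_states, probs, evaluate, samples=1000):
        totals = {m: 0.0 for m in moves}
        for _ in range(samples):
            state = random.choices(hidden_states, weights=probs)[0]  # sample a possible world
            for m in moves:
                totals[m] += evaluate(m, state)
        return max(moves, key=lambda m: totals[m] / samples)

    def evaluate(move, state):
        # hypothetical payoffs standing in for a real rollout or evaluation
        return {("raise", "strong"): -1, ("raise", "weak"): 2,
                ("fold", "strong"): 0,  ("fold", "weak"): 0}[(move, state)]

    print(monte_carlo_move(["raise", "fold"], ["strong", "weak"], [0.7, 0.3], evaluate))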


Speaking of AoE 2, I really hope this research ends up benefiting that game. The AI in it has always been so bad but it's still my favorite game of all time.


Microsoft hired one of the best scripters (Promi) on the AoE2 AI circuit to write the official AI for AoE2 HD, and it's quite good, although the rules engine doesn't have enough features to beat early harassment. If you let it boom it's pretty scary for new players.

If you play using the UserPatch on Voobly, where the serious custom AIs are written, you can play vs. Barbarian. It is _very_ good; if you're not a semi-competitive player, it will certainly beat you.

The upcoming UserPatch 1.5 will add even more features, so Barbarian and other custom AIs will become stronger.

To be clear, AoE2 AIs are all rules-based and written by pro players themselves, which is quite different from what DeepMind is trying to do.


Have you played against the updated AI in AoE2 HD?


It is hard, but my impression is that it's because it cheats. I'm looking more for interesting AI (creativity, adaptation) than for raw difficulty due to its ability to spam military.



