#68, 04-24-2017, 06:59 PM
Pursuivant (Approved Member)

Quote:
Originally Posted by Storebror
One thing we shouldn't forget is that AI is comparably "stupid" when it comes to drawing conclusions based on the facts it knows.
It doesn't have to be. For relatively simple "pure" air combat (i.e., no bomber defense, no ground attack, no strategic objectives, just a dogfight), a "decision tree" can make the AI fight fairly realistically. In fact, I would guess that the real trick is keeping it from being too good!
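As a very rough illustration, here is a minimal sketch of what such a decision tree could look like. The state fields, thresholds, and maneuver names are all invented for the example and aren't taken from any particular sim:

Code:
from dataclasses import dataclass

# Hypothetical snapshot of the tactical picture; field names are made up for this sketch.
@dataclass
class CombatState:
    own_energy: float        # rough energy state, 0..1
    bandit_range_m: float    # distance to the nearest threat, meters
    bandit_angle_off: float  # degrees off our tail (0 = dead six)
    guns_on_target: bool

def choose_maneuver(s: CombatState) -> str:
    """Small hand-built decision tree for a 1-v-1 dogfight."""
    if s.bandit_angle_off < 30 and s.bandit_range_m < 600:
        # Threat parked near our six: defend first, think later.
        return "break_turn" if s.own_energy > 0.4 else "scissors"
    if s.guns_on_target:
        return "fire_burst"
    if s.own_energy > 0.7:
        # Plenty of energy: convert it into position.
        return "high_yo_yo"
    # Low energy and no immediate threat: extend and rebuild speed.
    return "extend_and_climb"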

I'd bet that "deep learning" AI - were it ever implemented in an air combat sim - would produce frighteningly effective and realistic AI behavior within a short time, to the point that the AI would be legitimately unbeatable by all but the most talented human players. Of course, that's a pipe dream given the current time and cost required for machine learning (unless you work for DARPA or General Atomics).

Quote:
Originally Posted by Storebror
In that regard, letting the AI "cheat" in another regime to compensate for this lack of experience might be a valid decision to some degree.
I agree, but any AI cheats have to seem fair to the players. In particular, AI "reflexes" can't be any better than a good human player's: no laser-guided gunnery, no 360-degree radar vision, and no instant, perfect control inputs.
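One way to enforce that kind of limit is to push every AI decision through a human-scale reaction delay and add noise to its gunnery solution. A rough sketch, with skill values and names that are purely illustrative guesses rather than tuned figures:

Code:
import random

# Illustrative skill table: reaction delay in seconds, aiming error in milliradians.
SKILL_PROFILES = {
    "rookie":  {"reaction_s": 0.9,  "aim_error_mrad": 12.0},
    "average": {"reaction_s": 0.5,  "aim_error_mrad": 6.0},
    "ace":     {"reaction_s": 0.25, "aim_error_mrad": 2.5},
}

def earliest_reaction(event_time_s: float, skill: str) -> float:
    """Earliest time the AI is allowed to respond to something it 'saw'."""
    p = SKILL_PROFILES[skill]
    # Jitter the delay so reactions aren't metronome-perfect.
    return event_time_s + random.gauss(p["reaction_s"], p["reaction_s"] * 0.2)

def perturbed_aim(true_lead_mrad: float, skill: str) -> float:
    """Add gunnery error so the AI can't snipe with laser accuracy."""
    p = SKILL_PROFILES[skill]
    return true_lead_mrad + random.gauss(0.0, p["aim_error_mrad"])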

I think the best way to keep cheats from being obvious is a simple, skill-based percentage chance that the AI will screw up and do something random and/or stupid, rather than acting with killer-robot efficiency.
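That idea is cheap to implement: roll against a per-skill blunder probability each time the AI picks a maneuver, and occasionally substitute something suboptimal. Again, the probabilities and maneuver names below are placeholders:

Code:
import random

# Placeholder blunder probabilities per decision; not tuned values.
BLUNDER_CHANCE = {"rookie": 0.30, "average": 0.15, "ace": 0.05}

BLUNDERS = ["overshoot", "reverse_too_early", "fixate_on_target", "bleed_energy"]

def maybe_blunder(intended_maneuver: str, skill: str) -> str:
    """Occasionally replace the 'correct' move with a human-looking mistake."""
    if random.random() < BLUNDER_CHANCE[skill]:
        return random.choice(BLUNDERS)
    return intended_maneuver

Tied to the same skill setting that drives reaction time and gunnery error, that single roll goes a long way toward making the AI read as a fallible pilot rather than a script.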