Quote:
"Nerfing" optimum performance can also be tricky, since you basically have to "teach" the computer how to behave like a less than fully competent human if you want realism. Mimicking human performance limits is also a bit tricky, except when you're dealing with physiological limits which can be quantified, like g forces or limits of vision. But, the extent to which we anthropomorphize AI behavior is a measure of the AI programmer's success. If we can temporarily forget that we're playing against a machine, then for a moment that programming passes the Turing Test! |
Quote:
More assumptions can be programmed like what you have up there. That's fairly "easy" to check for, I would imagine... although I'm not really sure if the AI would know if it's in friendly or enemy territory, or if that kind of thing is passed to the AI at all. Would be interesting!
Quote:
Programming is a difficult thing. What people don't seem to understand is that if doing one thing is x amount of difficulty, then doing two is something like four (two squared) x's worth of difficulty, and doing five is about 3,125 (five to the power five) x's worth. When someone says "just one more thing" on top of a number of things already being done, that can push the difficulty from 3,125 x's worth up to 46,656 (six to the power six) x's worth.
No problem, mate. The funny thing is that I have some programming experience, and I frequently work together with programmers, so the difficulty rule you mentioned is well known to me. Unfortunately, when I use a piece of software, I involuntarily try to guess "what's behind the curtain" / "what's in the black box", and my badly formulated questions can easily be misunderstood as pretentiousness... :roll:
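For what it's worth, the rule of thumb quoted above is simply n to the power n; a few lines reproduce the figures in the quote. This is a back-of-the-envelope illustration of the quoted claim, not a real software-engineering cost model.

```python
# Back-of-the-envelope illustration of the quoted rule of thumb: treating the
# cost of juggling n interacting features as roughly n**n units of work.
for n in range(1, 7):
    print(f"{n} feature(s): ~{n ** n:,} units of difficulty")
# prints 1, 4, 27, 256, 3,125, 46,656 -- the 4, 3,125 and 46,656 match the quote
```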
Quote:
For missions where no front lines are marked, just assume that all territory is friendly, or all territory that isn't within X meters of a hostile ground unit is friendly. Even so, my original partial decision tree for bailout decisions shows the sort of work that is necessary to make aircraft behave in a "smart" fashion for just one small aspect of flight. Humans have plenty of experience with "don't do this, it's probably dangerous," so we understand the ideas that friendly territory is better than enemy territory, landing is (usually) better than bailing, and it's (usually) better to crash land or bail out over land than water. We also have the ability to extrapolate from basic principles. Computer AI is like programming a baby. The computer doesn't automatically "know" anything, and has to be "taught" that certain things or behaviors are bad. Even worse, it has no ability to extrapolate and it's typically really poor at certain types of visual pattern recognition that humans take for granted.
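A minimal sketch of the kind of explicit rules the quote describes: a territory check plus one small slice of a bail-out decision tree. Every name, distance, and threshold below is invented for illustration; none of this is the sim's actual code.

```python
from dataclasses import dataclass

# Hypothetical sketch: territory check + a tiny slice of a bail-out decision
# tree. All names, distances, and thresholds are invented for illustration.

FRIENDLY_RADIUS_M = 5000.0   # stand-in for "within X meters of a hostile unit"

@dataclass
class Situation:
    over_water: bool
    dist_to_nearest_hostile_m: float
    engine_running: bool
    controls_working: bool
    altitude_m: float

def territory_is_friendly(s: Situation) -> bool:
    # No front lines marked: treat everything as friendly unless a hostile
    # ground unit is nearby.
    return s.dist_to_nearest_hostile_m > FRIENDLY_RADIUS_M

def choose_action(s: Situation) -> str:
    if s.engine_running and s.controls_working:
        return "fly home and land"              # landing (usually) beats bailing
    if s.altitude_m < 300.0:
        return "crash land straight ahead"      # too low to bail safely
    if s.over_water:
        return "stretch the glide toward land" if s.controls_working else "bail out"
    if territory_is_friendly(s):
        return "crash land" if s.controls_working else "bail out"
    return "glide toward friendly territory, then bail out"
```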
Quote:
http://xkcd.com/1425/
I stand corrected, thanks to all respondents.
Swept away again by a high workload, but I have noted the responses to my questions and agree. Upon further review, watching playbacks of the test combats I record, the AI Ace is not making miracle, magic-bullet shots; as several pointed out, it comes down to convergence, spread, and various factors affecting bullet trajectory, and I have seen the obvious factor I overlooked: these guys take the shot. They are dead-serious, skilled fliers, so the number of shells in the air is notably higher than with the previous AI, which is as it should be. When flying in invulnerable mode, hits on one's aircraft are accompanied by a high-pitched sound indicating the hit. The number of hits is significantly lower than the number of shots fired, which is in keeping with the difficulty of hitting a heavily maneuvering opponent flown with a relatively decent level of skill [in this case, me]. Apologies to TD, and thanks to all who provided the correctives and explanations.
Quote:
Absolutely every little step in every single action. Absolutely every single thing that the AI "anticipates" has to be specifically defined and written. The AI simply will not perform an action if there aren't detailed, specific instructions telling it to do so, no matter how basic it may seem to you and me. AI, like computers, is comprehensively stupid. Right or wrong, it does only exactly what it is told to do, and nothing more. That means the person writing this stuff must preemptively anticipate every possible contingency the AI might ever encounter, write how the AI recognizes any given situation, write how it responds, etc., etc. It's nothing like "just make the AI know what to do". Programming AI doesn't work that way. It only knows what to do if the coder wrote specific and detailed instructions telling it to do so. You can imagine how tedious this can become. Almost excruciating. Trust me. I've tried my hand at programming. It wasn't what I thought it would be. The guys who do this for a living deserve every cent they earn in their profession. The guys doing this for free, well... what can you say?
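A toy sketch of that point, with made-up state fields and rules: the AI's entire "knowledge" is whatever rules someone bothered to write down, and any situation not covered falls through to a bland default.

```python
# Toy illustration of the point above: the AI "knows" only the rules someone
# explicitly wrote. Every field name and rule here is made up.

RULES = [
    # (condition, response) -- each pair hand-written by a programmer
    (lambda s: s["engine_on_fire"],                           "bail out"),
    (lambda s: s["fuel_fraction"] < 0.1,                      "head for the nearest friendly field"),
    (lambda s: s["ammo_remaining"] == 0,                      "disengage and extend away"),
    (lambda s: s["bandit_on_six"] and s["altitude_m"] > 1000, "break hard into the attack"),
]

def decide(state: dict) -> str:
    for condition, response in RULES:
        if condition(state):
            return response
    # Nothing matched: the AI has no idea, so it falls back to a dumb default.
    return "maintain current heading"

print(decide({"engine_on_fire": False, "fuel_fraction": 0.6,
              "ammo_remaining": 150, "bandit_on_six": True,
              "altitude_m": 2500}))   # -> "break hard into the attack"
```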