Official Fulqrum Publishing forum

Official Fulqrum Publishing forum (http://forum.fulqrumpublishing.com/index.php)
-   Daidalos Team discussions (http://forum.fulqrumpublishing.com/forumdisplay.php?f=202)
-   -   4.13 development update discussion and feedback (http://forum.fulqrumpublishing.com/showthread.php?t=40958)

Igo kyu 11-13-2014 07:09 PM

Quote:

Originally Posted by sniperton (Post 707359)
But back to my question: suppressing radio communication for the AI, does it affect squad's performance?

You were just answered, and you responded to that answer. No, there is no radio; the computer tells the AI what is happening, because the AI is the computer. Does your left hand know what your right hand is doing? That's the AI/computer's view of the situation.

Pursuivant 11-13-2014 08:49 PM

Quote:

Originally Posted by ElAurens (Post 707358)
Umm, guys, the AI are part of the game's operating system (game engine).

This is a very good point. But some tasks are easier to program than others. Calculating bombing, firing and navigation solutions is very easy. Programming lines of sight is fairly easy. Programming basic flight maneuvers is relatively easy. Getting even a single plane to behave "intelligently" when flying defensively or offensively is much trickier. Getting multiple planes to coordinate offensively or defensively is very tricky.
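To show why the "very easy" items really are easy: a basic firing solution against a constant-velocity target, for example, boils down to solving one quadratic. This is only a generic sketch, nothing from the game's code; the function name and units are invented for illustration.

Code:

import math

def lead_point(shooter_pos, target_pos, target_vel, shell_speed):
    # Where to aim so a shell fired now meets a target flying at constant
    # velocity: solve |r + v*t| = shell_speed * t for the smallest positive t.
    r = [tp - sp for sp, tp in zip(shooter_pos, target_pos)]     # relative position
    a = sum(v * v for v in target_vel) - shell_speed ** 2
    b = 2 * sum(rc * v for rc, v in zip(r, target_vel))
    c = sum(rc * rc for rc in r)
    if abs(a) < 1e-9:                                            # target about as fast as the shell
        roots = [-c / b] if abs(b) > 1e-9 else []
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                                          # no intercept possible
        sq = math.sqrt(disc)
        roots = [(-b - sq) / (2 * a), (-b + sq) / (2 * a)]
    times = [t for t in roots if t > 0]
    if not times:
        return None
    t = min(times)
    return [p + v * t for p, v in zip(target_pos, target_vel)]   # predicted aim point

# target 500 m ahead, crossing at 100 m/s, shells at 800 m/s
print(lead_point((0, 0, 0), (500, 0, 0), (0, 100, 0), 800))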

"Nerfing" optimum performance can also be tricky, since you basically have to "teach" the computer how to behave like a less than fully competent human if you want realism.

Mimicking human performance limits is also a bit tricky, except when you're dealing with physiological limits which can be quantified, like g forces or limits of vision.

But, the extent to which we anthropomorphize AI behavior is a measure of the AI programmer's success. If we can temporarily forget that we're playing against a machine, then for a moment that programming passes the Turing Test!

sniperton 11-13-2014 10:01 PM

Quote:

Originally Posted by Igo kyu (Post 707366)
You were just answered, and you responded to that answer. No, there is no radio; the computer tells the AI what is happening, because the AI is the computer. Does your left hand know what your right hand is doing? That's the AI/computer's view of the situation.

I expected simply 'yes' or 'no'. I suppose it's a 'no'. Thanks. English is not my mother tongue. Sorry.

IceFire 11-14-2014 12:49 AM

Quote:

Originally Posted by Pursuivant (Post 707332)
Currently "Artificial Intelligence" is still an oxymoron.

The good news is that air combat is a very limited sphere of activity, closely bound to the laws of physics, and further bound by historical doctrines and the limits of human physiology. With those limits in mind, AI can often be abstracted into decision trees and flow charts.

For example, currently damaged enemy bombers often behave "stupidly" when choosing whether the crew bails out or crash lands/ditches. A simple decision tree or flow chart could be used to make them behave in a much more realistic fashion. For example:

Can I hold altitude? N/Y > Am I over friendly territory? Y/N > Can I get to an airfield long enough to land the plane? Y/N > Is there open ground where I can crash land? Y/N > Can I reach any water within 300 m of land? Y/N? > Fly towards land > When within 300 m of land, turn parallel to the wind and ditch.

Completely agree. AI is still kind of an oxymoron. It's more a decision tree that the AI follows, and it's really only as good as the assumptions the programmer has made.

More assumptions can be programmed, like what you have up there. That's fairly "easy" to check for, I would imagine... although I'm not really sure if the AI would know whether it's in friendly or enemy territory, or if that kind of thing is passed to the AI at all. Would be interesting!
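Reading that chain as a nested if/else, a minimal sketch could look something like the code below. The inputs are plain booleans standing in for checks the engine would have to supply; the names are made up and have nothing to do with the actual game code.

Code:

def damaged_bomber_plan(holds_altitude, over_friendly_territory,
                        airfield_in_range, open_ground_nearby,
                        water_within_300m_of_land):
    # One possible reading of the bail-out / crash-land / ditch decision tree.
    if not holds_altitude:
        return "bail out"
    if over_friendly_territory:
        if airfield_in_range:
            return "fly to the airfield and land"
        if open_ground_nearby:
            return "crash-land on open ground"
    if water_within_300m_of_land:
        return "turn parallel to the wind and ditch"
    return "fly towards land, then ditch within 300 m of the shore"

# a crippled bomber over enemy territory but close to the coast:
print(damaged_bomber_plan(True, False, False, False, True))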

Igo kyu 11-14-2014 01:53 AM

Quote:

Originally Posted by sniperton (Post 707369)
I expected simply 'yes' or 'no'. I suppose it's a 'no'. Thanks. English is not my mother tongue. Sorry.

Sorry if I seemed harsh.

Programming is a difficult thing. What people don't seem to understand is that if doing one thing is x amount of difficulty, then doing two together is something like four (two squared) times x, and doing five together is about 3,125 (five to the power of five) times x. And when someone says "just one more thing" while a number of things are already being done, it can push the difficulty from 3,125 times x up to 46,656 (six to the power of six) times x.
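Spelled out, the rule of thumb I'm using is roughly "n interacting things cost about n to the power n times as much":

Code:

# rough illustration of the "n things together ~ n**n difficulty" rule of thumb
for n in range(1, 7):
    print(n, "things:", n ** n, "x")   # ... 5 things: 3125 x, 6 things: 46656 x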

sniperton 11-14-2014 09:56 AM

No problem, mate. The funny thing is that I have some programming experience, and I frequently work together with programmers, so the difficulty scaling you mention is well known to me. Unfortunately, whenever I use a piece of software, I involuntarily try to guess 'what's behind the curtain' / 'what's in the black box', and my badly formulated questions can easily be misunderstood as overpretentiousness... :roll:

Pursuivant 11-14-2014 08:55 PM

Quote:

Originally Posted by IceFire (Post 707371)
More assumptions can be programmed, like what you have up there. That's fairly "easy" to check for, I would imagine... although I'm not really sure if the AI would know whether it's in friendly or enemy territory, or if that kind of thing is passed to the AI at all. Would be interesting!

I don't know if front markers are passed to the AI. But, given that they're points marked on a map and the game engine already keeps track of aircraft position and vector, it doesn't seem like that big a problem to add them in.

For missions where no front lines are marked, just assume that all territory is friendly, or all territory that isn't within X meters of a hostile ground unit is friendly.
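As a purely illustrative sketch of that fallback rule (the names and the 5 km radius are invented here, and this is nothing like the actual engine interface):

Code:

import math

def is_friendly_territory(position, hostile_ground_units, radius_m=5000):
    # Treat a map point as friendly unless it lies within radius_m of a known
    # hostile ground unit; positions are (x, y) map coordinates in metres.
    return all(math.dist(position, unit) >= radius_m
               for unit in hostile_ground_units)

# no hostiles within 5 km of this point, so it counts as friendly
print(is_friendly_territory((1000.0, 2000.0), [(9000.0, 9000.0)]))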

Even so, my original partial decision tree for bailout decisions shows the sort of work that is necessary to make aircraft behave in a "smart" fashion for just one small aspect of flight.

Humans have plenty of experience with "don't do this, it's probably dangerous," so we understand the ideas that friendly territory is better than enemy territory, landing is (usually) better than bailing, and it's (usually) better to crash land or bail out over land than water. We also have the ability to extrapolate from basic principles.

Programming computer AI is like teaching a baby. The computer doesn't automatically "know" anything and has to be "taught" that certain things or behaviors are bad. Even worse, it has no ability to extrapolate, and it's typically really poor at certain types of visual pattern recognition that humans take for granted.

Pursuivant 11-14-2014 08:58 PM

Quote:

Originally Posted by Igo kyu (Post 707373)
Programming is a difficult thing. What people don't seem to understand is that if doing one thing is x amount of difficulty, then doing two together is something like four (two squared) times x, and doing five together is about 3,125 (five to the power of five) times x

And then there are programming tasks that non-programmers think should be incredibly easy but which are actually incredibly hard:

http://xkcd.com/1425/

Pugo3 11-14-2014 08:59 PM

Stand corrected, thanks to all respondents.
 
Swept away again by a high workload, but I have noted the responses to my questions and I agree. Upon further review, watching playbacks of the test combats I record, the AI Ace is not making miracle, magic-bullet shots. As several pointed out, it comes down to convergence, spread, the various factors affecting bullet trajectory, and so on, and I now see the obvious factor I overlooked: these guys take the shot. They're dead-serious, skilled fliers, so the number of shells in the air is notably higher than with the previous AI, which is as it should be.

When flying in invulnerable mode, hits on one's aircraft are accompanied by a high-pitched sound indicating the hit. The number of hits is significantly lower than the number of shots fired, which is in keeping with the difficulty of hitting a heavily maneuvering opponent flown with a relatively decent level of skill [in this case, me]. Apologies to TD, and thanks to all who provided the correctives and explanations.

Treetop64 11-15-2014 04:42 AM

Quote:

Originally Posted by Pursuivant (Post 707385)
And then there are programming tasks that non-programmers think should be incredibly easy but which are actually incredibly hard.

It is lost on most that absolutely every single little thing the AI ever does in its environment, potentially or otherwise, has to be specifically written down, without error, in the code. This is quite literally the case.

Absolutely every little step in every single action.

Absolutely every single thing that the AI "anticipates" has to be specifically defined and written.

The AI simply will not perform an action if there aren't detailed, specific instructions telling it to do so, no matter how basic it may seem to you and me.

AI, like computers, is comprehensively stupid. Right or wrong, it does only exactly what it is told to do, and nothing more. That means the person writing this stuff must preemptively anticipate every possible contingency the AI might ever encounter, write how the AI recognizes any given situation, write how it responds, etc., etc...

It's nothing like "just make the AI know what to do". Programming AI doesn't work that way. It only knows what to do if the coder wrote in specific and detailed instructions telling it to do so.

You can imagine how tedious this can become. Almost excruciating.

Trust me. I've tried my hand at programming. It wasn't what I thought it would be. The guys that do this for a living deserve every cent they earn in their profession. The guys doing this for free, well... What can you say?

