Update: If you want to find out more about the meeting, Gamasutra published a summary of the meeting, "But Seriously Folks: Austin Game Developers Panel", written by John Henderson.
Summary of my own thoughts: Improved NPC AI could make characters in games more believable and entertaining, and increase player empathy and emotional identification with the game characters.
My thoughts: I attended the Austin Game Developers meeting last night (March 10th) and caught up with some old friends who are now at a variety of different companies around Austin. The meeting featured a round table discussion entitled "Games as Art: Does it matter?" as a follow-up to Roger Ebert's recent comments about games.
Being more on the technology side of game development (though always interested in design and sticking my nose into it), I've been thinking for a long time about what technological developments could make games more immersive and involving to a larger audience, in ways that make them feel more alive. These same outcomes could also help games be seen as art rather than just mass market entertainment (in the same way that some movies and books can be art or mass market entertainment, or in rare exceptions, both).
I think NPC AI has big potential for improvement. Though it is a really hard problem that shares much in common with traditional AI, the potential for NPCs in games to interact with players in more non-scripted, plausible ways is huge. Imagine if, using speech recognition or a keyboard, you could interact with an NPC through free-form speech instead of just clicking a few scripted responses the game designer or writer assigned to the NPC.
The NPCs would then respond and react in plausible, dynamic ways based on game world state, to such a level that they could not possibly be fully scripted to behave in that manner. NPCs could show emotions and change emotional state depending on changes in the game, including player interactions and game world changes that the player did not immediately cause during the interaction with the NPC (like being upset after hearing that a bad guy stole a beloved piece of art on the other side of town).
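To make the idea concrete, here is a minimal sketch of that kind of event-driven emotional state. Everything in it is hypothetical (the `NPC` class, the `art_theft` topic, the mood scale are all made up for illustration): world events carry a topic and an impact, and each NPC shifts its mood only for topics it cares about, so even events the player never witnessed can color a later conversation.

```python
from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    # Mood on a simple -1.0 (distressed) .. +1.0 (happy) scale.
    mood: float = 0.0
    # Topics this NPC cares about, with a weight for how strongly.
    interests: dict = field(default_factory=dict)

    def on_event(self, topic: str, impact: float) -> None:
        """React to a world event; uninteresting topics are ignored."""
        weight = self.interests.get(topic, 0.0)
        self.mood = max(-1.0, min(1.0, self.mood + weight * impact))

    def greeting(self) -> str:
        """Pick dialogue based on current emotional state."""
        if self.mood < -0.3:
            return "Forgive me, I'm in no state to haggle today."
        if self.mood > 0.3:
            return "A fine day! What can I get you?"
        return "Welcome. Looking for anything in particular?"

merchant = NPC("Aldous", interests={"art_theft": 1.0, "fruit_prices": 0.2})

# An event the player did not cause: art stolen across town.
merchant.on_event("art_theft", -0.8)
print(merchant.greeting())  # the merchant is now upset
```

Real believability would of course need far richer state than a single scalar, but even this shape shows how unscripted reactions can fall out of world events rather than dialogue trees.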
Speech recognition technology seems really close, feature-wise, to the point where it could be used for such a game feature, but I'm guessing the CPU utilization (in terms of percentage of CPU time needed to do it in real time) might still be too high to run at the same time as rendering, game state logic and updates, audio, and all the other tasks a 3D game has to do continuously.
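One mitigation for that CPU concern is keeping recognition off the main game loop entirely. The sketch below (the `recognize` function is a placeholder standing in for a real speech-to-text engine) pushes audio chunks into a queue, transcribes them on a worker thread, and lets the game loop poll for finished text without ever blocking a frame.

```python
import queue
import threading

def recognize(audio_chunk: bytes) -> str:
    # Placeholder for an expensive speech-to-text call.
    return audio_chunk.decode("ascii")

audio_in: "queue.Queue" = queue.Queue()   # game loop -> worker
text_out: "queue.Queue" = queue.Queue()   # worker -> game loop

def recognition_worker() -> None:
    """Drain audio chunks and transcribe them off the main thread."""
    while True:
        chunk = audio_in.get()
        if chunk is None:  # sentinel to shut down
            break
        text_out.put(recognize(chunk))

worker = threading.Thread(target=recognition_worker, daemon=True)
worker.start()

# Game loop side: submit captured audio, then poll results each frame
# with text_out.get_nowait() instead of blocking.
audio_in.put(b"hello merchant")
audio_in.put(None)
worker.join()
print(text_out.get_nowait())  # -> "hello merchant"
```

The trade-off is latency rather than frame rate: the player's words arrive a beat late, which is far less jarring than a stuttering renderer.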
Understanding the input from speech is another problem though. AI that can understand free-form speech (in the form of text), manage the informational and emotional state of an NPC in a plausible way, and react and speak to players in a believable, in-context manner seems far off.
The problem seems so huge that I highly doubt it will be solved in a game development context, due to complexity and cost. It will probably remain a hard problem for continued academic research, then be licensed and developed into a middleware product by a tool provider for developers to integrate into games.
No matter how hard the NPC AI problem is to solve, I'm still looking forward to the time when, playing an RPG, I can walk up to a merchant in town and strike up a casual free-form conversation about the King's strange behavior, recently odd omens, and the price of fruit, and watch him react in plausible ways as I accidentally insult him and then attempt to smooth it over diplomatically.