It is really an all-round game for "proving" the basics of programming and, more broadly, of planning in general. One learns the benefits of specialization over general-purpose programs, and so on. As a programmer once said, "the best code is the code that does what it needs with fewer lines and clear instructions". The larger the decision trees, the more time is needed to adjust them; the larger the code base, the more time is needed to identify and fix problems. Just as real-life programs may have to handle unwanted input from users, in Gladiabots programs may have to handle unexpected enemy behaviors, so one is pushed toward test-driven development (look it up) and heavy use of replays (a sort of debugging), or of the debugger itself, to see what went wrong.

Furthermore, due to the simple editor, one is taught about code maintenance and technical debt. The game is also great for showing the importance of testing and debugging.

The asynchronous multiplayer could appear confusing at first, but it is great because one can play with people from around the world without being connected at the same time. In practice, to create a challenge one selects a map and an AI and uploads the challenge to the server. Another player, at whatever time, can decide to accept the challenge by uploading his own AI, and the server then decides the result (players can watch a replay to see how it went). At the start it is off-putting that few "servers", or challenges, seem available, but then one realizes that there is a lot of activity.

The game uses Elo ratings, so one can pick opponents (it works differently when one gets picked after posting challenges). So far it is not affected by pay-to-win schemes (pay-to-win is not inherently bad, but it has to be balanced so that everyone can find fair challenges). The tutorial is clear, and the ideas of training against oneself, or of storing different ideas in different "decision trees", are great as well.
The game is really engaging for people who like to create small "behave like this" programs and then test them. In general, every time something related to programming comes up, some people post very advanced insights about what could be done; I hope to find the time to collect them here. There are plenty of interesting posts and articles about the game online.

On the Nvidia Shield K1 I have had no slowdowns so far. On the Lenovo A816 the game stutters a bit when there are 8+ bots with non-verbose AIs (15+ nodes) and one tries to run the simulation at maximum speed; I suppose the embedded GPU is the limit in this case. The game likely uses "only" one core to produce the 3D models and compute their consequences in the game, so having a powerful single core that is free (with background tasks running on another core) helps. I do not know how optimized it is, since it is in alpha (a very early release).

The game is about situations (maps) in which a certain number of bots from two teams fight each other. The bots follow an AI, well, not really an artificial intelligence but a basic sort of one: a decision tree (normally used in classification algorithms). The player has to develop a proper decision tree to solve a situation, against the computer, against himself, or against other players. Like "what do I do given this situation?". This decision tree (DT) can be translated into nested "if then" blocks that decide how the bot behaves according to the situation.

I am trying to launch a community-based (markdown-based) wiki here:

The game is good enough to be played like chess, for hundreds of years. The amount of tiny but crucial improvements that can be made is likely massive. Surely there were and there are very good AIs out there, but it is very likely that, among all the possible good AIs, those discovered so far just scratch the surface. This is also valid for statements like "the game is solved".
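To make the "decision tree as nested if/then blocks" idea concrete, here is a minimal sketch in Python. The condition and action names (`sees_enemy`, `shield_low`, and so on) are invented for illustration; they are not the game's actual API, just a plausible stand-in for the signals a bot gets from the engine.

```python
def decide(bot):
    """Walk the tree top-down and return the first matching action."""
    if bot.sees_enemy():
        if bot.shield_low():
            return "retreat"      # protect a damaged bot first
        return "attack"           # otherwise engage
    if bot.sees_resource():
        return "collect"
    return "patrol"               # default branch: nothing in sight


class Bot:
    # Minimal stand-in so the sketch runs; in the game these
    # signals would come from the engine, not a constructor.
    def __init__(self, enemy, shield, resource):
        self._enemy, self._shield, self._resource = enemy, shield, resource

    def sees_enemy(self):
        return self._enemy

    def shield_low(self):
        return self._shield < 0.3

    def sees_resource(self):
        return self._resource


print(decide(Bot(enemy=True, shield=0.2, resource=False)))   # retreat
print(decide(Bot(enemy=False, shield=1.0, resource=True)))   # collect
```

The top-down, first-match-wins evaluation is the important part: reordering the branches changes the bot's priorities, which is exactly what tweaking a tree in the editor feels like.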
I wrote this only to consider the search space of a possible "self-configuring" AI, especially approaches based on machine learning with loose initial heuristics versus those with stronger, fixed heuristics given by the programmer. Two trees can be vastly different while still having the same small number of nodes (say, 3). Focusing only on trees of 100 nodes, each with 5 sensible configurations, we get an upper bound of 5^100, about 7.9 * 10^69 possible configurations (whether many of those make sense is another story). And this is likely a fraction of the real number of possible configurations, considering all the options for conditions and actions and all the options for arrangements and connections. For comparison, consider a game of chess where at each turn a player considers 5 sensible moves, with games normally lasting around 50 moves (100 half-moves): one gets the same upper bound as for Gladiabots.
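The counting argument above can be checked in a couple of lines. This is only the back-of-the-envelope bound from the text (5 choices at each of 100 decision points), not a claim about the game's real configuration count:

```python
import math

# 100 nodes, each with 5 sensible configurations -> 5**100 trees.
# A chess game of ~100 half-moves with ~5 sensible moves per
# half-move has the same shape: 5 choices at 100 decision points.
nodes, options = 100, 5
tree_bound = options ** nodes                    # = 5**100

print(f"~{tree_bound:.1e} configurations")       # ~7.9e+69
print(f"log10 = {math.log10(tree_bound):.2f}")   # 69.90
```

So both estimates land around 10^70, which is why the chess comparison gives the same ballpark.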