You have to remember that what is easy for a machine isn't going to correlate with what is easy for us humans. It's not just the hands but the tiniest of things: strokes, lighting, reflections, consistency, and all that. Can I turn my friend into a convincing werewolf? Yes. Can I turn my cat into a human, or into Wonder Woman? No. The system isn't a "fancy copier," but it is a compression algorithm, and the aforementioned tasks were only possible because of lots of work training LoRAs, textual inversions, control nets, and so on (you could seriously improve GANs, VAEs, hell, even Boltzmann Machines would probably do pretty well were any of them given the same research investment that diffusion has received; GANs already come close, with nuances like having an order of magnitude fewer parameters).

But let's look at math: can I consistently add numbers? No. The problem is that in math, all those tiny intricate details matter. Not only that, they matter at every single step. The thing here is that these are still pattern recognition machines, and you can't really derive all of math from probability distributions (or at least not cleanly, and I'm still not convinced you can at all). The thing is that for math to work in AI, we have to address the elephant in the room: math itself. We have to address the axioms in the room that we're operating under. How we move away from a number of unmentioned axioms remains a large open problem in AI research.

How do we make it so data are not distributional? How do we move on from machines operating on manifolds? These are questions that don't get a serious enough conversation, especially within the community. Sure, maybe transformer circuits can learn some addition by learning how to do FFTs and adding in FFT space, but you're not going to get to Abstract Algebra that way. Ideally, the AI could solve problems that have no algorithms, pun intended.

I am familiar with strategy stealing and things like tit-for-tat. But even the wiki article you linked suggests that Go is not a symmetric game, which is the requisite condition for strategy stealing to work (that was my underlying belief, albeit (very) poorly worded). The article suggests that ladder and ko fights create an asymmetry, as does control of the center, not to mention komi explicitly making the game asymmetric.

The first player does not always have the advantage. Nim is the best example: depending on the setup, it is either a first-player win (when the nim-sum of the heap sizes is nonzero) or a second-player win (when it is zero). My understanding is also that chess (another perfect-information turn-based game) has not been shown solved, nor has a first-player advantage been proven (though in practice it looks that way). So I get the argument, I just don't buy it. I would be inclined to lean in that direction, but it's a tough claim theoretically and probably not meaningful in practice (unless a generalized strategy such as strategy stealing can be employed; otherwise a lookup table is impractical, since it would contain more bits than there are atoms in the universe even for 100-move games). I think we have to consider far more than strategy stealing, which isn't even a strategy that generalizes across two-player perfect-information turn-based games. Oh, and yes, in chess going first is definitely an advantage. For example, pawn to a4 is a really bad opener precisely because it comes so close to throwing that advantage away by effectively passing your first turn.
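The Nim claim above is concrete enough to check: under normal play, the first player wins exactly when the nim-sum (bitwise XOR) of the heap sizes is nonzero. A minimal sketch in Python (function names are mine, just for illustration):

```python
from functools import reduce
from operator import xor

def first_player_wins(heaps):
    # Normal-play Nim: the first player wins iff the nim-sum
    # (XOR of all heap sizes) is nonzero.
    return reduce(xor, heaps, 0) != 0

def winning_move(heaps):
    # When the nim-sum s is nonzero, some heap h satisfies h ^ s < h;
    # reducing it to h ^ s leaves the opponent a zero nim-sum.
    s = reduce(xor, heaps, 0)
    if s == 0:
        return None  # losing position: no winning move exists
    for i, h in enumerate(heaps):
        target = h ^ s
        if target < h:
            return i, h - target  # remove this many from heap i
    return None

print(first_player_wins([1, 2, 3]))  # False: 1 ^ 2 ^ 3 == 0, second player wins
print(first_player_wins([3, 4, 5]))  # True:  3 ^ 4 ^ 5 == 2
print(winning_move([3, 4, 5]))       # (0, 2): take 2 from heap 0, leaving [1, 4, 5]
```

The classic [1, 2, 3] setup is a second-player win, while [3, 4, 5] hands the win to whoever moves first, which is exactly the "setup decides who wins" point made above.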
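The lookup-table point is easy to sanity-check with back-of-the-envelope arithmetic. Assuming (hypothetically) an average branching factor of about 35 legal moves per position, a figure often quoted for chess, a table indexed by every 100-move line dwarfs the commonly cited ~10^80 atoms in the observable universe:

```python
import math

branching = 35                 # assumed average legal moves per position
depth = 100                    # length of each line, kept deliberately simple
lines = branching ** depth     # distinct 100-move lines to index

atoms_in_universe = 10 ** 80   # common order-of-magnitude estimate

# ~10^154 lines: roughly 74 orders of magnitude more than atoms.
print(round(math.log10(lines)))   # 154
print(lines > atoms_in_universe)  # True
```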
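The "addition in FFT space" remark refers to a real mechanism: circuits found in small transformers trained on modular addition behave like Fourier-space adders. The underlying trick can be sketched directly with the standard convolution theorem: one-hot encode two numbers, multiply their DFTs pointwise (a circular convolution), and the peak of the inverse transform lands on the sum mod n. A pure-stdlib sketch with a naive O(n²) DFT (all names are mine):

```python
import cmath

def dft(v):
    # Naive discrete Fourier transform, fine for small n.
    n = len(v)
    return [sum(v[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(v):
    n = len(v)
    return [sum(v[k] * cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)) / n
            for j in range(n)]

def add_mod_n(x, y, n=16):
    # One-hot encode the operands; pointwise product in frequency space
    # is circular convolution in index space, whose peak is (x + y) % n.
    a = [1.0 if i == x else 0.0 for i in range(n)]
    b = [1.0 if i == y else 0.0 for i in range(n)]
    c = idft([p * q for p, q in zip(dft(a), dft(b))])
    return max(range(n), key=lambda i: c[i].real)

print(add_mod_n(5, 7))  # 12
print(add_mod_n(9, 9))  # 2, since (9 + 9) % 16 == 2
```

Note this only ever gives addition mod n, which is rather the point above: a frequency-space trick buys you arithmetic on a fixed cyclic group, not the abstractions on top of it.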