• schmichael 2 hours ago

    It’s a fun demo, but they never go into buildings, the buildings are all a similar size, the towns have similar layouts, there are numerous visual inconsistencies, and the towns don’t really make sense. It generates stylistically similar boxes, puts them on a grid, and lets you wander the spaces between?

    I know progress happens in incremental steps, but this seems like quite the baby step from other world gen demos unless I’m missing something.

    • thwarted an hour ago

      > they never go into buildings, the buildings are all a similar size, the towns have similar layouts, there are numerous visual inconsistencies, and the towns don’t really make sense

      These AI-generated towns sure do seem to have strict building and civic codes. Everything on a grid, height limits, equal spacing between all buildings. The local historical society really has a tight grip on neighborhood character.

      From the article:

      > It would also be sound, with different areas connected in such a way to allow characters to roam freely without getting stuck.

      Very unrealistic.

      One of the interesting things about mostly-open-world game environments, like GTA or Cyberpunk, is the "designed" messiness and the limits that result in dead ends. You poke at someplace and end up at a locked door (a texture that looks like a door but that you can't interact with), which tells you there's absolutely nothing interesting beyond where you are. No chance of getting stuck in a dead end is boring; when every path leads to something interesting, there's no "exploration".

      • trollbridge an hour ago

        Sounds like the AI accidentally implemented NIMBY-style zoning.

      • jaccola 2 hours ago

        This is potentially a lot more useful in creation pipelines than other demos (e.g. World Labs) if it uses explicit assets rather than a more implicit representation (Gaussians are pretty explicit, but not in the form we're used to working with in games, etc.).

        I do think Meta has the tech to easily match other radiance-field-based generation methods; they publish many foundational papers in this space and have Hyperscape.

        So I'd view this as an interesting orthogonal direction to explore!

        • schmichael an hour ago

          Thanks! That’s some nuance I absolutely missed.

        • serf 2 hours ago

          > It’s a fun demo, but they never go into buildings, the buildings are all a similar size, the towns have similar layouts, there are numerous visual inconsistencies, and the towns don’t really make sense.

          That's 95% of existing video games. How many doors actually work in a game like Cyberpunk?

          On a different note, when do we mere mortals get to play with a worldgen engine? Google/Meta/Tencent have shown them off for a while, but without any real feasible way for a nobody to partake; are they that far away from actually being good?

          • brnaftr361 an hour ago

            I would think the argument for this is that it would enable more advanced environments.

            There are also plenty of games with fully explorable environments; I think it's more of a scale-and-utility consideration. I can't think of what use I'd have for exploring an office complex in GTA other than to hear Rockstar's parodic office banter. But Morrowind had a reason for its interiors to exist in most contexts.

            Other games have intrinsically explorable interiors, like NMS and Enshrouded. Elden Ring was pretty open in this regard as well. And Zelda. I'm sure there are many others. TES doesn't fall into this category because of the way its interiors are structured: a door teleports you to a separate interior level, ostensibly to save on poly budget. Again, with respect to scale, that's an important consideration in terms of both meaning and effort in context.

            This doesn't seem to be doing much to build on that; I think we could already procedurally scatter empty shell buildings with low/mid assets with a pretty decent degree of efficiency?
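
            Roughly what I have in mind, as a toy sketch in Python (the asset names, block size, and jitter numbers are all made up for illustration, not anything from the paper):

                import random

                # Drop "shell" footprints on a jittered grid and leave some lots empty
                # for streets and plazas. Assets and dimensions are placeholders.
                ASSETS = ["shop_shell", "house_shell", "tower_shell"]

                def scatter_town(blocks_x=6, blocks_y=6, block=12.0, jitter=2.0, density=0.8, seed=42):
                    rng = random.Random(seed)
                    placements = []
                    for ix in range(blocks_x):
                        for iy in range(blocks_y):
                            if rng.random() > density:  # leave some lots empty
                                continue
                            x = ix * block + rng.uniform(-jitter, jitter)
                            y = iy * block + rng.uniform(-jitter, jitter)
                            placements.append({
                                "asset": rng.choice(ASSETS),
                                "pos": (round(x, 2), round(y, 2)),
                                # snap rotation so facades roughly face the street grid
                                "rot": rng.choice([0, 90, 180, 270]),
                            })
                    return placements

                for p in scatter_town()[:5]:
                    print(p)

            Not saying that's what this model does internally, just that the "boxes on a grid" layout itself is cheap to get without any ML.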

            • jaccola an hour ago

              There are a bunch of different approaches, and many are very expensive to run. You can play with the World Labs one; their approach is cheap to explore once generated (vs. an approach that generates frame by frame).

              The quality is currently not great, and they are very hard to steer or work with in any meaningful way. You will see companies reusing the same demo scenes over and over because those are the ones that looked cool and worked well.

          • ranyume 2 hours ago

            I'd call this 3DAssetGen. It's not a world model and doesn't generate a world at all. Standard sweat-and-blood-powered world building puts this to shame, as does even low-effort world building with canned assets (see RPG Maker games).

            • yannyu a few seconds ago

              It seems strange that so many of these have cropped up. I wonder if it's a misunderstanding of the world model concept or just the easiest next step from AI-generated text, images, music, video, and voices.

              • wkat4242 an hour ago

                It's not really a world, no. By the looks of it, it generates only a small square, and a world built out of squares will be annoying.

                Still, it's a first effort. I do think AI can really help with world creation, which I think is one of the biggest barriers to the metaverse. Just look at how much time and money it costs to create a small island world like GTA's...

                • ranyume an hour ago

                  Last time I checked, the metaverse was all about people collaborating to build a shared world, and we already have that. Examples include Minecraft and VRChat, both of which are very popular metaverses. I don't see how the lack of bot-generated content is a barrier.

                  Then, let's say people are allowed to participate in a metaverse in which they have the ability to generate content with prompts. Does this mean they're only able to build things the model allows or supports? That seems very limiting for a metaverse.

              • meander_water an hour ago

                It's funny: I clicked the link to the demo, but it 404s. Then I tried googling WorldGen, and it turns out someone else built the same thing in May and called it WorldGen as well. It looks like it does better at realistic 3D scenes than this one.

                [0] https://worldgen.github.io/index.html

                • jsheard an hour ago

                  That's pretty far from the same thing; their technique is a 2D image in a trenchcoat. It instantly falls apart if you move more than a foot or so from the original camera position.

                • boriskourt 2 hours ago

                  The paper is quite good [0]; there are some interesting details on tackling individual meshes.

                  (Couldn't clean up the link at all, sorry.)

                  [0]: https://scontent-lhr6-2.xx.fbcdn.net/v/t39.2365-6/586830145_...

                  • nitwit005 44 minutes ago

                    I can see this working as a randomly generated map for some quick game, the way the Worms games did it in 2D.

                    But having everything sit so rigidly on a grid kind of ruins the feel. It's rare for every building to be isolated like that. I'm guessing they had trouble producing neighboring buildings that looked like they could logically share a common wall or alleyway.

                    • ninetyninenine 5 minutes ago

                      Can’t wait until entire triple-A games are generated by a prompt. Hopefully in my lifetime.

                      • Fraterkes 2 hours ago

                        Having the technical know-how to have an AI generate 3D models, but then generatively compositing those assets into environments in a way that would have seemed overly simplistic to gamedevs three decades ago…

                        • willyxdjazz an hour ago

                          It's funny: I don't know if I see a use for it, and that surprises me. Just as procedural maps bore me, I feel this will be similar in any use case I can think of. What I like is the perceived care behind every action. After the initial "wow" at the care put into that research, I don't think it will end up being a "wow" that scales; I don't know if I'm making myself clear.

                          • galleywest200 an hour ago

                            I loathe how meta.com grays out my back button in my browser. Stop trying to force me to stay; it's obnoxious.

                            • mrdependable 38 minutes ago

                              These look a lot like World of Warcraft. I wonder how much of their training data they got from it.

                              • philipwhiuk an hour ago

                                It's definitely a step forward from that 'Minecraft world' gen tech demo that had no persistence of vision.

                                I can see it being useful for solo Unity developers with a concept and limited art ability. Currently they would likely be limited to pixel games.

                                • tritip 2 hours ago

                                  Every environment appears to be a miniature golf course version of reality. Was this a deliberate choice?

                                  • huevosabio 2 hours ago

                                    This is cool, but it seems much more like 3D asset generation than scene generation like World Labs'.

                                    • echelon an hour ago

                                      World Labs' Marble creates a Gaussian splat scene. It's a totally different technology.

                                    • jsheard 2 hours ago

                                      > fully navigable, interactive 3D worlds that you can actually walk around and explore.

                                      You can explore, but is there a single interesting thing to find?

                                      https://www.challies.com/articles/no-mans-sky-and-10000-bowl...

                                      • copx 2 hours ago

                                        First steps towards the Holodeck.

                                        • DeathArrow an hour ago

                                          It's weird; the houses are almost all tall and too narrow.

                                          • Alex04 an hour ago

                                            Thanks for the info.

                                            • lloydatkinson 2 hours ago

                                              My first thought was the comment in the thread from the other day about Zork and hooking up an AI image generator to that.

                                              But it looks like WorldGen has that slightly soulless art style they used for that Meta Zuckverse VR thing they tried for a while.

                                              • serf 2 hours ago

                                                > My first thought was the comment in the thread from the other day about Zork and hooking up an AI image generator to that.

                                                I did this in the early GPT days with 'Tales of Maj'Eyal' and, to a lesser extent, RimWorld.

                                                It works great for games that have huge compendiums of world lore, bestiaries, etc.
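
                                                Roughly: pull an entry out of the game's own compendium and turn it into an image prompt. A toy Python sketch of the idea (the bestiary text and the generate_image() stub are placeholders, not any real game data or API):

                                                    # Build an image prompt from in-game lore. Everything here is
                                                    # invented for illustration.
                                                    BESTIARY = {
                                                        "sandworm": "a burrowing desert predator with chitinous plates",
                                                        "cellar rat": "an oversized vermin with mangy fur and glowing eyes",
                                                    }

                                                    def lore_to_prompt(creature, scene):
                                                        return f"{creature}, {BESTIARY[creature]}, in {scene}, painterly fantasy illustration"

                                                    def generate_image(prompt):
                                                        # Stand-in for whichever image model you hook up.
                                                        print(f"[would send prompt] {prompt}")

                                                    generate_image(lore_to_prompt("sandworm", "a scorched canyon at dusk"))

                                                The more lore the game ships with, the less you have to invent in the prompt, which is why lore-heavy games work so well for this.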

                                                • anthk an hour ago

                                                  With a roguelike you would just map tiles to 3D terrain and objects.
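
                                                  Something like this toy sketch: each map tile becomes an asset placed on a 3D grid (the tile legend and asset names are invented for illustration):

                                                      # Map ASCII tiles to 3D asset placements on a grid. Legend and
                                                      # asset names are made up.
                                                      TILE_TO_ASSET = {
                                                          "#": "wall_block",
                                                          ".": "floor_tile",
                                                          "+": "door_frame",
                                                          "T": "tree_mesh",
                                                          "~": "water_plane",
                                                      }

                                                      ASCII_MAP = [
                                                          "#####",
                                                          "#..T#",
                                                          "#.~.#",
                                                          "##+##",
                                                      ]

                                                      def tiles_to_scene(rows, cell=2.0):
                                                          """Return (asset, (x, y, z)) placements; y is up, one asset per tile."""
                                                          scene = []
                                                          for z, row in enumerate(rows):
                                                              for x, tile in enumerate(row):
                                                                  asset = TILE_TO_ASSET.get(tile)
                                                                  if asset:
                                                                      scene.append((asset, (x * cell, 0.0, z * cell)))
                                                          return scene

                                                      for entry in tiles_to_scene(ASCII_MAP)[:6]:
                                                          print(entry)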

                                                • anthk an hour ago

                                                  Instead of Zork, I would try All Things Devours or Spiritwrak. They have been libre games forever and are written in Inform6, with all the source code available; the compiler and the English library are free too, and it's a really structured language that maps literal in-game objects to programming (OOP) objects.