Eskating cyclist, gamer and enjoyer of anime. Probably an artist. Also I code sometimes, pretty much just to mod titanfall 2 tho.
Introverted, yet I enjoy discussion to a fault.
Also it’s apparently already being completely destroyed by a cheater.
Wait until you find out about the child labour stuff.
There may not be sweatshops in the developed world, but you can bet your ass a lot of Roblox “dev studios” are really just one or two adults, exploiting a bunch of kids for free labour via Discord.
Oh, and then there’s the fully integrated speculative market for assets and cosmetics, where ten-year-olds gain and lose thousands of real dollars in robux.
The ecosystem around Roblox is a complete shitshow that no sane parent should allow a kid anywhere near.
It wasn’t for me, either, but I’m fairly certain the point of Goat Simulator is that there isn’t one.
They seem to be getting back into it.
There was Alyx, at first. But today there’s Deadlock, two more rumoured games, and now this?
My argument was and is that neural models don’t produce anything truly new. That they can’t handle things outside what is outlined by the data they were trained on.
Are you not claiming otherwise?
You say it’s possible to guide models into doing new things, and I can see how that’s the case, especially if the model is a very big one, meaning it is more likely that it has relevant structures to apply to the task.
But I’m also pretty damn sure they have insurmountable limits. You can’t “guide” an LLM into doing image generation, except by having it interact with an image generation model.
In a couple sentences? In a way that doesn’t approach, equal or exceed the effort of training the model with that data to begin with?
You insist these models can do new things out of nothing, and you keep saying “all you have to do is give them something”.
Bloated, as in large and heavy. More expensive, more power hungry, less efficient.
I already brought it up. They can’t deal with something completely new.
When you discuss what you want with a human artist or programmer or whatever, there is a back and forth process where both parties explain and ask until comprehension is achieved, and this improves the result. The creativity on display is the kind that can unfold and realize a complex idea based on simple explanations even when it is completely novel.
It doesn’t matter if the programmer has played games with regenerating health before, one can comprehend and implement the concept based on just a couple sentences.
Now how would you do the same with a “general” model that didn’t have any games that work like that in the training data?
My point is that “general” models aren’t a thing. Not really. We can make models that are really, really big, but they remain very bad at filling in gaps in reality that weren’t in the training data. They don’t start magically putting two and two together and comprehending all the rest.
You are completely missing what I’m saying.
I know the input doesn’t alter the model, that’s not what I mean.
And “general” models are only “general” in the sense that they are massively bloated and still crap at dealing with shit that they weren’t trained on.
And no, “comprehending” new concepts by palette swapping something and smashing two existing things together isn’t the kind of creativity I’m saying these systems are incapable of.
Obviously.
But at what point does that guidance just become the dataset you removed from the training data?
To get it to run Doom, they used Doom.
To realize a new genre, you’ll “just” have to make that game the old-fashioned way, first.
Ok.
Try to get an image generator to create an image of a tennis racket, with all racket-like objects or relevant sport data removed from the training data.
Explain the concept to it with words alone, accurately enough to get something that looks exactly like the real thing. Maybe you can give it pictures, but one won’t really be enough, you’ll basically have to give it that chunk of training data you removed.
That’s the problem you’ll run into the second you want to realize a new game genre.
“The potential here is absurd,” wrote app developer Nick Dobos in reaction to the news. “Why write complex rules for software by hand when the AI can just think every pixel for you?”
“Can it run Doom?”
“Sure, do you have a spare datacenter or two full of GPUs, and perhaps a nuclear powerplant for a PSU?”
What the fuck are these people smoking? Apparently it can manage 20 fps on one “TPU”, but to get there it was trained on a shitload of footage of Doom. So just play Doom?!
The researchers speculate that with the technique, new video games might be created “via textual descriptions or example images” rather than programming, and people may be able to convert a set of still images into a new playable level or character for an existing game based solely on examples rather than relying on coding skill.
It keeps coming back to this, the assumption that these models, if you just feed them enough stuff, will somehow become able to “create” something completely new, as if they don’t fall apart the second you ask for something that wasn’t somewhere in the training data. Not to mention that this type of “gaming engine” will never be as efficient as an actual one.
It’s been a looong road…
That it’s a good game on its own premise
It doesn’t really even manage that. It’s not bad, there’s a lot to like, but playing it I ran into a lot of stuff I wish was there, but wasn’t.
The story was one thing, but it completely fails at building tension. DS1 fills you with adrenaline at regular intervals, but in Callisto Protocol, the second I realized the “sound-sensitive” blind enemies don’t react to the noise of melee combat, it was like all the air went out of the balloon.
That’s a perfect microcosm of the whole game. Really neat ideas, really good execution, but only to 90%. And that last 10% matters. A LOT.
The combat system is great, but the game doesn’t lean into it at all. The final boss is just a bullet sponge that makes no clever use of any mechanics, and the game is so obsessed with trying to be DS (and TLOU) that it pads itself out with boring stealth sections and puzzles.
You end up spending a lot of time wishing combat was happening.
I feel like a Callisto Protocol 2 that leans into the things that worked, and fixes just a couple of small things that almost work, could be amazing.
It was good in many ways. And mechanically it expands on Dead Space in many ways; it just didn’t follow through in some aspects.
The guns are cool and there’s a very satisfying melee system.
But the melee system is overpowered, which means the monsters are less scary. And in the sound-based stealth sections, where you go through rooms full of blind monsters that allegedly react to sound, the monsters are completely deaf to melee kills, which means you can just walk up to them one by one and clear the room.
And you’re right about the story. The game should have had LORE, but it’s just the bare minimum generic excuse to have a horror setting.
Yes and no.
It’s not as good as Dead Space, and not as scary.
It does have decent atmosphere, cool visuals, etc. The combat system is very good. It’s much more action game than horror game. The melee system meant that not running out of ammo and being careful with your shots wasn’t as important as in Dead Space.
It falls short in several disappointing ways. The “stealth” system is a joke. There’s a level where you have to sneak around “blind” monsters that only act on sound, except you can walk right up to them and just melee kill them, LOUDLY, without any of the others reacting.
So the stealth sections are completely trivial.
There’s a pretty interesting enemy in the form of the prison’s automated security robots, except they literally show up only in the tutorial, which teaches you how to deal with them, and then they’re utterly absent for the rest of the game.
The whole game is really impressive in a couple ways (graphics and animation, the combat system) but it feels like 50% of what was supposed to be in it was cut, and like several mechanics never got implemented.
In some countries, some people, in some parts of the industry, are unionized. It’s not even close to being the norm. It’s only slowly starting to happen.
…
Ok yeah that’s really fucking uncool.
There are absolutely actors who are down to do that stuff, but you can’t hire people without letting them see the script first, then drop stuff on them that might straight up give some people a panic attack to even think about, let alone re-enact.
Title makes you think there are actors who don’t want games in general to contain explicit adult content, but this is 100% reasonable, and yet another reason voice actors and game industry workers need to unionize.
I bet your ass the same shit is happening with asset creators and animators.
2 and 3 feel like Saints Row knock-offs for 12-year-olds.
The first game was… Genuinely different in a lot of ways.
The mod is called Living City and adds so much, it might almost be the real Watch Dogs 2 we never got. What the series could have been had it not gone off the rails.
15 minute video that showcases the many, many things the mod adds to the game.
I’m thinking of replaying the first Watch Dogs.
There’s a mod that implements a massive amount of cut content, to the point it’s almost a different game.
Because it introduces latency.
Higher framerates improve the experience only partly by looking better; they also make the game feel faster, because what you input is reflected in-game that fraction of a second sooner.
Increasing framerate while incurring higher latency might look nicer for an onlooker, but it generally feels a lot worse to actually play.
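Rough back-of-envelope, assuming the common interpolation approach where the game holds back one real frame so it has two to blend between: at a native 60 fps a real frame arrives every 1000 / 60 ≈ 16.7 ms. Frame generation can double what the screen shows to 120 fps, but your inputs still only land on real frames every ~16.7 ms, and the held-back frame can add up to another ~16.7 ms on top of the normal render-and-display delay. So it can look like 120 fps while feeling closer to 60 fps or worse.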