Interesting article here from Shamus Young at The Escapist.
Basically, it's his take that good graphics are hurting gaming.

In the original Wolfenstein 3D, the "level editor" was a simple little program that let you draw squares on a grid to create gamespace. You could make a playable room in well under a minute. It was laughably simple and primitive by today's standards, but the game was at least forty hours long because the content was so easy to produce. (It would have almost been possible for someone to make levels as fast as you could play them.) A few years later in the Doom and Duke Nukem 3D era, level design had become slightly more elaborate. It took time to get the textures to line up and make the lighting interesting; that same room of gamespace might take five or ten minutes to produce. With Quake, the bar was raised even higher. Level design was basically 3D modeling, and it might take a whole hour to make the same amount of content.
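Just to show how little there is to a Wolfenstein-style map, here's a toy sketch in Python (my own illustration, not id's actual map format) -- the whole level really is just a grid of numbers:

```python
# Toy sketch of a Wolfenstein-style tile map: the entire "level" is a 2D grid
# of cells, where each number picks a wall type (0 = open floor).
# This is NOT id Software's real data format, just an illustration of the idea.
room = [
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],   # drawing a playable room really is just
    [1, 0, 0, 2, 0, 1],   # filling in squares on a grid (2 = a door, say)
    [1, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1],
]

def is_wall(x, y):
    """All the 'engine' needs to ask is: is this grid cell solid?"""
    return room[y][x] != 0
```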
I remember back when I tried messing with the Build Engine (Duke 3D) and the Quake engine. Oh hell yeah, it was a ton easier to make a level in Build than in Quake -- I could pump out a level in no time with Build.
For recent 3D games, I think the SDK's from NWN, Morrowind, and Oblivion have been some of the easiest to use for making your own stuff. These guys really did a great job with their SDK's, if you ask me. With NWN you built completely from scratch -- you just picked pieces and boom, it was together; an easy way to make a hack-n-slash mod. You had to do a little more to make it a deeper RPG, but it wasn't going to take forever. BethSoft's SDK's (Morrowind and Oblivion) made it easy to add extra quests and content to the game's already existing world -- though putting together your own areas and whatnot does take a lot of time with that engine.
You can see where this is going. The one hour room gave way to two hours, and eventually led to teams of people working for days to make just a few moments of playable content. Now you have someone designing the level, someone else making unique meshes to decorate the space, a specialized texture artist, and a lot of work being done to set up complex lighting systems, moving machinery, special environmental effects, and all of the other steps needed to take advantage of current-gen graphics engines. That's more than a thousand fold increase in the amount of work required to give players a few seconds of entertainment. This inflation of manhours is obviously unsustainable, and even the amount of work we're putting into games now is probably too much. Taking another step forward is folly.
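Just to sanity-check that "more than a thousand fold" figure, here's a rough back-of-the-envelope calc -- my own made-up numbers, not anything from the article:

```python
# Back-of-the-envelope math for the "thousand fold" claim.
# The team size and days of work below are my own rough assumptions.
wolf3d_minutes_per_room = 1      # "well under a minute" per room, call it 1
modern_team_size = 4             # level designer + mesh artist + texture artist + lighting/FX
modern_days_of_work = 3          # "teams of people working for days"
minutes_per_workday = 8 * 60

modern_minutes_per_room = modern_team_size * modern_days_of_work * minutes_per_workday
print(modern_minutes_per_room / wolf3d_minutes_per_room)   # 5760x -- comfortably past a thousand fold
```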
Don't get me wrong, these sexy new polygons look great, and I certainly wouldn't want to go back to the days of pixelated 2-D sprites sliding around a repetitive blocky room, but the problem is that each new graphical step forward has cost us more and given us less in return, and I think at this point we're getting a lot less than what we're giving up.
As these costs rose, we started getting less game for our money. Games began getting shorter. Forty hour games became twenty hour games. Then ten hour games. When developers couldn't make any more cuts to gameplay, they began protecting their investments by simply taking fewer risks. It's one thing to try something outlandish and innovative when a game costs half a million dollars to produce. It's quite another to do so if the game is going to cost twenty million, and anything less than a complete commercial success will spell bankruptcy for your company.
It is annoying how I remember a game like Deus Ex taking over 30 hours to beat, yet Deus Ex: IW took me around 10 hours.
Yet, L4D and all its campaigns can be finished in 4-5 hours...
In 1992, you could pay $40 for a forty hour game that was unlike anything we'd ever seen before. Today, you'll pay $60 for a ten hour game that plays much like a lot of the titles you already have on your shelf. (Assuming you can get the thing to run at all.) We're getting shorter games and less innovation and more buggy games. All this, and developers are still having trouble keeping up financially and technologically. The constant push to improve visuals is hurting both parties, and I think it would be great if we could just call a graphical time-out and try to make the most of what we have now.
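Put that value comparison in per-hour terms (using the article's own figures, ignoring inflation):

```python
# Dollars per hour of gameplay, using the figures quoted above (inflation not adjusted).
price_1992, hours_1992 = 40, 40
price_today, hours_today = 60, 10

print(price_1992 / hours_1992)    # 1.0 -> about $1 per hour of game in 1992
print(price_today / hours_today)  # 6.0 -> about $6 per hour today, six times worse before inflation
```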
It costs a lot to jump from one generation of technology to the next. Each new graphics engine has its own tools, its own quirks, its own limitations, its own visual trade-offs. It takes time to master these tools, and for the most part we're throwing them out just when artists are getting good at them. Compare the debut PS2 titles with the games that came out near the end of its lifespan. (Which would be now, I guess. Quality PS2 titles are still coming out.) The newer games look better and run smoother, even though the hardware hasn't changed. It's possible to improve the visuals and performance of your game without changing the hardware at all, just by giving artists enough time to become adept with the tools.
With the Scimitar Engine, we can look at how much better Prince of Persia 2008 performed on the PC compared to Assassin's Creed -- it was like night and day for me.
Since we're on Assassin's Creed, wow @ 450 employees working on AC2 -- that's crazy!
What developers should do - and what should have happened years ago - is start treating the PC (and if we're lucky, the Mac) like consoles. Pick a nice safe spot on the tech curve and make that your baseline target platform. Now keep it there for eight years or so. When you finish a game in 2003, make another game aimed at the same 2003 level hardware. Then another. Get three or four games out of your tech before you re-invent the wheel. Sure, it means the graphics still look a little stale the third or fourth time around, but the games will be cheaper to produce. Millions of dollars cheaper.
See, I think this is something Valve figured out. They take their Source Engine, which they seem to have been using...well, for a LONG time now, basically since HL2. Every now and then -- namely, with each new game -- they'll add in a new graphical trick which you can turn ON or OFF. That way their new games still often have LOW system requirements, and gamers can still go out and buy them.
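The ON/OFF approach is basically just scalable render settings. Here's a hypothetical sketch of the idea (the feature names are made up for illustration -- these aren't actual Source engine cvars or code):

```python
# Hypothetical sketch of "add a new trick each game, but let people turn it off".
# Feature names are illustrative only, not real Source engine settings.
from dataclasses import dataclass

@dataclass
class RenderSettings:
    hdr: bool = False            # e.g. HDR, added to Source well after HL2 shipped
    motion_blur: bool = False    # e.g. motion blur, added in a later engine update
    multicore_rendering: bool = False

def effects_to_run(settings: RenderSettings):
    """Each new game ships new optional passes; the baseline path never grows."""
    passes = ["base_geometry", "lightmaps"]   # the stuff every machine can do
    if settings.hdr:
        passes.append("hdr_tonemap")
    if settings.motion_blur:
        passes.append("motion_blur")
    if settings.multicore_rendering:
        passes.append("parallel_submit")
    return passes

# A 2004-era PC just runs the baseline; a newer rig flips the extras on.
print(effects_to_run(RenderSettings()))
print(effects_to_run(RenderSettings(hdr=True, motion_blur=True)))
```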
I can still look at Source Engine games and say, "Damn, it might not look the best -- but it still looks good." Look at HL2 + Episodes, Sin Episodes: Emergence, Vamp: Bloodlines, Left 4 Dead -- they still look at least "good enough." Though, one thing -- Valve needs to work on that L4D net-code; it's still head-scratching.
Mount & Blade -- what a graphically dated game, looking like it was made in Morrowind's era. But my gosh, the amount of content this game has and the amount of innovation in the gameplay department is truly something to behold. The game's a blast to play.
And at this point in the tech curve, a lot of people might not even notice you're standing still. Quake II came out five years after Wolfenstein 3D. In those five years we'd seen the world of in-game graphics revolutionized twice. (At least.) Anyone that released a game with Wolfenstein-level graphics in 1997 would have been laughed at. Yet here we are five years after the release of Doom 3, and that game barely looks dated at all. You could be pumping out games based on 2004-level technology and produce something that's commercially viable, attractive to look at, and relatively cheap to produce. (Cheap compared to chasing after the next engine, anyway.) I suspect that with strong art direction and experienced artists you could actually get another five years out of that 2004 technology before you absolutely had to move to a new generation.
I'm thinking of the art direction taken in Demigod -- which just looks awesome, with so many vibrant colors and insane amounts of battle happening on-screen without a hitch in the framerate. Is it the best looking game? No, but technically it looks good, and artistically the game's just an art show on display. I love the artistic direction of this thing -- from the character models to the maps themselves.
Yes, there are mainstream game reviewers out there who are obsessed with graphics and spend their non-gaming hours masturbating to the NVIDIA product catalog. They will indeed give you a hard time because you're not using the next-gen bling mapping. I'm sorry about those guys. But for what it's worth, some reviewers won't do that, and I think consumers will be happy to pony up for your game as long as it's fun. This might sound risky, but think about the millions you'll save in development costs. You'll be producing a game for less money that can run on a far larger portion of PCs. It will run smoother, be less of a support headache, and give gamers more value for their gaming dollar. That sounds like a winning strategy to me. All you have to do is sacrifice a bit of your graphical spectacle. The odd snarky review might cost you a few sales, but I can't imagine it will hurt you as bad as riding the bleeding edge. What are you after here? Do you want the approval of a jaded graphics fetishist or do you want to make awesome games?
I'm thinking this might go back to what we were saying about NVidia PhysX in the Cryostasis thread -- this technology hasn't caught on yet b/c it's really still too early for it. It's not worth the framerate trade-off yet, not until they master it and tweak the performance so basically anyone can run it. And ATI really hasn't punched out their version of it yet, so that doesn't help either. Sure, PhysX ON in Mirror's Edge looks great -- but performance ain't so hot with it right now. Maybe years down the road it'll be worth playing Mirror's Edge with PhysX ON...
Guys, your take...?