PostPosted: Fri Jun 15, 2018 1:44 am 
Formerly Espozo

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3363
Location: Richmond, Virginia
Quote:
that few resources

I don't think "few" when I hear 8 cores... Mind telling me why it isn't enough?


PostPosted: Fri Jun 15, 2018 5:57 am 

Joined: Sun Sep 19, 2004 11:12 pm
Posts: 20567
Location: NE Indiana, USA (NTSC)
Espozo wrote:
Quote:
Racing games of all things can fail to credibly simulate a three-car pileup.

You'd love these physics :lol: https://youtu.be/8lrBwSgCovE I really love how photorealistic they're trying to make racing games look, when you can slam directly into a wall at 100mph and only the body paneling will dent...

Some of that is the result of contracts with the car manufacturers, which limit how much damage can be shown to a vehicle with a licensed name and likeness. Does the game with more realistic damage use fictional cars?


PostPosted: Fri Jun 15, 2018 6:29 am 

Joined: Thu Aug 13, 2015 4:40 pm
Posts: 281
Location: Rio de Janeiro - Brazil
While browsing Steam yesterday I saw this game and got interested, but looking at the gameplay video I feel like it is still very lenient about how fast you can hit a wall without totalling the front of your car.
https://www.youtube.com/watch?v=IgryuoWaaOI

_________________
https://twitter.com/bitinkstudios <- Follow me on twitter! Thanks!


PostPosted: Fri Jun 15, 2018 3:26 pm 

Joined: Fri Jul 04, 2014 9:31 pm
Posts: 962
Espozo wrote:
You'd love these physics :lol: https://youtu.be/8lrBwSgCovE

Reminds me of the battle arena in Donkey Kong 64. Bunch of inconclusive bashing, and then somebody gets a crystal coconut and takes a knockback attack at the same time... SEE YA

rainwarrior wrote:
The hard part about refraction in games is how you render/determine what's "under" the surface of the water. If the surface of the water was flat and still, you could render the entire scene upside down under the water, and use that as your reflected version, or an offset lookup to that for your refracted version.

...I can't tell if you actually know what refraction is. It's got nothing to do with reflection; it's just the distortion of the image of what's actually under the water - rocks, weeds, fish, sunken chests and so forth - due to the bending of light being transmitted up through the surface in accordance with Snell's Law.

And I've seen people fake the distortion of that transmitted image due to surface disturbances. What I've never seen is modelling of the average effect, the one that's still there even if the water is completely still. If you stand next to a pond in a video game and aim at a fish with a spear gun, you will hit the fish, and that's not physically accurate.
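(To put a number on that miss: a minimal 2-D sketch, assuming flat, still water, the textbook indices n_air = 1.0 and n_water = 1.33, and completely made-up distances. A straight spear thrown along the line of sight lands past where the light actually came from.)

Code:
/* Apparent-position error for a fish under flat, still water.
 * Assumptions: 2-D geometry, eye at height eye_h above the surface,
 * n_air = 1.00, n_water = 1.33; all distances are arbitrary examples.
 * Build: cc -o snell snell.c -lm
 */
#include <math.h>
#include <stdio.h>

#define N_AIR   1.00
#define N_WATER 1.33

int main(void)
{
    double eye_h = 1.5;   /* eye height above the water surface (m)      */
    double hit_x = 2.0;   /* where the line of sight crosses the surface */
    double depth = 1.0;   /* depth of the fish below the surface (m)     */

    /* angle of incidence, measured from the surface normal (vertical) */
    double theta_i = atan2(hit_x, eye_h);

    /* Snell's law: n_air * sin(theta_i) = n_water * sin(theta_t) */
    double theta_t = asin((N_AIR / N_WATER) * sin(theta_i));

    /* straight-line (spear) path vs. refracted (light) path below the surface */
    double x_spear = hit_x + depth * tan(theta_i);
    double x_fish  = hit_x + depth * tan(theta_t);

    printf("spear arrives at x = %.3f m at fish depth\n", x_spear);
    printf("fish is actually at x = %.3f m\n", x_fish);
    printf("miss distance: %.3f m\n", x_spear - x_fish);
    return 0;
}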

Quote:
The problem is, refraction and water require a continuous variation of the surface, i.e. every different angle requires a different viewpoint on that reflection. You can't get all of that from one upside-down viewpoint; you'd need a different view from each point on the curved surface. No-go. In general, the technique is to render the upside-down scene once (and save it to a texture) and then use the angle of refraction to warp the lookup into that texture.

I've actually tried to figure out how to do this on the Nintendo 64, to get somewhat realistic-looking water reflections without massive tessellation of the water surface. Linking the opacity of the reflected image to the value of a contour texture seems feasible, at least with a multipass approach, but unfortunately I don't see a way to alter the position of a texture read based on another texture read (the block that generates the filtered pixel is downstream of the one that reads TMEM). There are other possible methods, but nothing quite as neat and easy has occurred to me yet...


PostPosted: Fri Jun 15, 2018 4:18 pm 

Joined: Sun Jan 22, 2012 12:03 pm
Posts: 6819
Location: Canada
93143 wrote:
rainwarrior wrote:
The hard part about refraction in games is how you render/determine what's "under" the surface of the water. If the surface of the water was flat and still, you could render the entire scene upside down under the water, and use that as your reflected version, or an offset lookup to that for your refracted version.

...I can't tell if you actually know what refraction is. It's got nothing to do with reflection; it's just the distortion of the image of what's actually under the water - rocks, weeds, fish, sunken chests and so forth - due to the bending of light being transmitted up through the surface in accordance with Snell's Law.

Sorry, I conflated the two things a little when I said "upside down", but refraction and reflection are physically tied together, and usually both are needed together for a simulation of water.

So, the implementation I was talking about has two lookup textures, one for reflection and one for refraction. The refraction texture is a render of what's below the water surface (possibly just all the opaque stuff in the scene, rendered and saved off to a texture before you start doing translucent stuff like water on top), and the reflection texture is a separate version of the scene rendered upside down, flipped through the plane of the water (ignoring its perturbations).

So, for the refraction component, you take the view direction vs. the surface normal of the water, and you displace the texture lookup based on that. It's not accurate to the actual angles, but you at least get a continuous effect where a shallower viewing angle creates a stronger distortion.

The reflection component is the same idea but with the view direction reflected off the surface normal, looking up into that upside-down reflected-scene texture instead. The two results are blended: the refracted light coming from under the surface, and the reflected light bouncing off it. The blend might be altered based on viewing angle, depending on how you want to simulate this.

What I was saying is that this particular way of faking water refraction (and reflection) is prone to error at the edges of the water especially, which is why you might want to have some vertex weight or something to fade the strength of the effect out at the edges. There are many other ways to simulate both reflection and refraction, though. This is just one thing I've used and seen used in several places.
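In rough C-ish pseudocode (the constants, the view/normal conventions, and the Schlick-style blend here are placeholders I'm making up for illustration, not any particular engine's shader), the displaced lookups might look like this:

Code:
/* Sketch of the displaced texture lookups described above.
 * 'view' is the unit vector from the camera toward the water pixel,
 * 'normal' is the (possibly normal-mapped) surface normal pointing up.
 * Note that with a perfectly flat normal the refraction offset vanishes,
 * i.e. this only fakes the differential distortion, not the mean one.
 */
#include <math.h>

typedef struct { float x, y; }    vec2;
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* displaced lookup into the "scene below the water" texture */
static vec2 uv_refract(vec2 uv, vec3 normal, vec3 view, float strength)
{
    /* shallower viewing angle -> stronger distortion */
    float grazing = 1.0f - fabsf(dot3(view, normal));
    vec2 out = { uv.x + normal.x * strength * grazing,
                 uv.y + normal.z * strength * grazing };
    return out;
}

/* displaced lookup into the upside-down "reflected scene" texture */
static vec2 uv_reflect(vec2 uv, vec3 normal, vec3 view, float strength)
{
    /* reflect the incident view direction: R = I - 2(N.I)N */
    float d = dot3(view, normal);
    vec3 refl = { view.x - 2.0f*d*normal.x,
                  view.y - 2.0f*d*normal.y,
                  view.z - 2.0f*d*normal.z };
    vec2 out = { uv.x + refl.x * strength, uv.y + refl.z * strength };
    return out;
}

/* crude viewing-angle blend weight for the reflection sample
   (Schlick-style guess with F0 ~ 0.02 for water) */
static float reflect_weight(vec3 normal, vec3 view)
{
    float c = fabsf(dot3(normal, view));
    return 0.02f + 0.98f * powf(1.0f - c, 5.0f);
}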

Reflections are often done with cube maps (or other kinds of environment map), where some static approximation of the scene, or often just the sky, is used in place of the actual reflected scene. This does have the potential to capture how reflections change drastically with viewing angle, so it's often pretty effective at simulating the feel of reflection. You can use the same technique for refraction, but it tends to be a bit less applicable/convincing.

93143 wrote:
And I've seen people fake the distortion of that transmitted image due to surface disturbances. What I've never seen is modelling of the average effect, the one that's still there even if the water is completely still. If you aim at a fish with a spear in a video game, you will hit it, and that's not physically accurate.

Well, I don't know which hypothetical fish-spearing game you're referring to. Pursuing any particular aspect of realism is often at odds with accessible gameplay, so I'm not sure the incentive is there to make a properly refracted fish in a lot of games to begin with, even if it were feasible? A game like Fishing Planet might be a good place to go looking for this kind of thing.

I think the one big thing that's hard to solve without raytracing is just the uneven surface of the water. If you have an object that is partially in the water and partially out, it's very hard to make that edge match up properly when you're relying on a flat-plane approximation underneath, and it often results in seeing "inside" a 3D object that's being cut off or displaced and leaving a hole where it crosses the water. If you can control the player's viewpoint so that normal mapping suffices instead of actual non-flat water geometry, though, there's a lot you can get away with.

93143 wrote:
rainwarrior wrote:
The problem is, refraction and water require a continuous variation of the surface, i.e. every different angle requires a different viewpoint on that reflection. You can't get all of that from one upside-down viewpoint; you'd need a different view from each point on the curved surface. No-go. In general, the technique is to render the upside-down scene once (and save it to a texture) and then use the angle of refraction to warp the lookup into that texture.

I've actually tried to figure out how to do this on the Nintendo 64, to get somewhat realistic-looking water reflections. Unfortunately I don't think you can alter the position of a texture read based on the value of the previous texture read, possibly because the texture filter is downstream of the texturing unit so the latter doesn't actually have access to the filtered local value. There are other possible methods, but nothing quite as neat and easy has occurred to me yet...

I don't know quite what you've got available on the N64, but you can simulate both refractions and reflections with vertex effects. This is something vertex shaders can be quite good at on modern GPUs, and even without a GPU to do the grunt work it might be pretty reasonable on the CPU for the right number of vertices.

For refraction, distorting the shape under a planar surface according to the viewing angle is pretty straightforward. Splitting it at the surface might not be quite as easy (though you can just let there be some error on edges that cross the surface). Dealing with a non-planar water surface becomes much, much tougher, though. (Again, this would be trivial for a raytracer...) Clipping planes and multiple passes can help. You can apply other "watery" distortions to the vertices to simulate some wobbly refraction too (similar to how an SNES game might put a sine offset on background scanlines underwater), but that's getting away from accuracy and more toward just simulating the feel of it.
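Something along these lines, maybe (every factor here is a guess for illustration; the 0.75 is roughly 1/1.33, the usual apparent-depth compression for water):

Code:
/* Fake per-vertex refraction for geometry under a flat water plane at
 * y = water_y, as seen from eye position 'eye' (assumed above the water).
 * Vertices below the plane get their depth compressed, a view-dependent
 * horizontal push, and a sine wobble; all the constants are arbitrary.
 */
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 distort_underwater_vertex(vec3 v, vec3 eye, float water_y, float t)
{
    if (v.y >= water_y)
        return v;                      /* above the surface: leave alone */

    float depth = water_y - v.y;

    /* objects appear shallower: compress depth by ~1/1.33 */
    v.y = water_y - depth * 0.75f;

    /* crude stand-in for the angle-dependent lateral shift: push the
       vertex away from the point under the eye, more so at grazing angles */
    float dx = v.x - eye.x, dz = v.z - eye.z;
    float dist = sqrtf(dx*dx + dz*dz) + 1e-6f;
    float grazing = dist / (dist + (eye.y - water_y));
    v.x += (dx / dist) * depth * 0.2f * grazing;
    v.z += (dz / dist) * depth * 0.2f * grazing;

    /* wobbly ripple, like the SNES per-scanline sine trick */
    v.x += 0.05f * depth * sinf(v.z * 4.0f + t * 2.0f);

    return v;
}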

Similarly for reflections, you can flip the scene upside down and render it translucently. That can make a perfect-looking reflection in a flat plane, at least. The same wobbly vertex modulations can apply to this too.
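(For the flat plane, the "flip" really is just a mirror through the water plane, something like the sketch below; the usual gotcha is that you also have to flip the triangle winding for the reflection pass so the mirrored geometry doesn't get back-face culled.)

Code:
/* Reflect a point through the horizontal plane y = water_y.  In practice
 * you'd bake this into the view/model matrix for the reflection pass
 * rather than touch vertices one by one; illustration only.
 */
typedef struct { float x, y, z; } vec3;

static vec3 mirror_through_water(vec3 p, float water_y)
{
    p.y = 2.0f * water_y - p.y;
    return p;
}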


Actually, with a modern GPU it might even be pretty feasible to raytrace/raymarch a water surface in a shader. It wouldn't combine with the usual raster surfaces everything else is made of, but if the water can kind of be its own self-contained little procedural world, it might work. Or if your game is fully about that kind of stuff, like that Voxel Quest experiment, it could be pretty straightforward to implement very accurate refraction.


PostPosted: Fri Jun 15, 2018 6:09 pm 

Joined: Fri Jul 04, 2014 9:31 pm
Posts: 962
The Nintendo 64's graphics (and sound) are usually handled by the Reality Co-Processor, or RCP, which runs at 62.5 MHz and consists of the programmable Reality Signal Processor (RSP) and the fixed-function Reality Display Processor (RDP). The RDP consists of six functional blocks:

Rasterizer - generates pixel coordinates and various attributes (including an RGBA vertex shader value), and attribute slopes (?)
Texture Unit - picks out four texels from TMEM
Texture Filter - combines those four texels into one pixel based on filter settings
Color Combiner - executes (A-B)*C+D with a wide variety of possible sources for A, B, C, and D; handles alpha separately
Blender - blends the pixel with the framebuffer, adds effects like fog, and handles the first stage of antialiasing
Memory Interface - handles RMW operations with span buffering

These functional blocks can be chained once-through for 1-cycle mode, which has a theoretical peak of 62.5 Mpix/s and can handle most of the advertised features, or each block can run twice on the same pixel for 2-cycle mode, which peaks at 31.25 Mpix/s and adds mipmapping/multitexturing and fog to the feature list. The color combiner and blender can use their own results from the first cycle in the second cycle, but apparently the only block that can feed back into the previous block is the memory interface.
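Just to make the combiner formula above concrete, here's a toy per-channel model of the arithmetic (arithmetic only; the real hardware selects A/B/C/D from many possible sources and runs a separate alpha combiner, none of which is modelled here):

Code:
/* Toy model of the RDP colour combiner's (A - B) * C + D, per 8-bit channel,
 * with C treated as a 0..1 fraction (255 = 1.0).  Not cycle-accurate and
 * not a model of the real source selection; just the basic arithmetic.
 */
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
}

static uint8_t combine_channel(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    return clamp_u8(((a - b) * c) / 255 + d);
}

/* e.g. plain texture * shade modulation: A = texel, B = 0, C = shade, D = 0 */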

The RSP basically consists of a custom 32-bit (?) MIPS R4000-like core (62.5 MIPS peak) running in parallel with a 128-bit SIMD fixed-point vector unit (500 MOPS peak). The RSP runs the microcode, handles display lists and tells the RDP what to do.


The RCP is perfectly capable of environment mapping based on vertex normals, using a render of the object's view as a reflection texture, and I imagine alpha (for pasting the reflection over the fogged and distorted underwater terrain image) could be simultaneously controlled by vertex shading as well. But the trouble with vertex effects is that the wavelength of the effect is limited to the scale of the mesh. If you wanted small ripples on a large lake in a Nintendo 64 game, you'd need way too many polygons to do it that way (you could perhaps save RAM and compute time by tiling the mesh or something like that, but short runs of rasterization are expensive due to RAM latency and buffer delays, and even vertex transforms aren't free on a system this old).

If there were a way to just use some relative of normal mapping to distort the reflection and control its intensity, that could potentially be much faster, so that's what I was trying to figure out. But even ordinary normal mapping isn't directly supported (though I think it should be possible to trick the RDP into doing it in limited circumstances). Coupling normal mapping with reflection mapping might be unrealistic without some low-level assistance from the RSP or the CPU...

rainwarrior wrote:
Well, I don't know which hypothetical fish-spearing game you're referring to. Pursuing any particular aspect of realism is often at odds with accessible gameplay, so I'm not sure the incentive is there to make a properly refracted fish in a lot of games to begin with, even if it were feasible?

I'm not thinking of anything in particular. Even in games where it shouldn't matter because you never have to target anything through a water surface, even in games that do use differential refraction based on surface waves, nobody ever bothers with realistic average refraction.

Quote:
A game like Fishing Planet might be a good place to go looking for this kind of thing.

Maybe, but it's hard to tell from that trailer. I can't really see anything through the epic metal...

Quote:
I think the one big thing that's hard to solve without raytracing is just the uneven surface of the water. If you have an object that is partially in the water and partially out, it's very hard to make that edge match up properly when you're relying on a flat-plane approximation underneath, and it often results in seeing "inside" a 3D object that's being cut off or displaced and leaving a hole where it crosses the water.

I can see that being an issue. But the RSP is capable of tessellating a Bézier surface, so maybe it's capable of splitting the object and water polygons where they intersect, and transforming the mesh below that line. Wave distortions could be clamped at the intersection lines.

Or is that not something that you'd want to run at 30 fps on a 62.5 MHz loosely MIPS-based geometry processor? Maybe you could design the object with movable wave nodes on the side or something...

On a modern system, I figure if you can handle the local differential distortion without glaring artifacts, you can probably handle the mean distortion the same way. But as you've already figured out, I'm not an expert on 3D graphics...


PostPosted: Sat Jun 16, 2018 2:24 am 

Joined: Tue Oct 06, 2015 10:16 am
Posts: 796
For water surfaces, I've used geoplanes before; see the attachment. I believe Wind Waker used the same approach for its ocean - have the geoplane centered on your character (or a bit ahead of it), and you naturally get high-precision waves up close and low-precision waves far away.
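Something like this, roughly (ring/segment counts and the growth factor are arbitrary; it just dumps vertex positions, and in a game you'd recentre the whole disc on the camera every frame and add wave height per vertex):

Code:
/* Generate a geoplane-style disc: concentric rings around the origin with
 * spacing that grows with distance, so vertex density is highest near the
 * viewer.  Counts and growth factor are arbitrary illustration values.
 */
#include <math.h>
#include <stdio.h>

#define RINGS    16
#define SEGMENTS 32

int main(void)
{
    const float PI = 3.14159265f;
    float radius = 1.0f;

    printf("v 0.0 0.0 0.0\n");                 /* centre vertex */

    for (int r = 0; r < RINGS; r++) {
        for (int s = 0; s < SEGMENTS; s++) {
            float a = 2.0f * PI * (float)s / SEGMENTS;
            /* y = 0 here; wave height gets added by the water animation */
            printf("v %f 0.0 %f\n", radius * cosf(a), radius * sinf(a));
        }
        radius *= 1.4f;    /* each ring ~40% farther out than the last */
    }
    return 0;
}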


Attachment: geoplane.png
PostPosted: Sat Jun 16, 2018 6:34 pm 

Joined: Sun Nov 23, 2014 12:16 pm
Posts: 269
Quote:
I'm not the only one to notice that physics and AI haven't improved as much as graphics... AI has been mostly stagnant for a decade.


This is exactly what I'm talking about. It seems as if the graphics of a game are the prime objective of the developer nowadays. In fact, ever since the 16-bit wars it has always seemed that graphics were of the utmost importance. But I think we now have good enough graphics that we can focus on other things.

What I want to see in a new modern game is a world full of people who have advanced AI. Say you are a character and you go into a typical shop to buy something. I want to have a real conversation with that AI bot, a.k.a. the shopkeeper - to be able to talk to the character for literally hours about just about anything.

How about an entire village in the game where every person has their own advanced AI, and the whole village interacts with each other in very complex ways, even without the player's involvement?

I want to play a first-person shooter where my opponent isn't just some stupid AI bot that runs, then shoots, then hides every time. I want to play against computer players that have almost human-like skills.

Also, while I've always appreciated realistic graphics in some games, I actually don't really like super-realistic-looking graphics. I never thought I would ever say that, because many times it was the GRAPHICS that literally sold me on a game. But here is my argument: games, to me, are abstract representations of reality, not reality itself.

If you take any game and study it, you will find out just how unrealistic it is. So trying to make it more realistic with realistic graphics is a turn-off for me. I prefer games to be cartoony or abstract in their art/graphics, because that is how I think games should be represented. Take, for example, Zelda 1 on the NES versus Zelda: Breath of the Wild on the Switch: Zelda 1 is more abstract in its graphics and allows the player to "fill in the gaps" with their own imagination to fully realize the adventure.


PostPosted: Sat Jun 16, 2018 9:41 pm 

Joined: Thu Aug 13, 2015 4:40 pm
Posts: 281
Location: Rio de Janeiro - Brazil
The players are to blame. Only the players. You want to know why? Every time I see a game deviate from the norm even a little bit, suddenly nobody buys it. There are countless examples of games I liked over the years that never became mainstream (I like some mainstream games too, I'm not that much of a hipster). So if you want games that are not about graphics, you will have to look to indies or make them yourself. Otherwise you won't find them. Effort put into something that doesn't show up in a screenshot doesn't sell.

_________________
https://twitter.com/bitinkstudios <- Follow me on twitter! Thanks!


PostPosted: Sat Jun 16, 2018 10:06 pm 

Joined: Sun Jan 22, 2012 12:03 pm
Posts: 6819
Location: Canada
calima wrote:
For water surfaces, I've used geoplanes before

The term "geoplane" is new to me, but yeah that's a great techinque. Shaping geomertry to make simple UV offset animations can make really good animation of flow. That circular arrangement is perfect for flow out from the centre, rings, etc., but other shapes can apply for waterfalls and rivers, etc. I've seen some very clever and creative uses of this over the years, very easy on the GPU but with some creatively laid out textures and UV geometry you can get a lot of mileage.


PostPosted: Sat Jun 16, 2018 10:50 pm 

Joined: Thu Aug 20, 2015 3:09 am
Posts: 396
Erockbrox wrote:
What I want to see in a new modern game is a world full of people who have advanced AI. Say you are a character and you go into a typical shop to buy something. I want to have a real conversation with that AI bot, a.k.a. the shopkeeper - to be able to talk to the character for literally hours about just about anything.

How about an entire village in the game where every person has their own advanced AI, and the whole village interacts with each other in very complex ways, even without the player's involvement?

I want to play a first-person shooter where my opponent isn't just some stupid AI bot that runs, then shoots, then hides every time. I want to play against computer players that have almost human-like skills.

Hear, hear! I've been dreaming of all of those things for years now.

I've been researching, designing and writing AI code as a hobby for almost as long, but other than a nifty-but-unfinished Minecraft mod I have nothing to show for it. One of these days... :lol:

