Man, so you'd almost want to always have the CPU at 100% so you can get as much graphics performance as possible, unless you're just absolutely content with how the game looks. PCs actually seem like a pain in the ass to do any non super high level stuff, because parts change from computer to computer. When you were talking about trying to do as much as possible on the CPU, what happens for people who have a very good graphics card but a bad CPU? (If you've ever run into this problem.) Because you would want to run your game cleanly on as many systems as possible, you'd then have to develop a system that gauges how good your CPU is compared to your GPU and adjusts the load accordingly. At least with a console, you don't have to worry about that. That's one advantage, but it's only a programming one (as in, easier for the developer), not really a performance one.
No, you wouldn't normally try to adjust the CPU vs GPU load automatically at run-time. That's what the graphics settings are for. (These days nVidia cards even come with a utility that's like a library of "good" settings for games based on the hardware you have installed.)
The nature of things is that you will either be using 100% of the available CPU, or 100% of the GPU, and the one that isn't fully utilized will wait on the other. One of these two things will be determining your maximum framerate in any given situation. Even if you have just one set of hardware it's quite normal for the changing game situation to shift from one to the other. It depends on how much stuff changes in your game.
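To make that concrete, here's a very rough sketch of how you might tell which side is the limit in a given frame. The buildFrame()/present() names are just stand-ins for illustration (not any real engine or API), and real engines would use proper profilers and GPU timestamp queries, but the idea is the same: if the CPU's own work fills the whole frame, you're CPU-bound; if the CPU finishes early and spends the rest of the frame waiting in the present call, you're GPU-bound.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Placeholders standing in for the real work; in a real game these would be
// the update/submit path and the swap-chain present call.
void buildFrame() { std::this_thread::sleep_for(std::chrono::milliseconds(4)); }
void present()    { std::this_thread::sleep_for(std::chrono::milliseconds(12)); }

int main()
{
    using clock = std::chrono::steady_clock;

    auto start    = clock::now();
    buildFrame();                       // CPU side: game logic + draw-call submission
    auto afterCpu = clock::now();
    present();                          // blocks while the GPU (or vsync) catches up
    auto end      = clock::now();

    double cpuMs   = std::chrono::duration<double, std::milli>(afterCpu - start).count();
    double totalMs = std::chrono::duration<double, std::milli>(end - start).count();

    // If cpuMs is close to totalMs, the CPU is the limit; if totalMs is much
    // larger, the CPU finished early and spent the rest of the frame waiting
    // on the GPU.
    std::printf("CPU work: %.2f ms of a %.2f ms frame\n", cpuMs, totalMs);
    return 0;
}
```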
I mentioned culling as the most important thing to do on the CPU to unload the GPU, but it's not like a transfer of work. You can't really just take stuff from the GPU and transfer it to the CPU, and you can't just spend more CPU on the problem and cull more stuff; you just remove everything you can so the GPU doesn't end up doing redundant work, and doing this isn't usually a big burden on the CPU.
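As an illustration, here's a minimal frustum-culling sketch. The types and names are made up for the example rather than taken from any particular engine, but it shows the shape of the idea: the CPU does a cheap bounding-sphere test per object and simply never submits the draw calls that couldn't contribute to the final image.

```cpp
// Minimal frustum-culling sketch (illustrative types, not any real engine's).
#include <vector>
#include <cstdio>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };          // plane: dot(n, p) + d = 0, n points inward
struct Sphere { Vec3 center; float radius; };

struct Object {
    Sphere bounds;
    int    meshId;   // whatever you'd hand to the renderer
};

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// True if the sphere is at least partly inside all six frustum planes.
bool isVisible(const Sphere& s, const Plane frustum[6])
{
    for (int i = 0; i < 6; ++i) {
        if (dot(frustum[i].n, s.center) + frustum[i].d < -s.radius)
            return false;   // completely behind this plane -> cull it
    }
    return true;
}

void submitVisible(const std::vector<Object>& scene, const Plane frustum[6])
{
    for (const Object& obj : scene) {
        if (isVisible(obj.bounds, frustum)) {
            // drawMesh(obj.meshId);   // placeholder for the real submit call
            std::printf("draw mesh %d\n", obj.meshId);
        }
    }
}
```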
There are some tasks that could potentially be done on a CPU or GPU, but that's a bit of an aside... the usual purpose of having a GPU is that it's really efficient at rendering things and you want to let it do what it's good at. The CPU acts like its "manager", trying to give it tasks in the appropriate order and form that will keep things running smoothly, and getting rid of tasks that would have gone to waste.
Being honest though, I imagine this stuff isn't something you worry about 90% of the time. You'd probably only need to worry about it if you're developing a game engine.
Incorrect. The quality of the engine might affect your overall efficiency, but the load you're putting on the GPU is driven entirely by the art assets you're feeding into it. It doesn't matter how good the engine is: if you cram too much work into either end of it, your framerate will drop.
The only time it doesn't matter is when you're under-budget and already running at your target framerate. (At that point you're not bound by either CPU or GPU.)
If you don't have capable engineers on your team to diagnose specific performance problems, there are lo-fi approaches to solving them (e.g. "delete the trees in that room, see if it helps?"). It's easier to do this kind of balancing on a console where there's only one hardware target, but it's still a problem there too.
For PC you set a "minimum specification" target and test and make sure your game runs on that with all the lowest settings. You might also set a "medium" and "maximum" target and try to balance the assets for those as well. You develop on whatever is reasonable and hope that by testing at least against the minimum spec your game will be viable across a range of hardware setups. If you're lucky you have a QA department and a bunch of random hardware to get some additional testing done on. Then of course if you eventually release the game, you suddenly get live testing on all sorts of hardware you couldn't possibly consider, probably find new bugs and problems, and try to patch issues as they're reported (or not, if you don't have the budget for it).
Okay, I think I'm starting to get it... Direct3D is like a traditional compiler that is used only for development and is not run when you open a game, and the driver is a program that is part of the operating system and runs at runtime? I was under the impression a driver is strictly for compatibility (as in, you don't need a different version of the program for every computer), but I thought I heard game consoles have them now. For consoles, it would make sense to me to just have the shader compiler output machine code directly; if a piece of software is designed to run on only one machine, a driver should be an unnecessary step.
Consoles are pretty much like the PC architecture, and drivers still exist there. Think about what it means to have both a PS4 and a PS4 Pro, for example. There are fewer variations of the hardware, but that doesn't mean the driver architecture idea isn't worthwhile. Again, you seem to be under the impression that the drivers are a significant source of inefficiency, and they really aren't, for the most part.
Direct3D is a runtime system that sits between the game program and the driver. If you're talking about shader compilation specifically, you can do some of the work before runtime, because it has an intermediate bytecode format for that purpose. Again, the driver has to have the final word when compiling the shader, though, so that gets done at runtime. (Results can typically be cached so it only has to happen the first time it's run.)
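If it helps to see that split concretely, here's a rough sketch using the D3D11-era API. The HLSL source and file name are made up for the example, and it assumes a Windows build linking d3dcompiler.lib; the second stage is only commented because it needs a live device, but that's exactly the point where the driver gets its final word.

```cpp
#include <d3dcompiler.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "d3dcompiler.lib")

// Trivial example shader: a pixel shader that outputs solid red.
const char* kHlsl =
    "float4 main() : SV_Target { return float4(1.0, 0.0, 0.0, 1.0); }";

int main()
{
    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors   = nullptr;

    // Stage 1: HLSL -> intermediate bytecode. This part is hardware-independent,
    // so it can happen before runtime (e.g. at build time) and ship with the game.
    HRESULT hr = D3DCompile(kHlsl, std::strlen(kHlsl), "red.hlsl",
                            nullptr, nullptr,      // no macros / includes
                            "main", "ps_5_0",      // entry point, shader model
                            0, 0, &bytecode, &errors);
    if (FAILED(hr)) {
        if (errors) std::printf("%s\n", (const char*)errors->GetBufferPointer());
        return 1;
    }
    std::printf("bytecode: %llu bytes\n",
                (unsigned long long)bytecode->GetBufferSize());

    // Stage 2 (not shown here because it needs an ID3D11Device): the driver
    // translates that bytecode into its own hardware-specific code when you
    // hand it to the device, e.g.
    //   device->CreatePixelShader(bytecode->GetBufferPointer(),
    //                             bytecode->GetBufferSize(), nullptr, &ps);
    // which is why that step always happens at runtime (and is often cached).

    bytecode->Release();
    if (errors) errors->Release();
    return 0;
}
```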
Lastly (no promises!), assuming it's standard (I mean, the whole point is that it's supposed to make it work on multiple systems), shouldn't every driver be the same from the game's perspective, in that they all accept the same input? (Not including forwards compatibility, obviously.) I can't seem to find how it wants data formatted, although I really don't know what I'm looking for.
Yes, that's entirely the purpose of having a driver. Standard input, with output customized to the hardware.