All times are UTC - 7 hours

PostPosted: Sun Dec 06, 2015 8:34 pm 
Offline
User avatar

Joined: Mon Dec 29, 2014 1:46 pm
Posts: 750
Location: New York, NY
This wiki article provides a link to a free Java port of the hqx algorithm that closely mirrors the C version. However, testing shows it doesn't run nearly as efficiently: a 4x scale of a 256x256 image takes about 100 milliseconds, far too slow for real time. This is surprising, because the hqx algorithm is used even in JavaScript games. For anyone who has experience with hqx: any ideas on speeding it up, or links to alternative Java implementations? Thanks.
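For concreteness, here's a minimal timing sketch, with a plain nearest-neighbour 4x scale standing in for the hqx4x call (illustrative only; this is not the port's actual API, and the hqx inner loop does far more work per pixel):

```java
// Nearest-neighbour 4x scale as a stand-in for hqx4x, plus a simple
// nanoTime-based timing harness. Pixels are packed ARGB ints.
public class ScaleBench {
    static int[] scale4x(int[] src, int w, int h) {
        int[] dst = new int[w * 4 * h * 4];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = src[y * w + x];
                int base = (y * 4) * (w * 4) + x * 4;
                // Replicate the source pixel into a 4x4 block.
                for (int dy = 0; dy < 4; dy++) {
                    for (int dx = 0; dx < 4; dx++) {
                        dst[base + dy * (w * 4) + dx] = p;
                    }
                }
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        int w = 256, h = 256;
        int[] src = new int[w * h];
        for (int i = 0; i < src.length; i++) src[i] = i; // dummy pixels
        long t0 = System.nanoTime();
        int[] dst = scale4x(src, w, h);
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println(dst.length + " pixels in " + ms + " ms");
    }
}
```

Timing the real hqx4x call with the same harness should show where the 100 ms actually goes.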


PostPosted: Mon Dec 07, 2015 12:26 am 
Offline
User avatar

Joined: Wed Nov 10, 2004 6:47 pm
Posts: 1845
IMO there's no reason any of these scaling algorithms should be run on the CPU side. They should all be done on the GPU using shaders. This kind of thing is exactly what the GPU is designed for; it'll do it virtually instantaneously and won't consume any CPU time.

I'd be very surprised if the Hqx scalers weren't available as GLSL shaders. I'd also be very surprised if you couldn't use Java to run shaders. My advice would be to figure out how to run them that way.


PostPosted: Mon Dec 07, 2015 12:36 am 
Offline
Formerly 65024U

Joined: Sat Mar 27, 2010 12:57 pm
Posts: 2257
If it was built as a CPU-based rendering system, making it work through GPU shaders would be a hell of a change to the engine. I mean, I agree, and any emulator using a texture and OpenGL should do it that way, but any emulator doing it CPU-side should be fine too. An emulator is trivial to run on modern CPUs.


PostPosted: Mon Dec 07, 2015 12:45 am 
Offline
User avatar

Joined: Wed Nov 10, 2004 6:47 pm
Posts: 1845
3gengames wrote:
If it was made as a CPU-based rendering system, making it work through GPU shaders would be a hell of a change to the engine.


Eh.

The core of the emulator shouldn't know or care what it's rendering to; it should only care about emulating. The only change here would be to the display/rendering mechanism.

And if you're doing blitting or some other ancient form of software rendering -- this might just be the excuse you need to make your practices more modern.

It's almost 2016 for crying out loud. Software rendering these days reminds me of the ROM hackers who were still making DOS utilities in 2001.


(I'm very opinionated if you can't tell, but don't let that get to you. If you disagree and want to do a software renderer that's totally fine)


PostPosted: Mon Dec 07, 2015 12:48 am 
Offline
Formerly 65024U

Joined: Sat Mar 27, 2010 12:57 pm
Posts: 2257
It's 2016. CPUs are so powerful that emulating little dinky systems isn't worth the time of porting to OpenGL; the CPU is more than capable. To be honest, I think it's a waste of time: since you're going to have to modify the texture CPU-side anyway, it offers practically zero benefit.


PostPosted: Mon Dec 07, 2015 1:33 am 
Offline
User avatar

Joined: Wed Nov 10, 2004 6:47 pm
Posts: 1845
Then how do you explain the OP's problem? Software rendering is costing him 100 ms for a full frame.


Software rendering is fine for drawing a tiny 256x240 image, but nobody runs emulators at that resolution. They scale it up to fit monitors 20x that size. Doing that scaling CPU-side is, IMO, pointless and extremely inefficient, and with 4K on the horizon it's only going to become a bigger issue.

Just let the GPU do it. This is exactly what they're designed for.


PostPosted: Mon Dec 07, 2015 6:53 am 
Offline

Joined: Thu Oct 05, 2006 6:29 am
Posts: 911
Seems to me like you could just implement the scaling on the GPU side as a shader and keep the actual PPU emulation running on the CPU if you want to write as little new code as possible. Transferring the CPU-generated framebuffer to GPU memory should be fairly efficient if you use a PBO.
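The ping-pong idea a PBO exploits can be sketched CPU-side like this (hypothetical class, assuming an int[] framebuffer; a real PBO pair works the same way, with the driver reading one buffer asynchronously while you fill the other):

```java
// Ping-pong buffer pair: the emulator writes frame N into one buffer
// while the previous buffer is (conceptually) still being read by the
// GL driver -- the same overlap an OpenGL PBO gives you for uploads.
public class FramePingPong {
    private final int[][] buffers;
    private int writeIndex = 0;

    public FramePingPong(int pixels) {
        buffers = new int[][] { new int[pixels], new int[pixels] };
    }

    /** Buffer the emulator should render the next frame into. */
    public int[] writeBuffer() { return buffers[writeIndex]; }

    /** Buffer whose contents are ready to be handed to the GPU. */
    public int[] readBuffer() { return buffers[1 - writeIndex]; }

    /** Call once per frame, after rendering completes. */
    public void swap() { writeIndex = 1 - writeIndex; }
}
```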


PostPosted: Mon Dec 07, 2015 7:45 am 
Offline
User avatar

Joined: Mon Dec 29, 2014 1:46 pm
Posts: 750
Location: New York, NY
I'm all for doing this on the GPU, but I have no clue how to do that in Java.


PostPosted: Mon Dec 07, 2015 8:44 am 
Offline
User avatar

Joined: Wed Nov 10, 2004 6:47 pm
Posts: 1845
@mic_:

Yes, that is exactly what I'm suggesting. Create the emulated image (256 x 240 at 9 bits per pixel: 6 bits of 'NES color' plus 3 bits of emphasis) CPU-side, then transfer that raw image to the GPU and have the shader do the conversion to RGB and the scaling.
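Packing that 9-bit pixel is trivial; something like this (illustrative helpers, not from any particular emulator; in practice you'd store one pixel per byte or short and let the shader unpack):

```java
// Pack a NES pixel: 6-bit palette index in the low bits, 3-bit
// emphasis in the bits above it, 9 bits total.
public class NesPixel {
    static int pack(int color6, int emphasis3) {
        return ((emphasis3 & 0x7) << 6) | (color6 & 0x3F);
    }
    static int color(int packed)    { return packed & 0x3F; }
    static int emphasis(int packed) { return (packed >> 6) & 0x7; }
}
```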


@zeroone:

The keyword to search for here is "pixel shader". Find some Java tutorial which shows you how to load and use a pixel shader and get a basic graphic displaying with one... then you should be able to figure out how to plug Hqx into that.


EDIT: Here's the first one I found with a quick Google search:

http://wiki.lwjgl.org/wiki/GLSL_Shaders_with_LWJGL

Note that I'm not very familiar with Java, so I have no idea if it's any good or not. =/




EDIT 2:

It does appear that HQX is available as a shader, as well:

https://github.com/Armada651/hqx-shader ... ader-files


PostPosted: Mon Dec 07, 2015 9:09 am 
Offline
User avatar

Joined: Mon Dec 29, 2014 1:46 pm
Posts: 750
Location: New York, NY
Thanks for the links. This is a complicated topic involving multiple libraries that will take me quite a while to explore.


PostPosted: Mon Dec 07, 2015 10:02 am 
Offline

Joined: Sun Sep 19, 2004 11:12 pm
Posts: 19342
Location: NE Indiana, USA (NTSC)
Disch wrote:
Software rendering these days reminds me of the ROM hackers who were still making DOS utilities in 2001.

Probably because by early 2001, 32-bit DOS tools such as DJGPP were stable, MinGW might not yet have matured, and Visual C++ Express certainly hadn't shipped yet.

Disch wrote:
Software rendering is fine for drawing a tiny 256x240 image, but nobody runs emus at that res.

Is there still a huge bottleneck in getting the texture back to the CPU so that it can be compressed to video?


PostPosted: Mon Dec 07, 2015 11:33 am 
Offline
User avatar

Joined: Wed Nov 10, 2004 6:47 pm
Posts: 1845
tepples wrote:
Probably because by early 2001, 32-bit DOS tools such as DJGPP were stable, MinGW may not have yet matured, and Visual C++ Express certainly hadn't yet shipped.


I think it was more that they were familiar with DOS and didn't want to bother learning a new API.

Quote:
Is there still a huge bottleneck in getting the texture back to the CPU so that it can be compressed to video?


You shouldn't be pulling the image back to the CPU. That would be incredibly slow.

Video encoding is another topic entirely. For that, you would probably send the raw 256x240 9-bit image to another thread and have it do the encoding in parallel. But yeah, in that case, if you want to apply filters to the video, it would make more sense to do that in software.
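That handoff could look something like this (a sketch using a bounded queue; encode() is a stand-in for a real video encoder, and class names are made up):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hand raw frames to an encoder thread through a bounded queue so the
// emulation thread never stalls on the encoder; if the encoder falls
// behind, frames are dropped rather than blocking emulation.
public class FrameHandoff {
    private final BlockingQueue<int[]> queue = new ArrayBlockingQueue<>(8);

    /** Called from the emulation thread; never blocks. */
    public boolean submit(int[] frame) {
        return queue.offer(frame); // false = queue full, frame dropped
    }

    /** Run this on a dedicated encoder thread. */
    public void runEncoder() throws InterruptedException {
        while (true) {
            int[] frame = queue.take(); // blocks until a frame arrives
            encode(frame);
        }
    }

    private void encode(int[] frame) { /* real encoder goes here */ }
}
```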


EDIT: Or you could leave video encoding to capture programs like OBS so you don't have to bother building that into your emulator.

