The Icon Bar: General: OpenGL
 
  OpenGL
 
Simon Wilson Message #66796, posted by ksattic at 17:25, 13/6/2005
ksattic
Finally, an avatar!

Posts: 1291
I compiled the old RISC OS Mesa port on my Iyonix this weekend and ran a demo app. Was disappointed to see that the framerate was still shit, but impressed that Aemulor Pro ran the old 26-bit demo apps not too much slower than my nice 32-bit versions. :)

So, after a bit of Googling, I found a BeOS open source OpenGL-ish driver using Mesa and an NVidia backend...yes, it uses hardware acceleration.

http://haikunews.org/1050
http://web.inter.nl.net/users/be-hold/BeOS/NVdriver/

I've been messing around with overlays on the Iyonix for a while with varying degrees of success and I would like to put my graphics knowledge to use in getting hardware 3D running on the Iyonix. Can anyone here chip in any advice/help?

[yes, I realise what is involved with this task...but this is my area of expertise ;)]
 
Peter Naulls Message #66797, posted by pnaulls at 17:55, 13/6/2005, in reply to message #66796
Member
Posts: 317
I compiled the old RISC OS Mesa port on my Iyonix this weekend and ran a demo app. Was disappointed to see that the framerate was still s***, but impressed that Aemulor Pro ran the old 26-bit demo apps not too much slower than my nice 32-bit versions. :)

Well, if you want something newer, compiling the current Mesa/GL stuff is pretty straightforward - it's part of XFree86, of course. The build for this is in the GCCSDK autobuilder, although it does require some more work to complete successfully. This might be a good starting spot. I think you're already somewhat familiar with GCCSDK stuff. You will need to make a few build option changes to build Mesa, from memory.
 
Adrian Lees Message #66875, posted by adrianl at 00:34, 15/6/2005, in reply to message #66796
Member
Posts: 1637
I compiled the old RISC OS Mesa port on my Iyonix this weekend and ran a demo app.
Great.

Was disappointed to see that the framerate was still s***, but impressed that Aemulor Pro ran the old 26-bit demo apps not too much slower than my nice 32-bit versions. :)
That'll be because you were using the StrongARM engine and most of the execution time is spent in tight plotting loops which execute natively, or FP emulation (also native).

So, after a bit of Googling, I found a BeOS open source OpenGL-ish driver using Mesa and an NVidia backend
Cool. :E

I've been messing around with overlays on the Iyonix for a while with varying degrees of success and I would like to put my graphics knowledge to use in getting hardware 3D running on the Iyonix.
I have working overlay code in Cino now, though I'm still not too happy about using it to be honest because the colours aren't right, and I don't think they ever will be whilst the R/B crossover is still present :|

Can anyone here chip in any advice/help?
Well, I can certainly contribute the low-level knowledge that's needed, ARM code, Iyonix hardware, DMA and nVIDIA access.
 
Jeffrey Lee Message #66899, posted by Phlamethrower at 11:18, 17/6/2005, in reply to message #66796
Phlamethrower
Hot Hot Hot Hot Hot Hot Hot Hot Hot Hot Hot Hot Hot stuff

Posts: 15100
So, after a bit of Googling, I found a BeOS open source OpenGL-ish driver using Mesa and an NVidia backend...yes, it uses hardware acceleration.
Is that *proper* open source or does it still rely on nVidia's precompiled binaries?
 
Simon Wilson Message #66904, posted by ksattic at 16:58, 17/6/2005, in reply to message #66899
ksattic
Finally, an avatar!

Posts: 1291
Well, if you want something newer, compiling the current Mesa/GL stuff is pretty straightforward - it's part of XFree86, of course.
I think for now I'll have a play with the hardware acceleration and focus on a newer API later.

I have working overlay code in Cino now, though I'm still not too happy about using it to be honest because the colours aren't right, and I don't think they ever will be whilst the R/B crossover is still present :|

How does Viewfinder do it? I doubt it uses SW R/B swapping as the penalty would be large. Perhaps ATI cards support more types of byte ordering.

Well, I can certainly contribute the low-level knowledge that's needed, ARM code, Iyonix hardware, DMA and nVIDIA access.
Thanks!

Is that *proper* open source or does it still rely on nVidia's precompiled binaries?
No pre-compiled stuff from what I can see. I don't think NVidia ever did anything for BeOS anyway, and I don't know how it goes hacking Linux binaries to work on BeOS.
 
Adrian Lees Message #66962, posted by adrianl at 23:29, 20/6/2005, in reply to message #66904
Member
Posts: 1637
How does Viewfinder do it? I doubt it uses SW R/B swapping as the penalty would be large. Perhaps ATI cards support more types of byte ordering.
It probably has custom hardware between the podule bus and the ATI card which can be programmed to do the R/B swapping on all read/write transfers to/from the frame buffer, so that the card itself operates exactly as it does in a PC.

At least, that's the way I would have done it ;)
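The crossover being described here amounts to a red/blue channel swap on every pixel that crosses the bus. As a rough illustration (assuming 32-bit 0xXXRRGGBB words; the exact pixel format is an assumption), the glue hardware would be doing the equivalent of:

```c
#include <assert.h>
#include <stdint.h>

/* Swap the red and blue channels of one 32-bit 0xXXRRGGBB pixel.
   Hypothetical glue logic between the podule bus and the card would
   apply this to every frame-buffer transfer, letting the card itself
   keep its native byte order. */
static uint32_t swap_rb(uint32_t px)
{
    uint32_t r = (px >> 16) & 0xFFu;
    uint32_t b =  px        & 0xFFu;
    return (px & 0xFF00FF00u) | (b << 16) | r;
}
```

Doing this per pixel in software on every transfer is exactly the penalty Simon suspects would be large, which is why dedicated hardware is the attractive option.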
 
Simon Wilson Message #67055, posted by ksattic at 19:47, 26/6/2005, in reply to message #66796
ksattic
Finally, an avatar!

Posts: 1291
I just got 2D accelerated rectangles working...I know this isn't particularly impressive as it's been done before, but it's proof that I am talking to the NVidia card in the Iyonix (in particular talking to the FIFOs correctly), and a good stepping stone on the way to the 3D driver.

Are rectangles drawn from BASIC accelerated in this way under the bonnet? I'll do some timings to see how much quicker they are.

Is there a 2D acceleration module in development out there by anyone? Something that would provide accelerated sprite caching and plotting routines for games. If not, I might write one.
 
Adrian Lees Message #67058, posted by adrianl at 00:14, 27/6/2005, in reply to message #67055
Member
Posts: 1637
I just got 2D accelerated rectangles working...
Great! :)

Are rectangles drawn from BASIC accelerated in this way under the bonnet?
Yes, BASIC just calls the OS and it's the OS's VDU drivers that pass rectangle copy/fill operations to the graphics driver. The nVIDIA module only accelerates copies and solid rectangle fills though afaik; not inversions, for example.

Is there a 2D acceleration module in development out there by anyone? Something that would provide accelerated sprite caching and plotting routines for games. If not, I might write one.
It's one of the things that Geminus aims to do; I have part-written, basically working code but it does cause occasional interesting screen corruptions, hence it hasn't been released yet.

You might remember the redraw cacheing prototype (accelerating the redraw of the ArtWorks Apple file, for example) that I posted in 'Programming' a while ago; well, that code found its way into Geminus too.
 
Simon Wilson Message #67059, posted by ksattic at 00:46, 27/6/2005, in reply to message #67058
ksattic
Finally, an avatar!

Posts: 1291
Yes, BASIC just calls the OS and it's the OS's VDU drivers that pass rectangle copy/fill operations to the graphics driver. The nVIDIA module only accelerates copies and solid rectangle fills though afaik; not inversions, for example.
I have inversions working and tested. Perhaps I should speak to John Ballance about this.

You might remember the redraw cacheing prototype (accelerating the redraw of the ArtWorks Apple file, for example) that I posted in 'Programming' a while ago; well, that code found its way into Geminus too.
Yep, I remember that - the icon cacheing was cool too. So is Geminus useful for people who don't want to use dual displays? If so, I will look into it.
 
Adrian Lees Message #67060, posted by adrianl at 01:20, 27/6/2005, in reply to message #67059
Member
Posts: 1637
I have inversions working and tested. Perhaps I should speak to John Ballance about this.
Perhaps you should sell it to them? :P

So is Geminus useful for people who don't want to use dual displays? If so, I will look into it.
So far I've released just two independent features - multi-screen support and screen rotation. Accelerations will be released later, along with some other stuff too. There'll be a demo of the acceleration stuff, of course.

Since the features interact - eg. acceleration needs to know whether the screen has been rotated into portrait orientation - I felt it best to include them all in one product, with just the desired features being purchased.

Geminus isn't intended to do 3D graphics, however, just to enhance the desktop in various ways. I'd love to see OpenGL/Mesa ported to the Iyonix but I think that's a separate project. ;)
 
Simon Wilson Message #67061, posted by ksattic at 01:28, 27/6/2005, in reply to message #67060
ksattic
Finally, an avatar!

Posts: 1291
I'd love to see OpenGL/Mesa ported to the Iyonix but I think that's a separate project. ;)
Working on it... :)
 
Phil Mellor Message #67062, posted by monkeyson2 at 10:37, 27/6/2005, in reply to message #67061
monkeyson2
Please don't let them make me be a monkey butler

Posts: 12380
This thread is all very exciting :)
 
Simon Wilson Message #67066, posted by ksattic at 15:56, 27/6/2005, in reply to message #67062
ksattic
Finally, an avatar!

Posts: 1291
This thread is all very exciting :)
Damn my day job!!!

I can't wait till we have Quake 2 running on the Iyonix, hardly breaking a sweat. ;) Someone needs to port GL Quake, tho.
 
Leo White Message #67068, posted by Leo at 16:44, 27/6/2005, in reply to message #67055
Member
Posts: 7
Hi,
I just got 2D accelerated rectangles working...I know this isn't particularly impressive as it's been done before, but it's proof that I am talking to the NVidia card in the Iyonix (in particular talking to the FIFOs correctly), and a good stepping stone on the way to the 3D driver.
I was thinking of looking at this to speed up the redraw of a certain web browser (currently in development) when running on an Iyonix. I looked at the BeOS driver but don't currently have time to work out what it's up to :(

Don't suppose you've spotted how to speed up the rendering of bitmaps? Especially bitmaps with an alpha channel?

Leo
 
Simon Wilson Message #67070, posted by ksattic at 17:06, 27/6/2005, in reply to message #67068
ksattic
Finally, an avatar!

Posts: 1291
I was thinking of looking at this to speed up the redraw of a certain web browser (currently in development) when running on an Iyonix. I looked at the BeOS driver but don't currently have time to work out what it's up to :(
The main problem here is the speed at which data can be transferred to the screen once it is generated by the web browser. I believe Adrian wrote some routines to do fast memory transfers for image data.

The NVidia card could be used to speed up the rendering of solid blocks of colour, like rectangles or scanlines, but the OS already speeds up these types of shapes for us. The other thing it could do is buffer commonly used sprites, like backgrounds or even entire windows for fast scrolling. I do believe it can support 8 bit alpha channels too, but I haven't yet found out how to use those in 2D.

Don't suppose you've spotted how to speed up the rendering of bitmaps? Especially bitmaps with an alpha channel?
Yep, bitmaps can be copied into graphics card memory and then redrawn as many times as you like, very quickly.
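As a toy model of the sprite-caching idea described above (the "card memory" below is just another CPU-side buffer, and all names are illustrative - a real upload would go over PCI):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum { SPR_W = 4, SPR_H = 4, SCR_W = 16 };

/* Upload: copy the sprite into (simulated) card memory once. */
static void cache_sprite(uint32_t *card_mem, const uint32_t *src)
{
    memcpy(card_mem, src, SPR_W * SPR_H * sizeof *src);
}

/* Blit: replay the cached copy into the frame buffer; on real
   hardware this would be a card-side copy with no per-pixel CPU work. */
static void blit_cached(uint32_t *fb, const uint32_t *card_mem, int x, int y)
{
    for (int row = 0; row < SPR_H; row++)
        memcpy(&fb[(y + row) * SCR_W + x],
               &card_mem[row * SPR_W],
               SPR_W * sizeof *fb);
}
```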
 
Leo White Message #67074, posted by Leo at 11:43, 28/6/2005, in reply to message #67070
Member
Posts: 7
I was thinking of looking at this to speed up the redraw of a certain web browser (currently in development) when running on an Iyonix. I looked at the BeOS driver but don't currently have time to work out what it's up to :(
The main problem here is the speed at which data can be transferred to the screen once it is generated by the web browser. I believe Adrian wrote some routines to do fast memory transfers for image data.
I've certainly seen that mentioned for Cino, to speed up DVD playback. Such functionality would be useful to have, but it would need to be able to handle the source and destination being of different colour depths (i.e. rendering an 8-bit image to a 32-bit display). Otherwise the potential performance gains are slightly limited.


The NVidia card could be used to speed up the rendering of solid blocks of colour, like rectangles or scanlines, but the OS already speeds up these types of shapes for us.
Whilst it is true that the OS does speed up the rendering of primitive types, the various OS calls that perform the operation appear to be blocking. In tests I did at the start of the month, rendering a solid rectangle takes more time in a 32-bit display mode than it does in an 8-bit display mode.
My guess is that the OS waits until the graphics card has completed the operation before returning from the SWI. I can see why this is needed, as most programs are written to assume that by the time the SWI call returns, the render operation has been completed. But it does mean the CPU is sitting around idle for quite some time.
What I would like to be able to do is request a rectangle to be drawn, and then have the main CPU carry on working out where to draw the next rectangle whilst the graphics card is busy performing the work. This way the CPU can get on with something useful.
Obviously you would need to ensure all graphics operations have been completed before calling Wimp_Poll, but the overall waiting time should be reduced.
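That submit-now, wait-later idea can be sketched as a small command queue: submitting records the operation and returns at once, and a single flush before Wimp_Poll waits for everything. All names here are illustrative pseudo-driver code, not a real RISC OS API:

```c
#include <assert.h>
#include <stdint.h>

enum { QMAX = 64 };

struct fill_cmd { int x0, y0, x1, y1; uint32_t colour; };

static struct fill_cmd queue[QMAX];
static int queued = 0;
static int completed = 0;   /* stands in for "GPU has finished" */

/* Non-blocking: just record the rectangle and return immediately,
   leaving the CPU free to work out the next one. */
static void submit_fill(int x0, int y0, int x1, int y1, uint32_t colour)
{
    if (queued < QMAX)
        queue[queued++] = (struct fill_cmd){ x0, y0, x1, y1, colour };
}

/* Blocking: drain the queue. An application would call this once,
   just before Wimp_Poll, instead of waiting per rectangle. */
static int flush_fills(void)
{
    completed += queued;
    queued = 0;
    return completed;
}
```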


The other thing it could do is buffer commonly used sprites, like backgrounds or even entire windows for fast scrolling. I do believe it can support 8 bit alpha channels too, but I haven't yet found out how to use those in 2D.
What I would like to be able to do is to render all output into an offscreen buffer (held in the graphics card memory) and then copy that to the display. That provides double-buffered output, which means you never see the application redrawing, removing any flickering caused by animated images, Flash etc.
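A minimal model of that offscreen scheme - draw to a back buffer, then present the finished frame in one copy (on real hardware the present would ideally be a card-side transfer):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum { W = 8, H = 8 };

static uint32_t display[W * H];   /* what the user sees */
static uint32_t back[W * H];      /* offscreen, "in card memory" */

/* All drawing goes to the back buffer; the display is untouched,
   so intermediate redraw states are never visible. */
static void draw_pixel(int x, int y, uint32_t c) { back[y * W + x] = c; }

/* One copy presents the finished frame - no flicker from animated
   images, Flash and so on. */
static void present(void) { memcpy(display, back, sizeof back); }
```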

Leo
 
Leo White Message #67075, posted by Leo at 11:48, 28/6/2005, in reply to message #67070
Member
Posts: 7

Don't suppose you've spotted how to speed up the rendering of bitmaps? Especially bitmaps with an alpha channel?
Yep, bitmaps can be copied into graphics card memory and then redrawn as many times as you like, very quickly.
This would certainly be a useful feature to have, but I think the biggest performance gains would come from hardware support for the alpha channel, because at present, when rendering image data with alpha (PNGs, text), we have to read the current contents of the display to work out what the final pixel colour will be. And reading from the graphics card memory is supposed to be very slow.

This sort of functionality is provided on the PlayStation 2, and does really speed up the display of text etc.
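The read-modify-write being described is the usual "source over destination" blend; per channel it works out as result = (src*a + dst*(255-a))/255, with alpha in 0..255. A sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Blend one 8-bit channel: result = src*a + dst*(1 - a), alpha in
   0..255. The dst read is the expensive part when the destination
   lives in graphics card memory. */
static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t a)
{
    return (uint8_t)((src * a + dst * (255 - a)) / 255);
}
```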

Leo
 
Simon Wilson Message #67076, posted by ksattic at 15:03, 28/6/2005, in reply to message #67074
ksattic
Finally, an avatar!

Posts: 1291
I've certainly seen that mentioned for Cino, to speed up DVD playback. Such functionality would be useful to have, but it would need to be able to handle the source and destination being of different colour depths (i.e. rendering an 8-bit image to a 32-bit display).
I wouldn't be surprised if the NVidia card could do some depth conversions in hardware. I just don't know how yet. ;)

What I would like to be able to do is request a rectangle to be drawn, and then have the main CPU carry on working out where to draw the next rectangle whilst the graphics card is busy performing the work. This way the CPU can get on with something useful.
This is how my 2D stuff works - the CPU does not wait for the GPU to finish after pushing it the graphics operation, leaving the CPU to queue up more graphics operations or do other work.

What I would like to be able to do is to render all output into an offscreen buffer (Held in the Graphics card memory) and the copy that to the display.
I think you can just set up two screen banks in the usual way and then draw with my accelerated routines to the back buffer, before switching. Or, you could allocate the back buffer yourself in graphics card memory (placing it away from anything else).

This would certainly be a useful feature to have, but I think the biggest performance gains would come from hardware support for the alpha channel, because at present, when rendering image data with alpha (PNGs, text), we have to read the current contents of the display to work out what the final pixel colour will be. And reading from the graphics card memory is supposed to be very slow.
The GPU supports various raster operations (ROPs), which allow you to do stuff like this (making changes to the frame buffer based on the old contents).
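A ROP computes the new frame-buffer value as a fixed boolean function of the incoming source and the old destination; the inversion case mentioned earlier in the thread is just XOR with all-ones over the RGB channels. Illustrative C:

```c
#include <assert.h>
#include <stdint.h>

/* Two classic raster ops: the new frame-buffer value is a boolean
   function of the source and the old destination. */
static uint32_t rop_copy(uint32_t src, uint32_t dst) { (void)dst; return src; }
static uint32_t rop_xor (uint32_t src, uint32_t dst) { return src ^ dst; }

/* Inversion is XOR with all-ones over the RGB channels, so applying
   it twice restores the original pixel. */
static uint32_t invert(uint32_t dst) { return rop_xor(0x00FFFFFFu, dst); }
```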
 
Leo White Message #67080, posted by Leo at 15:17, 29/6/2005, in reply to message #67076
Member
Posts: 7
I've certainly seen that mentioned for Cino, to speed up DVD playback. Such functionality would be useful to have, but it would need to be able to handle the source and destination being of different colour depths (i.e. rendering an 8-bit image to a 32-bit display).
I wouldn't be surprised if the NVidia card could do some depth conversions in hardware. I just don't know how yet. ;)

I would have been surprised if it wasn't supported; after all, storing an 8-bit image with a 256-colour palette uses much less memory than storing a 32-bit image that only uses 256 colours.


What I would like to be able to do is request a rectangle to be drawn, and then have the main CPU carry on working out where to draw the next rectangle whilst the graphics card is busy performing the work. This way the CPU can get on with something useful.
This is how my 2D stuff works - the CPU does not wait for the GPU to finish after pushing it the graphics operation, leaving the CPU to queue up more graphics operations or do other work.
Does your 2D code allow the CPU to put together a list of graphics operations and then send them to the GPU in one go? Or is there no real performance difference between sending multiple requests over the PCI bus, compared to sending occasional blocks of requests?

What I would like to be able to do is to render all output into an offscreen buffer (Held in the Graphics card memory) and the copy that to the display.
I think you can just set up two screen banks in the usual way and then draw with my accelerated routines to the back buffer, before switching. Or, you could allocate the back buffer yourself in graphics card memory (placing it away from anything else).

When I manage to find some free time I will have to work out how to do that. I've not yet looked at how to play with PCI devices on an Iyonix.


This would certainly be a useful feature to have, but I think the biggest performance gains would come from hardware support for the alpha channel, because at present, when rendering image data with alpha (PNGs, text), we have to read the current contents of the display to work out what the final pixel colour will be. And reading from the graphics card memory is supposed to be very slow.
The GPU supports various raster operations (ROPs), which allow you to do stuff like this (making changes to the frame buffer based on the old contents).
Well, once I've found the time to work out how to prod the GPU, I'll start looking at this sort of stuff to see what code would need to be added.

Leo
 
Simon Wilson Message #67085, posted by ksattic at 19:32, 29/6/2005, in reply to message #67080
ksattic
Finally, an avatar!

Posts: 1291
Does your 2D code allow the CPU to put together a list of graphics operations and then send them to the GPU in one go? Or is there no real performance difference between sending multiple requests over the PCI bus, compared to sending occasional blocks of requests?
My code isn't that generic yet, but yes, you can potentially queue up a list of operations and send them to the GPU in one go. At the moment, the CPU sends the commands in chunks, one chunk for each operation.

I've done some performance measurements of such a system on other hardware, but not the Iyonix yet. It is certainly better if you can keep the GPU busy with rendering operations while doing other stuff on the CPU.
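The chunk-versus-single-write trade-off is mostly about bus transactions: batching n command words into bursts of up to cap words costs ceil(n/cap) transactions instead of n single-word writes. A trivial sketch of that arithmetic:

```c
#include <assert.h>

/* If the FIFO accepts up to `cap` command words per burst, batching
   n words costs ceil(n / cap) bus transactions instead of n
   single-word writes. */
static int bursts(int n_words, int cap)
{
    return (n_words + cap - 1) / cap;
}
```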

When I manage to find some free time I will have to work out how to do that. I've not yet looked at how to play with PCI devices on an Iyonix.
You can set up two screen banks in the usual way and switch with OS_Byte 112 and 113 (ask if you need more info).
 
Leo White Message #67112, posted by Leo at 13:16, 30/6/2005, in reply to message #67085
Member
Posts: 7
Does your 2D code allow the CPU to put together a list of graphics operations and then send them to the GPU in one go? Or is there no real performance difference between sending multiple requests over the PCI bus, compared to sending occasional blocks of requests?
My code isn't that generic yet, but yes, you can potentially queue up a list of operations and send them to the GPU in one go. At the moment, the CPU sends the commands in chunks, one chunk for each operation.

I've done some performance measurements of such a system on other hardware, but not the Iyonix yet. It is certainly better if you can keep the GPU busy with rendering operations while doing other stuff on the CPU.
Well certainly sending one chunk per operation is fine, as that is still going to be lots faster than performing the operation manually.
One way I've done it before on another platform is to use the CPU to build up a list of operations, fire off a DMA request to send that list to the GPU, then, whilst the DMA is in progress, start to build up the next list of operations in a different block of memory.
Course I've no idea if you can use DMA to send operations to the Nvidia GPU.
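That build-one-list-while-the-other-is-in-flight scheme is double buffering of command lists. A sketch, with the DMA "kick" reduced to swapping buffers (no real hardware access; all names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

enum { NBUF = 2, LIST_MAX = 256 };

static uint32_t lists[NBUF][LIST_MAX];
static int      len[NBUF];
static int      building = 0;   /* list the CPU is currently filling */

/* Append one command word to the list under construction. */
static void emit(uint32_t word)
{
    if (len[building] < LIST_MAX)
        lists[building][len[building]++] = word;
}

/* "Kick" the current list to the GPU (here: just hand it over) and
   start building the other one while that DMA is in flight.
   Returns the index of the list that was sent. */
static int kick(void)
{
    int sent = building;
    building ^= 1;
    len[building] = 0;
    return sent;
}
```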


When I manage to find some free time I will have to work out how to do that. I've not yet looked at how to play with PCI devices on an Iyonix.
You can set up two screen banks in the usual way and switch with OS_Byte 112 and 113 (ask if you need more info).
I've not had to play around with using screen banks for years... though I don't think I should be using them in a WIMP application.

Leo
 
Simon Wilson Message #67114, posted by ksattic at 16:39, 30/6/2005, in reply to message #67112
ksattic
Finally, an avatar!

Posts: 1291
Well, I've got Mesa 3.2.1 building and the NVidia 2D and 3D drivers too...just got to put everything together now. At this point I still haven't run a 3D test, so who knows what'll happen. :o
 
Phil Mellor Message #67117, posted by monkeyson2 at 18:27, 30/6/2005, in reply to message #67114
monkeyson2
Please don't let them make me be a monkey butler

Posts: 12380
Well, I've got Mesa 3.2.1 building and the NVidia 2D and 3D drivers too...just got to put everything together now. At this point I still haven't run a 3D test, so who knows what'll happen. :o
*explodes* :o
 
Adrian Lees Message #67122, posted by adrianl at 20:23, 30/6/2005, in reply to message #67114
Member
Posts: 1637
Well, I've got Mesa 3.2.1 building and the NVidia 2D and 3D drivers too...just got to put everything together now. At this point I still haven't run a 3D test, so who knows what'll happen. :o
When it freezes your Iyonix and you need some help debugging it, let me know! ;)
 
Simon Wilson Message #67129, posted by ksattic at 23:01, 30/6/2005, in reply to message #67122
ksattic
Finally, an avatar!

Posts: 1291
When it freezes your Iyonix and you need some help debugging it, let me know! ;)
Thanks! I will most certainly take you up on that offer, sir!
 
Adrian Lees Message #67131, posted by adrianl at 23:43, 30/6/2005, in reply to message #67129
Member
Posts: 1637
When it freezes your Iyonix and you need some help debugging it, let me know! ;)
Thanks! I will most certainly take you up on that offer, sir!
Aw c'mon. You're supposed to be reporting on rotating teapots by now :P
 
Simon Wilson Message #67132, posted by ksattic at 05:03, 1/7/2005, in reply to message #67131
ksattic
Finally, an avatar!

Posts: 1291
Aw c'mon. You're supposed to be reporting on rotating teapots by now :P
The teapot *will* be the first test! :)

Just getting DMA working for 2D right now...then I can link to the 3D driver. 3D needs a DMA buffer in PCI accessible RAM, but as I'm writing user-mode code and not a module, I need to change all the direct accesses to PCI memory into calls to some assembler. :( It's taking some time.

Edit: screw it - I'll just alter the memory map so I have access in user mode. ;)

[Edited by ksattic at 08:06, 1/7/2005]
 
Simon Wilson Message #67146, posted by ksattic at 17:01, 1/7/2005, in reply to message #67132
ksattic
Finally, an avatar!

Posts: 1291
Couldn't get DMA working last night. :( I actually suspect it might be working, but I lose all display output at that point. I might be able to make it render something under its own control of the NVidia card and then restore the previous configuration for RISC OS. Adrian - do you know if RISC OS 5 uses FIFO or DMA access for hardware acceleration? [DMA in that drawing operations are copied into PCI accessible main memory or the GPU's own memory and then the GPU is instructed to read drawing commands from the memory.]

Anyway, I found a version of the 3D driver that uses FIFOs instead of DMA for rendering, so I should still be able to get 3D going. I can try to enable DMA at some point in the future. I suspect there might not be a speed difference, given the CPU and memory speeds.
 
Phil Mellor Message #67147, posted by monkeyson2 at 17:06, 1/7/2005, in reply to message #67146
monkeyson2
Please don't let them make me be a monkey butler

Posts: 12380
Couldn't get DMA working last night. :( I actually suspect it might be working, but I lose all display output at that point.
Could you dump a screenshot to disc to test this?
 
Simon Wilson Message #67148, posted by ksattic at 17:16, 1/7/2005, in reply to message #67147
ksattic
Finally, an avatar!

Posts: 1291
Could you dump a screenshot to disc to test this?
I realised on the way to work this morning that I probably could have just tried drawing something. After calling the init routine, the display stopped working and I assumed at the time that I had done something wrong. I was probably just preventing RISC OS from accessing the screen.

I probably don't want to do this, though. This would prevent anyone from seeing OpenGL output in a window.

It was 3am when I got to that stage last night, so I'll try again this evening when I'm more alert. ;)

Everything seems to be going well so far. The OpenGL teapot with lighting renders in 3 seconds in a 1152x864x32 mode using software rendering. I am hoping eventually to get this down to 1/100th of a second using hardware. :o
 
