

*  Forum rules
Dear users,
we are sorry to inform you that we had to end development of FurryBall after almost ten years!
We will give FurryBall for free to all users, but without any maintenance or support.
Because we have to keep our servers working, we have set a symbolic price of 29 EUR per year.
We hope you will understand and keep using FurryBall if it fits your needs.
Thanks for all those nice years with GPU rendering.
Your FurryBall team

ymangolds
Post subject: Specify graphics card(s) utilized in globalsettings node  |  Posted: Sat Nov 14, 2015 2:38 am

Joined: Fri Aug 17, 2012 8:23 am
Posts: 89

I know this would probably be tricky to implement (maybe not even possible), but if it can be done, it would drastically speed up rendering of animation sequences and make FurryBall much more flexible when installed on machines with multiple types of graphics cards. :ugeek:

Back in the dark ages before frame buffering and GPU rendering, when render farms consisted of a network of multi-core machines controlled by a render manager, each channel (diffuse, spec, AO, etc.) had to be rendered out separately for compositing. Many of the channels rendered quickly per frame, and we discovered that an 8-core machine would only reach full utilization on the really slow channels (like indirect illumination/FG/GI). After some experimenting, we found that if we split each machine into 3 or 4 render clients (so a single machine could receive 3 or 4 jobs at the same time), the total time to render a sequence could be up to 3 times shorter. I mention this because my idea for FurryBall, while not identical, is similar in spirit.

Although a 2-GPU machine renders faster in FurryBall than a single-GPU machine, it does not render twice as fast. A 3-GPU machine renders faster than a 2-GPU machine, but the percentage increase is even smaller, and so on. So we can conclude that 2 machines with 1 GPU each will render a sequence faster than 1 machine with 2 GPUs, and 4 single-GPU machines will render a sequence MUCH faster than 1 machine with 4 GPUs. The effect is even more pronounced for a machine with several GPUs of different types versus the same cards spread across multiple machines.

Using the NVIDIA GeForce GTX TITAN (6144 MB) and "High Raytracing" from your benchmarking page, here are some rough statistics (there are other factors besides the number of GPUs, so this is just a rough calculation):

1 GPU: 239.81 s
2 GPU: 150.65 s
3 GPU: 115.14 s
4 GPU: 86.02 s
6 GPU: 82.93 s
7 GPU: 76.33 s

Time it would take to render a 1000-frame sequence:
1 GPU: 239810 s (66 hours 36 minutes 50 seconds)
2 GPU: 150650 s (41 hours 50 minutes 50 seconds)
3 GPU: 115140 s (31 hours 59 minutes 0 seconds)
4 GPU: 86020 s (23 hours 53 minutes 40 seconds)
6 GPU: 82930 s (23 hours 2 minutes 10 seconds)
7 GPU: 76330 s (21 hours 12 minutes 10 seconds)

Time it would take with multiple machines with 1 GPU each:
1 Machine: 239810 s (66 hours 36 minutes 50 seconds)
2 Machine: 119905 s (33 hours 18 minutes 25 seconds)
3 Machine: 79936 s (22 hours 12 minutes 16 seconds)
4 Machine: 59952 s (16 hours 39 minutes 12 seconds)
6 Machine: 39968 s (11 hours 6 minutes 8 seconds)
7 Machine: 34258 s (9 hours 30 minutes 58 seconds)

Speed increase from multiple single-GPU machines vs. multiple GPUs in a single machine:
1: Equal
2: 1.256 times faster
3: 1.440 times faster
4: 1.435 times faster
6: 2.075 times faster
7: 2.228 times faster

In other words, the speed increase is obvious as soon as we compare 2 GPUs, and by the time we compare 6 GPUs, it takes less than HALF THE TIME to render :shock: ! Although those numbers were taken from benchmarks of 4.8, I imagine similar results would come from benchmarks of RT.
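The ratios above can be reproduced from the quoted per-frame benchmark times; this is just a sketch of the arithmetic, using the numbers from the tables:

```python
# Per-frame benchmark times quoted above (seconds, GTX TITAN, "High Raytracing").
single_gpu = 239.81
multi_gpu = {1: 239.81, 2: 150.65, 3: 115.14, 4: 86.02, 6: 82.93, 7: 76.33}
frames = 1000

def speedup(n):
    """How much faster N single-GPU machines finish a sequence
    compared to N GPUs cooperating inside one machine."""
    one_machine = multi_gpu[n] * frames   # all GPUs work on each frame together
    n_machines = single_gpu * frames / n  # frames divided across N machines
    return one_machine / n_machines

for n in multi_gpu:
    print(f"{n}: {speedup(n):.3f}x faster as separate machines")
```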

Now that I've stated the obvious, you're probably wondering where I'm going with this. So to get to the point...

:idea: In theory, similar overall speed increases on sequence rendering could be achieved on a single machine with multiple GPUs, using the same technique from my example of splitting a machine into several render clients to achieve maximum CPU utilization. If the GPU(s) to be used can be specified dynamically at render time (for example via an option in the FurryBallGlobal node), a render manager could treat a machine as multiple render clients and fire off multiple render processes, each with its own designated GPU. This would have an effect similar to rendering on multiple single-GPU machines. Although the results might not be identical to multiple machines (there are more bottlenecks, like the CPU and RAM needed when loading and rendering with several separate processes), there would definitely be a speed increase, and a significant one at that.
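A rough sketch of what a render manager could do with such an option, treating one 4-GPU box as four single-GPU clients. The `-furryballGpu` flag is invented for illustration (the whole point of this request is that no such option exists yet); the frame-range splitting mirrors Maya's standard batch-render arguments:

```python
import subprocess

def frame_chunks(first, last, n):
    """Split the frame range [first, last] into n contiguous chunks."""
    total = last - first + 1
    base, extra = divmod(total, n)
    chunks, start = [], first
    for i in range(n):
        size = base + (1 if i < extra else 0)
        chunks.append((start, start + size - 1))
        start += size
    return chunks

def launch_per_gpu(scene, first, last, gpus):
    """Spawn one batch-render process per GPU, each on its own frame chunk.
    "-furryballGpu" is the hypothetical per-process GPU selector requested here."""
    procs = []
    for gpu, (s, e) in zip(gpus, frame_chunks(first, last, len(gpus))):
        procs.append(subprocess.Popen(
            ["Render", "-r", "furryball", "-s", str(s), "-e", str(e),
             "-furryballGpu", str(gpu), scene]))
    return procs

# Example (requires Maya's Render on the PATH and the hypothetical flag):
# for p in launch_per_gpu("scene.mb", 1, 1000, [0, 1, 2, 3]):
#     p.wait()
```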

Additional benefits:
  • Each render process would use the memory on its designated GPU, so it would drastically increase the total amount of GPU memory utilized.
  • If GPU selection can be set dynamically, it would be easy to use all available GPUs when working on a scene, or to do things like reserving 1 GPU/render client that the render manager won't use, so you could still work on setting up the next scene while the previous one renders.
  • Machines with different GPUs would no longer be limited by their weakest card; each card could render at its full potential.
  • Almost all of FurryBall's customers would benefit from this, since anyone serious about FurryBall uses machines with multiple GPUs and renders sequences/animations (people who only render single images usually don't mind if 1 frame takes 3 hours).


Jan_Tomanek
Post subject: Re: Specify graphics card(s) utilized in globalsettings node  |  Posted: Sun Nov 15, 2015 4:52 pm

Joined: Tue Oct 27, 2009 11:03 pm
Posts: 1607
Location: Prague - Czech republic

Hi,
thanks for your nice test and suggestions.

You are totally right, it is not very effective to use more than 3-4 GPUs in one computer.
We also suggest NOT using more than 3-4 GPUs in one computer. (http://furryball.aaa-studio.eu/support/ ... Renderfarm)
It's the same as with MULTI CPU: 4x CPUs on one motherboard are not the same as 4x single workstations with the same CPU. Unfortunately this will NEVER be linear and there is ALWAYS some overhead. The overhead grows bigger and bigger, and at some point it even becomes slower to add another core to the system. :cry:

It is also not possible to split your GPU between different tasks, because of the GPU architecture, etc. :cry:

For rendering sequences we recommend splitting your ULTRA-MEGA-CORE ;) computer into 2-3 computers with 2-3 GPUs inside. If you split your sequence with a render manager into sub-sequences 1-10, 10-20, 20-30... one for each computer, the result will be the fastest.

_________________
Thank you for contacting us.
If you have any further questions, please do not hesitate to contact us.

All the Best
Jan


ymangolds
Post subject: Re: Specify graphics card(s) utilized in globalsettings node  |  Posted: Fri Jan 01, 2016 8:58 pm

Joined: Fri Aug 17, 2012 8:23 am
Posts: 89

Jan_Tomanek wrote:
It is also not possible to split your GPU between different tasks, because of the GPU architecture, etc. :cry:

I think you might have misunderstood what I was suggesting.
While writing an FB plugin for a render manager, I discovered that Redshift (a GPU renderer similar to FB) has attributes in its render node that let you specify how many graphics cards to use (or specify the exact cards to use). This allows what I was suggesting in 2 ways (one of which I'm not sure would work, and another which I'm certain would work):
  1. When submitting a job to the manager, specify that it use 1 GPU. Then set a machine (4 GPUs) to allow taking 4 concurrent tasks.
    -Not sure this will work, since it depends on each mayaBatch process automatically selecting a card that the other processes aren't using (i.e. the GPU with the lowest load).
    -Edit: This does work in Deadline. I found code in the mayaBatch plugin that makes sure threads use different GPUs than each other.
  2. Configure a machine (4 GPUs) to register itself as 4 different clients to the manager. Then add a custom pre-render script to each client that specifies the specific card(s) to be used.
    -I'm certain this will work, and it's exactly what I was suggesting in my first post.
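As a sketch of approach 2, a pre-render script on each client could map the client's name to a GPU index and restrict the process to that card. Everything here is an assumption for illustration: the `workstation-N` naming scheme and the `RENDER_CLIENT_NAME` variable are invented, and whether FurryBall honors `CUDA_VISIBLE_DEVICES` (as many CUDA renderers do) would need to be confirmed.

```python
import os

def gpu_for_client(client_name):
    """Map a client name like 'workstation-2' to GPU index 2. The trailing
    '-N' naming convention is invented for this sketch, not anything a real
    render manager defines."""
    return int(client_name.rsplit("-", 1)[1])

# Hypothetical: the render manager exposes the client's name in an env var.
client = os.environ.get("RENDER_CLIENT_NAME", "workstation-0")
# Assumption: the renderer, like many CUDA applications, honors this variable
# and will only see (and use) the one listed device.
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_for_client(client))
```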

Also, for theoretical argument's sake: if someone was using FurryBall as well as another GPU renderer (like Redshift) at the same time on the same machine (4 GPUs), it would be easy, since you could tell FB to use GPUs 1 and 2, and Redshift to use GPUs 3 and 4. Then load 2 instances of Maya (one with the FB scene and the other with the Redshift scene) and set each one to render. So in theory this should be possible using just FB, if we were able to specify the GPUs to use in a scene's render settings.

Edit:
Just found the Redshift FAQ (https://www.redshift3d.com/support/faq#question552), which confirms that it's capable of what I was suggesting:
Quote:
Which render managers support Redshift?
RoyalRender, Deadline and RenderPal all have native Redshift support. Apart from the default Redshift behavior which is to use all GPUs to render a single frame, some of these render managers also offer the capability of rendering multiple frames at once (on the same computer), with each frame using a different combination of GPUs.


Jan_Tomanek
Post subject: Re: Specify graphics card(s) utilized in globalsettings node  |  Posted: Fri Jan 08, 2016 7:17 pm

Joined: Tue Oct 27, 2009 11:03 pm
Posts: 1607
Location: Prague - Czech republic

Hi,
thanks for the suggestion; we have this feature ready, but there is a little problem with licensing... ;)
There are computers like the NVIDIA VCA, for example (a huge system with 16 GPUs that can theoretically be used by 16 users at the same time)... it means that if we add this feature, 16 people could work on a single license.

We were thinking about adding this feature, but you would have to buy an extra license for EACH separate render task you want to run on a single computer.
In fact you would buy another license, and it would be up to you whether you activate it on another computer or use, for example, GPUs #3 and #4 as another pool for rendering.

Are you interested in this? Please let us know.

_________________

All the Best
Jan


ymangolds
Post subject: Re: Specify graphics card(s) utilized in globalsettings node  |  Posted: Mon Jan 11, 2016 8:57 pm

Joined: Fri Aug 17, 2012 8:23 am
Posts: 89

As long as it's flexible, I'd be interested, as would anyone else trying to do something similar, since most render management licensing works the same way (you need to pay for additional "clients" even if it's just one computer being used as multiple clients).

By flexible, I mean I don't want to be locked into predefined pools; I want to change pools as needed for the task at hand. For example, on CompA (4 GPUs): while working and setting up scenes without rendering, I'd want it to use all 4 GPUs. While working and rendering at the same time, I'd want the render to use 3 GPUs and the instance I'm working in to use 1 GPU. While only rendering, I'd want 2 pools of 2 GPUs.


Jan_Tomanek
Post subject: Re: Specify graphics card(s) utilized in globalsettings node  |  Posted: Mon Jan 11, 2016 11:20 pm

Joined: Tue Oct 27, 2009 11:03 pm
Posts: 1607
Location: Prague - Czech republic

Yes, I understand what you want, but as I wrote, this will not be possible, because in fact "more separate tasks and pools" can mean more users.
With a single FurryBall license you would be able to run 16 clients, each working in Maya with its own FurryBall. ;)

_________________

All the Best
Jan


ymangolds
Post subject: Re: Specify graphics card(s) utilized in globalsettings node  |  Posted: Tue Jan 12, 2016 1:18 am

Joined: Fri Aug 17, 2012 8:23 am
Posts: 89

A floating license server would solve that.


Jan_Tomanek
Post subject: Re: Specify graphics card(s) utilized in globalsettings node  |  Posted: Tue Jan 12, 2016 9:27 am

Joined: Tue Oct 27, 2009 11:03 pm
Posts: 1607
Location: Prague - Czech republic

How? It would still be the SAME HARDWARE for FurryBall!
Look at the NVIDIA VCA - http://www.nvidia.com/object/visual-com ... iance.html

Same hardware, same HWID, but 16 users at the same time.... 8-)

_________________

All the Best
Jan


ymangolds
Post subject: Re: Specify graphics card(s) utilized in globalsettings node  |  Posted: Tue Jan 12, 2016 11:04 pm

Joined: Fri Aug 17, 2012 8:23 am
Posts: 89

Because each user would have a license checked out before rendering starts. So if the floating license server has only 1 license, only 1 user can use FB at a time (when they're done, the license gets checked back in and another user can check it out and work with FB). If the studio has 2 licenses, then 2 people can work with FB at the same time, etc.

In other words: instead of linking a license to hardware, licenses would be linked to working sessions/processes.

Aside from being useful for multi-GPU machines, this is what larger studios prefer. If they have 20 workstations and 5 licenses on a floating license server, any 5 machines can work with FB at the same time (only as many machines as they have licenses), and they don't have to designate specific machines for FB.

https://en.wikipedia.org/wiki/Floating_licensing
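The checkout/check-in scheme above boils down to a counter on the license server; this is only a toy model of floating licensing for illustration, not any real license-server protocol:

```python
class LicensePool:
    """Toy model of a floating license server: a fixed pool of licenses,
    checked out per working session rather than tied to hardware."""

    def __init__(self, total):
        self.total = total
        self.checked_out = 0

    def checkout(self):
        """Try to claim a license for a session; False means all are in use."""
        if self.checked_out >= self.total:
            return False
        self.checked_out += 1
        return True

    def checkin(self):
        """Return a license to the pool when the session ends."""
        if self.checked_out > 0:
            self.checked_out -= 1
```

With 5 licenses, any 5 of the 20 workstations can check one out; the 6th request simply fails until someone checks a license back in.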


Jan_Tomanek
Post subject: Re: Specify graphics card(s) utilized in globalsettings node  |  Posted: Wed Jan 13, 2016 9:48 am

Joined: Tue Oct 27, 2009 11:03 pm
Posts: 1607
Location: Prague - Czech republic

Thanks, I know what a floating license is, but you can't license by "username": 16 users can log in under the same username... :(

It is possible to give licenses to a "number of clients", and that's usually distinguished by different MAC, IP, or whole hardware. Here there will be the SAME hardware and the SAME IP...

We currently offer unlimited GPUs per computer...

You are right that we could sell clients as sessions, but then it would not be unlimited GPUs; instead SESSION = GPU. In this scenario you would have 4 clients (sessions/processes), and you could choose whether to use them as 4 GPUs on one task, or as 4x single tasks, or even as 4x different users on 4x different computers. But in this scenario we would NOT have unlimited GPUs per machine.

_________________

All the Best
Jan

