29 Comments

  • chandan1014 - Monday, November 14, 2011 - link

    Can NVIDIA give us at least a timeline for when to expect Optimus for desktops? Current Sandy Bridge processors have a powerful enough graphics unit for general computing, and I'd really like to see, say, a GTX 580 power down when I'm surfing the web.
  • GoodBytes - Monday, November 14, 2011 - link

    You don't want Optimus.
    Optimus uses the system memory bus (of which you only have one from the CPU to memory) to copy rendered frames from the GPU into the memory space reserved for the Intel IGP. Since the bus can only serve one client at a time, this means you'll get a large reduction in CPU throughput, basically creating a bottleneck. The more demanding the game, the less CPU headroom you have, which will slow down the game unless you kill everything in the game that uses the CPU.

    This is the major downside of Optimus.
    If you want to control your GPU's performance (which it already does via Nvidia's PowerMizer technology, which clocks the GPU based on its workload), you can install Nvidia System Tools and have a look at the Nvidia Control Panel. A new "Performance" section will appear, and you can create profiles with the performance settings you specify and switch between them by double-clicking.
    Or
    you can use one of the many overclocking tools out there for your GPU, but instead of overclocking, switch between the default clocks (say you don't want to overclock) and the minimum clocks via keyboard shortcuts or something similar.
  • tipoo - Monday, November 14, 2011 - link

    I've never seen any benchmarks indicating Optimus slows down the CPU when using the IGP, sauce please
  • Guspaz - Monday, November 14, 2011 - link

    Sandy Bridge has 21.2GB/s of memory bandwidth. It's a dual-channel system, not single-channel as you imply (both the PCIe and memory controllers are on-die on the CPU). Sending a 1080p image at 60Hz over that bus, uncompressed, would require ~0.35 GB/s of memory bandwidth (double that, I suppose, to actually display it). I'm not seeing how this would have a major impact on a system that isn't anywhere near memory-bandwidth limited at the moment.
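    As a rough sanity check of that figure, here's a minimal Python sketch (assuming 24-bit colour, one copy per refresh, and GiB as the unit - all assumptions on my part):

        # Frame-copy bandwidth for 1080p at 60 Hz, uncompressed
        width, height = 1920, 1080
        bytes_per_pixel = 3                         # assumption: 24-bit colour, no alpha/padding
        refresh_hz = 60

        bytes_per_frame = width * height * bytes_per_pixel
        bandwidth = bytes_per_frame * refresh_hz    # one copy of the frame per refresh
        print(bandwidth / 2**30)                    # ~0.35 GiB/s
        print(2 * bandwidth / 2**30)                # ~0.70 GiB/s if it's read back out for display

    Either way it's a low single-digit percentage of the ~21 GB/s Sandy Bridge offers.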
  • MrSpadge - Monday, November 14, 2011 - link

    Totally agree with Guspaz.
    And you've got even more facts wrong: the amount of bandwidth needed for the frame buffer depends on display refresh rate and resolution, but not on game complexity.

    MrS
  • Roland00Address - Monday, November 14, 2011 - link

    And it was supposedly close to release in April, but then it wasn't heard from again.
  • know of fence - Tuesday, November 15, 2011 - link

    "Optimus for Desktops" headline made its rounds in April 2011, 6 month ago. Maybe after the next gen of cards or maybe after the holidays? It's time desktops get some cool tech instead of "super-size" technology (OC, SLI, multi monitor. 3D).
    Scalable noise & power has the potential to make Geforce Desktop PCs viable for general purposes again, instead of being a vacuum / jet engine.
    Seriously with equal power consumption heat pumps are three to four times as efficient when it comes to room heating.
  • euler007 - Monday, November 14, 2011 - link

    How about some AutoCAD 2012 Quadro drivers, Nvidia?
  • GTVic - Monday, November 14, 2011 - link

    I believe Autodesk is moving away from vendor-specific drivers. The preferred hardware driver option in the performance tuner is just "Autodesk". If you have an issue with a Quadro driver on an older release of AutoCAD, Autodesk's official response will be to use the Autodesk driver instead. Autodesk certifies the Quadro and AMD FirePro drivers as well as drivers for consumer video cards.

    In the past, 2D users generally didn't turn on Autodesk's hardware acceleration, especially with a consumer-level video card. With 2012 you almost have to turn on hardware acceleration because almost everything is now accelerated; you will see significant flickering in the user interface without it. As a result, Autodesk now needs to support most consumer graphics cards out of the box and can't rely on a vendor-specific driver.

    With FirePro and Quadro what you are paying for is hopefully increased reliability and increased testing to ensure that the entire graphics pipeline supports the Autodesk requirements for hardware acceleration. In addition, Autodesk is more likely to certify Quadro/FirePro hardware with newer graphics drivers than if you purchase a high-end consumer card.
  • Filiprino - Monday, November 14, 2011 - link

    How about some GNU/Linux support? You know, there's a lot of compute power being used on GNU/Linux systems, and video rendering too.

    And finally, what about Optimus on GNU/Linux? NVIDIA drivers are OK on Linux-based systems, but they're not complete, as there's currently no Optimus support at all.
  • Lonbjerg - Monday, November 14, 2011 - link

    When other OSes have enough market penetration to make for a viable profit, I'm sure NVIDIA will look at them.
    But until then... you get what you pay for.
  • Filiprino - Monday, November 14, 2011 - link

    LOL. GNU/Linux is the most used OS on supercomputers. You get what you pay for.
  • Solidstate89 - Monday, November 14, 2011 - link

    And?

    What does Optimus or Maximus have to do with large Super Computers? You don't need a graphical output for Super Computers, and last I checked a laptop with switchable graphics sure as hell isn't remotely comparable to a Super Computer.
  • Filiprino - Thursday, November 17, 2011 - link

    But as a worker you don't always have access to your render farms, supercomputers or workstations. Having a laptop to program on the go is a must for increased productivity.
  • Ryan Smith - Monday, November 14, 2011 - link

    On the compute side of things I'd say NVIDIA already has strong support for Linux. Tesla is well supported, as Linux is the natural OS to pair them up with for a compute cluster.

    As for Optimus or improved Quadro support, realistically I wouldn't hold my breath. It's still a very small market; if NVIDIA was going to chase it, I'd expect they'd have done so by now.
  • iwod - Monday, November 14, 2011 - link

    Hopefully all this technology lays the groundwork for the next-generation GPUs based on 28nm.

    We have been stuck with the current generation for too long... I think.
  • beginner99 - Monday, November 14, 2011 - link

    Since this is artificial, is it possible to turn a 580 into a Tesla and Quadro? Basically getting the same performance for a lot less?
  • Stuka87 - Monday, November 14, 2011 - link

    A GTX 580 and a Fermi-based Quadro may have a lot in common, but they are not the same thing. There are many differences between them.

    I seriously doubt you could make a single 580 act as both, much less one or the other.
  • jecs - Monday, November 14, 2011 - link

    I am an artist working in CG, and this is what I know.

    There is one piece of software I know of that can take advantage of both Quadro and GeForce cards at their full potential, and that is the Octane renderer. And as Ryan Smith explained, there is software designed to use two graphics cards for different purposes: this rendering engine is optimized to use, say, a basic Nvidia Quadro for display (the viewport) and a set of one or more 580/590s or any GeForce for rendering. This is great for your wallet, but Octane is a particular solution and not something Nvidia is pushing directly. The engineers at Refractive Software are the ones responsible for supporting the implementation, and Nvidia could make a change at any time that disrupts that functionality, with no obligation to them.

    With Maximus I can see Nvidia heading in the same direction but endorsing Tesla as the workhorse. The problem for smaller studios and independent professionals is that Tesla cards are still at the $2K level, whereas a GeForce is in the $200-700 range.

    So Maximus is welcome, as even high-end studios are asking for cost-effective features like these and Nvidia is responding to their needs, but Maximus is still too expensive for students or smaller studios working in CG.

    Well, Nvidia may not be willing yet to give you a "cheap" card to do high-end work on, since they spend a lot on driver R&D; let's be fair. So ask your favorite software vendor to implement and fully support consumer gaming cards as alternate hardware for rendering or compute, but I'd guess they won't be willing to support major features and develop and maintain their own drivers.
  • erple2 - Wednesday, November 16, 2011 - link

    Just how much money is your time worth? How many "wasted" hours of your time waiting for a render to finish add up to a $1,300-1,800 price difference?

    This is an argument I've had with several customers who don't seem to quite realize that time spent waiting for things to finish (on the CPU, GPU, RAM, disk, etc. side) is time your engineer or developer spends very expensively twiddling their thumbs. It gets even crazier when you realize that everyone downstream is also twiddling their thumbs waiting for that work to finish.

    I wonder if it's a holdover from the old "bucket" style of accounting - having separate buckets for Hardware vs. Software vs. Engineering vs. etc. It's ultimately all the same bucket for a given project - saving $15,000 on hardware costs means that your development staff (and therefore everyone who does work after Development) is delayed hours each week just waiting for things to finish. So while the Hardware bucket looks great ("We saved $15,000!"), your Development bucket winds up looking terrible ("We're now $35,000 behind schedule just from development, so everything downstream is in backlog").

    The flip side is that you can work on more projects over the long haul if you reduce "machine" time - more projects means more money (or accomplishments), which is good for the future.
  • MrSpadge - Monday, November 14, 2011 - link

    Short answer: no.

    The chips are the same, but it's hard-wired which functionality they are allowed to expose. The professional cards also have ECC memory.. but anyone asking for an unlock probably wouldn't be terribly interested in this anyway ;)

    MrS
  • hpvd - Monday, November 14, 2011 - link

    Will the Quadro automatically be used for CUDA computation if there is no heavy graphics load on it?
    Or do I lose its compute power completely if I put in an additional Tesla?

    If it can be used for compute together with the new Tesla: what happens if the graphics load increases?
    Will there be a smooth transition? E.g.
    Tesla: compute load 100%, graphics load 0%
    Quadro: compute load 30%, graphics load 70%
    ?
  • Ryan Smith - Monday, November 14, 2011 - link

    It's going to depend on the software you're using. If the compute load can easily be split among multiple CUDA devices, then you can still use both the Quadro and the Tesla for compute as long as the software has a way to select this.

    NVIDIA has a case study video of 3ds Max where they show off compute device selection: http://www.youtube.com/watch?v=xCIAsvT5mYo

    However using both cards for compute automatically doesn't seem like it's possible right now. Keep in mind that NVIDIA's favorite pairing is the Quadro 2000 - a GF106 part - with the C2075, so the Q2000 isn't even in the same league as the C2075. In any case if you did assign compute workloads to both GPUs, then things would be graphically sluggish until the application in question terminated the load on the Quadro.
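    If you're writing your own CUDA code rather than using a packaged application, picking the compute device is straightforward. Here's a minimal sketch with the PyCUDA bindings (the "prefer a Tesla" policy below is just an illustration, not anything NVIDIA prescribes):

        import pycuda.driver as cuda

        cuda.init()
        devices = [cuda.Device(i) for i in range(cuda.Device.count())]
        for i, dev in enumerate(devices):
            print(i, dev.name(), dev.compute_capability())      # e.g. Quadro 2000, Tesla C2075

        # Prefer a Tesla board for compute work; otherwise fall back to device 0
        compute_dev = next((d for d in devices if "Tesla" in d.name()), devices[0])
        ctx = compute_dev.make_context()    # CUDA launches now target this device
        try:
            pass                            # ... allocate memory and launch kernels here ...
        finally:
            ctx.pop()                       # release the context when done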
  • Havor - Tuesday, November 15, 2011 - link

    Doesn't OpenCL cover the same goals as this Maximus, so why use Maximus?

    Especially if you can use an open standard, used by all the big players.

    How well your product works with OpenCL depends on how good your drivers are, so why not focus on that and have the best product with the best OpenCL drivers?

    It looks to me like this is one of those products where some will fall for the marketing crap, but as a product it will fail in the long run.
  • Dribble - Tuesday, November 15, 2011 - link

    No, Maximus is a way of using compute hardware. OpenCL is compute software that competes with CUDA. I would have thought the Nvidia hardware does support OpenCL, but everyone uses CUDA because it's much more advanced.
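    For what it's worth, both APIs see the same silicon. A short sketch with the pyopencl bindings (assuming they and an OpenCL-capable driver are installed) just lists whatever devices the installed drivers advertise, NVIDIA or otherwise:

        import pyopencl as cl

        # Each vendor driver registers a platform, e.g. "NVIDIA CUDA" or "AMD Accelerated Parallel Processing"
        for platform in cl.get_platforms():
            for device in platform.get_devices():
                print(platform.name, "->", device.name,
                      cl.device_type.to_string(device.type))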
  • Nenad - Tuesday, November 15, 2011 - link

    Why not allow Tesla+GeForce?

    So far Nvidia has been marketing its cards more or less like this:
    - GeForce: primarily graphics/video
    - Tesla: primarily/only compute
    - Quadro: equally capable of compute and graphics

    Now with Maximus they say "if you have a Quadro and a Tesla, use the Quadro for graphics and the Tesla for compute", but that leaves one question:

    If so far it was the GeForce that was best suited for graphics (and it's cheaper, with much better performance/$ for graphics than the Quadro), then why would someone want to buy a Quadro just to limit it to graphics?

    Why not instead allow a cheaper but faster GeForce (like the GTX 580) to be paired with a Tesla in Maximus?
  • jecs - Wednesday, November 16, 2011 - link

    It all depends on the Nvidia license and not on the hardware used.

    Look at it this way: if Nvidia allowed you to use GeForces with professional applications, you would have to pay the price difference to buy the additional "Quadro drivers" anyway.

    The important part here is the very specialized drivers that Nvidia only allows to run with qualified Quadro or Tesla cards.

    Quadro- and Tesla-class drivers are very expensive in R&D; Nvidia puts a lot of engineering resources into the professional drivers, but those licenses are only distributed (sold) to a much smaller professional user base, which is why these drivers cost more even when used on very similar hardware.

    Also, you can't use a Quadro driver on a similar GeForce: it's against the license, sometimes there is no equivalent hardware on the GeForce side, and Nvidia physically locks the configuration on the card at the hardware level.
  • Freakie - Sunday, November 20, 2011 - link

    PhysX, now for workstations! Unless I'm understanding it a bit wrong... but PhysX is quite similar in theory, is it not? Compute the physics of specific things (explosions/smoke/plasma cannon) with a separate card so that you have more realistic effects in your game, and then use a more powerful card to actually display everything else in the game. This is just reversing the power roles.
  • Hobstob - Saturday, May 5, 2012 - link

    It is extremely outdated! Why have they not released a new line of Quadro cards? Are they even planning on releasing a new line of cards for workstations?
