
Thread: nVidia news




  1. #1
    Join Date
    Jul 2002
    Posts
    1,661

    Default

    NVIDIA GeForce FX 5500 approaches the home stretch
    Another DirectX 9.0 GPU from NVIDIA


    A leading graphics company is preparing another DirectX 9.0 graphics processing unit to enter the scene shortly. A new GeForce FX part based on the design principles unveiled a year ago is about to be announced weeks before a new breed of NVIDIA’s graphics chips sees the light of day.

    Santa Clara, California-based NVIDIA Corporation was months behind arch-rival ATI Technologies with its NV30, also known as the GeForce FX technology. Despite the fact that the majority of the expensive DirectX 9.0-compliant graphics cards sold last year, usually acquired by computer enthusiasts, were based on VPUs from ATI Technologies, NVIDIA seems to be making good progress in the entry-level and mainstream market segments with its GeForce FX 5200, 5600 and 5700 technologies, which is indisputably a success. Additionally, the company wants to address more market segments with thoroughly tailored graphics processors that offer an attractive price/performance ratio to its customers.

    Sources with knowledge of the matter indicated that NVIDIA wants to deliver another flavour of the GeForce FX technology to the marketplace, addressing market segments it has already heavily penetrated. The firm is set to deliver the GeForce FX 5500 shortly, weeks before the next-generation NV40 graphics processors hit the market.

    The FX 5500 is expected to fight in the entry-level/mainstream market segments before lower-cost versions of the NV40 emerge in mid-2004. The newcomer may deliver some exciting speed boosts for the targeted market segments, something NVIDIA’s customers have been eagerly waiting for. The GeForce FX 5500 chip may also improve NVIDIA’s margins thanks to an improved GPU design and/or manufacturing technology.

    Xbit

  2. #2

    Samsung mass produces 1.60GHz GDDR3 memory chips
    NVIDIA welcomes Samsung 1.60GHz GDDR3


    Samsung Electronics announced mass production of the industry’s highest speed monolithic memory device, a Graphics Double Data Rate 3 (GDDR3) capable of achieving clock speeds of up to 800MHz.

    “Our 8Mx32 GDDR3 provides customers with the fastest graphics memory on the planet at lower power consumption than existing graphics systems,” said Mueez Deen, director of marketing for DRAM and graphics memory products at Samsung Semiconductor.

    The GDDR3 256Mb graphics memory devices manufactured by Samsung are clocked at 500MHz, 600MHz, 700MHz and 800MHz, effectively providing 1000Mb/s, 1200Mb/s, 1400Mb/s and 1600Mb/s of per-pin bandwidth. Such memory will enable high-speed 256MB and 512MB graphics cards as well as power-efficient 128MB notebook solutions.

    To allow speeds beyond 1.0GHz, Samsung used a number of technologies developed for DDR-II and GDDR2 memories, such as On-Die Termination (ODT), output driver strength adjustment by EMRS, calibrated output drive, pseudo open drain compatible inputs/outputs, and some others.

    Samsung’s GDDR3 memory chips come in a 144-ball FBGA package and require a 1.9V power supply for both device operation and the I/O interface.

    “It is a lower cost, lower power and faster alternative to equivalent density stacked solutions today,” the representative for Samsung said.

    “With the aid of high-speed memory technology such as Samsung’s new GDDR3 graphics memory, NVIDIA will continue to advance graphics technology at an incredible pace so game developers have the freedom to stretch the limits of the cinematic computing experience,” an NVIDIA spokesperson said.

    In mid-September we reported that Samsung supplied a batch of its high-speed 1.60GHz memory chips to NVIDIA Corporation for NV40 testing purposes. The peak theoretical bandwidth of 1600MHz memory on a 256-bit bus is a mind-blowing 51.2GB/s; therefore, should NVIDIA’s next-generation high-end GPUs work with such memory, there will be a major performance improvement over the current generation of high-performance graphics cards.
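    The per-pin and aggregate bandwidth figures quoted above follow from simple arithmetic: effective data rate times bus width, divided by eight bits per byte. A minimal sketch (the function name is ours, not Samsung's or NVIDIA's):

```python
def peak_bandwidth_gbs(effective_mhz: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s.

    effective_mhz: per-pin data rate in MHz (GDDR3 transfers twice per
    clock, so an 800MHz clock yields a 1600MHz effective rate).
    bus_width_bits: width of the memory interface in bits.
    """
    bits_per_second = effective_mhz * 1e6 * bus_width_bits
    return bits_per_second / 8 / 1e9  # bits -> bytes, then bytes -> GB

# 800MHz GDDR3 (1600MHz effective) on a 256-bit bus, as tested for NV40:
print(peak_bandwidth_gbs(1600, 256))  # 51.2
```

    The same formula gives 32GB/s for the 500MHz grade (1000Mb/s per pin) on a 256-bit bus, consistent with Samsung's per-pin figures.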

    Xbit

  3. #3

    NVIDIA to voucher up Doom 3
    Not settled but very likely case of the voucher wars


    We now have very strong reasons to believe that NVIDIA is going to make a pact with the devil and ship a Doom 3 voucher with its latest cards. This comes from a few conversations we had over the last few days, after the NV40 and NV45 storm we created; some of them suggested that NVIDIA has already won this deal.
    The Way It’s Meant To Be Played is a hell of a strong marketing initiative that we will tell you more about some other time.

    NVIDIA won the heart of John Carmack, the rocket man, after the incident where someone from ATI apparently or allegedly leaked the Doom 3 game. From that moment on, Carmack claimed that even the NV30 is good for the game, as his engine can use NVIDIA’s 8x1 NV30/35/38 architecture when Z passes are not needed.

    As ATI got Half-Life 2, it was natural to assume that NVIDIA had to ship Doom 3, Unreal Tournament 2004 or some similar title to win some market share back.

    It already has the upcoming Unreal Tournament 2004 in its pocket, as Mark Rein, a man with attitude, has been an NVIDIA-friendly chap for a long time. We know that NVIDIA was betting on the Doom 3 horse last summer, but Carmack, as we’ve said millions of times, is a fellow who works three days a week as a game programmer and spends the other two playing with a rocket that he wants to send into space, get some money from NASA, and obviously become even more famous than he already is. If NVIDIA does announce a Doom 3 voucher, it will find itself in a serious case of Voucher Wars with ATI. It seems that an NV40 + Doom 3 voucher will be up against an R420 + Half-Life 2 voucher. Nasty fight.

    The Inquirer

  4. #4

    <img src="http://images.tweaktown.com/weta/nvidia/dawn_pcx.jpg">

    NVIDIA introduces industry's first top-to-bottom family of PCI Express GPUs

    New NVIDIA GeForce PCX (PCI Express-based product) Family Features Innovative High-Speed Interconnect Technology for Performance and Stability


    NVIDIA, the worldwide leader in visual processing solutions, today unveiled the industry’s first top-to-bottom family of PCI Express graphics processing units (GPUs), all designed to take full advantage of the additional bandwidth and features that this new I/O interconnect standard delivers. By using an innovative PCI Express (PCX) high-speed interconnect (HSI), a complex piece of networking technology that performs seamless, bi-directional interconnect protocol conversion at incredible speeds, NVIDIA can transform its current award-winning GeForce FX series into a full family of PCI Express GPUs.

    The new family includes:

    NVIDIA GeForce PCX 5950 – based on the DX9 GeForce architecture, this new GPU delivers extreme graphics power and performance for extreme gamers.
    NVIDIA GeForce PCX 5750 – designed for high-performance gaming with NVIDIA’s full suite of cinematic effects and an unmatched feature set.
    NVIDIA GeForce PCX 5300 – delivers state-of-the-art, best-in-class features and the reliability users have come to expect from NVIDIA, at an affordable price.
    NVIDIA GeForce PCX 4300 – provides entry-level pricing coupled with strong performance, unbeatable visual quality, and DVD playback.

    “The PCI Express transition is going to be an exciting time for the PC industry,” stated Jen-Hsun Huang, president and CEO at NVIDIA. “By aligning ourselves closely with Intel and helping define this new specification, we were able to engineer an innovative protocol engine, in HSI, that delivers the full-PCI Express feature set without any compromises. HSI and PCI Express will enable a new level of performance for high bandwidth applications like graphics and networking.”

    In addition, last week in Taipei, Taiwan, NVIDIA validated its family of GeForce PCX products with the industry’s top motherboard, chipset, and BIOS vendors. NVIDIA has already shipped more than 1,000 PCI Express boards to customers and partners.

    “NVIDIA is one of several key players in the PCI Express technology initiative, so it is fitting that they selected Intel’s premiere technology showcase to introduce their PCI Express product family,” said Randy Wilhelm, vice president, desktop platforms group and general manager, client platform division at Intel Corporation. “By working closely with companies such as NVIDIA, the industry is experiencing revolutionary new advancements in PC technology at a remarkable pace.”

    By qualifying a single unified device, NVIDIA’s partners can be the first to validate, launch and market an entire family of PCI Express solutions. Products based on this new design are expected to become available in the second half of 2004 from the world’s leading add-in card manufacturers, including: Albatron Technology, Co. Ltd, AOpen, Anextek, ASUS Computer International, Chaintech, Gainward Co., Ltd., Gigabyte Technology, Co., Ltd., Leadtek Research, Inc., MSI, Palit Microsystems, Inc., Pine, XFX, a Division of Pine, Prolink Computer Inc., and Sparkle.

    As a Gold Sponsor of the Intel Developer Forum (IDF) Spring 2004, NVIDIA is showcasing the results of their collaborative engineering efforts with industry players, such as Intel. NVIDIA and Intel processors form the foundation for many PCs, notebooks, workstations, gaming systems, media centers, and handheld devices being produced worldwide, and they are all on display at IDF. Intel and companies like NVIDIA enjoy a long-standing collaborative relationship with the common goal of driving progress in PC technology, including transitioning the PC industry to an advanced PC bus architecture, PCI Express. Since the graphics processing unit (GPU) has the largest bandwidth demand of all the PC subsystems, NVIDIA has been a key contributor to the joint definition, development, and deployment of PCI Express solutions. NVIDIA also serves as a member of PCI-SIG, the industry organization that owns the PCI Express specification.

    NVIDIA press release

  5. #5

    What does a GPU look like in 2014?

  6. #6

    NVIDIA adopts TSMC's 0.11 micron technology
    Deep sub-micron process technology allows for better performance and power improvements


    NVIDIA today confirmed that it will be one of the first semiconductor companies to manufacture select upcoming graphics processing units (GPUs) on Taiwan Semiconductor Manufacturing Company’s (TSMC’s) 0.11 µm (micron) process technology. NVIDIA will combine TSMC’s 0.11 micron process with its own innovative engineering designs to deliver high performance and low power consumption in a graphics processor.

    “The decision to move to this new process underscores the long-standing relationship between TSMC and NVIDIA,” stated Di Ma, vice president of operations at NVIDIA. “This new manufacturing technology, along with numerous architectural enhancements, enables us to continue delivering products that allow end users to interact with a wide variety of digital devices. We look forward to the new opportunities this advancement will allow us.”

    "TSMC is committed to innovative collaboration with forward-looking companies such as NVIDIA," said Kenneth Kin, senior vice president of worldwide marketing and sales for TSMC. "Through these relationships, we can deliver technology platforms that create new opportunities for our customers."

    TSMC’s 0.11 micron process technology is fundamentally a photolithographic shrink of its industry-leading 0.13 micron process. The process will be available in both high-performance and general-purpose versions using FSG-based dielectrics. Though actual results are design-dependent, TSMC’s 0.11 micron high-performance process also includes transistor enhancements that improve speed and reduce power consumption relative to its 0.13 micron FSG-based technology.
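    The die-area saving such an optical shrink implies can be sketched with a back-of-the-envelope square-law calculation (a simplification: pad rings, analogue blocks and design-rule details mean real dice shrink less than this ideal):

```python
def shrink_area_ratio(new_um: float, old_um: float) -> float:
    """Approximate die-area ratio after a photolithographic shrink.

    A linear shrink scales both die dimensions, so area scales with
    the square of the feature-size ratio. Non-scaling structures such
    as pad rings are ignored, so the real saving is somewhat smaller.
    """
    return (new_um / old_um) ** 2

# Moving a design from 0.13 micron to 0.11 micron:
print(round(shrink_area_ratio(0.11, 0.13), 2))  # 0.72, i.e. ~28% less area
```

    Smaller dice mean more candidates per wafer, which is where the cost advantage of such a half-node move comes from.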

    TSMC began 0.11 micron high-performance technology development in 2002 and product qualified the process in December of 2003. Design rules, design guidelines, SPICE and SRAM models have been developed, and third-party compilers are expected to be available in March. Yields have already reached production-worthy levels and the low-voltage version has already ramped into volume production. The 0.11 micron general-purpose technology is expected to enter risk production in the first quarter of next year.

    NVIDIA press release

  7. #7

    NVIDIA’s NV3x, NV4x High-Speed Interconnect revealed

    NVIDIA will not deliver native PCI Express x16 graphics processors when the first mainboards supporting the new interconnect emerge in the second quarter. But the firm’s partners will still be in a position to offer graphics cards for the PCI Express bus, as NVIDIA has some headroom with its AGP GPUs along with a technology called High-Speed Interconnect.

    Reports over at AnandTech and PC Watch claim that NVIDIA’s existing GeForce FX graphics processors, as well as the forthcoming NV40 chip, boast an overclocked AGP interconnect capable of transferring up to 4GB per second – bandwidth on par with the PCI Express x16 interconnect.

    An NVIDIA spokesman confirmed the unofficial information.

    Currently NVIDIA’s graphics chips use the AGP 8x bus, but once they are equipped with the company’s AGP-to-PEG bridge, they will boost the transfer rate between the graphics processor and the bridge to the so-called AGP 16x speed. The bridge itself will talk to the PCI Express for Graphics lanes at some 4GB/s. Therefore, provided that the bridge does not introduce tangible latencies, NVIDIA’s GeForce FX and NV40 graphics chips will see a speed boost from PCI Express.
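    The "on par with PCI Express x16" claim can be sanity-checked with back-of-the-envelope figures, assuming first-generation PCI Express signalling (2.5GT/s per lane with 8b/10b encoding) and the nominal 66MHz, 32-bit AGP bus; the helper names below are illustrative:

```python
def pcie_bandwidth_gbs(lanes: int = 16) -> float:
    """Per-direction bandwidth of a first-generation PCI Express link in GB/s.

    Each lane signals at 2.5GT/s; 8b/10b encoding leaves 8 payload bits
    per 10 transferred, and 8 bits make a byte, so GB/s = lanes * 2.5 / 10.
    """
    return lanes * 2.5 / 10

def agp_bandwidth_gbs(multiplier: int) -> float:
    """AGP bandwidth in GB/s: nominal 66MHz base clock, 32-bit (4-byte)
    bus, transferring `multiplier` times per clock."""
    return 66e6 * 4 * multiplier / 1e9

print(pcie_bandwidth_gbs(16))  # 4.0 GB/s per direction
print(agp_bandwidth_gbs(8))    # ~2.1GB/s: today's AGP 8x bus
print(agp_bandwidth_gbs(16))   # ~4.2GB/s: the internal "AGP 16x" link
```

    The numbers line up: the doubled AGP link between the GPU and the bridge carries roughly as much traffic as one direction of a PEG x16 slot, which is why the bridge approach is viable at all.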

    NVIDIA will market designs for PCI Express slots under the GeForce PCX brand name.

    Even though the native PEG x16 (PCI Express for Graphics x16) support ATI promises to bring may be more advanced from a technology point of view and provide higher performance and bandwidth, NVIDIA’s approach may well help the company better control its inventory, as its GPU lineup will be unified across the AGP and PEG platforms. In contrast, ATI will have to stock two separate lines of its VPUs.

    NVIDIA’s key add-in-card partners, such as ASUSTeK Computer and Micro-Star International, confirmed plans to manufacture PEG x16 graphics cards using NVIDIA’s High-Speed Interconnect.

    Xbit

  8. #8

    Jen-Hsun Talks NV4x

    The Morgan Stanley Semiconductor and Systems Conference was held earlier today, and among the speakers was NVIDIA’s CEO, Jen-Hsun Huang, there to discuss NVIDIA’s business and its prospects. Very early on in the conference Jen-Hsun was asked if he’d like to talk about the NV4x generation of parts, to which, in a possible reference to the performance of the NV3x generation, his quick response was "Nothing would give me more pleasure to talk about NV4x", and so he did.

    Jen-Hsun noted that the NV4x series is a new architectural generation designed around specific goals: more programmability, more performance, taking advantage of the PCI-Express platform, higher yields, and strong scalability.

    Looking at the performance element of NV4x, Jen-Hsun expects the performance increment over the previous generation to be dramatically higher than in any architectural transition they have previously been through. Indeed, presumably speaking about NV40 specifically, NVIDIA’s CEO stated that "if we’re not a lot more than 2 times faster I’m going to be very disappointed". Discussing where such performance increases could come from, he noted that because the graphics pipeline is now programmable and applications are making use of this, more and more techniques can be brought over from the CPU world to enhance instruction execution performance, and it’s expected that NV4x will adopt a lot of these techniques.

    In terms of scalability, as with previous generations, NV4x will span a top-to-bottom line of graphics processors for the PC market space; however, there has been a greater emphasis in NV4x’s design on implementing this goal. It’s expected that by the end of the year there will be an entire family of NV4x processors spanning from the very high-end performance space right down to the entry-level market. While this strategy sounds similar to previous generations, what marks it as different this time is the number of processors that may be available: in the NV1x and NV2x generations more or less two distinct processors were introduced and produced at the same time, and with the NV3x architecture this increased to three, but Jen-Hsun noted that potentially as many as five distinct NV4x graphics processors may be in production at any one time.

    The NV4x generation is also designed to be fully PCI-Express compliant and to take full advantage of the benefits this bus architecture brings. The indication here is that parts produced on the NV4x platform will be introduced with a native PCI-Express interface – rumours suggest that NV40 will be AGP compliant, but Jen-Hsun’s comments raise the possibility that NV40 may be a PCI-Express chip whose board utilises their bridge chip to enable it to operate in an AGP system. It was noted that their strategy of not porting any of their current line over to PCI-Express does mean they will need to utilise the bridge chip for the low end initially, as they are not expecting the entry-level NV4x part to be available until the end of Quarter 3 ’04.

    Yields on the NV3x line of chips appear to have been a bugbear for NVIDIA over the past year, and that is something they are hoping to address with NV4x. Jen-Hsun spoke of "heavily patented technology" utilised in the design of NV4x in order to bring yields up, but there was no expansion on just what this technology is. He noted that due to the fast-cycling nature of the market they are not able to get the same kinds of benefits as CPU vendors, who refine a design over time to bring costs down; for these reasons, and because yield issues cannot be resolved in the timescales available for one platform, NVIDIA actually has the largest "scrap" (wasted die) of any semiconductor company in the industry, which probably goes a long way to explaining their margin performance over the past year. Jen-Hsun noted his desire to see a "100% yield coming out of TSMC" – while NVIDIA talked up IBM last year, it’s clear that TSMC is rapidly becoming NVIDIA’s primary foundry partner in a more vocal sense, as the number of processors produced by IBM for NVIDIA is still likely to be dwarfed by those that continue to be produced by TSMC.

    Jen-Hsun commented that the rumours suggesting an announcement of NV4x in "a couple of short months" are "likely to be correct".

    Beyond 3D
