Just over eight months after it all started, the end of NVIDIA's launch of the GeForce Turing product stack is finally in sight. This morning the company is launching its latest and cheapest GeForce Turing graphics card, the GeForce GTX 1650. The newest member of the GeForce family, available for $149, rounds out the bottom of the GeForce product stack, bringing the latest NVIDIA architecture to a low-power, 1080p-with-compromises graphics card at a budget price.
In very traditional NVIDIA fashion, the Turing launch has been a top-to-bottom affair. After launching the four RTX 20 series cards at the start of the cycle, NVIDIA's efforts over the past two months have focused on filling in the bottom of their product stack with the GeForce GTX 16 series, built on cut-down variants of the Turing GPU design, the TU11x family.
To date, the GTX 16 series has consisted exclusively of the GTX 1660 family of cards: the GTX 1660 (vanilla) and the GTX 1660 Ti, both based on the TU116 GPU. Today, however, the GTX 16 series family is expanding with the launch of the GTX 1650 and a new Turing GPU for NVIDIA's junior card: TU117.
Unofficial TU117 Block Diagram
Unlike the GeForce GTX 1660 Ti, whose underlying TU116 GPU gave us our first glimpse of NVIDIA's mainstream product plans, the GeForce GTX 1650 is a much more pedestrian affair. Its underlying TU117 GPU is, for all practical purposes, a smaller version of the TU116, containing the same Turing core features but with fewer resources around them. Overall, NVIDIA has shaved off a third of the CUDA cores, a third of the memory channels, and a third of the ROPs relative to TU116, creating a GPU that is smaller and cheaper to manufacture for this low-margin market. Nevertheless, at 200mm² and 4.7B transistors, TU117 is by no means a simple chip. In fact, it is exactly the same die size as GP106 – the GPU at the heart of the GeForce GTX 1060 series – which gives you an idea of how performance has (slowly) cascaded down to cheaper products over the years.
In any case, TU117 will make its way into numerous NVIDIA products over time. For now, however, things are kicking off with the GeForce GTX 1650.
NVIDIA GeForce Specification Comparison

| | GTX 1650 | GTX 1660 | GTX 1050 Ti | GTX 1050 |
|---|---|---|---|---|
| CUDA Cores | 896 | 1408 | 768 | 640 |
| ROPs | 32 | 48 | 32 | 32 |
| Memory Clock | 8Gbps GDDR5 | 8Gbps GDDR6 | 7Gbps GDDR5 | 7Gbps GDDR5 |
| Memory Bus Width | 128-bit | 192-bit | 128-bit | 128-bit |
| VRAM | 4GB | 6GB | 4GB | 2GB |
| Single Precision Perf. | 3 TFLOPS | 5 TFLOPS | 2.1 TFLOPS | 1.9 TFLOPS |
| TDP | 75W | 120W | 75W | 75W |
| GPU | TU117 | TU116 | GP107 | GP107 |
| Manufacturing Process | TSMC 12nm "FFN" | TSMC 12nm "FFN" | Samsung 14nm | Samsung 14nm |
| Launch Date | 04/23/2019 | 03/14/2019 | 10/25/2016 | 10/25/2016 |
| Launch Price | $149 | $219 | $139 | $109 |
Interestingly, the GTX 1650 does not use a fully-enabled TU117 GPU. Compared to the full chip, the version in the GTX 1650 has one TPC fused off, which means the chip loses 2 SMs / 64 CUDA cores. The net result is that the GTX 1650 is a rare case where NVIDIA does not put its best foot forward – the company is essentially sandbagging.
Placing the GTX 1650 within NVIDIA's historical product stack is a bit tricky. Officially it is the successor to the GTX 1050, which itself was a similarly cut-down card. However, while the GTX 1050 launched at $109, the GTX 1650 arrives at $149, a hefty 37% generational price increase. You might therefore be forgiven for thinking that the GTX 1650 feels more like the successor to the GTX 1050 Ti, as its $149 price tag is much closer to the GTX 1050 Ti's launch price. Turing cards have been more expensive than the Pascal cards they replaced on a generational basis, and these budget cards only add to that difference.
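The generational pricing comparison is simple arithmetic on the launch prices; a quick sketch confirms the percentages:

```python
# Launch prices (USD) from the spec comparison
launch_prices = {"GTX 1050": 109, "GTX 1050 Ti": 139, "GTX 1650": 149}

def increase(old: float, new: float) -> float:
    """Fractional price increase going from old to new."""
    return new / old - 1

# vs GTX 1050: the hefty generational jump
print(f"vs GTX 1050:    {increase(launch_prices['GTX 1050'], launch_prices['GTX 1650']):.0%}")   # 37%
# vs GTX 1050 Ti: a much smaller gap, hence the "real successor" argument
print(f"vs GTX 1050 Ti: {increase(launch_prices['GTX 1050 Ti'], launch_prices['GTX 1650']):.0%}") # 7%
```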
Diving into the numbers, the GTX 1650 ships with 896 CUDA cores enabled, spread over 2 GPCs. On paper this is actually not that big a step up from the GeForce GTX 1050 series, but the architectural changes Turing brings and the resulting increase in effective graphics efficiency mean that the little card packs a bit more punch than the paper specs suggest. The CUDA cores themselves are clocked a bit lower than usual for a Turing card, with the GTX 1650 boosting to 1665MHz at reference clocks.
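As a sanity check on the paper specs, peak single-precision throughput for a GeForce card is just CUDA cores × boost clock × 2 FLOPs per core per clock (one fused multiply-add). A minimal sketch, using the reference boost clocks:

```python
def sp_tflops(cuda_cores: int, boost_mhz: float) -> float:
    """Peak FP32 throughput in TFLOPS: each CUDA core retires one
    FMA (2 floating-point ops) per clock at the boost frequency."""
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

gtx_1650 = sp_tflops(896, 1665)   # ~2.98 TFLOPS: the quoted ~3 TFLOPS
gtx_1660 = sp_tflops(1408, 1785)  # ~5.03 TFLOPS: the quoted ~5 TFLOPS
print(f"GTX 1650: {gtx_1650:.2f} TFLOPS")
print(f"GTX 1650 / GTX 1660: {gtx_1650 / gtx_1660:.0%}")  # ~59% on paper
```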
Rounding out the package are 32 ROPs, which are part of the card's 4 ROP/L2/memory clusters. This means the card is fed by a 128-bit memory bus, which NVIDIA has paired with 8Gbps GDDR5 memory. Conveniently enough, this gives the card 128GB/s of memory bandwidth, about 14% more than the last-generation GTX 1050 cards. And while NVIDIA skimped on memory capacity upgrades for the other Turing cards, the GTX 1650 does get more: the minimum is now 4GB, up from the rather constrained 2GB of the GTX 1050. 4GB is not especially spacious in 2019, but the card should not be nearly as memory-starved as its predecessor.
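The bandwidth figures follow directly from bus width and per-pin data rate; a quick sketch of the arithmetic:

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bits divided by
    8 bits-per-byte, times the per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

gtx_1650 = bandwidth_gbs(128, 8.0)  # 128 GB/s (8Gbps GDDR5)
gtx_1050 = bandwidth_gbs(128, 7.0)  # 112 GB/s (7Gbps GDDR5)
print(f"GTX 1650: {gtx_1650:.0f} GB/s, GTX 1050: {gtx_1050:.0f} GB/s")
print(f"uplift: {gtx_1650 / gtx_1050 - 1:.0%}")  # ~14% more bandwidth
```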
Overall then, on paper the GTX 1650 delivers around 60% of the performance of the next card up in NVIDIA's product stack, the GTX 1660. In practice I expect the pair to be a bit closer – GPU performance scaling is rarely 1:1 – but that is the ballpark we are in until we can actually test the card.
In terms of power consumption, the smallest member of the GeForce Turing stack is also the least power-hungry. NVIDIA has kept its GTX xx50 cards at 75W (or less) for a few generations now, and the GTX 1650 continues this trend. This means that, at least for NVIDIA reference-clocked cards, no external PCIe power connector is required and the card can be powered entirely from the PCIe bus. This addresses the need for a card that can go into basic systems where no PCIe power cable is available, or into low-power systems where a more power-hungry card is not suitable. Discrete graphics cards are not as popular for HTPCs as they once were, but for HTPC builders looking to go that route, the GTX 1650 stands to replace the GTX 1050 series in that market as well.
Reviews, Product Positioning, and the Competition
Moving on to business matters, let's talk about product positioning and hardware availability.
The GeForce GTX 1650 is a hard launch for NVIDIA, meaning that cards should be available from retailers and in OEM systems as of today. As is typical for low-end NVIDIA cards, there are no reference cards or reference designs to speak of, so NVIDIA's board partners are doing their own thing with their respective product lines. Notably, this includes overclocked cards that offer more performance but also require an external PCIe power connector to meet their higher power requirements.
Although this is a hard launch, it is a very unorthodox (if not downright concerning) one: NVIDIA has opted not to let the press test GTX 1650 cards ahead of launch. NVIDIA also withheld the drivers needed to test the card, which means that even if we had secured a card in advance, we would not have been able to use it. We have cards on the way and will put together a review in due course, but for now we have no more hands-on experience with GTX 1650 cards than you, our readers.
NVIDIA has always treated low-end card launches as a lesser affair than their high-end wares, and the GTX 1650 is no different. In fact, this generation's launch is particularly low-key: we have no photos or even a press kit to work with, because NVIDIA opted to brief us on the card by e-mail. There is admittedly little need for extensive fanfare at this point – it is a Turing card, and the Turing architecture and feature set have been covered extensively by now – but it is rare for a card based on a new GPU to launch without reviewers getting an early crack at it. And for good reason: reviewers offer a neutral third-party analysis of a card and its performance. It is generally not in buyers' best interests for reviewers to be cut out – and when it happens, it can raise red flags – but here we are.
In any case, while I expect buyers will be left waiting a week or so for reviews to come together, Turing is a known quantity at this point. As mentioned previously, on paper the GTX 1650 offers around 60% of the GTX 1660's performance, and actual performance is likely to be a bit higher. NVIDIA, for its part, is primarily pitching the card as an upgrade for the GeForce GTX 950 and its same-generation AMD counterparts, the same cadence we have seen for the rest of the GeForce Turing family. NVIDIA says the card should be 2x (or more) faster than the GTX 950, and this should be easily achieved.
While we wait to get a card in for benchmarking, by and large the GTX xx50 series cards have always been 1080p-with-compromises cards, and based on what we have seen with the GTX 1660, I expect much the same for the GTX 1650. The GTX 1650 should be able to run some games at 1080p with maximum image quality – think DOTA 2 and the like – but for more demanding games, I expect some settings will need to be dialed back to stay at playable frame rates at 1080p. One advantage the card does offer, however, is that with 4GB of VRAM it should not struggle as much in newer games as the 2GB GTX 950 and GTX 1050 have.
Oddly enough, NVIDIA is also offering a (sort of) game bundle with the GTX 1650. Or rather, the company has extended its ongoing Fortnite bundle to the rest of the GeForce GTX 16 series along with the new card. The bundle itself is not much to write home about – some in-game currency and skins for a game that is free-to-play to begin with – but it is an unexpected move, as NVIDIA had not put that bundle on the other GTX 16 cards.
Looking at the GTX 1650's specifications and NVIDIA's price for the card, it is clear that NVIDIA is holding back a bit. Normally the company releases two low-end cards at once – one based on a fully-enabled GPU and one cut-down card – which they did not do this time. This means NVIDIA retains the option of introducing a fully-enabled TU117 card in the future, should they so choose. And while the actual CUDA core count difference between the GTX 1650 and a theoretical GTX 1650 Ti is rather limited – a few more CUDA cores alone probably would not be worth it – NVIDIA has another ace up its sleeve in the form of GDDR6 memory. If the conceptually similar GTX 1660 Ti is any indication, a fully-enabled TU117 with a small clock bump and 4GB of GDDR6 could probably pull far enough ahead of the vanilla GTX 1650 to justify a new card, closing NVIDIA's current product gap in the process.
As far as the competition is concerned, AMD is of course represented by the tail end of the Polaris-based Radeon RX 500 series, and that is what the GTX 1650 will be up against. AMD is keen to pit the Radeon RX 570 8GB against the GTX 1650, which makes for a very interesting fight. Based on what we have seen with the GTX 1660, the RX 570 should fare quite well against the GTX 1650, and the 8GB of VRAM would be the icing on the cake. However, I am not sure whether AMD and its partners can necessarily hold 8GB card prices at $149 or less; in that case, the competition may instead be the 4GB RX 570.
Ultimately, AMD's position is that as long as the RX 570 can match or outperform the GTX 1650 on price and performance, the GTX 1650's advantages in features and power efficiency do not matter – and keep in mind that the RX 570 consumes nearly twice as much power here. As long as AMD wants to hold the line there, this will be a favorable matchup for AMD on a pure price/performance basis for current-generation games. Of course, to see just how favorable, we will need to benchmark the GTX 1650 ourselves. So stay tuned.