News Posts matching #GPU


Intel Releases Arc GPU Graphics Drivers 101.5522 WHQL

Intel released the latest version of Arc GPU Graphics Drivers today. The latest version, 101.5522 WHQL, mostly brings game-ready support for titles like Senua's Saga: Hellblade II, Starfield May Update, Wuthering Waves, and XDefiant. With support for the Starfield May Update, Intel has prepared performance improvements when the game runs on the DirectX 12 API. This brings up to 8% average FPS uplift at 1080p with Ultra settings and up to 7% average FPS uplift at 1440p with High settings, a notable improvement coming from the driver alone. You can download the drivers from the link below.
DOWNLOAD: Intel Arc GPU Graphics Drivers 101.5522 WHQL.

Intel Prepares Core Ultra 5-238V Lunar Lake-MX CPU with 32 GB LPDDR5X Memory

Intel has prepared the Core Ultra 5-238V, a Lunar Lake-MX CPU that integrates 32 GB of LPDDR5X memory into the CPU package. This design represents a significant departure from the traditional approach of using separate memory modules, promising enhanced performance and efficiency, similar to what Apple does with its M series of processors. The Core Ultra 5-238V is Intel's first design of this kind aimed at mass-market consumers; the previous attempt, Lakefield, featured advanced 3D-stacked Foveros packaging but never took off. With 32 GB of high-bandwidth, low-power LPDDR5X memory integrated directly into the CPU package, the Core Ultra 5-238V eliminates the need for separate memory modules, reducing latency and improving overall system responsiveness. The integration results in faster data transfer rates and lower power consumption, with the LPDDR5X memory running at 8533 MT/s.
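Peak theoretical DRAM bandwidth follows from the transfer rate times the bus width in bytes. The sketch below assumes a 128-bit memory bus (eight 16-bit LPDDR5X channels), which is an assumption for illustration and not a figure confirmed in this report:

```python
def peak_bandwidth_gbs(mt_per_s: int, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: transfers/s x bus width in bytes."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

# 8533 MT/s LPDDR5X on an assumed 128-bit bus
print(peak_bandwidth_gbs(8533, 128))  # 136.528 GB/s
```

With those assumptions, the on-package memory would offer roughly 136.5 GB/s of peak bandwidth.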

Applications that demand intensive memory usage, such as video editing, 3D rendering, and high-end gaming, will be the first to see performance gains. Users can expect smoother multitasking, quicker load times, and more efficient handling of memory-intensive tasks. The Core Ultra 5-238V pairs four big Lion Cove cores and four little Skymont cores with seven Xe2-LPG graphics cores based on the Battlemage GPU microarchitecture. The bigger sibling to the Core Ultra 5, the Core Ultra 7 series, will feature eight Xe2-LPG cores instead of seven with the same CPU core count, and all models will run the fourth-generation NPU.

Lenovo Announces its New AI PC ThinkPad P14s Gen 5 Mobile Workstation Powered by AMD Ryzen PRO Processors

Today, Lenovo launched the Lenovo ThinkPad P14s Gen 5, designed for professionals who need top-notch performance in a portable 14-inch chassis. Featuring a stunning 16:10 display, this mobile workstation is powered by AMD Ryzen PRO 8040 HS-Series processors. These processors are ultra-advanced and energy-efficient, making them perfect for thin and light mobile workstations. The AMD Ryzen PRO HS-Series processors also come with built-in Artificial Intelligence (AI) capabilities, including an integrated Neural Processing Unit (NPU) for optimized performance in AI workflows.

The Lenovo ThinkPad P14s Gen 5 comes with independent software vendor (ISV) certifications and integrated AMD Radeon graphics, making it ideal for running applications like AutoCAD, Revit, and SOLIDWORKS with seamless performance. This mobile workstation is built for mobile power users, offering advanced ThinkShield security features and passing comprehensive MIL-SPEC testing for ultimate durability.

Intel Ponte Vecchio Waves Goodbye, Company Focuses on Falcon Shores for 2025 Release

According to ServeTheHome, Intel has decided to discontinue its high-performance computing (HPC) product line, Ponte Vecchio, and shift its focus towards developing its next-generation data center GPU, codenamed Falcon Shores. This decision comes as Intel aims to streamline its operations and concentrate its resources on the most promising and competitive offerings. The Ponte Vecchio GPU, released in January of 2023, was intended to be Intel's flagship product for the HPC market, competing against the likes of NVIDIA's H100 and AMD's Instinct MI series. However, despite its impressive specifications and features, Ponte Vecchio faced significant delays and challenges in its development and production cycle. Intel's decision to abandon Ponte Vecchio is pragmatic, recognizing the intense competition and rapidly evolving landscape of the data center GPU market.

By pivoting its attention to Falcon Shores, Intel aims to deliver a more competitive, cutting-edge solution that can effectively challenge the dominance of its rivals. Falcon Shores, slated for a 2025 release, is expected to leverage Intel's latest process node and architectural innovations. Intel currently offers the Gaudi 2 and Gaudi 3 accelerators for AI, but the HPC segment is left without a clear leader in the company's product lineup. Ponte Vecchio powers the Aurora exascale supercomputer, the latest submission to the TOP500 supercomputer list. The move also follows the cancellation of Rialto Bridge, which was to be an HPC-focused card. Going forward, the company will focus solely on the Falcon Shores accelerator, which will unify HPC and AI workloads, covering high-precision FP64 as well as lower-precision FP16/INT8.

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA, remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, the machine has an impressive power efficiency rating of 52.93 GFlops/Watt, putting Frontier at the No. 13 spot on the GREEN500.
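Those two headline figures together imply Frontier's total power draw during the HPL run. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
hpl_score_eflops = 1.206            # HPL result in EFlop/s
efficiency_gflops_per_watt = 52.93  # GREEN500 efficiency rating

# power = performance / efficiency, after converting EFlop/s to GFlop/s
power_megawatts = (hpl_score_eflops * 1e9) / efficiency_gflops_per_watt / 1e6
print(f"{power_megawatts:.1f} MW")  # 22.8 MW
```

In other words, the machine consumed roughly 22.8 MW while sustaining 1.206 EFlop/s.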

NVIDIA Blackwell Platform Pushes the Boundaries of Scientific Computing

Quantum computing. Drug discovery. Fusion energy. Scientific computing and physics-based simulations are poised to make giant strides across domains that benefit humanity, as advances in accelerated computing and AI drive the world's next big breakthroughs. At GTC in March, NVIDIA unveiled the NVIDIA Blackwell platform, which promises generative AI on trillion-parameter large language models (LLMs) at up to 25x less cost and energy consumption than the NVIDIA Hopper architecture.

Blackwell has powerful implications for AI workloads, and its technology capabilities can also help deliver discoveries across all types of scientific computing applications, including traditional numerical simulation. By reducing energy costs, accelerated computing and AI drive sustainable computing, and many scientific computing applications already benefit. Weather can be simulated at 200x lower cost and with 300x less energy, while digital twin simulations cost 65x less and consume 58x less energy than traditional CPU-based systems.

NVIDIA Grace Hopper Ignites New Era of AI Supercomputing

Driving a fundamental shift in the high-performance computing industry toward AI-powered systems, NVIDIA today announced nine new supercomputers worldwide are using NVIDIA Grace Hopper Superchips to speed scientific research and discovery. Combined, the systems deliver 200 exaflops, or 200 quintillion calculations per second, of energy-efficient AI processing power.

New Grace Hopper-based supercomputers coming online include EXA1-HE, in France, from CEA and Eviden; Helios at Academic Computer Centre Cyfronet, in Poland, from Hewlett Packard Enterprise (HPE); Alps at the Swiss National Supercomputing Centre, from HPE; JUPITER at the Jülich Supercomputing Centre, in Germany; DeltaAI at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign; and Miyabi at Japan's Joint Center for Advanced High Performance Computing - established between the Center for Computational Sciences at the University of Tsukuba and the Information Technology Center at the University of Tokyo.

NVIDIA Accelerates Quantum Computing Centers Worldwide With CUDA-Q Platform

NVIDIA today announced that it will accelerate quantum computing efforts at national supercomputing centers around the world with the open-source NVIDIA CUDA-Q platform. Supercomputing sites in Germany, Japan and Poland will use the platform to power the quantum processing units (QPUs) inside their NVIDIA-accelerated high-performance computing systems.

QPUs are the brains of quantum computers that use the behavior of particles like electrons or photons to calculate differently than traditional processors, with the potential to make certain types of calculations faster. Germany's Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich is installing a QPU built by IQM Quantum Computers as a complement to its JUPITER supercomputer, supercharged by the NVIDIA GH200 Grace Hopper Superchip. The ABCI-Q supercomputer, located at the National Institute of Advanced Industrial Science and Technology (AIST) in Japan, is designed to advance the nation's quantum computing initiative. Powered by the NVIDIA Hopper architecture, the system will add a QPU from QuEra. Poland's Poznan Supercomputing and Networking Center (PSNC) has recently installed two photonic QPUs, built by ORCA Computing, connected to a new supercomputer partition accelerated by NVIDIA Hopper.

NVIDIA Testing GeForce RTX 50 Series "Blackwell" GPU Designs Ranging from 250 W to 600 W

According to Benchlife.info insiders, NVIDIA is supposedly in the phase of testing designs with various Total Graphics Power (TGP) levels, ranging from 250 W to 600 W, for its upcoming GeForce RTX 50 series "Blackwell" graphics cards. The designs span from a 250 W configuration aimed at mainstream users to a more powerful 600 W configuration tailored for enthusiast-level performance. The 250 W cooling system is expected to prioritize compactness and power efficiency, making it an appealing choice for gamers seeking a balance between capability and energy conservation. This design could prove particularly attractive for those building small form-factor rigs, or for AIBs looking to offer smaller cooler sizes. On the other end of the spectrum, the 600 W cooling solution represents the highest TGP of the stack, possibly made only for testing purposes. Other SKUs with different power configurations fall in between.

We witnessed NVIDIA testing a 900 W version of the Ada Lovelace AD102 GPU SKU, which never saw the light of day, so we should take this testing phase with a grain of salt. Often, the engineering silicon is the first batch made for software and firmware enablement, while the final silicon is more efficient, optimized to use less power, and aligned with regular TGP structures. The current highest-end SKU, the GeForce RTX 4090, uses a 450 W TGP. So, take this phase with some reservations as we wait for more information to come out.

SpiNNcloud Systems Announces First Commercially Available Neuromorphic Supercomputer

Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, a supercomputer-level hybrid AI high-performance computer system based on principles of the human brain. Pioneered by Steve Furber, designer of the original ARM and SpiNNaker1 architectures, the SpiNNaker2 supercomputing platform uses a large number of low-power processors for efficiently computing AI and other workloads.

The first-generation SpiNNaker1 architecture is currently used by dozens of research groups across 23 countries. Sandia National Laboratories, the Technical University of Munich, and the University of Göttingen are among the first customers placing orders for SpiNNaker2, which was developed around commercialized IP invented in the Human Brain Project, a billion-euro research project funded by the European Union to design intelligent, efficient artificial systems.

Report: 3 Out of 4 Laptop PCs Sold in 2027 will be AI Laptop PCs

Personal computers (PCs) have been used as the major productivity device for several decades. But now we are entering a new era of PCs based on artificial intelligence (AI), thanks to the boom witnessed in generative AI (GenAI). We believe the inventory correction and demand weakness in the global PC market have already normalized, with the impacts from COVID-19 largely being factored in. All this has created a comparatively healthy backdrop for reshaping the PC industry. Counterpoint estimates that almost half a billion AI laptop PCs will be sold during the 2023-2027 period, with AI PCs reviving the replacement demand.

Counterpoint separates GenAI laptop PCs into three categories - AI basic laptop, AI-advanced laptop and AI-capable laptop - based on their level of computational performance, corresponding use cases, and computational efficiency. AI basic laptops, which are already on the market, can perform basic AI tasks but not full GenAI workloads. Starting this year, they will be supplanted by AI-advanced and AI-capable models with enough TOPS (tera operations per second), delivered by an NPU (neural processing unit) or GPU (graphics processing unit), to perform advanced GenAI tasks really well.

Apple Unveils the Redesigned 11‑inch and All‑new 13‑inch iPad Air, Supercharged by the M2 Chip

Apple today announced the redesigned 11-inch and all-new 13-inch iPad Air, supercharged by the M2 chip. Now available in two sizes for the first time, the 11-inch iPad Air is super-portable, and the 13-inch model provides an even larger display for more room to work, learn, and play. Both deliver phenomenal performance and advanced capabilities, making iPad Air more powerful and versatile than ever before. Featuring a faster CPU, GPU, and Neural Engine in M2, the new iPad Air offers even more performance and is an incredibly powerful device for artificial intelligence. The front-facing Ultra Wide 12MP camera with Center Stage is now located along the landscape edge of iPad Air, which is perfect for video calls. It also includes faster Wi-Fi, and cellular models include super-fast 5G, so users can stay connected on the go. With a portable design, all-day battery life, a brilliant Liquid Retina display, and support for Apple Pencil Pro, Apple Pencil (USB-C), and Magic Keyboard, iPad Air empowers users to be even more productive and creative. The new iPad Air is available in new blue and purple finishes, along with starlight and space gray. The 11-inch iPad Air still starts at just $599, and the 13-inch iPad Air is a fantastic value at just $799. Customers can order the new iPad Air today, with availability beginning Wednesday, May 15.

"So many users—from students, to content creators, to small businesses, and more—love iPad Air for its performance, portability, and versatility, all at an affordable price. Today, iPad Air gets even better," said Bob Borchers, Apple's vice president of Product Marketing. "We're so excited to introduce the redesigned 11-inch and all-new 13-inch iPad Air, offering two sizes for the first time. With its combination of a brilliant Liquid Retina display, the phenomenal performance of the M2 chip, incredible AI capabilities, and its colorful, portable design with support for new accessories, iPad Air is more powerful and versatile than ever."

Apple Unveils Stunning New iPad Pro With the World's Most Advanced Display, M4 Chip and Apple Pencil Pro

Apple today unveiled the groundbreaking new iPad Pro in a stunningly thin and light design, taking portability and performance to the next level. Available in silver and space black finishes, the new iPad Pro comes in two sizes: an expansive 13-inch model and a super-portable 11-inch model. Both sizes feature the world's most advanced display—a new breakthrough Ultra Retina XDR display with state-of-the-art tandem OLED technology—providing a remarkable visual experience. The new iPad Pro is made possible with the new M4 chip, the next generation of Apple silicon, which delivers a huge leap in performance and capabilities. M4 features an entirely new display engine to enable the precision, color, and brightness of the Ultra Retina XDR display. With a new CPU, a next-generation GPU that builds upon the GPU architecture debuted on M3, and the most powerful Neural Engine yet, the new iPad Pro is an outrageously powerful device for artificial intelligence. The versatility and advanced capabilities of iPad Pro are also enhanced with all-new accessories. Apple Pencil Pro brings powerful new interactions that take the pencil experience even further, and a new thinner, lighter Magic Keyboard is packed with incredible features. The new iPad Pro, Apple Pencil Pro, and Magic Keyboard are available to order starting today, with availability in stores beginning Wednesday, May 15.

"iPad Pro empowers a broad set of pros and is perfect for anyone who wants the ultimate iPad experience—with its combination of the world's best displays, extraordinary performance of our latest M-series chips, and advanced accessories—all in a portable design. Today, we're taking it even further with the new, stunningly thin and light iPad Pro, our biggest update ever to iPad Pro," said John Ternus, Apple's senior vice president of Hardware Engineering. "With the breakthrough Ultra Retina XDR display, the next-level performance of M4, incredible AI capabilities, and support for the all-new Apple Pencil Pro and Magic Keyboard, there's no device like the new iPad Pro."

Apple Introduces the M4 Chip

Apple today announced M4, the latest chip delivering phenomenal performance to the all-new iPad Pro. Built using second-generation 3-nanometer technology, M4 is a system on a chip (SoC) that advances the industry-leading power efficiency of Apple silicon and enables the incredibly thin design of iPad Pro. It also features an entirely new display engine to drive the stunning precision, color, and brightness of the breakthrough Ultra Retina XDR display on iPad Pro. A new CPU has up to 10 cores, while the new 10-core GPU builds on the next-generation GPU architecture introduced in M3, and brings Dynamic Caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to iPad for the first time. M4 has Apple's fastest Neural Engine ever, capable of up to 38 trillion operations per second, which is faster than the neural processing unit of any AI PC today. Combined with faster memory bandwidth, along with next-generation machine learning (ML) accelerators in the CPU, and a high-performance GPU, M4 makes the new iPad Pro an outrageously powerful device for artificial intelligence.

"The new iPad Pro with M4 is a great example of how building best-in-class custom silicon enables breakthrough products," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "The power-efficient performance of M4, along with its new display engine, makes the thin design and game-changing display of iPad Pro possible, while fundamental improvements to the CPU, GPU, Neural Engine, and memory system make M4 extremely well suited for the latest applications leveraging AI. Altogether, this new chip makes iPad Pro the most powerful device of its kind."

NVIDIA Advertises "Premium AI PC" Mocking the Compute Capability of Regular AI PCs

According to the report from BenchLife, NVIDIA has started the marketing campaign push for "Premium AI PC," squarely aimed at the industry's latest trend pushed by Intel, AMD, and Qualcomm for an "AI PC" system, which features a dedicated NPU for processing smaller models locally. NVIDIA's approach comes from a different point of view: every PC with an RTX GPU is a "Premium AI PC," which holds a lot of truth. Generally, GPUs (regardless of the manufacturer) hold more computing potential than the CPU and NPU combined. With NVIDIA's push to include Tensor cores in its GPUs, the company is preparing for next-generation software from vendors and OS providers that will harness the power of these powerful silicon pieces and embed more functionality in the PC.

At the Computex event in Taiwan, there should be more details about Premium AI PCs and general AI PCs. In its marketing materials, NVIDIA compares AI PCs to its Premium AI PCs, which have enhanced capabilities across various applications like image/video editing and upscaling, productivity, gaming, and developer applications. Another relevant selling point is the installed base for these Premium AI PCs, which NVIDIA puts at 100 million users. Those PCs support over 500 AI applications out of the box, highlighting the importance of proper software support. NVIDIA's systems are usually more powerful, with GeForce RTX GPUs delivering anywhere from 100 to over 1,300 TOPS, compared to the roughly 40 TOPS of baseline AI PCs. How other AI PC makers plan to compete in the AI PC era remains to be seen, but there is a high chance this will be the spotlight of the upcoming Computex show.

Alphacool Launches New Eisblock Aurora 180° Terminal

With the new Alphacool Eisblock Aurora 180° terminal, you can give your Eisblock Aurora GPU cooler a new look and gain additional options for connecting it to your water-cooling circuit. The terminal allows flexible connection options for all Eisblock GPU coolers, making it perfect for extensive modding projects or for systems with limited space. The elegant design is perfected by a magnetic cover.

Flexible connections
The Alphacool Eisblock Aurora 180° terminal replaces the standard terminal of the Eisblock GPU cooler. It positions the connections above the backplate, significantly reducing the depth of the cooling block. With three possible connection options for each input and output - top, side and rear - the terminal offers maximum flexibility.

SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon) held in Santa Clara, California from April 30-May 1. Organized by a group of more than 240 global semiconductor companies known as the CXL Consortium, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.

More than 500 AI Models Run Optimized on Intel Core Ultra Processors

Today, Intel announced it surpassed 500 AI models running optimized on new Intel Core Ultra processors - the industry's premier AI PC processor available in the market today, featuring new AI experiences, immersive graphics and optimal battery life. This significant milestone is a result of Intel's investment in client AI, the AI PC transformation, framework optimizations and AI tools including OpenVINO toolkit. The 500 models, which can be deployed across the central processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU), are available across popular industry sources, including OpenVINO Model Zoo, Hugging Face, ONNX Model Zoo and PyTorch. The models draw from categories of local AI inferencing, including large language, diffusion, super resolution, object detection, image classification/segmentation, computer vision and others.

"Intel has a rich history of working with the ecosystem to bring AI applications to client devices, and today we celebrate another strong chapter in the heritage of client AI by surpassing 500 pre-trained AI models running optimized on Intel Core Ultra processors. This unmatched selection reflects our commitment to building not only the PC industry's most robust toolchain for AI developers, but a rock-solid foundation AI software users can implicitly trust."
-Robert Hallock, Intel vice president and general manager of AI and technical marketing in the Client Computing Group

AMD Celebrates its 55th Birthday

AMD is now a 55-year-old company. The chipmaker was founded on May 1, 1969, and has traversed practically every era of digital computing to reach where it is today: a company that makes contemporary processors for PCs, servers, and consumer electronics; GPUs for gaming graphics and professional visualization; and the all-important AI HPC processors that are driving the latest era of computing. As of this writing, AMD has a market capitalization of over $237 billion, a presence in all market regions, and supplies hardware and services to nearly every Fortune 500 company, including every IT giant. Happy birthday, AMD!

We Tested NVIDIA's new ChatRTX: Your Own GPU-accelerated AI Assistant with Photo Recognition, Speech Input, Updated Models

NVIDIA today unveiled ChatRTX, an AI assistant that runs locally on your machine, accelerated by your GeForce RTX GPU. NVIDIA originally launched this as "Chat with RTX" back in February 2024, when it was regarded more as a public tech demo; we reviewed the application in our feature article. The ChatRTX rebranding is probably aimed at making the name sound more like ChatGPT, which is what the application aims to be, except that it runs completely on your machine and is exhaustively customizable. The most obvious advantage of a locally run AI assistant is privacy: you are interacting with an assistant that processes your prompt locally, accelerated by your GPU. The second is that you are not held back by the performance bottlenecks of cloud-based assistants.

ChatRTX is a major update over the Chat with RTX tech demo from February. To begin with, the application has several stability refinements over Chat with RTX, which felt a little rough around the edges. NVIDIA has significantly updated the LLMs included with the application, including Mistral 7B INT4 and Llama 2 7B INT4. Support is also added for additional LLMs, including Gemma, a local LLM trained by Google based on the same technology used to create Google's flagship Gemini model. ChatRTX now also supports ChatGLM3, for both English and Chinese prompts. Perhaps the biggest upgrade in ChatRTX is its ability to recognize images on your machine, as it incorporates CLIP (contrastive language-image pre-training) from OpenAI. CLIP is a neural network that learns to match images with text descriptions, letting you interact with your image library without the need for metadata. ChatRTX doesn't just take text input: you can speak to it. It now accepts natural voice input, as it integrates the Whisper automatic speech recognition model.
DOWNLOAD: NVIDIA ChatRTX
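CLIP-style image search works because the model embeds images and text prompts into a shared vector space, where matching reduces to cosine similarity. A minimal sketch of that matching step, using made-up four-dimensional vectors in place of real CLIP embeddings (which are much higher-dimensional):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for real CLIP outputs (hypothetical values)
image_embeddings = {
    "beach.jpg":    [0.9, 0.1, 0.0, 0.2],
    "mountain.jpg": [0.1, 0.8, 0.3, 0.0],
}
text_embedding = [0.85, 0.15, 0.05, 0.25]  # e.g. the prompt "sunny beach"

# Retrieval: pick the image whose embedding best matches the prompt
best = max(image_embeddings,
           key=lambda name: cosine_similarity(text_embedding, image_embeddings[name]))
print(best)  # beach.jpg
```

Because both modalities live in the same space, no filenames or metadata tags are needed; the text prompt alone ranks the images.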

AMD Releases Software Adrenalin 24.4.1 WHQL GPU Drivers

AMD has released the latest version of its Adrenalin Edition graphics drivers, version 24.4.1 WHQL. The release includes support for the upcoming Manor Lords game, adds performance improvements for HELLDIVERS 2, and brings AMD HYPR-Tune support to Nightingale and SKULL AND BONES. The new drivers also expand Vulkan API extension support with VK_KHR_shader_maximal_reconvergence and VK_KHR_dynamic_rendering_local_read, and include support and optimizations for the Topaz Gigapixel AI application, versions 7.1.0 and 7.1.1, with its new "Recovery" and "Low Resolution" AI upscaling features.

The new AMD Software Adrenalin Edition 24.4.1 WHQL drivers also come with several fixes: performance improvements for HELLDIVERS 2; a fix for an intermittent application crash in Lords of the Fallen on Radeon RX 6000 series graphics cards; fixes for various artifact issues in SnowRunner and Horizon Forbidden West Complete Edition on Radeon RX 6800 and RX 6000 series cards; a fix for an intermittent application crash or driver timeout in Overwatch 2 when Radeon Boost is enabled on Radeon RX 6000 and newer cards; a fix for an intermittent crash while changing anti-aliasing settings in Enshrouded on Radeon RX 7000 series cards; and fixes for various application freeze or crash issues with SteamVR when using Quest Link on Meta Quest 2 or when screen sharing with Microsoft Teams.

DOWNLOAD: AMD Software Adrenalin 24.4.1 WHQL

Aetina Accelerates Embedded AI with High-performance, Small Form-factor Aetina IA380E-QUFL Graphics Card

Aetina, a leading Edge AI solution provider, announced the launch of the Aetina IA380E-QUFL at Embedded World 2024 in Nuremberg, Germany. This groundbreaking product is a small form factor PCIe graphics card powered by the high-performance Intel Arc A380E GPU.

Unmatched Power in a Compact Design
The Aetina IA380E-QUFL delivers workstation-level performance packed into a low-profile, single-slot form factor. This innovative solution consumes only 50 W, making it ideal for space- and power-constrained edge computing environments. Embedded system manufacturers and integrators can leverage the 4.096 TFLOPS of peak FP32 performance delivered by the Intel Arc A380E GPU.
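That 4.096 TFLOPS figure is consistent with the standard peak-FP32 formula: shader lanes × clock × 2 FLOPs per cycle (a fused multiply-add counts as two operations). The lane count and clock below are assumptions chosen to reproduce the quoted number, not specifications confirmed in the announcement:

```python
fp32_lanes = 1024        # assumed: 8 Xe-cores x 128 FP32 lanes each
boost_clock_ghz = 2.0    # assumed clock that yields the quoted figure
flops_per_cycle = 2      # fused multiply-add = 2 FLOPs per lane per cycle

peak_tflops = fp32_lanes * boost_clock_ghz * flops_per_cycle / 1000
print(peak_tflops)  # 4.096
```

This is the same formula vendors across the industry use to quote peak FP32 throughput; real-world sustained performance is always lower.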

Unreal Engine 5.4 is Now Available With Improvements to Nanite, AI and Machine Learning, TSR, and More

Unreal Engine 5.4 is here, and it's packed with new features and improvements to performance, visual fidelity, and productivity that will benefit game developers and creators across industries. With this release, we're delivering the toolsets we've been using internally to build and ship Fortnite Chapter 5, Rocket Racing, Fortnite Festival, and LEGO Fortnite. Here are some of the highlights.

Animation
Character rigging and animation authoring
This release sees substantial updates to Unreal Engine's built-in animation toolset, enabling you to quickly, easily, and enjoyably rig characters and author animation directly in engine, without the frustrating and time-consuming need to round-trip to external applications. With an Experimental new Modular Control Rig feature, you can build animation rigs from understandable modular parts instead of complex granular graphs, while Automatic Retargeting makes it easier to get great results when reusing bipedal character animations. There are also extensions to the Skeletal Editor and a suite of new deformer functions to make the Deformer Graph more accessible.

AMD's RDNA 4 GPUs Could Stick with 18 Gbps GDDR6 Memory

Today, we have the latest round of leaks suggesting that AMD's upcoming RDNA 4 graphics cards, expected to launch as the RX 8000 series, might continue to rely on GDDR6 memory modules. According to Kepler on X, the next-generation GPUs from AMD are expected to feature 18 Gbps GDDR6 memory, marking the fourth consecutive RDNA architecture to employ this memory standard. While GDDR6 does not offer the same bandwidth capabilities as the newer GDDR7 standard, this decision does not necessarily imply that RDNA 4 GPUs will be slow performers. AMD's choice to stick with GDDR6 is likely driven by factors such as meeting specific memory bandwidth requirements and cost optimization for PCB designs. However, if the rumor of 18 Gbps GDDR6 memory proves accurate, it would represent a slight step back from the 18-20 Gbps GDDR6 memory used in AMD's current RDNA 3 offerings, such as the RX 7900 XT and RX 7900 XTX GPUs.

AMD's first generation RDNA used GDDR6 with 12-14 Gbps speeds, RDNA 2 came with GDDR6 at 14-18 Gbps, and the current RDNA 3 used 18-20 Gbps GDDR6. Without an increment in memory generation, speeds should stay the same at 18 Gbps. However, it is crucial to remember that leaks should be treated with skepticism, as AMD's final memory choices for RDNA 4 could change before the official launch. The decision to use GDDR6 versus GDDR7 could have significant implications in the upcoming battle between AMD, NVIDIA, and Intel's next-generation GPU architectures. If AMD indeed opts for GDDR6 while NVIDIA pivots to GDDR7 for its "Blackwell" GPUs, it could create a disparity in memory bandwidth performance between the competing products. All three major GPU manufacturers—AMD, NVIDIA, and Intel with its "Battlemage" architecture—are expected to unveil their next-generation offerings in the fall of this year. As we approach these highly anticipated releases, more concrete details on specifications and performance capabilities will emerge, providing a clearer picture of the competitive landscape.
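For context, peak memory bandwidth is the per-pin data rate multiplied by the bus width in bytes, so pin speed is only half the story. The sketch below compares a hypothetical 18 Gbps RDNA 4 card (the 256-bit bus width is an assumption for illustration, not a leaked figure) against the known configuration of the RX 7900 XTX:

```python
def memory_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin rate x bus width in bytes."""
    return pin_speed_gbps * bus_width_bits / 8

# Hypothetical RDNA 4 part: 18 Gbps GDDR6, assumed 256-bit bus
print(memory_bandwidth_gbs(18, 256))  # 576.0 GB/s

# RX 7900 XTX for comparison: 20 Gbps GDDR6 on a 384-bit bus
print(memory_bandwidth_gbs(20, 384))  # 960.0 GB/s
```

A wider bus, larger caches, or higher clocks can all offset a slower memory standard, which is why the GDDR6 choice alone does not determine performance.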

China Circumvents US Restrictions, Still Acquiring NVIDIA GPUs

A recent Reuters investigation has uncovered evidence suggesting Chinese universities and research institutes may have circumvented US sanctions on high-performance NVIDIA GPUs by purchasing servers containing the restricted chips. The sanctions, tightened on November 17, 2023, prohibit the export of advanced NVIDIA GPUs, including the consumer GeForce RTX 4090, to China. Despite these restrictions, Reuters found that at least ten China-based organizations acquired servers equipped with the sanctioned NVIDIA GPUs between November 20, 2023, and February 28, 2024. These servers were purchased from major vendors such as Dell, Gigabyte, and Supermicro, raising concerns about potential sanctions evasion. When contacted by Reuters, the companies provided varying responses.

Dell stated that it had not observed any instances of servers with restricted chips being shipped to China and expressed willingness to terminate relationships with resellers found to be violating export control regulations. Gigabyte, on the other hand, stated that it adheres to Taiwanese laws and international regulations. Notably, the sale and purchase of the sanctioned GPUs are not illegal in China. This raises the possibility that the restricted NVIDIA chips may have already been present in the country before the sanctions took effect on November 17, 2023. The findings highlight the challenges in enforcing export controls on advanced technologies, particularly in the realm of high-performance computing hardware. As tensions between the US and China continue to rise, the potential for further tightening of export restrictions on cutting-edge technologies remains a possibility.