Intel

  • Intel’s Custom Panther Lake CPU for Handheld PCs


    Intel is entering the handheld gaming market with its upcoming Panther Lake chips, aiming to build a dedicated gaming platform that could outperform current offerings. The company plans to develop custom Intel Core G3 variants specifically for handheld devices, leveraging its advanced 18A process to boost GPU performance. The move puts Intel in competition with other tech giants such as Qualcomm and AMD, which are also pursuing the handheld gaming space. Specific details about Intel's gaming platform remain under wraps, with further announcements expected from Intel and its partners later this year. This matters because it signals a growing trend toward more powerful, specialized handheld gaming devices that could transform the portable gaming experience.

    Read Full Article: Intel’s Custom Panther Lake CPU for Handheld PCs

  • Samsung & Intel’s OLED Tech Enhances HDR Efficiency


    Samsung and Intel have developed OLED technology that optimizes HDR (High Dynamic Range) performance on laptops, significantly reducing power consumption. Traditional HDR modes often drive the panel at maximum brightness, wasting energy even during standard tasks like web browsing. SmartPower HDR™ technology addresses this by adjusting voltage and brightness levels, cutting power consumption by up to 22% in general use and up to 17% during HDR content playback. Laptops can thus keep the visual benefits of HDR while drawing roughly as much power as SDR (Standard Dynamic Range) mode. This matters because it extends laptop battery life without compromising display quality, making HDR practical for everyday use.

    Read Full Article: Samsung & Intel’s OLED Tech Enhances HDR Efficiency

  • Windows on Arm: A Year of Progress


    In 2024, Qualcomm's Snapdragon X chips significantly improved the viability of Arm-based Windows laptops, delivering solid performance and impressive battery life, especially in Microsoft's Surface Laptop and Surface Pro models. Even so, inconsistent app compatibility remained a challenge, particularly for creative applications and gaming. By 2025, software updates and better emulation support had made Arm laptops more appealing, with native versions of apps such as Adobe Premiere Pro and improved gaming capabilities. Competition between the Arm and x86 architectures is intensifying, with upcoming releases from Qualcomm, Intel, and AMD promising further advances, and rumors of Nvidia entering the Arm space could lift graphics performance enough to attract gamers. As the gap between Arm and x86 narrows, the choice of platform may increasingly come down to specific user needs and preferences. This matters because it highlights the evolving laptop landscape, giving consumers more options and potentially shifting market dynamics.

    Read Full Article: Windows on Arm: A Year of Progress

  • RPC-server llama.cpp Benchmarks


    The llama.cpp RPC server enables distributed inference of large language models (LLMs) by offloading computation to remote instances running on other machines or GPUs. Benchmarks were run on a local gigabit network spanning three systems and five GPUs, measuring the server's performance across different model sizes and parameters. The systems mixed AMD and Intel CPUs with GPUs including a GTX 1080 Ti, Nvidia P102-100, and Radeon RX 7900 GRE, for a combined 53GB of VRAM. Tests covered models including Nemotron-3-Nano-30B and DeepSeek-R1-Distill-Llama-70B, demonstrating the server's ability to manage complex computations efficiently across distributed environments. This matters because it shows the potential for scalable, efficient LLM deployment in distributed computing environments, which is crucial for advancing AI applications.

    Read Full Article: RPC-server llama.cpp Benchmarks
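    The distributed setup described above can be sketched roughly as follows, assuming llama.cpp was built with RPC support enabled (GGML_RPC=ON); the hostnames, ports, and model path here are placeholders, not values from the benchmark article:

    ```shell
    # On each worker machine, start an RPC backend bound to a reachable address.
    # (llama.cpp built with: cmake -B build -DGGML_RPC=ON && cmake --build build)
    ./build/bin/rpc-server --host 0.0.0.0 --port 50052

    # On the head node, point llama-cli at the workers with --rpc.
    # Layers offloaded with -ngl are split across the local and remote backends.
    ./build/bin/llama-cli \
      -m ./models/model.gguf \
      --rpc 192.168.1.10:50052,192.168.1.11:50052 \
      -ngl 99 \
      -p "Hello"
    ```

    llama-bench accepts the same --rpc flag, which is presumably how per-model throughput figures like those in the benchmarks were gathered.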