hardware upgrades

  • Nvidia Shifts Focus to AI, No New GPUs at CES


    For the first time in five years, Nvidia will not announce any new GPUs at CES, quashing rumors of RTX 50 Super cards amid limited supply of the 5070 Ti, 5080, and 5090 models. Instead, the company is expected to focus on AI developments, and is reportedly considering reintroducing the RTX 3060 to meet demand. Meanwhile, prices of DDR5 memory and storage have surged, with 128GB kits reaching $1,460, making hardware upgrades increasingly challenging. This matters because it highlights the tech industry's shifting focus toward AI and the impact of rising component costs on consumer upgrades.

    Read Full Article: Nvidia Shifts Focus to AI, No New GPUs at CES

  • Benchmarking Small LLMs on a 16GB Laptop


    Running small language models (LLMs) on a standard 16GB RAM laptop reveals varying levels of usability. Qwen 2.5 (14B) offers the best coding performance but consumes significant RAM, leading to crashes when multitasking. Mistral Small (12B) balances speed against resource demand, though it still causes Windows to swap memory aggressively. Llama-3-8B is more manageable but lacks the reasoning ability of newer models, while Gemma 3 (9B) excels at instruction following but is resource-intensive. Given rising RAM prices, upgrading to 32GB allows smoother operation without swap lag, a more cost-effective step than investing in a high-end GPU. This matters because understanding the resource requirements of LLMs helps users optimize their systems without overspending on hardware upgrades.

    Read Full Article: Benchmarking Small LLMs on a 16GB Laptop
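
    The RAM pressure described above follows a common rule of thumb (my own sketch, not from the article): a model's weights need roughly parameter count × bits-per-weight ÷ 8 bytes, plus overhead for the KV cache and runtime buffers. The function name, the 4-bit quantization assumption, and the 25% overhead factor below are illustrative, not the author's benchmark method:

    ```python
    def model_ram_gb(params_billions: float, bits: int = 4, overhead: float = 1.25) -> float:
        """Rough RAM estimate: quantized weights plus ~25% overhead
        for KV cache and runtime buffers (assumed, not measured)."""
        weights_gb = params_billions * bits / 8  # 1B params at 8-bit ~= 1 GB
        return round(weights_gb * overhead, 1)

    # Ballpark 4-bit estimates for the models discussed above:
    for name, size in [("Llama-3-8B", 8), ("Gemma 3 (9B)", 9),
                       ("Mistral Small (12B)", 12), ("Qwen 2.5 (14B)", 14)]:
        print(f"{name}: ~{model_ram_gb(size)} GB")
    ```

    On a 16GB machine where the OS and a browser already hold several gigabytes, the roughly 9 GB footprint this gives for a 4-bit 14B model is consistent with the aggressive swapping the author observed, and shows why a 32GB upgrade removes the bottleneck.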