links from November 2024
Cerebras Now The Fastest LLM Inference Processor; It's Not Even Close
To put it into perspective, Cerebras ran the 405B model nearly twice as fast as the fastest GPU cloud ran the 1B model. Twice the speed on a model that is two orders of magnitude more complex.
A guide and tools for running macOS on QEMU/KVM. Supports running modern macOS versions including Monterey, Ventura and Sonoma using OpenCore.
Py2/py3 script that can download macOS components direct from Apple
A Python script that can download macOS components directly from Apple and create bootable USB installers. Supports both Python 2 and 3.
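To give a sense of how such a downloader works, here is a minimal Python sketch: it fetches Apple's public software update catalog (a plist) and prints the package URLs it finds. The catalog URL and the plist key names (Products, Packages, URL) are assumptions about the catalog format for illustration, not code taken from the linked project.

```python
import plistlib
from urllib.request import urlopen

# Assumed catalog URL: Apple's public software update catalog is an XML plist
# whose "Products" dictionary maps product IDs to entries containing a
# "Packages" list of downloadable components. The exact index filename
# changes between macOS releases, so treat this as illustrative only.
CATALOG_URL = (
    "https://swscan.apple.com/content/catalogs/others/"
    "index-14-13-12-10.16-10.15-10.14-10.13-10.12-10.11"
    "-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog"
)

def list_installer_packages(limit=5):
    """Print package URLs for the first few products in the catalog."""
    raw = urlopen(CATALOG_URL).read()   # fetch the plist catalog
    catalog = plistlib.loads(raw)       # parse the XML plist into dicts/lists
    for product_id, product in list(catalog["Products"].items())[:limit]:
        for pkg in product.get("Packages", []):
            print(product_id, pkg["URL"])

if __name__ == "__main__":
    list_installer_packages()
```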
OpenAI and others seek new path to smarter AI as current methods hit limitations
Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training - the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures - have plateaued.
Then from Yann LeCun:
I don’t wanna say “I told you so”, but I told you so.
Also, from Gary Marcus:
Yann LeCun is an absolute conniving thief