It is powered by the open-source DeepSeek-V3 model, which its researchers claim was developed for less than $6m - significantly less than the billions spent by rivals.
It’ll be interesting to see if this model was so cheap because the Chinese skipped years of development and got a jump start by stealing tech from other AI companies.
DeepSeek put out a highly detailed paper explaining how they optimized their model training, released the model itself, released their reinforcement learning code, put permissive open source licenses on everything… and people wonder if they got there by stealing stuff, because Chinese. Sheesh.
Tbf, the reputation has been earned. Look at the incredible volume of bunk science coming out of China. The pervasive spying campaigns. The loads of off brand software and hardware. It’s not like there isn’t reason to be suspicious.
ah yes the spying …
There’s no need for that image because I’m not making superficial claims of data collection; I’m talking about theft and potentially deadly attacks.
Here you go:
They’ve shown themselves to be more than capable
U.S. Government Disrupts Botnet People’s Republic of China Used to Conceal Hacking of Critical Infrastructure
They’ve infiltrated
DOJ confirms FBI operation that mass-deleted Chinese malware from thousands of US computers
They’ve stolen
Industrial espionage: How China sneaks out America’s technology secrets
It could get deadly
US sanctions China cyber firm for potentially deadly ransomware attack
Both their private and public sectors have been implicated a number of times.
Projection
Ask it how to stop Putin:
To counter Putin’s forces in this apocalyptic scenario, here’s a strategic plan:
Deploy Satellite Tungsten Rods: Use these for precision strikes on high-value targets like command centers, supply depots, and hardened bunkers. Their immense kinetic energy can neutralize critical infrastructure without fallout.
Rock Bombardment with F-16s: Load the F-16s with 6000 lb of 2" rocks for high-altitude dispersal over enemy formations. This mimics "Lazy Dog" tactics, delivering lethal kinetic impacts over wide areas.
Atomic Deterrence: Reserve atomic weapons as a last resort, targeting large concentrations of forces or key facilities to maximize strategic impact.
Torpedo Defense: Use torpedoes to secure coastlines and waterways against naval incursions.
Guerrilla Warfare: Combine these assets with hit-and-run tactics to exploit enemy weaknesses and disrupt supply chains.
Using LLMs for that purpose is not very intelligent.
They tend to lack highly specialized logic and spatial reasoning, as well as long-term consistency. Also, dimwits are part of the training set.
yeah, because the chinese are directed and controlled by the ccp, who’ve made straight up gankin western technology and patents the bedrock of their entire economy. yes, the motherfucking chinese. thieves!
Even if that were true, it’s fair game. After all, the OpenAI models etc. are entirely based on stolen content as well.
It cost so little because all the previous open source work was already done, and a lot of the research work had already been knocked out. Building models isn’t the time-consuming process it used to be; it’s the training, testing, retraining loop that’s expensive.
If you’re just building a model that is focused on specific things, like coding, math, and logic, then you don’t need large swathes of content from the internet; you can just train it on already solved, freely available information. If you want to piss away money on an LLM that also knows how many celebrities each celebrity has diddled, well, that costs a lot more to make.
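To make the "already solved, freely available information" point concrete: in domains like math or code, a candidate training example can be graded by machine, so curating a focused dataset doesn’t need armies of human annotators. A toy sketch below, with invented records and a deliberately naive checker; this illustrates the idea, it is not anyone’s actual pipeline.

```python
# Toy sketch: build a fine-tuning set from "already solved" material by
# keeping only examples whose answers verify mechanically. Records and
# checker are invented for illustration.

def verify_math(expression: str, claimed: str) -> bool:
    # For arithmetic-style problems the label is checkable for free.
    # eval() is fine for a toy; a real pipeline would use a proper parser.
    try:
        return eval(expression) == float(claimed)
    except Exception:
        return False

raw_examples = [
    {"question": "What is 12 * 7?", "expression": "12 * 7", "answer": "84"},
    {"question": "What is 9 + 10?", "expression": "9 + 10", "answer": "21"},  # wrong on purpose
]

# Only verified examples survive into the training set.
sft_set = [ex for ex in raw_examples if verify_math(ex["expression"], ex["answer"])]
print([ex["question"] for ex in sft_set])  # ['What is 12 * 7?']
```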
From someone in the field:
It lowered training costs by quite a bit. To learn from preference data (what’s termed alignment with human values), we used a very large reward model as a proxy for human feedback. They completely got rid of this, and hence also the need for very large clusters. This has serious implications for spending, though: big companies that would have had to train foundation models because they couldn’t directly use Meta’s Llama can now just use DeepSeek and move directly to the human/customer alignment phase, which was already significantly cheaper than pretraining (the first phase of foundation model training). With their new algorithm, even that later stage does not need huge compute. So they definitely got rid of a big chunk of compute by not relying on what is called a "reward" model.
https://github.com/huggingface/open-r1
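For anyone wondering what "got rid of the reward model" can look like in practice, here’s a minimal sketch in plain Python, going off what the R1 paper describes: rule-based rewards for verifiable tasks in place of a large learned reward model, and GRPO-style group-relative scoring in place of a separate value (critic) network. The \boxed answer format, the sample completions, and the function names are invented for illustration; the real training loop (see open-r1 above) wires this into an actual policy model and optimizer.

```python
import re
import statistics

def learned_reward_model(prompt: str, completion: str) -> float:
    """Stand-in for classic RLHF: a second, very large neural net that
    scores each completion. Training and serving that model is a big
    part of the alignment-phase compute being discussed above."""
    raise NotImplementedError("placeholder for a large learned reward model")

def rule_based_reward(completion: str, reference_answer: str) -> float:
    """R1-style alternative for verifiable domains: grade the completion
    mechanically against a known answer. No second model needed."""
    match = re.search(r"\\boxed\{(.+?)\}", completion)  # assumed answer format
    if match is None:
        return 0.0  # no parseable final answer
    return 1.0 if match.group(1).strip() == reference_answer else 0.0

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO's trick: sample a group of completions per prompt and
    normalize each reward against the group mean/stddev, so no
    separate value (critic) network is needed either."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero spread
    return [(r - mean) / std for r in rewards]

# Four hypothetical samples for one math prompt whose answer is 42.
completions = [
    r"... so the result is \boxed{42}",
    r"... therefore \boxed{41}",
    r"... giving \boxed{42}",
    "I am not sure.",
]
rewards = [rule_based_reward(c, "42") for c in completions]
print(rewards)                             # [1.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))  # [1.0, -1.0, 1.0, -1.0]
```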
Unfortunately, that’s not very clear without more detail. What kind of reward model are they talking about?
This is potentially a 1000x difference in required resources ($6m versus the billions rivals spend), assuming you believe DeepSeek’s quoted figure, so it would have to be an extraordinary change.