1 Simon Willison's Weblog


That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.

There's a whole lot of stuff in the new release.

DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:

DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.

I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.

To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

This is a fascinating flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).

Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?

(I also just noticed the MIT license files state "Copyright (c) 2023 DeepSeek" so they may need to pay a little bit more attention to how they copied those in.)

Licensing aside, these distilled models are fascinating beasts.

Running DeepSeek-R1-Distill-Llama-8B-GGUF

Quantized versions are already starting to show up. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.

I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.

First I fetched and ran the model using Ollama itself:
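A minimal sketch of that fetch, assuming the Q8_0 quantization from the Unsloth repository (the exact tag depends on which of the GGUF files you want):

```
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
```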

This downloads around 8.5GB of model data and starts an interactive chat interface.

Once the model has been fetched, LLM can talk to it too. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
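Those logs are easy to get at from the terminal, for example:

```
llm logs path   # print the location of the SQLite database
llm logs -n 1   # show the most recently logged prompt and response
```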

I installed the plugin and ran it like this:
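A sketch of those two steps, reusing the same Hugging Face model reference (the quantization tag is an assumption; match it to whatever Ollama pulled):

```
llm install llm-ollama
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together'
```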

Each response starts with a `<think>...</think>` pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:

Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.

First, let's think about their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're generally found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.

[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the `</think>` tag:

The Joke:

A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its big size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place saucers on the table, causing a few spills.

After a series of comical mishaps, the walrus looks up and says with a grin, "This isn't so fishy anymore." The pelican smirks and remarks, "Maybe not, but we do have a lot of krill in our tea!"

Explanation:

- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comedic chaos.
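If you only want the final answer without all that reasoning, one option is to filter the `<think>` block out after the fact. A minimal shell sketch, assuming the tags appear verbatim on their own lines in the output:

```
# Delete everything from the opening <think> line through the closing </think> line
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together' \
  | sed '/<think>/,/<\/think>/d'
```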