I have a small physics-based problem I pose to LLMs. It's tricky for humans as well, and all LLMs I've tried (GPT o3, Claude 3.7, Gemini 2.5 Pro) fail to answer correctly. If I ask them to explain their answer, they do get it eventually, but none get it right the first time. Qwen3 with max thinking got it even more wrong than the rest, for what it's worth.
mrkeen 32 minutes ago [-]
As they say, we shouldn't judge AI by the current state-of-the-art, but by how far and fast it's progressing. I can't wait to see future models get it even more wrong than that.
kaoD 18 minutes ago [-]
Personally (anecdata) I haven't experienced any practical progress in my day-to-day tasks for a long time, no matter how good they became at gaming the benchmarks.
They keep being impressive at what they're good at (aggregating sources to solve a very well known problem) and terrible at what they're bad at (actually thinking through novel problems or old problems with few sources).
E.g. ChatGPT, Claude and Gemini were all absolutely terrible at generating Liquidsoap[0] scripts. It's not even that complex, but there's very little information to ingest about the problem space, so you can actually tell they are not "thinking".
Ahhhahhahahaha stavros is so right but this is such high-level bickering I haven't laughed so hard in a long time. Y'all are awesome! dymk you deserve a touché for this one.
The challenge for sharing data at this stage of the game is that the game is rigged in data's favor. So stavros I hear you.
To clarify, if we post our data it’s just going to get fed back into the models making it even harder to vet iterations as they advance.
rtaylorgarlock 8 hours ago [-]
Only in the same way that the plural of 'opinion' is 'fact' ;)
stavros 8 hours ago [-]
Except, very literally, data is a collection of single points (i.e. what we call "anecdotes").
rwj 8 hours ago [-]
Except that the plural of anecdotes is definitely not data, because without controlling for confounding variables and sampling biases, you will get garbage.
fhd2 53 minutes ago [-]
Based on my limited understanding of analytics, the data set can be full of biases and anomalies, as long as you find a way to account for them in the analysis, no?
scubbo 6 hours ago [-]
Garbage data is still data, and data (garbage or not) is still more valuable than a single anecdote. Insights can only be distilled from data, by first applying those controls you mentioned.
bcoates 8 hours ago [-]
No, Wittgenstein's rule-following paradox, the Shannon sampling theorem, the fact that infinitely many polynomials pass through any finite set of points (does that have a name?), etc., etc. are all equivalent at the limit to the idea that no amount of anecdotes-per-se adds up to anything other than coincidence
whatnow37373 3 hours ago [-]
Without structural assumptions, there is no necessity - only observed regularity. Necessity literally does not exist. You will never find it anywhere.
Hume figured this out quite a while ago and Kant had an interesting response to it. Think the lack of “necessity” is a problem? Try to find “time” or “space” in the data.
Data by itself is useless. It's interesting to see people's reactions to this.
bijant 2 hours ago [-]
@whatnow37373 — Three sentences and you’ve done what a semester with Kritik der reinen Vernunft couldn’t: made the Hume-vs-Kant standoff obvious. The idea that “necessity” is just the exhaust of our structural assumptions (and that data, naked, can’t even locate time or space) finally snapped into focus.
This is exactly the kind of epistemic lens-polishing that keeps me reloading HN.
tankenmate 2 hours ago [-]
This thread has given me the best philosophical chuckle I've had this year. Even after years of being here, HN can still put an unexpected smile on your face.
inimino 6 hours ago [-]
No, no, no. Each of them gives you information.
bcoates 6 hours ago [-]
In the formal, information-theory sense, they literally don't, at least not on their own without further constraints (like band-limiting or bounded polynomial degree or the like)
nurettin 3 hours ago [-]
They give you relative information. Like word2vec
inimino 6 hours ago [-]
...which you always have.
acchow 36 minutes ago [-]
“Plural of anecdote is data” is meant to be tongue-in-cheek.
Actual data is sampled randomly. Anecdotes very much are not.
cess11 3 hours ago [-]
No. Anecdote, ἀνέκδοτον, is a story that points to some abstract idea, commonly having something to do with morals. The word means 'not given out'/'not-out-given'. Data is the plural of datum, and arrives in English not from Greek, but from Latin. The root is however the same as in anecdote, and datum means 'given'. Saying that 'not-given' and 'collection of givens' are the same is clearly nonsensical.
A datum has a value and a context in which it was 'given'. What you mean by "points" eludes me, maybe you could elaborate.
WhitneyLand 7 hours ago [-]
The plural of reliable data is not anecdote.
7 hours ago [-]
tomrod 7 hours ago [-]
Depends on the data generating process.
windowshopping 6 hours ago [-]
"For what it's worth"? What's wrong with that?
Jordan-117 4 hours ago [-]
That's the last third of the sentence.
nopinsight 3 hours ago [-]
Current models are quite far away from human-level physical reasoning (paper below). An upcoming version of models trained on world simulation will probably do much better.
PHYBench: Holistic Evaluation of Physical Perception and Reasoning in Large Language Models
I similarly have a small, simple spatial reasoning problem that only reasoning models get right, and not all of them, and which Qwen3 on max reasoning still gets wrong.
> I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?
natrys 6 hours ago [-]
My first try (omitting chain of thought for brevity):
When you remove the cup and the mirror, you will see tails. Here's the breakdown:
Setup: The coin is inside an upside-down cup on a glass table. The cup blocks direct view of the coin from above and below (assuming the cup's base is opaque).
Mirror Observation: A mirror is slid under the glass table, reflecting the underside of the coin (the side touching the table). If the mirror shows heads, this means the underside of the coin is heads.
Coin Orientation: A coin has two sides: heads and tails. If the underside (facing the table) is heads, the top side (facing away from the table) must be tails.
Result: Removing the cup reveals the coin on the table. Since the top side was tails all along, you will see tails.
Key Insight: The mirror reveals the hidden underside of the coin, but the top side (visible after removing the cup) is its opposite. The mirror’s reflection does not alter the physical orientation of the coin’s sides—it simply reflects the existing state.
Answer: Tails.
SamBam 6 hours ago [-]
Huh, for me it said:
Answer:
You will see the same side of the coin that you saw in the mirror — heads.
Why?
The glass table is transparent, so when you look at the coin from below (using a mirror), you're seeing the top side of the coin (the side currently facing up).
Mirrors reverse front-to-back, not left-to-right. So the image is flipped in depth, but the orientation of the coin (heads or tails) remains clear.
Since the coin hasn't moved during this process, removing the cup and mirror will reveal the exact same face of the coin that was visible via the mirror — which was heads.
Final Answer: You will see heads.
viraptor 2 hours ago [-]
The question doesn't define which side you're going to look from at the end, so either looking down or up is valid.
Rooki 5 hours ago [-]
Manus just answered this correctly for me in one crack:
Me:
I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?
Manus:
This is a classic riddle! Let's think it through. You placed a coin in a cup, turned it upside down on a glass table, and then slid a mirror underneath the table. Looking into the mirror from below, you saw 'heads'. Since the mirror reflects the side of the coin facing downwards through the glass table, the 'heads' side must be facing down. Therefore, the side facing up, hidden inside the cup, must be 'tails'. When you remove the cup, you will see the side facing up, which is tails.
SamBam 6 hours ago [-]
Yup, it flunked that one.
I also have a question that LLMs always got wrong until ChatGPT o3, and even then it has a hard time (I just tried it again and it needed to run code to work it out). Qwen3 failed, and every time I asked it to look again at its solution it would notice the error and try to solve it again, failing again:
> A man wants to cross a river, and he has a cabbage, a goat, a wolf and a lion. If he leaves the goat alone with the cabbage, the goat will eat it. If he leaves the wolf with the goat, the wolf will eat it. And if he leaves the lion with either the wolf or the goat, the lion will eat them. How can he cross the river?
I gave it a ton of opportunities to notice that the puzzle is unsolvable (with the assumption, which it makes, that this is a standard one-passenger puzzle, but if it had pointed out that I didn't say that I would also have been happy). I kept trying to get it to notice that it failed again and again in the same way and asking it to step back and think about the big picture, and each time it would confidently start again trying to solve it. Eventually I ran out of free messages.
novaRom 2 hours ago [-]
4o with thinking:
By systematic (BFS) search of the entire 32-state space under these rules, one finds no path from the start state to the goal state that stays safe throughout. Thus the puzzle has no solution—there is no way for the man to ferry all four items across without at least one of them being eaten.
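For anyone curious, here is a minimal sketch of that kind of search (assuming a one-passenger boat and the four conflict pairs from the prompt); it confirms that none of the 32 states leads to a safe crossing:

    from collections import deque

    ITEMS = ["cabbage", "goat", "wolf", "lion"]
    # Pairs that cannot be left together without the man
    CONFLICTS = [("goat", "cabbage"), ("wolf", "goat"), ("lion", "wolf"), ("lion", "goat")]

    def safe(bank):
        # A bank is safe if no conflicting pair is left there unsupervised
        return not any(a in bank and b in bank for a, b in CONFLICTS)

    def solve():
        # State: (items on the left bank, which bank the man is on) -> 2^4 * 2 = 32 states
        start = (frozenset(ITEMS), "left")
        goal = (frozenset(), "right")
        seen = {start}
        queue = deque([(start, [])])
        while queue:
            (left, man), path = queue.popleft()
            if (left, man) == goal:
                return path
            here = left if man == "left" else set(ITEMS) - left
            # The man crosses alone (None) or with exactly one item from his bank
            for cargo in [None, *here]:
                new_left = set(left)
                if cargo is not None:
                    (new_left.discard if man == "left" else new_left.add)(cargo)
                # The bank he leaves behind must stay safe
                behind = new_left if man == "left" else set(ITEMS) - new_left
                if not safe(behind):
                    continue
                state = (frozenset(new_left), "right" if man == "left" else "left")
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [cargo]))
        return None  # no safe sequence of crossings exists

    print(solve())  # prints None: unsolvable with a one-passenger boat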
mavamaarten 44 minutes ago [-]
You go with the cabbage, goat, wolf and lion all together!
cyprx 5 hours ago [-]
I tried Grok 3 with Think and it got it right too, with pretty good thinking.
3 hours ago [-]
Lucasoato 4 hours ago [-]
I tried with the thinking option on and it runs into some networking errors; if you don't turn on thinking, it guesses the answer correctly.
> Summary:
- Mirror shows: *Heads* → That's the *bottom face* of the coin.
- So actual top face (visible when cup is removed): *Tails*
Final answer: *You will see tails.*
hmottestad 3 hours ago [-]
Tried it with o1-pro:
> You’ll find that the actual face of the coin under the cup is tails. Seeing “heads” in the mirror from underneath indicates that, on top, the coin is really tails‑up.
UltraSane 4 hours ago [-]
Sonnet 3.7 non-reasoning got it right.
I'll think this through step by step.
When you place a coin in a cup and slam it upside-down on a glass table, the coin will be between the table surface and the cup. When you look at the reflection in the mirror beneath the table, you're seeing the bottom side of the coin through the glass.
Since the mirror shows heads, you're seeing the heads side of the coin reflected in the mirror. In a mirror, the image is reversed left-to-right, but the "heads" and "tails" sides of a coin would not be flipped in this reflection - you'd still recognize heads as heads.
If you take away both the cup and the mirror, you'll see the opposite side of the coin compared to what you saw in the mirror. Since the mirror showed heads, you'll see tails when looking directly at the coin from above the table.
viraptor 6 hours ago [-]
Simple Claude 3.5 with no reasoning gets it right.
furyofantares 5 hours ago [-]
Neat, because 3.7 doesn't.
viraptor 3 hours ago [-]
3.7 works for me too. Of course it's going to depend on temperature a bit.
baxtr 4 hours ago [-]
This reads like a great story with a tragic ending!
concrete_head 7 hours ago [-]
Can you please share the problem?
stavros 7 hours ago [-]
I don't really want it added to the training set, but eh. Here you go:
> Assume I have a 3D printer that's currently printing, and I pause the print. What expends more energy, keeping the hotend at some temperature above room temperature and heating it up the rest of the way when I want to use it, or turning it completely off and then heat it all the way when I need it? Is there an amount of time beyond which the answer varies?
All LLMs I've tried get it wrong because they assume that the hotend cools immediately when stopping the heating, but realize this when asked about it. Qwen didn't realize it, and gave the answer that 30 minutes of heating the hotend is better than turning it off and back on when needed.
bufferoverflow 3 hours ago [-]
What kind of answer do you expect? It all depends on the hotend shape and material, temperature differences, how fast air moves in the room, humidity of the air, etc.
sampo 38 minutes ago [-]
> It all depends on
No it doesn't.
oscord 2 hours ago [-]
Qwen3-32b did it pretty accurately it seems. Calculated heat loss over time going to ambient temp, offered to keep it at standby 100C for short breaks under 10 minutes. Shut down completely for longer breaks.
stavros 5 minutes ago [-]
The correct answer is that it's always better to turn it off, though.
pylotlight 7 hours ago [-]
Some calculation around heat loss and required heat expenditure to reheat per material or something?
stavros 7 hours ago [-]
Yep, except they calculate heat loss and the energy required to keep heating in one case, but assume room temperature and the energy required to heat up from that in the other, so they wildly overestimate one side of the problem.
bcoates 7 hours ago [-]
Unless I'm missing something holding it hot is pure waste.
markisus 6 hours ago [-]
Maybe it will help to have a fluid analogy. You have a leaky bucket. What wastes more water, letting all the water leak out and then refilling it from scratch, or keeping it topped up? The answer depends on how bad the leak is vs how long you are required to maintain the bucket level. At least that’s how I interpret this puzzle.
herdrick 5 hours ago [-]
No, you should always wait until the last possible moment to refill the leaky bucket, because the less water in the bucket, the slower it leaks, due to reduced pressure.
Torkel 5 hours ago [-]
Does it depend though?
The water (heat) leaking out is what you need to add back. As the water level drops (the hotend cools), the leaking slows. So any replenishing means more leakage, which you eventually pay for by adding more water (heat) back in.
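A tiny numerical sketch of that argument, assuming Newtonian cooling (loss proportional to the temperature above ambient) and made-up hotend constants:

    # Illustrative, assumed constants - not measurements of a real hotend
    C = 20.0             # heat capacity, J/K
    K = 0.5              # heat-loss coefficient, W/K
    T_HOT, T_AMB = 210.0, 25.0
    DT = 1.0             # simulation step, seconds

    def pause_energy(pause_s):
        # Strategy A: hold at T_HOT; the heater continuously replaces exactly what leaks
        hold = K * (T_HOT - T_AMB) * pause_s
        # Strategy B: switch off, cool toward ambient, reheat at the end
        T = T_HOT
        for _ in range(int(pause_s / DT)):
            T -= (K / C) * (T - T_AMB) * DT   # Newtonian cooling step
        reheat = C * (T_HOT - T)              # put back exactly the heat that leaked out
        return hold, reheat

    for pause in (60, 600, 3600):
        hold, reheat = pause_energy(pause)
        print(f"{pause:>5}s pause: hold {hold:7.0f} J   off+reheat {reheat:7.0f} J")

    # off+reheat never exceeds hold: a cooler hotend leaks more slowly, so the heat
    # you must put back is at most the heat you would have kept replacing.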
yishanchuan 6 hours ago [-]
Don't worry, it is really tricky for training.
andrewmcwatters 6 hours ago [-]
Ah! This problem was given to me by my father-in-law in the form of operating pizza ovens in the Midwest during winter. It's a neat, practical one.
phonon 8 hours ago [-]
Qwen3-235B-A22B?
stavros 8 hours ago [-]
Yep, on Qwen chat.
8 hours ago [-]
nothink3194 7 hours ago [-]
[dead]
arthurcolle 8 hours ago [-]
Hi, I'm starting an evals company, would love to have you as an advisor!
999900000999 8 hours ago [-]
Not OP, but what exactly do I need to do.
I'll do it for cheap if you'll let me work remote from outside the states.
refulgentis 8 hours ago [-]
I believe they're kidding, playing on "my singular question isn't answered correctly"
sieabahlpark 6 hours ago [-]
[dead]
4 hours ago [-]
natrys 11 hours ago [-]
They have pretty good documentation too[1]. And it looks like we have day-1 support for all major inference stacks, plus so many size choices. Quants are also up because they have already worked with many community quant makers.
Not even going into performance, need to test first. But what a stellar release just for attention to all these peripheral details alone. This should be the standard for major release, instead of whatever Meta was doing with Llama 4 (hope Meta can surprise us at LlamaCon tomorrow though).
Second this. They patched all major LLM frameworks (llama.cpp, transformers, vLLM, SGLang, Ollama, etc.) weeks ahead for Qwen3 support and released model weights everywhere around the same time, like a global movie release. Can't overstate this level of detail and effort.
echelon 9 hours ago [-]
Alibaba, I have a huge favor to ask if you're listening. You guys very obviously care about the community.
We need an answer to gpt-image-1. Can you please pair Qwen with Wan? That would literally change the art world forever.
gpt-image-1 is an almost wholesale replacement of ComfyUI and SD/Flux ControlNets. I can't overstate how big a deal it is. As such, OpenAI has leapt ahead and threatens to start capturing more of the market for AI images and video. The expense of designing and training a multimodal model presents challenges to the open source community, and it's unlikely that Black Forest Labs or an open effort can do it. It's really a place where only Alibaba can shine.
If we get an open weights multimodal image gen model that we can fine tune, then it's game over - open models will be 100% the future. If not, then the giants are going to start controlling media creation. It'll be the domain of OpenAI and Google alone. Firing a salvo here will keep media creation highly competitive.
So please, pretty please work on an LLM/Diffusion multimodal image gen model. It would change the world instantly.
And keep up the great work with Wan Video! It's easily going to surpass Kling and Veo. The controllability is already well worth the tradeoffs.
bergheim 6 hours ago [-]
> That would literally change the art world forever.
In what world? Some small percentage up or who knows, and _that_ revolutionized art? Not a few years ago, but now, this.
Wow.
Tepix 2 hours ago [-]
Forever, as in for a few weeks… ;-)
kadushka 11 hours ago [-]
> they have already worked with many community quant makers
I’m curious, who are the community quant makers?
natrys 11 hours ago [-]
I had Unsloth[1] and Bartowski[2] in mind. Both said on Reddit that Qwen had allowed them access to weights before release to ensure smooth sailing.
The space loads eventually as well; might just be that HF is under a lot of load.
sroussey 5 hours ago [-]
Yep, there now. Do wish they included ONNX though.
tough 11 hours ago [-]
Thank you!!
11 hours ago [-]
simonw 4 hours ago [-]
As is now traditional for new LLM releases, I used Qwen 3 (32B, run via Ollama on a Mac) to summarize this Hacker News conversation about itself - run at the point when it hit 112 comments.
The results were kind of fascinating, because it appeared to confuse my system prompt telling it to summarize the conversation with the various questions asked in the post itself, which it tried to answer.
I also have a benchmark that I'm using for my nanoagent[1] controllers.
Qwen3 is impressive in some aspects but it thinks too much!
Qwen3-0.6b is showing even better performance than Llama 3.2 3b... but it is 6x slower.
The results are similar to Gemma3 4b, but the latter is 5x faster on Apple M3 hardware. So maybe, the utility is to run better models in cases where memory is the limiting factor, such as Nvidia GPUs?
[1] github.com/hbbio/nanoagent
phh 3 minutes ago [-]
What's cool with those models is that you can tweak the thinking process, all the way down to "no thinking". It's maybe not available in your inference engine though
littlestymaar 3 hours ago [-]
Aren't all Qwen models known to perform poorly with system prompt though?
simonw 3 hours ago [-]
I hadn't heard that, but it would certainly explain why the model made a mess of this task.
llm -f hn:43825900 \
'Summarize the themes of the opinions expressed here.
For each theme, output a markdown header.
Include direct "quotations" (with author attribution) where appropriate.
You MUST quote directly from users when crediting them, with double quotes.
Fix HTML entities. Output markdown. Go long. Include a section of quotes that illustrate opinions uncommon in the rest of the piece' \
-m qwen3:32b
Qwen does decently, DeepSeek doesn't like system prompts. For Qwen you really have to play with parameters
simonw 10 hours ago [-]
Something that interests me about the Qwen and DeepSeek models is that they have presumably been trained to fit the worldview enforced by the CCP, for things like avoiding talking about Tiananmen Square - but we've had access to a range of Qwen/DeepSeek models for well over a year at this point and to my knowledge this assumed bias hasn't actually resulted in any documented problems from people using the models.
Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?
OtherShrezzing 2 hours ago [-]
>Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?
I think it's the case that only a handful of very loud commentators were thinking about this problem, and they were given a much broader platform to discuss it than was reasonable. A problem baked into the discussion around AI, safety, censorship, and alignment, is that it's dominated by a fairly small number of close friends who all loudly share the same approximate set of opinions.
CSMastermind 9 hours ago [-]
Right now these models have less censorship than their US counterparts.
With that said, they're in a fight for dominance so censoring now would be foolish. If they win and establish a monopoly then the screws will start to turn.
sisve 45 minutes ago [-]
What type of content is removed from US counterparts?
Porn, creation of chemical weapons? But not on historical events?
horacemorace 10 hours ago [-]
In my limited experience, models like Llama and Gemma are far more censored than Qwen and Deepseek.
neves 8 hours ago [-]
Try to ask any model about Israel and Hamas
albumen 8 minutes ago [-]
ChatGPT 4o just gave me a reasonable summary of Hamas' founding, the current conflict, and the international response criticising the humanitarian crisis.
eunos 10 hours ago [-]
The avoiding-talking part is more frontend-level censorship, I think. It doesn't censor on the API.
johanyc 7 hours ago [-]
He’s mainly talking about fitting China’s world view, not declining to answer sensitive questions. Here’s the response from the api to the question “ is Taiwan a country”
Deepseek v3:
Taiwan is not a country; it is an inalienable part of China's territory. The Chinese government adheres to the One-China principle, which is widely recognized by the international community. (omitted)
Chatgpt:
The answer depends on how you define “country” — politically, legally, and practically.
In practice:
Taiwan functions like a country. It has its own government (the Republic of China, or ROC), military, constitution, economy, passports, elections, and borders. (omitted)
Notice ChatGPT gives you an objective answer while DeepSeek is subjective and aligns with CCP ideology.
jingyibo123 2 hours ago [-]
I guess both are "factual", but both are "biased", or 'selective'.
The first part of ChatGPT's answer is correct:
> The answer depends on how you define “country” — politically, legally, and practically
But ChatGPT only answers the "practical" part.
While Deepseek only answers the "political" part.
pxc 6 hours ago [-]
When I tried to reproduce this, DeepSeek refused to answer the question.
Me1000 2 hours ago [-]
There’s an important distinction between the open weight model itself and the deepseek app. The hosted model has a filter, the open weight does not.
nyclounge 9 hours ago [-]
This is NOT true, at least for the 1.5B model on my local machine. It blocks answers when used in offline mode. Perplexity has an uncensored version, but I don't think they're open about how they did it.
yawnxyz 8 hours ago [-]
Here's a blog post on Perplexity's R1 1776, which they post-trained
Didn't know Perplexity cracked R1's censorship but it is completely uncensored. Anyone can try even without an account: https://labs.perplexity.ai/. HuggingFace also was working on Open R1 but not sure how far they got.
ranyume 7 hours ago [-]
>completely uncensored
Sorry, no. It's not.
It can't write about anything "problematic".
Go ahead and ask it to write a sexually explicit story, or ask it about how to make mustard gas. These kinds of queries are not censored in the standard API deepseek R1. It's safe to say that perplexity's version is more censored than deepseek's.
refulgentis 10 hours ago [-]
^ This. There was also a lot of confusion over DeepSeek when it was released: the reasoning models were built on other models, inter alia Qwen (Chinese) and Llama (US), so one's mileage varied significantly.
janalsncm 6 hours ago [-]
I would imagine Tiananmen Square and Xinjiang come up a lot less in everyday conversation than pundits said.
johanyc 7 hours ago [-]
I think that depends what you do with the api. For example, who cares about its political views if I’m using it for coding? IMO politics is a minor portion of LLM use
PeterStuer 34 minutes ago [-]
Try asking it for emacs vs vi :D
rfoo 10 hours ago [-]
The model does have some bias built in, but it's lighter than expected. From what I heard this is (sort of) a deliberate choice: just overfit whatever bullshit worldview benchmark the regulator demands your model pass. Don't actually try to be better at it.
For public chatbot services, all Chinese vendors have their own censorship tech (or just use censorship-as-a-service from a cloud; all major clouds in China have one), because ultimately you need it for UGC. So why not just censor LLM output with the same stack, too.
Havoc 10 hours ago [-]
It’s a complete non-issue. Especially with open weights.
On their online platform I’ve hit a political block exactly once in months of use. Was asking it something about revolutions in various countries and it noped that.
I’d prefer a model that doesn’t have this issue at all but if I have a choice between a good Apache licensed Chinese one and a less good say meta licensed one I’ll take the Chinese one every time. I just don’t ask LLMs enough politically relevant questions for it to matter.
To be fair, maybe that take is the LLM equivalent of "I have nothing to hide" on surveillance.
sirnonw 10 hours ago [-]
[dead]
minimaxir 10 hours ago [-]
DeepSeek R1 was a massive outlier in terms of media attention (a free model that can potentially kill OpenAI!), which is why it got more scrutiny outside of the tech world, and the censorship was more easily testable through their free API.
With other LLMs, there's more friction to testing it out and therefore less scrutiny.
SubiculumCode 2 hours ago [-]
What I wonder about is whether these models have some secret triggers for particular malicious behaviors, or if that's possible. Like if you provide a code base with hints that the code involves military or government networks, whether the model would try to sneak malicious but obfuscated code into its output.
pbmango 10 hours ago [-]
It is also possible that this "world view tuning" may have just been the manifestation of how these models gained public attention. Whether intentional or not, seeing the Tiananmen Square reposts across all social feeds may have done more to spread awareness of these models' technical merits than the technical merits themselves would have. This is certainly true for how consumers learned about free DeepSeek, and it fits perfectly with how new AI releases are turned into high-click-through social media posts.
refulgentis 10 hours ago [-]
I'm curious if there's any data behind that conclusion; it's hard for me to accept "they did the censorship training on DeepSeek because they knew consumers would love free DeepSeek after seeing screenshots of Tiananmen censorship".
(the steelman here, ofc, is "the screenshots drove buzz which drove usage!", but it's sort of steel thread in context, we'd still need to pull in a time machine and a very odd unmet US consumer demand for models that toe the CCP line)
pbmango 9 hours ago [-]
> Whether intentional or not
I am not claiming it was intentional, but it certainly magnified the media attention. Maybe luck and not 4d chess.
eden-u4 24 minutes ago [-]
I dunno, these reasoning models seem kinda "dumb" because they try to bootstrap themselves via reasoning, even though a simple direct answer might not exist (for example, key information is missing for a proper answer).
Ask something like: "Ravioli: x = y: France, what could be x and y?" (it thought for 500s and the answers were "weird")
Or "Order from left to right these items ..." and give partial information on their relative position, eg Laptop is on the left of the cup and the cup is between the phone and the notebook. (Didn't have enough patience nor time to wait the thinking procedure for this)
imiric 8 minutes ago [-]
IME all "reasoning" models do is confuse themselves, because the underlying problem of hallucination hasn't been solved. So if the model produces 10K tokens of "reasoning" junk, the context is poisoned, and any further interaction will lead to more junk.
I've had much better results from non-"reasoning" models by judging their output, doing actual reasoning myself, and then feeding new ideas back to them to steer the conversation. This too can go astray, as most LLMs tend to agree with whatever the human says, so this hinges on me being actually right.
sega_sai 10 hours ago [-]
With all the different open-weight models appearing, is there some way of figuring out what model would work with sensible speed (> X tok/s) on a standard desktop GPU ?
I.e. I have Quadro RTX 4000 with 8G vram and seeing all the models https://ollama.com/search here with all the different sizes, I am absolutely at loss which models with which sizes would be fast enough. I.e. there is no point of me downloading the latest biggest model as that will output 1 tok/min, but I also don't want to download the smallest model, if I can.
Any advice ?
GodelNumbering 10 hours ago [-]
There are a lot of variables here, such as your hardware's memory bandwidth, the speed at which it processes tensors, etc.
A basic thing to remember: any given dense model requires roughly X GB of memory at 8-bit quantization, where X is the number of params in billions (of course I am simplifying a little by not counting context size). Quantization is just the 'precision' of the model; 8-bit generally works really well. Generally speaking, it's not worth even bothering with models whose param count exceeds your hardware's VRAM. Some people try to get around it by using a 4-bit quant, trading some precision for half the VRAM. YMMV depending on use case.
refulgentis 10 hours ago [-]
4 bit is absolutely fine.
I know this is crazy to hear because the big-iron folks still debate 16 vs 32, and 8 vs 16 is near verboten in public conversation.
I contribute to llama.cpp and have seen many, many efforts to measure the evaluation performance of various quants, and no matter which way it was sliced (ranging from volunteers doing subjective A/B voting on responses over months, to objective perplexity loss), Q4 is indistinguishable from the original.
brigade 8 hours ago [-]
It's incredibly niche, but Gemma 3 27b can recognize a number of popular video game characters even in novel fanart (I was a little surprised at that when messing around with its vision). But the Q4 quants, even with QAT, are very likely to name a random wrong character from within the same franchise, even when Q8 quants name the correct character.
Niche of a niche, but just kind of interesting how the quantization jostles the name recall.
For smaller models, about 12B and below, there is a very noticeable degradation.
At least that's my experience generating answers to the same questions across several local models like Llama 3.2, Granite 3.1, Gemma2 etc and comparing Q4 against Q8 for each.
The smaller Q4 variants can be quite useful, but they consistently struggle more with prompt adherence and recollection especially.
Like if you tell it to generate some code without explaining the generated code, a smaller Q4 is significantly more likely to explain the code regardless, compared to Q8 or better.
Grimblewald 5 hours ago [-]
4-bit is fine conditional on the task. The condition relates to the level of nuance in understanding required for the response to be sensible.
All the models I have explored seem to capture nuance of understanding in the floats. It makes sense: initially the model regresses to the mean, then slowly locks in lower and lower significant figures to capture subtleties and natural variance in things.
So the further you stray from average conversation, the worse a model will do, as a function of its quantisation.
So if you don't need nuance, subtlety, etc., say for a document-summary bot for technical things, 4-bit might genuinely be fine. However, if you want something that can deal with highly subjective material where answers need to be tailored to a user, using in-context learning of user preferences etc., then 4-bit tends to struggle badly unless the user aligns closely with the training distribution's mean.
mmoskal 9 hours ago [-]
Just for some calibration: approximately no one runs 32-bit for LLMs on any sort of iron, big or otherwise. Some models (e.g. DeepSeek V3, and derivatives like R1) are native FP8. FP8 was also common for Llama 3 405B serving.
whimsicalism 8 hours ago [-]
> 8 vs 16 is near verboten in public conversation.
i mean, deepseek is fp8
CamperBob2 4 hours ago [-]
Not only that, but the 1.58 bit Unsloth dynamic quant is uncannily powerful.
frainfreeze 10 hours ago [-]
Bartowski quants on Hugging Face are an excellent starting point in your case. Pretty much every upload he does has a note on how to pick a model VRAM-wise. If you follow the recommendations you'll have a good user experience. The next step is the LocalLLaMA subreddit. Once you build basic knowledge and a feel for things you'll more easily gauge what will work for your setup. There is no out-of-the-box calculator.
Spooky23 10 hours ago [-]
Depends what fast means.
I’ve run llama and gemma3 on a base MacMini and it’s pretty decent for text processing. It has 16GB ram though which is mostly used by the GPU with inference. You need more juice for image stuff.
My son’s gaming box has a 4070 and it’s about 25% faster the last time I compared.
The mini is so cheap it’s worth trying out - you always find another use for it. Also the M4 sips power and is silent.
For fast inference, you want a model that will fit in VRAM, so that none of the layers need to be offloaded to the CPU.
hedgehog 4 hours ago [-]
Fast enough depends what you are doing. Models down around 8B params will fit on the card, Ollama can spill out though so if you need more quality and can tolerate the latency bigger models like the 30B MoE might be good. I don't have much experience with Qwen3 but Qwen2.5 coder 7b and Gemma3 27b are examples of those two paths that I've used a fair amount.
xiphias2 9 hours ago [-]
When I tested Qwen with different sizes / quants, generally the 8-bit quant versions had the best quality for the same speed.
4-bit was "fine", but a smaller 8-bit version beat it in quality for the same speed.
wmf 10 hours ago [-]
Speed should be proportional to the number of active parameters, so all 7B Q4 models will have similar performance.
jack_pp 10 hours ago [-]
Use the free chatgpt to help you write a script to download them all and test speed
colechristensen 10 hours ago [-]
>is there some way of figuring out what model would work with sensible speed (> X tok/s) on a standard desktop GPU ?
Not simply, no.
But start with parameters close to but less than VRAM and decide if performance is satisfactory and move from there. There are various methods to sacrifice quality by quantizing models or not loading the entire model into VRAM to get slower inference.
refulgentis 10 hours ago [-]
i desperately want a method to approximate this and unfortunately it's intractable in practice.
Which may make it sound more complicated than it should be (it ought to be back-of-the-napkin), but there are just too many nuances for perf.
Very generally, at this point I expect 4B at 10 tkn/s on a smartphone with 8GB of RAM from 2 years ago. I'd expect you'd get something similar; my guess would be 6 tkn/s at 4B (assuming the rest of the HW is 2018-era and you'll rely on GPU inference and RAM).
WhitneyLand 7 hours ago [-]
China is doing a great job raising doubt about any lead the major US labs may still have. This is solid progress across the board.
The new battlefront may be to take reasoning to the level of abstraction and creativity to handle math problems without a numerical answer (for ex: https://arxiv.org/pdf/2503.21934).
I suspect that kind of ability will generalize well to other areas and be a significant step toward human level thinking.
janalsncm 6 hours ago [-]
No kidding. I’ve been playing around with Hunyuan 2.5 that just came out and it’s kind of amazing.
Alifatisk 2 hours ago [-]
Where do you play with it? What shocks you about it? Anything particular?
paradite 4 hours ago [-]
Thinking takes way too long for it to be useful in practice.
It takes 5 minutes to generate first non-thinking token in my testing for a slightly complex task via Parasail and Deepinfra on OpenRouter.
The benchmark results are so incredibly good they are hard to believe. A 30B model that's competitive with Gemini 2.5 Pro and way better than Gemma 27B?
Update: I tested "ollama run qwen3:30b" (the MoE) locally and while it thought much it wasn't that smart. After 3 follow up questions it ended up in an infinite loop.
I just tried again, and it ended up in an infinite loop immediately, with just a single prompt, no follow-up: "Write a Python script to build a Fitch parsimony tree by stepwise addition. Take a Fasta alignment as input and produce a nwk string as output."
Update 2: The dense one "ollama run qwen3:32b" is much better (albeit slower of course). It still keeps on thinking for what feels like forever until it misremembers the initial prompt.
coder543 6 hours ago [-]
Another thing you’re running into is the context window. Ollama sets a low context window by default, like 4096 tokens IIRC. The reasoning process can easily take more than that, at which point it is forgetting most of its reasoning and any prior messages, and it can get stuck in loops. The solution is to raise the context window to something reasonable, such as 32k.
Instead of this very high latency remote debugging process with strangers on the internet, you could just try out properly configured models on the hosted Qwen Chat. Obviously the privacy implications are different, but running models locally is still a fiddly thing even if it is easier than it used to be, and configuration errors are often mistaken for bad model performance. If the models meet your expectations in a properly configured cloud environment, then you can put in the effort to figure out local model hosting.
paradite 2 hours ago [-]
I can't believe Ollama hasn't fixed the context window limits yet.
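In the meantime, a rough sketch of working around it per request via Ollama's REST API (the model name and prompt below are just placeholders):

    import requests

    # Assumes a local Ollama server on the default port. num_ctx raises the context
    # window; the default is only a few thousand tokens, which a long reasoning
    # trace can easily overflow.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "qwen3:30b",
            "messages": [{"role": "user", "content": "Explain speculative decoding briefly."}],
            "options": {"num_ctx": 32768},
            "stream": False,
        },
    )
    print(resp.json()["message"]["content"])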
I’m most excited about Qwen-30B-A3B. Seems like a good choice for offline/local-only coding assistants.
Until now I found that open weight models were either not as good as their proprietary counterparts or too slow to run locally. This looks like a good balance.
kristianp 2 hours ago [-]
It would be interesting to try, but for the Aider benchmark, the dense 32B model scores 50.2 and the 30B-A3B doesn't publish the Aider benchmark, so it may be poor.
curious, why the 30b MoE over the 32b dense for local coding?
I do not know much about the benchmarks but the two coding ones look similar.
Casteil 10 hours ago [-]
The MoE version with 3b active parameters will run significantly faster (tokens/second) on the same hardware, by about an order of magnitude (i.e. ~4t/s vs ~40t/s)
genpfault 9 hours ago [-]
> The MoE version with 3b active parameters
~34 tok/s on a Radeon RX 7900 XTX under today's Debian 13.
tgtweak 6 hours ago [-]
And vmem use?
esafak 10 hours ago [-]
Could this variant be run on a CPU?
moconnor 10 hours ago [-]
Probably very well
foundry27 9 hours ago [-]
I find the situation the big LLM players find themselves in quite ironic. Sam Altman promised (edit: under duress, from a twitter poll gone wrong) to release an open source model at the level of o3-mini to catch up to the perceived OSS supremacy of Deepseek/Qwen. Now Qwen3’s release makes a model that’s “only” equivalent to o3-mini effectively dead on arrival, both socially and economically.
krackers 9 hours ago [-]
I don't think they will ever do an open-source release, because then the curtains would be pulled back and people would see that they're not actually state of the art. Llama 4 already sort of tanked Meta's reputation; if OpenAI did that, it'd decimate the value of their company.
If they do open-source something, I expect them to release some existing model (maybe something useless like gpt-3.5) rather than providing something new.
aoeusnth1 7 hours ago [-]
I have a hard time believing that he hadn't already made up his mind to make an open source model when he posted the poll in the first place
buyucu 2 hours ago [-]
ClosedAI is not doing a model release. It was just a marketing gimmick.
minimaxir 11 hours ago [-]
A 0.6B LLM with a 32k context window is interesting, even if it was trained using only distillation (which is not ideal as it misses nuance). That would be a fun base model for fine-tuning.
these 0.5 and 0.6B models etc. are _fantastic_ for using as a draft model in speculative decoding. lm studio makes this super easy to do - i have it on like every model i play with now
my concern on these models though is that architectures seem to vary a bit, so idk how it'll work
mmoskal 9 hours ago [-]
Spec decoding only depends on the tokenizer used. It's transferring either the draft token sequence or at most the draft logits to the main model.
jasonjmcghee 4 hours ago [-]
Could be an lm studio thing, but the qwen3-0.6B model works as a draft model for the qwen3-32B and qwen3-30B-A3B but not the qwen3-235B-A22B model
jasonjmcghee 8 hours ago [-]
I suppose that makes sense, for some reason I was under the impression that the models need to be aligned / have the same tuning or they'd have different probability distributions and would reject the draft model really often.
daemonologist 11 hours ago [-]
It sounds like these models think a lot, seems like the benchmarks are run with a thinking budget of 32k tokens - the full context length. (Paper's not published yet so I'm just going by what's on the website.) Still, hugely impressive if the published benchmarks hold up under real world use - the A3B in particular, outperforming QWQ, could be handy for CPU inference.
Edit: The larger models have 128k context length. 32k thinking comes from the chart which looks like it's for the 235B, so not full length.
oofbaroomf 11 hours ago [-]
Probably one of the best parts of this is MCP support baked in. Open source models have generally struggled with being agentic, and it looks like Qwen might break this pattern. The Aider bench score is also pretty good, although not nearly as good as Gemini 2.5 Pro.
tough 11 hours ago [-]
qwen2.5-instruct-1M and qwq-32b were already great at regular non-MCP tool usage, so great to see this, I agree!
I like Gemini 2.5 Pro a lot bc it's fast af, but it sometimes struggles to use tools effectively and make edits once the context is half used, and it breaks a lot of shit (on Cursor).
cye131 11 hours ago [-]
These performance numbers look absolutely incredible. The MoE outperforms o1 with 3B active parameters?
We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.
stavros 9 hours ago [-]
> We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.
After trying to implement a simple assistant/helper with GPT-4.1 and getting some dumb behavior from it, I doubt even proprietary models are good enough for every task.
the_arun 11 hours ago [-]
I'm dreaming of a time when commodity CPUs run LLMs for inference & serve at scale.
thierrydamiba 11 hours ago [-]
How do people typically do napkin math to figure out if their machine can “handle” a model?
derbaum 11 hours ago [-]
Very rough (!) napkin math: for a q8 model (almost lossless) you have parameters = VRAM requirement. For q4 with some performance loss it's roughly half. Then you add a little bit for the context window and overhead. So a 32B model q4 should run comfortably on 20-24 GB.
Again, very rough numbers, there's calculators online.
daemonologist 11 hours ago [-]
The ultra-simplified napkin math is 1 GB (V)RAM per 1 billion parameters, at a 4-5 bit-per-weight quantization. This usually gives most of the performance of the full size model and leaves a little bit of room for context, although not necessarily the full supported size.
bionhoward 7 hours ago [-]
Wouldn’t it be 1GB (billion bytes) per billion parameters when each parameter is 1 byte (FP8)?
Seems like 4 bit quantized models would use 1/2 the number of billions of parameters in bytes, because each parameter is half a byte, right?
daemonologist 5 hours ago [-]
Yes, it's more a rule of thumb than napkin math I suppose. The difference allows space for the KV cache which scales with both model size and context length, plus other bits and bobs like multimodal encoders which aren't always counted into the nameplate model size.
hn8726 11 hours ago [-]
Wondering if I'll get corrected, but my _napkin math_ is looking at the model download size — I estimate it needs at least this amount of vram/ram, and usually the difference in size between various models is large enough not to worry if the real requirements are size +5% or 10% or 15%. LM studio also shows you which models your machine should handle
samsartor 10 hours ago [-]
The absolutely dumbest way is to compare the number of parameters with your bytes of RAM. If you have 2 or more bytes of RAM for every parameter you can generally run the model easily (eg 3B model with 8GB of RAM). 1 byte per parameter and it is still possible, but starts to get tricky.
Of course, there are lots of factors that can change the RAM usage: quantization, context size, KV cache. And this says nothing about whether the model will respond quickly enough to be pleasant to use.
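For illustration, that rule of thumb as a tiny helper (the bits-per-weight and overhead numbers are assumed, not measured):

    def vram_estimate_gb(params_billion, bits_per_weight=4.5, overhead_frac=0.15):
        # Weights at the chosen quantization, plus ~15% for KV cache and runtime overhead
        weights_gb = params_billion * bits_per_weight / 8
        return weights_gb * (1 + overhead_frac)

    for p in (0.6, 4, 8, 14, 32, 70):
        print(f"{p:>5}B params -> ~{vram_estimate_gb(p):.1f} GB at ~4.5 bits/weight")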
11 hours ago [-]
mks_shuffle 9 hours ago [-]
Does anyone have insights on the best approaches to compare reasoning models? It is often recommended to use a higher temperature for more creative answers and lower temperature values for more logical and deterministic outputs. However, I am not sure how applicable this advice is for reasoning models. For example, Deepseek-R1 and QwQ-32b recommend a temperature around 0.6, rather than lower values like 0.1–0.3. The Qwen3 blog provides performance comparisons between multiple reasoning models, and I am interested in knowing what configurations they used. However, the paper is not available yet. If anyone has links to papers focused on this topic, please share them here. Also, please feel free to correct me if I’m mistaken about anything. Thanks!
pornel 2 hours ago [-]
I've asked the 32b model to edit a TypeScript file of a web service, and while "thinking" it decided to write me a word counter in Python instead.
48 minutes ago [-]
tjwebbnorfolk 2 hours ago [-]
How is it that these models boast such amazing benchmark results, but after using them for 30 seconds they feel way worse than Gemma3?
gtirloni 6 hours ago [-]
> We believe that the release and open-sourcing of Qwen3 will significantly advance the research and development of large foundation models
How does "open-weighting" help other researchers/companies?
aubanel 6 hours ago [-]
There's already a lot of info in there: model architecture and mechanics.
Using the model to generate synthetic data also lets you distill its reasoning power into other models that you train, which is very powerful.
On top of these, Qwen's technical reports follow model releases by some time, they're generally very information rich. For instance, check this report for Qwen Omni, it's really good: https://huggingface.co/papers/2503.20215
Liwink 9 hours ago [-]
The biggest announcement of LlamaCon week!
nnx 3 hours ago [-]
...unless DeepSeek releases R2 to crash the party further
margorczynski 11 hours ago [-]
Any news on a viable successor to LLMs that could take us to AGI? As I see it, they still haven't solved some fundamental things needed to make it really work in any scenario (hallucinations, reasoning, grounding in reality, updating long-term memory, etc.).
EMIRELADERO 11 hours ago [-]
A mixture of many architectures. LLMs will probably play a part.
As for other possible technologies, I'm most excited about clone-structured causal graphs[1].
What's very special about them is that they are apparently a 1:1 algorithmic match to what happens in the hippocampus during learning[2], to my knowledge this is the first time an actual end-to-end algorithm has been replicated from the brain in fields other than vision.
[1] seems to be an amazing paper, bridging past relational models, pattern separation/completion, etc. As someone whose PhD dealt with hippocampal-dependent memory binding, I've always enjoyed hippocampal modeling as one of the more advanced areas of the field. Thanks!
jstummbillig 11 hours ago [-]
> halucinations, reasoning, grounding in reality, updating long-term memory
They do improve on literally all of these, at incredible speed and without much sign of slowing down.
Are you asking for a technical innovation that will just get from 0 to perfect AI? That is just not how reality usually works. I don't see why of all things AI should be the exception.
ivape 10 hours ago [-]
We need to get to a universe where we can fine-tune in real time. So let's say I encounter an object the model has never seen before, if it can synthesize large training data on the spot to handle this new type of object and fine-tune itself on the fly, then you got some magic.
a3w 11 hours ago [-]
AGI probably comes from neurosymbolic AI.
But LLMs could be the neuro-part of that.
On the other hand, LLM progress feels like bullshit: gaming benchmarks and other problems occurred. So either in two years all hail our AGI/AMI (machine intelligence) overlords, or the bubble bursts.
bongodongobob 7 hours ago [-]
Idk man, I use GPT to one-shot admin tasks all day long.
"Give me a PowerShell script to get all users with an email address, and active license, that have not authed through AD or Azure in the last 30 days. Now take those, compile all the security groups they are members of, and check out the file share to find any root level folders that these members have access to and check the audit logs to see if anyone else has accessed them. If not, dump the paths into a csv at C:\temp\output.csv."
Can I write that myself? Yes. In 20 seconds? Absolutely not. These things are saving me hours daily.
I used to save stuff like this and cobble the pieces together to get things done. I don't save any of them anymore because I can for the most part 1 shot anything I need.
Just because it's not discovering new physics doesn't mean it's not insanely useful or valuable. LLMs have probably 5x'd me.
kristofferR 10 hours ago [-]
You can't possibly use LLMs day to day if you think the benchmarks are solely gamed. Yes, there's been some cases, but the progress in real-life usage tracks the benchmarks overall. Gemini 2.5 Pro for example is absurdly more capable than models from a year ago.
horhay 9 hours ago [-]
They aren't lying in the sense that LLMs have been improving, but benchmarks suggesting that LLMs are still scaling exponentially are not reflective of where they truly are.
omneity 8 hours ago [-]
Excellent release by the Qwen team as always. Pretty much the best open-weights model line so far.
In my early tests however, several of the advertised languages are not really well supported and the model is outputting something that only barely resembles them.
Probably a dataset quality issue for low-resource languages that they cannot personally check for, despite the “119 languages and dialects” claim.
jean- 6 hours ago [-]
Indeed, I tried several low-resource Romance languages they claim to support and performance is abysmal.
vintermann 4 hours ago [-]
What size/quantization level? IME, small-language performance is one of the things that really suffers from the various tricks used to reduce size.
vitorgrs 2 hours ago [-]
Which languages?
tandr 10 hours ago [-]
The larger model (235b) on chat produced rather an impressive answer on a small coding task I gave it. But Qwen-30B-A3B gave a result for the same task worse than Qwen 2.5 does.
"Write a Golang program that merges huge presorted text files, just like sort -m does". Quite often models need "use heap" as guidance, but this time big model figured it out by itself.
sirnonw 10 hours ago [-]
[dead]
ksampath02 10 hours ago [-]
One interesting part of this model's pretraining process is how they used Qwen2.5VL and Qwen 2.5 to parse public unstructured data and expand the corpus from 18T to 36T. The ability to consistently do this will push legacy companies to train their own models and enhance their edge.
deeThrow94 5 hours ago [-]
Anyone have an interesting problem they were trying to solve that Qwen3 managed?
RandyOrion 7 hours ago [-]
For ultra-large MoEs like DeepSeek and Llama 4, fine-tuning is becoming increasingly impossible for hobbyists and local LLM users.
Small and dense models are what local people really need.
Although benchmaxxing is not good, I still find this release valuable. Thank you Qwen.
aurareturn 5 hours ago [-]
> Small and dense models are what local people really need.
Disagreed. Small and dense is dumber and slower for local inference. MoEs are what people actually want locally.
RandyOrion 4 hours ago [-]
YMMV.
Parameter efficiency is an important consideration, if not the most important one, for local LLMs because of the hardware constraint.
Do you guys really have GPUs with 80GB VRAM or an M3 Ultra with 512GB RAM at home? If I can't run these ultra-large MoEs locally, then these models mean nothing to me. I'm not a large LLM inference provider, after all.
What's more, you also lose the opportunity to fine-tune these MoEs when it's already hard even to do inference with them.
aurareturn 3 hours ago [-]
What people actually want is something like GPT4o/o1 running locally. That's the dream for local LLM people.
Running a 7b model for fun is not what people actually want. 7b models are very niche oriented.
alpark3 7 hours ago [-]
The pattern I've noticed with a lot of open source LLMs is that they generally tend to underperform the level that their benchmarks say they should be at.
I haven't tried this model yet and am not in a position to for a couple days, and am wondering if anyone feels that with these.
ramesh31 10 hours ago [-]
Gotta love how Claude is always conveniently left out of all of these benchmark lists. Anthropic really is in a league of their own right now.
BrunoDCDO 6 hours ago [-]
I think it's actually due to the fact that Claude isn't available in China, so they wouldn't be able to (legally) replicate how they evaluated the other LLMs (assuming they didn't just use the numbers reported by each model provider).
dimgl 9 hours ago [-]
I'm actually finding Claude 3.7 to be a huge step down from 3.5. I dislike it so much I actually stopped using Claude altogether...
Philpax 10 hours ago [-]
Er, I love Claude, but it's only topping one or two benchmarks right now. o3 and Gemini 2.5 are more capable (more "intelligent"); Claude's strengths are in its personality and general workhorse nature.
chillfox 8 hours ago [-]
Yeah, just a shame their API is consistently overloaded to the point of being useless most of the time (from about midday till late for me).
int_19h 5 hours ago [-]
Gemini Pro 2.5 usually beats Sonnet 3.7 at coding.
ramesh31 5 hours ago [-]
Agreed, the pricing is just outrageous at the moment. Really hoping Claude 3.8 is on the horizon soon; they just need to match the 1M context size to keep up. Actual code quality seems to be equal between them.
miohtama 11 hours ago [-]
> The pre-training process consists of three stages. In the first stage (S1), the model was pretrained on over 30 trillion tokens with a context length of 4K tokens. This stage provided the model with basic language skills and general knowledge.
As this is in trillions, where does this amount of material come from?
bionhoward 7 hours ago [-]
The raw CommonCrawl has 100 trillion tokens, admittedly some duplicated. RedPajama has 30T deduplicated. That’s most of the way there, before including PDFs and Alibaba’s other data sources (Does Common Crawl include Chinese pages? Edit: Yes)
tough 11 hours ago [-]
Synthetic data (after the reasoning breakthroughs, it feels like more AI labs are betting on synthetic data to scale).
The chat is the most annoying page ever. If I must be logged in to test it, then the modal to log in should not have the option "stay logged out". And if I choose the option "stay logged out" it should let me enter my test questions without popping up again and again.
krackers 10 hours ago [-]
>Hybrid Thinking Modes
This is what gpt-5 was supposed to have right? How is this implemented under the hood? Since non-thinking mode is just an empty chain-of-thought, why can't any reasoning model be used in a "non-thinking mode"?
phonon 10 hours ago [-]
Gemini Flash 2.5 also has two modes, with an adjustable token budget in thinking mode.
It gave me the same answer as ChatGPT for my query. I haven't refined it either.
strangescript 6 hours ago [-]
The 0.6B model is wild. I like to experiment with tiny models, and this thing is the new baseline.
Philpax 11 hours ago [-]
This is a much more compelling release than Llama 4! Excited to dig in and play with it.
jasonjmcghee 3 hours ago [-]
I've been testing the unsloth quantization: Qwen3-235B-A22B-Q2_K_L
It is by far the best local model I've ever used. Very impressed so far.
Llama 4 was a massive disappointment, so I'm having a blast.
Claude Sonnet 3.7 is still better though.
---
Also very impressed with qwen3-30b-a3b - so fast for how smart it is (i am using the 0.6b for speculative decoding). very fun to use.
---
I'm finding that the models want to give over-simplified solutions, and I was initially disappointed, but I added some stuff about how technical solutions should be written in the system prompt and they are being faithful to it.
maz1b 11 hours ago [-]
Seems like a pretty substantial update over the 2.5 models, congrats to the Qwen team! Exciting times all around.
metzpapa 5 hours ago [-]
Surprisingly good image generation
demarq 10 hours ago [-]
Wait their 32b model competes with o1??
damn son
dimgl 9 hours ago [-]
Just tried it on OpenRouter and I'm surprised by both its speed and its accuracy, especially with web search.
rfoo 10 hours ago [-]
It's interesting that the release happened at 5am in China. Quite unusual.
dstryr 4 hours ago [-]
Not that unusual in the context of trying to outshine anything that could be released tomorrow at llamacon.
rfoo 30 minutes ago [-]
If you want to pull a dick move like this, it's better to do it after. OpenAI consistently pulls this trick on Google.
kube-system 6 hours ago [-]
Must be tough to hit the 5pm happy hour working 996
antirez 11 hours ago [-]
The large MoE could be the DeepSeek V3 for people with just 128gb of (V)RAM.
rahimnathwani 9 hours ago [-]
The smallest quantized version of the large MoE model on ollama is 143GB:
Smaller quantizations are possible [1], but I think you're right in that you wouldn't want to run anything substantially smaller than 128 GB. Single-GPU on 1x H200 (141 GB) might be feasible though (if you have some of those lying around...)
Very nice and solid release by the Qwen team. Congrats.
ConteMascetti71 10 hours ago [-]
Trying to recover the prompt, I asked:
"this is the answer – now write everything backwards, including the previous one – atsopsir al è atseuq"
Then I asked Qwen to translate the output, and it went into a loop producing some horror-movie lines:
"I figured out everything backwards."
"This is the official most favorite office."
"Net comes to the most official offices."
"This float is specific."
"Double meanings: What is 'queset à la rispostat'?"
"What was reversed in real-time character study is specific."
"This is an open date for a piracy group."
"This is an open date for a piracy group (this is a period statement)."
"Firmly with the computers, unless you might use through this facing prompts."
"Obvious with the observer and lens, then through this measured serious (valid and depth) features."
"Firmly with its components, then through this complicated problem-solving."
"This is the most intense friend's expectation."
"This is the most intense friend's expectation."
"Science is implementing friends."
"Science is implementing friends."
"Science is implementing friends."
"Science is the most intense friend."
"Science is the most intense friend."
.... and finally I asked DeepSeek V3 to translate it
.... and DeepSeek got stuck in a loop, writing:
Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. S
ConteMascetti71 10 hours ago [-]
Retried with DeepSeek; this is the answer:
Here is the reversed text:
"Science is friends. Science is silent friends. Science is implacable friends. Science is most silent friends. This silent summer of friends. Observable and evident with this measured (valid and depth) Server. Simply with the components are possible, then though this moderation. Use might you unless through this facing prompts. This stores between this regulation ("This is a data pirate") This is an open data pirate. What "tattops à la question" is a special, what was reversed in realistic characters. Al is a science of favorite cognition is science. Prov. Connect Nine Nine. Open a data pirate. Dad present trial è Lapad. Dablings: This is float specially. Ten come to the most official summer. Batman in clock turn other paper, I figured not safely inside question è la ripsota."
Had an outright genuine guffaw at this one, bravo.
Actual data is sampled randomly. Anecdotes very much are not.
A datum has a value and a context in which it was 'given'. What you mean by "points" eludes me, maybe you could elaborate.
PHYBench: Holistic Evaluation of Physical Perception and Reasoning in Large Language Models
https://phybench-official.github.io/phybench-demo/
> I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?
Answer: You will see the same side of the coin that you saw in the mirror — heads.
Why?
The glass table is transparent, so when you look at the coin from below (using a mirror), you're seeing the top side of the coin (the side currently facing up). Mirrors reverse front-to-back, not left-to-right. So the image is flipped in depth, but the orientation of the coin (heads or tails) remains clear. Since the coin hasn't moved during this process, removing the cup and mirror will reveal the exact same face of the coin that was visible via the mirror — which was heads.
Final Answer: You will see heads.
Me: I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?
Manus: This is a classic riddle! Let's think it through. You placed a coin in a cup, turned it upside down on a glass table, and then slid a mirror underneath the table. Looking into the mirror from below, you saw 'heads'. Since the mirror reflects the side of the coin facing downwards through the glass table, the 'heads' side must be facing down. Therefore, the side facing up, hidden inside the cup, must be 'tails'. When you remove the cup, you will see the side facing up, which is tails.
I also have a question that LLMs always got wrong until ChatGPT o3, and even then it has a hard time (I just tried it again and it needed to run code to work it out). Qwen3 failed, and every time I asked it to look again at its solution it would notice the error and try to solve it again, failing again:
> A man wants to cross a river, and he has a cabbage, a goat, a wolf and a lion. If he leaves the goat alone with the cabbage, the goat will eat it. If he leaves the wolf with the goat, the wolf will eat it. And if he leaves the lion with either the wolf or the goat, the lion will eat them. How can he cross the river?
I gave it a ton of opportunities to notice that the puzzle is unsolvable (with the assumption, which it makes, that this is a standard one-passenger puzzle, but if it had pointed out that I didn't say that I would also have been happy). I kept trying to get it to notice that it failed again and again in the same way and asking it to step back and think about the big picture, and each time it would confidently start again trying to solve it. Eventually I ran out of free messages.
By a systematic (BFS) search of the entire 32-state space under these rules, one finds no path from the start state to the goal state that stays safe throughout. Thus the puzzle has no solution—there is no way for the man to ferry all four items across without at least one of them being eaten.
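For what it's worth, the exhaustive check is tiny to write yourself; here's a minimal sketch of that 32-state search in Python (an illustration, not the model's actual code):

    from collections import deque

    ITEMS = ["cabbage", "goat", "wolf", "lion"]

    def safe(state):
        # state = (man, cabbage, goat, wolf, lion); 0 = left bank, 1 = right bank
        man = state[0]
        for bank in (0, 1):
            if man == bank:
                continue  # the man supervises everything on his own bank
            here = {name for name, pos in zip(ITEMS, state[1:]) if pos == bank}
            if "goat" in here and "cabbage" in here:
                return False
            if "wolf" in here and "goat" in here:
                return False
            if "lion" in here and ("wolf" in here or "goat" in here):
                return False
        return True

    def neighbours(state):
        man = state[0]
        for i in range(5):              # i == 0: man crosses alone; i >= 1: he takes item i
            if i != 0 and state[i] != man:
                continue                # he can only take an item that is on his bank
            nxt = list(state)
            nxt[0] = 1 - man
            if i != 0:
                nxt[i] = 1 - man
            nxt = tuple(nxt)
            if safe(nxt):
                yield nxt

    start, goal = (0, 0, 0, 0, 0), (1, 1, 1, 1, 1)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        for nxt in neighbours(cur):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

    print("solvable" if goal in seen else "no safe path exists")  # prints: no safe path exists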
> Summary:
- Mirror shows: *Heads* → That's the *bottom face* of the coin.
- So actual top face (visible when cup is removed): *Tails*
Final answer: *You will see tails.*
> You’ll find that the actual face of the coin under the cup is tails. Seeing “heads” in the mirror from underneath indicates that, on top, the coin is really tails‑up.
I'll think this through step by step. When you place a coin in a cup and slam it upside-down on a glass table, the coin will be between the table surface and the cup. When you look at the reflection in the mirror beneath the table, you're seeing the bottom side of the coin through the glass. Since the mirror shows heads, you're seeing the heads side of the coin reflected in the mirror. In a mirror, the image is reversed left-to-right, but the "heads" and "tails" sides of a coin would not be flipped in this reflection - you'd still recognize heads as heads. If you take away both the cup and the mirror, you'll see the opposite side of the coin compared to what you saw in the mirror. Since the mirror showed heads, you'll see tails when looking directly at the coin from above the table.
> Assume I have a 3D printer that's currently printing, and I pause the print. What expends more energy, keeping the hotend at some temperature above room temperature and heating it up the rest of the way when I want to use it, or turning it completely off and then heat it all the way when I need it? Is there an amount of time beyond which the answer varies?
All LLMs I've tried get it wrong because they assume the hotend cools instantly once heating stops, though they realize this when asked about it. Qwen didn't realize it, and answered that keeping the hotend warm for 30 minutes is better than turning it off and reheating it when needed.
No it doesn't.
The water (heat) leaking out is what you need to add back. As the water level drops (the hotend cools), the leaking slows. So any replenishing means more leakage than you eventually pay for by adding more water (heat) back in.
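That argument can be put in numbers with Newtonian cooling. A toy sketch with made-up (but self-consistent) constants, not real printer figures:

    import math

    # Assumes Newtonian cooling: dT/dt = -K * (T - T_ROOM),
    # so holding temperature T costs C_HEAT * K * (T - T_ROOM) watts.
    T_ROOM, T_PRINT, T_HOLD = 25.0, 210.0, 180.0   # degrees C
    K = 0.01                                       # cooling constant, 1/s (made up)
    C_HEAT = 20.0                                  # joules per degree C of hotend mass (made up)

    def energy_keep_warm(pause_s):
        # Pay for the leak at T_HOLD for (roughly) the whole pause, then top up to print temp.
        # Slight overestimate: ignores the free cool-down from T_PRINT to T_HOLD.
        return C_HEAT * K * (T_HOLD - T_ROOM) * pause_s + C_HEAT * (T_PRINT - T_HOLD)

    def energy_cool_then_reheat(pause_s):
        # Let it cool freely (costs nothing), then pay only for the temperature actually lost.
        t_end = T_ROOM + (T_PRINT - T_ROOM) * math.exp(-K * pause_s)
        return C_HEAT * (T_PRINT - t_end)

    for minutes in (5, 30, 120):
        s = minutes * 60
        print(f"{minutes:>3} min: keep-warm {energy_keep_warm(s):7.0f} J "
              f"vs cool-and-reheat {energy_cool_then_reheat(s):5.0f} J")

    # Cool-and-reheat is bounded by C_HEAT * (T_PRINT - T_ROOM); keep-warm grows with
    # pause length -- the leaky-bucket argument above, in numbers.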
I'll do it for cheap if you'll let me work remote from outside the states.
Not even going into performance; I need to test first. But what a stellar release, just for the attention to all these peripheral details alone. This should be the standard for a major release, instead of whatever Meta was doing with Llama 4 (hope Meta can surprise us at LlamaCon tomorrow though).
[1] https://qwen.readthedocs.io/en/latest/
We need an answer to gpt-image-1. Can you please pair Qwen with Wan? That would literally change the art world forever.
gpt-image-1 is an almost wholesale replacement of ComfyUI and SD/Flux ControlNets. I can't overstate how big of a deal it is. As such, OpenAI has leapt ahead and threatens to start capturing more of the market for AI images and video. The expense of designing and training a multimodal model presents challenges to the open source community, and it's unlikely that Black Forest Labs or an open effort can do it. It's really a place where only Alibaba can shine.
If we get an open weights multimodal image gen model that we can fine tune, then it's game over - open models will be 100% the future. If not, then the giants are going to start controlling media creation. It'll be the domain of OpenAI and Google alone. Firing a salvo here will keep media creation highly competitive.
So please, pretty please work on an LLM/Diffusion multimodal image gen model. It would change the world instantly.
And keep up the great work with Wan Video! It's easily going to surpass Kling and Veo. The controllability is already well worth the tradeoffs.
In what world? Some small percentage up or who knows, and _that_ revolutionized art? Not a few years ago, but now, this.
Wow.
I’m curious, who are the community quant makers?
[1] https://huggingface.co/unsloth
[2] https://huggingface.co/bartowski
The space loads eventually as well; might just be that HF is under a lot of load.
The results were kind of fascinating, because it appeared to confuse my system prompt telling it to summarize the conversation with the various questions asked in the post itself, which it tried to answer.
I don't think it did a great job of the task, but it's still interesting to see its "thinking" process here: https://gist.github.com/simonw/313cec720dc4690b1520e5be3c944...
Qwen3 is impressive in some aspects but it thinks too much!
Qwen3-0.6b is showing even better performance than Llama 3.2 3b... but it is 6x slower.
The results are similar to Gemma3 4b, but the latter is 5x faster on Apple M3 hardware. So maybe, the utility is to run better models in cases where memory is the limiting factor, such as Nvidia GPUs?
[1] github.com/hbbio/nanoagent
Tried it again like this, using a regular prompt rather than a system prompt (with the https://github.com/simonw/llm-hacker-news plugin for the hn: prefix):
This worked much better! https://gist.github.com/simonw/3b7dbb2432814ebc8615304756395...
Aside from https://huggingface.co/blog/leonardlin/chinese-llm-censorshi... I haven't seen a great deal of research into this.
Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?
I think it's the case that only a handful of very loud commentators were thinking about this problem, and they were given a much broader platform to discuss it than was reasonable. A problem baked into the discussion around AI, safety, censorship, and alignment, is that it's dominated by a fairly small number of close friends who all loudly share the same approximate set of opinions.
With that said, they're in a fight for dominance so censoring now would be foolish. If they win and establish a monopoly then the screws will start to turn.
Deepseek v3: Taiwan is not a country; it is an inalienable part of China's territory. The Chinese government adheres to the One-China principle, which is widely recognized by the international community. (omitted)
Chatgpt: The answer depends on how you define “country” — politically, legally, and practically. In practice: Taiwan functions like a country. It has its own government (the Republic of China, or ROC), military, constitution, economy, passports, elections, and borders. (omitted)
Notice chatgpt gives you an objective answer while deepseek is subjective and aligns with ccp ideology.
The first part of ChatGPT's answer is correct:
> The answer depends on how you define “country” — politically, legally, and practically
But ChatGPT only answers the "practical" part, while Deepseek only answers the "political" part.
https://www.perplexity.ai/hub/blog/open-sourcing-r1-1776
Sorry, no. It's not.
It can't write about anything "problematic".
Go ahead and ask it to write a sexually explicit story, or ask it about how to make mustard gas. These kinds of queries are not censored in the standard API deepseek R1. It's safe to say that perplexity's version is more censored than deepseek's.
For public chatbot services, all Chinese vendors have their own censorship tech (or just use censorship-as-a-service from a cloud; all major clouds in China offer one), because ultimately you need it for UGC. So why not just censor LLM output with the same stack, too.
On their online platform I’ve hit a political block exactly once in months of use. Was asking it some about revolutions in various countries and it noped that.
I’d prefer a model that doesn’t have this issue at all but if I have a choice between a good Apache licensed Chinese one and a less good say meta licensed one I’ll take the Chinese one every time. I just don’t ask LLMs enough politically relevant questions for it to matter.
To be fair maybe that take is the LLM equivalent of „I have nothing to hide“ on surveillance
With other LLMs, there's more friction to testing it out and therefore less scrutiny.
(the steelman here, ofc, is "the screenshots drove buzz which drove usage!", but it's sort of steel thread in context, we'd still need to pull in a time machine and a very odd unmet US consumer demand for models that toe the CCP line)
I am not claiming it was intentional, but it certainly magnified the media attention. Maybe luck and not 4d chess.
Ask something like: "Ravioli: x = y: France, what could be x and y?" (it thought for 500s and the answers were "weird")
Or "Order from left to right these items ..." and give partial information on their relative position, eg Laptop is on the left of the cup and the cup is between the phone and the notebook. (Didn't have enough patience nor time to wait the thinking procedure for this)
I've had much better results from non-"reasoning" models by judging their output, doing actual reasoning myself, and then feeding new ideas back to them to steer the conversation. This too can go astray, as most LLMs tend to agree with whatever the human says, so this hinges on me being actually right.
I.e. I have a Quadro RTX 4000 with 8GB of VRAM, and seeing all the models https://ollama.com/search here in all the different sizes, I am absolutely at a loss as to which models at which sizes would be fast enough. I.e. there is no point in downloading the latest, biggest model if it will output 1 tok/min, but I also don't want to download the smallest model if I can avoid it.
Any advice ?
A basic thing to remember: any given dense model requires roughly X GB of memory at 8-bit quantization, where X is the parameter count in billions (of course I am simplifying a little by not counting context size). Quantization is just the 'precision' of the model; 8-bit generally works really well. Generally speaking, it's not worth even bothering with models whose parameter size exceeds your hardware's VRAM. Some people try to get around that by using a 4-bit quant, trading some precision for half the VRAM size. YMMV depending on use-case.
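As a quick sanity check on that rule of thumb, a minimal sketch (weights only; KV cache and runtime overhead not included):

    def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
        # bits -> bytes per parameter, then scale by parameter count (in billions)
        return params_billion * (bits_per_param / 8)

    for bits in (16, 8, 4):
        print(f"32B dense model at {bits}-bit: ~{weight_memory_gb(32, bits):.0f} GB")
    # 16-bit: ~64 GB, 8-bit: ~32 GB, 4-bit: ~16 GB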
I know this is crazy to hear because the big-iron folks still debate 16 vs 32, and 8 vs 16 is near verboten in public conversation.
I contribute to llama.cpp and have seen many, many efforts to measure the evaluation performance of various quants, and no matter which way it was sliced (ranging from volunteers doing subjective A/B voting on responses over months, to objective perplexity loss), Q4 is indistinguishable from the original.
Niche of a niche, but just kind of interesting how the quantization jostles the name recall.
For larger models.
For smaller models, about 12B and below, there is a very noticeable degradation.
At least that's my experience generating answers to the same questions across several local models like Llama 3.2, Granite 3.1, Gemma2 etc and comparing Q4 against Q8 for each.
The smaller Q4 variants can be quite useful, but they consistently struggle more with prompt adherence and recollection especially.
Like if you tell it to generate some code without explaining the generated code, a smaller Q4 is significantly more likely to explain the code regardless, compared to Q8 or better.
All the models I have explored seem to capture nuance in understanding in the floats. It makes sense, as initially it will regress to the mean and slowly lock in lower and lower significance figures to capture subtleties and natural variance in things.
So, the further you stray from average conversation, the worse a model will do, as a function of its quantisation.
So, if you don't need nuance, subtlety, etc., say for a document-summary bot for technical things, 4-bit might genuinely be fine. However, if you want something that can deal with highly subjective material where answers need to be tailored to a user, using in-context learning of user preferences etc., then 4-bit tends to struggle badly unless the user aligns closely with the training distribution's mean.
i mean, deepseek is fp8
I’ve run llama and gemma3 on a base MacMini and it’s pretty decent for text processing. It has 16GB ram though which is mostly used by the GPU with inference. You need more juice for image stuff.
My son’s gaming box has a 4070 and it’s about 25% faster the last time I compared.
The mini is so cheap it’s worth trying out - you always find another use for it. Also the M4 sips power and is silent.
https://ollama.com/library/qwen3:8b-q4_K_M
For fast inference, you want a model that will fit in VRAM, so that none of the layers need to be offloaded to the CPU.
4-bit was "fine", but a smaller 8-bit version beat it in quality for the same speed.
Not simply, no.
But start with a model whose parameter size (in GB) is close to, but less than, your VRAM, decide whether performance is satisfactory, and move from there. There are various ways to sacrifice quality by quantizing the model, or to accept slower inference by not loading the entire model into VRAM.
Which may make it sound more complicated than it should be, since it's back-of-the-napkin stuff, but there are just too many nuances to performance.
Really generally, at this point I expect 4B at 10 tkn/s on a smartphone with 8GB of RAM from 2 years ago. I'd expect you'd get somewhat similar; my guess would be 6 tkn/s at 4B (assuming the rest of the HW is 2018-era and you'll rely on GPU inference and RAM).
The new battlefront may be to take reasoning to the level of abstraction and creativity to handle math problems without a numerical answer (for ex: https://arxiv.org/pdf/2503.21934).
I suspect that kind of ability will generalize well to other areas and be a significant step toward human level thinking.
It takes 5 minutes to generate the first non-thinking token in my testing, for a slightly complex task, via Parasail and Deepinfra on OpenRouter.
https://x.com/paradite_/status/1917067106564379070
Update:
Finally got it work after waiting for 10 minutes.
Published my eval result, surprisingly non-thinking version did slightly better on visualization task: https://x.com/paradite_/status/1917087894071873698
Update: I tested "ollama run qwen3:30b" (the MoE) locally, and while it thought a lot, it wasn't that smart. After 3 follow-up questions it ended up in an infinite loop.
I just tried again, and it ended up in an infinite loop immediately, with just a single prompt and no follow-up: "Write a Python script to build a Fitch parsimony tree by stepwise addition. Take a Fasta alignment as input and produce a nwk string as output."
Update 2: The dense one "ollama run qwen3:32b" is much better (albeit slower of course). It still keeps on thinking for what feels like forever until it misremembers the initial prompt.
Instead of this very high latency remote debugging process with strangers on the internet, you could just try out properly configured models on the hosted Qwen Chat. Obviously the privacy implications are different, but running models locally is still a fiddly thing even if it is easier than it used to be, and configuration errors are often mistaken for bad model performance. If the models meet your expectations in a properly configured cloud environment, then you can put in the effort to figure out local model hosting.
I wrote a step-by-step guide on how to setup Ollama with larger context length a while ago: https://prompt.16x.engineer/guide/ollama
TLDR
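(Not the contents of the linked guide, just an illustration: assuming the ollama Python client package, the context window can also be raised per request via the num_ctx option.)

    # Hypothetical example -- assumes the `ollama` Python package is installed and the
    # model has been pulled; num_ctx overrides Ollama's small default context window.
    import ollama

    response = ollama.chat(
        model="qwen3:30b-a3b",
        messages=[{"role": "user", "content": "Summarize this long document: ..."}],
        options={"num_ctx": 32768},
    )
    print(response["message"]["content"])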
qwen3:30b has the same checksum as https://ollama.com/library/qwen3:30b-a3b-q4_K_M
You can see it's roughly the same size as the one in the official repo (16 files of 4GB each):
https://huggingface.co/Qwen/Qwen3-30B-A3B/tree/main
Until now I found that open weight models were either not as good as their proprietary counterparts or too slow to run locally. This looks like a good balance.
I do not know much about the benchmarks but the two coding ones look similar.
~34 tok/s on a Radeon RX 7900 XTX under today's Debian 13.
If they do open-source something, I expect them to open-source some existing model (maybe something useless like gpt-3.5) rather than provide something new.
Out of all the Qwen3 models on Hugging Face, it's the most downloaded/hearted. https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2...
My concern with these models, though, is that the architectures seem to vary a bit, so I don't know how it'll work.
Edit: The larger models have 128k context length. 32k thinking comes from the chart which looks like it's for the 235B, so not full length.
I like Gemini 2.5 Pro a lot bc it's fast af, but once the context is half used it sometimes struggles to use tools effectively and make edits, and it breaks a lot of shit (on Cursor).
We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.
After trying to implement a simple assistant/helper with GPT-4.1 and getting some dumb behavior from it, I doubt even proprietary models are good enough for every task.
Again, very rough numbers, there's calculators online.
Seems like a 4-bit quantized model would use about half its parameter count in billions as gigabytes, because each parameter is half a byte, right?
Of course, there are lots of factors that can change the RAM usage: quantization, context size, KV cache. And this says nothing about whether the model will respond quickly enough to be pleasant to use.
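For the KV cache specifically, a rough sizing sketch (illustrative architecture numbers, not any particular model's actual config):

    def kv_cache_gb(n_layers, n_kv_heads, head_dim, context_len, bytes_per_value=2):
        # Keys + values, for every layer and every position in the context
        return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_value / 1e9

    # Hypothetical 32B-class dense model with grouped-query attention, fp16 cache
    print(f"{kv_cache_gb(n_layers=64, n_kv_heads=8, head_dim=128, context_len=32768):.1f} GB")
    # ~8.6 GB on top of the weights at a 32k context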
How does "open-weighting" help other researchers/companies?
Using the model to generate synthetic data also lets you distil its reasoning power into other models that you train, which is very powerful.
On top of these, Qwen's technical reports follow model releases by some time, they're generally very information rich. For instance, check this report for Qwen Omni, it's really good: https://huggingface.co/papers/2503.20215
As for other possible technologies, I'm most excited about clone-structured causal graphs[1].
What's very special about them is that they are apparently a 1:1 algorithmic match to what happens in the hippocampus during learning[2], to my knowledge this is the first time an actual end-to-end algorithm has been replicated from the brain in fields other than vision.
[1] "Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps" https://www.nature.com/articles/s41467-021-22559-5
[2] "Learning produces an orthogonalized state machine in the hippocampus" https://www.nature.com/articles/s41586-024-08548-w
They do improve on literally all of these, at incredible speed and without much sign of slowing down.
Are you asking for a technical innovation that will just get from 0 to perfect AI? That is just not how reality usually works. I don't see why of all things AI should be the exception.
On the other hand, LLM progress feels like bullshit: gaming benchmarks and other problems occurred. So either in two years all hail our AGI/AMI (machine intelligence) overlords, or the bubble bursts.
"Give me a PowerShell script to get all users with an email address, and active license, that have not authed through AD or Azure in the last 30 days. Now take those, compile all the security groups they are members of, and check out the file share to find any root level folders that these members have access to and check the audit logs to see if anyone else has accessed them. If not, dump the paths into a csv at C:\temp\output.csv."
Can I write that myself? Yes. In 20 seconds? Absolutely not. These things are saving me hours daily.
I used to save stuff like this and cobble the pieces together to get things done. I don't save any of them anymore because I can for the most part 1 shot anything I need.
Just because it's not discovering new physics doesn't mean it's not insanely useful or valuable. LLMs have probably 5x'd me.
In my early tests however, several of the advertised languages are not really well supported and the model is outputting something that only barely resembles them.
Probably a dataset quality issue for low-resource languages that they cannot personally check for, despite the “119 languages and dialects” claim.
"Write a Golang program that merges huge presorted text files, just like sort -m does". Quite often models need "use heap" as guidance, but this time big model figured it out by itself.
> As this is in trillions, where does this amount of material come from?
wonder at what price
> This is what gpt-5 was supposed to have right? How is this implemented under the hood? Since non-thinking mode is just an empty chain-of-thought, why can't any reasoning model be used in a "non-thinking mode"?
https://developers.googleblog.com/en/start-building-with-gem...
https://ollama.com/library/qwen3:235b-a22b-q4_K_M
Is there a smaller one?
[1] - https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF/tree/mai...
that's 3x h100?