hexagon: support for IQ4_NL and MXFP4 #21018
Merged
max-krasnyansky merged 3 commits into ggml-org:master on Mar 27, 2026
Conversation
- Add IQ4_NL quantization type support to the Hexagon backend (buffer set/get tensor repack, mul_mat, mul_mat_id dispatch)
- Implement HVX IQ4_NL vec_dot kernels (1x1, 2x1, 2x2) with LUT-based 4-bit-index-to-int8 kvalue dequantization
- Add an MXFP4 HMX dequantization path with E8M0 scale conversion, including a batch-4 fast path and a single-tile fallback
- Unify the quantized row size / scale offset logic to handle Q4_0, Q8_0, IQ4_NL, and MXFP4 in the DMA fetch path
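The LUT-based dequantization in the second bullet can be illustrated with a scalar sketch. The kvalues table below is the non-linear codebook ggml uses for IQ4_NL; the function name and the low/high-nibble pairing are illustrative only (the real block layout interleaves halves, and the HVX kernels do this lookup with vector shuffles):

```c
#include <stdint.h>

/* IQ4_NL maps each 4-bit index to a non-linear int8 value via this
 * lookup table (the codebook ggml defines for IQ4_NL). */
static const int8_t kvalues_iq4nl[16] = {
    -127, -104, -83, -65, -49, -35, -22, -10,
       1,   13,  25,  38,  53,  69,  89, 113,
};

/* Scalar sketch: each byte packs two 4-bit indices; look each one up
 * in the table to recover an int8 kvalue. Illustrative only, not the
 * actual HVX kernel or block layout. */
static void dequant_iq4nl_nibbles(const uint8_t *packed, int8_t *out, int n_bytes) {
    for (int i = 0; i < n_bytes; i++) {
        out[2 * i + 0] = kvalues_iq4nl[packed[i] & 0x0F];
        out[2 * i + 1] = kvalues_iq4nl[packed[i] >> 4];
    }
}
```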
Contributor
Author
By the way, what's the correct way to format code? I tried .clang-format but it does not work.
Member
Use
1ed9693 to 282a270 (Compare)
max-krasnyansky
approved these changes
Mar 27, 2026
Member
max-krasnyansky
left a comment
Looks good.
Only one nit: a misaligned pragma (line 127); clang-format always gets that wrong.
By the way, back when I first added MXFP4 support I also just quantized a smaller model (llama3.2-3B at the time) myself to test it end-to-end.
lhez
approved these changes
Mar 27, 2026
Member
@lhez when you get the chance please review/ack. I tested everything on my devices and it looks good.
slartibardfast
pushed a commit
to slartibardfast/llama.cpp
that referenced
this pull request
Apr 12, 2026
* ggml-hexagon: add IQ4_NL and MXFP4 HMX matmul support
  - Add IQ4_NL quantization type support to Hexagon backend (buffer set/get tensor repack, mul_mat, mul_mat_id dispatch)
  - Implement HVX IQ4_NL vec_dot kernels (1x1, 2x1, 2x2) with LUT-based 4-bit index to int8 kvalue dequantization
  - Add MXFP4 HMX dequantization path with E8M0 scale conversion, including batch-4 fast path and single-tile fallback
  - Unify quantized row size / scale offset logic to handle Q4_0, Q8_0, IQ4_NL, and MXFP4 in the DMA fetch path
* ggml-hexagon: fix SKIP_QUANTIZE src1 address mismatch in mixed-quant models
* Fix the pragma indent
Overview
This PR adds support for IQ4_NL and MXFP4 on Hexagon HMX, and fixes a memory address issue that occurs when MXFP4 and Q4_0/IQ4_NL matmul operations are executed consecutively.
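For background on the MXFP4 side, here is a hedged host-side sketch of the per-value math, based on the OCP Microscaling (MX) format rather than the actual HMX kernel: each block of FP4 (E2M1) values shares one E8M0 scale, an exponent-only byte with value 2^(e-127). The function names are illustrative; the real path folds this conversion into tile construction.

```c
#include <math.h>
#include <stdint.h>

/* FP4 (E2M1) magnitudes; the high bit of the 4-bit code is the sign. */
static const float fp4_e2m1[8] = { 0.0f, 0.5f, 1.0f, 1.5f, 2.0f, 3.0f, 4.0f, 6.0f };

/* E8M0 is an exponent-only 8-bit scale: value = 2^(e - 127). */
static float e8m0_to_float(uint8_t e) {
    return ldexpf(1.0f, (int)e - 127);
}

/* Dequantize one MXFP4 element given its shared block scale.
 * Illustrative sketch, not the batch-4 / single-tile HMX code. */
static float mxfp4_dequant_one(uint8_t q4, uint8_t e8m0_scale) {
    float mag  = fp4_e2m1[q4 & 0x7];
    float sign = (q4 & 0x8) ? -1.0f : 1.0f;
    return sign * mag * e8m0_to_float(e8m0_scale);
}
```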
Additional information
Due to the lack of a WoA laptop and limited RAM on the test device, I couldn’t run GPT-OSS-20B directly. Instead, I used a Qwen3 variant to validate MXFP4.
The error happens when SKIP_QUANTIZE is enabled. The operator attempts to reuse previously quantized activations, but the activation memory offset is computed from the current op's weight size rather than the previous op's. The fix records the correct offset for the next layer to use.
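The offset bug above can be sketched as follows. All names here are invented for illustration and are not the actual ggml-hexagon code; the point is only that the reuse path must return the offset recorded when the activations were written, not one recomputed from the current op's weights.

```c
#include <stddef.h>

/* Hypothetical state tracking where quantized activations live. */
struct spad_state {
    size_t src1_offset;  /* recorded when the activations are quantized */
};

/* When skip_quantize is set, reuse the recorded offset; the bug was to
 * recompute it from the current op's weight size, which mismatches when
 * consecutive matmuls use differently sized (mixed-quant) weights. */
static size_t get_src1_offset(const struct spad_state *st,
                              int skip_quantize, size_t cur_weight_size) {
    if (skip_quantize) {
        return st->src1_offset;
    }
    return cur_weight_size;
}
```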