
hexagon: support for IQ4_NL and MXFP4 #21018

Merged
max-krasnyansky merged 3 commits into ggml-org:master from aizip:feat/hexagon-mxfp4-iq4nl
Mar 27, 2026

Conversation

@njsyw1997
Contributor

Overview

This PR adds support for IQ4_NL and MXFP4 on Hexagon HMX, and fixes a memory address issue that occurs when MXFP4 and Q4_0/IQ4_NL matmul operations are executed consecutively.

Additional information

Due to the lack of a WoA laptop and limited RAM on the test device, I couldn’t run GPT-OSS-20B directly. Instead, I used a Qwen3 variant to validate MXFP4.

The error occurs when SKIP_QUANTIZE is enabled and the operator attempts to reuse previously quantized activations: the activation memory offset was computed from the current op's weight size rather than the previous op's weight size. The fix records the correct offset for the next layer to use.

Requirements

  • I have read and agree with the contributing guidelines
  • AI usage disclosure: Yes. Used for adding tests, logs and creating scripts to filter logs.

- Add IQ4_NL quantization type support to Hexagon backend (buffer
  set/get tensor repack, mul_mat, mul_mat_id dispatch)
- Implement HVX IQ4_NL vec_dot kernels (1x1, 2x1, 2x2) with
  LUT-based 4-bit index to int8 kvalue dequantization
- Add MXFP4 HMX dequantization path with E8M0 scale conversion,
  including batch-4 fast path and single-tile fallback
- Unify quantized row size / scale offset logic to handle Q4_0,
  Q8_0, IQ4_NL, and MXFP4 in the DMA fetch path
github-actions bot added the ggml (changes relating to the ggml tensor library for machine learning) and Hexagon labels Mar 26, 2026
@njsyw1997
Contributor Author

By the way, what's the correct way to format the code? I tried .clang-format but it doesn't work.

@CISC
Member

CISC commented Mar 26, 2026

By the way, what's the correct way to format the code? I tried .clang-format but it doesn't work.

Use git clang-format (after installing the appropriate release of git-clang-format).

@njsyw1997 njsyw1997 force-pushed the feat/hexagon-mxfp4-iq4nl branch from 1ed9693 to 282a270 Compare March 26, 2026 18:00
@njsyw1997 njsyw1997 requested a review from a team as a code owner March 26, 2026 18:00
Member

@max-krasnyansky max-krasnyansky left a comment


Looks good.
Only one nit: a misaligned pragma (line 127). clang-format always gets that wrong.

btw, back when I first added MXFP4 support I also just quantized a smaller model (llama3.2-3B at the time) myself to test it end-to-end.

@max-krasnyansky
Member

@lhez when you get the chance please review/ack.

I tested everything on my devices and it looks good.
We're now passing IQ4_NL MUL_MAT tests with Hexagon 👍

$ HB=0 D=HTP0 ./scripts/snapdragon/adb/run-tool.sh test-backend-ops -b HTP0 -o MUL_MAT -p type_a=iq4_nl 
...
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=1,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=2,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=3,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=4,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=5,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=6,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=7,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=8,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=9,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=1,k=32,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=16,n=1,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=1,n=64,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1): OK
  12/12 tests passed

@max-krasnyansky max-krasnyansky merged commit ee051c1 into ggml-org:master Mar 27, 2026
45 checks passed
slartibardfast pushed a commit to slartibardfast/llama.cpp that referenced this pull request Apr 12, 2026
* ggml-hexagon: add IQ4_NL and MXFP4 HMX matmul support

- Add IQ4_NL quantization type support to Hexagon backend (buffer
  set/get tensor repack, mul_mat, mul_mat_id dispatch)
- Implement HVX IQ4_NL vec_dot kernels (1x1, 2x1, 2x2) with
  LUT-based 4-bit index to int8 kvalue dequantization
- Add MXFP4 HMX dequantization path with E8M0 scale conversion,
  including batch-4 fast path and single-tile fallback
- Unify quantized row size / scale offset logic to handle Q4_0,
  Q8_0, IQ4_NL, and MXFP4 in the DMA fetch path

* ggml-hexagon: fix SKIP_QUANTIZE src1 address mismatch in mixed-quant models

* Fix the pragma indent

4 participants