
Console View


Sergei Golubchik
MDEV-39576 PROXY v2 protocol uninitialized memory reads
KhaledSayed04
MDEV-37608: Increase default binlog row event size

Increase default binlog_row_event_max_size to 64k
to reduce event header overhead and improve
performance on modern networks.

Summary of changes:
- Updated sql/sys_vars.cc for the new 64k default.
- Added binlog_row_event_max_size_basic.test to
verify min/max boundaries and read-only property.
- Audited MTR suite; retained pins only for tests
requiring specific byte-math for cache spills
and event fragmentation logic.
- Standardized some affected .opt files to a single
line format to follow MTR best practices.

# Affected Test Groups

## Cache Boundary & Overflow Handling
These tests utilize a 4KB binlog_cache_size (the server minimum).
The 8KB pin ensures that events exceed the memory buffer, forcing
the server into the incremental flush-to-disk and overflow logic.
With 64KB, the 16x scale disparity may trigger 'Event too large'
rejections during allocation, bypassing IO_CACHE spill code paths.

Affected Files:
- mysql-test/main/alter_table_online_debug.opt
- mysql-test/suite/binlog/t/binlog_bug23533.opt
- mysql-test/suite/galera/t/galera_binlog_cache_size.opt
- mysql-test/suite/rpl/t/rpl_row_binlog_max_cache_size-master.opt
- mysql-test/suite/rpl/t/rpl_stm_binlog_max_cache_size-master.opt
- mysql-test/suite/rpl/t/rpl_mixed_binlog_max_cache_size-master.opt
- mysql-test/suite/rpl/t/rpl_create_select_row-master.opt

## Engine Spills & Recovery (OOB/ENOSPC)
These tests exercise Out-of-Band (OOB) offloading or verify
resilience during Disk Full (ENOSPC) scenarios while
flushing the binary log cache.
The 8KB pin is required to maintain the 'Small World'
math where even a single statement forces a transition from
memory to storage.
At 64KB, these transactions would remain 'in-line' in memory,
failing to trigger the temporary file creation,
disk-full error injection, and encryption logic that these
tests are specifically designed to measure.

Affected Files:
- mysql-test/suite/binlog_in_engine/rpl_oob.opt
- mysql-test/suite/binlog_in_engine/xa.opt
- mysql-test/suite/binlog_in_engine/rpl_dual_cache-master.opt
- mysql-test/suite/rpl/t/rpl_row_binlog_tmp_file_flush_enospc-master.opt
- mysql-test/suite/rpl/t/rpl_mdev-11092.opt
- mysql-test/suite/encryption/t/tempfiles.opt
- mysql-test/suite/perfschema/t/io_cache-master.opt

## Micro-Rotation & Space Limits
These tests validate how MariaDB purges binary logs
or manages temporary space under strict constraints.
They utilize limits ranging from 'Micro-Thresholds'
(e.g., max-binlog-total-size=1.5k) to moderate session
quotas (e.g., max_tmp_session_space_usage=512K).
The 8KB pin is retained to maintain the mathematical
proportionality between event size and space limits.

Affected Files:
- mysql-test/suite/rpl/t/max_binlog_total_size-master.opt
- mysql-test/main/tmp_space_usage-master.opt
- mysql-test/suite/rpl/t/rpl_binlog_cache_disk_full_row-master.opt
- mysql-test/suite/binlog/include/binlog_cache_stat.opt

## Event Fragmentation & Protocol Integrity
These tests verify that large rows are correctly
sliced into multiple Rows_log_event fragments.
A 64KB default would allow the test payloads to fit
into a single event, neutralizing the validation
of reassembly and protocol flags.

Affected Files:
- mysql-test/suite/rpl/t/rpl_fragment_row_event_main.cnf
- mysql-test/suite/rpl/t/rpl_loaddata_map-master.opt
- mysql-test/suite/rpl/t/rpl_loaddata_map-slave.opt
- mysql-test/suite/rpl/t/rpl_checksum_cache.opt
- mysql-test/suite/binlog_encryption/rpl_packet.cnf

Reviewed-by: Brandon Nesterenko <[email protected]>
Reviewed-by: Georgi Kodinov <[email protected]>
Sergei Golubchik
MDEV-39564 One-byte OOB write in PROXY protocol v1 header parser
Jan Lindström
MDEV-39488 : Skip Galera test requiring perfschema if -DPLUGIN_PERFSCHEMA=NO

Test change only; this test does not need perfschema.
Georgi (Joro) Kodinov
MDEV-39572: Add marking requirements for AI-assisted contributions to COMMUNITY_CONTRIBUTIONS.md

Explained the use of git commit trailers to mark contributions
co-developed using AI tools.
Monty
fixup! 13e1fdfb6d3deb492f81e07caacadc2c4fa75dfb

Fixed duplicate key error when converting a HEAP table to Aria
Georgi (Joro) Kodinov
MDEV-39572: Add marking requirements for AI-assisted contributions to COMMUNITY_CONTRIBUTIONS.md

Explained the use of git commit trailers to mark contributions
co-developed using AI tools.
bsrikanth-mariadb
MDEV-39405: store all the plugin-engines optimizer costs

Store all the plugin engines' optimizer costs in the context so that
the replay server uses the same engine costs when computing the query cost.
kjarir
MDEV-37664: Fix main.lotofstack failure under UBSAN

The test called bug10100p(255) expecting ER_STACK_OVERRUN_NEED_MORE, but
under UBSAN inflated per-frame stack usage causes the recursion limit to
be hit before the stack guard fires.

It was not possible to extend max_sp_recursion_depth beyond 255,
which would have enabled a compatible test.

The alternative considered was allowing ER_SP_RECURSION_LIMIT as an
acceptable error, but there are already sufficient tests covering it.

To obtain a passing MTR result in the UBSAN environment we have simply
disabled the test there.

Reviewers: Joro Kodinov, Sergey Vojtovich, Daniel Black
Arcadiy Ivanov
Fix MSAN crash in `hp_rec_hashnr` for geometry/blob DISTINCT keys

`heap_prepare_hp_create_info()` mishandled blob key segments whose
field type is `Field_blob` or `Field_geom` (as opposed to
`Field_blob_key`).

`Field_blob::key_type()` returns `HA_KEYTYPE_VARBINARY2` (geometry)
or `HA_KEYTYPE_VARTEXT2` (text blobs).  Commit `30415846402`
("Introduce Field_blob_key") added logic that stripped `HA_BLOB_PART`
from these segments, assuming they use "VARCHAR packing."  This is
wrong: DISTINCT/UNION key fields are `Field_blob` (not
`Field_blob_key`), and their record format is a blob descriptor
(`packlength` bytes of length + 8-byte data pointer), not a varchar
(2-byte length prefix + inline data).

After `HA_BLOB_PART` was stripped, `hp_create.c` normalized
`VARBINARY2` → `VARTEXT1`.  The hash function `hp_rec_hashnr()` then
entered the `VARTEXT1` branch, which reads the first 2 bytes as a
varchar length and hashes that many bytes starting at offset 2.  For a
geometry blob descriptor, the first 2 bytes are the low bytes of the
WKB data length (e.g. ~100 for a simple polygon), so the hash read
~100 bytes starting inside the 12-byte descriptor — overshooting into
adjacent fields or uninitialized record buffer memory.

This caused MSAN "use-of-uninitialized-value" crashes in
`innodb_gis.1`, `innodb_gis.point_basic`, `main.gis`, and
`innodb_gis.gis` on the `amd64-msan-clang-20` CI builder.  Beyond the
MSAN crash, it was also a functional bug: hashing the raw pointer
bytes meant two rows with identical geometry data but different memory
addresses would hash differently, breaking UNION DISTINCT
deduplication.

**Fix**: when `HA_BLOB_PART` is set and the key type is
`VARBINARY2`/`VARTEXT2`, promote to `VARBINARY4`/`VARTEXT4` instead
of stripping the flag.  Set `bit_start` to the actual `packlength`
and `length` to `4 + sizeof(pointer)`.  `hp_create.c` then normalizes
to `VARTEXT4` and the hash/compare functions use the blob path:
dereference the pointer and operate on the actual data.
Arcadiy Ivanov
Add stress test for HEAP blob insert/delete/update cycles

3-phase `heap/blob_stress` MTR test exercising free-list
fragmentation, continuation chain reuse, and data integrity under
sustained mixed DML:

- Phase 1: 200-cycle stored procedure — 5 inserts, 2 deletes,
  3 updates per cycle with blob sizes cycling through Case A/B/C;
  shadow table row count verification with `SIGNAL` on mismatch
- Phase 2: near-capacity (2 MB) fill/fragment/refill with free-list
  scavenging, then full-delete reinsert
- Phase 3: 20 grow/shrink `UPDATE` cycles (even rows 10-18 KB,
  odd rows 5-20 bytes)

All phases verify blob content integrity via single-character
`REPEAT` pattern check. Addresses Monty's F14 feedback.
Arcadiy Ivanov
Disable `ps_protocol` in `blob_big` tests

`SHOW STATUS LIKE 'Created_tmp%'` counts include the extra temp table
created by prepared statement re-execution under `--ps-protocol`. The
test already disabled `cursor_protocol` and `ps2_protocol` but missed
`ps_protocol`, causing `blob_big1`/`blob_big2`/`blob_big3` to fail on
CI builders that run with `--ps-protocol`.
Arcadiy Ivanov
Overflow-to-Aria on `ha_update_tmp_row()` for GROUP BY temp tables

When `ha_update_tmp_row()` fails with `HA_ERR_RECORD_FILE_FULL` on a
HEAP temp table (e.g., `MAX(TEXT)` aggregate growing the blob during
GROUP BY accumulation), convert the table to Aria and retry the update
— matching the existing INSERT overflow handling.

**Mechanism**: `create_internal_tmp_table_from_heap()` copies all rows;
`record[0]` write is rejected as duplicate (same GROUP BY key).
`get_dup_key()` populates `dup_ref`, then `ha_rnd_pos()` locates the
old row in Aria for the update.

**Two call sites fixed**:
- `end_update()`: switches to `end_unique_update` after conversion
- `end_unique_update()`: restores INDEX mode via `rnd_inited` flag

`GROUP_CONCAT` does NOT trigger this path — its `update_field()` is
`DBUG_ASSERT(0)` (it accumulates internally). `MAX(TEXT)` / `MIN(TEXT)`
are the aggregates that write growing blobs via `result_field->store()`.
bsrikanth-mariadb
MDEV-39405: store all the plugin-engines optimizer costs

Store all the plugin engines' optimizer costs in the context so that
the replay server uses the same engine costs when computing the query cost.
Arcadiy Ivanov
Replace `hp_blob_run_format()` enum with direct bit testing

Remove `enum hp_blob_format` and `hp_blob_run_format()` indirection.
Add `HP_ROW_MULTIPLE_REC` (bit 5) so all three blob storage formats
have a dedicated flag bit. Add named inline predicates
`hp_is_single_rec()`, `hp_is_zerocopy()`, `hp_is_multi_run()` matching
the existing `hp_is_active()`/`hp_has_cont()`/`hp_is_cont()` pattern.

Change `hp_write_run_data()` format parameter from enum to `uchar`
receiving bit constants directly; simplify flags byte assignment from
ternary to bitwise OR.

Addresses review feedback F127-F128, F130, F132-F134.
Arcadiy Ivanov
Avoid double blob materialization in `find_unique_row()`

For blob tables, `find_unique_row()` previously materialized blobs
twice: once during `hp_rec_key_cmp()` (per-segment via
`hp_materialize_one_blob()`) and again via `hp_read_blobs()` after
the match was found.

Reorder the blob path to materialize-then-compare: save the input
record, copy the stored candidate, call `hp_read_blobs()` once to
materialize all blobs, then compare via `hp_rec_key_cmp()` with
`info=NULL` since both records now have direct data pointers.

Non-blob tables keep the original fast path unchanged.
Sergei Golubchik
MDEV-39565 missing filename check in mariadb-backup --decompress

check for tablename-safe characters in backed-up table files
Arcadiy Ivanov
Batch tail allocation for blob continuation chains

`hp_alloc_from_tail()` now takes `uint *blocks` (in/out) and allocates
a contiguous batch of records from the current leaf block in one call,
replacing the per-record inner loop in `hp_write_one_blob()` Step 2.

The caller pre-computes the record count needed for the chosen storage
format (Case B for `is_only_run`, Case C otherwise), and the function
returns however many are available up to the request. The flat
if/else-if/else then selects Case A, B, or C based on the actual count.

This eliminates the record-by-record extension loop, both contiguity
guards with `abort()`, and the Case B extra-record allocation logic,
reducing Step 2 from ~170 lines to ~60.
Arcadiy Ivanov
Reclaim tail records on failed blob allocation

When `hp_write_one_blob()` fails partway through tail allocation
(e.g. `HA_ERR_RECORD_FILE_FULL`), `hp_free_run_chain()` puts the
partial chain onto the delete list. These records were just
tail-allocated but once on the delete list they could only be reused
via free-list scavenging, not tail allocation, so `last_allocated`
only grew forward.

Add `hp_shrink_tail()` which pops tail-positioned records from the
delete list head and decrements `last_allocated`. Crosses block
boundaries by locating the previous leaf block via `hp_find_block()`
and updating `last_blocks`. Empty blocks stay allocated in the tree
(freed at table drop).

Add `high_water_allocated` to `HP_BLOCK` to track the peak
`last_allocated` before shrinking. In `hp_alloc_from_tail()`, when
`block_pos == 0` and `last_allocated < high_water_allocated`, the
next leaf block already exists in the tree: reuse it via
`hp_find_block()` instead of calling `hp_get_new_block()`, avoiding
memory waste from duplicate block allocations.

Unit tests (41 new assertions in `hp_test_freelist-t.c`):
- Single-block tail reclaim after failed blob insert
- Cross-block reclaim (2 blocks, 1 boundary crossing)
- 3-block reclaim (2 boundary crossings, `last_blocks` restoration)
- Orphaned block reuse: non-blob and blob inserts fill reclaimed
  blocks without growing `data_length`
Jan Lindström
MDEV-39561 : Galera test failure on mysql-wsrep-features#8

Test case changes only. Fix wait_conditions.
Dmitry Shulga
MDEV-30645: CREATE TRIGGER FOR { STARTUP | SHUTDOWN }

Follow-up to fix building with the EMBEDDED server. Event support is
not compiled in for the embedded server, but parts of the events
implementation are used to support triggers, so the source code common
to events and triggers was extracted into separate files,
event_common.cc/event_common.h.
Monty
Remove pack_length_no_ptr()

Replaced pack_length_no_ptr() with existing length_size() that does the
same thing.
bsrikanth-mariadb
MDEV-39405: store all the plugin-engines optimizer costs

Store all the plugin engines' optimizer costs in the context so that
the replay server uses the same engine costs when computing the query cost.
Monty
squash! 4888cdb69fd115986625da2a44b78a6a3898983c

Fixed that heap_prepare_hp_create_info() honors tmp_memory_table_size
Fixed memory overrun error in hp_test_hash-t
Added DBUG_ASSERT to ensure that converted HEAP keys are not used
with index_read()
Andrei Elkin
MDEV-39500 STRICT execution mode slave silently overwrites record even at mismatch

The problem at hand is that the default STRICT slave execution mode is
not strict enough.
In row-based replication, if the slave applier locates a row for an
UPDATE using a unique key, it silently applies the changes even if the
before-image in the replication event does not match the local non-PK columns.
This allows, among other things, data divergence to go undetected
without throwing any errors.

This is resolved by introducing a new slave execution mode: STRINGENT.
When @@global.slave_exec_mode = STRINGENT is configured, the applier
explicitly compares the event's before-image against the local record.
If a mismatch is detected, it rejects the update and safely halts
replication by throwing a new error, ER_INCONSISTENT_SLAVE_RECORD.
Sergei Golubchik
MDEV-39581 dynamic column header missing sanity checks
Monty
Removed duplicate versions of Field::row_pack_length()
Dmitry Shulga
MDEV-30645: CREATE TRIGGER FOR { STARTUP | SHUTDOWN }

Core implementation

- Extended the sql grammar to support creation of system triggers
  on startup/shutdown
- System triggers metadata is loaded from the table mysql.event
- Events and system triggers share the same namespace since both are stored
  in the table mysql.event, declared with PRIMARY KEY (`db`,`name`).
  As a result, an attempt to create an event with the same name as an
  existing trigger (and vice versa) fails with the new error
  ER_TRG_EVENT_CONFLICTS_NAME
- System triggers can be created only by users with the SUPER privilege
- Added the option --sys_triggers for mysqldump to dump system triggers.
  The option is turned off by default.
Oleksandr Byelkin
MDEV-39389 Memory leaks in _db_set_init

Handle forgotten case of resetting parameters
(free command_line before assigning a new one).
Arcadiy Ivanov
Code review feedback: `hp_update.c` cleanup, test renames, style fixes

Apply Monty's review feedback across HEAP blob implementation:

**`hp_update.c`:**
- Hoist `HP_BLOB_DESC *desc` to block scope, use `desc++` in all three
  blob loops instead of re-indexing `&share->blob_descs[i]` each iteration
- Move `new_len` declaration to block scope, remove inner `{ }` wrapper
  block, dedent the write-new-chains loop body by one level
- Replace `if (blob_changed[i]) any_changed= TRUE` with branchless
  `any_changed|= blob_changed[i]`
- Add braces to rollback `for (j= 0; j < i; j++)` loop body
- Update chain pointer restoration comment to explain `pos` vs `old`
  pointer semantics for segmented blobs
- Rename inner loop `new_len` to `cur_len` to avoid shadowing block-scope
  `new_len`

**`read_lowendian()` move (F33/F63):**
- Move `read_lowendian()` from `sql/field.h` to `include/my_base.h` so
  pure-C storage engines can use it
- Convert `hp_blob_length()` from standalone function in `hp_blob.c` to
  `static inline` wrapper in `heapdef.h` calling `read_lowendian()`
- Convert `hp_blob_key_length()` in `hp_hash.c` to `static inline`
  wrapper calling `read_lowendian()`

**Test renames** (Monty's naming convention — drop `heap_` prefix):
- `heap_blob.test` → `blob.test`
- `heap_blob_big{1,2,3}.test` → `blob_big{1,2,3}.test`
- `heap_blob_big.inc` → `blob_big.inc`
- `heap_blob_groupby.test` → `blob_group_by.test`
- `heap_blob_ops.test` → `blob_ops.test`

**Other files:** Style fixes from earlier feedback items applied across
`hp_blob.c`, `hp_write.c`, `hp_hash.c`, `hp_scan.c`, `hp_delete.c`,
`ha_heap.cc`, `heapdef.h`, `_check.c`, `heap.h`, `field.h`,
`sql_select.cc`, and test files.
Sergei Golubchik
proxy protocol v2: fix a harmless typo

According to the RFC, the length field is 2 bytes,
but the maximum length is 226 and there is a validity
check for length <= 240.
Jan Lindström
MDEV-39488 : Skip Galera test requiring perfschema if -DPLUGIN_PERFSCHEMA=NO
Mohammad Tafzeel Shams
MDEV-38305: Expose adaptive hash index statistics in ANALYZE FORMAT=JSON

Expose InnoDB's Adaptive Hash Index (AHI) statistics through ANALYZE
FORMAT=JSON output to provide query-level visibility into AHI usage
and effectiveness. This allows DBAs and developers to monitor how well
the adaptive hash index is serving their workloads on a per-query basis.

The r_ahi_stats object (nested inside r_engine_stats) now reports four
key metrics: ahi_searches (successful AHI lookups), ahi_searches_btree
(AHI misses requiring B-tree fallback), ahi_rows_added (rows inserted
into AHI), and ahi_pages_added (pages indexed by AHI).

- btr_ahi_inc_searches(): Increment counter when AHI lookup succeeds.
- btr_ahi_inc_searches_btree(): Increment counter when AHI lookup fails
  and falls back to B-tree search.
- btr_ahi_inc_rows_added(): Increment counter when rows are added to
  the adaptive hash index structure.
- btr_ahi_inc_pages_added(): Increment counter when new pages are
  indexed by AHI.
- btr_cur_t::search_leaf(): Call btr_ahi_inc_searches() on successful
  AHI hit and btr_ahi_inc_searches_btree() on AHI miss to track search
  outcomes at the point where AHI is utilized.
- trace_engine_stats(): Output r_ahi_stats object with all four AHI
  counters in JSON format when any AHI activity is detected during query
  execution.
- ha_handler_stats: Added ahi_searches, ahi_searches_btree, ahi_rows_added,
  and ahi_pages_added fields to track per-query AHI statistics.
- ahi_stats.test: Comprehensive verification of AHI statistics reporting
  across different scenarios: insufficient accesses (no AHI build),
  threshold triggering (AHI construction), heavy warmup (full AHI
  utilization), and disabled AHI (verify zero statistics).
- check_ahi_status.inc: Reusable include file for executing queries with
  configurable warmup repetitions and extracting AHI statistics from
  ANALYZE FORMAT=JSON output using JSON path expressions.
Arcadiy Ivanov
Add `DBUG_ASSERT` guards for MSAN regression fixes

`Field_geom::store()`: assert `blob_storage` is not set, catching
any future removal of the MDEV-16699 `group_concat` downgrade in
`Field_blob::make_new_field()`.

`heap_prepare_hp_create_info()`: after `HA_BLOB_PART` promotion,
assert key type is `VARTEXT4`/`VARBINARY4` and `bit_start` is 1-4,
catching blob key segments that were not promoted from `VARTEXT2`.
Arcadiy Ivanov
Free-list scavenge fallback + contiguity fix for blob allocation

Two fixes in `hp_write_one_blob()`:

**Bug fix**: Step 1 free-list contiguity detection failed to update
`prev_pos` inside the contiguity branch, so the check
`pos == prev_pos - recbuffer` could only detect 2-record groups.
The third record was always compared against the original `prev_pos`
(2 recbuffers away), causing a false discontinuity. Fix: add
`prev_pos = pos` after `run_start = pos`.

**Deficiency #2**: When Step 2 (tail allocation) fails with
`HA_ERR_RECORD_FILE_FULL` and there are still deleted records on the
free list, a new Step 3 walks the entire free list accepting any
contiguous group (even single slots). Each group is written as a
Case C run via `hp_unlink_and_write_run()`. This produces maximally
fragmented chains, which are slower to read but correct. Failing with
table-full when free slots exist is worse than a fragmented chain.

Tests:
- `hp_test_freelist-t.c`: 38 unit tests covering contiguity detection
  (prev_pos bug guard), repeated delete-reinsert cycles, Step 3
  scavenge fallback, and true capacity exhaustion
- `heap/blob_fallback.test`: MTR test exercising the fallback at SQL
  level with fragmented free list
- Extracted shared `hp_test_helpers.h` from duplicate code in
  `hp_test_scan-t.c` and `hp_test_freelist-t.c`
Dmitry Shulga
MDEV-30645: CREATE TRIGGER FOR { STARTUP | SHUTDOWN }

Extended the system table mysql.event to support creation of
system triggers (startup, shutdown, logon, logoff) and ddl triggers.

The system table mysql.event is extended with three columns
  `kind`, `when`, `ddl_type`.
For the task MDEV-30645 only the column `kind` is required;
the other two columns are needed for the tasks
  MENT-2355, MENT-2291.
Since it is better to reduce the number of times the system table
is changed, and as a consequence the number of upgrades to be run,
the entire set of columns is added at once.

The columns `kind` and `ddl_type` are of type SET to allow
storing triggers that handle several events.
Dmitry Shulga
MDEV-30645: CREATE TRIGGER FOR { STARTUP | SHUTDOWN }

Follow-up to fix the issue with starting server on a data dictionary
with broken table mysql.event or on its previous version without
columns required for storing metadata about triggers.
Monty
Fixed duplicate key error when converting a HEAP table to Aria

The problem was that create_internal_tmp_table() tries to create a
normal key for the blob, which does not work.
The fix is to force a unique key if BLOB keys were used for the
original HEAP table.
Arcadiy Ivanov
Fix stale comments, test bugs, and expand test coverage

**Comment fixes** (source):
- `hp_create.c`: fix "VARTEXT2" → "VARTEXT4", "Paclength" typo, replace
  stale 8-line `bit_start` derivation comment with accurate 3-line version
- `sql_select.cc`: fix `make_sort_key()` → `make_sort_key_part()` in
  `remove_duplicates()` comment; clarify HEAP packed-format comment
- `field.h`: fix "inc record" → "in a record" typo

**Comment fixes** (tests):
- `hp_test_key_setup-t.cc`: replace stale Phase 1 file header and function
  comment describing `key_part->length` widening with accurate blob segment
  normalization description
- `hp_test_hash-t.c`: fix swapped field names in mixed-key record layout
  comment; remove stale "hp_hashnr is static" comment; add missing
  `bit_length=2` for VARTEXT1 segment in `setup_mixed_keydef`
- `blob_big3.test`: fix "without" → "with", "dicrectly" → "directly"
- `blob_big.inc`: fix "in both runs" → "when HEAP is used"

**Test bug fixes**:
- `blob_sj_test`: change `semijoin=off` to `semijoin=on,firstmatch=off,
  loosescan=off` so the test actually exercises DuplicateWeedout SJ strategy;
  add optimizer_switch save/restore
- `blob_fallback`: replace MD5-based integrity checks with direct
  `b = repeat(...)` comparisons
- `blob_stress`: replace 4 useless `check table` (HEAP returns "not
  supported") with echo comments
- `blob_big.inc`: add save/restore for `max_sort_length` and
  `sort_buffer_size`

**Test coverage expansion** (unit tests — `hp_test_hash-t.c`):
- Add packlength 1, 3, 4 hash/comparison tests (12 assertions)
- Add blob+blob multi-segment key test (6 assertions)
- Plan: 49 → 67

**Test coverage expansion** (MTR — `blob.test`):
- INSERT ON DUPLICATE KEY UPDATE with blobs (conflict, no-conflict,
  NULL transitions)
- JSON column CRUD on MEMORY table
- LONGBLOB at uint16 `run_rec_count` split boundary (1,048,549 / 1,048,550
  / 2MB)
- Case A/B/C exact boundary blob sizes (5B, 6B, 10KB, 50KB) with
  cross-case UPDATE
- BTREE+blob rejection (both BTREE and HASH explicit blob keys rejected;
  BTREE on non-blob column with blob data works)
- Table-full error: verify no partial rows from failed inserts (row count,
  corruption check, scan count)

**Test coverage expansion** (MTR — `blob_stress.test`):
- NULL→non-NULL and non-NULL→NULL blob UPDATE operations in the 200-cycle
  stored procedure

**Build**:
- Wire `hp_test_key_setup-t.cc` into CMake build; remove Phase 1-only
  tests (`test_rebuild_key_from_group_buff_mixed`,
  `test_varchar_promoted_to_blob`); update assertions for current blob
  segment normalization; plan: 47 → 34
Arcadiy Ivanov
Fix MSVC `C4267` warnings: `size_t` to narrower type conversions

- `Field_blob::get_key_image_itRAW` and `Field_blob::key_cmp`: cast
  `local_char_length` to `uint32` in `set_if_smaller()` — safe because
  `charpos()` is bounded by `blob_length` which is already `uint32`.
- `hp_test_hash-t.c`: widen `blob_len` parameter from `uint16` to
  `size_t` in `build_record()` and `build_mixed_record()` to match
  `LEX_CUSTRING::length` type.