
Console View


Arcadiy Ivanov
Fix PAD SPACE blob comparison and add blob key tests

**Bug fix** (`padspace_early_exit`): `hp_rec_key_cmp()` and `hp_key_cmp()`
in `hp_hash.c` had early-exit checks `if (len1 != len2) return 1` for
blob key segments, which broke PAD SPACE collations (the default). With
PAD SPACE, `'abc'` (len=3) and `'abc   '` (len=6) must compare equal
because trailing spaces are insignificant, but the length check rejected
them before reaching `strnncollsp()`.

Fix: only short-circuit on length mismatch for NO PAD collations
(`MY_CS_NOPAD`). This bug was discovered during the VARCHAR-to-BLOB
promotion work (Phase 1) and affects any HEAP table with blob key
segments, manifesting in `COUNT(DISTINCT)` on TEXT columns returning
inflated counts.
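The guard can be sketched as follows. The `MY_CS_NOPAD` flag value, the struct layout, and the trailing-space trim standing in for `strnncollsp()` are illustrative assumptions, not the actual charset API:

```c
#include <stddef.h>
#include <string.h>

#define MY_CS_NOPAD 1u  /* flag value assumed for illustration */

struct charset_info { unsigned state; };

/* Sketch: only NO PAD collations may short-circuit on a length
   mismatch; PAD SPACE must fall through to the full comparison. */
static int blob_seg_cmp(const struct charset_info *cs,
                        const unsigned char *a, size_t len1,
                        const unsigned char *b, size_t len2)
{
  if ((cs->state & MY_CS_NOPAD) && len1 != len2)
    return 1;                         /* NO PAD: lengths must match */
  if (!(cs->state & MY_CS_NOPAD))
  {                                   /* PAD SPACE: trailing spaces
                                         are insignificant */
    while (len1 && a[len1 - 1] == ' ') len1--;
    while (len2 && b[len2 - 1] == ' ') len2--;
  }
  if (len1 != len2)
    return 1;
  return memcmp(a, b, len1) != 0;
}
```

With the buggy unconditional early exit, the first pair below would wrongly compare unequal, which is exactly what inflated `COUNT(DISTINCT)` results.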

**Test coverage** transferred from Phase 1:

- `heap.heap_blob_ops` MTR test: exercises HEAP internal temp tables
  with explicit TEXT columns (GROUP BY, DISTINCT, IN-subquery, CTEs,
  window functions, ROLLUP). Includes a targeted PAD SPACE scenario
  that catches `padspace_early_exit`.

- `hp_test_hash-t` unit test (43 tests): validates blob hash/compare
  functions including PAD SPACE collation, NULL/empty blobs,
  multi-segment keys, key format round-trips.

- `hp_test_key_setup-t` unit test (9 tests): validates
  `heap_prepare_hp_create_info()` handling of blob key segments
  (`distinct_key_truncation`) and garbage `key_part_flag`
  (`garbage_key_part_flag`). Four Phase 1-specific assertions are
  deferred via `#if 0` (these bugs are compensated by `hp_create.c`
  runtime normalization in MDEV-38975 proper but will be needed
  when Phase 1 removes that safety net).
Marko Mäkelä
fixup! b182d721372bbb85b90118ace5065f5a279a3586
Arcadiy Ivanov
MDEV-38975: HEAP engine BLOB/TEXT/JSON/GEOMETRY column support

Allow BLOB/TEXT/JSON/GEOMETRY columns in MEMORY (HEAP) engine tables
by storing blob data in variable-length continuation record chains
within the existing `HP_BLOCK` structure.

**Continuation runs**: blob data is split across contiguous sequences
of `recbuffer`-sized records. Each run stores a 10-byte header
(`next_cont` pointer + `run_rec_count`) in the first record; inner
records (rec 1..N-1) have no flags byte — full `recbuffer` payload.
Runs are linked via `next_cont` pointers. Individual runs are capped
at 65,535 records (`uint16` format limit); larger blobs are
automatically split into multiple runs.
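A sketch of the run header and the split arithmetic. The 8-byte pointer field plus 2-byte count layout is an assumption for illustration; only the 10-byte total and the `uint16` cap come from the text above:

```c
#include <stdint.h>
#include <string.h>

enum { HP_CONT_HEADER_SIZE = 10, HP_MAX_RUN_RECORDS = 65535 };

/* Assumed layout: 8-byte next_cont "pointer" + 2-byte run_rec_count. */
static void run_header_write(unsigned char *rec, uint64_t next_cont,
                             uint16_t run_rec_count)
{
  memcpy(rec, &next_cont, 8);          /* bytes 0..7 */
  memcpy(rec + 8, &run_rec_count, 2);  /* bytes 8..9 */
}

static uint16_t run_header_count(const unsigned char *rec)
{
  uint16_t n;
  memcpy(&n, rec + 8, 2);
  return n;
}

/* How many runs a blob needs: each record carries `payload` bytes and
   a run is capped at HP_MAX_RUN_RECORDS records (uint16 limit). */
static size_t runs_needed(size_t data_len, size_t payload)
{
  size_t recs = (data_len + payload - 1) / payload;
  size_t runs = (recs + HP_MAX_RUN_RECORDS - 1) / HP_MAX_RUN_RECORDS;
  return runs ? runs : 1;   /* assume even an empty blob takes one run */
}
```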

**Zero-copy reads**: single-run blobs return pointers directly into
`HP_BLOCK` records, avoiding `blob_buff` reassembly entirely:
- Case A (`run_rec_count == 1`): return `chain + HP_CONT_HEADER_SIZE`
- Case B (`HP_ROW_CONT_ZEROCOPY` flag): return `chain + recbuffer`
- Case C (multi-run): walk chain, reassemble into `blob_buff`
`HP_INFO::has_zerocopy_blobs` tracks zero-copy state; used by
`heap_update()` to refresh the caller's record buffer after freeing
old chains, preventing dangling pointers.

**Free list scavenging**: on insert, the free list is walked read-only
(peek) tracking contiguous groups in descending address order (LIFO).
Qualifying groups (>= `min_run_records`) are unlinked and used. The
first non-qualifying group terminates the scan — remaining data is
allocated from the block tail. The free list is never disturbed when
no qualifying group is found.

**Record counting**: new `HP_SHARE::total_records` tracks all physical
records (primary + continuation). `HP_SHARE::records` remains logical
(primary-only) to preserve linear hash bucket mapping correctness.

**Scan/check batch-skip**: `heap_scan()` and `heap_check_heap()` read
`run_rec_count` from rec 0 and skip entire continuation runs at once.

**Hash functions**: `hp_rec_hashnr()`, `hp_rec_key_cmp()`, `hp_key_cmp()`,
`hp_make_key()` updated to handle `HA_BLOB_PART` key segments — reading
actual blob data via pointer dereference or chain materialization.

**SQL layer**: `choose_engine()` no longer rejects HEAP for blob tables
(replaced `blob_fields` check with `reclength > HA_MAX_REC_LENGTH`).
`remove_duplicates()` routes HEAP+blob to `remove_dup_with_compare()`.
`ha_heap::remember_rnd_pos()` / `restart_rnd_next()` implemented for
DISTINCT deduplication support. Fixed undefined behavior in
`test_if_cheaper_ordering()` where `select_limit/fanout` could overflow
to infinity — capped at `HA_POS_ERROR`.
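The overflow cap can be sketched as below; the `HA_POS_ERROR` sentinel value and the function shape are assumptions, only the `select_limit/fanout` overflow and the cap are from the text:

```c
/* Sketch: select_limit / fanout can overflow a double to infinity when
   fanout is tiny; cap the result at HA_POS_ERROR before converting. */
typedef unsigned long long ha_rows;
#define HA_POS_ERROR (~(ha_rows) 0)   /* sentinel value assumed */

static ha_rows capped_rows(double select_limit, double fanout)
{
  double rows = select_limit / fanout;
  /* the negated comparison also catches infinity and NaN */
  if (!(rows < (double) HA_POS_ERROR))
    return HA_POS_ERROR;
  return (ha_rows) rows;
}
```

Converting an out-of-range double to an integer type is undefined behavior in C, which is why the cap must happen while the value is still a double.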

https://jira.mariadb.org/browse/MDEV-38975
sjaakola
MDEV-38243 Write binlog row events for changes done by cascading FK operations

This commit implements a feature that changes the handling of cascading foreign
key operations so that the changes made by cascading operations are written
into the binlog. When applying such a transaction, the slave node applies just
the binlog events and does not execute the actual foreign key cascade
operation. This simplifies replication applying on the slave side and makes it
more predictable in terms of potential interference with other parallel
applying happening in the node.

This feature can be turned ON/OFF by the new variable
rpl_use_binlog_events_for_fk_cascade, with default value OFF.

The actual implementation is largely by windsurf.

The commit also has two MTR tests for the rpl_use_binlog_events_for_fk_cascade
feature: rpl.rpl_fk_cascade_binlog_row and rpl.rpl_fk_set_null_binlog_row
Sergei Petrunia
Add comment about Debug_key.
Arcadiy Ivanov
Early FULLTEXT detection for derived table engine choice

When a derived table is used in a query with FULLTEXT functions,
detect this in `mysql_derived_prepare()` and force a disk-based
tmp engine (`TMP_TABLE_FORCE_MYISAM`) before the result table is
created. This avoids creating a HEAP handler and then swapping it
for Aria/MyISAM later in `Item_func_match::fix_fields()`.

The check uses `derived->select_lex->ftfunc_list->elements` to
detect FULLTEXT in the outer query, following the same approach as
`st_select_lex_unit::prepare()` in `sql_union.cc`.

The handler swap block in `Item_func_match::fix_fields()` is
replaced with a simple `ER_TABLE_CANT_HANDLE_FT` error, which now
serves only as a safety net for engines that genuinely lack
FULLTEXT support.
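The shape of the check, heavily abstracted: the flag value and the struct below are placeholders, and only the `ftfunc_list->elements` test and the `TMP_TABLE_FORCE_MYISAM` name come from the text:

```c
#define TMP_TABLE_FORCE_MYISAM (1u << 5)  /* flag value assumed */

struct select_lex_sketch { unsigned ftfunc_elements; };

/* Decide tmp-table options before the derived result table is created,
   so no HEAP handler has to be swapped out later. */
static unsigned derived_tmp_options(const struct select_lex_sketch *sl,
                                    unsigned options)
{
  if (sl->ftfunc_elements)          /* FULLTEXT used in the outer query */
    options |= TMP_TABLE_FORCE_MYISAM;
  return options;
}
```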
Marko Mäkelä
MDEV-39087 os_file_set_size() may behave inconsistently during shutdown

os_file_set_size(): Ignore srv_shutdown_state and consistently extend
the file to completion. If someone is in a hurry, they can forcibly
kill the mariadbd process and test the crash recovery.
Arcadiy Ivanov
HEAP GROUP BY / DISTINCT on TEXT/BLOB columns

Enable GROUP BY and DISTINCT operations on TEXT/BLOB columns to use
HEAP temp tables instead of falling back to Aria.

**SQL layer** (`sql_select.cc`, `create_tmp_table.h`, `field.h`):
- Extract `pick_engine()` from `choose_engine()` for early HEAP detection
- `m_heap_expected` flag gates blob-aware paths in GROUP BY key setup
- Fix `calc_group_buffer()` blob subtype bug (TINY/MEDIUM/LONG_BLOB)
- `is_any_blob_field_type()` helper (includes GEOMETRY)
- GROUP BY key setup: `store_length` init, `key_field_length` cap,
  blob `store_length` override, `key_part_flag` deferred assignment
- HEAP-specific: `end_update()` group key restoration after `copy_funcs()`
- HEAP-specific: skip null-bits helper key part for DISTINCT
- `empty_clex_str` for implicit key part field name (prevents SIGSEGV)

**HEAP engine** (`ha_heap.cc`, `ha_heap.h`, `hp_hash.c`, `heap.h`):
- `rebuild_key_from_group_buff()`: parses SQL-layer GROUP BY key buffer
  into `record[0]`, then rebuilds via `hp_make_key()`
- `materialize_heap_key_if_needed()`: dispatches between group-buff
  rebuild and direct `hp_make_key(record[0])` for blob indexes
- `needs_key_rebuild_from_group_buff` flag on `HP_KEYDEF`
- `hp_keydef_has_blob_seg()` inline helper
- `hp_make_key()`: normalize VARCHAR to 2-byte length prefix with
  zero-padding for sanitizer cleanliness
- `hp_vartext_key_pack_size()` helper for key advancement
- Endian-safe blob length write via `store_lowendian()`
- Varchar bounds clamp in `rebuild_key_from_group_buff()`
- Fix geometry GROUP BY key widening: skip widening when
  `key_part->length <= pack_length_no_ptr()` to prevent `store_length`
  overflow with `Field_geom::key_length()` = 4 (MSAN fix)
- Pre-compute `has_blob_seg` in `heap_prepare_hp_create_info()` so
  callers can use it before `heap_create()` runs (MSAN fix)

**Tests**:
- `heap.heap_blob_ops`: COUNT(DISTINCT), IN-subquery, GROUP BY ROLLUP,
  window functions, CTE materialization, PAD SPACE scenarios
- `hp_test_hash-t.c`: 43->56 TAP tests (hash consistency, mixed keys)
- `hp_test_key_setup-t.cc`: 9->63 TAP tests with `Fake_thd_guard` RAII,
  geometry GROUP BY no-widening test
- Result updates: `count_distinct`, `status`, `tmp_table_error`
Marko Mäkelä
MDEV-11426 fixup: Remove fil_node_t::init_size

The field fil_node_t::init_size that had been added in
mysql/mysql-server@38e3aa74d8d2bf882863d9586ad8c9e9ed2c4f00
should have been removed in 0b66d3f70d365bbb936aae4ca67892c17d68d241.
Abhishek Bansal
MDEV-36929: Warning: Memory not freed: 32 on SELECT COLUMN_JSON()

Item_func_dyncol_json::val_str() used an internal DYNAMIC_STRING
without ensuring it was freed when mariadb_dyncol_json failed.

Fixed by freeing the string in the failure case.
Marko Mäkelä
MDEV-16926 fixup: GCC 16 -Wmaybe-uninitialized

Year::year_precision(): Make static, so that VYear::VYear
and VYear_op::VYear_op() will avoid invoking this function with
an unnecessary "this" pointer to an uninitialized object.
Oleksandr Byelkin
MDEV-39287 Fix strrchr usage to make it compilable with glibc 2.43/gcc 16/fedora 44

Use 'const' where possible.
Use string copy to make dbug usage safe
Check safety of myisam/aria usage (made by Monty)
Arcadiy Ivanov
Cap `min_run_records` for small blob free-list reuse

The free-list allocator's minimum contiguous run threshold
(`min_run_records`) could exceed the total records a small blob
actually needs, making free-list reuse impossible on narrow tables.

For example, with `recbuffer=16` the 128-byte floor produced
`min_run_records=8`, but a 32-byte blob only needs 3 records.
Any contiguous free-list group of 3 would be rejected, forcing
unnecessary tail allocation.

Cap both `min_run_bytes` at `data_len` and `min_run_records` at
`total_records_needed` so small blobs can reuse free-list slots
when a sufficient contiguous group exists.
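Numerically, with the example above (the 128-byte floor is from the message; the helper shape is an assumption):

```c
#include <stddef.h>

enum { HP_CONT_MIN_RUN_BYTES = 128 };  /* floor value from the example */

/* Sketch of the capped threshold: never demand a longer contiguous
   free-list run than the blob actually needs. */
static size_t min_run_records(size_t recbuffer, size_t data_len,
                              size_t total_records_needed)
{
  size_t min_run_bytes = HP_CONT_MIN_RUN_BYTES;
  if (min_run_bytes > data_len)
    min_run_bytes = data_len;               /* cap at data_len */
  size_t recs = (min_run_bytes + recbuffer - 1) / recbuffer;
  if (recs > total_records_needed)
    recs = total_records_needed;            /* cap at records needed */
  return recs;
}
```

Without the caps, `recbuffer=16` always yields `128/16 = 8`, so the 3-record group a 32-byte blob needs would never qualify.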
Vladislav Vaintroub
MDEV-39027 suboptimal code for InnoDB big endian access functions

Optimize big endian (and some little endian) Innodb access functions by
using my_htobeN/my_betohN and memcpy.
Sergei Petrunia
Fix in condition pushdown code: don't pass List<Field_pair> by value.

Affected functions:
  find_matching_field_pair(Item *item, List<Field_pair> pair_list)
  get_corresponding_field_pair(Item *item, List<Field_pair> pair_list)

Both only traverse the pair_list.
They use List_iterator so we can't easily switch to using const-reference.
Oleksandr Byelkin
MDEV-39287 Fix strrchr usage to make it compilable with gcc 16

Use 'const' where possible.
Use string copy to make dbug usage safe
Check safety of myisam/aria usage (made by Monty)
Thirunarayanan Balathandayuthapani
MDEV-39261 MariaDB crash on startup in presence of indexed virtual columns

Problem:
========
A single InnoDB purge worker thread can process undo logs from different
tables within the same batch. But get_purge_table() and open_purge_table()
incorrectly assume a 1:1 relationship between a purge worker thread
and a table within a single batch. Based on this wrong assumption,
InnoDB attempts to reuse TABLE objects cached in thd->open_tables for
virtual column computation.

1) A purge worker opens Table A and caches the TABLE pointer in thd->open_tables.
2) The same purge worker moves to Table B in the same batch; get_purge_table()
retrieves the cached pointer for Table A instead of opening Table B.
3) Because innobase::open() is skipped for Table B, the virtual column
template is never initialized.
4) Virtual column computation for Table B aborts the server.

Solution:
========
get_purge_table(): Accept the specific db_name and table_name
associated with the current undo log record. Compare the db_name
and table_name against the existing cached TABLE objects.
If there is a match, return the cached TABLE object.
Arcadiy Ivanov
Skip unchanged blobs in `heap_update()`

Per-column blob change detection in `heap_update()`: compare each
blob column's old and new values before rewriting continuation chains.
Detection order (cheapest first): length comparison (O(1)), data
pointer comparison (O(1)), `memcmp` fallback (O(n) with early exit).

Unchanged blobs keep their existing chains with no allocation, copy,
or free. Only changed blobs get new chains written (write-before-free
for crash safety) and old chains freed. This avoids unnecessary chain
churn for common patterns like `UPDATE t SET non_blob_col = x`,
`INSERT ... ON DUPLICATE KEY UPDATE` with unchanged blob values, and
`REPLACE` with identical blob data.

`hp_blob_length()` and `hp_write_one_blob()` made non-static for use
by `heap_update()` directly (bypassing `hp_write_blobs()` which always
rewrites all columns).
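The detection order can be sketched as below; the function name and signature are illustrative:

```c
#include <stddef.h>
#include <string.h>

/* Sketch of cheapest-first change detection for one blob column. */
static int blob_changed(const unsigned char *old_ptr, size_t old_len,
                        const unsigned char *new_ptr, size_t new_len)
{
  if (old_len != new_len)
    return 1;                  /* O(1): lengths differ => changed */
  if (old_ptr == new_ptr)
    return 0;                  /* O(1): same buffer => unchanged */
  /* O(n) fallback; memcmp exits early on the first differing byte */
  return memcmp(old_ptr, new_ptr, old_len) != 0;
}
```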
sjaakola
MDEV-38243 Write binlog row events for changes done by cascading FK operations

This commit implements a feature that changes the handling of cascading foreign
key operations so that the changes made by cascading operations are written
into the binlog. When applying such a transaction, the slave node applies just
the binlog events and does not execute the actual foreign key cascade
operation. This simplifies replication applying on the slave side and makes it
more predictable in terms of potential interference with other parallel
applying happening in the node.

This feature can be turned ON/OFF by the new variable
rpl_use_binlog_events_for_fk_cascade, with default value OFF.

The actual implementation is largely by windsurf.

The commit also has two MTR tests for the rpl_use_binlog_events_for_fk_cascade
feature: rpl.rpl_fk_cascade_binlog_row and rpl.rpl_fk_set_null_binlog_row
Marko Mäkelä
WIP MDEV-14992 BACKUP SERVER

This introduces a basic driver Sql_cmd_backup, storage engine interfaces,
and basic copying of InnoDB data files.
On Windows, we pass a target directory name; elsewhere, we pass a
target directory handle.

fil_space_t::write_or_backup: Keep track of in-flight page writes and
pending backup operation. We must not allow them concurrently, because
that could lead into torn pages in the backup.

fil_space_t::backup_end: The first page number that is not being backed up
(by default 0, to indicate that no backup is in progress).

log_t::backup: Whether BACKUP SERVER is in progress. The purpose of this
is to make BACKUP SERVER prevent the concurrent execution of
SET GLOBAL innodb_log_archive=OFF or SET GLOBAL innodb_log_file_size
when innodb_log_archive=OFF.

log_sys.archived_checkpoint: Keep track of the earliest available
checkpoint, corresponding to log_sys.archived_lsn. This reflects
SET GLOBAL innodb_log_recovery_start (which is settable now), for
incremental backup.

buf_flush_list_space(): Check for concurrent backup before writing each
page. This is inefficient, but this function may be invoked from multiple
threads concurrently, and it cannot be changed easily, especially for
fil_crypt_thread().

TODO: Implement finer-grained locking around copying page ranges.

TODO: Implement other storage engine interfaces.

TODO: Implement the necessary locking around backup_end.

TODO: Fix the space.get_create_lsn() < checkpoint logic.
Daniel Black
MDEV-39086 MSAN/UBSAN/ASAN builds run without basic optimization

MSAN/UBSAN/ASAN/TSAN builds perform additional runtime
checks and need extra CPU to do so. To help the test
systems that run these go a bit faster, while keeping them
debuggable, we should add -Og where no other C/CXX optimization
flag is specified.

GCC/Clang environments that support this range of sanitizers
use -O followed by 0-3, g, s, z, and apparently "fast". These
are the flags matched by the regex.
Arcadiy Ivanov
Set `key_part_flag` from field type in GROUP BY key setup

Rebuild HEAP index key from `record[0]` when the index has blob key
segments, because `Field_blob::new_key_field()` returns `Field_varstring`
(2B length + inline data) while HEAP's `hp_hashnr`/`hp_key_cmp` expect
`hp_make_key` format (4B length + data pointer).

Precompute `HP_KEYDEF::has_blob_seg` flag during table creation to avoid
per-call loop through key segments.
Arcadiy Ivanov
Consistent `hp_rec_key_cmp()` argument order in `heap_update()`

Swap rec1/rec2 arguments to match the API convention: rec1 = input
record (direct data pointers), rec2 = potentially stored record
(chain pointers when info != NULL). Both calls pass info=NULL so the
swap is a no-op for behavior, but makes the argument order consistent
with all other call sites (`hp_write.c`, `hp_delete.c`, `ha_heap.cc`).
Arcadiy Ivanov
Clarify comments for HEAP blob continuation and tmp table overflow

- `item_sum.cc`: fix misleading "HEAP table full" comment — the error
  type is unknown at this point; `create_internal_tmp_table_from_heap()`
  determines whether it is a convertible HEAP overflow or a fatal error
- `sql_select.cc`: document why `choose_engine()` re-checks key limits
  after picking a disk engine for non-key-limit reasons
- `heapdef.h`: add descriptive comments to `HP_CONT_MIN_RUN_BYTES`,
  `HP_CONT_RUN_FRACTION_NUM`, `HP_CONT_RUN_FRACTION_DEN`
bsrikanth-mariadb
MDEV-32758: TRIM uses memory after freed

Item_func_trim::trimmed_value() prepares Item_func_trim::tmp_value
to be the return value of Item_func_trim::val_str().

Before the fix, tmp_value would have pointer to the trimmed string, but
didn't own it. This meant that second use of TRIM function could get to
point to temporary buffer, then free the buffer and invalidate the return
value of the first use of TRIM().

Avoid this by copying the return value into Item_func_trim::tmp_value.
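The ownership fix can be sketched in plain C; the struct and helper below are illustrative stand-ins for the server's String machinery:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative owned-string type; the server's String class plays
   this role in the real fix. */
typedef struct { char *ptr; size_t len; int owned; } owned_str;

/* Before the fix tmp_value merely borrowed `trimmed`; copying makes
   the return value independent of the temporary buffer. */
static int owned_str_copy(owned_str *s, const char *trimmed, size_t len)
{
  char *p = malloc(len + 1);
  if (!p)
    return 1;
  memcpy(p, trimmed, len);
  p[len] = '\0';
  if (s->owned)
    free(s->ptr);
  s->ptr = p;
  s->len = len;
  s->owned = 1;
  return 0;
}
```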
Fariha Shaikh
MDEV-36725 Fix innodb_ctype_ldml test in view-protocol mode

The test innodb.innodb_ctype_ldml was failing in view-protocol mode due
to different column naming behavior for complex expressions.

Without explicit column aliases, view-protocol mode generates automatic
names (Name_exp_1, Name_exp_2) while normal mode uses the full
expression as the column name.

Add explicit column aliases to SELECT statements in innodb_ctype_ldml to
ensure consistent column names across both normal and view-protocol
modes.

All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Marko Mäkelä
Merge 12.3
Daniel Black
MDEV-39286: CPACK_RPM_BUILDREQUIRES to include selinux dependency

The cmake/selinux.cmake within the columnstore submodule will
set CPACK_RPM_columnstore-engine_BUILDREQUIRES in PARENT_SCOPE
on the assumption that it populates the build requirements.

Per the documentation[1], this variable is not supported on a per-component basis.

[1] https://cmake.org/cmake/help/latest/cpack_gen/rpm.html#variable:CPACK_RPM_BUILDREQUIRES
Monty
Temporary push for testing
Thirunarayanan Balathandayuthapani
MDEV-24356 InnoDB: Failing assertion: len < sizeof(tmp_buff) in get_foreign_key_info

get_foreign_key_info(): Changed tmp_buff size from NAME_LEN+1
to MAX_DATABASE_NAME_LEN+1 to accommodate encoded database names.
The buffer stores the database name extracted from
referenced_table_name and foreign_table_name, which can contain
encoded special characters (In this case dots encoded as @002e).
With encoding, database names can exceed NAME_LEN (64 bytes) even
if the logical name is within limits.
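The size arithmetic can be illustrated with a toy encoder; the 5-byte `@002e` expansion per encoded character is from the message, everything else is a sketch:

```c
#include <stddef.h>

enum { NAME_LEN = 64 };  /* logical name limit from the message */

/* Each encoded special character expands 1 byte to 5 (e.g. '.' becomes
   "@002e"), so an encoded database name can exceed NAME_LEN even when
   the logical name is within limits. */
static size_t encoded_name_len(const char *name)
{
  size_t n = 0;
  for (; *name; name++)
    n += (*name == '.') ? 5 : 1;
  return n;
}
```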
sjaakola
MDEV-38243 Write binlog row events for changes done by cascading FK operations

This commit implements a feature that changes the handling of cascading foreign
key operations so that the changes made by cascading operations are written
into the binlog. When applying such a transaction, the slave node applies just
the binlog events and does not execute the actual foreign key cascade
operation. This simplifies replication applying on the slave side and makes it
more predictable in terms of potential interference with other parallel
applying happening in the node.

This feature can be turned ON/OFF by the new variable
rpl_use_binlog_events_for_fk_cascade, with default value OFF.

The actual implementation is largely by windsurf.

The commit also has two MTR tests for the rpl_use_binlog_events_for_fk_cascade
feature: rpl.rpl_fk_cascade_binlog_row and rpl.rpl_fk_set_null_binlog_row
Raghunandan Bhat
MDEV-38562: `mariabackup` exits with success (0) despite "No space left on device" errors

Problem:
  `mariabackup` ignores `my_close()` failures, resulting in a false
  success report with a 'completed OK!' message.

Fix:
  Update the `local_close()` function in `mariabackup` to check for
  `my_close()` failures and ensure errors are reported.
Sergei Petrunia
MDEV-38753: Debugging help: print which MEM_ROOT the object is on

Add two functions intended for use from debugger:
  bool dbug_is_mem_on_mem_root(MEM_ROOT *mem_root, void *ptr);
  const char *dbug_which_mem_root(THD *thd, void *ptr);

Also, collect declarations of all other functions intended for use
from debugger in sql/sql_test.h
Sergei Golubchik
let's be explicit about what abs() to use

fixes main.mysqltest_string_functions failure on centos7
with gcc 4.8.5
forkfun
MDEV-36183 mariadb-dump backup corrupt if stored procedure ends with comment

When a stored procedure, trigger, or event ends with a line comment (#comment
or --comment), mariadb-dump would append the custom delimiter (;;) on the same line.
This caused the comment to "comment out" the delimiter, leading to syntax errors
during restoration.

Fixed by ensuring a newline is inserted before the delimiter in the dump output.
Arcadiy Ivanov
Skip run header for single-record blob continuation runs

When a blob fits entirely within a single continuation record
(`data_len <= visible`), skip the 10-byte run header (`next_cont`
pointer + `run_rec_count`) and store data starting at offset 0.
This reclaims 10 bytes of payload per small blob, which matters
for tables with small `recbuffer` (e.g. 16 bytes: payload increases
from 5 to 15 bytes, avoiding a second record for blobs up to 15 bytes).

**`HP_ROW_SINGLE_REC` flag** (bit 4 in the flags byte) signals that
the continuation record has no run header. The reader gets `visible`
bytes of contiguous data starting at the chain pointer (zero-copy).

**`enum hp_blob_format`** replaces ad-hoc boolean/flag checks with
a single vocabulary for blob storage format detection:
- `HP_BLOB_CASE_A_SINGLE_REC`: no header, data at offset 0 (new)
- `HP_BLOB_CASE_B_ZEROCOPY`: header in rec 0, data in rec 1..N-1
- `HP_BLOB_CASE_C_MULTI_RUN`: header + data in each run, linked

**`hp_blob_run_format()`** is the single decoder used by all paths:
write (`hp_write_run_data`), read (`hp_materialize_blobs`,
`hp_materialize_one_blob`), free (`hp_free_run_chain`), scan
(`heap_scan`), and integrity check (`heap_check_heap`).
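The decoder reduces to one flags test per case. The bit-4 position of `HP_ROW_SINGLE_REC` is from the message; the `HP_ROW_CONT_ZEROCOPY` bit value is an assumption:

```c
enum hp_blob_format {
  HP_BLOB_CASE_A_SINGLE_REC,   /* no header, data at offset 0 */
  HP_BLOB_CASE_B_ZEROCOPY,     /* header in rec 0, data in rec 1..N-1 */
  HP_BLOB_CASE_C_MULTI_RUN     /* header + data in each run, linked */
};

#define HP_ROW_SINGLE_REC    (1u << 4)  /* bit 4, per the message */
#define HP_ROW_CONT_ZEROCOPY (1u << 3)  /* bit position assumed */

/* Sketch of the single decoder shared by write/read/free/scan/check. */
static enum hp_blob_format hp_blob_run_format_sketch(unsigned flags)
{
  if (flags & HP_ROW_SINGLE_REC)
    return HP_BLOB_CASE_A_SINGLE_REC;
  if (flags & HP_ROW_CONT_ZEROCOPY)
    return HP_BLOB_CASE_B_ZEROCOPY;
  return HP_BLOB_CASE_C_MULTI_RUN;
}
```

Funneling every path through one decoder means a new format case needs exactly one new branch here rather than five ad-hoc flag checks.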

Files changed:
- `storage/heap/heapdef.h`: flag, enum, decoder function
- `storage/heap/hp_blob.c`: write/read/free paths
- `storage/heap/hp_scan.c`: scan skip logic
- `storage/heap/_check.c`: integrity check
Aquila Macedo
MDEV-39082 mysql-test: use vardir as datadir in collect_mysqld_features

collect_mysqld_features probes mariadbd with --datadir=., which can
point to the installed mysql-test tree.

During this probe, mariadbd may run the datadir case-sensitivity check
and try to create a *.lower-test file there. This is a bad fit when the
test suite is run from a read-only installed path (e.g. /usr/share/...).

Use $opt_vardir as the datadir for this probe instead, so it runs in a
writable location and avoids writes into the installed test tree.
hadeer
Mroonga: fix SIGSEGV on NULL mroonga_log_file

Add mrn_log_file_check() to validate that
mroonga_log_file is not set to NULL or an empty
string, which previously caused a segfault.
forkfun
MDEV-32770 mariadb-dump produces not loadable dump due to temporary view structure

When mariadb-dump produces a dump, it first creates a temporary placeholder for views
to satisfy potential dependencies before their actual creation later in the dump file.
Previously, these views were populated with int literals for their columns (1 AS `col_name`).
This could cause syntax or type-resolution errors during restoration if another view depended
on this placeholder view.

This commit changes the placeholder column values from 1 to NULL, which is
more permissive and allows the dump to be restored successfully.
Arcadiy Ivanov
MDEV-38975: Add hash pre-check to skip expensive blob materialization in hash chain traversal

`hp_search()`, `hp_search_next()`, `hp_delete_key()`, and
`find_unique_row()` walk hash chains calling `hp_key_cmp()` or
`hp_rec_key_cmp()` for every entry. For blob key segments, each
comparison triggers `hp_materialize_one_blob()` which reassembles
blob data from continuation chain records.

Since each `HASH_INFO` already stores `hash_of_key`, compare it
against the search key's hash before the full key comparison. When
hashes differ the keys are guaranteed different, skipping the
expensive materialization. This pattern already existed in
`hp_write_key()` for duplicate detection but was missing from the
four read/delete paths.

`HP_INFO::last_hash_of_key` is added so `hp_search_next()` can
reuse the hash computed by `hp_search()` without recomputing it.
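The pre-check pattern, abstracted: the struct fields and the `memcmp` standing in for `hp_key_cmp()` are illustrative:

```c
#include <stddef.h>
#include <string.h>

typedef struct hash_info {
  unsigned long hash_of_key;        /* stored when the row was inserted */
  const unsigned char *rec;
  struct hash_info *next_key;
} HASH_INFO;

/* Walk the chain comparing hashes first, so the expensive full key
   comparison (which may materialize blob chains) runs only on a match. */
static const unsigned char *
chain_search(HASH_INFO *pos, unsigned long key_hash,
             const unsigned char *key, size_t key_len)
{
  for (; pos; pos = pos->next_key)
  {
    if (pos->hash_of_key != key_hash)
      continue;                     /* different hash => keys differ */
    if (memcmp(pos->rec, key, key_len) == 0)  /* hp_key_cmp stand-in */
      return pos->rec;
  }
  return NULL;
}
```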