
Console View


Categories: connectors experimental galera main
Aleksey Midenkov
Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <[email protected]>
Aleksey Midenkov
MDEV-38854 Assertion table->vers_write fails upon ODKU into table with versioned column

In MDEV-25644 vers_check_update() sets vers_write to false when it
returns false. That is fine for UPDATE but not correct for ODKU:
bulk insert requires vers_write on the next tuple.

The fix restores the vers_write value once vers_check_update() and the
related vers_insert_history_row() are done in ODKU.
Aleksey Midenkov
MDEV-25529 converted COMBINE macro to interval2usec inline function
Aleksey Midenkov
MDEV-25529 ALTER TABLE FORCE syntax improved

Improves the ALTER TABLE syntax so that alter_list and a partitioning
expression can be supplied together in any order. This is particularly
useful for adding the FORCE clause to an existing command.

Also improves handling of AUTO with FORCE, so that specifying AUTO
FORCE together gives more consistent syntax, which later commits of
this task rely on.
Aleksey Midenkov
MDEV-25529 TimestampString for printing timestamps
Aleksey Midenkov
MDEV-25529 converted COMBINE macro to interval2usec inline function
Thirunarayanan Balathandayuthapani
MDEV-39081 InnoDB: tried to purge non-delete-marked record, assertion fails in row_purge_del_mark_error

Reason:
=======
Following the changes in MDEV-38734, the server no longer marks
all indexed virtual columns during an UPDATE operation.
Consequently, ha_innobase::update() populates the upd_t vector
with old_vrow but omits the actual data for these virtual columns.

Despite this omission, trx_undo_page_report_modify() continues to
write metadata for indexed virtual columns into the undo log. Because
the actual values are missing from the update vector, the undo log
entry is recorded without the historical data for these columns.

When the purge thread processes the undo log to reconstruct a
previous record state for MVCC, it identifies an indexed virtual
column but finds no associated data.

The purge thread misinterprets this missing data as a NULL value
rather than as a "missing/unrecorded" value, so the historical record
is reconstructed with an incorrect NULL for the virtual column. This
causes the purge thread to wrongly identify and purge records that are
not actually delete-marked, leading to a server abort.
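The failure mode can be illustrated with a minimal Python model (the
names and data structures here are illustrative only; the real logic
lives in InnoDB's undo-log and purge code):

```python
# Model: an undo-log entry records an indexed virtual column's metadata
# but omits its value. The purge reader cannot distinguish "value
# missing" from "value is NULL", reconstructs NULL, and then wrongly
# concludes the index record matches a delete-marked history row.

MISSING = object()  # sentinel: value was never written to the undo log

def reconstruct_vcol(undo_entry, col):
    # Buggy behavior: a missing value is read back as NULL (None).
    return undo_entry.get(col, None)

def reconstruct_vcol_fixed(undo_entry, col):
    # Fixed behavior: keep "missing" distinct so purge can skip the column.
    return undo_entry.get(col, MISSING)

undo_entry = {"pk": 1}       # virtual column "v" was not logged
actual_index_value = None    # suppose the index row really holds NULL

# Buggy path: purge sees NULL == NULL and purges a live record.
assert reconstruct_vcol(undo_entry, "v") == actual_index_value

# Fixed path: the sentinel never compares equal, so purge leaves it alone.
assert reconstruct_vcol_fixed(undo_entry, "v") is MISSING
```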

Solution:
=========
ha_innobase::column_bitmaps_signal(): Revert the column-marking
logic to the state prior to commit a4e4a56720c, ensuring all
indexed virtual columns are unconditionally marked during an UPDATE.

The previous "optimization" attempted to manually detect indexed
column changes before marking virtual columns. The manual check
for indexed column modifications is redundant. InnoDB already
provides the UPD_NODE_NO_ORD_CHANGE flag within row_upd_step().
This flag is used in trx_undo_page_report_modify() and
trx_undo_read_undo_rec() to decide whether virtual column values
should be logged or read.

Refactored column_bitmaps_signal() to accept a mark_for_update
parameter that controls when indexed virtual columns are marked.
TABLE::mark_columns_needed_for_update() is the only place that needs
mark_for_update=true because only UPDATE operations need to mark
indexed virtual columns for InnoDB's undo logging mechanism.

The INSERT operation is already handled by
TABLE::mark_virtual_columns_for_write(insert_fl=true). Even with the
changes of commit a4e4a56720c974b547d4e469a8c54510318bc2c9,
TABLE::mark_virtual_columns_for_write(false) is still called during
the UPDATE operation, and that is when column_bitmaps_signal() needs
to mark indexed virtual columns.

Online DDL has a separate code path, handled by
row_log_mark_virtual_cols() for all DML operations.
Aleksey Midenkov
Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <[email protected]>
Oleksandr Byelkin
MDEV-39287 (11.4 part) Fix compilation problems with glibc 2.43/gcc 16/fedora 44

Trick compiler with ==.
Aleksey Midenkov
MDEV-25529 ALTER TABLE FORCE syntax improved

Improves ALTER TABLE syntax when alter_list can be supplied alongside a
partitioning expression, so that they can appear in any order. This is
particularly useful for the FORCE clause when adding it to an existing
command.

Also improves handling of AUTO with FORCE, so that AUTO FORCE
specified together provides more consistent syntax, which is used by
this task in further commits.
Aleksey Midenkov
MDEV-25529 cleanup for vers_set_starts() and starts_clause
Razvan-Liviu Varzaru
Disable invalid-handle check under MSan

The invalid-handle check was disabled in a55878f4baa061fc9943228beecb91367d7d08fe for Valgrind;
MSan issues a warning originating from the Driver Manager during the same call:

```
==26==WARNING: MemorySanitizer: use-of-uninitialized-value
    #0 0x7a902a23c3a2 in __validate_stmt /msan-build/DriverManager/__handles.c:1375:20
    #1 0x7a902a1fc264 in __SQLFreeHandle /msan-build/DriverManager/SQLFreeHandle.c:398:19
    #2 0x559ce0edbbb9 in sqlchar /home/buildbot/odbc_build/source/test/unicode.c:270:12
    #3 0x559ce0ed4419 in run_tests_ex /home/buildbot/odbc_build/source/test/tap.h:1182:11
    #4 0x7a9029eccca7  (/lib/x86_64-linux-gnu/libc.so.6+0x29ca7) (BuildId: 58749c528985eab03e6700ebc1469fa50aa41219)
    #5 0x7a9029eccd64 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29d64) (BuildId: 58749c528985eab03e6700ebc1469fa50aa41219)
    #6 0x559ce0e2e670 in _start (/home/buildbot/odbc_build/build/bintar/test/odbc_unicode+0x34670) (BuildId: 92f183f3e775737cd445db531d16f5654952b845)

SUMMARY: MemorySanitizer: use-of-uninitialized-value /msan-build/DriverManager/__handles.c:1375:20 in __validate_stmt
  ORIGIN: invalid (0). Might be a bug in MemorySanitizer origin tracking.
    This could still be a bug in your code, too!
Exiting
```
  • codbc-alma8-aarch64: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-alma9-aarch64: build linux-connector_odbc failed - stdio
  • codbc-alma9-amd64: build linux-connector_odbc failed - stdio
  • codbc-alma84-amd64: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-bookworm-aarch64: build linux-connector_odbc failed - stdio
  • codbc-bookworm-aarch64-deb: build linux-connector_odbc failed - stdio
  • codbc-bookworm-amd64: build linux-connector_odbc failed - stdio
  • codbc-bookworm-amd64-deb: build linux-connector_odbc failed - stdio
  • codbc-bullseye-aarch64: build linux-connector_odbc failed - stdio
  • codbc-bullseye-aarch64-deb: build linux-connector_odbc failed - stdio
  • codbc-bullseye-amd64: build linux-connector_odbc failed - stdio
  • codbc-bullseye-amd64-deb: build linux-connector_odbc failed - stdio
  • codbc-fedora38-amd64: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-fedora39-amd64: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-jammy-aarch64: build linux-connector_odbc failed - stdio
  • codbc-jammy-aarch64-deb: build linux-connector_odbc failed - stdio
  • codbc-jammy-amd64: build linux-connector_odbc failed - stdio
  • codbc-jammy-amd64-deb: build linux-connector_odbc failed - stdio
  • codbc-linux-amd64-asan: build linux-connector_odbc failed - stdio
  • codbc-linux-amd64-msan: build linux-connector_odbc failed - stdio
  • codbc-linux-amd64-ubsan: build linux-connector_odbc failed - stdio
  • codbc-noble-aarch64-deb: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-noble-amd64-deb: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-rhel8-aarch64: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-rhel8-amd64: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-rhel9-aarch64: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-rhel9-amd64: build linux-connector_odbc failed - stdio warnings (1)
  • codbc-rocky8-aarch64: build linux-connector_odbc failed - stdio
  • codbc-sles15-amd64: build linux-connector_odbc failed - stdio
Aleksey Midenkov
MDEV-25529 TimestampString for printing timestamps
Aleksey Midenkov
MDEV-25529 Auto-create: Pre-existing historical data is not partitioned as specified by ALTER

Adds logic to prep_alter_part_table() for AUTO: it checks the history
range (vers_get_history_range()), computes the number of partitions to
create from the (max_ts - min_ts) difference, and sets the STARTS
value to min_ts rounded down (vers_set_starts()) if the user did not
specify it or specified it incorrectly. In the latter case a warning
about the wrongly specified user value is printed.

In case of fast ALTER TABLE, e.g. when partitioning already exists,
the above logic is skipped unless the FORCE clause is specified. When
the user specifies the partition list explicitly, the above logic is
skipped even with the FORCE clause.

vers_get_history_range() detects whether the index can be used for
row_end min/max stats and, if so, reads them with ha_index_first() and
HA_READ_BEFORE_KEY (as it must ignore current data). Otherwise it does
a table scan to read the stats. The test_mdev-25529 debug keyword
checks both paths and compares their results. A warning is printed if
the algorithm falls back to the slow scan.

Field_vers_trx_id::get_timestamp() is implemented for TRX_ID-based
versioning to get the epoch value. It works in vers_get_history_range(),
but since partitioning is not enabled for TRX_ID versioning, creating
the temporary table fails with an error requiring timestamp-based
system fields. This method will become useful when partitioning is
enabled for TRX_ID, which is mostly a matter of solving performance
problems.

The static key_cmp was renamed to key_eq to fix compilation after
key.h was included, as key_cmp was already declared there.
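The AUTO sizing described above can be sketched as a small Python model
(a simplified illustration; the function name and the flat-seconds
interval arithmetic are assumptions, not the server's implementation):

```python
import math

def auto_partitions(min_ts, max_ts, interval_sec):
    """Model of AUTO history partitioning: round STARTS down to the
    interval boundary at or before min_ts (cf. vers_set_starts()) and
    count how many interval-sized partitions cover [min_ts, max_ts]."""
    starts = min_ts - (min_ts % interval_sec)   # round down
    span = max_ts - starts
    nparts = max(1, math.ceil(span / interval_sec))
    return starts, nparts

# History from t=1005 to t=3700, one partition per 1000 seconds:
starts, nparts = auto_partitions(1005, 3700, 1000)
assert starts == 1000   # STARTS rounded down to the interval boundary
assert nparts == 3      # partitions covering [1000, 4000)
```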
Aleksey Midenkov
MDEV-27569 Valgrind/MSAN errors in ha_partition::swap_blobs

row_sel_store_mysql_field() does:

    mysql_rec[templ->mysql_null_byte_offset]
            &= static_cast<byte>(~templ->mysql_null_bit_mask);

but mysql_rec[], which is a prefetch buffer, was allocated without
initialization. Prefetch buffers are allocated by
row_sel_prefetch_cache_init():

    ptr = static_cast<byte*>(ut_malloc_nokey(sz));

which then initializes the buffers with magic numbers.

The fix initializes the null bytes as well for nullable fields.
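The bug can be illustrated with a byte-level model (the constants below
are stand-ins, not values from the server): clearing one null bit in an
uninitialized byte leaves garbage in the other bits, while pre-zeroing
the null byte makes every bit well defined.

```python
UNINITIALIZED = 0xA5   # stand-in for whatever malloc left in the buffer
NULL_BIT_MASK = 0x04   # the one bit row_sel_store_mysql_field() clears

# Before the fix: only the field's own null bit is defined afterwards;
# the remaining bits of the null byte still hold uninitialized memory.
buggy = UNINITIALIZED & ~NULL_BIT_MASK & 0xFF
assert buggy & NULL_BIT_MASK == 0   # the cleared bit is defined...
assert buggy != 0                   # ...but the rest is garbage

# After the fix: null bytes are zero-initialized for nullable fields,
# so the whole null byte has a defined value before the masking.
fixed = 0x00 & ~NULL_BIT_MASK & 0xFF
assert fixed == 0
```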
Aleksey Midenkov
MDEV-25529 cleanup for vers_set_starts() and starts_clause
Aleksey Midenkov
MDEV-32317 Trace for ref_ptrs array (ref_array keyword)

Usage:

  mtr --mysqld=--debug=d,ref_array,query:i:o,/tmp/mdl.log
Oleksandr Byelkin
MDEV-39287 (11.4 part) Fix compilation problems with glibc 2.43/gcc 16/fedora 44

Trick compiler with ==.
Aleksey Midenkov
MDEV-25529 set_up_default_partitions() ER_OUT_OF_RESOURCES error
Aleksey Midenkov
MDEV-25529 Auto-create: Pre-existing historical data is not partitioned as specified by ALTER

Adds logic into prep_alter_part_table() for AUTO to check the history
range (vers_get_history_range()) and based on (max_ts - min_ts)
difference compute the number of created partitions and set STARTS
value to round down min_ts value (vers_set_starts()) if it was not
specified by user or if the user specified it incorrectly. In the
latter case it will print warning about wrongly specified user value.

In case of fast ALTER TABLE, f.ex. when partitioning already exists,
the above logic is ignored unless FORCE clause is specified. When user
specifies partition list explicitly the above logic is ignored even
with FORCE clause.

vers_get_history_range() detects if the index can be used for row_end
min/max stats and if so it gets it with ha_index_first() and
HA_READ_BEFORE_KEY (as it must ignore current data). Otherwise it does
table scan to read the stats. There is test_mdev-25529 debug keyword
to check the both and compare results. A warning is printed if the
algorithm uses slow scan.

Field_vers_trx_id::get_timestamp() is implemented for TRX_ID based
versioning to get epoch value. It works in vers_get_history_range()
but since partitioning is not enabled for TRX_ID versioning create
temporary table fails with error, requiring timestamp-based system
fields. This method will be useful when partitioning will be enabled
for TRX_ID which is mostly performance problems to solve.

Static key_cmp was renamed to key_eq to resolve compilation after
key.h was included as key_cmp was already declared there.
Oleksandr Byelkin
MDEV-39287 (10.6 part) Fix compilation problems with glibc 2.43/gcc 16/fedora 44

Use 'const' where possible.
Use a string copy to make dbug usage safe.
Check safety of myisam/aria usage (made by Monty).
Aleksey Midenkov
Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <[email protected]>
Thirunarayanan Balathandayuthapani
MDEV-38412 System tablespace fails to shrink due to legacy tables

Problem:
=======
- The InnoDB system tablespace fails to autoshrink when it contains
legacy internal tables. These are non-user, internal tables left over
from older versions. Because the current shrink logic treats these
entries as user tables, they block the defragmentation process
required to reduce the tablespace size.

Solution:
=========
To enable successful shrinking, InnoDB has been updated to
identify and remove these legacy entries during the startup:

drop_all_orphaned_tables(): A new function that scans the InnoDB
system tables for entries lacking the / naming convention.
It triggers the removal of these table objects from InnoDB system
tables and ensures the purge thread subsequently drops any
associated index trees in SYS_INDEXES.

scan_system_tablespace_tables(): Scans the records in
SYS_TABLES and invokes the callback (orphaned record, user-table-exists
check) on the non-system tables in the system tablespace.

dict_drop_table_metadata(): Removes the given table IDs from the
internal system tables.

fsp_system_tablespace_truncate(): If legacy tables are detected,
InnoDB prioritizes their removal. To ensure data integrity and
complete the shrink, a two-restart sequence may be required:
1) purge the legacy tables
2) defragment the system tablespace and shrink it further
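The orphan detection described above can be sketched as follows (an
illustrative model; the real code iterates SYS_TABLES records in
InnoDB's dictionary, and the helper name and sample table names here
are assumptions):

```python
def find_orphans(sys_tables_names):
    """Model of drop_all_orphaned_tables(): user tables follow the
    'db/table' naming convention, so entries lacking the '/' separator
    (and not InnoDB's own SYS_* dictionary tables) are legacy internal
    tables that should be dropped so the tablespace can shrink."""
    return [name for name in sys_tables_names
            if "/" not in name and not name.startswith("SYS_")]

names = ["test/t1", "mysql/user", "SYS_FOREIGN", "innodb_legacy_stats"]
assert find_orphans(names) == ["innodb_legacy_stats"]
```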
Aleksey Midenkov
MDEV-25529 get_next_time() comment
PranavKTiwari
MDEV-39184 Rework MDL enum constant values

Incorporated Copilot code review comments.
Aleksey Midenkov
MDEV-25529 Auto-create: Pre-existing historical data is not partitioned as specified by ALTER

Adds logic into prep_alter_part_table() for AUTO to check the history
range (vers_get_history_range()) and based on (max_ts - min_ts)
difference compute the number of created partitions and set STARTS
value to round down min_ts value (vers_set_starts()) if it was not
specified by user or if the user specified it incorrectly. In the
latter case it will print warning about wrongly specified user value.

In case of fast ALTER TABLE, f.ex. when partitioning already exists,
the above logic is ignored unless FORCE clause is specified. When user
specifies partition list explicitly the above logic is ignored even
with FORCE clause.

vers_get_history_range() detects if the index can be used for row_end
min/max stats and if so it gets it with ha_index_first() and
HA_READ_BEFORE_KEY (as it must ignore current data). Otherwise it does
table scan to read the stats. There is test_mdev-25529 debug keyword
to check the both and compare results. A warning is printed if the
algorithm uses slow scan.

Field_vers_trx_id::get_timestamp() is implemented for TRX_ID based
versioning to get epoch value. It works in vers_get_history_range()
but since partitioning is not enabled for TRX_ID versioning create
temporary table fails with error, requiring timestamp-based system
fields. This method will be useful when partitioning will be enabled
for TRX_ID which is mostly performance problems to solve.

Static key_cmp was renamed to key_eq to resolve compilation after
key.h was included as key_cmp was already declared there.
Aleksey Midenkov
Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <[email protected]>
Oleksandr Byelkin
MDEV-39287 (10.6 part) Fix compilation problems with glibc 2.43/gcc 16/fedora 44

Use 'const' where possible.
Use a string copy to make dbug usage safe.
Check safety of myisam/aria usage (made by Monty).
Aleksey Midenkov
MDEV-25529 get_next_time() comment
Oleksandr Byelkin
MDEV-39287 (10.6 part) Fix compilation problems with glibc 2.43/gcc 16/fedora 44

Use 'const' where possible.
Use a string copy to make dbug usage safe.
Check safety of myisam/aria usage (made by Monty).
Alexey Botchkov
Progress report
    07.04.2026
Alessandro Vetere
fixup! MDEV-37070  Implement table options to enable/disable features
Alexey Botchkov
Progress report
  14.04.2026
Monty
MDEV-37070  Implement table options to enable/disable features

Added ADAPTIVE_HASH_INDEX=DEFAULT|YES|NO table and index option to InnoDB.
The table and index options only have an effect if InnoDB adaptive hash
index feature is enabled.

- Having the ADAPTIVE_HASH_INDEX table option set to NO disables the
  adaptive hash index for all indexes in the table that do not have
  the index option adaptive_hash_index=yes.
- Having the ADAPTIVE_HASH_INDEX table option set to YES enables the
  adaptive hash index for all indexes in the table that do not have
  the index option adaptive_hash_index=no.
- Using adaptive_hash_index=default deletes the old setting.
- One can also use OFF/ON as option values, to work similarly to
  other existing options.
- innodb.adaptive_hash_index has been changed from a bool to an enum
  with the values OFF, ON and IF_SPECIFIED. If IF_SPECIFIED is used,
  the adaptive hash index is only used for tables and indexes that
  specify adaptive_hash_index=on.
- The following new options can be used to further optimize adaptive hash
  index for an index (default is unset/auto for all of them):
  - complete_fields:
    - 0 to the number of columns the key is defined on (max 64)
  - bytes_from_incomplete_field:
    - This is only usable for memcmp() comparable index fields, such as
      VARBINARY or INT. For example, a 3-byte prefix on an INT will
      return an identical hash value for 0‥255, another one for 256‥511,
      and so on.
    - Range is 0 to 16383.
  - for_equal_hash_point_to_last_record
    - Default is unset/auto; NO points to the first record, known as
      left_side in the code; YES points to the last record.
      Example: given an INT column with the values 1, 4, 10 and bytes=3,
      will the hash value point to record 1 or record 10?
      Note: all these values necessarily get the same hash value,
      computed on the big-endian byte prefix 0x800000 of the values
      0x80000001, 0x80000004 and 0x8000000a. InnoDB inverts the
      sign bit in order to have memcmp()-compatible comparison.
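The prefix behavior described for bytes_from_incomplete_field can be
modeled in a few lines of Python (a sketch only; InnoDB's actual hash
function differs, but the grouping by byte prefix is the same idea):

```python
import struct

def int_key_bytes(v):
    """Signed INT stored big-endian with the sign bit inverted, so
    that memcmp() order matches numeric order (as InnoDB does)."""
    packed = struct.pack(">i", v)
    return bytes([packed[0] ^ 0x80]) + packed[1:]

def prefix_hash(v, nbytes=3):
    # Hashing only the first 3 of 4 bytes: the low byte is ignored,
    # so 256 consecutive values collapse into one hash bucket.
    return hash(int_key_bytes(v)[:nbytes])

# All of 0..255 share one hash value; 256 starts the next bucket.
assert prefix_hash(0) == prefix_hash(255)
assert prefix_hash(255) != prefix_hash(256)
assert int_key_bytes(0)[:3] == b"\x80\x00\x00"   # inverted sign bit
```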

Example:
CREATE TABLE t1 (a int primary key, b varchar(100), c int,
index (b) adaptive_hash_index=no, index (c))
engine=innodb, adaptive_hash_index=yes;

Notable changes in InnoDB
- btr_search.enabled was changed from a bool to a ulong to be able
  to handle the options OFF, ON and IF_SPECIFIED. ulong is needed to
  compile with MariaDB enum variables.
- To be able to find all instances where btr_search.enabled was used,
  I changed all code to use btr_search.get_enabled() when accessing
  the value and btr_search.is_enabled(index) to test whether AHI is
  enabled for the index.
- btr_search.enable() was changed to always take two parameters,
  resize and the value of enabled. This was needed as enabled can now
  have the values 0, 1 and 2.
- store all AHI related options in per-index `dict_index_t::ahi`
  bit-packed 32-bit atomic field `ahi_enabled_fixed_mask`
  - static assertions and debug assertions ensure that all options fit
    into the 32-bit field
  - packing details:
    - `enabled`, `adaptive_hash_index` (first 2 bits)
    - `fields`, `complete_fields` (7 bit)
    - `bytes`, `bytes_from_incomplete_field` (14 bits)
    - `left`, `~for_equal_hash_point_to_last_record` (1 bit)
    - `is_fields_set`, `fields` set flag (1 bit)
    - `is_bytes_set`, `bytes` set flag (1 bit)
    - `is_left_set`, `left` set flag (last 1 bit)
    - 5 bits spare after `is_left_set`
  - manipulation of the bit-packed field avoids usage of branches or
    conditional instructions to minimize the performance impact of
    the new options
- in `btr_search_update_hash_ref` apply the per-index AHI options
  using bit-masking to override internal heuristic values with user
  preferences
- add `innodb.index_ahi_option` test:
  - test a combination of per-table and per-index AHI options
  - use a stored procedure which checks if AHI is used during a burst of
    index lookups checking delta in `adaptive_hash_searches` InnoDB
    monitor variable
  - test that the maximum number of fields per (secondary) index is 64
    (32+32)
- add `innodb.index_ahi_option_debug` test:
  - test debug builds with `index_ahi_option_debug_check` debug variable
    enabled to verify that the proper per-index AHI options are applied
    during index lookups
  - test that illegal per-index AHI are non-destructive and just lead to
    no AHI usage
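The bit-packed per-index options field described above can be modeled
as follows (shift positions follow the packing details listed in this
message; the helper names are illustrative, not InnoDB's):

```python
# Layout (low bits first): enabled (2), fields (7), bytes (14), left (1).
ENABLED_SHIFT, ENABLED_BITS = 0, 2    # adaptive_hash_index setting
FIELDS_SHIFT,  FIELDS_BITS  = 2, 7    # complete_fields
BYTES_SHIFT,   BYTES_BITS   = 9, 14   # bytes_from_incomplete_field
LEFT_SHIFT,    LEFT_BITS    = 23, 1   # ~for_equal_hash_point_to_last_record

def pack(enabled, fields, bytes_, left):
    return (enabled << ENABLED_SHIFT | fields << FIELDS_SHIFT
            | bytes_ << BYTES_SHIFT | left << LEFT_SHIFT)

def unpack(word, shift, bits):
    # Branch-free extraction, as the commit aims for: shift and mask only.
    return (word >> shift) & ((1 << bits) - 1)

w = pack(enabled=1, fields=64, bytes_=16383, left=1)
assert w < (1 << 32)                                # fits in 32 bits
assert unpack(w, FIELDS_SHIFT, FIELDS_BITS) == 64   # max 64 fields
assert unpack(w, BYTES_SHIFT, BYTES_BITS) == 16383  # max byte count
assert unpack(w, LEFT_SHIFT, LEFT_BITS) == 1
```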

Visible user changes:
- select @@global.adaptive_hash_index will now return a string instead
  of 0 or 1.

Other notable changes:
- In `sql/create_options.cc`:
  - In function `parse_engine_part_options` do allocate table options
    in share root to avoid MSAN/ASAN errors due to use after free of
    `option_struct` in test `parts.partition_special_innodb`.
  - In function `set_one_value` avoid reading after the end of the
    current string.

Co-authored-by: Monty <[email protected]>
Co-authored-by: Marko Mäkelä <[email protected]>
Co-authored-by: Alessandro Vetere <[email protected]>
Co-authored-by: Thirunarayanan Balathandayuthapani <[email protected]>
Aleksey Midenkov
MDEV-25529 Auto-create: Pre-existing historical data is not partitioned as specified by ALTER

Adds logic into prep_alter_part_table() for AUTO to check the history
range (vers_get_history_range()) and based on (max_ts - min_ts)
difference compute the number of created partitions and set STARTS
value to round down min_ts value (vers_set_starts()) if it was not
specified by user or if the user specified it incorrectly. In the
latter case it will print warning about wrongly specified user value.

In case of fast ALTER TABLE, f.ex. when partitioning already exists,
the above logic is ignored unless FORCE clause is specified. When user
specifies partition list explicitly the above logic is ignored even
with FORCE clause.

vers_get_history_range() detects if the index can be used for row_end
min/max stats and if so it gets it with ha_index_first() and
HA_READ_BEFORE_KEY (as it must ignore current data). Otherwise it does
table scan to read the stats. There is test_mdev-25529 debug keyword
to check the both and compare results. A warning is printed if the
algorithm uses slow scan.

Field_vers_trx_id::get_timestamp() is implemented for TRX_ID based
versioning to get epoch value. It works in vers_get_history_range()
but since partitioning is not enabled for TRX_ID versioning create
temporary table fails with error, requiring timestamp-based system
fields. This method will be useful when partitioning will be enabled
for TRX_ID which is mostly performance problems to solve.

Static key_cmp was renamed to key_eq to resolve compilation after
key.h was included as key_cmp was already declared there.
Alexey Botchkov
Progress report
    10.04.2026
Oleksandr Byelkin
MDEV-39287 (11.4 part) Fix compilation problems with glibc 2.43/gcc 16/fedora 44

Trick compiler with ==.
Aleksey Midenkov
MDEV-36362 MariaDB crashes when parsing fuzzer generated PARTITION

Function calls in the INTERVAL expression of DDL make little sense.
SETVAL() fails because sequences require opened tables, while
vers_set_interval() is called at an earlier stage.

The fix throws ER_SUBQUERIES_NOT_SUPPORTED for the sequence functions
SETVAL(), NEXTVAL() and LASTVAL() when the context does not allow a
subselect (determined by clause_that_disallows_subselect).
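The check can be modeled as a small Python sketch (illustrative only;
the server implements this in the parser, and the error text below is
a stand-in for the real ER_SUBQUERIES_NOT_SUPPORTED message):

```python
SEQUENCE_FUNCS = {"SETVAL", "NEXTVAL", "LASTVAL"}

def check_interval_expr(func_name, clause_that_disallows_subselect):
    """Model of the fix: sequence functions need opened tables, which
    are not available while parsing a DDL INTERVAL clause, so reject
    them wherever subselects are already disallowed."""
    if (func_name.upper() in SEQUENCE_FUNCS
            and clause_that_disallows_subselect):
        raise ValueError("ER_SUBQUERIES_NOT_SUPPORTED in %s"
                         % clause_that_disallows_subselect)

check_interval_expr("NOW", "INTERVAL")   # ordinary function: accepted
try:
    check_interval_expr("SETVAL", "INTERVAL")
    raise AssertionError("should have been rejected")
except ValueError as e:
    assert "ER_SUBQUERIES_NOT_SUPPORTED" in str(e)
```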
Aleksey Midenkov
Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <[email protected]>
Aleksey Midenkov
MDEV-25529 set_up_default_partitions() ER_OUT_OF_RESOURCES error