
Console View


Categories: connectors experimental galera main

Aleksey Midenkov
MDEV-25529 set_up_default_partitions() ER_OUT_OF_RESOURCES error
Rex Johnston
MDEV-39333 main.group_by fails with incorrectly ordered rows

Since commit 80ea16c6209, 'order by column is null' may produce an
indeterminate order of the rows for which 'column is null' is
satisfied. This commit fixes an existing test in main.group_by
for MDEV-6129.
Daniel Black
MDEV-34902: debian-start erroneously reports issues

Remove the complexity of check_for_crashed_tables
by removing it altogether. When check_for_crashed_tables
was written there was a desire to recover MyISAM tables.

With Aria being the default for system tables since 10.4,
and all non-MyISAM engines being able to do crash
recovery, there is no need to autocheck as part
of a system service.

With check_for_crashed_tables removed there is no need
for a mailx package recommendation.
Aleksey Midenkov
MDEV-25529 Comments

* get_next_time() comment
* THD::used comment
Yuchen Pei
MDEV-38752 Check that virtual column is a supertype to its expression before substitution

New optimizations were introduced that substitute virtual column
expressions (abbr. vcol expr) with index virtual columns (abbr. vcol
field) in WHERE, ORDER BY and GROUP BY.

In this patch we introduce checks that the type of a vcol field is a
supertype of its vcol expr, and if not, do not proceed with the
optimization. This ensures that the substitution is safe and does not,
for example, lose information through truncation.

For simplicity, the implemented check is strict, favouring 100%
precision: there are instances where the vcol field is a supertype of
the vcol expr, but the check still returns false.

Types not covered in tests: Type_handler_null, Type_handler_composite,
Type_handler_refcursor

Thanks to Jarir Khan <[email protected]> for initial patches
addressing this problem.
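The idea behind the check can be sketched outside the server (a hypothetical illustration; the names `col_type` and `is_strict_supertype` are not server code, and the real logic lives in the type-handler hierarchy): substitution is allowed only when the field's type is certain to represent every value of the expression.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the conservative supertype check; types are
   reduced here to a kind plus a maximum length. */
struct col_type { int kind; unsigned max_len; };

/* Return true only when the indexed vcol field can certainly
   represent every value of the vcol expression (no truncation). */
static bool is_strict_supertype(struct col_type field, struct col_type expr)
{
  if (field.kind != expr.kind)
    return false;                        /* be conservative across kinds */
  return field.max_len >= expr.max_len;  /* a shorter field may truncate */
}
```

A longer column can stand in for a shorter expression, but never the reverse; a check with false negatives only rejects some safe substitutions, which matches the "strict" trade-off described above.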
Alexey Botchkov
MDEV-37262 XMLTYPE: validation.
Oleg Smirnov
MDEV-32868 SELECT NULL,NULL IN (SUBQUERY) returns 0 instead of NULL

When evaluating (SELECT NULL, NULL) IN (SELECT 1, 2 FROM t), the result
was incorrectly 0 instead of NULL.

The IN-to-EXISTS transformation wraps comparison predicates with
trigcond() guards:

  WHERE trigcond(NULL = 1) AND trigcond(NULL = 2)

During optimization, make_join_select() evaluated this as a "constant
condition". With guards ON, the condition evaluated to FALSE (NULL
treated as FALSE), triggering "Impossible WHERE". At runtime, guards
would be turned OFF for NULL columns, but the optimizer had already
marked the subquery as returning no rows.

Fix: check can_eval_in_optimize() instead of is_expensive() in
make_join_select(). Unlike is_expensive(), can_eval_in_optimize()
also verifies const_item(), which returns FALSE for Item_func_trig_cond.
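The failure mode can be modeled with a toy three-valued evaluator (illustrative only; the B_* names and helper functions below are not server code):

```c
#include <assert.h>

/* Toy three-valued SQL logic showing why evaluating a
   trigcond-guarded condition at optimize time is unsafe. */
enum { B_FALSE = 0, B_TRUE = 1, B_UNKNOWN = 2 };

static int sql_eq_null(void) { return B_UNKNOWN; }  /* NULL = x */

/* With the guard OFF, the wrapped condition is treated as satisfied. */
static int trigcond(int guard_on, int cond)
{ return guard_on ? cond : B_TRUE; }

static int sql_and(int a, int b)
{
  if (a == B_FALSE || b == B_FALSE) return B_FALSE;
  if (a == B_UNKNOWN || b == B_UNKNOWN) return B_UNKNOWN;
  return B_TRUE;
}
```

With guards ON, the optimizer sees UNKNOWN, which WHERE treats as FALSE ("Impossible WHERE"); with guards OFF at runtime, the very same condition holds.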
Oleg Smirnov
MDEV-32868 Cleanup (no change of logic)

Switch branches of the `if` condition to make the code
flow more naturally.
Mikhail Pochatkin
MDEV-39173 Replace sprintf with snprintf, remove deprecated pragma

Remove the unscoped #pragma GCC diagnostic ignored
"-Wdeprecated-declarations" from include/violite.h and replace all
sprintf/vsprintf calls with snprintf/vsnprintf across the codebase.

Where possible, pass actual buffer sizes as function parameters
instead of using hardcoded constants or recomputing strlen().
Notable API changes:
- get_date(): added to_len parameter for buffer size
- nice_time(), end_timer() in client/mysql.cc: added buff_size
- generate_new_name(): added name_size parameter
- calc_md5(): added buffer_size parameter (fixes off-by-one)
- MD5_HASH_TO_STRING macro: added _size parameter
- Added MYSQL_UDF_MAX_RESULT_LENGTH constant in mysql_com.h

Also fixed a pre-existing bug in sql/rpl_mi.cc where the ellipsis
"..." was written to the wrong buffer (dbuff instead of buff).

Vendored code is excluded: extra/readline, extra/wolfssl,
wsrep-lib, libmariadb, storage/mroonga/vendor, zlib.
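The conversion pattern, sketched in isolation (the function name is illustrative, not one of the changed APIs):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustration of the sprintf -> snprintf pattern: passing the real
   buffer size bounds the write and guarantees NUL-termination even
   when the formatted text does not fit. */
static void format_id(char *buff, size_t buff_size, unsigned id)
{
  snprintf(buff, buff_size, "id-%u", id);  /* truncates, never overruns */
}
```

This is also why the listed APIs grew explicit size parameters: the callee cannot bound the write without knowing the caller's real buffer size.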
Aleksey Midenkov
MDEV-25529 ALTER TABLE FORCE syntax improved

Improves ALTER TABLE syntax so that alter_list can be supplied
alongside a partitioning expression and the two can appear in any
order. This is particularly useful for the FORCE clause when adding it
to an existing command.

Also improves handling of AUTO with FORCE, so that specifying AUTO
FORCE together gives more consistent syntax, which is used by this
task in further commits.
Aleksey Midenkov
MDEV-25529 Auto-create: Pre-existing historical data is not partitioned as specified by ALTER

Adds logic to prep_alter_part_table() for AUTO: it checks the history
range (vers_get_history_range()) and, based on the (max_ts - min_ts)
difference, computes the number of partitions to create and sets the
STARTS value to the rounded-down min_ts value (vers_set_starts()) if
the user did not specify it or specified it incorrectly. In the latter
case it prints a warning about the wrongly specified user value.

In case of a fast ALTER TABLE, e.g. when partitioning already exists,
the above logic is ignored unless the FORCE clause is specified. When
the user specifies the partition list explicitly, the above logic is
ignored even with the FORCE clause.

vers_get_history_range() detects whether the index can be used for
row_end min/max stats and, if so, gets them with ha_index_first() and
HA_READ_BEFORE_KEY (as it must ignore current data). Otherwise it does
a table scan to read the stats. There is a test_mdev-25529 debug
keyword to check both and compare the results. A warning is printed if
the algorithm uses the slow scan.

Field_vers_trx_id::get_timestamp() is implemented for TRX_ID-based
versioning to get the epoch value. It works in vers_get_history_range(),
but since partitioning is not enabled for TRX_ID versioning, creating
the temporary table fails with an error requiring timestamp-based
system fields. This method will become useful when partitioning is
enabled for TRX_ID, which is mostly a matter of solving performance
problems.

The static key_cmp was renamed to key_eq to fix compilation after
key.h was included, as key_cmp was already declared there.
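The history-range arithmetic described above can be sketched as follows (function names and the fixed interval are illustrative; the server derives the interval from the partitioning clauses):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: the partition count covers (max_ts - min_ts) in whole
   intervals, and STARTS is min_ts rounded down to an interval
   boundary, analogous to what vers_set_starts() is described to do. */
static uint64_t hist_part_count(uint64_t min_ts, uint64_t max_ts,
                                uint64_t interval_sec)
{
  /* ceiling division: a partial trailing interval still needs a part */
  return (max_ts - min_ts + interval_sec - 1) / interval_sec;
}

static uint64_t round_down_starts(uint64_t min_ts, uint64_t interval_sec)
{
  return min_ts - (min_ts % interval_sec);
}
```

For example, 250 seconds of history with a 100-second interval needs three partitions, and a min_ts of 1234 rounds down to a STARTS of 1200.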
Aleksey Midenkov
vers_calc_hist_parts()
Aleksey Midenkov
MDEV-25529 cleanup for vers_set_starts() and starts_clause
Aleksey Midenkov
MDEV-25529 converted COMBINE macro to interval2usec inline function
Marko Mäkelä
MDEV-39263 innodb_snapshot_isolation fails to prevent lost updates under contention

lock_clust_rec_read_check_and_lock(): Refine the check whether
the transaction that last modified the record is still active.
The only thing that should matter is whether we are allowed to see
the record. If the implicit lock holder transaction was active and
we succeeded in acquiring the lock, this means that the transaction
had been committed. We must return DB_RECORD_CHANGED (ER_CHECKREAD)
in that case. We want to avoid returning it before the lock wait,
in case the other transaction will be rolled back.

Thanks to Vadim Tkachenko of Percona for reporting this bug,
as well as Kyle Kingsbury for the broader testing that led to this
finding. Vadim's test case was simplified by me and the root cause
analyzed with https://rr-project.org and an additional patch that
added std::this_thread::yield() at the start of
trx_t::commit_persist(). An even better spot should have been
right after the call to trx_t::commit_state().

The addition to the test innodb.lock_isolation is based on the work
by Teemu Ollakka, which was essential for refining this fix.
Thirunarayanan Balathandayuthapani
MDEV-19574: innodb_stats_method is not honored when innodb_stats_persistent=ON

Problem:
=======
When persistent statistics are enabled (innodb_stats_persistent=ON),
the innodb_stats_method setting is not properly utilized during
statistics calculation.

The statistics collection functions always use a hardcoded default
behavior for NULL value comparison instead of respecting the
configured stats method (NULLS_EQUAL, NULLS_UNEQUAL, or
NULLS_IGNORED). This affects the accuracy of n_diff_key_vals
(distinct key count) and n_non_null_key_val estimates, particularly
for indexes with nullable columns containing NULL values. This causes
the query optimizer to make decisions based on inaccurate cardinality
estimates.

Solution:
========
Introduced IndexLevelStats, which collects statistics
at a specific B-tree level during index analysis.

Introduced PageStats, which collects statistics
for leaf page analysis.

Refactored the following functions:
dict_stats_analyze_index_level() to IndexLevelStats::analyze_level()
dict_stats_analyze_index_for_n_prefix() to IndexLevelStats::sample_leaf_pages()
dict_stats_analyze_index_below_cur() to PageStats::scan_below()
dict_stats_scan_page() to PageStats::scan()

Added the stats method name to stat_description when the
innodb_stats_method variable has a non-default value.

Added new stat names such as n_nonnull_fld01, n_nonnull_fld02, etc.,
with stats descriptions to indicate how many non-null values exist
for the nth field of the index. This value is properly retrieved and
stored in index statistics in dict_stats_fetch_index_stats_step().

rec_get_n_blob_pages(): Calculate the number of externally
stored pages for a record. It uses ceiling division with the actual
usable blob page space (blob_part_size) and now correctly handles
both compressed and uncompressed table formats for
accurate BLOB page counting.

When InnoDB scans the leaf pages directly, the leaf page
count is assigned as the number of pages scanned in the case of a
multi-level index. For single-page indexes, 1 is used. This change
leads to multiple changes in existing test cases.

n_non_null_key_vals calculation:
Only leaf pages have the actual null/non-null distinction that
matters for statistics.
For NOT NULL columns, n_non_null_key_vals = n_diff_key_vals.
For nullable columns, n_non_null_key_vals is calculated as:

n_ordinary_leaf_pages * (n_non_null_all_analyzed_pages
                        / n_leaf_pages_to_analyze)
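As a sanity check, the extrapolation above amounts to the following (a sketch; parameter names follow the commit message, not the server's declarations):

```c
#include <assert.h>

/* Scale the non-null count observed on the analyzed sample of leaf
   pages up to all ordinary leaf pages. */
static double estimate_n_non_null(double n_ordinary_leaf_pages,
                                  double n_non_null_all_analyzed_pages,
                                  double n_leaf_pages_to_analyze)
{
  return n_ordinary_leaf_pages
         * (n_non_null_all_analyzed_pages / n_leaf_pages_to_analyze);
}
```

E.g. 40 non-null values on a 10-page sample of a 100-leaf-page index extrapolates to 400 non-null values.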
Marko Mäkelä
unsung heroes are unsigned, not unsinged
Raghunandan Bhat
MDEV-34951: InnoDB index corruption when renaming key name with same letter to upper case.

Problem:
  InnoDB index corruption occurs when an index is renamed to a name that
  differs only in case (e.g., 'b' to 'B'). The SQL layer uses
  case-insensitive comparison and fails to recognize the change.

Fix:
  Use case-sensitive comparison when matching index names during
  ALTER TABLE to correctly identify and handle case changes.
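The root cause can be reduced to a few lines (illustrative only, not the server's code; `strcasecmp` stands in for the SQL layer's case-insensitive comparison):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <strings.h>

/* A case-insensitive comparison treats 'b' -> 'B' as "no change", so
   the rename is never applied and the dictionaries go out of sync. */
static bool renamed_case_sensitive(const char *a, const char *b)
{ return strcmp(a, b) != 0; }

static bool renamed_case_insensitive(const char *a, const char *b)
{ return strcasecmp(a, b) != 0; }   /* the buggy behaviour */
```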
Sergei Golubchik
MDEV-39141 MariaDB crashes in THD::THD() due to misalignment

fix my_malloc() to return 16-aligned pointers

(type_assoc_array.sp-assoc-array-64bit prints changes in memory_used,
and my_malloc() uses more memory now)
PranavKTiwari
MDEV-39184 Rework MDL enum constants and hash behavior.

Introduce NOT_INITIALIZED=0 in enum_mdl_namespace. Adjust all dependent
structures accordingly.

Shifting enum values changes the result of hash_value(). Since
MDL locks are stored in a Split-Ordered List (lock-free hash
table) ordered by my_reverse_bits(hash_value()), this alters
the iteration order observed in metadata_lock_info.

Updated the result files to reflect the new iteration order, as it is an implementation
detail and does not affect MDL correctness.
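Why shifting enum values reorders rows can be seen from a bit-reversal sketch (modeled here for illustration; this is not the server's my_reverse_bits): a split-ordered list keeps elements sorted by the bit-reversed hash, so any change to hash_value() changes iteration order without affecting lookups.

```c
#include <assert.h>
#include <stdint.h>

/* Reverse the 32 bits of v, as split-ordered lists do to derive the
   key order from a hash value. */
static uint32_t reverse_bits(uint32_t v)
{
  uint32_t r = 0;
  for (int i = 0; i < 32; i++, v >>= 1)
    r = (r << 1) | (v & 1u);
  return r;
}
```

Note that bit-reversed order differs from natural order (e.g. 1 sorts after 2), which is exactly why metadata_lock_info observes a different row order after the enum shift.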
Marko Mäkelä
Copy log files between file systems, in the backup thread
Yuchen Pei
MDEV-39217 Fix the hash key calculation in session sysvar tracker

MDEV-31751 changed the key from the sys_var pointer to its offset.
This was useful for non-plugin variable aliases, but not so much for
plugin variables which all have offset 0.
Aleksey Midenkov
MDEV-25529 TimestampString for printing timestamps
Thirunarayanan Balathandayuthapani
MDEV-39261 MariaDB crash on startup in presence of indexed virtual columns

Problem:
========
A single InnoDB purge worker thread can process undo logs from different
tables within the same batch. But get_purge_table() and open_purge_table()
incorrectly assume a 1:1 relationship between a purge worker thread
and a table within a single batch. Based on this wrong assumption,
InnoDB attempts to reuse TABLE objects cached in thd->open_tables for
virtual column computation.

1) A purge worker opens Table A and caches the TABLE pointer in thd->open_tables.
2) The same purge worker moves to Table B in the same batch; get_purge_table()
retrieves the cached pointer for Table A instead of opening Table B.
3) Because innobase::open() is skipped for Table B, the virtual column
template is never initialized.
4) Virtual column computation for Table B aborts the server.

Solution:
========
- Introduced a purge_table class:
purge_table: stores either TABLE* (for tables with indexed virtual
columns) or MDL_ticket* (for tables without) in a single union,
using the LSB as a flag.
For tables with indexed virtual columns: opens a TABLE* and accesses
the MDL_ticket* via TABLE->mdl_ticket.
For tables without indexed virtual columns: stores only the MDL_ticket*.

trx_purge_attach_undo_recs(): Coordinator opens both dict_table_t*
and TABLE* with proper MDL protection. Workers access cached
table pointers from purge_node_t->tables without opening
their own handles

purge_sys.coordinator_thd: Distinguish coordinator from workers
in cleanup logic. Skip innobase_reset_background_thd() for
coordinator thread to prevent premature table closure during
batch processing. Workers still call cleanup to release their
thread-local resources

trx_purge_close_tables():
Rewrite for purge coordinator thread
1) Close all dict_table_t* objects first
2) Call close_thread_tables() once for all TABLE* objects
3) Release MDL tickets last, after tables are closed

Added table->lock_mutex protection when reading or writing
vc_templ->mysql_table and mysql_table_query_id. Cleared cached
TABLE* pointers before closing tables to prevent stale pointer
access.

Declared open_purge_table() and close_thread_tables() in trx0purge.cc
Declared reset_thd() in row0purge.cc and dict0stats_bg.cc.
Removed innobase_reset_background_thd()
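The LSB-tagging trick the purge_table description relies on can be sketched generically (a hedged illustration; the types and names below are not the server's): keep either of two pointer kinds in one word and use the least significant bit, which is free because of alignment, to record which kind it holds.

```c
#include <assert.h>
#include <stdint.h>

/* One-word union of two pointer kinds, discriminated by the LSB. */
typedef struct { uintptr_t word; } tagged_ptr;

static tagged_ptr tag_table(void *table)   /* "TABLE*": LSB set */
{ tagged_ptr p = { (uintptr_t)table | 1u }; return p; }

static tagged_ptr tag_ticket(void *ticket) /* "MDL_ticket*": LSB clear */
{ tagged_ptr p = { (uintptr_t)ticket }; return p; }

static int holds_table(tagged_ptr p) { return (int)(p.word & 1u); }

static void *untag(tagged_ptr p)
{ return (void *)(p.word & ~(uintptr_t)1u); }
```

This works because any pointer to a type with alignment of at least 2 has a zero LSB, so tagging is lossless.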
Raghunandan Bhat
MDEV-39118: `test_if_hard_path` crashes on recursively resolving `$HOME`

Problem:
  When `$HOME` is set to `~/` (or any string starting with `~/`), the
  `home_dir` is initialized to that value. When `test_if_hard_path` is
  called on a path starting with `~/`, it replaces the `~/` prefix by
  recursively calling `test_if_hard_path(home_dir)` leading to infinite
  recursion and a crash.

Fix:
  Add a check in `test_if_hard_path` to see if `home_dir` itself begins
  with `~/`. If it does, skip the recursive call to prevent the
  infinite loop.
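The guard can be reduced to a one-line check (names are illustrative; this is a sketch of the fix's idea, not the server's my_sys code):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Before expanding a leading "~/" via home_dir, make sure home_dir
   does not itself start with "~/", which would recurse forever. */
static bool safe_to_expand_home(const char *home_dir)
{
  return home_dir != NULL && strncmp(home_dir, "~/", 2) != 0;
}
```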
Daniel Black
MDEV-39292 Debian build compiles Columnstore aarch64

But doesn't package it. The devel-6 branch of columnstore
is in a low-maintenance state. We can't re-add aarch64.

To prevent our CI resources from building aarch64 columnstore
on the 10.6 and 10.11 branches, we adjust our autobake-deb.sh
to keep it disabled there.

The version check against 10 is there so that when this is merged
to 11.4 it no longer has any effect.
Rophy Tsai
MDEV-38550 add LENENC support for COM_CHANGE_USER

When the server advertises CLIENT_PLUGIN_AUTH_LENENC_CLIENT_DATA,
use mysql_net_store_length() for auth data in send_change_user_packet()
instead of a single-byte length capped at 255. This allows auth plugins
that produce >255 bytes of auth data (e.g. cleartext with long passwords)
to work with COM_CHANGE_USER.
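The length-encoded integer format that mysql_net_store_length() emits is well documented in the client/server protocol; a self-contained sketch of the encoder (not the library's implementation) shows why it lifts the 255-byte cap:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Length-encoded integer: values < 251 take one byte; larger values
   get a marker byte (0xFC/0xFD/0xFE) followed by 2, 3 or 8
   little-endian bytes.  Returns the number of bytes written. */
static size_t store_lenenc(unsigned char *to, uint64_t n)
{
  if (n < 251) { to[0] = (unsigned char)n; return 1; }
  if (n < 0x10000) {
    to[0] = 0xFC; to[1] = (unsigned char)n;
    to[2] = (unsigned char)(n >> 8);
    return 3;
  }
  if (n < 0x1000000) {
    to[0] = 0xFD; to[1] = (unsigned char)n;
    to[2] = (unsigned char)(n >> 8); to[3] = (unsigned char)(n >> 16);
    return 4;
  }
  to[0] = 0xFE;
  for (int i = 0; i < 8; i++)
    to[1 + i] = (unsigned char)(n >> (8 * i));
  return 9;
}
```

With this encoding, a 300-byte auth payload is representable (0xFC followed by the 16-bit length), whereas a single length byte tops out at 255.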
Yuchen Pei
[fixup] Initialise order->in_field_list

Otherwise we may get

sql/table.h:239:16: runtime error: load of value 165, which is not a valid value for type 'bool'