Console View

Rucha Deodhar
MDEV-39127: UBSAN : downcast of address X which does not point to an object
of type 'multi_update' in sql/sql_update.cc
| Sql_cmd_update::update_single_table

Analysis:
The 'result' object was being used incorrectly: it may actually be of type
multi_update. This caused a UBSAN error due to an invalid downcast in
Sql_cmd_update::update_single_table().

Fix:
Introduce a dedicated returning_result object for handling RETURNING output
instead of reusing result. This ensures the correct result handler is used
and avoids unsafe casts.
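The unsafe-cast pattern and the fix can be sketched with simplified, hypothetical classes (select_result and multi_update are real MariaDB names; everything else in this sketch is illustrative):

```cpp
// Simplified sketch: a base result handler and two unrelated subclasses.
struct select_result { virtual ~select_result() = default; };
struct multi_update : select_result { int updated_rows = 0; };
struct select_send  : select_result { };

// Before the fix: code assumed 'result' was always a multi_update and
// downcast it unconditionally -- undefined behaviour when it is not.
// After the fix: keep a dedicated, correctly typed handler for RETURNING.
struct Update_ctx {
  select_result *result = nullptr;           // may be any subclass
  select_send   *returning_result = nullptr; // dedicated RETURNING handler
};

bool send_returning_row(Update_ctx &ctx) {
  // Safe: consult the dedicated object instead of casting 'result'.
  return ctx.returning_result != nullptr;
}
```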
Rucha Deodhar
MDEV-39213: json range syntax crash

Analysis:
When the JSON range is parsed, the step is decremented without an
out-of-bounds check, resulting in a failure.
Fix:
Before decreasing the step, check whether doing so would go out of bounds.
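A minimal sketch of the guarded decrement (hypothetical function name; the real parser state is more involved):

```cpp
#include <cstddef>

// Sketch of the fix: before stepping back in the parser state,
// verify that the decrement stays in bounds.
bool step_back(size_t &step) {
  if (step == 0)   // decrement would go out of bounds
    return false;  // report failure instead of underflowing
  --step;
  return true;
}
```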
Thirunarayanan Balathandayuthapani
MDEV-38412 System tablespace fails to shrink due to legacy tables

Problem:
=======
- InnoDB system tablespace fails to autoshrink when it contains
legacy internal tables. These are non-user internal tables left
over from an older version. Because the current shrink logic
treats these entries as user tables, they block the
defragmentation process required to reduce the tablespace size.

Solution:
=========
To enable successful shrinking, InnoDB has been updated to
identify and remove these legacy entries during startup:

drop_all_orphaned_tables(): A new function that scans the InnoDB
system tables for entries lacking the / naming convention.
It triggers the removal of these table objects from InnoDB system
tables and ensures the purge thread subsequently drops any
associated index trees in SYS_INDEXES.

scan_system_tablespace_metadata(): Scans either SYS_TABLES or
SYS_INDEXES (based on the table parameter) and invokes the provided
callback for each non-system table or index found in the system
tablespace (SPACE=0).

dict_drop_table_metadata(): Removes specific table IDs from
internal system tables.

fsp_system_tablespace_truncate(): If legacy tables are detected,
InnoDB prioritizes their removal. To ensure data integrity and
complete the shrink, a two-restart sequence may be required:
1) Purge the legacy tables
2) Defragment the system tablespace and shrink it further

Thanks to Marko Mäkelä for contributing the patch in
drop_orphaned_tables().
Daniel Black
MDEV-38771 RPM conflicts between MariaDB-common and mysql-common

mysql-common and MariaDB-common don't install the same files.
mysql-common (in MySQL 8.0) installs character set files
(/usr/share/mysql/charsets/*) and /usr/lib64/mysql (directory only).

MariaDB-common installs character set files in /usr/share/mariadb
and the same /usr/lib64/mysql directory, along with client plugins
in /usr/lib64/mysql/plugin. RPM's conflict rules only cause trouble
for directories if they are installed with different metadata
(SELinux context, ownership, permissions), which isn't the case here.

As the character sets are at a different location, MariaDB-common
isn't obsoleting mysql-common in a way that provides compatibility
with mysql-common, for mysql-libs or otherwise, so it's just creating
an install conflict.

Users installing perl-DBD-MySQL notice this because its mysql-libs
dependency pulls mysql-common, which conflicts with MariaDB-common.

We correct this by removing the conflict and the provides of
MariaDB-common with respect to mysql-common.
Daniel Black
MDEV-36808 json_array_intersect incorrect results

After JSON_ARRAY_INTERSECT returned a NULL result within a table
scan, all subsequent values were NULL.

The check of null_value in Item_func_json_array_intersect::val_str
meant that after the first occurrence of NULL, all values
were NULL.

This check originated in
Item_func_json_array_intersect::prepare_json_and_create_hash,
where null_value=1 was used to indicate that an object
wasn't an array. Replaced it with a dedicated is_array boolean.
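The sticky-flag bug and the dedicated-boolean fix can be sketched like this (a simplified illustration, not the actual Item class):

```cpp
// A member null_value that is set once and never cleared makes every
// later row NULL -- the bug pattern described above.
struct ArrayIntersect {
  bool null_value = false; // overloaded: result is NULL *or* "not an array"
  bool is_array   = true;  // the fix: dedicated flag for the type check

  // Buggy version: once null_value sticks, all later rows read as NULL.
  bool val_buggy(bool row_is_null) {
    if (row_is_null) null_value = true;
    return null_value;
  }
  // Fixed version: NULL-ness is per row, independent of the type check.
  bool val_fixed(bool row_is_null) {
    null_value = row_is_null;
    return null_value;
  }
};
```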

Reviewed by: Rucha Deodhar
Rex Johnston
MDEV-38347 Debugging functions

Extend current dbug_print* functions in such a way as to facilitate easy
high level debugging.  Introduce a single overloaded print function
defined in dbp.h, DBUG_PRINT_FUNCTION, settable to your preference.
Allow a shell environment variable DBUG_PRINT_FUNCTION to override
this definition.
Hemant Dangi
MDEV-39006: Galera test failure on galera_3nodes.galera_garbd_backup

Issue:
sst_disable_innodb_writes() promises no writes to persistent files
between Galera donor state and SST release, and most engine subsystems
already honour that (purge, dict_stats, encryption thread count, FTS
optimizer in fts0opt.cc:2844). The page cleaner did not:
buf_flush_page_cleaner()'s set_almost_idle: block ran
buf_dblwr.flush_buffered_writes() and log_checkpoint() unconditionally
on every wake-up. With LSN advanced past last_checkpoint_lsn (via any
prior background mtr - the most common one being a wsrep applier
writeset), log_checkpoint_low() then appended a fresh FILE_CHECKPOINT
record to ib_logfile0 mid-SST, breaking
galera_3nodes.galera_garbd_backup with a --diff_files mismatch.

Solution:
Add a wsrep_sst_disable_writes gate around those two writes, mirroring
the FTS pattern. In non-WSREP builds the flag is a constexpr false and
the check compiles away. This closes the page-cleaner leak; other
writers (wsrep apply paths, in-flight encryption rotation, ib_buffer_pool
dump) are separate concerns and out of scope.

Also assert !recv_no_log_write after log_write_up_to() in
log_checkpoint_low() so debug builds catch any future write path that
slips through.
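The gating pattern can be sketched as follows (WITH_WSREP and wsrep_sst_disable_writes come from the commit message; the rest is illustrative):

```cpp
// In non-WSREP builds the flag is a compile-time false,
// so the guarded branch disappears entirely.
#ifdef WITH_WSREP
extern bool wsrep_sst_disable_writes;            // set during SST
#else
constexpr bool wsrep_sst_disable_writes = false; // compiles away
#endif

int checkpoints_written = 0;

void page_cleaner_tick() {
  if (!wsrep_sst_disable_writes) // gate around the persistent writes
    ++checkpoints_written;       // stands in for flush + log_checkpoint()
}
```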
Yuchen Pei
Merge branch '10.11' into 11.4
Georgi (Joro) Kodinov
MDEV-39456: Describe the external contributions process in more details

Added a new .md document describing the community contribution process.
Added a reference to it from the CONTRIBUTING.md.
Rucha Deodhar
MDEV-39179: Incorrect NULL handling in UPDATE ... RETURNING result

Analysis:
OLD_VALUE() swapped only field->ptr, leaving null_ptr pointing to the
current record. This caused incorrect NULL results.

Fix:
Store null_ptr_old for record[1] and swap it together with ptr to
preserve correct NULL semantics.
Daniel Black
MDEV-39207: Fix plugin name passed to find_bookmark in test_plugin_options (test postfix)

Under the cursor/ps-protocol (debug build), the UNINSTALL SONAME
in the cleanup could generate WARN_PLUGIN_BUSY during shutdown.

Disable the cursor and ps-protocol around the
disconnect and UNINSTALL SONAME to resolve the test failure.

Concept/Review/Testing with: Yuchen Pei
bsrikanth-mariadb
MDEV-39430: Store non-default system variables

Store non-default system variables into the context.

Additionally, added code to store user variables, but left it
commented out.
Thirunarayanan Balathandayuthapani
MDEV-39081 InnoDB: tried to purge non-delete-marked record, assertion fails in row_purge_del_mark_error

Reason:
=======
Following the changes in MDEV-38734, the server no longer marks
all indexed virtual columns during an UPDATE operation.
Consequently, ha_innobase::update() populates the upd_t vector
with old_vrow but omits the actual data for these virtual columns.

Despite this omission, trx_undo_page_report_modify() continues to
write metadata for indexed virtual columns into the undo log. Because
the actual values are missing from the update vector, the undo log
entry is recorded without the historical data for these columns.

When the purge thread processes the undo log to reconstruct a
previous record state for MVCC, it identifies an indexed virtual
column but finds no associated data.

The purge thread incorrectly interprets this missing data as a NULL
value, rather than a "missing/unrecorded" value. The historical
record is reconstructed with an incorrect NULL for the virtual column.
This causes the purge thread to incorrectly identify and purge
records that are not actually delete-marked, leading to a server
abort.
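The distinction between SQL NULL and a missing/unrecorded value can be illustrated with nested optionals (purely an illustration, not InnoDB's actual undo-log representation):

```cpp
#include <optional>

// outer empty => value was never recorded in the undo log ("missing")
// inner empty => value was recorded and is SQL NULL
using UndoValue = std::optional<std::optional<int>>;

// Collapsing "missing" into NULL -- the bug pattern described above.
bool buggy_is_null(const UndoValue &v) {
  return !v || !*v;  // treats missing as NULL
}

// Keeping the two states apart.
bool is_missing(const UndoValue &v) { return !v.has_value(); }
bool is_null(const UndoValue &v)    { return v && !v->has_value(); }
```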

Solution:
=========
ha_innobase::column_bitmaps_signal(): Revert the column-marking
logic to the state prior to commit a4e4a56720c, ensuring all
indexed virtual columns are unconditionally marked during an UPDATE.

The previous "optimization" attempted to manually detect indexed
column changes before marking virtual columns. The manual check
for indexed column modifications is redundant. InnoDB already
provides the UPD_NODE_NO_ORD_CHANGE flag within row_upd_step().
This flag is used in trx_undo_page_report_modify() and
trx_undo_read_undo_rec() to decide whether to log or read virtual
column values.

Refactored column_bitmaps_signal() to accept a mark_for_update
parameter that controls when indexed virtual columns are marked.
TABLE::mark_columns_needed_for_update() is the only place that needs
mark_for_update=true because only UPDATE operations need to mark
indexed virtual columns for InnoDB's undo logging mechanism.

INSERT operation is already handled by
TABLE::mark_virtual_columns_for_write(insert_fl=true).
The changes of commit a4e4a56720c974b547d4e469a8c54510318bc2c9 only
affect TABLE::mark_virtual_columns_for_write(false), which is called
during an UPDATE operation; that is when column_bitmaps_signal()
needs to mark indexed virtual columns.

Online DDL has a separate code path, handled by
row_log_mark_virtual_cols() for all DML operations.
Marko Mäkelä
squash! 19eddf4cd14ae152b6304da294c6dc6f1034bf9c

log_t::archive_flush_ahead(): Adjust a buf_flush_ahead() target if
we are switching archive log files.

log_overwrite_warning(): Tolerate innodb_log_archive=ON

mtr_t::finish_writer(): Always use the log_close() flushing target,
possibly strengthened with log_sys.archive_flush_ahead().

log_write_up_to(): Trigger asynchronous write-ahead when
we are switching log files.
Rucha Deodhar
MDEV-39119: Improve error handling when using OLD_VALUE as alias name

Analysis:
Since OLD_VALUE_SYM was a reserved keyword, old_value was not
allowed as an alias.
Fix:
Change OLD_VALUE_SYM from reserved keyword to keyword_sp_var_and_label.
Rucha Deodhar
MDEV-39213: json range syntax crash

Analysis:
When json is being parsed, the step decreases without a out-of-bound check
resulting in failure.
Fix:
Before decreasing the step, check if it will result into out of bound.
Rucha Deodhar
MDEV-39119: Improve error handling when using OLD_VALUE as alias name

Analysis:
Since OLD_VALUE_SYM was a reserved keyword, old_value was not
allowed as an alias.
Fix:
Change OLD_VALUE_SYM from reserved keyword to keyword_sp_var_and_label.
ParadoxV5
Fix `$target_temp_format` in `rpl.rpl_typeconv`

The MTR snippet `suite/rpl/include/check_type.inc`
was setting `@@GLOBAL.mysql56_temporal_format` with the wrong variable.
Rucha Deodhar
MDEV-39212: JSON_MERGE_PATCH depth crash

Analysis:
The crash happens because we run out of stack space.

Fix:
Add a stack overflow check.
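A sketch of an explicit depth guard (the server's actual check, check_stack_overrun, compares stack pointers; this illustrative version bounds a counter instead):

```cpp
#include <cstddef>

const size_t MAX_DEPTH = 1000;  // illustrative limit

// Walk a nested document recursively, refusing to recurse past the
// limit: report an error instead of crashing on deep input.
bool merge_patch(size_t nesting_remaining, size_t depth = 0) {
  if (depth > MAX_DEPTH)
    return false;               // too deep: fail cleanly
  if (nesting_remaining == 0)
    return true;                // reached the innermost level
  return merge_patch(nesting_remaining - 1, depth + 1);
}
```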
Marko Mäkelä
squash! bc87ab87416e4b5f688435b67cf66e3421f10e49

Avoid setting log_sys.need_checkpoint unnecessarily (MDEV-39162).

log_t::archive_flush_ahead(): Adjust a buf_flush_ahead() target if
we are switching archive log files.

log_overwrite_warning(): Tolerate innodb_log_archive=ON

mtr_t::finish_writer(): Always use the log_close() flushing target,
possibly strengthened with log_sys.archive_flush_ahead().

log_write_up_to(): Trigger asynchronous write-ahead when
we are switching log files.
Yuchen Pei
MDEV-39217 Fix the hash key calculation in session sysvar tracker

MDEV-31751 changed the key from the sys_var pointer to its offset.
This was useful for non-plugin variable aliases, but not so much for
plugin variables which all have offset 0.
Sergei Golubchik
MDEV-34570 mariabackup prepare fails with data-at-rest encryption and "-u root" option

rewrite mariadb-backup "early" option parsing to use my_getopt

+ proper handling of values separated from the option by a space (not =)
+ case insensitive and -/_ insensitive comparison
- multiple --defaults-group don't work
- multiple --login-path don't work
- --incremental-dir overwrites --target-dir, not "whatever comes first"
Sergei Golubchik
MDEV-39498 more fixes

* use max_length=640
* also fix mroonga_highlight_html, mroonga_normalize, mroonga_snippet_html
* remove disable_cursor_protocol from all mroonga tests
Rucha Deodhar
MDEV-39127: UBSAN : downcast of address X which does not point to an object
of type 'multi_update' in sql/sql_update.cc
| Sql_cmd_update::update_single_table

Analysis:
The 'result' object was being used incorrectly: it may actually be of type
multi_update. This caused a UBSAN error due to an invalid downcast in
Sql_cmd_update::update_single_table().

Fix:
Introduce a dedicated returning_result object for handling RETURNING output
instead of reusing result. This ensures the correct result handler is used
and avoids unsafe casts.
Rucha Deodhar
temp
Rex Johnston
MDEV-39499 Updates to derived-with-keys, window functions determining
records per key

Enabling derived keys optimization for derived.col = const pushed
conditions.

Estimating records per key in derived key for the optimizer
based on form and/or size of components of a derived table.

Consider a derived table of the form

SELECT ..., ROW_NUMBER ()  OVER (PARTITION BY c1,c2 order by ...)
FROM t1, t2, t3 ...
WHERE ...

If the optimizer generates a key on this derived table because of
a constraint being pushed into it, it currently will not consider key
components of the form col = const.

We lift this constraint and add code to TABLE::add_tmp_key to search
for a window function ROW_NUMBER(). From the partition list c1, c2 we
can infer an estimate of the number of rows we expect to see for
each key value. The optimizer can then use this number to determine
a better table join order.
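One plausible form of the estimate, as a hedged sketch (the exact formula used in TABLE::add_tmp_key may differ):

```cpp
#include <algorithm>

// If ROW_NUMBER() is partitioned by the key columns, each distinct
// partition value corresponds to one key value, so roughly:
//   rows per key ~ total rows / number of distinct partitions
double rows_per_key(double table_rows, double distinct_partitions) {
  if (distinct_partitions <= 0)
    return table_rows;                         // no usable statistics
  return std::max(1.0, table_rows / distinct_partitions);
}
```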
Yuchen Pei
MDEV-39361 Assign Name resolution context in subst_vcol_if_compatible to the new vcol Item_field

The pushdown from HAVING into WHERE optimization cleans up and refixes
every condition to be pushed.

The virtual column (vcol) index substitution optimization replaces
vcol expressions in GROUP BY (and WHERE and ORDER BY) with vcol
fields.

The refixing requires the correct name resolution context to find the
vcol fields.

The commit 0316c6e4f21dee02f5adfbe5c62471ee75ca20bb assigns context
from the select_lex that the GROUP BY belongs to, but that may not
work when there are derived table subqueries.

In this commit we assign the correct context during substitution for
the newly constructed vcol Item_field in the substitution.

Also make the walk of vcol expressions with
intersect_field_part_of_key not descend to subquery since the
generated columns can't have subqueries.

Alternative considered:

1. Assign the context when constructing vcol_info, in
unpack_vcol_info_from_frm. This does not work because the
current_context() in parsing is not the correct context, not to
mention that unpack_vcol_info_from_frm is not always called from a
SELECT statement.

2. Get the correct context for vcol_info after its construction and
before the substitution. Debugger with watch -a vcol_info shows that
there are no common functions accessing vcol_info before the
substitution.

3. Cache the name resolution context value to a new vcol_info->context
field and assign value to it in an Item_field constructor as well as
during substitution. The problem with this is that
vcol_info->context->select_lex may be cleared at the end of the one
SELECT statement, which can cause crash in the next SELECT. It looks
like vcol_info is meant for "static"/ddl information, not
statement-specific info, so it is the wrong place to cache the
context.
Rucha Deodhar
MDEV-39212: JSON_MERGE_PATCH depth crash

Analysis:
The crash happens because we run out of stack space.

Fix:
Add a stack overflow check.
Rex Johnston
MENT-2483 Degrading performance with CTE query with spatial index

Enabling derived keys optimization for derived.col = const pushed
conditions.

Estimating records per key in derived key for the optimizer
based on form and/or size of components of a derived table.

Consider a derived table of the form

SELECT ..., ROW_NUMBER ()  OVER (PARTITION BY c1,c2 order by ...)
FROM t1, t2, t3 ...
WHERE ...

If the optimizer generates a key on this derived table because of
a constraint being pushed into it, it currently will not consider key
components of the form  col = const

We lift this constraint and add code to TABLE::add_tmp_key to search
for a window function ROW_NUMBER(). From the partition list c1, c2 we
can infer an estimate of the number of rows we expect to see for
each key value. The optimizer can then use this number to determine
a better table join order.
Marko Mäkelä
MDEV-39303: Skip ibuf_upgrade() if innodb_force_recovery=6

ibuf_upgrade_needed(): Pretend that no upgrade is needed when
innodb_force_recovery=6.

srv_load_tables(): Test the least likely condition first.

srv_start(): Remove a message that is duplicating one at the start
of recv_recovery_from_checkpoint_start().

This was tested by starting up a server on an empty data directory
that had been created by MariaDB Server 10.6.

Reviewed by: Thirunarayanan Balathandayuthapani
Rex Johnston
Merge branch '11.4' into bb-11.4-MDEV-39499
Rucha Deodhar
MDEV-5092: Implement UPDATE with result set (UPDATE ... RETURNING)

The patch introduces the OLD_VALUE() expression to reference the value
of a column before it was updated. The parser is extended to support
RETURNING and OLD_VALUE(), and RETURNING expressions are stored in a
separate returning_list in SELECT_LEX with independent wildcard tracking.
RETURNING is rejected for multi-table UPDATE.

During setup of RETURNING fields, THD::is_setting_returning is used
when resolving fields, particularly for updates through views.
When resolving view fields, Item_direct_view_ref may point to the
view's item_list, losing the information about whether the value
should be old or new. The original item in returning_list still
contains the correct is_old_value_reference flag, which is copied
back to the resolved item.

OLD_VALUE() is implemented by extending Item_field with a new
Item_old_field class. Item_field::set_field() initializes
Field::ptr_old to the corresponding location in record[1],
which stores the old row during UPDATE execution.

When sending result rows, Item_old_field::send() temporarily switches
the field pointer from record[0] (current row) to ptr_old so
OLD_VALUE() returns the value before the update.

The UPDATE execution path is modified to send a result set when
RETURNING is present instead of an OK packet.
Oleksandr Byelkin
MDEV-39287 (11.4 part) Fix compilation problems with glibc 2.43/gcc 16/fedora 44

Trick compiler with ==.
Thirunarayanan Balathandayuthapani
MDEV-37294  segv in flst::remove_complete(buf_block_t*, unsigned short, unsigned char*, mtr_t*)

Problem:
=======
During system tablespace defragmentation, extent movement occurs
in two phases: prepare and complete.
1) prepare phase validates involved pages and acquires necessary
resources.
2) complete phase performs the actual data copy.

Prepare phase fails to check whether allocating a page will
make the extent FULL. When an extent has exactly (extent_size - 1)
pages used, the prepare phase returns early without latching
the prev/next extent descriptors needed for list manipulation.

Complete phase then allocates the final page, making the
extent full, and attempts to move it from
FSEG_NOT_FULL/FSP_FREE_FRAG to FSEG_FULL/FSP_FULL_FRAG list.
This fails with an assertion because the required blocks were
never latched, causing a crash in flst::remove_complete().

Solution:
========
alloc_from_fseg_prepare(), alloc_from_free_frag_prepare():
Call these functions only if the extent will be full after
allocation. This makes the prepare phase acquire the necessary
pages for FSP list manipulation.

find_new_extents(): Print more detailed information about the
extent data being moved and the destination extent.

defragment_level(): Move get_child_pages(new_block) before committing
changes to enable proper rollback on failure. Well, this failure
is theoretically impossible (new block is exact copy of
validated old block).
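The missing prepare-phase condition can be sketched as (hypothetical helper name; 64 pages per extent is the typical InnoDB value):

```cpp
const unsigned EXTENT_SIZE = 64;  // pages per extent (typical InnoDB value)

// The prepare phase must latch the neighbouring extent descriptors
// whenever allocating one more page makes the extent FULL, because the
// complete phase will then move it between the NOT_FULL and FULL lists.
bool needs_list_move_latches(unsigned used_pages) {
  return used_pages + 1 == EXTENT_SIZE;  // allocation makes extent FULL
}
```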
Raghunandan Bhat
MDEV-37243: SP memory root protection disappears after a metadata change

Problem:
  When a stored routine involves a cursor, and the metadata of the
  table on which the cursor is defined changes, the SP instruction
  has to be reparsed.
  For ex:
  CREATE OR REPLACE TABLE t1 (a INT);

  CREATE OR REPLACE FUNCTION f1() RETURNS INT
  BEGIN
    DECLARE vc INT DEFAULT 0;
    DECLARE cur CURSOR FOR SELECT a FROM t1;
    OPEN cur;
    FETCH cur INTO vc;
    CLOSE cur;
    RETURN vc;
  END;

  SELECT f1(); - first execution, sp-mem_root marked read-only on exec
  SELECT f1(); - read-only sp-mem_root
  ALTER TABLE t1 MODIFY a TEXT; - metadata change
  SELECT f1(); - reparse, rerun instr and mark new mem_root read-only

  sp_lex_instr is re-parsed after the metadata change, which sets up a
  new mem_root for reparsing. Once the instruction is re-parsed and
  re-executed (via reset_lex_and_exec_core), the new memory root assigned
  to the instruction being reparsed remains writable. This violates the
  invariant of SP memory root protection.

Fix:
  Mark the new memory root created for reparsing with read-only flag,
  after the first execution of the SP instruction.
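A POSIX-only sketch of the read-only marking mechanism (assuming MariaDB's debug-build mem_root protection is built on mprotect; the names here are illustrative):

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cstdlib>
#include <cstring>

// Allocate one page-aligned page, as a stand-in for an SP mem_root block.
char *alloc_sp_page() {
  void *p = nullptr;
  size_t page = static_cast<size_t>(sysconf(_SC_PAGESIZE));
  if (posix_memalign(&p, page, page) != 0)
    return nullptr;
  return static_cast<char *>(p);
}

// After the first execution the block is remapped read-only, so stray
// writes fault in debug runs. The fix: apply this to the *new* root
// created for reparsing as well, not only the original one.
void mark_read_only(char *page_ptr) {
  (void) mprotect(page_ptr, static_cast<size_t>(sysconf(_SC_PAGESIZE)),
                  PROT_READ);
}
```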
Yuchen Pei
[fixup] Initialise order->in_field_list

Otherwise we may get

sql/table.h:239:16: runtime error: load of value 165, which is not a valid value for type 'bool'
Rucha Deodhar
MDEV-39179: Incorrect NULL handling in UPDATE ... RETURNING result

Analysis:
OLD_VALUE() swapped only field->ptr, leaving null_ptr pointing to the
current record. This caused incorrect NULL results.

Fix:
Store null_ptr_old for record[1] and swap it together with ptr to
preserve correct NULL semantics.
Georg Richter
Fix memory leak in rpl_api
Oleksandr Byelkin
New CC 3.4