
Console View


Daniel Black
MDEV-39031 remove Docs/README-wsrep

The contents of this file were not current,
and better and more current forms of documentation
related to Galera are on mariadb.com/docs.
Thirunarayanan Balathandayuthapani
MDEV-19194  ASAN use-after-poison in fk_prepare_copy_alter_table upon dropping FK

Problem:
========
The issue occurs when ALTER TABLE contains duplicate DROP FOREIGN KEY
operations (e.g., "DROP FOREIGN KEY f1, DROP FOREIGN KEY f1").
fk_prepare_copy_alter_table() removes from the list all foreign
keys which are to be dropped, but the iterator continues accessing the
removed entry when it encounters the duplicate foreign key.

Solution:
=========
mysql_prepare_alter_table(): Add duplicate detection logic to handle
cases where the same foreign key is dropped multiple times in a
single ALTER TABLE statement.

fk_prepare_copy_alter_table(): fk_parent_key_list and fk_child_key_list
contain all existing foreign keys where the table acts as parent or child
respectively, and they are populated through storage engine calls.
After this phase, the code iterates through these collections comparing
each foreign key against alter_info->drop_list using case-insensitive
string matching for constraint names and database/table names.

- When matches are found, instead of removing them immediately,
the matching FOREIGN_KEY_INFO objects are added to the
keys_to_remove collection, which acts as a temporary holding area.
Once all matching keys are collected in keys_to_remove, use it to
remove the key entries from the original lists.
Vladislav Vaintroub
MDEV-37256 Add permission check for LOAD INDEX INTO CACHE and CACHE INDEX

Previously, no access check was done for these commands. Now, we require
any table-level permission, which is consistent with other admin commands
like CHECKSUM.
mohitbalwani
MDEV-38696 Fix infinite loop in my_copy() function

The bug occurred when my_read() returned (size_t)-1 but was
compared against (uint)-1, causing the loop to
never exit when reading from a directory or under other error conditions.

Changed comparison from (uint)-1 to (size_t)-1 to match the
return type of my_read().
Hemant Dangi
MDEV-36621: galera.GCF-360 test: IST failure

Removed redundant per-node wsrep_provider_options overrides that
were dropping all timeout settings.
Vladislav Vaintroub
MDEV-34482 threads running events are not visible for definer

Fixed to additionally show event worker threads in SHOW PROCESSLIST
and I_S.PROCESSLIST, if the event worker runs under the current user's context.
Abdelrahman Hedia
MDEV-37842 Skip implicit Using_Gtid warning when value is unchanged

When a replica already has Using_Gtid=No and a CHANGE MASTER TO is
issued with log coordinates (e.g. relay_log_pos, master_log_file),
the server emits a spurious warning:

  Note 4190 CHANGE MASTER TO is implicitly changing the value of
  'Using_Gtid' from 'No' to 'No'

The value isn't actually changing, so the warning is misleading.

In change_master() (sql/sql_repl.cc), when log coordinates are specified
without an explicit master_use_gtid, the code implicitly sets Using_Gtid
to No and emits a warning. The condition only checks whether
master_use_gtid=No was explicitly given but does not check whether
Using_Gtid is already No.

Added a check that the current Using_Gtid value differs from
USE_GTID_NO before emitting the warning. The warning now only fires
when the value actually changes.

Re-recorded rpl.rpl_from_mysql80 which previously expected the
spurious No-to-No warning.

Reviewed-by: Georgi Kodinov <[email protected]>
Reviewed-by: Brandon Nesterenko <[email protected]>

https://github.com/MariaDB/server/pull/4678
Rucha Deodhar
MDEV-38835: json path regression

Analysis:
JSON_VALUE should not allow wildcard in the path since it is not supposed to
return multiple values (as per json standards). However, JSON_QUERY()
should.

Fix:
Make JSON_VALUE() return an error on a wildcard in the path.
Michael Widenius
MDEV-32745 Add a simple MySQL to MariaDB upgrade helper

The tool is named mariadb-migrate-config-file.
The main purpose of the tool is to change MySQL option
files to work both for MySQL and MariaDB.
There are options to do the changes in the options file inline,
or at-end-of-file. One can also remove or comment unknown options.

The list of supported options is generated at compile time from
mariadbd --help. All server options, including compiled plugins, are
supported.

The bulk of the code comes from Väinö.
Monty has updated it with a lot of extra options.
Wlad helped with cmake integration.

Other things:
- Fixed a memory leak in sql_plugin.cc
- plugin-load will now, in case of errors, try to load all given plugins
  before aborting
- If silent-startup is used, plugin-load will not give errors for
  plugins it cannot load or warnings about plugin maturity level.
- my_rm_tree() will now delete symlinks, not the actual file, if
  MY_NOSYMLINK flag is used.
- my_stat() will now give data for symlink if MY_NOSYMLINKS is used.
- Added 'number of lines' option to mysqltest --cat_file

@Authors: Väinö Mäkelä <[email protected]>,[email protected]
Marko Mäkelä
MDEV-38958: Core dump contains buffer pool in release builds

buf_pool_t::create(), buf_pool_t::resize(): After
my_virtual_mem_commit() successfully invoked mmap(MAP_FIXED), invoke
ut_dontdump() so that the buffer pool will continue to be excluded
from any core dump as expected on platforms that implement this
functionality.

This was manually tested on Linux and FreeBSD by executing
killall -ABRT mariadbd
and checking the size of the core dump file, while the following
test case was executing:
--source include/have_innodb.inc
set global innodb_buffer_pool_size=10737418240;
sleep 3600;

This fixes up the following changes:
commit b6923420f326ac030e4f3ef89a2acddb45eccb30 (MDEV-29445)
commit 072c7dc774e7f31974eaa43ec1cbb3b742a1582e (MDEV-38671)
Geng Tian
MDEV-38454 CHANGE MASTER TO master_heartbeat_period does not accept numbers with `+` sign

Fixed parser inconsistency where CHANGE MASTER TO master_heartbeat_period
rejected numeric values with an explicit '+' sign, while other parameters
like master_connect_retry accepted them.

The issue was in sql/sql_yacc.yy where master_heartbeat_period used
NUM_literal (which doesn't accept '+'), while other parameters used
ulong_num (which includes opt_plus).

Solution: Added opt_plus before NUM_literal in the master_heartbeat_period
grammar rule, making it consistent with other numeric parameters.

Added test case to verify:
- master_heartbeat_period=+60 now works (was broken)
- master_heartbeat_period=60 still works (backward compatible)

All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Marko Mäkelä
MDEV-38968 Redundant FILE_CHECKPOINT writes

Concurrent calls to log_checkpoint_low() were possible from multiple
threads, and they could cause redundant writes of FILE_CHECKPOINT
records that refer to the same checkpoint. Let us simplify the logic
by making the dedicated buf_flush_page_cleaner() thread responsible
for checkpoints.

log_t::write_checkpoint(lsn_t end_lsn): Add the parameter checkpoint,
which will replace the data member log_sys.next_checkpoint_lsn.

log_sys.checkpoint_pending: Remove. Only the buf_flush_page_cleaner()
thread will write checkpoints or initiate page writes.

log_checkpoint_low(), log_checkpoint(): Remove the return value,
because there cannot be any concurrent log checkpoint in progress.

buf_flush_wait(): Add a parameter for waiting for a full checkpoint.
This function replaces buf_flush_wait_flushed().

log_t::checkpoint_margin(): Replaces log_checkpoint_margin().

log_t::write_buf(): Remove a call to set_check_for_checkpoint(false)
that commit 7443ad1c8a8437a761e1a2d3ba53f7c0ba0dd3bb (MDEV-32374)
had added. The flag should only be cleared when the checkpoint has
advanced far enough.

log_make_checkpoint(): Simply wrap buf_flush_sync_batch(0, true).

buf_flush_sync_batch(): Add the parameter bool checkpoint, to wait
for an empty FILE_CHECKPOINT record to be written. Outside recovery,
pass lsn=0.

buf_flush_sync_for_checkpoint(): On shutdown, update the systemd
watchdog and keep flushing until a final checkpoint has been written.

buf_flush_page_cleaner(): Revise the shutdown logic so that all
changes will be written out and a checkpoint with just a FILE_CHECKPOINT
record can be written.

buf_flush_buffer_pool(): Remove.

buf_flush_wait_flushed(): Require the caller to acquire
buf_pool.flush_list_mutex.

logs_empty_and_mark_files_at_shutdown(): Simplify the logic,
and return the shutdown LSN.

fil_names_clear(): Fix an off-by-one error that would prevent
removal from fil_system.named_spaces.

innodb_shutdown(): Always invoke logs_empty_and_mark_files_at_shutdown().

srv_undo_tablespaces_reinit(): Simplify the logic and remove the
fault injection point after_reinit_undo_abort, which would cause
a failure on Microsoft Windows. Changing innodb_undo_tablespaces is
not fully crash-safe.

Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Saahil Alam
Raghunandan Bhat
MDEV-36678: Various crashes upon stored procedure querying view with concat/group_concat

Problem:
  A stored procedure querying a view with `CONCAT`/`GROUP_CONCAT` could
  crash due to a NULL pointer dereference during query optimization.

  During condition pushdown, `Item_direct_ref_to_item::deep_copy`
  creates a clone where the clone's `ref` pointer incorrectly remains
  pointing to the original object's `m_item`, rather than its own.

  Because the clone is tethered to the original object, subsequent
  transformations happen on the original item instead of the clone.
  Calling `fix_fields` on the malformed `Item_direct_ref_to_item` clone
  resolves to `Item_field::fix_fields` instead of `Item_ref::fix_fields`,
  leaving the member `Item_ref::ref` uninitialized (NULL). When this is
  dereferenced in `Item_ref::const_item`, the server crashes.

Fix:
  The `Item_direct_ref_to_item::set_item` method is made to update the
  item and reference, making the clone complete. This results in correct
  `fix_fields` resolution, making `Item_ref::ref` safe to access.
Kristian Nielsen
MDEV-38731: Wrong index usage when row-based replication and no PK

This affects row-based replication for UPDATE/DELETE on tables with no
primary key (and no non-NULL unique index).

There was a typo / incorrect merge from the following commit:

commit 3abce27e9d45282084aa8d0972ffb6721bac16f5
Author: unknown <[email protected]>
Date:  Mon Dec 27 22:37:37 2010 +0100

    Merge Percona patch row_based_replication_without_primary_key.patch into MariaDB.

Instead of referencing the index selected for applying the event, part
of the code was referencing the first index (in internal order) in the
table. This is code that checks if an index lookup is unique, or if it
requires a range scan. This could lead the code to incorrectly do a
unique lookup instead of comparing the pre-image against each row in a
range scan, thus applying the event to the wrong row and causing
replication to diverge.

Signed-off-by: Kristian Nielsen <[email protected]>
Vladislav Vaintroub
MDEV-34482 threads running events are not visible for definer

Fixed to additionally show event worker threads in SHOW PROCESSLIST
and I_S.PROCESSLIST, if the event worker runs under the current user's context.
Kristian Nielsen
Merge 10.6 -> 10.11
kjarir
MDEV-38507: Reduce fadvise() overhead on pipes in Mariabackup

Currently, posix_fadvise(..., 0, 0, POSIX_FADV_DONTNEED) is executed
repeatedly after each my_write. In case of pipes, it evaluates to
ESPIPE and fails. This results in millions of redundant syscalls,
introducing massive overhead.

As proposed by Daniel Black, call posix_fadvise with a length/size
of 0 once when the file is opened, rather than repetitively after
every write. This change moves the fadvise call to the respective
_open functions for ds_local, ds_stdout and ds_tmpfile datasinks.
It fails fast on pipes early on and eliminates the redundant
syscall overhead for subsequent write operations.
Thirunarayanan Balathandayuthapani
MDEV-38993 Assertion `trx->undo_no == 1' fails upon ALTER IGNORE

Problem:
========
During ALTER TABLE ... IGNORE, a partial rollback on duplicate key
error resets trx->undo_no to 0. The subsequent insert then enters
the undo rewrite block with undo_no == 0, hitting the assertion
that expected undo_no == 1.

Solution:
=========
Partial rollback truncates the last insert undo record
via trx_undo_truncate_end(), which rewrites TRX_UNDO_PAGE_FREE
on the page. By checking trx->undo_no as part of the rewrite
predicate, InnoDB correctly skips the rewrite logic after a partial
rollback.

trx_undo_report_row_operation(): Pre-compute the full predicate
(clear_ignore) before trx_undo_assign_low(), since old_offset
and top_offset are not modified by that call.

trx_undo_rewrite_ignore(): Extract the rewrite body into a
separate ATTRIBUTE_COLD ATTRIBUTE_NOINLINE static function.
Marko Mäkelä
Fix GCC-16 -Wunused-but-set-variable
Kristian Nielsen
MDEV-38776 [ERROR] Slave worker thread retried transaction 10 time(s) in vain, giving up

Partially revert this commit:

commit 6a1cb449feb1b77e5ec94904c228d7c5477f528a
Author: Sergei Golubchik <[email protected]>
Date:  Mon Jan 18 18:02:16 2021 +0100

    cleanup: remove slave background thread, use handle_manager thread instead

This restores running the parallel replication deadlock killing in its own
dedicated thread, not in the manager thread shared with other unrelated
processing.

When a parallel replication conflict is detected, multiple threads can be
waiting for each other, potentially in a loop. It is critical for
correctness (as well as performance) that the blocking thread is killed
immediately to allow other threads to continue. If one of the threads being
blocked was the manager thread itself in some unrelated job, the kill could
end up being blocked indefinitely, causing replication to hang, usually
eventually timing out on innodb_lock_wait_timeout and failing replication
with an error like:

[ERROR] Slave worker thread retried transaction 10 time(s) in vain, giving up

Signed-off-by: Kristian Nielsen <[email protected]>
Rex Johnston
MDEV-35333 Performance Issue on TPC-H Query 18

Query 18 of the TPC-H benchmark looks like this

select
    c_name,
    c_custkey,
    o_orderkey,
    o_orderdate,
    o_totalprice,
    sum(l_quantity)
from
    ORDERS
    join CUSTOMER on c_custkey = o_custkey
    join LINEITEM on o_orderkey = l_orderkey
where
    o_orderkey in
    (
        select
            l_orderkey as ok
        from
            LINEITEM
        group by
            l_orderkey
        having
            sum(l_quantity) > 314
    )
group by
    c_name,
    c_custkey,
    o_orderkey,
    o_orderdate,
    o_totalprice
order by
    o_totalprice desc,
    o_orderdate
limit
    100;

Our derived table is converted into a semi-join, but the optimizer
doesn't know how many rows to expect before materialization, so it
doesn't know where in the join order the semi-join is best placed.

We need to be able to estimate the selectivity of
having sum(l_quantity) > 314 as well as the cardinality of l_orderkey
in the table LINEITEM.

Here we introduce a framework to calculate grouped selectivity, and perhaps
other ignored, non-grouping predicates in the future.

We add selectivity_estimate() to our Item class as the main entry point
for our calculations.  This method is implemented for
the comparison functions Item_func_{gt,ge,lt,le,eq,ne}.
We look for a ref item, indicating an aggregate function, and a constant item.
If found, our ref item is asked for a likely distribution
of values.  Implemented ref items are Item_sum_{sum,min,max,count},
each returning something we can compare to our constant item.

The class Expected_distribution encapsulates both the collection of the
expected values from our aggregate function, as well as the comparison
operators (currently implemented using field->val_real()).
In order to calculate the probability of any aggregate function
comparison, we first calculate the required group size, then the
probability of this group size using a poisson distribution.

We also replace the call to the old get_post_group_estimate() with a newer
estimate_post_group_cardinality() that includes estimates of
item list cardinality and provides an entry point to our new
selectivity estimate above.

Updated to estimate selectivity of LIKE predicates in the where clause.
Updated to blanket apply a selectivity to a HAVING clause in a
subselect.
Marko Mäkelä
Merge 10.6 into 10.11
Fariha Shaikh
MDEV-38020 Master & relay log info files read 2^31 and above incorrectly

Master and relay log positions are 64-bit unsigned but were read using
atoi(), which only handles signed 32-bit integers. Values >= 2^31 overflow
and become corrupted after a restart.

Add init_ullongvar_from_file() using my_strtoll10() to correctly parse
64-bit values. Update rpl_rli.cc and rpl_mi.cc to use ulonglong variables
and this new function.

All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.

Reviewed-by: Brandon Nesterenko <[email protected]>
Marko Mäkelä
MDEV-38989 main.ctype_utf16le SEGV in Ubuntu 26.04 (x86-64-v3)

my_utf16le_uni(), my_lengthsp_utf16le(): Instead of wrongly claiming
aligned access by invoking uint2korr(), inform the compiler of
unaligned access by invoking memcpy(), which will be optimized away.
Sergei Petrunia
MDEV-38240: Selectivity sampling not performed when the table has no indexed conditions

Don't skip range analysis step in such cases.
Amodh Dhakal
MDEV-38922 Fix cosmetic "stage done" output for REPAIR TABLE

report_progress_end() erases progress messages by overwriting them
with spaces. It was called immediately after query execution, before
the result set was fetched. Progress messages can arrive during
result fetching, so moving the call to after the result set is cached
ensures any such messages are erased before the result set is printed.

Signed-off-by: Amodh Dhakal <[email protected]>
Marko Mäkelä
Fix GCC-16 -Wunused-but-set-variable
Marko Mäkelä
MDEV-38958 fixup: Correct the address ranges

buf_pool_t::create(): Invoke ut_dontdump() on the entire
buf_pool.memory_unaligned.

buf_pool_t::resize(): Invoke ut_dontdump() on the grown allocation.

Thanks to Alessandro Vetere for pointing this out.
Marko Mäkelä
Fix GCC-16 -Wmaybe-uninitialized
luckyxhq
MDEV-38915 Fix signed/unsigned type mismatch in setval() for GET_ULONG
Daniel Black
MDEV-39015 Debian - remove libboost-system-dev dependency

In all supported Debian versions, which all ship Boost 1.69+,
this package only provided a library stub. The header
files were moved into libboost-dev when upstream removed
the need for a compiled library.
Hemant Dangi
MDEV-38916: Galera test failure on galera_3nodes.MDEV-36360

Issue:
wsrep_slave_FK_checks is deprecated (MDEV-38787) and has no effect,
while applier threads are forced to run with FK checks enabled.
The MDEV-36360 test expects that no FK constraint failures appear
in the error log; this expectation does not hold because
wsrep_slave_FK_checks has no effect, and so the test fails.

Solution:
Remove the assert_grep.inc block so the test remains as a check
for table-level X-lock / deadlock behavior only.
Denis Protivensky
MDEV-30612: Fix usage of lex->definer in wsrep_create_trigger_query

Setting thd->lex->definer is excessive as it's only used within the
function call.
Moreover, it would lead to a use-after-free on the second execution
of a CREATE TRIGGER prepared statement.
Monty
Have mariadbd server read [mariadb-X] and [mariadbd-X] sections

X is the major version, as in [mariadb-11] and [mariadbd-11]

This simplifies my.cnf files supporting many MariaDB versions
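As a hedged illustration (the option names and values below are arbitrary examples, not recommendations), one option file can then serve several major versions:

```ini
# Read by all MariaDB servers
[mariadb]
max_connections = 500

# Read only by 11.x servers, via the new version-specific sections
[mariadb-11]
innodb_buffer_pool_size = 4G
```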
Jan Lindström
MDEV-38895 : Regression on MDL conflict handling

The reason for this timeout is a regression introduced in commit
https://github.com/MariaDB/server/commit/e40277d29b7c531e1ed6b3bed7ecfc8cfeff4c7e,
where the condition wsrep_thd_is_BF(granted_thd, false) is incorrect:
if the granted thread is BF, it may not be BF-killed.

In this patch the following has been changed:
    * Split wsrep_handle_mdl_conflict to smaller easier to understand parts
    * Improved code comments
    * Improved debug logging to contain thread ids and necessary information
    * Corrected and simplified conditions so that BF threads are not killed
    * Added few debug assertions to verify that BF threads are not killed
    * Code cleanups so that tab-characters are not used
    * Existing test cases already cover the rest of the cases
Kristian Nielsen
MDEV-37133: Fix parallel replication stalls due to missing wait report

In certain situations, while one transaction was waiting for a row lock,
InnoDB could grant another transaction a lock that conflicted with the waiting
lock, but not with other already granted locks. Then if the page was
reorganised, the granted lock could be moved to the head of the lock queue.
This could end up with a situation where the waiting lock is now waiting for
the granted lock, and this wait was never reported to replication. This could
cause parallel replication to hang for --innodb-lock-wait-timeout if the
waiting transaction is before the granted one.

This patch fixes the issue by adding logic, so that when a lock is granted and
added to a lock queue, a wait is reported against this lock for any already
waiting locks. (This supplements the wait reports made when a waiting lock
is added to the queue).

Signed-off-by: Kristian Nielsen <[email protected]>
Marko Mäkelä
Merge 10.6 into 10.11
Alexey Yurchenko
MDEV-38383 Fix MDEV-38073 MTR test warning

MDEV-38073 MTR test started to fail with a warning after an upstream merge
from 11.4 (a7528a6190807281d3224e4e67a9b76083a202a6) because the THD responsible
for creating the SST user became read-only when the server was started with
--transaction-read-only=TRUE.
Make sure the read-only flag on THDs created for the wsp::thd utility class is
cleared regardless of the --transaction-read-only value, as it is intended
only for client-facing THDs.
Varun Deep Saini
MDEV-38873 JSON_EXTRACT truncates result through derived tables

Item_func_json_extract::fix_length_and_dec() underestimated
max_length. The result is passed through json_nice() with LOOSE
formatting which expands separators (e.g. ["a",1] -> ["a", 1]),
but max_length was computed from the input size alone. When the
result flows through a materialized derived table, the field is
created with this too-small max_length and the output gets truncated.

Fix: multiply by 2 to account for LOOSE expansion, add array
framing overhead for multi-path extractions, and use
fix_char_length_ulonglong() for proper character-to-byte
conversion and overflow handling.

Signed-off-by: Varun Deep Saini <[email protected]>
Abhishek Bansal
MDEV-37640: Crash at val_str in Item_func_json* functions

This also fixes MDEV-33984. Item_func_json_normalize::val_str()
and Item_func_json_keys::val_str() failed to initialize the character
set of the result buffer. In certain contexts, the buffer can be a
zero-initialized String object with a NULL charset. This led to a null
pointer dereference in String::append(), which relies on the charset
information.

Fixed by explicitly setting the buffer's charset to the item's
collation before appending the normalized JSON string.