
Console View


Categories: connectors experimental galera main
Sergei Golubchik
find pcre2.h in a non-default location

for macos builder
Jan Lindström
MDEV-39483 : Hang on mariabackup SST

This regression was caused by commit 794b1d0ec5c
Binlog-in-engine: New binlog implementation integrated in InnoDB.
Mariabackup requests the BACKUP STAGE BLOCK_COMMIT MDL lock
using the m_bs_con connection. Because we have wsrep,
write_galera_info is called using mysql_connection.
Note that m_bs_con and mysql_connection are different
connections. In write_galera_info, write_current_binlog_file
is called and FLUSH BINARY LOGS is executed. In reload_acl_and_cache,
the MDL_BACKUP_START MDL lock is requested. Because we already
hold a conflicting MDL lock for BLOCK_COMMIT in a different THD,
it has to wait. This wait ends on a timeout and the backup fails,
causing the mariabackup SST to fail, so the node will not join the cluster.

Fixed by using the same connection for write_galera_info as for
BACKUP STAGE BLOCK_COMMIT, i.e. m_bs_con.
rusher
[misc] Use SSL port for MaxScale TLS connections in tests
Yuchen Pei
MDEV-39361 Assign Name resolution context in subst_vcol_if_compatible to the new vcol Item_field

The pushdown from HAVING into WHERE optimization cleans up and refixes
every condition to be pushed.

The virtual column (vcol) index substitution optimization replaces
vcol expressions in GROUP BY (and WHERE and ORDER BY) with vcol
fields.

The refixing requires the correct name resolution context to find the
vcol fields.

The commit 0316c6e4f21dee02f5adfbe5c62471ee75ca20bb assigns context
from the select_lex that the GROUP BY belongs to, but that may not
work when there are derived table subqueries.

In this commit we assign the correct context to the newly constructed
vcol Item_field during the substitution.

Also make the walk of vcol expressions with
intersect_field_part_of_key not descend into subqueries, since
generated columns cannot contain subqueries.

Alternatives considered:

1. Assign the context when constructing vcol_info, in
unpack_vcol_info_from_frm. This does not work because the
current_context() in parsing is not the correct context, not to
mention that unpack_vcol_info_from_frm is not always called from a
SELECT statement.

2. Get the correct context for vcol_info after its construction and
before the substitution. A debugger session with watch -a vcol_info
shows that there are no common functions accessing vcol_info before
the substitution.

3. Cache the name resolution context value in a new vcol_info->context
field and assign a value to it in an Item_field constructor as well as
during substitution. The problem with this is that
vcol_info->context->select_lex may be cleared at the end of one
SELECT statement, which can cause a crash in the next SELECT. It looks
like vcol_info is meant for "static"/ddl information, not
statement-specific info, so it is the wrong place to cache the
context.
Rex Johnston
MDEV-39499 Updates to derived-with-keys, window functions determining

..records per key

Enabling derived keys optimization for derived.col = const pushed
conditions.

Estimating records per key in derived key for the optimizer
based on form and/or size of components of a derived table.

Consider any derived table of the form

SELECT ..., ROW_NUMBER ()  OVER (PARTITION BY c1,c2 order by ...)
FROM t1, t2, t3 ...
WHERE ...

If the optimizer generates a key on this derived table because of
a constraint being pushed into it, it currently will not consider key
components of the form col = const.

We lift this constraint and add code to TABLE::add_tmp_key to search
for a window function ROW_NUMBER(). From the partition list c1, c2 we
can infer an estimate of the number of rows we expect to see for
each key value. In the absence of EITS, we still have a number of
expected records in the referred-to table and will fall back to using
that as a worst-case scenario.
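The estimate described above can be sketched as follows. This is a minimal illustration, not the actual TABLE::add_tmp_key code; est_rows_per_key is a hypothetical helper that assumes we either know the number of distinct PARTITION BY value combinations or fall back to the table's total row count as the worst case:

```cpp
#include <cstdint>
#include <algorithm>

// Hypothetical sketch: estimate rows per key value for a derived table
// keyed on the PARTITION BY columns of ROW_NUMBER().  With no statistics
// (distinct_partitions == 0) we can only use the total row count as a
// worst case; otherwise, rows divide evenly across partitions on average.
static uint64_t est_rows_per_key(uint64_t table_rows,
                                 uint64_t distinct_partitions /* 0 = unknown */)
{
  if (distinct_partitions == 0)        // no statistics: worst case
    return table_rows;
  // at least one row per key value, rounded up
  return std::max<uint64_t>(1, (table_rows + distinct_partitions - 1) /
                                   distinct_partitions);
}
```
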

As a consequence of forcing best_access_path to now consider a generated
index's records per key on any type of derived table, we see a number of
optimization changes, for example a range optimizer might now be
considered best and might return IMPOSSIBLE_RANGE (seen in
rowid_filter_myisam.result).
Daniel Black
MDEV-39212 JSON_MERGE_PATCH depth crash (test fix)

The msan debug CI test was failing with the test case in func_json, so
move the test to json_debug_nonembedded and use the debug sync points to
trigger the simulated stack-overrun errors, making it work regardless of
stack size.
rusher
[misc] Use ssl_port for MaxScale connections in unittest TLS setup
Oleksandr Byelkin
Merge branch 'bb-10.11-release' into bb-11.4-release
rusher
Add MaxScale container logs output on test failure
rusher
[misc] Remove SKIP_MAXSCALE from test_conn_str and adjust connection string for MaxScale environment
Oleksandr Byelkin
Merge branch 'bb-10.11-release' into bb-11.4-release
Jan Lindström
MDEV-39429 : Galera test failures on 12.3
kolzeeq
[misc] Use SSL port for MaxScale TLS connections in tests
Daniel Black
MDEV-35545 UBSAN Gis_geometry_collection::init_from_opresult

From the UBSAN error:

sql/spatial.cc:3364:10: runtime error: applying non-zero offset 1 to null pointer

In Gis_geometry_collection::init_from_opresult, a pointer argument
was being treated as a counter for the special case of
GEOMETRYCOLLECTION EMPTY. The memory location was never accessed.

Rather than using points to count and returning a difference at the
end, the code is replaced to use g_len_total as a counter. This gets a
value of 1 for the GEOMETRYCOLLECTION EMPTY case, and no pointer
undefined behaviour occurs.

As other init_from_opresult functions return uint, both g_len and
result use that type.
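The undefined behaviour being removed can be illustrated like this; the names are illustrative, not the actual spatial.cc code:

```cpp
// Sketch of the pattern being fixed.  Applying a non-zero offset to a
// null pointer is undefined behaviour even if the result is never
// dereferenced:
//
//   const char *points = nullptr;
//   points += 1;            // UBSAN: non-zero offset applied to null pointer
//
// Counting with an unsigned integer instead is always well defined:
static unsigned count_empty_collection(bool is_empty_collection)
{
  unsigned g_len_total = 0;       // plain counter, no pointer arithmetic
  if (is_empty_collection)
    g_len_total += 1;             // GEOMETRYCOLLECTION EMPTY contributes 1
  return g_len_total;
}
```
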
Sergei Golubchik
MDEV-39512 SIGSEGV in ha_sphinx::create on TRUNCATE regression

dd_recreate_table() had NULL create_info->option_struct.
Let's get it from the table share.
Thirunarayanan Balathandayuthapani
MDEV-34358  Encryption threads consume CPU when no work available

Problem:
--------
1. Encryption threads busy-wait when no work is available. When reaching
fil_system.space_list.end(), fil_crypt_return_iops() is called with
wake=true, causing pthread_cond_broadcast() to wake all threads
unnecessarily, leading to CPU waste.

2. Tablespaces with CLOSING/STOPPING flags (set during DDL operations)
are skipped during iteration. Since DDL completion doesn't wake
encryption threads, these spaces may never be encrypted if threads
sleep indefinitely.

3. For default_encrypt_list iteration, when spaces exist but none are
acquirable, threads need to wake others for cooperative retry, but this
case was not distinguished from fil_system.space_list.end().

4. IOPS are allocated before searching for tablespaces, wasting resources
during iteration when no I/O occurs.

Solution:
---------
1. Implement timed wait with exponential backoff (5s -> 10s -> 20s -> 40s
-> 60s, max 5 attempts) for fil_system.space_list iteration. After
~135 seconds, switch to indefinite wait. This periodically rechecks
for spaces that become available after DDL completes.

2. Use indefinite wait for default_encrypt_list iteration since other
threads will retry and wake when needed.

3. fil_space_t::next(): Add a default_encrypt_list flag to distinguish
between the two iteration modes. Wake other threads only when this flag
is true (spaces exist but are unacquirable).

4. Move IOPS allocation from before tablespace search to after finding a
space that needs rotation.

5. Handle wake logic explicitly at call site based on default_encrypt_list flag.

rotate_thread_t changes
------------------------
- default_encrypt_list (bool): Indicates if default_encrypt_list has unacquirable spaces
- timed_wait_count (uint8_t): Counts consecutive timeouts for exponential backoff
- sleep_timeout_ms (uint16_t): Current timeout in ms (5s -> 60s max)
- wait_for_work(): Implements timed/indefinite wait based on iteration mode
- increase_sleep_timeout(): Doubles timeout up to 60s
- reset_sleep_timeout(): Resets timeout to 5s and clears count
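The backoff schedule described above can be sketched as a small standalone struct. Field and method names follow the commit message, but this is a simplified illustration, not the real rotate_thread_t in fil0crypt.cc:

```cpp
#include <cstdint>

// Sketch of the timed-wait backoff: the timeout doubles
// 5s -> 10s -> 20s -> 40s -> 60s (capped), and after 5 timed waits
// (~135 seconds total) the thread switches to an indefinite wait.
struct rotate_backoff
{
  static constexpr uint32_t min_ms = 5000, max_ms = 60000;
  static constexpr uint8_t  max_timed_waits = 5;

  uint8_t  timed_wait_count = 0;
  uint32_t sleep_timeout_ms = min_ms;

  // returns true while a timed wait should still be used
  bool next_wait_is_timed()
  {
    if (timed_wait_count >= max_timed_waits)
      return false;                     // indefinite wait from now on
    ++timed_wait_count;
    return true;
  }

  void increase_sleep_timeout()         // double the timeout, capped at 60s
  {
    sleep_timeout_ms = sleep_timeout_ms * 2 > max_ms ? max_ms
                                                     : sleep_timeout_ms * 2;
  }

  void reset()                          // work found: start over at 5s
  {
    timed_wait_count = 0;
    sleep_timeout_ms = min_ms;
  }
};
```
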
rusher
[misc] Add TEST_MAXSCALE_TLS_PORT fallback for ssl_port environment variable
rusher
[misc] Check use_ssl option when determining MaxScale SSL port
Daniel Black
MDEV-26814: UBSAN: offset to nullptr in JSON_ARRAY_INSERT

SELECT JSON_ARRAY_INSERT(0, NULL, 1); triggered a UBSAN error.
The specification of JSON_ARRAY_INSERT says it should return NULL if any
argument is NULL.

For SQL NULL, Item_null::val_str returns a nullptr, so check for this
and then return a NULL value.
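The NULL-propagation rule can be sketched as below. The function name and signature are illustrative only, standing in for the real item function that receives argument string values:

```cpp
#include <cstddef>

// Minimal sketch: if any argument's string value is a null pointer
// (SQL NULL), the result is SQL NULL and no pointer arithmetic is
// attempted on the missing value.
static const char *json_array_insert_sketch(const char *json,
                                            const char *path,
                                            bool *is_null)
{
  if (!json || !path)
  {
    *is_null = true;              // any NULL argument => NULL result
    return nullptr;
  }
  *is_null = false;
  return json;                    // real insertion logic elided
}
```
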
ParadoxV5
Cherry-pick MDEV-39240 to 11.4-merge-attempt

This was *almost* accidentally null-merged
in the wrong branch due to miscommunication.

to be squashed into the merge-to-11.4 commit
Daniel Black
MDEV-36451: blackhole float-cast-overflow

As the UBSAN error shows, the evaluation of best_access_path in the
optimizer was using -nan as its worst_seeks value. This did not cast to
an integer for the rows estimate value, resulting in the UBSAN error.

The blackhole engine had a worst_seeks derived from read_time (the same
value). This was derived in the default handler::scan_time as the
stats.data_file_length / stats.block_size expression, where both were 0.

Corrected this by giving the default handler::scan_time an implementation
that just returns 0 when stats.block_size is 0, to avoid returning NaN
values for all storage engines that leave their stats block_size as 0,
including blackhole.
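The guard can be sketched as a free function; this is a simplification of the real handler::scan_time method, which reads these values from the handler's stats member:

```cpp
#include <cstdint>

// Sketch of the division guard: dividing by a zero block size yields
// NaN/inf in floating point, and the later float-to-integer cast of that
// value is undefined behaviour (the UBSAN report).  Returning 0 when the
// engine never filled in its stats avoids producing NaN at the source.
static double scan_time_sketch(uint64_t data_file_length, uint32_t block_size)
{
  if (block_size == 0)            // engine left stats empty: treat as no data
    return 0.0;
  return static_cast<double>(data_file_length) / block_size;
}
```
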
ayush-jha123
MDEV-38010: Master & relay log info files ignore trailing garbage in numeric lines

This patch fixes an issue where Int_IO_CACHE::from_chars stops parsing at the
first invalid character but fails to consume the remainder of the line. This
caused trailing garbage on a numeric field (like Master_Port) to be interpreted
as the value for the subsequent field.

The fix introduces a strict validation helper is_string_blank_or_empty which
ensures that only whitespace or control characters follow the parsed numeric
value. The init_*_from_file functions now zero-initialize variables, perform
error checking immediately after string conversion, and safely reject files with
trailing garbage.

The test master_info_numeric_validation has been updated to use --move_file
for robust backup and restoration of the master.info file.
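The validation idea can be sketched as follows. The helper name is taken from the commit message, but the surrounding parse function is an illustration with strtol rather than the real IO_CACHE-based parser:

```cpp
#include <cctype>
#include <cstdlib>

// Sketch: after the numeric prefix of a line is consumed, only
// whitespace or control characters may remain; anything else is
// trailing garbage and the whole value is rejected, instead of being
// silently carried over into the next field.
static bool is_string_blank_or_empty(const char *p)
{
  for (; *p; p++)
    if (!isspace(static_cast<unsigned char>(*p)) &&
        !iscntrl(static_cast<unsigned char>(*p)))
      return false;
  return true;
}

static bool parse_port_strict(const char *line, long *out)
{
  char *end;
  *out = strtol(line, &end, 10);
  if (end == line)                       // no digits parsed at all
    return false;
  return is_string_blank_or_empty(end);  // reject trailing garbage
}
```
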
Georgi (Joro) Kodinov
MDEV-39456: Describe the external contributions process in more details

Added a new .md document describing the community contribution process.
Added a reference to it from the CONTRIBUTING.md.
rusher
[CI] Add matrix entry that builds against latest C/C 3.3 branch

The CI normally builds against the libmariadb submodule pinned in the
parent tree. Add an additional matrix entry (Ubuntu, MariaDB 11.4) that
overrides the submodule to the tip of the C/C 3.3 branch before cmake,
so we get early signal on upstream changes (e.g. CONC-812) instead of
discovering them only when the submodule pointer is bumped.

The override is keyed off matrix.libmariadb-branch and is a no-op for
all existing matrix entries (no behavior change for default builds).
rusher
[misc] Fix operator precedence in IS_MAXSCALE_ENV macro
Thirunarayanan Balathandayuthapani
MDEV-34358: Add debug status variables to verify encryption thread wait behavior

Added debug-only status variables Innodb_encryption_timed_waits
and Innodb_encryption_indefinite_waits to track encryption thread
wait types, enabling verification that threads use timed wait with
exponential backoff instead of busy-waiting when idle.

The counters are incremented in rotate_thread_t::wait_for_work()
and exposed via SHOW STATUS in debug builds only. Also added a
debug sync point 'rotate_only_2_timed_waits' to reduce the timed
wait threshold from 5 to 2 for faster testing of the indefinite
wait transition.
Sergei Golubchik
MDEV-32745 followup for 7828fb475b0

* add a test for new --cat_file feature
* fix off-by-one error in --cat_file lines limit
* fix broken INCLUDE_DIRECTORIES in extra/
* fix typos
* remove mtr-specific workaround from the tool, solve it in the test
* remove dead code
* fixed memory leak in mariadb-migration-config-file [client-server] section
* fixed memory leak in mariadbd --character_set_server=xxx
  (execution time for main.mariadb-migrate-config-file went down
  from 150s to 2s)
* moved tool header to the tool dir
* avoid triple-initialization of plugins
* disable tool builds in ASAN builds (because of RocksDB)
rusher
[misc] Add matrix entry that builds against latest C/C 3.3 branch
Thirunarayanan Balathandayuthapani
MDEV-34358  Detect config changes during encryption iteration

Problem:
=======
When innodb_encrypt_tables or innodb_encryption_rotate_key_age is
changed during encryption thread iteration, threads continue with
stale configuration values, potentially missing tablespaces that
should be encrypted or rotated under the new settings.

Solution:
========
Added atomic version counter fil_crypt_settings_version that is
incremented whenever innodb_encrypt_tables or
innodb_encryption_rotate_key_age changes. Encryption threads capture
the version at iteration start and check for changes during iteration.
If config changed, threads immediately restart iteration from the
beginning to ensure complete coverage with new settings.

fil_crypt_settings_version: Atomic counter to track the
innodb_encrypt_tables or innodb_encryption_rotate_key_age changes

rotate_thread_t::settings_version: snapshot of
fil_crypt_settings_version, compared against the current value to
decide whether to restart the encryption iteration from the beginning
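The version-counter handshake can be sketched as below. The variable names follow the commit message, but this is a standalone illustration, not the actual fil0crypt.cc code:

```cpp
#include <atomic>

// Sketch: the sysvar update hooks bump a global atomic counter; each
// encryption thread snapshots it at the start of an iteration and
// restarts from the beginning if it changed mid-iteration, ensuring
// complete coverage under the new settings.
static std::atomic<unsigned> fil_crypt_settings_version{0};

// called whenever innodb_encrypt_tables or
// innodb_encryption_rotate_key_age changes
static void settings_changed()
{
  fil_crypt_settings_version.fetch_add(1, std::memory_order_relaxed);
}

struct rotate_thread_sketch
{
  unsigned settings_version = 0;

  // snapshot the global counter at iteration start
  void capture() { settings_version = fil_crypt_settings_version.load(); }

  // true if the configuration changed since capture()
  bool must_restart() const
  { return settings_version != fil_crypt_settings_version.load(); }
};
```
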
sjaakola
MDEV-34784 unhandled FK dependency with DML vs DDL

Certain DDL statements (e.g. ALTER TABLE) require innodb table lock
on tables having foreign key constraint reference to the table
under DDL execution. This dependency is not added in write set
key information. However, tables being referenced to will be added
in the key information, so the table locking domain of DDL is only
partially recorded.

One harmful consequence of this missing dependency information happens
when a DML modifies a FK child table's row, which has NULL in the FK
referencing column. In such situation, the FK reference cannot be followed
during DDL execution, and there will be no FK parent table keys recorded
in the write set. Parallel applying (or multi-master access) of such DML
and DDL on the FK parent table will cause applying conflicts.

This scenario is presented in a new mtr test added in this commit.
The commit has a fix for the DDL FK dependency handling by adding all FK
child table names in the write set key information.
The commit also fixes innodb lock0lock.cc error logging to report
lock conflicts of table and record locks correctly.
rusher
[misc] add maxscale testing to CI
rusher
[misc] Skip tests when running in MaxScale environment
Sergei Golubchik
MDEV-32745 followup for 7828fb475b0

* add a test for new --cat_file feature
* fix off-by-one error in --cat_file lines limit
* fix broken INCLUDE_DIRECTORIES in extra/
* fix typos
* remove mtr-specific workaround from the tool, solve it in the test
* remove dead code
* fixed memory leak in mariadb-migration-config-file [client-server] section
* fixed memory leak in mariadbd --character_set_server=xxx
  (execution time for main.mariadb-migrate-config-file went down
  from 150s to 2s)
* moved tool header to the tool dir
* avoid triple-initialization of plugins
* disable tool builds in ASAN builds (because of RocksDB)
* compilation failure on x86 (signedness)
Sergei Golubchik
cleanup: sphinx tests

run sphinx tests even if no sphinx is installed
require installed sphinx per-test, not for the whole suite,
in case there are tests that can run without it