
Console View


PranavKTiwari
MDEV-38839: Fix assertion in close_thread_tables on CREATE TABLE...SELECT FOR UPDATE with MyISAM temp table in MIXED binlog mode

FOR UPDATE on a MyISAM table acquires TL_WRITE due to lack of
row-level locking. In decide_logging_format(), this caused
STMT_WRITES_TEMP_NON_TRANS_TABLE to be set for a temp table that
was only being read, not written to. This incorrectly set
MODIFIED_NON_TRANS_TABLE via mark_modified_non_trans_temp_table()
in MYSQL_BIN_LOG::write(), which blocked binlog_truncate_trx_cache()
in MIXED mode, leaving row events stranded in the cache and
triggering the assertion in close_thread_tables().

Fix: In decide_logging_format(), gate the write flag assignment on
tbl->updating, which is false for FOR UPDATE (read-only access with
write lock) and true for genuine writes (INSERT/UPDATE/DELETE).
Sequences are handled separately via tbl->sequence since
SELECT NEXT VALUE genuinely modifies the sequence table internally
but has tbl->updating=false.

Note: SELECT...FOR UPDATE locking is not replicated to the slave.
In MIXED mode the statement is logged as row events representing
the resulting data changes. The slave replays only those row events
so FOR UPDATE locking semantics are irrelevant on the slave and
are not affected by this fix.
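The gating described above can be sketched as a small decision function. This is a hedged Python model, not the server's C++ code; the dict keys are illustrative stand-ins for TABLE_LIST fields such as tbl->updating and tbl->sequence.

```python
# Python model of the decide_logging_format() gating described above.
# The real logic is C++ inside MariaDB; these dict keys are illustrative.

def stmt_writes_temp_non_trans_table(tbl):
    """Set the write flag only for genuine writes to a non-transactional
    temp table, or for sequences (SELECT NEXT VALUE modifies the sequence
    table internally even though updating is false)."""
    if not (tbl["temporary"] and tbl["non_transactional"]):
        return False
    # FOR UPDATE on MyISAM takes TL_WRITE but leaves updating False,
    # so a read-only access no longer marks the table as written.
    return tbl["updating"] or tbl["sequence"]

for_update_read = {"temporary": True, "non_transactional": True,
                   "updating": False, "sequence": False}
genuine_insert = dict(for_update_read, updating=True)
next_value = dict(for_update_read, sequence=True)
```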
Oleksandr Byelkin
Merge branch 'bb-10.11-release' into bb-11.4-release
Oleg Smirnov
The patch extends START TRANSACTION WITH CONSISTENT SNAPSHOT with the
optional FROM SESSION clause:

START TRANSACTION WITH CONSISTENT SNAPSHOT FROM SESSION <session_id>;

When specified, all participating storage engines, instead of creating
a new snapshot of the data, create a copy of the snapshot that was
created by an active transaction in the specified session.
session_id is the session identifier reported in the Id column of
SHOW PROCESSLIST. The CONNECTION_ID() function returns the identifier
of the current session.

Currently snapshot cloning is only supported by InnoDB. As with
the regular START TRANSACTION WITH CONSISTENT SNAPSHOT, snapshot clones
can only be created with the REPEATABLE READ isolation level.

For InnoDB, a transaction with a cloned snapshot will only see data
visible or changed by the donor transaction. That is, the cloned
transaction will see no changes committed by transactions that started
after the donor transaction, not even changes made by itself. Note that
in case of chained cloning the donor transaction is the first one in the
chain. For example, if transaction A is cloned into transaction B, which
is in turn cloned into transaction C, the latter will have read view
from transaction A (i.e. the donor transaction). Therefore, it will see
changes made by transaction A, but not by transaction B.
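The chained-cloning semantics above can be modeled in a few lines. This is a toy Python sketch of read-view copying, not InnoDB's implementation; all names are illustrative.

```python
# Toy model of InnoDB read-view cloning: a clone copies the donor's
# read view verbatim, so chained clones all share the first donor's view.

class Txn:
    def __init__(self, snapshot, own=frozenset()):
        # 'snapshot': commits visible when the view was created;
        # 'own': the donor's own changes, always visible to its view.
        self.view = frozenset(snapshot) | frozenset(own)

    def clone(self):
        # FROM SESSION: copy the donor's view instead of snapshotting.
        return Txn(self.view)

    def sees(self, change):
        return change in self.view

a = Txn({"before_A"}, own={"by_A"})   # donor transaction A
b = a.clone()                          # B clones A
c = b.clone()                          # C clones B, but inherits A's view
# Changes B makes ("by_B") never enter the cloned view, so C sees
# A's changes but not B's, as in the chained example above.
```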

New server variables
--------------------

have_snapshot_cloning

This server variable helps other utilities detect whether the server
supports the FROM SESSION extension. When available, the
snapshot cloning feature and the syntax extension to START TRANSACTION
WITH CONSISTENT SNAPSHOT are supported by the server, and the variable
value is always YES.

The patch is based on commit 934b5f8 of Percona Server for MySQL:
https://docs.percona.com/percona-server/8.0/start-transaction-with-consistent-snapshot.html#snapshot-cloning
Yuchen Pei
MDEV-15621 [wip] tmp
Oleksandr Byelkin
Merge branch 'bb-10.11-release' into bb-11.4-release
Sergei Golubchik
MDEV-39481 ASAN error on malformed WKB polygon

let's make it difficult for wkb and len to desync
Alexander Barkov
MDEV-39546 Unclear error message on OPEN strict_cursor FOR 'stmt'

This statement:
  OPEN strict_cursor_variable FOR 'SELECT ...';
caused an unclear error message:

ERROR 4078: Illegal parameter data types row<2> and row<0> for operation 'OPEN..FOR'

Fixing the error to be clearer:
ERROR HY000: Incorrect usage of c0 and OPEN..FOR <dynamic string>

Note, this statement is not supported:
  OPEN strict_cursor_variable FOR 'SELECT ...';

It can be rewritten:
- either to use a SELECT statement instead of the dynamic string:
    OPEN strict_cursor_variable FOR SELECT ...;
- or to make c0 a weak cursor variable (i.e. without the RETURN clause):
    OPEN weak_cursor_variable FOR 'SELECT ...';
Marko Mäkelä
MDEV-32115: Log checkpoint race with wsrep_sst_method=rsync

Galera snapshot transfer (SST) using the default wsrep_sst_method=rsync
is prone to creating corrupted snapshots. The probability for this is
rather low and might only affect installations that include
ENGINE=InnoDB tables that contain FULLTEXT INDEX.

The function sst_disable_innodb_writes() aims to disable all InnoDB writes
during the time a snapshot transfer (SST) is in progress using the
default wsrep_sst_method=rsync.

The logic based on invoking log_make_checkpoint() almost works, except
for two things: we failed to ensure that fts_optimize_callback() has
stopped executing, and we did not block updates of the log checkpoint
header.

log_checkpoint_low(): Assert that writes to the log are allowed.

buf_flush_page_cleaner(): Do not try to advance the checkpoint while
wsrep_sst_method=rsync is in progress. This prevents the assertion
in log_checkpoint_low() from failing.

fts_optimize_pause(), fts_optimize_resume(): Pause and resume the
fts_optimize_callback().

sst_disable_innodb_writes(): Disable all background writers
before initiating the log checkpoint.

fts_optimize_callback(): Assert that wsrep_sst_method=rsync is not
active, and remove the previous incorrect attempt at fixing this race.
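The pause/resume pattern added here can be sketched with a simple gate. This is an illustrative Python model, not the InnoDB code; it shows a background worker that blocks between units of work while paused, which is the property the SST code needs before disabling InnoDB writes.

```python
# Sketch of fts_optimize_pause()/fts_optimize_resume()-style gating,
# modeled with a threading.Event. Purely illustrative.
import threading

class BackgroundTask:
    def __init__(self):
        self._resumed = threading.Event()
        self._resumed.set()            # running by default

    def pause(self):                   # cf. fts_optimize_pause()
        self._resumed.clear()

    def resume(self):                  # cf. fts_optimize_resume()
        self._resumed.set()

    def step(self, work):
        # The worker checks the gate before every unit of work, so a
        # pause reliably takes effect between units.
        self._resumed.wait()
        return work()
```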
Alexander Barkov
MDEV-39022 Add `LOCAL spvar` syntax for prepared statements and SYS_REFCURSORs

This patch adds the following syntax:

OPEN c0 FOR LOCAL spvar_with_ps_name;
PREPARE LOCAL spvar_with_ps_name FROM 'dynamic sql';
EXECUTE LOCAL spvar_with_ps_name;
DEALLOCATE PREPARE LOCAL spvar_with_ps_name;

OPEN c0 FOR PREPARE stmt;
bsrikanth-mariadb
MDEV-39405: store all the plugin-engines optimizer costs

store all the plugin-engines optimizer costs into the context, so that
the replay server uses the same engine costs while computing the query cost
bsrikanth-mariadb
MDEV-39405: store all the plugin-engines optimizer costs

store all the plugin-engines optimizer costs into the context, so that
the replay server uses the same engine costs while computing the query cost

Also, fix warnings in mysqltest.cc
Yuchen Pei
MDEV-15621 [wip] Auto add partitions using RANGE COLUMN path

Following test works:

set timestamp= unix_timestamp('2026-05-02 00:00:00');
create table t1 (c datetime)
PARTITION BY RANGE COLUMNS (c)
INTERVAL 1 Day
(
PARTITION p0 VALUES LESS THAN ("2026-04-01")
);
select @@timestamp, current_time(6);
@@timestamp current_time(6)
1777644000.000000 00:00:00.000000
insert into t1 values ('2026-05-01');
show create table t1;
Table Create Table
t1 CREATE TABLE `t1` (
  `c` datetime DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci
PARTITION BY RANGE  COLUMNS(`c`)INTERVAL 1 DAY
(PARTITION `p0` VALUES LESS THAN ('2026-04-01') ENGINE = MyISAM,
PARTITION `p1` VALUES LESS THAN ('2026-05-02 00:00:00') ENGINE = MyISAM)
select partition_name, partition_method, partition_expression, partition_description, table_rows from information_schema.partitions where table_name='t1';
partition_name partition_method partition_expression partition_description table_rows
p0 RANGE COLUMNS `c` '2026-04-01' 0
p1 RANGE COLUMNS `c` '2026-05-02 00:00:00' 1
set timestamp= unix_timestamp('2026-05-06 00:00:00');
insert into t1 values ('2026-05-04');
show create table t1;
Table Create Table
t1 CREATE TABLE `t1` (
  `c` datetime DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci
PARTITION BY RANGE  COLUMNS(`c`)INTERVAL 1 DAY
(PARTITION `p0` VALUES LESS THAN ('2026-04-01') ENGINE = MyISAM,
PARTITION `p1` VALUES LESS THAN ('2026-05-02 00:00:00') ENGINE = MyISAM,
PARTITION `p2` VALUES LESS THAN ('2026-05-06 00:00:00') ENGINE = MyISAM)
select partition_name, partition_method, partition_expression, partition_description, table_rows from information_schema.partitions where table_name='t1';
partition_name partition_method partition_expression partition_description table_rows
p0 RANGE COLUMNS `c` '2026-04-01' 0
p1 RANGE COLUMNS `c` '2026-05-02 00:00:00' 1
p2 RANGE COLUMNS `c` '2026-05-06 00:00:00' 1
drop table t1;
Oleksandr Byelkin
Merge branch '10.6' into bb-10.11-release
Oleksandr Byelkin
fix
Daniel Black
MDEV-39523 UBSAN on ST_COLLECT (has_cached_value)

UBSAN reported "load of value 165, which is not a valid value for
type 'bool'" on SELECT ST_COLLECT.

This was Item_func_collect::val_str reading has_cached_value before it
had ever been set. The member variable was assigned true later in this
function once a value had been cached, but it was never initialized to
false in the constructor.

We copy the cached value if the Item_func_collect is copied.
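The fix pattern is easy to model: initialize the flag in the constructor and let the cached value travel on copy. A Python sketch follows (illustrative, not the MariaDB class):

```python
# Model of the has_cached_value fix: the flag starts as False (the
# missing initialization) and the cached value is copied with the item.
import copy

class CollectFunc:
    def __init__(self):
        self.has_cached_value = False   # was left uninitialized in C++
        self.cached_value = None

    def val(self, compute):
        if not self.has_cached_value:
            self.cached_value = compute()
            self.has_cached_value = True
        return self.cached_value

    def clone(self):
        # Copying the item keeps the cached value and its flag in sync.
        return copy.copy(self)
```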
Oleksandr Byelkin
Merge branch '10.11' into bb-10.11-release
Mohammad Tafzeel Shams
MDEV-37467: InnoDB Instant ALTER TABLE is not crash safe

Instant ALTER TABLE metadata record includes externally stored
BLOB metadata. The existing BLOB storage path in
btr_store_big_rec_extern_fields() does not perform record insertion,
BLOB write, and field reference update atomically. A crash in between
these steps can leave metadata records in an inconsistent state.

Make metadata BLOB storage atomic by writing all BLOB pages using
the same mini-transaction as the metadata record.

The BLOBs are fully written before the mtr commits, ensuring that
either the complete metadata record, including its BLOBs, is visible,
or none of it is.

- btr_store_big_rec_extern_metadata_record():
  New helper to store metadata BLOBs using the caller’s mtr. It
  allocates and links BLOB pages and updates field references
  without committing the mtr.

- btr_store_big_rec_extern_fields():
  Detect metadata records via rec_is_metadata() and dispatch to
  the new metadata-specific storage path.

- row_ins_index_entry_big_rec_regular():
  Renamed from row_ins_index_entry_big_rec() to clarify it handles
  non-metadata (regular) records.

- row_ins_index_entry_big_rec_metadata():
  New function to write metadata BLOBs immediately after inserting
  the clustered record, using the same active mtr.

- row_ins_clust_index_entry_low():
  For metadata records, store BLOBs before mtr.commit(). Regular
  records retain the existing post-commit BLOB handling.
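The all-or-nothing property can be illustrated with a toy mini-transaction that buffers writes until commit. This is a conceptual Python sketch only; InnoDB's mtr is far more involved.

```python
# Toy mini-transaction: writes are buffered and become visible only at
# commit, so a crash before commit leaves no partial metadata record.

class Mtr:
    def __init__(self, store):
        self.store = store
        self.pending = []

    def write(self, key, value):
        self.pending.append((key, value))   # buffered, not yet visible

    def commit(self):
        for k, v in self.pending:           # applied as one unit
            self.store[k] = v
        self.pending.clear()

store = {}
mtr = Mtr(store)
mtr.write("metadata_rec", "v1")             # the metadata record
mtr.write("blob_page_1", b"blob bytes")     # its BLOB pages, same mtr
snapshot_before_commit = dict(store)        # what a crash here would leave
mtr.commit()
```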
Oleg Smirnov
WIP: clone consistent snapshot
Oleksandr Byelkin
Merge branch 'bb-11.4-release' into bb-11.8-release
Yuchen Pei
MDEV-15621 [wip] Stubbing
ParadoxV5
MDEV-39418: Revert most of MDEV-39240 for MDEV-32188

MDEV-39240 fixed how servers before 11.5/11.4-enterprise
accepted timestamps beyond Year 2038 from row-based replication,
which were invalid until 11.5/11.4-enterprise’s MDEV-32188.
MDEV-39240 does not apply after MDEV-32188 extended the valid range,
so those versions should exclude this fix,
as if MDEV-32188 already covers it.

This commit reverts commits 3234045953 and most of f9c34a1442, keeping
only the tweak to the MTR script `include/check_type` for consistency.
Oleksandr Byelkin
fix
Dave Gosselin
MDEV-39494: UBSAN error on division by zero.

An incorrectly backported test from 11.x revealed an UBSAN error in 10.11, so
fix that problem by preventing a division-by-zero from happening.

Remove the other incorrectly backported tests and relabel the retained test
in terms of the current ticket.
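The shape of such a fix is a guard before the division. A hedged Python sketch (the actual function and variable names in the server differ):

```python
# Guarding a division by zero: the pattern behind this kind of UBSAN fix.
def safe_ratio(numer, denom, fallback=0.0):
    # Division by zero is undefined behavior in C/C++; check first.
    return numer / denom if denom != 0 else fallback
```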
bsrikanth-mariadb
MDEV-39412: parse error reading tabs in ranges

Note:
while reading from information_schema.optimizer_context, one level of
unescaping is already done, i.e. \\t becomes \t and \\\\t becomes \\t.

With respect to the MDEV, there are two problems:

1.
When reading from the SQL script file, the JSON parser is not able to
parse the range value in json_read_value() from json_lib.c:
"ranges": [
            "(b\t\t\t\t\t\t) <= (b) <= (b???????)"
          ],
mainly because of the \t\t sequences, and hence a warning.
This also stops loading the context into memory.
Since a new table is created with empty data and without context,
we get Impossible WHERE noticed after reading const tables.
2.
2.
An unescaping call is made in read_string() from sql_json_lib.cc while
parsing the context. With this, \\t was becoming \t.
However, print_range() from opt_range.cc already escapes the values:
the value "b\t\t\t" was in fact produced as "\b\\t\\t\\t".
Later, when we compare range values from the query and the context,
a mismatch is found because one side was escaped while the other had
its escaping removed.

Solutions
=========
For Problem 1: add escaping for ranges. This should be done while
dumping range values into the context.

For Problem 2: remove the unescaping call in read_string().
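The symmetry the fix restores can be modeled directly: escape once when dumping, unescape exactly once when reading, and the round trip preserves tabs and backslashes. A Python sketch (illustrative, not the server's json_lib code):

```python
# Model of the escape/unescape symmetry for range values.

def escape(s):
    # Done when dumping ranges into the context (fix for Problem 1):
    # backslashes first, then tabs, so the output never contains a raw tab.
    return s.replace("\\", "\\\\").replace("\t", "\\t")

def unescape(s):
    # Exactly one level of unescaping, scanning left to right so an
    # escaped backslash is never re-read as the start of a "\t" sequence.
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append("\t" if s[i + 1] == "t" else s[i + 1])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

raw = "(b\t\t\t) <= (b)"
dumped = escape(raw)    # safe for the JSON parser: no raw tabs remain
```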
Sergei Golubchik
MDEV-39540 crash due to narrowing cast in update_ref_and_keys()
Alexander Barkov
MDEV-39546 Unclear error message on OPEN strict_cursor FOR 'stmt'
Sergei Golubchik
MDEV-39516 s3 curl_easy_setopt requires long values otherwise compile failure

update submodule to compile on fc44
Jan Lindström
MDEV-39413 wsrep unsafe handling of parameters

When the server is started with an unsafe parameter, --exec could also
return error 134. Furthermore, --exec pkill could return error 15.
Added these error codes as accepted.
Hemant Dangi
MDEV-36621: galera.GCF-360 test: IST failure

Removed redundant per-node wsrep_provider_options overrides that
were dropping all timeout settings.
Alessandro Vetere
MDEV-38814 Reduce pessimistic update fallbacks

A high rate of index lock SX-to-X upgrades was traced to two patterns in
btr_cur_*_update() that turn benign UPDATEs into pessimistic fallbacks:

1. btr_cur_optimistic_update() returns DB_UNDERFLOW whenever the page
    would drop below BTR_CUR_PAGE_COMPRESS_LIMIT after the
    delete-then-insert, even when the record itself is growing. On a
    freshly split page this re-triggers a merge the moment the next
    update lands, defeating the split that just happened.

2. btr_cur_pessimistic_update() handles the DB_OVERFLOW fallback by
    calling btr_cur_insert_if_possible() unconditionally. When the
    uncompressed page cannot satisfy BTR_CUR_PAGE_REORGANIZE_LIMIT after
    a reorganize, that retry fails too and the same page churns through
    pessimistic_update without ever splitting.

Introduce a system variable innodb_reduce_pessimistic_update_fallbacks
(BOOL, default OFF) that gates both fixes:

* btr_cur_optimistic_update() returns DB_UNDERFLOW only when the new
  record is strictly smaller than the old one, so growing updates do
  not trip the merge path.
* btr_cur_pessimistic_update() skips btr_cur_insert_if_possible() and
  falls through to a page split when the optimistic error was
  DB_OVERFLOW, the page is uncompressed with at least two records, the
  record is growing, and a reorganize would not free
  BTR_CUR_PAGE_REORGANIZE_LIMIT bytes.

Same-size updates take the optimistic path under both fixes (no merge,
no split), matching the intent that only size-changing updates
contribute to space pressure. The trade-off is that an opportunistic
merge that previously triggered on any update to a sparse page now
requires an actual shrink to fire.
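The new DB_UNDERFLOW gate can be sketched as a pure decision function. The names and the constant below are illustrative stand-ins for the C++ code and BTR_CUR_PAGE_COMPRESS_LIMIT, not the actual implementation:

```python
# Model of the gated optimistic-update decision described above.
PAGE_COMPRESS_LIMIT = 1024   # stand-in for BTR_CUR_PAGE_COMPRESS_LIMIT

def optimistic_update_result(page_fill_after, old_rec_size, new_rec_size,
                             reduce_fallbacks):
    underflow = page_fill_after < PAGE_COMPRESS_LIMIT
    if not reduce_fallbacks:
        # Legacy behavior: any update on a sparse page trips the merge path.
        return "DB_UNDERFLOW" if underflow else "DB_SUCCESS"
    # With the fix: only a strictly shrinking record may trigger the merge,
    # so growing (and same-size) updates stay on the optimistic path.
    if underflow and new_rec_size < old_rec_size:
        return "DB_UNDERFLOW"
    return "DB_SUCCESS"
```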

To make the impact measurable, expose seven debug-only atomic counters
via SHOW GLOBAL STATUS:

  Innodb_btr_cur_n_index_lock_upgrades
  Innodb_btr_cur_pessimistic_insert_calls
  Innodb_btr_cur_pessimistic_update_calls
  Innodb_btr_cur_pessimistic_delete_calls
  Innodb_btr_cur_pessimistic_update_optim_err_underflows
  Innodb_btr_cur_pessimistic_update_optim_err_overflows
  Innodb_mtr_n_index_x_lock_calls

A new test, innodb.index_lock_upgrade, drives a 1000-row INSERT /
UPDATE / DELETE workload on a 4K-page table across three shapes (PK
only; PK + secondary index on a DATETIME column; same with
ROW_FORMAT=COMPRESSED, KEY_BLOCK_SIZE=2). It snapshots all seven
counters plus the index_page_* INNODB_METRICS between phases. The
test runs in two combinations (`off` and `on`) to lock in counter
deltas for both the legacy and the optimized paths; the compressed
table also covers the deliberate scope limitation that the
pessimistic-side branch is gated to uncompressed pages while the
optimistic-side change applies uniformly.