
Console View


Monty
Added --debug-dbug option to mysqltest.cc

This was to get rid of warnings when using mtr --debug
Marko Mäkelä
fixup! 84449dffd0cb69febab7f6698f74e67634bee4ca
HNOONa-0
MDEV-35369 Add deprecation columns to system_variables

Adds columns:
* IS_DEPRECATED, a YES/NO value showing whether the variable is deprecated
* DEPRECATED_REPLACEMENT, showing how the server replaces this variable
Marko Mäkelä
Fix encryption.doublewrite_debug
Alexander Barkov
MDEV-10152 Add support for TYPE .. IS REF CURSOR

Adding support for the strict cursor data types:

Example 1a:
  TYPE rec0_t IS RECORD (a INT, b VARCHAR(10));
  TYPE cur0_t IS REF CURSOR RETURN rec0_t;

Example 1b:
  TYPE rec0_t IS RECORD (a t1.a%TYPE, b t1.b%TYPE);
  TYPE cur0_t IS REF CURSOR RETURN rec0_t;

Example 1c:
  TYPE rec0_t IS RECORD (a INT, b VARCHAR(10));
  r0 rec0_t;
  TYPE cur0_t IS REF CURSOR RETURN r0%TYPE;

Example 1d:
  TYPE rec0_t IS RECORD (a t1.a%TYPE, b t1.b%TYPE);
  r0 rec0_t;
  TYPE cur0_t IS REF CURSOR RETURN r0%TYPE;

Example 2a:
  TYPE cur0_t IS REF CURSOR RETURN t1%ROWTYPE; -- t1 is a table

Example 2b:
  r0 t1%ROWTYPE;
  TYPE cur0_t IS REF CURSOR RETURN r0%TYPE;

Example 3a:
  CURSOR cursor_sample IS SELECT a,b FROM t1;
  TYPE cur0_t IS REF CURSOR RETURN cursor_sample%ROWTYPE;

Example 3b:
  CURSOR cursor_sample IS SELECT a,b FROM t1;
  r0 cursor_sample%ROWTYPE;
  TYPE cur0_t IS REF CURSOR RETURN r0%TYPE;

If a cursor variable is declared with a RETURN clause then:
1. At OPEN time the data type of the SELECT list row is compared
  for compatibility with the cursor RETURN data type.
  The SELECT list row must be assignable to the RETURN type row.
  If assignability is not met, an error is raised.
  Assignability means:
  - The arity of the SELECT list must be equal to the arity
    of the RETURN clause
  - Every n-th field of the SELECT list must be assignable to the
    n-th field of the RETURN clause

2. At FETCH time, the data is fetched in two steps:
  a. On the first step the data is fetched into a virtual table
    with the row type described in the RETURN clause
  b. On the second step the data is copied from the virtual table
    to the target fetch list. Data type conversion can happen
    on this step.
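The OPEN-time assignability rules above can be sketched roughly as follows. This is a minimal illustration, not the server's code: FieldDef, rows_assignable and the numeric-compatibility shortcut are hypothetical simplifications of the real sp_cursor/Virtual_tmp_table checks.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch, not the actual MariaDB classes: a field is
// "assignable" here when its type name matches or both are numeric.
struct FieldDef {
  std::string type;   // e.g. "INT", "VARCHAR", "DECIMAL"
  bool is_numeric() const { return type == "INT" || type == "DECIMAL"; }
};

// Mirrors the two rules from the commit message: equal arity, and every
// n-th SELECT-list field assignable to the n-th RETURN-clause field.
bool rows_assignable(const std::vector<FieldDef> &select_list,
                     const std::vector<FieldDef> &return_row) {
  if (select_list.size() != return_row.size())
    return false;                       // arity mismatch: error at OPEN
  for (size_t i = 0; i < select_list.size(); i++) {
    const FieldDef &from = select_list[i], &to = return_row[i];
    bool ok = from.type == to.type ||
              (from.is_numeric() && to.is_numeric());
    if (!ok)
      return false;                     // field i is not assignable
  }
  return true;
}
```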

Change details:

Adding new methods:
- sp_cursor::check_assignability_to
- Virtual_tmp_table::check_assignability_from
- Virtual_tmp_table::sp_set_from_select_list
- Virtual_tmp_table::sp_save_in_vtable
- Virtual_tmp_table::sp_save_in_target_list
- LEX::check_ref_cursor_components
- LEX::make_sp_instr_copy_struct_for_last_context_variables
- LEX::declare_type_ref_cursor
- sp_cursor::Select_fetch_into_spvars::send_data_with_return_type

Adding new members:
- sp_instr_copen_by_ref::m_cursor_name
- Select_fetch_into_spvars::m_return_type
- Select_materialize::m_cursor_name
- Select_materialize::m_return_type

Adding new virtual methods:
- Item::resolve_spvar_cursor_rowtype
- Type_handler::Spvar_definition_resolve_type_refs
- Server_side_cursor::check_assignability_to
- Overriding Select_materialize::prepare to raise an error when the data
  type returned by the cursor is not compatible with the RETURN clause

Making these methods virtual:
- Field::check_assignability_from

Adding new classes:
- sp_type_def_ref
- RowTypeBuffer

Adding new constructors to:
- Spvar_definition

Adding new helper methods (e.g. to reuse code):
- Field::store_field_maybe_null
- ChanBuffer::append_ulonglong
- sp_pcontext::set_type_for_last_context_variables

Minor changes:
- Making TABLE::export_structure const
- Overriding Item_splocal::type_extra_attributes. It was forgotten in earlier changes.

Adding new error messages:
- ER_CANNOT_CAST_ON_IDENT1_ASSIGNMENT_FOR_OPERATION
- ER_CANNOT_CAST_ON_IDENT2_ASSIGNMENT_FOR_OPERATION
Monty
MDEV-23298 Assertion `table_list->prelocking_placeholder == TABLE_LIST::PRELOCK_NONE' failed in check_lock_and_start_stmt on CREATE OR REPLACE TABLE

Fixed by removing wrong assert

Review: Sanja Byelkin
Alexander Barkov
MDEV-38162 Refactoring: Change sp_type_def_composite2::m_def from Spvar_definition* to Spvar_definition

- Changing sp_type_def_composite2::m_def from  Spvar_definition* to Spvar_definition.
- Encapsulating it.

Rationale:

1. sp_type_def_composite2 was allocated in two steps:
  a. An sp_type_def_composite2 instance itself was allocated
  b. Two instances of Spvar_definition were allocated immediately after,
      and assigned to m_def[0] and m_def[1] respectively.
  Such a two-step allocation was not really needed.
  So this change moves the two instances of Spvar_definition right
  inside sp_type_def_composite2, so now it gets allocated with a single
  "new" instead of three "new" calls.

2. In the upcoming REF CURSOR change, sp_type_def_ref has an
  Spvar_definition member inside it (not a pointer to it).
  It's better to have sp_type_def_composite2 and sp_type_def_ref
  look closer to each other.
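The allocation change can be illustrated with a toy sketch (composite2_old and composite2_new are hypothetical stand-ins, not the real sp_type_def_composite2): embedding the two Spvar_definition members turns three `new` calls into one and lets the member be encapsulated behind an accessor.

```cpp
// Toy stand-in, not the real class.
struct Spvar_definition { int type_code = 0; };

// Before: three allocations (the object itself plus two
// heap-allocated Spvar_definition instances).
struct composite2_old {
  Spvar_definition *m_def[2];
  composite2_old() {
    m_def[0] = new Spvar_definition;
    m_def[1] = new Spvar_definition;
  }
  ~composite2_old() { delete m_def[0]; delete m_def[1]; }
};

// After: the two definitions live inside the object, so a single
// "new" allocates everything, and m_def is encapsulated.
class composite2_new {
  Spvar_definition m_def[2];
public:
  Spvar_definition &def(int i) { return m_def[i]; }
  const Spvar_definition &def(int i) const { return m_def[i]; }
};
```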
Monty
Cleanup binary logging API

- Add is_current_stmt_binlog_format_stmt()
- Replace !is_current_stmt_binlog_format_row() with
  is_current_stmt_binlog_format_stmt(). This is in preparation for using
  BINLOG_FORMAT_UNSPEC if no binary logging.
- Removed printing of temporary_tables info in
  reset_current_stmt_binlog_format_row() as this is not relevant anymore.
- Added testing of (thd->variables.option_bits & OPTION_BIN_LOG) when
  binlogging create procedure.
Dave Gosselin
MDEV-38934: part 2: ICP+reverse scan, range w/o max endp: handle no-matches

Test error handling in QUICK_SELECT_DESC::get_next() when ha_index_last()
returns an unexpected error for a descending scan not defining a max endpoint.
Sergei Petrunia
MDEV-38934: part 2: ICP+reverse scan, range w/o max endp: handle no-matches

The code in QUICK_SELECT_DESC::get_next() had the logic that

"if index_last() call has returned nothing, then the table must be empty
  so return HA_ERR_END_OF_FILE"

With ICP, index_last() may also return HA_ERR_END_OF_FILE when none of
the rows in the range matched the ICP condition. But there can be matching
rows in other ranges, so continue the scan to examine them.
Oleksandr Byelkin
Fix mac compilation for spider
Alexander Barkov
MDEV-38436 Remove Type_handler::Column_definition_fix_attributes()

The method Type_handler::Column_definition_fix_attributes() was
called in addition to Type_handler::Column_definition_set_attributes().

The code in the virtual implementations of Column_definition_fix_attributes()
has now been merged into the corresponding virtual implementations of
Column_definition_set_attributes().

After this change everything is done in Column_definition_set_attributes().

This makes the code clearer. The previous implementation was rather confusing.
For example, for temporal data types:
(1) After Column_definition_set_attributes():
  - Column_definition::length meant the fractional precision
  - Column_definition::decimals was set to 0
(2) After Column_definition_fix_attributes():
  - Column_definition::length means the full character length
  - Column_definition::decimals means the fractional precision

Now everything gets set to (2) right in Column_definition_set_attributes().
Dave Gosselin
MDEV-38649: Wrong result for range w/o min endpoint, ORDER BY DESC

QUICK_SELECT_DESC::get_next() sets a stale end range on the storage engine.
When checking the first range, it sets last_range to the first range and
then, if it cannot avoid descending scan, it sets the minimum endpoint on
the storage engine.  However, get_next() may decide that this range is not
a match after updating the minimum endpoint on the storage engine.  In this
case, it will continue to the next range.  When continuing, it needs to
reset not only the state of 'last_range' to point to the next range that
it's checking, but it also needs to clear the now-stale end range set on
the storage engine.  While this explanation covers the first and second
ranges, the issue at hand extends to the general case of ranges 1...N-1 and N.

Before this change and when get_next() decided that a range was not a
match, it would clear last_range at each loop continuation point.
Rather than clearing the minimum endpoint on the storage engine at
each continuation, we move all loop cleanup to for loop update clause.
This consolidates any logic that needs to be evaluated on loop
continuation to one place and reduces code duplication.

MySQL fixed this same problem at sha b65ca959efd6ec5369165b1849407318b4886634
with a different implementation.
Monty
MDEV-38865 Simplify testing if binary logging is enabled

In the MariaDB server most of the code uses different expressions to
test if binary logging should be done for the current query.

In many cases the current code does a lot of unneeded work before
noticing that binary logging is not needed, which affects performance.
For example, in some cases an early decision to prepare for binary
logging was based on current_stmt_binlog_format (which is by default
statement) without checking if binary logging was enabled.

There are also a lot of different variables that affect whether binary
logging should be used:
- (thd->variables.option_bits & OPTION_BIN_LOG)
- (thd->variables.option_bits & OPTION_BIN_TMP_LOG_OFF)
- WSREP(thd) && wsrep_emulate_bin_log && WSREP_PROVIDER_EXISTS_
- mysql_bin_log.is_open()
- xxx_binlog_local_stmt_filter()
- binlog_filter
- thd->variables.sql_log_bin
- thd->binlog_evt_union.do_union

The goal is to move all states to a single variable (for performance,
simplicity and easier debugging). We should ignore all possible
extra work if binary logging is not needed for a particular query.

We should use the 'original state', from the start of the query, to
test if mysql_bin_log.is_open() or if wsrep_emulate_bin_log is set.
The reason for this is that all these tests are done without any mutex
and can have side effects if the values change during query execution.
The testing of the above conditions should be done under a mutex when
deciding if we should copy the binary log changes to the real binary
log.

In most cases we can now use one of the following functions to check if
binary logging is needed.  We need different versions to be able
to shortcut code if binary logging is not enabled (with different
code paths depending on whether Galera is enabled).

- binlog_ready()
- binlog_ready_no_wsrep()
- binlog_ready_precheck()
- binlog_ready_later()

The state of binary logging is stored in thd->binlog_state.
There are a few bits that show why binary logging is needed and
one bit for each distinct reason for disabling binary logging.
By printing this variable (gdb prints all bits with their enum names)
one can see exactly why binary logging is not happening.
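A sketch of how such a bitmask state could look. Only BINLOG_STATE_TMP_DISABLED and BINLOG_STATE_UNION_DISABLE are named in this commit message; the other bit names, the masks and the exact layout are assumptions for illustration:

```cpp
#include <cstdint>

// Hypothetical bit layout: a couple of "enabled because" bits in the
// low byte, one bit per distinct disable reason in the next byte, so
// printing the variable shows exactly why logging is on or off.
enum binlog_state_bits : uint32_t {
  BINLOG_STATE_LOG_OPEN      = 1U << 0,  // binlog was open at query start
  BINLOG_STATE_WSREP_EMULATE = 1U << 1,  // Galera emulated binlog
  BINLOG_STATE_SQL_LOG_BIN_0 = 1U << 8,  // disabled: sql_log_bin=0
  BINLOG_STATE_TMP_DISABLED  = 1U << 9,  // disabled: tmp_disable_binlog()
  BINLOG_STATE_UNION_DISABLE = 1U << 10, // disabled: binlog_evt_union
};
static const uint32_t ENABLE_MASK  = 0x00FFu;
static const uint32_t DISABLE_MASK = 0xFF00u;

// Sketch of binlog_ready(): at least one enable bit set and no
// disable bit set.
bool binlog_ready(uint32_t state) {
  return (state & ENABLE_MASK) != 0 && (state & DISABLE_MASK) == 0;
}
```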

The initial bits are set in THD::THD() and THD::init() and verified in
THD::decide_logging_format().

In this commit all testing of OPTION_BIN_LOG and most testing of
mysql_bin_log.is_open() is removed.
We should over time remove all testing of mysql_bin_log.is_open() from the
main code, except in binlog commit, as the value can change 'anytime' if
binlog rotation fails, which is very likely to result in crashes.
(The patch fixes some of these cases).

THD::binlog_state is a new variable that simplifies testing if replication
is permanently off, is on or temporarily off (for a sub statement).

We also set current_stmt_binlog_format to BINLOG_FORMAT_UNSPEC if binlog is
off.

The above changes allows some code changes:

One does not need to test for mysql_bin_log.is_open() if one also tests
for thd->is_current_stmt_binlog_format_stmt() or
thd->is_current_stmt_binlog_format_row().

The change also allows the following transformations:

(WSREP(thd) && wsrep_emulate_bin_log) ||
(mysql_bin_log.is_open()) && (thd->variables.option_bits & OPTION_BIN_LOG)
->
thd->binlog_ready()

((WSREP_NNULL(thd) && wsrep_emulate_bin_log) || mysql_bin_log.is_open())
->
thd->binlog_ready()

(WSREP_EMULATE_BINLOG(thd) || mysql_bin_log.is_open())
->
thd->binlog_ready()

mysql_bin_log.is_open() && (thd->variables.option_bits & OPTION_BIN_LOG)
->
thd->binlog_ready_no_wsrep()

Other transformations are easy to do by using the bits set in binlog_state
by decide_binlog_format().
Note that the new code takes into account whether binary logging is
disabled, which the old code did not do (except in the first and last
examples).

Other things:
- Implement THD::tmp_disable_binlog() and THD::reenable_binlog() using
  the new binlog_state framework.
- THD::variables.wsrep_on is now updated by THD::enable_wsrep(),
  THD::disable_wsrep_on() and THD::set_wsrep_on(). I added an assert for
  (WSREP_PROVIDER_EXISTS_ && WSREP_ON_) when setting wsrep_on=1
- Reset sql_log_bin in THD::init(). This is to ensure that change user
  will not get affected by the previous user's sql_log_bin state.
- Replaced OPTION_BIN_TMP_LOG_OFF with
  (thd->binlog_state & BINLOG_STATE_TMP_DISABLED)
- Added TABLE->disabled_rowlogging() and TABLE->enable_rowlogging() to
  be used when one wants to disable logging for only a single write, update
  or delete call. This fixes a possible issue with spider and mroonga
  that used the old tmp_disable_binlog() framework, which did not work
  to disable row logging.
- I had to add a test for thd->binlog_evt_union.do_union() in a few
  places as binlog_state contains a state for do_union, which causes
  thd->binlog_ready() to fail if do_union is used.
  Another option would be to add a new function thd->binlog_ready_or_union()
  that could handle this case by just removing the
  BINLOG_STATE_UNION_DISABLE bit from binlog_state before testing
  disable bits.

Things to do:
- We should consider removing OPTION_BIN_LOG and instead use
  binlog_state. This would allow us to remove reset and setting it in
  decide_binlog_format(). The code no longer uses the value of
  OPTION_BIN_LOG anywhere.
- Update binlog_state with BINLOG_STATE_FILTER only when database or filter
  is changed.  This means we do not have to call binlog_filter->db_ok() for
  every statement in decide_binlog_format().
- Remove clear/reset xxx_binlog_local_stmt_filter() as this is now handled
  by binlog_state.
- Remove 'silent' option from mysql_create_db_internal and instead use
  tmp_disable_binlog() / reenable_binlog()
- Remove testing if binary log is disabled in:
  MYSQL_BIN_LOG::write(Log_event *event_info, my_bool *with_annotate):
    if ((!(option_bin_log_flag)) ||
        (thd->lex->sql_command != SQLCOM_ROLLBACK_TO_SAVEPOINT &&
        thd->lex->sql_command != SQLCOM_SAVEPOINT &&
        !binlog_filter->db_ok(local_db)))
      DBUG_RETURN(0);
- Remove all testing of mysql_bin_log.is_open(). Instead test for
  binlog_ready() in main code and add testing of is_open() when
  trying to commit the binary log.  This is needed as currently
  mysql_bin_log.is_open() is tested without a mutex which makes
  it unreliable.
- Remove testing of WSREP_PROVIDER_EXISTS_ in WSREP_NNULL() as this
  is guaranteed by THD::enable_wsrep()
- Ensure that WSREP_PROVIDER_EXISTS can never change while there are
  any connections with a THD.
- BINLOG_STATE_FILTER need a bit more work. It is currently only used
  in decide_binlog_format()
- Remove BINLOG_STATE_BYPASS (Not needed as some other BINLOG_STATE disable
  bit is set if BYPASS is set).  BYPASS was added mostly to simplify
  testing of the new code.
Monty
Ensure we test WSREP_NNULL() before calling wsrep_thd_is_local()

This is to ensure we follow the protocol defined for wsrep_thd_is_local()

Other things:
- Replace WSREP(thd) with WSREP_NNULL(thd) when we are sure thd cannot
  be NULL.
- Removed DBUG_ASSERT() in wsrep_mysqld.cc as it was already ensured
  by previous code a few lines up.
Dave Gosselin
MDEV-38934: ICP, Reverse scan: range access will scan whole index for range w/o max endpoint

For some range condition x > N, a descending scan will walk off of the
left edge of the range and scan to the beginning of the index if we don't
set an endpoint on the storage engine.

In this patch, the logic to set such an endpoint is used in two places
now, so it's factored to a helper function.
Sergei Golubchik
update tests after cherry-picks
Marko Mäkelä
Review fixes
Monty
MDEV-25292 Atomic CREATE OR REPLACE TABLE

Atomic CREATE OR REPLACE allows keeping the old table intact if the
command fails or the server crashes. That is done by renaming the
original table to a temporary name as a backup, and restoring it if the
CREATE fails. When the command is complete and logged, the backup
table is deleted.

Atomic replace algorithm

  Two DDL chains are used for CREATE OR REPLACE:
  ddl_log_state_create (C) and ddl_log_state_rm (D).

  1. (C) Log rename of ORIG to TMP table (Rename TMP to original).
  2. Rename original to TMP.
  3. (C) Log CREATE_TABLE_ACTION of ORIG (drops ORIG);
  4. Do everything with ORIG (like insert data)
  5. (D) Log drop of TMP
  6. Write query to binlog (this marks (C) to be closed in
    case of failure)
  7. Execute drop of TMP through (D)
  8. Close (C) and (D)

  If there is a failure before 6) we revert the changes in (C).
  Chain (D) is only executed if 6) succeeded (C is closed on
  crash recovery).
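The two-chain algorithm above can be simulated as an ordered event list (a toy model, not the real ddl_log code): on a failure before the binlog write, chain (C) restores the original table and chain (D) never runs.

```cpp
#include <string>
#include <vector>

// Simulation of the steps above. On a failure before the binlog write,
// chain (C) reverts the rename so the original table survives; chain
// (D) is only ever executed after a successful binlog write.
std::vector<std::string> create_or_replace(bool fail_before_binlog) {
  std::vector<std::string> events;
  events.push_back("C: log rename ORIG->TMP");
  events.push_back("rename ORIG->TMP");
  events.push_back("C: log create ORIG");
  events.push_back("create+fill ORIG");
  events.push_back("D: log drop TMP");
  if (fail_before_binlog) {
    events.push_back("execute C: drop new ORIG, rename TMP->ORIG");
    return events;                     // (D) is never executed
  }
  events.push_back("write binlog");    // marks (C) closed on recovery
  events.push_back("execute D: drop TMP");
  events.push_back("close C and D");
  return events;
}
```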

Foreign key errors will be found at stage 1).

Additional notes

  - CREATE TABLE without REPLACE, and temporary tables, are not affected
    by this commit.
    set @@drop_before_create_or_replace=1 can be used to
    get the old behaviour where existing tables are dropped
    in CREATE OR REPLACE.

  - CREATE TABLE is reverted if binlogging the query fails.

  - Engines having HTON_EXPENSIVE_RENAME flag set are not affected by
    this commit. Conflicting tables marked with this flag will be
    deleted with CREATE OR REPLACE.

  - Replication execution is not affected by this commit.
    - Replication will first drop the conflicting table and then
      create the new one.

  - CREATE TABLE .. SELECT XID usage is fixed and now there is no need
    to log DROP TABLE via DDL_CREATE_TABLE_PHASE_LOG (see comments in
    do_postlock()). XID is now correctly updated so it disables
    DDL_LOG_DROP_TABLE_ACTION. Note that binary log is flushed at the
    final stage when the table is ready. So if we have XID in the
    binary log we don't need to drop the table.

  - Three variations of CREATE OR REPLACE handled:

    1. CREATE OR REPLACE TABLE t1 (..);
    2. CREATE OR REPLACE TABLE t1 LIKE t2;
    3. CREATE OR REPLACE TABLE t1 SELECT ..;

  - Test case uses 6 combinations for engines (aria, aria_notrans,
    myisam, ib, lock_tables, expensive_rename) and 2 combinations for
    binlog types (row, stmt). Combinations help to check differences
    between the results. Error failures are tested for the above three
    variations.

  - expensive_rename tests CREATE OR REPLACE without atomic
    replace. The effect should be the same as with the old behaviour
    before this commit.

  - Triggers mechanism is unaffected by this change. This is tested in
    create_replace.test.

  - LOCK TABLES is affected. Lock restoration must be done after new
    table is created or TMP is renamed back to ORIG

  - Moved ddl_log_complete() from send_eof() to finalize_ddl(). This
    checkpoint was not executed before for normal CREATE TABLE but is
    executed now.

  - CREATE TABLE will now also roll back if writing to the binary
    log failed. See rpl_gtid_strict.test

backup ddl log changes

- In case of a successful CREATE OR REPLACE we only log
  the CREATE event, not the DROP TABLE event of the old table.

ddl_log.cc changes

  ddl_log_execute_action() now properly returns error conditions.
  ddl_log_disable_entry() was added to allow one to disable a single entry.
  The entry on disk is still reserved until ddl_log_complete() is
  executed.

On XID usage

  Like with all other atomic DDL operations XID is used to avoid
  inconsistency between master and slave in the case of a crash after
  binary log is written and before ddl_log_state_create is closed. On
  recovery XIDs are taken from binary log and corresponding DDL log
  events get disabled.  That is done by
  ddl_log_close_binlogged_events().

On linking two chains together

  Chains are executed in the ascending order of entry_pos of execute
  entries. But the entry_pos assignment order is undefined: it may assign
  a bigger number to the first chain and then a smaller number to the
  second chain. So the execution order in that case will be reversed:
  the second chain will be executed first.

  To avoid that we link one chain to another. While the base chain
  (ddl_log_state_create) is active the secondary chain
  (ddl_log_state_rm) is not executed. That is: of two linked chains,
  only one can be executed.

  The interface ddl_log_link_chains() was defined in "MDEV-22166
  ddl_log_write_execute_entry() extension".

Atomic info parameters in HA_CREATE_INFO

  Many functions in CREATE TABLE pass the same parameters. These
  parameters are part of table creation info and should be in
  HA_CREATE_INFO (or similar). Passing parameters via a single
  structure makes it much easier to add new data and to
  refactor.

InnoDB changes
  Added ha_innobase::can_be_renamed_to_backup() to check if
  a table with foreign keys can be renamed.

Aria changes:
- Fixed an issue in the Aria engine with CREATE + locked tables
  where data was not properly committed in some cases after
  a crash.

Other changes:
- Removed some auto variables in log.cc for better code readability.
- Fixed an old bug where CREATE ... SELECT was not able to auto-repair
  a table that is part of the SELECT.
- Marked MyISAM that it does not support ROLLBACK (not required but
  done for better consistency with other engines).

Known issues:
- InnoDB tables with foreign key definitions are not fully supported
  with atomic create and replace:
  - ha_innobase::can_be_renamed_to_backup() can detect some cases
    where InnoDB does not support renaming table with foreign key
    constraints.  In this case MariaDB will drop the old table before
    creating the new one.
    The detected cases are:
    - The new and old table is using the same foreign key constraint
      name.
    - The old table has self referencing constraints.
  - If the old and new table uses the same name for a constraint the
    create of the new table will fail. The original table will be
    restored in this case.
  - The above issues will be fixed in a future commit.
- CREATE OR REPLACE TEMPORARY table is not fully atomic. Any conflicting
  table will always be dropped before creating a new one. (Old behaviour).

Bug fixes related to this MDEV:

MDEV-36435 Assertion failure in finalize_locked_tables()
MDEV-36439 Assertion `thd_arg->lex->sql_command != SQLCOM_CREATE_SEQUENCE...
MDEV-36498 Failed CoR in non-atomic mode no longer generates DROP in RBR...
MDEV-36508 Temporary files #sql-create-....frm occasionally stay after
          crash recovery
MDEV-38479 Crash in CREATE OR REPLACE SEQUENCE when new sequence cannot
          be created
MDEV-36497 Assertion failure after atomic CoR with Aria under lock in
          transactional context

InnoDB related changes:
- ha_innodb::rename_table() does not handle foreign key constraints
  when renaming a normal table to internal temporary tables. This
  causes problems for CREATE OR REPLACE as the old constraints cause
  failures when creating a new table with the same constraints.
  This is fixed inside InnoDB by not treating temp files (#sql-create-..),
  created as part of CREATE OR REPLACE, as temporary files.
- In ha_innobase::delete_table(), ignore checking of constraints when
  dropping a #sql-create temporary table.
- In tablename_to_filename() and filename_to_tablename(), don't do
  filename conversion for internal temporary tables (#sql-...)

Other things:
- maria_create_trn_for_mysql() does not register a new transaction
  handler for commits. This was needed to ensure create or replace
  will not end with an active transaction.
- We do not get anymore warnings about "Engine not supporting atomic
  create" when doing a legal CREATE OR REPLACE on a table with
  foreign key constraints.
- Updated VIDEX engine flags to disable CREATE SEQUENCE.

Reverted commits:
MDEV-36685 "CREATE-SELECT may lose in binlog side-effects of
stored-routine" as it did not take into account that it is safe to clear
binlogs if the created table is non-transactional and there are no
other non-transactional tables used.
- This was done because it caused extra logging when it is not needed
  (not using any non-transactional tables) and it also did not solve
  side effects when using statement-based logging.
Marko Mäkelä
Safely export innodb_lsn_archived
KhaledSayed04
MDEV-37608: Increase default binlog row event size

Increase default binlog_row_event_max_size to 64k
to reduce event header overhead and improve
performance on modern networks.

Summary of changes:
- Updated sql/sys_vars.cc for the new 64k default.
- Added binlog_row_event_max_size_basic.test to
verify min/max boundaries and read-only property.
- Audited MTR suite; retained pins only for tests
requiring specific byte-math for cache spills
and event fragmentation logic.
- Standardized some affected .opt files to a single
line format to follow MTR best practices.

# Affected Tests Grouping

## Cache Boundary & Overflow Handling
These tests utilize a 4KB binlog_cache_size (the server minimum).
The 8KB pin ensures that events exceed the memory buffer, forcing
the server into the incremental flush-to-disk and overflow logic.
With 64KB, the 16x scale disparity may trigger 'Event too large'
rejections during allocation, bypassing IO_CACHE spill code paths.

Affected Files:
- mysql-test/main/alter_table_online_debug.opt
- mysql-test/suite/binlog/t/binlog_bug23533.opt
- mysql-test/suite/galera/t/galera_binlog_cache_size.opt
- mysql-test/suite/rpl/t/rpl_row_binlog_max_cache_size-master.opt
- mysql-test/suite/rpl/t/rpl_stm_binlog_max_cache_size-master.opt
- mysql-test/suite/rpl/t/rpl_mixed_binlog_max_cache_size-master.opt
- mysql-test/suite/rpl/t/rpl_create_select_row-master.opt

## Engine Spills & Recovery (OOB/ENOSPC)
These tests exercise Out-of-Band (OOB) offloading or verify
resilience during Disk Full (ENOSPC) scenarios while
flushing the binary log cache.
The 8KB pin is required to maintain the 'Small World'
math where even a single statement forces a transition from
memory to storage.
At 64KB, these transactions would remain 'in-line' in memory,
failing to trigger the temporary file creation,
disk-full error injection, and encryption logic that these
tests are specifically designed to measure.

Affected Files:
- mysql-test/suite/binlog_in_engine/rpl_oob.opt
- mysql-test/suite/binlog_in_engine/xa.opt
- mysql-test/suite/binlog_in_engine/rpl_dual_cache-master.opt
- mysql-test/suite/rpl/t/rpl_row_binlog_tmp_file_flush_enospc-master.opt
- mysql-test/suite/rpl/t/rpl_mdev-11092.opt
- mysql-test/suite/encryption/t/tempfiles.opt
- mysql-test/suite/perfschema/t/io_cache-master.opt

## Micro-Rotation & Space Limits
These tests validate how MariaDB purges binary logs
or manages temporary space under strict constraints.
They utilize limits ranging from 'Micro-Thresholds'
(e.g., max-binlog-total-size=1.5k) to moderate session
quotas (e.g., max_tmp_session_space_usage=512K).
The 8KB pin is retained to maintain the mathematical
proportionality between event size and space limits.

Affected Files:
- mysql-test/suite/rpl/t/max_binlog_total_size-master.opt
- mysql-test/main/tmp_space_usage-master.opt
- mysql-test/suite/rpl/t/rpl_binlog_cache_disk_full_row-master.opt
- mysql-test/suite/binlog/include/binlog_cache_stat.opt

## Event Fragmentation & Protocol Integrity
These tests verify that large rows are correctly
sliced into multiple Rows_log_event fragments.
A 64KB default would allow the test payloads to fit
into a single event, neutralizing the validation
of reassembly and protocol flags.

Affected Files:
- mysql-test/suite/rpl/t/rpl_fragment_row_event_main.cnf
- mysql-test/suite/rpl/t/rpl_loaddata_map-master.opt
- mysql-test/suite/rpl/t/rpl_loaddata_map-slave.opt
- mysql-test/suite/rpl/t/rpl_checksum_cache.opt
- mysql-test/suite/binlog_encryption/rpl_packet.cnf

Reviewed-by: Brandon Nesterenko <[email protected]>
Reviewed-by: Georgi Kodinov <[email protected]>
Dave Gosselin
MDEV-38921: Wrong result for range w/o min endpoint, ORDER BY DESC and ICP

In QUICK_SELECT_DESC::get_next(), when starting to scan a range with
no start endpoint, like "key < 10", clear the end_range on the storage engine.
We could have scanned another range before, like "key BETWEEN 20 and 30"
and after this the engine's end_range points to its left endpoint, the "20".

This is necessary especially when there are multiple ranges considered and
a later range indicates that there's no minimum endpoint (such as in the
attached test case), thus an earlier range endpoint must be removed/cleared.
Alexander Barkov
MDEV-38768 RECORD in routine parameters and function RETURN

- Allowing types declared by the TYPE declarations in the grammar
  for stored routine parameters and stored function RETURN clause.

- Overriding Type_handler_row::Column_definition_prepare_stage1()
  to copy the record structure from get_attr_const_generic_ptr(0)
  into Spvar_definition::m_row_field_definition.
  This makes TYPE..IS RECORD types work as routine parameters and RETURN.

- Raising an error when type==COLUMN_DEFINITION_ROUTINE_PARAM or
  type==COLUMN_DEFINITION_FUNCTION_RETURN is passed to
  Type_handler_assoc_array::Column_definition_set_attributes().
  The underlying assoc array code is not ready to support
  parameters and RETURN. It will be done separately.

- Adding tests

- Some changes have changed the error from ER_NOT_ALLOWED_IN_THIS_CONTEXT
  to ER_ILLEGAL_PARAMETER_DATA_TYPE_FOR_OPERATION. This is ok, as the latter
  is more informative.
Alessandro Vetere
fixup! fixup! fixup! fixup! MDEV-37070  Implement table options to enable/disable features
Dave Gosselin
MDEV-38921: Wrong result for range w/o min endpoint, ORDER BY DESC and ICP

In QUICK_SELECT_DESC::get_next(), when starting to scan a range with
no start endpoint, like "key < 10", clear the end_range on the storage engine.
We could have scanned another range before, like "key BETWEEN 20 and 30"
and after this the engine's end_range points to its left endpoint, the "20".

This is necessary especially when multiple ranges are considered and
a later range has no minimum endpoint (as in the attached test case),
so an earlier range's endpoint must be removed/cleared.
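
The effect can be reproduced with a small model (a hedged sketch in Python
for brevity; the real fix lives in QUICK_SELECT_DESC::get_next() and the
handler's end_range, and all names here are illustrative, not the server's):

```python
# Toy model: scanning "key BETWEEN 20 AND 30" leaves a min endpoint of
# 20 in the "engine"; if it is not cleared before scanning "key < 10",
# the second range wrongly returns nothing.
def scan_desc(keys, ranges, clear_stale_endpoint):
    result = []
    engine_min = None                 # the engine's remembered endpoint
    for lo, hi in ranges:             # (min, max), None = unbounded
        if lo is not None:
            engine_min = lo
        elif clear_stale_endpoint:
            engine_min = None         # the fix: clear the stale endpoint
        for k in sorted(keys, reverse=True):
            if hi is not None and k > hi:
                continue              # above the range, keep descending
            if engine_min is not None and k < engine_min:
                break                 # engine stops at its min endpoint
            if lo is None or k >= lo:
                result.append(k)
    return result
```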
Oleksandr Byelkin
Make it compilable under mac: sprintf -> snprintf
Alexander Barkov
MDEV-38109 Refactor sp_add_instr_fetch_cursor to get the target list argument

This is a preparatory refactoring patch to make MDEV-10152 simpler.
Under the terms of MDEV-10152 we'll need LEX::sp_add_instr_fetch_cursor()
to get the target list as a parameter, to be able to rewrite it correctly
(to the data type of the REF CURSOR .. RETURN clause) instead of setting
it post factum with the help of
the method sp_instr_fetch_cursor::set_fetch_target_list().

Changes:
- Changing the result type of sp_add_instr_fetch_cursor() from
  sp_instr_fetch_cursor to bool
- Renaming sp_add_instr_fetch_cursor() to sp_add_fetch_cursor()
  to avoid problems during merge (the opposite return value in bool context)
- Adding a new parameter to sp_add_fetch_cursor(), so it looks like this:

    bool sp_add_fetch_cursor(THD *thd,
                             const Lex_ident_sys_st &name,
                             const List<sp_fetch_target> &list);

- Adding a new target list parameter to the constructors of
  sp_instr_fetch_cursor, sp_instr_cfetch and sp_instr_cfetch_by_ref.
- Removing the methods sp_instr_fetch_cursor::set_fetch_target_list() and
  sp_instr_fetch_cursor::add_to_fetch_target_list(), as they aren't needed
  any more.
- Removing from sql_yacc.yy the grammar rule sp_proc_stmt_fetch_head and
  adding instead a new rule fetch_statement_source.
- Changing the grammar rule sp_proc_stmt_fetch to use the new rule
  fetch_statement_source.
- Adding a new helper constructor List(const T &a, MEM_ROOT *mem_root)
  to create lists consisting of a single element.
- Reusing the new List constructor in a few places.
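
The single-element constructor mirrors this pattern (a Python stand-in for
the server's C++ List template; the class body here is illustrative):

```python
# Minimal stand-in for the server's List template: besides the default
# (empty) constructor, a convenience constructor builds a list from a
# single element, as List(const T &a, MEM_ROOT *mem_root) does in C++.
class List:
    def __init__(self, *elements):
        self.items = list(elements)

    def push_back(self, x):
        self.items.append(x)

    def __len__(self):
        return len(self.items)

# A one-element fetch target list in a single expression,
# instead of "create empty, then push":
targets = List("fetch_target")
```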
Alexander Barkov
MDEV-38161 Refactor Type_extra_attributes: change void* for generic attributes to a better type

The first key point of this commit is changing Type_extra_attributes so
that the pointer to a generic attribute is now "Type_generic_attributes*" -
a pointer to a new class with a virtual method type_handler().

Before this change the pointer to a generic attribute was
"void *m_attr_const_void_ptr", which was error-prone, as it allowed
passing a pointer to a wrong structure without any control.

Another key point is that the SET/ENUM data type related structures
are now attributed by the changed class Type_typelib_attributes,
which now looks as follows:

  class Type_typelib_attributes: public Sql_alloc,
                                public Type_generic_attributes,
                                public TYPELIB

(with virtual method type_handler()), instead of being attributed by
TYPELIB directly.
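
The type-safety gain can be sketched like this (Python sketch; class names
follow the commit message, but the bodies and handler name are illustrative):

```python
# Instead of an untyped slot (void* in the C++ code), the extra
# attributes now hold a pointer to a common base class with a virtual
# type_handler(), so attaching the wrong structure is caught.
class TypeGenericAttributes:
    def type_handler(self):
        raise NotImplementedError

class TypeTypelibAttributes(TypeGenericAttributes):
    def __init__(self, typelib):
        self.typelib = typelib        # models deriving from TYPELIB
    def type_handler(self):
        return "typelib handler"      # placeholder handler name

class TypeExtraAttributes:
    def __init__(self):
        self._attr = None
    def set_attr(self, attr):
        # the check that "void *m_attr_const_void_ptr" could not give us
        if not isinstance(attr, TypeGenericAttributes):
            raise TypeError("not a Type_generic_attributes pointer")
        self._attr = attr
    def get_attr(self):
        return self._attr
```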

Details:

- Using Column_definition's method typelib() where the member m_typelib
  was used directly. The related lines change anyway, because the member
  is renamed to m_typelib_attr.

- Adding a new class Type_generic_attributes.
  Deriving the following classes from it:
  * Type_typelib_attributes
  * sp_type_def

- sp_type_def does not derive from Type_handler_hybrid_field_type any more,
  because the method type_handler() is now implemented virtually.

- Removing Type_extra_attributes::m_attr_const_void_ptr and
  adding Type_extra_attributes::m_attr_const_generic_attributes_ptr instead -
  a pointer to the new class Type_generic_attributes.
  Renaming methods to set and read it according to the new data type name.

- Type_typelib_attributes now derives from TYPELIB instead of having
  a pointer to TYPELIB.

- Adding a new class Type_typelib_ptr_attributes. It's a replacement for
  the old implementation of Type_typelib_attributes.

  Instead of deriving from Type_typelib_attributes, Field_enum now derives
  from Type_typelib_ptr_attributes. The latter can store/read itself into/from
  Type_extra_attributes. It's a bridge between the new Type_typelib_attributes
  and Type_extra_attributes.

- Changing parameter data type in a few methods in Field_enum from
  a pointer to TYPELIB to a pointer to Type_typelib_attributes.

- Removing typelib related methods from Type_extra_attributes.
  Moving this functionality into Type_typelib_ptr_attributes and
  Column_definition_attributes.
  This turns Type_extra_attributes into a data structure independent
  from any data type specific structures/methods.

- Adding methods:
  Column_definition_attributes::typelib_attr() - Column_definition derives it
  Column_definition_attributes::typelib()      - Column_definition derives it
  Type_typelib_ptr_attributes::typelib_attr()  - Field_enum derives it
  Type_typelib_ptr_attributes::typelib()       - Field_enum derives it

- Renaming the member Create_field::save_interval to
  Create_field::save_typelib_attr.
  Changing its data type from a pointer to TYPELIB to a pointer
  to Type_typelib_attributes.

- Removing the method Type_typelib_attributes::store().
  Adding Type_typelib_ptr_attributes::save_in_type_extra_attributes() instead.
  The new method name makes the code more readable.
Monty
MDEV-25292 Atomic CREATE OR REPLACE TABLE

Atomic CREATE OR REPLACE keeps the old table intact if the command
fails or the server crashes. This is done by renaming the original
table to a temporary name as a backup, and restoring it if the CREATE
fails. When the command is complete and logged, the backup table is
deleted.

Atomic replace algorithm

  Two DDL chains are used for CREATE OR REPLACE:
  ddl_log_state_create (C) and ddl_log_state_rm (D).

  1. (C) Log rename of ORIG to TMP table (Rename TMP to original).
  2. Rename original to TMP.
  3. (C) Log CREATE_TABLE_ACTION of ORIG (drops ORIG);
  4. Do everything with ORIG (like insert data)
  5. (D) Log drop of TMP
  6. Write query to binlog (this marks (C) to be closed in
    case of failure)
  7. Execute drop of TMP through (D)
  8. Close (C) and (D)

  If there is a failure before 6) we revert the changes in (C).
  Chain (D) is only executed if 6) succeeded ((C) is closed on
  crash recovery).

Foreign key errors will be found at stage 1).
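
The two-chain protocol above can be simulated end to end (a deliberately
simplified Python model; real DDL log entries, XIDs and crash recovery
are far more involved, and all names here are illustrative):

```python
def create_or_replace(catalog, table, new_def, fail_before_binlog=False):
    """Simplified model of the C/D chain protocol; `catalog` maps
    table name -> definition. Returns True on success."""
    backup = "#sql-backup-" + table             # the TMP backup name
    # 1-2. (C) log rename ORIG -> TMP, then do the rename
    catalog[backup] = catalog.pop(table)
    # 3-4. (C) log CREATE_TABLE_ACTION, then create and fill ORIG
    catalog[table] = new_def
    chain_d = [backup]                          # 5. (D) log drop of TMP
    # 6. write the query to the binlog; a failure before this point
    #    executes chain (C): drop the new table, restore the backup
    if fail_before_binlog:
        del catalog[table]
        catalog[table] = catalog.pop(backup)
        return False
    # 7-8. execute chain (D) and close both chains
    for tmp in chain_d:
        del catalog[tmp]
    return True
```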

Additional notes

  - CREATE TABLE without REPLACE and temporary tables is not affected
    by this commit.
    Setting @@drop_before_create_or_replace=1 can be used to
    get the old behaviour, where existing tables are dropped
    in CREATE OR REPLACE.

  - CREATE TABLE is reverted if binlogging the query fails.

  - Engines having the HTON_EXPENSIVE_RENAME flag set are not affected
    by this commit. Conflicting tables in such engines will be
    dropped by CREATE OR REPLACE.

  - Replication execution is not affected by this commit.
    - Replication will first drop the conflicting table and then
      create the new one.

  - CREATE TABLE .. SELECT XID usage is fixed and now there is no need
    to log DROP TABLE via DDL_CREATE_TABLE_PHASE_LOG (see comments in
    do_postlock()). XID is now correctly updated so it disables
    DDL_LOG_DROP_TABLE_ACTION. Note that binary log is flushed at the
    final stage when the table is ready. So if we have XID in the
    binary log we don't need to drop the table.

  - Three variations of CREATE OR REPLACE handled:

    1. CREATE OR REPLACE TABLE t1 (..);
    2. CREATE OR REPLACE TABLE t1 LIKE t2;
    3. CREATE OR REPLACE TABLE t1 SELECT ..;

  - Test case uses 6 combinations for engines (aria, aria_notrans,
    myisam, ib, lock_tables, expensive_rename) and 2 combinations for
    binlog types (row, stmt). Combinations help to check differences
    between the results. Error failures are tested for the above three
    variations.

  - expensive_rename tests CREATE OR REPLACE without atomic
    replace. The effect should be the same as with the old behaviour
    before this commit.

  - The trigger mechanism is unaffected by this change. This is tested in
    create_replace.test.

  - LOCK TABLES is affected. Lock restoration must be done after the new
    table is created or TMP is renamed back to ORIG.

  - Moved ddl_log_complete() from send_eof() to finalize_ddl(). This
    checkpoint was not executed before for normal CREATE TABLE but is
    executed now.

  - CREATE TABLE is now also rolled back if writing to the binary
    log fails. See rpl_gtid_strict.test

backup ddl log changes

- In case of a successful CREATE OR REPLACE we only log
  the CREATE event, not the DROP TABLE event of the old table.

ddl_log.cc changes

  ddl_log_execute_action() now properly returns error conditions.
  ddl_log_disable_entry() added to allow one to disable one entry.
  The entry on disk is still reserved until ddl_log_complete() is
  executed.

On XID usage

  As with all other atomic DDL operations, an XID is used to avoid
  inconsistency between master and slave in the case of a crash after the
  binary log is written and before ddl_log_state_create is closed. On
  recovery XIDs are taken from binary log and corresponding DDL log
  events get disabled.  That is done by
  ddl_log_close_binlogged_events().

On linking two chains together

  Chains are executed in the ascending order of entry_pos of execute
  entries. But the order of entry_pos assignment is undefined: it may
  assign a bigger number to the first chain and then a smaller number to
  the second chain. In that case the execution order will be reversed:
  the second chain will be executed first.

  To avoid that we link one chain to another. While the base chain
  (ddl_log_state_create) is active the secondary chain
  (ddl_log_state_rm) is not executed. That is, of two linked chains
  only one can be executed.

  The interface ddl_log_link_chains() was defined in "MDEV-22166
  ddl_log_write_execute_entry() extension".
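
The linking rule can be modeled as follows (an illustrative sketch; the
real recovery code in ddl_log.cc is considerably more involved):

```python
# Crash recovery walks execute entries in ascending entry_pos order,
# so without linking, the chain written second could run first. A chain
# linked under a still-active base chain is skipped, so only one of
# two linked chains ever executes.
def recover(chains):
    executed = []
    for chain in sorted(chains, key=lambda c: c["entry_pos"]):
        base = chain["linked_to"]
        if base is not None and base["active"]:
            continue                  # base chain still active: skip
        if chain["active"]:
            executed.append(chain["name"])
    return executed
```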

Atomic info parameters in HA_CREATE_INFO

  Many functions in CREATE TABLE pass the same parameters. These
  parameters are part of the table creation info and should be in
  HA_CREATE_INFO (or similar). Passing parameters via a single
  structure makes adding new data and refactoring much
  easier.

InnoDB changes
  Added ha_innobase::can_be_renamed_to_backup() to check if
  a table with foreign keys can be renamed.

Aria changes:
- Fixed an issue in the Aria engine with CREATE + locked tables
  where data was not properly committed in some cases
  after a crash.

Other changes:
- Removed some auto variables in log.cc for better code readability.
- Fixed an old bug where CREATE ... SELECT could not auto-repair
  a table that is part of the SELECT.
- Marked MyISAM that it does not support ROLLBACK (not required but
  done for better consistency with other engines).

Known issues:
- InnoDB tables with foreign key definitions are not fully supported
  with atomic create and replace:
  - ha_innobase::can_be_renamed_to_backup() can detect some cases
    where InnoDB does not support renaming table with foreign key
    constraints.  In this case MariaDB will drop the old table before
    creating the new one.
    The detected cases are:
    - The new and old tables use the same foreign key constraint
      name.
    - The old table has self referencing constraints.
  - If the old and new table uses the same name for a constraint the
    create of the new table will fail. The original table will be
    restored in this case.
  - The above issues will be fixed in a future commit.
- CREATE OR REPLACE TEMPORARY table is not fully atomic. Any conflicting
  table will always be dropped before creating a new one. (Old behaviour).

Bug fixes related to this MDEV:

MDEV-36435 Assertion failure in finalize_locked_tables()
MDEV-36439 Assertion `thd_arg->lex->sql_command != SQLCOM_CREATE_SEQUENCE...
MDEV-36498 Failed CoR in non-atomic mode no longer generates DROP in RBR...
MDEV-36508 Temporary files #sql-create-....frm occasionally stay after
          crash recovery
MDEV-38479 Crash in CREATE OR REPLACE SEQUENCE when new sequence cannot
          be created
MDEV-36497 Assertion failure after atomic CoR with Aria under lock in
          transactional context

InnoDB related changes:
- ha_innodb::rename_table() does not handle foreign key constraints
  when renaming a normal table to an internal temporary table. This
  causes problems for CREATE OR REPLACE as the old constraints cause
  failures when creating a new table with the same constraints.
  This is fixed inside InnoDB by not treating tempfiles (#sql-create-..),
  created as part of CREATE OR REPLACE, as temporary files.
- In ha_innobase::delete_table(), ignore checking of constraints when
  dropping a #sql-create temporary table.
- In tablename_to_filename() and filename_to_tablename(), don't do
  filename conversion for internal temporary tables (#sql-...)

Other things:
- maria_create_trn_for_mysql() does not register a new transaction
  handler for commits. This was needed to ensure that CREATE OR REPLACE
  will not end with an active transaction.
- We no longer get warnings about "Engine not supporting atomic
  create" when doing a legal CREATE OR REPLACE on a table with
  foreign key constraints.
- Updated VIDEX engine flags to disable CREATE SEQUENCE.

Reverted commits:
MDEV-36685 "CREATE-SELECT may lose in binlog side-effects of
stored-routine", as it did not take into account that it is safe to clear
binlogs if the created table is non-transactional and no
other non-transactional tables are used.
- This was done because it caused extra logging when it was not needed
  (no non-transactional tables in use) and it also did not solve
  side effects when using statement-based logging.
Monty
ha_table_exists() cleanup and improvement

This is part of MDEV-25292 Atomic CREATE OR REPLACE TABLE.

Removed default values for arguments, added flags argument to specify
filename flags (FN_TO_IS_TMP, FN_FROM_IS_TMP) and forward the flag to
build_table_name().

Original patch from: Aleksey Midenkov <[email protected]>
Monty
Fixing Galera test results
Monty
ha_table_exists() cleanup and improvement

This is part of MDEV-25292 Atomic CREATE OR REPLACE TABLE.

Removed default values for arguments, added flags argument to specify
filename flags (FN_TO_IS_TMP, FN_FROM_IS_TMP) and forward the flag to
build_table_name().

Original patch from: Aleksey Midenkov <[email protected]>
Sergei Petrunia
MDEV-38934: part 2: ICP+reverse scan, range w/o max endp: handle no-matches

The code in QUICK_SELECT_DESC::get_next() had the logic that

"if index_last() call has returned nothing, then the table must be empty
  so return HA_ERR_END_OF_FILE"

With ICP, index_last() may also return HA_ERR_END_OF_FILE when none of
the rows in the range matched the ICP condition. But there can be matching
rows in other ranges, so we now continue the scan to examine them.
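
A minimal model of the fix (Python sketch; names and the range
representation are illustrative, not the server's):

```python
# One range may yield no rows under the ICP condition; the old logic
# treated that as end-of-file for the whole scan, while the fix moves
# on to the remaining ranges.
def scan_ranges_desc(ranges, rows, icp, stop_on_empty_range):
    out = []
    for lo, hi in ranges:             # ranges scanned in descending order
        in_range = [r for r in sorted(rows, reverse=True) if lo <= r <= hi]
        matches = [r for r in in_range if icp(r)]
        if not matches and stop_on_empty_range:
            return out                # old logic: spurious HA_ERR_END_OF_FILE
        out.extend(matches)
    return out
```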
hadeer
Mroonga: fix SIGSEGV on NULL mroonga_log_file

Add mrn_log_file_check() to validate that
mroonga_log_file is not set to NULL or an empty
string, which previously caused a segfault.
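
The added guard amounts to a check like this (Python sketch; the actual
function is C++ and its exact signature differs):

```python
# Reject NULL or empty log file names before they reach the
# file-open path, which is where the segfault occurred.
def mrn_log_file_check(value):
    return value is not None and value != ""
```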
Alexander Barkov
MDEV-38871 Variable declarations accept unexpected data type attributes (lengh,dec,srid)

Some data types:
- RECORD
- assoc arrays
- SYS_REFCURSOR

erroneously accepted non-relevant attributes:
- length
- scale
- character set / collation
- REF_SYSTEM_ID

Fixing to raise an error.

Details:

- Overriding get_column_attributes() in Type_handler_row,
  Type_handler_sys_refcursor, Type_handler_assoc_array.

- Moving a piece of the old code into a new method
  Type_handler::check_data_type_attributes(), to reuse it
  for testing attributes for both plugin types and TYPE-def types.

- Reusing check_data_type_attributes() in methods
  * LEX::set_field_type_udt_or_typedef()
  * LEX::set_field_type_udt()

- Adding opt_binary into the rule field_type_all_with_typedefs,
  into the grammar branch for the plugin types, as XMLTYPE needs it.
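
The check behaves roughly like this (Python sketch; type and attribute
names are illustrative, not the server's exact identifiers or messages):

```python
# Composite SP types must not carry scalar-type attributes such as
# length, scale, collation or REF_SYSTEM_ID.
COMPOSITE_TYPES = {"RECORD", "ASSOC_ARRAY", "SYS_REFCURSOR"}
SCALAR_ONLY_ATTRS = {"length", "scale", "collation", "ref_system_id"}

def check_data_type_attributes(type_name, attributes):
    if type_name in COMPOSITE_TYPES:
        bad = SCALAR_ONLY_ATTRS & set(attributes)
        if bad:
            raise ValueError("attributes not allowed for %s: %s"
                             % (type_name, sorted(bad)))
```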
Monty
Gtid fixups

- Force table_creation_was_logged=2 for GTT tables. This means that the
  table was written to the binary log, but all updates of GTT tables will
  use ROW binlog format. Drop of a GTT table will be binlogged.
- We gave an error for statements mixing GTT tables and normal tables under
  STATEMENT logging. Changed it to revert to row logging in this case
  (as we do for other cases).
- Removed binlogging of DROP [TEMPORARY] TABLE for CREATE OR REPLACE where
  the original table did not exist. binlog_show_create_table() now takes
  an extra parameter indicating whether the original table was dropped.
- Changed the binlogging of DROP TABLE to be 'direct'. This simplifies
  replication as it does not need to create a transaction for DROP TABLE.
  The difference in the binlog output is:

-master-bin.000001      #      Gtid    #      #      BEGIN GTID #-#-#
+master-bin.000001      #      Gtid    #      #      GTID #-#-#
master-bin.000001      #      Query  #      #      use `test`; DROP TABLE IF EXISTS `test`.`t1`/* Generated to handle failed CREATE OR REPLACE */
-master-bin.000001      #      Query  #      #      ROLLBACK

Another consequence of the patch is that "Annotate rows" is binlogged for some
cases where it was not binlogged before. This can be seen in
mysql-test/suite/rpl/r/rpl_mixed_mixing_engines.result
Dave Gosselin
MDEV-38934: ICP, Reverse scan: range access will scan whole index for range w/o max endpoint

For some range condition x > N, a descending scan will walk off the
left edge of the range and scan to the beginning of the index if we don't
set an endpoint on the storage engine.

In this patch, the logic to set such an endpoint is now used in two
places, so it is factored out into a helper function.
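
The effect of the missing endpoint can be seen in a small model (a hedged
sketch; the helper and all names in the actual patch differ):

```python
# Descending scan for "x > n": without an engine endpoint the scan
# keeps reading past the range's left edge down to the start of the
# index; with the endpoint it stops as soon as keys drop to <= n.
# The result is the same either way, but far fewer rows are touched.
def scan_desc_open_range(keys, n, set_endpoint):
    visited, out = [], []
    for k in sorted(keys, reverse=True):
        if set_endpoint and k <= n:
            break                     # endpoint installed: stop here
        visited.append(k)             # rows the engine actually reads
        if k > n:
            out.append(k)
    return out, visited
```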