
Console View


Raghunandan Bhat
MDEV-39271: SIGSEGV in `check_word`|(`extract_date_time`|`extract_oracle_date_time`)

Problem:
  The date and time functions `STR_TO_DATE` and `TO_DATE` (in Oracle mode)
  both use `check_word` to check whether a string token matches a valid
  locale-specific day or month name. The server crashed because of an
  uninitialized typelib member (`type_lengths`) for date locales that
  are not present in `Oracle_date_locale` (e.g. `ar_DZ`).

Fix:
  Initialize `type_lengths` for all supported locales by iterating over
  the `my_locales` array instead of `Oracle_date_locale` during server
  startup.
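A minimal sketch of the idea with simplified types (the real TYPELIB and my_locales live in the server; the names and layout here are illustrative, not the actual code): for every locale's day/month name array, fill type_lengths from the names at startup.

```c
#include <string.h>

/* Simplified stand-in for the server's TYPELIB. */
typedef struct {
  unsigned count;
  const char **type_names;
  unsigned *type_lengths;
} typelib_t;

/* Fill type_lengths for one name array. The fix applies this to every
   entry of my_locales rather than only the Oracle_date_locale subset,
   so locales such as ar_DZ are covered too. */
static void init_type_lengths(typelib_t *t, unsigned *buf)
{
  t->type_lengths = buf;
  for (unsigned i = 0; i < t->count; i++)
    t->type_lengths[i] = (unsigned) strlen(t->type_names[i]);
}
```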
Vladislav Vaintroub
MDEV-39258 Make THIRD_PARTY_DOWNLOAD_LOCATION settable

Also, on request of Razvan, increase HeidiSQL download timeout
Michal Schorm
CONC-816: INCLUDE(FindXXX) -> FIND_PACKAGE(XXX)

INCLUDE(FindXXX) directly includes the find module, bypassing
cmake's package search infrastructure. FIND_PACKAGE(XXX) is
the standard form and has been preferred since cmake 2.x.

Co-Authored-By: Claude AI <[email protected]>
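The change has this shape (ZLIB is used as a stand-in package here; the commit does not list the modules it touched):

```cmake
# Before: includes the find module directly, skipping cmake's package
# search (CMAKE_MODULE_PATH ordering, <PKG>_ROOT hints, config packages).
INCLUDE(FindZLIB)

# After: the standard form, preferred since cmake 2.x.
FIND_PACKAGE(ZLIB)
```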
Michal Schorm
CONC-816: drop redundant conditions from ENDIF/ELSE

Old cmake style repeated the condition in ENDIF(condition)
and ELSE(condition). Modern cmake uses bare ENDIF()/ELSE().

Co-Authored-By: Claude AI <[email protected]>
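The shape of the change, with WIN32 used only as an example condition:

```cmake
# Old style repeated the condition:
IF(WIN32)
  SET(PLATFORM "windows")
ELSE(WIN32)
  SET(PLATFORM "other")
ENDIF(WIN32)

# Modern bare form:
IF(WIN32)
  SET(PLATFORM "windows")
ELSE()
  SET(PLATFORM "other")
ENDIF()
```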
Alexey Botchkov
MDEV-38767 XML datatype to be reported as format in extended metadata in protocol.

Add the respective Send_field metadata for UDTs.
Michal Schorm
CONC-816: remove dead cmake version check in misc.cmake

The VERSION_LESS "2.8.7" branch is unreachable since
cmake_minimum_required is 3.12. Keep only the ELSE block
which defines MESSAGE1() with deduplication.

Co-Authored-By: Claude AI <[email protected]>
Michal Schorm
CONC-813, CONC-814 Fix UBSan errors

sint4korr (non-x86 path): the expression
`(int16)(A)[3] << 24` causes signed integer overflow
when the byte value is >= 128, since 128 << 24 exceeds INT32_MAX.
Redefine it as `(int32) uint4korr(A)` to compute in unsigned
arithmetic, matching the server (my_byteorder.h:137-140).
The next line, sint8korr, already uses this same pattern:
`(longlong) uint8korr(A)`.

convert_from_long (MYSQL_TYPE_DOUBLE, MYSQL_TYPE_FLOAT):
casting double/float back to ulonglong/longlong for
truncation detection is undefined when the float value
is outside the representable integer range. For example,
`(double)ULONGLONG_MAX` rounds up to 2^64 in IEEE 754
(not exactly representable with 53-bit mantissa), so
`(ulonglong)dbl` overflows. Similarly for LONGLONG_MAX.
Add range checks before the cast using `>=` for upper
bounds (since the double rounds up past the integer max)
and `<` for LONGLONG_MIN (exactly representable as -2^63).
Out-of-range values set the truncation error flag directly,
short-circuiting the dangerous cast via `||` evaluation.
This matches the server pattern in sql/field.cc and
sql/sql_select.cc (double_to_ulonglong macro).

Co-Authored-By: Claude AI <[email protected]>
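A sketch of both fixes in plain C (the macro layout is illustrative, not copied from the driver):

```c
#include <stdint.h>

/* CONC-813: the old non-x86 sint4korr shifted a signed value,
   ((int16)(A)[3] << 24), which overflows for byte values >= 128.
   Assemble the word in unsigned arithmetic first, then convert --
   the same pattern sint8korr already used via uint8korr. */
#define uint4korr(A) ((uint32_t)(((const uint8_t *)(A))[0])         | \
                      ((uint32_t)(((const uint8_t *)(A))[1]) << 8)  | \
                      ((uint32_t)(((const uint8_t *)(A))[2]) << 16) | \
                      ((uint32_t)(((const uint8_t *)(A))[3]) << 24))
#define sint4korr(A) ((int32_t) uint4korr(A))

/* CONC-814: range-check before casting a double to an integer type.
   (double)ULONGLONG_MAX rounds up to 2^64 (a 53-bit mantissa cannot
   hold it exactly), so the upper bound must use >=; -2^63 is exactly
   representable, so the lower bound for longlong uses <. */
static int double_would_truncate_ulonglong(double d)
{
  return d < 0.0 || d >= 18446744073709551616.0; /* 2^64 */
}
```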
Michal Schorm
CONC-816: use NAMELINK_COMPONENT (available since cmake 3.12)

Replace the NAMELINK_SKIP / NAMELINK_ONLY two-INSTALL workaround
with the single-INSTALL NAMELINK_COMPONENT form. The workaround
was needed for CentOS 7's cmake 2.8.12 which is no longer supported.

Co-Authored-By: Claude AI <[email protected]>
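For illustration (target, destination and component names are placeholders, not taken from the build files):

```cmake
# Before (cmake < 3.12): two INSTALL() calls to route the .so namelink
# into a different component than the versioned library.
INSTALL(TARGETS libmariadb LIBRARY DESTINATION lib
        COMPONENT SharedLibraries NAMELINK_SKIP)
INSTALL(TARGETS libmariadb LIBRARY DESTINATION lib
        COMPONENT Development NAMELINK_ONLY)

# After (cmake >= 3.12): a single call.
INSTALL(TARGETS libmariadb LIBRARY DESTINATION lib
        COMPONENT SharedLibraries NAMELINK_COMPONENT Development)
```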
Georg Richter
Merge pull request #306 from FaramosCZ/CONC-816

CONC-816: CMake modernization for the 3.12 baseline
Alessandro Vetere
MDEV-38814 Skip root in opposite-intention check

btr_cur_need_opposite_intention() checks whether a structural change at
a page (e.g., a page split inserting a node pointer into the parent,
a merge with a sibling removing one, or a boundary change updating
one) could cascade into its parent.  Handling such a structural
change would also require latching sibling pages at that level; those
latches cannot be safely acquired under SX without risking deadlock with
concurrent readers, hence the index lock must be upgraded from SX to X.
The root has no parent and no siblings, so such cascades are impossible
and no sibling latches are needed; return false early, avoiding
unnecessary upgrades.

A root page split (btr_root_raise_and_insert()) is a special case that
changes the tree height, but it never cascades upward into a parent.

Add a debug assertion in btr_cur_search_to_nth_level() that freshly
acquiring the root page is only permitted under X-lock or when no page
latch is held (brand-new descent).  Under SX with existing page latches,
the root must already be latched in the mtr, retained from the original
search_leaf() descent.

Add debug-only counters (Innodb_btr_need_opposite_intention_root and
Innodb_btr_index_lock_upgrades) to track how often the root early return
is taken and how often SX-to-X upgrades occur, exposed via SHOW GLOBAL
STATUS in debug builds.

Add a test that inserts and updates 1000 rows in a 4K-page table,
verifying that the root early return fires and that index lock upgrades
are reduced.
Alessandro Vetere
MDEV-38814 Enforce reorganization limit

Issue:
In btr_cur_optimistic_update(), a DB_OVERFLOW error is returned
if the BTR_CUR_PAGE_REORGANIZE_LIMIT is not met, to prevent CPU thrashing
due to excessive reorganization attempts in an almost-full page.
In btr_cur_pessimistic_update(), though, many of these errors were not
properly followed by a page split attempt, so many pessimistic
fallbacks occurred for the same page, which has less free space than
BTR_CUR_PAGE_REORGANIZE_LIMIT.

Fix:
In btr_cur_pessimistic_update(), if the optimistic update error
is DB_OVERFLOW, the page is uncompressed and the
BTR_CUR_PAGE_REORGANIZE_LIMIT is not met, avoid attempting
btr_cur_insert_if_possible() and fallthrough to attempt a page split.

In the index_lock_upgrade.result file, the counters are updated
accordingly: one extra page split is recorded, and the numbers of
reorganization attempts and index lock upgrades are both reduced.
Thirunarayanan Balathandayuthapani
MDEV-39261 MariaDB crash on startup in presence of indexed virtual columns

Problem:
========
A single InnoDB purge worker thread can process undo logs from different
tables within the same batch. But get_purge_table() and open_purge_table()
incorrectly assume a 1:1 relationship between a purge worker thread
and a table within a single batch. Based on this wrong assumption,
InnoDB attempts to reuse TABLE objects cached in thd->open_tables for
virtual column computation.

1) Purge worker opens Table A and caches the TABLE pointer in thd->open_tables.
2) Same purge worker moves to Table B in the same batch, get_purge_table()
retrieves the cached pointer for Table A instead of opening Table B.
3) Because innobase::open() is skipped for Table B, the virtual column
template is never initialized.
4) Virtual column computation for Table B aborts the server.

Solution:
========
The purge coordinator thread now opens both InnoDB dict_table_t* and
MariaDB TABLE* handles during batch preparation in
trx_purge_attach_undo_recs(). These handles are stored in a new
purge_table_ctx_t structure within each purge_node_t->tables map,
keyed by table_id. When a worker thread needs a TABLE* for virtual column
computation, it fetches it from purge_node_t->tables.

purge_table_ctx_t(): Structure holding the dict_table_t*, MDL_ticket* and
TABLE* for each table during batch processing. The TABLE* is opened
only when a virtual index exists for the table.

innobase_open_purge_table(), innobase_close_thread_table(): wrappers that
make open_purge_table() and close_thread_tables() accessible from the
InnoDB purge subsystem.

trx_purge_table_open(): Modified to open the TABLE* using the
coordinator's MDL ticket.

open_purge_table(): Accept and use the MDL ticket from the coordinator thread.

purge_sys_t::coordinator_thd: Tracks the coordinator thread in the purge
subsystem.

purge_node_t::end(): Skip innobase_reset_background_thd() for the coordinator
thread to prevent premature table closure.

trx_purge(): Closes the coordinator's TABLE* objects after all workers
have completed.
Sergei Petrunia
Fix in condition pushdown code: don't pass List<Field_pair> by value.

Affected functions:
  find_matching_field_pair(Item *item, List<Field_pair> pair_list)
  get_corresponding_field_pair(Item *item, List<Field_pair> pair_list)

Both only traverse the pair_list.
They use List_iterator, so we can't easily switch to a const reference.
Yuchen Pei
MDEV-38752 [wip] check supertype

- Type_handler_general_purpose_int
- Type_handler_decimal_result
- Type_handler_longstr
Dave Gosselin
MDEV-39209: use iterative cleanup for merged units to avoid stack overflow

Query optimization can merge derived tables (VIEWs being a type of derived
table) into outer queries, leaving behind stranded st_select_lex_unit objects
("stranded units") for post-query cleanup.

Previously, these were cleaned up recursively. For queries with many merged
derived tables, the deep recursion over the list of stranded units could
exhaust the stack. This change replaces the recursive cleanup with an
iterative loop to prevent stack overflows.
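The pattern, sketched in generic C with toy types (the real objects are st_select_lex_unit; this is not the server code):

```c
#include <stddef.h>

/* Toy stand-in for a stranded unit. */
struct unit {
  struct unit *next_stranded;  /* singly linked list of stranded units */
  int cleaned;
};

/* Iterative cleanup: stack depth stays constant no matter how many
   derived tables were merged, whereas the old recursive version
   consumed one stack frame per stranded unit. */
static void cleanup_stranded_units(struct unit *head)
{
  for (struct unit *u = head; u != NULL; ) {
    struct unit *next = u->next_stranded; /* fetch before cleanup, in
                                             case cleanup frees u */
    u->cleaned = 1;                       /* local cleanup only */
    u = next;
  }
}
```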
Sergei Golubchik
MDEV-39281 CONNECT OEM tables don't check subtype length

copy the corresponding check from OEMColumns()

Reported by Aisle Research
Sergei Golubchik
MDEV-39289 CONNECT REST support on Windows doesn't escape the url

let's use _spawnlp, which doesn't need its arguments to be escaped

Reported by Aisle Research
Alessandro Vetere
MDEV-38814 Avoid underflow-after-split in update

Issue:
In btr_cur_optimistic_update(), freshly split pages are subject
to DB_UNDERFLOW because the new size (after delete+insert) is
less than the BTR_CUR_PAGE_COMPRESS_LIMIT(index) target, even if the
record itself is growing.

Fix:
In btr_cur_optimistic_update(), avoid this behavior by gating the
DB_UNDERFLOW error condition behind a record-shrinkage check.

Nothing is changed if the record is not shrinking.

The counters in the index_lock_upgrade.result file are updated
accordingly.
The new count of DB_UNDERFLOW optimistic update errors during the test
is reduced to 0.
The index lock upgrades are reduced accordingly.
Michal Schorm
CONC-816: remove dead cmake version checks

plugins.cmake had a VERSION_LESS 2.8.11 branch falling back
to include_directories() instead of target_include_directories().
libmariadb/CMakeLists.txt had a VERSION_GREATER 2.8.7 guard
around OBJECT library creation. Both are always true/false
with cmake_minimum_required 3.12.

Co-Authored-By: Claude AI <[email protected]>
Michal Schorm
CONC-816: remove MSVC_VERSION > 1310 guards in WindowsCache

MSVC_VERSION 1310 is Visual Studio .NET 2003, released in 2003.
The check is always true: cmake 3.12 (our minimum) was released
in 2018 and does not support any MSVC older than VS 2015
(MSVC_VERSION 1900). The guarded functions (strnlen, vsnprintf,
strtok_s) have been available in MSVC since VS 2005 (1400).

Co-Authored-By: Claude AI <[email protected]>
Michal Schorm
CONC-816: ADD_DEFINITIONS -> add_compile_definitions

Replace the deprecated ADD_DEFINITIONS() with the modern
add_compile_definitions() (available since cmake 3.12).
The new form does not require the -D prefix on each definition.

One call passed -Wno-deprecated-declarations (a compiler warning
flag, not a preprocessor definition). ADD_DEFINITIONS() happened
to work because it blindly appends to the compile command line,
but add_compile_definitions() would auto-prepend -D, producing
the nonsensical -D-Wno-deprecated-declarations. Corrected to
add_compile_options(), which is the proper command for compiler
flags.

ADD_DEFINITIONS(${LIBMARIADB_PLUGIN_DEFS}) is left unconverted.
The variable is never set within this repository or mariadb-server
-- presumably set by an external build harness, if at all. Since
ADD_DEFINITIONS expects -DFOO format while add_compile_definitions
expects FOO (without -D), converting without knowing the variable's
content would risk double-prefixing (-D-DFOO).

Co-Authored-By: Claude AI <[email protected]>
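For illustration (HAVE_OPENSSL is a placeholder definition, not necessarily one used in these files):

```cmake
# Definitions lose the -D prefix with the modern command:
ADD_DEFINITIONS(-DHAVE_OPENSSL)        # old form
add_compile_definitions(HAVE_OPENSSL)  # new form (cmake >= 3.12)

# A compiler flag is not a preprocessor definition; it gets its own command:
add_compile_options(-Wno-deprecated-declarations)
```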
Alessandro Vetere
MDEV-38814 Monitor pessimistic update fallbacks

Add 4 debug-only counters:

1. Innodb_btr_cur_n_index_lock_upgrades
2. Innodb_btr_cur_pessimistic_update_calls
3. Innodb_btr_cur_pessimistic_update_optim_err_underflows
4. Innodb_btr_cur_pessimistic_update_optim_err_overflows

to track pessimistic update fallbacks and monitor how often the
exclusive index lock is acquired, exposed via SHOW GLOBAL
STATUS in debug builds.

Add a test that inserts and updates 1000 rows in a 4K-page table,
verifying the current status of the counters.
Georg Richter
Merge branch '3.3' into 3.4
Michal Schorm
CONC-816: CMAKE_COMPILER_IS_GNUCC -> CMAKE_C_COMPILER_ID

CMAKE_COMPILER_IS_GNUCC is deprecated since cmake 2.6.
The same file already uses CMAKE_C_COMPILER_ID at line 414;
this makes the GCC detection consistent.

Co-Authored-By: Claude AI <[email protected]>
Marko Mäkelä
Merge MDEV-21423
Georg Richter
Fix MARIADB_TIMESTAMP calculation and packing (CONC-815)

- Add a new type MARIADB_TIMESTAMP which includes seconds and
  microseconds since the epoch.
- Fix second and microsecond calculation (seconds are now
  packed in MyISAM format).
- Update documentation to reflect the new type.
- Update the Replication/Binlog section of the client/server protocol
  documentation.
- Add rpl_api tests for automated testing (requires binary
  log enabled on the server).

Credits to Asim Viladi Oglu Manizada who discovered this bug!
Alexey Botchkov
MDEV-39124 XMLTYPE: allow only well-formed XML.

Necessary checks added to the XMLTYPE.
Georg Richter
Merge pull request #305 from FaramosCZ/CONC-813+CONC-814

CONC-813, CONC-814 Fix UBSan errors
Michal Schorm
CONC-816: drop obsolete cmake policy loop

All policies except CMP0077 already default to NEW with
cmake_minimum_required(VERSION 3.12). Keep only the explicit
CMP0077 set until the minimum is bumped to 3.13.

Co-Authored-By: Claude AI <[email protected]>
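The surviving fragment would look roughly like this (a sketch, not the exact file contents):

```cmake
cmake_minimum_required(VERSION 3.12)

# The one policy that still needs an explicit setting: CMP0077
# (option() honours normal variables) defaults to NEW only from 3.13 on.
if(POLICY CMP0077)
  cmake_policy(SET CMP0077 NEW)
endif()
```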
Yuchen Pei
MDEV-38752 [wip] check supertype

- Type_handler_general_purpose_int
- Type_handler_decimal_result
- Type_handler_longstr
- time_common
- datetime_common
- date_common
Abhishek Bansal
MDEV-38474: ASAN heap-use-after-free in st_select_lex_unit::cleanup

cleanup_stranded_units() was added at the start of
st_select_lex_unit::cleanup() by 34a8209d6657. This causes a
use-after-free when nested subqueries are merged into their parent
unit. With nested subqueries like:

  SELECT * FROM t1
  WHERE a IN (SELECT b FROM t2
              WHERE a IN (SELECT c FROM t3 WHERE FALSE HAVING c < 0));

the stranded_clean_list chains the units as: Unit1 -> Unit2 -> Unit3.
Because cleanup_stranded_units() was called first, Unit1->cleanup()
would recursively trigger Unit2->cleanup(), which in turn would
trigger Unit3->cleanup(). Unit3's cleanup frees its heap-allocated
join structures. But since Unit3 was merged into Unit2, Unit2 still
holds references to Unit3's structures (e.g., st_join_table). When
control returns to Unit2 for its own local cleanup, it accesses
already-freed memory.

Fix: move cleanup_stranded_units() to the end of cleanup(). This way,
each unit completes its own local cleanup first—clearing its
references to any child structures—before triggering cleanup of its
stranded (child) units. This enforces a parent-first cleanup order.
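The ordering constraint can be modelled in a few lines of C (toy types and a sequence counter; the real method is st_select_lex_unit::cleanup()):

```c
#include <stddef.h>

/* Toy model of the fix: each unit records when its local cleanup ran,
   to demonstrate the parent-first ordering. */
struct unit {
  struct unit *stranded_child;
  int order;                    /* sequence number of local cleanup */
};

static int clock_ = 0;

static void cleanup(struct unit *u)
{
  u->order = ++clock_;          /* local cleanup FIRST: the parent drops
                                   its references into child structures */
  if (u->stranded_child)        /* stranded children are cleaned LAST */
    cleanup(u->stranded_child);
}
```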