
Console View


Alessandro Vetere
fixup: fix options parsing

avoid reading after the end of the current string.
Alessandro Vetere
fixup: implement and test per-index AHI options

- make per-table/per-index `adaptive_hash_index` and per-index
  `for_equal_hash_point_to_last_record` options have 3 possible values:
  DEFAULT (0), YES (1), NO (2)
- renamed `bytes_from_incomplete_fields` to
  `bytes_from_incomplete_field`
- per-index `complete_fields` and `bytes_from_incomplete_field` options
  now default to `ULONGLONG_MAX` (which means DEFAULT/unset) and
  can be set to any legal value including 0
  - `complete_fields` in [0, 64]
  - `bytes_from_incomplete_field` in [0, 16383]
- store all AHI related options in per-index `dict_index_t::ahi`
  bit-packed 32-bit atomic field `ahi_enabled_fixed_mask`
  - static assertions and debug assertions ensure that all options fit
    into the 32-bit field
  - packing details:
    - `enabled`, `adaptive_hash_index` (first 2 bits)
    - `fields`, `complete_fields` (7 bits)
    - `bytes`, `bytes_from_incomplete_field` (14 bits)
    - `left`, `~for_equal_hash_point_to_last_record` (1 bit)
    - `is_fields_set`, `fields` set flag (1 bit)
    - `is_bytes_set`, `bytes` set flag (1 bit)
    - `is_left_set`, `left` set flag (last 1 bit)
    - 5 bits spare after `is_left_set`
- remove unused per-table `ahi_enabled` option in `dict_table_t`
- in `innodb_ahi_enable` set per-index options in any case to avoid
  stale values being picked up later
- in `innodb_ahi_enable` ensure that the primary key is not updated
  twice if both per-table and per-index options are set
- in `btr_sea::resize` avoid losing the previous `btr_sea::enabled`
  setting
- in `btr_search_update_hash_ref` apply the per-index AHI options
  using bit-masking to override internal heuristic values with user
  preferences
- in `innodb.index_ahi_option` replace `ANALYZE FORMAT=JSON` on
  `SELECT` over warmed-up AHI with a stored procedure which checks
  whether AHI is used during a burst of index lookups by checking the
  delta of the `adaptive_hash_searches` InnoDB monitor variable, as
  this is more stable
- in `innodb.index_ahi_option` test a combination of per-table
  and per-index AHI options
- in `innodb.index_ahi_option` test that the maximum number of fields
  per (secondary) index is 64 (32+32)
- in `innodb.index_ahi_option_debug` test debug builds with
  `index_ahi_option_debug_check` debug variable enabled to verify that
  the proper per-index AHI options are applied during index lookups
- in `innodb.index_ahi_option_debug` test that illegal per-index AHI
  options are non-destructive and just lead to no AHI usage
- in `sys_vars.innodb_adaptive_hash_index_basic` replace:
  - `for_equal_hash_point_to_last_record=1`
  with
  - `for_equal_hash_point_to_last_record=no`
  to reflect the new 3-value logic
- in `sys_vars.innodb_adaptive_hash_index_basic` check that both
  `complete_fields` and `bytes_from_incomplete_field` can be set to 0
  (the per-index options are illustrated in the sketch below)
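To make the option spellings above concrete, here is a minimal SQL sketch (not taken from the test suite): the table/index option names come from this commit and from the MDEV-37070 commit further down, while the index names and the concrete values are illustrative assumptions:
-- Hedged sketch (illustrative names and values): per-table and per-index
-- AHI options with the 3-value DEFAULT/YES/NO logic; complete_fields must
-- be in [0, 64] and bytes_from_incomplete_field in [0, 16383].
CREATE TABLE t1 (
  a INT PRIMARY KEY,
  b VARBINARY(32),
  c INT,
  INDEX ib (b) adaptive_hash_index=NO,
  INDEX ic (c) bytes_from_incomplete_field=3,
  INDEX ibc (b, c) complete_fields=1,
  INDEX icb (c, b) for_equal_hash_point_to_last_record=YES
) ENGINE=InnoDB, adaptive_hash_index=YES;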
Sergei Golubchik
MDEV-32570 update tests
Brandon Nesterenko
MDEV-38641: Failure of Replication of System Versioning Tables

System-versioned table UPDATEs would fail to replicate on debug builds
with the debug assertion:

rpl_utility_server.cc:1058: bool RPL_TABLE_LIST::give_compatibility_error(rpl_group_info *, uint): Assertion `m_tabledef.master_column_name[col]' failed.

Though caught with system-versioned tables, the problem is
generalizable to any transaction which has multiple Rows_log_events
that update the same table. During the error reporting for columns
which were present in the Rows_log_event but not on the slave table,
a debug assertion validates that the master's column name exists.
After reporting this error, the pointer to that master column name is
nullified. This means that future Rows_log_events would not have this
column name, and the assertion would fail (release builds would likely
segfault when logging the error).

The fix is to not nullify the pointer after reporting the error, so
future Rows_log_events can continue using the pointer to the master's
column name.
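A hypothetical repro shape for the scenario described above (all names invented), assuming row-based replication to a slave whose copy of t1 is missing a column, so the column-compatibility error path is taken:
-- Hedged sketch: two row events against the same table in one replicated
-- transaction. On a debug slave missing column `b`, the first event reports
-- the missing-column error; before this fix the second event then hit the
-- nullified master column name.
CREATE TABLE t1 (a INT, b INT) ENGINE=InnoDB WITH SYSTEM VERSIONING;
SET SESSION binlog_format=ROW;
START TRANSACTION;
UPDATE t1 SET b = b + 1 WHERE a = 1;   -- first Rows_log_event
UPDATE t1 SET b = b + 2 WHERE a = 2;   -- second Rows_log_event
COMMIT;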
Brandon Nesterenko
MDEV-32570 (test): Add tests

This commit adds the following MTR tests for MDEV-32570:
* rpl_fragment_row_event_main.test
* rpl_fragment_row_event_mysqlbinlog.test
* rpl_fragment_row_event_span_relay_logs.test
* rpl_fragment_row_event_err.test

And also fixes existing tests appropriately.
Mohammad Tafzeel Shams
MDEV-38079: Crash Recovery Fails After ALTER TABLE…PAGE_COMPRESSED=1

Issue:
Recovery fails because the expected space ID does not match the space
ID stored in the page.

Root Cause:
- Before the crash, the nth page (n != 0) gets flushed to disk as a
  compressed page.
- Page 0 remains unflushed, and the compressed flag for the space is
  made durable only in the redo logs.
- During recovery, the compressed flag is first set to indicate a
  compressed space.
- Later, while applying redo logs, an earlier LSN may reset it to
  non-compressed and then back to compressed.
- If the nth page is read during this intermediate state, a compressed
  page may be read as non-compressed, causing a space ID mismatch.

Fix:
- recv_sys_t::space_flags_lsn : Added a map to track the last applied
  LSN for each space and avoid stale updates from earlier LSNs.
- recv_sys_t::update_space_flags() : Updates space->flags during
  recovery only if the update comes from the latest LSN.
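A hedged sketch of the sequence behind the report (the table definition and the crash point are illustrative assumptions; the real test presumably forces the flush/crash timing with debug facilities):
-- Hedged sketch: make the tablespace page-compressed, dirty pages other
-- than page 0, then crash before page 0 is flushed, so the compressed flag
-- is durable only in the redo log.
CREATE TABLE t1 (a INT PRIMARY KEY, b BLOB) ENGINE=InnoDB;
ALTER TABLE t1 PAGE_COMPRESSED=1;
INSERT INTO t1 SELECT seq, REPEAT('x', 8192) FROM seq_1_to_1000;
-- ... kill the server here; on restart, redo apply may transiently flip the
--     space between compressed and non-compressed, the window this fix
--     guards against.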
bsrikanth-mariadb
MDEV-31255: Crash with fulltext search subquery in explain delete/update

ft_handler isn't getting initialized for subqueries inside explain
delete/update queries. However, ft_handler is accessed inside
ha_ft_read(), which is the reason for the NULL pointer dereference.
This is not the case with non-explain delete/update queries, nor with
explain/non-explain select queries.

Follow the approach that SELECT statements use in
JOIN::optimize_constant_subqueries(): remove the SELECT_DESCRIBE
flag when invoking optimization of constant subqueries.

Single-table UPDATE/DELETEs have a SELECT_LEX but no JOIN.
So, we make optimize_constant_subqueries() not a member of the
JOIN class, move it to SELECT_LEX instead, and then invoke it from
single-table UPDATE/DELETE as well as from SELECT queries.
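A minimal SQL sketch of the affected statement shape (invented table and column names); before this fix, the constant fulltext subquery inside EXPLAIN DELETE/UPDATE reached ha_ft_read() with an uninitialized ft_handler:
-- Hedged sketch (names invented):
CREATE TABLE ft (a INT, t TEXT, FULLTEXT KEY (t)) ENGINE=InnoDB;
CREATE TABLE d (a INT) ENGINE=InnoDB;
EXPLAIN DELETE FROM d
  WHERE a = (SELECT MAX(a) FROM ft WHERE MATCH(t) AGAINST ('word'));
EXPLAIN UPDATE d SET a = a + 1
  WHERE a = (SELECT MAX(a) FROM ft WHERE MATCH(t) AGAINST ('word'));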
Marko Mäkelä
fixup! 7442b09bbc9ae6ea25dca14e595333b476ca36cf

Cover SET GLOBAL innodb_adaptive_hash_index=if_specified.
TODO: Only 2 distinct values of the table option
adaptive_hash_index are still being observed.
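For reference, a sketch (not the test file itself) of exercising the three enum values involved here and in the MDEV-37070 commit further down:
-- Sketch of the innodb_adaptive_hash_index enum values:
SET GLOBAL innodb_adaptive_hash_index=OFF;
SET GLOBAL innodb_adaptive_hash_index=ON;
SET GLOBAL innodb_adaptive_hash_index=if_specified;  -- the case covered here
SELECT @@GLOBAL.innodb_adaptive_hash_index;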
Monty
MDEV-36290: Improved support of replication between tables of different structure

One can have data loss in multi-master setups when 1) both masters
update the same table, 2) ALTER TABLE is run on one master which
re-arranges the column ordering, and 3) transactions are binlogged
in ROW binlog_format.

This is because the slave assumes that all columns are in the same
order on the master and slave and that all columns on the master also
exist on the slave. This happens even if binlog_row_metadata=FULL is
used.  If these assumptions do not hold, this will lead to silent data
loss.

A new option for the slave_type_conversions bit field,
ERROR_IF_MISSING_FIELD, has been added, along with a new error,
ER_SLAVE_INCOMPATIBLE_TABLE_DEF. This lets the user decide whether
the slave should abort replication if it is missing some field that
exists on the master. The option is off by default to keep things
compatible with earlier versions.
If a field is missing on the slave and log_warnings >= 1, a warning
will be logged to the error log.

This patch fixes this, when binlog_row_metadata=FULL is used on the
master, by mapping fields with identical names on the master and slave.
If the slave has fields that do not exist in the row event, these will
be set to their default values.
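A hedged configuration sketch of the behaviour described above; ERROR_IF_MISSING_FIELD is the new slave_type_conversions flag from this commit, and combining it with the pre-existing ALL_NON_LOSSY value is only an illustration:
-- Master: log full column metadata (including column names) in row events.
SET GLOBAL binlog_row_metadata=FULL;
-- Slave: abort with ER_SLAVE_INCOMPATIBLE_TABLE_DEF if a row event uses a
-- column that is missing on the slave (off by default).
SET GLOBAL slave_type_conversions='ALL_NON_LOSSY,ERROR_IF_MISSING_FIELD';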

The main idea is that we added two conversion tables:
m_tabledef.master_to_slave_map[master_column_index] -> slave_column_index
and m_tabledef.master_to_slave_error[master_column_index] which contains
an error number if the master_column does not exist on the slave or
it is not possible to convert the master data to the slave column.
master_to_slave_error[#] contains 0 if the column exists and is compatible.

General code changes:
- Instead of looping over row fields in the order of the slave table,
  we now loop over fields in the order of the binary log.
- We are using table->write_set to know which fields should be updated
  on the slave. This is reflected in unpack_row().
- We are calling TABLE::mark_columns_per_binlog_row_image() to ensure
  that rpl_write_set is properly set. This is needed if the slave is
  also doing binary logging.
- Before, replication aborted if the master and slave tables were too
  different.  Now replication is only aborted if the row actually uses
  columns that do not exist on the slave (and ERROR_IF_MISSING_FIELD
  is used) or uses columns that cannot be converted.
  - Instead of giving errors in compatible_with(), which is used when
    the table is first accessed by a row event, we now give errors
    when we examine a row event and notice that it accesses a missing
    or incompatible field.

Other code changes:
- Removed the conv_table argument from compatible_with(); it is now
  stored directly in RPL_TABLE_LIST->m_conv_table.
- table_def::compatible_with() now returns 1 on error (not 0).
- Removed the m_width and skip arguments from prepare_record() as we
  are now using table->write_set() to check which elements need a
  default value.
- Moved DBUG_ENTER() to its proper place (after variable
  declarations) in a few functions.
- Some changes in unpack_row():
  - Replaced null_mask and null_ptr with an indexed bit check for
    simplicity.
  - Removed check of rgi == null and table_found which never worked.
  - Updated comments to reflect current code.
  - Indentation changes as the code now uses 'continue' instead of
    'if-else' in the main loop.
  - The code to throw away 'extra master fields' is not needed as we
    are now looping over fields in binary log, not over fields in
    slave table.
- Simplified get_table_data(TABLE *table_arg) by returning found
  table_list.
- Errors for row events are now initialized in compatible_with(),
  checked in check_wrong_column_usage() and reported in
  give_compatibility_error().

Co-authored-by: Brandon Nesterenko <[email protected]>
Rucha Deodhar
MDEV-38620: Server crashes in setup_returning_fields upon 2nd execution
of multi-table-styled DELETE from a view

Analysis:
The item_list of builtin_select stores the fields that appear in the
RETURNING clause.
During the "EXECUTE" command, a "dummy item" is added to the item_list
of the select_lex (builtin_select) representing the DELETE, during
Sql_cmd_delete::precheck(). This dummy item is added because
ColumnStore needs a temporary table: results are put into a temporary
table, and to create a temporary table we need to know which columns
there are, which we get from select_lex->item_list.
As a result, the item_list now has an item even when there is no
RETURNING clause, so setup_returning_fields() is executed when it
should have exited already.

Fix:
Instead of checking whether builtin_select's item_list is empty to
determine whether there is a RETURNING clause, use a flag.
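A hypothetical repro shape matching the title (names invented): a multi-table-styled DELETE from a view, prepared once and executed twice:
-- Hedged repro sketch (invented names):
CREATE TABLE t1 (a INT) ENGINE=InnoDB;
CREATE VIEW v1 AS SELECT a FROM t1;
PREPARE stmt FROM 'DELETE v1 FROM v1 WHERE a > 0';
EXECUTE stmt;
EXECUTE stmt;   -- crashed in setup_returning_fields() before this fix
DEALLOCATE PREPARE stmt;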
Monty
MDEV-37070  Implement table options to enable/disable features

Added ADAPTIVE_HASH_INDEX=YES|NO table and index option to InnoDB.
The table and index options only have an effect if InnoDB adaptive hash
index feature is enabled.

- Having the ADAPTIVE_HASH_INDEX TABLE option set to NO will disable the
  adaptive hash index for all indexes in the table that do not have
  the index option adaptive_hash_index=yes.
- Having the ADAPTIVE_HASH_INDEX TABLE option set to YES will enable the
  adaptive hash index for all indexes in the table that do not have
  the index option adaptive_hash_index=no.
- Using adaptive_hash_index=default deletes the old setting.
- One can also use OFF/ON as the options. This is to make it work
  similarly to other existing options.
- innodb.adaptive_hash_index has been changed from a bool to an enum with
  values OFF, ON and IF_SPECIFIED.  If IF_SPECIFIED is used, the adaptive
  hash index is only used for tables and indexes that specify
  adaptive_hash_index=on.
- The following new options can be used to further optimize the adaptive
  hash index for an index:
  - complete_fields (default 0):
    - 0 to the number of columns the key is defined on
  - bytes_from_incomplete_fields (default 0):
    - This is only usable for memcmp() comparable index fields, such as
      VARBINARY or INT. For example, a 3-byte prefix on an INT will
      return an identical hash value for 0‥255, another one for 256‥511,
      and so on.
  - for_equal_hash_point_to_last_record (default 0)
    - The default is the first record, known as left_side in the code.
      Example: if we have an INT column with the values 1, 4, 10 and
      bytes=3, will that hash value point to the record 1 or the
      record 10?
      Note: all values will necessarily have the same hash value
      computed on the big endian byte prefix 0x800000, for all of the
      values 0x80000001, 0x80000004, 0x8000000a. InnoDB inverts the
      sign bit in order to have a memcmp() compatible comparison.

Example:
CREATE TABLE t1 (a int primary key, b varchar(100), c int,
index (b) adaptive_hash_index=no, index (c))
engine=innodb, adaptive_hash_index=yes;

Notable changes in InnoDB
- btr_search.enabled was changed from a bool to a ulong to be
  able to handle the options OFF, ON and IF_SPECIFIED. ulong is needed
  to compile with MariaDB enum variables.
- To be able to find all instances where btr_search.enabled was used,
  I changed all code to use btr_search.get_enabled() when accessing
  the value and used btr_search.is_enabled(index) to test if AHI is
  enabled for the index.
- btr_search.enabled() was changed to always take two parameters,
  resize and the value of enabled. This was needed as enabled can now
  have the values 0, 1, and 2.

Visible user changes:
- select @@global.adaptive_hash_index will now return a string instead
  of 0 or 1.

Other things (for Marko)
- Check buf_pool_t::resize() in buf0buff.cc. The first call to
  btr_search.enable will never happen as ahi_disabled is always 0
  here.
Alexander Barkov
MDEV-10152 Add support for TYPE .. IS REF CURSOR

Version#2

In progress
Michael Widenius
MDEV-37674: Replace std::string with LEX_CSTRING in Optional_metadata_fields

- Fewer memory allocations (less fragmentation), less memory usage,
  faster performance, less code (both source and executed).
  - Added option used by RPL_TABLE_LIST::create_column_mapping() to
    only parse and allocate column names.
  - Avoid duplicate memory allocations of column names.
- Some protection for out-of-memory (when allocating field names).
  - More work is needed to remove all allocation problems in
    Optional_metadata_field().
- All allocated strings end with \0, which makes the code safer.
Alessandro Vetere
fixup: allocate table options in share root

Fix MSAN/ASAN errors due to a use-after-free of `option_struct` in
the test `parts.partition_special_innodb`.

TODO: somebody more knowledgeable about partitions should review this.
Sergei Petrunia
MDEV-31255 Crash with Explain DELETE on fulltext search query

Variant 2:
Follow the approach that SELECT statements use in
JOIN::optimize_constant_subqueries(): remove the SELECT_DESCRIBE
flag when invoking optimization of constant subqueries.

Single-table UPDATE/DELETEs have a SELECT_LEX but don't have a JOIN,
so we make optimize_constant_subqueries() not a member of the
JOIN class, and then invoke it from single-table UPDATE/DELETE.

TODO: test coverage for EXPLAIN UPDATE.
Sergei Golubchik
MDEV-37815 field and index engine attributes in partitioning are broken

just like table attributes, field and index attributes must
be parsed using the underlying engine, not ha_partition.
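A hedged illustration of the affected statement shape, reusing the InnoDB adaptive_hash_index attribute from elsewhere in this view as the example engine-defined index attribute (any engine-defined field/index attribute applies; names invented):
-- The index attribute must be parsed by the underlying engine (InnoDB),
-- not by ha_partition.
CREATE TABLE p1 (
  a INT PRIMARY KEY,
  b VARCHAR(100),
  INDEX ib (b) adaptive_hash_index=NO
) ENGINE=InnoDB, adaptive_hash_index=YES
  PARTITION BY HASH (a) PARTITIONS 4;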
Sergei Petrunia
MDEV-31255 Crash with Explain DELETE on fulltext search query

Variant 2: use the same approach that SELECTs are using.
TODO: test coverage for EXPLAIN UPDATE.
Michael Widenius
MDEV-19683 Add support for Oracle TO_DATE()

Syntax:
TO_DATE(string_expression [DEFAULT string_expression ON CONVERSION ERROR],
        format_string [,NLS_FORMAT_STRING])
The format_string has the same format elements as TO_CHAR(), except a
few elements that are not supported/usable for TO_DATE().
TO_DATE() returns a datetime or date value, depending on if the format
element FF is used.

Allowed separators, same as TO_CHAR():
space, tab and any of !#%'()*+,-./:;<=>

'&' can also be used if the next character is not a letter a-z or A-Z.
"text" indicates a text string that appears verbatim in the format. One
cannot use " as a separator.

Format elements supported by TO_DATE():
AD          Anno Domini ("in the year of the Lord")
AD_DOT      Anno Domini ("in the year of the Lord")
AM          Meridian indicator (Before midday)
AM_DOT      Meridian indicator (Before midday)
DAY         Name of day
DD          Day (1-31)
DDD         Day of year (1-366)
DY          Abbreviated name of day
FF[1-6]     Fractional seconds
HH          Hour (1-12)
HH12        Hour (1-12)
HH24        Hour (0-23)
MI          Minutes (0-59)
MM          Month (1-12)
MON         Abbreviated name of month
MONTH       Name of month
PM          Meridian indicator (After midday)
PM_DOT      Meridian indicator (After midday)
RR          20th century dates in the 21st century. 2 digits:
            50-99 is assumed from 2000, 0-49 is assumed from 1900.
RRRR        20th century dates in the 21st century. 4 digits.
SS          Seconds
SYYYY       Signed 4 digit year; MariaDB only supports positive years
Y           1 digit year
YY          2 digit year
YYY         3 digit year
YYYY        4 digit year

Note that if there is a missing part of the date, the current date is used!
For example, with 'MM-DD HH-MM-SS' the current year will be used
(Oracle behaviour).
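As a worked illustration of the syntax described above (the statements follow the documented format elements; they are sketches, not output copied from the test suite):
-- Hedged illustrations of the documented TO_DATE() syntax; results omitted.
SELECT TO_DATE('2024-01-15 10:23:45', 'YYYY-MM-DD HH24:MI:SS');
SELECT TO_DATE('15-JAN-24', 'DD-MON-RR');
-- DEFAULT ... ON CONVERSION ERROR and an NLS format string:
SELECT TO_DATE('garbage' DEFAULT '2024-01-01' ON CONVERSION ERROR,
               'YYYY-MM-DD');
SELECT TO_DATE('15 Januar 2024', 'DD MONTH YYYY',
               'NLS_DATE_LANGUAGE=GERMAN');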

Not supported options:
- BC, D, DL, DS, E, EE, FM, FX, RM, SSSSS, TS, TZD, TZH, TZR, X, SY
  BC is not supported by MariaDB datetime.
- Most of the others are exotic formats that do not make sense in MariaDB,
  as we return a datetime or a datetime with fractions, not a string.
- D (day-of-week) is not supported as it is not clear exactly how it would
  map to MariaDB. This element depends on the NLS territory of the session.
- RR only works with 2 digit years (in Oracle RR can also work with 4
  digit years in some contexts, but the rules are not clear).

Extensions / differences compared to Oracle:
- MariaDB supports FF (fractional seconds).  If FF[#] is used,
  then TO_DATE will return a datetime with # subsecond digits.
  If FF is not used a datetime will be returned.
  There is a warning (no error) if the string contains more digits than
  what is specified with FF[#].
- Names can be shortened to their unique prefix. For example January and
  Ja both work fine.
- No error if the date string is shorter than the format_string and the
  next unused character is not a number. This is useful to get a date
  from a mixed set of strings in date or datetime format.
  Oracle gives an error if the date string is too short.
- MariaDB supports short locales as language names.
- NLS_DATE_FORMAT can use both " and ' for quoting.
- NLS_DATE_FORMAT must be a constant string.
  - This is to ensure that the server knows which locale to use
    when executing the function.

New formats handled by TO_CHAR():
FF[1-6]    Fractional seconds
DDD        Daynumber 1-366
IW          Week 1-53 according to ISO 8601
I          1 digit year according to ISO 8601
IY          2 digit year according to ISO 8601
IYY        3 digit year according to ISO 8601
IYYY        4 digit year according to ISO 8601
SYYY        4 digit year according to ISO 8601 (Oracle can do signed)

Supported NLS_FORMAT_STRING options are:
NLS_CALENDAR=GREGORIAN
NLS_DATE_LANGUAGE=language

Supported languages are:
- All MariaDB short locales, like en_AU.
- The following Oracle language names:
ALBANIAN, AMERICAN, ARABIC, BASQUE, BELARUSIAN, BRAZILIAN PORTUGUESE,
BULGARIAN, CANADIAN FRENCH, CATALAN, CROATIAN, CYRILLIC SERBIAN, CZECH,
DANISH, DUTCH, ENGLISH, ESTONIAN, FINNISH, FRENCH, GERMAN,
GREEK, HEBREW, HINDI, HUNGARIAN, ICELANDIC, INDONESIAN, ITALIAN,
JAPANESE, KANNADA, KOREAN, LATIN AMERICAN SPANISH, LATVIAN,
LITHUANIAN, MACEDONIAN, MALAY, MEXICAN SPANISH, NORWEGIAN, POLISH,
PORTUGUESE, ROMANIAN, RUSSIAN, SIMPLIFIED CHINESE, SLOVAK, SLOVENIAN,
SPANISH, SWAHILI, SWEDISH, TAMIL, THAI, TRADITIONAL CHINESE, TURKISH,
UKRAINIAN

Development bugs fixed:
MDEV-38403 Server crashes in Item_func_to_date::fix_length_and_dec upon
          using an invalid argument
MDEV-38400 compat/oracle.func_to_date fails with PS protocol and cursor
          protocol (Fixed by Serg)
MDEV-38404 TO_DATE: MTR coverage omissions, round 1
MDEV-38509 TO_DATE: AD_DOT does not appear to be supported
MDEV-38513 TO_DATE: NULL value for format string causes assertion failure
MDEV-38521 TO_DATE: Date strings with non-ASCII symbols cause warnings
          and wrong results
MDEV-38578  TO_DATE: Possibly unexpected results upon wrong input
MDEV-38582  TO_DATE: NLS_DATE_LANGUAGE=JAPANESE does not parse values which work in Oracle

Known issues:
- Format string character matches inside quotes are done
  one-letter-to-one-letter, like in the LIKE predicate. That means things
  like expansions and contractions do not work.
  For example 'ss' does not match 'ß' in collations which treat them
  as equal for the comparison operator.
  Matching is done taking into account the case and accent sensitivity
  of the subject argument collation, so for example this now works:
  MariaDB [test]> SELECT TO_DATE('1920á12','YYYY"a"MM') AS c;
  +---------------------+
  | c                   |
  +---------------------+
  | 1920-12-17 00:00:00 |
  +---------------------+

Co-author and reviewer: Alexander Barkov <[email protected]>
Thirunarayanan Balathandayuthapani
MDEV-38667  Assertion in diagnostics area on DDL stats timeout

Reason:
======
During InnoDB DDL, the statistics update fails due to a lock wait
timeout and calls push_warning_printf() to generate warnings,
but then returns success, causing the SQL layer
to attempt calling set_ok_status() when the diagnostics area
is already set.

Solution:
=========
Temporarily set abort_on_warning to false around these operations,
which prevents warning-to-error escalation, and restore the original
setting after calling HA_EXTRA_END_ALTER_COPY for the alter operation.
Marko Mäkelä
Test case by Thiru

FIXME: Correctly implement the per-index parameters and adjust the test
Marko Mäkelä
WIP: Consistently handle the adaptive_hash_index attribute

innodb_ahi_enable(): Apply the adaptive_hash_index table and index options
to the InnoDB table and index.

FIXME: The value 2, which the logic makes use of, is never being used.
Raghunandan Bhat
MDEV-37109: Assertion in `alloc_root` fails when calling stored procedure with view

Problem:
  When a stored procedure executes a statement involving a view and that
  view uses stored routines, the server registers all the routines used
  by the view for prelocking during the current execution. If the stored
  procedure is in its second or subsequent execution, its memory root is
  marked `READ_ONLY`. However, the view processing logic, during the
  second execution tries to allocate on memory root marked `READ_ONLY`,
  triggering assertion failure and server crash.

Fix:
  When handling such views, if the memory root is marked as READ_ONLY,
  switch to a separate memory root.
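A hypothetical repro shape matching the description above (names invented): a view that uses a stored function, queried from a stored procedure that is executed more than once:
-- Hedged repro sketch (invented names):
CREATE TABLE t1 (a INT) ENGINE=InnoDB;
CREATE FUNCTION f1() RETURNS INT DETERMINISTIC RETURN 1;
CREATE VIEW v1 AS SELECT a, f1() AS f FROM t1;
CREATE PROCEDURE p1() SELECT * FROM v1;
CALL p1();
CALL p1();   -- 2nd execution asserted in alloc_root() before this fix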
Brandon Nesterenko
MDEV-32570 (client): Fragment ROW replication events larger than slave_max_allowed_packet

This patch extends mysqlbinlog with logic to support the output and
replay of the new Partial_rows_log_events added in the previous
commit. Generally speaking, as the assembly and execution of the
Rows_log_event happens in Partial_rows_log_event::do_apply_event(),
there isn’t much logic required other than outputting
Partial_rows_log_event in base64, with two exceptions.

In the original mysqlbinlog code, all row events fit within a single
BINLOG base64 statement; such that the Table_map_log_event sets up
the tables to use, the Row Events open the tables, and then after
the BINLOG statement is run, the tables are closed and the rgi is
destroyed. No matter how many Row Events within a transaction there
are, they are all put into the same BINLOG base64 statement.
However, for the new Partial_rows_log_event, each fragment is split
into its own BINLOG base64 statement (to respect the server’s
configured max_packet_size). The existing logic would close the
tables and destroy the replay context after each BINLOG statement
(i.e. each fragment). This means that 1) Partial_rows_log_events
would be unable to assemble Rows_log_events because the rgi is
destroyed between events, and 2) multiple re-assembled
Rows_log_events could not be executed because the context set-up by
the Table_map_log_event is cleared after the first Rows_log_event
executes.

To fix the first problem, where we couldn’t re-assemble
Rows_log_events because the rgi would disappear between
Partial_rows_log_events, the server will not destroy the rgi when
ingesting BINLOG statements containing Partial_rows_log_events that
have not yet assembled their Rows_log_event.

To fix the second problem, where the context set-up by the
Table_map_log_event is cleared after the first assembled
Rows_log_event executes, mysqlbinlog caches the Table_map_log_event
to re-write for each fragmented Rows_log_event at the start of the
last fragment’s BINLOG statement. In effect, this will re-execute
the Table_map_log_event for each assembled Rows_log_event.

Reviewed-by: Hemant Dangi <[email protected]>
Acked-by: Kristian Nielsen <[email protected]>
Signed-off-by: Brandon Nesterenko <[email protected]>
Brandon Nesterenko
MDEV-38117: Replication stops with ERROR when Primary Key is not defined in Multi Master

When replicating tables with different structures between a master and
slave (via binlog_row_metadata=FULL, i.e. MDEV-36290), if the table
didn't have any keys, replication could break from a
  Can't find record in '<table>', Error_code: 1032;
error. This was due to the table's internal tracking of provided values
(i.e. TABLE::has_value_set) persisting across transactions. I.e., when
applying a new row event, the has_value_set bitset would start in a
state indicative of the last event. Then, if the new row event required
first finding a row, the has_value_set could represent a larger bitset
than what was actually unpacked from the master. More specifically, if
the last row event inserted something into the table, where slave-only
columns would be given default/auto-generated values, these would
update the TABLE::has_value_set. However, when the next row event would
come in, those auto-generated values from the last transaction would
not be present, but the row-search would include them to look-up the
row, and not be able to find anything.
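A hypothetical shape of the failing setup (names invented, and the exact event sequence is an assumption based on the description above): a keyless table whose slave copy has an extra DEFAULT-valued column, replicated with binlog_row_metadata=FULL.
-- Master (no keys):
CREATE TABLE t1 (a INT, b INT) ENGINE=InnoDB;
-- Slave copy (extra column filled with its default):
--   CREATE TABLE t1 (a INT, b INT, c INT DEFAULT 0) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1, 1);      -- event 1: slave fills c, sets has_value_set
UPDATE t1 SET b = 2 WHERE a = 1;   -- event 2: stale has_value_set could make
                                   -- the row search fail (error 1032)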

This is fixed by resetting the table's internal state of provided
values (TABLE::has_value_set) before unpacking the row data. Doing so
revealed a bug where unpacked fields which explicitly provided NULL
values would skip indicating that an explicit value was provided
(and thereby couldn't be found by find_row()). To fix this, the
slave-side field will always call has_explicit_value() if it is
present in the packed data.

Test case based on work from: Deepthi Sreenivas <[email protected]>

Signed-off-by: Brandon Nesterenko <[email protected]>
Reviewed-by: Monty <[email protected]>
Vladislav Vaintroub
make this brilliant piece of code compile

- Proper dependencies: if it is going to use all server plugins, it needs
  to depend on all server plugins (at build time).

- Remove C++ for directory creation, scanning the whole build directory
  and what not. CMake can do it better. CMake knows what plugins are there,
  and their paths. CMake knows what plugins belong to what config in
  multi-config builds.

- Allow ':' as plugin separator in the list on Windows. It was previously
  wrongly disabled, citing that it is used after the drive letter. We do
  not have paths in plugin_load, and ':' can't even be used in file names
  on Windows.

  This simplifies passing plugin-load in the CMake custom command; passing
  semicolon-delimited things is a mess, because both CMake and the Unix
  shell try to interpret them.

- Remove mariadb-migrate-config-file from the minbuild list; it is
  there by mistake and forces a max build due to its dependencies.

Hopefully the constant rebuilds of the utility (it depends on the whole
server including all plugins!) will annoy everyone so that something
better gets written. Hint: extract the variables etc. information at
runtime, not at compile time.
Aleksey Midenkov
MDEV-28619 Server crash and UBSAN null-pointer-use in Window_funcs_sort::setup

Optimization in st_select_lex_unit::prepare() removes ORDER BY for
certain subqueries. That excludes the ORDER BY items from being fixed,
but sl->window_funcs still contains window function items, and those
related to the optimized-out ORDER BY remain unfixed. The error about a
missing window spec is thrown when the item is fixed. Hence we get
redundant processing of window function items without checking window
spec existence.

The fix removes the related window function items when ORDER BY is
optimized out. ORDER accumulates window_funcs at the parser stage, which
are then removed from SELECT_LEX::window_funcs. The fix also updates the
similar optimization in mysql_make_view().
gkodinov
MDEV-38673: Focus the pick-a-task wording in CONTRIBUTING.md

The wording on how to pick a task in CONTRIBUTING.md is a bit weak.
A more focused message is needed to highlight the tasks that
are available for people to work on.
Aleksey Midenkov
MDEV-28619 with_flags cleanup

The best practice is to initialize as much info as possible in the class
constructor. with_flags access may be needed before fix_fields(), or
fix_fields() may not be called at all, as is the case in MDEV-28619.
The fix for MDEV-28619 requires a WINDOW_FUNC check on an unfixed item.
Rex Johnston
MDEV-30073 Wrong result on 2nd execution of PS for query with NOT EXISTS

Summary: Items reverted with the change_item_tree mechanism are
involved in permanent optimizer transformations.  This commit ensures
that items involved in these permanent transformations are created
during the first execution and re-used for subsequent executions.

Queries affected by this bug are numerous, but will always involve
1) a 2nd execution of a prepared statement or procedure, and
2) a permanent transformation, such as a semi-join optimization.

Detail:

Consider this query, run as a prepared statement:
SELECT * FROM t1
  WHERE EXISTS
  (
    SELECT dt.a FROM
      (
        SELECT t2a as a, t2b as b FROM t2
      ) dt
      WHERE dt.b = t1a
  )

During name resolution of field dt.b (in the where clause) we end
up calling find_field_in_view()/.../create_view_field().
This is responsible for creating a wrapper around the found Item
(Item_field*)`test`.`t2`.`t2b`.
While this Item_direct_view_ref representing 'dt.b' is allocated on
Statement (permanent) memory, the change is registered to be reversed
at the end of statement execution.  This is odd and contrary to the
permanent nature of this transformation.

Item::exists2in_processor() is called during the preparation in the
first execution.
We transform the query from
select * from t1 where
exists
(
  select `test`.`t2`.`t2b` from
    (
      select `test`.`t2`.`t2a` AS `a`,`test`.`t2`.`t2b` AS `b` from `test`.`t2`
    ) `dt`
    where `test`.`t2`.`t2b` = `test`.`t1`.`t1a`
    limit 1
)

select * from t1 where
`test`.`t1`.`t1a` in
(
  select `test`.`t2`.`t2b` from
    (
      select `test`.`t2`.`t2a` AS `a`,`test`.`t2`.`t2b` AS `b` from `test`.`t2`
    ) `dt`
    where 1
)

later, the optimizer merges the derived table dt into its parent

select * from t1 where
`test`.`t1`.`t1a` in
(
  select `test`.`t2`.`t2b` from t2 where 1
)

then this is transformed into a semi-join

select t1.* from t1 semi join t2 on t1a = t2b

At the end of the first execution, the item t2b above is reverted to
dt.b.  During the subsequent name resolution of dt.b, it is resolved to
t2a, and the semi-join executed corresponds to

select t1.* from t1 semi join t2 on t1a = t2a

causing a different result set.
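Since the problem requires a second execution, here is a minimal execution sketch of the query quoted above (table definitions are inferred from the column names in this commit message):
-- Hedged sketch; table definitions inferred from the column names above.
CREATE TABLE t1 (t1a INT);
CREATE TABLE t2 (t2a INT, t2b INT);
PREPARE stmt FROM
  'SELECT * FROM t1
   WHERE EXISTS (SELECT dt.a
                 FROM (SELECT t2a AS a, t2b AS b FROM t2) dt
                 WHERE dt.b = t1a)';
EXECUTE stmt;
EXECUTE stmt;   -- before this fix, dt.b could resolve to t2a: wrong result
DEALLOCATE PREPARE stmt;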

Initial Author: Igor Babaev
Reformatted and refactored by: Rex Johnston ([email protected])
Sergei Golubchik
MDEV-38604 fix SP execution too
Marko Mäkelä
fixup! d52fe8f7e7241d1f780f40eba39f4679b8a7a8f6
Brandon Nesterenko
MDEV-38665: mariadb-binlog --read-from-remote-server | mariadb fails

Piping Partial_rows_log_events read from mariadb-binlog
--read-from-remote-server to a mariadb client fails. This is because
the temp_buf for Log_events read by mariadb-binlog is the network
buffer by default, and is shared by all events (and cleared after
each event is processed). In particular, the Table_map event is
needed after it is processed by Partial_rows_log_events, so it can be
re-printed for the last event in the group. However, its content was
cleared after being initially processed, and the effect is that the
last Partial_rows_log_event was just printed twice.

This is fixed by having Table_map events manage their own memory
buffer when reading from a remote server. This pattern is already
established by annotate events, and the function is re-used and
re-named to be general.
Rex Johnston
MDEV-30073  Wrong result on 2nd execution of PS for query with NOT EXISTS

Add assert to ensure Item_direct_view_refs are not allocated on
the 2nd execution.