
Console View


Categories: connectors experimental galera main
Legend: Passed | Failed | Warnings | Failed Again | Running | Exception | Offline | No data

gkodinov
Follow-up for MDEV-384999: fix cmake warnings:

Moved the include inside the check for the CMake version so that
it doesn't fail with older CMake releases.
Oleg Smirnov
MDEV-38574 Rename cloning functions of class Item and descendants

Rename cloning methods of class Item and its descendants
in the following way:

  (from)              (to)
  do_build_clone  ->  deep_copy
  build_clone     ->  deep_copy_with_checks

  do_get_copy     ->  shallow_copy
  get_copy        ->  shallow_copy_with_checks

to better reflect their functionality.

Also make Item::deep_copy() and shallow_copy() protected.
Outside users should call deep_copy_with_checks()
and shallow_copy_with_checks().
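
A minimal sketch of the resulting API shape (only the names come from
this commit; the exact signatures are assumptions):

    class Item {
    protected:
      // Low-level copy primitives, now protected:
      virtual Item *deep_copy(THD *thd) const;     // was do_build_clone()
      virtual Item *shallow_copy(THD *thd) const;  // was do_get_copy()
    public:
      // Public wrappers that add validity checks around the primitives:
      Item *deep_copy_with_checks(THD *thd);       // was build_clone()
      Item *shallow_copy_with_checks(THD *thd);    // was get_copy()
    };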
Brandon Nesterenko
MDEV-32570 Prep: Refactor functions to handle >32-bit lengths

The functions to read row log events from buffers used 32-bit numeric
types to hold the length of the buffer. MDEV-32570 will add in support
for row events larger than what 32-bits can represent, and so this
patch changes the type for the length variable to size_t so larger
row events can be created from raw memory.
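
An illustrative before/after of the signature change (parameter list
abridged; names assumed):

    // Before: a 32-bit length caps the buffer at 4GB - 1.
    Rows_log_event(const uchar *buf, uint32 event_len /* , ... */);
    // After: size_t lets larger row events be created from raw memory.
    Rows_log_event(const uchar *buf, size_t event_len /* , ... */);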
Oleksandr Byelkin
wolfSSL v5.8.4-stable
Sergey Vojtovich
MDEV-37862 - innodb.gap_locks test failure: 0 lock structs, 0 row locks

Test was affected by incompletely closed preceding connections.

Make test agnostic to concurrent connections by querying
InnoDB status only for connections that it uses.

This is an addition to 3b2169f0d1e, which didn't handle the case
where the preceding test has an active transaction on disconnect.
Thirunarayanan Balathandayuthapani
MDEV-37042  innodb_undo_log_truncate=ON leads to out-of-bounds write

During undo tablespace truncation, pages with LSNs older than the
tablespace creation LSN may still exist in the buffer pool and get
submitted to the doublewrite buffer. When mtr_t::commit_shrink() is
invoked shortly after doublewrite batch submission,
this can lead to out-of-bounds write errors.

Fix:
===
buf_dblwr_t::flush_buffered_writes_completed(): skip doublewrite
processing for pages whose page LSN is older
than the tablespace creation LSN. Such pages belong to the old
tablespace before truncation and should not be written through the
doublewrite buffer.
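
A hypothetical sketch of the skip condition (the accessor for the
creation LSN is an assumption):

    // In buf_dblwr_t::flush_buffered_writes_completed():
    const lsn_t page_lsn= mach_read_from_8(page + FIL_PAGE_LSN);
    if (page_lsn < space.create_lsn)   // page predates the truncation
      continue;                        // don't pass it to doublewrite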
Marko Mäkelä
MDEV-38595: Simplify InnoDB doublewrite buffer creation

buf_dblwr_t::create(): Create the doublewrite buffer in a single
atomic mini-transaction. Do not write any log records for
initializing any doublewrite buffer pages, in order to avoid
recovery failure with innodb_log_archive=ON starting from the
very beginning.

The mtr.commit() in buf_dblwr_t::create() was observed to
comprise 295 mtr_t::m_memo entries: 1 entry for the
fil_system.sys_space and the rest split between page 5 (TRX_SYS)
and page 0 (allocation metadata). We are nowhere near the
sux_lock::RECURSIVE_MAX limit of 65535 per page descriptor.

Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Saahil Alam
Sergei Golubchik
Add test cases for comment handling in audit plugin

The fix was implemented in 635559a.

Previously, with the hand-rolled SQL string parsing in the audit plugin,
there were many simple ways to bypass server audit logging by placing
comments strategically in the query string. The fix in 635559a removes
the custom SQL string parser, which addresses the issue.

We now add MTRs for validation.
Brandon Nesterenko
MDEV-32570 Prep: Split Rows_log_event::write_data_body()

To prepare for MDEV-32570, Rows_log_event::write_data_body() is
split into two functions:
1. write_data_body_metadata(), which writes the context of the rows
    data (i.e. width, cols, and cols_ai), which will only be written
    for the first event fragment.
2. write_data_body_rows(), which allows the writing of the rows data
    to be fragmented by parameterizing the writing of the rows data to
    start at a given offset and only write a certain length. This lets
    each row fragment (for MDEV-32570) contain a chunk of the rows
    data.
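
A hypothetical shape of the split (parameter names assumed):

    class Rows_log_event /* ... */ {
      // Writes the context of the rows data (width, cols, cols_ai);
      // emitted only with the first event fragment.
      bool write_data_body_metadata(Log_event_writer *writer);
      // Writes the rows data starting at `offset`, at most `len` bytes,
      // so each fragment carries one chunk of the rows data.
      bool write_data_body_rows(Log_event_writer *writer,
                                size_t offset, size_t len);
    };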
Sergei Golubchik
MDEV-32570 update tests
Oleg Smirnov
MDEV-38574 Rename cloning functions of class Item and descendants

Rename `Item::clone_item()` to `clone_constant()`, and do
the same for any overloads in descendant items.
The function returns non-NULL only for items that represent
constant literals.
Sergei Golubchik
MDEV-38604 Assertion `thd->utime_after_query >= thd->utime_after_lock' failed in query_response_time_audit_notify on 2nd execution of SP with query cache

Even when a PS is served from the query cache, thd->utime_after_query
must be updated.

Also, backport the assert from 11.8.
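
A minimal sketch of the invariant being restored (placement assumed):

    // Even on a query-cache hit, refresh the end-of-query timestamp:
    thd->utime_after_query= microsecond_interval_timer();
    DBUG_ASSERT(thd->utime_after_query >= thd->utime_after_lock);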
Brandon Nesterenko
MDEV-32570 (test): Add tests

This commit adds the following MTR tests for MDEV-32570:
* rpl_fragment_row_event_main.test
* rpl_fragment_row_event_mysqlbinlog.test
* rpl_fragment_row_event_span_relay_logs.test
* rpl_fragment_row_event_err.test

And also fixes existing tests appropriately.
Vladislav Vaintroub
MDEV-37527 Client plugins are underlinked

Provide test case
Sergei Golubchik
fix sporadic failures of main.user_var with --view
Oleksandr Byelkin
10.6 adjustments
Raghunandan Bhat
MDEV-38487: Prevent aggregate function cloning when pushing HAVING into WHERE

Problem:
  When building a pushable condition that can be pushed from HAVING into
  WHERE, the server tries to clone aggregate functions. This is not
  necessary because aggregate functions cannot be pushed into WHERE
  anyway.

Fix:
  This fix introduces a check within `Item::build_pushable_cond` to skip
  cloning aggregate functions.

Also fixes the assert failure in MDEV-38492 by adding a missing copy
method for `Item_aggregate_ref`.
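
A hypothetical sketch of the added check in Item::build_pushable_cond():

    if (with_sum_func())   // aggregate functions never move into WHERE,
      return nullptr;      // so don't bother cloning them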
Marko Mäkelä
MDEV-23298 fixup: have_perfschema.inc
Christian Hesse
MDEV-35904/MDEV-19210: use environment file in systemd units for _WSREP_START_POSITION

MDEV-35904 - backport MDEV-19210 to 10.11, as references to unset
environment variables become warnings.

We used to run `systemctl set-environment` to pass
_WSREP_START_POSITION. This is bad because:

* it clutters systemd's environment (yes, pid 1)
* it requires root privileges
* options (like LimitNOFILE=) are not applied

Let's just create an environment file in ExecStartPre=, that is read
before ExecStart= kicks in. This keeps _WSREP_START_POSITION around
for the main process without any downsides.
Oleksandr Byelkin
MDEV-35288 Assertion `!is_cond()' failed in virtual longlong Item_bool_func::val_int()

Boolean functions now use val_bool() to produce the string result (val_str())
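
A hypothetical sketch of the change (the exact code differs):

    String *Item_bool_func::val_str(String *str)
    {
      // Use val_bool() instead of val_int(), whose !is_cond()
      // assertion fires for condition items:
      bool v= val_bool();
      if (null_value)
        return NULL;
      str->set((longlong) v, &my_charset_numeric);
      return str;
    }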
Rophy Tsai
MDEV-38431: fix database pointer calculation for long passwords

When a client connects with CLIENT_PLUGIN_AUTH_LENENC_CLIENT_DATA
capability and a password >= 251 bytes, the server incorrectly
calculates the database name pointer.

For passwords >= 251 bytes, LENENC uses a 3-byte prefix (0xFC + 2 bytes),
but the old code assumed a 1-byte prefix. Fix by using the passwd pointer
which has already been advanced past the length prefix by
safe_net_field_length_ll().

Also fix db pointer calculation for old protocol (!CLIENT_SECURE_CONNECTION)
where the password is null-terminated and needs +1 to skip the terminator.
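
For reference, the length-encoded integer prefix sizes in the protocol,
and a hypothetical sketch of the fix (the name `remaining` is assumed):

    // lenenc prefix:  value < 251     -> 1 byte
    //                 251 .. 2^16-1   -> 0xFC + 2 bytes (3 total)
    //                 2^16 .. 2^24-1  -> 0xFD + 3 bytes (4 total)
    //                 larger          -> 0xFE + 8 bytes (9 total)
    ulonglong passwd_len= safe_net_field_length_ll(&passwd, remaining);
    // `passwd` now points past the whole prefix, whatever its size,
    // so the database name starts right after the password bytes:
    char *db= passwd + passwd_len;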
Oleksandr Byelkin
new CC
Mohammad Tafzeel Shams
MDEV-38140: InnoDB index corruption after UPDATE affecting virtual
columns

Issue:
- Purge thread attempts to purge a secondary index record that is not
  delete-marked.

Root Cause:
- When a secondary index includes a virtual column whose v_pos is
  greater than the number of fields in the clustered index record, the
  virtual column is incorrectly skipped while reading from the undo
  record.
- This leads the purge logic to incorrectly assume it is safe to purge
  the secondary index record.
- The code also confuses the nth virtual column with the nth stored
  column when writing ordering columns at the end of the undo record.

Fix:
- In trx_undo_update_rec_get_update(): Skip a virtual column only
  when v_pos == FIL_NULL, not when v_pos is greater than the number
  of fields.
- In trx_undo_page_report_modify(): Ensure ordering columns are
  written based on the correct stored-column positions, without
  confusing them with virtual-column positions.
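
A hypothetical sketch of the corrected skip condition in
trx_undo_update_rec_get_update():

    if (v_pos == FIL_NULL)  // the column is genuinely absent
      continue;             // from the undo record: skip it
    // Previously the column was also skipped when v_pos exceeded the
    // number of clustered index fields, dropping valid virtual columns.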
Brandon Nesterenko
MDEV-32570 (client): Fragment ROW replication events larger than slave_max_allowed_packet

This patch extends mysqlbinlog with logic to support the output and
replay of the new Partial_rows_log_events added in the previous
commit. Generally speaking, as the assembly and execution of the
Rows_log_event happens in Partial_rows_log_event::do_apply_event(),
there isn't much logic required other than outputting
Partial_rows_log_event in base64, with two exceptions.

In the original mysqlbinlog code, all row events fit within a single
BINLOG base64 statement; such that the Table_map_log_event sets up
the tables to use, the Row Events open the tables, and then after
the BINLOG statement is run, the tables are closed and the rgi is
destroyed. No matter how many Row Events within a transaction there
are, they are all put into the same BINLOG base64 statement.
However, for the new Partial_rows_log_event, each fragment is split
into its own BINLOG base64 statement (to respect the server’s
configured max_packet_size). The existing logic would close the
tables and destroy the replay context after each BINLOG statement
(i.e. each fragment). This means that 1) Partial_rows_log_events
would be unable to assemble Rows_log_events because the rgi is
destroyed between events, and 2) multiple re-assembled
Rows_log_events could not be executed because the context set-up by
the Table_map_log_event is cleared after the first Rows_log_event
executes.

To fix the first problem, where we couldn’t re-assemble
Rows_log_events because the rgi would disappear between
Partial_rows_log_events, the server will not destroy the rgi when
ingesting BINLOG statements containing Partial_rows_log_events that
have not yet assembled their Rows_log_event.

To fix the second problem, where the context set-up by the
Table_map_log_event is cleared after the first assembled
Rows_log_event executes, mysqlbinlog caches the Table_map_log_event
to re-write for each fragmented Rows_log_event at the start of the
last fragment’s BINLOG statement. In effect, this will re-execute
the Table_map_log_event for each assembled Rows_log_event.
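
A hypothetical sketch of the client-side handling (all names are
assumptions):

    if (ev->get_type_code() == PARTIAL_ROWS_EVENT)
    {
      Partial_rows_log_event *pev= (Partial_rows_log_event *) ev;
      if (pev->seq_no == pev->total_fragments)         // last fragment:
        write_event_as_base64(cached_table_map, out);  // re-emit Table_map
      write_event_as_base64(pev, out);  // one BINLOG stmt per fragment
    }
    else if (ev->get_type_code() == TABLE_MAP_EVENT)
      cached_table_map= ev;  // keep it for each assembled Rows_log_event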

Reviewed-by: Hemant Dangi <[email protected]>
Acked-by: Kristian Nielsen <[email protected]>
Signed-off-by: Brandon Nesterenko <[email protected]>
KhaledR57
MDEV-36107 MDEV-36108 Enhance mysqltest language with expression evaluation and variable substitutions

mysqltest had limited scripting capabilities, requiring complex
workarounds for mathematical calculations and string manipulations
in test cases. This commit solves these limitations by adding a new
`$(...)` syntax that enables direct evaluation of mathematical, logical,
and string expressions within test scripts.

Expression Evaluation (MDEV-36107):
- Recursive descent parser supporting arithmetic, logical, comparison,
  and bitwise operators with proper precedence
- Support for integers (decimal, hex, binary), booleans, strings, and
  NULL values
- Variable substitution within expressions
- Integration with existing mysqltest control flow

String Functions (MDEV-36108):
- Base conversion functions supporting bases 2-62
- String manipulation and processing functions
- Regular expression functions
- Conditional and numeric utility functions

The implementation enhances mysqltest's scripting capabilities while
maintaining full backward compatibility.
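
A minimal sketch of the parsing technique (the real parser covers many
more operators and types):

    #include <cstdio>
    #include <cstdlib>

    // Two precedence levels: '*' '/' bind tighter than '+' '-'.
    struct Parser
    {
      const char *p;
      long expr()    // expr := term (('+'|'-') term)*
      {
        long v= term();
        while (*p == '+' || *p == '-')
          v= (*p++ == '+') ? v + term() : v - term();
        return v;
      }
      long term()    // term := factor (('*'|'/') factor)*
      {
        long v= factor();
        while (*p == '*' || *p == '/')
          v= (*p++ == '*') ? v * factor() : v / factor();
        return v;
      }
      long factor()  // factor := number | '(' expr ')'
      {
        if (*p == '(')
        {
          p++;                 // consume '('
          long v= expr();
          p++;                 // consume ')'
          return v;
        }
        return strtol(p, (char **) &p, 0);  // base 0: decimal or 0x hex
      }
    };

    int main()
    {
      Parser parser{"2*(3+4)"};
      printf("%ld\n", parser.expr());  // prints 14
    }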
gkodinov
MDEV-38642: Missing Null terminator in the definition of mysqldump's --system typelib

There was a missing NULL element terminator for --system's type
library definition.

This was causing a crash in find_type_eol when e.g. an incomplete
value was passed to --system: the function keeps iterating until it
finds a NULL typelib element.

Fixed by appending a NullS to the definition.
Test case added.
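
An illustrative shape of the fix (option values abridged):

    static const char *opt_system_types[]=
      {"all", "users", "plugins", /* ... */ NullS};  // NullS ends the list
    static TYPELIB opt_system_types_lib=
      {array_elements(opt_system_types) - 1, "", opt_system_types, NULL};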
Sergei Golubchik
fix rpm upgrade tests after MDEV-37726

MDEV-37726 moved wsrep-start-position to INSTALL_RUNDATADIR
and made the latter to be created by systemd-tmpfiles.

Now postin scriptlet has to run systemd-tmpfiles explicitly
to make sure INSTALL_RUNDATADIR exists before restarting
the server.

followup for 649216e70d87
Marko Mäkelä
MDEV-38618 Unused variable dict_table_t::fk_max_recusive_level

Ever since mysql/mysql-server@377774689bf6a16af74182753fe950d514c2c6dd
was applied in commit 2e814d4702d71a04388386a9f591d14a35980bfed
the data member dict_table_t::fk_max_recusive_level has never been
read, only initialized to 0. Let us follow the lead of
mysql/mysql-server@b22ac19f104c3b3654601b387c37ee82af180d7a
and remove this useless field.
Tony Chen
Add additional password obfuscation test cases for server audit plugin

- GRANT SELECT ... IDENTIFIED BY
- CHANGE MASTER ... MASTER_PASSWORD
- CREATE SERVER ... PASSWORD
- ALTER SERVER ... PASSWORD

All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license
Tony Chen
Simplify event filtering logic in server_audit plugin

Replace if-else chain with a single bitwise AND check when filtering query
events by type (DDL, DML, DCL, etc.).

This removes the need for the goto statements.

Additionally, we remove the orig_query variable as it served no purpose.

All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license
Daniel Black
MDEV-37726 wsrep-new-cluster and wsrep-start-position in wrong directory with wrong selinux permissions

After moving the systemd service to using environment files
instead of `systemctl set-environment` in 11.6 (MDEV-19210),
they (wsrep-new-cluster and wsrep-start-position) are located
in /var/lib/mysql along with the socket file in
Fedora/RHEL-based distros. This causes them to have incorrect
selinux permissions and therefore not be readable by systemd.

A solution is to generate these files in the run directory instead,
which already has the correct selinux label mysqld_var_run_t
(mysql-selinux-1.0.12). Dissociating these files from the socket
in the CMake configs can also prove useful for other things.

This also corrects some of the duplicated code in the build
scripts, makes INSTALL_RUNDATADIR a proper location, and uses it
for the tmpfiles where the temporary files are created.

Debian's location is /run/mysqld/ matching its INSTALL_UNIX_ADDRDIR,
which is now a temporary location controlled by tmpfiles.
Vladislav Vaintroub
pcre2 10.47
Marko Mäkelä
MDEV-21816 Suboptimal implementation of my_convert()

my_convert(): Correctly identify unaligned access by invoking memcpy(),
which will be translated to a single x86 MOV instruction already by
GCC 4.8.5. Also, process the data 64 bits at a time when possible.

The use of memcpy() prevents GCC from emitting a SIMD instruction that
expects aligned memory (MDEV-37148, MDEV-37786, MDEV-38398) and
allows us to enable the fast path on all ISAs.
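
A sketch of the pattern (not the literal my_convert() code):

    #include <cstdint>
    #include <cstring>

    // Read 8 bytes from a possibly unaligned pointer; GCC compiles the
    // memcpy() to a single MOV on x86 instead of an alignment-assuming
    // SIMD load.
    static inline uint64_t read8(const unsigned char *src)
    {
      uint64_t chunk;
      memcpy(&chunk, src, sizeof chunk);
      return chunk;
    }

    // Fast path: if all 8 bytes are ASCII, they can be copied verbatim.
    static inline bool all_ascii(uint64_t chunk)
    {
      return !(chunk & 0x8080808080808080ULL);
    }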

Reviewed by: Alexander Barkov
Reviewed by: Vladislav Vaintroub
Brandon Nesterenko
MDEV-32570 Prep: Split read_log_event into non-checksum version

Preparation for MDEV-32570. When fragmenting a large row event into
multiple smaller fragment events, each fragment event will have its own
checksum attached, thereby negating the need to also store the checksum
of the overall large event.

The existing code assumes all events will always have checksums, but
this won't be true for the rows events that are re-assembled on the
replicas. This patch prepares for this by splitting the logic which
reads in and creates Log_event objects into two pieces, one which
handles the checksum validation; and the other which reads the raw
event data (without the checksum) and creates the object.

All existing code is unchanged which uses the checksum-assuming version
of the event reader. MDEV-32570 will be the only case which will bypass
the checksum logic, and will directly create its rows log events from
memory without validating checksums (as the checksums will have already
been validated by each individual fragment event).
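
A hypothetical shape of the split (names assumed):

    // Existing callers: verify the checksum, then delegate.
    Log_event *read_log_event(const uchar *buf, size_t event_len)
    {
      if (!checksum_ok(buf, event_len))     // assumed helper
        return NULL;
      return read_log_event_no_checksum(buf,
                                        event_len - BINLOG_CHECKSUM_LEN);
    }
    // MDEV-32570 will call read_log_event_no_checksum() directly:
    // each fragment already validated its own checksum.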
Brandon Nesterenko
MDEV-32570 (server): Fragment ROW replication events larger than slave_max_allowed_packet

This patch solves two problems:
  1. Rows log events cannot be transmitted to the slave if their
    size exceeds slave_max_allowed_packet (max 1GB at the time of
    writing this patch, i.e. MariaDB 12.3)
  2. Rows log events cannot be binlogged if they are larger than
    4GB because the common binlog event header field event_len is
    32-bits.

This patch adds support for fragmenting large Rows_log_events
through a new event type, Partial_rows_log_event. When any given
instantiation of a Rows_log_event (e.g. Write_rows_log_event, etc)
is too large to be sent to a replica (i.e. larger than the value
slave_max_allowed_packet, as configured on a replica), then the rows
event must be fragmented into sub-events (i.e.
Partial_rows_log_events), so the event can be transmitted to the
replica. The replica will then take the content of each of these
Partial_rows_log_events, and join them together into a large
Rows_log_event to be executed as normal.  Partial_rows_log_events
are written to the binary log sequentially, and the replica assembles
the events in the order they are binlogged.

To control the size of each Partial_rows_log_event, a new system
variable is added: binlog_row_event_fragment_threshold. All
Partial_rows_log_events within the same group will have this size,
except for the last, which will take up the remaining length of the
underlying Rows_log_event.

Each Partial_rows_log_event stores its sequence number (seq_no) in the
overall series of fragments, the total number of fragments needed to
re-assemble the Rows_log_event (total_fragments), a uchar for flags for
embedding extra data, and any additional data as specified by the
flags. Currently, only the first event in a grouping will have
additional data: it will set the first bit in the flags
(FL_ORIG_EVENT_SIZE) to indicate it will be storing the total size of
the underlying Rows_log_event.
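
A hypothetical sketch of the fragment's extra fields (field widths
are assumptions):

    struct Partial_rows_fragment
    {
      uint64_t seq_no;           // this fragment's position in the series
      uint64_t total_fragments;  // count needed to rebuild the event
      uchar    flags;            // FL_ORIG_EVENT_SIZE on first fragment
      // if (flags & FL_ORIG_EVENT_SIZE): 8 bytes of original event size
      // ...followed by this fragment's slice of the Rows_log_event data
    };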

The cached Rows_log_event data is fragmented into
Partial_rows_log_events as follows. The primary will still generate a
Rows_log_event to write to the binlog; however, during the actual
writing process, the raw data of the rows event is split into
fragments, each covering some continuous section of the rows data. A
Partial_rows_log_event is created for each continuous section, and the
Partial_rows_log_events are written sequentially in-place of the
too-large Rows_log_event. The original data to be fragmented will
include a header and data header; however, will not include a checksum,
as each Partial_rows_log_event will have a checksum for validation, as
well as a sequence_number and total number of fragments to ensure all
fragments are present.

The re-assembly and execution of the original Rows_log_event on the
replica happens in Partial_rows_log_event::do_apply_event(). The rgi
is extended with a memory buffer that holds all data for the
original Rows_log_event. As each Partial_rows_log_event is
ingested, its Rows_log_event content is appended to this memory
buffer. Once the last fragment has added its content, a new
Rows_log_event is created using that buffer, and executed.

A new error message is added to indicate that the slave has received an
invalid stream of Partial_rows_log_events:
ER_PARTIAL_ROWS_LOG_EVENT_BAD_STREAM.

Note this commit only adds the server logic for fragmenting and
assembling events; the client logic (mysqlbinlog) is in the next
commit.

Alternative designs considered were:
  1. Alternative 1: Change the master-slave communication protocol
    such that the master would send events in chunks of size
    slave_max_allowed_packet. Though this is still a valid idea,
    and would solve the first problem described in this commit
    message, this would still leave the limitation that
    Rows_log_events could not exceed 4GB. Eventually, this change
    should still be addressed (e.g. in MDEV-37853), and for users
    of binlog-in-engine (MDEV-34705) which supports out-of-band
    binlogging of events, this MDEV-32570 work will be superseded.
  2. Alternative 2: Create a generic “Container_log_event” with the
    intention to embed various other types of event data for
    various purposes, with flags that describe the purpose of a
    given container. This seemed overboard, as there is already a
    generic Log_event framework that provides the necessary
    abstractions to fragment/reassemble events without adding in
    extra abstractions.
  3. Alternative 3: Add a flag to Rows_log_event with semantics to
    overwrite/correct the event_len field of the common event
    header to use a 64-bit field stored in the data_header of the
    Rows_log_event; and also do alternative 1, so the master would
    send the large (> 4GB) rows event in chunks. This approach
    would add too much complexity (changing both the binlogging
    and transport layer); as well as introduce inconsistency to
    the event definition (event_len and next_event_position would
    no longer have consistent meanings).

Reviewed-by: Hemant Dangi <[email protected]>
Acked-by: Kristian Nielsen <[email protected]>
Signed-off-by: Brandon Nesterenko <[email protected]>
Daniel Black
MDEV-15502 debian: systemd, with tmpfiles install not required

With PermissionsStartOnly deprecated, remove this from the
systemd service file.

Replace Debian's ExecStartPre "install -d" with a tmpfiles
configuration directive that creates the directory.

Debian's ExecStartPost for the mariadb upgrade uses the "!"
special executable prefix, added in systemd v231, to run with
root privileges.
Monty
MDEV-28136 MariaDB ASAN Unknown Crash in Item_func_to_char::parse_format_string

Fixed by adding checks that we do not go outside of the string area.
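
A sketch of the kind of guard added (names hypothetical):

    while (fmt < fmt_end)                  // stay inside the format string
    {
      size_t token_len= next_token_length(fmt, fmt_end);  // assumed helper
      if (fmt + token_len > fmt_end)       // token would overrun the area
        return true;                       // report the error instead
      fmt+= token_len;
    }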
Sergei Golubchik
MDEV-38604 fix SP execution too