Console View


Alexander Barkov
MDEV-19635 System package SYS.DBMS_SQL

In progress
Rex Johnston
MDEV-35333 Performance Issue on TPC-H Query 18

Query 18 of the TPC-H benchmark looks like this

select
    c_name,
    c_custkey,
    o_orderkey,
    o_orderdate,
    o_totalprice,
    sum(l_quantity)
from
    ORDERS
    join CUSTOMER on c_custkey = o_custkey
    join LINEITEM on o_orderkey = l_orderkey
where
    o_orderkey in
    (
        select
            l_orderkey as ok
        from
            LINEITEM
        group by
            l_orderkey
        having
            sum(l_quantity) > 314
    )
group by
    c_name,
    c_custkey,
    o_orderkey,
    o_orderdate,
    o_totalprice
order by
    o_totalprice desc,
    o_orderdate
limit
    100;

Our derived table is converted into a semi-join, but the optimizer
doesn't know how many rows to expect before materialization, so it
doesn't know where in the join order the semi-join is best placed.

We need to be able to estimate the selectivity of
having sum(l_quantity) > 314 as well as the cardinality of l_orderkey
in the table LINEITEM.

Here we introduce a framework to calculate grouped selectivity, and perhaps
other ignored, non-grouping predicates in the future.

We add selectivity_estimate() to our Item class as the main entry point
for our calculations.  This method is implemented for the comparison
functions Item_func_{gt,ge,lt,le,eq,ne}, where we look for a ref item,
indicating an aggregate function, and a constant item.
If found, our ref item is asked for a likely distribution
of values.  Implemented ref items are Item_sum_{sum,min,max,count},
each returning something we can compare to our constant item.

The class Expected_distribution encapsulates both the collection of the
expected values from our aggregate function, as well as the comparison
operators (currently implemented using field->val_real()).
In order to calculate the probability of any aggregate function
comparison, we first calculate the required group size, then the
probability of this group size using a Poisson distribution.
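As a rough illustration only (the function names, the Poisson model, and the TPC-H-like numbers below are assumptions for this sketch, not the server code), the estimate described above can be computed like this:

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the complement of the CDF."""
    if k <= 0:
        return 1.0
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return max(0.0, 1.0 - cdf)

def group_sum_gt_selectivity(threshold, avg_value, rows, n_distinct):
    """Estimated fraction of groups whose SUM(col) exceeds threshold.
    Required group size: smallest n with n * avg_value > threshold.
    Group sizes are modelled as Poisson with mean rows / n_distinct."""
    need = math.floor(threshold / avg_value) + 1   # required group size
    lam = rows / n_distinct                        # mean rows per group
    return poisson_sf(need, lam)

# TPC-H-like numbers: ~6 lineitems per order, average l_quantity ~25.5
sel = group_sum_gt_selectivity(314, 25.5, 6_000_000, 1_500_000)
```

With about six lineitems per order and an average quantity near 25.5, a group needs roughly 13 rows before its sum can exceed 314, so the Poisson tail yields a very small selectivity.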

We also replace a call to the old get_post_group_estimate() with a newer
estimate_post_group_cardinality() that includes estimates of
item list cardinality and provides an entry point to our new
selectivity estimate above.
Dave Gosselin
MDEV-38934: ICP, Reverse scan: range access will scan whole index for range w/o max endpoint

Test to show the bug.
bsrikanth-mariadb
MDEV-38805: Store optimizer_context into an IS table

Currently, the optimizer context is written as a JSON sub-element of
the Optimizer Trace.

In this task, we separate it out from the optimizer trace and instead
store it in the optimizer_context Information Schema table.

The structure of the context is changed to look like this:
----------------------------------
CREATE TABLE t1 ( ... );
-- in case it is a constant table
REPLACE INTO t1 VALUES (...);

CREATE TABLE t2 ( ... );
...

set @context='{ JSON with all the captured calls }';
set @optimizer_context='context';

... the original query;
----------------------------------

The IS table can be used to read the currently stored context, as well
as to dump it to an SQL file which can later be replayed in a different
environment.

It is done like this:
--------------------------------------
set optimizer_record_context=ON;
set optimizer_trace=1;

-- sample query

select context into outfile '/tmp/captured-context.sql' from information_schema.OPTIMIZER_CONTEXT;
---------------------------------------

All the existing tests are modified to query the OPTIMIZER_CONTEXT IS table.
sjaakola
MDEV-21935 Mariabackup file limits set incorrectly

Fixed a build-time regression on the x86-debian-12 platform.
Marko Mäkelä
MDEV-38947 Reimplement SET GLOBAL innodb_buffer_pool_size

We deprecate and ignore the parameter innodb_buffer_pool_chunk_size
and let the buffer pool size be changed in arbitrary 1-megabyte
increments.

innodb_buffer_pool_size_max: A new read-only startup parameter
that specifies the maximum innodb_buffer_pool_size. On 64-bit
systems other than IBM AIX, the default is 8 TiB and the minimum is
8 MiB. On other systems, the default and minimum are 0, and
the value 0 will be replaced with the initial innodb_buffer_pool_size
rounded up to the allocation unit (2 MiB or 8 MiB).  The maximum value
is 4GiB-2MiB on 32-bit systems and 16EiB-8MiB on 64-bit systems.
This maximum is very likely to be limited further by the operating system.

The status variable Innodb_buffer_pool_resize_status will reflect
the status of shrinking the buffer pool. When no shrinking is in
progress, the string will be empty.

Unlike before, the execution of SET GLOBAL innodb_buffer_pool_size
will block until the requested buffer pool size change has been
implemented, or the execution is interrupted by a KILL statement,
a client disconnect, or server shutdown.  If the
buf_flush_page_cleaner() thread notices that we are running out of
memory, the operation may fail with ER_WRONG_USAGE.

SET GLOBAL innodb_buffer_pool_size will be refused
if the server was started with --large-pages (even if
no HugeTLB pages were successfully allocated). This functionality
is somewhat exercised by the test main.large_pages, which now runs
also on Microsoft Windows.  On Linux, explicit HugeTLB mappings are
apparently excluded from the reported Resident Set Size (RSS), and
apparently unshrinkable between mmap(2) and munmap(2).

The buffer pool will be mapped to a contiguous virtual memory area
that will be aligned and partitioned into extents of 8 MiB on
64-bit systems and 2 MiB on 32-bit systems.

Within an extent, the first few innodb_page_size blocks contain
buf_block_t objects that will cover the page frames in the rest
of the extent.  The number of such frames is precomputed in the
array first_page_in_extent[] for each innodb_page_size.
In this way, there is a trivial mapping between
page frames and block descriptors and we do not need any
lookup tables like buf_pool.zip_hash or buf_pool_t::chunk_t::map.
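As an illustration of the arithmetic (DESC_SIZE below is a made-up stand-in for sizeof(buf_block_t), not the real value), the extent layout can be sketched as:

```python
# Layout arithmetic for one buffer-pool extent, as described above.
EXTENT_SIZE = 8 << 20            # 8 MiB extents on 64-bit systems
PAGE_SIZE = 16 << 10             # innodb_page_size=16k
DESC_SIZE = 424                  # hypothetical sizeof(buf_block_t)

pages_per_extent = EXTENT_SIZE // PAGE_SIZE   # 512 blocks of 16 KiB
desc_per_page = PAGE_SIZE // DESC_SIZE        # descriptors per leading page

# The first few pages hold buf_block_t descriptors for the remaining frames:
# find the smallest number of descriptor pages that covers the rest.
desc_pages = 0
while desc_pages * desc_per_page < pages_per_extent - desc_pages:
    desc_pages += 1
frames_per_extent = pages_per_extent - desc_pages

def frame_descriptor_offset(frame_no):
    """Byte offset of a frame's descriptor within its extent: pure
    arithmetic, so no buf_pool.zip_hash-style lookup table is needed."""
    return frame_no * DESC_SIZE
```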

We will always allocate the same number of block descriptors for
an extent, even if we do not need all the buf_block_t in the last
extent, in case the innodb_buffer_pool_size is not an integer multiple
of the extent size.

The minimum innodb_buffer_pool_size is 256*5/4 pages.  At the default
innodb_page_size=16k this corresponds to 5 MiB.  However, now that the
innodb_buffer_pool_size includes the memory allocated for the block
descriptors, the minimum would be innodb_buffer_pool_size=6m.

my_virtual_mem_reserve(), my_virtual_mem_commit(),
my_virtual_mem_decommit(), my_virtual_mem_release():
New interface mostly by Vladislav Vaintroub, to separately
reserve and release virtual address space, as well as to
commit and decommit memory within it.

The function my_virtual_mem_reserve() is only defined for Microsoft Windows.
Other platforms should invoke my_large_virtual_alloc() instead.

my_large_virtual_alloc(): A new function, similar to my_large_malloc(),
for other platforms than Microsoft Windows.
For regular page size allocations, do not specify MAP_NORESERVE nor
MAP_POPULATE, to preserve compatibility with my_large_malloc().

After my_virtual_mem_decommit(), the virtual memory range will be
inaccessible.

opt_super_large_pages: Declare only on Solaris. Actually, this is
specific to the SPARC implementation of Solaris, but because we
lack access to a Solaris development environment, we will not revise
this for other MMU and ISA.

buf_pool_t::chunk_t::create(): Remove.

buf_pool_t::create(): Initialize all n_blocks of the buf_pool.free list.

buf_pool_t::allocate(): Renamed from buf_LRU_get_free_only().

buf_pool_t::LRU_warned: Changed to Atomic_relaxed<bool>,
only to be modified by the buf_flush_page_cleaner() thread.

buf_pool_t::shrink(): Attempt to shrink the buffer pool.
There are 3 possible outcomes: SHRINK_DONE (success),
SHRINK_IN_PROGRESS (the caller may keep trying),
and SHRINK_ABORT (we seem to be running out of buffer pool).
While traversing buf_pool.LRU, release the contended
buf_pool.mutex once in every 32 iterations in order to
reduce starvation. Use lru_scan_itr for efficient traversal,
similar to buf_LRU_free_from_common_LRU_list().
When relocating a buffer page, invalidate the page identifier
of the original page so that buf_pool_t::page_guess()
will not accidentally match it.

buf_pool_t::shrunk(): Update the reduced size of the buffer pool
in a way that is compatible with buf_pool_t::page_guess(),
and invoke my_virtual_mem_decommit().

buf_pool_t::resize(): Before invoking shrink(), run one batch of
buf_flush_page_cleaner() in order to prevent LRU_warn().
Abort if shrink() recommends it, or no blocks were withdrawn in
the past 15 seconds, or the execution of the statement
SET GLOBAL innodb_buffer_pool_size was interrupted.
After successfully shrinking the buffer pool, announce the success.
The size had already been updated in shrunk().  After failing to
shrink the buffer pool, re-enable the adaptive hash index
if it had been enabled before the resizing.

buf_pool_t::first_to_withdraw: The first block descriptor that is
out of the bounds of the shrunk buffer pool.

buf_pool_t::withdrawn: The list of withdrawn blocks.
If buf_pool_t::resize() is aborted before shrink() completes,
we must be able to resurrect the withdrawn blocks in the free list.

buf_pool_t::contains_zip(): Added a parameter for the
number of least significant pointer bits to disregard,
so that we can find any pointers to within a block
that is supposed to be free.

buf_pool_t::is_shrinking(): Return the total number of blocks that
were withdrawn or are to be withdrawn.

buf_pool_t::to_withdraw(): Return the number of blocks that will need to
be withdrawn.

buf_pool_t::usable_size(): Number of usable pages, considering possible
in-progress attempt at shrinking the buffer pool.

buf_pool_t::page_guess(): Try to buffer-fix a guessed block pointer.
Always check that the pointer is within the current buffer pool size
before dereferencing it.

buf_pool_t::get_info(): Replaces buf_stats_get_pool_info().

innodb_init_param(): Refactored. We must first compute
srv_page_size_shift and then determine the valid bounds of
innodb_buffer_pool_size.

buf_buddy_shrink(): Replaces buf_buddy_realloc().
Part of the work is deferred to buf_buddy_condense_free(),
which is being executed when we are not holding any
buf_pool.page_hash latch.

buf_buddy_condense_free(): Do not relocate blocks.

buf_buddy_free_low(): Do not care about buffer pool shrinking.
This will be handled by buf_buddy_shrink() and
buf_buddy_condense_free().

buf_buddy_alloc_zip(): Assert !buf_pool.contains_zip()
when we are allocating from the binary buddy system.
Previously we were asserting this on multiple recursion levels.

buf_buddy_block_free(), buf_buddy_free_low():
Assert !buf_pool.contains_zip().

buf_buddy_alloc_from(): Remove the redundant parameter j.

buf_flush_LRU_list_batch(): Add the parameter to_withdraw
to keep track of buf_pool.n_blocks_to_withdraw.
Keep evicting as long as the buffer pool is being shrunk,
for at most innodb_lru_scan_depth extra blocks.
Disregard the flush limit for pages that are marked as freed in files.

buf_flush_LRU_to_withdraw(): Update the to_withdraw target during
buf_flush_LRU_list_batch().

buf_pool_t::will_be_withdrawn(): Allow also ptr=nullptr (the condition
will not hold for it).

buf_flush_sync_for_checkpoint(): Wait for pending writes, in order
to guarantee progress even if the scheduler is unfair.

buf_do_LRU_batch(): Skip buf_free_from_unzip_LRU_list_batch()
if we are shrinking the buffer pool. In that case, we want
to minimize the page relocations and just finish as quickly
as possible.

buf_LRU_check_size_of_non_data_objects(): Avoid a crash when the
buffer pool is being shrunk.

trx_purge_attach_undo_recs(): Limit purge_sys.n_pages_handled()
in every iteration, in case the buffer pool is being shrunk
in the middle of a purge batch.

recv_sys_t::wait_for_pool(): Also wait for pending writes, so that
previously written blocks can be evicted and reused.

This ports the following changes from the 10.11 branch:
commit b6923420f326ac030e4f3ef89a2acddb45eccb30
commit 027d815546d45513ec597b490f2fa45b567802ba
commit 58a36773090223c97d814a07d57ab35ebf803cc5
commit 669f719cc21286020c95eec11f0d09b74f96639e (MDEV-36489)
commit a096f12ff75595ce51fedf879b71640576f70e52
commit f1a8b7fe95399ebe2a1c4a370e332d61dbf6891a (MDEV-36646)
commit 8fb09426b98583916ccfd4f8c49741adc115bac3 (MDEV-36759)
commit 56e0be34bc5d1e967ad610a9b8e24c3f5553bdd8 (MDEV-36780)
commit bb48d7bc812baf7cbd71c9e41b29fac6288cec97 (MDEV-36781)
commit 7b4b759f136f25336fdc12a5a705258a5846d224 (MDEV-36868)
commit cedfe8eca49506c6b4d2d6868f1014c72caaab36 (MDEV-37250)
commit 55e0c34f4f00ca70ad8d6f0522efa94bb81f74fb (MDEV-37263)
commit 21bb6a3e348f89c5cf23d4ee688c57f6078c7b02 (MDEV-37447)
commit 072c7dc774e7f31974eaa43ec1cbb3b742a1582e (MDEV-38671)
bsrikanth-mariadb
MDEV-38805: Store optimizer_context into an IS table

Currently, the optimizer context is written as a JSON sub-element of
the Optimizer Trace.

In this task, we separate it out from the optimizer trace and instead
store it in the optimizer_context Information Schema table.

The structure of the context is changed to look like this:
----------------------------------
SET var_name1=val1;

SET var_name2=val2;
.
.
.

CREATE TABLE t1 ( ... );
-- in case it is a constant table
REPLACE INTO t1 VALUES (...);

CREATE TABLE t2 ( ... );
...

set @context='{ JSON with all the captured calls }';
set @optimizer_context='context';

... the original query;
----------------------------------

The IS table can be used to read the currently stored context, as well
as to dump it to an SQL file which can later be replayed in a different
environment.

It is done like this:
--------------------------------------
set optimizer_record_context=ON;
set optimizer_trace=1;

-- sample query

select context into outfile '/tmp/captured-context.sql' from information_schema.OPTIMIZER_CONTEXT;
---------------------------------------

All the existing tests are modified to query the OPTIMIZER_CONTEXT IS table.
Marko Mäkelä
fixup! daaf6768b9414aa6e57cda99ec1899690300738f
Sergei Petrunia
Testcase cleanup
Amodh Dhakal
MDEV-38349 Fix assert thd->abort_on_warning == 0 in mysql_insert

When prepare_for_replace() fails, the code returned early via
DBUG_RETURN(1), bypassing the abort label which resets
thd->abort_on_warning to 0. This left the THD in a dirty state,
triggering the dispatch_command() assertion at the end of command dispatch.

Signed-off-by: Amodh Dhakal <[email protected]>
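A minimal sketch of the bug pattern and its fix (Python stand-ins with heavily simplified logic; only the THD flag handling is modelled):

```python
# The early "return 1" used to bypass the cleanup at the 'abort:' label;
# try/finally stands in for "always run the label's cleanup before returning".
class THD:
    def __init__(self):
        self.abort_on_warning = False

def mysql_insert(thd, prepare_fails):
    thd.abort_on_warning = True        # set for the duration of the statement
    try:
        if prepare_fails:              # prepare_for_replace() failing
            return 1                   # early error return
        return 0
    finally:
        thd.abort_on_warning = False   # cleanup the early return must not skip
```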
Lawrin Novitsky
Fix the build

A header inclusion was missing.
Marko Mäkelä
fixup! 99959b558e7089f6611d0cadf0b58b8a608a79b3
Dave Gosselin
MDEV-38934: ICP, Reverse scan: range access will scan whole index for range
            w/o max endpoint

For some range condition x > N, a descending scan will walk off of the
left edge of the range and scan to the beginning of the index if we don't
set an endpoint on the storage engine.

In this patch, the logic to set such an endpoint is now used in two
places, so it has been factored into a helper function.
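A toy model of the scan (a hypothetical helper, not storage-engine code) shows why the endpoint matters:

```python
# A descending scan over the range "x > N" must stop at N; without that
# endpoint it keeps walking left to the very beginning of the index.
def reverse_range_scan(sorted_keys, min_key):
    """Yield keys satisfying key > min_key, largest first."""
    for key in reversed(sorted_keys):   # index entries are in ascending order
        if key <= min_key:              # the endpoint check the fix installs
            break
        yield key
```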
Vladislav Vaintroub
MDEV-14443 DENY statement

Implements DENY/REVOKE DENY and associated tasks.
Rucha Deodhar
MDEV-34723: NEW and OLD in a trigger as row variables

Implementation:
NEW and OLD represent the entire table row, so each can be thought of
as a list of Item_trigger_field. When the old mode is appropriately set,
we are in a trigger, and NEW or OLD is encountered, create an
Item_trigger_row object with the same constructor as Item_trigger_field;
it will also be used later while creating Item_trigger_field objects.
Populate the m_fields list while fixing fields. Create a corresponding
instruction sp_instr_set_trigger_row which will be used to set the values.
Abhishek Bansal
MDEV-35211: Make VEC_FromText return VECTOR type instead of VARBINARY

VEC_FROMTEXT() was incorrectly inheriting its type handler from
Item_str_func, causing it to be identified as VARBINARY. This led
to incorrect column types (VARBINARY instead of VECTOR) when using
CREATE TABLE ... AS SELECT.

This change:
- Overrides type_handler() in Item_func_vec_fromtext to return
  type_handler_vector.
- Overrides create_field_for_create_select() to ensure the field is
  created correctly using the vector type handler during table
  creation.
- Adds a regression test in vector_funcs.test to verify that
  CREATE TABLE ... AS SELECT now correctly creates a column of
  type VECTOR.
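A conceptual Python stand-in for the override (the class and method names mirror the C++ code; everything else is simplified):

```python
# The subclass overrides the type handler inherited from Item_str_func so
# that CREATE TABLE ... AS SELECT sees VECTOR rather than VARBINARY.
class Item_str_func:
    def type_handler(self):
        return "VARBINARY"      # the inherited, incorrect answer

class Item_func_vec_fromtext(Item_str_func):
    def type_handler(self):
        return "VECTOR"         # the override added by this change
```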
forkfun
MDEV-26112 STR_TO_DATE should work with lc_time_names
Raghunandan Bhat
MDEV-38864: Use mmap for MEMORY engine allocations

MEMORY engine blocks used to store data and indexes are allocated using
malloc/my_malloc. This created internal fragmentation within the system
allocator.

Switch to memory mapping for MEMORY engine's allocations.
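The idea can be sketched with Python's mmap module (the block size here is illustrative, not the engine's actual allocation unit):

```python
import mmap

# Back a MEMORY-engine-style block with an anonymous mapping instead of
# malloc, so releasing the block returns whole pages to the OS rather than
# leaving holes inside the system allocator's heap.
BLOCK_SIZE = 1 << 20                  # one hypothetical 1 MiB block

block = mmap.mmap(-1, BLOCK_SIZE)     # anonymous, page-aligned mapping
block[:8] = b"row-data"               # store row/index bytes in the block
data = bytes(block[:8])
block.close()                         # munmap(): memory goes back to the OS
```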
Marko Mäkelä
MDEV-38947 Reimplement SET GLOBAL innodb_buffer_pool_size

We deprecate and ignore the parameter innodb_buffer_pool_chunk_size
and let the buffer pool size to be changed in arbitrary 1-megabyte
increments.

innodb_buffer_pool_size_max: A new read-only startup parameter
that specifies the maximum innodb_buffer_pool_size. On 64-bit
systems other than IBM AIX the default is 8 TiB and the minimum
8 MiB. On other systems, the default and minimum are 0, and
the value 0 will be replaced with the initial innodb_buffer_pool_size
rounded up to the allocation unit (2 MiB or 8 MiB).  The maximum value
is 4GiB-2MiB on 32-bit systems and 16EiB-8MiB on 64-bit systems.
This maximum is very likely to be limited further by the operating system.

The status variable Innodb_buffer_pool_resize_status will reflect
the status of shrinking the buffer pool. When no shrinking is in
progress, the string will be empty.

Unlike before, the execution of SET GLOBAL innodb_buffer_pool_size
will block until the requested buffer pool size change has been
implemented, or the execution is interrupted by a KILL statement
a client disconnect, or server shutdown.  If the
buf_flush_page_cleaner() thread notices that we are running out of
memory, the operation may fail with ER_WRONG_USAGE.

SET GLOBAL innodb_buffer_pool_size will be refused
if the server was started with --large-pages (even if
no HugeTLB pages were successfully allocated). This functionality
is somewhat exercised by the test main.large_pages, which now runs
also on Microsoft Windows.  On Linux, explicit HugeTLB mappings are
apparently excluded from the reported Redident Set Size (RSS), and
apparently unshrinkable between mmap(2) and munmap(2).

The buffer pool will be mapped to a contiguous virtual memory area
that will be aligned and partitioned into extents of 8 MiB on
64-bit systems and 2 MiB on 32-bit systems.

Within an extent, the first few innodb_page_size blocks contain
buf_block_t objects that will cover the page frames in the rest
of the extent.  The number of such frames is precomputed in the
array first_page_in_extent[] for each innodb_page_size.
In this way, there is a trivial mapping between
page frames and block descriptors and we do not need any
lookup tables like buf_pool.zip_hash or buf_pool_t::chunk_t::map.

We will always allocate the same number of block descriptors for
an extent, even if we do not need all the buf_block_t in the last
extent in case the innodb_buffer_pool_size is not an integer multiple
of the of extents size.

The minimum innodb_buffer_pool_size is 256*5/4 pages.  At the default
innodb_page_size=16k this corresponds to 5 MiB.  However, now that the
innodb_buffer_pool_size includes the memory allocated for the block
descriptors, the minimum would be innodb_buffer_pool_size=6m.

my_large_virtual_alloc(): A new function, similar to my_large_malloc().

my_virtual_mem_reserve(), my_virtual_mem_commit(),
my_virtual_mem_decommit(), my_virtual_mem_release():
New interface mostly by Vladislav Vaintroub, to separately
reserve and release virtual address space, as well as to
commit and decommit memory within it.

The function my_virtual_mem_reserve() is only defined for Microsoft Windows.
Other platforms should invoke my_large_virtual_alloc() instead.

my_large_virtual_alloc(): Define only outside Microsoft Windows.
For regular page size allocations, do not specify MAP_NORESERVE nor
MAP_POPULATE, to preserve compatibility with my_large_malloc().

After my_virtual_mem_decommit(), the virtual memory range will be
inaccessible.

opt_super_large_pages: Declare only on Solaris. Actually, this is
specific to the SPARC implementation of Solaris, but because we
lack access to a Solaris development environment, we will not revise
this for other MMU and ISA.

buf_pool_t::chunk_t::create(): Remove.

buf_pool_t::create(): Initialize all n_blocks of the buf_pool.free list.

buf_pool_t::allocate(): Renamed from buf_LRU_get_free_only().

buf_pool_t::LRU_warned: Changed to Atomic_relaxed<bool>,
only to be modified by the buf_flush_page_cleaner() thread.

buf_pool_t::shrink(): Attempt to shrink the buffer pool.
There are 3 possible outcomes: SHRINK_DONE (success),
SHRINK_IN_PROGRESS (the caller may keep trying),
and SHRINK_ABORT (we seem to be running out of buffer pool).
While traversing buf_pool.LRU, release the contended
buf_pool.mutex once in every 32 iterations in order to
reduce starvation. Use lru_scan_itr for efficient traversal,
similar to buf_LRU_free_from_common_LRU_list().
When relocating a buffer page, invalidate the page identifier
of the original page so that buf_pool_t::page_guess()
will not accidentally match it.

buf_pool_t::shrunk(): Update the reduced size of the buffer pool
in a way that is compatible with buf_pool_t::page_guess(),
and invoke my_virtual_mem_decommit().

buf_pool_t::resize(): Before invoking shrink(), run one batch of
buf_flush_page_cleaner() in order to prevent LRU_warn().
Abort if shrink() recommends it, or no blocks were withdrawn in
the past 15 seconds, or the execution of the statement
SET GLOBAL innodb_buffer_pool_size was interrupted.
After successfully shrinking the buffer pool, announce the success.
The size had already been updated in shrunk().  After failing to
shrink the buffer pool, re-enable the adaptive hash index
if it had been enabled before the resizing.

buf_pool_t::first_to_withdraw: The first block descriptor that is
out of the bounds of the shrunk buffer pool.

buf_pool_t::withdrawn: The list of withdrawn blocks.
If buf_pool_t::resize() is aborted before shrink() completes,
we must be able to resurrect the withdrawn blocks in the free list.

buf_pool_t::contains_zip(): Added a parameter for the
number of least significant pointer bits to disregard,
so that we can find any pointers to within a block
that is supposed to be free.

buf_pool_t::is_shrinking(): Return the total number or blocks that
were withdrawn or are to be withdrawn.

buf_pool_t::to_withdraw(): Return the number of blocks that will need to
be withdrawn.

buf_pool_t::usable_size(): Number of usable pages, considering possible
in-progress attempt at shrinking the buffer pool.

buf_pool_t::page_guess(): Try to buffer-fix a guessed block pointer.
Always check that the pointer is within the current buffer pool size
before dereferencing it.

buf_pool_t::get_info(): Replaces buf_stats_get_pool_info().

innodb_init_param(): Refactored. We must first compute
srv_page_size_shift and then determine the valid bounds of
innodb_buffer_pool_size.

buf_buddy_shrink(): Replaces buf_buddy_realloc().
Part of the work is deferred to buf_buddy_condense_free(),
which is being executed when we are not holding any
buf_pool.page_hash latch.

buf_buddy_condense_free(): Do not relocate blocks.

buf_buddy_free_low(): Do not care about buffer pool shrinking.
This will be handled by buf_buddy_shrink() and
buf_buddy_condense_free().

buf_buddy_alloc_zip(): Assert !buf_pool.contains_zip()
when we are allocating from the binary buddy system.
Previously we were asserting this on multiple recursion levels.

buf_buddy_block_free(), buf_buddy_free_low():
Assert !buf_pool.contains_zip().

buf_buddy_alloc_from(): Remove the redundant parameter j.

buf_flush_LRU_list_batch(): Add the parameter to_withdraw
to keep track of buf_pool.n_blocks_to_withdraw.
Keep evicting as long as the buffer pool is being shrunk,
for at most innodb_lru_scan_depth extra blocks.
Disregard the flush limit for pages that are marked as freed in files.

buf_flush_LRU_to_withdraw(): Update the to_withdraw target during
buf_flush_LRU_list_batch().

buf_pool_t::will_be_withdrawn(): Also allow ptr=nullptr (the condition
will not hold for it).

buf_flush_sync_for_checkpoint(): Wait for pending writes, in order
to guarantee progress even if the scheduler is unfair.

buf_do_LRU_batch(): Skip buf_free_from_unzip_LRU_list_batch()
if we are shrinking the buffer pool. In that case, we want
to minimize the page relocations and just finish as quickly
as possible.

buf_LRU_check_size_of_non_data_objects(): Avoid a crash when the
buffer pool is being shrunk.

trx_purge_attach_undo_recs(): Limit purge_sys.n_pages_handled()
in every iteration, in case the buffer pool is being shrunk
in the middle of a purge batch.

recv_sys_t::wait_for_pool(): Also wait for pending writes, so that
previously written blocks can be evicted and reused.

This ports the following changes from the 10.11 branch:
commit b6923420f326ac030e4f3ef89a2acddb45eccb30
commit 027d815546d45513ec597b490f2fa45b567802ba
commit 58a36773090223c97d814a07d57ab35ebf803cc5
commit 669f719cc21286020c95eec11f0d09b74f96639e (MDEV-36489)
commit a096f12ff75595ce51fedf879b71640576f70e52
commit f1a8b7fe95399ebe2a1c4a370e332d61dbf6891a (MDEV-36646)
commit 8fb09426b98583916ccfd4f8c49741adc115bac3 (MDEV-36759)
commit 56e0be34bc5d1e967ad610a9b8e24c3f5553bdd8 (MDEV-36780)
commit bb48d7bc812baf7cbd71c9e41b29fac6288cec97 (MDEV-36781)
commit 7b4b759f136f25336fdc12a5a705258a5846d224 (MDEV-36868)
commit cedfe8eca49506c6b4d2d6868f1014c72caaab36 (MDEV-37250)
commit 55e0c34f4f00ca70ad8d6f0522efa94bb81f74fb (MDEV-37263)
commit 21bb6a3e348f89c5cf23d4ee688c57f6078c7b02 (MDEV-37447)
commit 072c7dc774e7f31974eaa43ec1cbb3b742a1582e (MDEV-38671)
Sergei Golubchik
MDEV-23507 Wrong duplicate key value printed in ER_DUP_ENTRY

repair_by_sort() does not use table->record[0]
but print_keydup_error() expects to see the conflicting row there.
Dave Gosselin
MDEV-38921: Wrong result for range w/o min endpoint, ORDER BY DESC and ICP

Test to show the bug.
Raghunandan Bhat
MDEV-38864: Use mmap for MEMORY engine allocations

MEMORY engine blocks used to store data and indexes are allocated using
malloc/my_malloc. This created internal fragmentation within the system
allocator.

Switch to memory mapping for MEMORY engine's allocations.
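The switch can be sketched on POSIX as below; the function names are illustrative, not the MEMORY engine's actual allocator. Allocating each block with an anonymous mmap means munmap() returns the pages straight to the kernel instead of leaving holes in the malloc heap.

```cpp
#include <sys/mman.h>
#include <cstddef>

// Sketch (illustrative names): anonymous-mmap allocation for engine
// blocks, so freeing a block releases whole pages back to the OS.
static void *block_alloc(size_t bytes)
{
  void *p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return p == MAP_FAILED ? nullptr : p;
}

static void block_free(void *p, size_t bytes)
{
  if (p)
    munmap(p, bytes);  // the entire mapping goes back to the kernel
}
```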
Thirunarayanan Balathandayuthapani
Merge branch 10.6 into 10.11
Vladislav Vaintroub
MDEV-14443 DENY statement

Implements DENY/REVOKE DENY and associated tasks.
Dave Gosselin
MDEV-38921: Wrong result for range w/o min endpoint, ORDER BY DESC and ICP

In QUICK_SELECT_DESC::get_next(), when starting to scan a range with
no start endpoint, like "key < 10", clear the end_range on the storage engine.
We could have scanned another range before, like "key BETWEEN 20 AND 30",
after which the engine's end_range still points to that range's left
endpoint, the "20".

This is especially necessary when multiple ranges are considered and a
later range has no minimum endpoint (as in the attached test case), so
an earlier range's endpoint must be cleared.
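The effect of the stale bound can be modeled with a toy descending scan; this is not the handler API, just an illustration. With a leftover end_range the scan stops too early; clearing it restores the full result.

```cpp
#include <optional>
#include <vector>

// Toy model: a descending scan over an ascending-sorted key list for
// "key < upper" stops once it passes end_range, if one is set. A stale
// end_range from a previous range wrongly cuts the scan short.
static std::vector<int> scan_desc(const std::vector<int> &keys, int upper,
                                  std::optional<int> end_range)
{
  std::vector<int> out;
  for (auto it = keys.rbegin(); it != keys.rend(); ++it)
  {
    if (*it >= upper)
      continue;                        // above the range "key < upper"
    if (end_range && *it < *end_range)
      break;                           // stop below the (stale) bound
    out.push_back(*it);
  }
  return out;
}
```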
Rex Johnston
MDEV-35333 Performance Issue on TPC-H Query 18

A patch illustrating how estimating the selectivity of a
grouping derived table with a HAVING clause at 0.5 can have
a positive effect on the execution time of TPC-H Query 18.

We introduce selectivity_estimate() into the Item hierarchy.
This is called from estimate_post_group_cardinality().
estimate_post_group_cardinality() is substituted for
get_post_group_estimate() in Item_in_subselect::optimize()
for grouping subqueries.
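The estimate itself reduces to scaling the group count; the signature and the 0.5 default below are illustrative, not MariaDB's actual interface. The output cardinality of a grouping subquery is its number of groups multiplied by the assumed selectivity of the HAVING clause.

```cpp
// Sketch only (hypothetical names): with no distribution information,
// the post-grouping cardinality is the number of groups scaled by a
// default HAVING-clause selectivity of 0.5.
static double estimate_post_group_cardinality(double n_groups,
                                              double having_selectivity = 0.5)
{
  return n_groups * having_selectivity;
}
```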
bsrikanth-mariadb
MDEV-38805: Store optimizer_context into an IS table

Currently, optimizer context is written as JSON sub-element in the
Optimizer Trace.

In this task, we separate it out from the optimizer trace, and instead
store it in the optimizer_context Information Schema table.

The structure of the context is changed to look like below:
----------------------------------
SET var_name1=val1;

SET var_name2=val2;
.
.
.

CREATE TABLE t1 ( ... );
-- in case it is a constant table
REPLACE INTO t1 VALUES (...);

CREATE TABLE t2 ( ... );
...

set @context='{ JSON with all the captured calls }';
set @optimizer_context='context';

... the original query;
----------------------------------

The IS can be used to read the current stored context, as well as to
dump it to a sql file which can later be replayed in a different
environment.

It is done like below:
--------------------------------------
set optimizer_record_context=ON;
set optimizer_trace=1;

-- sample query

select into outfile '/tmp/captured-context.sql' context from information_schema.OPTIMIZER_CONTEXT;
---------------------------------------

All the existing tests are modified to query the OPTIMIZER_CONTEXT IS table.