The following snippet shows how to temporarily create an index in order to evaluate its effect on execution plans.
-- source: https://www.mysqltutorial.org/mysql-transaction.aspx/

-- 1. disable automatic commits within this session
SET autocommit = 0;

-- 2. start a new transaction
START TRANSACTION;

-- baseline plan
EXPLAIN SELECT *; -- your query

-- 3. add the index
ALTER TABLE foo ADD INDEX index_name (a, b, c);

-- EXPLAIN again to see improvements
EXPLAIN SELECT *; -- your query

-- 4. reset your changes
-- Caveat: ALTER TABLE causes an implicit commit in MySQL, so ROLLBACK
-- alone will NOT remove the index; drop it explicitly as well.
ROLLBACK;
ALTER TABLE foo DROP INDEX index_name;
MySQL has multiple levels of caching: first, query results themselves can be cached (the query cache, removed in MySQL 8.0); second, recently loaded table data is kept in memory in the buffer pool. For accurate benchmark timings, you want to bypass the query cache and flush tables from memory between runs.
SET profiling = 1; -- enable profiling for the current session
SET GLOBAL innodb_max_dirty_pages_pct = 0; -- global-only variable; flush dirty pages aggressively
SET GLOBAL query_cache_size = 0; -- global-only variable; disables the query cache (MySQL 5.7 and earlier)
SET SESSION query_cache_type = 0;
SELECT SQL_NO_CACHE foo...; -- SQL_NO_CACHE must come immediately after SELECT
FLUSH TABLES; -- reset between test runs
SHOW PROFILES\G -- print profiling information for the session
If all columns in the select list are present in an index (their order within the index does not matter for coverage), the optimizer can retrieve the values from the index itself rather than the actual row, avoiding the overhead of loading the full row into memory. This is known as a covering index, and it helps most on very wide tables.
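A minimal sketch of this (the table, column, and index names are made up for illustration) — when the index covers the whole select list, EXPLAIN reports "Using index" in the Extra column, meaning the row itself is never read:

```sql
-- hypothetical wide table; only a few small columns are queried
CREATE TABLE wide_table (
  a INT,
  b INT,
  c INT,
  blob_payload TEXT -- large column that we never want to load
);

-- the index covers every column in the select list below
ALTER TABLE wide_table ADD INDEX idx_abc (a, b, c);

-- Extra column should show "Using index": values come from the
-- index alone, so blob_payload is never touched
EXPLAIN SELECT b, c FROM wide_table WHERE a = 42;
```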
Replicas can have different indexes than the primary (source), so index experiments can be run there without touching the primary. Ideally, though, you shouldn't need to run this on a production replica at all.
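A sketch of that workflow (table, column, and index names are assumptions). Statements executed directly on a replica are not replicated anywhere, so an index created there stays local to that replica:

```sql
-- connect to the replica, not the primary

-- the index exists only on this replica; replication from the
-- primary continues as normal
ALTER TABLE foo ADD INDEX idx_test (a, b, c);

-- run the candidate query and inspect the plan
EXPLAIN SELECT * FROM foo WHERE a = 1 AND b = 2;

-- clean up when done
ALTER TABLE foo DROP INDEX idx_test;
```

Note that a replica running with read_only=ON will reject DDL from accounts without admin privileges, so this may require elevated access.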