MySQL 5.1 Plugin Development

SQL extension by Storage Engine plugins

This is another new feature that is at the moment only available in MariaDB 5.2. In various storage engines, tables, fields, and indexes often have properties that a user may want to tune, but has no way of doing so. Only a fixed set of attributes is accessible from SQL, such as MAX_ROWS, AVG_ROW_LENGTH, INSERT_METHOD, CONNECTION, KEY_BLOCK_SIZE, ROW_FORMAT, PACK_KEYS, and so on. Lucky engine authors, whose engines lived in the MySQL source tree, could modify the parser to add support for new attributes, such as the InnoDB- or MyISAM-specific values of ROW_FORMAT, the MERGE-specific parameter INSERT_METHOD, or the Federated-only attribute CONNECTION. Other engines had to use the COMMENT field, as in the following:

) ENGINE=unlucky COMMENT='count=1200:mode="wrap"'

This approach has limited functionality, prevents the user from using the table comment for its original purpose, forces every engine to implement its own parsing of the comment string, and simply looks unprofessional from the end user's point of view.

The new extension of the Storage Engine API allows engines to provide a list of attributes that will be supported in the CREATE TABLE and ALTER TABLE statements. For example:

CREATE TABLE t (
name VARCHAR(80) APPROVED=yes,
cv BLOB AVG_SIZE=2048,
UNIQUE (name) BUCKET=adaptive
) ENGINE=hypothetical TAGS="data,name,CV" TRAIL=ON
COMPRESSION=gzip LEVEL=5;

This statement shows that tables in some hypothetical engine may have four additional attributes: TAGS, which takes a string; TRAIL, which is a Boolean attribute and can be on or off; COMPRESSION, which takes not an arbitrary string but a value from a predefined set, similar to the ENUM column type in SQL; and, lastly, LEVEL, which is a numeric attribute. We also see two field attributes, APPROVED and AVG_SIZE, and one index attribute, BUCKET.

This hypothetical engine only needs to declare these attributes; parsing them and saving them in the .frm file is done by the server. In this example, there are four table attributes: one string, one number, one enumeration, and one Boolean. First, the engine declares a structure to store them:

struct ha_table_option_struct
{
  char *tags;
  ulonglong comp_level;
  uint compr;
  bool is_trailed;
};

This is the structure that stores the values of the attributes. The only detail worth noting is that it uses an unsigned int for the enumeration, not the C enum type. The size, in bytes, of an enum can depend on the compiler and the compilation options, and this structure should only use stable types that have the same size in the server and in the loadable engine.

Now, having the structure, we can map it to the SQL attribute names:

ha_create_table_option table_option_list[]=
{
  HA_TOPTION_STRING("tags", tags),
  HA_TOPTION_BOOL("trail", is_trailed, false),
  HA_TOPTION_ENUM("compression", compr, "none,gzip,bzip2", 0),
  HA_TOPTION_NUMBER("level", comp_level, 4, 0, 9, 1),
  HA_TOPTION_END
};

This is all. The array declares four table options (HA_TOPTION_*):

  • TAGS: accepts a string as a value and stores it in the tags member of the structure

  • TRAIL: accepts a Boolean value (that is, one of ON, OFF, YES, NO, 1, 0) with the default being false, and stores it in the is_trailed member of the structure

  • COMPRESSION: accepts one of NONE, GZIP, BZIP2 with the default being NONE, and stores the ordinal number of the value in the compr member of the structure

  • LEVEL: accepts a number from 0 to 9 with the step 1 and the default 4, which is stored in the comp_level member of the ha_table_option_struct structure.

Field and index attributes are declared similarly using HA_FOPTION_* and HA_IOPTION_* macros.
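For the hypothetical engine above, the matching declarations could look as follows. This is a sketch: the structure and member names and the BUCKET value list are invented for illustration, while the macro arguments follow the same pattern as the HA_TOPTION_* ones. The arrays are hooked up via the handlerton's field_options and index_options members, analogous to table_options below.

```cpp
struct ha_field_option_struct
{
  bool approved;
  ulonglong avg_size;
};

struct ha_index_option_struct
{
  uint bucket;
};

ha_create_table_option field_option_list[]=
{
  HA_FOPTION_BOOL("approved", approved, false),
  HA_FOPTION_NUMBER("avg_size", avg_size, 1024, 0, 1024*1024, 1),
  HA_FOPTION_END
};

ha_create_table_option index_option_list[]=
{
  HA_IOPTION_ENUM("bucket", bucket, "static,adaptive", 0),
  HA_IOPTION_END
};
```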

Putting it to use

We could have used this feature in the book, for example with our HTML engine:

create table test (
a int,
b timestamp,
c varchar(50)
) engine=html style='border:1; color:#001177';
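Supporting that would only require the HTML engine to declare a single string attribute; a sketch, reusing the macros introduced above:

```cpp
struct ha_table_option_struct
{
  char *style;
};

ha_create_table_option table_option_list[]=
{
  HA_TOPTION_STRING("style", style),
  HA_TOPTION_END
};
```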

We could also use it with the TOCAB engine (with the option names and values as documented in the Tokyo Cabinet manual):

create table t (
a int primary key,
b int,
c int,
d int,
e blob,
key bcd_key (b,c,d)
) engine=tocab lmemb=64 nmemb=128 bnum=32767 apow=6 fpow=8;

Let's try to implement the second example. To add support for table attributes to our TOCAB engine, we need to start with the Tokyo Cabinet manual. The function tcbdbtune() takes a closed database handle and six tuning parameters:

  • lmemb: the number of elements in a leaf page; the default is 128.

  • nmemb: the number of elements in a node page; the default is 256.

  • bnum: the number of elements in the bucket array; the default is 16381.

  • apow: the record alignment. Records are aligned on 2^apow-byte boundaries. The default is 8, which means alignment at the 256-byte boundary.

  • fpow: the maximum number of elements in the free block pool. Again, the actual value is 2^fpow. The default is 10, that is, 1024 elements.

  • opts: a bitwise OR of the flags BDBTLARGE (the file can be larger than 2GB) and BDBTDEFLATE, BDBTBZIP, BDBTTCBS (which compression algorithm to use).
To map them to SQL, we can create five numeric attributes, one Boolean—for BDBTLARGE—and one enumeration:

struct ha_table_option_struct
{
  ulonglong lmemb, nmemb, bnum, apow, fpow;
  bool large;
  uint compression;
};

ha_create_table_option table_option_list[]=
{
  HA_TOPTION_NUMBER("lmemb", lmemb, 128, 1, 65535, 1),
  HA_TOPTION_NUMBER("nmemb", nmemb, 256, 1, 65535, 1),
  HA_TOPTION_NUMBER("bnum", bnum, 16381, 1, 65535, 1),
  HA_TOPTION_NUMBER("apow", apow, 8, 0, 256, 1),
  HA_TOPTION_NUMBER("fpow", fpow, 10, 0, 256, 1),
  HA_TOPTION_BOOL("large", large, false),
  HA_TOPTION_ENUM("compression", compression, "none,deflate,bzip2,tcbs", 0),
  HA_TOPTION_END
};

Now, we need to tell the server about these attributes by setting the handlerton appropriately:

static int tocab_init(void *p)
{
  handlerton *tocab_hton= (handlerton *)p;
  tocab_hton->create= tocab_create_handler;
  tocab_hton->table_options= table_option_list;
  return 0;
}

It is the same tocab_init() function as before, with one new, highlighted line. We have told the server about new table attributes; the server will handle the rest. The only thing that the server cannot do is to use them. Indeed, the job of calling tcbdbtune() is ours. But the server makes it easy, by giving us a ha_table_option_struct structure filled with values:

static TCBDB *open_tcdb(const char *name, TABLE *table, int *error)
{
  char fname[FN_REFLEN+10];
  strcpy(fname, name);
  strcat(fname, ".tocab");
  *error= 0;
  TCBDB *dbh= tcbdbnew();
  if (!dbh)
  {
    *error= HA_ERR_OUT_OF_MEM;
    return 0;
  }
So far, it is the same open_tcdb() function as in the last chapter. Now, we can add a few more lines to it:

ha_table_option_struct *opts= (ha_table_option_struct*)table->s->option_struct;

We take the structure, table->s->option_struct, and cast it from the generic void* to a pointer to our table option structure. Then we call tcbdbtune() with all of the tuning parameters:

  if (tcbdbsetmutex(dbh) &&
      tcbdbtune(dbh, opts->lmemb, opts->nmemb, opts->bnum,
                opts->apow, opts->fpow,
                (opts->large ? BDBTLARGE : 0) |
                (opts->compression == 1 ? BDBTDEFLATE : 0) |
                (opts->compression == 2 ? BDBTBZIP : 0) |
                (opts->compression == 3 ? BDBTTCBS : 0)) &&
      tcbdbsetcmpfunc(dbh, tocab_compare, table->s) &&
      tcbdbopen(dbh, fname, BDBOWRITER|BDBOCREAT))
    return dbh;
  *error= tc_error(dbh);
  return 0;
}

As usual, the added lines are highlighted. Done. The server ensures that all members of the structure get the values—as specified by the user or defaults—and that these values are valid, within the ranges that we specified. The server stores the values in the .frm file and gives them to us both when a table is created and every time it is opened—which is important, as open_tcdb() is called both from ha_tocab::create() and ha_tocab::open(), and both times it needs to know the values of the tuning parameters.

Test drive

We can create a table and see how options are recognized by the server:

mysql> create table t ( a int primary key, b int, c int, d int, e blob, key bcd_key (b,c,d)) engine=tocab lmemb=64 nmemb=128 bnum=32767 apow=6 fpow=8 compression=tcbs large=yes;
Query OK, 0 rows affected (0.01 sec)

Looking in the debugger, you can verify that tcbdbtune() is indeed called with these values. The server takes care of showing them too:

mysql> set sql_quote_show_create=0;
Query OK, 0 rows affected (0.00 sec)
mysql> show create table t\G
*************************** 1. row ***************************
Table: t
Create Table: CREATE TABLE t (
a int(11) NOT NULL,
b int(11) DEFAULT NULL,
c int(11) DEFAULT NULL,
d int(11) DEFAULT NULL,
e blob,
KEY bcd_key (b,c,d)
) ENGINE=TOCAB DEFAULT CHARSET=latin1 lmemb=64 nmemb=128 bnum=32767 apow=6 fpow=8 compression=tcbs large=yes
1 row in set (0.00 sec)

Note that invalid values are not accepted:

mysql> create table t1 (a int primary key) engine=tocab compression=gzip;
ERROR 1651 (HY000): Incorrect value 'gzip' for option 'compression'