Compiling Drizzle on CentOS 5.4

April 6, 2010

I was able to compile Drizzle on CentOS today thanks to Neil Armitage's post on his website.

Clean install of CentOS 5.4 with Development Tools and Development Libraries

yum groupinstall "Development Tools"
yum groupinstall "Development Libraries"

Set up the drizzle user account and allow it to sudo

/usr/sbin/visudo
uncomment %wheel ALL=(ALL) NOPASSWD: ALL
useradd drizzle
gpasswd -a drizzle wheel

Install Required Dependencies

yum install autoconf autoconf.noarch bison-devel.x86_64 \
bison.x86_64 bzr cpp.x86_64 e2fsprogs-devel.i386 e2fsprogs-devel.x86_64 \
gcc gcc-c++.x86_64 gcc.x86_64 glib2-devel glibc-devel.x86_64 glibc.x86_64 \
gperf libevent-devel.x86_64 libstdc++.i386 libtool.x86_64 ncurses-devel.i386 \
ncurses-devel.x86_64 ncurses.x86_64 pcre-devel.x86_64 pcre.i386 pcre.x86_64 \
readline-devel.x86_64 readline.x86_64 zlib-devel.x86_64

Install Protobufs

wget http://protobuf.googlecode.com/files/protobuf-2.3.0.tar.gz
tar -xvf protobuf-2.3.0.tar.gz
cd protobuf-2.3.0
./configure
make
make install

Install bzr

wget http://launchpad.net/bzr/2.1/2.1.0b4/+download/bzr-2.1.0b4.tar.gz
tar -xvf bzr-2.1.0b4.tar.gz
cd bzr-2.1.0b4
python setup.py install

Make the Local bzr Repo

su - drizzle

mkdir ~/bzrwork
bzr init-repo --2a ~/bzrwork
cd ~/bzrwork

Build libdrizzle

bzr branch lp:libdrizzle
cd libdrizzle
./config/autorun.sh
./configure
make
sudo make install

Build Drizzle

cd ~/bzrwork
bzr branch lp:drizzle
cd drizzle
./config/autorun.sh
./configure
make
sudo make install

Run the tests

cd tests
./test-run

Configure and Start Drizzle

sudo mkdir /usr/local/var
sudo chown drizzle.drizzle /usr/local/var
cd /usr/local
/usr/local/sbin/drizzled --no-defaults --drizzle-protocol-port=9306 \
--basedir=$PWD --datadir=$PWD/var >> $PWD/var/drizzle.err 2>&1 &

Connect to drizzle

drizzle

Mark Grennan

REVIEW – MySQL Admin Cookbook

April 2, 2010

MySQL Admin Cookbook
99 great recipes for mastering MySQL configurations and administration
by Daniel Schneller,  Udo Schwedt
Published by PACKT Publishing

This book is a great first attempt at a book that has been needed for a long time.   It covers the everyday practical parts of managing a MySQL server.

I have been hungry for this book for a long time. Most books about MySQL focus on the SQL language and give you long chapters on how to form complex queries. Or, they dive deep into MySQL commands and the use of my.cnf. I have a bookshelf full of these books. None have helped me make backups on-the-fly without interrupting customer services, or keep mysqldump from failing with error 2013. This book, "MySQL Admin Cookbook", covers the everyday issues database administrators (DBAs) face.

I'm calling it a "first attempt" because you'll get a taste of the practical side, but the authors don't give you all the tools. It covers transferring connection settings between different machines with a network share. Good. It doesn't talk enough about the tools every MySQL DBA should know, like MaatKit, openarkkit, mycheckpoint, mysqltuner and so many more.

Another example: Chapter 5 covers the common details of loading large data sets using the LOAD DATA INFILE command. Common. But it doesn't talk about how you can replace CSV or MyISAM files directly with tools like mysql-replace-tables-live.

If you are NOT a DBA and MySQL is something you're using, I fully recommend this book. If you are a DBA, unless you're new, you will find most of this book covers things you have already learned on your own.

This book starts like a very good HOWTO, with replication in chapter one (without mentioning the monitoring tool mk-heartbeat). By chapter six it gets cooking with good tips on analyzing, monitoring and configuring your MySQL server, and then simmers to an ending with managing users and schemas.

99 GREAT recipes? No. Everyday fare is more like it.

The OpenArk Kit

March 25, 2010

Shlomi Noach is a DBA, authorized MySQL instructor, software developer, and winner of the MySQL Community Member of the Year award for 2009.

He has published his own collection of MySQL scripts and you might find them useful. Shlomi calls them the "openark kit".

The available tools are:

  • oak-apply-ri: apply referential integrity on two columns with parent-child relationship.
  • oak-block-account: block or release MySQL users accounts, disabling them or enabling them to login.
  • oak-chunk-update: Perform long, non-blocking UPDATE/DELETE operation in auto managed small chunks.
  • oak-kill-slow-queries: terminate long running queries.
  • oak-modify-charset: change the character set (and collation) of a textual column.
  • oak-online-alter-table: Perform a non-blocking ALTER TABLE operation.
  • oak-purge-master-logs: purge master logs, depending on the state of replicating slaves.
  • oak-security-audit: audit accounts, passwords, privileges and other security settings.
  • oak-show-limits: show AUTO_INCREMENT “free space”.
  • oak-show-replication-status: show how far behind are replicating slaves on a given master.

All tools are written in Python, and require Python 2.3 or newer, and the python-mysqldb driver. Some tools require MySQL 5.0 or higher; see the docs for each tool.

The openark kit is released under the BSD license.

Thanks Shlomi.

Enjoy!

Loading Bulk CSV Tables

March 23, 2010

In my job I use many data tables that are transient. New weather data is received all the time and old data is purged. Most of these tables are received as CSV files. The data is then loaded into MySQL tables and indexed to be used with geographic queries.

Most of these tables never see an insert or update. It would be nice if you could make these CSV tables read-only and build byte-pointer indexes for each row. (Maybe someday I'll code this into MySQL.)

Most people load large data tables at night with the LOCK & LOAD method. It goes like LOCK TABLE…; LOAD DATA INFILE…; UNLOCK TABLE. In other words, nobody can read data or generate reports while this is running.
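
As a rough sketch of that pattern (the table and file names below are placeholders of mine, not from the original post):

-- readers block on the WRITE lock until UNLOCK TABLES runs
LOCK TABLES weather WRITE;
LOAD DATA INFILE '/data/weather_20100323.csv'
    INTO TABLE weather
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES;
UNLOCK TABLES;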

With the script I developed I have been able to load 33,000,000 records from a CSV file into a MySQL table, with indexes, in 22m 36.282s without creating long LOCK times affecting the users.

Here is what I’m doing.  This is a proof of concept script written in BASH.

Continue Reading »

A Cache of Its Own

March 12, 2010

Most systems have one table that just gets pounded. If this table were very small you could load it into memory using the MEMORY storage engine. For larger tables that get random requests this doesn't make sense, so like other tables you rely on caching. The problem is that cache memory is shared. Another query can push your always-needed data out of memory and you're back to long disk reads.

If you use a "named" index cache you can give your most critical indexes memory space of their own.
Structured system variables let you create a named index key cache of a given size. The CACHE INDEX command lets you assign the indexes you want to that named cache, and the LOAD INDEX INTO CACHE command pre-loads them into memory. The only question is: how big should you make the cache for your indexes?
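
Here is a minimal sketch of the idea for a MyISAM table (the cache name, size and table name are placeholders of mine, not from the post):

-- create a dedicated 128MB key cache named hot_cache
SET GLOBAL hot_cache.key_buffer_size = 128 * 1024 * 1024;
-- assign the table's indexes to the named cache
CACHE INDEX city_lookup IN hot_cache;
-- pre-load those indexes into memory
LOAD INDEX INTO CACHE city_lookup;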

Continue Reading »

Find Abandoned Tables

March 2, 2010

Sometimes it's the simple functions that help.

This simple query will return tables that haven't been updated in over a year. They are most likely abandoned. Each should be reviewed to be dropped or converted into a compressed archive table.

USE information_schema;
SELECT TABLE_SCHEMA,
    TABLE_NAME,
    TABLE_ROWS,
    CREATE_TIME,
    UPDATE_TIME
FROM `TABLES`
    WHERE (TABLE_SCHEMA != "information_schema")
        AND (TABLE_SCHEMA != "mysql")
        AND (UPDATE_TIME < NOW() - INTERVAL 1 YEAR);
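
If a table really is abandoned, one option is converting it to the compressed ARCHIVE engine (a hedged sketch; the table and index names are placeholders of mine):

-- the ARCHIVE engine does not support regular indexes, so drop them first
ALTER TABLE weather_2008 DROP INDEX idx_station;
ALTER TABLE weather_2008 ENGINE = ARCHIVE;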

MySQL performance flow chart

March 1, 2010

Words of wisdom from a server administrator


(REPOSTED FROM RACKER HACKER)

As much as some people might disagree, optimizing MySQL isn’t rocket science. There are some basic steps that every server administrator can follow to improve the performance of their MySQL server. I’ve outlined these steps in the flow chart included in this post.

While it won't cover all situations, it will give ideas to those who are just starting down the path of MySQL optimization. If you have any corrections or suggestions for improvements, please let me know!

Single Query Profiling

February 26, 2010

Ever wonder what was taking MySQL so long to run your query? You can ask it.

mysql> set profiling=1;
mysql> SELECT * FROM country WHERE city like 'okla' order by name;
mysql> SHOW PROFILES;

You should now see a table with a single row, with the number 1 to the left of your query.

mysql> SHOW PROFILE FOR QUERY 1;
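
SHOW PROFILE can also break the time down by specific resources, for example (standard profiling syntax, not something shown in the original post):

mysql> SHOW PROFILE CPU, BLOCK IO FOR QUERY 1;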

Continue Reading »

MySQL Table Caching

February 25, 2010

Too much of a good thing…

I have had a lot of people asking me about MySQL lately, so I thought it was time to share something I learned. Too much caching space in MySQL can be a bad thing.

You would think that the more of your database you can stuff into memory, the better off you would be and the faster your system would work. That is not completely true. I have made improvements by lowering the table_cache variable on every version of MySQL 5. From what I have read this seems to be due to the overhead of managing file descriptors and cached tables. Like most things, there is a point of diminishing returns when dealing with table caching. At some point your hit rate is overtaken by the management threads.
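
One rough way to sanity-check the table cache (my own suggestion, not from the original post) is to compare how many tables are open right now with how many have been opened since startup:

-- a steadily climbing Opened_tables while the cache is full suggests table_cache is too small;
-- Open_tables far below table_cache suggests the cache can be lowered
SHOW GLOBAL STATUS LIKE 'Open%tables';
SHOW GLOBAL VARIABLES LIKE 'table_cache';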

This seems to work best: read_rnd_buffer_size should be four times the sort and join buffers, and they in turn should be four times the read buffer. Something like this:

sort_buffer_size = 16M
read_buffer_size = 4M
read_rnd_buffer_size = 64M
join_buffer_size = 16M

Send me feedback at Mark at Grennan.com

Hello world!

February 23, 2010

Yes, I've started yet another blog. You'd think I would learn.

This time it's about MySQL. The database we …

 


