Db2 Ends 2024 Very Strong and Marks My 28th Year of Working with Db2 on LUW

With the release of Db2 12.1, IBM has incorporated AI into an already stellar product. The enhanced optimizer uses AI to improve performance through continuous feedback. Support for object storage and expanded AWS integration and offerings will keep Db2 at the forefront of cloud-based database solutions. It's been a great ride, and I've seen so many improvements over the years that make it easy for DBAs to implement, support, and monitor all aspects of database performance. With most settings set to automatic and with automatic storage tablespaces, companies can get up and running quickly with Db2 and get good performance right out of the box. With advanced optimization and AI, developers can spend less time tuning SQL queries while Db2 takes care of it in the background. These improvements let DBAs spend more time on high availability, clustering, backup and recovery, disaster recovery planning, and ways to exploit all the capabilities offered in Db2 12.1. For more information, refer to my previous post on this release. Cheers!

Db2 12.1 for Linux Unix and Windows Released on 14 November 2024

Get the details here: https://www.ibm.com/new/announcements/ibm-unveils-db2-12-1-the-next-generation-enterprise-database-powered-by-ai

Db2 12.1 TPC-E Benchmark

Check out how Db2 12.1 outperformed PostgreSQL in a comparison using the TPC-E benchmark.

https://community.ibm.com/community/user/datamanagement/blogs/christian-garcia-arellano/2024/12/12/db2-vs-postgresql

AIX Live Update

Check out this new capability that will enable you to take Db2 to new heights!

https://www.ibm.com/docs/en/aix/7.3?topic=updates-live-update

Db2 12.1 Announced

IBM recently announced the AI-powered Db2 12.1! See the announcement at https://www.ibm.com/new/announcements/ibm-unveils-db2-12-1-the-next-generation-enterprise-database-powered-by-ai

Db2 Provided Monitoring Tools — A Plethora!

I thought I would write this post to highlight the many monitoring tools provided with Db2 that I use on a regular basis for monitoring and tuning databases and applications. The first tool I reach for when troubleshooting an application performance or locking problem is db2top (or dsmtop if you prefer). I find db2top useful for reviewing sessions and applications connected to the database. You can easily find the top executing application in terms of CPU, I/O, or a number of other metrics, and then go right into an explain of the SQL within the tool.

The next set of tools I use is the MONREPORT reporting package. Before this package became available, I had a script that used a RANKING function to capture and rank the top 10 SQL statements. With the MONREPORT package, you can capture currently executing SQL by calling the MONREPORT.CURRENTSQL procedure. This shows the top executing SQL in terms of CPU, rows read/written, and direct reads/writes; you can then use db2exfmt to explain and review the SQL. The MONREPORT.PKGCACHE report shows dynamic and static SQL in the package cache, ranked in 10 categories such as CPU, I/O, wait time, and lock wait. For the highest-ranked SQL in terms of cost (or a specific SQL statement of interest), I take its EXECUTABLE_ID and run it through the EXPLAIN_FROM_SECTION stored procedure, which captures the explain information into the explain tables. I then use db2exfmt to format the SQL and access path for review. I can also run the SQL through the Design Advisor to see if any indexes might offer a big improvement. But my first option in tuning SQL is to see whether a rewrite can improve performance, along with checking that runstats are correct and current.

Only after that do I pursue an index solution. That said, don't discount the Design Advisor: in many client databases I have found poorly designed indexes, where big improvements can be made. And finally, I use the LOCKING event monitor when pursuing locking problems. This event monitor should be defined and running on all your databases; in conjunction with the db2evmonfmt tool, it provides you and your application developers with the information necessary to correct lock-contention issues.
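The workflow above can be sketched as a short command sequence. This is a sketch, not a prescription: the database name MYDB is an assumption, and the EXECUTABLE_ID shown is a placeholder you would copy from the PKGCACHE report output.

```shell
# Connect and rank currently executing SQL by CPU, rows read/written,
# and direct reads/writes
db2 connect to MYDB
db2 "CALL MONREPORT.CURRENTSQL()"

# Rank dynamic and static SQL in the package cache across 10 categories
db2 "CALL MONREPORT.PKGCACHE()"

# Explain a statement of interest straight from the package cache using
# its EXECUTABLE_ID (placeholder shown); 'M' means the section is taken
# from memory, and the output parameters are returned via the ? markers
db2 "CALL EXPLAIN_FROM_SECTION(x'01000000000000007800000000000000', 'M', NULL, 0, NULL, ?, ?, ?, ?, ?)"

# Format the most recent capture in the explain tables for review
db2exfmt -d MYDB -1 -o top_sql_plan.txt
```

From the formatted plan you can then judge whether a rewrite, fresh runstats, or an index change is the right fix.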

I also use db2pd and many of its options to review OS information, buffer pool performance, memory usage, and transaction, application, and locking activity. The combination of transaction, application, and lock data can be used to identify lock holders and waiters and help you pinpoint lock contention (for example: db2pd -d dbname -trans -applications -locks showlocks). A very powerful option is db2pd -tcbstats index. It shows insert, update, and delete activity for tables and indexes, along with much of the metadata associated with a table and tablespace, and can be used to identify the most heavily used indexes, tables, and tablespaces, as well as overflows. And lastly, I use the db2pd -dbptnmem option to review instance and database memory allocation and usage. Some of the above monitoring tools require that snapshot monitor switches or request- and activity-based monitoring DB CFG parameters be set so that the information is collected. I didn't touch on the MON_GET table functions, which provide a wealth of Db2 information at very low overhead; they will be covered in a later post.
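For reference, here are the db2pd invocations described above, assuming a database named mydb:

```shell
# Transactions, applications, and lock holders/waiters in one pass;
# showlocks adds detail on each lock held or awaited
db2pd -d mydb -trans -applications -locks showlocks

# Per-table and per-index insert/update/delete counters plus metadata
# for the table, index, and containing tablespace
db2pd -d mydb -tcbstats index

# Instance and database memory allocation and usage
db2pd -dbptnmem
```

Because db2pd attaches directly to Db2 memory sets rather than issuing SQL, these commands add very little overhead even on a busy system.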

Still using SMS tablespaces for user data? They have been deprecated for quite some time…

There are a couple of methods that can be used to move data to new automatic storage tablespaces. The ADMIN_MOVE_TABLE supplied stored procedure offers lots of options and is best suited to tables with small to medium amounts of data. LOAD FROM CURSOR is another method and can be used to move large amounts of data. To enable a database for automatic storage, add a storage group and specify multiple paths; Db2 will then balance the table data over the available paths, which helps provide balanced I/O. To restore a database that uses both automatic storage and SMS for system data to another non-automatic-storage database, generate a redirected restore script and specify the correct STOGROUP.
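As a sketch of the ADMIN_MOVE_TABLE method, using hypothetical names throughout (sg_data, ts_data, and a table MYSCHEMA.MYTAB), the move might look like:

```shell
# Create a storage group over multiple paths so Db2 can balance I/O
db2 "CREATE STOGROUP sg_data ON '/db2/path1', '/db2/path2', '/db2/path3'"

# Create an automatic storage tablespace in that storage group
db2 "CREATE TABLESPACE ts_data MANAGED BY AUTOMATIC STORAGE USING STOGROUP sg_data"

# Move the table online into the new tablespace: the same tablespace is
# given for data, index, and LOB targets, and 'MOVE' runs every phase
# (INIT, COPY, REPLAY, SWAP) in a single call
db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA','MYTAB','TS_DATA','TS_DATA','TS_DATA','','','','','','MOVE')"
```

The multi-call form of ADMIN_MOVE_TABLE (running INIT, COPY, REPLAY, and SWAP as separate calls) gives more control over when the brief outage of the final swap occurs.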

Is Suboptimal SQL Impacting your Database?

After DBM and DB CFG settings have been set properly, whether through automatic settings and STMM or manually, suboptimal SQL is likely to be your number one performance problem. Db2 contains a plethora of internal tools and facilities to help you identify and tune it. Before the MONREPORT reporting tool was introduced, I used a custom script to capture and rank the top 10 SQL statements in terms of CPU usage, rows read, number of executions, and so on. Now I use the MONREPORT CURRENTSQL and PKGCACHE reports to capture suboptimal SQL, and then db2exfmt, db2expln, or the EXPLAIN_FROM_SECTION stored procedure to review it. In fact, EXPLAIN_FROM_SECTION is often my first choice, with its output fed to db2exfmt to format the access plan. As a matter of routine, I then run the Db2 Design Advisor to see if any index recommendations would provide a significant improvement. Before applying recommended indexes, I review the existing indexes on the tables involved to determine whether there are already too many, check the recommendations for redundancy, and consider application usage and a possible SQL rewrite. I hope this discussion helps you develop your own procedures for capturing, identifying, and tuning suboptimal SQL.
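The Design Advisor step can be run from the command line with db2advis. A sketch, assuming a database named mydb and a placeholder query standing in for the suboptimal statement you captured:

```shell
# Ask Design Advisor for index recommendations for a single statement,
# capping the advisor's own run time at 5 minutes
db2advis -d mydb \
  -s "SELECT c.name FROM customer c WHERE c.region = 'EAST'" \
  -t 5
```

db2advis can also take a workload file of many statements, which is usually a better reflection of real application usage than tuning one statement in isolation.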

Db2 SQL Procedures — An Oldie but Goodie

A presentation I gave at the 2002 Db2 Tech Conference in Anaheim, CA.

https://slideplayer.com/amp/8146026/?fbclid=IwAR1CR6X3M3ZuTTFOIRUMQNiTAOoS3pGUXcjqejoTRQqhSyaDEI7-ybUHWLo

State of Db2 for Linux, UNIX and Windows for end of year 2022

This post is based on my perspective from over 25 years of experience with Db2 LUW. First, I have clients that have been running Db2 continuously for over 17 years, with my support for all of those years. What kinds of industries use Db2 on Linux, UNIX, and Windows? As previously stated, one client has been running Db2 on Windows for over 17 years in a high-volume OLTP environment with HADR and reads on standby. Another client uses Db2 to store and analyze WiFi performance data on Red Hat Linux. Another client, also on Red Hat Linux, uses Db2 as the frontend for their web commerce portal with HADR. And finally, a client uses Db2 on many AIX servers for their health benefits and Medicare processing application, some on-prem and some in the cloud (SaaS). Some are still running Db2 10.5 and 11.1, while others have recently upgraded to 11.5.7. In my opinion, whether on-prem or in the cloud, Db2 has an offering for you and is going strong. Oh, and the most interesting use of Db2 I was involved in over the last 25 years was a small gold mine in Ontario using Db2 in their mining process!