ISAM HVDB Cleanup
The ISAM HVDB (High Volume DataBase) is used to store transient and persistent data for many of the RTSS runtime functions. The HVDB stores items such as AAC device signatures, user access tokens, and session attributes, and can act as a dynamic cache for the federation component. This data expires, sometimes shortly after being created, and is routinely cleaned up by the ISAM RTSS runtime using cleanup task threads. As the features have evolved, additional cleanup tasks have been added to keep the database healthy.
At the time of writing, ISAM v9.0.7.1 has five configurable cleanup tasks:
- RBA Session cleanup
- Distributed Map cleanup
- OAuth Token Cache cleanup
- MMFA Authenticators cleanup
- Authentication Service cleanup
IBM recommends using an external HVDB database in a Production environment for capacity, availability and troubleshooting reasons. Using an external database gives you the opportunity to monitor and externalize the cleanup tasks. Take a few moments to review the best practices and tuning recommendations from IBM Support.
You can easily schedule the cleanup to execute off hours rather than relying on intervals between cleanup tasks. Keep in mind that this is not without risk if done incorrectly, a warning the documentation makes up front. Be sure to involve your DBAs in this endeavor.
One of the first tasks you may want to undertake is getting an idea of the current state of the HVDB cleanup. In some cases, the cleanup thread simply stops running, with or without errors. In extreme cases, cleanup failures cause a cascading failure of the database as transactions roll back and the file system fills with archived transaction logs. You can create a simple shell script to run periodically on the database server as the instance owner:
#!/bin/ksh
#
# **************************************************************************
VERSION="rowcounts.sh - v1.05_2020-AUG-27"
# *
# * Kevin Jeffery
# *
# * HVDB Rowcounts for cleanup
# *
# **************************************************************************
# Default setting of attributeCollection.sessionTimeout (seconds)
SESSION_TIMEOUT=1800
# Default setting of deviceRegistration.inactiveExpirationTime (days)
INACTIVE_EXPIRATION_TIME=100
# Current time in seconds since epoch
CURRENT_TIME_SECONDS=$(date +%s)
# Current time in milliseconds since epoch
CURRENT_TIME_MILLIS=$(date +%s%3N)
echo "**********************************************************************"
echo "*** $(date)"
echo "*** ${VERSION}"
function usage {
echo "*** Usage: ${0} -s (schema_name)"
echo "*"
echo "* -s Schema name of the HVDB user tables."
echo "* -t Attribute Collection Session timeout in seconds (attributeCollection.sessionTimeout Default: 1800)"
echo "* -d Device registration inactive expiration time in days (deviceRegistration.inactiveExpirationTime Default: 100)"
case $1 in
s) echo -e "* Schema name missing\n"
;;
esac
exit 1
}
while getopts ":s:t:d:" option;
do
case $option in
s)
typeset -u schema_name=${OPTARG}
;;
t)
typeset -i SESSION_TIMEOUT=${OPTARG}
;;
d)
typeset -i INACTIVE_EXPIRATION_TIME=${OPTARG}
;;
:)
echo "option -$OPTARG needs an argument"
;;
*)
echo "invalid option -$OPTARG"
;;
esac
done
[[ -z ${schema_name} ]] && usage s
. ~/sqllib/db2profile
db2 connect to HVDB
echo "*** ${schema_name}.AUTH_TXN_OBL_DATA"
db2 "select count(*) as ROWCOUNT from ${schema_name}.AUTH_TXN_OBL_DATA"
db2 "select count(*) as EXPIRED from ${schema_name}.AUTH_TXN_OBL_DATA where REC_TIME < (CURRENT TIMESTAMP - ${SESSION_TIMEOUT} SECONDS)"
echo "*** ${schema_name}.RBA_DEVICE"
db2 "select count(*) as ROWCOUNT from ${schema_name}.RBA_DEVICE"
db2 "select count(*) as EXPIRED from ${schema_name}.RBA_DEVICE where timestampdiff(16, char(CURRENT TIMESTAMP - LAST_USED_TIME)) > ${INACTIVE_EXPIRATION_TIME}"
echo "*** ${schema_name}.RBA_USER_ATTR_SESSION"
db2 "select count(*) as ROWCOUNT from ${schema_name}.RBA_USER_ATTR_SESSION"
db2 "select count(*) as EXPIRED from ${schema_name}.RBA_USER_ATTR_SESSION where REC_TIME < (CURRENT TIMESTAMP - ${SESSION_TIMEOUT} SECONDS)"
echo "*** ${schema_name}.DMAP_ENTRIES"
db2 "select count(*) as ROWCOUNT from ${schema_name}.DMAP_ENTRIES"
db2 "select count(*) as EXPIRED from ${schema_name}.DMAP_ENTRIES where DMAP_EXPIRY < ${CURRENT_TIME_MILLIS} AND DMAP_EXPIRY <> 0"
echo "*** ${schema_name}.AUTH_SVC_SESSION_CACHE"
db2 "select count(*) as ROWCOUNT from ${schema_name}.AUTH_SVC_SESSION_CACHE"
db2 "select count(*) as EXPIRED from ${schema_name}.AUTH_SVC_SESSION_CACHE where EXPIRY < ${CURRENT_TIME_MILLIS}"
echo "*** ${schema_name}.OAUTH20_TOKEN_CACHE"
db2 "select count(*) as ROWCOUNT from ${schema_name}.OAUTH20_TOKEN_CACHE"
db2 "select count(*) as EXPIRED from ${schema_name}.OAUTH20_TOKEN_CACHE where LIFETIME < (${CURRENT_TIME_SECONDS} - DATE_CREATED/1000)"
echo "*** ${schema_name}.OAUTH20_TOKEN_EXTRA_ATTRIBUTE"
db2 "select count(*) as ROWCOUNT from ${schema_name}.OAUTH20_TOKEN_EXTRA_ATTRIBUTE"
db2 "select count(*) as EXPIRED from ${schema_name}.OAUTH20_TOKEN_EXTRA_ATTRIBUTE where STATE_ID not in (select STATE_ID from ${schema_name}.OAUTH20_TOKEN_CACHE where LIFETIME > (${CURRENT_TIME_SECONDS} - DATE_CREATED/1000))"
echo "*** ${schema_name}.OAUTH_AUTHENTICATORS"
db2 "select count(*) as ROWCOUNT from ${schema_name}.OAUTH_AUTHENTICATORS"
db2 "select count(*) as EXPIRED from ${schema_name}.OAUTH_AUTHENTICATORS where STATE_ID not in (select STATE_ID from ${schema_name}.OAUTH20_TOKEN_CACHE where LIFETIME > (${CURRENT_TIME_SECONDS} - DATE_CREATED/1000))"
db2 terminate
Context-based Access Cleanup
ISAM Context-based access uses three sets of functional tables that require regular cleanup: Obligation Data, Attribute Collection and Registered Device Fingerprint. By default, the cleanup thread runs on all MGA appliances whether they are in a cluster or not. Starting with ISAM 9.0.6, this cleanup can be disabled by setting session.dbCleanupInterval in Advanced Configuration Parameters to 0. Refer to the Product Documentation for further information regarding appropriate settings for your environment.
The AUTH_TXN_OBL_DATA, AUTH_TXN_OBL_PARAMETERS_DATA, and AUTH_TXN_OBL_CTX_ATTRS_DATA tables are linked by a foreign key relationship on the TXN_ID column, so deleting expired records from AUTH_TXN_OBL_DATA also removes the child table entries. The REC_TIME column in AUTH_TXN_OBL_DATA contains the creation timestamp; subtracting the value of the Advanced Configuration property attributeCollection.sessionTimeout (default 1,800 seconds) from the current time allows you to select the expired records for deletion.
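As a rough sketch, the EXPIRED query from the rowcounts script can be turned into a delete. The schema name ISAM is an assumption (substitute your own), and the default 1,800-second timeout is used; the foreign key relationship takes care of the child-table rows:
-- Assumes schema ISAM and attributeCollection.sessionTimeout of 1800 seconds
DELETE FROM ISAM.AUTH_TXN_OBL_DATA
  WHERE REC_TIME < (CURRENT TIMESTAMP - 1800 SECONDS);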
The RBA_USER_ATTR_SESSION and RBA_USER_ATTR_SESSION_DATA tables are also linked by a foreign key relationship, on the SESSION_ID column. The REC_TIME column in RBA_USER_ATTR_SESSION likewise contains the creation timestamp; subtracting attributeCollection.sessionTimeout (default 1,800 seconds) from the current time allows you to select the expired records for deletion.
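The equivalent delete for the attribute collection session tables follows the same pattern (again assuming schema ISAM and the default timeout):
-- Child rows in RBA_USER_ATTR_SESSION_DATA are removed via the foreign key
DELETE FROM ISAM.RBA_USER_ATTR_SESSION
  WHERE REC_TIME < (CURRENT TIMESTAMP - 1800 SECONDS);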
The RBA_DEVICE, RBA_DEVICE_FINGERPRINT, and RBA_USER_DEVICE tables are linked by a foreign key relationship on the DEVICE_ID column. The LAST_USED_TIME column in RBA_DEVICE contains the timestamp of the last time the device was used. The value of the Advanced Configuration property deviceRegistration.inactiveExpirationTime (default 100 days) can be compared with a TIMESTAMPDIFF in days between the current time and the last used time, allowing you to select devices that have been inactive longer than this value.
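A sketch of the device cleanup, assuming schema ISAM and an inactive expiration time of 100 days; interval code 16 for TIMESTAMPDIFF returns whole days, as in the rowcounts script:
-- Remove devices that have been inactive longer than the expiration window
DELETE FROM ISAM.RBA_DEVICE
  WHERE TIMESTAMPDIFF(16, CHAR(CURRENT TIMESTAMP - LAST_USED_TIME)) > 100;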
Distributed Map Cleanup
The distributed map is a single table in the database for storing shared temporary data. When you are performing a distributed map clean-up, rows are deleted when their expiry is reached. By default, cleanup on this table runs frequently on every ISAM MGA node, but it can be adjusted to run only on the master node in a cluster or disabled completely by setting distributedMap.cleanupWait to 0. Manual cleanup of the table is relatively simple.
The DMAP_EXPIRY column in the DMAP_ENTRIES table contains the epoch time in milliseconds at which an entry expires; a value of 0 indicates the entry does not expire. This column can be compared with the current epoch time in milliseconds to determine the expired rows for deletion, excluding those with a value of 0.
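A minimal delete sketch, assuming schema ISAM; the literal below is only a placeholder for the current epoch time in milliseconds, which a wrapper script would normally substitute (as the rowcounts script does with date +%s%3N):
-- Replace 1597785600000 with the current epoch time in milliseconds
-- Entries with DMAP_EXPIRY = 0 never expire and must be excluded
DELETE FROM ISAM.DMAP_ENTRIES
  WHERE DMAP_EXPIRY <> 0
    AND DMAP_EXPIRY < 1597785600000;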
OAuth Token Cleanup
OAuth tokens are stored in the ISAM HVDB in two related tables, OAUTH20_TOKEN_CACHE and OAUTH20_TOKEN_EXTRA_ATTRIBUTE. These tables are NOT linked by a foreign key relationship. The OAUTH20_TOKEN_EXTRA_ATTRIBUTE table is related to the OAUTH20_TOKEN_CACHE by the STATE_ID column. When an authorization code is exchanged for an access token, or an access token is refreshed, the rows in the OAUTH20_TOKEN_EXTRA_ATTRIBUTE table are moved to the new token. This is one of the two more complex cleanups documented for the ISAM HVDB.
Entries are deleted from the OAUTH20_TOKEN_CACHE table when they have expired: convert the DATE_CREATED value (epoch time in milliseconds) to seconds, add the LIFETIME value (token lifetime in seconds), and compare the result to the current epoch time in seconds.
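Expressed as a delete (schema ISAM assumed; the literal is a placeholder for the current epoch time in seconds), this matches the expiry test used in the rowcounts script:
-- Replace 1598554800 with the current epoch time in seconds
-- Expired when DATE_CREATED/1000 + LIFETIME is in the past
DELETE FROM ISAM.OAUTH20_TOKEN_CACHE
  WHERE LIFETIME < (1598554800 - DATE_CREATED / 1000);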
When performing clean-up on the OAUTH20_TOKEN_EXTRA_ATTRIBUTE table, a row is deleted when its STATE_ID is not found in the OAUTH20_TOKEN_CACHE table.
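A sketch of the orphan cleanup, assuming schema ISAM and that the expired OAUTH20_TOKEN_CACHE rows have already been purged:
-- Remove attribute rows whose token no longer exists
DELETE FROM ISAM.OAUTH20_TOKEN_EXTRA_ATTRIBUTE
  WHERE STATE_ID NOT IN
    (SELECT STATE_ID FROM ISAM.OAUTH20_TOKEN_CACHE);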
Mobile Multi-Factor Authentication Cleanup
This is the second complex cleanup documented for the ISAM HVDB. The OAUTH_AUTHENTICATORS table is also related to OAUTH20_TOKEN_CACHE by the STATE_ID column.
When performing cleanup on the OAUTH_AUTHENTICATORS table, rows are deleted when the STATE_ID is not found in the OAUTH20_TOKEN_CACHE.
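The same orphan pattern applies here (schema ISAM assumed, and the token cache purged of expired rows first):
-- Remove authenticator registrations whose token no longer exists
DELETE FROM ISAM.OAUTH_AUTHENTICATORS
  WHERE STATE_ID NOT IN
    (SELECT STATE_ID FROM ISAM.OAUTH20_TOKEN_CACHE);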
Authentication Service Cleanup
The Authentication Service table is only populated when the authsvc.stateMgmt.store Advanced Configuration parameter is set to HVDB. When the clean-up is performed, rows are deleted once they have expired, based on the value of the EXPIRY column (epoch time in milliseconds).
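A minimal sketch, assuming schema ISAM; as before, the literal is a placeholder for the current epoch time in milliseconds:
-- Replace 1597785600000 with the current epoch time in milliseconds
DELETE FROM ISAM.AUTH_SVC_SESSION_CACHE
  WHERE EXPIRY < 1597785600000;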
Cleanup Stored Procedures
Externalizing and automating the cleanup process is accomplished in two parts. A set of stored procedures is created, one for each table, to select and delete the expired data from the database. Rows are deleted individually to minimize database contention and committed in batches of 1,000. Each procedure takes at least one input parameter to limit the scope of the cleanup to a finite number of rows (usually 1,000,000). Some procedures require a second input parameter supplying the current setting from your configuration or the current time in either seconds or milliseconds. All of the procedures have three output parameters: the current number of rows in the table, the number of expired rows before the cleanup started, and the number of rows purged from the database.
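To illustrate the pattern (a simplified sketch only, not the actual procedures from the companion site), a procedure for the distributed map table might look like the following. It assumes a schema named ISAM; the procedure and parameter names are illustrative. It uses a held cursor to delete expired rows one at a time, commits every 1,000 deletes, stops after P_MAX_ROWS rows, and returns the three counts described above.
-- Illustrative sketch: purge expired distributed map entries in batches
CREATE OR REPLACE PROCEDURE ISAM.CLEANUP_DMAP_ENTRIES (
    IN  P_MAX_ROWS     BIGINT,
    IN  P_CURRENT_MS   BIGINT,
    OUT P_TOTAL_ROWS   BIGINT,
    OUT P_EXPIRED_ROWS BIGINT,
    OUT P_PURGED_ROWS  BIGINT)
LANGUAGE SQL
BEGIN
    DECLARE V_EXPIRY BIGINT;
    DECLARE V_AT_END INTEGER DEFAULT 0;
    DECLARE C_EXPIRED CURSOR WITH HOLD FOR
        SELECT DMAP_EXPIRY
          FROM ISAM.DMAP_ENTRIES
         WHERE DMAP_EXPIRY <> 0
           AND DMAP_EXPIRY < P_CURRENT_MS
           FOR UPDATE;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET V_AT_END = 1;

    -- Report the table size and the expired-row count before purging
    SELECT COUNT(*) INTO P_TOTAL_ROWS FROM ISAM.DMAP_ENTRIES;
    SELECT COUNT(*) INTO P_EXPIRED_ROWS
      FROM ISAM.DMAP_ENTRIES
     WHERE DMAP_EXPIRY <> 0 AND DMAP_EXPIRY < P_CURRENT_MS;

    SET P_PURGED_ROWS = 0;
    OPEN C_EXPIRED;
    FETCH C_EXPIRED INTO V_EXPIRY;
    WHILE V_AT_END = 0 AND P_PURGED_ROWS < P_MAX_ROWS DO
        -- Delete one row at a time to keep lock contention low
        DELETE FROM ISAM.DMAP_ENTRIES WHERE CURRENT OF C_EXPIRED;
        SET P_PURGED_ROWS = P_PURGED_ROWS + 1;
        -- Commit in batches of 1,000 to keep the transaction log small
        IF MOD(P_PURGED_ROWS, 1000) = 0 THEN
            COMMIT;
        END IF;
        FETCH C_EXPIRED INTO V_EXPIRY;
    END WHILE;
    CLOSE C_EXPIRED;
    COMMIT;
END
Because the procedure body contains semicolons, create it with an alternate statement terminator (for example, db2 -td@ -f with the file ending in @).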
The sample cleanup script calls the stored procedures in sequence; pass the appropriate parameters for your environment. Other than the name of the database user (the instance name in a default installation), the parameters are optional if you are using default values.
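For example, the hypothetical procedure sketched above could be invoked from the DB2 command line processor as follows; the question marks receive the three output counts, and the second argument is a placeholder for the current epoch time in milliseconds that a wrapper script would supply:
-- Row limit of 1,000,000; replace 1597785600000 with the current epoch milliseconds
CALL ISAM.CLEANUP_DMAP_ENTRIES(1000000, 1597785600000, ?, ?, ?)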
Have your DBA review all of the procedures and the IBM documentation before implementing a manual cleanup on the database. Pay particular attention to the current settings in your environment; several of the procedures require parameters that must match your configuration. Sample procedures and scripts are located on my companion GitHub site: