This repository is currently read-only. We are migrating to [new location]; please continue working there on Monday 14/12. See:

    REPO-2007 [Backport 11.2] Reset invalid RUNNING locks to free when a cluster node starts · a8536671
    Ard Schrijvers authored
    If a cluster node (say 'node1') starts and finds locks in the lock
    table that belong to 'node1', it means the cluster node did not release
    all its locks during shutdown (graceful or ungraceful) and came back
    up before other cluster nodes had reset the locks to FREE, because the
    locks had not yet reached their expiration time (or there were no other
    cluster nodes).
    In normal situations the DbResetExpiredLocksJanitor takes care of freeing
    expired locks, for example locks that belonged to a shut-down cluster
    node. But in the scenario above, the DbResetExpiredLocksJanitor has not
    yet freed the locks, so
    org.onehippo.repository.lock.db.DbLockManager#createLock refuses to
    create the lock, while the DbLockRefresher nonetheless keeps refreshing
    the database lock; as a result, no thread in any cluster node can ever
    reclaim the lock.
    A possible solution would be for the DbLockRefresher to refresh only
    locks that are present in the
    org.onehippo.repository.lock.AbstractLockManager#localLocks object.
    However, that would make the refresh statement more complex, so instead,
    invalid live locks are now reset during startup.
    (cherry picked from commit 2fdd5458ae3439092b58b7febb380f69057fca7f)
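The startup reset described in the commit message can be sketched as follows. This is a minimal illustration, not the actual DbLockManager code: the `LockRow` class, the `resetInvalidLocks` method, and the in-memory lock table are hypothetical stand-ins for the real database table, which the actual fix would update with a single SQL statement.

```java
import java.util.List;

public class StartupLockReset {

    // Hypothetical in-memory stand-in for a row in the database lock table.
    static final class LockRow {
        final String lockKey;
        String status;      // "RUNNING" or "FREE"
        String lockOwner;   // cluster node id, e.g. "node1", or null when FREE

        LockRow(String lockKey, String status, String lockOwner) {
            this.lockKey = lockKey;
            this.status = status;
            this.lockOwner = lockOwner;
        }
    }

    /**
     * On startup of clusterNodeId, any lock still marked RUNNING for this
     * node is stale: the node has only just started, so it cannot actually
     * hold the lock. Reset such locks to FREE so they can be reclaimed,
     * instead of letting the DbLockRefresher keep them alive forever.
     * Returns the number of locks that were reset.
     */
    static int resetInvalidLocks(List<LockRow> lockTable, String clusterNodeId) {
        int reset = 0;
        for (LockRow row : lockTable) {
            if ("RUNNING".equals(row.status) && clusterNodeId.equals(row.lockOwner)) {
                row.status = "FREE";
                row.lockOwner = null;
                reset++;
            }
        }
        return reset;
    }

    public static void main(String[] args) {
        List<LockRow> table = List.of(
                new LockRow("migration", "RUNNING", "node1"), // stale: node1 is starting
                new LockRow("cleanup", "RUNNING", "node2"),   // held by another node
                new LockRow("indexing", "FREE", null));
        int n = resetInvalidLocks(table, "node1");
        System.out.println(n + " stale lock(s) reset for node1");
    }
}
```

In the real fix the same condition (status RUNNING and lockOwner equal to the starting node's id) would be expressed as a single `UPDATE ... SET status = 'FREE' WHERE ...` against the lock table, which is simpler than teaching the refresh statement about localLocks.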