- 16 Aug, 2018 1 commit
  - Erdem Karakus authored (cherry picked from commit 283c2eec)
- 26 Jun, 2018 1 commit
- 11 Jun, 2018 2 commits
  - Bert Leunis authored
  - Bert Leunis authored
- 29 May, 2018 2 commits
  - Ate Douma authored
  - Peter Centgraf authored
- 28 May, 2018 1 commit
  - Ate Douma authored
- 09 May, 2018 1 commit
  - Peter Centgraf authored (cherry picked from commit e06e2a41)
- 07 May, 2018 2 commits
  - Peter Centgraf authored
  - Ard Schrijvers authored
    If a cluster node (say 'node1') starts and finds locks in the lock table for 'node1', it means the node did not release all of its locks during shutdown (graceful or ungraceful) and came back up before other cluster nodes had reset the locks to FREE, because the locks had not yet reached their expiration time (or there were no other cluster nodes). In normal situations the DbResetExpiredLocksJanitor takes care of freeing expired locks, for example locks that belonged to a shut-down cluster node. In the scenario above, however, the DbResetExpiredLocksJanitor had not yet freed the locks, so org.onehippo.repository.lock.db.DbLockManager#createLock does not allow the lock to be created, BUT the DbLockRefresher starts refreshing the database lock nonetheless; hence no thread in any cluster node can reclaim the lock any more. A solution could be to let the DbLockRefresher refresh only locks that are present in the org.onehippo.repository.lock.AbstractLockManager#localLocks object. However, that would make the refresh statement more complex, so instead, invalid live locks are now reset during startup. (cherry picked from commit 2fdd5458ae3439092b58b7febb380f69057fca7f)
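The self-sustaining stale lock described above can be sketched as a small self-contained model. This is an illustration of the scenario, not the actual Hippo implementation; all class, field, and method names here are invented, and the lock rows stand in for rows in the database lock table:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the stale-lock scenario (not Hippo code).
// A "row" records which node owns a lock and when it expires.
class LockTableModel {
    static final class Row {
        String owner; long expires;
        Row(String owner, long expires) { this.owner = owner; this.expires = expires; }
    }
    final Map<String, Row> rows = new HashMap<>();
    long now = 0;

    boolean createLock(String key, String node) {
        Row r = rows.get(key);
        if (r != null && r.expires > now) return false;   // live row exists: refuse
        rows.put(key, new Row(node, now + 60));
        return true;
    }
    void refresh(String key) {                            // DbLockRefresher role:
        Row r = rows.get(key);                            // keeps pushing the expiration
        if (r != null) r.expires = now + 60;              // forward, so the row never expires
    }
    void janitor() {                                      // DbResetExpiredLocksJanitor role:
        rows.values().removeIf(r -> r.expires <= now);    // only frees *expired* rows
    }
    void startupReset(String node) {                      // the fix: on startup, free any
        rows.values().removeIf(r -> node.equals(r.owner)); // rows still owned by this node
    }
}
```

Without `startupReset`, a restarted node is refused its own stale row by `createLock` while the refresher keeps the row alive, so the janitor never frees it.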
- 24 Apr, 2018 2 commits
  - Peter Centgraf authored
  - Peter Centgraf authored
- 20 Apr, 2018 1 commit
  - Peter Centgraf authored
- 11 Apr, 2018 1 commit
  - Sergey Shepelevich authored
- 03 Apr, 2018 1 commit
- 02 Apr, 2018 1 commit
  - Ate Douma authored
    This introduces the new hippo-repository-tika module, which now takes care of managing all the tika-parsers related dependencies (and exclusions), and provides a new TikaFactory for loading the hippo-repository specific tika-config.xml and creating new Tika instances using the corresponding TikaConfig. The tika-core/tika-parsers dependency management previously configured in the hippo-cms7-project parent can and should no longer be used, and will therefore be removed. (cherry picked from commit 56ff16ba)
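As a sketch of what this change could look like for a downstream module: instead of declaring tika-core/tika-parsers (and their exclusions) itself, a module would depend on the new module only. The groupId and version property below are assumptions, not taken from the actual poms:

```xml
<!-- Hypothetical downstream pom fragment (coordinates are assumptions): -->
<dependency>
  <groupId>org.onehippo.cms7</groupId>
  <artifactId>hippo-repository-tika</artifactId>
  <version>${hippo.repository.version}</version>
</dependency>
```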
- 29 Mar, 2018 3 commits
  - Arent-Jan Banck authored
  - Arent-Jan Banck authored
    REPO-1 Remove the unused tomcat-embed-logging-log4j dependency. The dependency no longer exists for Tomcat versions above 8.5.2.
  - Arent-Jan Banck authored
- 13 Mar, 2018 1 commit
  - Arent-Jan Banck authored
    REPO-1963 Update commons-beanutils and json-lib versions and manage the versions properly through dependency management.
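Managing versions "through dependency management" typically means pinning them once in the parent pom so child modules inherit them. A hypothetical fragment of that pattern, with property names invented for illustration (the actual versions and properties are not in this log):

```xml
<!-- Hypothetical parent-pom fragment; property names are assumptions. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
      <version>${commons-beanutils.version}</version>
    </dependency>
    <dependency>
      <groupId>net.sf.json-lib</groupId>
      <artifactId>json-lib</artifactId>
      <version>${json-lib.version}</version>
      <classifier>jdk15</classifier>
    </dependency>
  </dependencies>
</dependencyManagement>
```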
- 07 Mar, 2018 3 commits
  - Jeroen Hoffman authored (cherry picked from commit f69fe4e6)
  - Jeroen Hoffman authored
    REPO-1960 [Back port to 11.2] Remove the parent-check logic: a lot of code for a situation that is very unlikely (2 or 3 found variants with different parents). (cherry picked from commit 0c15259e)
  - Jasper Floor authored (cherry picked from commit 558e52a7)
- 27 Feb, 2018 1 commit
- 12 Feb, 2018 2 commits
  - Arent-Jan Banck authored
  - Arent-Jan Banck authored
- 11 Feb, 2018 3 commits
  - Ard Schrijvers authored
    Make sure that *if* the Hippo lock table exists, it really contains an index on lockKey (regardless of whether it is a primary key or a unique index). If it does not contain one, the lock table was not created correctly and contains locks that do not work; hence truncating the lock table is OK. After that, correct the table schema. Right after the table schema has been fixed, other running cluster nodes can log some errors with respect to locks. This is expected and not a problem. (cherry picked from commit 90938409)
  - Ard Schrijvers authored
    (note this was not a real bug at this moment because uniqueIndexes always contained just 1 value) (cherry picked from commit bff17130)
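The decision in the first commit above (accept either a primary key or a unique index on lockKey, otherwise treat the table as broken) can be sketched as a small pure function. This is illustrative only, with invented names; in the real repository the set of uniquely indexed columns would come from JDBC metadata such as DatabaseMetaData#getIndexInfo:

```java
import java.util.Set;

// Illustrative decision logic only (not the actual Hippo code): given the set
// of columns covered by a unique index or primary key on the lock table,
// decide whether the schema is usable or the table must be truncated and fixed.
final class LockTableCheck {
    static boolean hasUsableLockIndex(Set<String> uniquelyIndexedColumns) {
        // a primary key and a unique index on lockKey both qualify; database
        // metadata may report column names in any case, so compare ignoring case
        return uniquelyIndexedColumns.stream()
                .anyMatch(c -> c.equalsIgnoreCase("lockKey"));
    }
}
```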
- 25 Jan, 2018 2 commits
  - Arent-Jan Banck authored
  - Arent-Jan Banck authored
- 23 Jan, 2018 1 commit
  - Arent-Jan Banck authored
- 16 Jan, 2018 2 commits
  - Jeroen Hoffman authored
    REPO-1927 [Back port to 11.2] SecurityManager does not sanitize the userId in case of external providers when getting memberships: sanitize the user id.
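As a minimal illustration of the kind of sanitization meant here: when a user-supplied id ends up inside a query string literal, one common fix is to escape the quote character so the id cannot terminate the literal. The class, method, and escaping rule below are assumptions for illustration, not the actual SecurityManager code:

```java
// Illustrative only: escape single quotes in a user-supplied id before it is
// embedded in a query literal, so the id cannot break out of the literal.
final class UserIdSanitizer {
    static String sanitize(String userId) {
        // double any single quote, the standard escaping inside quoted literals
        return userId.replace("'", "''");
    }
}
```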
- 03 Jan, 2018 3 commits
  - Ard Schrijvers authored (cherry picked from commit f5f06d2c)
  - Ard Schrijvers authored (cherry picked from commit 77fca1aa)
  - Ard Schrijvers authored
    Since a lock obtained via the LockManager does not involve any JCR locking, it *never* triggers a cluster sync. This means we can obtain the lock for a node whose job has just been processed and finished by another cluster node, which has also already freed the lock again. Invoking a session refresh right after obtaining the lock makes sure the process sees the up-to-date state of the job. (cherry picked from commit 075c95c1)
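The race above can be sketched with a self-contained model: cluster-wide state only becomes visible to a node's session after a refresh, so a node that acquires the lock without refreshing still sees its stale view and would re-run an already finished job. All names here are invented; this models the behavior described in the commit, not JCR or Hippo code:

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained illustration of the race (not JCR/Hippo code): the shared
// repository state is only copied into a node's local view on refresh().
final class ClusterSim {
    final Map<String, String> clusterState = new HashMap<>(); // shared repository state

    final class NodeSession {
        final Map<String, String> local = new HashMap<>();    // possibly stale view

        void refresh() {                                      // models a JCR session refresh
            local.clear();
            local.putAll(clusterState);
        }
        boolean jobDone(String job) {
            return "done".equals(local.get(job));
        }
    }
}
```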
- 04 Dec, 2017 2 commits
  - Arent-Jan Banck authored
  - Arent-Jan Banck authored
- 23 Nov, 2017 1 commit
  - Ate Douma authored