java - ConcurrentHashMap memory management or alternative?


The memory used by a ConcurrentHashMap whose values are Java-WebSocket WebSocketClients keeps growing unceasingly.

I've isolated the section of code that tries to reconnect to a whitelist of peers every second. I've tested to see if entries are being removed after a connection failure, and they are. I've looked at everything I can find on "ConcurrentHashMap memory leak".

I have no idea how to implement this possible solution. Would that solution solve the problem? If so, please provide example code.

I tried to implement this suggestion:

ConcurrentHashMap<String, MyClass> m = new ConcurrentHashMap<String, MyClass>(8, 0.9f, 1);

and I think it slowed the growth rate somewhat, but I have no idea how to tweak it. Is this the correct approach? If so, please provide example code.
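For reference, the three constructor arguments are the initial capacity, the load factor, and the concurrency level. This is a minimal sketch of what each one controls (the class and key names here are invented for illustration, not taken from the asker's code):

```java
import java.util.concurrent.ConcurrentHashMap;

public class TunedMap {
    public static void main(String[] args) {
        // initialCapacity = 8:   start with a smaller table than the default 16,
        //                        so a mostly-empty map wastes less memory.
        // loadFactor = 0.9f:     resize only when the table is ~90% full,
        //                        trading some lookup speed for a denser table.
        // concurrencyLevel = 1:  estimated number of concurrently updating
        //                        threads; since Java 8 this is only a sizing hint.
        ConcurrentHashMap<String, String> peers =
                new ConcurrentHashMap<>(8, 0.9f, 1);

        peers.put("peer-1", "connecting");
        peers.remove("peer-1");

        // The entry count drops back to zero, but note that the backing
        // table is never shrunk once it has grown.
        System.out.println(peers.size());
    }
}
```

Note that these parameters only affect the size of the internal table, not whether removed values become collectable, so tuning them can at best slow the growth of the table itself.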

I tried switching to a Hashtable as recommended here, but I got ConcurrentModificationExceptions immediately, so I think that's out.

How can the memory of a ConcurrentHashMap implementation be managed when there is rapid insertion and removal of thousands of entries per second? If it can't, is there an alternative?
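As a point of comparison, here is a minimal sketch of the reconnect pattern under rapid insertion and removal, assuming a hypothetical tryConnect() that returns false on failure (the names and the stub class are invented; this is not the asker's actual code). The important detail is that a failed entry is removed from the same map it was inserted into:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReconnectLoop {
    // Stand-in for a WebSocketClient; only close() matters for this sketch.
    static class FakeClient {
        void close() { /* would close the socket */ }
    }

    static final Map<String, FakeClient> active = new ConcurrentHashMap<>();

    // Hypothetical connection attempt; always fails in this sketch.
    static boolean tryConnect(FakeClient c) { return false; }

    public static void main(String[] args) {
        List<String> whitelist = List.of("peer-a", "peer-b");

        // One pass of the "reconnect every second" loop:
        for (String peer : whitelist) {
            FakeClient client = new FakeClient();
            active.put(peer, client);
            if (!tryConnect(client)) {
                // Remove BEFORE close(), so the map never retains a dead client.
                active.remove(peer);
                client.close();
            }
        }
        // If every failed client is removed, the map holds no dead entries.
        System.out.println(active.size());
    }
}
```

As long as every failed client is removed like this, the map holds only live connections and the client objects become eligible for GC once nothing else references them.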


Pass a reference instead of a copy

I added a HashMap that stores the new WebSocket, and the connection can be re-attempted if it's not present there or in the ConcurrentHashMap.

This improved memory management, but it still leaks a little: approximately 0.1 MB per ~5 seconds at 5 attempts per second.

Is the problem my code, the WebSocket's "destructor", or the ConcurrentHashMap's management of removed values?


Dead objects?

The growth rate has again been reduced, because I now remove() from the ConcurrentHashMap before calling close() on the WebSocketClient and remove()ing it from the HashMap.
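The teardown order described above can be sketched like this, with a stub standing in for WebSocketClient (the class and method names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Teardown {
    // Stand-in for a WebSocketClient.
    static class Client {
        void close() { /* would close the socket */ }
    }

    static final Map<String, Client> active  = new ConcurrentHashMap<>();
    static final Map<String, Client> pending = new HashMap<>();

    // The order described above: drop the ConcurrentHashMap entry first,
    // then close the client, then drop the HashMap entry.
    static void tearDown(String peer) {
        Client c = active.remove(peer);
        if (c != null) {
            c.close();
        }
        pending.remove(peer);
    }

    public static void main(String[] args) {
        Client c = new Client();
        active.put("peer-a", c);
        pending.put("peer-a", c);
        tearDown("peer-a");
        // Both maps end up empty, so neither keeps the client reachable.
        System.out.println(active.size() + " " + pending.size());
    }
}
```

Once both remove() calls have run, neither map keeps a strong reference to the client, which is the precondition for it being garbage collected.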

How and why would a WebSocketClient still linger after being closed and removed from both maps?


System.gc();

The leak rate has been reduced again!

I call it after remove().

It is highly unlikely that the map itself is leaking. Whatever is happening, if it's related to the map at all, is that you are failing to remove data from the map when you're done with it. An object can't be GC'd as long as the map is pointing to it.
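One way to check this claim directly is to watch the object through a WeakReference: while a map holds a strong reference to it, the weak reference can never be cleared. This is a diagnostic sketch with a stub class (not the real WebSocketClient); note that System.gc() is only a hint, so the second check usually, but not provably, reports true:

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LeakCheck {
    static class Client { }  // stand-in for WebSocketClient

    public static void main(String[] args) throws InterruptedException {
        Map<String, Client> map = new ConcurrentHashMap<>();
        Client c = new Client();
        map.put("peer", c);

        WeakReference<Client> probe = new WeakReference<>(c);
        c = null;                      // drop the local strong reference

        System.gc();
        Thread.sleep(100);
        // Guaranteed true: the map still holds a strong reference.
        System.out.println("in map: " + (probe.get() != null));

        map.remove("peer");            // now nothing strong points at it
        System.gc();
        Thread.sleep(100);
        // Typically true once the entry is removed, though gc() is only a hint.
        System.out.println("after remove: " + (probe.get() == null));
    }
}
```

If the weak reference stays non-null even after removal from every map, something else (a listener, a thread, a queue) is still holding the client, and a heap dump is the next step.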

