java - Super Fast File Storage Engine


I have one gigantic table (about 1.000.000.000.000 records) in a database, with these fields:

id, block_id, record

id is unique; block_id is not unique: up to 10k records share the same block_id, each with a different record.

To simplify the code that deals with the DB, I have an API similar to this:

    Engine e = new Engine(...);

    // add must be thread-safe, with fine-grained locking (per block_id) to improve concurrency.
    // A record is 1 kilobyte max. add concatenates the record onto those already added for the
    // same block_id; the concatenation won't need to grow bigger than 10 MB (worst case),
    // <5 MB on average.
    e.add(blockId, "asdf");

    String s = e.getConcatenatedRecords(blockId);
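As a point of reference, here is a minimal in-memory sketch of what such an engine could look like, assuming a per-block StringBuilder guarded by its own monitor so that adds to different blocks never contend (the class and field names are mine, not from the question):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory sketch of the Engine API from the question.
// Fine-grained locking: each block_id gets its own StringBuilder, and we
// synchronize on that builder rather than on the whole map.
class Engine {
    private final ConcurrentHashMap<Long, StringBuilder> blocks = new ConcurrentHashMap<>();

    // Thread-safe append; locks only the builder for this block_id.
    public void add(long blockId, String record) {
        StringBuilder sb = blocks.computeIfAbsent(blockId, k -> new StringBuilder());
        synchronized (sb) {
            sb.append(record).append('\n');
        }
    }

    // Returns everything added for this block_id, concatenated.
    public String getConcatenatedRecords(long blockId) {
        StringBuilder sb = blocks.get(blockId);
        if (sb == null) return "";
        synchronized (sb) {
            return sb.toString();
        }
    }
}
```

This obviously doesn't persist anything; it only illustrates the per-block_id locking granularity the comment above asks for.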

If I map each block to a file (haven't done this yet), with each record as a line in the file, I'd still be able to use the same API.
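That mapping could be sketched as follows, with one file per block_id and each add appending a line (FileEngine and the directory layout are assumptions, not something from the question; concurrent writers would still need the per-block locking mentioned above):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

// Hypothetical flat-file variant: one file per block_id, one record per line.
class FileEngine {
    private final Path dir;

    FileEngine(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir);
    }

    private Path fileFor(long blockId) {
        return dir.resolve(Long.toString(blockId));
    }

    // Append one record as a line to the block's file, creating it on first use.
    public void add(long blockId, String record) throws IOException {
        Files.write(fileFor(blockId),
                (record + "\n").getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // One sequential read of the whole file yields the concatenation directly.
    public String getConcatenatedRecords(long blockId) throws IOException {
        Path f = fileFor(blockId);
        if (!Files.exists(f)) return "";
        return new String(Files.readAllBytes(f), StandardCharsets.UTF_8);
    }
}
```

The appeal of this layout is that getConcatenatedRecords becomes a single sequential read, with no row assembly or sorting.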

But I want to know whether I'd get a performance gain using flat files compared to a tuned PostgreSQL database, at least for this specific scenario.

My biggest requirement is that the getConcatenatedRecords method returns stupidly fast (the add operation doesn't matter as much). I'm also considering caching and memory mapping, but I don't want to complicate things before asking: is there a ready-made solution for this kind of scenario?
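For what it's worth, the memory-mapping idea on the read path is a few lines with NIO. A sketch, assuming all records for a block live in one flat file (the file layout and class name are mine):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Maps a block's file read-only and decodes it, so repeated reads of a hot
// block are served from the OS page cache rather than fresh disk I/O.
class MappedReader {
    public static String readBlock(Path blockFile) throws IOException {
        try (FileChannel ch = FileChannel.open(blockFile, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            return StandardCharsets.UTF_8.decode(buf).toString();
        }
    }
}
```

At <5 MB average per block this stays well under the 2 GB per-mapping limit of FileChannel.map.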

It sounds like you already have this running in Postgres; can you post the schema you're using? It's possible to beat a well-tuned database in specific scenarios, but it turns out to be vastly more work than you imagine going in (especially if you're synchronizing writes).

Are you using a clustered index? What storage settings are on the table?

And how large can the table get before queries become slow?

