remove is just a get followed by a delete; this way I need not worry
about reading bytes. Then I can add a delete which is capable of deleting a
range of data without reading it (could be useful).
To make it so deletion is simpler and does not require position updates for each chunk, the deletion can be performed from the higher position to the lower one.
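The high-to-low deletion order above can be sketched as follows (a minimal sketch with hypothetical names, not the actual implementation): by removing from the highest index down, the positions of the elements that remain below the deleted range never shift, so no position updates are needed.

```java
import java.util.ArrayList;
import java.util.List;

public class RangeDelete {
    // Deletes [from, to) from the list, iterating from the high index
    // downward so that the indices of untouched lower elements stay valid.
    static void deleteRange(List<Byte> list, int from, int to) {
        for (int i = to - 1; i >= from; i--)
            list.remove(i);
    }

    public static void main(String[] args) {
        List<Byte> data = new ArrayList<>();
        for (byte b = 0; b < 8; b++)
            data.add(b);
        deleteRange(data, 2, 5);
        System.out.println(data);  // [0, 1, 5, 6, 7]
    }
}
```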
Add should be as simple as remove and get.
I suppose something simple would be writing to the tail until there is no space left. If there is a write in the middle of a chunk, then the chunk will be split and writing will continue into another chunk.
Some bytes could very well get lost when chunks are split and then removed when empty. One could split all of the chunks so that each uses only a single byte; when those chunks are removed, they all go away and their space is simply wasted. To remedy this, all I need is a list of which chunks are associated with a given byte array; when a chunk is removed, it just goes away. Alternatively, I could never actually remove chunks so that their memory is never freed, but that would just waste memory.
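The mid-chunk split described above might look like this (a sketch only, with hypothetical names): an insert in the middle splits the containing chunk into two halves and places the new bytes between them, so no existing chunk data has to move.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkSplit {
    // Splits the chunk at chunkIndex at the given offset and inserts the
    // new data between the two halves.
    static void insert(List<byte[]> chunks, int chunkIndex, int offset,
        byte[] data) {
        byte[] old = chunks.get(chunkIndex);
        byte[] head = Arrays.copyOfRange(old, 0, offset);
        byte[] tail = Arrays.copyOfRange(old, offset, old.length);
        chunks.set(chunkIndex, head);
        chunks.add(chunkIndex + 1, data);
        chunks.add(chunkIndex + 2, tail);
    }

    public static void main(String[] args) {
        List<byte[]> chunks = new ArrayList<>();
        chunks.add(new byte[]{1, 2, 3, 4});
        insert(chunks, 0, 2, new byte[]{9, 9});
        for (byte[] c : chunks)
            System.out.println(Arrays.toString(c));
        // [1, 2] then [9, 9] then [3, 4]
    }
}
```

Note that repeated splits like this are exactly what produces the tiny one-byte chunks mentioned above, which is why the chunk bookkeeping matters.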
Set would be very similar to get, it just changes pre-existing bytes.
The dynamic byte buffer is rather complex again. It works, but all data will just end up getting fragmented. My previous partition idea might be the best solution, although I could have a pool of bytes that partition data is grabbed from. I just need to keep the allocated partitions sorted in a tree, and there can be a master buffer which is dynamic in size.
The data partitions have links to each other both physically and logically. If a byte is added to a partition and there is physical space next to it, then that space is consumed. Keeping two lists for this would be simple, though each partition would have to keep its index in both sorted lists up to date. I can use a BST to find logical partitions and their data. The backing data would essentially be a region which knows which partitions are inside of it, and partitions must never go outside of a region. The only catch is that logical partitions must never cross region bounds, otherwise things will get harder to implement. This means that the DBB class gets a list of logical partitions; then there is a list of regions, where sort order does not really matter; and then the physical partitions are listed in each region.
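A rough sketch of this bookkeeping might look like the following (all names are hypothetical, and a TreeMap stands in for the BST): logical partitions are kept in a tree keyed by logical address, while each region tracks the physical partitions that live inside it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PartitionLayout {
    static class Partition {
        final int logicalStart, physicalStart, length;
        Partition(int ls, int ps, int len) {
            this.logicalStart = ls;
            this.physicalStart = ps;
            this.length = len;
        }
    }

    static class Region {
        // Physical partitions inside this region, in physical order;
        // partitions never extend outside their region.
        final List<Partition> physical = new ArrayList<>();
    }

    // BST keyed by logical start: floorEntry() finds the candidate
    // partition containing a given logical address.
    final TreeMap<Integer, Partition> logical = new TreeMap<>();
    final List<Region> regions = new ArrayList<>();

    Partition find(int logicalAddr) {
        Map.Entry<Integer, Partition> e = logical.floorEntry(logicalAddr);
        if (e == null)
            return null;
        Partition p = e.getValue();
        return logicalAddr < p.logicalStart + p.length ? p : null;
    }

    public static void main(String[] args) {
        PartitionLayout l = new PartitionLayout();
        Region r = new Region();
        l.regions.add(r);
        Partition p = new Partition(0, 128, 64);
        l.logical.put(0, p);
        r.physical.add(p);
        System.out.println(l.find(10) == p);     // true
        System.out.println(l.find(100) == null); // true
    }
}
```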
It is also quite possible for regions to be defragmented. Note that defragmenting the entire buffer at once would have a higher cost, but if individual regions are defragmented first, then defragmenting across regions can be done with an array holding each region's current partition and a search for the lowest value among all the regions.
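The cross-region pass described above amounts to a k-way merge. A minimal sketch, under the assumption that each region has already been defragmented so its partitions are sorted (here each region is simply a sorted array of logical starts): a cursor array tracks the current partition per region, and each step picks the lowest value among all cursors.

```java
import java.util.Arrays;

public class RegionMerge {
    // k-way merge: repeatedly take the smallest current value across all
    // regions, advancing only that region's cursor.
    static int[] merge(int[][] regions) {
        int total = 0;
        for (int[] r : regions)
            total += r.length;
        int[] out = new int[total];
        int[] cursor = new int[regions.length];
        for (int o = 0; o < total; o++) {
            int best = -1;
            for (int i = 0; i < regions.length; i++)
                if (cursor[i] < regions[i].length
                    && (best < 0
                        || regions[i][cursor[i]] < regions[best][cursor[best]]))
                    best = i;
            out[o] = regions[best][cursor[best]++];
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] regions = {{0, 40}, {10, 30}, {20}};
        System.out.println(Arrays.toString(merge(regions)));
        // [0, 10, 20, 30, 40]
    }
}
```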
Thinking about it, the dynamic byte buffer only really works when the chunk
size is rather large and the data is significantly and randomly adjusted all
the time. The general usage right now really just writes at the start and the
end of the data. When it comes to lower-memory systems, the dynamic byte
buffer will likely end up using more memory than a plain buffer. So in
general, it is probably not worth it. The dynamic history input stream uses
it as a queue, and so do the other users; these cases are all deques. So the
SlidingByteWindow should instead use ByteDeque, and the DHIS, being a queue,
can use one as well.
Seems I have a NullPointerException in the build system.
Now this is really strange.
Turns out I had a dependency named incorrectly.
ByteDeque just needs a way to peek at data somewhere else
in the queue.
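Such a peek could be sketched like this (hypothetical names and layout, assuming a list of fixed-size blocks): the absolute position is the head offset plus the requested offset; dividing by the block size selects the block, and the remainder gives the position inside it.

```java
import java.util.ArrayList;
import java.util.List;

public class DequePeek {
    static final int BLOCK_SIZE = 4;

    // Reads a byte at the given offset from the head without removing it.
    static byte peek(List<byte[]> blocks, int head, int offset) {
        int abs = head + offset;
        return blocks.get(abs / BLOCK_SIZE)[abs % BLOCK_SIZE];
    }

    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        blocks.add(new byte[]{0, 0, 7, 8});    // head starts at index 2
        blocks.add(new byte[]{9, 10, 11, 12});
        System.out.println(peek(blocks, 2, 0)); // 7
        System.out.println(peek(blocks, 2, 3)); // 10
    }
}
```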
For efficiency the sliding buffer ...
I would suppose for the
ByteDeque I can keep it simple and have a list of
byte arrays for the most part. The list does not really need to be
array-backed; a LinkedList could work just the same (and not require
resizing). The only issue would be that non-ended reads would be slightly
slower; however, that does not really matter as much. As for the head and
tail positions, they can just be masked to the individual block size. So if
the list has a size of three blocks, then the first and last only get their
associated head/tail portion counted, and the blocks in the middle count as
whole blocks. In reality though, for the size I can just keep a note of the
current size so that calculation is not required at all. So in essence the
ByteDeque will be a bit faster and
better on memory, because
DynamicByteBuffer was insanely complex and too
featureful. If I need one in the future, I can always bring it back.
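The masking and size bookkeeping above might look like this (a sketch with hypothetical fields): with a power-of-two block size the head and tail can be masked instead of using modulo, and keeping a running size avoids recomputing it from the partial first/last blocks plus the whole blocks in between.

```java
public class DequeSize {
    static final int BLOCK_SIZE = 8;          // must be a power of two
    static final int BLOCK_MASK = BLOCK_SIZE - 1;

    // Example state: three blocks, head 3 bytes into the first block,
    // tail 5 bytes into the last block.
    int head = 3, tail = 5, blocks = 3, size;

    // Derived size: the tail portion of the first block, all whole middle
    // blocks, and the head portion of the last block.
    int computedSize() {
        return (BLOCK_SIZE - (head & BLOCK_MASK))
            + (blocks - 2) * BLOCK_SIZE
            + (tail & BLOCK_MASK);
    }

    public static void main(String[] args) {
        DequeSize d = new DequeSize();
        d.size = d.computedSize();   // kept up to date directly instead
        System.out.println(d.size);  // 18
    }
}
```

Tracking size as a plain counter, updated on every add and remove, makes length queries O(1) with no masking arithmetic at all.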