2016/08/04

08:27

Looks like the tail is not being set properly after an add operation or a remove. The tail points to valid data rather than 00.

09:57

Ok so with a block size of 8:

[H6e  54  65  73 T74  69  6e  60][H00  6e  6e  6e T55  00  00  00]

And a block size of 16:

[H00  54  65  73  74  69  6e  67  54  54  65  73  74  69 T6e  00]

I should probably avoid writing code when half asleep with a headache. From the looks of it, the byte at the trailing block edge gets erased to zero, perhaps on every read. Since I clear the data, that is likely what causes the lost byte. Also, the sequence 6e 6e 6e is kind of odd. Just in case, a block size of 32 results in the same data as the one for 16. Notice how 6e is at the end of the other block.
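
To make that suspicion concrete, here is a minimal sketch of the clear-on-remove behaviour, with placeholder names rather than the actual ByteDeque code: if the head bookkeeping is off by one at a block edge, the clear lands on a byte that is still live.

    // Hypothetical sketch, not the real implementation: removal zeroes the
    // slot it vacates, so an off-by-one head at the block boundary would
    // wipe a byte that is still in use.
    final class ClearOnRemoveSketch {
        static final int BLOCK_SIZE = 8;
        private final byte[] data = new byte[BLOCK_SIZE];
        private int head, count;

        byte removeFirst() {
            byte rv = data[head];
            data[head] = 0;                   // clear the vacated slot
            head = (head + 1) % BLOCK_SIZE;   // advance, wrapping at the edge
            count--;
            return rv;
        }
    }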

10:04

Had some variable confusion while checking the removal code. There is a case where the first block is full and is the only block (there are exactly block-size bytes in it): the tail would then be zero. So if the head were also zero, or greater than the tail, the count could come out as zero or negative.
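
A sketch of that edge case, assuming the tail is an exclusive offset that wraps to zero when a block is exactly full (placeholder names, not the actual removal code):

    // Hypothetical sketch: with exactly BLOCK_SIZE bytes in a single block,
    // the exclusive tail wraps back to 0 while the head is also 0, so a
    // naive (tail - head) count comes out as zero, or negative when the
    // head sits past the tail, instead of BLOCK_SIZE.
    static final int BLOCK_SIZE = 8;

    static int bytesInBlock(int head, int tail, boolean hasData) {
        if (hasData && tail <= head)
            return BLOCK_SIZE - head + tail;   // wrapped or exactly-full block
        return tail - head;                    // plain, non-wrapped case
    }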

10:21

So now the first add is this (previously shown before the last one):

[H0b  49  2d T2e  c9  cc  4b  0f][H81  50  00 T00  00  00  00  00]

And with a block size of 16:

[H0b  49  2d  2e  c9  cc  4b  0f  81  50  00 T00  00  00  00  00]

So now the data is consistent.

10:32

And now removeFirst should be fixed, because the inflate algorithm works for a simple sequence; I must check the others. Test D fails with an IndexOutOfBoundsException.

12:53

Seems the sliding window get is off by 8 or so, while in the inflate test it is only off by 1. So the array-like get in ByteDeque is likely incorrect.

14:20

So the sliding byte window tests pass, but now the decompression code just returns a single byte.

14:46

Checking revision f429555e874c7819c0fd8370f72bffd3df3ffb43; Fossil's fuse feature is quite nice. That revision fails. Now checking 2d060afcff8466b373b4c246e7e9d723afee7894, and that revision works. The diff is quite simple: the code was replaced by two method calls, a get followed by a delete. Otherwise the code is very much the same for the most part.

14:53

However, test D for that revision fails.

14:58

A block size of 16 also fails. However, a block size of 2 causes an exception, so I must investigate that.

17:16

Took a rest. Checked delete and that appears to be fine; the problem appears to be in get.

DEBUG -- GET 0 0 1 0b
DEBUG -- 98fd0c20 in=false T=11 h=0 t=3
DEBUG -- [H0b  49  2d T2e][Hc9  cc  4b T0f][H81  50  00 T00]
DEBUG -- GET 0 1 1 00
DEBUG -- 98fd0c20 in=false T=10 h=1 t=3
DEBUG -- [ 00 H49  2d T2e][ c9 Hcc  4b T0f][ 81 H50  00 T00]

In the second set, the value should be 49 and not 00. The "useless" addition that I removed before actually has a purpose. And now it works. Interesting that the code kept working for quite a while after it was removed. So essentially the incorrect code happened to produce the same results as the correct code, despite being wrong.
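
A minimal sketch of what the restored addition does, assuming a linked chain of fixed-size blocks with a head offset into the first one (placeholder names, not the real ByteDeque code):

    // Hypothetical sketch: the logical index must be offset by the head
    // position of the first block before being split into a block number
    // and an in-block offset; without that addition, a get after a
    // removeFirst reads one slot early and returns the cleared 00
    // instead of 49.
    final class BlockChainSketch {
        static final int BLOCK_SIZE = 4;

        static final class Block {
            final byte[] data = new byte[BLOCK_SIZE];
            Block next;
        }

        Block first;
        int head;   // offset of the first live byte within the first block

        byte getByteAt(int index) {
            int p = head + index;        // the "useless" addition that matters
            Block b = first;
            while (p >= BLOCK_SIZE) {    // walk forward through the blocks
                b = b.next;
                p -= BLOCK_SIZE;
            }
            return b.data[p];
        }
    }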

17:21

Now I need to determine if removing DBB and using BD for everything was worth it. I need to see how fast the decompression algorithm runs; I do hope it is faster. The result is that it is pretty much the same speed as before. It still becomes just as slow once 32KiB is reached.

17:24

Something that might work better is getting from the end if the address is higher than half of the buffer.
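
A sketch of that idea, reusing the block chain from the previous sketch plus assumed prev links, a last pointer, a byte count, and an exclusive tail offset between 1 and the block size (all placeholder names): indexes in the back half walk backwards from the last block, so no lookup crosses more than half the blocks.

    // Hypothetical sketch: pick the traversal direction based on which end
    // the requested index is closer to.
    byte getByteAt(int index) {
        if (index <= size / 2) {
            int p = head + index;                      // front half: walk forward
            Block b = first;
            for (; p >= BLOCK_SIZE; p -= BLOCK_SIZE)
                b = b.next;
            return b.data[p];
        }
        int p = (size - index) + (BLOCK_SIZE - tail);  // back half: distance from block end
        Block b = last;
        for (; p > BLOCK_SIZE; p -= BLOCK_SIZE)
            b = b.prev;                                // walk backward
        return b.data[BLOCK_SIZE - p];
    }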

17:31

Something that might work better is having an extra list (an ArrayList) which is then used to access the data. Basically, when a block is added at the end, the ArrayList can just be given that block easily. However, when a block is removed from the start it requires recalculation. One issue with an ArrayList is moving values around in the list, although since it is not linked it would be easier to work with and use fewer allocations. So I really just need a kind of virtual ring buffer of sorts. However, for generic queue operations it can get complex.
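
A rough sketch of that block-index idea (hypothetical names, not a committed design): appending a block is a cheap add at the end of the ArrayList, while dropping the first block forces every remaining entry to shift down, which is the recalculation mentioned above.

    import java.util.ArrayList;

    // Hypothetical sketch: an ArrayList mirrors the blocks purely so that
    // an indexed get can jump straight to the right block.
    final class BlockIndexSketch {
        static final int BLOCK_SIZE = 4;
        private final ArrayList<byte[]> blocks = new ArrayList<>();
        private int head;   // offset of the first live byte within block 0

        void onBlockAddedAtEnd(byte[] block) {
            blocks.add(block);            // cheap append when the deque grows
        }

        void onBlockRemovedAtStart() {
            blocks.remove(0);             // every remaining entry shifts down
            head = 0;                     // the next block becomes the new front
        }

        byte getByteAt(int i) {
            int p = head + i;
            return blocks.get(p / BLOCK_SIZE)[p % BLOCK_SIZE];
        }
    }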