2016/03/13

00:17

I actually do not need to use the data descriptor because it is only really needed for streams, and it is placed after the compressed data anyway. This means I need a reference to the central directory record offset that a file belongs to in order to obtain its sizes.
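
As a rough sketch of what pulling the sizes from the central directory could look like, assuming a seekable archive and a known record offset (the class name, method name, and use of RandomAccessFile are illustrative assumptions, not the actual code):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    /** Reads entry sizes from a central directory record (illustrative only). */
    public class CentralSizes
    {
        /** Central directory file header signature. */
        private static final int CENTRAL_SIG = 0x02014B50;

        /**
         * Reads the compressed and uncompressed sizes of a single entry.
         *
         * @param raf The open archive.
         * @param cdOffset Offset of this entry's central directory record.
         * @return An array of {compressedSize, uncompressedSize}.
         * @throws IOException If the record is not valid.
         */
        public static long[] readSizes(RandomAccessFile raf, long cdOffset)
            throws IOException
        {
            raf.seek(cdOffset);
            if (readLittleInt(raf) != CENTRAL_SIG)
                throw new IOException("Not a central directory record.");

            // Skip version made by(2), version needed(2), flags(2), method(2),
            // mod time(2), mod date(2), and CRC-32(4): 16 bytes in total.
            raf.skipBytes(16);

            long compressed = readLittleInt(raf) & 0xFFFFFFFFL;
            long uncompressed = readLittleInt(raf) & 0xFFFFFFFFL;
            return new long[]{compressed, uncompressed};
        }

        /** Reads a little endian 32-bit value from the archive. */
        private static int readLittleInt(RandomAccessFile raf)
            throws IOException
        {
            int a = raf.read(), b = raf.read(), c = raf.read(), d = raf.read();
            return a | (b << 8) | (c << 16) | (d << 24);
        }
    }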

00:12

It is also really only used for streaming deflate compression, where writing and then seeking back may be impossible, so the checksum and the actual sizes of the data are not known up front. This makes perfect sense.
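
For reference, whether a data descriptor follows the compressed data is signalled by bit 3 of the general purpose flags in the local file header; a check along these lines could be used (the names here are just illustrative):

    /** Local file header flag helpers (illustrative only). */
    final class LocalFlags
    {
        /** Bit 3: CRC-32 and sizes in the local header are zero; the real
         * values are in a data descriptor after the compressed data. */
        static final int FLAG_DATA_DESCRIPTOR = 1 << 3;

        /** Returns true if a data descriptor follows the compressed data. */
        static boolean usesDataDescriptor(int generalPurposeFlags)
        {
            return (generalPurposeFlags & FLAG_DATA_DESCRIPTOR) != 0;
        }
    }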

00:25

For dynamic Huffman, I need another sample input that is a bit larger.

00:38

The later big data tests would be better stored as hex data in resources; that way they do not complicate the class code.
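
One way to do that is to keep each sample as hexadecimal text in a resource beside the test class and decode it when the test runs; a sketch of such a loader (the class name and resource naming are made up):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    /** Loads a test sample stored as hexadecimal text in a resource. */
    public class HexResource
    {
        /**
         * Decodes a hex resource into raw bytes, ignoring whitespace.
         *
         * @param name Resource name, e.g. "bigsample.hex" (made up name).
         * @return The decoded bytes.
         * @throws IOException If the resource is missing or malformed.
         */
        public static byte[] load(String name)
            throws IOException
        {
            InputStream in = HexResource.class.getResourceAsStream(name);
            if (in == null)
                throw new IOException("Missing resource: " + name);

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            int high = -1;
            for (int c; (c = in.read()) >= 0;)
            {
                // Skip whitespace between digits
                if (c <= ' ')
                    continue;

                int v = Character.digit((char)c, 16);
                if (v < 0)
                    throw new IOException("Not a hex digit: " + (char)c);

                // Two digits make one byte
                if (high < 0)
                    high = v;
                else
                {
                    out.write((high << 4) | v);
                    high = -1;
                }
            }
            in.close();
            return out.toByteArray();
        }
    }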

01:04

So what is next is reading the dynamic Huffman table, which will be interesting. I suppose my Huffman class will have to be clearable so that a single instance can be reused.
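
The clearable table could end up looking something like this skeleton (entirely illustrative; none of these names are from the actual code):

    /**
     * A Huffman table that can be cleared and reused between blocks so
     * a single instance handles every dynamic block in a stream.
     */
    public class ReusableHuffmanTable
    {
        /** Maximum code length used by deflate. */
        private static final int MAX_BITS = 15;

        /** Number of codes for each bit length. */
        private final int[] _blCount = new int[MAX_BITS + 1];

        /** Codes for each symbol, filled in by setup(). */
        private int[] _codes;

        /** Clears the table so it can be rebuilt for the next block. */
        public void clear()
        {
            for (int i = 0; i <= MAX_BITS; i++)
                this._blCount[i] = 0;
            this._codes = null;
        }

        /**
         * Builds the table from per-symbol code lengths.
         *
         * @param lengths Code length for each symbol, zero meaning unused.
         */
        public void setup(int[] lengths)
        {
            this.clear();
            // ... canonical code assignment would go here ...
        }
    }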

15:56

The dynamic Huffman table is the most complex part of this.

16:35

I believe I get the code lengths part. Basically, it is a count of how many codes exist for each given length. I suppose it is used to determine how many bits a specific value uses, or something similar.

16:50

The one thing, though, is figuring out how the Huffman table is generated from the code lengths, since that table is what gets used for further processing.
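
For what it is worth, RFC 1951 (section 3.2.2) generates the table canonically from the code lengths: count how many codes there are of each length, compute the numerically smallest code for each length, then hand out consecutive codes to the symbols in order. A sketch of that algorithm, assuming the lengths are already decoded:

    /** Canonical Huffman code assignment as in RFC 1951 section 3.2.2. */
    final class CanonicalCodes
    {
        /**
         * Assigns codes from per-symbol code lengths; a length of zero
         * means the symbol is unused.
         *
         * @param lengths Code length in bits for each symbol.
         * @return The code for each symbol (valid only where length > 0).
         */
        static int[] assignCodes(int[] lengths)
        {
            // Find the longest code length actually in use
            int maxBits = 0;
            for (int l : lengths)
                if (l > maxBits)
                    maxBits = l;

            // Step 1: count the number of codes for each code length
            int[] blCount = new int[maxBits + 1];
            for (int l : lengths)
                if (l > 0)
                    blCount[l]++;

            // Step 2: find the numerically smallest code for each length
            int[] nextCode = new int[maxBits + 1];
            int code = 0;
            for (int bits = 1; bits <= maxBits; bits++)
            {
                code = (code + blCount[bits - 1]) << 1;
                nextCode[bits] = code;
            }

            // Step 3: give consecutive codes to symbols of the same
            // length, in increasing symbol order
            int[] codes = new int[lengths.length];
            for (int sym = 0; sym < lengths.length; sym++)
                if (lengths[sym] > 0)
                    codes[sym] = nextCode[lengths[sym]]++;

            return codes;
        }
    }

With lengths {2, 1, 3, 3} for symbols A to D this yields the codes 10, 0, 110, and 111, which matches the worked example in the RFC.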

19:18

Going to take a break from deflate and work on class loading (which I have done before). A -0 flag in the build script will be handy for skipping compression, since I should not just be working on the decompression library, even though it is an important part of the project.

21:07

Looks like the earliest version of CLDC (J2ME) does not support floating point math. Also, the subroutine instructions do not appear to be supported in any version, so implementation and verification should be much easier.
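
Given that, the subroutine opcodes could simply be rejected while methods are decoded; a tiny sketch of such a check (the opcode values are from the JVM specification, the rest is illustrative):

    /** Rejects the subroutine instructions during bytecode decoding. */
    final class NoSubroutines
    {
        /** Opcode values from the JVM specification. */
        static final int OP_JSR = 0xA8;
        static final int OP_RET = 0xA9;
        static final int OP_JSR_W = 0xC9;

        /**
         * Throws if the given opcode is a subroutine instruction.
         *
         * @param opcode The opcode being decoded.
         */
        static void check(int opcode)
        {
            if (opcode == OP_JSR || opcode == OP_RET || opcode == OP_JSR_W)
                throw new ClassFormatError(
                    "Subroutine instructions are not supported.");
        }
    }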