I actually do not need to use the data descriptor, because it is only really needed during streaming and it is placed after the compressed data. It really only exists for streaming deflate compression, where seeking back to rewrite the header may be impossible, so the checksum and the actual sizes of the data are not known in advance. This makes perfect sense. It does mean that to obtain a file's size I need a reference to the central directory offset that the file belongs to.
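As a rough sketch of that lookup (the class and method names here are mine; the offsets come from the ZIP central directory record layout, where the CRC-32, compressed size, and uncompressed size sit 16, 20, and 24 bytes into the record):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public final class CentralDirSizes
{
	/** Reads a little-endian unsigned 32-bit value. */
	private static long readUInt(RandomAccessFile raf)
		throws IOException
	{
		long a = raf.read(), b = raf.read(), c = raf.read(), d = raf.read();
		return a | (b << 8) | (c << 16) | (d << 24);
	}

	/**
	 * Reads the CRC-32, compressed size, and uncompressed size from a
	 * file's central directory record, so the data descriptor that may
	 * trail the compressed data never needs to be read at all.
	 */
	public static long[] sizesOf(RandomAccessFile zip, long cdOffset)
		throws IOException
	{
		// CRC-32 sits 16 bytes into the record, the sizes follow it
		zip.seek(cdOffset + 16);
		return new long[]{readUInt(zip), readUInt(zip), readUInt(zip)};
	}
}
```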
For dynamic Huffman, I need another sample input which is a bit larger.
The later big data tests would be better stored as hex data in resources; that way they do not complicate the class code.
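Loading such a resource could look something like this (a sketch; the class name and the hex file format are assumptions on my part):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public final class HexResource
{
	/**
	 * Loads a resource of hex digits (whitespace ignored) and decodes
	 * it into raw bytes for use as test input.
	 */
	public static byte[] load(String name)
		throws IOException
	{
		InputStream in = HexResource.class.getResourceAsStream(name);
		if (in == null)
			throw new IOException("No such resource: " + name);
		ByteArrayOutputStream out = new ByteArrayOutputStream();
		int hi = -1;
		for (int c; (c = in.read()) >= 0;)
		{
			int v = Character.digit((char)c, 16);
			if (v < 0)
				continue;	// Not a hex digit, skip it
			if (hi < 0)
				hi = v;		// First nibble of a pair
			else
			{
				out.write((hi << 4) | v);
				hi = -1;
			}
		}
		in.close();
		return out.toByteArray();
	}
}
```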
So what is next is reading the dynamic Huffman table, which will be interesting. I suppose my Huffman class will have to be clearable so that a single instance can be reused and such.
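A clearable table could be as simple as parallel arrays with a count that gets reset, something like this sketch (all names are mine; 288 is the size of DEFLATE's literal/length alphabet):

```java
/**
 * Sketch of a reusable Huffman table: clearing just resets the count,
 * so one instance can be rebuilt for every dynamic block.
 */
public final class HuffmanTable
{
	private final int[] _symbols = new int[288];
	private final int[] _codes = new int[288];
	private final int[] _lengths = new int[288];
	private int _count;

	/** Forgets all codes so the instance can be reused. */
	public void clear()
	{
		this._count = 0;
	}

	/** Adds a symbol with its canonical code and bit length. */
	public void add(int symbol, int code, int length)
	{
		int i = this._count++;
		this._symbols[i] = symbol;
		this._codes[i] = code;
		this._lengths[i] = length;
	}

	/** Returns the symbol for a code of a given length, or -1. */
	public int find(int code, int length)
	{
		for (int i = 0, n = this._count; i < n; i++)
			if (this._lengths[i] == length && this._codes[i] == code)
				return this._symbols[i];
		return -1;
	}
}
```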
The dynamic Huffman table is the most complex part of this.
I believe I get the code lengths part. Basically, it is a count of how many codes exist for each bit length; I suppose it determines how many bits a specific value uses, or similar. The one thing left is figuring out how the Huffman table is generated from those code lengths, since the resulting codes are used for further processing.
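For reference, RFC 1951 section 3.2.2 spells out the construction: count the codes at each bit length, derive the smallest code for each length, then hand out consecutive codes to symbols of the same length in symbol order. A direct translation:

```java
/**
 * Builds canonical Huffman codes from per-symbol code lengths using
 * the algorithm in RFC 1951 section 3.2.2. A length of zero means
 * the symbol is unused and gets no code.
 */
public final class CanonicalCodes
{
	public static int[] codesFromLengths(int[] lengths, int maxBits)
	{
		// Count how many codes exist for each bit length
		int[] blCount = new int[maxBits + 1];
		for (int len : lengths)
			blCount[len]++;
		blCount[0] = 0;

		// Compute the smallest code for each bit length
		int[] nextCode = new int[maxBits + 1];
		int code = 0;
		for (int bits = 1; bits <= maxBits; bits++)
		{
			code = (code + blCount[bits - 1]) << 1;
			nextCode[bits] = code;
		}

		// Assign consecutive codes to symbols of equal length,
		// walking the symbols in order
		int[] codes = new int[lengths.length];
		for (int sym = 0; sym < lengths.length; sym++)
			if (lengths[sym] != 0)
				codes[sym] = nextCode[lengths[sym]]++;
		return codes;
	}
}
```

For example, lengths of {2, 1, 3, 3} come out as the codes 10, 0, 110, and 111 in binary.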
Going to take a break from deflate and work on class loading (which I have done). The -0 flag in the build script will be handy for skipping compression, since I should not just be working on the decompression library, although it is an important part of it.
Looks like the earliest version of CLDC (J2ME) does not support floating point math. Also, the subroutine instructions (jsr/ret) do not appear to be supported in any version at all, so implementation and verification should be much easier.