2016/04/19

08:06

Actually with my argument handling, I can simplify.

08:19

I can also have an interpreter-on-the-interpreter test, which would likely be exponentially slow when running the tests, but it would show whether the interpreter can interpret itself.

10:16

Basing WeakHashMap on just HashMap will not work because I may need reference queues and the ability to use iterators without worrying about concurrent modification of the keys. Otherwise keys would only be removed during iteration, and with multiple iterators it would get confusing as to which entries are gone and which are not. So basing it off HashMap will not work.
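
For reference, a minimal sketch of the kind of stale-key handling this needs, using java.lang.ref directly; the list and the names here are only illustrative, not the actual map internals:

import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

public class WeakKeySketch
{
    static final ReferenceQueue<Object> QUEUE = new ReferenceQueue<>();
    static final List<WeakReference<Object>> KEYS = new ArrayList<>();
    
    public static void main(String... args)
    {
        // Register a weak key with the queue so its collection is reported.
        Object key = new Object();
        KEYS.add(new WeakReference<>(key, QUEUE));
        
        // Drop the only strong reference; collection (and thus queueing)
        // is not guaranteed to happen immediately after this.
        key = null;
        System.gc();
        
        // Drain the queue and forget any key that was collected, without
        // having to iterate over the entire map.
        for (Reference<?> ref; (ref = QUEUE.poll()) != null;)
            KEYS.remove(ref);
        
        System.out.println("live keys: " + KEYS.size());
    }
}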

10:20

I suppose for my implementation of the hash map, I will have buckets where the keys are identified by their hash and sorted by it. When a lookup is performed, a binary search is made for a matching key; if one is found, its value is obtained. When the load amount is reached, the entry storage is doubled in size and the entries in a bucket are split between the two resulting buckets. To determine the bucket that a key is within, there will be a basic binary search based on an initial starting hash. So for example the starting bucket has a hash range between 0x0000_0000 and 0xFFFF_FFFF and values are placed accordingly. When the capacity or load factor is reached, the bucket is split down the middle and entries are sparsely placed in the arrays with an even amount of spacing between them.

The only thing to determine is how a blank position affects the binary search. If an area is blank then it should share the same key as the entry before it, and for values I can have a special value identity which says whether the key is actually valid or not. If a binary search does not find the entry, it specifies the location where the item would be, and if that points to an invalid entry it will be cleared.

Since HashMap, LinkedHashMap, and WeakHashMap will be very similar, I can have a common hash class which is used internally by the code in a private namespace. Then HashSet and LinkedHashSet could just use HashMap and LinkedHashMap and go through their keys, using a dummy object for the value in add since add must know if the data actually changed.
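
As a rough illustration of the lookup side of this, a hypothetical bucket could keep its keys sorted by hash in parallel arrays and use a binary search; the splitting, sparse spacing, and invalid-value markers are left out here and all of the names are made up:

import java.util.Arrays;

final class HashBucketSketch
{
    // Parallel arrays: hashes are kept sorted, keys and values follow them.
    int[] hashes = new int[0];
    Object[] keys = new Object[0];
    Object[] values = new Object[0];
    
    Object get(Object key)
    {
        int hash = key.hashCode();
        int at = Arrays.binarySearch(this.hashes, hash);
        if (at < 0)
            return null;
        
        // Hashes may collide, so scan neighboring slots that share the hash.
        for (int i = at; i >= 0 && this.hashes[i] == hash; i--)
            if (key.equals(this.keys[i]))
                return this.values[i];
        for (int i = at + 1; i < this.hashes.length && this.hashes[i] == hash; i++)
            if (key.equals(this.keys[i]))
                return this.values[i];
        return null;
    }
}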

10:46

I suppose instead of NARG or IOOB and such, I can give them actual error codes when null is passed. However, that would create a lot of error codes for just null checks. So I suppose for the main class library I will use error codes like that instead. With two sets of letters, I have 1,296 error codes available; if that is not enough then I can always add another letter. I suppose for the class library I should use JA instead of CL.
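
A quick check of the code space, assuming each of the two trailing characters is one of the 36 characters 0-9 and a-z (the exact alphabet is an assumption):

public class ErrorCodeCount
{
    public static void main(String... args)
    {
        System.out.println(36 * 36);      // 1296 two-character codes
        System.out.println(36 * 36 * 36); // 46656 with a third character
        
        // For example, code number 100 rendered as two base-36 digits.
        System.out.println(Integer.toString(100, 36)); // "2s"
    }
}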

12:39

It would be simpler for type determination if the old values only had to be sane with respect to the type of an existing value.

18:59

I may have enough of a partial interpreter to write an SSA-optimized variant of it, which might be cleaner than the current interpreter and much faster. I currently hit a bug in the interpreter when popping frames. However, the interpreter is very complex and slow. The SSA representation should be similar to the native code representation. I need an efficient means of handling exceptions in the SSA code, perhaps a kind of lookup table, instructionally speaking.

19:09

I will need two registers: the address being handled and the exception being handled. The exception handler lookup will be at the start of a method. It will check if there is an exception; if there is, it will look for the given instruction in a specific range and, if a handler of the correct type is found, it will go to that exception handler. This can get really branchy, although it would not require a lookup table. An alternative is a special instruction which contains a lookup key; if the exception handler of an instruction changes, then that register becomes the lookup key. Then when an exception occurs, type checking will determine which exception was hit, except if the given exception is Throwable (which catches everything); in the case of Throwable it will just be a jump to the exception handler.
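
A hypothetical sketch of what that lookup could look like, written as plain Java rather than generated code; the table layout, the names, and the use of null to mean catch-everything are assumptions for illustration only:

final class HandlerTableSketch
{
    static final class Handler
    {
        final int startPc, endPc, handlerPc;
        final Class<? extends Throwable> type; // null catches everything
        
        Handler(int s, int e, int h, Class<? extends Throwable> t)
        {
            this.startPc = s;
            this.endPc = e;
            this.handlerPc = h;
            this.type = t;
        }
    }
    
    final Handler[] handlers;
    
    HandlerTableSketch(Handler... handlers)
    {
        this.handlers = handlers;
    }
    
    // Given the address register and the exception register, return the
    // handler address to jump to, or -1 to rethrow to the caller.
    int lookup(int addressRegister, Throwable exceptionRegister)
    {
        if (exceptionRegister == null)
            return -1;
        for (Handler h : this.handlers)
            if (addressRegister >= h.startPc && addressRegister < h.endPc
                && (h.type == null || h.type.isInstance(exceptionRegister)))
                return h.handlerPc;
        return -1;
    }
}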

19:16

I believe I have a verification operation which is not correct, and that causes the error I hit in the interpreter.

19:25

Appears the topmost float is not passed to the constructor.

19:35

The class program code and the current interpreter are quite large:

3221    javame-class-program java=3221
2659    javame-basic-interpreter java=2659

They consist of a number of methods and classes. I must find a way to simplify them so that they are combined as one, perhaps in only 4,000 lines, and it must also handle SSA. The class file code is about 2,000 lines of code.

20:47

When I look up the exceptions, I can generate the code at the very start and put in special markers for handling exceptions. The stack map table and the byte code can be processed linearly at the same time, and verification of types can be inlaid into the SSA variable data. So instead of a verification pass followed by an interpretation/compilation pass, both are performed at the same time; instead of separate determination and op handlers, they are combined into a single unit. Many operations are pretty much the same, so it would be similar to the compute machine except it would be given actions which could be merged to and from SSA code.
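
As a sketch of the single-pass shape this implies, the byte code and stack map table would be walked together and each handler would verify and emit in the same step; every type and method below is assumed for illustration and is not the actual code:

import java.util.List;

final class SinglePassSketch
{
    interface TypeState
    {
        // Merge with the declared types of a stack map entry, failing if
        // they do not agree with the types derived so far.
        TypeState checkedMergeWith(TypeState declared);
    }
    
    interface Instruction
    {
        int address();
        
        // Verify operands against the current state and append SSA code,
        // returning the state after the operation.
        TypeState verifyAndEmit(TypeState state, StringBuilder ssaOut);
    }
    
    interface StackMapTable
    {
        TypeState initialState();
        
        // Returns null if there is no entry at this address.
        TypeState entryAt(int address);
    }
    
    static String buildSsa(List<Instruction> code, StackMapTable smt)
    {
        StringBuilder ssaOut = new StringBuilder();
        TypeState state = smt.initialState();
        for (Instruction insn : code)
        {
            // Declared types replace and check the derived ones at entries.
            TypeState declared = smt.entryAt(insn.address());
            if (declared != null)
                state = state.checkedMergeWith(declared);
            
            // Verification and translation happen in the same step.
            state = insn.verifyAndEmit(state, ssaOut);
        }
        return ssaOut.toString();
    }
}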