2016/05/13

07:24

Looks like I forgot to commit build instructions.

07:31

I should likely rename the build system stuff from hairball to something more noticeable.

08:50

Actually with this operation code, I can easily change operations so that they become other operations. For example, if a read is made of a field which is final and has an assigned constant value, it can just be turned into a sipush or an LDC instead of a field read. However for instance fields, the instance would need to be popped also, so they cannot cleanly be aliased; for static fields this is always safe since constant values are always constant. I suppose for instance field reads which are constant like this, I can have it so a null check is performed on the instance, then the constant value is placed onto the stack accordingly. This would require new instructions to be defined which do not appear in normally valid programs. However, since my operation codes are not limited to any range, I can just use really high values which are impossible in real class files, since those operations would otherwise have no possible representation. I can do the same for LDC operations so they become a plain constant value push for the most part. If I can remove the need for the compiler and interpreter to access the constant pool when handling the byte code, that would be a bonus.
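
Roughly, the rewrite could look something like this; the class names and the SYNTHETIC_CONST_PUSH value here are just made up for illustration:

    public class ConstantFolder
    {
        // An opcode well above the single byte range of real class
        // files, so it can never collide with an actual instruction.
        public static final int SYNTHETIC_CONST_PUSH = 0x10000;

        // Simplified view of an operation: an opcode and one argument.
        public static final class Operation
        {
            public int opcode;
            public Object argument;
        }

        // Simplified view of a field as seen through the lookup.
        public static final class FieldInfo
        {
            public boolean isStatic;
            public boolean isFinal;
            public Object constantValue;    // null if none assigned
        }

        // Folds a static read of a constant final field into a plain
        // constant push; instance reads would still need the null
        // check on the popped instance, so they are not folded here.
        public static void foldFieldRead(Operation op, FieldInfo field)
        {
            if (field.isStatic && field.isFinal
                && field.constantValue != null)
            {
                op.opcode = SYNTHETIC_CONST_PUSH;
                op.argument = field.constantValue;
            }
        }
    }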

09:00

I can replace other instructions too. The byte code program has access to the lookup and everything is statically determined. Some instructions, such as instanceof, operate on unknown objects; however, the compiler could detect in specific cases whether an object would always be an instance and then just push a boolean to the stack. So when an NRProgram is executed there would never be an instanceof check on known objects of a given class type. Since everything is statically determined, optimizations are easier to make. When it comes to run-time JIT and caching, I would likely compile the entire program first and then link it in as an addition to the current JVM. Although this would make for a slow initial start, everything is compiled at the same time, which means whole program optimization is possible. That compilation is then cached so that recompilation does not have to occur again. However, if the JAR or any of its dependencies change, then the entire program must be recompiled before it can safely be used. So basically the JIT can be seen as a compiler from a collection of JARs to a native program. For example on an N64, the binary would be generated, then loaded into memory; any memory that remains is available to programs to use. The whole program optimization could also merge methods together so that, for example, my large bulk of TODO-throwing code is essentially merged into a single method, which would save space.
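
A rough sketch of deciding instanceof statically, using java.lang.Class only as a stand-in for whatever subtype questions the lookup would answer:

    public class InstanceOfFolder
    {
        // Tries to decide an instanceof check statically. Returns
        // TRUE or FALSE when the answer is certain, or null when the
        // check must remain at run-time.
        public static Boolean tryResolve(Class<?> knownType,
            Class<?> checkedType, boolean provenNonNull)
        {
            // A known subtype which cannot be null is always an
            // instance
            if (provenNonNull && checkedType.isAssignableFrom(knownType))
                return Boolean.TRUE;

            // Two unrelated classes (not interfaces) can never share
            // an instance, and null fails instanceof anyway
            if (!knownType.isInterface() && !checkedType.isInterface()
                && !checkedType.isAssignableFrom(knownType)
                && !knownType.isAssignableFrom(checkedType))
                return Boolean.FALSE;

            // Otherwise keep the real check
            return null;
        }
    }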

09:36

Instead of a large number of distinct push operations for constants, I can just change them all to a single push-of-a-value operation. Then in the interpreter and compiler I can check the value to be pushed and handle it accordingly. This would save conditions in the switch because a large number of the operations are for the most part duplicates, and it would also result in much smaller code since the cases would be condensed. Then for stack shuffling such as pop and push, I can instead have a generic stack shuffling operation which pops all the desired values, then pushes them back based on the input pattern. All of the pop, push, and dup variants would then use this single shuffle mechanism.
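
A quick sketch of what such a shuffle operation might look like; the encoding here (a pop count plus an index pattern) is invented just for illustration:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class StackShuffle
    {
        // Pops popCount values, then pushes them back following the
        // pattern, where pattern[i] indexes the popped values with 0
        // being the old top. So plain pop is popCount=1 with an empty
        // pattern, dup is popCount=1 with {0, 0}, and swap is
        // popCount=2 with {0, 1}.
        public static void shuffle(Deque<Object> stack, int popCount,
            int[] pattern)
        {
            Object[] popped = new Object[popCount];
            for (int i = 0; i < popCount; i++)
                popped[i] = stack.pop();    // popped[0] was the top

            // Push in order; the last pattern entry ends up on top
            for (int p : pattern)
                stack.push(popped[p]);
        }

        public static void main(String... args)
        {
            Deque<Object> stack = new ArrayDeque<>();
            stack.push("two");
            stack.push("one");    // "one" is on top

            // dup_x1: ..., v2, v1 -> ..., v1, v2, v1
            shuffle(stack, 2, new int[]{0, 1, 0});
            System.out.println(stack);    // [one, two, one], top first
        }
    }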

11:30

Also, the byte code representation could optimize exception handling. For example, athrow could be represented as: clear the temporaries, place the given item onto the stack by itself, then jump to a given address. Thinking about it, the byte code representation could also perform some simple operations itself, so it could act as a very early optimization stage where easily optimized things are handled. There could be some stack caching and NOPs placed over removed operations. At first I suppose I should perform only the obvious optimizations; at least the athrow one might be done. However, to maximize basic byte code optimization I would also need to know the class types of values on the stack, beyond whether they are just objects or not. Specifying the object types would be very useful in the optimizations I am talking about. The stack and local states could also track whether values are null and such. The types would also be handy when generating the NRProgram.
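
The athrow lowering could roughly be the following, with invented names and a plain Deque standing in for the operand stack:

    import java.util.Deque;

    public class AthrowLowering
    {
        // What the synthetic athrow expands to: clear the
        // temporaries, place the thrown item alone on the stack,
        // then jump to the handler.
        public static int lowerAthrow(Deque<Object> stack,
            int handlerAddress)
        {
            Object thrown = stack.pop();    // The thrown exception
            stack.clear();                  // Clear the temporaries
            stack.push(thrown);             // Alone on the stack
            return handlerAddress;          // Jump to the handler
        }
    }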

11:38

So to recap: give types and potential values to stack entries in the byte code representation. This information could then be used as a kind of optimization cache. This would complicate the byte code representation, however. For ease and simplicity I suppose I should not do this and keep the byte code mostly pure in representation. Optimizations such as these could be performed when the NRProgram is to be generated; if an optimization is not very easily performed then it should not be performed. This would keep the initial loading of the byte code simple and closer to the specification. Writing the more difficult optimizations would mean that they would have to be tested. The byte code representation would only really be used at the early stage anyway: when it comes to actual native execution, the compiler will never be called because everything would already be compiled by the time a program is run.

14:23

I will need to calculate jump sources.
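
That would just be inverting the jump target information, something like this (the map shapes are made up for the sketch):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class JumpSources
    {
        // Inverts a map of operation address -> jump targets into
        // one of target address -> the addresses which jump to it.
        public static Map<Integer, List<Integer>> calculate(
            Map<Integer, int[]> jumpTargets)
        {
            Map<Integer, List<Integer>> sources = new HashMap<>();
            for (Map.Entry<Integer, int[]> e : jumpTargets.entrySet())
                for (int target : e.getValue())
                    sources.computeIfAbsent(target,
                        k -> new ArrayList<>()).add(e.getKey());
            return sources;
        }
    }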

15:18

Alternatively I can precalculate the stack data using basic simple types and such, then have the full information available when needed. Alternatively, the input stack state of an operation, if it is not stored explicitly, is simply the output state of the operation that precedes it.
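
In code that could roughly be the following; the State class is just a placeholder for whatever a computed stack state carries:

    public class StackStates
    {
        // Placeholder for the computed stack state of an operation
        public static final class State
        {
        }

        // The input state of the operation at index: the explicitly
        // stored one if present (say at a jump target or exception
        // handler), otherwise the output of the operation just
        // before it. Index zero would always have the explicit
        // method entry state stored.
        public static State inputState(State[] explicit,
            State[] outputs, int index)
        {
            if (explicit[index] != null)
                return explicit[index];
            return outputs[index - 1];
        }
    }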

16:34

Even though I do not have value types or multiple return values, I can instead use an array for information passing and for the initialization of operation information.
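
For example, something along these lines, with the values hard-coded just for the sketch:

    public class MultiReturn
    {
        // Several pieces of operation information returned at once
        // through a plain array, standing in for multiple return
        // values. Hard-coded here purely for illustration.
        public static Object[] describeOperation(int opcode)
        {
            int popCount = 2;           // Values the operation pops
            int pushCount = 1;          // Values it pushes
            String mnemonic = "iadd";   // Readable name
            return new Object[]{popCount, pushCount, mnemonic};
        }

        public static void main(String... args)
        {
            Object[] info = describeOperation(0x60);
            System.out.println("pops=" + info[0] + " pushes="
                + info[1] + " name=" + info[2]);
        }
    }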

16:54

I completely did not realize that today was Friday the 13th.

17:50

Then with the stack operations, this means that stack shuffling can be done in a single operation, which would save some time.

18:26

So now back to invokespecial again. Although this time I can have the method to execute for virtual, special, and interface calls handled by the byte code representation rather than by the interpreter/compiler itself.
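
A rough sketch of that resolution step, with plain strings standing in for real method references and an invented vtable map:

    import java.util.Map;

    public class InvokeResolver
    {
        // The three instance invocation kinds to unify
        public enum Kind
        {
            VIRTUAL,
            SPECIAL,
            INTERFACE,
        }

        // Picks the method to actually run for a call site; the
        // vtable maps "ReceiverType::declaredTarget" to the
        // overriding method, if any.
        public static String resolve(Kind kind, String declaredTarget,
            String exactReceiverType, Map<String, String> vtable)
        {
            // invokespecial already names its exact target
            if (kind == Kind.SPECIAL)
                return declaredTarget;

            // Virtual and interface calls collapse to a direct
            // target when the exact receiver type is statically known
            if (exactReceiverType != null)
                return vtable.getOrDefault(
                    exactReceiverType + "::" + declaredTarget,
                    declaredTarget);

            // Otherwise it stays a run-time dispatch
            return null;
        }
    }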

20:01

Definitely did not write that much code today, distractions and such.

20:04

In fact, invocations can all be treated as if they were static. Well, not entirely, because of the null pointer checks on the instance.
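
For illustration, a hypothetical devirtualized method might look like this (the Point class is made up):

    import java.util.Objects;

    public class Devirtualized
    {
        public static class Point
        {
            public int x, y;

            public int lengthSquared()
            {
                return this.x * this.x + this.y * this.y;
            }
        }

        // The instance method treated as static: the receiver
        // becomes an explicit first argument and the null check
        // which invokevirtual performs implicitly is made explicit.
        public static int Point_lengthSquared(Point self)
        {
            Objects.requireNonNull(self);    // The null pointer check
            return self.x * self.x + self.y * self.y;
        }
    }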