2016/06/08

09:21

For the memory pool manager I can have multiple base addresses. The base address determines the address at which data is placed and would also be used for the pointer values stored inside objects, for example.
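
A rough sketch of what that could look like (names here are invented, this is not actual code): the pool remembers its base address and maps between in-pool offsets and the pointer values that would be written into objects.

    public class MemoryPool
    {
        /** The address this pool is considered to start at. */
        protected final long baseaddr;
        
        /** The raw pool data. */
        protected final byte[] data;
        
        public MemoryPool(long base, int size)
        {
            this.baseaddr = base;
            this.data = new byte[size];
        }
        
        /** Converts an offset within the pool to a pointer value. */
        public long pointerOf(int off)
        {
            return this.baseaddr + off;
        }
        
        /** Converts a pointer value back to an offset within the pool. */
        public int offsetOf(long ptr)
        {
            return (int)(ptr - this.baseaddr);
        }
    }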

09:31

I was thinking of having the memory pool manager handle object allocations and such; however, I believe that should instead be placed in another project. This way the in-memory object management can potentially be shared by the kernel and the interpreter.

09:36

I need a package which can handle comparison and other operations on unsigned values.
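
For reference, a minimal sketch of how such a package could do unsigned comparison on Java's signed types (the class name is hypothetical); flipping the sign bit turns unsigned ordering into signed ordering.

    public final class UnsignedMath
    {
        private UnsignedMath() {}
        
        /** Compares two ints as unsigned 32-bit values. */
        public static int compare(int a, int b)
        {
            int xa = a ^ Integer.MIN_VALUE, xb = b ^ Integer.MIN_VALUE;
            return (xa < xb ? -1 : (xa == xb ? 0 : 1));
        }
        
        /** Compares two longs as unsigned 64-bit values. */
        public static int compare(long a, long b)
        {
            long xa = a ^ Long.MIN_VALUE, xb = b ^ Long.MIN_VALUE;
            return (xa < xb ? -1 : (xa == xb ? 0 : 1));
        }
    }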

10:53

The memory pool itself should be used by the manager and then potentially associated with the kernel.

11:20

Going to need a common and generic object manager. Monitors and locks can be managed with atomic reads/writes of values, so the memory pool will also need compare-and-set (or compare-and-test or similar) operations for the native types.
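
Roughly what the pool would need to expose, sketched as an interface (names are hypothetical); a real implementation would have to make the compare-and-set actually atomic, or fall back to a global lock when there is no native support.

    public interface AtomicPoolAccess
    {
        /** Reads the int at the given pool offset. */
        int readInt(int off);
        
        /** Writes the int at the given pool offset. */
        void writeInt(int off, int value);
        
        /**
         * Atomically stores {@code set} only if the cell currently holds
         * {@code expect}; returns {@code true} on success. Monitors and
         * locks would be built on top of this.
         */
        boolean compareAndSetInt(int off, int expect, int set);
    }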

11:43

Hopefully 16 bytes reserved at the start of the memory pool is sufficient to handle virtualized atomic operations and such (in case there is no native support for them).

12:21

Actually that would be a bad idea. The memory pool should just be a memory pool which can be read and written to. An object manager can reserve space and such. This way the memory pools can be shared with the simulator and such.

12:23

The atomic operations cannot be in the abstract pool either, because the reserved bytes are gone now.

14:46

I can test differently sized pointer values in the interpreter. However, one thing to consider is that the pointer size would limit the interpreter's maximum amount of memory. Also, for each virtual machine, every loaded class would need a Class object allocated for it along with its static fields.
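
As a rough sizing check (just arithmetic, not actual code), the pointer width directly caps how much pool memory can be addressed:

    // 1L << bits is the number of addressable bytes for a given pointer width:
    //   16-bit pointers ->        65,536 bytes (64 KiB)
    //   24-bit pointers ->    16,777,216 bytes (16 MiB)
    //   32-bit pointers -> 4,294,967,296 bytes ( 4 GiB)
    public static long maxAddressable(int pointerBits)
    {
        return 1L << pointerBits;
    }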

16:55

So the question is: do I allocate the stacks used in the interpreter from the memory pool? If I do, then saving the current state of execution is virtually just limited to storing the PC address within the currently executing method, the stack pointers, and a few other details. With this model there could actually be no local variables at all (local variables in the sense of registers), with the code executing in a way where everything lives on the stack.
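
A sketch of roughly what the saved state would consist of under this model (field names are invented):

    public class SavedThreadState
    {
        /** The method currently being executed. */
        public Object inMethod;
        
        /** The PC address within that method. */
        public int pc;
        
        /** Pool pointer to the top of the value stack. */
        public long stackTop;
        
        /** Pool pointer to the base of the current frame. */
        public long frameBase;
    }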

17:00

I wonder what the maximum stack size that I can run my current code in would be. This would at least be using JamVM.

The stack sizes are dependent on the VM itself, however. Judging from all of the overflows, they are essentially all happening in the class loader.

17:05

What I can do, however, is extend the stack when it is too small so that execution continues in another allocated region (which would be locked). So really the object manager would be about more than just objects and arrays; it would also have to handle stacks and possibly temporary executable code fragments (ones which were natively compiled).
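
Roughly how the extension could be represented (hypothetical names, just a sketch): each new region allocated from the pool links back to the one it extends, so one logical stack can span several physical regions.

    public class StackRegion
    {
        /** Pool pointer to the start of this region. */
        public final long base;
        
        /** Size of this region in bytes. */
        public final int size;
        
        /** The region this one extends, or null for the first region. */
        public final StackRegion previous;
        
        public StackRegion(long base, int size, StackRegion previous)
        {
            this.base = base;
            this.size = size;
            this.previous = previous;
        }
    }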

17:13

Growing the stack across extensions would technically allow stacks that sit low in the execution space to be moved around and potentially swapped away or compressed, which could be a bonus for memory usage. Due to the way Java works, no method ever refers to another method's stack entries, so this can actually be used as a memory-based optimization.

Also, 6K of stack is not enough to run the Java compiler; 8K works until the kernel has to be built.

When GC is performed, however, the stacks will need to be swapped in, decompressed, and locked to determine which objects exist on them. Using the previous plan of having a duplicated object storage space would make determining which objects are actually referenced quite simple, rather than having to find some other way to tell whether a slot containing an int value actually points to an object.
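
A hedged sketch of the check that the duplicated object storage would enable during GC (names invented): a stack slot only counts as a reference if its value is a pointer the object manager actually handed out.

    import java.util.HashSet;
    import java.util.Set;
    
    public class ObjectTable
    {
        /** Pointers currently allocated by the object manager. */
        private final Set<Long> allocated = new HashSet<Long>();
        
        public void register(long ptr) { allocated.add(ptr); }
        public void unregister(long ptr) { allocated.remove(ptr); }
        
        /** Is this stack slot value a live object reference, or just an int? */
        public boolean isObjectReference(long slotValue)
        {
            return allocated.contains(slotValue);
        }
    }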