Today is my birthday. Yay.
For simplicity, the immutable configuration should have the same getters as the mutable one.
The simulations can act as a group: instead of having individual simulations there would just be a system simulation. If a system instance does not exist then it will be created, and when a program is requested to run on a given system it will be passed to that instance. This way, multiple programs can run on any given system, yet each remains essentially standalone in its own execution.
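Roughly sketched, the on-demand group could look like this (all class and method names here are made up for illustration, not actual code):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * A minimal sketch of the simulation group: one simulation per system,
 * created on demand, with each requested program handed to the shared
 * instance while remaining standalone in its execution.
 */
class SimulationGroup
{
	/** Already-created system simulations, keyed by system name. */
	private final Map<String, SystemSimulation> systems = new HashMap<>();
	
	/** Runs a program on the named system, creating the system if needed. */
	SystemSimulation runProgram(String system, String program)
	{
		SystemSimulation sim = this.systems.computeIfAbsent(system,
			SystemSimulation::new);
		sim.launch(program);
		return sim;
	}
	
	/** The number of distinct systems created so far. */
	int systemCount()
	{
		return this.systems.size();
	}
	
	/** A single simulated system hosting multiple standalone programs. */
	static final class SystemSimulation
	{
		final String name;
		private int launched;
		
		SystemSimulation(String name)
		{
			this.name = name;
		}
		
		void launch(String program)
		{
			// A real simulation would load the program (likely from the
			// simulated filesystem); here we only count launches.
			this.launched++;
		}
		
		int launchCount()
		{
			return this.launched;
		}
	}
}
```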
One thing to consider however is that the simulation could launch programs
that exist in the simulated filesystem or from the real filesystem. I should
likely just support it only from the simulated filesystem. With my current plan
there would be a system filesystem and a home one. However, some operating
systems may require a combined user based filesystem on top of a system based
one. Also, some operating systems do not have a filesystem at all. For example
Palm OS has a database filesystem while expansion cards act as traditional
filesystems. The Nintendo 64 has only ROM and block based storage (although if
a 64drive is used, the main cart can contain a filesystem). So having
external filesystem support could be slightly odd. However, for a Linux
based system I should be able to run
fossil. This way I can have a bootstrap
build environment (assuming I can also get the Java compiler and interpreter
simulated). I will bump into a chicken-and-egg problem however. Right
now I have no class library and no graphical interfaces. For the simulator to
work better, I need a graphical interface since for some systems such as
Palm OS, everything uses graphics. However for testing, I can have a virtual
serial port which the operating system can output to, perhaps for a given
process. So on Linux, the test program would pipe its
output to a virtual serial device, on Windows it would output via a COM1 or
such. On the Nintendo 64's 64drive, it would output to its USB connection.
This way, test output from the simulated system can be parsed for errors and
such. If a test fails on a real system then the code is incorrect, while if a
test fails only in the simulator then the simulator is incorrect.
Well, Google never wished me a Happy Birthday, but Microsoft did.
When it comes to the new JITLogicAcceptor, I should only create it when it is
actually needed, that is, when a method needs its logic handled. Before I was
definitely going to clutter
JITOutput with logical operations, but that
would be nasty. However with the
JITLogicAcceptor I can essentially have a
multiplexer which can output to multiple logical acceptors with the same
information. Then this way, I can handle caching and runtime JIT at the same
time. Thinking about other things, I wonder if in the game of life it would
be possible to create standalone simulations (not using the super cells that
run the same simulation) but a way where I can create a glider and a few other
objects at specific positions within the world. If that could be done then a
JIT would be possible in the game of life.
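The multiplexer idea above could be sketched roughly like this (the JITLogicAcceptor name is from these notes, but its accept() signature is an assumption):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Hypothetical interface: something which accepts logical operations. */
interface JITLogicAcceptor
{
	void accept(String operation);
}

/**
 * A minimal sketch of the multiplexer: the same logic information is
 * forwarded to every backing acceptor, so a cache writer and a runtime
 * JIT can both be fed at the same time.
 */
class MultiplexedLogicAcceptor implements JITLogicAcceptor
{
	private final List<JITLogicAcceptor> targets;
	
	MultiplexedLogicAcceptor(JITLogicAcceptor... targets)
	{
		this.targets = Arrays.asList(targets);
	}
	
	@Override
	public void accept(String operation)
	{
		// Forward the same information to every acceptor.
		for (JITLogicAcceptor target : this.targets)
			target.accept(operation);
	}
}
```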
Actually what I need is some kind of cache creator callback that could be
specified in the output. So basically, there would be a class writer of sorts
which JITOutput would return. It would essentially be beginClass with the
namespace and the name of the class. If the output is to be written to a
cache then that can be handled by the writer; if it is a binary that could
also be handled too. So then JITOutput gets a class-producing writer.
Finishing the class can be handled by whatever implements the writer.
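A rough sketch of this shape (the signatures and the recording implementation are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical per-class writer returned by beginClass(). */
interface JITClassWriter
{
	/**
	 * Called when the class is finished; whatever implements the
	 * writer decides if the result lands in a cache, a binary, or both.
	 */
	void endClass();
}

/** Hypothetical output interface which produces class writers. */
interface JITOutput
{
	JITClassWriter beginClass(String namespace, String name);
}

/** A trivial implementation which just records finished class names. */
class RecordingOutput implements JITOutput
{
	final List<String> written = new ArrayList<>();
	
	@Override
	public JITClassWriter beginClass(String namespace, String name)
	{
		String full = namespace + ":" + name;
		// The writer only reports the class as written when it ends.
		return () -> this.written.add(full);
	}
}
```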
I am going to have to have a creator interface for cached forms. The config
will be given an interface which would create
OutputStreams for writing to
the disk. One thing that I could do on Linux is create actual object files
and then create a linker similar to
gcc which can combine all of the object
files together with a set entry point which performs the work. If I match the
native format I could manually link all the classes together myself and set
an entry point so to speak. Although this is not required at all. It would
be interesting for hybrid programs however.
So there would be a JITCacheCreator that can sort of be hidden in a way by
the configuration: the JITCacheCreator would be sent to the configuration,
and if it is sent then that means JITOutput should output to a cache. I just
need the creator because JITOutput has no means of determining exactly where
to place the OutputStream and whether it even wants to be stored on the disk.
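As a sketch of that split of responsibilities (the interface shape and the in-memory implementation are assumptions):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical creator handed to the configuration; when present,
 * JITOutput asks it for a stream per cached class rather than deciding
 * on disk locations itself.
 */
interface JITCacheCreator
{
	/** Creates a stream for the cached form of the given class. */
	OutputStream createCache(String namespace, String name)
		throws IOException;
}

/** In-memory creator, usable for tests or targets without a disk. */
class MemoryCacheCreator implements JITCacheCreator
{
	/** Cached class bytes, keyed by "namespace/name". */
	final Map<String, ByteArrayOutputStream> caches = new HashMap<>();
	
	@Override
	public ByteArrayOutputStream createCache(String namespace, String name)
	{
		ByteArrayOutputStream baos = new ByteArrayOutputStream();
		this.caches.put(namespace + "/" + name, baos);
		return baos;
	}
}
```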
So I need to copy over the class flag code and use that in the decoder since that will be very important.
What I need though is a name. The CI code would basically be imported as-is for the most part, except with some slight changes. Well, perhaps not changed at all. The CI family will basically be going away with the new JIT since there is no need to really keep it around. A future Java compiler could just write the class data directly anyway.
As a name, JIT would not really work out well, and Class would not work
either.
So basically what I need to determine now is how
beginClass is to work. I
have thought of it before. Currently I just have caching used. However with a
multiplexing output for classes I would not need to worry about duplicating or
having lots of branches. However what would essentially happen is that when
class code is generated, the work would essentially be performed twice: once
for being cached and again if directly executed. Since that would be a bit
of a waste I would suppose that there would be branched handlers for output.
Generally when directly executed just some basic class details are needed.
However the cached form can also be directly executed and placed in memory and
initialized. So what I really could just do is have just a cached form writer
for now. Also, with the generic operating system handling and such, it is
very possible to compile for multiple systems at the same time although
linking may be complicated by that fact. One consideration with the simulator
is that I could directly feed it blobs. However it should be able to run actual
OS code because that would need to be tested more than just the native machine
code that is being executed. So one thing to consider at this point, for
systems such as Linux, is if as noted before blobs should match the standard
compilable object format. Then the blobs could be linked into a static binary
and initialized with basic C code for now. However in my case, it would
essentially just be a blob wrapped in an ELF file.
One thing to determine is if the ELF symbol table is unique to a given OS or
if it is standard for the format. I believed symbol tables were non-standard,
but it appears that the symbol table is in fact standard (it is defined by
the base ELF specification). One thing to wonder though is if I can get
away without needing a symbol table at all. However, going without one would
be a bit ugly. At least with ELFs and symbol tables, I can kind of easily
generate a binary even without linking. When it comes to system
specific virtual machine calls (
unsafe) then those calls can just be bound
to special symbol names rather than their normally intended method. So
instead of calling a mangled name for the method, it would instead just be a
call to __squirreljme_foo. However it would likely be best to have an
alternative static OS implementation somewhere where methods are bound to
instead. So basically the JIT would turn
unsafe calls into
foo.bar.ImplementingClass with compatible signatures. Then this implementing
class would have the magical code such as direct assembly access and such.
This way assembly can stay out of the main libraries and exist only in a small
portion of the code. One thing I need though is a kind of ELF output library
and a common name mangling scheme which would work in C for the most part.
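A mangling scheme that stays a valid C identifier could look like this sketch (the prefix comes from the notes; the exact escaping rules are an assumption):

```java
/**
 * A minimal sketch of C-compatible name mangling: symbols get a common
 * prefix and any character not valid in a C identifier is escaped, so
 * the result can safely appear in an ELF symbol table.
 */
class CMangler
{
	/** Mangles a class and method name into one C-safe symbol. */
	static String mangle(String className, String methodName)
	{
		StringBuilder sb = new StringBuilder("__squirreljme_");
		appendEscaped(sb, className);
		sb.append("__");
		appendEscaped(sb, methodName);
		return sb.toString();
	}
	
	/** Appends the string, escaping non-identifier characters. */
	private static void appendEscaped(StringBuilder sb, String s)
	{
		for (int i = 0; i < s.length(); i++)
		{
			char c = s.charAt(i);
			if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
				|| (c >= '0' && c <= '9'))
				sb.append(c);
			else if (c == '/' || c == '.')
				sb.append('_');
			else
				sb.append(String.format("x%04x", (int)c));
		}
	}
}
```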
Technically I could just store everything in an ELF and then link the end
result anyway. Alternatively, I can always have a blob to ELF converter for
that. If all blobs have their size information, I could essentially just
concatenate every single blob, set up a base linker script, and create a small
wrapper program which initializes data from the blob.
So I would say then that blobs would be best which include their detailed information. With a backwards linking chain of sizes, pointers to classes could be created. Finding a class definition would be a bit slow, but there could be a generic large table at the end which points to every blob that exists, similar to what I have thought up before.
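The trailing-table layout could be sketched like this (the exact layout is an assumption: blob data first, then one offset per blob, then the count):

```java
import java.nio.ByteBuffer;

/**
 * A minimal sketch of the blob image: every blob is concatenated, then
 * a table at the end points to each one, so a class can be found with
 * one table lookup rather than walking a chain of sizes.
 */
class BlobImage
{
	/** Builds an image of concatenated blobs plus a trailing table. */
	static byte[] build(byte[]... blobs)
	{
		int dataLen = 0;
		for (byte[] blob : blobs)
			dataLen += blob.length;
		
		// Data, then one 32-bit offset per blob, then the blob count.
		ByteBuffer buf = ByteBuffer.allocate(dataLen + 4 * blobs.length + 4);
		int[] offsets = new int[blobs.length];
		for (int i = 0; i < blobs.length; i++)
		{
			offsets[i] = buf.position();
			buf.put(blobs[i]);
		}
		for (int offset : offsets)
			buf.putInt(offset);
		buf.putInt(blobs.length);
		return buf.array();
	}
}
```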
So I will need a good way to have an output class format. However, since
testing things will be a bit unknown while I have code generation, I should
switch to working on an interpretive handling of classes. I would like the
runtime classes to be purely interpreted (the JVM classes) in a kind of
bootstrap state, however that would be very complex. However, an alternative
would be to kind of target an abstract machine of sorts that can easily handle
the logic. I could for example output to plain C code. This would in a way
test how standalone the code is. An interpreter in Java would essentially be
a JVM instance. However since the JVM is essentially the kernel, it would
need a way to set up sub-objects and processes. However the major issue is
that
the interpreter would be completely unable to call into the normal code and
access kernel space objects as if they were user-space since the objects
would be in completely different domains. However it could still work if the
interpreter based machine is simple enough. The only initial problem is that
the run-time kind of will expect all of the classes to exist and be built
into the executable. Any dependencies would also be included too. There would
just be enough classes to have a JIT and to run the launcher. I just hope that
including all of them does not complicate and bloat it for some targets that
are extremely limited in size. However, all my JARs are essentially just
476KiB on disk right now. For a wide range of systems that is small enough.
The actual size is 386KiB however (the difference is due to sector round up).
Since the blobs would essentially
have no debugging information, that would be a bit lighter. When running,
each class in its compilation state will need to access a lookup table which
refers to other classes and methods. One thing that would be very efficient
would be for there to be a completely merged table. Otherwise there are going
to be many instances of java.lang.Object for example. So I will say that when
it comes to JITOutput that it always merges all possible classes and places
them into a single index. Since JITOutput is initialized by something, the
act of closing the output could finish off anything it needs to perform.
Then this way to the JIT, all classes in a single namespace can be compiled at
once and essentially a JITted JAR would be loaded 100% at run time or at least
memory mapped. Entire programs would be precompiled and resources included
too. This would probably be the best result. Since this is the more efficient
route and allows much constant deduplication I must take it. Wasting lots of
space on duplicate copies of the same string would be pointless.
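The merged, deduplicating table could be as simple as this sketch (the class name and methods are made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * A minimal sketch of the merged table: every class name (or constant)
 * is placed into a single index, so "java.lang.Object" is stored once
 * no matter how many classes refer to it.
 */
class MergedTable
{
	/** Maps each entry to its shared index. */
	private final Map<String, Integer> index = new HashMap<>();
	
	/** The entries in index order. */
	private final List<String> entries = new ArrayList<>();
	
	/** Returns the shared index for an entry, adding it if it is new. */
	int indexOf(String entry)
	{
		Integer known = this.index.get(entry);
		if (known != null)
			return known;
		int at = this.entries.size();
		this.entries.add(entry);
		this.index.put(entry, at);
		return at;
	}
	
	/** The number of unique entries stored. */
	int size()
	{
		return this.entries.size();
	}
}
```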
So considering this, I should probably split the class and global tables to
namespaces. This way, single blobs get their own namespace. Doing it this way
would mean that
JITOutput would not need to be closed. Any and all classes
in the same namespace would be compiled together and use the same constant
data. At least this way, when programs are running as user-space they do not
need to have all the namespaces the kernel uses visible to the process. The
JVM, however, needs access to all of the default namespace areas. So
basically then a
JITNamespaceWriter is created which then has a
beginClass without a namespace. Since everything within a single JAR is
visible to each other, this namespace split makes sense. It would also make
it easier for the launcher to show the default namespace JARs.
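The per-namespace writer could then look like this sketch (the beginClass shape and the shared constant data are assumptions):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A minimal sketch of a writer bound to one namespace (a single JAR):
 * beginClass needs no namespace argument, and every class written
 * through the writer shares the same constant data.
 */
class JITNamespaceWriter
{
	/** The namespace (JAR) this writer targets. */
	final String namespace;
	
	/** Constant data shared by every class in the namespace. */
	private final Map<String, Integer> constants = new LinkedHashMap<>();
	
	JITNamespaceWriter(String namespace)
	{
		this.namespace = namespace;
	}
	
	/** Begins a class; only the class name is needed. */
	void beginClass(String className)
	{
		// Class names also live in the shared constant data.
		constantIndex(className);
	}
	
	/** Returns the shared index for a constant, adding it if new. */
	int constantIndex(String value)
	{
		Integer known = this.constants.get(value);
		if (known != null)
			return known;
		int at = this.constants.size();
		this.constants.put(value, at);
		return at;
	}
	
	/** The number of unique constants in this namespace. */
	int constantCount()
	{
		return this.constants.size();
	}
}
```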
Then this also means that the JITNamespaceWriter would target namespaces.
Then for the builder, since the
JVM and the builder will generally take an
input JAR and recompile all of it, I can instead have a generic class which
has a basic common interface for all of the JAR handling, so that namespace
recompilation is handled using the same code. Because as it stands right now,
I would need to duplicate that work in the JVM, and any changes to one copy
may diverge and potentially break the other.