Elision toki pona CLOS Mapping Experiment

This year, I seek to make progress on my machine text system, which I've named Elision. Unlike the MMC, this idea has had no trouble existing and advancing without any implementation. I've finished well enough what is effectively but a toy, a mapping of some of the ideas of Elision to Common Lisp's CLOS; knowing precisely what I want in Elision, and how to implement it, I've found Common Lisp nothing but a hindrance, preventing me from specifying types as precisely as I want, and from locking implementation details away as nicely as I desire. I'm building a statue, and the tool I favour provides only soft and flexible building materials, for which I have no want. I can undoubtedly use Common Lisp for implementing Elision, but, unusually, find it useless for the initial design I pursue.

Targeting toki pona has been another misstep. That silly little language is too useless for my mind to allow me to remember much of its small vocabulary; I've had no issue resuming my Latin studies, as I know Latin isn't useless, and it only strengthens my knowledge of other languages. Since my Latin is much too immature, I'll be much better served by building a small English dictionary to start.

I did learn something from this about the implementation of the auxiliary dictionary, which is intended to be built piecewise, as opposed to the primary dictionary, built all at once: I need to split the maintenance of ordering properties from dictionary creation. Ideally, the creation of auxiliary dictionaries would be handled while such texts are being written, using an Elision editor, and could divorce itself from a purely Elision internal representation by deferring the assignment of indices until the texts are finished. Unfortunately, building such an editor now is unrealistic.

I provide two versions of the dictionary creator function, the latter of which optimizes away character storage for words which are substrings of others; this removed explicit storage for roughly one quarter of the words, but was less of a total improvement than I'd desired. Optimizing away storage for words which are substrings of multiple others concatenated is much more complicated, and also wouldn't necessarily result in great savings. A more sophisticated algorithm would associate words encompassed by others, as opposed to searching the entirety of that storage, but a toy needn't feature the best algorithms, and dictionary creation may be rare and fast enough that greatly optimizing it in nontrivial ways is wasted effort, considering Elision expects larger machines; dictionaries may be considered cache.

I've decided to continue by collecting a list of the most common English words, and then building a batch tool, likely in Ada, capable of translating from character sequences to Elision, and from Elision to character sequences. I'll seek to implement the most basic program which resembles Elision first; following that, I'll extend it to support more, and gradually build up the higher layers. With a working implementation, the ideas will be better proven, and I can later build an approximation of the editor. A few rough sketches follow.
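
As an aside on the CLOS complaint above, one facet of it can be shown directly: slot types in CLOS are mere declarations whose violation is undefined behaviour, and nothing obliges an implementation to check them. This class is hypothetical, not from the toy:

    ;; The :type slot option is a declaration, not an enforced
    ;; constraint; an implementation is free to ignore it entirely.
    (defclass elision-word ()
      ((index :initarg :index :type (unsigned-byte 16))
       (characters :initarg :characters :type simple-string)))

    ;; Depending on the implementation and its safety settings, this
    ;; nonsense may be accepted without complaint:
    ;; (make-instance 'elision-word :index -1 :characters 42)

The package system is similarly advisory, so hiding implementation details is a matter of convention rather than enforcement.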
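
Regarding the split between ordering and creation, here's a minimal sketch, with hypothetical names, of how an auxiliary dictionary might accumulate words during writing, with the word itself serving as its own placeholder, and defer indices to a finalization step:

    ;; A sketch only; the real design needn't use these structures.
    (defstruct aux-dictionary
      (words '())      ; words collected so far, order unimportant
      (indices nil))   ; word -> index table, nil until finalized

    (defun aux-intern (dict word)
      "Record WORD during writing; no index is assigned yet."
      (pushnew word (aux-dictionary-words dict) :test #'string=)
      word)

    (defun aux-finalize (dict)
      "Impose the ordering property and assign indices, once the text is finished."
      (let ((table (make-hash-table :test #'equal)))
        (loop for word in (sort (copy-list (aux-dictionary-words dict))
                                #'string<)
              for index from 0
              do (setf (gethash word table) index))
        (setf (aux-dictionary-indices dict) table)))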
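
Regarding the second dictionary creator, here's a minimal sketch of the substring optimization, assuming the dictionary amounts to a shared character pool plus a table mapping each word to a start and length within it; the names are mine, not the toy's:

    (defun make-word-pool (words)
      "Store each word's characters only when the word isn't already
    a substring of the shared pool; longest words are stored first,
    so shorter words can be found within them."
      (let ((pool (make-array 0 :element-type 'character
                                :adjustable t :fill-pointer 0))
            (entries (make-hash-table :test #'equal)))
        (dolist (word (sort (copy-list words) #'> :key #'length))
          (let ((start (search word pool)))
            (unless start
              (setf start (fill-pointer pool))
              (loop for ch across word do (vector-push-extend ch pool)))
            (setf (gethash word entries) (cons start (length word)))))
        (values pool entries)))

Given ("cat" "cathedral" "the" "dog"), only "cathedral" and "dog" contribute characters; "cat" and "the" become views into "cathedral".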
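
And the planned batch tool, though likely to be Ada in the event, has roughly this shape; a sketch in Common Lisp, with unknown words merely passed through, where the real tool would route them to the auxiliary dictionary:

    (defun encode-text (words dictionary)
      "Translate a list of WORDS into Elision indices via DICTIONARY;
    unknown words pass through unchanged."
      (mapcar (lambda (word) (gethash word dictionary word)) words))

    (defun decode-text (indices reverse-dictionary)
      "Translate Elision INDICES back into words."
      (mapcar (lambda (index) (gethash index reverse-dictionary index))
              indices))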