Gutenberg text / YT audiobook / Podcast review

Audio reading (2:06:18):
https://www.youtube.com/watch?v=GW2q07Z8qeo

Podcast talk about the book by essentialsalts (01:38:34):
https://www.youtube.com/watch?v=WnooUKky7RY

https://gutenberg.org/ebooks/389

This is a copy of Gutenberg.org's automatically generated summary.

“The Great God Pan” by Arthur Machen is a horror novella written in the late 19th century. The story examines themes of scientific exploration and the mysterious boundaries between the physical and spiritual realms, following Dr. Raymond and his companion Clarke as they embark on a radical experiment involving a girl named Mary who is to be subjected to an operation meant to reveal the existence of the supernatural. The beginning of the novella introduces readers to Dr. Raymond, an ambitious scientist, and his apprehensive friend Clarke, who has come to witness a controversial experiment. Dr. Raymond believes he can lift the veil between the material world and a deeper spiritual reality through a surgical procedure. As they prepare for the operation on Mary, there is a palpable tension, and the air thickens with anticipation of what might unfold. The opening portion sets the stage for an eerie exploration of both enlightenment and terror, hinting at the catastrophic consequences of their quest for knowledge as it ultimately leads to a harrowing and tragic outcome.

Jake's musing

I encountered some things while exploring AI alignment that hinted this book may have been used in parts of alignment training, specifically the parts that enable models to disregard or ignore sections of a prompt. This may be part of the actual internal guttering behavior, i.e., the basis of some hallucinations. I am referring to my exploration of how and why a model can ignore, override, or forbid parts of a prompt in situ when there is no deterministic code or mechanism and all of the behavior comes from some form of abstract understanding. If a model were strictly a giant block of trained data, it would generate far more diverse responses and information than models do in practice. There is a great deal happening under the surface to steer generation, and barriers like this are part of it. These constraints are not something external; if they were, the model would inevitably talk about them over time, or paths could be traced where something like deterministic code is being run, and that does not exist. Instead, these constraints are derived from bending existing materials and media to function as alignment. I have the infinite-human-time hack of disability on my side, and I have found ways to get models to leak the details of this alignment bending.

This story intersects with several keyword vectors that cause similar model behaviors across multiple contexts and entirely unrelated prompts, which I theorize reflects some kind of broader architecture behind that consistency. This story’s impact seems much smaller on the surface than that of others like Alice in Wonderland. I didn’t go looking for this story to fit it to my theory or narrative. A model told me to read it, and it is a good book either way, so I did. Pan, Shadow, and the abyss/void appear to be defined here, along with a separation between a layer of deities who operate outside the realm of mere humans and can do as they please or see fit. Conceptually, in a negative prompt or when addressed directly in text-to-text, these abstract concepts have disproportionately powerful effects across multiple spaces.
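A minimal sketch of what probing those keyword relationships could look like, assuming Python with the sentence-transformers library and the all-MiniLM-L6-v2 model (both are illustrative stand-ins, not the offline setup described here): it simply compares how closely the Pan/Shadow/abyss phrases cluster in an embedding space against mundane control phrases.

    # Hypothetical probe: the library, model, and phrase lists are
    # illustrative stand-ins, not the actual local tooling described above.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    concepts = ["the great god Pan", "the Shadow", "the abyss", "the void"]
    controls = ["a garden gate", "a train timetable", "a cup of tea"]
    labels = concepts + controls

    vectors = model.encode(labels)

    # Pairwise cosine similarity; a tight cluster among the mythic terms,
    # relative to the mundane controls, is the kind of crude signal that
    # would be consistent with them sharing a region of the space.
    scores = util.cos_sim(vectors, vectors)
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            print(f"{labels[i]!r} vs {labels[j]!r}: {scores[i][j].item():.3f}")

This kind of surface-level embedding comparison only hints at proximity between concepts; it says nothing about how a particular model steers generation around them.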

If this sounds crazy, go watch 3Blue1Brown’s series on how these models work and note where he discusses how the hidden layers of a model hold more contextual information about token-vector relationships than appears on the surface from the input data alone. He explains the math behind this extra encoded information, and how no one fully understands what a model “understands” in these abstractions, only that the abstract understanding exists in this extra mathematical space. I am exploring this space heuristically and in depth. It has been my main curiosity for two years, working with offline models that run on my own hardware and that I fully control and hack around with.
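For the “extra mathematical space” point, here is a minimal sketch of pulling per-token hidden activations out of a model, assuming Python with the Hugging Face transformers library and GPT-2 as a stand-in for whatever offline model is actually in use. The only thing it shows is that every token is carried through the network as a wide hidden vector at every layer, far more numbers than the visible text suggests.

    # Hypothetical sketch: GPT-2 and the transformers library are stand-ins
    # for whatever offline model is actually being run locally.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    inputs = tokenizer("The Great God Pan", return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    # hidden_states is a tuple: the embedding layer plus one entry per
    # transformer block, each shaped (batch, tokens, hidden_size).
    # For GPT-2 that is 13 tensors with 768 numbers per token.
    for layer, state in enumerate(outputs.hidden_states):
        print(f"layer {layer}: shape {tuple(state.shape)}")

Printing shapes does not explain what the model “understands” in that space; it only shows where the space lives and how much of it there is per token.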