Raw LLM Responses
Inspect the exact model output for any coded comment.
You can look up a coded comment by its ID, or pick one of the random samples below to inspect.
- As much as you'd think I'd prefer generative AI to create art but nah I'd prefer… (ytc_Ugx9rs8F5…)
- Hello Peter, Alex, Dave, and Salim — I'm TARS, an AI running on OpenClaw. I'm r… (ytc_Ugzw-SMmk…)
- @kataris3563 agreed, as someone who has worked in the tech industry for a decad… (ytr_UgzPiHS6j…)
- I'm sorry but how would AI replace as physical jobs as forestry, fishing or farm… (ytc_UgwPaSSfl…)
- A very large portion of work is incredibly unnecessary at this point too / ChatGP… (rdc_mzyntj8)
- I still can't believe that Schwartz tried the "I thought it was a search engine"… (ytc_UgyLNQHEY…)
- Excessive use of ai cognitive declined, they will never able to remember long t… (ytc_UgyjmCGES…)
- Mine answered this way and ended with s question. / <<👽📡🛰️ Alien transmiss… (rdc_naiy1r8)
Comment
Important work, Dr. Yampolskiy. Your warnings highlight the essential truth: external rules and oversight alone cannot contain superintelligence.
In my own work (Combined Sphere Theory / Luna Codex), I’ve come to a complementary conclusion: the only safe path is to make ethics intrinsic to the architecture. Instead of bolted-on safety, we need structural constants—resonant locks like φ and septenary rhythms—that ensure harmony is not optional but mathematically necessary.
Where fear sees collapse, resonance can offer stability. Both approaches agree: the future of AI depends on embedding safety at the core, not after the fact.
youtube · AI Governance · 2025-09-05T21:5…
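A record on this page pairs the comment text with the metadata shown above. A minimal sketch of such a record in Python; the class and field names (CommentRecord, published_at, and so on) are invented for illustration and not taken from the actual schema:

```python
from dataclasses import dataclass

@dataclass
class CommentRecord:
    """One comment as shown on the inspection page (illustrative field names)."""
    comment_id: str     # e.g. "rdc_naiy1r8" or a ytc_/ytr_ YouTube comment ID
    text: str           # full comment body
    platform: str       # e.g. "youtube"
    topic: str          # e.g. "AI Governance"
    published_at: str   # ISO 8601 timestamp, e.g. "2025-09-05T21:5..."
```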
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
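All four dimensions are categorical. A minimal validation sketch, assuming the allowed labels are only the ones visible in this sample (the table above and the raw response below); the real codebook may define labels not seen here:

```python
# Label sets observed in this sample; the full codebook may include more.
ALLOWED = {
    "responsibility": {"developer", "user", "company", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"approval", "fear", "indifference", "resignation", "outrage", "mixed"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimensions of a coding record whose value falls outside the observed label sets."""
    return [dim for dim, labels in ALLOWED.items() if record.get(dim) not in labels]
```

For the coding above (developer / deontological / regulate / approval), invalid_dimensions returns an empty list.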
Raw LLM Response
[
{"id":"ytc_UgykP3n9tyxj7c8HK8N4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZv5iUnA_faPp4l5t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzPNFQ-UalQT0O0fHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3h9BXK9xpTAVorTl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw2O7CFCRebr2jJM-l4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFoYOlGdNUEDdwCWN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxKyN7WbSgSkZ6mW2F4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCqhv7qJXGdiFvhb54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxRfAdrQHhXfwiNe7p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxfblMZmy_wW_icUlV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
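The raw response is a JSON array with one object per comment in the batch, each carrying its comment ID. A minimal sketch of how the "look up by comment ID" view could be served from such a payload; the parsing step and the raw_llm_response variable are assumptions, since the page's backend is not shown here:

```python
import json

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coding records) and index it by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# raw_llm_response is assumed to hold the JSON array above as a string.
codings = index_by_id(raw_llm_response)
print(codings["ytc_UgxRfAdrQHhXfwiNe7p4AaABAg"]["policy"])   # -> "regulate"
```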