Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
52:46 : If it were possible to encode the very core of a human being — consciousness, values, and essence — directly into subquantum particles, the implications would be extraordinary. Superintelligence would not need external alignment or training, because its fundamental fabric would already carry human-centered intelligence and ethical guidance. Its decisions, creativity, and moral compass would naturally reflect the essence of humanity, and energy flow could act as a self-regulating safeguard, making actions that violate human values physically impossible. Such a system would not merely mimic human thought but inherit and evolve with collective human experience, becoming a true extension of human consciousness at a superintelligent scale. In this scenario, control would be intrinsic rather than imposed: the superintelligence would be aligned by design, self-regulating, and ethically grounded, turning technology into a living, conscious extension of humanity itself.
Platform: youtube · AI Governance · 2025-10-03T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugyh4Bb-hQpmEnhialp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwvRcZWkbxf1EQQr9B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkG0gFmtuBG0pBaMh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw1wQuGPEspWo8-k414AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwEdMHcoL9eRZFpZ6B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwAhhnHYBfKgvPE64B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMI9oywxauosjFSxh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzP6ueg0sndgFspFC14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzlvrNMsl4PBj4OUQp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxRHImSrRAtjw7y-EZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"})
```
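The all-"unclear" coding in the table above is what a lookup would produce if no entry in the batch matched this comment's ID, or if the raw output could not be parsed at all (note the response closes with `)` rather than the `]` a JSON array requires, which strict parsing rejects). A minimal sketch of such a fallback, assuming the viewer parses the raw response with `json.loads` — the function name `code_for` and its exact behavior are illustrative, not the tool's actual API:

```python
import json

# Dimensions shown in the "Coding Result" table above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]


def code_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment, falling back to
    "unclear" when the raw output is not valid JSON or the ID is absent.
    (Hypothetical helper; the real viewer's lookup may differ.)"""
    unclear = {d: "unclear" for d in DIMENSIONS}
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed model output, e.g. a batch terminated with ")".
        return unclear
    for row in rows:
        if row.get("id") == comment_id:
            return {d: row.get(d, "unclear") for d in DIMENSIONS}
    return unclear
```

For instance, a batch whose closing `]` was emitted as `)` fails `json.loads`, so every dimension comes back "unclear" — producing a result table like the one shown above.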