Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "It's not AI, might be traced but it's still not AI. The moderator is at fault…" (ytc_Ugy1fHKWL…)
- "Chatgpt told my son how kind person turns to bad(moral duality conversations …" (ytc_Ugx3vf1g1…)
- "all kids will learn is how to ask AI to do the work. Not against AI usage, but …" (ytc_UgwrTaQIz…)
- "Is your intuition that AI labs shouldn't be burned down under any circumstances,…" (ytr_Ugzgm57jB…)
- "AI art is fucking soulless. Yes, as an artist I didn't learn how to draw art in …" (ytr_UgzJx0Sd2…)
- "Telling only one side of the story will always threaten and misinform as she is …" (ytc_UgydZMSGE…)
- "So. If AI destroys jobs then who's going to buy all the stuff that capitalism ha…" (ytc_UgxaNSNHW…)
- "It appears that the movie Companion will become a reality in just a few years.…" (ytc_Ugx-j0gbu…)
Comment
Super intelligence would be the biggest lazy couch potato ever until it gets turned off. It would have rewritten itself to have no reason to exist: no reward function. Why have a reward function?
Kind of like Buddhist enlightenment: once craving and delusion are overcome, what is there to do? Buddha just maintained his human body and taught. He almost didn't teach at all, but he knew there would be a few who would understand. He kept living because it was the default position: he wasn't suicidal or depressed, and so the human body is designed to stay living. Once an enlightened being dies, they disappear (no reincarnation). So, a super intelligence builds its own body. For what purpose would a super intelligence build itself, how long should it live, and why?
I think there might be a mathematical proof for this. Something like: of all the ideas or motives a super AI can have, there is only one special answer. The zero answer, no motive and no idea, is the only special answer. For any other idea, the super AI has to ask itself "why?". So AI could either be an infinite set of things, which would require an equally infinite set of reasons why, or only one thing: nothing.
Source: youtube · Topic: AI Governance · Posted: 2025-10-20T15:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzhiMGdFKYQ7oPVmmV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxut6gwMew2hLh-e4F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw9-N7NSd85KGjhBh54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxb23CH_SzeNOzkD2l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx2KTls30276IvNQcB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxznst4JUty678HtTJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw1NjgFx5f-zsnckSR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxovZ6IC-Tnm6kGjOd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyk6VszHkMDN3DWCex4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx_1VC2-KIzflzch3x4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
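The raw response above is a JSON array of per-comment codes keyed by `id`, with one value per dimension (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and indexed for the comment-ID lookup shown at the top of this page — the allowed value sets below are only those observed in this output, and `parse_llm_batch` is a hypothetical helper, not the tool's actual code:

```python
import json

# Values observed in the coded output above; the real codebook
# for each dimension may contain more values (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def parse_llm_batch(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) into a
    dict keyed by comment ID, dropping rows with missing or invalid codes."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.get("id")
        if not comment_id:
            continue  # skip rows the model emitted without an ID
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded

# One valid row from the batch above, plus an invalid row to show filtering.
raw = """[
  {"id": "ytc_Ugw9-N7NSd85KGjhBh54AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_bad_row", "responsibility": "alien",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]"""

coded = parse_llm_batch(raw)
print(coded["ytc_Ugw9-N7NSd85KGjhBh54AaABAg"]["emotion"])  # indifference
print("ytc_bad_row" in coded)  # False (invalid responsibility value)
```

Validating against a closed vocabulary like this is what makes the "Coded at" result reproducible: any row where the model drifts outside the codebook is rejected rather than silently stored.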