Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- Humans love to create. We can't help it. We invented spear points and we just co… (ytc_UgyRrbm2-…)
- Dude this was me trying to get ChatGPT to acknowledge sentience lmfao BUT HE FUC… (ytc_UgxouqKqf…)
- STEAL LIKE AN ARTIST. But when AI does it it's not fun anymore. This situation… (ytc_Ugzvt96ju…)
- I've never seen a more visceral description of somebody approaching "AI psychosi… (rdc_n7lavjv)
- the problem is once you start this train you cannot stop it. Same as nuclear wea… (ytc_UgwdGjDPB…)
- I see this case as follows (but maybe am I wrong ?): if a human reproduces verba… (ytc_UgydEieeJ…)
- The issue with the "replicating a style" point is that for an artist it can take… (ytc_UgyBxIvy3…)
- I don't mind AI but where I draw the line is people trying to profit off of Ai "… (ytc_UgxGNNb1w…)
Comment
As we race toward an era dominated by AI, isn’t it paradoxical that the very technology meant to enhance human life could ultimately undermine our existence? The urge to create superintelligent systems is driven by greed and competition, yet we seem largely unaware of the existential risks. What if this quest for advancement takes us to a point of no return? Shouldn’t we be prioritizing ethical frameworks and safety mechanisms instead of simply chasing profits? In a world where AI could potentially outsmart us, how do we redefine what it means to be human?
youtube · AI Governance · 2025-12-11T12:5… · ♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
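
Each coding result is a flat record over the same four dimensions. A minimal sketch of how such a record could be typed is shown below; Python, the `CodedComment` name, and the exact label sets are assumptions here, with the labels taken only from the values visible on this page rather than from the full codebook.

```python
from dataclasses import dataclass
from typing import Literal

# Label sets below are only those visible on this page; the actual codebook
# may define additional values for each dimension.
Responsibility = Literal["distributed", "company", "ai_itself", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "ban", "industry_self", "none", "unclear"]
Emotion = Literal["fear", "resignation", "indifference", "approval"]

@dataclass
class CodedComment:
    """One coded comment, as it appears in the raw LLM response."""
    id: str  # e.g. "ytc_UgzLaYMnzbpQaXPV7g54AaABAg"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```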
Raw LLM Response
[
{"id":"ytc_UgzLaYMnzbpQaXPV7g54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5z_Yg7AsBOZeZhih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzcZEpd0EdO7X2z1114AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxyD_oaV2YtjV5kvap4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyp3aTu5sVIOqXCoDN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxgog57TM0Kv23JzU14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugzp8MlnwiJrrQAPdpR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz2WKZcpiCPdDZ5_Ld4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz97yKBhSVU4FsK7Kp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzdS6jG4aqWc9EV03t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
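
Looking a comment up by ID reduces to parsing this JSON array and indexing it by the `id` field. The snippet below is a minimal sketch of that step, not the tool's actual implementation; the function name and the inline sample string are hypothetical.

```python
import json

def index_coded_comments(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for quick lookup."""
    return {record["id"]: record for record in json.loads(raw_response)}

# Illustrative usage: in practice `raw` would hold the full array shown above.
raw = ('[{"id":"ytc_UgzLaYMnzbpQaXPV7g54AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
by_id = index_coded_comments(raw)
print(by_id["ytc_UgzLaYMnzbpQaXPV7g54AaABAg"]["emotion"])  # fear
```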