Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If AI is intelligent, AI won't get humanity exctint, because if it does, earth w…" (ytc_UgyEQI9nU…)
- "After all the Humans get killed off by AI, it will have nothing else to do. Then…" (ytc_UgzzWCgAV…)
- "Tesla Robot tortured a man who createed them he was electricuting them so they c…" (ytc_Ugx_UX6Zi…)
- "These people are the ones that'll be subscribing to alpha test runs of The Matri…" (ytc_UgxhQN9De…)
- "Imo all art needs at least 2 things to be "real" art. Intention to tell somethin…" (ytc_Ugy-LtM29…)
- "Until it can manifest in the material world and create some kind of physical exe…" (ytc_UgxtYiHGk…)
- "1 person a long looong time ago, well before computers and cell phones had alrea…" (ytc_UgyV_3BIC…)
- "AI 'art' is like if people stop having friends, kids, or pets because "why put t…" (ytc_Ugx7ibJwW…)
Comment
Even if AI superintelligence were banned globally, some countries could still develop it in illegal facilities or disguise their projects as safe, non-superintelligent research for scientific purposes. This situation would be similar to the development of atomic weapons despite international bans. In other words, AI has already been born, and it’s impossible to fully control or prevent the development and use of superintelligence worldwide.
youtube · Cross-Cultural · 2025-11-02T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzbC8Gm-aMTo-9CpP54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzi4cnrY_Feqp6bilt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzTBwI-TrW-bDrWSBB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw8Ew6k5P708wA8XSp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwdLY663s2j2Mopgeh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy1YOJmPOEPTQqXjDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzSeKL9wdAkbydwnkp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx-TCTyJjYXU4ZQGmh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwph404pC8WFUAn3e94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwpouUbKjO1j1i4wyR4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"indifference"}
]
```
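The "Coding Result" table above is the record from this JSON array whose `id` matches the inspected comment. A minimal sketch of that lookup, assuming the raw response is always a JSON array of per-comment objects with the four dimension fields shown (`lookup_coding` and `DIMENSIONS` are hypothetical names, not part of the dashboard's code):

```python
import json

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Raw LLM response: a JSON array of per-comment codings (excerpt from above).
raw_response = '''[
 {"id":"ytc_UgwdLY663s2j2Mopgeh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy1YOJmPOEPTQqXjDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''

def lookup_coding(raw, comment_id):
    """Parse the model output and return the coding for one comment ID."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Keep only the expected dimensions, dropping any extra keys.
            return {dim: record.get(dim) for dim in DIMENSIONS}
    return None  # ID not present in this batch

coding = lookup_coding(raw_response, "ytc_UgwdLY663s2j2Mopgeh4AaABAg")
print(coding)
# → {'responsibility': 'distributed', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

Filtering to the known dimensions guards against the model emitting stray keys; a production version would also validate each value against the schema's allowed labels.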