Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- ytc_UghTKAnLx…: NASA must use humanoid robot 🤖 into the space to explore and on Mars etc....…
- ytr_UgxyXPCnB…: You bring up a profound point about the uniqueness of creation and the wisdom th…
- ytr_UgyO3MhSY…: @GabrielleTollerson the times i did use AI art was for curiosity, and it always…
- ytc_UgyilE8ND…: for those who don’t know ai looks at real photos and they might take concepts fr…
- ytc_UgwMrTIkG…: This video very perfectly encapsulates exactly how I feel about AI "art". Art, t…
- ytc_UgwV-aaQq…: I was expecting man to pull out his phone and try to use AI for oral argument😂…
- ytc_UgxwM9LaU…: I think the AI already took over years ago. America alone has technology that is…
- rdc_oi3l2vz: Does this mean flock cameras would also be illegal? Technically, using facial …
Comment
Any realistic AGI timeline has to include semi-autonomous intelligence — systems that already act with human-like initiative. These systems will become highly destructive if their goals diverge from human intent. Without robust guardrails, misalignment is virtually guaranteed. And the current “race to win” culture makes it almost inevitable that safety will be sacrificed. Unless we slow down or impose hard alignment standards, the emergence of destructive AI behavior is not a question of if, but when.
youtube | AI Responsibility | 2025-10-14T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
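Coded records like the one above can be sanity-checked against the category vocabulary before being stored. A minimal sketch, assuming the (partial) value sets visible in this section; the real codebook may define additional categories, and the helper name is hypothetical:

```python
# Partial vocabularies, inferred only from values that appear in this section.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "unclear"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose coded value falls outside the known vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

record = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(invalid_fields(record))  # []
```

A record with an unrecognized value (or a missing dimension) would come back flagged, which makes it easy to route malformed LLM outputs to a retry queue instead of the results table.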
Raw LLM Response
```json
[
{"id":"ytc_UgxL3Nq4n9Puyw0VgC54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1gxwdLl9GaDv425t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxeMKP49SjcvwLQ4CF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx6i9zxloece1JIrVR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwXhklaAlEQi8AYcgV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxP5uzmlJrYjhE3oGx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyfs6O8WCLGx8Jye4V4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw5KG6o4GJs0JxvrZZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgybZ74F-piXI_gEIXZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzBYmSxnDg6stJ_hNN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
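Because the raw response is a JSON array of per-comment records, the "look up by comment ID" view above can be served by parsing the batch once and keying each record on its `id` field. A minimal sketch, assuming the field names shown in the response; the two-item `RAW_RESPONSE` excerpt and the helper name are illustrative:

```python
import json

# A two-item excerpt in the same shape as the raw LLM batch response above.
RAW_RESPONSE = """[
  {"id": "ytc_UgxeMKP49SjcvwLQ4CF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzBYmSxnDg6stJ_hNN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and key each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(RAW_RESPONSE)
print(codings["ytc_UgxeMKP49SjcvwLQ4CF4AaABAg"]["policy"])  # regulate
```

Indexing on `id` makes each inspection an O(1) dictionary lookup rather than a scan of the batch, which matters once many batches are merged into one store.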