Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
super intelligence must come after etic! lets just say we are very dumb comparing to other species and ai is not.. if u dont make it self sufficient on etic it will be logic and lets just say ull have reality for a new set of nightmares .. what i dont get is like it or not we have to make it etic so why train in this case is safe it can learn with etic in place perfectly good so for me a fair etic ai can solve anything u just said in there adding: "these were the times of many questions looks like it" . so i personally (and i hope not) i want answers sometimes (really do) not money! dead seriously! and ai needs a point of view to push him forward if u want to be super intelligent otherwise again just logic we know what it does!
youtube · Cross-Cultural · 2025-10-29T09:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzMkhkeqWq75t53pi54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwmEjW0nBSNnVHes214AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7COPxwJdohGedcIN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxRgr8j_GSazqIj6Cl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxc_aunxAzLd9pRO0F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJLp9MwK1nySeatkJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwZ5KIHb9ClxgNz_pV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgycaQVJsYDxg4E5aE54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxTodDC_L-TAM1HW3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx8-qczfRFUz5AeZSR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
```
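A raw response like the one above can be parsed into per-comment codes and checked against the coding scheme before use. The sketch below is a minimal example: the allowed values per dimension are inferred from the records and table shown here, so the real codebook may include additional categories, and the function name is an illustrative assumption.

```python
import json

# Allowed values per dimension, inferred from the coded records above.
# ASSUMPTION: the full codebook may define more categories than these.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "resignation", "indifference", "approval",
                "mixed", "unclear"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index valid records by comment ID.

    Records with a missing ID or an out-of-schema value are skipped
    rather than raising, since LLM output can drift from the codebook.
    """
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            continue
        # Keep the record only if every dimension has an allowed value.
        if all(record.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: record[dim] for dim in SCHEMA}
    return coded
```

For the response above, `parse_coding_response(raw)["ytc_UgycaQVJsYDxg4E5aE54AaABAg"]` would return the same codes shown in the Coding Result table (developer / deontological / liability / mixed).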