Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_Ugwz0I1HX…`: "Is it AI? Or is it just I ??? At what point is it no longer artificial? We ourse…"
- `ytc_Ugy8mc8lU…`: "Honestly, I'd rather let AI win. We shouldn't look at it as an existential threa…"
- `ytr_UgyL7RKJZ…`: "Can you say, \"stupid greed\"? After all, robots don't buy stuff. If AI and robots…"
- `rdc_n7sqt11`: "That's the entire point. It doesn't feel. It doesn't care. It doesn't know. But …"
- `ytc_UgwOVuL2q…`: "the most stupid thing ai bros say is that drawing is too hard. ive been learning…"
- `ytc_UgxZEvNHT…`: "Well the AI isn't the problem in the first place, the AI builds up on an base of…"
- `ytc_Ugz3n7qnG…`: "Everyone suck at making art at first. With Ai people will not even bother learni…"
- `rdc_j8vtipr`: ">Bonus: It Makes You Worse to Act Like This It only makes you worse *if you …"
Comment
Fascinating discussion! The idea of AI achieving consciousness is both exciting and a bit daunting. It's clear that defining and measuring consciousness is a complex challenge. I'm curious to hear your thoughts on the potential ethical implications if AI were to become sentient. How would we ensure that it's treated with respect and dignity?
Source: youtube · AI Moral Status · 2024-10-15T11:0… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwlb-IeviiiG-JYvWt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy4ydh3Tt8gCeOoBYl4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx3wkEYr2gIk9IhcAx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxiKroyV9PVa5m9mYd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyP7cvdOfe2v-tSJQ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyU0iPhe5nnI0-IQSF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzznqLcuzAGF4O0YNN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxJMQnA-lSQlWcM7H54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxB_8Sh6opgTjdns3x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxbU8eIY3NQpzHHepJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
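As a minimal sketch, a response like the one above can be parsed and validated before the codings are stored. The allowed value sets below are inferred only from the values that appear in this sample; the real coding scheme may permit additional categories, and the `validate` helper is a hypothetical name, not part of the tool shown here.

```python
import json

# Allowed values per dimension, inferred from this sample response only;
# the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "mixed", "indifference", "outrage",
                "approval", "resignation"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with out-of-scheme values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_X","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"mixed"}]')
coded = validate(raw)
print(coded[0]["policy"])  # regulate
```

Validating up front keeps malformed or hallucinated category labels out of the coded dataset, so a bad batch fails loudly instead of silently skewing later analysis.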