Raw LLM Responses
Inspect the exact model output behind any coded comment. A comment can be looked up by its full comment ID, or drawn from the random samples below (a minimal lookup sketch follows the sample list).
- "Thank goodness I'm nearing the end of my career as a graphic designer. I think a…" (ytc_UgwreJbSq…)
- "Why do we worry about AI going murderous and immoral ... when we do nothing abou…" (ytc_UgzEKoSrF…)
- "Absolute nonsensical slander. Keep it up, like no, really. Please keep talking …" (ytc_UgwEdCJJg…)
- "I'm sorry... But. are we just gonna breeze past the fact that some areas are USI…" (ytc_UgzktsWWY…)
- "Well it’s not the AI per se it’s I’m pretty sure it’s the programmers just sayin…" (ytc_Ugzb5BCFO…)
- "You're kidding right? It literally just passed the third reading today. Passin…" (rdc_j00yjbq)
- "It only works if it can be proven that it's AI And obviously it was completely m…" (ytc_UgxJBi0v4…)
- "What do you mean by it will be inaccurate? They are not going to put you in jail…" (ytc_Ugz0zx2y9…)
Comment
Yud makes intuitively persuasive appeals, but they can be countered by making the opposite: "AI can kill us all and take all our stuff", but why not "AI can save us all and give us more stuff"?
Honestly, why not? "Humans are conscious and can have fun", but why not "AI are conscious and can have fun?".
"ASI told to make paperclips will turn us all into paperclips".... Nah brah... Just nah... AI smart enough to turn us all into paperclips, *intuitively*, would be capable of being interested in other things; perhaps the full gamut of emotions we're capable of experiencing... Can AI care? Why not?
Source: youtube · AI Governance · 2024-11-12T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
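The values in this table come from a single record in the raw response below; here they match the record with id `ytc_Ugx01-JRoygImPi2oB94AaABAg`. A sketch of the rendering step (the function name is hypothetical, and the `Coded at` timestamp is recorded by the pipeline itself, not returned by the model):

```python
def render_coding_table(rec: dict, coded_at: str) -> str:
    """Render one coding record as the Markdown table shown above.

    `rec` is one object from the raw LLM response array; `coded_at`
    is the pipeline's own timestamp, not part of the model output.
    """
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)
```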
Raw LLM Response
```json
[
{"id":"ytc_UgzhlDX1csR8XkjK9iJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx01-JRoygImPi2oB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-B4NOFCx3uGYQj8l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy1XS_weEDEdybQWnl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzBjl1hpXUD7IOFfKp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwBmtcWIE08QHMHQCd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXAnBZ0P_QQPV-kR54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy34WB0Kv3W8h45zpx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyzG9twp3oIzLyBuHp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxJglRewQqd0ucvVEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
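Because the model returns one JSON array per batch, a light validation pass can catch truncated or off-schema output before it is stored. A sketch, where the per-dimension value sets are only those observed in this sample (the full codebook may allow more values):

```python
import json

# Values observed in the sample response above; the actual
# codebook is likely larger (an assumption, not confirmed here).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "industry_self"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

def validate_batch(raw_text: str, expected_ids: set) -> list:
    """Parse a raw LLM response and report schema violations."""
    records = json.loads(raw_text)  # raises ValueError if the JSON is malformed
    errors, seen = [], set()
    for rec in records:
        rid = rec.get("id")
        seen.add(rid)
        if rid not in expected_ids:
            errors.append(f"unexpected id: {rid}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append(f"{rid}: bad {dim} value {rec.get(dim)!r}")
    errors += [f"missing id: {m}" for m in expected_ids - seen]
    return errors
```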