Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "If AI can outsmart the hackers then wahoo. (I’ll pretend the rest (the end of th…" (ytc_Ugy5gWDGt…)
- "As an AI influencer, architect, and artist—all AI, by the way—smart people are t…" (ytc_UgzRJOeA9…)
- "11:03 Exactly—These AI bros are removing the purpose to enjoy life. Why they cho…" (ytc_UgyB41Nv9…)
- "Hi Prince, you got the right answer. Kudos. The contest is over and winners have…" (ytr_UgycgSX3M…)
- "Yep, I’m a recent graduate. Literally graduated as the pandemic was getting star…" (rdc_gkqmuj9)
- "@deskatcat533 I have found that it isn't what you know; but who you know, so I …" (ytr_UgzJ5KLVV…)
- "There is a fallacy in this logic. If a businessman is introducing AI in his comp…" (ytc_UgyYELmcc…)
- "I don't understand, for sure the Ai tools Amazon and others are to use are not b…" (ytc_Ugx4gMjaS…)
Comment
"X-risk, short for existential risk, refers to the potential for highly advanced artificial intelligence to pose an existential threat to humanity through unintended consequences or goal misalignment."
Seems like something we should rush towards without guardrails.
Platform: youtube · AI Moral Status · 2025-06-08T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx91JctrdDFzElx0GB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwIxVfD3UGWpiq1fKV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzlHL5N7RFfZGQQizV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzurWab6cvUUM9GtZp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwpsrwcDuB9wDMRxuh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz86Iwim1NR6PBcFT54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDp3WxoSksglHft654AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwcQNMZ6B65o8mq96V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy26w05WZDlEIJUITR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyMG3U1uX9e0LS3lol4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
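The lookup flow shown above (a batch of coded records, each keyed by comment ID and carrying the four dimensions from the Coding Result table) can be sketched in Python. This is a minimal sketch: the function name `index_codings` is hypothetical, the input is truncated to two records from the raw response for brevity, and the required-key check assumes every record must carry exactly the four dimensions seen in the sample.

```python
import json

# Truncated copy of the raw batch response shown above (two records).
raw = '''[
  {"id": "ytc_Ugx91JctrdDFzElx0GB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwIxVfD3UGWpiq1fKV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# The four coding dimensions observed in the sample, plus the comment ID.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_response: str) -> dict:
    """Parse a batch coding response and index records by comment ID.

    Raises ValueError if a record is missing any expected key, so
    malformed model output fails loudly instead of silently.
    """
    records = json.loads(raw_response)
    by_id = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        by_id[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgwIxVfD3UGWpiq1fKV4AaABAg"]["emotion"])  # fear
```

With the records indexed this way, the "Look up by comment ID" view is a single dictionary access per ID.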