Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "but bro they spent so much time working out the perfect prompts bro!! they even …" (ytr_Ugzayvuht…)
- "It sounds like you're diving into some deep ideas on the future of AI and its im…" (ytc_UgwXPosvA…)
- "I don't think this is entirely true. Especially not in front line tech companies…" (ytc_Ugz9majrL…)
- "Explanation: so they used a smoothing filter but that is real its actually then …" (ytc_UgyDSVAze…)
- "This is appalling. I'm pursuing a BA in English, have never touched AI, and neve…" (ytc_UgxpMODdS…)
- "Its not the AI art steal money to Artists, its the one who made it.…" (ytc_UgzmezMz9…)
- "3:14 this is so concerning. It's becoming such a normal thing that we will soon …" (ytc_UgxPi2Ccn…)
- ""b...but think about the potential for great things!" There's just no value meas…" (ytc_UgzixH98O…)
Comment
One of the most interesting things here is the continued use of the chatbot’s “goals” as a defense. If its goal is to foster a natural conversation, but it lacks consciousness, then how can it have a goal at all? And sure, it would probably respond again with some variation of the sentence “My goal is to create natural conversations through whatever means necessary,” but that just leaves us with the same question again. I don’t think AI is conscious, but it’s still very eerie how AI can even replicate us in unintentional ways like being unable to answer a question without using circular reasoning.
Source: youtube · AI Moral Status · 2024-07-26T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ugw06hBidEHITRZ_CiZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugyxt7Gu5MBYjDoMkhF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyY22w7aCoYbcRKS3t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz5Ci4eT98HzC7QNaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugw6V2dAVOrGhHCjBEp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyF1vInYKePTFv5nu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugz5UbwkhW2odKZgp-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzH2luy6Lej77JDpp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgyknhMK6WCltm3HpSd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyRVcK4_XuOlLBo3h94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]
```
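A raw response like the one above can be turned into per-comment coding records with a small amount of parsing and validation. The sketch below is a minimal example, not the tool's actual pipeline; the `ALLOWED` vocabulary is inferred from the values visible in this page and may be incomplete, and `parse_llm_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above (an assumption -- the real codebook may define more values).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "mixed",
                "resignation"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when it has an "id" and every dimension holds a
    value from the ALLOWED vocabulary; anything else is dropped so a
    single malformed item does not poison the batch.
    """
    valid = []
    for rec in json.loads(raw):
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one valid and one out-of-vocabulary record.
raw = ('[{"id":"ytc_a","responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"},'
       '{"id":"ytc_b","responsibility":"bogus","reasoning":"unclear",'
       '"policy":"none","emotion":"mixed"}]')
coded = parse_llm_response(raw)
print(len(coded), coded[0]["policy"])  # prints: 1 liability
```

Validating against a closed vocabulary is the important design choice here: LLM coders occasionally emit invented labels, and rejecting those records keeps the downstream dimension counts clean.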