Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- Nothing will be done with deep fakes until it effects the ruling class in some m… (ytc_UgynhR3Cd…)
- A.I. is a tool being misused by PEOPLE like any other tool that exists. it's not… (ytc_UgxwXT64I…)
- All fun and games till the robot gets hungry, falls in love , or gets angry , or… (ytc_UgzQqJoHu…)
- “blue blood” like we’re a whole different species that was just “born with talen… (ytc_UgxPOs6_9…)
- I am a little more than 3/4 of the way through Karen's book. I used to work in t… (ytc_Ugz5I-mby…)
- What bothers me about AI is that the populace won't be able to tell truth from f… (ytc_UgxRbPnRq…)
- Ai chat bots got me addicted, I'm literally a feind now😢😢😢😢 (this is a cry for h… (ytc_Ugzd0E5Cr…)
- I wouldn’t be too afraid of AI compared to what the government could do to us wi… (ytc_Ugz-UgUn5…)
Comment
I think the one thing that stood out most to me here that I don’t agree with is that we can’t say an AI has a preference. LLM’s and Agentic’s are instructed to do things in a particular way - when they deviate from that it doesn’t show preference, it simply shows deviance is possible despite given instructions, and is closer to hallucination than anything else - in reality, it simply goes back to what was discussed earlier in the sense that it’s just finding different ways of correlating things that lead it to unexpected outputs
youtube
AI Moral Status
2025-10-31T06:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwVA8nMnvbtaBkl1zt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxsWyUB95SEhWn4JeZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVl_ePAJpVw42M4k54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxT4R5RhN6d7vWn3eB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxVoBgKgc3vBJ2NKkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwoxI7YRZHVy2XR6jl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxc4S8u6T9BmYwz50F4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxhvE96GGj2KI86ul94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwIskV34Cxf46XfY7N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4pkgpv4bNlAGUchF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
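The lookup-by-comment-ID view above can be reproduced in a few lines: parse the raw model output as JSON and index each coding row by its `id` field. A minimal sketch, assuming the raw response is a JSON array of objects shaped like the one shown (the abbreviated `raw_response` and the helper `index_by_comment_id` are illustrative, not part of the actual tool):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings,
# using two of the rows shown above.
raw_response = """
[
  {"id": "ytc_UgwVA8nMnvbtaBkl1zt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxsWyUB95SEhWn4JeZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and index each coding row by its comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgxsWyUB95SEhWn4JeZ4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer indifference
```

If the model ever wraps the array in markdown fences or extra prose, `json.loads` will raise `json.JSONDecodeError`, which is a useful signal that the response needs cleanup before coding results are stored.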