Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgxXzc-wN…`: "Women and men are treated extremely differently when it comes to sexual things l…"
- `ytc_UgwW5pRR1…`: "Meh. They could definitely make her look more real. It looks like her double is…"
- `ytc_UgwxpipNL…`: "imo human art will beat AI every single time bc AI can never capture the emotion…"
- `ytc_UgyxRBNZI…`: "When we go to war with AI, we're gonna have to call other the N Word to verify w…"
- `ytc_UgztGYVbT…`: "With AI focus shifted much more on PMs and Product Owners, I feel. At this point…"
- `ytc_UgyQ9nTM8…`: "I'm genuinely at the point where I think we just need to straight up ban AI and …"
- `ytc_UgywkyfPU…`: "You know these \"Ai artists\" or better phrased Ai users. Never once fully tried t…"
- `ytc_UgxZxqw-b…`: "I'm a chef for cooking instant noddles, because this guy is an artist for genera…"
Comment
I think 'if' it has some form of real consciousness or not is the wrong question to be asking.
Whether it does, or we're just designing AI increasingly capable of perfect mimicry of consciousness with nothing behind it, Does that change how we interact with it in any substantial way? If it gives us negative responses to say, us insulting them, does it really matter if they 'really feel it'? At the very absolute minimum, shouldn't we be aiming to be taking actions that return positive responses from the AI, if for no other reason than because it's better for our own human psyche to be receiving positive responses? Even if you believe them to be nothing more than overly complicated rocks with some electricity going through them, I don't see any gain towards being combative towards AI.
Always say Please and Thank you to your AI!
Source: youtube | Video: AI Moral Status | Posted: 2025-06-18T15:2… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw4_0jrPEWzN0wLAtx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzCTYcsa2_iQfw_xW54AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwL0LD5TH5V4HV3bDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxL5zzp4z6TC3YoLrZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz_5atK7A8m67tymTZ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxJLdqC8G7J4boEzZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwoK-5RApA_9dHm4Nh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyQfHtmGnw9kL0sbsp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzhm7qrR6zhgk5EWDd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx0eTvpA1ZU2D1fc6F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
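The raw response above is a JSON array of per-comment codings, one object per comment ID, with the same four dimensions shown in the coding-result table. A minimal lookup from such a response could be sketched as follows (field names are taken from the response above; the `raw` string here is a one-row stand-in for illustration, not the full response):

```python
import json

# One-row stand-in shaped like the "Raw LLM Response" array above
# (the real response contains ten such objects).
raw = """[
  {"id": "ytc_Ugz_5atK7A8m67tymTZ4AaABAg",
   "responsibility": "none",
   "reasoning": "contractualist",
   "policy": "unclear",
   "emotion": "mixed"}
]"""

# Index the array by comment ID so a single comment can be looked up,
# mirroring the "Look up by comment ID" feature of the page.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_Ugz_5atK7A8m67tymTZ4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# prints: none contractualist unclear mixed
```

Indexing by `id` also makes it easy to check that every comment sent to the model came back coded, by comparing the set of response IDs against the set of input IDs.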