Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI is for the most part task oriented. 20 years ago the defense department was working on cruise missiles that would loiter over an area hunting, identify targets, drop bomblets, then find and hit its primary target. This was way before AI. Question: If someone deliberately gave AI a long range task to destroy humanity, and part of its task was to not be caught doing it, or be ordered away from its primary humanity destroy mission, could it do it? Also, would it do it? If the answer is no, I’d say we don’t have anything to worry about from AI.
youtube
AI Moral Status
2025-11-06T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzjrbpRdEv7Lp1rlWF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxhZGq48_hqIdEiDpZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxDDnbTNlcCAGuodm94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaaBTWD9UC41kLy-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxt0CQY6QpKAcWuWWd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5HukF2RcMN0WpT2h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzovZCrU5xfW1sCc1p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwxIFamg0dXdRLZxcF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzbvxVM09OXWW-Byvp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweNS0EGRRcIF0lVfB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
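Since each raw LLM response is a JSON array of per-comment codings keyed by `id`, looking up one comment's coding amounts to parsing the array and indexing it by that field. A minimal sketch in Python, using an abbreviated two-entry array modeled on the sample response above (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from that response; everything else is illustrative):

```python
import json

# Abbreviated raw batch response, modeled on the sample above.
raw = '''
[
  {"id": "ytc_UgwxIFamg0dXdRLZxcF4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzbvxVM09OXWW-Byvp4AaABAg",
   "responsibility": "company", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"}
]
'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgwxIFamg0dXdRLZxcF4AaABAg"]
print(coding["policy"], coding["emotion"])  # liability fear
```

This matches the "Coding Result" table for the comment shown here: the entry for that ID carries `policy: liability` and `emotion: fear`.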