Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Comment
not what WE value, what SOCIETY values. You pick the wrong words at the most opportune times. Higher Societal values and ethics need to be established and taught to AI prior to anything else, we don't want Trailer Trash/Donald Trump values taught to AI.
Source: youtube
Video: AI Moral Status
Posted: 2025-06-04T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxyftFdJiG-Wtb-Uyl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwhyXYdZmIkyA4n3kR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugyi7aotmTeW0hGbjFJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwaentiQjN-zkwW6nZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy8E7LoqMKAlvsv9a94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxp2O6OE7eg5EOQ5nV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugw54apVsj0EYfyaVXl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzhrihmzEGQ56AbH4d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyl3AIaLNpFZhAgKcl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzSo0aENwcAMC3AMg14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
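A raw response like the one above is a JSON array of coding records, one per comment, each carrying the four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) keyed to a comment `id`. A minimal sketch of the lookup-by-ID step, assuming exactly this record shape (the `index_by_comment_id` helper name is hypothetical, and the sample below is truncated to two records for illustration):

```python
import json

# Truncated sample of a raw coding response, in the same shape as the
# JSON array shown above: one record per comment, keyed by comment ID.
raw_response = '''
[
  {"id": "ytc_UgxyftFdJiG-Wtb-Uyl4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwhyXYdZmIkyA4n3kR4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "liability", "emotion": "indifference"}
]
'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgxyftFdJiG-Wtb-Uyl4AaABAg"]["policy"])  # → regulate
```

Indexing by `id` up front makes each subsequent lookup a constant-time dictionary access rather than a scan over the array.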