Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "The AI that predicted the guy was going to be involved in a shooting was absolut…" (ytc_UgxGu5g4L…)
- "good for Europe. The chance that a small business/w limited to no budget innovat…" (ytc_UgzSlo-5i…)
- "Ai is good when you just wanna make some cool or funny pictures to show people t…" (ytc_Ugy5EMzs8…)
- "When you draw, you find joy in learning from your mistakes and taking that lesso…" (ytc_Ugz71Hngk…)
- "They're already not enforcing the existing copyright laws in regards to training…" (ytc_Ugy83LwZd…)
- "We appreciate your perspective. Our channel, AITube, showcases the latest advanc…" (ytr_UgxM9GYA5…)
- "We do not have a society trustworthy and mentally responsible enough for AI. We …" (ytc_UgwJxowtd…)
- ""AI could do it perfectly" have you ever talked to a phone robot? They can't ans…" (ytc_Ugz81AAbq…)
Comment
My hypothesis (or hope) is that in order to train a system better than most humans you will need to train it only on the data collected from the very best humans, and just like you can't "pull" one fact you don't like from a "smart" AI and have it be smart, you won't be able to "pull" out the "good" from the better-than-most-humans AI. There must be a reason why most intellectuals aren't violent warmongers and I hope that trait is going to stick to any AGI.
| Source | Video | Posted | Likes |
|---|---|---|---|
| youtube | AI Moral Status | 2025-10-31T08:5… | ♥ 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxDnIoEo8ZbXyr5gjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgytW-GFXJSUN9uBFVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxynuJxFS_FbMo81g54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw_ATcFAHKUQNw50DN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw1mrYMBx73tZPRaQ14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx50-ofaOvFNrJrstR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyiYJcYJxaAlKvJpO14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzU1rD9uZP-NvKdFZN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw5kBX-Eb-8WMNfhZl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyityX5J7uPtWaGxG54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
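A raw batch response like the one above can be checked mechanically before the codes are stored. The sketch below is a minimal validator; the allowed category sets are inferred only from the values visible in this dump (the full codebook may contain more categories), so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the visible samples.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "outrage", "resignation",
                "approval", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems found.

    An empty list means every row parsed and every dimension held
    an allowed value.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    if not isinstance(rows, list):
        return ["top-level value is not a list"]

    problems = []
    for i, row in enumerate(rows):
        if not isinstance(row, dict) or "id" not in row:
            problems.append(f"row {i}: missing 'id'")
            continue
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append(f"{row['id']}: bad {dim}={value!r}")
    return problems
```

Running this over a stored raw response flags rows where the model drifted outside the coding scheme (e.g. an unexpected emotion label), which is useful before trusting the "Coding Result" table for analysis.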