Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwHcESqJ… : AI = infrastructure for Mark Of The Beast ….. conclusion: do not support this e…
- ytc_Ugx2rD2vo… : 6:38 you sound so ignorant. Ai has always hit a wall within the last few years a…
- ytc_UgzCBT5EH… : This is hilarious! There’s also the chaotic people who decide to torture the AI …
- ytc_UgxEL5SfY… : Thank you so much for this. I myself was so confused why artists are against AI.…
- ytr_UgzOlG3HI… : Yes it would be helpful if you're familiar with Python and some basic data struc…
- ytr_UgxBG1byk… : @owo2610 my bad. ChatGPT 4, not 4o, you're correct. However, there's a lot of ex…
- ytc_UgzL0eS_o… : AI suicide /self murder. Humans consider themselves the most intelligent living …
- ytc_UgwFfZ0-c… : So robo trucks don’t wear off caffeine and doze off at the wheel. If our cars ar…
Comment

> How about the possibility of building into AI the capability to distinguish between good and evil? Then, moving forward, why not allow itself to progress only along the lines of a future toward the good of mankind? This would be akin to Isaac Asimov's Three Laws of Robotics.

Source: youtube · Category: AI Governance · Posted: 2025-06-17T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzDgezmMpfm5VgAncF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygaG1kyQjmOks48L94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzGR-qOpFNeH0tgQI54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxGgBvmrLMs6I13-SV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxo2lc7M-upn9hDhG54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx-vK2QNcOLY_lI1Ed4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyRWHauHrf3_xbDKRJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxlNjB_HcnYyK0uRrl4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyKtSQM3i_oppH1d7R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxUW63Rh1qoyi9rEoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
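
The raw response above is a JSON array with one record per comment: an `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated in Python, assuming the allowed values are those that appear in the examples on this page (the real codebook may define more categories, and `validate_batch` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the sample output above;
# this is an assumption, not the tool's official codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example: validate the first record from the response above.
raw = ('[{"id":"ytc_UgzDgezmMpfm5VgAncF4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
coded = validate_batch(raw)
print(coded[0]["emotion"])  # outrage
```

Validating before ingesting protects the database from malformed or out-of-vocabulary model output, which LLM coders do occasionally produce.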