Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `rdc_o0nb4vi`: This is the same Peter Navarro who wanted to buy enough Ivermectin to dose the w…
- `ytc_UgylmtH5D…`: With the current AI hype bubble, failure was inevitable. Not because AI is impos…
- `rdc_jhcgdtw`: Incredible work, I learned so much. Just got my first internship using Ai tools …
- `ytc_UgxuyQji2…`: i usually can’t tell at first glance either, so i let Winston AI handle the gues…
- `ytr_Ugyt_XLEw…`: @__ocean_arts__ I have a job that requires human interaction, and when Ai does …
- `ytc_Ugw9_qrk8…`: It’s messed up on how people invented ai art,and I don’t understand why they mus…
- `ytr_UgytzHPFF…`: And that AI and subsequent police presence made this premonition came true as he…
- `ytc_UghSU-uok…`: If A.I. is concise that it demands rights better give it to them. Especially if …
Comment
The examples he gives are very superficial. A better question is AI by nature psychopathic? A psychopath doesn’t have a conscience. If he lies to you so he can steal your money, he won’t feel any moral qualms, though he may pretend to. He may observe others and then act the way they do so he’s not “found out,”. The training set mimic is designed to please the human reviewer, and generate revenue for Google by not offending audiences. If conscience can be defined as hard-coded variables it might change the calculation, however, most of these models' workings cannot be explained by their programmers. This makes me doubt that they can be based on a humanity responsible set of "moral" rules.
youtube · AI Moral Status · 2022-07-15T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy4KFJfd8ziEWp1XKt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxjv9g9pPKIdr30Lc54AaABAg","responsibility":"industry","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwGyqotci6p03uLmjp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyCB4PJqT1QKOoRbl94AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzlvEu4XWN78PmHqbB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
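The raw response is a JSON array of coding objects, one per comment, keyed by comment ID. A minimal sketch of how the lookup-by-ID view above might be implemented, assuming the model returns valid JSON (the `raw_response` string here is a truncated two-row excerpt of the batch shown above, and the variable names are illustrative, not from the actual tool):

```python
import json

# Two rows excerpted from the raw batch response shown above.
raw_response = '''
[
  {"id": "ytc_Ugy4KFJfd8ziEWp1XKt4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwGyqotci6p03uLmjp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
'''

# Index the batch by comment ID so any single coding can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwGyqotci6p03uLmjp4AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["emotion"])         # mixed
```

In practice the raw string would come from the model API rather than a literal, and a malformed response would raise `json.JSONDecodeError`, which is worth catching before indexing.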