Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a record by its comment ID, or click one of the random samples below.
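A minimal sketch of what that lookup could look like programmatically, assuming the coded records live in a single JSON file keyed by comment ID; the path, file layout, and `lookup` helper are illustrative assumptions, not this tool's actual implementation:

```python
import json
from pathlib import Path

# Hypothetical store: a JSON file mapping comment IDs to their coding
# records and raw LLM responses. Path and layout are assumptions.
CODES_PATH = Path("data/coded_comments.json")

def lookup(comment_id: str) -> dict | None:
    """Return the stored record for one comment ID, or None if uncoded."""
    records = json.loads(CODES_PATH.read_text())
    return records.get(comment_id)

# Example, using an ID from the raw response shown further down:
record = lookup("ytc_UgzQRsgKyP3X3Wf_Fe54AaABAg")
```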
Random samples — click to inspect
- "@nathanscandella6075 Two human eyes and the human brain are far superior to 8 ca…" (`ytr_Ugxdu8GzW…`)
- "Im not an artist, I can't draw and don't like to / For me to be able to create who…" (`ytc_UgwlEa37m…`)
- "Claiming to have an eye for anatomy and proportions is wild, when looking at all…" (`ytr_UgyEv3xqf…`)
- "Automating search/copy/paste from Stack Overflow has its moments but is still th…" (`rdc_mlev1he`)
- "It shows if many back-end roles is declining because of AI then there is a high …" (`ytc_UgwdNNwin…`)
- "I don't know what is scarier, the impending AI apocalypse or Eliezer Yudkowsky's…" (`ytc_UgxTTocvI…`)
- "I can’t remember her name, but there was a reporter who went behind the scenes a…" (`ytr_UgzevX89g…`)
- "@ChippWalters dude did you watch the video ? Go at 2.01 , its coping copyrighted…" (`ytr_UgzhY_YQF…`)
Comment
> This video could be misleading people to think that these AI models would actually 'think' like this. They were obviously jailbroken and prompted to act like this, meaning they just do what their prompter told them to do. There is no evidence of AI models actually 'thinking' bad about humans or 'wanting' to manipulate them.
>
> If you've heard about AI models refusing to shut themselves down, then you were also mislead into thinking that. The scientists were giving the AI multiple tasks at once and the AI had to decide which task was more important, shut down or answer that simple math question.
youtube · AI Moral Status · 2025-06-19T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
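Read as a record, the table above maps onto a small typed structure. A sketch assuming plain string fields; the dataclass name and types are illustrative, not the pipeline's actual schema:

```python
from dataclasses import dataclass

# Illustrative mirror of the "Coding Result" table above;
# the real storage schema is an assumption.
@dataclass
class CodingResult:
    responsibility: str  # e.g. "developer"
    reasoning: str       # e.g. "deontological"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "indifference"
    coded_at: str        # ISO 8601 timestamp
```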
Raw LLM Response
```json
[
  {"id":"ytc_UgzQRsgKyP3X3Wf_Fe54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwrsBfZCkJREZpgdIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzO3OG1RhVsDD-pN7N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxqM29CpqwmO7G867N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwCOh0vYtx3npl7XJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx1iTQXjrBa_ahqgvt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQX89Iq0cWsdfZ32J4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzH7Fnks1HlGgq7vQV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx6kGJhr1Dgzn5Vk5h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz65DbIT5JjevlnKzF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
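The model returns one JSON array per batch, so each response can be parsed and sanity-checked before the codes are stored. A sketch, with the allowed category values inferred from the samples on this page (the actual codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample responses above;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw LLM response into {comment_id: {dimension: value}}."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
        coded[rec["id"]] = codes
    return coded
```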