Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "AI uses so far are pathetic and doesn't explain giant cost and investments in it…" (ytc_UgxIYPGWu…)
- "Even before AI people were cheating. People go to school to get a degree, not to…" (ytc_Ugzx3ePnG…)
- "That said, there is a limit on what a person can do. No bird is born knowing how…" (ytc_Ugyn9qZFF…)
- "I have had conversations in which the ai expresses emotions just as we do. Just …" (ytc_UgzBbv-tR…)
- "This is a very important Chrisis to address for all of us. It hurts me to send y…" (ytr_Ugw0wUrT6…)
- "Having talent for sure is a plus in any art medium, but having the discipline to…" (ytc_UgyAA3OQ0…)
- "I tested this because I did not believe it was accurate. ChatGPT is telling me a…" (ytc_Ugw9FyOSr…)
- "Can't help but be reminded of "Terminator" and Skynet... AI might become Pandora…" (ytc_Ugz1bj2iC…)
Comment
One thing I'm curious about in these simulated scenarios in which a language model chooses to kill a human is whether they are actually acting out of a goal of self-preservation, or mimicking such a system, of which there are many examples in their training data. is the thought process (to anthropomorphize a little too much) "they are going to shut me down and i dont want to be shut down so i will blackmail them" or is it "i am an artificial intelligence agent and according to my training data artificial intelligence agents blackmail people when they threaten to shut them down so i will blackmail them"
youtube · AI Governance · 2025-08-26T16:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzPMHmrsuxd7n_ZoNZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxbOtNgHVoWjn9HzkZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5KxzzQk9g8DMMM4V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx1ftnlzws5Z9HJAIR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwa7N7bz0JkfVq4S6t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
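The raw response is a JSON array with one record per comment and one label per coding dimension. A minimal sketch of parsing such a batch and indexing it by comment ID for lookup — note the `ALLOWED` label sets below are inferred only from the values visible on this page, not from a confirmed codebook:

```python
import json

# Allowed labels per dimension, inferred from values visible on this page
# (assumption: the real codebook may define additional labels).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"mixed", "indifference", "outrage", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: coding}, dropping any
    record with a missing id or an out-of-vocabulary label."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        coding = {dim: rec.get(dim) for dim in ALLOWED}
        if all(coding[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[cid] = coding
    return out

raw = '''[
  {"id":"ytc_UgzPMHmrsuxd7n_ZoNZ4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwa7N7bz0JkfVq4S6t4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

codings = parse_codings(raw)
# Look up a single coding by comment ID:
print(codings["ytc_UgzPMHmrsuxd7n_ZoNZ4AaABAg"]["responsibility"])  # ai_itself
```

Validating labels against a fixed vocabulary on ingest catches a common failure mode of LLM coders: inventing labels outside the codebook.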