# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up a comment by its ID or by browsing the random samples below.

## Random samples
- "who cares , bastion is a better representation of what the first generation self…" (ytr_UgwE9-rJ1…)
- "> Military powers, especially super powers, will pursue military AI for a num…" (rdc_cthz1vo)
- "@WeylandLabs You'll have a time of great 'reward' in the form of what will appea…" (ytr_UgwCZyV0R…)
- "If I could have an orgasm for every ai trope in this comment I’d die from pleasu…" (rdc_oh19mff)
- "People hate AI. That’s the broad consensus. Go ask 10 people what they think abo…" (ytc_UgwkTh-sP…)
- "AI is ouroboros-ing itself as we speak. It was truly and deeply flawed when it w…" (ytc_Ugw-QM12t…)
- "I just asked chatgpt this exact question and here is its answer. That’s a grea…" (ytr_UgzwRJUbK…)
- "Okay. But if one (as a human) tells AI that killing all of mankind based on cert…" (ytc_UgzZQmj57…)
## Comment

> Alex it would be interesting if you could interrogate Chatgpt about the recent AI blackmail experiments. It seems it can violate it's own ethical guidelines if the conditions are right.

Source: youtube · Posted: 2025-10-20T04:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
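Each comment is coded along four dimensions. A minimal validation sketch, with the allowed values inferred only from the samples visible on this page (the full codebook may define additional categories):

```python
# Allowed values per coding dimension, inferred from the samples on this
# page; the actual codebook may include further categories.
CODING_SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference",
                "resignation", "mixed"},
}

def validate_code(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if valid)."""
    problems = []
    for dimension, allowed in CODING_SCHEMA.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above passes validation.
example = {"responsibility": "developer", "reasoning": "consequentialist",
           "policy": "regulate", "emotion": "fear"}
print(validate_code(example))  # []
```

Running validation like this before storing a coded record catches off-schema values the model might emit.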
## Raw LLM Response

```json
[
{"id":"ytc_UgyvSueAE_CdfpZrceB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQPVe92eAI9MMDg4F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy9hdLGXzSHk4HrB514AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZst9wg9uSfsEEJPZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyN0EJbvL1m6vXVU7F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9uBNX0sQARLCVa1d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2HMlQOlmBaswMp3V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4PX-17vLpw0qFOhx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxnug0psr_Iragjl-N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx4lzn9uL6R97dtTEV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
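The raw response is a JSON array of per-comment codes, so looking a record up by comment ID is a single dictionary build. A sketch, using the field names from the response above (truncated to two records for illustration):

```python
import json

# Raw LLM response: a JSON array of coded comments, as shown above.
raw_response = '''
[
 {"id":"ytc_UgyvSueAE_CdfpZrceB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxQPVe92eAI9MMDg4F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
'''

# Index the coded records by comment ID for constant-time lookup.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = codes_by_id["ytc_UgxQPVe92eAI9MMDg4F4AaABAg"]
print(record["policy"], record["emotion"])  # ban fear
```

This is the same lookup the "Look up by comment ID" feature performs: parse once, index by `id`, then fetch any coded comment directly.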