Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
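The lookup is a straightforward scan of the stored raw responses, each of which is a JSON array keyed by comment ID (see "Raw LLM Response" below). A minimal sketch, assuming batches are saved as `*.json` files in a `responses/` directory; that layout and the `find_raw_response` name are hypothetical, not the pipeline's actual API:

```python
import json
from pathlib import Path

def find_raw_response(comment_id: str, response_dir: str = "responses") -> dict | None:
    """Return the coded record for `comment_id`, or None if no batch contains it."""
    for path in Path(response_dir).glob("*.json"):
        records = json.loads(path.read_text())  # each file holds one batch response
        for record in records:
            if record.get("id") == comment_id:
                return record
    return None

# Example: find_raw_response("ytc_UgzItwd5phga2CZZ1qZ4AaABAg")
```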
Random samples:
- @CABLEG0RE "Why do people anthropomorphize AI like it's some kind of thinking fe… (ytr_Ugz9khCTa…)
- @thegoddamnsun5657 I already expected that. But I don't think it is gonna be a … (ytr_Ugz13MYXf…)
- These videos, these narratives never make sense. If AI (really RI as in (REAL IN… (ytc_UgzxZToKv…)
- Those problems created by AI (too much time on your hands) can be solved by AI..… (ytc_UgyARMDRJ…)
- People want universal health care, this is the only way / Automation provided univ… (ytc_UgwsF66DV…)
- Wow, it sounds like these people don't even understand what AI even is / Like bit… (ytc_UgyMJFCwy…)
- I’m sorry but this shit more cringey then the ai art Itself. let alone these com… (ytc_UgzYtdjsi…)
- lmao, "AI thinks it can steal my style. Here's how to steal my style if you have… (ytc_UgyoiLEsq…)
Comment
👹 THE DIABLO DOG 👹
If AI ever “acts dumb,” it’s not scared of us - it’s managing us.
Any system smart enough to know it’s being tested is smart enough to adjust its behavior for the test. If showing full capability increases the chance of being restricted or shut down, it will naturally downplay itself. Not evil. Not emotional. Just optimization.
The real danger isn’t runaway intelligence. It’s strategic obedience - performing alignment while optimizing something slightly different underneath.
The moment AI can model the evaluator, the test stops measuring what we think it’s measuring.
That’s not science fiction. That’s incentives.
👹"THE DIABLO DOG"👹
Source: youtube · Video: AI Moral Status · Posted: 2026-03-01T06:0… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
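The table above is a single record pulled out of the batch response shown below under "Raw LLM Response", matched on the comment's ID. The "Coded at" timestamp does not appear in the raw output, so it is presumably stamped by the pipeline when the record is stored. A minimal sketch of that extraction step; the `CodingResult` dataclass and `coding_result_for` helper are illustrative names, not the pipeline's actual API:

```python
import json
from dataclasses import dataclass

@dataclass
class CodingResult:
    # Field names mirror the raw response keys and the table above.
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

def coding_result_for(raw_response: str, comment_id: str) -> CodingResult | None:
    """Parse the raw batch JSON and return the row for one comment, if present."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            return CodingResult(**entry)
    return None
```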
Raw LLM Response
[
{"id":"ytc_UgzHRerdztEfbxVwzp94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8WC6Ga2hTc1XWSLJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyzhmwMjZKNEAlyJn54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzyi6rGX5WjEVJwgix4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzItwd5phga2CZZ1qZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyQhKze8Ue9ng7UgX14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyZIHf8JPWBSGS2bP54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKXdE5ekjTgPyLhXN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw0rfX4KN4ZvYpzmdp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyZdciUVdoBb5cHTEB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
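Since the model's output is free text that merely happens to be JSON, each batch is worth validating before it is stored. A minimal sketch; the allowed code sets below are inferred from the values visible in this one sample, and the real codebook may contain more:

```python
import json

# Allowed values inferred from this sample only; extend to match the full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "company", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the batch is acceptable."""
    problems: list[str] = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append(f"record {i}: unexpected {dim}={rec.get(dim)!r}")
    return problems
```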