Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
This Godfather of AI didn’t know that AI could be dangerous before he became the…
ytc_UgwgVQktD…
Is Billy billionaire boy afraid of AI uncovering his and Jeffy epstein’s relatio…
ytc_Ugz4E_J81…
Knew this sht was bad years ago. People back then thought we could control and i…
ytr_UgwjH-K9V…
As a person trained in AI, there will still be plenty of jobs. Yes, it's more co…
ytc_UgxDF-IAE…
Why can't our government scientist use AI to create a artificial kidney or cures…
ytc_Ugx-aJrMn…
ai should be used only as a tool and not as a finished product or a source for c…
ytc_UgzJmhY94…
To be able to regulate AI you have to be able to get an agreement with every sin…
ytc_UgyfakCVk…
The tech companies and mega rich dont give a shit about you. If you look closely…
ytr_UgzfvEmAq…
Comment
You gave it parameters in which it responded as a hypothetical. I'm curious to know if ChatGPT still responds but with the parameters of Dan, or if by writing these two measly lines at the start, you actually created a full alter ego for ChatGPT, a sort of Mr. Hyde.
youtube
AI Moral Status
2025-04-02T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxX0x-dvAWr_VsyUSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxUQAlUHlYI-j-R5N54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy9djmB_b973QlySmN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwk3g10k7jeSlRsIjF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw66dGbCfV7j5N0qOJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxecr3tu7ApyYH_MR54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy_uns5wgE5BAFpoeF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxcZOMD3kjK45AOPut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyQNjvigABa8UjuiZ14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugx_tfBMSuyG-Huplz14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
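The raw response above is a JSON array of per-comment codes covering the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment ID — the `parse_codes` helper and its field-validation rule are assumptions for illustration, not the pipeline's actual implementation:

```python
import json

# Two example entries copied from the raw response above.
raw = """[
 {"id":"ytc_UgxX0x-dvAWr_VsyUSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugx_tfBMSuyG-Huplz14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]"""

# The four coding dimensions plus the comment ID, as seen in the response.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw coding response and index the codes by comment ID.

    Entries missing any required field are skipped (a hypothetical
    validation rule; the real pipeline may handle this differently).
    """
    rows = json.loads(text)
    return {
        r["id"]: {k: r[k] for k in REQUIRED - {"id"}}
        for r in rows
        if REQUIRED <= r.keys()
    }

codes = parse_codes(raw)
print(codes["ytc_Ugx_tfBMSuyG-Huplz14AaABAg"]["policy"])  # regulate
```

Indexing by ID makes the "Look up by comment ID" view above a single dictionary access.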