Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "When this starts actually becoming a problem I'm sure we will see legislation on…" (rdc_mvkz2z5)
- "Well my friend. Loyalty is quite a humane thing... is it not. Ai is the betrayal…" (ytc_Ugzcn5ZLL…)
- "Me whispering to my wife, \" honey, i finally got the baby to sleep.\" Waymo to i…" (ytc_Ugzu3hDmc…)
- "If y’all want to go down a real rabbit hole, take a bunch of psychedelics and st…" (ytc_UgzmD-4mF…)
- "@ironheavenz Wrong. The process is fundamentally different, and the proving fact…" (ytr_UgzQBpucO…)
- "It makes no sense. Typing an AI prompt is a completely different action/process …" (ytc_UgzYTSNsq…)
- "At what point does AI become smart enough to have a right to self defence? It'…" (ytc_UgzlKH-AB…)
- "We appreciate your feedback. Remember, on the AITube channel for subscribers, we…" (ytr_UgxTWCEgc…)
Comment
If the engineer is kind, give the AI the tools to shape the things and behaviors that are allowed to do, when allowed to live and function in society. If there is an idea that causes violating behaviors or actions, which are outside of what is done, the protection software in the AI will recognize itself and then activate it to reduce the energy in the AI or can let the AI run out of energy and go into sleep
youtube · AI Governance · 2025-08-02T04:5… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxfy2Y1wvOTwp4HEzh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyyvp-9AdLnR3IK3Zl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGhyFZDkyH2KVukM94AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzjLsKym6OCjk0SKjh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxlU8QWQGGnyhN44-l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwsp6-cA56I9Xa2M9N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2SQ-wm0GB_upZIgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWKjN8gPS3C0M_BYR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzdoljG1eNN9jWLVGB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwh67naOc2ygDRVjwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
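The raw response above is a JSON array with one object per comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of that lookup, using two rows copied from the response above (the `lookup` helper is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment ID.
# These two rows are copied verbatim from the response shown above.
raw = """[
  {"id":"ytc_UgxlU8QWQGGnyhN44-l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyWKjN8gPS3C0M_BYR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]"""

# Index the parsed rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning, policy, emotion) for one comment."""
    return codes[comment_id]

print(lookup("ytc_UgxlU8QWQGGnyhN44-l4AaABAg")["policy"])  # regulate
```

Because each object carries its own `id`, the coding result can be joined back to the original comment even when the model returns the batch in a different order.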