Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Yeah Ive stressed test generative ai before, and honestly I need more practice w…" (ytc_Ugx5ydDIk…)
- "AI is just a means to bring on the imminent currency collapse and flip the table…" (ytc_Ugxd22VDa…)
- "Exactly, this is the paradox, I get its going to replace jobs, but the companies…" (ytr_Ugy-lFVSd…)
- "Sounds like y2k, why would a robot care if it isn't human? Nonsense. It only doe…" (ytc_UgyzpSnPX…)
- "I don't see how AI is a threat to humanity. Computers require infrastructure (el…" (ytc_UgxP4PPXP…)
- "Just keep asking those questions and you will get different answers you can even…" (ytc_UgyO32wy1…)
- "So all the TV shows and movie's showing AI starting off innocent and turning out…" (ytc_UgzGVIHB9…)
- "I don't understand how AI will make medical breakthroughs, some even claiming it…" (ytc_UgyQP9_DB…)
Comment
Humans must agree to limit the architecture of Agentic AI systems to those with READ-ONLY GOALS. Goal should never be modifiable by AI. The AI systems should operate in a loop that checks every "thought", every plan and every interaction against its Read-only goals, set by humans.
AI Goal setting (and control) by humans will be the single most critical aspect of AI Engineering, as failure in this area can lead to extinction.
youtube
AI Governance
2025-08-13T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzHbyHr8BQmKOJI_8t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyfv3_yck0fEbd-vIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzb1_gtmPpOHb6sXWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxlaLVtkoMqseLSwN94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwx6T6j_PG_4hmoZ2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxmen0r82zywpa0aT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzhNbZtrih6h9sxn1Z4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwOx40P27mm7BJIWAt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxvatfpCv0Y9hZ4x1t4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxFudW6sfQhYS5ANwx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
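A minimal sketch of the lookup this page performs: parse the raw model response (the JSON array above) and retrieve one comment's coding by its ID. The `lookup_coding` helper is hypothetical, not part of the tool; the two entries shown are copied from the raw response above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two entries reproduced from the response shown above).
raw_response = """
[
  {"id": "ytc_Ugyfv3_yck0fEbd-vIl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzHbyHr8BQmKOJI_8t4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding dict for one comment ID."""
    codings = json.loads(raw)
    by_id = {c["id"]: c for c in codings}
    return by_id.get(comment_id)  # None if the model skipped this comment

coding = lookup_coding(raw_response, "ytc_Ugyfv3_yck0fEbd-vIl4AaABAg")
print(coding["emotion"])  # approval
```

In practice the model output should be validated before indexing (the array may be truncated or contain malformed entries), which is why the lookup returns `None` for missing IDs rather than raising.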