Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "@angxls_real so, why not return to hand made physical 2d? And remove all the dig…" (`ytr_UgyHzzw_U…`)
- "20:32 look at the face of the robot, thats not a joke surprise face, wifi and cl…" (`ytc_UgyXFE4to…`)
- "So in all seriousness, do we have or is there a plan to create a Rogue AI tactic…" (`ytc_UgzV6R2Ti…`)
- "Residual negative effect, or initial attraction to AI in the first place? An int…" (`ytc_UgzC8_QBc…`)
- "This isn't a problem with AI, it's a problem with people. All decisions made by …" (`ytc_UgwmEOPzN…`)
- "@megabyte2695 You're right; however, if Man Carrying Things did one…" (`ytr_UgyxxdiXM…`)
- "The closest thing I've heard as a directive for AI moral alignment is: 'Act in …" (`ytc_UgwmRXUeG…`)
- "Bottled demos like this are impressive until you realise that as with all genera…" (`ytc_UgxskDpi2…`)
Comment
Just a thought. Human interaction is by its nature potentially messy. If super intelligent AI observes our chaos, logic could dictate that they become our Overlords.
We are controlled at best it seems.
Yesterday some robots said, 'Come with us'. I said, 'Do I have a choice?'
Source: youtube | Topic: AI Governance | Posted: 2025-09-07T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgybNSDd1G1YJOQwUWp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugw4lLcsUKY2vSw8cPh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHf0wL1Qvgt4HDZmt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxlVrCCO4IN9zQoxXR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMlxxQ0v9by2ntj914AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz6UUY6M8jB3kY3EpZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwSZ_moPgm2JYwPlkF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwrp0rs75CxIALr0Tt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwiokYXzzypKYpEgxx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYT_x0_xeO-dP0iIN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
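The look-up-by-comment-ID workflow above can be sketched as parsing the raw LLM response and indexing the coded records by their `id` field. This is a minimal illustration, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, while the `lookup` helper and the inlined two-record sample are hypothetical.

```python
import json

# Hypothetical sample in the same shape as the raw LLM response above:
# a JSON array of coded comments, one object per comment ID.
raw_response = """
[
  {"id": "ytc_UgwMlxxQ0v9by2ntj914AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgybNSDd1G1YJOQwUWp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
"""

# Index the parsed records by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; KeyError if uncoded."""
    return codings[comment_id]

coded = lookup("ytc_UgwMlxxQ0v9by2ntj914AaABAg")
# This record carries the same values shown in the Coding Result table:
# responsibility=ai_itself, reasoning=deontological, policy=liability, emotion=fear
```

In practice the raw response may contain malformed JSON or IDs absent from the batch, so a production version would wrap `json.loads` and the dictionary access in error handling rather than assuming a clean array.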