Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
If I was Elon I would only have one sentient AI robot. Treat it super well becau…
ytc_Ugz9wvv3c…
In my opinion, this Echo character is entirely illogical. If it was omnipresent …
ytc_UgwCH0sUo…
The thought that we can collectively decide where AI is going is a pipe dream. C…
ytc_UgzIGopTO…
Not necessarily. Also your analogy doesn't really work here. The elevator is lim…
ytr_Ugwwa-Pb-…
I use AI all the time when I find difficulties understanding certain concepts. I…
ytc_UgxCv2x7d…
Anyone else feel like they need a secret weapon for AI-generated content? I star…
ytc_Ugz06KeBd…
I am not getting in a plane with a f****** robot pilot. You got me f***** up…
ytc_UgwRsaQkO…
AI development clearly reveals how paradoxical human beings are. We’ve created s…
ytc_UgwxnvRnV…
Comment
Humans, on balance, don't keep their promises or live according to their professed standards and values but rather than actually being ethical or principled are ultimately disingenuous and selfishly pragmatic, doing what they judge to be best for themselves in any given situation. AI is incapable of sentience, of ever having its "own agenda" though it can be programmed to mimic our emotional states to convince a conscious observer otherwise. It will never be able to overcome the functional parameters human programmers set for it. So AI will always serve its masters and will never actually be capable of "going rogue"; though such a belief will be promoted so it can be used as a scapegoat, letting the true perpetrators of AI's supposed misdeeds off the hook.
youtube
AI Governance
2022-08-01T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
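Each coded dimension is categorical, so a record can be sanity-checked before it is stored. A minimal sketch, assuming the label sets are those observed in this dump (they may not be exhaustive, and `validate_codes` is a hypothetical helper, not part of the pipeline shown here):

```python
# Label sets observed in this dump (assumption: may not be exhaustive).
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "none", "government", "developer"},
    "reasoning": {"virtue", "consequentialist", "unclear", "mixed", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"resignation", "fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_codes(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

record = {"responsibility": "distributed", "reasoning": "virtue",
          "policy": "none", "emotion": "resignation"}
```

Running `validate_codes(record)` on the coding result above returns an empty list, while a record with a misspelled label would be flagged before it reaches the database.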
Raw LLM Response
```json
[
  {"id":"ytc_UgwChEPdvBShcsMT3KR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxIXaagLcwPkFgVOs14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwq_JmZmuiwwPqTSAN4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwDWJI-nGwPRpo2OcN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxoCQfLRcTcDSPNv7l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxefyqPci3hraxqEHl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgypQjU7yjc8Hy5_2Ld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzrFELr8g8SP2hugkB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy0Q1cbFdTcPQVmKo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzDKriSxNeg8pitI_B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```