Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "Anyone who thinks it's sentient should read the dialog they had. He basically a…" (ytc_UgxIYe9A3…)
- "With this, the stuff being done at cern, and basically the entire state of the p…" (ytc_UgzYr7ABC…)
- "Absolutely disgusting and infuriating 😡 It’s ALWAYS the corporations that are al…" (ytc_UgyffQpnm…)
- "Great vid. F these AIncels. overall....They dont know shit about how AI works.Th…" (ytc_UgyHR8SXr…)
- "To be fair to the AI I also don’t know what that genzer was trying to order…" (ytc_Ugwl5ZOs0…)
- "I'm new to AI and ChatGPT. I literally just started using it. I only use it if I…" (ytc_UgwSB1hsL…)
- "Who is stupid enough to rely a 100% on the Tesla autopilot ? Shit blows my mind…" (ytc_UgyZxgYWf…)
- "If countries are working on ways to control AI, that proclaims the fact of an is…" (ytc_UgyuseJBr…)
Comment
The predicted scenario will never occur because there is NO evidence that artificial intelligence is linked to either (1) self-awareness or (2) will. To be sure, AI can mimic both of these characteristics, but they are not merely quantitatively different from intelligence; they are qualitatively different. The real danger is that a poorly programmed AI will cause deaths not because it is sentient, but because it is ruthlessly efficient in performing its program!
youtube · AI Governance · 2023-07-11T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzOKCQ7oXt8f4Fz9QV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwmVmdl13jPF9vaLP94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxwd8F_yNdpccw9THZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_lk8j057i6Ia6C6V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"unclear"},
{"id":"ytc_UgygyoMo_pJt80bQoxt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx_-9rabpxd-uGylhl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzIjBh_kzbsESgxr1x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxJgrarLXsuZJUcvrJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzpPVg9JICfmMyEB0d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzhr4Gcgdj1Tbjk-dR4AaABAg","responsibility":"consequentialist" === undefined ? null : "ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
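The raw response is a JSON array with one object per comment, each carrying the four coding dimensions from the table above. A lookup by comment ID can be sketched as follows (a minimal sketch, not the tool's actual code; the two sample rows are copied from the output above, and the required field names are taken from it):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
RAW = '''[
{"id":"ytc_UgzOKCQ7oXt8f4Fz9QV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw_lk8j057i6Ia6C6V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"unclear"}
]'''

# The four coding dimensions every row must carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the JSON array and return {comment_id: {dimension: value}}."""
    indexed = {}
    for row in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in row]
        if "id" not in row or missing:
            raise ValueError(f"malformed row {row!r}: missing {missing}")
        indexed[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return indexed

codings = index_codings(RAW)
print(codings["ytc_Ugw_lk8j057i6Ia6C6V4AaABAg"]["policy"])  # liability
```

Validating each row before indexing catches truncated or malformed model output early, before a missing dimension silently skews downstream tallies.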