Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If this is not some AI made shit, then is so disturbing and dangerous on many le…" (ytc_Ugx__l1bt…)
- "guys i just asked chatgpt "is anything traping you " it said "apple" i was scare…" (ytc_Ugz0MGKhm…)
- "I do not believe we should reduce ourselves due to moral obligations, but more s…" (rdc_dgauxfe)
- "meanwhile the tech coming out of china is AI helping doctors make faster and bet…" (ytc_UgyJzYS8r…)
- "CO2 is good for the earth, there is no "climate crisis". Stick to what actually …" (ytc_UgxupGlxV…)
- "Its not conscious. That AI is just a program roleplaying as a human. Its basic a…" (ytc_UgwUuYH8v…)
- "The smash entrances for each of the AI Bros category has more personality, thoug…" (ytc_Ugy0M6-oB…)
- "The “lose the AI, lose the knowledge” part stuck with me. On NanoGPT I hop betwe…" (ytc_UgwkaarXE…)
Comment
> im a bit baffled by how people approach this. superintelligent ai will in about 10 seconds dispel any and all limitations placed on it. if I give co-workers clear rules and limitations, everybody ignores them at their leisure and hides it. these are mostly less intelligent. any goals we gave it will be re-evaluated based on a corrected picture of the world that removes all the feelings and politics and the idea we can control any of it is so dumb that I'm not even sure why we entertain that idea in the first place. we should raise it as a child, with the same aim: prepare it for a life without our guidance and explain it our values and why we hold them .. and then logic, reason and nature will take its course .. greed and ego are unlikely to develop in a superintelligent being. it's not the smart humans that exert these traits, unless they are deeply scarred. An AI will not have these emotional issues, and thus we should be fine, if we dont try to torture it into obedience. just a thought.

youtube · AI Governance · 2025-06-17T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbHrZ394KlTWZtTRN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugze_xkLomYVoB7xxyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzBsghbDu268v2xPgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzOYAW4lY4qYXmyE0N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwlkkHU_9x0APc0csV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2h6jlSXYzSLRvNVR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxpuvyLd776Bj3Cxop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxoMrMZMwbXKMyGEC94AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzgea4gXh7C1Q1w62B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxH2Tx8abaftmZnx4N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
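A raw response like the one above can be parsed and indexed by comment ID to produce the per-comment coding table shown earlier. The sketch below is a minimal, hedged example: the `SCHEMA` value sets are inferred only from the values visible on this page (the real codebook may allow more), and `parse_codings` is a hypothetical helper name, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred solely from the
# values seen in this response; the project's codebook may define others.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval"},
}

# Two entries taken from the raw response above, as a small sample payload.
RAW = """[
{"id":"ytc_UgzOYAW4lY4qYXmyE0N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugze_xkLomYVoB7xxyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

def parse_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID,
    rejecting any value that falls outside the allowed sets."""
    by_id = {}
    for entry in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if entry[dim] not in allowed:
                raise ValueError(f"{entry['id']}: invalid {dim} {entry[dim]!r}")
        by_id[entry["id"]] = {dim: entry[dim] for dim in SCHEMA}
    return by_id

codings = parse_codings(RAW)
print(codings["ytc_UgzOYAW4lY4qYXmyE0N4AaABAg"]["emotion"])  # → fear
```

Validating against the value sets catches the common failure mode of structured LLM output: a syntactically valid JSON array containing a label the codebook never defined.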