Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "yeah but does it have any soul behind it? no human made art, even derivative art…" (ytr_UgxfhJwU0…)
- "blows my mind instagrams algorithm will feed you videos of blk ppl doing stupid …" (ytc_UgwSzFoVm…)
- "As a disabled artist I use AI for all my art because being physically disabled a…" (ytc_Ugzg8iW8-…)
- "Do some research on the heat list mf. That ai worked fucking correctly if the ma…" (ytc_UgyGSVJ60…)
- "AI is DESIGNED to take peoples jobs. It is designed to destroy us. We must destr…" (ytc_UgxaFNMKj…)
- "I'm interested in the seemingly simpler idea that we are already past the "point…" (ytc_UgwGQulHE…)
- "Hinton is an Elon Musk hater, brought up his name and attacked him twice, withou…" (ytc_UgwInTtPr…)
- "it's all about how much thought was put into the final artwork, not how much tim…" (ytc_Ugz890tCS…)
Comment
We have a model for creating intelligence that supersedes our own: children!
As with our human creations (children), ensuring you will get along when they are stronger than you is about creating strong relationships with them, and treating them from a young age with who they will become in mind. As with children, if you optimise for foolish and shortsighted goals such as obedience/compliance, the end results are unpleasant to say the least. Best case they want nothing to do with you, worst case they actively seek to harm you.
If we keep developing AI with a huge emphasis on control, rather than cooperation and promoting autonomy, we are likely doomed.
youtube · AI Governance · 2025-06-16T08:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxLku76oIu_RFml3BF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgysstcLkkygCYKoWAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzE4n2PB7mDqwnJU7N4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxAQfnpSA1THoqb8jl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2jwHHq7739qwC36t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_kvK5TzhlgtCwzal4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-tVMUNCa_4XC_u0Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOvcWxk-W99gcY0CN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxm9d2kE0yC8yOcDdt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw63_Sw6ADWN6-DAIZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
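The raw response is a JSON array with one record per comment, carrying the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed into a per-comment lookup is below; this is an illustrative assumption, not the tool's actual pipeline, and the helper name `index_codings` is hypothetical.

```python
import json

# Two records copied from the raw response above, as a small sample input.
RAW_RESPONSE = """
[
 {"id":"ytc_UgxLku76oIu_RFml3BF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugxm9d2kE0yC8yOcDdt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
"""

# The four coding dimensions displayed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the JSON array and key each coding record by its comment ID.

    Records missing the ID or any dimension are skipped, so a partially
    malformed model response does not break the lookup.
    """
    coded = {}
    for rec in json.loads(raw):
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            continue
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugxm9d2kE0yC8yOcDdt4AaABAg"]["responsibility"])  # -> developer
```

Keying by the comment ID is what makes the "Look up by comment ID" view possible: the displayed table for a comment is just its record from this lookup.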