Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytr_Ugy_Q_27v…`: Tax "robot labour"? Is this serious? What stops companies from installing the r…
- `rdc_mxzdf30`: >It is categorically unfit to make decisions where safety stakes are high, fr…
- `rdc_jj3s2pk`: Or it might just be cleverbot with the serial numbers filed off. Like this machi…
- `ytc_UgyS7UjmY…`: I'm not scared of AGI or a better AI in general, because this one that already e…
- `ytc_UgyLVhAVP…`: But we still don't have AI....lmao......bunch of clowns in a clown world. Wake m…
- `ytc_UgxOlCbeO…`: I don't understand, if no one works, no one has money, how are these corporation…
- `ytr_UgyOyDvdL…`: Open Source AI I feel becomes so valuable at that point. Something that becomes …
- `ytc_UgwD8u30P…`: The whining of a weak generation you cannot fight against progress and AI is the…
Comment
Wait, people are criticizing these "AI" (actually VIs) for doing exactly what human beings do to each other every day? The programs cannot create new directives outside of what they have been programmed with. They stay within their framework of possibilities. And since you cannot program ethics, any computer program will take the actions that best complete its goals based on the weight of the parameters programmed into it. If, for example, the programmers tell it to take the easiest and fastest way to complete its objectives, then that's what it will do. To be malicious you need malicious intent to do harm. Programs don't know how bad actions like blackmail are supposed to feel, so why see them as evil? Evil is what people do to each other, because humans know what being good is supposed to feel like and what good people should do, and then they choose to ignore it for something faster and more profitable. And our societies are built around exploitation, death, and oppression. All I'm saying is that humans do not have the higher moral ground here. And whoever put together this video is framing this poorly.
Source: youtube | Incident: AI Harm Incident | Posted: 2025-09-12T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
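The coded record above can be checked against the category values that actually appear in this dump. A minimal validation sketch follows; the `ALLOWED` sets are only the values observed in the table and the raw response below, so the real codebook may permit additional values.

```python
# Minimal sketch: validate one coded record against the category values
# observed in this dump (the actual codebook may include more values).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear", "liability", "industry_self", "ban", "regulate"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the observed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record from the Coding Result table above:
coded = {"responsibility": "developer", "reasoning": "consequentialist",
         "policy": "industry_self", "emotion": "mixed"}
print(invalid_fields(coded))  # → []
```

A record with an unknown value, e.g. `{"emotion": "happy"}`, would have `"emotion"` reported back, which makes this useful as a cheap sanity check before loading coded batches.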
Raw LLM Response
```json
[
  {"id":"ytc_UgxcvrzYv_RcnMcza-B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzaJ_QQ59GUZbwGhBt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1qB5TmwrPBpHvel14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwlaqbA4_bVS1TijI54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8fgadci6WSP5q5_V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgytPLkH5nB99nMITpZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyN_hHSDhzW51wN1md4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwpC4140eVsrwFU5Wt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwKw6_qDQthxUH1BKt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzqoj7qAB2vqZSEZS94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
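The "look up by comment ID" view above can be reproduced directly from a raw response like this one. A minimal sketch, assuming the model returns a JSON array of `{id, responsibility, reasoning, policy, emotion}` objects as shown; the two inlined records are copied from the response above for illustration.

```python
import json

# Raw LLM response: a JSON array mapping comment IDs to coded dimensions.
# (Two records copied from the response above; a real batch has more.)
raw = '''[
  {"id":"ytc_UgwpC4140eVsrwFU5Wt4AaABAg","responsibility":"developer",
   "reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwKw6_qDQthxUH1BKt4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Index by comment ID so any coded comment can be inspected directly.
by_id = {row["id"]: row for row in json.loads(raw)}

print(by_id["ytc_UgwpC4140eVsrwFU5Wt4AaABAg"]["policy"])  # → regulate
```

Using a dict comprehension here also silently deduplicates repeated IDs (last record wins), which is worth checking for explicitly if the model is prone to emitting the same comment twice.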