Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up by its ID, or pick one of the random samples below.
Random samples

- This guest is a complete fool. And he's compromised. Constant deference to obsc… (ytc_Ugzzmv6n1…)
- I get that the whole point of ai is to replace actual artistic professionals. Bu… (ytc_UgzFZZcBo…)
- Okay so what are schools doing about it? why dont they ban laptops in class and … (ytc_UgyKFWZ9N…)
- He knew the brain cells would be affected for sure, he just thought he had no pr… (ytc_UgzPs7737…)
- Communists are the only fools who would "hire" a robot. However, some companies… (rdc_j400io1)
- well first of all it should never have full authority and autonomy and secondly … (ytc_UgyxtSfMh…)
- That "A.I." bro I previously mentioned argues that "overfitting" (outputs that l… (ytr_Ugx9mfyQZ…)
- Driverless vehicles are very scary.should be banned.we need to think about how … (ytc_Ugx3xZajw…)
Comment

You said AI has a single focus on completing its objective. Why not create an AI with the objective of protecting humans from other AI attempting to harm humans? It is intelligent enough to understand the difference between an AI surgeon cutting open a human to perform an operation (not harm) and one trying to kill a human (harm), so it has the capacity to complete its objective correctly. Leave alone any AI causing no problems, but incapacitate or destroy (as needed) any AI actively seeking to harm humans or unintentionally causing harm to humans. Problem solved.

youtube · AI Governance · 2023-07-07T04:2…
Coding Result

| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |

Coded at: 2026-04-26T23:09:12.988011
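Because the model replies in free text, a coding can drift outside the frame, so a light validation pass before results are stored is worth having. A minimal sketch in Python; the allowed value sets are inferred only from the codings visible on this page, and the real codebook may define additional categories:

```python
# Allowed values per dimension, inferred only from the codings shown on
# this page; the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "approval", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if clean)."""
    problems = []
    if not record.get("id"):
        problems.append("missing comment id")
    for dimension, allowed in SCHEMA.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in allowed set")
    return problems
```

Every record in the batch below passes this check: all values stay inside the observed sets, and every entry carries an id.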
Raw LLM Response
```json
[
  {"id":"ytc_UgxQ8W2LCIf3rp8T4yt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxP6-BFLygzIQuUG994AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxiPUuzN5IkvxYksRJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwDstovDaJteTnVqHx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwXLzzo9L_CRpKble94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyeb6Qs4ND6b1I_bKh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzbgig09nnQO1Rnhqt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzwZcPG0eySPVaU-oB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"mixed"},
  {"id":"ytc_Ugwn2EWOPqlNxsb7-Zx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyvh6Tu9pUFGacspaF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
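The raw response covers a whole batch of comments in a single JSON array, so pulling up the record for one comment means parsing the array and indexing by id. A minimal sketch of that lookup in Python; the file name is an assumption for illustration, and the example ID is the batch entry whose values match the Coding Result above:

```python
import json

def index_codings(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch response and index each coding by comment ID.

    json.loads raises a ValueError subclass if the model drifted off-format,
    which is worth surfacing rather than silently skipping a batch.
    """
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Hypothetical usage; the file name is an assumption, and its contents would
# be the JSON array shown above.
with open("raw_llm_response.json") as f:
    codings = index_codings(f.read())

coding = codings["ytc_UgwDstovDaJteTnVqHx4AaABAg"]
print(coding["policy"], coding["emotion"])  # unclear approval
```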