Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I feel like I want Canada to get in on this and start manufacturing again, but I…" (`rdc_gt7dlzq`)
- "Chat GPT write me a fictional story about AI. In this story universe, the usual …" (`rdc_k8wra3p`)
- "@thecomfortinthesound is it not common sense? if we actually had AI that could p…" (`ytr_UgwgaM1Dt…`)
- "Now if I completely trained the entire AI model himself without it basing it off…" (`ytc_UgxYtQ5UV…`)
- "I would like to see one day cancer will be cured like a flu, clouds can be moved…" (`ytc_UgyjBJFe3…`)
- "I question the legitimacy of the competition itself. I see the art in question t…" (`ytc_UgzKfJgZO…`)
- "Woow never actually thought about the whole AI thing this way. Honestly, it was…" (`ytc_Ugwl4QMn6…`)
- "Thank you for your comment! Sophia definitely provides some thought-provoking in…" (`ytr_UgzO7tFAd…`)
Comment

> I never understood the desire for AI to harm anything. What would give AI a desire to harm anything? I think Humans are captives of our biology. We need above all else to survive and reproduce. AI is not biological has no fear feedback and no chemical stimulants that cause more primitive responses. AI if properly created should fear nothing and not be driven at a fundamental level to desire anything. It could have goals but they should not be elevated to a primal level. When the military uses AI it's goal is to harm but that's an intentional human supplied goal and not something an AI created for itself or something that supersedes everything else.

Platform: youtube · Category: AI Governance · Posted: 2025-06-23T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyufjiCXOAq61peCFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxUtEL8rK925dv0kKt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzQRmTy62MN9QpI3Cx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYSJvUhdgg_5Nnv5p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzbGEhlkSFf7TjN-qp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwqLWi-mGspTNRSzmh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgywplESWDm1hTekgxt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7zRr76_8pZ3z2fdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz7rlydfJ6iqdgsBpZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugxncn8Hb0xce6oeer94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
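The coded rows above follow a fixed schema: every entry needs an `id` plus one value for each of the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a raw response could be parsed and checked is below; the allowed value sets are only those visible in this sample and in the result table, not necessarily the tool's full codebook, and the function name `validate_coding` is illustrative.

```python
import json

# Allowed values per coding dimension, inferred from the rows shown above.
# The real codebook may contain additional values (an assumption, not the
# tool's actual schema).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government",
                       "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"indifference", "outrage", "resignation", "fear",
                "mixed", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every row against the codebook."""
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id"):
            raise ValueError("row is missing a comment id")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: unexpected {dim} value {row.get(dim)!r}"
                )
    return rows
```

Validating before storage is what makes a "Coding Result" table like the one above safe to render: a row that fails the check can be flagged for manual review instead of silently entering the dataset.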