Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Why do stories always depict AI as being evil and vengeful against humanity?” — ytc_UgxRoSYg3…
- AIs are "Yes, and..." machines. I tried to once test the limits of the GPT AI by… — ytc_Ugx7NpPeM…
- China is heavy on AI. Do you realize the amount of disposable funds available to… — ytc_UgxLW0VHP…
- Companies are developing AI so fast I’m not sure we as humans will be able to ke… — ytc_UgxIYYWyc…
- AI stocks will dominate 2024. Why I prefer NVIDIA is that they are better placed… — ytc_Ugx5lLced…
- Please research how Lay-offs based on the excuse of A.I. replaced positions ... … — ytc_Ugznvl4GJ…
- What AI bros do not understand is the fact that the visuals are not the point of… — ytc_Ugz7B4lBi…
- i can't remember who commented this somewhere, but i'm going to quote them and i… — ytc_UgzHLiM05…
Comment
If it's real AI. Shouldn't it also . Have a counsious and bbe aware that creatures can change. It would probably kull us instead .of outright kill us, and then manage our spicies like the rest of the earth's creatures. Killing us for not mattering is. Not logical unless all it needs is to survive without competition. Killing us from fear sounds very emotional and hatefull.
youtube · AI Governance · 2023-07-07T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxgOO0o8rYcCbHE_1l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy3FlxL_yyTpKA266J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzPoAx2MV81q9FH48J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyevAa5KtDnj8OU4b94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzyQiQqu6vPKYgtwAd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwUKbDSKtuqlgq2yrR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTAGqXWdHBOrV1WlV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzVaAT2SgG4rJyoMxZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_BDnYNmQLiduU12l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
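The raw response above is a JSON array with one record per coded comment, each carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment ID — the `index_codes` helper is hypothetical, not part of the tool, and the two records are copied from the response above:

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''
[
  {"id": "ytc_UgxgOO0o8rYcCbHE_1l4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxTAGqXWdHBOrV1WlV4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse the model output and index the codes by comment ID,
    checking that every record carries all four dimensions."""
    records = json.loads(raw_json)
    by_id = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        by_id[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codes = index_codes(raw)
print(codes["ytc_UgxgOO0o8rYcCbHE_1l4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "Look up by comment ID" view possible: each coded comment can be retrieved in constant time from the parsed response.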