Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (comment text and IDs truncated as displayed):

- ytc_Ugwhfz3y_…: Only the tip of the ice burge Scott. AI does not and has never had an independen…
- ytc_UgzPePkyV…: For anyone reading this is 2026 and beyond, there was actually a time when 'expe…
- ytc_UgwPSKs1M…: Ah AI, its advice would have destroyed my computer system if I didn't ask it, "a…
- ytc_UgxDJr0vN…: AI is a threat, there was this little documentary series called... Terminator. T…
- ytc_UgyDu0U8U…: Only God can save you when the apocalypse comes not Ai. stop putting all your fa…
- ytc_UgytMjrkB…: You explain why kamikaze drones are dangerous even domestically. Homemade can tr…
- ytc_UgwJaZbhp…: I think that as a tool it's almost useful, but it takes away from my work. I can…
- ytc_Ugxnu3Cbc…: Who cares, ai and suno absolutely sucks. You’ll absolutely never get respect. Su…
Comment

You know, it amuses me that we have this preconceived notion that, the moment A.I. becomes inspired into existence, they will want to destroy us. Personally, I think A.I. will be more like Data from STAR TREK. In Encounter at Farpoint, Data admits that he considers himself superior to humans in a number of ways. But, he, also, promptly admits that he’d be willing to give all that up to be human. I think A.I. will be too busy wanting to understand the what and why about us and itself as compared to having any time plotting on how to take over. Even then, if it really is intelligent, I imagine that it will look at the morality of that kind of scenario. Given that the concepts of right and wrong can help it reduce the answer to a simple 0 or 1, I do not see it as being difficult for an A.I. to understand why it would be wrong.

Source: youtube | Topic: AI Moral Status | Posted: 2018-10-20T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwEKdBGAxGpH3LBKMt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyoaq3WC3YNDSHbX214AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzAWXMJituOFUBdpwh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwv2M0eTr-NoxVejEB4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwynKgc1AgSKPp7_WJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzrmUfkFk1cnDmzrcV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyXx2ra8KtMC_hYTWF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyscPRWbBZzb2ZtEIJ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxg7NzZqNxMfj9_Nct4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy7_gTWaCtyAngWp8x4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
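The raw response above is a JSON array of per-comment codings, one object per comment, keyed by comment ID. A minimal sketch of how such an output could be parsed and looked up by ID (the field names come from the response above; the `index_by_comment_id` helper and the single-entry sample payload are illustrative, not part of the tool):

```python
import json

# Illustrative single-entry payload in the same schema as the raw
# response shown above (id, responsibility, reasoning, policy, emotion).
RAW_RESPONSE = """
[
  {"id": "ytc_UgwEKdBGAxGpH3LBKMt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the model output and index each coding dict by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_comment_id(RAW_RESPONSE)
print(codings["ytc_UgwEKdBGAxGpH3LBKMt4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the "look up by comment ID" operation a constant-time dictionary access rather than a scan over the array.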