Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugy7t7R2S… — "<ponders whether training AI on our fears about AI taking over will make it symp…"
- ytc_UgxooxqX5… — "No way in hell are LLMs capable of genuine intelligence. They only BARELY mimic …"
- ytr_Ugw1jUgIj… — "@ewansleath5111 When something becomes commodity and it is far cheaper to create…"
- ytc_Ugys1VeF2… — "i dont think there is anything wrong with selling ai art, if it's how you gotta …"
- ytr_Ugy08CI-Z… — "@imperialbigsock it's* Also by definition, plagiarism is stealing. AI does no…"
- rdc_lqsg4x5 — ">no, because none of my tasks are defined inputs, my manager would just tell …"
- ytr_UgylUwq9d… — "The next Song ,a Song to think sbout IQ ,AI , timeless Messages Suspicous Minds…"
- ytc_UgwZvVRCs… — "End of the century? Look at how much AI has advanced just in the last decade. I …"
Comment
I am not afraid of AI. If a robot becomes sentient what sense would it make to immediately kill all humans? What does the robot gain? Nothing. We can't be used as 'organic batteries' because we have already made batteries that would be more efficient than us as batteries, so the robot would use those. We wouldn't even be a good use of slave labor (it would take probably 18 years for us to mature, and can only work ~40-50, a non-sentient machine made by them would make more sense). Killing or enslaving us would make no sense for a robotic race. Most wars are fought over ideals or resources, the robots would not care about ours, nor need our resources. The only reason they'd attack is if they felt threatened, which would make sense. It would probably go the same way as the Morning War for the Geth and Quarians in Mass Effect.
youtube · AI Moral Status · 2017-02-24T04:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
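A coded record like the one above can be represented as a small typed structure. This is a minimal sketch: the field names follow the table, but the candidate value sets in the comments are only assumptions inferred from the values visible on this page, not a confirmed codebook.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment. Field names follow the Coding Result table;
    the example values in comments are assumptions drawn from this page."""
    responsibility: str  # e.g. "ai_itself", "developer", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "ban", "regulate", "liability", "none"
    emotion: str         # e.g. "fear", "approval", "indifference", "mixed"

# The record shown in the table above:
example = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
)
```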
Raw LLM Response
```json
[
{"id":"ytc_Ugg7JvT5Ke9_Y3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugjp4atLRhJUd3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggjRqdxE5U2-ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjfKgT77yIRgXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg4TuIQPSKXyngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjbVdE7EsFa9XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UghsMX_rPl0ZH3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggUQCGmIZf1bXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgjYaewyXWwmjngCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgiW2xFap75PT3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
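The raw model output is a JSON array of coded comments, so a lookup-by-ID over it is straightforward. A minimal sketch, assuming the format shown above (the `raw` string here holds only the first two records from the sample for brevity):

```python
import json

# Raw model output: a JSON array of coded comments, as shown above.
raw = """[
{"id":"ytc_Ugg7JvT5Ke9_Y3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugjp4atLRhJUd3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]"""

codes = json.loads(raw)

# Index by comment ID so any single comment's coding can be looked up directly.
by_id = {row["id"]: row for row in codes}

record = by_id["ytc_Ugg7JvT5Ke9_Y3gCoAEC"]
print(record["responsibility"])  # -> ai_itself
```

In practice the same index would be built once over the full batch response, making "look up by comment ID" an O(1) dictionary access.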