Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This was worse than watching a horror movie .. i need to watch some The Simpsons…" (ytc_UgxcC6IbV…)
- "Tbh this is very true with basically all things people can say “um you were born…" (ytc_Ugyv4Zof2…)
- "AI is a great lie and deception from Satan. They plan on killing billions of peo…" (ytc_UgzxulCrR…)
- "Until they come up with a way for driverless trucks to maneuver their way throug…" (ytc_UgjxmYopw…)
- "Yeah, he tried to warn us, all the while creating X AI, the largest known AI net…" (ytc_UgzcwhO9e…)
- "Lovely bit of fiction, 'AI' is a tool with very limited controls and is harmfu…" (ytc_UgxsCVjOV…)
- "I like the fact that finally someone said it. AI isn’t a magic wand; it’s a tool…" (ytc_Ugz1X7iBM…)
- "I see ai as a modern day outjie board, please be careful in letting it come into…" (ytc_UgxYZMWm0…)
Comment
So this relates closely to my own sci-fi novel, Synthesis. I've always thought that the whole robot apocalypse scenario was a little unimaginative, so what if AI starts to behave like an actual race of people?
Now, why would AI do that? Well, it wouldn't if the reason we make intelligent technology is to make human lives easier. If it's just a matter of means to ends, of making tools, then the more intelligent that technology becomes, the more tools turn into slaves. Slavery has a history of culminating in violence.
But if you're creating AI for its own sake, i.e. not to serve any purpose towards human beings but just to see if you can replicate human or humanlike intelligence, then it becomes a very interesting undertaking. This is the only way we should approach AI: we either make it for its own sake, or we don't make it at all. If humans create intelligent machines strictly as means to our ends, then we'll end up in a situation where we've delegated all responsibility to our technology and we'll be left only with the illusion of power.
youtube
2015-07-30T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgjSWtWNngVsjHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ughf1hqoTutyqngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugi5WLDTl3NlX3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggopqM2M_sbrHgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ughy4iVWinVhmXgCoAEC","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgiveUrRxNI0_ngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghFF0zjhR0XSngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UghurI4Ad49yDHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjXsosqvOJLJngCoAEC","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugi4o4GuPLIlcHgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
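A raw response like the one above has to be parsed and checked before its codes can be stored, since the model occasionally emits values outside the codebook. The sketch below shows one way to do that in Python. The allowed values per dimension are inferred only from the responses shown on this page; the actual codebook may include more categories, and the function name `validate_raw_response` is a hypothetical helper, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the responses
# shown above (assumption: the real codebook may define more).
SCHEMA = {
    "responsibility": {"developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate_raw_response(raw: str) -> list:
    """Parse a raw LLM response (a JSON array of coded comments)
    and reject rows with malformed IDs or out-of-schema values."""
    rows = json.loads(raw)
    for row in rows:
        if not str(row.get("id", "")).startswith("ytc_"):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: {dim}={row.get(dim)!r}")
    return rows

# One row copied from the raw response above.
raw = ('[{"id":"ytc_UgjSWtWNngVsjHgCoAEC","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
rows = validate_raw_response(raw)
print(len(rows))  # 1
```

Rejecting a whole batch on the first bad row is a deliberate simplification here; a production pipeline might instead collect per-row errors and re-prompt the model for only the failed comments.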