Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It’s funny to see how every article on this channel is against AI at every level…" (ytc_UgxiK1VEl…)
- "On the diagram we are at the peak hype phase just before the trough. AI is a bub…" (ytr_UgwkPaWDk…)
- "Funny how such an AI-backed writers group would say that considering most AI gen…" (ytc_UgzSpfkzv…)
- "I can't imagine AI Giving care and emotional support to the patient as a nurse.…" (ytc_UgyZ6q9HM…)
- "For example…autonomous vehicles will make ALL cab drivers redundant….it’ll soon …" (ytc_UgyO-8JJ2…)
- "Difference is we know about human driving very well, we don’t know much about se…" (ytr_Ugx9BDAQk…)
- "This. Is truly a remarkably worthwhile book to read and to read more than once .…" (ytc_Ugygx8H81…)
- "From your story i can tell that you are not aware of the right way of usage of A…" (ytc_UgyJ2xl7h…)
Comment
Eh. I'm not against Robots taking my Job if I can go home to a guaranteed minimum income at the end of the day. I'd rather just sit back and fuck around if that was what was better for society.

Also, I am not necessarily against killer robots, depending on how transparent their programming is. If a robot is clearly programmed to kill someone if and only if it follows a burden of proof wherein you can prove that the person probably needs to die and they can do it without violating a good set of laws for war, then that would be a good thing in my estimation. Humans can be far more easily persuaded to break the rules of conduct. Because we are driven by our emotions and can act out of malice rather than necessity, and we can be persuaded by the promise of terrible retribution to do something bad. Robots can't feel hate, and they cannot suffer retribution.

This makes me ambivalent about 'autonomous' robots (they aren't really autonomous when we program them ourselves and they can only act within the parameters of our programming) in warfare. It could be either a bad or good thing. I need to wait and see how we handle them. Although, judging from our pride and greed, I would guess we would probably program them with malicious intent. Although I am not really sure if we did program them maliciously that we would actually kill more people. My guess would be that we kill about as many people as we do now, and just save some money on pilot training.
Source: youtube · Posted: 2015-08-04T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UghtSkBgzSYBtHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgjViNNXfNfSJHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggmA0mXDPRJZHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjCKuvjORfp8ngCoAEC","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UggRPYH0T4jMPHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggAVsZqHgrQLHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UggiVbomHzBmy3gCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugh3w9U0giWCwngCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjfNG0lGF6WFXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugi1I3DCzAfkyHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
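The raw response is a JSON array of per-comment records keyed by `id`, so looking up the coding for a single comment is a parse-and-index operation. A minimal sketch, assuming only the record shape shown above (the variable names and the inline sample are illustrative; a real lookup would load the full stored response):

```python
import json

# Illustrative excerpt of a raw batch-coding response; field names match
# the JSON shown above, everything else is an assumption.
raw = '''[
  {"id": "ytc_UgjCKuvjORfp8ngCoAEC",
   "responsibility": "none",
   "reasoning": "contractualist",
   "policy": "liability",
   "emotion": "indifference"}
]'''

records = json.loads(raw)

# Index the batch by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in records}

code = by_id["ytc_UgjCKuvjORfp8ngCoAEC"]
print(code["reasoning"], code["policy"])  # contractualist liability
```

Indexing by `id` also makes it easy to spot comments the model skipped or duplicated: compare `by_id.keys()` against the set of comment IDs that were sent in the batch.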