Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below (a minimal lookup sketch follows the list).
- "i don't think you understand AI...copying a dataset as an exact replica is copyi…" (ytc_UgzBX5Xrt…)
- "Ik this has nothing to do but we need to stop AI ASAP all of us are gonna lose o…" (ytc_UgwE9F2Bn…)
- "Gemini is fucking terrible. I googled some football stats and it didn’t even g…" (rdc_m26vp9a)
- "I'd better be able to believe anything said here if I didn't watch the war in th…" (ytc_Ugyaz8Z-B…)
- "I was talking to “celestial” beings on the outskirts of earth operating from A …" (ytc_Ugzicu0Qr…)
- "The thing is, most people don't give a shit about art or any 'point of view', th…" (ytc_UgwDAT916…)
- "So, wait a second... According to the article, the flaw is that it was using su…" (rdc_e7ja25w)
- "As someone who works in government classified cloud AI. No. They just get models…" (rdc_o87yctx)
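As a sketch of how the ID lookup could work, assuming the coded results are exported as a JSON array of objects with an `"id"` key, like the Raw LLM Response shown further down. The file name `raw_llm_responses.json` is a hypothetical placeholder, not the tool's actual export path:

```python
import json

def lookup_coding(path: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` points to a JSON array of objects with an "id" key,
    matching the Raw LLM Response format shown below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r["id"] == comment_id), None)

# Hypothetical usage; the file name is an assumption.
print(lookup_coding("raw_llm_responses.json", "ytc_UghAqkiQfyzw4HgCoAEC"))
```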
Comment
LOL this is always one of the dumbest arguments to me. People are worried that if we give AI things like feelings and such then do we have to give them rights and does that open them to abuse? Here's my solution: DON'T GIVE THEM FEELINGS AND EMOTIONS! Do i really need to state that? There is NO reason to give robots feelings and emotion. There is a very real threat in creating AI's that can build AI's better than themselves, that's just Pandora's box right there.
At this point computers and robots are simply machines, if you're so worried about robots being "abused" then maybe the whole "hey let's make a human robot" idea is not a good one. As Goldblum said, "Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should."
Source: youtube · AI Moral Status · 2017-02-25T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
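The four coded dimensions plus the timestamp imply a simple record schema. Below is a minimal validation sketch; the field names match the JSON keys in the raw response, but the allowed-value sets are inferred only from the labels visible on this page and may be incomplete:

```python
# Allowed labels inferred from the responses shown on this page;
# the real codebook may define additional values.
LABELS = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "fear", "indifference", "resignation",
                "mixed", "outrage"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for field, allowed in LABELS.items():
        value = record.get(field)
        if value not in allowed:
            problems.append(f"{field}={value!r} not in {sorted(allowed)}")
    return problems
```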
Raw LLM Response
[
{"id":"ytc_UghKoK55MKjPi3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjLk4dwj6E7c3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgifpkGSnco6Q3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugh3oGA9UWKfbXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ughlf5QYg265ZngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugjui8lyYzSrvHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiuXt-nUv5jbngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggvBcByL6n803gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjWed3DMfpEnHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UghAqkiQfyzw4HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
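The raw response covers a whole batch of comments; the Coding Result table above is simply the entry whose `id` matches the selected comment (the last object in this array). A minimal sketch of that extraction, assuming the response text parses cleanly as JSON; real model output may first need markdown-fence stripping or repair:

```python
import json

raw_response = """[
  {"id": "ytc_UghAqkiQfyzw4HgCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]"""  # abridged here; in practice this is the full array shown above

# Index the batch by comment ID so one response serves many lookups.
by_id = {row["id"]: row for row in json.loads(raw_response)}

row = by_id["ytc_UghAqkiQfyzw4HgCoAEC"]
assert row["emotion"] == "outrage"  # matches the Coding Result table
```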