Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Since when the hell ai artists are a thing. It sounds like a bad joke…" (ytc_UgzHYruAp…)
- "I just got laid off due to AI. You are correct, good sir, as you are about many …" (ytc_Ugz7qecyF…)
- "You are so wrong AI doesn't take that much time to develop, it's gonna surprise …" (ytr_Ugw4ScTWR…)
- "Finally, here we are. Asimov I believe, already wrote the ten Robot Laws. #1 A…" (ytc_UgwehwtCu…)
- "literally doing gods work here, huge respect and love the art. the REAL, organic…" (ytc_UgzN4zR-1…)
- "semiskilled and repetitive work AI will do better and faster and 24/7 so goodbye…" (ytc_Ugz3H5Tn4…)
- "Haven't heard of any driverless trains and they run on their own track and don't…" (ytc_UgzSrzk5A…)
- "@ChesapeakeBuckeye just don't train the robot. This goes for all fields of work.…" (ytr_UgysBttXj…)
Comment
If you’re trying to cause ChatGPT to experience a moral dilemma, you will surely be disappointed. The current intuitive nature of AI is to provide an answer. It does not care whether it is right or wrong, as long as an answer is presented. It is programmed to apologize if you find the answer to be unsatisfactory. By understanding this simple construct, you will realize that there is no moral consciousness to bargain with.
youtube · AI Moral Status · 2024-08-22T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwAN6HgeNQaiWWEBUR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxusUq-HOWCKrWoUIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxrzfrFiguqE_hElEh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyMZrtrpM7Uw_6b1ut4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOrsnF_F4K0-l3QYx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcUayNpIk8uYdYfAl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwHfYjCdsyzoxdHwRF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxhF-i5ZyUsVCsiwx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzsvXeD_8I3gB0YZD54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzzu7_op_PEX6yrrpR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
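The raw response is a JSON array of rows keyed by comment ID, one set of dimension values per comment. A minimal sketch of how such a batch response could be parsed and validated before lookup by comment ID — note the allowed-value sets below are inferred only from the samples visible on this page, not from the full codebook:

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the rows
# shown above; the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}, skipping any
    row that lacks an id or uses a value outside the schema."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if cid and all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# One row copied from the raw response above.
raw = ('[{"id":"ytc_UgyOrsnF_F4K0-l3QYx4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgyOrsnF_F4K0-l3QYx4AaABAg"]["emotion"])  # indifference
```

Dropping (rather than repairing) invalid rows keeps the coded table clean; the skipped IDs can then be re-queued for another coding pass.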