Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Should have really focused on that customer service. Like that's the only thing …" (ytc_UgxEJ1Ya0…)
- "@SlepdepOnAnAlt cause they don’t pay for the ai, it’s free and easy, eventually…" (ytr_UgwET3D88…)
- "I can’t say that I have much in common with any *rump voter, but with this issue…" (ytc_UgwwCdlXa…)
- "My parents looked through my ai app and asked wha5 it was for and I told them it…" (ytc_Ugz_P2UUY…)
- "We really need to update both international and domestic law to make any qualifi…" (rdc_o78p5av)
- "@raymondfranko2894 Didn't see that one. Give me the link. I saw quite a few tha…" (ytr_UgwX8XJ2s…)
- "THAT MF STOLE MY WORK WOED FOR WORD THE NEW MODEL I CREATED IT SINGLE HANDEDLY E…" (ytc_UgxUJk95w…)
- "Ai plus robots. Is coming soon now. Replacing even real relationships. Humanity…" (ytc_Ugz4Fr1Bl…)
Comment

> This is so silly why? Because they can program the Chatgpt's software to apologize to people if he sends a wrong message, and this is just a program and it is not aware that this program has meaning behind it or not, the question you are asking is very stupid.

Source: youtube · Video: AI Moral Status · Posted: 2024-09-15T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw69ISJ_lPUCTkDUbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyAYCr4I5N7NWifiGV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyV5xtmWw7C0__kHPd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw-4wjMA61eVxnIds54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz2k6okH-irt585D6F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxPG1R39dF8CuBRGyh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugxp1hRtwnn-xMeK9i54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwnk04GnHt6vf-ZN9x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUQPBy2F1QM9lclbx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxnsxWsrU-33w1ynh14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
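A coding pipeline like this typically parses the raw LLM response, validates that every record carries all four coded dimensions, and indexes the records by comment ID for lookup. Below is a minimal sketch of that step in Python; the `index_codings` helper and the `REQUIRED` field set are illustrative names, not part of the original tool, and the sample input is taken verbatim from the first two records of the response above.

```python
import json

# Two records copied verbatim from the raw LLM response logged above.
raw = '''[
  {"id":"ytc_Ugw69ISJ_lPUCTkDUbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwnk04GnHt6vf-ZN9x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

# Every record must carry the comment ID plus the four coded dimensions.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse a raw coding response and index records by comment ID,
    rejecting any record that is missing a coded dimension."""
    indexed = {}
    for rec in json.loads(raw_json):
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing fields: {missing}")
        indexed[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return indexed

codings = index_codings(raw)
print(codings["ytc_Ugwnk04GnHt6vf-ZN9x4AaABAg"]["responsibility"])  # developer
```

Indexing by ID is what makes the "look up by comment ID" view possible: the table shown for a comment (Responsibility, Reasoning, Policy, Emotion) is just the dictionary entry for that ID rendered as rows.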