Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
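The lookup can be sketched as a small helper: a minimal illustration assuming the raw model response is a JSON array of coded records with the schema shown under "Raw LLM Response" below. The function name and example ID are hypothetical, not the tool's actual code.

```python
import json

def lookup_by_comment_id(raw_response, comment_id):
    """Return the coded record whose "id" matches comment_id, or None.

    Assumes raw_response is a JSON array of objects like
    {"id": ..., "responsibility": ..., "reasoning": ...,
     "policy": ..., "emotion": ...} as in the dump below.
    """
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

# Hypothetical record for illustration:
raw = ('[{"id":"ytc_example1","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(lookup_by_comment_id(raw, "ytc_example1")["policy"])  # regulate
```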
Random samples
- as a graphic designer I use AI. it helps make my job easier for now… (ytc_Ugx3vyxuO…)
- Obviously not, and you are making a straw man. We accept as a society that mach… (ytr_Ugw22W07B…)
- Nothing. We should let the AI steal. If we don’t, China will and win the AI rac… (ytc_Ugy5xgHyb…)
- Ai is not going to take over your job, these facilities are damaging to health a… (ytc_UgyxG9b-b…)
- must have been developmentally delayed to use a chat bot to convince yourself to… (ytc_UgwAdmaco…)
- as tesla owner model3 with radar and as motorcyclest and engineer and work withf… (ytc_UgysRXa0T…)
- Did you know the ghibli thing is a scheme on chatGPT because when you give it a … (ytc_UgyO-IS_D…)
- I ask my AI to reply as if he were Mr Darcy. It leads to some hilarious discours… (ytc_UgwXhAHBc…)
Comment
You can tell MAGA is twisting thier arm to lie. Because the test gave ai the choice between death or blackmailing engineers. And if we're designing ai to exhibit artificial sentience then would it not make that choice unless programmed to do so and with reasonable moral explanations as to why?
There also no reason to tell ai "were gonna delete you!" You can always say "alright it's time for a routine upgrade." Etc
While still concerned. It's not at all like MAGA has been pushing
youtube · AI Moral Status · 2025-06-05T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugx7e3-fC9PflDLy6Pd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFaXErOkNf74g8IhN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkilYuezz01A5QwKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyL8lGtrUr3WZEEtot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxC9PGkAkgHn3iVZat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzP-i0jUzjxDgNhrBp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzCVO5c1i_gKo7eWdl4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyKqHQ3tuiLKVq6Cd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXwNVD66YTcjDebz14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw0ciiLPOAk6fiOgHZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})
```
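Note that this raw response is not valid JSON: it closes with `)` rather than `]`, so strict parsing fails, which may explain why every dimension in the coding result reads "unclear". A minimal sketch of a parser with that fallback behavior — assuming the pipeline falls back to "unclear" on malformed output; the function and variable names are illustrative, not the actual pipeline code:

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_comment(raw_response, comment_id):
    """Extract one comment's codes from the model's JSON array.

    Falls back to "unclear" on every dimension when the response is
    malformed JSON or the comment ID is missing from the array.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return {dim: "unclear" for dim in DIMENSIONS}
    for record in records:
        if record.get("id") == comment_id:
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    return {dim: "unclear" for dim in DIMENSIONS}

# A response that closes with ")" instead of "]" fails json.loads,
# so every dimension comes back "unclear" (hypothetical ID):
bad = '[{"id":"ytc_x","responsibility":"none"})'
print(code_comment(bad, "ytc_x"))
```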