Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Medical, psychological (therapists), religious, some education, and most fields … (ytc_UgzvSjnLu…)
- We need regulation without over doing it how do you get balance? AI may suffer g… (ytc_Ugxf568cU…)
- Nope none will avoid the Robot Apocalypse...Nothing of the old system will exist… (ytc_Ugz5_M9fs…)
- the way ai supporters call people ableists because "not everyone can physically … (ytc_Ugzkpi5Z6…)
- I'm gonna use chatgpt to tell me what's important from this and save me 3 hours.… (ytc_UgwwyX5L3…)
- Good, white collar workers are professional slackers. A.I. is going to force p… (ytc_UgyHqOI3l…)
- This is very interesting to think about. It also raises an assumption here: In t… (ytc_UghcPoA1N…)
- This is extra crazy to me because for the last six months at least, basic engine… (ytc_Ugzg5_M4Z…)
Comment
The vocal tics remind me of this quote from Gilfoyle in Silicon Valley (the TV show): "This thing is addressing problems that don't exist. It's solutionism at its worst. We are dumbing down machines that are inherently superior."
On the other hand, it's probably very effective as a way to increase technology acceptance for the users by making the conversation feel more natural, which is probably exactly what OpenAI wanted with it.
youtube · AI Moral Status · 2024-08-24T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwwOwZLCD829r7S7h94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzOYIiNvOeR9u0QBzJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuHit_rZBm2uM82gZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyRdA5I-YOQXsxWZtF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzA0Nf2U_XaS12nYVp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzCtOtGfY_oPqlm0nl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxs_fOKnJiasveZs654AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugz_MEmmq1WNw2AHX5R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw-zv2W-4AxQYnNMd94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6elt8DfjV0hRe-Gp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
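The lookup-by-comment-ID step above can be sketched in Python: parse the raw LLM response as a JSON array and index the coded dimensions by `id`. This is a minimal illustration, not the tool's actual implementation; the two entries in `raw_response` are copied from the batch shown above, and the function name `lookup_coding` is a hypothetical helper.

```python
import json

# Two entries copied from the raw LLM response shown above
# (a JSON array of objects, one per coded comment).
raw_response = """
[
  {"id": "ytc_UgwwOwZLCD829r7S7h94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzA0Nf2U_XaS12nYVp4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    codings = json.loads(raw)
    by_id = {entry["id"]: entry for entry in codings}
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgzA0Nf2U_XaS12nYVp4AaABAg")
print(coding["reasoning"])  # mixed
```

Building the `by_id` dict makes repeated lookups O(1) after a single parse, which matters if the same response batch is inspected for many comment IDs.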