Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There seems to be a pile of paradoxes with AI. It can’t actually think. It needs the staff to train it and check it. It’s like employing a naughty untrustworthy kid to run your company. It can be rewarded for producing quantity over quality. If everything is AI, what are people doing? Probably not using AI. If there’s no people, what do we need AI for?
Ultimately, trust will drive AI. If people can’t trust it, they won’t use it.
youtube · AI Responsibility · 2025-10-21T18:1… · ♥ 15
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz9WLpmRzRlYBoGlGB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyI8DMnB8ID9MLMO6d4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxsJod9FFAcQ_PWYpp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxK1UVb1MgEI7RTfDR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyA6uhIWaG-ya7TL8d4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugwv55iLXRXSeO_njS94AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzm3XjRgWGcAOjkhll4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyW3-MezjgKg93ajid4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgyAY6uqE8Kttebxe8p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgygIMJvWodyJYb0_c94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
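The raw response is a JSON array in which each element carries a comment `id` plus the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response can be indexed for lookup by comment ID — the structure is assumed from the example output, and `raw_response` here is a shortened, hypothetical payload:

```python
import json

# Hypothetical raw LLM batch response (two items, same shape as the
# real output above: one object per coded comment).
raw_response = """
[
  {"id": "ytc_Ugzm3XjRgWGcAOjkhll4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyW3-MezjgKg93ajid4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"}
]
"""

def index_codings(raw: str) -> dict:
    """Map comment ID -> coded dimensions, dropping the redundant 'id' key."""
    return {
        item["id"]: {k: v for k, v in item.items() if k != "id"}
        for item in json.loads(raw)
    }

codings = index_codings(raw_response)
print(codings["ytc_Ugzm3XjRgWGcAOjkhll4AaABAg"]["policy"])  # liability
```

Indexing by ID up front makes the lookup-by-comment-ID view a constant-time dictionary access rather than a scan of the whole batch.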