Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I was trying to find a tutorial on a particular animation style. All I could fin…" (ytc_UgwhtQpVz…)
- "This whole problem illuminates the bigger problem for me. As technology increase…" (ytc_Ugz3BKRuZ…)
- "This is the kind of school that ive always visioned but they did it way better t…" (ytc_UgwKnzA74…)
- "Fully automatic, deadly weapons, i.e. those who make decisions completely indepe…" (ytc_UgycByoLx…)
- "We've said 'NO' ... over and over. We all know about Revelations. We all know a…" (ytc_Ugz0h_awX…)
- "There are a lot of humans online saying that humanity shouldn't exist, so you ca…" (ytc_UgzeWfR6l…)
- "It's not "Cool". 1970s muscle cars are "cool". Social Distortion is "cool". M…" (ytc_UgzuTIhG5…)
- "Looking through the comments, I think many people missed your key points about l…" (ytc_Ugw1lKeE0…)
Comment

> The answer belongs to MORALS. As simple as that. We can either take advantage of every other human beings but that would be a shame and unethical isn't it? Even if it's not the most logical choice or if it's less efficient and with costs. But we do chose (at least most of us) the ethical way because that's what distinguish us from animals. Machines are great but they should automate things that aren't interfering with other people work and effort.

youtube · AI Responsibility · 2024-05-29T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxf8iyN28jE4sCyxKx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxHEc1gTsDGMQ0East4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwajfd0c5oHk7TT7sl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwydKQ2C5N99YVCBW14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz6SahU6khqojorUCB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7nCf6WHTxaKxyWsd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxzgV-R1dIZ0-L7wVZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxpdB5DxkHvlZfRkiF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzUhyDAx3iSyDWsf_p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxwtSsgU1ukXY0XAZ94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
```
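Before a raw batch like the one above is written back as coding results, it is worth checking that the model returned valid JSON and stayed inside the codebook. The sketch below shows one way to do that; the `SCHEMA` vocabularies are an assumption inferred only from the values visible on this page, not the tool's full codebook.

```python
import json

# Allowed values per coding dimension, inferred from the examples shown
# on this page (assumption -- the real codebook may define more values).
SCHEMA = {
    "responsibility": {"company", "ai_itself", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems found.

    An empty list means every record has an id and only allowed values.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    problems = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: bad {dim!r} value {value!r}")
    return problems

raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"fear"}]'
print(validate_batch(raw))  # → []
```

A batch that fails validation can then be re-prompted or routed to manual review rather than silently coded with out-of-vocabulary labels.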