Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples — click to inspect
- ytr_Ugy-ktkB3…: "aliceuwu1013 I'll have to check it out sometime. Personally I'm fine with AI mus…"
- ytc_UgyCm9b6a…: "Yessssss, i've always said that AI is a great tool, it's just that our society i…"
- ytc_UgzfrCu2a…: "I kept getting wrong number calls for over a year awhile back from an elderly wo…"
- ytc_UgzSSunfV…: "Dont let the ai flirt with me they will be in ai hell in seconds U_U…"
- ytr_UgwtFFpDW…: "Human over machine. If we're talking about AI Artists specifically . I can assur…"
- ytc_UgxilP8wV…: "Honestly, the more stuff I watch like this, the more I realize how stupid prompt…"
- ytc_Ugw3TOjXp…: "If most jobs disappear while AI supports the economy, the only "solution" is uni…"
- ytc_UgzhRr39T…: "I asked ChatGPT: "Are you designed to support DEI values?" It responded: "As …"
Comment
The future is a mixed bag. One day cancer will be completely cured, but at the same time, AI technology will be used to abuse people's rights in an authoritarian society or to kill humans very effectively in conventional warfare. Or a renegade AI computer could launch nukes by itself? It's a scary scenario to even think about. Mary Shelley's "Frankenstein" and the movies "T2" and "I, Robot" aren't so farfetched after all.
Source: youtube · 2025-01-06T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
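One way to sanity-check a coding record like the one above is to validate each dimension against its set of allowed values. A minimal sketch, assuming the value sets are only those observed in this page's responses (the real codebook may include more), with an illustrative helper name:

```python
# Allowed values per dimension, as observed in this page's raw responses;
# the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def invalid_fields(coding: dict) -> list[str]:
    """Return the dimension names whose value is missing or outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

coding = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(invalid_fields(coding))  # []
```

An empty list means the record passes; any returned names point at fields to re-code or flag for manual review.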
Raw LLM Response
[
{"id":"ytc_UgxdbgTWrH8RMKq3gO14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxEDTGzoHszlLO4q2l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzfSEp6XEO49JI_J5x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQNzXd2pk0QjsEhpx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwqTAh2jazQQpIeCU14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxdc3hZFcGbRyKkKRx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzlbrhbAvYs8W3P_pZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_JtXqJlIGcBMbRAZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwydQhCmkN9-gz6fi94AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOhDeTdwd3ybcdhQB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
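The raw response above is a JSON array of per-comment codings. A minimal sketch of parsing such a response and looking up one comment's coding by ID (the helper name and the two-record sample string are illustrative; the array shape matches the response shown above):

```python
import json

# A small sample in the same shape as the raw LLM response above.
raw_response = '''[
  {"id": "ytc_UgwOhDeTdwd3ybcdhQB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwqTAh2jazQQpIeCU14AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding dict for one comment ID,
    or None if that ID is absent from the batch."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwOhDeTdwd3ybcdhQB4AaABAg")
print(coding["emotion"])  # fear
```

Returning `None` for an unknown ID makes it easy to detect comments the model silently dropped from a batch.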