Raw LLM Responses
Inspect the exact model output behind any coded comment, or look a comment up directly by its ID.
Random samples (truncated previews, with comment IDs):

- "He said its releasing a demon yet he keeps making them robots and now trying to …" (ytc_UgzkcW33A…)
- "I wouldn't even call it Art. Sure, there are some pictures that I like and it's …" (ytc_UgyUhrBPF…)
- "The difference between an artist taking inspiration or doing a study on another …" (ytc_Ugzt4fvms…)
- "You are talking about precise matters - but that is what current gen language mo…" (ytr_UgzuN1zEv…)
- "For people everywhere in the world (Even in the comments) probably might consid…" (ytc_Ugxvz7zyP…)
- "Self-driving cars need to be banned from use on public roads. They're a danger t…" (ytc_UgyQZ89Gf…)
- "Re generated images, it all sounds too easy to flag them as such, but any modern…" (ytc_Ugxf_TXzr…)
- "AI is too dangerous to be our children's teachers or any part of their major edu…" (ytc_UgzfHnw9B…)
Comment (source: youtube, posted 2025-06-09T07:1…)

I have something to say about whether resources will be distributed well to people in a state of super prosperity by AI. Many people will think it is unfair that those who harm others or are lazy receive the same rewards as those who do not. How about differentiating rewards through people's memories and recorded behaviors? It is not simply about rewarding in order of contribution to the world. It is about introducing a fair, sophisticated, and wise evaluation system that many people can accept. Of course, some vested interests may be reluctant to accept this evaluation system.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
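Each coded comment carries the same four dimensions shown above. A minimal sketch of validating a coding record against the value sets observed in this export (the full codebook may define more categories; the `validate` helper is illustrative, not the tool's actual API):

```python
# Allowed values per dimension, as observed in the raw responses on this
# page. NOTE: assumption -- the real codebook may permit additional values.
DIMENSIONS = {
    "responsibility": {"none", "developer", "distributed", "company",
                       "ai_itself", "government"},
    "reasoning": {"consequentialist", "mixed", "virtue", "contractualist",
                  "unclear", "deontological"},
    "policy": {"none", "unclear", "regulate", "ban", "liability"},
    "emotion": {"approval", "mixed", "indifference", "outrage", "fear",
                "resignation"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the coding is valid."""
    problems = []
    for dim, allowed in DIMENSIONS.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record from the table above passes; a bogus record does not.
coding = {"responsibility": "distributed", "reasoning": "contractualist",
          "policy": "regulate", "emotion": "indifference"}
assert validate(coding) == []
```

A check like this catches hallucinated category labels in a model batch before they reach the database.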
Raw LLM Response
```json
[
  {"id":"ytc_Ugzp2LL46qW8BaSKG2R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSSkszKafESEd1kHJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy5wc1JyBhBgjZZbvZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDcYL0UDartkZi-fR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugz-MqAtBhxHmwTYxiF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxyTizTg7Ue0dlSDMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEJk-gYt8JYewiunl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyvONBHtF84BS_kpQh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzkckSJ__55vekEt2x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyawWFwQtaGXnl9e_d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
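A minimal sketch of how a raw response like the one above can be parsed and indexed for the by-ID lookup described at the top of this page (function and variable names are illustrative, not the tool's actual implementation):

```python
import json

# A shortened raw model response: two records copied verbatim from the
# array above.
RAW_RESPONSE = '''[
{"id":"ytc_Ugzp2LL46qW8BaSKG2R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxDcYL0UDartkZi-fR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index its records by comment ID.

    Slicing to the outermost brackets tolerates stray text a model
    sometimes emits before or after the array itself."""
    start, end = raw.find("["), raw.rfind("]") + 1
    return {rec["id"]: rec for rec in json.loads(raw[start:end])}

codings = index_by_id(RAW_RESPONSE)
print(codings["ytc_UgxDcYL0UDartkZi-fR4AaABAg"]["policy"])  # regulate
```

Indexing once and looking up by ID keeps inspection O(1) per comment even when a batch response contains many records.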