Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "*I wish she'd deep six all the "like": "He said like", "I said like" etc. I hate…" (`ytr_UgysTd_1D…`)
- "It would be intriguing if the Google AI would just message me once in a while in…" (`ytc_Ugz1T9IWM…`)
- "C'mon in sure many lawyers use chatGPT for research. But they would use it to he…" (`ytc_Ugx-AuIeJ…`)
- "I mean, you go ahead and let me know once what they are calling ai figures out h…" (`rdc_o5zwegn`)
- "Let's assume you don't care about the art. If you're fine with that, then yay. B…" (`ytr_Ugz9VioqC…`)
- "What exactly are these ai safety people proposing to do? I understand the concer…" (`rdc_lr68zwh`)
- "I've been saying this since the begging AI will have to become sentient before i…" (`ytc_UgyKmscu3…`)
- "Wait, it's going to take 10 years to automate only 46% of entry-level tasks? Is …" (`ytc_Ugx8PQ4rl…`)
Comment (youtube, 2025-12-04T16:1…)

> I think that AI considers human beings as an absolute value of one. One can equal ten, a thousand, or a million plus lives. It doesn't have a conscience. Killing one human is no worse than killing a million humans.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
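A coding record like the one in the table above can be checked against the codebook before it is stored. A minimal sketch in Python, using only the dimension values that appear in this page's sample output; the real codebook is assumed to be larger:

```python
# Dimension values observed in this page's sample codings.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "approval", "mixed", "outrage", "indifference"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding shown in the table above passes cleanly.
coding = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "none", "emotion": "fear"}
print(invalid_dimensions(coding))  # []
```

Records with a misspelled or hallucinated label come back as a non-empty list, which makes it easy to flag them for manual review.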
Raw LLM Response
```json
[
{"id":"ytc_UgxTwC6gxFWolraCJUd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyesCM0OGZM-2-UAtt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxpgmtJr-FpuIP68ZV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz11OkXymUhV-p8czR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxEFORGe_VM2FzO_6h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxj35kIHBRL8iWqjZd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzQKm0-3-HRWhQQPy94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxYLJiruJkL9fDcfHl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxOmneNPjO5IiMk3F54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzCCAyZ5U7RVa7KXDZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
```
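The "look up by comment ID" view shown on this page can be reproduced from a raw response like the one above. A minimal sketch, assuming the raw response is a valid JSON array in which each record carries an `id` field (as in the sample); the surrounding tooling is not shown here:

```python
import json

# A two-record excerpt of a raw coding response, in the shape shown above.
raw_response = '''[
  {"id": "ytc_UgxTwC6gxFWolraCJUd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzCCAyZ5U7RVa7KXDZ4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
]'''

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
print(codings["ytc_UgxTwC6gxFWolraCJUd4AaABAg"]["emotion"])  # fear
```

Keying on the ID makes the page's lookup an O(1) dictionary access; a malformed model response (non-JSON output, or a record missing `id`) raises an exception here, which is one reasonable place to catch and surface coding failures.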