Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "99 percent unemployment is a realistic scenario? Who believes this bullshit mess…" — ytc_UgwgYoKbo…
- "I am not worried about human extinction. I worry about human greed. I worry abou…" — ytc_UgwPHbcKg…
- "As an avid user of Bing AI, I can safely say it has a lot of us artists beaten i…" — ytc_UgzAGIw6_…
- "Manufacturing being gutted by automation tracks with what I’m seeing. Hire Sunst…" — ytc_UgyZhRqne…
- "I will never willingly use AI. Of course it will be forced into my life but I wo…" — ytc_UgyDWwden…
- "I think bigger picture the internet is a large neronetwork, add AI and it become…" — ytc_Ugw7qmwro…
- "Llms cannot \"tell\" anything llms are just autocorrect on steroids, they don't th…" — ytr_UgzuhiM4y…
- "@Phoenixaquaponics ai is more intelligent than human. It may find other source …" — ytr_Ugzf0HlQQ…
Comment

> A battle robot cannot make a sacrifice because if it were to die nothing of value is really permanently lost. The value of a human life gives us the ability to make sacrifices for each other. An AI can never make a sacrifice because nothing you could ask it to do has any true cost to it.

youtube · AI Governance · 2025-06-16T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzbfr58Vi_2kWOEvSN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSBX2ZWxLgE1SfIAh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxnjEJigcgIkcpmEmp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyb4zVfTw9Z7ez4EIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxHDJWYmnovNazeDh94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxt41MZMXzszpssEPd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxXA3t6K4KFYSdhdbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4oAsKkKsfWeFlhpJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwT5Sv5doQu2QZnDe14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyHjNkMj28YtJmmqLF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
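A raw batch response like the one above can be parsed and indexed by comment ID for lookup. The sketch below shows one way to do this; the allowed value sets are inferred only from the samples shown here, so the actual codebook may define additional categories, and `parse_batch` is a hypothetical helper name.

```python
import json

# Dimension value sets inferred from the sample rows above;
# the real codebook may include categories not seen here.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index rows by comment ID,
    flagging any value outside the inferred codebook."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = (
    '[{"id":"ytc_example","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"unclear","emotion":"indifference"}]'
)
coded = parse_batch(raw)
```

Indexing by ID matches the "Look up by comment ID" workflow: `coded["ytc_example"]` returns that comment's four coded dimensions.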