Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
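As a minimal sketch of what the ID lookup could do behind this box, assuming the coding results are stored as one JSON object per line (JSONL) with the same fields as the raw response shown further down; the file name and storage format are hypothetical, not confirmed by this page:

```python
import json

def lookup_coding(comment_id: str, path: str = "coded_results.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if it is absent."""
    # Scan the JSONL store line by line; each line is one coded comment.
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# e.g. the comment inspected below:
# lookup_coding("ytc_UgwvM82dmJvjQ32kCHV4AaABAg")
```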
Random samples — click to inspect
- "3:06 Is there a list of sources available for the specific claims made in this v…" (ytc_UgyHroRQ8…)
- "Whenever the discussion of AI taking jobs or killing all humans I muse about all…" (rdc_ld1hu3s)
- "If you didn't write the code, the code isn't yours and you cannot claim IP over …" (ytc_UgxRV-8cF…)
- "People are giving these things the wrong attention the inappropriate considerati…" (ytc_Ugxb3x6Ql…)
- "It seems like they did nothing , It's a robot why would he say I'm alive…" (ytc_UgyoMzMl0…)
- "Or, without autopilot he would have been paying a better attention to his mirror…" (ytc_Ugzu03TL-…)
- "The only thing i disagree with, is that i dont accept that lying can only be don…" (ytc_Ugx_wRE0l…)
- "you do know that AI doesn't take inspiration,it steals without consent..right? a…" (ytr_UgzZEEy7V…)
Comment
Has anyone considered that AI developers have to develop a set of World Morality weights and then hope that once developers use those morals in the development of their agents, what's going to happen when (borderline) agents work with each other and produce a combined distortion of the original task. AI Should NEVER make decisions about anything until the World comes up with a Moral set of values that is agreed to by ALL. In other words never. Systems can be corrupted by the average fault of many components but each component when checked passes the tests, therefore providing a blameless platform to do intentional harm like component viruses built to combine to create a threat but each individual piece doesn't pose a threat by itself.
Source: youtube · Posted: 2025-10-09T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
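The four coded dimensions in this table could be modeled as a small typed record. This is a sketch under the assumption that the allowed codes are exactly the values visible on this page (here and in the raw response below); the project's actual codebook may define more:

```python
from dataclasses import dataclass
from typing import Literal

# Only values visible on this page; the full codebook may define more.
Responsibility = Literal["developer", "company", "ai_itself", "none"]
Reasoning = Literal["deontological", "consequentialist", "virtue", "mixed"]
Policy = Literal["regulate", "none"]
Emotion = Literal["fear", "outrage", "indifference", "approval"]

@dataclass(frozen=True)
class CodedComment:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```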
Raw LLM Response
```json
[
{"id":"ytc_UgyAMGLYBaoHJDVr3A14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwN-evorx6RHjXAU4Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyhNFOnH0AMhcnUpB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzS7gNgHADUEG5HNXF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzGIRYcO7M9fyTyEWx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwvM82dmJvjQ32kCHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyTb_Yx8GWkiW8WQ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwf8ONw1UL2MhuhvIp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzBHjG_fSCKe1ly1YB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPju0cCZqXN6oRYCd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
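Each raw response is a single JSON array with one object per comment in the batch, keyed by id. A minimal parsing sketch, assuming the model returns a bare JSON array exactly as above; stripping markdown fences and retrying malformed output are real-world concerns this omits:

```python
import json

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict[str, dict]:
    """Map comment ID -> coded dimensions from one raw batch response."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded rows")
    coded = {}
    for row in rows:
        missing = REQUIRED - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing {sorted(missing)}")
        coded[row["id"]] = {k: row[k] for k in REQUIRED - {"id"}}
    return coded
```

Keying by id is what lets the inspector resolve a coded row, such as the one rendered in the table above, back to its source comment.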