Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgzjF7-za…`: "When artificial intelligence launched, they said that it would benefit humanity …"
- `ytc_UgyQc_7U9…`: "1:15:21 most arrogant way to interpret religion lol. There is a religion that ha…"
- `ytc_UgxeUuSxT…`: "I don't mind being used as a modal for openAI, as long as the benefits I get fro…"
- `rdc_mvair4t`: "Two things. One, GenX and Millenials are going to stay in the work force a lon…"
- `rdc_dv64yqp`: "My God just leave these beautiful animals alone already. Nothing and no part of …"
- `ytc_Ugw6WKiqU…`: "Self-driving cars i think do have the knowledge to not be driving near huge carg…"
- `ytc_Ugz63ooZ9…`: "I use ai alot like alot... It's usually to like diagnose stuff and a place to ex…"
- `ytc_Ugw60ATmt…`: "I feel like we are living in open air jail. Thank god for men like this.…"
Comment
Its mind boggling that guys like him warn us what we should be afraid of while clearly demonstrating why he helped cause the danger and is still part of the danger. The risk is not some sentient AI or some evil dictator running a botnet, the risk is your view of humanity. As he sits there and says, "we used to think we were special..." he doesn't seem to realize that believing machines can be just as valuable and unique as humans is the entire problem. As long as you have that you'll cloud your brain with nonsense and strain to come up with analogies that ignore the point. Focus on preventing people from being hurt. Need to attack the root of the problem, and that is a worldview that doesnt value people.
Platform: youtube
Topic: AI Governance
Posted: 2025-06-17T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzidxjthNqa2IoAdD14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYwkhI4vQ9KEc68jB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxr-jBp-8l0pPOoLeV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwjeQU_PBlOoB-prJZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgygB_dQUAoV4TWTm8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwPbStQ9p5GbzoMn0V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugwh9KAasHVsaVgPMed4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwljHlnS4v4v30QeLh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzLR7H1kae-n2PGlnl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8CKn4ZyitYE5lbjx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
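The lookup-by-comment-ID view above can be reproduced outside the tool: since the raw model output is a JSON array of per-comment codes, parsing it and indexing the rows by `id` gives direct access to any coded comment. A minimal sketch (the two sample rows are copied from the response above; `raw_response` and `codes_by_id` are illustrative names, not part of the tool):

```python
import json

# Raw model output in the format shown above: a JSON array where each
# element codes one comment along four dimensions.
raw_response = '''
[
  {"id": "ytc_UgwjeQU_PBlOoB-prJZ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwh9KAasHVsaVgPMed4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
'''

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Inspect the exact codes assigned to one comment.
code = codes_by_id["ytc_UgwjeQU_PBlOoB-prJZ4AaABAg"]
print(code["responsibility"], code["policy"])  # developer liability
```

The same index makes it easy to render a per-comment "Coding Result" table like the one above by iterating over the dimension keys of a single row.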