## Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a comment by its ID.

### Random samples
- "More cameras more ai in life not as good I think as they all thinks…" (ytc_Ugw6xXsIt…)
- "One emotion humans can program into robots is respect, that way no matter what w…" (ytc_UgxHioWGQ…)
- "Everything we do or learn can be learned. Except for our humanity. So maybe focu…" (ytc_UgzO1Xvo4…)
- "@chesapeake566 US will become Mad Max not Bangladesh if they let the 1% control …" (ytr_UgyDLclhN…)
- "Ezra is unreasonably positive on Waymo. There is no good evidence that they are …" (ytc_UgxxHmJTO…)
- "So….if they don’t record facial recognition…why are there so many still function…" (ytc_UgwbtST32…)
- "8:29 Am I crazy or do they have completely different ears? We use multiple…" (ytc_UgzNQ1Nui…)
- "i'm a dev myself for now 30y. and i'm using ChatGPT (3.5.) for creating some cod…" (ytc_UgwqZ8iNI…)
### Comment

> Why can't professor Roman start building a parallel AI to stop all the malicious acts which super intelligence agentic AI is likely to do in future....machine learning should be able to help in learning from human history how the WRONGs have been halted or brought to an end...G7 summits should be discussing this..however greed for dominance in world order can be a deterrent..which again sparks the counter argument that agents should be able to predict in advance who is getting greedier than necessary...

youtube · AI Governance · 2025-09-05T15:0…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response
```json
[
{"id":"ytc_Ugx6gGG7FzPhOAlXoK54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBRXPJ8LUMzuym8MJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyiJoxDWDUT03Yfuo14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwvwvXPzBCNK2No5uZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYVYrd6IzUbrdsaDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy-ZkoADoJRVCBxf9h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz3gmEyCQ6_dGsxHJ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgygpZ1ETacGeO0Q75N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"sadness"},
{"id":"ytc_UgzOYM-l3ccsmedjnh54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgybUXgbCiC3ZssaPKh4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
```
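The raw response is a JSON array of per-comment codings keyed by `id`, which is what makes the by-ID lookup above possible. A minimal sketch of parsing one of these batch responses and indexing it by comment ID (the two sample rows are copied from the response above; the helper name `index_codings` is an assumption, not part of the tool):

```python
import json

# Two rows copied from the raw batch response shown above.
# Field names ("id", "responsibility", "reasoning", "policy", "emotion")
# are taken directly from that response.
RAW_RESPONSE = """
[
  {"id": "ytc_UgwvwvXPzBCNK2No5uZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgygpZ1ETacGeO0Q75N4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "sadness"}
]
"""

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index the codings by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)
row = codings["ytc_UgwvwvXPzBCNK2No5uZ4AaABAg"]
print(row["responsibility"], row["policy"])  # → company regulate
```

Indexing by `id` up front makes each subsequent lookup a constant-time dictionary access rather than a scan of the array.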