Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> The reason AI a "threat" is that all the Wall St and corporate profit mongers are pushing it out for the same reason they've done anything in the world .... to PROFIT, because money is the only thing that matters in the end. Secondly, the military wants the tech so THEY can do what they do best .... control by force and the threat of force.

Source: youtube · Topic: AI Governance · Posted: 2025-07-25T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxCQpTtIyhmspKxf2V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzE30HAQluEyeGI76N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSKdDu-d0ICscxxa14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJEgqRWMfXhDW3_v94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugya8v2yMMt97FfcZ6B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgySE7v1HeRT1kHCJJV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz5u-yAazoHemX9Ob94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzbL2un8iU8RthS7WF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwhT4v_y3p7DhfdMJ14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgyJYsIe1ixFEq2L9HN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
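Looking up a coded comment by ID amounts to parsing a raw batch response like the one above and indexing it by the `id` field. A minimal sketch, assuming the response is a JSON array of per-comment code objects (the two rows below are copied from the array above; the variable names are hypothetical):

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw_response = """[
  {"id": "ytc_UgxCQpTtIyhmspKxf2V4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzbL2un8iU8RthS7WF4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]"""

# Parse the batch and index it by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Fetch the codes assigned to one comment.
row = codes_by_id["ytc_UgzbL2un8iU8RthS7WF4AaABAg"]
print(row["responsibility"], row["policy"])  # company regulate
```

In practice the model may wrap the array in prose or a code fence, so a production version would first extract the bracketed span (or use a JSON-mode / structured-output setting) before calling `json.loads`.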