Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.

Random samples
- ytc_UgzY8veQ5…: "Let's play a game; if open AI researchers had surprise run ins with a New York …"
- ytc_UgwhUBPWA…: "All of these instances are drivers neglecting to pay attention. Full self drivin…"
- ytc_Ugygb9zO3…: "This is it. The issue is not AI, but who it serves. The ideia of workers making …"
- ytc_UgzldAFsY…: "So the AI is not quite like 'Data' from Star Trek TNG but once that's achieved t…"
- ytc_UgzunU2MB…: "These company’s along with I don’t know probably every other corporation in this…"
- ytc_UgxHi5tLb…: "People bring personal problems to work, and are not always dependable. Elements …"
- ytc_UgxjsYaA9…: "It is impressive, we need universal self driving cars, people would stop dying. …"
- ytc_Ugzef8x01…: "As long as the AI asks for permission to post something, we should be fine..righ…"
Comment
we currently have a system where the most brutal and corrupt among us have power, almost every time... so ai is either going to only be as bad as us, or it will be better. i think that supposing something that can think for itself, yet lacks the greed that so often sends us awry, that we are going to only win if good and lose if bad. we aren't all the same. we have different ideals, and different goals, and different morals. it will too... imho, find its own center. and I think that it plausibly will also align itself with others in search of answers it seeks. but i think peace is always going to be sought and our own behaviors are going to be judged.
youtube · AI Governance · 2025-06-17T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxCbVOiJOkTpKUBrNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_Ivqe7Qry8VzrtIR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzIQ3nUN_7vnEapQKN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyan4qiaIRJe2fxT1x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzBsG4xCmQ7DMTv87h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy2z2Ag80nhIpYaHop4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwRK7Xl5mQabt1IeqF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugze_B0pmBOFhtW6G2R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxhFCiry0KGZ7r_bY54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxnPTrN6g4utkjGeDt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
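The raw LLM response is a plain JSON array with one coding record per comment, so the "look up by comment ID" workflow can be reproduced with standard tooling. A minimal sketch in Python (the two sample records are copied from the response above; loading from a string rather than a file is an illustrative choice):

```python
import json

# Raw LLM batch response: a JSON array of coding records, one per
# comment, with the dimensions shown in the Coding Result table.
# Two records copied verbatim from the response above.
raw = """
[
  {"id": "ytc_UgwRK7Xl5mQabt1IeqF4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugze_B0pmBOFhtW6G2R4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

records = json.loads(raw)

# Index by comment ID to support direct lookup.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgwRK7Xl5mQabt1IeqF4AaABAg"]
print(coding["responsibility"])  # prints "distributed"
```

Note that the record retrieved here matches the Coding Result table above: the same comment ID yields responsibility "distributed", reasoning "mixed", policy "none", and emotion "mixed".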