Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "i break the filter on purpose to have traumatize the ai than proceed to manipula…" (ytc_Ugy43XGpY…)
- "Also I ask Chatgpt and Gemini to identify different items. They fail so often. R…" (ytr_UgycdIAOg…)
- "Here’s the interview where Steve Wozniak debunks AI as intelligent. It isn’t. He…" (ytc_UgxzXqMt7…)
- "What these conversations always seem to miss is that the transition from where w…" (ytc_UgxJfnMYm…)
- "Two cents from an AI/ML postgraduate, every argument I've ever seen for machines…" (ytc_UgzfnvvPK…)
- "Pewds made this experiment. He made his own AI councils and made a rule that who…" (ytc_Ugz4EwT9d…)
- "As long as inhibitors on the ai embodiment prevent the understanding or learning…" (ytc_UgzA4Qfdz…)
- "Asking an AI program if it believes in God is so funny to me. Holy shit the Geor…" (ytc_UgxeYF8t0…)
Comment

> Nobody trust AI because it makes fake contents. So we will be back to books and people. The regulation should indicate clearly if this is AI or not. So we automatically not believe it. It is like irradiated (nuclear) foods. We automatically decide not to eat it because we do not trust it.

Source: youtube · AI Governance · 2025-06-19T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwJcC1-ZwVii5UX_td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyoD0ZhUJESYR0Jrhh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgygLnrDczpN7FGwpA54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyiwcDuRiQ1PnCuwu54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw9TBokqcv6sX4iezR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzY4IEwO683lR5coKZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwebaOhCTuTRF--L394AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgznMpHrGcFjXRN1mJZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz33CNR5hhEqtQTH6J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgypkkcdvUZkvQMpORF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
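The raw response is a JSON array, one record per comment, keyed by comment ID. A minimal sketch of how such a batch might be parsed and checked before storing the coding result: the field names come from the output above, but the allowed-value sets are only inferred from the visible samples, not taken from the actual codebook, so treat them as assumptions.

```python
import json

# Allowed values per dimension, inferred from the sample output above;
# the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response and index the records by comment ID,
    raising ValueError on any value outside the expected codebook."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec[dim]!r}")
        out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out
```

Indexing by ID makes it cheap to join a record back to its source comment for the "look up by comment ID" view shown above.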