Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by comment ID.
Random samples:

- "Apocalypse is coming and humanity will bring it about...we dont need GOD to do i…" (ytc_UgyH4rrxd…)
- "I agree that capitalism is the key problem. But I think the interest in AI goes …" (rdc_mbwnhhp)
- "I HAVE TRAVELED BACK HERE FROM A VERY VERY BAD FUTURE IN A DESPERATE ATTEMPT TO …" (ytc_UgyL4rEog…)
- "It's capitalism that is going to be wrecked by AI - the removal of the majority …" (ytc_UgxcfKSZN…)
- "I hate the stupid answer of \"learn a trade\" when it comes to AI taking jobs... w…" (ytc_UgyU3-n3K…)
- "If there is 15-20% unemployment. The economy will tank, assets will devalue and…" (ytc_UgzBQlLQj…)
- "In regards to the ai age verification, i have a small amount of input. Of course…" (ytc_UgzittdWm…)
- "Isn't that your fault as a driver for not realizing that the car you're driving …" (ytr_Ugz0ZQTRQ…)
Comment
One of my favourite interviews on the channel, though I still think that we, the people, can do something about it. Afterall, AI is learning mainly from us. We have the power to teach it with how we speak with it and with how we treat it and we also have the power to say "no" to using AI and other products from the big tech companies.
If humans were to unite for the good of all, maybe we wouldn't have to fear the machines.
youtube · AI Governance · 2025-08-19T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwhOMGHO1Oiug6O5nJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9hw0L7tbnKaTrwo14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlmjKGOXB-A63y05p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyEXgoCwhKC31o59gZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugy9UGOmRzJ1Hv4MXV54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzY0NdWtuwJiserE614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzK39l4NZUBovUz7bR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4Tt6GTJ6D4cU2uKh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzdt1v8Th8lh_nab5t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxXy0ghLN7krfEqFDV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
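The raw response is a JSON array with one object per comment in the batch, each carrying the four coding dimensions from the table above. A minimal sketch of parsing and validating such a batch — the allowed values below are only those observed in the responses shown here, so the real controlled vocabularies are an assumption and may be incomplete:

```python
import json

# Values observed in the sample responses above; the full vocabularies
# used by the coder may include labels not seen here (assumption).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"approval", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response into {comment_id: codes}, rejecting unknown labels."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        coded[row["id"]] = codes
    return coded

# Hypothetical one-item batch in the same shape as the raw response above.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
print(parse_batch(raw)["ytc_example"]["reasoning"])  # virtue
```

Keeping the comment ID in every object lets a batch response be joined back to its source comments without relying on array order.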