Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Karen Hao is incredibly right, but at the same time incredibly wrong. She undere…” (ytc_UgxGesliM…)
- “Yeah I ain’t paying 200 for art buddy if I want a silly goofy picture I’ll ask a…” (ytc_UgyDnYqVp…)
- “Ai would have to be regulated as a utility or a monopoly if its all the means of…” (ytc_UgzgudT8q…)
- “How have we as a society reached a point where people turn to AI chatbots for su…” (ytc_UgwwL4Hny…)
- “The way to go is restraining order and document everything privately and present…” (rdc_nrtlxq3)
- “In reference to your comment at 53 minutes; if there were a one percent chance t…” (ytc_UgxO2fgt9…)
- “All of this is true Mr. Sanders. On a 10 year timescale. But the people won't st…” (ytc_UgyG7uf1X…)
- “@@SetaroDeglet-Noor Yes. But GPT-4 isn't an existential threat. It is not AGI. A…” (ytr_UgwHyJCTs…)
Comment

> I think that AI’s major affect is that it will take over a lot of major tasks, but it will still require human input, thus not as many layoffs as originally expected. It won’t necessarily result in the loss of a job, but instead a decrease in pay (30% or more). And I almost find that to be more problematic.
>
> Greed only works if people are able to afford your product, which means that AI would result in a net loss if everyone is laid off. Who knows, but I think it’ll be more bad than good for society

Source: youtube | Topic: AI Governance | Posted: 2025-11-03T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxm5AujfDPVHOnpYTZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyKOwW8MYJfJI2eWoR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzGUApaN3WCdQSM6vF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw_mt2ZeciTOXowGGx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxE7TJ4wC1t_8NaPEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzWY0Oli3hdS78WdyV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEzIUdvnsXP2nmYxB4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwkKcfC3v7ksC631lB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwmDVlo4k-F3XgxAtN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyPTN6ciu5pHY7zhKl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
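A response like the one above can be validated before its records are written back into the coding table. Below is a minimal sketch; the allowed category values are inferred from the samples on this page, and the real codebook may permit more (the `ALLOWED` sets and the `parse_coding_response` helper are illustrative, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the samples above.
# Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the inferred codebook, so bad codings fail loudly.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim} value {value!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record taken verbatim from the raw response above.
raw = ('[{"id":"ytc_UgwkKcfC3v7ksC631lB4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgwkKcfC3v7ksC631lB4AaABAg"]["policy"])  # regulate
```

Indexing by comment ID mirrors the "look up by comment ID" workflow above: once parsed, any coded comment can be retrieved in constant time for inspection.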