Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgzhLrqdL…`: "This is absolutely dangerous. We shouldn't be relying on Ai for education at all…"
- `ytc_UgyDvrUM_…`: "did anyone see this as a surprise? ai is based off of humans. humans are flawed,…"
- `ytc_UgzJ6rL5s…`: "Universal high income for bad, dangerous and unintelligent people might be scari…"
- `ytc_UgyVzDUg8…`: "We need to stop calling ai images art, there is nothing artistic about it like y…"
- `ytc_UgxWuj-Ep…`: "The main reason why it has some ethical issues is because these artists need to …"
- `ytc_UgxN6n5AF…`: "But why??? Why don’t we just limit these tools to things like medicine, healthc…"
- `rdc_n7yu0sy`: "I think it all depends on expectations. A lot of things could be automated, even…"
- `ytc_UgwB6grIo…`: "With this method only 50% of kids will go completely ignored and uneducated... y…"
Comment
The last comment about how business leaders picking what's better for people as opposed to choosing based on the company is very unrealistic, sure some will choose the workers but historically most will choose efficiency and profit over the lofty goals doing whats right for people. It's just a fact! The answer is so obvious; someone should be picked to head up an international group to govern AI's roll out with all the things that entails with the power to enforce it and of course the knowledge of all of this. My pick would be Geoffrey Hinton, and he could start by picking Tyson as his right hand to help him balance his decisions. Probably several of the people on this panel would be or should be part of this group.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2026-03-23T15:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyFaPo5YEWO9sgresx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzRLgvTqt9ey242tCR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy8d0Js_dXIzGicbSZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzRRVCgkkUw2nFLUgF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzA7PO0_1cxTs4EzvV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8X2JzGTkDZ77pPkB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyJI14QOZPtdpR5sgh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxpzCnzAEDI-ghOi4p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugycg-hd13Bv_7P3CFN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxR-dZBHirmy3z52eB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
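A raw response like the one above is a JSON array of coded records, one per comment ID, with one value per coding dimension. The sketch below shows one way to parse such a response, validate each record against the value sets visible in this output, and index the results by comment ID for lookup. The `SCHEMA` sets are assumptions inferred from the values shown here; the real codebook may allow more categories, and the two-record `raw` string is a shortened stand-in for a full model response.

```python
import json

# Allowed values per coding dimension. NOTE: assumed from the values
# visible in this dashboard; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"outrage", "mixed", "fear", "indifference"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments),
    validate every record against SCHEMA, and index it by comment id."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        # Keep only the coding dimensions, dropping any extra keys.
        by_id[cid] = {dim: rec[dim] for dim in SCHEMA}
    return by_id

# Shortened example response (first two records from the output above).
raw = """[
  {"id": "ytc_UgyFaPo5YEWO9sgresx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzRLgvTqt9ey242tCR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

codings = index_codings(raw)
print(codings["ytc_UgyFaPo5YEWO9sgresx4AaABAg"]["policy"])  # -> ban
```

Validating before indexing means a single malformed or out-of-schema record fails loudly with its comment ID, rather than silently contaminating the coded dataset.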