Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgxpVTkXF…: In March 2018, a self-driving Uber Volvo XC90 struck and killed 49-year-old Elai…
- ytc_UgzGmPlSu…: Plot twist: there’s people behind the scenes being paid by the government to voi…
- ytc_UgxsQ8ULb…: The robot is good but there is a huge advantage, the robot does not feel pain, t…
- ytc_UgyCEsxJj…: It seems to me that history has shown us that how things become stable is thru m…
- ytc_Ugw-P-chG…: "I consume the PRODUCT, not the METHOD" XQC's streams are a PRODUCT that his vie…
- ytc_Ugz1JzHYN…: The people that fear AI the most also fear the “global order”. Yet somehow they …
- ytr_UgxRjYYGf…: don't forget the industry is always 2 to 5 years behind big tech as far as adopt…
- ytc_UgwvnoXSS…: A lot of L takes. I get brandon has to make a business decision. He has a large …
Comment
He's not wrong. But if AI is smarter than us, then we probably won't ever know that it's in control. Like The Matrix. Mother nature might dish out some chosen humans who can see the truth and work to free others from its hypnotic hold on humanity.
Ah well, let's see what happens. Sounds like an exciting page of future history that we're all part of.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2025-12-06T00:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzaYvGncT03U0EEwht4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw84M4SJj66fqORqlJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx7QtITYinPyHpVlPV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx9qr8sQ5LW71ToueJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyRINgs_yau6K3lOqp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwKhnQdyc5St-YtOuN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx8Pkvz8qtwZ_MzZEh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugye9QNJmoF8KOFHLcF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyMeUTRv0497wIvD3p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwEgTh_kpYXkJNO2yh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
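A response like the one above can be checked before it is written back to the coding store. The sketch below parses the raw model output and flags any row whose dimension values fall outside the set observed in this batch. Note this is an assumption: the real codebook presumably defines the allowed values explicitly, and may permit more than appear here; `validate_codings` and `OBSERVED_VALUES` are illustrative names, not part of the pipeline.

```python
import json

# Values observed in this batch of codings (assumption: the actual
# codebook may define additional allowed values per dimension).
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "ai_itself", "company"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}


def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and reject rows with unexpected values."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError("row is missing a comment id")
        for dim, allowed in OBSERVED_VALUES.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{row['id']}: unexpected {dim}={value!r}"
                )
    return rows


# Minimal usage with a hypothetical comment id:
raw = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"virtue","policy":"none","emotion":"outrage"}]'
)
rows = validate_codings(raw)
print(len(rows))  # → 1
```

A validation step like this catches the common failure mode of LLM-based coding (a hallucinated or misspelled category) before it silently enters the dataset; rows that fail can be queued for re-coding rather than dropped.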