Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "So nobody works and receives a basic allowance. And if it works, who buys the e…" (ytc_Ugyv5gAB5…)
- "4:46 If AI boosts our work done per hour, we won't see the benefit of it as work…" (ytc_UgwXnNPVE…)
- "When it comes LLMS the US is winning hands down, there’s no question about it…" (ytr_UgzVkF_Nr…)
- "I love how Charlie assumes his Heisenbergs are “quality” AI art. 😂 It’s some pre…" (ytc_Ugw_89L6g…)
- "If ai is learning then it needs to be taught how to be nice and why to be nice l…" (ytc_UgzDHIM6P…)
- "Aw man. We used to just be able to replace 5 million human jobs with just automa…" (rdc_cz35j0w)
- "people can be controlled by AI especially if they rely on it in making decisions…" (ytc_Ugzv3G-Gc…)
- "Humanity was created to create AI. There is other AI in other planets that may…" (ytc_Ugx93Xqni…)
Comment
> I have been working very closely with AI every day for the last 3 years. I just would like to say that you should be very careful taking advice from any 'expert' when they are rendering opinions about things the ultimately do not know about. AI is a frontier science and AGI and ASI are within the perceivable future, however no one has definite knowledge of what 'will' happen. Stuart Russel is a thoughtful, strongly ethical man, however in the interview he has misreported information or referred to events which supports his opinion... or misrepresented facts (like the 30% chance of AI ending humanity) because they support his internal beliefs. No fault to him for this... as we are all looking for evidence to support our own beliefs. I just want to say that he really really does not know the future any more than anyone else. Also if you log over 500+ hours with AI systems... you will come to understand how very far away it is from real human intelligence and how it really is far far far less dangerous than things like cars, or processed sugar (both of which killed far more humans in the last three years than any AI system)
youtube · AI Governance · 2025-12-30T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
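A coded result like the one in the table above maps naturally onto a small record type. The following is a minimal sketch, not the project's actual schema: the field names mirror the table's dimensions, and the example values in the comments are only the categories observed on this page.

```python
from dataclasses import dataclass

# Hypothetical record type mirroring the "Coding Result" table above;
# field names follow the table's dimensions, not any confirmed schema.
@dataclass
class CodedComment:
    responsibility: str  # e.g. "none", "government", "user"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "none", "regulate", "liability"
    emotion: str         # e.g. "indifference", "fear", "outrage"
    coded_at: str        # ISO-8601 timestamp of when the code was assigned

# The row shown in the table, as a record:
row = CodedComment(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-26T23:09:12.988011",
)
```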
Raw LLM Response
```json
[
{"id":"ytc_UgzZ8GaPVkMJQ-rimPl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzSJz5maN0Gsxli-S94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1Qs0L7a6DC7NwS854AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwcUZyNkTj2OKu-WEV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxpIedcG7bzCC9xWaN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyDqq3EV1U4DxtHe1l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxDD19HymOoY65PoVh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3ge_flAvjoEQtdc94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwiq1ku6e-Tp-ze1Rh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgySASjZ1SJBRHOLqiN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
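A raw batch response in this shape can be parsed and sanity-checked before the codes are stored. Below is a minimal sketch, assuming only the structure visible here (a JSON array of records with an `id` plus the four coding dimensions); the function name and error handling are illustrative, not the actual ingestion pipeline.

```python
import json

# Every record in a raw coding response must carry an "id" plus the
# four coding dimensions shown in this page's result table.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's keys."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
    return records

# Usage with a one-record response (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
records = parse_coding_response(raw)
```

Checking keys at parse time catches truncated or malformed model output early, before partial records reach the results table.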