Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugxfda22P… — "Regressive opinions. Private data set plus open source model equals AI. The diff…"
- ytc_UgyAElhuk… — "If autonomous is the future, the question doesn't become how to protect trucking…"
- ytr_UgyKnN6XA… — "God that’s such a weird thing to say.. nobody cares if it will work on you, it c…"
- ytr_UgwCHoJvE… — "but at some point, some computer has to make a decision: who's life is prioritiz…"
- ytc_Ugz1ch5jh… — "First weed fried your brain like an egg. OK. Sure. Next it was video games tha…"
- ytc_UgwKI_kXi… — "No wonder AI turns out like this when you feed it real world data like 13/56…"
- ytc_Ugx1cKu0O… — "When will people realize, AI isn't always accurate. It was only recently that A…"
- ytr_UgzdklKFD… — "The problem is, what AI is fed is propaganda and garbage. This is not real infor…"
Comment
> Interesting discussion, seems like Steven's more onboard now about the AI risks :) Also 2 interesting points and questions to ponder:
> 1. Professor Russell mentioned that Singapore has a coherent AI strategy for future, what is that strategy and where can I read about this?
> 2. Professor is working on trying to keep AI systems below human capabilities to ensure controllability, is there a scenario where we can balance the AI capabilities to be equal to human intelligence and maintain control? This could potentially expand human wellbeing without being subjugating or being subjugated
> 3. What is the one thing that would incentivise more safety prioritisation for large tech firms, is it regulation or access to markets etc? Problem is that companies like OpenAI and Google have already access to most of the large economies of the world.
youtube · AI Governance · 2025-12-04T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxhNhyo_zmGfaSgNnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzi70UQKeAnDkqg0_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxtMi-kto18UCPHCu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgwA9ZI9baa1Pb2_0mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwbeks40lI9SX4pBHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyyzHtlCz5y5cti_Gd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy3LcxGMCLzxUMeZf14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwucjlkdtpFnLCuYQp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwCJbkb2sliJVZ2O014AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgylV5Kg6xWWLDV2R_x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
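The raw response is a JSON array with one object per coded comment, carrying the four coding dimensions shown in the result table. A minimal sketch of parsing and validating such a response in Python — the field names come from the output above, but the allowed value sets are assumptions inferred only from the values visible here, not an authoritative codebook:

```python
import json

# Allowed values per dimension — inferred from the responses shown above,
# NOT an authoritative schema; extend as the real codebook dictates.
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}},
    raising ValueError on any unknown dimension or value."""
    codes = {}
    for entry in json.loads(raw):
        cid = entry.pop("id")
        for dim, value in entry.items():
            if dim not in ALLOWED:
                raise ValueError(f"unknown dimension {dim!r} for {cid}")
            if value not in ALLOWED[dim]:
                raise ValueError(f"unexpected value {value!r} for {dim} on {cid}")
        codes[cid] = entry
    return codes

# Two entries copied verbatim from the response above.
raw = '''[
  {"id":"ytc_UgxhNhyo_zmGfaSgNnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwA9ZI9baa1Pb2_0mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''
codes = parse_codes(raw)
print(codes["ytc_UgwA9ZI9baa1Pb2_0mR4AaABAg"]["policy"])  # regulate
```

Validating eagerly like this is useful because an LLM can drift outside the codebook (e.g. inventing a new emotion label); failing loudly at parse time keeps bad values out of the coded table above.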