Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgzOSQp0M…: "Excellent presentation. Very interesting, informative and covered recnt trend i…"
- ytr_UgwzxAdCQ…: "The AI art itself didn’t inspire anyone. The hate for AI art is what inspired th…"
- ytc_Ugw56kHiK…: "Wait.. so what if I join with an AI avatar; is that cool? Or nah? Because I turn…"
- ytr_Ugwo3Dvhq…: "assuming benefit of the doubt it could be youtube algorithm doing its miserable …"
- ytc_Ugz6Xut4N…: "Yet she got no problem using the iphone and 100s of other technologies made in c…"
- ytc_UgwHDtBbw…: "This whole thing is horrible, but youd better believe that theres a segment of t…"
- ytc_UgyUZO21j…: "I am not afraid of AI, because AI is neither good nor bad. I am afraid of people…"
- ytc_Ugx2pUoPv…: "That’s a robot that is controlled by ai
  They are talking about AI in data center…"
Comment (youtube · AI Governance · 2025-11-25T13:3…)

> He is so talking out of his ass, to make you hear what you wanna hear. Sam is one of the leading companies who are pursuing super intelligence at extremely reckless rates.
>
> When he says we need a world treaty to agree on the fact that no one can develop super intelligence anytime soon. We can only focus on narrow AI at the moment then I will start believing he has good intentions and isn’t just putting on a face to please, the people who are educated enough to know the real wrists of what’s going on right now. And it is not hard to educate yourself on this, just look up some podcast interviews on YouTube from the leading world researchers in AI development that have quit the development of AI and started to pursue safety awareness for AI instead of working on it with everyone else because they know we do not have a solution to alignment over anyway to shut this down. If it gets super intelligent, the brightest minds that have worked on this I’ve literally said it is impossible at this moment.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
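A coded record like the one above can be sanity-checked before it is stored. The allowed value sets below are only inferred from the sample codings visible on this page, not taken from the actual codebook, which may define additional categories:

```python
# Dimension value sets inferred from the sample codings on this page
# (assumption: the real codebook may contain more categories).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed"},
}

def validate(record):
    """Return the names of dimensions whose value falls outside ALLOWED."""
    return [dim for dim, values in ALLOWED.items()
            if record.get(dim) not in values]

record = {"responsibility": "company", "reasoning": "deontological",
          "policy": "regulate", "emotion": "outrage"}
print(validate(record))  # [] — the record matches the inferred value sets
```

A non-empty return value flags records where the model drifted outside the expected categories, which is worth catching before aggregation.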
Raw LLM Response
```json
[
  {"id":"ytc_UgyoazZ8wh7_4rHbrf54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8G9uP71XJPanjMJt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzHD_1Velcyb0KWLT54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyULRjvaIUvVges9Bx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwhryNMeLw6vjIzUgZ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwRGPqIvjmObHYrJ1t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxPM73qiSV9QIf0ACB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgycYD5hf9vwiUcjPs94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzaJsL47Sh-v-Zf1zV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy0M0L6wNxea6wAfbB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
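The raw response is a JSON array with one coding per comment, so the "look up by comment ID" behavior can be sketched by parsing the array and indexing it by `id`. A minimal sketch, assuming the response text is available as a string (the two records here are copied from the array above):

```python
import json

# Raw batch response from the coding model: a JSON array in which each
# element carries a comment ID plus the four coded dimensions.
raw_response = '''
[
  {"id": "ytc_UgyoazZ8wh7_4rHbrf54AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx8G9uP71XJPanjMJt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
'''

def index_codings(response_text):
    """Parse the model output and index the coded records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_UgyoazZ8wh7_4rHbrf54AaABAg"]["policy"])  # prints "regulate"
```

In practice the parse step would also need to handle malformed model output (e.g. a `json.JSONDecodeError` when the model wraps the array in prose), but the happy path is just a dictionary keyed by ID.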