Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect
- `ytc_UgwaDw-gt…`: "Ai can be used for good or evil average worker bad but rich ceo that doesn't des…"
- `ytc_Ugyy6DTmN…`: "Further Isn't the ability to the ai to acknowledge the fact that the AI is lying…"
- `ytc_UgzpmtsC6…`: "the issue is taht it is more and more difficult for most people (me included) to…"
- `ytc_UgwHBK5ye…`: "3 robots and 1 human not counting the camera man, would i really want to be arou…"
- `ytc_UgyCyuk3l…`: "It doesnt help that news websites allow fake AI news articles to advertise on th…"
- `ytc_UgxiCwGfi…`: "It is not what it is "who" Is coming. Muslim knows this "WHO" May Allah save us …"
- `ytc_UgyQjSnN_…`: "Tesla's approach is moronic. To keep it simple, AI is, at its best, inferior to …"
- `ytc_UgytrFDpx…`: "The main reason why all the jobs WON'T disappear is that when people have no job…"
Comment
I'm glad that Ezra grilled this guy the way he did. I've watched several interviews with these AI alarmists and really they're all the same. They're making anthropomorphic leaps when describing how AI operates, and a lot of these otherwise intelligent people genuinely don't seem to notice, which is strange to me. I mean...we're talking about lines of code. Lines of code - words - cannot spontaneously develop sentience, or consciousness, or desires, or resentment. They're literally words.
I still await an interview with an AI alarmist who can actually convince me that there's something to be concerned about aside from human interception and manipulation for bad ends.
Source: youtube · Topic: AI Governance · Posted: 2025-10-15T20:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx0eO84iCVdGa-cKip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8PlCBzNjvAigLxFh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyt3hv5O8ERb9YLSoB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyfgxGpRqKXk1E697R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxV6pE8mgjX3NxCgAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOAM377rC3BN7EAil4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxnVyar3ZKhY8tQS2B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXCp0x5W-aQeQ8lBp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzdO69m5g0_OjZkzkd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
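For downstream analysis, a raw response like the one above can be parsed, indexed by comment ID, and sanity-checked against the codebook. A minimal sketch in Python; note that the allowed value sets below are inferred from the entries visible on this page, not from the full codebook, so treat them as an assumption:

```python
import json

# Allowed values per coding dimension, inferred from the visible entries
# (assumption: the real codebook may define additional categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear", "liability", "regulate", "industry_self"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval"},
}

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of rows) and index rows by comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: row for row in rows}

def validate(row: dict) -> list:
    """Return (dimension, value) pairs whose value falls outside the schema."""
    return [(dim, row.get(dim))
            for dim, allowed in SCHEMA.items()
            if row.get(dim) not in allowed]

# Usage: look up the comment shown above and confirm its coding is well-formed.
raw = ('[{"id":"ytc_UgzOAM377rC3BN7EAil4AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
coded = index_by_id(raw)
row = coded["ytc_UgzOAM377rC3BN7EAil4AaABAg"]
assert validate(row) == []  # every dimension is within the allowed sets
```

A lookup that misses the index (an ID the model never coded) raises `KeyError`, and `validate` surfaces any value the model hallucinated outside the codebook, which is the usual failure mode worth checking before aggregating these labels.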