Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples
- "The best use I've found for AI is brainstorming. Not input/output. I think in sy…" (ytc_Ugzjun0B7…)
- "The issue with AI is that because it's not sentient, laws are just suggestions. …" (ytc_Ugx-a2L8C…)
- "So stealing is bad, but what if I only steal a little bit? Is that still bad? Is…" (ytr_Ugyvb8j5B…)
- "Hi YouTubers, take a moment to reflect on something about this documentary about…" (ytc_UgzGpqrdo…)
- "I really hope this fuck around stage is followed by a find out stage soon when i…" (rdc_nlwwb7t)
- "I can't believe you used AI to make that picture, do you have no spine??…" (ytc_UgxGGUf9N…)
- "I watched Claude build an app in about 15 mins that I built as part of a 3 perso…" (rdc_n3m12dp)
- "There has never been a tide for not hiring juniors *because of AI*. Anyone who b…" (rdc_n9qbgsh)
Comment
I can appreciate Dr. Yampolskiy concerns, I really do. This being said, AI will only operate based on logic. Yep, it will likely be used in a "Defense" platform, and multiple others, however, the fear of Artificial Intelligence causing global collapse by 2027 is HIGHLY unlikely. This guys thought process, the way he conveys it leads me to believe that his work focus has created an irrational fear of the unknown. So basically, or psychologically, he has come to an impasse where he can see no other outcome but negative. This is common amongst personnel in cybersecurity, and really any industry where "Averting the Worst Case Scenario" is top priority. May be time to retire, but I truly appreciate the work he has done.
youtube · AI Governance · 2025-09-04T15:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwLck2PpIh1dV3ppFF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgytpBC4CfMcwPh2qaB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxrD4sp9g1NeOpDa8t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRpm9K7Fv4qSVoJzt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxWrfvc2j8ZQOmC6i14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwT-2ngswaVQIsrFy94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyEqljw_Ob7dsRQHVJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwX8HN1ZtNXkV4U9K54AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwbhYkmlP8VhkB8Uvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwXntOW3bQ7LkP-Hzt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}]
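The coding result shown in the table above can be recovered from a raw response like this by parsing the JSON array and indexing the records by comment ID. A minimal sketch in Python, assuming the response is a well-formed JSON array with the field names seen above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `index_by_id` helper and the two-record sample are illustrative, not part of the tool:

```python
import json

# Two records excerpted from the raw response above, for illustration.
RAW_RESPONSE = '''[
  {"id": "ytc_UgwLck2PpIh1dV3ppFF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgytpBC4CfMcwPh2qaB4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, for per-comment lookup."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_UgwLck2PpIh1dV3ppFF4AaABAg"]["emotion"])  # fear
```

Keying by `id` is what makes the "look up by comment ID" view possible: each coded comment's dimension values come from the single record in the raw array whose `id` matches.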