Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "This guy may be right on his prediction but the timeline is full of shit, maybe …" — ytc_UgxtF8G42…
- "AI is getting so good at manifesting light and dark. AI can be used to solve pro…" — ytc_UgygifiEN…
- "Its called a "honey trap" he's essentially using satire to and sarcasm to get al…" — ytc_UgwW50-Fb…
- "Runaway AI will increase, exponentially, its own energy demand (do you know how …" — ytc_UgwAA2ya5…
- "Work is NOT an integral part of being a human being! The VAST majority of peopl…" — ytc_UgxL3cj6J…
- "Now those autonomous aircraft just need to look like mechanical octopus and be c…" — ytc_Ugy_vr3eN…
- "The problem is that the people who think Gen AI is "just a new art medium" only …" — ytc_UgwaRat-8…
- "Yes, ai is faster. Yes, ai takes less effort. No, you will not have to spend mon…" — ytc_UgwYXd1U0…
Comment
Its not hard to imagine the more sinister possibilities of AI imitating or overriding human inputs/control.. causing ballistic missile launches, cyber warfare, sabotage of infrastructure etc. The problem is, where do you even begin with regulating it? How would that be possible? Google (if im not mistaken) already announced it had created AI that had become self-aware. The creators didnt even know what that truly meant.. so they asked their boss, and he had no clue what to do either🤷♂️. Thats the scary part
youtube · AI Governance · 2023-04-18T02:4… · ♥ 26
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzXkHpPn0K0cIjDJCd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxa72Cmd8J-v-jkHid4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2mTsSHDqbq8L2Zkt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJUhKtM5vJtlQY6wp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzD8bW9tXJCXeMyXjt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhKWmue9T4R75G7aN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyKml59ZRagcQC3Htp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw8PUT3_KNVkaZEOqV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxXuFidDu2oSyw58qV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzdxxsn3fNcR10aE2h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
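The raw response is a JSON array with one object per comment, coded on the same four dimensions shown in the table above. A minimal sketch of how such output could be parsed and validated — note the allowed value sets below are inferred from the values visible in this dump, not taken from an official codebook:

```python
import json

# Assumed value sets per dimension, reconstructed from this dump only.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting unknown values."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        coding = {dim: row[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim} value {value!r}")
        out[row["id"]] = coding
    return out

raw = ('[{"id":"ytc_UgyhKWmue9T4R75G7aN4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_UgyhKWmue9T4R75G7aN4AaABAg"]["policy"])  # regulate
```

Validating against a closed value set catches the most common LLM coding failure mode: the model inventing a label outside the schema.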