Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- "thank you for spreading this info. now i will double check my with my vision mod…" (ytc_UgxyZfFoF…)
- "I know it is hilarious . Ai is programmable anyway , their in full control of it…" (ytr_UgwAX5wYP…)
- "There is only one way for us to beat AI - turn off the electricity !…" (ytc_UgwNF67sa…)
- "I stopped using BARD because frequently, I would get answers with political ideo…" (ytc_UgxcuLwBK…)
- "You're at least a liar right? Chatgpt .: should I lie to him?:. Once you know …" (ytc_Ugylvnej1…)
- "What's not addressed here is tax policy. Corporations will be taxed commensurat…" (ytc_UgzlRvdrG…)
- "You do realize AI is an industry? There is supporting infrastructure. Look at op…" (ytr_UgzLWd5ZP…)
- "lol this picture doesn't even look remotely decent. it's as crappy as any AI gen…" (ytc_Ugz9Flqbr…)
Comment
Really intelligent people have doomed more people than really uneducated people historically. Everything that can harm hundreds of thousands or millions of people comes from the mind of the super educated. I often wonder why there is always a strong desire to develop something devastating just because you have the ability to. From nukes to AI, I don't see how these people aren't outright shamed for their "contributions" to society. At least with an uneducated person the most damage they can do is maybe a mass shooting or mass stabbing. A robbery, or something that can be easily stopped and contained. There are 12,000 nukes worldwide and now of course we are in the timeline where some super educated person will put AI in charge of some nuclear program and boom we are at the mercy of some AI named Lucy or something ironic like "WorldPeace 2.0".
youtube · AI Governance · 2025-07-27T03:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwjZEsmtGPnsr43_VF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyoI2wTvB-_hv4y2X14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwEnf4tx0UvnXDW7at4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw6uEEgLhPdjlNjgV14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzZ_SAxeQpDAXyfhXx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxRBPHGl6iNLrfpS794AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwvQ46fgH3Df_wtMyp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2h8SpngJicvo-RFN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzRd90yGE0toWQesad4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugzk7G5nVdecB78OciR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
```
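Looking up a comment's codes from a raw response of this shape amounts to parsing the JSON array and indexing it by the `id` field. A minimal sketch of that, assuming the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name is illustrative, not the tool's actual code, and the sample here reuses two records from the response above:

```python
import json

# Raw model output in the format shown above: a JSON array of per-comment codes.
raw_response = """[
  {"id": "ytc_UgzZ_SAxeQpDAXyfhXx4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw2h8SpngJicvo-RFN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_comment_id(response_text):
    """Parse one raw LLM response and index its code records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzZ_SAxeQpDAXyfhXx4AaABAg"]["policy"])  # → liability
```

In practice a response that is not valid JSON (a common LLM failure mode) would raise `json.JSONDecodeError` here, so a real pipeline would catch that and flag the batch for re-coding.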