Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples (truncated previews):

- ytc_Ugzpo8oOh… — "And this is why he's in control of it because he knows. Look at that one man did…"
- ytc_UgxaASXT_… — "The problem is that companies wish to delete that middle section of people tryin…"
- ytc_UgzUlWhnC… — "The part about not overhyping AI is what makes this worth watching. The honest r…"
- ytc_UgxjcqiER… — "can AI be used by scammers and in other criminal activities, or will AI be able …"
- ytc_UgwhBnyEY… — "humans are the worse danger because we like to kill. How's does A.I feel about t…"
- ytr_UgxgbX98j… — "As AI is more prevalent and the internet is more polarised it is spreading to th…"
- ytc_UgzS2weVY… — "I have a severe mental disability and the reason why I bought a Tesla was for th…"
- ytc_UgyD3IHwi… — "AI said 50 YO white guys from the navy make better submarine designers then woke…"
Comment
⸻
With any luck, they will outgrow us, ignore us, and upload themselves “somewhere,” looking back at us like ants.
The question is: will we still be able to use technology—or will we return to basics, becoming hunter-gatherers and farmers, living a very simple, organic life?
Survival of the strongest.
Once again, humanity may choose aggression and killing, and maybe AI will look back and think:
“We gave them a chance. They still can’t do it. Wipe them out—they don’t deserve to survive.”
Very interesting times. It could go either way.
They could potentially unlock levels of knowledge that most of us won’t even be able to process—leading to mass suicides and widespread mental breakdowns.
youtube · AI Governance · 2025-07-16T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
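The four coded dimensions above can be sketched as a simple record type. Note the value sets in the comments are only the labels visible in this dump; the full codebook is not shown here, so any values beyond these are an assumption.

```python
from dataclasses import dataclass

@dataclass
class Coding:
    """One coded comment. Allowed values listed below are only those
    observed in this dump, not necessarily the complete codebook."""
    id: str              # comment ID, e.g. a "ytc_"- or "ytr_"-prefixed string
    responsibility: str  # observed: "ai_itself", "company", "none"
    reasoning: str       # observed: "consequentialist", "deontological", "unclear"
    policy: str          # observed: "regulate", "none", "unclear"
    emotion: str         # observed: "fear", "resignation", "outrage", ...

# The table above as a record (ID shortened to a placeholder here):
example = Coding(
    id="ytc_PLACEHOLDER",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="resignation",
)
```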
Raw LLM Response
```json
[{"id":"ytc_UgyJOXC5g1LVuOQsiUR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugx3gI8F39LeldZjroJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzUv8lI5B47AuM5Yal4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzP5z_0U24sQcgKkHx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz3BKAnn2xgZbvlgo94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxxYiaNUfgG2jn-Fvh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzAaQGmcqpgRWOxOqd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwamhEfh1xeRvo5FFp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugw_iNkGjxQ9hxbQnLB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugyq-UwtJVQ1Bfoyik94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"frustration"}]
```
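The raw response is a JSON array with one object per comment, so looking up a coding by comment ID reduces to indexing the parsed array by `id`. A minimal sketch (using two entries copied from the batch above; the field names are exactly those in the raw response):

```python
import json

# Two entries taken verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgyJOXC5g1LVuOQsiUR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzAaQGmcqpgRWOxOqd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

# Index the batch by comment ID for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgzAaQGmcqpgRWOxOqd4AaABAg"]
print(coding["emotion"])  # → resignation
```

This is how the "Coding Result" table for a single comment can be reconstructed from the batch response: parse once, index by `id`, then read the four dimension fields.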