Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
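Under the hood this lookup is just a scan over the coded records. A minimal sketch in Python of how it could work, assuming the coded output is stored as a JSON array shaped like the raw response at the bottom of this page; the file name `coded_comments.json` and the helper name are hypothetical:

```python
import json

def lookup_comment(coded_path: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if absent."""
    with open(coded_path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of per-comment records
    return next((r for r in records if r["id"] == comment_id), None)

# Hypothetical file name; the ID is one that appears in the sample below.
record = lookup_comment("coded_comments.json", "ytc_Ugy6ljr8q_fgHyi-jVt4AaABAg")
if record:
    print(record["responsibility"], record["policy"], record["emotion"])
```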
Random samples

- "if i was a developer in ai i would be deliberately making an evil robot, but lik…" (ytc_UgxQnmzWE…)
- "Thats if your the average moron who uses AI art. But if your an actual human, it…" (ytr_Ugwufb9RS…)
- "AI isn't slowing down — and neither can we. Should humanity ban superintelligenc…" (ytc_Ugyi_Brt1…)
- "We do not have to take this route people. If no one uses AI and we build a sust…" (ytc_Ugx27AHl_…)
- "Bro my friend cooked my character ai chats now it's telling jojo siwa to eat her…" (ytc_UgzfL_Tei…)
- "ai doesn't have emotions, so it won't have woke values. it runs on logic. there …" (ytc_Ugw3-EAf-…)
- "Pretty sure you're in the same gene pool as Tom Cruise at some point in your fam…" (ytc_UgxoEOXPj…)
- "Hello! I'm 15 and I would like to work as a DFIR in the future. Should I also le…" (ytc_Ugx67qShE…)
Comment
These are quite daring predictions for 2025/09. Anyway, I feel the dates do not matter, whether it will be 2027/30/35/45; more and more people realize two things:
1. These changes (the technological singularity) are coming in our lifetimes.
2. The evolution of these systems is not under cautious control, be it for money, effectiveness, power, and paradoxically for relative safety from others who may develop it.
Not obvious:
3. We may 'solve' alignment for AI 1.5, 1.7 and even 2.4, but gradually these systems will become more and more autonomous, powerful and skillful relative to us.
4. If the cutting-edge civilization departs from humans but humans manage to survive in some areas, how long will the exponential environmental changes caused by a technological, human-independent civilization allow humans to survive on this planet?
- Due to the tech singularity, and evolutionary revolutions happening in shorter and shorter intervals, the cumulative chance of extinction for Homo sapiens nears certainty this century.
- Our only chance is a collapse/breakdown/reset of this exponential change to buy ourselves more time. But do we want to risk it? Do we want to become like Saturn eating his own children? https://en.wikipedia.org/wiki/Saturn_Devouring_His_Son If we succeeded and killed all the AI, life on Earth would be less capable of avoiding crises like asteroids or the recurring volcanic activity that kills most of the life here anyway. Is that a success?
Platform: youtube · Topic: AI Governance · Date: 2025-09-08T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
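The table above suggests a simple record type for each coded comment. A sketch of what validation could look like, with value sets inferred only from the labels visible on this page (the real codebook may define more):

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets observed in this sample; the full codebook may allow others.
RESPONSIBILITY = {"none", "user", "developer", "company", "government", "ai_itself", "distributed"}
REASONING = {"consequentialist", "virtue", "mixed", "unclear"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"fear", "outrage", "approval", "resignation", "indifference", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Raise ValueError if any dimension holds an unknown label."""
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"unknown {name} label: {value!r}")
```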
Raw LLM Response
[
{"id":"ytc_Ugw21P3SKzqfvXKdNJN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzsK4SMP9Hfd8r5kQd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy7kPeIr1WkgEGBCrN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6ljr8q_fgHyi-jVt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzrtCuwbsFrvUAVx94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVx_leGNW8Q34dtXR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzbqJYZjA0lDCBSRK54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxsLj1r0TR8Le01blR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyfI-NxdqKTphT8crR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwbt5xsfxev7kPMdWx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
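Since the model returns one JSON array per batch, downstream code has to index the reply by comment ID and confirm that every requested comment actually came back. A minimal sketch under those assumptions (the helper names are hypothetical, and a real reply may need code-fence stripping before `json.loads`):

```python
import json

def parse_batch_response(raw: str) -> dict[str, dict]:
    """Index one raw batch response (a JSON array) by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

def missing_ids(coded: dict[str, dict], requested: list[str]) -> list[str]:
    """Return the IDs the model skipped, so the batch can be retried."""
    return [cid for cid in requested if cid not in coded]

# Toy example with shortened IDs.
raw_text = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]'
coded = parse_batch_response(raw_text)
print(missing_ids(coded, ["ytc_x", "ytc_y"]))  # -> ['ytc_y']
```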