Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples — click to inspect
- rdc_jif4l40: "I’ve been building websites as a novice all my life. ChatGPT is helping me learn…"
- ytc_UgyhkTUi2…: "this is why AI will never be smarter than its user, the AI that used for program…"
- ytc_UgyYnGt-l…: "The only way humanity can survive an AI apocalypse IMO is to decentralize everyt…"
- ytr_UgwCZyV0R…: "@WeylandLabs You'll have a time of great 'reward' in the form of what will appea…"
- ytc_Ugw2ISZux…: "Welfare is still a thing and cause way more damage then AI ever will. Also thos…"
- ytc_Ugw_54zxr…: "Most xitter artists dont realize theyre losing the AI art war its only a matter …"
- ytc_Ugw-URfr-…: "That's right. Copilot is like an aggressive 8 year old for coding. For small tas…"
- ytc_Ughlh2BiQ…: "Damn you for making me think critically about this!.... it passed. This reminds …"
Comment
“If AI does everything, a single generation will be enough for the entirety of knowledge to be forgotten. What happens then?”
This condition is not hypothetical; it is already partially real. Modern civilization stores vast amounts of knowledge but has lost the capacity to independently reconstruct much of it. Extreme specialization, dependence on fragile infrastructures, and the shift from embodied skill to stored information mean that knowledge increasingly exists without the human ability to reproduce it from first principles. AI would not create this vulnerability but intensify it, accelerating cognitive externalization and reducing redundancy of understanding. A civilization can retain complete archives of its knowledge while becoming structurally incapable of re-deriving or re-implementing it without the systems that currently sustain it.
youtube · AI Governance · 2026-02-06T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwrT7wd8DY-YPdowqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMN2KTu5AuNXJX6y54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxwFDUWXqI0pemFoSN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyvD_Z22Lv66Jw8R-R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzS63lDqB6Ua3ddoIJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzB9vIB_bB0FsHTHnR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxguLsBeGY52QUIvHt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzSuLT2ACgyfpyvM0d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwEz5nrlepe-UQgcaZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOcRVHE04VMO7p8F94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}]
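The "look up by comment ID" step above can be sketched in a few lines: parse the raw model output (a JSON array of per-comment codings like the one shown) and index it by the `id` field. This is a minimal illustration, not the tool's actual implementation; the function name `index_codings` is hypothetical, and the two sample records are abbreviated copies of entries from the response above.

```python
import json

# Abbreviated raw LLM response, copied from two of the records above.
RAW_RESPONSE = """
[{"id": "ytc_UgwrT7wd8DY-YPdowqJ4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
 {"id": "ytc_UgxguLsBeGY52QUIvHt4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse raw model output and index each coding by its comment ID.

    Hypothetical helper for illustration only.
    """
    index = {}
    for coding in json.loads(raw):
        # Fall back to "unclear" for any dimension the model omitted,
        # matching the placeholder value used in the table above.
        index[coding["id"]] = {d: coding.get(d, "unclear") for d in DIMENSIONS}
    return index

lookup = index_codings(RAW_RESPONSE)
print(lookup["ytc_UgxguLsBeGY52QUIvHt4AaABAg"]["policy"])  # → regulate
```

If the model returns malformed JSON (as with the stray `)` in the raw response above), `json.loads` raises `json.JSONDecodeError`, which is why inspecting the exact raw output is useful when a coding fails to load.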