Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Didn't think Microsoft's Tay chat bot would be out matched so soon. It didn't ev…" — rdc_ks2nwb7
- "Why can't AI and people just work together? They don't have to replaced if you a…" — ytc_UgzRm8dex…
- "You 100% don’t need AI to draw. You can draw with practically anything; chalk, p…" — ytc_Ugw7AdqDu…
- "https://chromewebstore.google.com/detail/hide-google-ai-overviews/neibhohkbmfjni…" — rdc_n8k5c28
- "Can you imagine how hard it would be to shut down AI,if it gets beyond (awarenes…" — ytc_UgzIdmWwT…
- "you just mad, ai "art" is beatiful and great not ugly, maybe stop following tren…" — ytr_UgxEusc1T…
- "Also a technology as dangerous as deepfake should be moderated under the law. Yo…" — ytc_UgzyyBMvU…
- "Add the HB1 visas to that list. I went to school with plenty of people who could…" — ytc_Ugy45sfBa…
Comment
I think what is missed here in the line of questioning is what will happen. If Superintelligence is confined and still a slave to mankind we will be fine. But if there is an AGI that becomes super intelligent at some point, that General Intelligence will simply wipe out mankind unless one steps away from civilization to a place where there isn't enough technology and even then the odds of getting wiped out are pretty high. Maybe an island not connected to the internet will survive, somewhere in the middle of no where where no one reads or learns anything.
Fact is there are NO jobs in 5 years, most of us will be starving including the CEO of this podcast. The only way out is spiritual improvement and recreating civilization from scratch independant of AI. Yes we lose all knowledge, but that's expected
Platform: youtube
Topic: AI Governance
Posted: 2026-02-18T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxkeNDMXtbPijAttV94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwTPoh-QPj3qPPkkyx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1UmXow4W-2336UO14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzbdvGDZ67AohYivOd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxH1Ba9ufiwtUJkPPF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyzVbxuWfYt_2-wmPd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzyQrLssUQmiXhJhWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGqulGF4_qvNvD1d54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxITGIhQwHBneNT2Cp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgypAAVXd0Tm7mwtt1d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
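The raw response is a JSON array of per-comment codings, one object per comment ID, with the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how a lookup-by-comment-ID could work against such a batch response (the `index_codings` function and variable names are illustrative assumptions, not the tool's actual API):

```python
import json

# Abridged sample of the batch output shown above (two of the ten entries).
raw_response = """
[
  {"id": "ytc_UgxkeNDMXtbPijAttV94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzbdvGDZ67AohYivOd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_codings(response_text: str) -> dict:
    """Parse a batch coding response and index each record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_codings(raw_response)
coding = codings["ytc_UgzbdvGDZ67AohYivOd4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

Indexing by `id` up front makes the "Look up by comment ID" path a constant-time dictionary access instead of a scan over the array for every query.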