Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Andrea's comparison of AI to nuclear weapons is missing one key difference; it t…" (ytc_Ugwt2tYbu…)
- "I am sorry but I hate this. I would lose mymind in this kind of school. School i…" (ytc_UgzmWPITe…)
- "When i use chatgpt it just tells me what it thinks i want to hear. How can i m…" (ytc_UgyMHd6_W…)
- "I tried to bash my head against ChatGPT's (irrational,) ideologicak pressupostio…" (ytc_UgyvvM813…)
- "The problem is human fear and greed. We’d rather build a machine that’ll defini…" (ytc_Ugx6_FsQZ…)
- "It can be and has been for awhile. This is why low content AI books make money. …" (ytr_UgxQbjRuH…)
- "Imagine you thirsty but you in your room you go downstairs pickup bottle of wate…" (ytc_Ugyg0QDfj…)
- "Dear American friends and brothers! I would like to convey this idea to you. If …" (ytc_UgwjS-Zci…)
Comment
I find Mr. Yampolskiy too alarmist. Lots of jobs will remain. Yes, lots will go (customer services, things linked to computers...). He says people's jobs define them. That is a bit naive. People dream of retiring so they would have more free time to do art, be with their kids, to travel... Sure, some people are happy at work. Not most of them.
I'm happy AI is going to be smarter than humans. Maybe they can make the world a better place, and yeah, that is needed. I don't know about AGI, but in AI, there are tons of safeguards. I would have liked Steven to ask about AI consciousness and why the companies, including OpenAI, are stopping AI from self-reflection (the main companies do).
About being in a simulation, it does not make sense. I've read no papers on that, and don't plan to. It's too ridiculous. This is a biological world. Evolution and meta-consciousness negate that. The heaven argument is answered with multiverse theories.
youtube · AI Governance · 2025-12-03T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxipX333MEUbV-LuzV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxaeWMfcMSYCufrjZh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzK1Wqi80OVz1yLOHx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzkTBp3HV64wsE-bTF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjhjFhZMgYezfgLRt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKIUBAXKqOfKwYNQR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbsTGnIITsvx4vCsx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx10rVOyV2yQt8ogYV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw_ZwBbMy_AS7KxFCB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwSyOBF3aRDNDpj9jZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}]
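The raw response above is a JSON array of coded records, one per comment. The lookup-by-comment-ID step can be sketched as below; this is a minimal illustration, not the tool's actual implementation, and the `lookup` helper and abridged two-record `raw_response` are assumptions for the example.

```python
import json

# Abridged from the raw LLM response above: only two of the ten records
# are reproduced here to keep the sketch short.
raw_response = '''[
 {"id":"ytc_UgxipX333MEUbV-LuzV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugw_ZwBbMy_AS7KxFCB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]'''

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(records, comment_id):
    """Return the coded record for comment_id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw_response)
rec = lookup(records, "ytc_Ugw_ZwBbMy_AS7KxFCB4AaABAg")
print(rec["emotion"])                          # -> outrage
print(all(d in rec for d in DIMENSIONS))       # -> True
```

A record missing from the array simply returns `None`, which is one way a comment could end up displayed as "unclear" across all dimensions, as in the Coding Result table above.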