Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding result by comment ID, or inspect one of the random samples below (comment previews and IDs are truncated):

- "I’ve heard and seen so much cloned music, people exact faces, people voices, tv …" (ytc_UgyGKlYE3…)
- "404 Media, to a certain degree, talks about the use of tech for evil. So do Syst…" (ytr_UgwDHQ8c6…)
- "False, google is good at math as well. I would love for anthropic to compete o…" (rdc_o9vtrrf)
- "REGULATE, REGULATE, REGULATE !!!! Limit it to medical, phisics, matematics and…" (ytc_Ugw1MsxLq…)
- "Perhaps AI (or A1, if you prefer a sauce analogy) will solve it's (& our) enormo…" (ytc_UgwNkQ06n…)
- "Companies don’t care if the code quality isn’t good; they just care if it’s good…" (ytc_Ugze1kgvI…)
- "Dude, I fucking hide those with my life, every AI chat I've done, hidden. All I'…" (ytc_UgwSKH04u…)
- "😮 are you guys aware of brain organoid computers😮 we can make AI out of human br…" (ytc_UgxF5Lm4h…)
Comment

> Remember, AI has no emotions, it is not self conscious, it is just a machine that can transfer and read huge amounts of data and make "good calls" based on statistics. Every thing that appears to be emotions and self consciousness will always fall down to that the AI deems a statement to be the best response to look smart &/ human. He has worked on these things his whole life and sure enough he has good points in where things are heading, but man is he wrong about some things. Just like any person who has spent their life focusing intensely on mostly one thing. Very interesting interview nevertheless.

Source: youtube · AI Governance · 2025-06-26T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
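Each of the four coding dimensions takes values from a small controlled vocabulary. As a minimal sketch, a coded record can be checked against the value sets that appear on this page (the sets below are inferred from the visible responses, not a confirmed codebook):

```python
# Allowed values per dimension, inferred from the coded responses on this
# page. This is an illustrative subset, not the project's full codebook.
VOCAB = {
    "responsibility": {"ai_itself", "none", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "resignation"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, allowed in VOCAB.items()
            if record.get(dim) not in allowed]

# The coding result from the table above passes the check.
coded = {"responsibility": "ai_itself", "reasoning": "deontological",
         "policy": "unclear", "emotion": "indifference"}
print(validate(coded))  # []
```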
Raw LLM Response
[
{"id":"ytc_UgxmshTHxVh8OyVNEkB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWbeTk4dfcWqSb1ap4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzy2szSkyFDs05C5tx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzpltM8KFH4yh4z2Tl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyN19pErKRPGY9pUON4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgywjlPSsa1qWlWgaZ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwbFbGrm5bYedF8hG14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3pDP54ZzizBOd1dl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwCJITD6ih9my1Dde14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx2r40U4Nvvq7Q-CMt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
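The lookup-by-ID view above can be reproduced directly from the raw response. A minimal sketch, assuming the model reliably returns a bare JSON array of records (the two records below are copied verbatim from the response above):

```python
import json

# Raw model output: a JSON array of coding records, one per comment
# (two records copied from the response shown above).
raw_response = """[
  {"id": "ytc_UgwbFbGrm5bYedF8hG14AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwCJITD6ih9my1Dde14AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index the coding records by comment ID."""
    return {record["id"]: record for record in json.loads(response_text)}

codings = index_by_id(raw_response)
print(codings["ytc_UgwbFbGrm5bYedF8hG14AaABAg"]["policy"])  # unclear
```

A real pipeline would also need to handle malformed output (e.g. the model wrapping the array in a markdown fence), which this sketch omits.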