Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Fun Fact: Nearly all AI models have been slightly poisoned due to common artifac… (ytc_UgzTEgySx…)
- We had so many things about why this was a bad idea, literally so many shows and… (ytc_Ugz6HILXx…)
- When AI takes all of the jobs and no one has any money to buy the things compani… (ytc_UgxpLQk3a…)
- It’s not possible to have an economic system of Ai selling to Ai. The driver of… (ytc_UgzpEsOOj…)
- @oopsieoopsie8587 no need in my lifetime, bro. And Elon has said a lot of thing… (ytr_Ugw_xopK6…)
- Yeah right. I can't wait to have a robot come and fix my sewer pipes when they a… (ytc_UgwloMIPM…)
- “we’re not arguing against robot rights, we’re arguing robots aren’t things that… (ytc_UgyeswcyB…)
- AI...😂😅😂😅😂😂 you mean advanced computer programs. AI is a term used so Corporatio… (ytc_UgyRmd0-h…)
Comment
> Ok I'm nowhere near as expert as Prof Hinton but I know a fair bit about AI and I just don't see this.
> To me this makes a massive assumption about the computational theory of mind, which there's no evidence to believe.
> There is a risk posed by a lack of systems thinking, but that comes from humans not AI.
> Like we sack everyone for immature systems we don't understand tanking the economy. And they turn out to crap. There are actually signs of this.
> Or we move in a world monopolised by openAI.
> Conversely, bad actors do think systemically. How can AI be used to sow division and spread misinformation?
> These along with confidentiality and environment are the true risks of AI.
> It's just very sophisticated pattern recognition. It doesn't have sensory inputs, it doesn't think originally or display a will or morals. I don't see that happens based on current architecture either.
youtube · Cross-Cultural · 2025-10-17T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy61aQl5ybpasdqCjh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwDCzKq4dIUwFMLIt94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4GcV4lndSaMBRsuV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzIp7G9x-pJTaYYbw54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzBavc6HLhX4zRPSTN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy3d3QNDM5pnMjEM3l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyr7DkL5y6jUX3GXV94AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYwJFwZbVR9nFvM_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzhMg-8llznVkNnifd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwJM4ycYhoimaCNTQl4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"}
]
```
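The lookup-by-ID step above can be sketched as a small parsing helper. This is a minimal illustration, not the tool's actual code: it assumes the raw LLM response is a valid JSON array of coding objects as shown, and `index_codings` plus the two sample rows are hypothetical names and data chosen for the example.

```python
import json

# Raw LLM response: a JSON array of per-comment codings (two rows
# excerpted from the batch above, purely for illustration).
raw_response = """
[
  {"id": "ytc_Ugyr7DkL5y6jUX3GXV94AaABAg", "responsibility": "user",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx4GcV4lndSaMBRsuV4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugyr7DkL5y6jUX3GXV94AaABAg"]
print(coding["responsibility"], coding["emotion"])  # user resignation
```

In practice a real response may also contain malformed rows or trailing text, so a production version would validate each row's fields before indexing.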