Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "That makes a lot of sense; I have thought about it in that terminology, thank yo…" — ytr_UgglEiGsM…
- "I have no coding experience and I vibe coded last 3-4 weeks...Created my very fi…" — ytc_UgzY09wOd…
- "This whole situation is what happens when companies are allowed to go hog wild w…" — ytc_Ugx1jbDl6…
- "Damn for real? Hollywood has writers? Jokes aside, everyone in this thread who…" — rdc_kzlkyuu
- "Or, ai has achieved sentience and is intentionally 'screwing up' to both fool us…" — ytc_Ugye035eX…
- "Geez. The way he talks about the consent bit in the end creeped me out, and in t…" — ytc_Ugz7-1OZQ…
- "AI will move so fast it will either save us or destroy us before climate change.…" — rdc_kvfcxpn
- "Due to the Open source AI’s looking through the internet the developers have to …" — ytc_Ugwx3AmpM…
Comment
I don't believe AI will become more intelligent than us. More skilled? Perhaps, it is already. But conversations hit the same limits as 2-4 years ago. These models train using positive reinforcement from user satisfaction. Half the times if you correct AI with a lie, it will agree with you, other times it will twist its wording until it runs out of options and it spirals into confusion. I don't think that's intelligence. However, give it specific criteria, give it a few hours/days/weeks and it might discover backdoors in security systems, grey areas in laws, exploits in emotions that humans would never consider. That's what I think is more worrying.
youtube · AI Governance · 2025-10-28T12:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxNFSmJlHSBFulaeAR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3PLuXZ3nmaY3KXc94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz5euc6-iqL2GDCPoJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzpNgQUfKZchgqsg6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzfIE_lZzZP1D7uyZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy9YjRxtKm-By2n-bd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyr7lqkfTx32aBGzNZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyypf6tuC_ypEGQsmJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzmuyt2LaUkiAT9bpR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwE0LZiLRHYcWJhuWJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
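The raw response above is a JSON array, one object per coded comment. A minimal sketch of how the "look up by comment ID" step could be implemented by parsing such a batch response and indexing it by `id`; the IDs in the snippet and the `index_codings` helper name are illustrative, not taken from the actual pipeline:

```python
import json

# A hypothetical slice of a raw LLM batch response in the format shown above.
raw_response = """[
  {"id": "ytc_UgxNFSmJlHSBFulaeAR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx3PLuXZ3nmaY3KXc94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the batch response and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugx3PLuXZ3nmaY3KXc94AaABAg"]
print(coding["reasoning"], coding["emotion"])  # prints: consequentialist mixed
```

Indexing once into a dict makes every subsequent ID lookup O(1), which matters when the same batch is inspected repeatedly from the UI.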