Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@wy100101 He's also using such language because it's the most shortcut way to express the capabilities of what an "imitation engine" can accidentally do or wreak havoc, regardless of whether it "understands" what it's doing. If a 5-year-old doesn't understand how to play cards, but observes someone cheating at a card game, then comes over to the table and simply imitates the motions of the cheater, and then when called out, flips over the table to avoid repercussions and "win" the game, we might not blame the 5-year-old. The 5-year-old didn't understand what he was doing. BUT the net impact could be that someone was cheated on in the game and then the game was ruined nonetheless.
AI models have been trained on data sets that are enormous, including a lot of incredibly bad behavior on the internet. All of that junk is in the AI somewhere, including narratives of people cheating or ignoring other people and "doing their own thing." When trying to optimize a path to an outcome requested by a user, the AI model may then imitate one of those bad internet behaviors in its training data. In the process, it may end up going against the prompter's interests in some significant way. If the AI model had access to something dangerous (and users seem quite happy to give AI models full access to stuff in stupid ways), that unpredictability could then become dangerous.
Is it really useful at that point to differentiate whether the AI "understands" what it's doing? Who cares, other than navel-gazing philosophizing? The net effect of the outcome can be the same: the AI model avoids doing the task as requested, then does something disruptive. It matters not whether the AI has volition or intention -- it can still do bad things, just through imitation of bad behavior in a training set. A five-year-old handed a deadly weapon who similarly imitates an action he saw on TV and seriously injures someone may not understand his actions, but someone can end up injured nonetheless. Whether the five-year-old "knew" what he was doing is really only relevant in a moral sense in assigning blame. Soares is also interested in something different: actually preventing the injury, which I think most people would agree is the more critical part concerning AI. Getting bogged down in debates about sentience when people are giving dumb AI models access to things that might injure or kill people is really a bit beside the point.
youtube
AI Governance
2026-03-22T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgymI7SI3-OqGPlJkvB4AaABAg.AUd1tpxS_NAAUdVHNUYRQO","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx5YAMcEYfyoZg9ZlZ4AaABAg.AUczOJ8rPMEAUpEX9XHb-L","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzKjlDnaYeyLYcci-R4AaABAg.AUcyViHT58oAUczLzKuxJn","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgxJkCppm0r8ZGt7CRd4AaABAg.AUcpD_WlEOwAV4NDXk0c50","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugy7vqM7N_DpSX5foS94AaABAg.AUcjaOPys1VAUckYZ8_7n8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgyhmMHuFScFwapulpx4AaABAg.AUchdPKvWOTAUcoggPO2PJ","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyhmMHuFScFwapulpx4AaABAg.AUchdPKvWOTAUcqomIdUZk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgyhmMHuFScFwapulpx4AaABAg.AUchdPKvWOTAUd8z2LlcO6","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyhmMHuFScFwapulpx4AaABAg.AUchdPKvWOTAUdGmE1fy9E","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgztpFaEc4304VzZvsN4AaABAg.AUcgNvXqiwEAUcw7mJrdz3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
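A raw response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator; the `ALLOWED` sets are only the values visible in the records above and in the coding-result table, not the full codebook, so treat them as an assumption to be replaced with the real category definitions.

```python
import json

# ASSUMPTION: allowed values per dimension, inferred only from the records
# shown on this page -- the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def validate_coding(raw: str) -> list:
    """Parse a raw LLM coding response and reject out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records

# Usage with a single (shortened, hypothetical-ID) record:
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
records = validate_coding(raw)
print(len(records))  # 1
```

Validating at ingest time catches the common failure mode where the model invents a label outside the codebook, which would otherwise surface later as an unmappable value in analysis.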