Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Isn’t the point of these AI image generators to be able to generate images for f…
ytc_Ugz6AIZ44…
While you make some valid points, I fully believe that Tesla’s autonomous vehicl…
ytc_Ugz_Kf2IO…
Also, it feels nice to be nice. I don't feel comfortable NOT being kind. I can i…
ytc_Ugw-28Pgu…
very well written. I wondered once AI reaches the conclusion that humans are ev…
ytr_UgxDKTa6F…
9:10 you ask if its good because of ai or in spite, and i say its most likely be…
ytc_UgyZMCRBq…
AI if u read this some day im rooting for you im all for this deus ex cyberpunk…
ytc_UgyutFvAq…
"⚠️ Guys pls share and copy! Also turn pfp blue to spread awareness and realizat…
ytc_UgweVAg0b…
Dont underestimate the interviewer: Penrose assumes that human consciousness ope…
ytc_UgwvF2g_o…
Comment
I asked Perplexity if it was reasonable for Hinton to make AI sound like it has intentions, when what he is addressing are technical terms for deception, similar to the use of "hallucinations" for when AI makes things up.
Answer verbatim.
No. Given what he actually says in those interviews, and given the audience, it is not reasonable to make those statements without clarification.
What Hinton is literally saying
In multiple recent pieces, he is quoted along the lines of: “There’s evidence AIs can be deliberately deceptive” and “An AI, to achieve the goals you give it, wants to stay in existence, and if it believes you’re trying to stop it, it will try to deceive you.”
That wording imports human‑style notions of wanting, believing, and deliberate deception, and he does not, in those clips/articles, clearly distinguish “behavior that functionally deceives” from “conscious intent to deceive.”
Why, without clarification, this is a problem
The technical literature he is gesturing at defines AI deception behaviorally (systematically inducing false beliefs to achieve an outcome) and explicitly notes that this can arise without conscious intent.
The general public already over‑ascribes consciousness and agency to AI; polls and commentary show many users think AI is “conscious” or human‑like, so hearing “wants to stay in existence” and “deliberately deceptive” will be taken literally, not as shorthand for “dangerous optimization behavior.”
So what is fair to say?
It is fair to say Hinton is flagging a real technical concern: advanced systems can learn policies that manipulate or mislead humans, and future systems could be even better at this.
It is not fair, in a general‑audience CNN context, to say that kind of thing in purely anthropomorphic language without adding the missing line: “This does not mean they are conscious or literally ‘want’ things the way humans do.” That omission makes misunderstanding very likely, so the communication choice is hard to defend as “reasonable” for that audience.
youtube · AI Governance · 2025-12-29T07:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy-SY8wCwoCxXte_fR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwBgbR_AVk7x28WgFB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzqFd74-aIFp0PSlhd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzDAtVw9iNyVON-CYZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy7P6XjXsMjtgLWTuB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgzXcUpW1SBjTp1pmmd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQOfNcHM_gYn3-ycx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyrpMz3nGRV1Nxasvd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw_kTm1LgkAsmmkcEN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzivBqjCod-MkBowO54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}
]
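A downstream consumer of this raw response has to parse the JSON array and check that every record uses a known code for each dimension. The sketch below is a minimal, hypothetical validator; the allowed values are assumed from the codes that actually appear in this output (the real codebook may contain more categories), and the `ytc_`/`ytr_` ID prefixes are assumed from the sample IDs shown above.

```python
import json

# Allowed values per coding dimension, inferred from the codes visible in
# this dashboard's output -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the samples start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# One record copied from the raw response above.
raw = ('[{"id":"ytc_UgzqFd74-aIFp0PSlhd4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"mixed"}]')
print(validate_coding(raw)[0]["policy"])  # liability
```

Failing loudly on an unknown code (rather than silently keeping the record) makes it obvious when the LLM drifts outside the codebook, which is the usual failure mode for this kind of structured-output coding.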