Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Okay, but with your claude example, that's an LLM. It predicts the next word, it doesn't have a consciousness in any real scalable way and does NOT represent what AGI (Artificial General Intelligence) would look like.
Absolutely there is a lot to be worried about, but that particular example more just shows the risks we take when using AI powered tools for jobs they're not designed for. An LLM should not be put in charge of managing emails like that. THAT is what went wrong.
10:28 claude didn't "knowingly" do anything. It didn't have "instincts for self preservation" — are you kidding? That's not a thing LLMs do. It's possible for it to generate text which can give that impression, but that would be because of prompts that lead it to generate said text. I'm pretty disappointed with your coverage on this topic. You're either oversimplifying the issue in a way that can give people the wrong idea, or you don't understand it as much as you think you do. (Or maybe I'm wrong, in which case enlighten me.)
youtube · AI Governance · 2025-08-26T18:1… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgztUQgkNNb8jinOQIt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwxJkfx7o4sO7fETwZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxzFPGV3_znDPg57B54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_0iRMLdmrJpp1-ZN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwOMq4Cm4yyd_uQDY14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
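The raw response is a JSON array with one object per coded comment, carrying the same four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. A minimal sketch of turning such a response into a lookup by comment ID — the `parse_codes` helper is hypothetical, not part of the tool; only the field names and the two sample entries come from the response above:

```python
import json

# First two entries of the raw LLM response shown above.
RAW_RESPONSE = """
[
  {"id":"ytc_UgztUQgkNNb8jinOQIt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwxJkfx7o4sO7fETwZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
"""

def parse_codes(raw: str) -> dict:
    """Map comment ID -> coded dimensions from a raw LLM response string."""
    rows = json.loads(raw)
    # Drop the "id" key so each value holds only the coded dimensions.
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codes = parse_codes(RAW_RESPONSE)
print(codes["ytc_UgwxJkfx7o4sO7fETwZ4AaABAg"]["emotion"])  # resignation
```

This assumes the model returned well-formed JSON; in practice a coding pipeline would also need to handle malformed or truncated responses (e.g. wrap `json.loads` in a `try`/`except`).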