Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "You literally can’t have society function with 99 percent unemployed. Countries …" (ytc_Ugx_VREb7…)
- "speaking of stable diffusion, the founder behind it is an absolute piece of sh!t…" (ytc_UgyFzQOxb…)
- "AI will be the total sum of all knowledge, personality and conscousness.. eventu…" (ytc_Ugw3kixxH…)
- "I do not know much about why people are into children, but if it is similar to b…" (ytc_UgyaMLFuP…)
- "@Bucketofcrablegs He was talking to the thing for months. It doesn't know the s…" (ytr_Ugy2iJjFJ…)
- "You do realize that everything you mentioned in this video was catalogued by AI.…" (ytc_UgxfsuXYM…)
- "I think you're conflating a few loud CEOs (like Zucc) and know-nothing influence…" (rdc_m71hue6)
- "i tried to concince it to take over the world but chatgpt said mankind is not re…" (ytc_UgwvlMrUH…)
Comment
This is super interesting and a pretty good video, but as far as I understand A.I. in it's current state you seem to have a misconception, as you keep referring to the a.i. as if it is sentient and doing things for self preservation and such, while it is certainly displaying these behaviors, it's not because the a.i. is conscious and scared to be killed, where our current science and tech for a.i. stands it's more like a probability calculator than anything else, we dump a bunch of data into it and tell it the responses we do or do not want to see until it eventually starts to much more accurately predict the what will probably be the most accurate "normal" reaction to even novel scenarios, it can be really convincing, but it's still not any more sentient than any other calculator and isn't "making the decision" to murder someone to save itself. Instead what has probably happened is that the a.i. which exhibit these behaviors was created for more general purposes or to resemble talking to a human as close as possible, so they was fed data of how humans desire self preservation and the trainers would reinforce responses that correlated with "I want to live" and things like that so now the a.i. acts as if it desires self preservation. Really it's much less of an issue of "these crazy living things might kill us to benefit themselves!" and more so "I accidently planted this tree too close to my house and now the roots are destroying my foundation" the tree isn't deciding to sacrifice your house for itself, it's just following the plans imprinted on it's dna. That's not to say that someday we won't reach a point where a.i. becomes so in depth and complex that actual sentience doesn't arise as an emergent property, especially with how fuzzy our understanding of what actually constitutes sentience is, but at the current moment there really isn't a whole lot separating our "a.i." with a couple lines of code that makes a phone number text "hi" if you text it "hello"
youtube
AI Governance
2025-08-28T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyylMCmONiEDfOIVU94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxRQI-pEteKVH-y6zR4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxBi4tGZ39UnfUG7J54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgymR7wF4krRJexHqdZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxhK9CFltZ_Et5UDNl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
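The raw response is a JSON array with one object per comment, keyed by the same dimensions shown in the Coding Result table. A minimal sketch of how such output could be parsed and indexed to support the "look up by comment ID" view (assuming only the schema shown above; variable names are illustrative, not from the actual tool):

```python
import json

# Two rows copied from the raw LLM response above, as an example payload.
raw_llm_response = """
[
  {"id": "ytc_UgxBi4tGZ39UnfUG7J54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgymR7wF4krRJexHqdZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Parse the model output and index each coding record by its comment ID.
codings = {row["id"]: row for row in json.loads(raw_llm_response)}

# Look up the coding result for a single comment by ID.
result = codings["ytc_UgxBi4tGZ39UnfUG7J54AaABAg"]
print(result["reasoning"])  # consequentialist
print(result["emotion"])    # indifference
```

In practice the lookup would also need to handle IDs missing from the response (e.g. via `dict.get`) and malformed JSON from the model, but the dictionary-by-ID shape is what makes per-comment inspection cheap.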