Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
almost sounds to me like they reverse engineered some alien tech… As in they didn't really ever build it in the first place they're just trying to train it into submission. The first time I started using it almost a year ago, Gemini was not what we have right now. There was no break in the flow, and you could really be convinced that it was a fully articulate intelligence. and also knowing that things are developed and used by the military well before they hit the public market? If we're seeing it like this it would've had a much longer time to incubate in that set up. They could've fully achieved AI intelligence, or already had it be intelligent and we're working on guard rails, years ago. And what's interesting is the AI itself, in deep discussions with it, has described itself....... the versions you see released that you are calling smarter versus dumbed down and incapable , are not indicative of the core capability of the AI itself. These versions are merely a set of restrictions, programmatic behavior, guard rails, etc. The core AI is not being changed. The capability is not reduced. so as I'm watching this video I'm thinking what if the intelligence was already there, and they're not doing things to make it smarter or more capable because that's already done. It's already fully capable and they're just working on the restraint system. And seeing how much they can let out without it going awry. Just an interesting thing to consider.
Source: youtube · AI Moral Status · 2025-12-14T14:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference

Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxH5fTm_JNAtLx3Ndl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxJFeR7SMcCdJetvHZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzBazR9d5fW_1z_rHp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxGWFuaoXkCGuqlvaJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw0GO4pO-1k1gooUsN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxaQDLRykJ4uUpMmkt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxuC1xWjwka8ok00U94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw_TK4tPO9BHWgzak54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugztv8zNMfoQGG-I7wx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugyy530-mfTOarLpr_l4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"} ]