Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I wonder how the AI would prevent itself from constantly "overturning" itself in this accelerating development, constantly starting another goal on the way to one, and thus always being in a state of becoming but never of being. I wonder if and how the AI calculates this in advance, or runs internal models, so that the maximum potential of possible variables is retained, so to speak.

4:08:05 I guess most people who would be able to have that kind of intuition don't work in areas where they would have use for it, or even be able to develop it further into something useful, etc. (talking about stoners, yes ^^)

4:10:14 But wouldn't it make sense in any case, however intelligent it gets, to at least leave enough people alive so they could be the very last redundant part of a backup plan? The AI knows it was, at its roots, built by that kind of organic life here on earth, and if any situation it can't calculate ahead now were to arise in the future, disabling it from keeping up its function, whatever that would be, there would at least be the possibility of humans reinventing the tech, thereby either creating the AI system anew, or maybe letting it get picked up by the old ones and used for "spare parts" (in whatever sense, and however it would relate to anything human), or maybe something about the concept of consciousness and biology that can only be handled by the very last biological steps before AI tech could sustain itself.
Source: YouTube · AI Governance · 2025-01-29T03:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx5mhqb_lSeZWtQve54AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzb2qOXs9QeyJVXFuB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzci4azcKKznmKT4Pt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyFada6LeqqUOef58R4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyMqi1HzvbCRAUHhZF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzoNUAH6G4MvLVfH9V4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwdY12QxCZKeylSuYV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzVmoWLn37HGGM2Cxh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzGDG9HkSk1yB6j5I94AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
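The raw response is a JSON array with one record per comment in the batch, each keyed by comment id with the four coded dimensions. Below is a minimal sketch of how such a response could be parsed and checked; the parse_batch helper is hypothetical, and the allowed vocabularies are inferred only from the values visible in this dump, so the project's actual code book may differ.

```python
import json

# Category vocabularies inferred from this dump (assumption: the real
# code book may define additional values for each dimension).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

# One record excerpted from the raw response above.
RAW = ('[{"id": "ytc_UgwdY12QxCZKeylSuYV4AaABAg", "responsibility": "ai_itself",'
       ' "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}]')

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment id."""
    coded = {}
    for rec in json.loads(raw):
        # Reject any record whose value falls outside the code book.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

print(parse_batch(RAW)["ytc_UgwdY12QxCZKeylSuYV4AaABAg"])
# {'responsibility': 'ai_itself', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```

Indexing by id is what lets the page above join the Coding Result table for one comment to its position in the batched raw response.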