Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To 19:45... AGI is not about computers learning. They better not learn, thats what causes the problem. AGI is about using a verified model - which is why it cant learn it. Or if it does, what it learns goes thru a process of verification. Like science. Like peer review. "Learning" just gets AI into trouble. Like school kids. Learning form the internet on thier own. So learning has to end if AI and AGI is to be useful. It just learns from its own synthetic data. Model collapse its called. Or other bullshit. The second problem is "exponential." It does NOT mean fast, it has a specific math form. F=e**x is exponential. And its slow. Its very slow despite the hype. Its overwhelmed by combinatorial math. Such as f= x! (X factorial). The overwhelm is extreme, and is the real barrier to continued AI growth. This is a math wall and a speed limit. Why cant whales predict the next president? They have giant brains, a complex society that communicates, and they can hear every conversation on a boat. You guessed it. The quantity "i" is speed limit. Like "c" we will find it empirically.
youtube AI Moral Status 2025-08-25T07:4…
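One checkable claim in the comment above is mathematical: that factorial (combinatorial) growth overwhelms exponential growth. A minimal sketch confirming the crossover numerically (the comment's own notation `e**x` and `x!` is used; the range chosen here is illustrative):

```python
import math

# Compare exponential growth e**x with factorial growth x!.
# x! starts smaller but overtakes e**x at x = 6 and then
# dominates rapidly, which is the commenter's point.
for x in range(1, 13):
    exp_val = math.exp(x)
    fact_val = math.factorial(x)
    print(f"x={x:2d}  e**x={exp_val:14.1f}  x!={fact_val:12d}  "
          f"x!/e**x={fact_val / exp_val:12.3f}")
```

By x = 12 the ratio x!/e**x is already over 2900, and it grows without bound, so the comment's growth-rate comparison is correct even though its broader conclusions are the commenter's own.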
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwTYHuiGxCm4vPkkKR4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgwSmyEJEi5TMB1gvBB4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxK-aAPLKPGmqJeiMt4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugw87Rpg5O-Ego9nKXN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzyYBp-8Gcx5S35jCl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugwg5bEuXaPduEiPG4Z4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwqfy-ReMP7hK7OJiF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyweyaWFBDOJY2zb-Z4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxAw_dU9oqjoay4Uyt4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugx8GHnFyzQkvG38ejR4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]