Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree, I think current methods will give rise to AGI. Give this a read: How do we think? It's either visual, sound, or language. If I ask you to think right now, you would either come up with some words, sounds, or images in your mind, right? The current AI models are also using the same 3 inputs for training right now. Yes, all they do is predict the next word/token, which makes us think that they do not really "understand" language. But isn't that what we do too? Don't we just predict the next word, generate sentences in our mind to "think"? If our being intelligent, conscious, thoughtful comes from us being able to understand language, and understanding of language comes from being able to come up with or generate sentences, then I think Neural Networks will indeed lead us to AGI eventually (probably in 2 years or so). It's just that the current method to develop AI models will likely need a lot more data, compute, energy and time to reach a human level of understanding and generalizability than a human brain does. So even if AGI would be as smart or slightly smarter than a human brain, it would not be nearly as efficient. But once a single AGI is achieved, it will likely quickly create multiple copies of itself and work together to come up with a much better way to develop AGI which will require much less of all the aforementioned things. Thereon, the advent of ASI will be pretty quick, obviously.
youtube · 2026-02-14T17:2…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | unclear                    |
| Reasoning      | unclear                    |
| Policy         | unclear                    |
| Emotion        | unclear                    |
| Coded at       | 2026-04-26T23:09:12.988011 |
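For reference, the value sets below are the ones that actually occur in the raw response shown in the next block. This is a sketch of a subset of the codebook inferred from that one response, not its authoritative definition; the name `OBSERVED_VALUES` is illustrative.

```python
# Values observed in the raw LLM response below. Assumed to be a subset
# of the actual codebook, which is not shown in this view.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "distributed", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "disapproval", "fear", "approval",
                "outrage", "resignation"},
}
```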
Raw LLM Response
[{"id":"ytc_Ugz1AId5aTrB0vU068x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgwilofhGKhwDu0L7Ft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"disapproval"},{"id":"ytc_UgyFYXqIPZgSqF1BRIt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugx-tXmr1Qbf99MFrdF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugy6ZM9msFzNFdQRhp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgzGNsSq1iKLhPOIyGV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgxWCzblxI-AOE55SLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},{"id":"ytc_UgxdU-TuCxJaaCrlPWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzRtXhqWxt8hXpcZyp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},{"id":"ytc_UgzGjyzcKfKocvAvCr54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"]}