Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is there any limit to the progression of AI? Is AI/AGI/SI limited by our ability to create technology, both hardware and software? At what point does AI take over from us, so that we are no longer contributing to its progression, given that RAG has accelerated AI? AI has the ability to give itself feedback for self-perpetuation. Also, I realized a long time ago that us humans creating AI is very similar to creating in our image; we will learn an enormous amount more about ourselves than we did before AI. We learned a lot through history, but now we are at a turning point where we are making an evolutionary leap in understanding ourselves, as well as creating this next evolution of humans... That begs the question: what is driving this progression? What is the motivation beyond creating agents/robots to help us get through life? Where is that motivation coming from, given that so many humans share this drive to progress AI?
YouTube · AI Governance · 2025-09-06T18:5…
Coding Result
Responsibility: none
Reasoning: consequentialist
Policy: none
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy2xpXcrazVZBr8arx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxgTn-x0T2m3pp8Rxp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwbkoiVBsWFC2FnFQF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw9FlMgebAFf-wilMR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwukc29pVQcAH9lbUl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzhgiXGzI7tu7Kz_gx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwxqxjkTSVUTaqhQ614AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyoO6v2_nDRjcvdswF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwSlrOPNHBXVua2QfN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzIpzzncYpPVxqA14F4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
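The raw response above is a JSON array with one record per comment id, each carrying the four coded dimensions. A minimal Python sketch of how such a response could be parsed and validated; the `parse_coding_response` helper and the allowed-value sets are assumptions inferred only from the values visible in this response, not a definitive schema:

```python
import json

# Assumed value sets per dimension, inferred from the raw response above;
# the real codebook may allow additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"resignation", "mixed", "fear", "indifference", "approval", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array of records) and
    index the coded dimensions by comment id, rejecting any value
    outside the expected code sets."""
    records = {}
    for row in json.loads(raw):
        rid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{rid}: unexpected {dim}={row[dim]!r}")
        records[rid] = {dim: row[dim] for dim in ALLOWED}
    return records

# Usage on the first record of the response shown above:
raw = ('[{"id":"ytc_Ugy2xpXcrazVZBr8arx4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugy2xpXcrazVZBr8arx4AaABAg"]["emotion"])  # resignation
```

Indexing by comment id makes it easy to join the per-comment codes back onto the comment text when inspecting a single coded comment, as this page does.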