Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think another major limitation is Memory. Retaining the context of who it is interacting with, how past interactions have occurred, and how that alters responses and interpretation on an individual basis. Yes we could just feed back in all past interactions each time, but that isn't sustainable. I think this is important as it is the basis of creating relationships. Knowing each other on an individual level is a key part of human intelligence. But also our own past experiences impacting our opinions and interpretations is also key. Can an AI actually be an individual if it operates universally without nuance?

As technology stands the only way to do this would be to chain a foundation model with an evolving individual model for each person. Group dynamics would dictate that we need to actually chain everyone to create a real context. The scope of it quickly becomes unmanageable, and learning needs to be a constant process.

On a more mundane and short term basis the other thing we haven't solved is cost. Right now capital is paying for everything. If we want this to be sustainable, we also need it to be affordable and represent good ROI. Boring, but it is the critical missing link right now. AI needs to compete against humans on price, not just intelligence. At least it does outside of a lab.
Source: youtube · AI Responsibility · 2025-10-23T09:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwDmjbUR5hNmRfEEZ54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxw4Vr6ueexCvJ8zfd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy4p0RqKfKxBOoZ8Yd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxenGoahzKByvyhKzR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzq4gwaUvqFbVM_9wF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxOisz1N8Z2tyhdvht4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzGjffP2yiiskPPZeV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxC43kgpnTVAVhG5Ip4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzdxuegNK417eQzIcN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxeZPPhHB0zMIfnx3N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
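Since the raw response is a JSON array keyed by comment id, the coded dimensions for any single comment can be recovered by parsing it and indexing on `id`. A minimal sketch (assuming the response parses as valid JSON; only the first two records are inlined here for brevity, and the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response as captured above (truncated to two records for brevity)
raw = '''[
  {"id": "ytc_UgwDmjbUR5hNmRfEEZ54AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxw4Vr6ueexCvJ8zfd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]'''

records = json.loads(raw)

# Index by comment id so one comment's coding can be looked up directly
by_id = {r["id"]: r for r in records}

coded = by_id["ytc_Ugxw4Vr6ueexCvJ8zfd4AaABAg"]
print(coded["reasoning"])  # consequentialist
print(coded["emotion"])    # mixed
```

The second record matches the Coding Result table above (reasoning = consequentialist, emotion = mixed), which is how the table's values can be checked against the raw model output.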