Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Anyone who's worked with AI knows that the currently available AI isn't AI. It's just an advanced 'search' engine that has taken anything anyone has ever said online, and/or documentation it's been fed, broken it down into 'syllables', and then created connective threads so that it can look up a very specific thread from what you give it as a question and match it to the exact or similar response it has already cataloged from the millions of 'threads' it has stored. This is why, if you use a smaller model than the big boys, like a local model that doesn't have your particular question in its database, it 'hallucinates' and gives you some whacky answer. You are literally 'talking' to a database of all possible Q/A that it has ingested from previous online content. What you think current A.I. is, is actually not here yet, and it's called AGI. That's when it will actually give you answers not from the threads it's ingested but from its own understanding of what you are asking it and what it knows. We're still a long way off from that. It's a good simulation currently, but you have to realize you are just getting back what it has picked up or been given from online content, nothing more. It is clever, but not in the way people are thinking it is.
youtube AI Moral Status 2025-06-16T23:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzLf1MZpE_kPZew1kZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwzbwkt-tfGLyDbcGh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxG4OFIVWAwYbkrLcB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxa1Q1uTDMfMEctkex4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxv8pYo7Sj4HZwuBnV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzkE4udVoISlOcIRw14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJNWpHo_Migug7q094AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzcP-Sc6h0Tj4Yw9WV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxFTJuEXmQpJ1n8HWt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzwTrI_7kCXInghK094AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
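A raw response like the one above can be parsed and sanity-checked before the records are merged into the coding table. The sketch below assumes the field names shown in the response; the allowed value sets are inferred from this single batch only and are likely incomplete, so treat them as an illustrative starting point rather than the tool's actual codebook.

```python
import json

# Allowed values per dimension, inferred from this batch only
# (an assumption, not the full codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record.

    Raises ValueError if a record is missing its id or uses a value
    outside the inferred allowed set for a dimension.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for field, allowed in ALLOWED.items():
            value = rec.get(field)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {field}={value!r}")
    return records
```

Validating before ingestion catches the common failure mode of LLM coders: a syntactically valid JSON array containing an off-codebook label that would otherwise silently pollute the counts.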