Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is one interesting point to expand upon. If you train AI to mimic human intelligence, the final result may not be much smarter than humans, just more knowledgeable. Like a pixie fairy that follows you around and guides you along the best possible path. I think one area where AI is severely limited is having experience. Even if you gave AI sensory input, it would be just that: input. But then again, maybe that is us trying to limit AI with humanistic impediments like touch. Where AI could potentially touch anything hot or cold, humans are actually very limited in this area. And interestingly, this particular area is actually a great example of what machines are very useful to humans for. People don't just stick their hands in dry ice or molten lava. They use tools. And AI is just another variation of the tools we have at our disposal. Honestly, I think the endeavor to attempt to make AI sentient and conscious is really rather stupid. And in my opinion a waste of time.
youtube Cross-Cultural 2025-10-27T18:3…
Coding Result
Dimension       | Value
----------------|---------------------------
Responsibility  | none
Reasoning       | unclear
Policy          | unclear
Emotion         | indifference
Coded at        | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwCdZFlZ9RuD4HrbYN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx_B7ctZLLTaAMKhe14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwvKTIHlrF7FAJ4RhF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxWADSzf_wxjmiIPoZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzunSlmKQw-kalfdUJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzME8H-OTsy_8IurLZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyDS2RSCum51DrBBGB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzK5jRMUFbsWTcnD3J4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwAeJvtBr9AuO9wyzN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugwk2JQmttvSOhhXz1J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]
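The raw response is a JSON array of per-comment codes, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal Python sketch of how such a response could be parsed and indexed by comment id (an assumed helper, not the tool's actual parser; the two sample records are taken from the array above):

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above, as a small sample.
raw = '''[
  {"id": "ytc_UgwCdZFlZ9RuD4HrbYN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxWADSzf_wxjmiIPoZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]'''

# The four coding dimensions used by this schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(payload: str) -> dict:
    """Index the coded dimensions by comment id.

    Unknown or missing dimensions default to "unclear", matching the
    fallback value the schema itself uses.
    """
    records = json.loads(payload)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw)
print(codes["ytc_UgxWADSzf_wxjmiIPoZ4AaABAg"]["policy"])  # → ban

# Distribution over one dimension across all indexed comments.
emotion_counts = Counter(c["emotion"] for c in codes.values())
print(dict(emotion_counts))
```

Indexing by `id` makes it cheap to join these codes back to the original comment records, and counting over a dimension gives the kind of per-dimension distribution the coding table summarizes for a single comment.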