Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At this point I'm sick about hearing about it, mainly because you have to use the AI to know where we're at. I'm not sure these CEOs have any idea how broken this shit compared to what they're actually saying it's capable of doing. Yes, I use it every day. LLMs currently cannot instruct themselves, agents are not feasible, and we're no where near the doom AGI type that people are speaking about.
youtube 2025-03-12T16:2… ♥ 84
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwkowvxn5FVzp-BAM94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"frustration"},
  {"id":"ytc_Ugz91JSy8sftEmz_U5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-t4Z6fWRrdbkQVjN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgytM6z8C_8K9eKJkCV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyMZ5PzShQk6IlJ-at4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzxnPkL23AvAQJYQZt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwkdPciGNCy7ngAoNt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhbNSC73F1ftU23GV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxkublFe5_VUWRgNXF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy51mvxBVgKFemp9Dh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
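The raw response is a JSON array with one record per coded comment. A minimal Python sketch of how such a batch might be parsed, validated, and indexed by comment id (the function name and the required-field check are assumptions, not part of the tool; the two ids are copied from the dump above, truncated to two records for brevity):

```python
import json

# Two records copied from the raw batch response above (illustrative subset).
raw = '''[
  {"id":"ytc_Ugwkowvxn5FVzp-BAM94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"frustration"},
  {"id":"ytc_UgxkublFe5_VUWRgNXF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batch coding response and index records by comment id.

    Raises ValueError if any record is missing a coding dimension.
    """
    by_id = {}
    for rec in json.loads(text):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record missing fields: {sorted(missing)}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw)
rec = codings["ytc_UgxkublFe5_VUWRgNXF4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # company outrage
```

Indexing by id is what lets the tool join a single comment (like the one shown above) back to its row in the coding-result table.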