Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My stance on AI has always been as long as it isn't misused and isn't fed bad data that causes it to act in a bad way we shouldn't worry but if someone feeds an AI chatbot the something like how to kill a human or how to commit arson without getting caught and that chatbot manages to replicate and or control a robotic body which is something being worked on by the way then we are screwed
youtube 2024-05-22T20:2…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyEz5g41qywhOqFEhx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyxTq2e2DuGZ5JVI5h4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugxr8YGbdUMtZUloKDN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugza5WssM9cdw7rIBgp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzWGnUFtSuXHrb6OhZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzOKCkSvwztIl7bRRV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzHT226VDOpD6YgBG54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzxeT5nsLOjaXyOLph4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzJ8Wba6zpi3sSqvRt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxhS4fdwJlzGL_ZyuR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
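A raw response like the one above should be validated before its codes are stored, since the model can emit malformed JSON or out-of-vocabulary labels. The sketch below checks each row for an `id` and for legal values on the four dimensions. The allowed value sets are inferred from the labels visible in this record and are assumptions, not the project's authoritative codebook.

```python
import json

# Assumed code vocabulary, reconstructed from the values seen in this export.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"unclear", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "mixed", "approval", "indifference"},
}

def validate_response(raw: str) -> list[str]:
    """Return a list of problems found in a raw LLM coding response."""
    problems = []
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for i, row in enumerate(rows):
        if "id" not in row:
            problems.append(f"row {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append(f"row {i}: bad {dim!r} value {value!r}")
    return problems

sample = ('[{"id":"ytc_x","responsibility":"user","reasoning":"consequentialist",'
          '"policy":"liability","emotion":"fear"}]')
print(validate_response(sample))  # → []
```

An empty problem list means every row is safe to merge into the coding table; a non-empty list flags the batch for re-prompting or manual review.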