Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You have to be very careful with learning algorithms; it's impossible to know what they're actually looking at. An example of this is x-ray machine calibration. If hospitals A and B are used as training data, but hospital A sees sicker patients while hospital B sees healthier patients, then the algorithm will learn that x-rays that look like they came from hospital A are more likely to indicate disease.
Source: youtube · AI Jobs · 2020-03-09T04:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear

Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgysDAyUWrIYwKzzmyl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxqCs6PUbAmtQAX1WR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwHQKtik2vdVq5pVAl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw8Cl47BPtal5YVjgh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwhOkBZQ3kvtfGVN-Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzLEdDgOsTO9RyUphJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgxiFcJRPmzyWVvxmY14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzaF7IWGfeu2LR2FSB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgycSM9hDY5-pDanYJ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwbJWRgZEl4Dn-n8UF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]