Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I write instructional books about basic electronics hardware. I have several reasons for not fearing AI--yet! (1) I have fact checkers who have many years of real-life experience telling them what actually works, and what doesn't. Their experience is close to 100% reliability. Not 90% probability. The issue of trust is very important. (2) Real-life experience is portable; it can be transferred from one topic to another. (3) My books have a humorous element. An AI would have difficulty knowing that it's amusing to see a cartoon of a kid touching a 9V battery to his tongue, to learn about the nature of voltage. (4) The humor in my books is cumulative, so that the reader eventually acquires the sense of "knowing" me as a personality. This could be emulated, but I doubt it is easily portable to a new book on a slightly different subject. (5) Because my email address is included in the books, readers can contact me and get interaction that is different in tone from the responses given by an AI. People like that human contact. I suspect that all of these attributes can be simulated or replicated. But I think there is a granular aspect. A rough approximation would be relatively easy. But as the attribute increases in detail and subtlety, I think the challenge of it increases at an exponential rate. At what point does a reader not notice the difference? I have not discovered this yet. I'm betting I have three or four years before my publisher doesn't need me to write books anymore.
Source: youtube | AI Jobs | 2026-04-07T00:4…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | none
Reasoning      | virtue
Policy         | unclear
Emotion        | approval
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyAO7kwccURSD28ACN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzf7wT_etH3ZhWrUQJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugz1-J-dS1og55-277F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzU1s6LM_VGWF4zDWt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy0P5XiuOVr9S0nCo14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxRXBbF4ta3QMjWiwl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzFOT9_EnYFdQOk2Yx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy5AqRGx172wYhpn5B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy4OdExsVDNn1Ior_N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwj3LrnSiRGROEMQmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
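The coding result shown above for this comment is one row of the raw batch response, matched by comment id. A minimal sketch of that lookup, assuming the raw output is valid JSON with the field names shown (the specific id below is the row from the raw response whose values match this comment's coded dimensions):

```python
import json

# Raw batch output from the LLM coder: a JSON array of coded rows,
# one per comment, keyed by comment id. Only the matching row is
# reproduced here for brevity.
raw = '''[
  {"id": "ytc_UgzFOT9_EnYFdQOk2Yx4AaABAg",
   "responsibility": "none", "reasoning": "virtue",
   "policy": "unclear", "emotion": "approval"}
]'''

# Index rows by id so a single comment's coding can be looked up.
rows = {row["id"]: row for row in json.loads(raw)}

coded = rows["ytc_UgzFOT9_EnYFdQOk2Yx4AaABAg"]
print(coded["responsibility"])  # none
print(coded["reasoning"])       # virtue
print(coded["emotion"])         # approval
```

In a real pipeline the `raw` string would be the full model response; if the model emits malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is why inspecting the exact output per comment is useful.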