Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This just occurred to me: Will "AI" eventually do its own "science?" Presumably it will access all the various means of collecting data at its disposal, observation, and it will process that data to draw conclusions, but will it design and conduct experiments? Also presumably, it could "problem solve" through both brute force simulations of various possibilities and brute force trial and error in non-simulated actions, but will it ever observe something anomalous which doesn't pose a direct problem or present a direct opportunity, but still choose to investigate it? Surely if we programmed that kind of thing into it's algorithms, but would it come to that on its own? I've been surprised at how "good" current "AI" is at brute forcing "creativity," I really thought that creativity would remain a fundamentally human thing for a long time, but will the same thing happen for other quintessentially human qualities like curiosity for curiosity's sake, or scientific experimentation for expanding general knowledge?
youtube AI Moral Status 2025-10-31T08:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxqeZPWCijSy8vLmfV4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwRunnBJ6JZkIyL7Rl4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgyXvv2Mh9QHyvRqQIl4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_Ugwqt9QWbFbNyhP3k5Z4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgyQ6cX3vzGK0IYWCip4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzsZXVqHuryCnOFNR54AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgyeD4KB3mZTSgAfyTt4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxdrjBu_20OJFahPuV4AaABAg", "responsibility": "distributed","reasoning": "mixed",            "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugzgpt1tdS4toFzLxIZ4AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugz957vNq8JtwrGAZ3d4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"}
]
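The raw response is a JSON array with one coding object per comment id, so retrieving the coding for a single comment amounts to parsing the array and indexing it by `id`. A minimal sketch, assuming the response follows exactly the shape shown above (the two entries below are copied verbatim from it; the variable names are illustrative, not from the tool itself):

```python
import json

# Raw LLM batch response: a JSON array, one coding object per comment id
# (abbreviated here to two of the ten entries shown above).
raw = '''[
  {"id": "ytc_UgyQ6cX3vzGK0IYWCip4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwRunnBJ6JZkIyL7Rl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# Index the batch by comment id so any one comment's coding can be pulled out.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgyQ6cX3vzGK0IYWCip4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → none indifference
```

Note that the coded dimensions shown in the table correspond to a single `id` within the batch, which is why inspecting the full raw response is useful: it reveals which codings the model assigned to the other comments submitted in the same call.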