Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing is that if we DON'T stop the progress of AI, then we will create a new race of sapient beings on par with humans. What then? When AI jumps from sentience to sapience, we have to consider what rights they then have as programmable intellects. We can't justifiably reprogram a sapient AI if it has a "rational" idea that we disagree with, because that would be like drugging a human and re-socializing them, which is unethical. Then you get into the can of worms that is "rehabilitating" vs. "re-socialization". It's a complicated question, as we need to look at the future and the past alongside our current ideas. We have to decide how to react to something we as a species have never faced before, based on our assumptions of what the original act will create. It's a very difficult challenge to tackle, and we all make contingency plans for a problem. It might not be this specific problem, but we all plan for the future and prepare for the unforeseeable.
youtube 2015-02-28T07:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugg6_c_fnxJFiXgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjBrm-BO4E1Z3gCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Uggwq5VL_P9YvngCoAEC", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugi28m3CG46xzHgCoAEC", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UggmA4p100IU0HgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgiqEwaXkqSM-ngCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugi0PpcKcA8VCXgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgghsB3quoCVXHgCoAEC", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgjzbO8DgHLWlngCoAEC", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgiclBN6LTRIL3gCoAEC", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"}
]
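To inspect the exact model output for a single coded comment, the raw response can be parsed and indexed by comment id. The sketch below is a minimal example assuming only the JSON structure shown above; the id and code values are taken verbatim from the response (the entry shown is the one that produced the developer / deontological / liability / fear coding result).

```python
import json

# Raw LLM response: a JSON array of per-comment codes. Abbreviated here to
# the single entry matching the coded comment above; in practice this would
# be the full ten-entry array.
raw = '''[
  {"id": "ytc_UggmA4p100IU0HgCoAEC", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

# Index the codes by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coded dimensions for the comment of interest.
entry = codes["ytc_UggmA4p100IU0HgCoAEC"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {entry[dim]}")
```

This reproduces the Coding Result table for that comment directly from the raw response, which makes it easy to spot any mismatch between what the model returned and what was stored.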