Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You talk about AI not only taking people's place in dangerous jobs but also in jobs that take a high level of knowledge such as in medicine or law. The problem I see with this is similar to Socrates fear of written records allowing for a stagnation of memory amongst people. While it is true that recording information allows for us to unclutter our minds and seek more knowledge to record exponentially it is true that our ability to remember diminishes as well. Therefore, if we build AI that can do all the medical procedures so we don't have to, wouldn't we eventually find our human race with a dearth of the knowledge our creations will behold? And if that becomes our reality then knowledge in these fields will come to halt before eventually being forgotten all together and then what great human minds would be left to make advancements to create the next life-saving medicine or there after "train" the AI to apply the new found medicine or procedure? You talk about it being immoral to not create and apply AI's because it's just another frontier for us to conquer but truly I think in the longest run it would be most detrimental to future generations' accumulation of knowledge.
youtube 2013-12-04T06:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugj8AXUuhgfjUXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UghEfYIiBlCtyHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugg6pJ8sg8sIuXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugh4Izu1dFDCBngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UggStT0fkttiU3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgjHME_FVR-RjHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgiFPP6fP-f4CXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugjk-OLPfqT00HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugj_Nwoh-nEukngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggdtWoUYVl_S3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
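A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is an illustration, not the pipeline's actual code: the allowed-value sets are inferred only from the values visible in this response (a real codebook may define more categories), and the `parse_codings` helper is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the raw response above.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "indifference", "fear", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response into coding records, keeping only valid ones."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset appear to share the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugj8AXUuhgfjUXgCoAEC","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(parse_codings(raw))
```

Records with an unexpected ID prefix or an out-of-vocabulary label are silently dropped here; a production validator would more likely log them for review.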