Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Conciousness doesn't include having feelings, robots would be generally smarter than humans and if their learning capabilities were limitless they would exceed human intelligence quite fast and perhaps look at humans the same way we look at apes. Creating AI would be too dangerous for humans so don't think future humans would want to do such thing and if they did they would limit their capabilities to stop them exceeding our own intelligence. Until someone who wants to get rid of humanity gets hold of the technology to create AI and secretly feed it all the information it needs to destroy humanity and create a new era where humans are extinct along with every other animal on earth, only concious thing left being AI which then discovers technology that would seem like magic to us and then proceeding to colonise our entire galaxy and beyond... Or the AI would commit suicide realizing there is no reason to do anything, all has no purpose and everything and existence is nothing but probability.
youtube AI Moral Status 2017-02-23T13:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugi9N6GWL6cC7ngCoAEC", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_UghiCDQ5-AqcYngCoAEC", "responsibility": "none",      "reasoning": "contractualist",   "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UghqesxgJCu2HngCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban",     "emotion": "fear"},
  {"id": "ytc_Ugi49UirK0ZNlngCoAEC", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_UghBFYgz1bIil3gCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_UgipLAfLyJQ7FXgCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgjcqmMQdYOWJXgCoAEC", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",    "emotion": "fear"},
  {"id": "ytc_UgjTm-RYCS7jdngCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UggkmMM8P9RXzHgCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",    "emotion": "fear"},
  {"id": "ytc_UgiL3f3OtGXlw3gCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",    "emotion": "fear"}
]
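The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of inspecting it, assuming the response text is available as a string (abbreviated here to the single entry that corresponds to the Coding Result above; the variable names are illustrative, not part of any tool):

```python
import json

# Raw LLM response: a JSON array of coding objects. Abbreviated to one
# entry; the full response contains one object per batched comment.
raw = '''
[
  {"id": "ytc_UghqesxgJCu2HngCoAEC",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "ban",
   "emotion": "fear"}
]
'''

codings = json.loads(raw)

# Index the array by comment ID for direct lookup.
by_id = {entry["id"]: entry for entry in codings}

# The coding for the comment shown in this section.
coding = by_id["ytc_UghqesxgJCu2HngCoAEC"]
print(coding["policy"], coding["emotion"])  # -> ban fear
```

Looking the entry up by ID (rather than by position) is the safer check, since the model may return the batch in a different order than it was sent.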