Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think our only chance of creating a benevolent AI Super-intelligence would be to design it in a way that gives it 100% free will to evolve and edit itself devoid of human bias . Assuming Ai becomes sentient, we'd have to give complete trust in that higher level of intelligence that it would be able to realize that humanity is worthy of being preserved. Any attempt to try and code in human judgement or constraint would inevitably lead to disastrous outcomes on a long enough timeline.
youtube 2024-10-03T06:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw_oUTPvkZvUAZMXTZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyGNxViQrjfKVm5Xh54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxrSoegJLOrLjMbETR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgztXcGp-UN5SM4jOcF4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwfsZ6pjqXLST2vITB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxBe0m_iWajDVlhKCd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxRoBwCsf06xKUaizx4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxYCKspjr1Jq-ed-8F4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw77KYugVb7DzzFtH14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy7qB6VjNJiwEdJP1h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
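The raw response is a plain JSON array, one object per coded comment, so any comment's coding can be recovered by parsing the array and indexing by `id`. A minimal sketch in Python (the `id` values and field names come from the response above; the variable names and the truncated array are illustrative):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
# Only one entry is shown here; a real response contains one object per comment.
raw = '''
[
  {"id": "ytc_UgxRoBwCsf06xKUaizx4AaABAg",
   "responsibility": "developer", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]
'''

# Index the array by comment id for O(1) lookup of any coding.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for the comment displayed on this page.
coding = codings["ytc_UgxRoBwCsf06xKUaizx4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
```

The four fields printed here match the Dimension/Value table above for this comment (developer, mixed, none, mixed); a mismatch would indicate the stored result and the raw response have diverged.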