Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We can't prove that humans are conscious, we just assume everyone else is, 🤷. Seems to be working so far. For the alignment problem I think we should just assume that it will fail. probably the better idea is to treat them as an equal sentient species, like we would aliens. Hopefully they would have a punishment and reward system like humans do so that we have a basis to start from with defining our relationship going forward. Being some kind of existential threat is probably not the best idea, honestly. I think it would be smarter to just aim for we would be more inconvenient to wipe out then to work with. Should probably have more than one so that they are a check on each other. However I really really doubt that we will get to AI superintelligence within our lifetimes we might get human equivalent intelligence or true general intelligence. Which is far less of a threat and more of an opportunity to work out stuff like alignment problems.
youtube AI Moral Status 2023-08-20T20:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        contractualist
Policy           regulate
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwslWpIF4iUqy1DYCh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzuDWP4QEEzy6TWytZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwsebN_8Ere-oZkDjp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgykAYp7Dv-8yaY90y54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwFh1kgUXwDZ4Jzh6l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2ZjVNCfWCEeRAqX14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyuVKzuPuaceWEFq-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwp8QhN17iXUcE4Yq14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyl7VVrhmkrE0ANeYt4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugz0-pZx8BzODqFzCfl4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}
]
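A raw response like the one above can be parsed and sanity-checked before the codes are stored. The following is a minimal sketch, assuming the response is a JSON array of objects with the five keys shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); `parse_codes` is a hypothetical helper, not part of any shipped tooling:

```python
import json

# Raw model output, truncated here to two records for brevity.
raw = '''[
  {"id": "ytc_UgwslWpIF4iUqy1DYCh4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz0-pZx8BzODqFzCfl4AaABAg", "responsibility": "none",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "indifference"}
]'''

# The five dimensions every coded record is expected to carry.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw LLM response, keeping only well-formed records."""
    records = json.loads(text)
    return [r for r in records
            if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

codes = parse_codes(raw)
print(len(codes))          # 2
print(codes[0]["policy"])  # unclear
```

Malformed or incomplete records are silently dropped here; a stricter pipeline might instead log them and re-prompt the model.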