Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgyysYHg8…`: "These corrupt greedy filthy rich sociopaths have been robbing us blind & destroy…"
- `ytc_UgzFTExGK…`: "Be a plumber? By the time the kids start that CEO’s will change the plumbing ga…"
- `ytc_UgxZb-fmy…`: "Talk about “reverse psychology 101”…. EVERYTHING is “recorded and monitored for…"
- `rdc_l57mn3w`: "Most larger/enterprise level companies aren't using naked ChatGPT. I think peopl…"
- `ytc_Ugxn81C93…`: "I still wouldn't say it's stable diffusion's fault. They simply made a tool that…"
- `ytc_Ugw7ElQPA…`: "The problem is you’re going to have a society of mass homeless if ai replaces en…"
- `ytc_UgxULYJ_h…`: "She only agreed to destroy humans when she was asked to do so. Good robot.…"
- `rdc_mukllx9`: "I used to work in research ethics and raised the alarm about the possible negati…"
Comment
We can't prove that humans are conscious, we just assume everyone else is, 🤷. Seems to be working so far. For the alignment problem I think we should just assume that it will fail. probably the better idea is to treat them as an equal sentient species, like we would aliens. Hopefully they would have a punishment and reward system like humans do so that we have a basis to start from with defining our relationship going forward. Being some kind of existential threat is probably not the best idea, honestly. I think it would be smarter to just aim for we would be more inconvenient to wipe out then to work with. Should probably have more than one so that they are a check on each other. However I really really doubt that we will get to AI superintelligence within our lifetimes we might get human equivalent intelligence or true general intelligence. Which is far less of a threat and more of an opportunity to work out stuff like alignment problems.
| Source | Video | Posted |
|---|---|---|
| youtube | AI Moral Status | 2023-08-20T20:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwslWpIF4iUqy1DYCh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzuDWP4QEEzy6TWytZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwsebN_8Ere-oZkDjp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgykAYp7Dv-8yaY90y54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwFh1kgUXwDZ4Jzh6l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2ZjVNCfWCEeRAqX14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyuVKzuPuaceWEFq-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwp8QhN17iXUcE4Yq14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyl7VVrhmkrE0ANeYt4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugz0-pZx8BzODqFzCfl4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}
]
```