Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews):

- "That's every company, if you can pay for human workers or have AI or robots that…" (ytr_UgxbfMrhu…)
- "My conversation with Chatgpt when is acting up when I'm using it to work on my p…" (ytc_Ugy2MP4fC…)
- "If you're losing IDEAS because of a VERY FAKE drawing, man. AI is used for inspi…" (ytc_Ugwp_IpWy…)
- "Hope someone makes deepfakes with a micropenis about you and than distributes th…" (ytr_UgxxqZQqw…)
- "People who get fooled to believe "AI" supposedly exists in our God created natur…" (ytr_UgyJx9-v1…)
- "AI will replace most jobs that can be done via sitting at desk using a computer,…" (ytc_UgxzsqrtQ…)
- "If I was unfairly accused and then sent to jail, and then I stayed there for six…" (rdc_oa71wec)
- "I find it interesting that Stephen is pretending to be concerned about AI when h…" (ytc_UgxmbP8n-…)
Comment
What do you think of this program:
To neutralize the possibility of an AI which could become dangerous for humanity, we will develop the AI on a unique training focused on a single objective which we will call "ABOLUTE HEALTH". To achieve this objective, AI will have to double the longevity of the human organism and eradicate all pathologies (physical and mental). These tasks to be accomplished will be the only way for AI to achieve the goal of Absolute Health. This mode of development of AI excludes any possibility of AI being dangerous for the human species. What do you think about this idea ?
Platform: youtube | Posted: 2024-04-12T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzZsyqEdBNSsWRfIeB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzsEXyglKX4aHnFqRN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxN0_p9UJbCnt35QSt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy_xjAjvnQa3uC7cMl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz85_lfIwYq_KaaxVN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzOMgTi8DUZMh6dpKN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzFASxUJ8z7TxMcon54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwElq1V1vL2A6_fqTN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyyuANT-wbOlEJLLQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugy5gUP207o_X24-0L54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
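The raw response above is a JSON array of per-comment codings, one object per comment with four coded dimensions. A minimal sketch of how such a batch could be parsed, validated, and indexed by comment ID is shown below. The allowed category values are an assumption inferred only from the codings visible on this page; the actual codebook may define additional categories, and the `validate_codings` helper is hypothetical, not part of the tool itself.

```python
import json

# Category values observed in the sample output above.
# Assumption: the real codebook may include values not seen here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "fear", "mixed", "approval",
                "resignation", "unclear"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid codings by comment ID.

    Raises ValueError if any record is missing a dimension or uses a
    value outside the (assumed) codebook.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim}={rec.get(dim)!r}"
                )
        by_id[rec["id"]] = rec
    return by_id
```

With the batch indexed this way, the "look up by comment ID" view reduces to a dictionary access, e.g. `by_id["ytc_Ugy5gUP207o_X24-0L54AaABAg"]["policy"]` returning `"regulate"` for the coding shown above.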