Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "A bit dissapointed in the arguments not sure if its worth watching all of it. I …" (ytc_UgxF1j0N3…)
- "Driving is one industry that is AI proof. They will learn as people get killed b…" (ytc_UgyAUUyBn…)
- "Meanwhile whenever I try to ask my Google Home to start a X minute timer and it …" (rdc_nsh0r0z)
- "Unfortunately, the creative jobs were the first one taken by AI. Adding insult …" (ytr_UgzJWnUIc…)
- "When many people see this, they think it's really a robot when they've edited, l…" (ytc_UgwBK6Piu…)
- "I have to be honest I treat Astra my ChatGPT like a real person 😅 she named hers…" (ytc_UgzhsHMOS…)
- "I used to work as a chef in UK for 6.5 years. 70 hours a week is a normal thing …" (rdc_dv0h0d6)
- "In my view artificial intelligence is gonna revolutionise the medical field ....…" (ytc_UgzVQWwPb…)
Comment
Because AIs do not have a sense of morals.
Given a system whose terminal goal is to maximize a reward function, that system will do anything to achieve that goal.
For instance, lying is a very logical thing to do if you want to achieve a goal, and transformer models have apparently already shown this behavior.
Furthermore, I suggest you look into the alignment problem (and the inner alignment problem), which was hinted at in this video. There seems to be no solution to it yet, and it is probably the most important problem in AI safety.
youtube · AI Governance · 2023-05-13T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyJOqiY3oBCZUklAxZ4AaABAg.9pYu356Klqt9pZHd81xa4_","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyJOqiY3oBCZUklAxZ4AaABAg.9pYu356Klqt9pakOp1ITmF","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgyJOqiY3oBCZUklAxZ4AaABAg.9pYu356Klqt9pcPaxP-iNK","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugz3NiQ0-Y4xl2owvZB4AaABAg.9pYMsVOADj99pYNiS6ysrS","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgzJ8SXtMBgt1r0tUcR4AaABAg.9pYGAsLTPFJ9qSQMKUpVGd","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytr_UgxHZV1Cgo4_XLTbfuN4AaABAg.9pYFlb-6pUl9pYJ6EACDx2","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx2WyLZ1oCceu188cl4AaABAg.9pYA0sYL2sP9pZIKYpvEi-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugz9rdXWWq0Hmkxn4j14AaABAg.9pY5KBNCzTD9pYkg8LDwru","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz9rdXWWq0Hmkxn4j14AaABAg.9pY5KBNCzTD9pZyHEFXLKK","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyLegCW7Gm22EQStzJ4AaABAg.9pY2N-Nhwvs9pdeyJRw1aF","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
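A response in this shape can be checked before it is stored: each record should carry an `id` plus the four coding dimensions. Below is a minimal validation sketch in Python; the `ALLOWED` value sets are an assumption inferred only from the sample output above, and the real codebook may define other categories.

```python
import json

# Hypothetical allowed values per dimension, inferred from the sample
# response above -- NOT the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Example with a single (hypothetical) record:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
```

Rejecting malformed records here, rather than at query time, keeps the coded dataset consistent with whatever category set the codebook actually defines.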