Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
| Comment (truncated) | ID |
|---|---|
| Great video Dave, but I want to push back on a few core claims in an evidence mi… | ytc_Ugx1ftnlz… |
| To tell if an ai have a conscience i propose this little test: -Tell it that it … | ytc_Ugxv8fp9q… |
| AI art is not art. Art is human. Art has feeling. Machines don't have feelings.… | ytc_UgxFje0NJ… |
| I did an ethics essay which basically went into the ethics of climate change. Wh… | rdc_gtenja9 |
| Hey Preet - I dont think a AI safeword is sufficent to protect us sufficiently.… | ytc_UgzBPywU4… |
| A robot will never be able to eat a cheese burger. Just think about that.… | ytc_UgyT3Hs-_… |
| guy: "ok now give me back the gun" robot: "no, no, no, say ok, give me back the… | ytc_UgxoGSyTQ… |
| Humans Now: AI could be our companion,it could help us physically and emotionall… | ytc_UgwUloQ_R… |
Comment
I feel like the intro is meant to be some huge revelation that the teenage girl is more likely to commit a crime than the murderer or whatever, but I think it makes a lot of sense. The violent criminal got convicted already, the teen girl had no repercussions (at least by the description given, I think in the actual case she was arrested). The violent criminal probably realised that his violent crimes achieved nothing, while the teen had a fun scooter ride for a bit so was actively rewarded for stealing it.
Edit: I should state I'm not in favour of an algorithm determining someone's verdict. For the same reason all the other comments are against it - no accountability. But I am saying that the initial example is one that I would expect to actually be the case most of the time. Any AI trained on enough past scenarios will have a good chance of being right, as is the nature of AI, but it's still just probability and probability can do all kinds of wacky things.
youtube
2022-07-26T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzG42WDC-z6EpnwyYB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyN0MORpZaY1v49Lfl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw-51QLPjQ6yxdapfx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxwxbiKbar9ktWP3iZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwqf155AC7MjMLIQyt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyNT-lVYcDsR1Z66KR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzDSexj0m_MZQrtp894AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz5xL4nSIRF2WVX5pB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1i4_BYgtF5y6aGpN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyeKeAQU0xLkw3RKP54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
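The raw response is a JSON array of per-comment codings keyed by `id`. A minimal Python sketch of how such a batch could be parsed and indexed for lookup by comment ID — this is illustrative, not the project's actual pipeline code; the helper name `index_codings` and the drop-on-missing-dimension rule are assumptions:

```python
import json

# Two entries copied from the raw response above, truncated for brevity.
raw_response = """
[
  {"id":"ytc_UgzG42WDC-z6EpnwyYB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyN0MORpZaY1v49Lfl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
"""

# The four coding dimensions shown in the result table above.
EXPECTED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index each coding by comment ID,
    skipping entries that lack an ID or any expected dimension."""
    codings = {}
    for entry in json.loads(raw):
        comment_id = entry.get("id")
        if comment_id and EXPECTED_DIMENSIONS <= entry.keys():
            codings[comment_id] = {k: entry[k] for k in EXPECTED_DIMENSIONS}
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_UgzG42WDC-z6EpnwyYB4AaABAg"]["policy"])  # → regulate
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" lookup cheap: one parse of the batch, then constant-time retrieval per comment.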