Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its comment ID.
Random samples
- "Well, the devs that got laid off should tell the companies to foff. At the end o…" (ytc_Ugw0CCTFL…)
- "If all cars were self-driving, it would save countless lives, and someday that w…" (ytc_Ugxa6LHN3…)
- "if you use ai your 'art' is no longer art no matter how little or when. i includ…" (ytc_UgzrXDQpx…)
- "Yea, cause literally how can you think that typing a prompt into a computer is t…" (ytc_UgwZ1xZOK…)
- "Wow. I've seen alot of scary material on AI, but this one takes the cake. This i…" (ytc_Ugw5FLWfF…)
- "In Quebec, ADISQ can no longer give out awards because it is not possible to kno…" (translated from French) (ytc_UgyfECpHT…)
- "I used all these questions in ChatGPT and it gave me different answers. It goes …" (ytc_Ugwaht32P…)
- "All the elite think A.I. is a gold mine that will let them manipulate and replac…" (ytc_UgxSdQ_0X…)
Comment
Correct. Tim needs to do a bit of research into AI Safety. He has a very pop culture perception on the dangers that AI pose. The biggest issue by far, outside of day 1 bugs that lead to runaway disasters before you can patch it, is the AI arriving at conclusions or interpretations of its instructions that lead to it behaving in a way that the author did not intend.
For example: a paperclip AI may end up converting the whole world into paperclips if its only instruction was to maximise paperclip production.
We have to spell out all the nuanced implications of our commands because AI's lack many of these (though with the more advanced ones they are obtaining these) intuitions.
Source: youtube · Posted: 2024-12-15T00:1… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgzFqF5DcAFTL70PiiV4AaABAg.AC2BNIcCS5mAC2VUufixE2","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw7m0x7qkUmjMLhGdJ4AaABAg.AC2ABrv9ik8AC2Ceh_Xmei","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwWhxi1glqTXjY71WN4AaABAg.AC28mV5wO5qAC2LSIKAcGF","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugwin3lsToBFC08i3t54AaABAg.AC28kVCz7qaALIu_ILT8S6","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugy02qhud0dSIz1eVS54AaABAg.AC28P7p4yVwAC28sQ-1SPV","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxGDQMKgCn-3CGNpG54AaABAg.AC27zfiit4yAC2GTM3qoJh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxGDQMKgCn-3CGNpG54AaABAg.AC27zfiit4yAC2HUF1paco","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugw3TOp2k2S_0pp2Hbp4AaABAg.AC27oPtwqMWAC28_Ey_3e0","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxDe_hvzclCFLJUKk94AaABAg.AC27nFb3TPOAC2CLsTyQRO","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzQm-gW4cZ8Ua1KYzV4AaABAg.AC27m21B8lfAC2Xs_iOTUy","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
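The lookup-by-ID flow described above can be sketched in a few lines: parse the raw response array, index the records by their `id`, and fetch one record's coded dimensions. The two sample records are copied verbatim from the response array shown above; the `lookup` helper name is illustrative, not part of the tool.

```python
import json

# Two records copied from the raw LLM response above; in practice the
# full JSON array would be parsed the same way.
raw_response = """[
  {"id": "ytr_UgzFqF5DcAFTL70PiiV4AaABAg.AC2BNIcCS5mAC2VUufixE2",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzQm-gW4cZ8Ua1KYzV4AaABAg.AC27m21B8lfAC2Xs_iOTUy",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]"""

# Index the coded dimensions by comment ID for constant-time lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if uncoded."""
    return by_id.get(comment_id)

coded = lookup("ytr_UgzQm-gW4cZ8Ua1KYzV4AaABAg.AC27m21B8lfAC2Xs_iOTUy")
print(coded["reasoning"])  # consequentialist
```

Indexing once up front is what makes "look up by comment ID" cheap even when a batch response contains many coded comments.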