Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
People in power will never stop taking advantage of every new technology and AI …
ytc_Ugy_tHYl4…
This is my opinion and right to post, freely, without censorship, I am recoding …
ytc_Ugz-hTGku…
Super Idol's smile
isn't as sweet as yours
the August noon sunlight…
ytc_UgzYBf3xT…
Will Python programers wind up as just another victim of Ai Jobloss in five year…
ytc_UgwYewDZ_…
He is way too cavelier on safety. AI can be as destructive as nuclear bombs so i…
ytc_Ugy51DyZe…
This is why I love Anthropic. But at the same time I worry it's performative mor…
ytc_Ugybb2Wy-…
sigh, you folks spread serious mis-info. People who get fooled to believe "AI" …
ytr_UgxW6-XI-…
AI is very dangerous. AI can easily be used to harm and inflict wars and destroy…
ytc_Ugy_ZFNPp…
Comment
I get that such cases can happen - but when the chance of something like this happening is so slim and that we currently can't program the car to work out what to do in every situation, it is completely random what the car will do. When the AI of the car is faced with such as problem, it is not going to serch for helmets or type of vehicle it can crash into, it is most likely going to randomly swerve the car in a different direction - or even more straight-forwadly, break. Really, although this is a logical concern, I am not sure the car is going to actively pick a decision based on its evironment than randomly choose an option.
Platform: youtube
Category: AI Harm Incident
Posted: 2023-08-20T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
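The table above is a single record from the coding pipeline; each comment is scored on the same four dimensions that appear in the raw JSON below. Here is a minimal sketch of that record, assuming a Python pipeline and only the value sets visible in this section (the full codebook may allow more):

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets observed in this section; the real codebook may define others.
RESPONSIBILITY = {"ai_itself", "user", "government", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "contractualist", "mixed"}
POLICY = {"none", "regulate", "ban", "liability"}
EMOTION = {"resignation", "fear", "approval", "indifference", "outrage", "disapproval"}


@dataclass
class CodingResult:
    """One coded comment: four dimensions plus the coding timestamp."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Reject values outside the sets listed above.
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"unexpected {field_name!r} value: {value!r}")
```

Plain sets keep the sketch short; one enum per dimension would give the same check with stricter typing.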
Raw LLM Response
```json
[
{"id":"ytc_Ugwqc1_q2DdUOgJryI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1yrdGDed9s8wbpop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEJwX76AinvoS1s5d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwRqHjlDaElPKWED_14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXr4QY4lnbmQR0nAF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxO_Ujk5rSvOjWRKBB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdMf8xwWtitYQSG9Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxA0UzFRipN-4avxKF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugxr6E9-mHJqZTtbMkB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgzEQs0TPw7Wr4XUt_V4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"fear"}
]
```
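The model answers each batch with a JSON array like the one above, one object per comment. The sketch below, under the assumption that the raw response is available as a string (the function name and the `raw_text` variable are illustrative, not part of the actual pipeline), shows how such an array could be parsed and indexed so a single comment ID can be looked up:

```python
import json


def index_raw_response(raw_text: str) -> dict[str, dict]:
    """Parse one raw batch response and key each record by its comment id."""
    records = json.loads(raw_text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    required = {"id", "responsibility", "reasoning", "policy", "emotion"}
    by_id: dict[str, dict] = {}
    for record in records:
        missing = required - record.keys()
        if missing:
            raise ValueError(f"record {record.get('id')!r} is missing fields: {sorted(missing)}")
        by_id[record["id"]] = record
    return by_id


# Example: pull one coded comment out of the batch shown above.
# coded = index_raw_response(raw_text)
# print(coded["ytc_Ugwqc1_q2DdUOgJryI54AaABAg"]["emotion"])  # -> "indifference"
```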