Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If anyone is interested in Artificial Intelligence, I suggest they watch a video…" (ytc_UghmvI-rb…)
- "This only becomes a problem if the AI companies also own an army of mechanised A…" (ytc_Ugwr-UBuf…)
- "It's going to be over for youtube too. Why? Think about it. The actors guild and…" (ytc_Ugy_4FaZQ…)
- "They made the money, now it’s time to open source AI models to take it’s place.…" (rdc_jtsjyz8)
- "So what happens when programming goes wrong and robot fires at humans ? Creepy n…" (ytc_UgwsqxYVI…)
- "I know a lot of you commenting here seem to think BP and other large oil compani…" (rdc_czlboh9)
- "Ai taking people’s job would be somewhat bad in the short term but very good in …" (ytc_UgweNqfPX…)
- "Trades, Makers, etc., will stand the test. Ai tech folk may code til the cows co…" (ytc_UgxzDcU_7…)
Comment
2 Tesla crashes vs hmm all of the human crashes of that same day. It 'sounds' bad, until you realize, Tesla AI doesn't drink and drive, won't drive angry, won't be stoned while driving. It won't be looking at a cellphone nor will it have the music blaring. It won't be driving with a suspended license nor will it have some odd interpretation of it's rights. I'll take my chances on the road with Tesla AIs over 'humans'. Humans never react intelligently to bad weather conditions. They don't get tired on the road. They won't be running from the police. So yes, Tesla needs improvements. Humans on the other hand, it's likely half the humans on the road are dangerously incapable. Why do we still permit it? I'd like that answer too.
youtube · AI Harm Incident · 2022-09-22T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy0FnGbACxwIvW-7Td4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx27TiltpAyVkSSdix4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwabyxsV_D6mzyatxV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzPWPtFbUfN4RNQu854AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw6lUeJSiaOlo1xQpN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyrwu9ZO1Rf8TrbUi54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgykXslSmxsEwDcnK0h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgznvgFSZ3bsSEfvjDp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugwk_FsCfQLsIM7h9qt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgziwxKWBa0LyidpkWB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
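A raw response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal example; the category sets for each dimension are inferred only from the coded samples visible here, so the real codebook may define additional values.

```python
import json

# Category sets inferred from the coded samples above; treat as illustrative,
# since the actual codebook may include values not seen in this batch.
DIMENSIONS = {
    "responsibility": {"none", "user", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "resignation"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded records) and
    index the records by comment ID, rejecting unknown category values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Hypothetical record for illustration; real IDs look like ytc_… or rdc_….
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
coded = parse_response(raw)
print(coded["ytc_example"]["emotion"])  # approval
```

Indexing by ID supports the "look up by comment ID" workflow shown above, and the validation step surfaces any value the model emits outside the expected categories.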