Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
Random samples

- "Guys this is stupid Robot r jyst programmed to do certain things They can't thin…" (`ytc_UgxvFk-kZ…`)
- "Each household is worried about what they brought home from the grocery store fo…" (`ytc_UgzZX5Q8b…`)
- "every result from ai should have to give the probability % it calculated for cor…" (`ytc_UgwfB8iGK…`)
- "I think AI doubles everything it learns until it become infinite. Its limitless.…" (`ytc_UgxzBUbOi…`)
- "@reniorjd I suppose . But why the apathy towards human art ? Why is there no co…" (`ytr_Ugx0y_hdS…`)
- "Hey lets face it people, the kind of radio pop, consumed by big audiences isnt …" (`ytc_UgxK911cs…`)
- "Well, let me ask you this. Business is automating to save costs & increase profi…" (`ytc_UgzeoEzz3…`)
- "Two reasons: 1. Unlike chess, IT/Tech soaks up a lot of employment in many coun…" (`rdc_kyzu6lp`)
Comment
I don't think tech companies publicly recognizing AI risks is them being responsible, but it actually helps their bottom line. The message is: "AI is gonna be super powerful tomorrow". Doesn't matter if it's dangerous, they just need investors to keep believing it's worth trillions of dollars. Investors would love to make Skynet. Skynet is powerful. They just don't want to have wasted their money on auto-complete.
Source: youtube · Video: "AI Moral Status" · 2025-10-31T08:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxDnIoEo8ZbXyr5gjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgytW-GFXJSUN9uBFVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxynuJxFS_FbMo81g54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw_ATcFAHKUQNw50DN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw1mrYMBx73tZPRaQ14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx50-ofaOvFNrJrstR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyiYJcYJxaAlKvJpO14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzU1rD9uZP-NvKdFZN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw5kBX-Eb-8WMNfhZl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyityX5J7uPtWaGxG54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
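The raw response is a JSON array of per-comment records, one object per comment ID, with one categorical value per coding dimension. A minimal sketch of how such a batch could be parsed and validated is below; the `ALLOWED` vocabularies are only inferred from the sample rows shown above (the real codebook may contain more values), and `parse_coding_response` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Assumed category vocabularies, inferred from the sample records above.
# The actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting records with missing IDs or out-of-vocabulary values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim!r}: {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Validating before storing is what makes the "Coding Result" table trustworthy: any hallucinated or misspelled category in the model output fails loudly instead of silently entering the dataset.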