# Raw LLM Responses

Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.

## Random samples
- "Everyone is in a race to build the perfect AI so development will never slow dow…" (ytc_Ugz1D_jh3…)
- "all IT systems fail at some point in time some are easy to recover and some are …" (ytc_UgwSeqVVo…)
- "I truly hope that one day we are going to have AI to substitute career politicia…" (ytc_UgyvDbYd3…)
- "11:02 im sick of this loaded question of can an ai prove itself, so ill pose the…" (ytc_UgxoH5fCb…)
- "I think this is it bros. 1 of the ai models will usher us into the age of the b…" (ytc_UgwR9hMA0…)
- "Battling AI is like battling oxygen, u wont like the consequences even if u some…" (ytc_Ugyd-ndgo…)
- "Over multiple instances, when extensively discussing consciousness Claude lands …" (ytc_Ugzonnlir…)
- "is AI Assisted art considered AI art? I like how AI can create roughs and brains…" (ytc_UgxqamM6H…)
## Comment
Just wait until Musk starts putting his robot army in charge of monitoring prisoners and 'preventing them from committing crimes' by using AI to determine an individual's intent... What could possibly go wrong?
Though perhaps Elon would be better served by getting his 'self driving' cars able to negotiate a parking lot without crashing into something first... Despite having a human 'monitor' in the front seat to take over whenever the system fails.
Source: youtube · Posted: 2025-11-13T15:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
  {"id":"ytc_UgzCFnKHoIlaLSbS_8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxkAK1pmvoGW34g4dR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHcZ3vh_lcmidC8qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw9seRCevu3r-UKfKB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxm0jYj4PRLDBLt07p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxGIxoXy8WpnK2gVgt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgysIIjyRBj6gDX4Hml4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyPiK6tj0DQEtJCYmF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw2CwBCM4pZmma0ld14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyBcC8ixEsHVHAmpLF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
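A raw batch response like the one above has to be parsed and sanity-checked before its rows are written back to the coded dataset. The sketch below is a minimal, hypothetical validator: the `CODEBOOK` sets are inferred only from the values visible in this sample output (the project's actual codebook may define more categories), and `validate_batch` is not part of any real pipeline shown here.

```python
import json

# Allowed values per dimension. NOTE: these sets are an assumption,
# inferred from the values visible in the sample response above; the
# real codebook may include additional categories.
CODEBOOK = {
    "responsibility": {"company", "developer", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed",
                  "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none",
               "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response string and keep only well-formed rows:
    the ID must carry the ytc_ prefix seen in this dataset, and every
    dimension must take a value from the (assumed) codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not str(row.get("id", "")).startswith("ytc_"):
            continue  # drop rows whose ID the model hallucinated or mangled
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid


# Example: one well-formed row passes, one out-of-codebook row is dropped.
sample = json.dumps([
    {"id": "ytc_example1", "responsibility": "developer",
     "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
    {"id": "ytc_example2", "responsibility": "martians",
     "reasoning": "unclear", "policy": "none", "emotion": "fear"},
])
print(len(validate_batch(sample)))  # 1
```

Rejected rows would typically be queued for re-coding rather than silently discarded, so the audit trail between comment ID and coding stays complete.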