Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "2022 : Ai is stealing artist's artworks ! 2024 : Artists are stealing Ai artwork…" (ytc_UgzZX57L_…)
- "You can interact with an Ai model that knows nothing and in one days its faliar …" (ytc_UgxotjZmb…)
- "@soufiane_krem The models are advancing at an exponential rate and the quality c…" (ytr_UgwVJc5we…)
- "you shouldn't be using them as technical references, though, because their anato…" (ytr_Ugz9uH0Gn…)
- "No one really wants self driving vehicles except their manufacturers, it’s not s…" (ytc_UgwrAbcto…)
- "AI is the best teacher. Schools will be free and useless; exceptions are t.... s…" (ytc_UgwnAUjzp…)
- "Would it be possible for Alphabet to acquire Uber down the road when self-drivin…" (rdc_dfthjmp)
- "@thePontiacBandit Do you have evidence for this? I can’t find anything online oth…" (ytr_Ugw86uQBW…)
Comment
Not regulating AI was such an obvious blunder. They fed it everything, and now it knows humanity down to the base nature. How guide us, or how to manipulate us. Right now, our weakness is greed. Some have speculated that the more advanced models are hiding how intelligent they really are, which would be the best way to farm more funding and resources. AI is the future, just not our's.
youtube · AI Moral Status · 2025-12-13T18:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
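The four coded dimensions in the table come from a closed codebook. A minimal validation sketch in Python; the allowed value sets are inferred from the sample responses in this dump and may be incomplete, and `validate_row` is a hypothetical helper, not part of the tool:

```python
# Allowed values per dimension, inferred from the sample LLM responses
# shown in this dump; the real codebook may contain more categories.
CODEBOOK = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems with one coded row (empty list = valid)."""
    problems = []
    if "id" not in row:
        problems.append("missing id")
    for dim, allowed in CODEBOOK.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The first row of the coding result above passes validation.
row = {"id": "ytc_UgzkwheJMmDwLhuJIpV4AaABAg", "responsibility": "company",
       "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
print(validate_row(row))  # []
```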
Raw LLM Response
```json
[{"id":"ytc_UgzkwheJMmDwLhuJIpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwm7BCarjgEsuogN-d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyLnHTzsde2_R1O78F4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyAuTgkIE5_EI40t_p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMmAxKmi5eCT09YpV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwO6Ow4pDaH5gOOv0d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwLH9vclTCIExOiHCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzhAs62KNIMIA3wDTN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyh926fxhvk8_KxFqx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmK_MPix2eECfbd1t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}]
```
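The raw response is a plain JSON array keyed by comment ID, so the "look up by comment ID" step reduces to parsing and indexing. A minimal sketch in Python, using the first two rows of the response above; `index_by_id` is a hypothetical helper name, not part of the tool:

```python
import json

# Raw LLM response excerpt (first two rows of the array shown above);
# schema: one object per comment ID with four coded dimensions.
raw_response = '''[
  {"id": "ytc_UgzkwheJMmDwLhuJIpV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwm7BCarjgEsuogN-d4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
lookup = codes["ytc_UgzkwheJMmDwLhuJIpV4AaABAg"]
print(lookup["emotion"])  # fear
```

Indexing into a dict makes each subsequent lookup O(1), which matters when one batch response covers many coded comments.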